For your weekend listening enjoyment: a new episode of America’s favorite 3-star podcast, with special guest Stephen Robles. Topics include indie media and YouTube, Shortcuts and automation, and the state of podcasting.
I imagine that most people who take an interest in de-Googling are
concerned about privacy. Privacy on the Internet is a somewhat nebulous
concept, but one aspect of privacy is surely the prevention of your web
browsing behaviour being propagated from one organization to another. I
don’t want my medical insurers to know, for example, that I’ve been
researching coronary artery disease. And even though my personal safety
and liberty probably aren’t at stake, I don’t want to give any support
to the global advertising behemoth, by allowing advertisers access to
better information about me.
Unfortunately, while distancing yourself from Google and its services
might be a necessary first step in protecting your privacy, it’s far
from the last. There’s more to do, and it’s getting harder to do it,
because of browser fingerprinting.
How we got here
Until about five years ago, our main concern surrounding browser
privacy was probably the use of third-party tracking cookies. The
original intent behind cookies was that they would allow a web browser
and a web server to engage in a conversation over a period of time. The
HTTP protocol that web servers use is
stateless
; that is, each
interaction between browser and server is expected to be complete in
itself. Having the browser and the server exchange a cookie (which could
just be a random number) in each interaction allowed the server to
associate each browser with an ongoing conversation. This was, and is, a
legitimate use of cookies, one that is necessary for almost all
interactive web-based services. If the cookie is short-lived, and only
applies to a single conversation with a single web server, it’s not a
privacy concern.
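As a minimal sketch of that legitimate, single-site use, here is roughly what issuing such a session cookie looks like, assuming a plain Node.js HTTP server (the cookie name and the per-session state are arbitrary choices for illustration):

```typescript
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

// Maps each session ID to whatever per-conversation state the server needs.
const sessions = new Map<string, { visits: number }>();

createServer((req, res) => {
  // Look for an existing session cookie on the incoming request.
  const match = /session=([a-f0-9-]+)/.exec(req.headers.cookie ?? "");
  let id = match?.[1];

  if (!id || !sessions.has(id)) {
    // No cookie yet: mint a random identifier and ask the browser to send
    // it back on subsequent requests to this server only.
    id = randomUUID();
    sessions.set(id, { visits: 0 });
    res.setHeader("Set-Cookie", `session=${id}; HttpOnly; SameSite=Lax`);
  }

  const state = sessions.get(id)!;
  state.visits += 1;
  res.end(`You have made ${state.visits} requests in this session.\n`);
}).listen(8080);
```

A short-lived cookie of this kind never leaves the conversation between one browser and one server.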
Unfortunately, web browsers for a long time lacked the ability to
distinguish between privacy-sparing and privacy-breaking uses of
cookies. If many different websites issue pages that contain links to
the same server – usually some kind of advertising service – then the
browser would send cookies to that server, thinking it was being
helpful. This behaviour effectively linked web-based services together,
allowing them to share information about their users. The process is a
bit more complicated than I’m making it out to be, but these third-party
cookies were of such concern that, in Europe at least, legislation was
enacted to force websites to disclose that they were using them.
Browsers eventually got better at figuring out which cookies were
helpful and which harmful and, for the most part, we don’t need to be
too concerned about ‘tracking cookies’ these days. Not only can browsers now mitigate their risks; there’s also a far more sinister threat: browser fingerprinting.
Browser fingerprinting
Browser fingerprinting does not depend on cookies. It’s resistant, to
some extent, to privacy measures like VPNs. Worst of all, steps that we
might take to mitigate the risk of fingerprinting can actually worsen
the risk. It’s a privacy nightmare, and it’s getting worse.
Fingerprinting works by having the web server extract certain
discrete elements of information from the browser, and combining those
elements into a numerical identifier. Some of the information supplied
by the browser is fundamental and necessary and, although a browser
could fake it, such a measure is likely to break the website.
For example, a fingerprinting system knows, just from information
that my browser always supplies (and probably has to), that I’m using
version 144 of the Firefox browser, on Linux; my preferred language is
English, and my time-zone is GMT. That, by itself, isn’t enough
information to identify me uniquely, but it’s a step towards doing
so.
To get more information, the fingerprinter needs to use more
sophisticated methods which the browser could, in theory, block. For
example, if the browser supports JavaScript – and they nearly all do –
then the fingerprinter can figure out what fonts I have installed, what
browser extensions I use, perhaps even what my hardware is. Worst of
all, perhaps, it can extract a canvas fingerprint. Canvas fingerprinting works by having the browser run code that draws text
(perhaps invisibly), and then retrieving the individual pixel data that
it drew. This pixel data will differ subtly from one system to another,
even drawing the same text, because of subtle differences in the
graphics hardware and the operating system.
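A minimal sketch of the technique, in browser-side TypeScript (the drawn string, colours, and hashing step are arbitrary illustrative choices; real fingerprinting scripts are considerably more elaborate):

```typescript
// Draw some text into an off-screen canvas and hash the resulting pixels.
// Differences in fonts, anti-aliasing, GPU and OS rendering paths make the
// hash vary subtly from machine to machine.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 240;
  canvas.height = 60;
  const ctx = canvas.getContext("2d")!;

  ctx.textBaseline = "top";
  ctx.font = "16px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(0, 0, 240, 60);
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint test 🙂", 4, 20);

  // Pull the rendered pixels back out and hash them.
  const pixels = ctx.getImageData(0, 0, 240, 60).data;
  const digest = await crypto.subtle.digest("SHA-256", pixels);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

A tracker then combines that hash with the other signals described above.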
It appears that only about one browser in every thousand shares the
same canvas fingerprint. Again, this alone isn’t enough to identify me,
but it’s another significant data point.
Fingerprinting can make use of even what appears to be trivial
information. If, for example, I resize my browser window, the browser
will probably make the next window the same size. It will probably
remember my preference from one day to the next. If the fingerprinter
knows my preferred browser window size is, say, 1287x892 pixels, that
probably narrows down the search for my identity by a factor of a
thousand or more.
Why crude methods to defeat fingerprinting don’t work
You might think that a simple way to prevent, or at least hamper,
fingerprinting would be simply to disable JavaScript support in the
browser. While this does defeat measures like canvas fingerprinting, it
generates a significant data point of its own: the fact that JavaScript
is disabled. Since almost every web browser in the world now supports
JavaScript, turning it off as a measure to protect privacy is like going
to the shopping mall wearing a ski mask. Sure, it hides your identity;
but nobody’s going to want to serve you in stores. And disabling
JavaScript will break many websites, including some pages on this one,
because I use it to render math equations.
Less dramatic approaches to fingerprinting resistance have their own
problems. For example, a debate has long raged about whether a browser
should actually identify itself at all. The fact that I’m running
Firefox on Linux probably puts me in a small, easily identified group.
Perhaps my browser should instead tell the server I’m running Chrome on
Windows? That’s a much larger group, after all.
The problem is that the fingerprinters can guess the browser and
platform with pretty good accuracy using other methods, whether the
browser reports this information or not. If the browser says something
different to what the fingerprinter infers, we’re back in ski-mask
territory.
What about more subtle methods to spoof the client’s behaviour?
Browsers (or plug-ins) can modify the canvas drawing procedures, for
example, to spoof the results of canvas fingerprinting. Unfortunately,
these methods leave traces of their own, if they aren’t applied subtly.
What’s more, if they’re applied rigorously enough to be effective, they
can break websites that rely on them for normal operation.
All in all, browser fingerprinting is very hard to defeat, and
organizations that want to track us have gotten disturbingly good at
it.
Is there any good news?
Not much, frankly.
Before sinking into despondency, it’s worth bearing in mind that
websites that attempt to demonstrate the efficacy of fingerprinting,
like amiunique and fingerprint.com, do not reflect how
fingerprinting works in the real world. They’re operating on
comparatively small sets of data and, for the most part, they’re not
tracking users over days. Real-world tracking is much harder than these
sites make it out to be. That’s not to say it’s too hard, but it is, at best, a statistical approach rather than an exact one.
In addition, ‘uniqueness’, in itself, is not a strong measure of
traceability. That my browser fingerprint is unique at some point in
time is irrelevant if my fingerprint will be different tomorrow, whether
it remains unique within the fingerprinter’s database or not.
Of course, these facts also mean that it’s difficult to assess the
effectiveness of our countermeasures: our assessment can only be
approximate, because we don’t actually know what real fingerprinters are
doing.
Another small piece of good news is that browser developers are
starting to realize how much of a hazard fingerprinting is, and to
integrate more robust countermeasures. We don’t necessarily need to
resort to plug-ins and extensions, which are themselves detectable and
become part of the fingerprint. At present, Brave and Mullvad seem to be doing the most to resist fingerprinting, albeit in different ways. Librewolf offers the same fingerprint resistance as Firefox, but with it turned on by default. Anti-fingerprinting methods will probably improve over time but, of course, the fingerprinters will get better at what they do, too.
So what can we do?
First, and most obviously, if you care about avoiding tracking, you must prevent long-lived cookies hanging around in the browser, and you must use a VPN. Ideally the VPN should rotate its endpoint regularly.
The fact that you’re using a VPN, of course, is something that the
fingerprinters will know, and it does make you stand out.
Sophisticated fingerprinters won’t be defeated by a VPN alone. But if
you don’t use a VPN, the trackers don’t even need to
fingerprint you: your IP number, combined with a few other bits of
routine information, will identify you immediately, and with
near-certainty.
Many browsers can be configured to remove cookies when they seem not
to be in use; Librewolf does this by default, and Firefox and Chrome do
it in ‘incognito’ mode. The downside, of course, is that long-lived
cookies are often used to store authentication status so, if you delete
them, you’ll find yourself having to log in every time you look at a
site that requires authentication. To mitigate this annoyance, browsers
generally allow particular sites to be excluded from their
cookie-burning policies.
Next, you need to be as unremarkable as possible. Fingerprinting is
about uniqueness, so you should use the most popular browser on the most
popular operating system on the kind of hardware you can buy from PC
World. If you’re running the latest Chrome on the latest Windows 11 on a
two-year-old, bog-standard laptop, you’re going to be one of a very
large group. Of course Chrome, being a Google product, has its own
privacy concerns, so you might be better off using a Chromium-based
browser with reduced Google influence, like Brave.
You should endeavour to keep your computer in as near its stock
configuration as possible. Don’t install anything (like fonts) that is reportable by the browser. Don’t install any extensions, and don’t
change any settings. Use the same ‘light’ theme as everybody else, and
use the browser with a maximized window, and always the same size. And
so on.
If possible, use a browser that has built-in fingerprint resistance,
like Mullvad or Librewolf (or Firefox with these features turned
on).
If you take all these precautions, you can probably reduce the
probability that you can be tracked by your browser fingerprint, over
days or weeks, from about 99% to about 50%.
50% is still too high, of course.
The downsides of resisting fingerprinting
If you enable fingerprinting resistance in Firefox, or use Librewolf,
you’ll immediately encounter oddities. Most obviously, every time you
open a new browser window, it will be the same size. Resizing the window
may have odd results, as the browser will try to constrain certain
screen elements to common size multiples. In addition, you won’t be able
to change the theme.
You’ll probably find yourself facing more ‘CAPTCHA’ and similar
identity challenges, because your browser will be unknown to the server.
Websites don’t do this out of spite: hacking and fraud are rife on the
Internet, and the operators of web-based services are rightly paranoid
about client behaviour.
You’ll likely find that some websites just don’t work properly, in
many small ways: wrong colours, misplaced text, that kind of thing. I’ve
found these issues to be irritations rather than show-stoppers, but you
might discover otherwise.
The GDPR is, for the most part, technologically neutral, although it
has specific provisions for cookies, which were a significant concern at
the time it was drafted. So far as I know, nobody has yet challenged
browser fingerprinting under the GDPR, even though it seems to violate
the provisions regarding consent. Since there are legitimate reasons for
fingerprinting, such as hacking detection, organizations that do it
could perhaps defend against a legal challenge on the basis that
fingerprinting is necessary to operate their services safely. In the
end, we really need specific, new legislation to address this privacy
threat.
I suspect that many people who take an interest in Internet privacy
don’t appreciate how hard it is to resist browser fingerprinting. Taking
steps to reduce it leads to inconvenience and, with the present state of
technology, even the most intrusive approaches are only partially
effective. The data collected by fingerprinting is invisible to the
user, and stored somewhere beyond the user’s reach.
On the other hand, browser fingerprinting produces only statistical
results, and usually can’t be used to track or identify a user with
certainty. The data it collects has a relatively short lifespan – days
to weeks, not months or years. While it probably can be used for
sinister purposes, my main concern is that it supports the intrusive,
out-of-control online advertising industry, which has made a wasteland
of the Internet.
In the end, it’s probably only going to be controlled by legislation
and, even when that happens, the advertisers will seek new ways to make
the Internet even more of a hellscape – they always do.
Our babies were taken after 'biased' parenting test - now we're fighting to get them back
Image caption: Keira says she sobbed uncontrollably when her baby was taken away from her
By Sofia Bettiza, BBC Global Health Reporter, and Woody Morris, BBC World Service, reporting from Denmark
When Keira's daughter was born last November, she was given two hours with her before the baby was taken into care.
"Right when she came out, I started counting the minutes," Keira, 39, recalls.
"I kept looking at the clock to see how long we had."
When the moment came for Zammi to be taken from her arms, Keira says she sobbed uncontrollably, whispering "sorry" to her baby.
"It felt like a part of my soul died."
Now Keira is one of many Greenlandic families living on the Danish mainland who are fighting to get their children returned to them after they were removed by social services.
In such cases, babies and children were taken away after parental competency tests - known in Denmark as FKUs - were used to help assess whether they were fit to be parents.
In May this year the Danish government banned the use of these tests on Greenlandic families after decades of criticism, although they continue to be used on other families in Denmark.
The assessments, which usually take months to complete, are used in complex welfare cases where authorities believe children are at risk of neglect or harm.
Image caption: Keira says she was "counting the minutes" from the moment Zammi was born, knowing she only had two hours with her daughter
They include interviews with parents and children, a range of cognitive tasks, such as recalling a sequence of numbers backwards, general knowledge quizzes, and personality and emotional testing.
Defenders of the tests say they offer a more objective method of assessment than the potentially anecdotal and subjective evidence of social workers and other experts.
But critics say they cannot meaningfully predict whether someone will make a good parent.
Opponents have also long argued that they are designed around Danish cultural norms and point out they are administered in Danish, rather than Kalaallisut, the mother tongue of most Greenlanders.
This can lead to misunderstandings, they say.
The Battle to Get My Child Back
Greenlandic parents across Denmark fight to be reunited with their children.
Greenlanders are Danish citizens, enabling them to live and work on the mainland.
Thousands live in Denmark, drawn by its employment opportunities, education and healthcare, among other reasons.
Greenlandic parents in Denmark are 5.6 times more likely to have children taken into care than Danish parents, according to the Danish Centre for Social Research, a government-funded research institute.
In May, the government said it hoped in due course to review around 300 cases – including ones involving FKU tests – in which Greenlandic children were forcibly removed from their families.
But as of October, the BBC found that just 10 cases where parenting tests were used had been reviewed by the government - and no Greenlandic children had been returned as a result.
Keira's assessment in 2024, carried out when she was pregnant, concluded that she did not have "sufficient parental competencies to care for the newborn independently".
Keira says the questions she was asked included: "Who is Mother Teresa?" and "How long does it take for the sun's rays to reach the Earth?"
Image caption: Keira still keeps a cot beside her bed and another in the living room of her apartment, along with baby clothes and nappies
Psychologists who defend the tests argue questions like these are intended to assess parents' general knowledge and their understanding of concepts they might encounter in society.
Keira adds that "they made me play with a doll and criticised me for not making enough eye contact".
She alleges that when she asked why she was being tested in this way the psychologist told her: "To see if you are civilised enough, if you can act like a human being."
The local authority in Keira's case said it could not comment on individual families, adding that decisions to place a child in care were made when there was serious concern about the "child's health, development, and well-being".
In 2014, Keira's other two children - who were then aged nine years and eight months - were placed into care after an FKU test at the time concluded her parenting skills were not developing fast enough to meet their needs.
Her eldest, Zoe, who is now 21, moved back home when she was 18 and currently lives in her own apartment and sees her mum regularly.
Keira hopes she will soon be reunited with her baby Zammi permanently.
The Danish government has said its review will look at whether mistakes were made in the administering of FKU tests on Greenlandic people.
In the meantime, Keira is allowed to see Zammi, who is in foster care, once a week for an hour.
Each time she visits, she takes flowers and sometimes Greenlandic food, such as chicken heart soup.
"Just so a little part of her culture can be with her," she says.
'I felt the most horrific heartbreak'
Image caption: Ulrik and Johanne hope the Danish government will reconsider reviewing cases like theirs where a child has been adopted
But not all Greenlandic parents who had children taken into care after completing FKUs will have their cases reviewed.
Johanne and Ulrik's son was adopted in 2020 and the Danish government has said it will not review cases where children have been adopted.
Johanne, 43, was tested in 2019 during pregnancy.
Like Zammi, her son was meant to have been taken away immediately after birth.
But because he was born prematurely on Boxing Day and social workers were on holiday, she and her husband Ulrik got to keep him for 17 days.
"It was the happiest time of my life as a father," says Ulrik, 57.
"Being with my son, holding him, changing his nappy, making sure that Johanne pumps her milk before going to bed in the evening."
Then one day, two social workers and two police officers arrived at Johanne and Ulrik's home to take their son away.
The couple say they pleaded with them not to take him.
Johanne asked if she could breastfeed him one last time.
"As I was dressing my son to hand him over to his foster parents who were on their way, I felt the most horrific heartbreak," Ulrik says.
Johanne had been tested after two children from another relationship, who were five and six, were taken into care after FKU testing in 2010.
Her 2019 assessment describes her as "narcissistic" and as having "mental retardation" - a categorisation based on designations developed by the WHO which were in use at the time.
She rejects both of these descriptions of her.
Image caption (source: Getty Images): A protester carries a placard that reads: "Our children are watching!! Prejudices are contagious," during a demonstration in Nuuk, Greenland's capital, earlier this year
In theory, there is no pass or fail mark for an FKU and they are one factor among others taken into consideration by local authorities who decide whether to place a child into care.
But psychologist Isak Nellemann, who used to administer the tests, says in practice they "are very important, about the most important thing, because when the tests are bad, in about 90% [of cases] they will lose their children".
Nellemann argues the tests lack scientific validity and were developed to study personality traits rather than predict parenting ability.
However, Turi Frederiksen, a senior psychologist whose team currently administers the tests, defends them, saying that while they are not perfect, "they are valuable, extensive psychological tools".
She also says she does not believe they are biased against Greenlanders.
When Johanne was asked in 2019 what she saw during a Rorschach test - a psychological test where people are asked what they see when looking at ink-blot images - she said she saw a woman gutting a seal, a familiar sight in Greenland's hunting culture.
Johanne alleges that on hearing this answer the psychologist called her a "barbarian".
The local council involved in the couple's 2019 assessment did not address Johanne's claim directly.
They said her assessment "indicated significant concern regarding the parents' overall parenting abilities" as well as "concerns about the parents' general lifestyle and functional level in daily life".
Image caption: Social worker Tordis Jacobsen said the decision to place a child into care in Denmark was never taken lightly
'I never got to see his first steps'
After Johanne and Ulrik's son was taken into care, they were allowed to see him during brief, weekly visits until he was adopted in 2020.
They have never seen him since.
"I never got to see his first steps, his first word, his first tooth, his first school day," Johanne says.
However, a few days after his birth they christened him, creating an official record that includes their names and address.
"We needed to create a paper trail so he could find his way back to us," Johanne says.
Their lawyer Jeanette Gjørret hopes to take their case before the European Court of Human Rights.
But Denmark's social affairs minister Sophie Hæstorp Andersen tells the BBC the government will not reopen cases of adoption because each of these children is now settled with a "loving and caring family".
Asked about the progress of the review, she says "it sounds slow, but we are getting started".
She also says decisions to remove and adopt children are part of a "very thorough process where we look into the family's ability to take care of their child not only for a year or two, but for a long period of time".
That is echoed by Tordis Jacobsen, a social worker team leader in Aalborg Kommune in northern Denmark, who says removing a child in Denmark is never taken lightly.
She says safeguarding concerns are often first flagged by schools or hospitals, and points out that in cases where a child is permanently adopted the decision to approve this is made by a judge.
Image caption: Pilinguaq's daughter, six, was returned to her several months ago, more than four years after being placed into care
Pilinguaq is a rare case of a Greenlandic mother who has been reunited with her child.
She and her daughter, who was placed into care aged one, were reunited a few months ago. Her daughter is now six.
Pilinguaq, 39, says she received the unexpected news in a phone call from social services.
"I started crying and laughing at the same time. I couldn't believe it. I kept thinking, 'Oh my God, she's coming home.'"
Pilinguaq's three children were all placed into care in 2021. The other two were aged six and nine at the time.
She says she agreed for her local authority to place her children in temporary care while she found a new home suitable for her children.
Pilinguaq says she believed her children would soon be returned to her, but instead she had to undergo a parenting assessment.
This concluded she had a pattern of entering "dysfunctional relationships" and was unfit to parent.
'They can take her in one hour'
A few months after her six-year-old daughter came home, Pilinguaq was told by her local authority that her other two older children will be returning to her in December.
The decision to return the children into Pilinguaq's care was made by the local authority rather than being recommended by the government review. The local authority declined to comment on her case.
Spending more than four years apart has made it difficult for Pilinguaq to rebuild her relationship with her daughter.
"If I go to the bathroom and close the door, she will have a panic attack and say 'Mum, I couldn't find you,'" Pilinguaq says.
She also says she is terrified of losing her daughter again.
"They can take her in one hour. They can do it again."
Image caption: Keira has been making her daughter Zammi a wooden sleigh for her first birthday
Keira is now preparing for Zammi's first birthday in her absence.
She's building a traditional Greenlandic sleigh by hand from wood, with a polar bear drawn on the front.
Earlier this month, she was told that her daughter won't be coming home - for now at least - but she hasn't given up hope.
Keira still has a cot next to her bed and another in the living room, with framed photos of Zammi on the walls, along with baby clothes and nappies.
"I will not stop fighting for my children.
"If I don't finish this fight, it will be my children's fight in the future."
This is part of the Global Women series from the BBC World Service, sharing untold and important stories from around the globe
Nydia Velázquez Hears Calls for Generational Change, Setting Up a Fight on the Left in New York
The Intercept (theintercept.com), 2025-11-22 16:35:49
The Democratic congresswoman was an early believer in Zohran Mamdani. His win showed her it was “the right time to pass the torch.”
Rep. Nydia Velázquez knew it was time to retire when Zohran Mamdani won the New York City mayoral race.
“What I saw during that election was that so many young people were hungry for a change and that they have a clear-eyed view of the problems we face and how to fix them,” Velázquez, D-N.Y., told The Intercept. “That helped convince me that this was the right time to pass the torch.”
Velázquez, a native of Puerto Rico who has served in Congress for more than 30 years, announced her retirement Thursday, in the early days of what is sure to be a frenzied 2026 midterm season across the country and in several solidly Democratic New York districts. She was not facing a notable primary challenger, unlike her House colleagues Hakeem Jeffries, Ritchie Torres, and Adriano Espaillat: three younger New York congressmen who are all considered firmly in line with the Democratic establishment, and all
facing challenges from their left.
“She could be in that seat as long as she wants,” said Brooklyn Borough President Antonio Reynoso, a longtime ally whom Velázquez once described as one of her “children.” “Nydia is at her peak. So that she would go out like that — it’s so Nydia.”
Velázquez is known as something of a den mother for a generation of younger progressive politicians in Brooklyn. She is overwhelmingly popular in her district but made few friends in the local establishment’s clubby machine politics. As Brooklyn’s electorate shifted left over the decades, she built up a formidable stable of protégés in key roles.
“My goal was to build a bench of strong, independent, progressive public servants who understood who they work for.”
“My goal was never to build a machine,” she said. “My goal was to build a bench of strong, independent, progressive public servants who understood who they work for.”
That will likely set up a competitive race to succeed Velázquez in her left-leaning 7th Congressional District, which includes Mamdani’s home base of Astoria, Queens, and solidly progressive Brooklyn neighborhoods like Bushwick, Williamsburg, and Clinton Hill. The district’s progressive profile means it’s poised to become a hot contest for candidates on the left — and may distract from the controversial candidacy of City Council Member Chi Ossé, who’s waging a long-shot challenge against Jeffries that has mired the city’s Democratic Socialists of America
in debate.
Velázquez declined to say who, if anyone, she favored to become her replacement.
“I could leave today and know that the district will be in good hands,” she said.
Velázquez is bowing out at a moment when the “G word” — gerontocracy — can be heard frequently on cable news, and not just on the lips of younger political hopefuls frustrated by an aging party leadership. She joins fellow Democratic Rep. Jerry Nadler, who announced his decision to retire in September and who has already kicked off a wild, 10-way primary fight in his Upper West Side district.
“She wanted to send a message to Democrats across the country that it is time for the next generation.”
“She told me she wanted to send a message to Democrats across the country that it is time for the next generation,” said City Council Member Lincoln Restler, a protégé. “Still, every elected official I’ve spoken to is just sad that we’re losing this remarkable moral leader.”
Velázquez saw Mamdani’s promise so early in the mayoral race that she was predicting his win well before many of her younger acolytes did, Reynoso told The Intercept.
“Nydia was always like ‘Zohran is the one, and I think he can win,’” Reynoso said.
At Mamdani’s victory celebration on November 4, Velázquez was happy to flaunt her prediction. When one supporter joyfully asked if she could believe it, she
replied: “I believed it a year ago.”
Velázquez, 72, was first elected in 1992, unseating a nine-term incumbent in the Democratic primary to become the first Puerto Rican woman to serve in Congress. At the time of her primary victory, the
New York Times offered readers a guide to the phonetic pronunciation of her name.
“When Nydia Velázquez was first elected to Congress, it was her against the world,” said Restler. “She took on the chair of the Foreign Relations Committee, and the entrenched political power in Brooklyn was entirely against her.”
In 2010, Restler said, “she told me she felt genuinely lonely in Brooklyn, that she had so few allies that she could count on. Fifteen years later, essentially every single person in local and state elected office across her district is there because of her validation, her legitimization, and her support.”
In the wake of her announcement on Thursday, praise for Velázquez poured in not just from her mentors and close ideological allies, but also from establishment figures closer to the center. On X, New York Gov. Kathy Hochul called the outgoing congresswoman a “trailblazer” — a hint, perhaps, at the stable of potential left-wing contenders Velázquez has helped take the field over the years.
Technical Standards in Service to Humanity
Internet Exchange (internet.exchangepoint.tech), 2025-11-20 16:22:23
Inside the making of a new UN report on technology and human rights.
The Office of the High Commissioner for Human Rights (OHCHR) took another significant step toward reshaping how the technical community can consider human rights in standards setting by releasing
a new report, published this week, titled “Tech and Human Rights Study: Making technical standards work for humanity - New pathways for incorporating international human rights into standards development for digital technologies.”
Technical standards, the shared rules that make digital systems like the internet work, help shape the conditions for human rights online. After a year of consultations, this OHCHR report outlines an informed agenda for how standards development organizations can integrate human rights into both their processes and the standards of the technologies they design. It also describes the current landscape of global standards bodies, identifies the barriers that prevent meaningful human rights engagement, and highlights practices that support openness, inclusivity, transparency, and accountability.
Today’s tech raises critical questions about human rights
The office began work on the new report following its 2023 Human Rights Council
resolution on the importance of integrating human rights
into the work of technical standards bodies. That earlier resolution recognized that internet and digital technologies shape the most basic conditions for people’s rights. This new report focuses on a specific and long overdue question: how can standards development organizations support human rights through both their processes and the technical standards they create?
The report shows that technical standards play a critical role in shaping whether human rights are upheld or undermined depending on the choices embedded in their design. Standards that promote openness, interoperability, and secure communication help safeguard freedom of expression and access to information, while those that introduce filtering, traffic controls, or shutdown mechanisms can restrict them. The report also highlights that the architecture of standards shapes whether people can assemble and associate online in safe and reliable ways. And because standards determine how data is transmitted, stored, or exposed, they have significant implications for privacy, a right enshrined in Article 12 of the Universal Declaration of Human Rights. Standards can either protect users from surveillance or make intrusive monitoring easier. In short, the report shows that technical standards are not neutral: they encode decisions that can strengthen human rights by design or facilitate their erosion.
The work with the OHCHR throughout the year focused on supporting this effort. This included helping to design and run a consultative process with six focused conversations involving stakeholders from across standards development, human rights advocacy, internet governance, and emerging technology communities. One consultation also took place as a side meeting at the IETF. It gave participants a chance to speak directly to the relationship between human rights and technical standards in an engineering-focused environment. Each conversation brought different experiences into the room.
Bringing the technical and human rights communities together
The report builds on more than a decade of work by human rights organizations and public interest technologists who engage in standards development. Their work focuses on the design, development, and deployment of internet and digital technologies, including artificial intelligence. These communities analyze how technical choices influence surveillance, censorship, discrimination, and other rights concerns. Their long-term engagement shows why standards work needs direct human rights input.
All six consultations led into a final online meeting that brought every participant together with a goal of confirming that the draft captured what people shared throughout the process and to ensure that the material was accurate, clear, and useful. We circulated an early version of the report to all participants and invited written feedback. Their comments strengthened the final text and helped shape the recommendations.
The pathways towards human rights respecting standards
The timing of this report also matters. The Global Digital Compact, adopted at the United Nations General Assembly, directs the OHCHR to coordinate human rights considerations across global internet governance institutions. That includes multilateral bodies like the ITU and multistakeholder communities like the IETF. The compact reinforces the idea that governments, civil society, and standards bodies share responsibility for integrating human rights into technical work.
The report describes the current landscape of standards development organizations and outlines how each organization structures participation, transparency, documentation, and decision-making. It identifies clear points where human rights considerations can enter these processes. It also provides concrete recommendations for standards bodies, governments, and civil society. These recommendations address process design, risk assessment, participation support, and the need for sustainable engagement by public interest technologists.
This work continues. Next month the
AI Standards Summit in Seoul
will host a session on human rights in technical standards. Many participants from our consultations will attend. The ITU Telecommunication Standardization Advisory Group
will meet in January
to continue its own discussions about how to incorporate human rights considerations into its processes.
The recommendations give governments, standards bodies, and advocates practical steps they can take today. Broader awareness and stronger participation will help build an internet that better protects human rights for everyone.
Two weeks ago,
Mallory and the IX team hosted a series of events
related to human rights and the social web at MozFest 2025 in Barcelona. While there, Mallory joined the legendary Rabble, a.k.a Evan Henshaw-Plath (Twitter's first employee) to talk about who controls Web 2.0 and how the fediverse gives us a second chance; how she convinced the IETF to evaluate protocols for human rights implications; and why content moderation should be contextual, not universal. They also discuss how Edward Snowden’s revelations changed global internet standards, the 2025 funding crisis and how Ghost provides a model for sustainable open-source businesses.
Support the Internet Exchange
If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.
Not ready for a long-term commitment? You can always
leave us a tip.
This week in our Signal community, we got talking about:
Cloudflare, one of a handful of companies that together provide a stack of critical internet infrastructure services,
went offline on Tuesday
affecting millions of companies including ChatGPT, X and, annoyingly for me, Moodle, my university’s online learning platform. In the IX group chat, we noted that an outage at a company used by
81.5% of all websites that rely on a reverse proxy
is a reminder of how much of the internet is reliant on a few big companies. This one happens to also be moving into identity, payments, and standards-setting in ways that look a lot like building the power to paywall and ID-wall the web.
We’re Keeping An Eye On: Chat Control
EU governments have agreed on a draft of the Chat Control law that legally allows platforms to scan private messages on a voluntary basis while confirming there is no obligation to do so. The Commission wanted platforms to be obligated to scan all user communications for signs of crime and report suspicious content. The European Parliament called this mass surveillance and insisted that scanning should apply only to unencrypted content of specific suspects. The resulting draft is a compromise: there will be no obligation to scan, but voluntary scanning will be legally allowed.
Decentralised social networks highlight the value of a model that redistributes power to users and communities. In this recorded session from Decidim, Amandine Le Pape (Element), Robin Berjon (Free our Feeds), Andy Piper (Mastodon) and moderator Marta G.Franco (Laintersección) discuss the challenges and opportunities of building truly democratic social networks that are truly ours.
https://www.youtube.com/watch?v=mWX8O2HWGMY
Related:
Europe’s digital sovereignty agenda is far too focused on economics, says the European Partnership for Democracy, and ignores a deeper threat. While governments debate industrial competitiveness, a handful of tech corporations still shape public discourse, decide what information people can access, and determine how (or whether) they follow democratically adopted laws.
https://epd.eu/news-publications/if-europe-wants-digital-sovereignty-it-must-reinvent-who-owns-tech
Related:
Abeba Birhane and Kris Shrishak warn that the EU’s shift in AI policy is being driven less by science and more by Silicon Valley–fuelled hype, pushing lawmakers toward speculation-based decisions that carry real social, environmental and economic costs.
https://www.techpolicy.press/ai-hype-is-steering-eu-policy-off-course
ICANN has published the latest monthly metrics for RDRS. The October report highlights trends in request volumes, system activity, and global participation from registrars and requestors. These insights help inform ongoing community discussions about access to generic top-level domain (gTLD) nonpublic registration data.
https://www.icann.org/en/system/files/files/rdrs-usage-metrics-14nov25-en.pdf
YouTube deleted the official channels of three major Palestinian human rights groups, erasing over 700 videos documenting alleged Israeli human rights violations in Gaza and the West Bank. YouTube confirmed the removals were a direct response to State Department sanctions issued by the Trump administration.
https://theintercept.com/2025/11/04/youtube-google-israel-palestine-human-rights-censorship
Western outlets across the political spectrum overwhelmingly adopt Israeli framing and marginalize Palestinian context in their Gaza coverage, finds a massive text analysis of 54,000+ articles by Media Bias Meter, a Tech for Palestine project.
https://www.mediabiasmeter.com/framing-gaza-media-bias-report-2025
WhatsApp has billions of users, and a new study shows how easy it is for outsiders to figure out who many of them are. Researchers found that by repeatedly asking WhatsApp a simple question your phone already asks — "Is this number using the app?" — they could check hundreds of millions of phone numbers per hour. This method, called enumeration, does not break encryption; instead, it takes advantage of how contact-based apps identify users. The result is a major privacy risk that allows large-scale mapping of WhatsApp accounts and links to old data leaks. Read about it in Wired:
https://www.wired.com/story/a-simple-whatsapp-security-flaw-exposed-billions-phone-numbers
And the report:
https://github.com/sbaresearch/whatsapp-census?tab=readme-ov-file
The US Department of the Treasury, together with Australia and the United Kingdom, announced coordinated sanctions against Media Land, a Russia-based bulletproof hosting provider that supports ransomware groups and other cybercriminal operations.
https://home.treasury.gov/news/press-releases/sb0319
Running a workshop, training, or meeting soon? Join The Session Design Lab to explore practical, inclusive session design, dig into adult learning frameworks, and design and peer-review your own session in a supportive, pay-what-you-can environment. It’s offered at two different times to accommodate multiple time zones, and as a past participant, I can personally vouch for its awesomeness.
10th-11th December. Online.
https://www.fabriders.net/session-design-lab
The CPDP Call for Papers for the 2026 conference is now open. Under the 2026 theme Competing Visions, Shared Futures, researchers across disciplines are invited to submit original work that explores how innovation, governance, and fundamental rights will shape the digital society ahead. Submissions by
January 24.
https://www.cpdpconferences.org/call-for-papers-2026
Apple researchers have published a study that looks into how LLMs can analyze audio and motion data to get a better overview of the user’s activities. Here are the details.
This, they argue, has great potential to make activity analysis more precise, even in situations where there isn’t enough sensor data.
From the researchers:
“Sensor data streams provide valuable information around activities and context for downstream applications, though integrating complementary information can be challenging. We show that large language models (LLMs) can be used for late fusion for activity classification from audio and motion time series data. We curated a subset of data for diverse activity recognition across contexts (e.g., household activities, sports) from the Ego4D dataset. Evaluated LLMs achieved 12-class zero- and one-shot classification F1-scores significantly above chance, with no task-specific training. Zero-shot classification via LLM-based fusion from modality-specific models can enable multimodal temporal applications where there is limited aligned training data for learning a shared embedding space. Additionally, LLM-based fusion can enable model deploying without requiring additional memory and computation for targeted application-specific multimodal models.”
In other words, LLMs are actually pretty good at inferring what a user is doing from basic audio and motion signals, even when they’re not specifically trained for that. Moreover, when given just a single example, their accuracy improves even further.
One important distinction is that in this study, the LLM wasn’t fed the actual audio recording, but rather short text descriptions generated by audio models and an IMU-based motion model (which tracks movement through accelerometer and gyroscope data).
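As a rough illustration of that kind of late fusion (this is not the paper's actual prompt; the caption text, the `callLLM` helper, and the exact wording below are assumptions made for the sketch), the per-modality text outputs can simply be serialized into a prompt and handed to an LLM:

```typescript
// Hypothetical per-modality outputs from small audio and IMU models.
interface ModalityOutputs {
  audioCaption: string;  // e.g. "water running, dishes clinking"
  audioLabels: string[]; // top audio tags by score
  imuPrediction: string; // activity guess from accelerometer/gyroscope data
}

const ACTIVITIES = [
  "vacuum cleaning", "cooking", "doing laundry", "eating",
  "playing basketball", "playing soccer", "playing with pets",
  "reading a book", "using a computer", "washing dishes",
  "watching TV", "workout/weightlifting",
];

// Build a closed-set, zero-shot classification prompt from the text
// descriptions only; no raw audio or motion signal is sent to the LLM.
function buildFusionPrompt(m: ModalityOutputs): string {
  return [
    "You are given descriptions of a 20-second sensor window.",
    `Audio caption: ${m.audioCaption}`,
    `Audio tags: ${m.audioLabels.join(", ")}`,
    `Motion model prediction: ${m.imuPrediction}`,
    `Which one of these activities is most likely? ${ACTIVITIES.join("; ")}`,
    "Answer with exactly one activity from the list.",
  ].join("\n");
}

// `callLLM` stands in for whichever model endpoint is being evaluated.
declare function callLLM(prompt: string): Promise<string>;

async function classify(m: ModalityOutputs): Promise<string> {
  return callLLM(buildFusionPrompt(m));
}
```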
Diving a bit deeper
In the paper, the researchers explain that they used Ego4D, a massive dataset of media shot in first-person perspective. The data contains thousands of hours of real-world environments and situations, from household tasks to outdoor activities.
From the study:
“We curated a dataset of day-to-day activities from the Ego4D dataset by searching for activities of daily living within the provided narrative descriptions. The curated dataset includes 20 second samples from twelve high-level activities: vacuum cleaning, cooking, doing laundry, eating, playing basketball, playing soccer, playing with pets, reading a book, using a computer, washing dishes, watching TV, workout/weightlifting. These activities were selected to span a range of household and fitness tasks, and based on their prevalence in the larger dataset.”
The researchers ran the audio and motion data through smaller models that generated text captions and class predictions, then fed those outputs into different LLMs (Gemini-2.5-pro and Qwen-32B) to see how well they could identify the activity.
Then, Apple compared the performance of these models in two different situations: one in which they were given the list of the 12 possible activities to choose from (closed-set), and another where they weren’t given any options (open-ended).
For each test, they were given different combinations of audio captions, audio labels, IMU activity prediction data, and extra context.
In the end, the researchers note that the results of this study offer interesting insights into how combining multiple models can benefit activity and health data, especially in cases where raw sensor data alone is insufficient to provide a clear picture of the user’s activity.
Perhaps more importantly, Apple published supplemental materials
alongside the study, including the Ego4D segment IDs, timestamps, prompts, and one-shot examples used in the experiments, to assist researchers interested in reproducing the results.
Dirk Eddelbuettel: RcppArmadillo 15.2.2-1 on CRAN: Upstream Update, OpenMP Updates
PlanetDebian (dirk.eddelbuettel.com), 2025-11-22 15:44:00
Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments.
RcppArmadillo integrates this library with the R environment and language – and is widely used by (currently) 1286 other packages on CRAN, downloaded 42.6 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 659 times according to Google Scholar.
This version updates to the 15.2.2 upstream Armadillo release made two days ago. It brings a few changes over the RcppArmadillo 15.2.0 release made only to GitHub (and described in this post), and of course even more changes relative to the last CRAN release described in this earlier post. As described previously, and due to the upstream transition to C++14 coupled with the CRAN move away from C++11, the package offers a transition by allowing packages to remain with the older, pre-15.0.0 ‘legacy’ Armadillo yet offering the current version as the default. During the transition we did not make any releases to CRAN, allowing both the upload cadence to settle back to the desired ‘about six in six months’ that the CRAN Policy asks for, and for packages to adjust to any potential changes. Most affected packages have done so (as can be seen in the GitHub issues #489 and #491), which is good to see. We appreciate all the work done by the respective package maintainers. A number of packages are still under a (now formally expired) deadline at CRAN and may get removed. Our offer to help where we can still stands, so please get in touch if we can be of assistance. As a reminder, the meta-issue #475 regroups all the resources for the transition.
With respect to changes in the package, we once more overhauled the OpenMP detection and setup, following the approach taken by package data.table but sticking with an autoconf-based configure. The detailed changes since the last CRAN release follow.
Changes in RcppArmadillo version 15.2.2-1 (2025-11-21)
- Upgraded to Armadillo release 15.2.2 (Medium Roast Deluxe)
- Improved reproducibility of random number generation when using OpenMP
- Skip a unit test file under macOS as complex algebra seems to fail under newer macOS LAPACK setting
- Further OpenMP detection rework for macOS (Dirk in #497, #499)
- Define ARMA_CRIPPLED_LAPACK on Windows only if 'LEGACY' Armadillo selected
Changes in RcppArmadillo version 15.2.1-0 (2025-10-28) (GitHub Only)
- Upgraded to Armadillo release 15.2.1 (Medium Roast Deluxe)
- Faster handling of submatrices with one row
- Improve OpenMP detection (Dirk in #495 fixing #493)
Changes in RcppArmadillo version 15.2.0-0 (2025-10-20) (GitHub Only)
- Upgraded to Armadillo release 15.2.0 (Medium Roast Deluxe)
- Added rande() for generating matrices with elements from exponential distributions
- shift() has been deprecated in favour of circshift(), for consistency with Matlab/Octave
- Reworked detection of aliasing, leading to more efficient compiled code
The more I work with large language models through provider-exposed APIs, the
more I feel like we have built ourselves into quite an unfortunate API surface
area. It might not actually be the right abstraction for what’s happening
under the hood. The way I like to think about this problem now is that it’s
actually a distributed state synchronization problem.
At its core, a large language model takes text, tokenizes it into numbers, and
feeds those tokens through a stack of matrix multiplications and attention
layers on the GPU. Using a large set of fixed weights, it produces activations
and predicts the next token. If it weren’t for temperature (randomization),
you could think of it as having the potential to be a much more deterministic
system, at least in principle.
As far as the core model is concerned, there’s no magical distinction between
“user text” and “assistant text”—everything is just tokens. The only
difference comes from special tokens and formatting that encode roles (system,
user, assistant, tool), injected into the stream via the prompt template. You
can look at the system prompt templates on Ollama for the different models to
get an idea.
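As a rough sketch of what such a prompt template does (the ChatML-style markers below are one widely used convention, not any particular provider's exact format):

```typescript
type Role = "system" | "user" | "assistant" | "tool";

interface Message {
  role: Role;
  content: string;
}

// Flatten a role-structured conversation into the single text stream the
// model actually sees. Markers like <|im_start|> are themselves mapped to
// dedicated special tokens by the tokenizer.
function renderChatML(messages: Message[]): string {
  const rendered = messages
    .map((m) => `<|im_start|>${m.role}\n${m.content}<|im_end|>`)
    .join("\n");
  // Leave the prompt "open" so the model continues as the assistant.
  return `${rendered}\n<|im_start|>assistant\n`;
}

const prompt = renderChatML([
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "What is a KV cache?" },
]);
```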
The Basic Agent State
Let’s ignore for a second which APIs already exist and just think about what
usually happens in an agentic system. If I were to have my LLM run locally on
the same machine, there is still state to be maintained, but that state is very
local to me. You’d maintain the conversation history as tokens in RAM, and the
model would keep a derived “working state” on the GPU—mainly the attention
key/value cache built from those tokens. The weights themselves stay fixed;
what changes per step are the activations and the KV cache.
From a mental-model perspective, caching means “remember the computation you
already did for a given prefix so you don’t have to redo it.” Internally, that
usually means storing the attention KV cache for those prefix tokens on the
server and letting you reuse it, not literally handing you raw GPU state.
There are probably some subtleties to this that I’m missing, but I think this
is a pretty good model to think about it.
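A toy way to picture that prefix reuse, purely for intuition (real inference servers key their caches on token blocks and store attention tensors on the GPU, not strings and counters):

```typescript
// Illustrative stand-in for per-prefix derived state; a real KV cache holds
// attention key/value tensors, not a counter.
interface CacheEntry {
  totalTokens: number;
}

const kvCache = new Map<string, CacheEntry>();

// Process a token sequence, reusing the longest previously seen prefix and
// paying only for the suffix that has not been computed before.
function processWithCache(tokens: number[]): { freshlyComputed: number } {
  let reused = 0;
  for (let i = tokens.length; i > 0; i--) {
    if (kvCache.has(tokens.slice(0, i).join(","))) {
      reused = i;
      break;
    }
  }
  kvCache.set(tokens.join(","), { totalTokens: tokens.length });
  return { freshlyComputed: tokens.length - reused };
}
```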
The Completion API
The moment you’re working with completion-style APIs such as OpenAI’s or
Anthropic’s, abstractions are put in place that make things a little different
from this very simple system. The first difference is that you’re not actually
sending raw tokens around. The way the GPU looks at the conversation history
and the way you look at it are on fundamentally different levels of
abstraction. While you could count and manipulate tokens on one side of the
equation, extra tokens are being injected into the stream that you can’t see.
Some of those tokens come from converting the JSON message representation into
the underlying input tokens fed into the machine. But you also have things
like tool definitions, which are injected into the conversation in proprietary
ways. Then there’s out-of-band information such as cache points.
And beyond that, there are tokens you will never see. For instance, with
reasoning models you often don’t see any real reasoning tokens, because some
LLM providers try to hide as much as possible so that you can’t retrain your
own models with their reasoning state. On the other hand, they might give you
some other informational text so that you have something to show to the user.
Model providers also love to hide search results and how those results were
injected into the token stream. Instead, you only get an encrypted blob back
that you need to send back to continue the conversation. All of a sudden, you
need to take some information on your side and funnel it back to the server so
that state can be reconciled on either end.
In completion-style APIs, each new turn requires resending the entire prompt
history. The size of each individual request grows linearly with the number of
turns, but the cumulative amount of data sent over a long conversation grows
quadratically because each linear-sized history is retransmitted at every step.
This is one of the reasons long chat sessions feel increasingly expensive. On
the server, the model’s attention cost over that sequence also grows
quadratically in sequence length, which is why caching starts to matter.
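A minimal sketch of that resend pattern, assuming a generic completion-style endpoint (`callCompletionAPI` is a hypothetical stand-in, not any specific provider's SDK):

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Hypothetical wrapper around some provider's chat-completion endpoint.
declare function callCompletionAPI(messages: ChatMessage[]): Promise<string>;

const history: ChatMessage[] = [
  { role: "system", content: "You are a helpful assistant." },
];

async function sendTurn(userText: string): Promise<string> {
  history.push({ role: "user", content: userText });
  // The *entire* history goes over the wire on every turn: each request
  // grows linearly, and the cumulative traffic grows quadratically.
  const reply = await callCompletionAPI([...history]);
  history.push({ role: "assistant", content: reply });
  return reply;
}
```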
The Responses API
One of the ways OpenAI tried to address this problem was to introduce the
Responses API, which maintains the conversational history on the server (at
least in the version with the saved state flag). But now you’re in a bizarre
situation where you’re fully dealing with state synchronization: there’s hidden
state on the server and state on your side, but the API gives you very limited
synchronization capabilities. To this point, it remains unclear to me how long
you can actually continue that conversation. It’s also unclear what happens if
there is state divergence or corruption; I’ve seen the Responses API get stuck
in ways I couldn’t recover from. Nor is it clear what happens if there’s
a network partition, or if one side got the state update but the other didn’t.
The Responses API with saved state is quite a bit harder to use, at least as
it’s currently exposed.
Obviously, for OpenAI it’s great because it allows them to hide more
behind-the-scenes state that would otherwise have to be funneled through with
every conversation message.
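For comparison, the server-side-state flow looks roughly like this with OpenAI’s Python SDK (a sketch based on the Responses API as currently documented; treat the details as illustrative rather than authoritative):

from openai import OpenAI

client = OpenAI()

first = client.responses.create(
    model="gpt-4.1",                 # placeholder model name
    input="Summarize our discussion so far.",
    store=True,                      # ask the server to keep conversation state
)

follow_up = client.responses.create(
    model="gpt-4.1",
    previous_response_id=first.id,   # continue from hidden server-side state
    input="Now answer the follow-up question.",
)
print(follow_up.output_text)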
State Sync API
Regardless of whether you’re using a completion-style API or the Responses API,
the provider always has to inject additional context behind the scenes—prompt
templates, role markers, system/tool definitions, sometimes even provider-side
tool outputs—that never appears in your visible message list. Different
providers handle this hidden context in different ways, and there’s no common
standard for how it’s represented or synchronized. The underlying reality is
much simpler than the message-based abstractions make it look: if you run an
open-weights model yourself, you can drive it directly with token sequences and
design APIs that are far cleaner than the JSON-message interfaces we’ve
standardized around. The complexity gets even worse when you go through
intermediaries like OpenRouter or SDKs like the Vercel AI SDK, which try to
mask provider-specific differences but can’t fully unify the hidden state each
provider maintains. In practice, the hardest part of unifying LLM APIs isn’t
the user-visible messages—it’s that each provider manages its own partially
hidden state in incompatible ways.
It really comes down to how you pass this hidden state around in one form or
another. I understand that from a model provider’s perspective, it’s nice to
be able to hide things from the user. But synchronizing hidden state is
tricky, and none of these APIs have been built with that mindset, as far as I
can tell. Maybe it’s time to start thinking about what a state synchronization
API would look like, rather than a message-based API.
The more I work with these agents, the more I feel like I don’t actually need a
unified message API. The core idea of it being message-based in its current
form is itself an abstraction that might not survive the passage of time.
Learn From Local First?
There’s a whole ecosystem that has dealt with this kind of mess before: the
local-first movement. Those folks spent a decade figuring out how to
synchronize distributed state across clients and servers that don’t trust each
other, drop offline, fork, merge, and heal. Peer-to-peer sync and
conflict-free replicated storage engines all exist because “shared state but
with gaps and divergence” is a hard problem that nobody could solve with naive
message passing. Their architectures explicitly separate canonical state,
derived state, and transport mechanics — exactly the kind of separation missing
from most LLM APIs today.
Some of those ideas map surprisingly well to models: KV caches resemble derived
state that could be checkpointed and resumed; prompt history is effectively an
append-only log that could be synced incrementally instead of resent wholesale;
provider-side invisible context behaves like a replicated document with hidden
fields.
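As a sketch of what the append-only-log framing could buy us (all names here are invented), each side would only ship the part of the history the other has not seen yet:

class PromptLog:
    def __init__(self) -> None:
        self.entries = []

    def append(self, entry) -> int:
        self.entries.append(entry)
        return len(self.entries)          # log position after the append

    def delta_since(self, position: int):
        # Entries the peer has not acknowledged yet.
        return self.entries[position:]

log = PromptLog()
log.append({"role": "user", "content": "hello"})
acked = 1                                  # the server has seen one entry
log.append({"role": "user", "content": "follow-up"})
to_send = log.delta_since(acked)           # only the new entry crosses the wire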
At the same time though, if the remote state gets wiped because the remote site
doesn’t want to hold it for that long, we would want to be in a situation where
we can replay it entirely from scratch—which for instance the Responses API
today does not allow.
Future Unified APIs
There’s been plenty of talk about unifying message-based APIs, especially in
the wake of MCP (Model Context Protocol). But if we ever standardize anything,
it should start from how these models actually behave, not from the surface
conventions we’ve inherited. A good standard would acknowledge hidden state,
synchronization boundaries, replay semantics, and failure modes — because those
are real issues. There is always the risk that we rush to formalize the
current abstractions and lock in their weaknesses and faults. I don’t know
what the right abstraction looks like, but I’m increasingly doubtful that the
status-quo solutions are the right fit.
In Bazel, there are two types of macros: legacy macros and symbolic macros, which were introduced in Bazel 8.
Symbolic macros are recommended for code clarity, where possible.
They include enhancements like typed arguments
and the ability to define and limit the visibility of the targets they create.
This post is intended for experienced Bazel engineers
or those tasked with modernizing the build metadata of their codebases.
The following discussion assumes a solid working knowledge of Bazel’s macro system
and build file conventions.
If you are looking to migrate legacy macros
or deepen your understanding of symbolic macros,
you’ll find practical guidance and nuanced pitfalls addressed here.
What are symbolic macros?
Macros instantiate rules by acting as templates that generate targets.
As such, they are expanded in the loading phase, when Bazel definitions and BUILD files are loaded and evaluated.
This is in contrast with build rules, which run later, in the analysis phase.
In older Bazel versions, macros were defined exclusively as Starlark functions (the form that is now called “legacy macros”).
Symbolic macros are an improvement on that idea; they allow defining a set of attributes similar to those of build rules.
In a BUILD file, you invoke a symbolic macro by supplying attribute values as arguments.
Because Bazel is explicitly aware of symbolic macros and their function in the build process, they can be considered “first-class macros”.
See the Symbolic macros design document to learn more about the rationale.
Symbolic macros also intend to support lazy evaluation, a feature that is currently being considered for a future Bazel release.
When that functionality is implemented, Bazel would defer evaluating a macro until the targets defined by that macro are actually requested.
Conventions and restrictions
There is already good documentation that explains how to write symbolic macros.
In this section, we are going to take a look at some practical examples of the restrictions that apply to their implementation, which you can learn more about on the Restrictions docs page.
Naming
Any targets created by a symbolic macro must either match the macro’s name parameter exactly or begin with that name followed by a _ (preferred), ., or -.
This is different from legacy macros, which don’t have naming constraints.
$ bazel cquery //...
ERROR: in genrule rule //src:genruletool: Target //src:genruletool declared in symbolic macro 'tool'
violates macro naming rules and cannot be built.
This means simple_macro(name = "tool") may only produce files or targets named tool or starting with tool_, tool., or tool-.
In this particular macro, tool_genrule would work.
Access to undeclared resources
Symbolic macros must follow Bazel’s standard visibility rules: they cannot directly access source files unless those files are passed in as arguments or are made public by their parent package.
This is different from legacy macros, whose implementations were effectively inlined into the BUILD file where they were called.
Attributes
Positional arguments
In legacy macro invocations, you could have passed the attribute values as positional arguments.
For instance, these are perfectly valid legacy macro calls:
# defs.bzl
def special_test_legacy(name, tag = "", **kwargs):
    kwargs["name"] = name
    kwargs["tags"] = [tag] if tag else []
    cc_test(**kwargs)

# BUILD.bazel
special_test_legacy("no-tag")
special_test_legacy("with-tag", "manual")
With the macro’s name and tags collected as expected:
You can control how arguments are passed to functions by using an asterisk (*) in the parameter list of a legacy macro, as per the Starlark language specs.
If you are a seasoned Python developer (Starlark’s syntax is heavily inspired by Python), you might have already guessed that this asterisk separates positional arguments from keyword-only arguments:
# defs.bzl
def special_test_legacy(name, *, tag = "", **kwargs):
    kwargs["name"] = name
    kwargs["tags"] = [tag] if tag else []
    cc_test(**kwargs)

# BUILD.bazel
special_test_legacy("no-tag")                    # okay
special_test_legacy("with-tag", tag = "manual")  # okay

# Error: special_test_legacy() accepts no more than 1 positional argument but got 2
special_test_legacy("with-tag", "manual")
Positional arguments are not supported in symbolic macros, as attributes must either be declared in the attrs dictionary (which automatically makes them keyword arguments) or be inherited, in which case they must also be passed by name.
Arguably, avoiding positional arguments in macros altogether is helpful because it eliminates subtle bugs caused by passing parameters in the wrong order, and it makes macros easier to read and easier to process by tooling such as buildozer.
Default values
Legacy macros accepted default values for their parameters, which made it possible to skip passing certain arguments:
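Something along these lines illustrates the difference (a hypothetical sketch of my own; the names follow the conventions of the later examples):

# defs.bzl
# Legacy macro: the "dev" default applies when the caller omits `environment`.
def special_test_legacy(name, environment = "dev", **kwargs):
    kwargs["name"] = name
    kwargs["tags"] = [environment]
    cc_test(**kwargs)

# Symbolic macro: the same default in the function signature is ignored,
# because Bazel passes the value from the attribute declaration instead.
def _special_test_impl(name, environment = "dev", **kwargs):
    print(environment)             # prints the attribute's value, not "dev"
    kwargs["tags"] = [environment]
    cc_test(name = name, **kwargs)

special_test = macro(
    inherit_attrs = native.cc_test,
    attrs = {"environment": attr.string(configurable = False)},
    implementation = _special_test_impl,
)

# BUILD.bazel
special_test_legacy(name = "legacy-test", srcs = ["test.cc"])   # tags == ["dev"]
special_test(name = "symbolic-test", srcs = ["test.cc"])        # tags != ["dev"]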
Notice how the default dev value declared in the macro implementation was never used.
This is because default values defined for parameters in the macro’s implementation function are ignored, so it’s best to remove them to avoid confusion.
Also, all inherited attributes have a default value of None, so make sure to refactor your macro logic accordingly.
Be careful when processing the keyword arguments to avoid subtle bugs, such as checking whether a user has passed [] in a keyword argument merely by doing if not kwargs["attr-name"], as None also evaluates to False in this context.
This can be confusing because the default value of many common attributes is not None.
Take the target_compatible_with attribute, which normally has the default value [] when used in a rule but, when used in a macro, would still default to None.
Using bazel cquery //:target --output=build with some print calls in your .bzl files can help when refactoring.
Inheritance
Macros are frequently designed to wrap a rule (or another macro), and the macro’s author typically aims to pass most of the wrapped symbol’s attributes via **kwargs directly to the macro’s primary target or the main inner macro without modification.
To enable this behavior, a macro can inherit attributes from a rule or another macro by providing the rule or macro symbol to the inherit_attrs parameter of macro().
Note that when inherit_attrs is set, the implementation function must have a **kwargs parameter.
This makes it possible to avoid listing every attribute that the macro may accept, and it is also possible to disable certain attributes that you don’t want macro callers to provide.
For instance, let’s say you don’t want copts to be defined in macros that wrap cc_test, because you want to manage them internally within the macro body instead:
# BUILD.bazel
special_test(
    name = "my-special-test",
    srcs = ["test.cc"],
    copts = ["-std=c++22"],
)
This can be done by setting the attributes you don’t want to inherit to None.
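For example (a sketch; the internals and the exact copts are up to the macro author):

# defs.bzl
def _special_test_impl(name, **kwargs):
    cc_test(
        name = name,
        copts = ["-std=c++17"],   # managed inside the macro body
        **kwargs
    )

special_test = macro(
    inherit_attrs = native.cc_test,
    attrs = {"copts": None},      # drop the inherited attribute
    implementation = _special_test_impl,
)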
Now the macro caller will see that copts cannot be set when calling the macro:
$ bazel query //test/package:my-special-test
File "defs.bzl", line 19, column 1, in special_test
special_test = macro(
Error: no such attribute 'copts' in 'special_test' macro
Keep in mind that all inherited attributes are going to be included in the kwargs parameter with a default value of None unless specified otherwise.
This means you have to be extra careful in the macro implementation function if you refactor a legacy macro: you can no longer merely check for the presence of a key in the kwargs dictionary.
Mutation
In symbolic macros, you will not be able to mutate the arguments passed to the macro implementation function.
$ bazel cquery //...
DEBUG: defs.bzl:36:10: dict {"state": "active"}
File "defs.bzl", line 37, column 17, in _simple_macro_impl
env["some"] = "more"
Error: trying to mutate a frozen dict value
This, however, is no different from legacy macros, where you could not modify mutable objects in place either.
In situations like this, creating a new dict with env = dict(env) helps.
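In a symbolic macro implementation, that workaround might look like this (a sketch with hypothetical attribute handling):

# defs.bzl
def _simple_macro_impl(name, visibility, env, **kwargs):
    env = dict(env or {})     # copy the frozen dict before modifying it
    env["some"] = "more"
    cc_test(name = name, visibility = visibility, env = env, **kwargs)

simple_macro = macro(
    inherit_attrs = native.cc_test,
    attrs = {},
    implementation = _simple_macro_impl,
)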
In legacy macros you can still modify objects in place when they are inside the kwargs, but this arguably leads to code that is harder to reason about and invites subtle bugs that are a nightmare to troubleshoot in a large codebase.
See the Mutability in Starlark section to learn more.
This is still possible in legacy macros:
# defs.bzl
def special_test_legacy(name, **kwargs):
    kwargs["name"] = name
    kwargs["env"]["some"] = "more"
    cc_test(**kwargs)

# BUILD.bazel
special_test_legacy("small-test", env = {"state": "active"})
Let’s see how the updated environment variables were set for the cc_test target created in the legacy macro:
$ bazel cquery //...
DEBUG: defs.bzl:35:10: dict {"state": "active"}
File "defs.bzl", line 36, column 27, in _simple_macro_impl
kwargs["env"]["some"] = "more"
Error: trying to mutate a frozen dict value
Configuration
Symbolic macros, just like legacy macros, support configurable attributes, commonly known as select(), a Bazel feature that lets users determine the values of build rule (or macro) attributes at the command line.
Here’s an example symbolic macro with a select toggle:
# defs.bzl
def _special_test_impl(name, **kwargs):
    cc_test(
        name = name,
        **kwargs
    )

special_test = macro(
    inherit_attrs = native.cc_test,
    attrs = {},
    implementation = _special_test_impl,
)

# BUILD.bazel
config_setting(
    name = "linking-static",
    define_values = {"static-testing": "true"},
)

config_setting(
    name = "linking-dynamic",
    define_values = {"static-testing": "false"},
)

special_test(
    name = "my-special-test",
    srcs = ["test.cc"],
    linkstatic = select({
        ":linking-static": True,
        ":linking-dynamic": False,
        "//conditions:default": False,
    }),
)
The query command does show that the macro was expanded into a cc_test target, but it does not show what the select() is resolved to.
For this, we would need to use cquery (configurable query), a variant of query that runs after select()s have been evaluated.
$ bazel cquery //test/package:my-special-test --output=build
cc_test(
name = "my-special-test",
...(omitted for brevity)...
linkstatic = False,
)
Let’s configure the test to be statically linked:
$ bazel cquery //test/package:my-special-test --output=build --define="static-testing=true"
cc_test(
name = "my-special-test",
...(omitted for brevity)...
linkstatic = True,
)
Each attribute in the macro function explicitly declares whether it tolerates select() values, in other words, whether it is configurable.
For common attributes, consult Typical attributes defined by most build rules to see which attributes can be configured.
Most attributes are configurable, meaning that their values may change when the target is built in different ways; however, there are a handful which are not.
For example, you cannot assign a *_test target to be flaky using a select() (e.g., to mark a test as flaky only on aarch64 devices).
Unless specifically declared otherwise, all attributes in symbolic macros are configurable (if their type supports it), which means the values you pass will be wrapped in a select() (one that simply maps //conditions:default to the single value), and you might need to adjust the code of the legacy macro you migrate.
For instance, this legacy code used to append some dependencies with the .append() list method, but this might break:
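A hypothetical reconstruction of the kind of macro that hits this error (names chosen to match the output below):

# defs.bzl
def _simple_macro_impl(name, **kwargs):
    print(kwargs["deps"])                  # now a select(), not a plain list
    kwargs["deps"].append("//:commons")    # fails: select has no .append()
    cc_test(name = name, **kwargs)

simple_macro = macro(
    inherit_attrs = native.cc_test,
    attrs = {},
    implementation = _simple_macro_impl,
)

# BUILD.bazel
simple_macro(name = "simple", srcs = ["test.cc"], deps = ["//:helpers"])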
$ bazel cquery //...
DEBUG: defs.bzl:35:10: select({"//conditions:default": [Label("//:helpers")]})
File "defs.bzl", line 36, column 19, in _simple_macro_impl
kwargs["deps"].append("//:commons")
Error: 'select' value has no field or method 'append'
Keep in mind that select is an opaque object with limited interactivity.
It does, however, support being extended in place, e.g., with kwargs["deps"] += ["//:commons"].
Be extra vigilant when dealing with configurable attributes of bool type, because a select object silently evaluates to True in truthy contexts.
This can lead to code that is perfectly legitimate but does not do what you intended.
See Why does select() always return true? to learn more.
When refactoring, you might need to make an attribute configurable; however, the existing macro implementation may then stop working.
For example, imagine you need to pass different files as input to your macro depending on the configuration specified on the command line:
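A hypothetical sketch of the problem: the caller passes a select(), and the macro body only ever sees an unresolved select object.

# defs.bzl
def _deployment_impl(name, visibility, filepath):
    # `filepath` is an unresolved select object here, so the macro body
    # cannot inspect which file was actually chosen.
    print(type(filepath), filepath)
    native.genrule(
        name = name + "_gen",
        srcs = [filepath],
        outs = ["config.out"],
        cmd = "cp $(SRCS) $@",
    )

deployment = macro(
    attrs = {"filepath": attr.label()},   # configurable by default
    implementation = _deployment_impl,
)

# BUILD.bazel
deployment(
    name = "deploy",
    filepath = select({
        "//conditions:default": "deploy/config/dev.ini",
        "//:production": "deploy/config/production.ini",
    }),
)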
In rules, select() objects are resolved to their actual values, but in macros, select() creates a special object of type select that isn’t evaluated until the analysis phase, which is why you won’t be able to get actual values out of it.
In some cases, such as when you need to have the selected value available in the macro function, you can have the select object resolved before it’s passed to the macro.
This can be done with the help of an alias target, and the label of a target can be turned into a filepath using the special location variable:
# defs.bzl
def _deployment_impl(name, visibility, filepath):
    print(type(filepath), filepath)
    native.genrule(
        name = name + "_gen",
        srcs = [filepath],
        outs = ["config.out"],
        cmd = "echo '$(location {})' > $@".format(filepath),
    )

deployment = macro(
    attrs = {"filepath": attr.label(configurable = False)},
    implementation = _deployment_impl,
)

# BUILD.bazel
alias(
    name = "configpath",
    actual = select({
        "//conditions:default": "deploy/config/dev.ini",
        "//:production": "deploy/config/production.ini",
    }),
    visibility = ["//visibility:public"],
)

deployment(
    name = "deploy",
    filepath = ":configpath",
)
You can confirm the right file is chosen when passing different configuration flags before building the target:
Since macros are evaluated when BUILD files are queried, you cannot use Bazel itself to query “raw” BUILD files.
Identifying definitions of legacy macros is quite difficult, as they resemble Starlark functions but instantiate targets.
Using bazel cquery with --output=starlark can help print the properties of targets to see whether they have been instantiated from macros.
When using --output=build, you can also inspect some of the properties:
generator_name (the name attribute of the macro)
generator_function (which function generated the rules)
generator_location (where the macro was invoked)
This information, combined with some heuristics, might help you identify the macros.
Once you have identified the macro name, you can run bazel query --output=build 'attr(generator_function, simple_macro, //...)' to find all targets that are generated by a particular macro.
Finding symbolic macros, in contrast, is trivial, as you simply need to grep for macro() function calls in .bzl files.
To query unprocessed BUILD files, you might want to use buildozer, a tool that lets you query the contents of BUILD files using a static parser.
The tool will come in handy for various use cases when refactoring, such as migrating the macros.
Because both legacy and symbolic macros follow the same BUILD file syntax, buildozer can be used to query build metadata for either type.
Let’s write some queries for these macro invocations:
# BUILD.bazel
perftest(
    name = "apis",
    srcs = ["//:srcA", "//:srcB"],
    env = {"type": "performance"},
)

perftest(
    name = "backend",
    srcs = ["//:srcC", "//:srcD"],
    env = {"type": "performance"},
)
Print all macro invocations (raw) across the whole workspace:
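With buildozer, something along these lines should work (my command, not the author’s; check your buildozer version’s syntax):

$ buildozer 'print rule' "//...:%perftest"

Here %perftest matches every invocation of perftest, and 'print rule' prints each invocation as it appears in the BUILD file.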
You might also want to check that no macro invocation passes an attribute that is not supposed to be passed.
In the command output, missing means the attribute doesn’t exist on that invocation; these lines can of course be filtered out with grep -v missing:
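For example, to list which invocations set a given attribute (the attribute name here is hypothetical; adjust it to your macro):

$ buildozer 'print name copts' "//...:%perftest" | grep -v missing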
We hope that these practical suggestions and examples will assist you in your efforts
to modernize the use of macros throughout your codebase.
Remember that you can compose legacy and symbolic macros, which may be useful during the transition.
Also, legacy macros can still be used and will remain supported in Bazel for the foreseeable future.
Some organizations may even choose not to migrate at all, particularly if they rely heavily on the current behavior of legacy macros.
CiviConf & CiviSprint Paris 5-9 october 2026
CiviCRM
civicrm.org
2025-11-20 15:20:57
Save the Date: CiviConf & CiviSprint Paris – October 5–9, 2026
We're thrilled to announce that the global CiviCRM community is gathering in Paris for CiviConf & CiviSprint Paris 2026! Join us for an inspiring week of collaboration, connection, and learning, set at the ...
We're thrilled to announce that the global CiviCRM community is gathering in Paris for
CiviConf & CiviSprint Paris 2026
! Join us for an inspiring week of collaboration, connection, and learning, set at the HI Paris Yves Robert Hostel—just a short walk from Gare du Nord and minutes away from the legendary Montmartre neighbourhood
Dates
Monday, October 5
to
Friday, October 9, 2026
Mark your calendar and get ready to be part of the most international CiviCRM event of the year!
Program Highlights
Monday, 9:30 AM – 6:00 PM:
Conference day! Meet partners, discover community innovations, hear real-world CiviCRM stories. The day features open forums, technical showcases, client success sessions, and networking breaks.
Tuesday to Friday,
9 AM – 11:00 PM
:
Training and Sprint sessions—choose your track:
Advanced User Training
(English & French): Boost your skills, learn best practices, and connect with power users and CiviCRM experts.
Developer Training
(English): Dive into CiviCRM’s technical ecosystem, contribute to the open source codebase, and get hands-on with the latest features.
Daily Sprint:
Collaborate with global contributors on documentation, core improvements, and translation projects. All skill levels and backgrounds are welcome!
Social & Community Experience:
Experience Paris beyond the conference! Join us for informal outings to nearby Montmartre—only 10 minutes on foot from Gare du Nord—and enjoy the local culture, food, and an energizing Parisian vibe.
Who Should Attend?
Non-profit, association and foundation staff
CiviCRM administrators and advanced users
Developers (PHP, Drupal, WordPress, Joomla, more)
Partners, consultants, and tech agencies
Community members, old and new
Venue
HI Paris Yves Robert Hostel
20, Esplanade Nathalie Sarraute, 75018 Paris
15 mins walk from Gare du Nord (Eurostar, Airport direct access)
20 mins walk from Gare de l’Est
24 mins by metro from Gare de Lyon
Easy access to CDG / Orly airports
Registration and More Info
Registration will open in early 2026—stay tuned for detailed program updates, speaker announcements, and travel tips.
If you’re interested in presenting, sponsoring, or supporting the event, contact us
contact@all-in-appli.com
Book your calendars and prepare to meet the global community in Paris!
Meet the AI workers who tell their friends and family to stay away from AI
Guardian
www.theguardian.com
2025-11-22 14:00:40
When the people making AI seem trustworthy are the ones who trust it the least, it shows that incentives for speed are overtaking safety, experts say Krista Pawloski remembers the single defining moment that shaped her opinion on the ethics of artificial intelligence. As an AI worker on Amazon Mecha...
Krista Pawloski remembers the single defining moment that shaped her opinion on the ethics of
artificial intelligence
. As an AI worker on Amazon Mechanical Turk – a marketplace that allows companies to hire workers to perform tasks like entering data or matching an AI prompt with its output – Pawloski spends her time moderating and assessing the quality of AI-generated text, images and videos, as well as some factchecking.
Roughly two years ago, while working from home at her dining room table, she took up a job designating tweets as racist or not. When she was presented with a tweet that read “Listen to that mooncricket sing”, she almost clicked on the “no” button before deciding to check the meaning of the word “mooncricket”, which, to her surprise, was a racial slur against Black Americans.
“I sat there considering how many times I may have made the same mistake and not caught myself,” said Pawloski.
The potential scale of her own errors and those of thousands of other workers like her made Pawloski spiral. How many others had unknowingly let offensive material slip by? Or worse, chosen to allow it?
After years of witnessing the inner workings of AI models, Pawloski decided to no longer use generative AI products personally and tells her family to steer clear of them.
“It’s an absolute no in my house,” said Pawloski, referring to how she doesn’t let her teenage daughter use tools like ChatGPT. And with the people she meets socially, she encourages them to ask AI about something they are very knowledgeable in so they can spot its errors and understand for themselves how fallible the tech is. Pawloski said that every time she sees a menu of new tasks to choose from on the Mechanical Turk site, she asks herself if there is any way what she’s doing could be used to hurt people – many times, she says, the answer is yes.
A statement from
Amazon
said that workers can choose which tasks to complete at their discretion and review a task’s details before accepting it. Requesters set the specifics of any given task, such as allotted time, pay and instruction levels, according to Amazon.
“Amazon Mechanical Turk is a marketplace that connects businesses and researchers, called requesters, with workers to complete online tasks, such as labeling images, answering surveys, transcribing text or reviewing AI outputs,” said Montana MacLachlan, an Amazon spokesperson.
Pawloski isn’t alone. A dozen
AI raters
, workers who check an AI’s responses for accuracy and groundedness, told the Guardian that, after becoming aware of the way chatbots and image generators function and just how wrong their output can be, they have begun urging their friends and family not to use generative AI at all – or at least trying to educate their loved ones on using it cautiously. These trainers work on a range of AI models – Google’s Gemini, Elon Musk’s Grok, other popular models, and several smaller or lesser-known bots.
One worker, an AI rater with
Google
who evaluates the responses generated by Google Search’s AI Overviews, said that she tries to use AI as sparingly as possible, if at all. The company’s approach to AI-generated responses to questions of health, in particular, gave her pause, she said, requesting anonymity for fear of professional reprisal. She said she observed her colleagues evaluating AI-generated responses to medical matters uncritically and was tasked with evaluating such questions herself, despite a lack of medical training.
At home, she has forbidden her 10-year-old daughter from using chatbots. “She has to learn critical thinking skills first or she won’t be able to tell if the output is any good,” the rater said.
“Ratings are just one of many aggregated data points that help us measure how well our systems are working, but do not directly impact our algorithms or models,” a statement from Google reads. “We also have a range of strong protections in place to surface high quality information across our products.”
Bot watchers sound the alarm
These people are part of a global workforce of tens of thousands who help chatbots sound more human. When checking AI responses, they also try their best to ensure that a chatbot doesn’t spout inaccurate or harmful information.
When the people who make AI seem trustworthy are those who trust it the least, however, experts believe it signals a much larger issue.
“It shows there are probably incentives to ship and scale over slow, careful validation, and that the feedback raters give is getting ignored,” said Alex Mahadevan, director of MediaWise at Poynter, a media literacy program. “So this means when we see the final [version of the] chatbot, we can expect the same type of errors they’re experiencing. It does not bode well for a public that is increasingly going to LLMs for news and information.”
AI workers said they distrust the models they work on because of a consistent emphasis on rapid turnaround time at the expense of quality. Brook Hansen, an AI worker on Amazon Mechanical Turk, explained that while she doesn’t mistrust generative AI as a concept, she also doesn’t trust the companies that develop and deploy these tools. For her, the biggest turning point was realizing how little support the people training these systems receive.
“We’re expected to help make the model better, yet we’re often given vague or incomplete instructions, minimal training and unrealistic time limits to complete tasks,” said Hansen, who has been doing data work since 2010 and has had a part in training some of Silicon Valley’s most popular AI models. “If workers aren’t equipped with the information, resources and time we need, how can the outcomes possibly be safe, accurate or ethical? For me, that gap between what’s expected of us and what we’re actually given to do the job is a clear sign that companies are prioritizing speed and profit over responsibility and quality.”
Dispensing false information in a confident tone, rather than offering no answer when none is readily available, is a major flaw of generative AI, experts say. An audit of the top 10 generative AI models including ChatGPT, Gemini and Meta’s AI by the media literacy non-profit NewsGuard revealed that the non-response rates of chatbots went down from 31% in August 2024 to 0% in August 2025. At the same time, the chatbots’ likelihood of repeating false information
almost doubled from 18% to 35%
, NewsGuard found. None of the companies responded to NewsGuard’s request for a comment at the time.
“I wouldn’t trust any facts [the bot] offers up without checking them myself – it’s just not reliable,” said another Google AI rater, requesting anonymity due to a nondisclosure agreement she has signed with the contracting company. She warns people about using it and echoed another rater’s point about people with only cursory knowledge being tasked with medical questions and sensitive ethical ones, too. “This is not an ethical robot. It’s just a robot.”
“We joke that [chatbots] would be great if we could get them to stop lying,” said one AI tutor who has worked with Gemini, ChatGPT and Grok, requesting anonymity, having signed nondisclosure agreements.
‘Garbage in, garbage out’
Another AI rater who started his journey rating responses for Google’s products in early 2024 began to feel he couldn’t trust AI around six months into the job. He was tasked with stumping the model – meaning he had to ask Google’s AI various questions that would expose its limitations or weaknesses. Having a degree in history, this worker asked the model historical questions for the task.
“I asked it about the history of the Palestinian people, and it wouldn’t give me an answer no matter how I rephrased the question,” recalled this worker, requesting anonymity, having signed a nondisclosure agreement. “When I asked it about the history of Israel, it had no problems giving me a very extensive rundown. We reported it, but nobody seemed to care at Google.” When asked specifically about the situation the rater described, Google did not issue a statement.
For this Google worker, the biggest concern with AI training is the feedback given to AI models by raters like him. “After having seen how bad the data is that goes into supposedly training the model, I knew there was absolutely no way it could ever be trained correctly like that,” he said. He used the term “garbage in, garbage out”, a principle in computer programming which explains that if you feed bad or incomplete data into a technical system, then the output would also have the same flaws.
The rater avoids using generative AI and has also “advised every family member and friend of mine to not buy newer phones that have AI integrated in them, to resist automatic updates if possible that add AI integration, and to not tell AI anything personal”, he said.
Fragile, not futuristic
Whenever the topic of AI comes up in a social conversation, Hansen reminds people that AI is not magic – explaining the army of invisible workers behind it, the unreliability of the information and how
environmentally damaging it is
.
“Once you’ve seen how these systems are cobbled together – the biases, the rushed timelines, the constant compromises – you stop seeing AI as futuristic and start seeing it as fragile,” said Adio Dinika, who studies the labor behind AI at the Distributed AI Research Institute, about people who work behind the scenes. “In my experience it’s always people who don’t understand AI who are enchanted by it.”
The AI workers who spoke to the Guardian said they are taking it upon themselves to make better choices and create awareness around them, particularly emphasizing the idea that AI, in Hansen’s words, “is only as good as what’s put into it, and what’s put into it is not always the best information”. She and Pawloski gave a presentation in May at the Michigan Association of School Boards spring conference. In a room full of school board members and administrators from across the state, they spoke about the ethical and environmental impacts of artificial intelligence, hoping to spark a conversation.
“Many attendees were shocked by what they learned, since most had never heard about the human labor or environmental footprint behind AI,” said Hansen. “Some were grateful for the insight, while others were defensive or frustrated, accusing us of being ‘doom and gloom’ about technology they saw as exciting and full of potential.”
Pawloski compares AI ethics to that of the textile industry: when people didn’t know how cheap clothes were made, they were happy to find the best deal and save a few bucks. But as the stories of sweatshops started coming out, consumers had a choice and knew they should be asking questions. She believes it’s the same for AI.
“Where does your data come from? Is this model built on copyright infringement? Were workers fairly compensated for their work?” she said. “We are just starting to ask those questions, so in most cases the general public does not have access to the truth, but just like the textile industry, if we keep asking and pushing, change is possible.”
A Lost Planet Created the Moon. Now, We Know Where It Came From.
404 Media
www.404media.co
2025-11-22 14:00:14
The remains of Theia are scattered deep inside the Earth and its satellite. By analyzing these remnants, scientists have proposed an origin....
Welcome back to the Abstract! Here are the studies this week that overthrew the regime, survived outer space, smashed planets, and crafted an ancient mystery from clay.
First, a queen gets sprayed with acid—and that’s not even the most horrifying part of the story. Then: a moss garden that is out of this world, the big boom that made the Moon, and a breakthrough in the history of goose-human relations.
Every so often, a study opens with such a forceful hook that it is simply best for me to stand aside and allow it to speak for itself. Thus:
“Matricide—the killing of a mother by her own genetic offspring—is rarely observed in nature, but not unheard-of. Among animal species in which offspring remain with their mothers, the benefits gained from maternal care are so substantial that eliminating the mother almost never pays, making matricide vastly rarer than infanticide.”
“Here, we report matricidal behavior in two ant species,
Lasius flavus
and
Lasius japonicus
, where workers kill resident queens (their mothers) after the latter have been sprayed with abdominal fluid by parasitic ant queens of the ants
Lasius orientalis
and
Lasius umbratus
.”
Mad props to this team for condensing an entire entomological epic into three sentences. Such murderous acts of dynastic usurpation were first observed by Taku Shimada, an ant enthusiast who runs a blog called
Ant Room
. Though matricide is sometimes part of a life cycle—like mommy spiders sacrificing their bodies for consumption by their offspring—there is no clear precedent for the newly-reported form of matricide, in which neither the young nor mother benefits from an evolutionary point of view.
In what reads like an unfolding horror, the invading parasitic queens “covertly approach the resident queen and spray multiple jets of abdominal fluid at her”—formic acid, as it turns out—that then “elicits abrupt attacks by host workers, which ultimately kill their own mother,” report Shimada and his colleagues.
“The parasitic queens are then accepted, receive care from the orphaned host workers and produce their own brood to found a new colony,” the team said. “Our findings are the first to document a novel host manipulation that prompts offspring to kill an otherwise indispensable mother.”
My blood is curdling and yet I cannot look away! Though this strategy is uniquely nightmarish, it is not uncommon for invading parasitic ants to execute queens in any number of creative ways. The parasites are just usually a bit more hands-on (or rather, tarsus-on) about the process.
“Queen-killing” has “evolved independently on multiple occasions across [ant species], indicating repeated evolutionary gains,” Shimada’s team said. “Until now, the only mechanistically documented solution was direct assault: the parasite throttles or beheads the host queen, a tactic that has arisen convergently in several lineages.”
When will we get an ant Shakespeare?! Someone needs to step up and claim that title, because these queens blow Lady Macbeth out of the water.
In other news…
That’s one small stem for a plant, one giant leaf for plant-kind
Scientists simply love to expose extremophile life to the vacuum of space to, you know, see how well they do out there. In a new addition to this tradition, a study reports that spores from the moss
Physcomitrium patens
survived a full 283 days chilling on the outside of the International Space Station, which is generally not the side of an orbital habitat you want to be stuck on.
A reddish-brown spore similar to those used in the space exposure experiment. Image: Tomomichi Fujita
Even wilder, most of the spacefaring spores were reproductively successful upon their return to Earth. “Remarkably, even after 9 months of exposure to space conditions, over 80% of the encased spores germinated upon return to Earth,” said researchers led by Chang-hyun Maeng of Hokkaido University. “To the best of our knowledge, this is the first report demonstrating the survival of bryophytes”—the family to which mosses belong—”following exposure to space and subsequent return to the ground.”
Congratulations to these mosses for boldly growing where no moss has grown before.
Earth had barely been born before a Mars-sized planet, known as Theia, smashed into it some 4.5 billion years ago. The debris from the collision coalesced into what is now our Moon, which has played a key role in Earth’s habitability, so we owe our lives in part to this primordial punch-up.
KABLOWIE! Image: NASA/JPL-Caltech
Scientists have now revealed new details about Theia by measuring the chemical makeup of “lunar samples, terrestrial rocks, and meteorites…from which Theia and proto-Earth might have formed,” according to a new study. They conclude that Theia likely originated in the inner solar system based on the chemical signatures that this shattered world left behind on the Moon and Earth.
“We found that all of Theia and most of Earth’s other constituent materials originated from the inner Solar System,” said researchers led by Timo Hopp of The University of Chicago and the Max Planck Institute for Solar System Research. “Our calculations suggest that Theia might have formed closer to the Sun than Earth did.”
Wherever its actual birthplace, what remains of Theia is buried on the Moon and as
giant undigested slabs
inside Earth’s mantle. Rest in pieces, sister.
You’ve heard of the albatross around your neck, but what about the goose on your back? A new study reports the discovery of a 12,000-year-old artifact in Israel that is the “earliest known figurine to depict a human–animal interaction” with its vision of a goose mysteriously draped over a woman’s spine and shoulders.
The tiny, inch-high figurine was recovered from a settlement built by the prehistoric Natufian culture and it may represent some kind of sex thing.
“We…suggest that by modeling a goose in this specific posture, the Natufian manufacturer intended to portray the trademark pattern of the gander’s mating behavior,” said researchers led by Laurent Davin of the Hebrew University of Jerusalem. “This kind of imagined mating between humans and animal spirits is typical of an animistic perspective, documented in cross-cultural archaeological and ethnographic records in specific situations” such as an “erotic dream” or “shamanistic vision.”
First, the bizarre Greek myth of Leda and the Swan, and now this? What is it about ancient cultures and weird waterfowl fantasies? In any case, my own interpretation is that the goose was just tired and needed a piggyback (or gaggle-back).
This post explores another proposal in the space of ergonomic ref-counting that I am calling
move expressions
. To my mind, these are an alternative to
explicit capture clauses
, one that addresses many (but not
all
) of the goals from that design with improved ergonomics and readability.
TL;DR
The idea itself is simple: within a closure (or future), we add the option to write move($expr). This is a value expression (“rvalue”) that desugars into a temporary value that is moved into the closure. So || something(&move($expr)) is roughly equivalent to something like:
{ let tmp = $expr; || something(&{tmp}) }
How it would look in practice
Let’s go back to one of our running examples, the “Cloudflare example”, which originated in
this excellent blog post by the Dioxus folks
. As a reminder, this is how the code looks
today
– note the
let _some_value = ...
lines for dealing with captures:
// task: listen for dns connections
let _some_a = self.some_a.clone();
let _some_b = self.some_b.clone();
let _some_c = self.some_c.clone();
tokio::task::spawn(async move {
    do_something_else_with(_some_a, _some_b, _some_c)
});
Under this proposal it would look something like this:
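Something along these lines, reconstructed from the move($expr) desugaring described above rather than quoted from the proposal (the clones are evaluated when the future is created and moved in):

tokio::task::spawn(async {
    do_something_else_with(
        move(self.some_a.clone()),
        move(self.some_b.clone()),
        move(self.some_c.clone()),
    )
});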
There are times when you would want multiple clones. For example, if you want to move something into a
FnMut
closure that will then give away a copy on each call, it might look like
data_source_iter
    .inspect(|item| {
        inspect_item(item, move(tx.clone()).clone())
        //                 ----------------  -------
        //                 |                 |
        //                 move a clone      clone the clone
        //                 into the closure  on each iteration
    })
    .collect();

// some code that uses `tx` later...
Credit for this idea
This idea is not mine. It’s been floated a number of times. The first time I remember hearing it was at the RustConf Unconf, but I feel like it’s come up before that. Most recently it was
proposed by Zachary Harrold on Zulip
, who has also created a prototype called
soupa
. Zachary’s proposal, like earlier proposals I’ve heard, used the
super
keyword. Later on
@simulacrum proposed using
move
, which to me is a major improvement, and that’s the version I ran with here.
This proposal makes closures more “continuous”
The reason that I love the
move
variant of this proposal is that it makes closures more “continuous” and exposes their underlying model a bit more clearly. With this design, I would start by explaining closures with move expressions and just teach
move
closures at the end, as a convenient default:
A Rust closure captures the places you use in the “minimal way that it can” – so || vec.len() will capture a shared reference to the vec, || vec.push(22) will capture a mutable reference, and || drop(vec) will take ownership of the vector.
You can use move expressions to control exactly what is captured: so || move(vec).push(22) will move the vector into the closure. A common pattern when you want to be fully explicit is to list all captures at the top of the closure, like so:
|| {
    let vec = move(input.vec);         // take full ownership of vec
    let data = move(&cx.data);         // take a reference to data
    let output_tx = move(output_tx);   // take ownership of the output channel
    process(&vec, &mut output_tx, data)
}
As a shorthand, you can write move || at the top of the closure, which changes the default so that the closure takes ownership of every captured variable. You can still mix and match with move expressions to get more control. So the previous closure might be written more concisely like so:
move || {
    process(&input.vec, &mut output_tx, move(&cx.data))
    //      ----------  --------------  --------------
    //      |           |               |
    //      |           |               closure still captures
    //      |           |               a ref `&cx.data`
    //      |           |
    //      because of the `move` keyword on the closure,
    //      these two are captured "by move"
}
This proposal makes
move
“fit in” for me
It’s a bit ironic that I like this, because it’s doubling down on part of Rust’s design that I was recently complaining about. In my earlier post on
Explicit Capture Clauses
I wrote that:
To be honest, I don’t like the choice of
move
because it’s so
operational
. I think if I could go back, I would try to refashion our closures around two concepts
Attached
closures (what we now call
||
) would
always
be tied to the enclosing stack frame. They’d always have a lifetime even if they don’t capture anything.
Detached
closures (what we now call
move ||
) would capture by-value, like
move
today.
I think this would help to build up the intuition of “use
detach ||
if you are going to return the closure from the current stack frame and use
||
otherwise”.
move
expressions are, I think, moving in the opposite direction. Rather than talking about attached and detached, they bring us to a more unified notion of closures, one where you don’t have “ref closures” and “move closures” – you just have closures that sometimes capture moves, and a “move” closure is just a shorthand for using
move
expressions everywhere. This is in fact how closures work in the compiler under the hood, and I think it’s quite elegant.
Why not suffix?
One question is whether a
move
expression should be a
prefix
or a
postfix
operator. So e.g.
||something(&$expr.move)
instead of
&move($expr)
.
My feeling is that it’s not a good fit for a postfix operator because it doesn’t just take the final value of the expression and do something with it; it actually impacts when the entire expression is evaluated. Consider this example:
|| process(foo(bar()).move)
When does
bar()
get called? If you think about it, it has to be closure creation time, but it’s not very “obvious”.
We reached a similar conclusion when we were considering
.unsafe
operators. I think there is a rule of thumb that things which delineate a “scope” of code ought to be prefix – though I suspect
unsafe(expr)
might actually be nice, and not just
unsafe { expr }
.
Edit:
I added this section after-the-fact in response to questions.
Conclusion
I’m going to wrap up this post here. To be honest, what this design really has going for it, above anything else, is its
simplicity
and the way it
generalizes Rust’s existing design
. I love that. To me, it joins the set of “yep, we should clearly do that” pieces in this puzzle:
Add a
Share
trait (I’ve gone back to preferring the name
share
😁)
Add
move
expressions
These both seem like solid steps forward. I am not yet persuaded that they get us all the way to the goal that I articulated in
an earlier post
:
“low-level enough for a Kernel, usable enough for a GUI”
but they are moving in the right direction.
AIPAC Donors Back Real Estate Tycoon Who Opposed Gaza Ceasefire For Deep Blue Chicago Seat
Intercept
theintercept.com
2025-11-22 11:00:00
Progressive Rep. Danny Davis rejected AIPAC cash at the end of his career. Now the Israel lobby is coming for his seat.
The post AIPAC Donors Back Real Estate Tycoon Who Opposed Gaza Ceasefire For Deep Blue Chicago Seat appeared first on The Intercept....
Pro-Israel donors have
picked a candidate to replace Rep. Danny Davis in Chicago.
Jason Friedman, one of 18 candidates vying to replace Davis in the March Democratic primary next year, has pulled ahead of the pack in fundraising. His campaign reported donations totaling over $1.5 million in its October filing with the Federal Election Commission.
About $140,000 of that money comes from major funders of pro-Israel groups, including the American Israel Public Affairs Committee PAC and its super PAC, United Democracy Project. The two groups spent more than
$100 million
on elections last year and
ousted
two
leading critics
of Israel
from Congress
. The pro-Israel donors’ support this year is an early sign that Friedman’s race is on AIPAC’s radar.
A former Chicago
real estate mogul
, Friedman
launched
his campaign in April, before Davis
announced
his retirement. From 2019 to 2024, he was chair of government affairs for the Jewish United Fund, a charitable organization that promotes pro-Israel narratives, noting on its
website
that “Israel does not intentionally target civilians,” “Israel does not occupy Gaza,” and “There is no Israeli ‘apartheid.’” Friedman has not made Israel a part of his campaign platform, but last month, the Joint Action Committee for Political Affairs, a pro-Israel PAC,
held an event
for its members to meet him.
AIPAC has not said publicly whether it’s backing a candidate in the race, but more than 35 of its donors have given money to Friedman’s campaign. Among them, 17 have donated to the United Democracy Project, and eight
have donated to both. Together, the Friedman donors have contributed just under $2 million to AIPAC and UDP since 2021.
That includes more than $1.6 million to UDP and more than $327,000 to AIPAC, with several donors giving six or five-figure contributions to the PACs. Friedman’s donors have also given $85,500 to DMFI PAC, the political action committee for the AIPAC offshoot
Democratic Majority for Israel
, and another $115,000 to the pro-Israel group To Protect Our Heritage PAC, which endorsed another candidate in the race, Chicago City Treasurer Melissa Conyears-Ervin. The Conyears-Ervin campaign and To Protect Our Heritage PAC did not respond to a request for comment.
Friedman is running largely on
taking on President Donald Trump
on issues from health care to education and the economy. His campaign website says he supports strong unions, access to education, reducing gun violence, and job training and support.
Prior to his tenure leading his family real estate empire, Friedman worked in politics under former President Bill Clinton and for Sen. Dick Durbin on the Senate Judiciary Committee.
Reached by phone, the pro-Israel donor Larry Hochberg told The Intercept that he was supporting Friedman because he thought he’d be a good candidate. “I’ll leave it at that,” Hochberg said.
A former AIPAC national director, Hochberg sits on the board of
Friends of the Israel Defense Forces
and co-founded the pro-Israel advocacy group
ELNET
, which has described itself as the AIPAC of Europe. Hochberg has given $10,000 to AIPAC, $5,000 to DMFI PAC, and just under $30,000 to To Protect Our Heritage PAC. In September, he gave $1,000 to Friedman’s campaign. Asked about his support for AIPAC and DMFI, he told The Intercept: “I don’t think I want to say any more than that.”
Former Rep. Marie Newman, a former
target
of pro-Israel donors who represented Illinois’s nearby 3rd District and was
ousted
from Congress in 2022, criticized Friedman for the influx in cash.
“If you receive money from AIPAC donors who believe in genocide and are funding genocide, then in fact, you believe in genocide,” Newman told The Intercept. She’s backing another candidate in the race, gun violence activist Kina Collins, who ran against Davis three times and came within 7 percentage points of unseating him in 2022.
Friedman is running against 17 other Democratic candidates, including
Collins and Conyears-Ervin. During Collins’s third run against Davis last year, United Democracy Project spent just under
half a million dollars
against her. Davis, who received support from a
dark-money group
aligned with Democratic leaders in his 2022 race, has endorsed state Rep. La Shawn Ford to replace him. Other candidates include former Cook County Commissioner Richard Boykin, former Forest Park Mayor Rory Hoskins, immigrant advocate Anabel Mendoza, organizer Anthony Driver Jr., emergency room doctor Thomas Fisher, and former antitrust attorney Reed Showalter, who has pledged not to accept money from AIPAC.
Friedman’s campaign did not respond to a request for comment.
The genocide in Gaza
has aggravated
fault lines
among Democrats in Chicago. Last year, the Chicago City Council narrowly passed a resolution calling for a ceasefire in Gaza, with Mayor Brandon Johnson casting the tie-breaking vote. As chair of government affairs for the Jewish United Fund, Friedman signed a
letter
to Johnson last year from the group and leaders of Chicago’s Jewish community, saying they were “appalled” at the result. Friedman’s campaign did not respond to questions about his position on U.S. military funding for Israel or the war on Gaza.
At least 17 Friedman donors have given to the United Democracy Project, with contributions totaling over $1.6 million. That includes nine people who gave six-figure contributions to UDP and seven who gave five-figures. Twenty-nine Friedman donors have given to AIPAC PAC, including eight of the same UDP donors.
Among those supporters are gaming executive Greg Carlin, who has given $255,000 to UDP and gave $3,500 to Friedman’s campaign in April; investor Tony Davis, who has given $250,000 to UDP and also gave $3,500 to Friedman’s campaign in April; and attorney Steven Lavin, who has given $125,000 to UDP and gave $7,000 to Friedman’s campaign in June. Carlin, Davis, and Lavin did not respond to a request for comment.
Attorneys Douglas Gessner and Sanford Perl, who work at Friedman’s previous law firm, Kirkland & Ellis, have given $105,000 and $100,000 to UDP. Both have also given to AIPAC PAC: Gessner over $50,000 and Perl over $44,000. Gessner gave $3,000 to Friedman’s campaign in September, and Perl gave $3,400 in April. Gessner and Perl did not respond to requests for comment.
“If you’re taking money from people who are supporting a far right-wing government that is executing a genocide, what does that say about you?”
Three other donors who have each given $1 million to UDP have given to Friedman’s campaign: Miami Beach biotech executive Jeff Aronin, Chicago marketing founder Ilan Shalit, and Jerry Bednyak, a co-founder of Vivid Seats who runs a private equity company focused on e-commerce.
“You could be the nicest person in the world,” said Newman, the former Illinois congresswoman. “But if you’re taking money from people who are supporting a far right-wing government that believes in genocide and is executing a genocide, what does that say about you?”
Friedman’s campaign coffers saw six-figure boosts on three days in June and September — vast outliers compared to most days in his first quarter. Those kinds of fundraising boosts are often associated with a blast email from a supportive political group to its network of donors, according to a Democratic strategist with knowledge of the race. AIPAC did not respond to a request for comment about whether the group had sent such an email encouraging supporters to contribute to Friedman’s campaign.
Friedman’s fundraising boost has also come largely from the finance and real estate industries, where just under a quarter of his donors work. He has also given $36,750 of his own money to his campaign.
Drop a Duralex glass and it will most likely bounce, not break. The French company itself has tumbled several times in the past two decades and always bounced back, but never quite as spectacularly as when, earlier this month, it asked the public for money.
An appeal for €5m (£4.4m) of emergency funding to secure the immediate future of the glassworks took just five hours and 40 minutes to reach its target. Within 48 hours, the total amount pledged had topped €19m.
François Marciano, 59, the director general of Duralex, said the response had astonished everyone at the company. “We thought it would take five or six weeks to raise the €5m. When it reached nearly €20m we had to say stop. Enough,” he said.
François Marciano, chief executive of Duralex, holds up a Picardie glass.
Photograph: Magali Delporte/The Guardian
As a staff cooperative, Duralex can accept a maximum of €5m in public investment under financial rules.
Beloved French brand
Mention Duralex to any French person and they will be transported back to childhood and a school canteen. The brand evokes a mix of nostalgia and pride and is a symbol of French patriotism and industrial savoir faire.
“We’re like Proust’s madeleines,” Marciano said. “The French people want to save us. They are fed up with factories closing and the country’s industries declining.”
At the Duralex factory on an industrial estate in La Chapelle-Saint-Mesmin on the banks of the Loire just outside Orléans, Marciano says he and his colleagues are “floating on a cloud” after the appeal.
Eighteen months ago, Marciano oversaw a staff buyout of the company, which had been placed in receivership for the fourth time in 20 years. Today, 180 of the 243 employees are “associates” in the company.
Suliman El Moussaoui, the leader of the CFDT union at Duralex.
Photograph: Magali Delporte/The Guardian
Suliman El Moussaoui, 44, a union representative at the factory where he has worked for 18 years, said the appeal had prompted “a tsunami of orders, so many that we’re struggling to keep up. Every time the company is mentioned on the television or radio we have more orders. It’s been amazing.”
Inside the factory, a simple but magical alchemy takes place. A mix of sand, soda ash and limestone, the exact proportions of which are a closely guarded secret, is heated in a vast overhead oven to 1,400C. Glowing globs of molten glass drop into iron casts that are blasted with a flame of gas. The red-hot glass is instantly pounded into shape, sprung from the mould, snatched by metal pincers and placed on a conveyor belt.
The iconic Picardie glasses coming out of the 1,440-degree oven
Photograph: Magali Delporte/The Guardian
The process has changed little since Duralex – which is said to take its name from the Latin expression Dura lex, sed lex, meaning “the law is harsh, but it is the law” – opened in 1945. When the Guardian visited, the production line was turning out small clear glasses in the Provence range.
Workers in the factory starting a new production of glasses.
Photograph: Magali Delporte/The Guardian
Each glass is carefully inspected.
Photograph: Magali Delporte/The Guardian
A worker brandishing tongs lifted a glass to the light to inspect it for faults. During a production run, more than a dozen samples of whatever is being made – glasses, plates, bowls – will be randomly removed and subjected to stress tests. In the quality control room, they will be heated to 150C then plunged into cold water to see if they resist a thermal shock, and dropped from the height of a kitchen counter on to a metal sheet to see if they shatter. They will be tested for stackability and then weighed and the glass thickness measured. If they pass, they are thrown in a bin and the production line is given a thumbs up. If they fail, everything stops and the machines are recalibrated.
‘The ultimate drinking vessel’
It is not known who invented the company’s trademark Picardie glass, the tumbler used in school canteens with a thick curved rim and semi-fluted shape that first appeared in 1954. The British design guru Patrick Taylor has ranked the Picardie alongside Levi’s jeans and the Swiss Army knife as an icon of modern design. Taylor describes it as: “An object whose form gives the impression it was discovered rather than designed. It is the ultimate drinking vessel created by man, and of its type cannot be improved.”
The instantly recognisable Picardie glasses.
Photograph: Magali Delporte/The Guardian
Duralex says its glass is microwave, freezer and dishwasher-safe and will not turn cloudy or lose its colour, which is in the glass rather than on it. When they do break, Duralex glasses shatter into small pieces rather than shards, reducing the injury risk.
Joël Cardon, 59, who has worked at the factory for 35 years, said the soaring costs of gas and electricity were the firm’s largest and most worrying expense.
On his screen, the oven containing the liquid glass showed a temperature of 1,440C. It can never be allowed to cool or the glass will solidify. Another screen showed the factory was using 360 cubic metres of gas an hour. According to the regulator Ofgem, the average UK house uses 97.3 cubic metres of gas a year.
Duralex’s oven, in the background, is held at a temperature of 1,440C.
Photograph: Magali Delporte/The Guardian
Last weekend, potential investors were asked to come good on their promises on a first come, first served basis. They will be issued with securities that pay 8% interest over seven years but give no company voting rights. The maximum investment was set at €1,000.
“We want to involve as many people as possible but with almost €20m in pledges obviously some people will be disappointed,” Marciano said.
Since the company became a staff cooperative, turnover has increased by 22% and Marciano said he hoped Duralex would be breaking even by 2027.
The €5m raised will be used to modernise the factory and develop new products. These include a partnership with the Élysée presidential palace shop to sell a set of three of its Gigogne glasses in red, white and blue, marked RF for République Française.
Set of 3 Gigogne glasses in Tricolor, advertised online for €24.90
Photograph: boutique.elysee.fr
Duralex plans to commission moulds to make “pint” glasses with a measure line for British pubs and bars, and for the US, both identified by the company as untapped markets.
“Selling abroad is more difficult because there isn’t the same nostalgia for Duralex as there is in France,” said Vincent Vallin, the head of strategy and development. “Interest in the company is high and this is positive, but now we have to focus on increasing sales.”
A looming 'insect apocalypse' could endanger global food supplies
Insect populations are plummeting almost everywhere they've been studied. That portends a bleak future for the world's food supplies. But there are ways to reverse the decline.
(Image credit: Myriam Wares)
Imagine driving down a highway in the summer. The windows are down, the music is loud, and the wind is whipping through your hair. Now picture your car's windshield. You might expect to see a handful of splats from unfortunate bugs. But 30 years ago, there would have been significantly more buggy skid marks plastered on the front of your vehicle.
"When I was a kid, you could go out driving in the summer, and you would come home and your car windshield was covered in bugs," said
Cheryl Schultz
, an ecologist at Washington State University. "Now, you can go across many areas at the same time of year and your windshield is clean."
This phenomenon, called the "windshield test," is indicative of a larger, very worrying trend: Insects, particularly the flying ones that pollinate many crops, are in steep decline. This nosedive is disrupting ecosystems around the world, and could jeopardize the global food supply. But tracking the decrease of insect populations over the past three decades has proved tricky — and stopping the decline may be even harder.
However, researchers are working quickly to find ways to stem the tide and even reverse the trend. Key to that is a collaborative approach that includes local and federal conservation efforts, new pollinator habitats, and a reduction in pesticide use.
The age of the "insect apocalypse"
Both the total number of insects and the number of insect species have been declining for decades in pretty much every place scientists have looked — prompting researchers to dub it "the insect apocalypse." Global bee biodiversity is down 25% compared with pre-1995 numbers, according to research published in 2021. A sweeping 2025 study showed that butterfly abundance across the U.S. fell by 22% over the past two decades. And a study in Germany found a whopping 76% loss of flying insects in some of the country's forested areas over 27 years.
"It's a worrisome thing," Scott Black, executive director of the nonprofit Xerces Society for Invertebrate Conservation, told Live Science.
By and large, experts know why insects are becoming scarcer. The first factor is climate change. As the planet warms, key host plants for insects start to bloom earlier each year. This can cause a mismatch in life cycles for certain species, putting many newly hatched or metamorphosed bugs out of sync with their food sources. And extreme heat, reduced snowpack, severe storms and megadroughts can chip away at previously robust insect numbers. Many populations simply can't keep up. Meanwhile, milder winters can benefit a few adaptable pest species, which may outcompete sensitive insects and wreak ecological and agricultural havoc in some regions.
A rough way to gauge insect abundance is called the "windshield" or "splat test." Windshields now have far fewer buggy skid marks than they did 30 years ago, a sign of significant insect population declines.
(Image credit: Dina Ivanova via Getty Images)
The second driver is habitat loss — the inexorable creep of urbanization, deforestation and sterile suburban lawns, which host fewer and less-diverse ranges of insects. As humans encroach on insect habitats, insects like ground-dwelling bees are left without space to build nests, rear young and overwinter, leading to population declines.
Finally, there are pesticides. For instance, neonicotinoids (often labeled as the active ingredients acetamiprid, clothianidin, dinotefuran, imidacloprid and thiamethoxam) have been identified as a major threat to wild bees, and they're still used in the U.S. and some other industrialized countries, including parts of Canada and Australia. Other pesticides, like the common weed killer glyphosate, have been shown to weaken bees' ability to regulate hive temperature, leaving them vulnerable to plunging winter temperatures.
"It's really extremely rapid environmental changes that we're seeing," Roel van Klink, a researcher at the German Center for Integrative Biodiversity Research, told Live Science. "Those species that were adapted to the conditions that we had maybe 50 or 100 years ago are not adapted to the conditions now anymore. And so they go down."
Collecting data on the scale and scope of these declines has been challenging, however. For one thing, some insects are easier to find than others. Flying insects like beetles and dragonflies are much more mobile, and therefore easier to spot, than earthbound bugs like earwigs and ants. Likewise, charismatic insects like bees and butterflies tend to have more historical records of their numbers and are usually easier to identify.
But there's another reason these insects' declines have gotten more scientific attention: They are extremely important for global food security.
The importance of diverse pollinators
Disappearing insects are bad news for the global food system. As the world's population continues to grow, the stress that insect declines — and dropping pollinator numbers, in particular — put on the food system could lead to an agricultural economic collapse, as well as increased food scarcity.
"Preventing further declines is no longer enough,"
Francesca Mancini
, an ecological modeler at the UK Centre for Ecology & Hydrology, told Live Science. "We need to restore insect biodiversity to past levels."
In the U.K. alone, insect pollinators provide an estimated $1 billion in economic value each year, Mancini said. For the U.S., it's in the
ballpark of $34 billion
.
Cacao flowers are completely reliant on a species of fly for pollination.
(Image credit: Helder Faria via Getty Images)
Worldwide, three-quarters of the crops we eat — and just over one-third of total crop yields — depend on pollination by insects. The degree to which these crops rely on pollinators falls along a spectrum. Some, like soybeans, would be much less productive without insect pollination. Others would cease to exist. "Coffee and chocolate are actually 100% dependent on pollination by insects," van Klink said.
A lot of that pollination work is done by managed European honeybees (Apis mellifera), which beekeepers around the world diligently maintain, transport and unleash upon fields across the globe each year. But to flourish, many crops need more than just honeybees.
For example, fruits native to North America, like blueberries and tomatoes (which is technically a fruit), are more effectively pollinated by native bumblebees, such as Bombus fraternus. That's because bumblebees can perform what's known as "buzz pollination," where they land on a flower and vibrate rapidly to release even the most deeply held pollen grains. Cacao trees (Theobroma cacao) — the source of the cocoa beans used to make chocolate — are entirely pollinated by chocolate midges. And cotton yields would plummet by up to 50% without butterfly pollinators.
Some staple crops, like soybeans, can make it without insects. However, research has shown that soybean fields visited by pollinators have significantly higher yields.
Alfalfa fields must be pollinated, yet honeybees aren't the best insects to do the job. Crop yields rise significantly when the alfalfa leaf-cutting bee (Megachile rotundata) is involved in the pollination.
(Image credit: Tanja Nik via Getty Images)
Then, there are crops like alfalfa (Medicago sativa). This legume isn't widely consumed by humans, but it is a staple for livestock — particularly dairy and beef cattle. Like blueberries and tomatoes, alfalfa depends on insect pollinators to thrive. However, honeybees will only pollinate it reluctantly; given the choice, they'd rather buzz around plants with flowers that are easier for them to access. But wild bees, particularly the alfalfa leaf-cutting bee (Megachile rotundata), are extremely effective alfalfa pollinators.
A recent study found that alfalfa fields visited by a mix of honeybees, wild bees and other pollinators, like wasps and butterflies, produced significantly more and larger seeds than fields visited by honeybees alone. This higher yield translates to more food for cattle — and thus more milk, cheeseburgers and steaks for us.
Glimmers of hope
Of course, restoring insect abundance and biodiversity is no easy task, especially in the face of an all-encompassing threat like global climate change. Experts told Live Science that coordinated federal regulations aimed at slowing climate change, reducing industrial pesticide use, and preventing the destruction of wild spaces are essential for protecting insects. But there are also actions people can take at the local and personal level that can have a positive impact.
Although the current U.S. administration's cuts to federal science programs and green energy have dealt a harsh blow to progress on these fronts, many experts still see reasons for optimism.
"As much as the overall picture is overwhelming, there's lots of places for hope," Schultz told Live Science. In a detailed
report
about the state of U.S. butterflies written this year in collaboration with the Xerces Society, Schultz highlighted a number of "success stories" — species that bucked the trend and increased in abundance thanks to years of focused work at both the federal and local levels.
Chief among them is the Fender's blue (
Icaricia icarioides fenderi
), a tiny azure butterfly native to Oregon. In 2000, the U.S. Fish and Wildlife Service listed it as endangered. In 2023, it became the second-ever insect to be downlisted to "threatened."
And the benefits of conservation efforts for one species had knock-on effects: Of the 342 butterfly species and subspecies analyzed in the report, 65 others had increased in number, and most were not on the endangered species list. This suggests that protections to conserve one insect could benefit others as well.
The Fender's blue butterfly (Icaricia icarioides fenderi), native to Oregon, was listed as endangered in 2000. But thanks to concerted conservation measures, the population has recovered somewhat. A new report found that those conservation efforts also improved the population numbers of dozens of other insect species.
(Image credit: U.S. Army Corps of Engineers)
Increasing healthy habitat
One of the best ways to help butterflies and other pollinators is to create more habitat for them. Unlike grizzly bears or elk, these insects don't need large stretches of unbroken wilderness. Even something as small as a backyard butterfly garden or a flower-filled window box can go a long way, Wendy Leuenberger, an ecologist at Michigan State University, told Live Science.
One study in the Pacific Northwest found that converting a 5,400-square-foot (500 square meter) plot of land — roughly half the size of the average American lawn — into an insect-friendly habitat full of native or wild plants can increase pollinator species' richness and abundance by about 90%. However, that effect was fairly localized, and it dissipated when these patches were placed in plots of more than 150,000 square meters (37 acres) — about the size of seven or eight blocks in Chicago.
Some pollinators, like hoverflies (Syrphidae spp.) and certain types of bees, can cover miles in search of flowering plants. But others, including many butterflies, tend to stay closer to home — within a 650-foot (200 meter) radius for more delicate species. This suggests that plots of native or wild flora are most effective at bolstering our food supply when interspersed within larger agricultural fields.
Hoverflies are incidental pollinators that help boost production of apples and strawberries.
(Image credit: Victoria Caruso via Getty Images)
"I would say it's the closer, the better for your crops,"
Andy Grinstead
, a conservation manager at Pollinator Partnership, told Live Science.
In agricultural communities, experts like Grinstead
recommend
planting "buffer strips" of native vegetation near (or, if possible, in between) crops. He also suggests planting hedgerows of woody, flowering plants around fields to act as both pollinator habitat and wind protection. But you don't have to be a farmer to support pollinators. Folks living within a few miles of farms can plant "bee lawns," which are filled with low-growing flowering plants like clover, instead of pure turfgrass.
And for those without yards, growing micro-plots of native wildflowers — even just a pot on a rooftop or balcony or hanging from a window — can create green "stepping stones" for bees, hoverflies, migratory butterflies and beetles passing through urban areas.
"Pollinator-friendly practices are valuable across all landscapes," Grinstead said. "It takes very little space to actually make an impact."
Reducing pesticide use on an industrial scale can also benefit pollinators, Black said.
One way to do this is to adopt an integrated pest management framework. This can mean rotating crops to keep soil healthy; accurately identifying pests before applying pesticides; and carefully spraying in targeted areas (away from blooms) when the wind is low to prevent the pesticides from drifting into the surrounding environment.
But even home gardeners can help reduce pesticides by replacing lawns or ornamental plants with hardier native species, hand-weeding rather than blanket-spraying small plots, and using screens or draining standing water instead of spraying for pests like mosquitoes, Black said. Taken together, these actions can help create havens where pollinators can thrive.
Taking action
Crucially, scientists are still researching the full scope of global insect declines, especially for species that have been historically understudied. This means we need field research to estimate insect numbers, Black said.
Community pollinator counts, whether as part of a formal program or through apps like iNaturalist, are also essential, Leuenberger told Live Science. These data help experts pinpoint which species are most vulnerable and which conservation efforts are most effective.
But with the future of the global food system hanging in the balance, it's important to try to restore these numbers now — not wait till researchers have published comprehensive data on how and where insect numbers are plummeting, Black said. "We don't want to wait until we have everything tucked into a perfect paper before we take action," he said. "We know how to take action."
Joanna Thompson is a science journalist and runner based in New York. She holds a B.S. in Zoology and a B.A. in Creative Writing from North Carolina State University, as well as a Master's in Science Journalism from NYU's Science, Health and Environmental Reporting Program. Find more of her work in Scientific American, The Daily Beast, Atlas Obscura or Audubon Magazine.
Monotropism was formulated as a theory of autism. It seeks to explain the experiences and traits of autistic people in terms of a tendency for resources like attention to be concentrated on a small number of things at a time, with little left over for everything else. Through this lens we can make sense of autistic social, sensory and executive functioning differences, as laid out in Monotropism – Explanations.
As time has gone on, it has become clear that many diagnosed with Attention Deficit Hyperactivity Disorder (ADHD) also identify strongly with many aspects of monotropism. I want to explore this by looking at the diverse ways that autism and ADHD present; where the traits associated with ADHD fit in with monotropism in an obvious way, and where they might seem to be in tension; and what this might mean for how we think about diagnoses and neurodiversity. Much of what I have to say here is necessarily speculative, all of it calls for further research, and parts of it may be in tension with some of the ways that many people are used to talking about neurodivergence.
The way that ADHD and autism are characterised in diagnostic manuals is completely different. ADHD is treated as primarily an attentional difference; autism as chiefly social in nature. Where descriptions do overlap, they can seem contradictory: autism is apparently characterised by rigid, restricted interests, while ADHD is said to cause impulsive behaviour and an inability to concentrate.
So the fact that anywhere from 30% to 80% of autistic people seemingly fit the diagnostic criteria for ADHD, and that the two clearly run in the same families, might initially seem surprising. It cries out for an explanation. One possibility is that autism and ADHD – or a Kinetic Cognitive Style (KCS), as I prefer to call it – share an underlying cause. Monotropism has been put forward as one candidate for this, for example in Patrick Dwyer’s Revisiting Monotropism.
It is well established that autism can manifest very differently in different people, in ways that can seem contradictory. We know that autism can come with hyperlexia, or serious language difficulties. We know that it’s associated with sensory seeking and sensory avoidance. We understand that it might come with crystal-clear memories, or forgetfulness. All of these things can coexist in one person, or just a selection.
With this in mind, it is perhaps not such a stretch to suggest that impulsivity, inattention and hyperactivity might share cognitive or neurological roots with their apparent opposites, like inflexibility, hyperfocus and inertia. When and how such traits manifest might depend on a person’s interests and experiences, or it might have to do with innate neurocognitive differences. Understanding this kind of variation fully would take far more research on the life experiences and psychological development of people with a variety of cognitive styles, without assuming that current diagnostic categories reflect objectively real categories of human being.
Impulsivity could come from the monotropic tendency to lose awareness of things as soon as our attention shifts away from them. Inattention is a very familiar thing among autistic people – not an attention deficit, which was never the right term, but profound difficulty steering attention in directions which don’t align with our current interests. Hyperfocusing is common with KCS, as it is with autism.
Hyperactivity can refer to a need to keep moving, which bears a striking resemblance to the autistic need to stim. It can also refer to a cognitive tendency which is a little harder to reconcile with how monotropism has been characterised: a habit of hopping mentally from one thing to another. In contrast, difficulty shifting from one attention tunnel to another has been a central feature of the ways monotropism has been described. This tension is worth digging into.
It might be that a Kinetic Cognitive Style arises from a relatively monotropic processing style combined with other factors – difficulty accessing flow states, for example, as suggested by some recent research (Grotewiel et al 2022). There are all kinds of reasons why people might not be able to enter ‘flowy focus tunnels’, as Jamie Knight calls them. They might have too many distractions, or too much nervous energy; they might not feel safe enough to lose themselves in the flow; they might have had bad experiences being told off for doing so, or been wrenched out of them too many times. They might just be too depleted to be able to connect deeply with their passions, something which also occurs during autistic burnout.
We know that novelty-seeking is a trait that varies greatly between people. It’s also possible that some people just have naturally very mobile attention, which might compensate for the monotropic tendency for attention to get sucked into one thing at a time. And maybe some of that apparent attention-hopping happens within an attention tunnel anyway, and other people just aren’t seeing the connections! KCS might look like polytropism sometimes, but I think that can be misleading. I delayed getting my own autism assessment for years because I mistook my serial monotropism for polytropism: I told myself I was multi-tasking, when it would probably be more accurate to say I repeatedly forgot what I was supposed to be doing.
Meanwhile, it is likely that monotropism doesn’t necessarily give rise to autism in the sense required by diagnostic manuals – but that above a certain level of intensity, or in combination with other factors, it causes the familiar social differences, fixity and so on. An early intense interest in other people, and how they behave, might equip someone with tools that will allow them to avoid being seen as too socially weird. The ability to present a ‘normal-looking’ face to the world is likely a major factor in the under-identification of autistic girls, who face far more social pressure to blend in than boys do. None of this changes a person’s cognitive style; but then, autism, like ADHD, has always been assessed based on outward presentation. One hope for Monotropism as a theory is that it helps us to make sense of these things from an internal perspective, rather than looking only at the surface level.
It is, I think, too early to say with any confidence that autism and ADHD (or KCS) share a common root in monotropism, but the overlapping traits of the people receiving each label clearly demand some kind of explanation, and preliminary results do suggest that each is strongly correlated with monotropism – especially in combination. With any luck, we will see a good deal more research on this in coming years.
I felt like it might be a good time to write about some new things I’ve
learned. Most of this is going to be about building agents, with a little bit
about using agentic coding tools.
TL;DR: Building agents is still messy. SDK abstractions break once you hit
real tool use. Caching works better when you manage it yourself, but differs
between models. Reinforcement ends up doing more heavy lifting than expected,
and failures need strict isolation to avoid derailing the loop. Shared state
via a file-system-like layer is an important building block. Output tooling is
surprisingly tricky, and model choice still depends on the task.
Which Agent SDK To Target?
When you build your own agent, you have the choice of targeting an underlying
SDK like the OpenAI SDK or the Anthropic SDK, or you can go with a higher level
abstraction such as the Vercel AI SDK or Pydantic. The choice we made a while
back was to adopt the Vercel AI SDK but only the provider abstractions, and to
basically drive the agent loop ourselves. At this point we would not make
that choice again. There is
absolutely nothing wrong with the Vercel AI SDK, but when you are trying to
build an agent, two things happen that we originally didn’t anticipate:
The first is that the differences between models are significant enough that
you will need to build your own agent abstraction. We have not found any of
the solutions from these SDKs that build the right abstraction for an agent. I
think this is partly because, despite the basic agent design being just a loop,
there are subtle differences based on the tools you provide. These differences
affect how easy or hard it is to find the right abstraction (cache control,
different requirements for reinforcement, tool prompts, provider-side tools,
etc.). Because the right abstraction is not yet clear, using the original SDKs
from the dedicated platforms keeps you fully in control. With some of these
higher-level SDKs you have to build on top of their existing abstractions,
which might not be the ones you actually want in the end.
We also found it incredibly challenging to work with the Vercel SDK when it
comes to dealing with provider-side tools. The attempted unification of
messaging formats doesn’t quite work. For instance, the web search tool from
Anthropic routinely destroys the message history with the Vercel SDK, and we
haven’t yet fully figured out the cause. Also, in Anthropic’s case, cache
management is much easier when targeting their SDK directly instead of the
Vercel one. The error messages when you get things wrong are much clearer.
This might change, but right now we would probably not use an abstraction when
building an agent, at least until things have settled down a bit. The benefits
do not yet outweigh the costs for us.
Someone else might have figured it out. If you’re reading this and think I’m
wrong, please drop me a mail. I want to learn.
Caching Lessons
The different platforms have very different approaches to caching. A lot has
been said about this already, but Anthropic makes you pay for caching. It
makes you manage cache points explicitly, and this really changes the way you
interact with it from an agent engineering level. I initially found the manual
management pretty dumb. Why doesn’t the platform do this for me? But I’ve
fully come around and now vastly prefer explicit cache management. It makes
costs and cache utilization much more predictable.
Explicit caching allows you to do certain things that are much harder
otherwise. For instance, you can split off a conversation and have it run in
two different directions simultaneously. You also have the opportunity to do
context editing. The optimal strategy here is unclear, but you clearly have a
lot more control, and I really like having that control. It also makes it much
easier to understand the cost of the underlying agent. You can assume much
more about how well your cache will be utilized, whereas with other platforms
we found it to be hit and miss.
The way we do caching in the agent with Anthropic is pretty straightforward.
One cache point is after the system prompt. Two cache points are placed at the
beginning of the conversation, where the last one moves up with the tail of the
conversation. And then there is some optimization along the way that you can
do.
Because the system prompt and the tool selection now have to be mostly static,
we feed a dynamic message later to provide information such as the current
time. Otherwise, this would trash the cache. We also leverage reinforcement
during the loop much more.
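To make this concrete, here is a minimal sketch of how such explicit cache
points might look with the Anthropic Python SDK. The model name, placeholder
prompt and helper shape are assumptions on my part rather than our production
code; the relevant idea is that the static system prompt carries a
cache_control marker while dynamic information rides in a later message.

import anthropic

client = anthropic.Anthropic()

STATIC_SYSTEM_PROMPT = "You are an agent that ..."  # placeholder text

def run_turn(history, dynamic_context):
    # Cache point 1: the static system prompt, marked cacheable as a whole.
    system = [{
        "type": "text",
        "text": STATIC_SYSTEM_PROMPT,
        "cache_control": {"type": "ephemeral"},
    }]
    # Dynamic information (current time, task status, ...) goes into a late
    # message so it never invalidates the cached prefix above.
    messages = history + [{
        "role": "user",
        "content": [{"type": "text", "text": dynamic_context}],
    }]
    # Tail cache point: mark the last block so the whole prefix up to here
    # can be reused on the next iteration of the loop.
    messages[-1]["content"][-1]["cache_control"] = {"type": "ephemeral"}
    return client.messages.create(
        model="claude-sonnet-4-5",  # assumed model name, pick your own
        max_tokens=2048,
        system=system,
        messages=messages,
    )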
Reinforcement In The Agent Loop
Every time the agent runs a tool you have the opportunity to not just return
data that the tool produces, but also to feed more information back into the
loop. For instance, you can remind the agent about the overall objective and
the status of individual tasks. You can also provide hints about how the tool
call might succeed when a tool fails. Another use of reinforcement is to
inform the system about state changes that happened in the background. If you
have an agent that uses parallel processing, you can inject information after
every tool call when that state changed and when it is relevant for completing
the task.
Sometimes it’s enough for the agent to self-reinforce. In Claude Code, for
instance, the todo write tool is a self-reinforcement tool. All it does is
take from the agent a list of tasks that it thinks it should do and echo out
what came in. It’s basically just an echo tool; it really doesn’t do anything
else. But that is enough to drive the agent forward better than if the tasks
and subtasks were only given at the beginning of the context and too much has
happened in the meantime.
We also use reinforcements to inform the system if the environment changed
during execution in a way that’s problematic for the agent. For instance, if
our agent fails and retries from a certain step forward but the recovery
operates off broken data, we inject a message informing it that it might want
to back off a couple of steps and redo an earlier step.
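As an illustration, the reinforcement text we append after a tool result
might be assembled roughly like the sketch below. The function name and the
message shape are mine, not any SDK's; the output string would simply be
injected into the conversation right after the tool result.

def build_reinforcement(objective, tasks, tool_error=None, env_changes=None):
    """Build the extra text appended to the loop after a tool result."""
    lines = [f"Reminder - overall objective: {objective}", "Task status:"]
    lines += [f"  [{'x' if done else ' '}] {title}" for title, done in tasks]
    if tool_error:
        lines.append(
            f"The last tool call failed: {tool_error}. Consider adjusting "
            "the arguments or backing off a step before retrying."
        )
    for change in env_changes or []:
        lines.append(f"Note, state changed in the background: {change}")
    return "\n".join(lines)

# Example: injected right after the failing tool's result message.
print(build_reinforcement(
    objective="Summarize the quarterly report and email it to the team",
    tasks=[("extract figures", True), ("draft summary", False)],
    tool_error="PDF parser timed out",
))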
Isolate Failures
If you expect a lot of failures during code execution, there is an opportunity
to hide those failures from the context. This can happen in two ways. One is
to run tasks that might require iteration individually. You would run them in
a subagent until they succeed and only report back the success, plus maybe a
brief summary of approaches that did not work. It is helpful for an agent to
learn about what did not work in a subtask because it can then feed that
information into the next task to hopefully steer away from those failures.
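A sketch of that pattern follows; the run_subagent callable and the fields on
its result are hypothetical stand-ins for however you drive your subagent, so
treat this as an outline rather than a real API.

def run_isolated(subtask, run_subagent, max_attempts=5):
    """Run a subtask in its own context; report only the outcome upward."""
    failed_approaches = []
    for _ in range(max_attempts):
        # The subagent iterates in its own context window; its intermediate
        # failures never enter the parent loop's history.
        result = run_subagent(subtask, hints=failed_approaches)
        if result.ok:
            return {
                "ok": True,
                "output": result.output,
                # One-line notes on what did not work, to steer later tasks.
                "failed_approaches": failed_approaches,
            }
        failed_approaches.append(result.brief_reason)
    return {"ok": False, "failed_approaches": failed_approaches}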
The second option doesn’t exist in all agents or foundation models, but with
Anthropic you can do context editing. So far we haven’t had a lot of success
with context editing, but we believe it’s an interesting thing we would love to
explore more. We would also love to learn if people have success with it.
What is interesting about context editing is that you should be able to
preserve tokens for further down the iteration loop. You can take out of the
context certain failures that didn’t drive towards successful completion of the
loop, but only negatively affected certain attempts during execution. But as
with the point I made earlier: it is also useful for the agent to understand
what didn’t work, but maybe it doesn’t require the full state and full output
of all the failures.
Unfortunately, context editing will automatically invalidate caches. There is
really no way around it. So it can be unclear when the trade-off of doing that
compensates for the extra cost of trashing the cache.
Sub Agents / Sub Inference
As I mentioned a couple of times on this blog already, most of our agents are
based on code execution and code generation. That really requires a common
place for the agent to store data. Our choice is a file system—in our case a
virtual file system—but that requires different tools to access it. This is
particularly important if you have something like a subagent or subinference.
You should try to build an agent that doesn’t have dead ends. A dead end is
where a task can only continue executing within the sub-tool that you built.
For instance, you might build a tool that generates an image, but is only able
to feed that image back into one more tool. That’s a problem because you might
then want to put those images into a zip archive using the code execution tool.
So there needs to be a system that allows the image generation tool to write
the image to the same place where the code execution tool can read it. In
essence, that’s a file system.
Obviously it has to go the other way around too. You might want to use the
code execution tool to unpack a zip archive and then go back to inference to
describe all the images so that the next step can go back to code execution and
so forth. The file system is the mechanism that we use for that. But it does
require tools to be built in a way that they can take file paths to the virtual
file system to work with.
So basically an ExecuteCode tool would have access to the same file system
as the RunInference tool which could take a path to a file on that same
virtual file system.
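A toy version of that contract could look like the following; it is entirely
illustrative (our real tools are different), but it shows every tool accepting
and returning paths on one shared virtual file system so nothing dead-ends.

class VirtualFS:
    """In-memory stand-in for the shared virtual file system."""
    def __init__(self):
        self._files: dict[str, bytes] = {}

    def write(self, path: str, data: bytes) -> str:
        self._files[path] = data
        return path

    def read(self, path: str) -> bytes:
        return self._files[path]

vfs = VirtualFS()

def generate_image(prompt: str) -> str:
    # Image tool: returns a path, not raw bytes, so any other tool can pick
    # the file up later.
    return vfs.write(f"/images/{abs(hash(prompt))}.png", b"<png bytes>")

def execute_code(source: str) -> str:
    # Code tool: the sandbox mounts the same file system, so generated code
    # can e.g. zip everything the image tool produced.
    return vfs.write("/out/images.zip", b"<zip bytes>")

def run_inference(path: str, question: str) -> str:
    # Inference tool: takes a path on the same file system and describes it.
    data = vfs.read(path)
    return f"{question} -> description of {path} ({len(data)} bytes)"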
The Use Of An Output Tool
One interesting thing about how we structured our agent is that it does not
represent a chat session. It will eventually communicate something to the user
or the outside world, but all the messages that it sends in between are usually
not revealed. The question is: how does it create that message? We have one
tool which is the output tool. The agent uses it explicitly to communicate to
the human. We then use a prompt to instruct it when to use that tool. In our
case the output tool sends an email.
But that turns out to pose a few other challenges. One is that it’s
surprisingly hard to steer the wording and tone of that output tool compared to
just using the main agent loop’s text output as the mechanism to talk to the
user. I cannot say why this is, but I think it’s probably related to how these
models are trained.
One attempt that didn’t work well was to have the output tool run another quick
LLM like Gemini 2.5 Flash to adjust the tone to our preference. But this
increases latency and actually reduces the quality of the output. In part, I
think the model just doesn’t word things correctly and the subtool doesn’t have
sufficient context. Providing more slices of the main agentic context into the
subtool makes it expensive and also didn’t fully solve the problem. It also
sometimes reveals information in the final output that we didn’t want to be
there, like the steps that led to the end result.
Another problem with an output tool is that sometimes the agent just doesn't
call it. One of the ways in which we force this is that we remember if the
output tool was called. If the loop ends without the output tool, we inject a
reinforcement message to encourage it to use the output tool.
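The bookkeeping for that is tiny. The names below are made up and the message
shape is simplified; the point is only the extra turn with a reminder.

OUTPUT_REMINDER = (
    "You have not called the send_output tool yet. The user will see "
    "nothing until you do. Summarize the result and call send_output now."
)

def finish_or_remind(history, output_tool_called, extra_turns, limit=2):
    """Decide whether the loop may end or gets one more reminded turn."""
    if output_tool_called or extra_turns >= limit:
        return history, True        # allow the loop to finish
    history = history + [{"role": "user", "content": OUTPUT_REMINDER}]
    return history, False           # force another iteration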
Model Choice
Overall our choices for models haven’t dramatically changed so far. I think
Haiku and Sonnet are still the best tool callers available, so they make for
excellent choices in the agent loop. They are also somewhat transparent with
regards to what the RL looks like. The other obvious choices are the Gemini
models. We so far haven’t found a ton of success with the GPT family of models
for the main loop.
For the individual sub-tools, which in part might also require inference, our
current choice is Gemini 2.5 if you need to summarize large documents or work
with PDFs and things like that. That is also a pretty good model for
extracting information from images, in particular because the Sonnet family of
models likes to run into a safety filter which can be annoying.
There’s also probably the very obvious realization that token cost alone
doesn't really define how expensive an agent is. A better tool caller will do
the job in fewer tokens. There are some models available that are cheaper
than Sonnet today, but they are not necessarily cheaper in a loop.
But all things considered, not that much has changed in the last couple of
weeks.
Testing and Evals
We find testing and evals to be the hardest problem here. This is not entirely
surprising, but the agentic nature makes it even harder. Unlike prompts, you
cannot just do the evals in some external system because there’s too much you
need to feed into it. This means you want to do evals based on observability
data or instrumenting your actual test runs. So far none of the solutions we
have tried have convinced us that they found the right approach here.
Unfortunately, I have to report that at the moment we haven’t found something
that really makes us happy. I hope we’re going to find a solution for this
because it is becoming an increasingly frustrating aspect of building an agent.
Coding Agent Updates
As for my experience with coding agents, not really all that much has changed.
The main new development is that I’m trialing Amp more.
In case you’re curious why: it’s not that it’s objectively a better agent than
what I’m using, but I really quite like the way they’re thinking about agents
from what they’re posting. The interactions of the different sub agents like
the Oracle with the main loop are beautifully done, and not many other harnesses
do this today. It’s also a good way for me to validate how different agent
designs work. Amp, similar to Claude Code, really feels like a product built
by people who also use their own tool. I do not feel every other agent in the
industry does this.
Quick Stuff I Read And Found
That’s just a random assortment of things that I feel might also be worth
sharing:
What if you don’t need MCP at all?:
Mario argues that many MCP servers are overengineered and include large
toolsets that consume lots of context. He proposes a minimalist approach for
browser-agent use-cases by relying on simple CLI tools (e.g., start, navigate,
evaluate JS, screenshot) executed via Bash, which keeps token usage small and
workflows flexible. I built a Claude/Amp Skill out of it.
The fate of “small” open source:
The author argues that the age of tiny, single-purpose open-source libraries is
coming to an end, largely because built-in platform APIs and AI tools can now
generate simple utilities on demand. Thank fucking god.
Tmux is love. There is no article that goes with it, but the TLDR is that Tmux
is great. If you have anything that remotely looks like an interactive system
that an agent should work with, you should give it some Tmux skills.
Real-time object detection lies at the heart of any system that must interpret visual data efficiently, from video analytics pipelines to autonomous robotics. Detector architectures for such tasks need to deliver both high throughput and accuracy in order to excel.
In our own pipelines, we phased out older CNN-based detectors in favor of D-Fine, a more recent model that is part of the DEtection Transformer (DETR) family. Transformer-based detectors have matured quickly, and D-Fine in particular provides stronger accuracy while maintaining competitive inference speed.
Our office dog Nala sitting on a chair, as detected by our own D-Fine model in the DM vision library.
YOLO has long been the leading standard for real-time detection, but the latest DETR variants are now consistently proving to be the better alternative. Beyond the accuracy gains, an equally important advantage is the far more permissive license that comes with them.
YOLO’s licensing issue
The YOLO series is developed and maintained by Ultralytics. All YOLO code and weights are released under the AGPL-3.0 license. Long story short, this license only allows commercial usage under the strict condition that any code modifications or weights must be made publicly available. On the contrary, all DETR models to date were released under the Apache 2.0 License, allowing free use and modification for commercial and proprietary purposes.
Next to licensing, there are other reasons why we like working with DETRs:
DETRs treat object detection as a direct set-prediction problem. This eliminates hand-crafted components such as non-maximum suppression that introduce additional hyperparameters and slow down the detection pipeline.
Modern GPU architectures are heavily optimized for efficient attention operations such as flash attention, making transformers increasingly more suitable for real-time applications.
Transfer learning from vision foundation models such as the recent DINOv3 fundamentally augments the capabilities of DETRs.
We have had nothing but great experiences with DETRs so far. They adapt remarkably well to new datasets, even when trained from scratch. For the right use cases, pre-training the models on datasets such as COCO and Objects365 further boosts performance. About time for a post on this exciting topic!
A short overview of what you can expect in the remainder of this blogpost. We will:
discuss the most important advancements leading to the real-time adoption of DETRs;
compare two leading DETR models to the latest YOLO 11 model to draw some important conclusions.
Let’s go!
DETR: transformer for NMS-free object detection
All Detection Transformer architectures have the same underlying structure. A (CNN-) backbone is used to extract image features. These features are fed to a transformer encoder-decoder structure that is able to predict accurate bounding boxes for objects in the image. The resulting N decoder output embeddings are independently projected to bounding box coordinates and class labels.
Intuitively, the encoder in DETR transforms the dense backbone features into a semantically structured representation of the image that captures relationships between regions through global self-attention.
The transformer decoder takes a fixed set of N learned object queries, each representing a potential object slot. It then iteratively refines these to produce final bounding boxes and class predictions. It does this through two attention operations:
Self-attention among the queries, enabling them to model interactions and avoid duplicate detections (e.g., two queries focusing on the same object).
Cross-attention between the queries and the encoder’s output features, allowing each query to attend to the most relevant parts of the image and extract the corresponding visual evidence.
Attention layer in the DETR decoder. The output embedding of the cross-attention module serves as the content query for the next layer. The output features of the encoder are the key and value for cross-attention. The positional query is learnable and shared over self-attention and cross-attention in all layers
Through the clever use of attention in the decoder, DETR replaces traditional components like anchor boxes and non-maximum suppression with a fully end-to-end transformer-based detection process.
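To make the two attention steps concrete, here is a heavily condensed PyTorch sketch of a single decoder layer. Real implementations add layer norms, dropout, and positional and query embeddings at every layer; the dimensions below are purely illustrative.

import torch
from torch import nn

class MiniDETRDecoderLayer(nn.Module):
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, queries, memory):
        # 1) self-attention among the object queries, which lets them
        #    coordinate and avoid duplicate detections
        attended, _ = self.self_attn(queries, queries, queries)
        queries = queries + attended
        # 2) cross-attention from the queries to the encoder output
        #    ("memory"), pulling in the visual evidence each query needs
        attended, _ = self.cross_attn(queries, memory, memory)
        queries = queries + attended
        return queries + self.ffn(queries)

# 100 object queries attending over 900 encoder tokens for a batch of 2 images
layer = MiniDETRDecoderLayer()
queries = torch.zeros(2, 100, 256)
memory = torch.randn(2, 900, 256)
print(layer(queries, memory).shape)  # torch.Size([2, 100, 256])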
Direct set prediction
DETR reframes object detection as a direct set-prediction problem. Given an image, it predicts a fixed set of N bounding boxes corresponding to the object queries. Because N typically exceeds the number of actual objects, many predictions correspond to a special “no-object” class and are discarded at inference. During training, the Hungarian algorithm performs bipartite matching between predicted and ground-truth boxes, ensuring each ground-truth box is paired with exactly one prediction in a permutation-invariant way. The loss is then computed on these matched pairs.
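A minimal sketch of that matching step using SciPy's Hungarian solver is shown below. The real DETR matching cost combines classification probability, L1 box distance and generalized IoU; a plain L1 cost is used here just to keep the idea visible.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match(pred_boxes, gt_boxes):
    # cost[i, j] = L1 distance between prediction i and ground truth j
    cost = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)
    pred_idx, gt_idx = linear_sum_assignment(cost)
    return list(zip(pred_idx.tolist(), gt_idx.tolist()))

preds = np.array([[0.5, 0.5, 0.2, 0.2],   # N=3 predicted boxes (cx, cy, w, h)
                  [0.1, 0.1, 0.1, 0.1],
                  [0.8, 0.8, 0.3, 0.3]])
gts = np.array([[0.78, 0.82, 0.3, 0.3],   # 2 ground-truth boxes
                [0.12, 0.10, 0.1, 0.1]])
# Prints [(1, 1), (2, 0)]; the unmatched prediction falls to "no object".
print(match(preds, gts))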
Overcoming DETR’s shortcomings
Despite its elegance and powerful prediction paradigm, slow training convergence and low performance on small objects limited adoption in practical systems early on. Over the years, several enhancements drastically improved the performance of Detection Transformers:
Deformable DETR introduced deformable attention, an efficient multi-scale attention mechanism tailored to the task of object detection.
The authors of Efficient DETR were the first to use top-k query selection for better initialization of the object queries for the decoder.
DN-DETR drastically improved training convergence using an auxiliary task of denoising ground-truth bounding boxes.
DETR evolution throughout time. Real-time variants arose from 2024 in two families: the RT-DETR family indicated in blue, and the LW-DETR indicated in purple.
Real-time transformer object detection
From 2024 onwards, DETRs really started to challenge YOLO in real-time detection, eventually surpassing them in accuracy while remaining competitive in speed and efficiency. There are two schools of thought that compete for the state of the art nowadays:
RT-DETR (real-time DETR) sticks to the original DETR architecture and focuses on optimizing the encoder and the initialization of the object queries. D-Fine currently leads this family with a heavily optimized training strategy centered on the decoder. Very recently, DEIMv2 extends it further by integrating DINOv3 features in its backbone.
LW-DETR (light-weight DETR) adopts a simpler idea: replace the traditional CNN backbone and encoder with a pure Vision Transformer (ViT). RF-DETR (Roboflow DETR) leverages this especially well by starting from a pretrained DINOv2 encoder.
Work on Detection Transformers is very much alive: DEIMv2 was released less than two months ago, while Roboflow put their paper on RF-DETR on arXiv just last week!
Object detection performance
How do these advancements reflect on performance benchmarks? The figure below summarizes the performance of YOLO11, D-Fine, and RF-DETR for relevant model sizes on the well-known COCO dataset.
Performance comparison between leading model architectures for their corresponding nano (N), small (S), medium (M), and large (L) variants. Indicative latency measures for each model size indicated between brackets.
*Not pretrained on Objects365 dataset **RF-DETR L is not released yet
Some important take-aways from these numbers:
Both D-Fine and RF-DETR clearly outperform YOLO 11 for all sizes.
RF-DETR’s smaller models stand out, with the nano variant outperforming the others by a wide margin. This is likely because RF-DETR-N already benefits from a strong DINOv2 backbone.
D-Fine’s performance scales the best with model size, with the large variant scoring a whopping 57.4 mAP.
Parameter count
So, RF-DETR for small, very fast models and D-Fine when things get more complex? There is another side to the story. To finish off this post, I’d like to highlight an important difference between D-Fine and RF-DETR. For that, let’s take a look at the following figure:
Model sizes in million parameters for YOLO11, D-Fine and RF-DETR for their corresponding nano (N), small (S), medium (M) and large (L) variants. YOLO11 shows the best downward trend for larger model sizes, with D-Fine close behind.
One of the first things to stand out is that D-Fine and YOLO11 become significantly lighter as their model sizes shrink, while RF-DETR’s parameter count declines by only around 5 million. This somewhat surprising observation results from the fact that RF-DETR was trained with a technique called Neural Architecture Search (NAS). NAS automatically finds network architectures that are Pareto optimal for the accuracy-latency trade-off.
Interestingly, the “small” RF-DETR architectures found by NAS end up only slightly lighter than the “large” variants. RF-DETR model sizes thus reflect speed rather than parameter count. D-Fine’s model sizes, on the contrary, are on par with YOLO 11, making D-Fine the more versatile DETR architecture, one that can be adapted to a wide range of scenarios, including resource-constrained edge environments.
Conclusion
Real-time Detection Transformers represent one of the most significant recent shifts in computer vision. Their rapid evolution shows how transformers have become not only viable but actually preferred in scenarios that demand both high speed and high accuracy, even in resource-constrained settings. Just as important, their Apache 2.0 License makes them easy to use, enabling practical adoption beyond academic benchmarks.
D-Fine and RF-DETR have set the new standard for real-time object detection moving forward. D-Fine shows the best scaling across speed, accuracy, and model size. The small RF-DETR variants are remarkably accurate and fast for their size, but the bigger models fall short of D-Fine when evaluated on the well-known COCO dataset. However, the field keeps changing rapidly, so we’ll keep tracking progress on both to make the best possible choices for every problem.
If you’re working on demanding detection problems where accuracy, robustness, and efficiency matter, we can help. We tailor DETR-based models to your specific application, integrate them into video processing pipelines, and set up continuous improvement loops to ensure performance keeps rising as new data comes in. Reach out; we’d be excited to turn cutting-edge Detection Transformer research into real, production-grade impact for your system.
A couple of weeks ago, I was notified that I can be part of a class action settlement against the University of Minnesota for a data breach that exposed my personal information. According to the details, in 2021 the University of Minnesota experienced a data breach that exposed the personal information of "individuals who submitted information to the university as a prospective student, attended the university as a student, worked at the university as an employee or participated in university programs between 1989 and Aug. 10, 2021" (source). I'm an alumnus of this university, so my information was part of that breach.
The university, of course, like a classic corporate entity, took the easy route that the legal system provides. They refused to admit any wrongdoing, but they agreed to pay $5 million to settle the class action lawsuit. The settlement is open to anyone who had their personal information exposed in the breach, which includes names, addresses, dates of birth, Social Security numbers, and other sensitive data.
What is even more insulting than the university's failure to issue a formal apology to the affected individuals is that they are offering a mere $30 per person as compensation for the breach. To be fair, they do include the standard 24 months of dark web monitoring and identity theft protection services, but the value of my personal information has been set at $30. It would be even less if the number of people submitting claims exceeds the funding available for the settlement.
So according to the university that sends me two or three emails per week asking me to donate to them, my personal information is worth $30. I understand that my Social Security number and other personal information got exposed in other breaches (thanks to T-Mobile and others). But the current status quo is that it does not matter whether it is a commercial entity or a public one; they will act in the same way. They will not take responsibility for their actions, and they will not compensate you for the damage they caused. They will just offer you a small amount of money and hope that you will forget about it.
The University of Minnesota is not the only one doing this. Many other institutions and companies have been caught in data breaches and have offered similar settlements. But it is still disappointing to see that they are not taking the issue seriously. This same university, which promised lifetime access to an email address and did not honor it, is now offering me $30 for my personal information. It is a slap in the face to all of us who have been affected by this breach. So I will not be submitting a claim for the settlement. I will not be accepting their offer of $30. I would have much preferred if they had taken responsibility for their actions and issued a genuine apology. That would have been a good start. But they did not. And they will not.
The basic problem is that they do not care about us. They care about their reputation and their bottom line. They do not care about the damage they caused to our personal information. They do not care about the trust they have broken. They just want to move on and forget about it. When this comes from a corporation, I can understand it. But when it comes from a public institution that is supposed to serve the public interest, it is unacceptable. How could I trust anything coming from them in the future? They have shown that they don't care about their alumni or their students.
Regulation is very weak, and the courts and the laws are not doing enough to hold these institutions accountable. The fines are too low, and the settlements are too small. The only way to change this is to demand better regulations and stronger penalties for data breaches. We need to hold these institutions accountable for their actions and make them pay for the damage they cause. If the fines and compensation were higher, the incentives would be aligned, and they would take data security more seriously. They would invest more in protecting our personal information instead of in ever-increasing administrative costs and the salaries of top executives.
US universities are not only charging high tuition fees for education; they are even charging researchers with external grants to use their facilities. If you get an NSF or NIH grant, you have to pay the university a percentage of the grant as indirect costs. The percentage varies from one university to another, but it is usually around 50%. This means that on a 100,000 USD grant, the university collects an additional 50,000 USD in indirect costs (so NSF or NIH ends up paying 150,000 USD). This is a huge amount of money that could be used for research, but instead it goes to the university's administrative costs and the salaries of an ever-increasing number of administrators.
For what it is worth, universities are currently under fire for a variety of reasons, many of them politically motivated, but there are also many valid reasons to be critical of the way they are run. The way they handle data breaches is just one of them. The amount of disrespect they show their alumni and students is another. The way they prioritize administrative costs over education and research is yet another. It is time for us to demand better from our universities and hold them accountable for their actions.
After writing this post and trying to proofread it, I realized that I repeated "my personal information is worth $30" multiple times. I guess that is a sign that I am still angry about it. I also realized that if I had written this in Arabic, it would have been much more concise. The poetic register of writing a grievance in Arabic is much more effective than in English. But I will leave that for another time.
Hindsight – Type-safe and evolvable event sourcing for Haskell
Type-safe and evolvable event sourcing for Haskell
Hindsight is a type-safe event sourcing system that provides strong compile-time guarantees for event handling, versioning, and consistency with multiple storage backends.
Hindsight in Action
Type-Safe Event Definition
Define events with compile-time versioning guarantees. No runtime surprises.
-- Event definition
instance Event "user_registered"

-- Event payload
data UserInfo = UserInfo
  { userId :: Text
  , email  :: Text
  } deriving (Generic, FromJSON, ToJSON)

-- Version declaration
type instance MaxVersion UserRegistered = 0
type instance Versions UserRegistered = '[UserInfo]

-- Migration (automatic for single version)
instance MigrateVersion 0 UserRegistered
Backend-Agnostic Subscriptions
Subscribe to events with handlers that work across all backends.
{-# LANGUAGE RequiredTypeArguments #-}

-- Subscribe to events (works with any backend)
subscribeToUsers :: BackendHandle backend -> IO (SubscriptionHandle backend)
subscribeToUsers store =
  subscribe store
    ( match "user_registered" handleUser :? MatchEnd )
    (EventSelector AllStreams FromBeginning)
  where
    -- Handler runs for each event
    handleUser envelope = do
      let user = envelope.payload
      putStrLn $ "New user: " <> user.email
      return Continue
SQL Projection Handlers
Transform events into queryable read models with ACID guarantees.
Billionaire’s wife bought London mansion as he reconciled with Chinese authorities
The family of Chinese billionaire Jack Ma bought a £19.5 million London mansion amid a rapprochement with Chinese authorities after years of scrutiny and political exile.
Ma’s wife, Cathy Ying Zhang, acquired the six-bedroom Edwardian house, formerly the Italian embassy, in London’s elite Belgravia district in October 2024, property records show.
The purchase came after Ma’s return to public life, following his disappearance from view in the aftermath of a speech criticising China’s financial system. It could be seen as a “precautionary diversification” in case Ma again provokes Beijing’s ire, said Sari Arho Havrén, a China specialist at the Royal United Services Institute.
“Wealthy families are hedging against regime risk—one never knows when policies may turn hostile again,” she said. “Affluent families are diversifying quietly. Rule of law societies still hold considerable appeal.”
Ma, 61, is the founder of Alibaba Group, whose online commerce platforms have earned him a fortune of around $30 billion. The Belgravia house, the Ma family’s first known property holding in the UK, may have been funded by the sale of Alibaba shares in 2023, Havrén said.
Ma’s wife Zhang, who has taken Singaporean citizenship, is reportedly the sole director of an offshore company that Ma used to buy a château and vineyards in France.
Last year it was reported that Zhang spent up to $38 million on three commercial properties in Singapore. The buying spree is part of a trend that has seen prominent Chinese businesspeople move money abroad for fear of asset freezes or capital controls.
Many have left China altogether. As many as 13,800 “high-net-worth individuals” emigrated in 2024—a 28 percent rise from 2022, according to investment migration consultants Henley & Partners.
The sale of the Belgravia mansion, managed by Knight Frank and Beauchamp Estates and handled by law firm Withers LLP, was rushed through ahead of a rise in the UK’s stamp duty surcharge for overseas buyers, according to a November 2024 report that did not name the buyer.
Beauchamp and Knight Frank declined to comment. Zhang and Ma did not respond to questions put to them via Withers.
In 2015, it was reported that the Ma family had purchased ‘Asia’s most expensive home’ on Hong Kong’s Victoria Peak, which was formerly owned by the Belgian government. In the same year, it was reported that Ma had bought a 28,000-acre property in upstate New York for $23 million.
Ma vanished from public view in late 2020 after he criticised China’s financial regulators. Beijing reportedly punished him with a fine of nearly $3 billion and halted a stock market listing by Ant Group, an offshoot of Alibaba.
He resurfaced in China in 2023 after an apparent reconciliation with the administration of President Xi Jinping, occasionally attending public events. In February 2025, he was seen shaking Xi’s hand at an event with Chinese industry leaders. However, Ma’s public remarks went unreported by official state media, prompting analysts to suggest that he had not been “completely rehabilitated”.
In April,
The Guardian
reported that Chinese authorities enlisted Ma as part of a campaign to pressure a dissident businessman to return to China from France to help prosecute an official who had angered the regime.
“They said I’m the only one who can persuade you to return,” Ma reportedly told the unnamed businessman in a telephone call. The Chinese government called the allegations “pure fabrication”.
Headline picture: Beauchamp Estates
Libpng 1.6.51: Four buffer overflow vulnerabilities fixed
As “pedophile hellscape”
Roblox
finally adds
a rudimentary measure
to try to prevent children from being exploited via its network of games and chatrooms used by 151 million people, the company’s CEO
spoke to the
New York Times
podcast Hard Fork about child safety. And it didn’t go great. It
really
didn’t go great.
Roblox
is coming under increasing scrutiny after decades of failing to implement even the most basic of protective measures to keep its pre-teen players away from the networks of pedophiles who use the game to find victims. Described last year as
“a pedophile hellscape for kids”
by Hindenburg Research
,
the game quickly introduced a
new assortment of measures
last October that did next to nothing, leading this year to a great deal of legal interest. Three months back,
the state of Louisiana announced its intentions to sue
Roblox
for the dangers it was allowing, joined since by Kentucky and Texas. These actions come alongside
35 other ongoing lawsuits
, including one from only yesterday
by a family in Cuyahoga County
following
Roblox
‘s alleged use in the tragic and too common abuse of their 13-year-old son.
On Wednesday of this week,
Roblox
announced
it was beginning to roll out a new method of checking player ages before they could access chat, this time using facial recognition to try to restrict players to only talking to others in their age range. Such facial ID checks have become commonplace in the UK following the disastrously poor UK Online Safety Act, by which porn sites and other age-restricted outlets are required to verify users’ ages. This has led to sites like X, Bluesky and Discord also requiring British users to prove their age, usually by showing their face on camera in a measure that entirely thwarts all seven people who haven’t heard of a VPN.
Regarding this rollout,
the
New York Times
‘ Casey Newton spoke to
Roblox
co-founder and CEO David Baszucki about the new plans
, and whether they can really help. It doesn’t begin well. When asked about the “scope of the problem” of predators in the application, Baszucki came in shoulders first saying, “We think of it not necessarily just as a problem, but an opportunity as well.” Ah, the good ol’ opportunities available when your company’s product is rife with pedophilia. He continued, outlining how wonderful it is that young people can build and communicate together, how they have “150 million daily actives, 11 billion hours a month, like what is the best way to keep pushing this forward.” It is the most astonishingly tone-deaf response.
Newton attempts to get some sensible answers from Baszucki about how the ID checks will work, and why they won’t be
as easily evaded as so many others
, and amidst Baszucki’s corporate waffle he’s told that
Roblox
is apparently always looking out for “weird signals” and will ask for further age verification should these appear, although he didn’t explain what those signals might be, nor what these further checks would be. But then Newton goes on to ask the painfully necessary question: why has it taken 19 years to even
try
to stop adults from being able to speak to children?
Baszucki responds by talking about the evolution of
Roblox
‘s text filtering tech for inappropriate language and personally identifying information over the last two decades, and how it’s always improving, which is great but clearly not relevant to the question. Newton is having none of it, responding, “Yeah, I mean, I don’t know. When I read these lawsuits and these investigations into the company, it does not seem like predators are having that hard of a time getting around your filters…So I’m curious what’s made you so confident that things are working?” Baszucki’s response is incoherent.
“I don’t want to comment on it. We do see the chat logs of those. And we can see interesting and, many times, ways of people trying to—I’d say, many times people who are fairly sophisticated, and I’m not going to say all of them, many times kids who are over 13, who actually in any other app are not running in a filtered situation, unfortunately, figuring out how to jump to some other platform where they can communicate unfiltered, where they can share images, all of it. It’s one of the primary things we’re doing is trying to keep people on our platform. It’s an interesting industry situation. I would say, we’re not waiting for the rest of the industry. We’re like, we’re always trying to stay ahead of it. On the California age-appropriate design code, for example, we’re like, we are already doing all of that stuff. And the same thing with age estimation: We’re not doing this because of any laws that are coming, we think it’s the right thing to do.”
Newton very impressively keeps his cool, and rather than pointing out that this answer had nothing to do with the situation, nor indeed how unbelievable it is that the CEO of the company would say “I don’t want to comment on it” when asked why he’s confident in age tech that clearly doesn’t work, he instead points out that the lawsuits are demonstrating that “
Roblox
is kind of, you know, where predators go to find kids.”
Things become increasingly tense, as Baszucki tries to claim this is a misrepresentation, and when pressed on whether he truly doesn’t believe
Roblox
has a predator problem replies, “I think we’re doing an incredible job at innovating relative to the number of people on our platform and the hours, in really leaning in to the future of how this is going to work.”
What becomes so grimly apparent is that even now, even after a year of extraordinary scrutiny and legal pressure, Baszucki still doesn’t have a grip on the situation at all. To be so ill-prepared for such obvious questions, to have no coherent responses for why the new tech will be effective, and to go so out of his way to appear so uninterested in the subject, is astonishing. As he’s pressed further on the ease with which predators can suggest children speak to them on another platform, Baszucki eventually unravels into literal gibberish:
“I would actually say that is a very simple red flag. Like, that sounds like text filter many prior generations. So I would say the techniques are much more sophisticated than that. We’re constantly getting better than that and sometimes it’s adversarial. Like, we’re getting into, you know, if we cryptographically were going to try to have an exchange of how to share your Snap handle or whatever handle. We see some of that starting, like things we’ll have to essentially prevent in the future.”
The CEO seems to believe that the scale of
Roblox
is somehow an excuse to justify its problems, repeatedly coming back to its 150 million users and 11 billion hours a month, as if this explains it all away. But more problematically, when Newton points out that the company wants those numbers to grow, Baszucki immediately switches back to talking about what an amazing financial opportunity this is. Given
Roblox
is really struggling to turn those users into money
, it reads like he’s only speaking to investors and analysts who are increasingly concerned about
Roblox
‘s lack of profits. So many responses begin with an infuriatingly irrelevant attempt to avoid the question, and then end with words like “And you could imagine
Roblox
at 20-X this scale, having 50 percent of the gaming market space all in a single place.” It’s so crass!
“Fun,” says Baszucki, a man who appears pathologically incapable of reading the room. “Let’s keep going down this. And so, first off, Hindenburg is no longer in existence, correct? So, you should report on that. They went out of business for some reason…” He then demanded to know if Newton had researched the answer for himself, before saying that it’s because they’ve moved so much of the safety regulation onto AI, all while sounding absolutely furious that he was being asked. He then demands that Newton agree that if AI is more effective, it’s better to use it, and when Newton does, Baszucki starts to behave incredibly immaturely. “Good, so you’re aligning with what we did. High-five.” Then when Newton tries to ask a question, he interrupts to say, “Thank you for supporting our
Roblox
decision matrix
.”
Then interrupts yet again to say, “I’m so glad you guys are aligned with the way we run
Roblox
. High-five.” Think he’s done? Nope. Yet again he interrupts the question with, “Is this a stealth interview where actually you love everything we’re doing and you’re here to stealthily support it?”
And he doesn’t stop. When co-host Kevin Roose tried to get things back on the rails, Baszucki kept going with the same pathetic line. Challenged on how AI has proved very ineffective for social media companies, he just carries on saying it. The only thing that stopped this tantrum was allowing the CEO to talk about Polymarket, a cryptoscam-based prediction market, letting people place bets on things as ridiculous as awards ceremonies and future weather patterns. Some believe it’s becoming a useful way to predict markets, and that group includes Baszucki who is…and I cannot believe I’m typing this…looking to put it into
Roblox
so children can gamble.
He wants to find a way to do this that’s “legal” and “educational,” at which point the podcast hosts begin openly mocking the stupidity. And then, thank god, they run out of time.
Chris McCausland: Seeing into the Future – an astonishing look at how tech is changing disabled people’s lives
Guardian
www.theguardian.com
2025-11-22 07:00:32
Prepare to have your perspective shattered by the comedian’s visits to our US tech overlords. The upcoming advancements for those with disabilities are life-changing Washing machines liberated women to get soul-crushing jobs that ate up their free time. Social media gave the world one revolution – b...
Washing machines liberated women to get soul-crushing jobs that ate up their free time. Social media gave the world one revolution – before it destabilised democracies everywhere else. Now AI is here, and its main job seems to be replacing screenwriters. It’s easy to fall into techno-pessimism, but new documentary Seeing into the Future (Sunday 23 November, 8pm, BBC Two) has a different angle. For disabled people, tech has already brought about life-changing advancements. And we haven’t seen anything yet.
It is presented by comedian and Strictly
winner
Chris McCausland, who is blind. Some of the most casually astonishing scenes occur early on, showing how he uses his phone – essentially, an eye with a mouth. “What T-shirt is this?” he asks, holding up a garment. “A grey T-shirt with a graphic logo of Deftones,” his phone obliges. It can even tell him if the shirt needs ironing. But it’s where all this is going that fascinates McCausland, so he heads to the US, to see what’s in development at the houses of our tech overlords.
He swings by a facility belonging to Meta to try out some smart glasses. To my mind, he may as well be entering the lair of the White Worm, or popping round for macarons at Dracula’s castle. But that’s partly because I’m not in direct need of such technology, and the documentary’s job is to highlight possibility not jump on pitfalls. It’s not like Zuckerberg is personally in the lab, stroking a cat and spinning round on an egg chair.
I love having my perspective shaken up. A glass screen with no buttons sounds like the most excluding device imaginable, McCausland acknowledges, yet his phone became the most accessible tool he’s ever used. He’s similarly excited by the Meta Specs – I don’t think that’s what they’re actually called – which are always on and offer live video interpretation, telling you what you’re looking at. Like a phone but, crucially, wearable. “The one thing blind people never have is two hands free,” he observes.
McCausland with Maxine Williams, VP of accessibility and engagement at Meta, trying out their smart glasses.
Photograph: BBC/Open Mike Productions
At MIT, a nanotechnologist tells him how molecular devices could repair cells inside our bodies. He tries bionic gait assistance – a device that straps on to the calf, giving the wearer added power. It looks like the knee brace Bruce Wayne uses in
The Dark Knight Rises
to kick through a brick wall when he learns he’s got no cartilage in his knee. Most moving, in every sense, he takes a trip in a driverless car. It’s the first time McCausland has taken a car journey alone.
Driverless cars will arrive in the UK next spring. (That’s a long journey.) They are what I would call an instinctive NOPE. But “It’s not massively different to trusting a driver I don’t know,” McCausland reflects. They are extraordinary: mounted with spinning radars, making calculations involving the speed of light to 3D-model the environment in real time. They may as well have gullwing doors. The fact the steering wheel moves on its own is McCausland’s favourite thing about them, which is charming. Coolness is certainly the second best thing technologists can pursue, after equality of access to lives of dignity and independence. In my defence, it’s not just that I don’t trust technology. It’s that I don’t trust profit-driven Big Tech companies to behave for the public good, or with any accountability.
There’s a parallel pleasure in the documentary – transatlantic cultural difference. These are not just Americans, remember. These are San Franciscan Futurists. The inadvertent comedy is amplified by the addition of the dry McCausland. A man so British that, even when he’s interviewing a nanotechnologist about blood-borne computers that could potentially restore his sight, he sounds as if he’d hand you thirty English notes right now if you could teleport him to the pub instead.
Even the tech is unmistakably American. “I can hear a plane?” prompts McCausland, trialling Zuckerberg’s glasses. “Yes, a plane is visible in the clear blue sky,” responds the earnest spectacles. Later, our presenter wryly looks back toward his own camera crew. “Do they look like they know what they’re doing?” he provokes. “Judging by their equipment, yes, they are professionals.” Go-go-gadget-missing-the-joke. Computers may increasingly be able to play God, but irony remains a step beyond. Even with a Batman leg brace.
Bro boost: women find LinkedIn traffic ‘drives’ if they pretend to be men
Guardian
www.theguardian.com
2025-11-22 07:00:32
Collective experiment found switching profile to ‘male’ and ‘bro-coding’ text led to big increase in reach, though site denies favouring posts by men Do your LinkedIn followers consider you a “thought leader”? Do hordes of commenters applaud your tips on how to “scale” your startup? Do recruiters sl...
Do your
LinkedIn
followers consider you a “thought leader”? Do hordes of commenters applaud your tips on how to “scale” your startup? Do recruiters slide into your DMs to “explore potential synergies”?
If not, it could be because you’re not a man.
Dozens of women joined
a collective LinkedIn experiment
this week after a series of viral posts suggested that, for some, changing their gender to “male” boosted their visibility on the network.
Others
rewrote their profiles
to be, as they put it, “bro-coded” – inserting action-oriented online business buzzwords such as “drive”, “transform” and “accelerate”. Anecdotally, their visibility also increased.
The uptick in engagement has led some to speculate that an in-built sexism in LinkedIn’s algorithm means that men who speak in online business jargon are more visible on its platform.
Like most large social media platforms, LinkedIn uses an algorithm to determine which posts it shows to which users – boosting some, and downgrading others.
In
a blog post on Thursday
, LinkedIn acknowledged the trend, but said it did not consider “demographic information” in deciding who gets attention. Instead, it said, “hundreds of signals” factor into how a given post performs.
“Changing gender on your profile does not affect how your content appears in search or feed,” a spokesperson said. Be that as it may, the anecdotes are piling up.
“It has certainly been exciting,” said Simone Bonnett, an Oxford-based social media consultant who changed her pronouns to “he/him” and her name to “Simon E” on LinkedIn earlier this week.
“The kind of stats that I’m seeing at the moment are a 1,600% increase in profile views, which is wild if you think about what social media views look like at the moment, and a 1,300% increase in impressions. Also wild reach stats.”
Megan Cornish, a communications strategist for mental health tech companies, said she started experimenting with her LinkedIn settings after seeing her reach on the platform decline precipitously earlier this year.
First she changed her gender to “male”. Then she told ChatGPT to rewrite her profile in “male-coded” language, based on a LinkedIn post suggesting the platform favours “agentic” words such as “strategic” and “leader”.
Finally, she asked ChatGPT to rewrite old, badly performing posts from several months ago in similarly “agentic” language, figuring that recycling old, reworked content would help her isolate what effect “bro-coding” was having on her reach.
Things went great. Almost immediately, Cornish’s reach on LinkedIn spiked, increasing 415% in the week after she trialled the changes. She wrote a post about the experience, and
it went viral
, racking up nearly 5,000 reactions.
The problem was, she hated it. Before, her posts had been “soft”, she said. “Concise and clever, but also like warm and human.” Now, bro-Megan was assertive and self-assured – “like a white male swaggering around”.
She gave up after a week. “I was going to do it for a full month. But every day I did it, and things got better and better, I got madder and madder.”
Not everyone had the same experience as Cornish and Bonnett. Cass Cooper, a writer on technology and social media algorithms, said she changed her gender to “male” – and then her race to “white” (Cooper is Black). The overall result, she said, was a decline in her profile’s reach and engagement – an experience other women of colour on the platform have also
discussed
.
“We know there’s algorithmic bias, but it’s really hard to know how it works in a particular case or why,” she said.
While the LinkedIn experiments were “frustrating”, she said she believed they were a reflection of broader society-wide biases. “I’m not frustrated with the platform. I’m more frustrated with the lack of progress [in society].”
Users
have been rumbling
about LinkedIn’s weird position as a quasi-business, quasi-social network for some time, ever since the pandemic blurred professional boundaries and injected more oversharing into work. LinkedIn’s occasional tendency to elevate extreme “bro-coding” is
best illustrated by
social media accounts
recording the excesses
of the platform.
These latest “bro-coding” experiments, however, have their origins in what Cornish, Bonnett and others describe as algorithm changes in recent months that have caused female creators in particular to have markedly less visibility. This led to a series
of informal experiments
earlier this year, in which women and men in parallel industries posted the same content – and the men got drastically more reach.
LinkedIn uses
an AI system
to classify posts to its feed, it says, deciding how to disseminate them based on their content, as well as the poster’s professional identity and skills. It evaluates its algorithms regularly, it says, including “checks for gender-related disparities”.
A spokesperson for LinkedIn suggested that a recent decline in certain users’ reach came from a far higher volume of content on the network, adding that there had been a 24% increase in comments and a commensurate spike in video uploads in the past quarter.
Bonnett said the “bro-coding,” in her experience, was on the rise. “You always think of LinkedIn as being more genteel, more businesslike. It’s not like that any more. It’s starting to become the wild west.”
Get your payload to the Moon, Mars, and other Deep Space destinations - soon.
With
rideshare missions launching as soon as 2028
,
we can get your payload to where you need it, fast.
Cost Efficient
Traditional chemical rockets are inefficient and expensive.
Ion Thrusters provide a cost-efficient method to get your payload to the Deep Space destination of your choice.
Mars Ready
Our vehicles are designed to be able to deliver payloads to the red planet - on your schedule, with no need to wait for orbital alignment periods.
Will pay-per-mile raise Reeves money or drive people away from electric vehicles?
Guardian
www.theguardian.com
2025-11-22 06:00:33
Need for new road taxes is clear – but there are concerns that pricing plan could stall transition away from petrol Three pence: a small charge per mile for an electric vehicle, but a giant conceptual leap for Britain. Chancellors of the exchequer have long resisted any form of road pricing as polit...
Three pence: a small charge per mile for an electric vehicle, but a giant conceptual leap for Britain.
Chancellors of the exchequer have long resisted any form of
road pricing
as politically toxic. That may be about to change next week: Rachel Reeves, perhaps inured to being pilloried for any money-raising proposal, is expected to introduce a charge explicitly linked to how far EVs drive.
The Treasury has all but
confirmed some kind of charge
will be announced at next week’s budget, but the details have not been revealed. According to an initial report in the Telegraph, EV drivers could from 2028 pay a supplement based on how far they had driven that year on top of their annual road tax, or vehicle excise duty (VED). That could be a self-declared estimate of distance or a check on the odometer at an MOT.
According to Department for Transport (DfT) figures, battery electric cars – with lower running costs than petrol – are used more: clocking up about 8,900 miles on average in 2024. At 3p a mile, that would bring in £267 a car from the 1.4m EVs currently on the road – about £375m a year in total.
The Treasury has all but confirmed that some kind of charge on EVs will be announced when Rachel Reeves delivers the budget.
Photograph: Carlos Jasso/AFP/Getty Images
The transport secretary, Heidi Alexander, was at pains to rule out a national road pricing scheme in the face of Commons attacks on Thursday – although a later “clarification” made clear that the EV pay-per-mile was still on the table.
The long-term picture is a looming shortfall in motoring tax revenues, as income from
fuel duty evaporates
in the transition to EVs. Petrol and diesel cars effectively pay a charge linked to how far they drive – but via fuel consumption at the pump.
Fuel duty of 52.95p a litre (roughly 5p a mile in average cars) will bring in £24.4bn this financial year, according to the latest forecast from the Office for Budget Responsibility, but the billions will dwindle away from 2030, when the ban on new pure petrol and diesel cars comes in.
The challenge is to find a fair replacement for an unsustainable system – and overcome longstanding resistance on the right to any form of road charging, bundled up in the culture wars around
London’s ultra-low emission zone (Ulez)
and
low-traffic neighbourhoods
with their claims of curtailed freedoms and increased surveillance.
Last year London’s mayor, Sadiq Khan, ruled out considering a pricing scheme after being battered by anti-Ulez hostility.
Photograph: PA Images/Alamy
Some economists have championed schemes that would price roads by time and congestion – potentially fairer and a better tool to manage road use, but bringing in another level of tracking.
Any scheme should be kept simple, says Steve Gooding, the director of the RAC Foundation motoring thinktank. Although, when it comes to privacy, he adds: “The amount of data being generated by the modern car is phenomenal. If the DfT or DVLA start tracking their movements, people think Big Brother is watching.
But Elon [Musk]
– they’re not that fussed.”
A wider concern is that pay-per-mile would discourage drivers from switching to electric vehicles, crucial for cutting carbon emissions. Manufacturers, businesses and motoring groups such as Ford, AutoTrader and the AA have all
spoken out
on the timing of new charges at this point in the transition. Carmakers must,
under Britain’s ZEV mandate
, ensure one in three cars sold next year are zero-emission, rising to 80% by 2030 (with hybrids allowed to make up the remaining 20%).
According to a report for the Social Market Foundation (SMF) thinktank, New Zealand provides a cautionary tale. EVs were made liable last year for its road-user charge, which previously applied only to diesel vehicles and under which drivers buy paper permits in units of 1,000km (621 miles). The move, allied to the end of buyer grants and tax exemptions, led to a sharp drop in new EV sales – now just 4% of the market, having peaked at 19%.
Electric vehicles at a charging point in Auckland, New Zealand, where EVs were made liable last year for its road-user charge.
Photograph: Michael Craig/AP
SMF says that Iceland, which also brought EVs into pay-per-mile schemes last year, maintained incentives and differentials in pricing and had a much smaller decline in market share.
Advocates for the new technology are alarmed. The Electric Vehicle Association England, a group representing drivers, warned in a letter to the chancellor that consumer sentiment was still sceptical about EVs.
For many, running costs are no longer the incentive they once were – particularly for those reliant on public charging points, usually in poorer areas and without a driveway. Ginny Buckley, the chief executive of Electrifying.com, an EV reviews platform and marketplace, says: “If you can’t rely on off-peak, affordable home charging and you’re reliant on the public charging network, for many people it will cost you more per mile to run your EV than it will a petrol car.”
Graham Parkhurst, a professor of sustainable mobility at the University of the West of England, describes the vast difference between domestic chargers and public charging points –
which attract VAT at 20% on top
– as a “political timebomb”, further dividing the haves and have-nots.
Even long-term proponents of pay-per-mile such as Parkhurst warn of the need to tread carefully: “Charging according to how much a vehicle moves makes sense. Fuel duty does that. But we need time to work out how to do this in the context of wider transport taxation. To the extent we need cars, it’s much better that they are electric,” he says.
Long-term champions of pay-per-mile warn of the need to tread carefully.
Photograph: nrqemi/Getty Images/iStockphoto
The thinktank the Resolution Foundation recommends a charge based on miles driven and weight be brought in only for future EV sales, as part of VED.
Tanya Sinclair, the chief executive of the industry group Electric Vehicles UK, agrees that motoring taxes need fundamental reform – but the government needs to be absolutely clear it wants people to switch to EVs. “Anything that muddies that message – such as giving a grant with one hand and introducing pay-per-mile with the other – undermines that clarity for the consumer,” she says.
A government spokesperson says it would “look at further support measures” for EVs, but adds: “Fuel duty covers petrol and diesel, but there’s no equivalent for electric vehicles. We want a fairer system for all drivers whilst backing the transition to electric vehicles.”
Piloting a new policy is, Gooding says, best done “with the smallest number you can get away with – and if it’s only EVs, that’s better than trying to introduce some kind of complicated charge for the 34m cars we’ve already got”.
For some, including Buckley and the Campaign for Better Transport, an obvious, if also politically contentious, answer remains: end the 15-year freeze on fuel duty and the
temporary 5p cut in place since 2022
.
Had the levy stayed the same in real terms, almost £150bn would have accrued to the public purse, according to SMF. Whatever pay-per-mile scheme evolves, Reeves “must ensure operating taxes on EVs remain lower than on petrol”, it says. “The simplest way to maintain that difference is to raise fuel duty.”
Superman copy found in mum's attic is most valuable comic ever at $9.12M
While cleaning out their late mother's California attic last Christmas, three brothers made a life-changing discovery under a pile of faded newspapers: one of the first Superman comics ever made.
An original copy of the June 1939 first edition of the Man of Steel's adventures, it was in remarkably pristine condition.
Now it has become the highest-priced comic book ever sold, fetching $9.12m (£7m) at auction.
Texas-based Heritage Auctions, which hosted Thursday's sale, called it the "pinnacle of comic collecting".
The brothers found six comic books, including Superman #1, in the attic underneath a stack of newspapers inside a cardboard box and surrounded by cobwebs in 2024, Heritage said in a press release.
They waited a few months before contacting the auction house, but once they did, Heritage Auctions vice-president Lon Allen visited them in San Francisco within days, according to the auction house.
The brothers, who have chosen to withhold their names, are "in their 50s and 60s, and their mom had always told them she had an expensive comics collection but never showed them", Mr Allen said in Heritage's press release.
"It's a twist on the old 'Mom threw away my comics' story."
Their mother had held on to the comic books since she and her brother bought them between the Great Depression and the beginning of World War Two, Heritage said.
Mr Allen added that the cool northern California climate was perfect for preserving old paper.
"If it had been in an attic here in Texas, it would have been ruined," he said.
That helped CGC, a large third-party comics grading service, give this copy of Superman #1 a 9.0 rating on a 10-point scale, topping the previous record of 8.5.
And at its sale price of over $9m, including buyer's premium, Superman #1 easily beat the previous highest-priced comic book ever sold by $3m.
Action Comics No. 1, the 1938 work that first introduced Superman, sold for $6m last year.
The youngest brother said in Heritage's press release that the box had remained forgotten in the back of the attic.
"As the years unfolded, life brought about a series of losses and changes," he said. "The demands of everyday survival took centre stage, and the box of comics, once set aside with care and intention, was forgotten. Until last Christmas."
He added: "This isn't simply a story about old paper and ink. This was never just about a collectible.
"This is a testament to memory, family and the unexpected ways the past finds its way back to us."
Now that we have a spectral renderer, it is time to see if it was worth the effort.
I hope you agree with my reasoning in
part 1
, that RGB rendering does not match the physical reality, but does it really matter?
We will now compare RGB rendering to spectral rendering using my
spectral path tracer
.
In the process, we will look at various types of light sources.
RGB rendering simply multiplies RGB triples of illuminants and textures component-wise to arrive at the linear RGB color displayed on screen.
Spectral rendering uses proper illuminant spectra (mostly from
LSPDD.org
) and reflectance spectra that are upsampled from sRGB using
Fourier sRGB
.
Throughout this blog post, I will be using figures that allow you to compare results in an interactive fashion.
Clicking on the tabs or pressing number keys 1, 2, 3 selects an image to display (RGB rendering, spectral rendering or the illuminant spectrum).
The mouse wheel lets you zoom in/out and you can pan by clicking and dragging.
At the bottom you see magnified insets of all three images for the cursor position (I am well aware that this part is not useful for the spectrum).
Illuminant E and D65
In
Figure 1
we use illuminant E, which is just a constant spectrum.
Based on how we have constructed the
Fourier sRGB lookup table
, we expect RGB rendering and spectral rendering to give nearly the same result here.
The reason for that is that we simply used the CIE XYZ color matching functions without multiplying by an illuminant.
Thus, our illuminant is implicitly constant (i.e. illuminant E) and we expect that directly reflected illuminant-E light will have exactly the sRGB color from the original texture.
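In symbols (a sketch of that argument, writing \(R_c\) for the reflectance spectrum upsampled from an sRGB color \(c\) and \(\bar{x}, \bar{y}, \bar{z}\) for the CIE color matching functions): the lookup table is built so that
\[
X_c = \int R_c(\lambda)\,\bar{x}(\lambda)\,\mathrm{d}\lambda,\qquad
Y_c = \int R_c(\lambda)\,\bar{y}(\lambda)\,\mathrm{d}\lambda,\qquad
Z_c = \int R_c(\lambda)\,\bar{z}(\lambda)\,\mathrm{d}\lambda
\]
are exactly the CIE XYZ coordinates of \(c\). Under a constant illuminant \(E(\lambda)=k\), directly reflected light then has tristimulus values
\[
\int k\,R_c(\lambda)\,\bar{x}(\lambda)\,\mathrm{d}\lambda = k\,X_c
\]
(and likewise for \(Y_c\) and \(Z_c\)), i.e. the texture's XYZ and hence its sRGB color, up to the overall brightness factor \(k\).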
The only reason why the results between RGB or spectral rendering might differ slightly is that we account for indirect illumination in these renderings.
Indeed, the results are nearly indistinguishable.
Figure 1:
Our test scene under illuminant E. RGB and spectral rendering produce nearly identical results.
Some might argue that we should have used
illuminant D65
instead of illuminant E to construct the lookup table, because that has been defined to be white light.
Most spectral upsampling techniques in the literature do not do so.
The best argument for this design decision that I have heard is that sRGB (255, 255, 255) should correspond to a constant reflectance spectrum with value one.
Anything else would be counterintuitive: Reflection on perfect white surfaces would change the color of incoming light.
If a non-constant illuminant is baked into the upsampling technique, this criterion may be violated.
Figure 2
shows results for illuminant D65.
Indeed, there are visible differences in the colors, especially in the reds.
Though, they are relatively minor.
Figure 2:
Our test scene under illuminant D65. There are minor but visible differences between RGB and spectral rendering.
Neither of these two results is inherently better or more correct than the other.
RGB rendering relies on the rather unphysical notion that all light has one of only three wavelengths.
Strictly speaking, even this interpretation is not entirely compatible with how the sRGB color space is defined since its primaries are not monochromatic spectra.
Though, under the set of definitions used to define the scene in RGB rendering, the result of RGB rendering is (tautologically) correct.
For the spectral renderer, we utilize a more complete representation of the illuminant in the form of an illuminant spectrum.
And we have used
Fourier sRGB
to come up with smooth reflectance spectra that match our surface sRGB colors.
Thus, the renderer models the physical reality more accurately, but that does not automatically mean that we get a more accurate rendition of the scene.
The only information that we have about the reflectance spectra is their sRGB color, so the upsampled reflectance spectra may not match those of the material the artist intended (or of the captured sample).
We get an interpretation that is compatible with everything we know about the scene and with physical light transport, but nothing more than that.
Other smooth illuminant spectra
In
Figure 3
, we use the spectrum of a warm-white light-emitting diode (LED).
Like most LEDs, this one has a relatively smooth illuminant spectrum composed of two lobes that mostly cover visible wavelengths (hence the good energy efficiency).
Once again, the most visible differences are in the reds.
While spectral rendering made them slightly brighter for illuminant D65, this time it is making them darker.
Greens also shift a bit, but overall the differences are still relatively minor.
One of the design goals for white illuminants is good
color rendering
, i.e. making sure that colors look like they would under illuminant D65.
LEDs generally perform relatively well in this regard, so it is unsurprising that the differences are minor here.
Figure 3:
Our test scene under an LED spectrum. All colors shift a bit, especially the reds and greens.
Figure 4
uses an incandescent spectrum, i.e. what you would get out of an old light bulb with a filament that heats up.
In the visible spectrum, the incandescent illuminant spectrum is essentially a smooth ramp that keeps going up as you move into infrared.
In other words, these light sources mostly emit invisible light, which is why they are so inefficient that they have been
prohibited in most countries
.
Here, the differences between RGB and spectral rendering start to become more interesting.
With RGB rendering, blue becomes quite saturated and slightly green, while bright red turns slightly orange.
With spectral rendering, blue surfaces become less saturated and get a yellow tint from the illuminant.
RGB rendering is not capable of reducing the saturation of colors in this manner.
With spectral rendering, illuminant spectra with strong maxima around certain colors will always desaturate surface colors in this way.
Figure 4:
Our test scene under an incandescent spectrum. Compared to RGB rendering, spectral rendering desaturates surface colors somewhat.
Spiky illuminant spectra
Now we move on to illuminant spectra with many distinct peaks.
The most common examples are compact fluorescent lamps (CFL), as shown in
Figure 5
.
The highly saturated balloons become a bit brighter with spectral rendering, especially the green ones.
Blues and reds also shift a bit, but overall, the changes here are not that big.
Figure 5:
Our test scene under a CFL spectrum. This time, the greens are particularly affected.
Metal halide (MH) lamps can have even more spiky spectra as shown in
Figure 6
.
While the light of this lamp is relatively close to being white, its effect on surface colors in the spectral renderer differs drastically from that of other white illuminants like D65 or the LED considered above.
Many saturated surface colors get a blue tint and become much less saturated.
The choice of a different white illuminant has drastically altered the look of this scene in the spectral renderer.
The RGB renderer reduces the illuminant spectrum to RGB before using it.
Therefore, white is white and surfaces keep their colors.
Figure 6:
Our test scene under an MH spectrum. With spectral rendering, the saturation of saturated surfaces is drastically reduced.
(Nearly) monochromatic spectra
The most drastic differences between RGB rendering and spectral rendering can be observed when the illuminants are nearly monochromatic, i.e. they emit most of their light close to one specific wavelength.
For example, low-pressure sodium-vapor lamps emit almost all of their light near a wavelength of \(589~\mathrm{nm}\).
They are commonly used as street lamps: they have been available since the 1920s and their efficiency rivals that of modern LEDs.
High-pressure sodium (HPS) lamps have slightly broader spectra and
Figure 7
uses one of those (since we have another fully monochromatic example below).
Light never changes its wavelength in the spectral renderer (since we do not model fluorescence).
Thus, the nearly monochromatic light of the HPS lamps stays nearly monochromatic as it scatters throughout the scene.
No matter what color a surface has, the reflected light will have a color very close to the incoming light, just of different brightness.
In the RGB renderer, we just treat this light as a mix of red and green and thus surface colors are not overridden like that.
We would be able to get such an effect for red light, green light or blue light, but not for a mixture of those.
In this way, RGB rendering hands out special treatment for these three light colors, whereas a spectral renderer can deal with monochromatic light of any wavelength.
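As a small sanity check of that claim, here is an illustrative sketch in Python/NumPy (the spectra and RGB values are made up, not the renderer's data): multiplying a near-monochromatic spectrum by any reflectance spectrum only rescales the spike, so the reflected light keeps the illuminant's color, whereas the per-channel RGB product merely modulates each channel by the surface's albedo instead of overriding the surface color.

import numpy as np

lam = np.linspace(380.0, 780.0, 401)
spike = np.exp(-0.5 * ((lam - 589.0) / 3.0) ** 2)   # near-monochromatic, sodium-like light

red_surface = 0.1 + 0.8 * (lam > 570.0)             # made-up reflectance spectra
blue_surface = 0.1 + 0.8 * (lam < 480.0)

# Spectral: both reflections are still spikes near 589 nm; only their height differs.
for refl in (red_surface, blue_surface):
    reflected = spike * refl
    print(lam[np.argmax(reflected)], reflected.max())

# RGB: the component-wise product keeps a per-surface tint instead.
light_rgb = np.array([1.0, 0.8, 0.0])               # rough RGB of the same light
print(light_rgb * np.array([0.8, 0.1, 0.1]))        # red surface stays reddish
print(light_rgb * np.array([0.1, 0.1, 0.8]))        # blue surface just goes dark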
Figure 7:
Our test scene under an HPS spectrum. With spectral rendering, the color of the light overrides the color of surfaces almost completely.
To drive this point home,
Figure 8
uses perfectly monochromatic light at \(500~\mathrm{nm}\).
Now the spectral rendering is perfectly monochromatic.
Different pixels only differ in their overall brightness.
The RGB renderer treats this illuminant as a mixture of blue and green and thus surface colors are retained to some extent.
Figure 8:
Our test scene under monochromatic light at \(500~\mathrm{nm}\). With spectral rendering, colors only differ by their brightness.
Gamut compression
Actually, it is not quite right to say that the monochromatic light at \(500~\mathrm{nm}\) is a mixture of green and blue.
It mixes positive amounts of green and blue and a negative amount of red.
Its RGB representation is \((-1,1,0.36)\).
A color with negative entries like that is called out of gamut.
The sRGB gamut is relatively small, so all non-primary colors that are a bit more saturated will be out of gamut.
And monochromatic spectra are the most saturated spectra that you can possibly have.
That poses a challenge for both RGB and spectral rendering, but for RGB rendering, it is more pronounced.
The spectral renderer is mostly oblivious to RGB color spaces.
It just models realistic light transport and estimates the spectrum of light that reaches the camera.
RGB color spaces only come into play for storing the end result and displaying that on a screen.
To display an RGB color that may be out of gamut on a screen, gamut compression should be used and there is an
ACES standard
for that.
I considered implementing that, but there are limits to how much work I want to put into a blog post.
For the results shown here, I simply clamped RGB vectors to the interval \([0,1]\) before converting to sRGB.
Thus, you should keep in mind that some of the images above have an “invisible third channel” with negative values.
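For reference, here is a minimal sketch in Python/NumPy of that simple clamping step (a stand-in for proper gamut compression, assuming linear RGB input and the standard sRGB encoding):

import numpy as np

def linear_to_srgb(c):
    # Clamp to [0, 1] first: this throws away the "invisible" negative channel.
    c = np.clip(c, 0.0, 1.0)
    # Standard sRGB transfer function.
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1.0 / 2.4) - 0.055)

# Linear RGB of the monochromatic 500 nm light from above: out of gamut (negative red).
print(linear_to_srgb(np.array([-1.0, 1.0, 0.36])))  # approx. [0.0, 1.0, 0.63]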
Conclusions
In spite of being extremely widespread, RGB rendering clashes quite badly with the principles of physically-based rendering.
While we invest a lot of effort to model or measure various materials or light sources, we simultaneously put an overly simplistic model of color at the foundation of it all.
RGB color spaces are designed to be used for display devices and they are perfectly fine for this purpose.
However, they are not meant to be used to simulate light scattering and doing so in RGB requires assumptions that are far from being physically-based.
Spectral rendering enables accurate color reproduction under all sorts of illuminants.
While the primary colors red, green and blue play a special role in RGB rendering, spectral rendering can handle monochromatic illuminants of all colors and arbitrary mixtures thereof.
That opens up new possibilities in lighting design, where the spectra of light sources can influence the colors of surfaces in ways that would not be possible with RGB rendering.
It also makes it much easier to reproduce colors seen in real scenes:
A major advantage of spectral rendering in production rendering is that it becomes easier to combine rendered and real footage, e.g. for virtual makeup.
And it nicely decouples aspects of the camera such as its spectral sensitivity curves or color space from all aspects of light transport.
In real-time rendering, many of the effects mentioned above would typically be faked using per-scene color grading.
That works reasonably well if the lighting is dominated by a single type of light source, but when there is a mixture of different light sources, this approach quickly hits a wall.
Spectral rendering handles these situations accurately (although
importance sampling
may be a challenge).
With this blog post series, I am hoping to combat some misconceptions about RGB and spectral rendering, but I have only scratched the surface.
The SIGGRAPH 2021 course on the subject
[Weidlich21]
is recommended further reading and makes similar points.
There are many benefits of spectral rendering that I have not mentioned:
Metamerism
can be handled and modeling effects such as
fluorescence
or
dispersion
becomes more natural.
My main point here is that spectral rendering is more principled, enables accurate color reproduction and is affordable right now, even in real-time rendering.
The current reality is that even offline rendering is usually based on RGB.
The most notable exception is the work of
Wētā FX
.
They have been using spectral rendering throughout their pipeline for years and ever since Avatar: The Way of Water, they have relied on
my method
for spectral upsampling of reflectance.
They also indicated that they would probably use this technique for all spectra if they were to start from scratch now
[Weidlich21]
.
So while this blog post series has only covered one of many approaches, it is one that has proven itself in production rendering.
And I have demonstrated that
its overhead
is low enough, even for real-time rendering.
Weidlich, Andrea and Forsythe, Alex and Dyer, Scott and Mansencal, Thomas and Hanika, Johannes and Wilkie, Alexander and Emrose, Luke and Langlands, Anders (2021). Spectral imaging in production. In ACM SIGGRAPH 2021 Courses.
Official version
|
Author's version
Openring-rs: a webring for static site generators written in Rust
A tool for generating a webring from Atom/RSS feeds.
openring-rs
is a tool for generating a webring from Atom/RSS feeds, so you can populate a template with articles from those feeds and embed them in your own blog. An example template is provided in
in.html
.
This is a Rust-port of Drew DeVault's
openring
, with the primary differences being:
we respect throttling and send conditional requests when using
--cache
(recommended!)
the template is written using
Tera
and is provided as an argument, not read from stdin
A webring for static site generators written in Rust
Usage: openring [OPTIONS] --template-file <FILE>
Options:
-n, --num-articles <NUM_ARTICLES> Total number of articles to fetch [default: 3]
-p, --per-source <PER_SOURCE> Number of most recent articles to get from each feed [default: 1]
-S, --url-file <FILE> File with URLs of Atom/RSS feeds to read (one URL per line, lines starting with '#' or "//" are ignored)
-t, --template-file <FILE> Tera template file
-s, --url <URL> A single URL to consider (can be repeated to specify multiple)
-b, --before <BEFORE> Only include articles before this date (in YYYY-MM-DD format)
-c, --cache Use request cache stored on disk at `.openringcache`
--max-cache-age <MAX_CACHE_AGE> Discard all cached requests older than this duration [default: 14d]
-v, --verbose... Increase logging verbosity
-q, --quiet... Decrease logging verbosity
-h, --help Print help (see more with '--help')
-V, --version Print version
Using Tera templates
The templates supported by
openring-rs
are written using
Tera
.
Please refer to the Tera documentation for details.
Mosses are already known for coping with harsh radiation, dehydration, and long
freezes
. Now scientists have pushed them even further by exposing their spore capsules to open space for nine months, and most of them survived.
The team worked with spreading earthmoss (
Physcomitrium patens
), a small moss species used widely as a plant model by researchers. Its spore-containing capsules were mounted on the outside of the International Space Station (ISS), where they experienced direct solar radiation, vacuum conditions, and sharp temperature swings during each orbit.
Under those conditions, cells usually break down quickly. So the researchers were surprised by what came back. “We expected almost zero survival, but the result was the opposite,”
says
Hokkaido University biologist Tomomichi Fujita. More than 80 percent of the spores still germinated once they returned to Earth.
The team detected a small drop in chlorophyll a, but the other pigments remained stable. The spores grew normally in follow-up tests, showing no signs of major stress from their time in orbit.
This kind of toughness fits with the evolutionary history of mosses.
Bryophytes
— the group that includes mosses, liverworts, and hornworts — were among the first plants to move from water onto land about 500 million years ago. Their spores had to withstand drying and direct sunlight long before soils existed, which may explain why their protective structures still hold up so well today.
Germinated moss spores after their time in open space (Image: Dr. Chang-hyun Maeng and Maika Kobayashi)
The results place moss spores alongside the few organisms known to tolerate direct
space
exposure, including tardigrades and certain microbes. Their survival also adds to ongoing discussions about what types of
life
might endure extreme environments beyond Earth.
According to the researchers, this durability could matter for future experiments on the Moon or Mars. Mosses need very little soil and can pull nutrients directly from rock, making them candidates for early ecosystem tests in
extraterrestrial
settings.
“Ultimately, we hope this work opens a new frontier toward constructing ecosystems in extraterrestrial environments such as the
Moon
and Mars,”
says
Fujita. “I hope that our moss research will serve as a starting point.”
The research was published in
iScience
. Read the study
here
.
Imagine this: you get a report from your bug tracker:
Sophie got an error when viewing the diff after her most recent push
to her contribution to the
@unison/cloud
project on Unison
Share
(BTW, contributions are like pull requests, but for Unison code)
Okay, this is great, we have something to start with, let's go look
up that contribution and see if any of the data there is suspicious.
Uhhh, okay, I know the error is related to one of Sophie's contributions, but how do I actually find it?
I know Sophie's username from the bug report, that helps, but I don't
know which project she was working on, or what the contribution ID is,
which branches are involved, etc. Okay no problem, our data is
relational, so I can dive in and figure it out with a query:
SELECT contribution.*
FROM contributions AS contribution
JOIN projects AS project ON contribution.project_id = project.id
JOIN users AS unison_user ON project.owner = unison_user.id
JOIN users AS contribution_author ON contribution.author_id = contribution_author.id
JOIN branches AS source_branch ON contribution.source_branch = source_branch.id
WHERE contribution_author.username = 'sophie'
  AND project.name = 'cloud'
  AND unison_user.username = 'unison'
ORDER BY source_branch.updated_at DESC;

-[ RECORD 1 ]-------+----------------------------------------------------
id                  | C-4567
project_id          | P-9999
contribution_number | 21
title               | Fix bug
description         | Prevent the app from deleting the User's hard drive
status              | open
source_branch       | B-1111
target_branch       | B-2222
created_at          | 2025-05-28 13:06:09.532103+00
updated_at          | 2025-05-28 13:54:23.954913+00
author_id           | U-1234
It's not the worst query I've ever had to write out, but if you're
doing this a couple times a day on a couple different tables, writing
out the joins gets pretty old
real fast
. Especially so
if you're writing it in a CLI interface where it's a royal pain to
edit the middle of a query.
Even after we get the data, we're left with a very ID-heavy view of what's going on: what's the actual project name? What are the branch names? Etc.
We can solve both of these problems by writing a bunch of joins ONCE, by creating a debugging view over the table we're interested in. Something like this:
CREATE VIEW debug_contributions AS
SELECT contribution.id AS contribution_id,
       contribution.project_id,
       contribution.contribution_number,
       contribution.title,
       contribution.description,
       contribution.status,
       contribution.source_branch AS source_branch_id,
       source_branch.name AS source_branch_name,
       source_branch.updated_at AS source_branch_updated_at,
       contribution.target_branch AS target_branch_id,
       target_branch.name AS target_branch_name,
       target_branch.updated_at AS target_branch_updated_at,
       contribution.created_at,
       contribution.updated_at,
       contribution.author_id,
       author.username AS author_username,
       author.display_name AS author_name,
       project.name AS project_name,
       '@' || project_owner.username || '/' || project.name AS project_shorthand,
       project.owner AS project_owner_id,
       project_owner.username AS project_owner_username
FROM contributions AS contribution
JOIN projects AS project ON contribution.project_id = project.id
JOIN users AS author ON contribution.author_id = author.id
JOIN users AS project_owner ON project.owner = project_owner.id
JOIN branches AS source_branch ON contribution.source_branch = source_branch.id
JOIN branches AS target_branch ON contribution.target_branch = target_branch.id;
Okay, that's a lot to write out at once, but we never need to write
that again. Now if we need to answer the same question we did above we
do:
SELECT *
FROM debug_contributions
WHERE author_username = 'sophie'
  AND project_shorthand = '@unison/cloud'
ORDER BY source_branch_updated_at DESC;
Which is considerably easier on both my brain and my fingers. I also get all the information I could possibly want in the result!
You can craft one of these debug tables for whatever your needs are
for each and every table you work with, and since it's just a view, it's
trivial to update or delete, and doesn't take any space in the DB
itself.
Obviously querying over project_shorthand = '@unison/cloud' isn't going to be able to use an index, so it isn't going to be the most performant query; but these are one-off queries, so it's not a concern (to me at least). If you care about that sort of thing you can leave out the computed columns so you won't have to worry about that.
Anyways, that's it, that's the whole trick. Go make some debugging
views and save your future self some time.
Hopefully you learned something 🤞! If you did, please consider joining my Patreon to keep up with my projects, or check out my book: it teaches the principles of using optics in Haskell and other functional programming languages and takes you all the way from a beginner to a wizard in all types of optics! You can get it here. Every sale helps me justify more time writing blog posts like this one and helps me to continue writing educational functional programming content. Cheers!
If you’re not familiar with Rust, you need to know about Result, a type that can hold either a successful result or an error. unwrap says basically “return the successful result if there is one, otherwise crash the program”¹. You can think of it like an assert.
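Here's a minimal Rust sketch of that contrast (my illustration, not from the post; parse_port and the fallback value are invented for the example):

// Illustrative only: unwrap() crashes the program (panics) on Err, much like a failed assert.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // Succeed or die: panics and crashes the program if parsing fails.
    let port = parse_port("8080").unwrap();

    // The explicit alternative: inspect the Result and decide locally what to do.
    let fallback = match parse_port("not-a-port") {
        Ok(p) => p,
        Err(e) => {
            eprintln!("bad port ({e}), using 8080 instead");
            8080
        }
    };

    println!("{port} {fallback}");
}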
There’s a ton of debate about whether asserts are good in production², but most of it misses the point. Quite simply, this isn’t a question about a single program. It’s not a local property. Whether asserts are appropriate for a given component is a global property of the system, and of the way it handles data.
Let’s play a little error handling game. Click the ✅ if you think crashing the process or server is appropriate, and the ❌ if you don’t. Then you’ll see my vote and justification.
One of ten web servers behind a load balancer encounters uncorrectable memory errors, and takes itself out of service.
One of ten multi-threaded application servers behind a load balancer encounters a null pointer in business logic while processing a customer request.
One database replica receives a logical replication record from the primary that it doesn't know how to process.
One web server receives a global configuration file from the control plane that appears malformed.
One web server fails to write its log file because of a full disk.
If you don’t want to play and just want to see my answers, click here.
There are three unifying principles behind my answers here.
Are failures correlated?
If the decision is a local one that’s highly likely to be uncorrelated between machines, then crashing is the cleanest thing to do. Crashing has the advantage of reducing the complexity of the system, by removing the working in degraded mode state. On the other hand, if failures can be correlated (including by adversarial user behavior), it’s best to design the system to reject the cause of the errors and continue.
Can they be handled at a higher layer?
This is where you need to understand your architecture. Traditional web service architectures can handle low rates of errors at a higher layer (e.g. by replacing instances or containers as they fail load balancer health checks using AWS Autoscaling), but can’t handle high rates of crashes (because they are limited in how quickly instances or containers can be replaced). Fine-grained architectures, starting with Lambda-style serverless all the way to Erlang’s approach, are designed to handle higher rates of errors, and crashing rather than continuing is appropriate in more cases.
Is it possible to meaningfully continue?
This is where you need to understand your business logic. In most cases with configuration, and some cases with data, it’s possible to continue with the last-known-good version. This adds complexity, by introducing the behavior mode of running with that version, but that complexity may be worth the additional resilience. On the other hand, in a database that handles updates via operations (e.g. x = x + 1) or conditional operations (if x == 1 then y = y + x), continuing after skipping some records could cause arbitrary state corruption. In the latter case, the system must be designed (including its operational practices) to ensure the invariant that replicas only get records they understand. These kinds of invariants make the system less resilient, but are needed to avoid state divergence.
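To make the configuration case concrete, here's a minimal Rust sketch of the last-known-good pattern (my illustration, not the article's; the Config shape, the parse rule, and the compiled-in default are all invented for the example):

// A sketch (not from the article) of continuing with the last-known-good
// configuration when a newly pushed config is malformed, instead of crashing.
use std::sync::Mutex;

#[derive(Clone, Debug)]
struct Config {
    max_connections: u32,
}

// The last configuration that parsed successfully.
static LAST_GOOD: Mutex<Option<Config>> = Mutex::new(None);

fn parse_config(raw: &str) -> Result<Config, String> {
    // Stand-in parse rule: the real format doesn't matter for the pattern.
    raw.trim()
        .parse::<u32>()
        .map(|max_connections| Config { max_connections })
        .map_err(|e| format!("malformed config: {e}"))
}

fn apply_config(raw: &str) -> Config {
    let mut last_good = LAST_GOOD.lock().unwrap();
    match parse_config(raw) {
        Ok(cfg) => {
            *last_good = Some(cfg.clone()); // remember the good version
            cfg
        }
        Err(e) => {
            // Degraded mode: reject the bad input and keep running on the
            // previous version (or a compiled-in default if we never had one).
            eprintln!("{e}; continuing with last-known-good config");
            last_good.clone().unwrap_or(Config { max_connections: 10 })
        }
    }
}

fn main() {
    println!("{:?}", apply_config("100"));  // parses: becomes the known-good config
    println!("{:?}", apply_config("oops")); // malformed: falls back to max_connections 100
}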
The bottom line is that error handling in systems isn’t a local property. The right way to handle errors is a global property of the system, and error handling needs to be built into the system from the beginning.
Getting this right is hard, and that’s where blast radius reduction techniques like cell-based architectures, independent regions, and shuffle sharding come in. Blast radius reduction means that if you do the wrong thing you affect less than all your traffic - ideally a small percentage of traffic. Blast radius reduction is humility in the face of complexity.
Footnotes
1. Yes, I know a panic isn’t necessarily a crash, but it’s close enough for our purposes here. If you’d like to explain the difference to me, feel free.
2. And a ton of debate about whether Rust helped here. I think Rust does two things very well in this case: it makes the unwrap case explicit in the code (the programmer can see that this line has “succeed or die” behavior, entirely locally on this one line of code), and prevents action-at-a-distance behavior (which silently continuing with a NULL pointer could cause). What Rust doesn’t do perfectly here is make this explicit enough. Some suggested that unwrap should be called or_panic, which I like. Others suggested lints like clippy should be more explicit about requiring unwrap to come with some justification, which may be helpful in some code bases. Overall, I’d rather be writing Rust than C here.
Friday Nite Videos | November 21, 2025
Portside
portside.org
2025-11-22 03:21:08
Kimmel Wraps Up Trump's Most Horrible Week. How To Spot a Fascist | Song Parody. Darializa Avila Chevalier for Congress in NY-13. Deep Dive on Trump and Bannon in the Epstein Files. Tech Billionaires' Shocking Plan For West Virginia.
Hello Windows Insiders, today we are releasing Windows 11 Insider Preview Build 26220.7271 (KB5070307) to the Dev & Beta Channels.
As a reminder we are offering the same builds to both the Dev & Beta Channels on Windows 11, version 25H2.
If you are an Insider in the Dev Channel, you now have a window to switch from the Dev Channel to the Beta Channel if you would like. This window will only be open for as long as we’re releasing the same 25H2-based updates across both the Dev and Beta Channels. After we move Dev Channel forward to a higher build number, the opportunity to switch between these channels will close. When the Dev Channel jumps ahead, things might not be as stable as the Dev Channel is today, so we highly encourage you to evaluate which channel you would like to be in during the time in which the window to switch is open.
Changes in Dev & Beta Channel builds and updates are documented in two buckets: new features, improvements, and fixes that are being gradually rolled out for Insiders who have turned on the toggle to get the latest updates as they are available (via Settings > Windows Update*) and then new features, improvements, and fixes rolling out to everyone in the Dev & Beta Channels.
For more information, see the Reminders section at the bottom of this blog post.
Introducing the Xbox full screen experience for PC
Designed with console-style navigation in mind, the Xbox full screen experience delivers a clean, distraction-free interface for controller-first gaming. Pair a controller to your PC for smooth task switching and a streamlined gaming experience on your desktop, laptop, or tablet.
UI showing the Task View in Xbox full screen experience on PC.
How to enter Xbox full screen experience:
You can access Xbox full screen experience from Task View, Game Bar settings, or use Win + F11 hotkey to toggle FSE.
UI of the task view interface showing the option to access Xbox full screen experience.
The Xbox full screen experience begins as a gradual rollout to Windows Insiders on the Dev & Beta Channels, who are also registered Xbox Insiders. We expect to expand this later to all Insiders on the Dev & Beta Channels without requiring Xbox program registration. If you want to be among the first to try out these new features on your PC, join the Xbox Insiders Program and opt into the PC gaming preview through the Xbox Insiders Hub.
Feedback: Share your thoughts in Feedback Hub (WIN + F) under Gaming and Xbox > Gaming Full Screen Experience.
New features gradually being rolled out with toggle on*
Point-in-time restore for Windows
We’re excited to introduce point-in-time restore for Windows, now available to Insiders in the Beta and Dev Channels! This flexible recovery feature empowers you to quickly roll your device back to a previous state—helping minimize downtime and simplify troubleshooting when disruptions strike. Whether you’re dealing with a widespread outage or a one-off issue, point-in-time restore helps recover your system (including apps, settings, and user files) to get you back to productivity faster. For more details, check out our documentation.
Point-in-time restore shown in the Troubleshoot menu for Windows RE.
Feedback: Share your thoughts in Feedback Hub (WIN + F) under Recovery and Uninstall > Point-in-time restore.
Introducing Fluid Dictation in Voice Typing
Following the introduction of Fluid dictation for voice access users, we’re also now introducing it for voice typing users on NPU devices. Fluid dictation makes voice-based dictation smoother and smarter by automatically correcting grammar, punctuation, and filler words as you speak, reducing the need for manual editing. Powered by on-device small language models (SLMs), it ensures fast and private processing.
To use it, set focus to a text field and launch voice typing by pressing the Windows key plus H and complete setup if you’re a first-time user. Fluid Dictation is enabled by default—you can check or toggle it via the settings flyout—so all you need to do is start talking!
UI showing the Fluid Dictation toggle in the voice typing launcher.
Feedback: Share your thoughts in Feedback Hub (WIN + F) under Input and Language > Voice Typing (Windows key plus H).
Changes and Improvements gradually being rolled out with toggle on*
[Seamlessly resume more apps from your Android phone to your PC]
Following the ability to resume Spotify tracks from your phone onto your PC, we’re excited to share that:
vivo Android phone users can also now continue your browsing activity from vivo Browser on your phone, onto your default browser on your PC.
Honor, Huawei, Oppo, Samsung and vivo Android phone users can also now continue online files opened on M365 Copilot app from your phone onto your PC. Word, Excel, and PowerPoint files will open in the respective app on your PC if you have it installed, or if you don’t they’ll open in the default browser on your PC. Note – offline files (stored locally on the phone) are not currently supported.
FEEDBACK: Please file feedback in Feedback Hub (WIN + F) under Devices and Drivers > Linked Phone.
[Click to Do]
We’re testing and refining the Click-to-Do top bar to determine the best experience for future updates. Functionality will vary by device and market.
[File Explorer]
We’re making a few refinements to the context menu aimed at reducing the space taken by less commonly used actions, while keeping them easy to access. We’ve also updated the ordering of actions to group similar tasks. This includes:
We’ve moved Compress to ZIP file, Copy as Path, Set as Desktop Background, Rotate Right, and Rotate Left into a new Manage file flyout.
We’ve moved cloud provider options, like Always Keep on this Device and Free Up Space, into their relevant cloud provider flyout.
We’ve moved Send to My Phone next to the cloud provider options.
We’ve moved Open Folder Location to now be next to Open and Open with.
The image on the left shows the “before” experience and the image on the right shows the “after” experience.
Note, the name Manage file may change in a future Insider update. If you have feedback, please file it in the Feedback Hub under Desktop Environment > Right-Click Context Menu.
We’re exploring preloading File Explorer in the background to help improve File Explorer launch performance. This shouldn’t be visible to you, outside of File Explorer hopefully launching faster when you need to use it. If you have the change and need to disable it, there is an option you can uncheck called “Enable window preloading for faster launch times” in File Explorer’s Folder Options, under View. Looking forward to your feedback! If you do encounter any issues, please file them in the Feedback Hub under Files Folders and Online Storage > File Explorer Performance, or Files Folders and Online Storage > File Explorer.
[Microsoft Store]
Based on user feedback, we have added support for uninstalling Store-managed apps from the Store’s library page. Simply find an installed app in your library, click the three-dot menu, and click uninstall. Please let us know what you think!
UI showing the uninstall functionality for Store-managed apps from the Store’s library page.
Windows Insiders across all channels running Microsoft Store version 22510.1401.x.x and higher will see this improvement.
FEEDBACK: Please file feedback in Feedback Hub (WIN + F) under Microsoft Store.
Fixes gradually being rolled out with toggle on*
[Taskbar and System Tray]
Fixed an issue which could cause the taskbar to hang after receiving certain notifications.
Fixed an issue where the battery icon in the taskbar might unexpectedly show its own backplate when hovering over the icon in the system tray (instead of combined with wi-fi and volume).
[Internet]
Made some underlying improvements to help address an issue which could lead to not having internet after resuming from disconnected standby. Please don’t hesitate to file feedback under Network and Internet in the Feedback Hub if you continue experiencing issues.
[File Explorer]
Fixed an issue where if you opened the Recycle bin and had “Empty recycle bin” visible in the command bar, it might stay showing after you navigated away.
[Settings]
Fixed an issue where Settings might crash when navigating to Privacy & Security > Camera, Location, or Microphone.
[Display and Graphics]
Fixed a recent issue where certain games might show a message saying “Unsupported graphics card detected”, although a supported graphics card was being used.
[Task Manager]
If you’re using a Die or CAMM memory form factor, Task Manager will now show that in Performance under Memory > Form Factor, instead of a blank.
[.NET Framework and Visual Studio]
The issue causing Insiders with ARM64 PCs to potentially observe crashes with Visual Studio or applications that depend on .NET Framework should be resolved if you have installed the latest .NET Framework update.
Known issues
[Xbox full screen experience for PC]
[NEW] The virtual keyboard is not shown for controller users on devices without a touch screen. Please use the physical keyboard as a workaround for now.
[NEW] Some apps may behave unexpectedly when using FSE, particularly those that expect to be locked to a given size or launch additional windows.
[Taskbar & System Tray]
We’re investigating an issue which is causing the Start menu to not open for some Insiders on click, although it will open if you press the Windows key. It’s believed this issue may also potentially impact the notification center (which you can open with WIN + N).
We’re investigating an issue where for some Insiders apps aren’t showing in the system tray when they should be.
[File Explorer]
The scrollbar and footer are missing, showing a white block instead, when text is scaled in the dark mode version of the copy dialog.
[NEW] We’re investigating an issue where File Explorer has started showing a white flash when navigating between pages after the previous flight.
[Bluetooth]
[NEW] We’re investigating an issue causing Bluetooth device battery level to not show for some Insiders.
Reminders for Windows Insiders in the Dev & Beta Channels
Many features are rolled out using Controlled Feature Rollout technology, starting with a subset of Insiders and ramping up over time as we monitor feedback to see how they land before pushing them out to everyone in this channel.
For Windows Insiders who want to be the first to get features gradually rolled out to you, you can turn ON the toggle to get the latest updates as they are available via Settings > Windows Update*. Over time, we will increase the rollouts of features to everyone with the toggle turned on. Should you keep this toggle off, new features will gradually be rolled out to your PC over time once they are ready.
Features and experiences included in these builds may never get released as we try out different concepts and get feedback. Features may change over time, be removed, or replaced and never get released beyond Windows Insiders. Some of these features and experiences could show up in future Windows releases when they’re ready.
Some features in active development we preview with Windows Insiders may not be fully localized and localization will happen over time as features are finalized. As you see issues with localization in your language, please report those issues to us via Feedback Hub.
Check out Flight Hub for a complete look at what build is in which Insider channel.
Thanks,
Windows Insider Program Team
The Facts About the Military Disobeying Illegal Orders
Portside
portside.org
2025-11-22 02:32:20
WHEN SIX MEMBERS OF CONGRESS released a short video on Tuesday emphatically reminding military personnel¹ that they must not obey illegal orders, the message ricocheted through the political world and the media like a rifle shot. Reactions split along predictable lines. Some saw the video as a necessary civic reminder in a volatile moment. Others attacked it as inappropriate political rhetoric directed at the armed forces. Still others lied about what was said, or mocked the message as condescending. As the controversy escalated, the lawmakers who appeared in the video began receiving death threats, while the president himself suggested—astonishingly—that their message constituted “sedition” and that they should be imprisoned or executed.
I want to address a fundamental point revealed by the video and the debate surrounding it: Most Americans do not understand what is in the oaths sworn by our service members. Confusion about that, combined with an understandable desire to keep the military a nonpartisan institution, fuels both the alarm that motivated the video’s creation and the backlash against the video. A clearer understanding on this subject will help reveal the aspects of our constitutional structure that protect the nation from unlawful uses of the military.
Here’s the truth, learned on the first day of service by every enlisted soldier, sailor, airman, Marine, guardian, and coast guardsman, and learned but sometimes not recognized by the young officers who first take the oath: There is not one military oath. There are two. And the differences between them explain exactly who is responsible for refusing illegal orders, why the system was designed that way, and what it means for this moment.
One reason the debate keeps going sideways is that the public keeps talking about “the military” as if it were a single, undifferentiated mass of people with identical obligations. It isn’t. The Constitution and Congress deliberately created two different oaths—one for enlisted personnel, and one for officers. That structure is not bureaucratic trivia; it is grounded in the bedrock of American civil–military relations. Ignoring it leads to the misleading assumption that everyone in uniform bears equal responsibility when confronted with an unlawful command.
They don’t. And that distinction matters.
Enlisted members swear to support and defend the Constitution, and to “obey the orders of the President of the United States and the orders of the officers appointed over me, according to regulations and the Uniform Code of Military Justice.” And the UCMJ makes crystal clear that the service member’s obligation is to obey “lawful” orders, and that no enlisted member is permitted to carry out an unlawful order. But the enlisted oath is also intentionally anchored in obedience to the chain of command. The accountability lies one level up.
Which brings us to the officer oath—shorter in words, heavier in weight. Officers swear to “support and defend” the Constitution; to “bear true faith and allegiance” to it; and to “well and faithfully discharge the duties” of their office. They also affirm that they “take this obligation freely, without any mental reservation or purpose of evasion.” What they do not swear to do is equally important: Officers make no promise to obey the president and the officers above them.
That omission is not an oversight. Officers give orders, evaluate legality, and act as the constitutional circuit breakers the Founders intended. They are expected—by law, by professional ethic, and by centuries of tradition—to exercise independent judgment when presented with a questionable directive. Officers are duty-bound to refuse an unlawful order. It is not optional. It is not situational. It is their job.
When the members of Congress in their video urge what seems to be the entire military not to follow illegal orders, they may unintentionally blur the very lines that keep the system functioning. Enlisted personnel obey lawful orders; officers ensure the orders that reach them are lawful. The real constitutional failsafe is not a general broadcast to every rank. It is the officer corps, obligated by oath to the Constitution alone.
This matters in a moment when Americans are hearing loud claims about using the military to solve political disputes, intervene in elections, or take actions beyond statutory authority. People are right to worry. But they should also understand the guardrails already in place. The military has been here before—it has already, at times in our history, faced unlawful pressure, political manipulation, or attempts to turn the armed forces into a tool of personal power.
Also worth remembering: No one in the American military swears allegiance to any individual. The oaths are not pledges of loyalty to a party, a personality, or a political movement. Loyalty is pledged to the Constitution—and officers further take that obligation “without mental reservation,” knowing full well it may someday require them to stand with courage between unlawful authority and the people they serve.
So while pundits and politicians continue fighting over the optics of the lawmakers’ video, the core reality remains: The safeguards are already built into the structure. The oaths already distribute responsibility. The law already forbids what some fear. And the officer corps already knows that they bear the constitutional duty to ensure that unlawful orders never reach the young men and women who follow them, and who, in effect, they also serve.
This is not a moment for panic. It is a moment for clarity.
If Americans understood the difference between the two oaths—one grounded in obedience, the other grounded in constitutional discernment—they would see that the republic’s defenses against unlawful orders are not theoretical. They exist. They function. They don’t depend on the whims of political actors on either side of the aisle, but on the integrity of those who swear to uphold them.
1. The video is directed not only at military service members but also at members of the intelligence community—but in this article, I’m focusing exclusively on the former.
Lt. Gen. Mark Hertling (Ret.) (@MarkHertling) was commander of U.S. Army Europe from 2011 to 2012. He also commanded 1st Armored Division in Germany and Multinational Division-North during the surge in Iraq from 2007 to 2009.
The death of tech idealism and rise of the homeless in Northern California
Fuckers.
I couldn’t get the word out of my head, because he wouldn’t stop saying it. I was sitting in the tiled courtyard of the Mediterranean-style home of an old acquaintance, a venture capitalist and serial tech entrepreneur, who lived a few blocks from Zuckerberg in Palo Alto. Next to us was a massive stone slab over which water dribbled into a reflecting pool. A Buddha sat on the stone, contemplating the flow. Above us sprawled the canopy of a century-old olive tree, which had been raining its fruit onto the courtyard.
It would have been an idyllic scene, were it not for the presence of my acquaintance, who kept smacking and berating his dog, a puffy, pure-white Alaskan-looking thing, who wouldn’t stop eating the olives.
In my previous career, I was a landscape designer, and this person was my client. I’d lived in Santa Cruz then, a hippie-surfer town about an hour away on the other side of the mountains that separate the Valley from the ocean. I was not alone in commuting over those mountains—many of Santa Cruz’s hippies and surfers make the trek to stick their straws where the wealth is. I went to college at the University of California in Santa Cruz—home of the fighting Banana Slugs!—and spent the entirety of my twenties there.
When thirty approached, I began to think of things like owning a home, which even an hour from the Valley’s gravitational center was out of reach with my income at the time. So a few months after the Great Recession hit, I moved back to Georgia, where I’d grown up. I bought a house on seven acres for $90,000.
I’d been away from California for twelve years. Much had changed. The real estate costs I’d fled had tripled; 2008 prices now seem quaintly affordable. I don’t remember ever seeing a tent on the streets of Santa Cruz back then. It was known as a place with a lot of panhandlers and drug users, but not so many that they made their dwellings in places obvious to the casual observer. When I drove over after arriving in Cupertino, however, a camp lined the main road into town; hundreds of unhoused residents inhabited another area along the river.
My client had also changed. I remembered him as a charming, progressive guy, but he’d grown older, crankier, and more libertarian in the decade since I last saw him. Apparently he’d become fond of calling people fuckers, and when I broached the topic of homelessness, he erupted in a lava flow. Employees of NGOs who concoct idealistic plans to address the housing crisis? Fuckers. Activists who valiantly defend the less fortunate among us? Fuckers. He couldn’t stand to go to San Francisco anymore because of the hordes sleeping and shitting right there on the sidewalk, in front of businesses run by people who actually contribute to the economy.
“If we can figure out how to get a package from China to your doorstep in two days, we can figure this out,” he said. Whether it’s houses made of shipping containers or building artificial islands in the Bay to house the homeless, he assured me that “innovators” like himself could come up with a solution—if only the incompetent, naïve, and corrupt fuckers in the public sector would get out of the way.
In fact, he would personally love to dig his entrepreneurial hands into the issue. But the poverty space was dominated by inefficient nonprofits and he wouldn’t waste his time consorting with them—the profit motive is what drives efficiency, after all, and efficiency paves the way to viable solutions. Nonprofits are in the business of self-congratulation, not getting things done, he said. His evidence: They hadn’t fixed the problem yet. “It’s like a car or your phone,” he said. “Either it works or it doesn’t.”
The last time I’d seen my client he was plotting his first trip to Burning Man. He’d shown me some of his paintings and we’d chatted about organic farming and his time in a kibbutz. Though he worked sixteen hours a day (he claimed to sleep no more than a few hours a night), he nevertheless found time to feed his omnivorous intellect, which snacked on cybernetics and chewed on transcendentalism after dinner. He was the archetypal boomer tech entrepreneur, kissed by the antiestablishment, but in the business of re-establishing the establishment in his own image.
The Valley overlaps geographically with the hippie homeland of San Francisco, Berkeley, and their environs, and there’s long been cross-pollination, if not orgiastic copulation, between the two spheres. As a barefoot college dropout in the early seventies, Steve Jobs’s interests included Ram Dass, Hare Krishnas, and fruitarianism; his connection to apples stemmed from a stint at the All One Farm, a commune where he worked in a Gravenstein orchard.
The commune fell apart as residents realized they were being conned by the spiritual leader, a close friend of Jobs, into providing free labor for his apple cider business. The apple cider guru later became a billionaire mining magnate notorious for labor and environmental abuses. Jobs, however, sought to bring his spiritual values with him in founding a company to disseminate what he considered the ultimate tool of enlightenment—the personal computer—to the masses.
This trajectory is such a prominent feature among the Valley’s founding fathers that it has spawned a minor field of academic study. “To pursue the development of individualized, interactive computing technology was to pursue the New Communalist dream of social change,” writes Fred Turner, a Stanford historian, in his book From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism.
Turner’s book focuses on Stewart Brand, a Merry Prankster turned evangelist of enlightened capitalism, who once roamed from commune to commune selling back-to-the-land supplies out of his 1963 Dodge truck. The Whole Earth Truck Store, as he called it, morphed into the Whole Earth Catalog magazine, which begat the Whole Earth Software Catalog, which begat Wired magazine.
Brand, writes Turner, “brokered a long-running encounter between San Francisco flower power and the emerging technological hub of Silicon Valley,” in which “counterculturalists and technologists alike joined together to reimagine computers as tools for personal liberation, the building of virtual and decidedly alternative communities, and the exploration of bold new social frontiers.”
One can imagine a young Steve Jobs digging the communalism of today’s Bay Area camps, whose countercultural idealism shares many threads with that of the Valley’s early hippie-nerds—ironic given the bulldozing of camps in the shadows of contemporary tech campuses and their tightly conformist corporate cultures. The commonalities don’t stretch very far: A rather thick thread in the hippie-techie braid is individualism, a whole lot of which hid behind the Me generation’s “New Communalist” movement. The marriage of these Bay Area cultures is alive and well, but today has more of a New Age–Burning Man vibe.
Brand, now in his eighties, is an ardent Burner. He’s gone from libertarian to the even-harder-to-define “post-libertarian” and is building a five-hundred-foot clock inside a Texas mountain owned by Jeff Bezos, which is designed to tick once per year for ten thousand years. A cuckoo will emerge at the end of each millennium.
Brand shares a certain intellectual hubris with my acquaintance, who asked if I would like to read an eighty-two-page white paper he wrote regarding the human brain and why Darwin’s survival-of-the-fittest theory applies not just to biological evolution, but to the optimization of social structure. I stared at the Buddha and tried to think of a way to change the subject.
Unhoused communities don’t randomly burble up from the sidewalk. They are born of the housed communities around them, which in the Valley’s case is a particularly curious one. The Valley’s valley is wide and smoggy enough that some days you can’t see the mountain ranges that form it. The scorching Diablo Range, where cattle roam oceans of desiccated grass, lies to the east.
On the other side, the lusher Santa Cruz Mountains, a place of dank redwood forests, organic farming communes, and uppity vineyards, form a verdant curtain between the Valley and the ocean. Here the tech elite build their villas and take to the fog-kissed ravines for athleisure-clad recreation.
The valley started to become the Valley in 1943 when IBM opened a factory to manufacture punch cards in San José. At the time, orchards carpeted much of the region. When the trees blossomed in early spring, the honey-scented flowers intoxicated bees and lovers alike. During the late summer harvest, the air was a punch bowl. Maps referred to it then as the Santa Clara Valley, but romantic minds of the day christened it the Valley of Heart’s Delight, after a 1927 poem by a local writer with Wordsworthian sensibilities, named Clara Louise Lawrence.
No brush can paint the picture
No pen describe the sight
That one can find in April
In “The Valley of Heart’s Delight.”
Cupertino did not exist back then. The Glendenning family farmed the land where the Apple Spaceship now sits. Prunes were their specialty. The farm was on Pruneridge Avenue—the valley was considered the prune capital of the world, supplying 30 percent of the global market—which passed through their orchards near the present location of Steve Jobs Theater, a smaller circular building next to the mothership.
But Apple bought the road from the city—$23,814,257 for a half mile—so you can’t drive through there anymore. Between the steel bars of the fence you can still catch a glimpse of the Glendennings’ old fruit-drying barn, which has been renovated and is now storage for landscaping equipment. The new orchards and the old barn help soften the Pentagon vibe with a little farm-to-table ambience.
The Valley’s valley is not a stereotypical one because it lacks a mighty river meandering between the mountain ranges. Instead, there is the southern leg of San Francisco Bay, a shallow, brackish estuary fed by measly creeks that barely run in the dry season. It’s a bird and crustacean paradise, but the lack of fresh water and ocean currents make for a putrid aroma that’s further intensified by the landfills, wastewater treatment plants, and commercial salt-harvesting operations clustered around the waterfront.
The smell is so intense that it’s spawned a South Bay Odor Stakeholders Group “dedicated to identifying and resolving odor issues.” One finds Reddit threads with titles like South Bay Fucking Smell: “south bay people, you know what i mean. where the fuck is this rancid ass smell coming from. it’s pretty common for it to smell like shit here, i’ve smelled it my whole life, but i just want to know where it’s comin from. My guess is the shitty salty shallow south bay water spewing out smelly air, but idk.”
“That, or else it’s your mom,” replied another user, who referred to the odor as “the ass cloud.” The poetics of the region have shifted since Lawrence’s day.
The military forefathers of the Valley must have been horrified at the hippies their children became, though by the eighties the arc of flower power had bent toward the common ground of Wall Street.
The ass cloud did not dissuade the early tech settlers, who followed the money flowing from the patron saint of the Valley’s venture capitalists: DARPA, the Department of Defense’s secretive research agency, which commissioned much of the basic science from which the IT revolution sprang. While farms like the Glendennings’ continued to pump out prunes on the arable land between the Bay and the mountains, the military-industrial complex set up along the mud flats. The Navy built an eight-acre dirigible hangar in Mountain View, still one of the largest freestanding structures ever erected. The CIA quietly rooted itself among the reeds and spread rhizomatically. During the Cold War, aerospace companies blossomed between DOD installations. Lockheed was the Valley’s biggest employer when Kent and Steve Jobs were growing up in the suburbs that slowly consumed the orchards.
The American tech industry was born in the Bay Area because its defense industry parents came here to ward off the Japanese—during World War II, this was the gateway to the “Pacific Theater,” as the Asian front of the war was euphemistically referred to. This first generation of the Valley “seeded companies that repurposed technologies built for war to everyday life,” writes Margaret O’Mara, a tech industry historian. “Today’s tech giants all contain some defense-industry DNA.”
Jeff Bezos’s grandfather, for instance, was a high-ranking official at the US Atomic Energy Commission and at ARPA, the precursor to DARPA. Jerry Wozniak, father of Apple’s other Steve—Steve “The Woz” Wozniak, the company cofounder and part of the gang tweaking on computers in the Jobs’ garage—was an engineer at Lockheed. The military forefathers of the Valley must have been horrified at the hippies their children became, though by the eighties the arc of flower power had bent toward the common ground of Wall Street.
The Navy’s dirigible hangar still looms over the Bay, but Google now rents the property from the government for the parking of private jets. The company dominates the neighborhood to the west of the hangar, a spread of dull office buildings revolving around the central Googleplex, with its employee swimming pools, volleyball courts, and eighteen cafeterias. There are no houses or apartments in the neighborhood, though there are residential districts—of a sort. These are surprisingly affordable, which means that some of the folks who smear avocado on the techies’ toast and stock the kombucha taps have the good fortune to live nearby.
It’s easy to miss their humble abodes, however. An out-of-towner who gets off at the Google exit to take a leak could be forgiven for thinking they’d stumbled across some sort of RV convention. But those aren’t recreational vehicles lining the backstreets of the Google-burbs—those are homes on wheels.
RVs parked on the side of the road are the new desirable real estate, and like the old industrial cores of American cities that have evolved from roughshod hangouts for unemployed artists to haute loft developments for upwardly mobile professionals, their inhabitants aren’t immune to class stratification. Most of the rigs are older, ramshackle models, but here and there shiny coaches broadcast the relative wealth of their inhabitants—techies who could afford an apartment but don’t want to waste their money on rent.
They roll out of bed, hop on a company bike, and are at the office in three minutes, in the meantime saving up for a big house in the outer, outer, outer burbs, where you can still get a McMansion for under $3 million. Some already have the McMansion and use their RV as a workweek crash pad.
The more-rickety RVs belong to the avocado smearers and lawn mower operators. Crisanto Avenue, five minutes from the Googleplex, is the Latin America of Mountain View’s homes-on-wheels community. It’s like a museum of 1980s RVs—Toyota Escapers, Winnebago Braves, Chevy Lindys, Fleetwood Jamborees—most of them emanating Spanish banter, many with blue tarps over the roof, and some leaking unmentionable juices from onboard septic tanks. Apartments line one side of Crisanto, but the side with the RVs fronts onto train tracks. A shaded strip of earth along the tracks, maybe twelve feet wide, serves as a communal front yard, complete with potted plants and patio furniture, for pets and kids to play.
An older Peruvian woman named Ida invited me into her RV, where a half-eaten pineapple sat serenely on an otherwise empty table. She used to live in a two-bedroom apartment with sixteen other people—“Fue imposible!” (“It was impossible!”) she said—until she learned of the RV scene. She couldn’t afford to purchase one, but there’s a growing industry in the Valley for old-school RV rentals; residents on Crisanto told me they pay between $500 and $1,000 per month, depending on the RV, plus a $75 fee to pump sewage.
Since Ida arrived in the US in 2003, she has worked mainly as a nanny, often for around six dollars per hour. Work was sparse during the pandemic, so she accepted whatever pay she was offered. One family gave her twenty dollars for taking care of their two children for twelve hours. She’d held America in high esteem before living here. “La vida en los Estados Unidos es terrible” (“Life in the United States is terrible”), she said.
My visual experience of the Valley began to shift. My eyes had once flashed at views of the water, clever billboards (“Hey Facebook, our planet doesn’t like your climate posts”), and homes with the billowy, buff-colored grasses and scrawny wildflowers that signify the aesthetics of people who can afford expensive landscaping designed to look feral.
But the more time I spent with the Valley’s have-nots, the more my focus became trained on the visual language of the income inequality ecosystem: the camouflage patterns of desiccated vegetation pocked with blue tarps and plastic bags flapping in the branches; the hulking silhouettes of recreational vehicles parked in non-recreational environments; the bodies splayed out on the sidewalk.
Here and there, artistic aberrations emerge in the motif. I met a thirty-year-old man named Ariginal who lived with his family and dogs in a 1983 Chevy camper van that he’d hand-painted marine blue with school-bus-yellow trim. A blue neon light mounted to the undercarriage illuminated the street in a cool glow as they motored around in their Scooby-Doo mobile at night. Ariginal went to school to be a fireman but became an Uber driver. He’s also a rapper, fashion model, and inventor—there are a few things he’s hoping to patent, and he wanted to show me the drawings, but his daughter was napping in the van. “I have a lot of dreams,” he said.
Within twelve minutes of meeting Ariginal I learned that he recently “discovered a couple of lumps . . . uh, in my testicles.” They were cancerous. He’d just had the tumors removed and would soon be undergoing radiation to make sure they don’t come back. “Just another obstacle,” he sighed.
“Vanlife has become the norm here,” a veteran gig worker named Chase, who’s driven for Uber, Instacart, and Amazon Flex, told me. He was not talking about hipsters who move into a home on wheels because it sounds like a fun and Instagrammable lifestyle. He was referring to his colleagues who have no other choice.
I found there is significant overlap between the gig work community and the unhoused community. Some full-time gig workers end up living in their vehicles; some camp residents become part-time gig workers because it’s a way to make a buck that doesn’t require a home address or the scrutiny of a human boss, only a working phone. Rudy, for instance, began delivering for food apps—using Lime scooters he rents by the hour—after he became homeless.
The mobile communities stretch along the Bay up to Meta’s headquarters at 1 Hacker Way, on the outskirts of Palo Alto. East Palo Alto, the historically Black community surrounding the Meta campus, is one of the least gentrified, most impoverished places in the Valley—a 2017 study found that 42 percent of students in the local school district were homeless. A sixty-acre nature preserve across the street from the Meta campus is home to endangered species such as the salt marsh harvest mouse and Ridgway’s rail, a chicken-sized bird with a long, pointy beak and special glands that allow it to drink salt water.
A local variety of Homo sapiens lives there too, one endangered in a different sort of way. The authorities want them out because their “presence is compromising the health of the estuary,” according to Palo Alto Weekly. Their poop and trash are considered pollution under the federal Clean Water Act—grounds for eviction. “The welfare of wildlife and the health of Baylands ecosystems is pitted against the very real human needs of people,” the paper opined. Their camps keep getting razed, but like the marshland reeds, they sprout right back.
Different regions of the Valley lend themselves to different expressions of homelessness. In the suburban areas, there are lots of vehicle dwellers because it’s (relatively) easy to find a place to park. In densely developed San Francisco, homelessness is largely centered along sidewalks, pushing the lives of unhoused individuals up close and personal, but keeping camps small and dispersed. Golden Gate Park is a would-be homeless haven, but local authorities have managed to keep camps from proliferating.
In San José, however, local green spaces have been commandeered by the unhoused, with camps that have developed into villages, especially along the Guadalupe River, where the Crash Zone was located, and its tributaries.
San José’s waterways are hideously un-scenic—views are of rubble and trash; the vegetation appears to be in a fight for its life. And although the Guadalupe is called a river, it’s more like a creek that bulges into a torrent on the rare occasion of a multiday rainstorm. Its abused hydrology forms the armature of a shadow city—a new form of urban infrastructure that is unplanned and unwelcome—within a city that does not acknowledge its shadow.
At a community meeting in 2017 to solicit input on a homeless shelter the city wanted to build, a horde of angry residents expressed their discontent over efforts to accommodate the unhoused: “Build a wall,” they chanted.
The Guadalupe River and its camps pierce downtown San José, meandering past the Zoom campus and Adobe’s four-tower headquarters to the Children’s Discovery Museum on Woz Way (as in Steve Wozniak, the Apple cofounder), where a community of Latinx campers have chiseled terraced gardens into the riverbank to grow food. People call it the Shelves.
Downstream from the Crash Zone, the Guadalupe flows north along the edge of the airport on its way to the Bay, passing dozens of camps and acres of office parks populated by household names—PayPal, Google, Hewlett-Packard, Roku, Cisco, Intel. One of the biggest camps in this part of town emerged on vacant land owned by Apple, just north of the airport, where the company plans to build yet another campus. Located on Component Drive, it was known as Component Camp.
As word spread that displaced Crash Zone residents might soon inundate the place, the company announced they would spend “several million dollars” on a high-end sweep—evicted residents were given vouchers for nine months in a hotel—just weeks before the Crash Zone sweep began.
The Guadalupe’s tributaries tell further stories. Los Gatos Creek leads to the headquarters of eBay and Netflix, as well as Downtown West, a neighborhood being built from the ground up by Google. The city approved a development proposal that included office space for twenty-five thousand folks, but only four thousand units of housing—the sort of job-supply-to-housing-demand ratio that helps put a $3 million sticker on a bungalow.
A report by an economic research firm found that the project’s job-to-housing imbalance would translate to a $765-per-month increase for San José renters over a decade—to offset upward pressure on rents, they said more than four times as many units would be needed, a third of them at subsidized rates. The San José Chamber of Commerce declared the report “fundamentally flawed” and dismissed findings like the $765 rent increase. “I don’t think that the stark reality presented in the report is realistic,” a representative told San José Spotlight, “nor something we can expect to happen in the next 8 to 10 years.” In the previous decade, however, median rent in the area had gone up by exactly $763.
Coyote Creek is home to a camp called the Jungle—I never deduced whether this was a reference to hobo jungles or the dense vegetation, or both—on which national media descended in 2014 as it was being swept. It was similar to the Crash Zone in scale, and headlines touting it as the “nation’s largest homeless camp” became a mantra. It was a feast of poverty porn.
“Living in The Jungle means learning to live in fear,” said The Atlantic, quoting a resident who displayed “a machete that he carries up his sleeve at night.” For the UK’s Daily Mail, it was an opportunity to get Dickensian. “Dilapidated, muddy and squalid though it was, it was all they had to call home—a shantytown in the heart of Silicon Valley,” the reporter lamented. “In the last month, one resident tried to strangle another with a cord of wire and another was nearly beaten to death with a hammer.” The Jungle, they said, was “a crime syndicate ruled by gangs, where police do not enter.”
The New York Times was more restrained, striking a valiant tone, with a photo of the mayor himself helping a resident to “wheel his belongings up a muddy embankment.” The local CBS station reported that displaced residents immediately formed a “New Jungle” a half mile away. Before long, they recolonized the original Jungle.
The Crash Zone had grown to the size of the original Jungle, if not larger, by the time I first visited. The fields outside the airport fence technically belong to a city park, but when driving by they appeared like a cross between a junkyard and a refugee camp, in which explosives were periodically detonated. RVs in various states of disrepair butted up to tarp compounds that overflowed with debris, from bottles and cans to appliances and vehicles.
This visual buffet was a mix of freshly prepared, putrefied, and charred—one resident had a tidily landscaped yard with a pair of pink plastic flamingos, while other homesteads were a mix of rotting garbage, blackened grass, melted tarps, and burnt-out vehicles. My eyes flowed over suitcases and furniture and school buses to unexpected items, such as a piano, several boats, and a limousine. The first residents I met cautioned me against wandering into certain sections, where I might be mistaken for an undercover cop—the confetti labyrinth of structures left many a hidden nook where bad things might happen.
One guy had cobbled together a two-story wood cabin; around it were huge piles of wood chips and logs. I wanted to knock but was told the owner wielded an axe and did not like visitors. They called him the Woodchucker.
It was midsummer when I first visited the Crash Zone, height of the dry season in California. Large portions of the landscape were barren earth, and the powder-dry soil coated the skin of its residents. Here and there, people sifted through the loose dirt with their hands; occasionally someone held up a small trinket they’d discovered, inspecting it in the harsh light of the sun.
A woman walked by with a mixing bowl and a toy unicorn, stooping to extract a scrap of blue tarp from the earth, before she continued on. A minimally dressed man pulled clothes from a dumpster and tried them on, not necessarily in the way they were designed to be worn, and quickly took them off again. He spoke incomprehensibly to himself as he did this, tsking and looking annoyed, as though he just couldn’t find the outfit he was looking for. He was thin, barefoot; I wondered how he stayed alive.
I saw a man thrashing his body in anger as he crossed the street. A dreadlocked white guy in a hoodie wandered by with a baseball bat in one hand and a small, sweet-looking dog in the other. The wind picked up; a dust devil spun. A car crawled out of one of the fields with a flat tire, its rim scraping the asphalt as it entered the street. Every five minutes or so, a plane roared overhead like an angry avian dinosaur.
The Crash Zone spilled from its gills, extending beyond the end-of-the-runway fields and into the surrounding cityscape. One end spilled into a series of baseball diamonds, the dugout now housing, the clubhouse a drug den, the bathrooms given over to the sex trade—“People pull through in $100,000 cars trolling for people to blow them in the car,” a resident told me.
On an adjacent street, a man on crutches lived on a bench next to what a small sign said was once a demonstration garden for drought-tolerant plants, now devolved into a garden of rocks and bare earth. The street proceeded across a large bridge where a solitary tent occupied one of the little nooks designed as a place for pedestrians to linger and look out over the Guadalupe. The bike and pedestrian paths that wove through the riparian corridor below provided access to a neighborhood of tents and shacks, a leafy suburb of the Crash Zone known as the Enchanted Forest. Its inhabitants pulled their cars over the curb, using the paths as driveways. Joggers and cyclists and parents pushing strollers paraded through nonetheless.
The tents flowed along the river to the San José Heritage Rose Garden, where thousands of rare and forgotten varieties have been arranged in concentric rings of paths and beds. Some of those varieties disappeared following the Crash Zone’s pandemic-era population explosion, when incidents of arson and “rose rustling”—horticultural theft—were reported by garden volunteers on the site’s Facebook page, the insinuation of who was responsible clearly legible between the lines of the posts. The tents trickled past the roses and collected in clumps along the edges of the Historic Orchard, whose few remaining trees appeared murdered and maimed, where they bumped into the six-foot fence that protects the children at the Rotary PlayGarden, a gift to the city from the local Rotary Club. A gate attendant keeps the you-know-who from wandering into the $6 million playscape.
As San José’s camps have spread, the Guadalupe River parklands have become the frontlines of a local culture war. “The city’s homeless problem is becoming a PR problem,” a CBS anchor said in a 2019 segment. “From their airplane windows, arriving visitors are greeted by a shantytown of tents, blue tarps, and RVs,” they said, describing the trail network that parallels the river as “an eleven-mile stretch of human misery and suffering.”
The campers swim in the animosity that drenches the airwaves and cyberspaces around them. I wondered how my new friends on the street felt when they heard these things. How much does the angst directed toward them undermine their prospects of recovery? I found myself reading the Tripadvisor reviews of Guadalupe River Park, which encompasses much of the trail system. They felt like a beating.
“The once beautiful walking, running and biking trail has been taken over by homeless, garbage, rats,” wrote Robin G. “Really bad,” wrote hob0525. “It was basically . . . a tour of homeless camps. We walked for over an hour thinking it would get better. . . . It did not.”
In a 2019 survey by the Guadalupe River Park Conservancy, 77 percent of respondents did not feel “welcome and safe” in the park. “It’s something that I’ve never seen before, honestly,” Jason Su, the conservancy’s director, told
The Mercury News
. “This is just on a scale that’s just so, so large.”
It’s as though the city feels it’s been invaded by the unhoused. But turn San José inside out and it’s a giant homeless camp being invaded by a city.
The appalled magistrate wrote, “If this procedure did not take place, then the court is in uncharted legal territory in that the indictment returned in open court was not the same charging document presented to and deliberated upon by the grand jury.” The whole case may be thrown out.
And Trump’s crony, Federal Housing Finance Agency Director Bill Pulte, who has combed mortgage records to find ways to prosecute prominent Democrats, is now on the defensive himself, as lawyers challenge his flagrant conflicts of interest. Trump’s minions even managed to bungle a slam dunk by admitting the sheer racism in the Texas redistricting plan, which was then overruled by a three-judge panel, with the most indignant comments coming from a Trump appointee.
Appellate Judge Jeffrey Brown wrote that “it’s challenging to unpack the DOJ Letter because it contains so many factual, legal, and typographical errors. Indeed, even attorneys employed by the Texas Attorney General—who professes to be a political ally of the Trump Administration—describe the DOJ Letter as ‘legally[] unsound,’ ‘baseless,’ ‘erroneous,’ ‘ham-fisted,’ and ‘a mess.’”
Meanwhile, the Republican victory in forcing Democrats to reopen the government with no concessions on health care is looking more and more like a defeat because it keeps the issue of unaffordable health insurance front and center. In the most recent polls, approval of Trump is underwater by 17 points. Even among Republicans, his approval is 68 percent, sharply down from 92 percent in March. As we head into an election year, with Democrats flipping both Houses a distinct possibility, more and more Republican legislators are looking to save their own skins—which gives them more reason to distance themselves from Trump, and the process keeps intensifying.
So are we out of the woods yet? No, we are not.
The more Trump is on the defensive, the more hysterical he becomes. The latest example is his call to execute Democrats who pointed out that the professional military has an obligation to defy illegal commands. Even the White House press office had to walk that back. But Trump is continuing to use carrots and sticks with the corporate parents of media organizations to destroy a free and critical press.
And as an increasingly desperate Trump tries to keep changing the subject and the headlines, watch out for even more reckless foreign-policy adventures.
However, something fundamental has shifted. Trump is not a dead duck yet, but he is increasingly a lame one. And the more he proves impotent to punish defiant Republicans, the more they will keep acting to distance themselves and to weaken Trump.
We may yet redeem our democracy. That seemed a long shot just a few months ago. Not a bad cause for Thanksgiving.
Robert Kuttner
is co-founder and co-editor of The American Prospect, and professor at Brandeis University’s Heller School. His latest book is
. Follow Bob at his site,
robertkuttner.com
, and on Twitter.
rkuttner@prospect.org
Used with permission. The American Prospect, Prospect.org, 2024. All rights reserved. Read the original article at Prospect.org.
Youth Vote Surged in NYC. Was It a Paradigm Shift?
Portside
portside.org
2025-11-22 02:07:05
As the 2026 election cycle gets underway, advisers to a new crop of candidates are drawing lessons from Zohran Mamdani on how to activate young voters and potentially change the electorate.
A Gothamist analysis of turnout data shows voters between ages 30 and 39 made up the largest share of a record-setting 2 million voters. Turnout among 18- to 29-year-old voters nearly tripled compared to four years ago, the largest increase of any age group.
Mamdani has said he persuaded those groups by not patronizing them. But now advisers are wrestling with whether Mamdani’s success getting young people to the polls represents a paradigm shift or a moment that can’t be recreated.
“I don’t think this is a Mamdani-specific moment,” said Alyssa Cass, a Democratic strategist and partner at Slingshot Strategies.
"I think what you're seeing in New York City is the emergence of what I like to call the 'Precarity Coalition,'" Cass said. “If you are under 40 here in New York City, it's not working for you anymore.”
Cass said young voters are facing daily challenges that make the city sometimes feel unworkable, including the cost of living and childcare. She is currently advising Alex Bores, a state assemblymember representing Manhattan’s East Side who is one of the nearly dozen candidates vying for U.S. Rep. Jerry Nadler’s congressional seat.
“Increasingly, the idea of having a good life is out of reach, and that is for people who are poor, working class, middle class and even upper middle class,” she said.
Other experts say drawing conclusions about local, state legislative or congressional district battles from a citywide race is risky.
“I do think that people need to take a beat because a district race is very different than a mayoral,” said Lupe Todd-Medina, a Democratic political consultant at Effective Media Strategies and the former spokesperson for City Council Speaker Adrienne Adams’ mayoral campaign.
Still, the response to Mamdani’s affordability message and the spike in turnout among younger voters, Cass says, is reconstituting the city’s electorate and should change how candidates campaign going forward.
Traditionally in New York City, candidates often begin their outreach by appealing to insiders, activating Democratic political clubs and interest groups. “I don’t think that does that job anymore,” Cass said.
She said candidates need to meet voters where they are, in person or online, with a consistent message that taps into voters' gut feelings about life in the city.
But Todd-Medina noted that the candidates and their ability to appeal to voters vary at the local level.
She considered Councilmember Chi Ossé's potential
Democratic primary bid
against Democratic House Minority Leader Hakeem Jeffries, who represents the 8th Congressional District in Brooklyn. Ossé's Council district overlaps with a portion of Jeffries' congressional district.
“Ossé represents a sliver of the 8th Congressional District. So maybe he plays better in brownstone Brooklyn,” said Todd-Medina, citing a left-leaning portion of the district. “But how is he going to play in Seagate? How does he play in Coney Island?” she added, referring to more conservative neighborhoods.
Todd-Medina is currently advising Marlon Rice, who is challenging state Sen. Jabari Brisport, a democratic socialist.
She credited Mamdani for running a hopeful campaign that expanded the electorate. Mamdani, she said, effectively contrasted with what she described as Andrew Cuomo’s “spastic reign of terror” that painted a grim picture of New York City that did not align with most New Yorkers’ day-to-day lives.
But she was reluctant to say the shifts in the electorate were a sign of permanent changes.
“Mamdani might just be the outlier case,” said Todd-Medina. “We don’t know yet because we’re about to start the next electoral cycle.”
Brigid Bergin
is an award-winning senior reporter on the People and Power desk. She is fiercely committed to telling stories that help people engage and support democracy. In 2022, she hosted a live, ten-week call-in politics show ahead of the midterm elections called The People's Guide to Power. Brigid's reporting in 2017 included some of the first coverage of a political newcomer now known as AOC. In 2016, she broke the news of a voter purge in Brooklyn just days before New York’s presidential primary, triggering city, state and federal investigations. Brigid also guest hosts The Brian Lehrer Show and All Of It. She graduated from the University at Albany and the CUNY Newmark School of Journalism. Got a tip? Email
bbergin@wnyc.org
or Signal 917-723-4719.
Joe Hong
is the investigative data reporter for WNYC and Gothamist. He previously covered K-12 education for several newsrooms across California. His reporting has led to local reforms in the juvenile justice system as well as a state law requiring universal screening for dyslexia. Got a tip? Email
jhong@wnyc.org
The setup script automatically detects your distribution and uses the appropriate package manager. LXD installation path (snap vs native package) is also auto-detected.
Overview
This directory contains LXD-based containerization for Infinibay using
lxd-compose
.
# 1. Clone repository and navigate to lxd directory
cd infinibay/lxd
# 2. Run setup (installs LXD, lxd-compose, detects package manager)
sudo ./setup.sh
# 3. IMPORTANT: Activate lxd group (REQUIRED!)
newgrp lxd
# This activates the group in your current session
# You need to do this after setup.sh adds you to the lxd group

# 4. Configure environment variables
# Option A: Edit the auto-generated .env (RECOMMENDED)
nano .env
# setup.sh already created .env with secure auto-generated passwords
# IMPORTANT: Change ADMIN_PASSWORD from auto-generated to your own!

# Option B: If you prefer to start from .env.example before setup.sh
# cp .env.example .env && nano .env
# Then run setup.sh, which will detect and preserve your .env

# 5. Deploy and start Infinibay (smart default - does everything!)
./run.sh
# This one command:
# - Creates containers if they don't exist
# - Starts containers if they're stopped
# - Provisions if not already done (installs PostgreSQL, Redis, Node.js, Rust, libvirt)
# - Shows access URLs when ready
# Takes 5-10 minutes on first run

# 6. Access Infinibay
# URLs will be displayed after ./run.sh completes
# Frontend: http://<frontend-ip>:3000
# Backend API: http://<backend-ip>:4000
What happens:
1. setup.sh - Installs LXD and lxd-compose, detects your distro and package manager, auto-detects the LXD path, and generates .env with secure passwords
2. newgrp lxd - ⚠️ REQUIRED - Activates lxd group permissions
3. .env configuration - ⚠️ IMPORTANT - Review and change ADMIN_PASSWORD (auto-generated passwords should be personalized!)
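Once ./run.sh finishes, a quick sanity check might look like this (a sketch; lxc is the standard LXD client, and <frontend-ip> is whatever address run.sh reports):

lxc list                              # containers created by run.sh should show as RUNNING
curl -I http://<frontend-ip>:3000     # frontend should respond once provisioning completes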
On November 4, Rula Daood became the first woman to apply to lead the Higher Arab Monitoring Committee, an umbrella organization that advocates for Arab communities within Israel at the national level.
For Daood, the national co-director of the grassroots movement Standing Together, this would have been a step from activism into politics. "I wanted to bring a new face and a new agenda to the committee," she tells Haaretz. A change, she says, that is badly needed.
The chairman of the committee sets the political agenda for Arab Israeli citizens. It is the only elected position on
the committee,
which comprises Arab Knesset members, local council heads and representatives of different streams in the Arab community. Until this weekend, the post was held by former Hadash MK
Mohammed Barakeh
for a decade. The election, which took place on Saturday, was
won by Jamal Zahalka
, former chairman of the Balad party.
"The Higher Committee is supposed to mobilize and organize the Arab minority in Israel," Daood, 40, said in an interview with Haaretz before Saturday's election. "It is a powerful place that could bring Palestinian rights forward, but it hasn't been using its power for the past 20 years."
The committee was founded when protests among Palestinians against then-Prime Minister Menachem Begin's right-wing government grew stronger – ultimately leading to the first intifada in the eighties. The Higher Arab Monitoring Committee was meant to function as a unifying independent political organization that would coordinate the political activities of various Israeli Arab nonprofits, advocacy groups and other organizations, leading to change at a national level for Arab and Palestinian citizens of Israel.
"227 Arab Israeli citizens were killed this year, and the feeling is that nobody really cares," Daood says, referring to the
record-high rates of homicides and gun violence
devastating Arab communities across Israel. Her candidacy was an expression of a wider change in Arab communities and the political establishment, which are struggling to become more diverse and inclusive of women.
Daood recalls that she made her decision to put in her candidacy during a protest in her hometown Kafr Yasif. "Many people were around me," she says. "Many young people and many women." In contrast, on the stage were only men. They were the only ones who would speak. "That was the moment I decided we need a change."
Some of the thousands of Palestinian citizens of Israel that protested rising murder rates and the government's inaction, in Arabeh, northern Israel, earlier this month.
During the past months, anger has grown within the Arab community over the committee's lack of action against crime and murder rates, which continue to rise, especially among young people. At
a demonstration
on November 1 against the violence in the northern town of Arabeh, protesters reportedly tried to prevent the committee's officials, including then-chairman Barakeh, from speaking.
The old guard
Since Daood was not a member of the committee, she needed to be endorsed by at least six municipalities. "I went to the mayors of different municipalities, and I got more than six supporters," Daood explains. But the day before the mayors had to finalize their choice of candidates, Daood got a call from a committee member telling her that one mayor had withdrawn his support for her.
At this point, Daood had six supporters left, still enough for her candidacy. But in the evening, her phone rang again – another mayor changed his mind. Later that night, she received a text message informing her that another mayor had withdrawn his endorsing signature, making it certain that Daood would not have enough votes.
To Daood it was clear that the mayors had been pressured to withdraw their support. "The committee is very much controlled by old politics, people who have been holding their positions for decades." It seems they were afraid of the change, she speculates. "They feel threatened by a new face and a new agenda, speaking a different language, that can really stir things up. And they didn't want me to be there."
To her, the problem of leadership is not limited to the Arab political establishment in Israel. "There is a lack of leadership on the left that speaks about the change we need," Daood explains. "About the day after [the war], about real partnership between Israelis and Palestinians, about how we can change our reality." To her, it is clear that leaders must focus on what can be done instead of what cannot.
"Many people don't believe in what they can do," she says. "They don't believe they have the power." To her, the old political establishment represented by the committee cannot bring about change. "They don't have the most progressive kind of ideas. They don't believe in organizing, in working with communities and with people. This is what I wanted to change with my candidacy."
Standing Together co-directors Rula Daood, left, and Alon-Lee Green, holding signs that read, "We refuse to occupy Gaza."
'I was able to make some noise'
When the committee opened the registration for the elections and announced the names of the election committee, all eight members were men. While this was business as usual in the years before, this time, the committee faced backlash. Women's rights organizations and feminist activists spoke out against it.
"When I put my candidacy first, it made a lot of fuss. Nobody really expected it, and it moved many things," Daood says. "I was able to make some noise." Four more candidates entered the race, among them another woman: former MK and feminist activist Neveen Abu Rahmoun.
For Daood, who together with her
Standing Together
co-director Alon-Lee Green became international faces of
joint Palestinian and Israeli resistance
to the war in Gaza, leading the committee would have been the next step in her career. With a national election on the horizon, questions about her future have mounted. Today, she is still unsure about where she will go from here.
"I honestly don't know," she says. But there are things she is sure of: "I want to change politics. I want to make a change for my own people, but I also want the whole Israeli society to understand that I am a legitimate leader for both Jews and Palestinians in this land." She would love if this is possible to do through an organization like Standing Together, she says. "If your question is about the Knesset – Maybe. Probably. Really, I don't know."
The Higher Arab Monitoring Committee did not respond to Haaretz's request for comment.
Haaretz
is an independent daily newspaper with a broadly liberal outlook both on domestic issues and on international affairs. It has a journalistic staff of some 330 reporters, writers and editors. The paper is perhaps best known for its Op-ed page, where its senior columnists - among them some of Israel's leading commentators and analysts - reflect on current events. Haaretz plays an important role in the shaping of public opinion and is read with care in government and decision-making circles. Get a
digital subscription
to Haaretz.
This week I've been working on a script and slides for a YouTube video about
Inko
. After a week of doing that I needed a bit of a
break. Last week
I wrote a bit about
FreeBSD
, and specifically about wanting to
try it out on my yet-to-be-delivered Framework 16 laptop. This got me thinking:
why don't I try FreeBSD on my desktop first, then see if it's still worth trying
out on a laptop? After all, my desktop has a spare SSD that I don't use much, so
I could move its data elsewhere temporarily and install FreeBSD on this SSD,
leaving my main system untouched.
What follows is a sort of transcript (with some editing) of doing just that, a
process that took a total of some three hours. Because I wrote most of this
while actually performing the work, it may feel a little chaotic at times, but I
hope it gives a bit of insight into the process.
The hardware
The desktop in question uses an AMD Ryzen 5600X CPU, with an Intel Arc A380 GPU.
The SSD FreeBSD will be installed on is a Samsung Evo 860 with 256 GiB of
storage. The WiFi card is an Intel AX200, which is supported by FreeBSD.
Preparing a USB drive
I downloaded the latest FreeBSD 15 snapshot in ISO format, then wrote it to a
random USB drive using
dd
. I initially tried to use GNOME Disks to restore the
ISO to the USB drive, but for some reason this resulted in the drive not being bootable. I vaguely recall having had similar issues in the past with
Linux distributions, so this isn't FreeBSD specific.
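Something along these lines should do the trick, where the image name is just an example and /dev/sdX is whatever device the USB drive shows up as (double-check it, since dd will happily overwrite the wrong disk):

sudo dd if=FreeBSD-15.0-SNAPSHOT-amd64-dvd1.iso of=/dev/sdX bs=4M status=progress conv=fsync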
Installing FreeBSD
Booting up worked and the installer detected the AX200, but then it seemingly got
stuck for a good minute or so, after which it moved on. I'm not sure why, but it
didn't seem to matter much as the rest of the installer worked fine.
Using the installer I went with a ZFS on root setup and enabled disk encryption.
In particular, I enabled the "local_unbound" service to cache DNS lookups. Knowing its DNSSEC validation wouldn't work with my router (which runs a local DNS server/cache that doesn't support DNSSEC), I was a bit surprised to see the installer not consider this at all, i.e. there's no "use local_unbound but without DNSSEC" option.
First boot
After installing FreeBSD I rebooted into the target SSD. The first thing I
noticed was a bunch of error messages from the ntp daemon (which I enabled)
saying it couldn't resolve a bunch of DNS names. This is because my router
doesn't support DNSSEC. I fixed this by creating
/var/unbound/conf.d/disable-dnssec.conf
with the following contents:
server:
module-config: "iterator"
Because FreeBSD ships vi by default (not vim, actual vi) this was a little
annoying, as vi works a little differently from vim. After saving the file I
restarted the
local_unbound
service, and all was well again.
FreeBSD offers both
doas
and
sudo
. I figured I'd give
doas
a try, mainly
because I wanted to give it a try. This requires you to copy
/usr/local/etc/doas.conf.sample
to
/usr/local/etc/doas.conf
and edit it
accordingly. I just used it with the
permit :wheel
rule, which is enough for
most people. I then found out that
doas
doesn't support password persistence
outside of OpenBSD, meaning you have to enter your password again for every
doas
command. While there appears to be a fork available called
opendoas
that does support it, it in turn doesn't appear to be actively maintained
(judging by the
GitHub repository
). I
ended up going back to
sudo
instead.
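For reference, the configuration amounted to a single rule in /usr/local/etc/doas.conf (a sketch of the rule mentioned above):

permit :wheel

On OpenBSD you could add the persist keyword to avoid retyping your password, but as noted, that isn't supported here.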
I then installed
Fish
and made it the default shell as
follows:
chsh -s /usr/local/bin/fish yorickpeterse
I then logged out and back in again, and Fish works as expected.
FreeBSD shows a message-of-the-day when you log in, which I don't want as it's
rather long. To disable this, I emptied
/etc/motd.template
then ran
sudo
service motd restart
to re-generate the message, then disabled the service
using
sudo sysrc update_motd="NO"
. We also need to remove
/var/run/motd
. I
think in hindsight editing the template wasn't required as I could've just
disabled the service then remove
/var/run/motd
file. Ah well, lessons learned
I guess.
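For anyone trying this, that shorter path would presumably be just the following (a sketch based on the steps above, not something I went back and re-tested):

sudo sysrc update_motd="NO"
sudo rm /var/run/motd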
Fighting the GPU
Now it's time to make sure the GPU is set up. The reason my desktop is using an
Intel GPU is because it used to use an aging AMD RX 550, but after dealing with
AMD driver bugs for a few months I got fed up and decided to replace it. I
picked the A380 because it was the cheapest GPU with support for hardware
decoding that I could find.
To do this we have to install
drm-kmod
, which pulls in about 130 driver
related packages (yikes). Next we need to make sure the driver is loaded upon
startup by adding it to
/etc/rc.conf
like so:
sudo sysrc kld_list+=i915kms
This doesn't affect the existing system though, so we also have to load the module using kldload (FreeBSD's equivalent of modprobe), so I ran this:
sudo kldload i915kms
This crashed my system. Brilliant. Worse, because I added the module to
/etc/rc.conf
it keeps crashing when you reboot. The error shown when FreeBSD
tries to load the module says to run
pkg install gpu-firmware-kmod
to install
the necessary firmware, but because the module is now loaded at startup we first
have to figure out how to get back into a working system. I found
a forum
post
that offered some suggestions, but they didn't work.
I ended up booting into the installation USB and mounted the host drive
following the instructions from
this article
,
using
/dev/ada0p3
as the drive name. I then opened
/mnt/etc/rc.conf
and
commented out the line that loads the i915kms driver, then rebooted. We have a
working system again!
Now to do what that error said: install the missing package. Well, except it
wasn't missing because when I installed it
pkg
said it was already installed.
This is fine, I guess?
A bit of searching reveals
this
issue
, first reported in
August 2024. There's a
pull request that should fix
this
, but I'm not going to
compile a custom kernel just to get a working system. It also seems the PR has
just been sitting around for a while, which doesn't bode well.
Most people would give up at this point, but I have one final trick up my
sleeve: when I replaced my AMD RX 550 I didn't throw it away in case I ever
needed it again, so I can temporarily use it instead of the A380. It shouldn't
be necessary, but at this point I want to try and get a working desktop
environment just so I can say I at least tried.
So after trying a few different screwdrivers to unscrew the GPU bracket screws
and some cursing later, the A380 is replaced with the RX 550. I booted up the
system and edited
/etc/rc.conf
to load the
amdgpu
driver instead of
i915kms
. I then decided to reboot the system for good measure, though this
isn't strictly necessary. I am now presented with a system that works, except
the console font is tiny for some reason.
This
article
suggests using
vidcontrol -f terminus-b32
which did the trick.
Installing a desktop environment
Where were we again? Oh yes, I was going to install a desktop environment.
I'd use GNOME, but GNOME recently announced they were going to depend
on systemd more and more, and the GNOME version provided by FreeBSD is a bit old
at this point (GNOME 47). KDE seems better supported, so I'll give that a try.
The FreeBSD installer is supposed to come with an option to install KDE for you,
though the ISO I used for FreeBSD 15 didn't have that option. Either way, from
what I found it uses X11 and I want to use Wayland, so I wouldn't have used it
anyway.
This article
lists
some steps to enable KDE. The socket options it suggests applying seem a bit
suspicious, as in, they look like the kind of setting people just copy-paste
without thinking, so we'll skip those unless they turn out to be required after
all.
Let's install the necessary packages:
sudo pkg install seatd kde sddm
This ends up installing close to 700 packages. This took a while since
pkg
downloads packages one at a time. Support for concurrent downloads was first
requested
back in 2017
, but isn't
implemented as of November 2025. This wouldn't be a huge problem if it wasn't
for the FreeBSD mirrors only supporting speeds in the range of 5-20 MiB/sec,
while my internet connection's maximum speed is 100 MiB/sec.
Once the installation finished, I realized I hadn't explicitly switched to the latest branch of the FreeBSD ports, so I edited
/usr/local/etc/pkg/repos/FreeBSD.conf
to be as follows:
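Roughly the following; this is a sketch assuming the default repository layout rather than the exact file, but switching to the latest branch generally comes down to overriding the URL:

FreeBSD: {
  url: "pkg+https://pkg.FreeBSD.org/${ABI}/latest",
  enabled: yes
}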
I then ran
sudo pkg update
followed by
sudo pkg upgrade
and there was
nothing to update, so I guess we're all good.
Configuring the desktop environment
Now to enable the service we need for KDE. The linked article doesn't mention
enabling SDDM but it seems to me like that would be required for it to start, so
we'll give that a go as well:
sudo sysrc dbus_enable="YES"
sudo sysrc seatd_enable="YES"
sudo sysrc sddm_enable="YES"
sudo service dbus start
sudo service seatd start
sudo service sddm start
This results in SDDM starting and showing the login screen. The default session
is set to Wayland already, and logging in works fine. Neat!
Moving the mouse around I'm noticing some weird artifacts on the desktop
wallpaper:
Looking at the display settings I noticed scaling is set to 170%, while for this
display it should be 200%. Changing this removed the artifacts, so I guess this
is some sort of KDE bug?
Another thing I'm noticing when moving the cursor around or when window
animations play is that it isn't as smooth as GNOME, as if the display's refresh
rate is lower than it should be, though it's in fact set to 60 Hz. I vaguely
recall having this issue on GNOME when I was still using the AMD RX 550, so
maybe it's a GPU issue. Or maybe it's those socket options I decided not to
enable initially, so let's give that a try, just in case, though it's a bit of a
stretch. First I ran the following to apply the settings to the existing system:
The resulting output suggests this is already the default value, so I guess
that's not the reason, and the settings might not be necessary at all.
Now let's get rid of some software I don't need such as Konqueror and Kate:
sudo pkg remove konqueror kate
Initially this gave me a bit of a heart attack as it tells you that the
kde
package will also be removed, but it turns out to be fine: it doesn't actually uninstall your entire KDE setup, presumably because kde is just a metapackage.
Audio
Audio works fine with no configuration necessary. Neat.
Network
While the network itself works, there's no GUI application of any kind to manage
it, as NetworkManager isn't available on FreeBSD. I found
networkmanager-shim
which is required by
kf6-networkmanager-qt
.
I installed the latter in case I'd also need that, logged out and back in again
and...nothing. Searching a bit more led to me finding
networkmgr
which is available as a
FreeBSD package, so let's try that:
sudo pkg install networkmgr
Logging out and in again and there's now a network icon in the Plasma panel.
Unfortunately, it seems to be an X11/Xwayland application and looks horrible:
Apologies for the poor quality! I hadn't set up a screenshot application of some
sort and didn't want to also deal with that, so I just took a photo with my
phone.
It also doesn't appear to show anything related to WiFi.
ifconfig
doesn't list
anything WiFi related either. I guess I have to set up
wpa_supplicant
or something along those lines,
but I'd prefer it if my desktop environment could manage it for me.
Bluetooth doesn't appear to work either, probably for the same reasons because
it's handled by the same AX200 chip. I found that
wifimgr
can be used to manage
WiFi, but starting it results in it complaining I have to first configure a
device in
/etc/rc.conf
. Ugh.
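For completeness, the manual route would presumably be something like the following, assuming the AX200 shows up as iwlwifi0 and that /etc/wpa_supplicant.conf already contains the network credentials (a sketch, not something I ended up doing):

sudo sysrc kld_list+=if_iwlwifi
sudo sysrc wlans_iwlwifi0="wlan0"
sudo sysrc ifconfig_wlan0="WPA SYNCDHCP"

That's exactly the kind of fiddling I was hoping the desktop environment would handle for me.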
It's at this point that not only was it getting late, I also had enough. I can
see the appeal of FreeBSD, and it's impressive how much up-to-date software
there is in the ports repository, but there's a reason I moved away from Arch
Linux several years ago: I just don't have the patience nor interest to
endlessly fiddle with configuration files just to get a basic system up and
running.
Conclusion
If you have enough patience and time I think you can set up a decent KDE desktop
environment using FreeBSD, assuming your hardware is properly supported. That is
also the biggest challenge though: FreeBSD developers have limited resources and
port the Linux GPU drivers to FreeBSD, instead of using bespoke drivers. This
means GPU support will always lag behind Linux, by anywhere from a few weeks to months or even years.
Based on the challenges I ran into while trying to install FreeBSD on my
desktop, I'm not going to try and install it on a laptop any time soon. I just
don't have the patience or interest. If I did, I'd go back to using Arch Linux.
There are also some choices that FreeBSD makes that I don't agree with or don't
want to deal with, such as the archaic way of writing service files or setting
up log rotation, or the fact that the output of the
-h
option for the FreeBSD
userspace utilities (e.g.
ls
) is as good as useless.
On the bright side, if the FreeBSD foundation continues focusing on improving
the laptop and desktop experience of FreeBSD then all this could be different
1-2 years from now, so maybe I'll try again in the future.
(This is a stripped-down version of the newsletter sent out to Codeberg e. V. members as an email. The version emailed to our members contains some additional details concerning the association's Annual Assembly. If you are interested in helping us shape Codeberg, consider
contributing
or
participating in our non-profit association
! Strength comes in numbers; we could always use your support!)
Dear Codeberg e.V. members and supporters!
It's time to share some news about what has happened in the past months around Codeberg.
Highlights
Codeberg e. V. held its Annual Assembly, and elected a new Presidium, which in turn appointed a new Executive Board.
We reworked our Privacy Policy and clarified our policies on repository licensing in our Terms of Use.
Intra-association meetings are now held regularly.
We now have a second part-time employee.
Hardware updates!
A brief status update
Codeberg has experienced immense growth over the past few months. Here's a rundown of our numbers:
Codeberg e.V. now has more than 1000 members (1208, to be exact).
Of these, 786 members have active voting rights, 415 are supporting members and the remaining 7 are honorary members.
Out of the 1208 members, 61 of them are corporations (which can only have a supporting membership without voting rights).
We now have more than 300k repositories and recently crossed 200k registered user accounts.
Some more well-established projects now have a presence on Codeberg.
As of September 2025, Anja joined us as a part-time employee to help us with administrative matters. We now have two part-time employees.
Annual Assembly & Elections
Once every year, the entire membership of Codeberg e. V. meets in what we call the Annual Assembly. It guarantees that the matters of the association are in the hands of its users. Codeberg e. V. has more than 1000 individuals backing it.
Once every two years, the Assembly elects up to eight members to the Presidium. The Presidium meets a few times every month to—as the name suggests—preside over matters involving Codeberg's direction. Such tasks may involve implementing proposals accepted by the Assembly, answering emails, responding to media inquiries and organizing teams of volunteers. The following people were elected for the 2025-2027 term, in alphabetical order:
Andreas Reindl (@crapStone)
Andreas Shimokawa (@ashimokawa)
Daniele Gobbetti (@daniele)
Daphne Preston-Kendal (@dpk)
Moritz Marquardt (@momar)
Otto Richter (@fnetX)
Panagiotis Vasilopoulos (@n0toose)
William Zijl (@gusted)
Additionally, the Presidium is responsible for appointing an Executive Board, which is responsible and liable for Codeberg's day-to-day operations. With that being said, the Presidium has appointed the following people for the 2025-2026 term:
Otto Richter (@fnetX)
Moritz Marquardt (@momar)
William Zijl (@gusted)
Both bodies were previously exclusively German, whereas the Presidium now comprises members residing in three European countries and the Executive Board will have a member from the Netherlands. This also marks the first time that our Executive Board has three members, the maximum number possible.
We strive to be more internationally oriented, as well as adjust to the immense growth we've been experiencing. As such, we have been making efforts in documenting our structures better to help with onboarding.
Codeberg is a non-profit organization with the explicit mission of advancing the development of free, libre and open-source projects. It was founded for that purpose, and its continued operation is made possible by all the volunteers, the donors and the members that share said mission. We offer a free service, and what we
want
to ask from our users is simple: To give back something to the community by letting others reuse, adapt and extend what they create.
In principle, we asked people to make their works "free and open-source". But what is "free and open-source", exactly?
Earlier, our Terms of Use avoided answering that question. Instead, it stipulated that people had to use a license approved by the
Free Software Foundation
and/or the
Open Source Initiative
. Therefore, blogs and personal websites licensed under the copyleft
Creative Commons Attribution-ShareAlike 4.0 International
were technically against the rules, despite said license being more appropriate than, say, the software-oriented
3-Clause BSD License
! We found that our Terms confused or scared away users and use cases that we view as aligned with the Codeberg goals and values: Book authors and artists, conference presenters that wanted to upload their slides, people wanting to use/host public domain content or people that wanted to use copyleft assets for their libre game. Those are cases that we previously quietly tolerated (and were already happy to host despite them not being technically allowed).
We made an effort to provide such projects clarity and make them officially allowed as well. After long discussions with Codeberg e. V. members as well as members of our wider community, we came up with the following proposal, which was formally agreed upon by Codeberg e. V.'s General Assembly and will be implemented soon after publication:
https://codeberg.org/Codeberg/org/pulls/1219
An additional benefit of the changes made is that they reinforce our own governance and independence, as we no longer rely on third-party organizations and their governance structures as much as we used to. Trying to define what is "free and open-source" was a rather lengthy endeavour. However, we find that this was necessary—especially given the context of our recent growth—and we truly hope that the greater community will be rather pleased with the result.
Regular Codeberg Meetings
As previously discussed among members, we have now established weekly meetings of the Codeberg community. The meetings are open to everyone, but we expect that most people are Codeberg e. V. members.
After a quick round of introductions and some small talk, we usually dedicate our time to a topic announced in advance, often running open-ended until very late. During the first meetings, topics have been mostly technical, but we aim to address non-technical areas in the future, such as community building, documentation or public relations.
If you have a question or a matter that you'd like to discuss with other Codeberg contributors, you can always join and present the matter during the first minutes of the session. When no urgent and spontaneous topics need to be discussed, we move to the scheduled topic of the session.
The meetings are scheduled every Tuesday evening, 18.00 Berlin time or 17.00 UTC time. We meet at
https://lecture.senfcall.de/ott-zml-1vs-qcc
. The meeting is open to everyone interested in joining, but we mostly target Codeberg e. V. members.
New employee for administrative tasks
On September 15, 2025, Anja Hänel joined our virtual office team. Codeberg is growing rapidly, and so are the administrative chores required to keep the project running. We are grateful for the expertise she brings into our team, which helped us clarify, simplify and accelerate internal processes around member management and accounting.
Her help relieves Andreas Shimokawa (our current part-time system administrator and developer) from some of the office tasks. Together, they improved the member management and accounting tools. For example, some of you who have outstanding membership fees have likely received friendly reminders recently.
Infrastructure status
Since the last update, there have been some changes to the infrastructure powering Codeberg. We are running on three servers: one Gigabyte and two Dell servers (an R730 and an R740). We have bought and added two new SATA SSDs (8TB capacity) to the Dell servers to address growing storage demands. This finally balanced the storage capacity of the three Ceph nodes, which had been unequal for historical reasons and resulted in inefficient storage usage on the first node.
One of our servers, Achtermann (R730), was running with only 1 CPU (as the second slot had been damaged). While the server was reliable, some workarounds were necessary (such as cramming some hardware into limited PCIe slots, as most of them are bound to the defunct slot). Recently, we received two donated Dell R740s and deployed one of them to replace the damaged R730. Waxenstein (our newly deployed R740) performs much faster than Achtermann (1 CPU, 160GB RAM) and has 384GB of RAM (more than twice that of Achtermann!). We repurposed the RAM of the now-decommissioned Achtermann and used it to double the RAM of Aburayama, which is the name of the R730 that is still in service. This boost let us allocate far more resources to our various containers, resulting in performance improvements for Codeberg. If you are interested in more detailed and up-to-date information about our hardware, we maintain an overview in the following repository:
https://codeberg.org/Codeberg-Infrastructure/meta/src/branch/main/hardware
Hardware donations allow us to get access to high quality hardware. Although the machines are older, their performance (and even energy efficiency) is often not much worse than that of new hardware we could afford. In the interest of saving embodied carbon emissions from hardware manufacturing, we believe that used hardware is the more sustainable path.
We are considering repurposing older generations of hardware as offsite CI/CD runners. While this hardware is less energy efficient than newer machines, we hope to use direct solar power to operate the CI/CD nodes only during sunshine hours. Using efficient machines for 24/7 operation and less efficient machines for about 4 to 8 hours a day is likely a reasonable approach.
Some users indicated interest in CI runners using the ARM CPU architecture. Currently, Apple's M1 and M2 series have outstanding energy efficiency. We are investigating how broken Apple laptops could be repurposed into CI runners. After all, automated CI usage doesn't depend on the same factors that human beings depend on when using a computer (functioning screen, speakers, keyboard, battery, etc.). If you own a broken M1/M2 device or know someone who does, and believe that it is not worth a conventional repair, we would be happy to receive your hardware donation and give it a try! (Sidenote: There are also non-profit organizations that may be willing to accept your
working
devices and repurpose them for those in need. For Germany, we'd recommend checking out
Computertruhe e. V.
.)
On a software level, we are currently struggling with recurring performance degradation of our Galera database cluster (MariaDB). While it usually holds up nicely, we see a sudden drop in performance every few days. It can usually be "fixed" with a simple restart of Forgejo to clear the backlog of queries. We are still investigating potential issues with our database cluster. In the meantime, work is ongoing to
optimize database queries that were observed to trigger the behaviour
in Forgejo.
Community Spotlight
To end this letter, we'll share a few (of the many) cool repositories on Codeberg that caught our eye:
git-pages
is an alternative to
Codeberg Pages
; it uses a different, more efficient approach to serving static pages. Codeberg is planning to gradually migrate to it.
Readeck
(
Codeberg Link
) allows you to preserve web content locally to read it later and find it back easily.
µcad
(
Codeberg Link
) is a description language for creating geometric objects that can be used for 3D printing or CNC milling. Although it is in "an early stage of development" at time of writing, we are definitely intrigued!
ly
is a display manager for Linux and BSD (i.e. it provides a "login screen"). It comes with a wave-y animation that may make your laptop look cooler.
GoToSocial
(
Codeberg Link
) is a self-hostable social networking service that uses the ActivityPub protocol.
Walter Chapman and Thiago Pinheiro discuss a molecular model of their research.
Researchers at Rice University and Oak Ridge National Laboratory have unveiled a physics-based model of magnetic resonance relaxation that bridges molecular-scale dynamics with macroscopic magnetic resonance imaging (MRI) signals, promising new insight into how contrast agents interact with water molecules. This advancement paves the way for sharper medical imaging and safer diagnostics using MRI. The study was published in
The Journal of Chemical Physics
Nov. 12.
This new approach, known as the NMR eigenmodes framework, solves the full physical equations that can be used to interpret how water molecules relax around metal-based imaging agents, a task that previous models approximated. These findings could alter the development and application of new contrast agents in both medicine and materials science.
“By better modeling the physics of nuclear magnetic resonance relaxation in liquids, we gain a tool that doesn’t just predict but also explains the phenomenon,” said
Walter Chapman
, the William W. Akers Professor of Chemical and Biomolecular Engineering. “That is crucial when lives and technologies depend on accurate scientific understanding.”
Modeling a molecular process
During an MRI scan, contrast agents are often used to enhance image clarity. These agents, typically based on a gadolinium ion encased in an organic shell, alter the way nearby water molecules respond to magnetic fields. This alteration, known as relaxation, enhances the contrast in tissue images.
Until now, most scientific models describing this process have relied on significant simplifications, treating complex molecular motions with limited fidelity to the real system’s behavior, which constrained their predictive accuracy. The researchers sought to improve upon this.
“Our previous work used detailed simulations to study how water molecules interact with these contrast agents,” said Dilipkumar Asthagiri, a senior computational biomedical scientist in the National Center for Computational Sciences at Oak Ridge National Laboratory. “In the present paper, we developed a comprehensive theory to interpret those previous molecular dynamics simulations and experimental findings. The theory, however, is general and can be used to understand NMR relaxation in liquids broadly.”
A framework rooted in physics
To create a more effective approach, the research team turned to the Fokker-Planck equation, a master equation that describes how the probabilities of molecular positions and velocities evolve. By solving this equation, they were able to capture the full spectrum of molecular motion and relaxation.
Essentially, the eigenmodes framework identifies the “natural modes” of how water molecules respond to contrast agents at the microscopic level. These modes provide a more detailed and accurate picture to interpret the relaxation process than earlier models could offer.
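In schematic form, the Fokker-Planck equation describes how a probability density $P$ evolves under drift and diffusion; in one dimension it reads

\[ \frac{\partial P(x,t)}{\partial t} = -\frac{\partial}{\partial x}\left[\mu(x)\,P(x,t)\right] + \frac{\partial^2}{\partial x^2}\left[D(x)\,P(x,t)\right] \]

with drift coefficient $\mu$ and diffusion coefficient $D$. The eigenmodes of the framework are, loosely speaking, the eigenfunctions of the operator on the right-hand side, each relaxing at its own characteristic rate. This is a simplified illustration of the general idea, not the specific equations solved in the study.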
“The concept is similar to how a musical chord consists of many notes,” said
Thiago Pinheiro
, the study’s first author, a Rice doctoral graduate in chemical and biomolecular engineering and postdoctoral researcher in the chemical sciences division at Oak Ridge National Laboratory. “Previous models only captured one or two notes, while ours picks up the full harmony.”
This framework not only reproduces experimental measurements at clinical MRI frequencies with high precision, but it also demonstrates that widely used simplified models are specific instances of a broader, more comprehensive theory.
Broader impacts beyond imaging
The implications of this research extend beyond medical imaging. Because NMR relaxation is used to study the behavior of liquids in various scientific and industrial applications, the framework could also be applied in areas such as battery design and subsurface fluid flow.
“This kind of detailed modeling can help us understand how fluids behave in confined spaces like porous rocks or biological cells,” said Philip Singer, assistant research professor in chemical and biomolecular engineering at Rice. “It’s a fundamental tool that links molecular-scale dynamics to observable effects.”
The research team has made its code available as open source to encourage broader adoption and further development. Co-authors of the study also include Betul Orcan-Ekmekci from Rice’s Department of Mathematics, who contributed significant insights into the mathematical modeling.
The Ken Kennedy Institute, Rice Creative Ventures Fund, Robert A. Welch Foundation (No. C-1241) and Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory (No. DE-AC05-00OR22725 with the U.S. Department of Energy) supported this study.
A bug caused by a door in a game you may have heard of called "Half Life 2"
For as well-loved as the
vi
command is, it's the
ed
command that's considered the standard Unix text editor. It was the very first text editor for Unix, and it's available on even the most modern Linux systems.
Unlike text editors you may be used to on Linux or another system,
ed
doesn't open a window or even a screen of its own. That's because it's a functional editor that you can control either interactively or with a script. If you're already familiar with
sed
, then you'll find
ed
easy to learn. If you're new to both,
ed
can give you a different perspective on how you can process and modify data on your system.
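For example, because ed reads commands from standard input, you can drive it entirely from a shell script. Here's a minimal sketch that performs the same kind of substitution shown later in this article, assuming a file named myconfig.txt whose second line contains widget=True:

ed -s myconfig.txt <<'EOF'
2s/True/False/
w
q
EOF

The -s option suppresses the byte counts ed normally prints, which keeps scripted runs quiet.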
Launch ed
Launching
ed
is easy; just enter the command at the prompt:
$ ed
When you first launch
ed
, you get no feedback or prompt. That's the expected behavior, so don't panic. Your system hasn't crashed,
ed
is just waiting for your instructions.
To get ed to be a little more visual, type the letter p (the print command) followed by the Return or Enter key:
$ ed
p
?
The question mark (?) is ed's terse way of reporting that something went wrong; in this case, there was nothing in the buffer to print. It isn't strictly a prompt, but the examples in this article treat it as one, as many ed users do. If you want an explicit prompt, the uppercase P command toggles one on (an asterisk, *, by default).
Use the ed buffer
While
ed
is active, it uses a place in memory to store data. This location is called a
buffer
. Such storage is significant because you're not editing a file directly. You're editing a copy of file data placed into the buffer. As long as you save the buffer when you're done,
ed
preserves any changes you make to the data.
If you exit
ed
without writing changes to a file on disk, it loses all changes because they only existed in the buffer. It's no different than closing any application without saving changes, but
ed
doesn't warn you, so keep this in mind.
Generate text with ed
Similar to the
vi
editor,
ed
starts in
command mode
. This means you can issue commands to the editor, as you did to display a prompt, but you can't write or edit text without issuing a command first.
You can append text to the current buffer using the
a
command followed by the
Return
or
Enter
key. Whatever text you type into the terminal now will be appended to the buffer. Stop
ed
from appending text to the buffer by typing a solitary dot (
.
) on its own line.
This example adds two lines (
[myconfig]
and
widget=True
) to the buffer:
?
a
[myconfig]
widget=True
.
After a terminating dot,
ed
returns to command mode.
Save the buffer to disk
Once you're happy with your text, you can write the buffer to a file using the
w
command followed by the destination file's name:
?
w myconfig.txt
23
As confirmation, it outputs the number of characters written to the file.
Read an ed file
You will probably use
ed
to edit existing config files more often than you use it to write new text files from scratch. To load a file into the buffer, enter
ed
followed by the name of the file you want to load:
$ ed myfile.txt
From within
ed
, you can open an existing file into the buffer using the
r
command:
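For example, assuming the 23-byte myconfig.txt file written earlier:

?
r myconfig.txt
23

As with w, ed confirms the read by printing a character count.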
To see all lines in the buffer, type
,p
and then press
Return
:
?
,p
[myconfig]
widget=True
To see just a specific line, type the line number:
?
1
[myconfig]
2
widget=True
Edit the buffer
To edit a file, first load it in the buffer:
$ ed myconfig.txt
,p
[myconfig]
widget=True
foo=bar
openssl=libssl
To change the word "True" to "False" in the first setting of this file, select the line you want to target (2) and then invoke the
search
function by entering
s
followed by the replacement term:
?
2
widget=True
s/True/False/
2
widget=False
To target another line, use a different line number and search terms:
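For instance, to change the openssl=libssl line into ssl=libgnutls (the substitution itself is just an example, following the same pattern as above):

?
4
openssl=libssl
s/openssl=libssl/ssl=libgnutls/
4
ssl=libgnutls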
View the edits you've made to the buffer using the
,p
command:
[myconfig]
widget=False
foo=bar
ssl=libgnutls
You haven't written the buffer back to the file yet, so the altered lines exist only in memory. To save your changes back into the file, use the
w
command:
w myconfig.txt
46
Clear the buffer
To start a new document or load one into a clean environment, you must clear out the buffer. The d (delete) command, given the whole-buffer address , (short for 1,$), removes every line, which you can verify using the print command (,p):
,d
,p
?
With nothing left to print, ed responds with its usual question mark.
Quit ed
There are two common ways to end an ed session: you can press Ctrl+D or you can type the q command. If the buffer contains unsaved changes, ed responds with a ? the first time; repeat the command to quit anyway and discard those changes (the uppercase Q command quits unconditionally). Either way, make sure you've written data you want to keep out to a file!
Get to know ed
If nothing else, learning
ed
is a powerful safeguard against getting left without a text editor when your system is in a state of recovery and you're left with only the most basic toolset. This happened to me once, and I was able to fix an errant configuration file only because I had just enough recollection of using
ed
in a Linux course I'd taken at a community center long ago.
It's true that
ed
might be the last resort, but it's nice to know what to do with the command when it's your one and only choice. And even if you don't anticipate needing
ed
(even in an emergency) it's a fun command to explore and gives you a good understanding of how tools like
vim
and
sed
came about. Use
info ed
to view the full manual to learn more.
CISA warns Oracle Identity Manager RCE flaw is being actively exploited
Bleeping Computer
www.bleepingcomputer.com
2025-11-21 23:50:27
The U.S. Cybersecurity & Infrastructure Security Agency (CISA) is warning government agencies to patch an Oracle Identity Manager tracked as CVE-2025-61757 that has been exploited in attacks, potentially as a zero-day. [...]...
The U.S. Cybersecurity & Infrastructure Security Agency (CISA) is warning government agencies to patch an Oracle Identity Manager tracked as CVE-2025-61757 that has been exploited in attacks, potentially as a zero-day.
CVE-2025-61757 is a pre-authentication RCE vulnerability in Oracle Identity Manager, discovered and disclosed by Searchlight Cyber analysts Adam Kues and Shubham Shah.
The flaw stems from an authentication bypass in Oracle Identity Manager's REST APIs, where a security filter can be tricked into treating protected endpoints as publicly accessible by appending parameters like
?WSDL
or
;.wadl
to URL paths.
Once unauthenticated access is gained, attackers can reach a Groovy script compilation endpoint, which does not normally execute the scripts it compiles. However, it can be abused to run malicious code at compile time through Groovy's annotation-processing features.
This chain of flaws enabled the researchers to achieve pre-authentication remote code execution on affected Oracle Identity Manager instances.
Yesterday, Searchlight Cyber released a technical report detailing the flaw and providing all the information required to exploit it.
"Given the complexity of some previous Oracle Access Manager vulnerabilities, this one is somewhat trivial and easily exploitable by threat actors,"
warned the researchers
.
CVE-2025-61757 exploited in attacks
Today, CISA added the Oracle CVE-2025-61757 vulnerability to its Known Exploited Vulnerabilities (KEV) catalog and gave Federal Civilian Executive Branch (FCEB) agencies until December 12 to patch the flaw, as mandated by Binding Operational Directive (BOD) 22-01.
"This type of vulnerability is a frequent attack vector for malicious cyber actors and poses significant risks to the federal enterprise," warned CISA.
While CISA has not shared details of how the flaw was exploited, Johannes Ullrich, the Dean of Research for SANS Technology Institute, warned yesterday that the flaw may have been exploited as a zero-day as early as August 30.
"This URL was accessed several times between August 30th and September 9th this year, well before Oracle patched the issue," explained Ullrich in an
ISC Handler Diary
.
"There are several different IP addresses scanning for it, but they all use the same user agent, which suggests that we may be dealing with a single attacker."
According to Ullrich, the threat actors issued HTTP POST requests to endpoints matching the exploit shared by Searchlight Cyber.
The researcher says the attempts came from three different IP addresses, 89.238.132[.]76, 185.245.82[.]81, 138.199.29[.]153, but all used the same browser user agent, which corresponds to Google Chrome 60 on Windows 10.
BleepingComputer contacted Oracle to ask whether they have detected the flaw exploited in attacks, and will update the story if we get a response.
This article documents my experience of learning Vulkan and writing a small game/engine with it. It took me around 3 months to do it without any previous knowledge of Vulkan (I had previous OpenGL experience and some experience with making game engines, though).
The engine wasn’t implemented as a general purpose engine, which is probably why it took me a few months (and not years) to achieve this. I started by making a small 3D game and separated reusable parts into the “engine” afterwards. I can recommend everyone to follow the same process to not get stuck in the weeds (see “Bike-shedding” section below for more advice).
Preface
I’m a professional programmer, but I’m self-taught in graphics programming. I started studying graphics programming around 1.5 years ago by learning OpenGL and writing a 3D engine in it.
The engine I wrote in Vulkan is mostly suited for smaller level-based games. I’ll explain things which worked for me, but they might not be the most efficient. My implementation would probably still be a good starting point for many people.
Hopefully, this article will help make some things about Vulkan clearer to you. But you also need to be patient. It took me months to implement what I have today, and I did it by cutting corners in many places. But if a self-taught programmer like me can build something with Vulkan, then so can you!
Learning graphics programming
This is a very high level overview of how I learned some graphics programming myself. If there’s interest, I might write another article with more resources and helpful guidelines.
If you haven’t done any graphics programming before, you should start with OpenGL. It’s much easier to learn it and not get overwhelmed by all the complexity that Vulkan has. A lot of your OpenGL and graphics programming knowledge will be useful when you start doing things with Vulkan later.
Ideally, you should at least get a textured model displayed on the screen with some simple Blinn-Phong lighting. I can also recommend doing some basic shadow mapping too, so that you learn how to render your scene from a different viewpoint and to a different render target, how to sample from depth textures and so on.
I can recommend using the following resources to learn OpenGL:
Sadly, most OpenGL resources don't teach the latest OpenGL 4.6 practices, which make writing OpenGL a lot more enjoyable. If you learn them, transitioning to Vulkan will be much easier (I only knew OpenGL 3.3 during my previous engine development, though, so it's not a necessity).
Here are some resources which teach you the latest OpenGL practices:
It’s also good to have some math knowledge, especially linear algebra: how to work with vectors, transformation matrices and quaternions. My favorite book about linear algebra/math is
3D Math Primer for Graphics and Game Development by F. Dunn and I. Parbery
. You don’t need to read it all in one go - use it as a reference if some math in the OpenGL resources above doesn’t make sense to you.
Bike-shedding
Ah, bike-shedding… Basically, it's a harmful pattern of overthinking and over-engineering even the simplest things. It's easy to fall into this trap when doing graphics programming (especially when doing Vulkan, since you need to make many choices when implementing an engine with it).
Always ask yourself "Do I really need this?" and "Will this thing ever become a bottleneck?".
Remember that you can always rewrite any part of your game/engine later.
Don’t implement something unless you need it
right now
. Don’t think “Well, a good engine needs X, right…?”.
Don’t try to make a general purpose game engine. It’s probably even better to not think about “the engine” at first and write a simple game.
Make a small game first - a Breakout clone, for example. Starting your engine development by doing a Minecraft clone with multiplayer support is probably not a good idea.
Be wary of people who tend to suggest complicated solutions to simple problems.
Don’t look too much at what other people do. I’ve seen many over-engineered engines on GitHub - sometimes they’re that complex for a good reason (and there are
years
of work behind them). But you probably don’t need most of that complexity, especially for simpler games.
Don’t try to make magical wrappers around Vulkan interfaces prematurely, especially while you’re still learning Vulkan.
Get it working first. Leave “TODO”/“FIXME” comments in some places. Then move on to the next thing. Try to fix “TODO”/“FIXME” places only when they really become problematic or bottleneck your performance. You’ll be surprised to see how many things won’t become a problem at all.
Some of this advice only applies when you’re working alone on a hobby project. Of course, it’s much harder to rewrite something from scratch when others start to depend on it and a “temp hack” becomes a fundamental part of the engine which is very hard to change without breaking many things.
Why Vulkan?
Ask yourself if you need to learn a graphics API at all. If your main goal is to make a game as soon as possible, then you might be better off using something like Godot or Unreal Engine.
However, there’s nothing wrong with reinventing the wheel or doing something from scratch. Especially if you do it just for fun, to get into graphics programming or to get an in-depth knowledge about how something works.
The situation with graphic APIs in 2024 is somewhat complicated. It all depends on the use case: DirectX seems like the most solid choice for most AAA games. WebGL or WebGPU are the only two choices for doing 3D graphics on the web. Metal is the go-to graphics API on macOS and iOS (though you can still do Vulkan there via MoltenVK).
My use case is simple: I want to make small 3D games for desktop platforms (Windows and Linux mostly). I also love open source technology and open standards. So, it was a choice between OpenGL and Vulkan for me.
OpenGL is a good enough choice for many small games. But it’s very unlikely that it’ll get new versions in the future (so you can’t use some newest GPU capabilities like ray tracing), it’s deprecated on macOS and its future is uncertain.
WebGPU was also a possible choice. Before learning Vulkan, I learned some of it. It's a pretty solid API, but I had some problems with it:
It’s still not stable and there’s not a lot of tutorials and examples for it.
This tutorial
is fantastic, though.
WGSL is an okay shading language, but I just find its syntax not as pleasant as GLSL’s (note that you can write in GLSL and then load compiled SPIR-V on WebGPU native).
On desktop, it’s essentially a wrapper around other graphic APIs (DirectX, Vulkan, Metal).This introduces additional problems for me:
It can’t do things some things that Vulkan or DirectX can do.
It has more limitations than native graphic APIs since it needs to behave similarly between them.
RenderDoc captures become confusing as they differ between the platforms (you can get DirectX capture on Windows and Vulkan capture on Linux) and you don’t have 1-to-1 mapping between WebGPU calls and native API calls.
Using Dawn and WGPU feels like using bgfx or sokol. You don’t get the same degree of control over the GPU and some of the choices/abstractions might not be the most pleasant for you.
Still, I think that WebGPU is a better API than OpenGL/WebGL and can be more useful to you than Vulkan in some use cases:
Validation errors are much better than in OpenGL/WebGL and not having global state helps a lot.
It’s also kind of similar to Vulkan in many things, so learning a bit of it before diving into Vulkan also helped me a lot.
It requires a lot less boilerplate to get things on the screen (compared to Vulkan).
You don’t have to deal with explicit synchronization which makes things much simpler.
You can make your games playable inside the browser.
Learning Vulkan
Learning Vulkan seemed like an impossible thing to me previously. It felt like you needed many years of AAA game graphics programming experience to be able to do things in it. You also hear people saying "you're basically writing a graphics driver when writing in Vulkan", which made Vulkan sound like an incredibly complicated thing.
I had also checked out some engines written in Vulkan before and was further demotivated by seeing tons of scary abstractions and files named like GPUDevice.cpp or GPUAbstraction.cpp which had thousands of lines of scary C++ code.
The situation has changed over the years. Vulkan is not as complicated as it was before. First of all, Khronos realized that some parts of Vulkan were indeed very complex and introduced some newer features which made many things much simpler (for example, dynamic rendering). Secondly, some very useful libraries which reduce boilerplate were implemented. And finally, there are a lot of fantastic resources which make learning Vulkan much easier than it was before.
The best Vulkan learning resource which helped me get started was vkguide. If you're starting from scratch, just go through it all (you might stop at the "GPU driven rendering" chapter at first - many simple games probably won't need this level of complexity).
Vulkan Lecture Series by TU Wien also nicely teaches Vulkan basics (you can probably skip the "Real-Time Ray Tracing" chapter for now). I found the lecture on synchronization especially helpful.
Here are some more advanced Vulkan books that also helped me:
3D Graphics Rendering Cookbook by Sergey Kosarevsky and Viktor Latypov. A second edition is in the works and promises to be better than the first one. It hasn't been released yet, but its source code can be found here:
https://github.com/PacktPublishing/3D-Graphics-Rendering-Cookbook-Second-Edition
Mastering Graphics Programming with Vulkan by Marco Castorina, Gabriel Sassone
. Very advanced book which explains some of the “cutting edge” graphics programming concepts (I mostly read it to understand where to go further, but didn’t have time to implement most of it). The source code for it can be found here:
https://github.com/PacktPublishing/Mastering-Graphics-Programming-with-Vulkan
Here’s the result of my first month of learning Vulkan:
By this point I had:
glTF model loading
Compute skinning
Frustum culling
Shadow mapping and cascaded shadow maps
Of course, doing it for the 3rd time (I had implemented it all in OpenGL and WebGPU before) certainly helped. Once you get to this point, Vulkan won't seem as scary anymore.
Let’s see how the engine works and some useful things I learned.
My engine is called EDBR (Elias Daler’s Bikeshed Engine) and was initially started as a project for learning Vulkan. It quickly grew into a somewhat usable engine which I’m going to use for my further projects.
At the time of writing this article, the source code line counts are as follows:
Engine itself: 19k lines of code
6.7k LoC related to graphics,
2k LoC are light abstractions around Vulkan
3D cat game: 4.6k LoC
2D platformer game: 1.2k LoC
I copy-pasted some non-graphics related stuff from my previous engine (e.g. input handling and audio system) but all of the graphics and many other core systems were rewritten from scratch. I feel like it was a good way to do it instead of trying to cram Vulkan into my old OpenGL abstractions.
You can follow the commit history which shows how I started from clearing the screen, drawing the first triangle, drawing a textured quad and so on. It might be easier to understand the engine when it was simpler and smaller.
Let’s see how this frame in rendered:
Most of the steps will be explained in more detail below.
Skinning
First, models with skeletal animations are skinned in the compute shader. The compute shader takes unskinned mesh and produces a buffer of vertices which are then used instead of the original mesh in later rendering steps. This allows me to treat static and skinned meshes similarly in shaders and not do skinning repeatedly in different rendering steps.
CSM (Cascaded Shadow Mapping)
I use a 4096x4096 depth texture with 3 slices for cascaded shadow mapping. The first slice looks like this:
Geometry + shading
All the models are drawn and shading is calculated using the shadow map and light info. I use a PBR model which is almost identical to the one described in Physically Based Rendering in Filament. The fragment shader is quite big and does the calculation for all the lights affecting the drawn mesh in one draw call:
Everything is drawn into a multi-sampled texture. Here’s how it looks after resolve:
(Open the previous two screenshots in the next tab and flip between the tabs to see the difference more clearly)
Depth resolve
Depth resolve step is performed manually via a fragment shader. I just go through all the fragments of multi-sample depth texture and write the minimum value into the non-MS depth texture (it’ll be useful in the next step).
Post FX
Some post FX is applied - right now it’s only depth fog (I use “depth resolve” texture from the previous step here), afterwards tone-mapping and bloom will also be done here.
UI
Dialogue UI is drawn. Everything is done in one draw call (more is explained in “Drawing many sprites” section)
And that’s it! It’s pretty basic right now and would probably become much more complex in the future (see “Future work” section).
General advice
Recommended Vulkan libraries
There are a couple of libraries which greatly improve the experience of writing Vulkan. Most of them are already used in vkguide, but I still want to highlight how helpful they were to me.
vk-bootstrap simplifies a lot of Vulkan boilerplate: physical device selection, swapchain creation and so on.
I don’t like big wrappers around graphic APIs because they tend to be very opinionated. Plus, you need to keep a mental map of “wrapper function vs function in the API spec” in your head at all times.
Thankfully, vk-bootstrap is not like this. It mostly affects the initialization step of your program and doesn’t attempt to be a wrapper around every Vulkan function.
When I was learning Vulkan, I started from scratch, without using any 3rd party libraries. Replacing large amounts of the initialization code with vk-bootstrap was a joy. It's really worth it.
I’ll be honest, I used VMA without even learning about how to allocate memory in Vulkan manually. I read about it in the Vulkan spec later - I’m glad that I didn’t have to do it on my own.
volk
Volk was very useful for simplifying extension function loading. For example, if you want to use the very handy vkSetDebugUtilsObjectNameEXT for setting debug names for your objects (useful for RenderDoc captures and validation errors), you'll need to do this if you don't use volk:
// store this pointer somewhere
PFN_vkSetDebugUtilsObjectNameEXT pfnSetDebugUtilsObjectNameEXT;
// during your game init
pfnSetDebugUtilsObjectNameEXT = (PFN_vkSetDebugUtilsObjectNameEXT)
vkGetInstanceProcAddr(instance, "vkSetDebugUtilsObjectNameEXT");
// and finally in your game code
pfnSetDebugUtilsObjectNameEXT(device, ...);
With volk, all the extensions are immediately loaded after you call volkInitialize and you don't need to store these pointers everywhere. You just include volk.h and call vkSetDebugUtilsObjectNameEXT - beautiful!
GfxDevice abstraction
I have a GfxDevice class which encapsulates most of the commonly used functionality and stores many objects that you need for calling Vulkan functions (VkDevice, VkQueue and so on). A single GfxDevice instance is created on startup and then gets passed around.
It handles:
Vulkan context initialization.
Swapchain creation and management.
beginFrame returns a new VkCommandBuffer which is later used in all the drawing steps.
endFrame does the drawing to the swapchain and handles sync between frames.
Image creation and loading textures from files.
Buffer creation.
Bindless descriptor set management (see “Bindless descriptors” section below).
That’s… a lot of things. However, it’s not that big:
GfxDevice.cpp
is only 714 lines at the time of writing this article. It’s more convenient to pass one object into the function instead of many (
VkDevice
,
VkQueue
,
VmaAllocator
and so on).
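A rough sketch of what this interface can look like (illustrative only - the member names are my assumptions, not EDBR's exact API):
// Illustrative sketch, not the engine's real code.
class GfxDevice {
public:
    void init(SDL_Window* window, bool vsync);
    void cleanup();

    VkCommandBuffer beginFrame(); // acquires the next swapchain image, begins the command buffer
    void endFrame(VkCommandBuffer cmd, const GPUImage& drawImage); // blits to swapchain, submits, presents

    GPUImage createImage(const ImageCreateInfo& info);
    GPUImage loadImageFromFile(const std::filesystem::path& path);
    GPUBuffer createBuffer(std::size_t size, VkBufferUsageFlags usage);

    void bindBindlessDescSet(VkCommandBuffer cmd, VkPipelineLayout layout) const;

private:
    VkInstance instance{};
    VkPhysicalDevice physicalDevice{};
    VkDevice device{};
    VkQueue graphicsQueue{};
    VmaAllocator allocator{};
    // swapchain, bindless descriptor set, per-frame sync objects, etc.
};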
Handling shaders
In Vulkan, you can use any shading language which compiles to SPIR-V - that means that you can use GLSL, HLSL and others. I chose GLSL because I already knew it from my OpenGL experience.
You can pre-compile your shaders during the build step or compile them on the fly. I do it during the build so that my shader loading runtime code is simpler. I also don’t have an additional runtime dependency on the shader compiler. Also, shader errors are detected during the build step and I don’t get compile errors during the runtime.
I use glslc (from the shaderc project; it's included in the Vulkan SDK), which allows you to specify a DEPFILE in CMake. This is incredibly useful when you use shader includes: if you change a shader file, all files which include it are recompiled automatically. Without the DEPFILE, CMake can't tell which shader files need to be recompiled and will only recompile the file which was changed.
My CMake script for building shaders is essentially one custom command per shader that runs glslc and hands the generated dependency file to CMake via DEPFILE.
Descriptor sets make things a lot more complicated, because you need to specify descriptor set layouts beforehand, use descriptor set pools and allocate descriptor sets with them, do the whole VkWriteDescriptorSet + vkUpdateDescriptorSets dance, call vkCmdBindDescriptorSets for each descriptor set and so on.
I’ll explain later how I avoided using descriptor sets by using bindless descriptors and buffer device access. Basically, I only have one “global” descriptor set for bindless textures and samplers, and that’s it. Everything else is passed via push constants which makes everything much easier to handle.
The init function is usually called once during engine initialization. The PipelineBuilder abstraction is described in vkguide here. I modified it a bit to use the Builder pattern so that the calls can be chained.
cleanup does all the needed cleanup. It usually simply destroys the pipeline and its layout:
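Something along these lines (a minimal sketch, assuming the pass only owns its pipeline and layout):
void PostFXPipeline::cleanup(VkDevice device)
{
    // Both objects must no longer be in use by the GPU at this point.
    vkDestroyPipeline(device, pipeline, nullptr);
    vkDestroyPipelineLayout(device, pipelineLayout, nullptr);
}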
draw is called each frame and all the needed inputs are passed as arguments. It's assumed that the sync is performed outside of the draw call (see the "Synchronization" section below). Some pipelines are only called once per frame - they either take a std::vector of objects to draw or are called like this:
void PostFXPipeline::draw(
VkCommandBuffer cmd,
GfxDevice& gfxDevice,
const GPUImage& drawImage,
const GPUImage& depthImage,
const GPUBuffer& sceneDataBuffer)
{
// Bind the pipeline
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
// Bind the bindless descriptor set
gfxDevice.bindBindlessDescSet(cmd, pipelineLayout);
// Handle push constants
const auto pcs = PushConstants{
// BDA - explained below
.sceneDataBuffer = sceneDataBuffer.address,
// bindless texture ids - no need for desc. sets!
// explained below
.drawImageId = drawImage.getBindlessId(),
.depthImageId = depthImage.getBindlessId(),
};
vkCmdPushConstants(
cmd, pipelineLayout, VK_SHADER_STAGE_FRAGMENT_BIT, 0, sizeof(PushConstants), &pcs);
// Finally, do some drawing. Here we're drawing a fullscreen triangle
// to do a full-screen effect.
vkCmdDraw(cmd, 3, 1, 0, 0);
}
Note another thing: it's assumed that draw is called between vkCmdBeginRendering and vkCmdEndRendering - the render pass itself doesn't care what texture it renders to; the caller of draw is responsible for that. It makes things simpler and allows you to do several draws to the same render target, e.g.:
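For instance, something like this (a sketch - the attachment setup is omitted and the helper/pass names are made up):
// Sketch: one dynamic-rendering instance, several passes drawing into the same target.
const auto renderingInfo = makeRenderingInfo(drawImage, depthImage); // hypothetical helper filling VkRenderingInfo
vkCmdBeginRendering(cmd, &renderingInfo);
meshPipeline.draw(cmd, gfxDevice, meshDrawCommands);
skyboxPipeline.draw(cmd, gfxDevice, skyboxTexture);
debugLinePipeline.draw(cmd, gfxDevice, debugLines);
vkCmdEndRendering(cmd);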
I use VK_KHR_dynamic_rendering everywhere. I don't use Vulkan render passes and subpasses at all. I've heard that they're more efficient on tile-based GPUs, but I don't care about mobile support for now. VK_KHR_dynamic_rendering just makes everything much easier.
Using programmable vertex pulling (PVP) + buffer device address (BDA)
I have one vertex type for all the meshes. It looks like this:
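Roughly something like vkguide's vertex (a sketch - the engine's actual struct may contain more attributes, e.g. tangents):
struct Vertex {
    glm::vec3 position;
    float uv_x; // interleaved to keep 16-byte friendly alignment without wasted padding
    glm::vec3 normal;
    float uv_y;
    glm::vec4 color;
};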
Of course, you can greatly optimize it using various methods, but it's good enough for me for now. The uv_x / uv_y separation comes from vkguide - I think it's a nice idea to get good alignment and not waste any bytes.
The vertices are accessed in the vertex shader by indexing into a buffer reference (passed as a buffer device address) with gl_VertexIndex.
PVP frees you from having to define a vertex format (no more VAOs like in OpenGL, or VkVertexInputBindingDescription + VkVertexInputAttributeDescription in Vulkan). BDA also frees you from having to bind a buffer to a descriptor set - you just pass the address of the buffer which contains your vertices in push constants and that's it.
Also note the scalar layout for push constants. I use it for all the buffers too. Compared to the "std430" layout, it makes alignment a lot easier to handle - it works almost the same as in C++ and greatly reduces the need for "padding" members in C++ structs.
Bindless descriptors
Textures were painful to work with even in OpenGL - you had "texture slots" which were awkward to work with. You couldn't just sample any texture from the shader if it wasn't bound to a texture slot beforehand. ARB_bindless_texture changed that and made many things easier.
Vulkan doesn’t have the exact same functionality, but it has something similar. You can create big descriptor sets which look like this:
You’ll need to maintain a list of all your textures using some “image manager” and when a new texture is loaded, you need to insert it into the
textures
array. The index at which you inserted it becomes a bindless “texture id” which then can be used to sample it in shaders. Now you can pass these ids in your push constants like this:
I chose separate image samplers so that I could sample any texture using different samplers. Common samplers (nearest, linear with anisotropy, depth texture samplers) are created and put into the samplers array on startup.
A small GLSL wrapper function (like the sampleTexture2DLinear used below) makes the process of sampling a lot more convenient.
The placement of nonuniformEXT is somewhat tricky and is explained very well here.
I use bindless ids for the mesh material buffer which looks like this:
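A sketch of the C++ side (the field set beyond the diffuseTex id used below is my assumption):
// Sketch of one material entry stored in the material buffer.
struct MaterialData {
    glm::vec4 baseColor;
    std::uint32_t diffuseTex;           // bindless texture id
    std::uint32_t normalTex;            // bindless texture id
    std::uint32_t metallicRoughnessTex; // bindless texture id
    std::uint32_t emissiveTex;          // bindless texture id
};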
Now I can only pass material ID in my push constants and then sample texture like this in the fragment shader:
MaterialData material = materials[pcs.materialID];
vec4 diffuse = sampleTexture2DLinear(material.diffuseTex, inUV);
...
Neat! No more bulky descriptor sets, just one int per material in the push constants.
You can also put different texture types into the same set (this is needed for being able to access textures of types other than texture2D).
Handling dynamic data which needs to be uploaded every frame
I find it useful to pre-allocate big arrays of things and push stuff to them in every frame.
Basically, you can pre-allocate an array of N structs (or matrices), start at index 0 each new frame, and push things to it from the CPU. Then you can access all these items in your shaders. For example, I have all joint matrices stored in one big mat4 array, and the skinning compute shader accesses the joint matrices of a particular mesh using a start index passed via push constants (more about this is explained later).
Here are two ways of doing this:
Have N buffers on GPU and swap between them.
vkguide explains the concept of “in flight” frames pretty well. To handle this parallelism properly, you need to have one buffer for the “currently drawing” frame and one buffer for “currently recording new drawing commands” frame to not have races. (If you have more frames in flight, you’ll need to allocate more than 2 buffers)
This means that you need to preallocate 2 buffers on GPU. You write data from CPU to GPU to the first buffer during the first frame. While you record the second frame, GPU reads from the first buffer while you write new data to the second buffer. On the third frame, GPU reads from the second buffer and you write new info to the first buffer… and so on.
One buffer on GPU and N “staging” buffers on CPU
This might be useful if you need to conserve some memory on the GPU.
Note how the staging buffers are created using VMA's PREFER_HOST flag, while the "main" buffer from which we read in the shader uses the PREFER_DEVICE flag.
I’d go with the first approach for most cases (more data on GPU, but no need for manual sync) unless you need to conserve GPU memory for some reason. I’ve found no noticeable difference in performance between two approaches, but it might matter if you are uploading huge amounts of data to GPU on each frame.
Destructors, deletion queue and cleanup
Now, this might be somewhat controversial… but I didn't find much use for the deletion queue pattern used in vkguide. I don't really need to allocate/destroy new objects every frame.
Using C++ destructors for Vulkan object cleanup is not very convenient either. You need to wrap everything in custom classes, add move constructors and move operator=… It adds an additional layer of complexity.
In most cases, the cleanup of Vulkan objects happens in one place - and you don't want to destroy some in-use object mid-frame by accidentally destroying a wrapper object.
It’s also harder to manage lifetimes when you have cleanup in happening in the destructor. For example, suppose you have a case like this:
If you want to clean up SomeOtherClass resources (e.g. the instance of SomeOtherClass has a VkPipeline object) during SomeClass::cleanup, you can't do that if the cleanup of SomeOtherClass is performed in its destructor.
… but I don’t like how it introduces a dynamic allocation and requires you to do write more code (and it’s not that much different from calling a
cleanup
function manually).
Right now, I prefer to clean up stuff directly, e.g.
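Roughly like this (a sketch - the pass names are placeholders):
void Renderer::cleanup(VkDevice device)
{
    // Explicit, ordered teardown: passes first, shared resources last.
    skinningPipeline.cleanup(device);
    csmPipeline.cleanup(device);
    meshPipeline.cleanup(device);
    postFXPipeline.cleanup(device);
    spriteRenderer.cleanup(device);
}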
This approach is not perfect - first of all, it's easy to forget to call the cleanup function. This is not a huge problem, though, since you get a validation error in case you forget to clean up some Vulkan resources on shutdown:
Validation Error: [ VUID-vkDestroyDevice-device-05137 ] Object 0: handle = 0x4256c1000000005d, type = VK_OBJECT_TYPE_PIPELINE_LAYOUT; | MessageID = 0x4872eaa0 | vkCreateDevice(): OBJ ERROR : For VkDevice 0x27bd530[], VkPipelineLayout 0x4256c1000000005d[] has not been destroyed. The Vulkan spec states: All child objects created on device must have been destroyed prior to destroying device (https://vulkan.lunarg.com/doc/view/1.3.280.1/linux/1.3-extensions/vkspec.html#VUID-vkDestroyDevice-device-05137)
VMA also triggers asserts if you forget to free some buffer/image allocated with it.
I find it convenient to have all the Vulkan cleanup happening explicitly in one place. It makes it easy to track when the objects get destroyed.
Synchronization
Synchronization in Vulkan is difficult. OpenGL and WebGPU do it for you - if you read from some texture/buffer, you know that it will have the correct data and you won’t get problems with data races. With Vulkan, you need to be explicit and this is usually where things tend to get complicated.
Right now I manage most of the complexities of sync manually in one place. I separate my drawing into “passes”/pipelines (as described above) and then insert barriers between them. For example, the skinning pass writes new vertex data into GPU memory. Shadow mapping pass reads this data to render skinned meshes into the shadow map. Sync in my code looks like this:
// do skinning in compute shader
for (const auto& mesh : skinnedMeshes) {
skinningPass.doSkinning(gfxDevice, mesh);
}
{
// Sync skinning with CSM
// This is a "fat" barrier and you can potentially optimize it
// by specifying all the buffers that the next pass will read from
const auto memoryBarrier = VkMemoryBarrier2{
.sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER_2,
.srcStageMask = VK_PIPELINE_STAGE_2_COMPUTE_SHADER_BIT,
.srcAccessMask = VK_ACCESS_2_SHADER_WRITE_BIT,
.dstStageMask = VK_PIPELINE_STAGE_2_VERTEX_SHADER_BIT,
.dstAccessMask = VK_ACCESS_2_MEMORY_READ_BIT,
};
const auto dependencyInfo = VkDependencyInfo{
.sType = VK_STRUCTURE_TYPE_DEPENDENCY_INFO,
.memoryBarrierCount = 1,
.pMemoryBarriers = &memoryBarrier,
};
vkCmdPipelineBarrier2(cmd, &dependencyInfo);
}
// do shadow mapping
shadowMappingPass.draw(gfxDevice, ...);
Of course, this can be automated/simplified using render graphs. This is something that I might implement in the future. Right now I’m okay with doing manual sync. vkconfig’s “synchronization” validation layer also helps greatly in finding sync errors.
The following resources were useful for understanding synchronization:
Drawing many sprites
With bindless textures, it's easy to draw many sprites using one draw call without having to allocate vertex buffers at all.
First of all, you can emit vertex coordinates and UVs using gl_VertexIndex in your vertex shader like this:
void main()
{
uint b = 1 << (gl_VertexIndex % 6);
vec2 baseCoord = vec2((0x1C & b) != 0, (0xE & b) != 0);
...
}
This snippet produces this set of values:
gl_VertexIndex    baseCoord
0                 (0,0)
1                 (0,1)
2                 (1,1)
3                 (1,1)
4                 (1,0)
5                 (0,0)
Two triangles form a quad.
All the sprite draw calls are combined into
SpriteDrawBuffer
which looks like this in GLSL:
struct SpriteDrawCommand {
mat4 transform; // could potentially be mat2x2...
vec2 uv0; // top-left uv coord
vec2 uv1; // bottom-right uv coord
vec4 color; // color by which texture is multiplied
uint textureID; // sprite texture
uint shaderID; // explained below
vec2 padding; // padding to satisfy "scalar" requirements
};
layout (buffer_reference, scalar) readonly buffer SpriteDrawBuffer {
SpriteDrawCommand commands[];
};
On CPU/C++ side, it looks almost the same:
struct SpriteDrawCommand {
glm::mat4 transform;
glm::vec2 uv0; // top-left uv coordinate
glm::vec2 uv1; // bottom-right uv coordinate
LinearColor color; // color by which the texture is multiplied
std::uint32_t textureId; // sprite texture
std::uint32_t shaderId; // explained below
glm::vec2 padding; // padding
};
std::vector<SpriteDrawCommand> spriteDrawCommands;
I create two fixed-size buffers on the GPU and then upload the contents of spriteDrawCommands (using the techniques described above in the "Handling dynamic data" section).
The sprite renderer is used like this:
// record commands
renderer.beginDrawing();
{
renderer.drawSprite(sprite, pos);
renderer.drawText(font, "Hello");
renderer.drawRect(...);
}
renderer.endDrawing();
// do actual drawing later:
renderer.draw(cmd, gfxDevice, ...);
The same renderer also draws text, rectangles and lines in my engine. For example, text is just N "draw sprite" commands for a string composed of N glyphs. Solid color rectangles and lines are achieved by using a 1x1 pixel white texture and multiplying it by SpriteDrawCommand::color in the fragment shader.
And finally, here’s how the command to do the drawing looks like inside
SpriteRenderer::draw
:
vkCmdDraw(cmd, 6, spriteDrawCommands.size(), 0, 0);
// 6 vertices per instance, spriteDrawCommands.size() instances in total
All the parameters of the sprite draw command are self-explanatory, but shaderID needs a bit of clarification. Currently, I use it to branch inside the fragment shader.
This allows me to draw sprites differently depending on this ID without having to change pipelines. Of course, it can be potentially bad for the performance. This can be improved by drawing sprites with the same shader ID in batches. You’ll only need to switch pipelines when you encounter a draw command with a different shader ID.
The sprite renderer is very efficient: it can draw 10 thousand sprites in just 315 microseconds.
Compute skinning
I do skinning for skeletal animation in a compute shader. This allows me to have the same vertex format for all the meshes.
Basically, I just take the mesh’s vertices (not skinned) and joint matrices and produce a new buffer of vertices which are used in later rendering stages.
Suppose you spawn three cats with identical meshes:
All three of them can have different animations. They all have an identical “input” mesh. But the “output” vertex buffer will differ between them, which means that you need to pre-allocate a vertex buffer for each instance of the mesh.
Here’s how the skinning compute shader looks like:
I store all joint matrices in a big array and populate it every frame (and also pass the starting index in the array for each skinned mesh, jointMatricesStartIndex).
Skinning data is not stored inside each mesh vertex; instead, a separate buffer of num_vertices elements is used.
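On the host side, the dispatch for one skinned mesh might look roughly like this (a sketch; the push-constant layout and names are assumptions based on the description above):
// Sketch: skin one mesh instance in a compute pass (64 threads per workgroup assumed).
struct SkinningPushConstants {
    VkDeviceAddress jointMatricesBuffer;
    VkDeviceAddress inputVerticesBuffer;   // unskinned mesh vertices
    VkDeviceAddress skinningDataBuffer;    // per-vertex joint ids + weights
    VkDeviceAddress outputVerticesBuffer;  // this instance's skinned vertex buffer
    std::uint32_t jointMatricesStartIndex;
    std::uint32_t numVertices;
};

void SkinningPipeline::doSkinning(VkCommandBuffer cmd, const SkinnedMeshInstance& mesh)
{
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline);
    const auto pcs = SkinningPushConstants{
        // jointMatricesBufferAddress is assumed to be a member holding the BDA of the shared joint matrix array
        .jointMatricesBuffer = jointMatricesBufferAddress,
        .inputVerticesBuffer = mesh.inputVertexBufferAddress,
        .skinningDataBuffer = mesh.skinningDataBufferAddress,
        .outputVerticesBuffer = mesh.skinnedVertexBufferAddress,
        .jointMatricesStartIndex = mesh.jointMatricesStartIndex,
        .numVertices = mesh.numVertices,
    };
    vkCmdPushConstants(
        cmd, pipelineLayout, VK_SHADER_STAGE_COMPUTE_BIT, 0, sizeof(SkinningPushConstants), &pcs);
    vkCmdDispatch(cmd, (mesh.numVertices + 63) / 64, 1, 1);
}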
After the skinning is performed, all the later rendering stages use this set of vertices. The rendering process for static and skinned meshes becomes identical, thanks to that.
Anton’s OpenGL 4 Tutorials
book has the best skinning implementation guide I’ve ever read. Game Engine Architecture by Jason Gregory has nice explanations about skinning/skeletal animation math as well.
Game / renderer separation
I have a game/renderer separation which uses a simple concept of "draw commands". In the game logic I use entt, but the renderer doesn't know anything about entities or "game objects". It only knows about the lights, some scene parameters (like fog, which skybox texture to use, etc.) and the meshes it needs to draw.
The renderer’s API looks like this in action:
void Game::generateDrawList()
{
renderer.beginDrawing();
// Add lights
const auto lights = ...; // get list of all active lights
for (const auto&& [e, tc, lc] : lights.each()) {
renderer.addLight(lc.light, tc.transform);
}
// Render static meshes
const auto staticMeshes = ...; // list of entities with static meshes
for (const auto&& [e, tc, mc] : staticMeshes.each()) {
// Each "mesh" can have multiple submeshes similar to how
// glTF separates each "mesh" into "primitives".
for (std::size_t i = 0; i < mc.meshes.size(); ++i) {
renderer.drawMesh(mc.meshes[i], tc.worldTransform, mc.castShadow);
}
}
// Render meshes with skeletal animation
const auto skinnedMeshes = ...; // list of entities with skeletal animations
for (const auto&& [e, tc, mc, sc] : skinnedMeshes.each()) {
renderer.drawSkinnedMesh(
mc.meshes, sc.skinnedMeshes, tc.worldTransform,
sc.skeletonAnimator.getJointMatrices());
}
renderer.endDrawing();
}
When you call drawMesh or drawSkinnedMesh, the renderer creates a mesh draw command and puts it into a std::vector<MeshDrawCommand>, which is then iterated through during the drawing process. The MeshDrawCommand looks like this:
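A sketch reconstructed from the fields discussed below (the exact members and types are my assumptions):
struct MeshDrawCommand {
    MeshId meshId;                           // index into MeshCache
    glm::mat4 transformMatrix;               // world transform
    Sphere worldBoundingSphere;              // used for frustum culling
    const SkinnedMesh* skinnedMesh{nullptr}; // non-null for skinned meshes
    std::uint32_t jointMatricesStartIndex{}; // used by the compute skinning pass
    bool castShadow{true};
};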
meshId is used for looking up static meshes in MeshCache - it's a simple std::vector of references to vertex buffers on the GPU.
If the mesh has a skeleton, jointMatricesStartIndex is used during compute skinning and skinnedMesh->skinnedVertexBuffer is used for all the rendering afterwards (instead of meshId).
worldBoundingSphere is used for frustum culling.
This separation is nice because the renderer is clearly separated from the game logic. You can also do something more clever, as described here, if sorting draw commands becomes a bottleneck.
Scene loading and entity prefabs
I use Blender as a level editor and export levels as glTF. It's easy to place objects, colliders and lights there. Here's how it looks:
Writing your own level editor would probably take months (years!), so using Blender instead saved me quite a lot of time.
It’s important to mention how I use node names for spawning some objects. For example, you can see an object named
Interact.Sphere.Diary
selected in the screenshot above. The part before the first dot is the prefab name (in this case “Interact”). The “Sphere” part is used by the physics system to create a sphere physics body for the object (“Capsule” and “Box” can also be used, otherwise the physics shape is created using mesh vertices).
Some models are pretty complex and I don’t want to place them directly into the level glTF file as it’ll greatly increase each level’s size. I just place an “Empty->Arrows” object and name it something like “Cat.NearStore”. This will spawn “Cat” prefab and attach “NearStore” tag to it for runtime identification.
During the level loading process, if the node doesn’t have a corresponding prefab, it’s loaded as-is and its mesh data is taken from the glTF file itself (this is mostly used for static geometry). If the node has a corresponding prefab loaded, it’s created instead. Its mesh data is loaded from the external glTF file - only transform is copied from the original glTF node (the one in the level glTF file).
Once glTFX is released and the support for it is added to Blender, things might be even easier to handle, as you'll be able to reference external glTF files with it.
MSAA
Using forward rendering allowed me to easily implement MSAA. Here’s a comparison of how the game looks without AA and with MSAA on:
The UI can calculate its own layout without me having to hard-code each individual element's size and position. It relies on the following concepts:
Origin is an anchor around which the UI element is positioned. If the origin is (0, 0), setting a UI element's position to (x, y) will make its upper-left pixel have the (x, y) pixel coordinate. If the origin is (1, 1), then the element's bottom-right corner will be positioned at (x, y). If the origin is (0.5, 1), then it will be positioned using its bottom-center point as the reference.
Relative size makes a child's size proportional to its parent's size. If it's (1, 1), then the child element will have the same size as the parent element. If it's (0.5, 0.5), then it'll have half the size of the parent. If the parent uses its children's size as a guide, then a child with (0.5, 0.25) relative size makes the parent's width 2x larger and its height 4x larger.
Relative position uses parent’s size as a guide for positioning. It’s useful for centering elements, for example if you have an element with (0.5, 0.5) origin and (0.5, 0.5) relative position, it’ll be centered inside its parent element.
You can also set pixel offsets for both position and size separately (they're called offsetPosition and offsetSize in my codebase).
You can also set a fixed size for the elements if you don’t want them to ever be resized.
The label/image element size is determined using its content.
Here are some examples of how it can be used to position child elements:
a) The child (yellow) has relative size (0.5, 1), relative position of (0.5, 0.5) and origin (0.5, 0.5) (alternatively, the relative position can be (0.5, 0.0) and origin at (0.5, 0.0) in this case). Its parent (green) will be two times wider, but will have the same height. The child element will be centered inside the parent.
b) The child (yellow) has origin (1, 1), fixed size (w,h) and absolute offset of (x,y) - this way, the item can be positioned relative to the bottom-right corner of its parent (green)
First, sizes of all elements are calculated recursively. Then positions are computed based on the previously computed sizes and specified offset positions. Afterwards all elements are drawn recursively - parent element first, then its children etc.
When calculating the size, most elements either have a “fixed” size (which you can set manually, e.g. you can set some button to always be 60x60 pixels) or their size is computed based on their content. For example, for label elements, their size is computed using the text’s bounding box. For image elements, their size equals the image size and so on.
If an element has an “Auto-size” property, it needs to specify which child will be used to calculate its size. For example, the menu nine-slice can have several text labels inside the “vertical layout” element - the bounding boxes will be calculated first, then their sizes will be summed up - then, the parent’s size is calculated.
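In code, the two passes can be sketched roughly like this (illustrative only - the real element interface differs):
// Sketch: recursive two-pass layout - sizes bottom-up, then positions top-down.
void calculateSizes(Element& e)
{
    for (auto& child : e.children) {
        calculateSizes(*child);
    }
    if (e.autoSize) {
        e.size = computeSizeFromChildren(e); // e.g. sum of children's bounding boxes plus padding
    } else if (!e.hasFixedSize) {
        e.size = computeSizeFromContent(e); // e.g. text bounding box or image size
    }
}

void calculatePositions(Element& e, glm::vec2 parentPos, glm::vec2 parentSize)
{
    // Place the element's origin point at the anchor defined by its relative position.
    e.absolutePosition =
        parentPos + e.relativePosition * parentSize - e.origin * e.size + e.offsetPosition;
    for (auto& child : e.children) {
        calculatePositions(*child, e.absolutePosition, e.size);
    }
}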
Let’s take a look at a simple menu with bounding boxes displayed:
Here, the root NineSliceElement is marked as "Auto-size". To compute its size, it first computes the size of its child (ListLayoutElement). This recursively computes the sizes of each button, sums them up and adds some padding (ListLayoutElement also makes the width of each button the same, based on the maximum width in the list).
Dear ImGui and sRGB issues
I love Dear ImGui. I used it to implement many useful dev and debug tools (open the image in a new tab to see them better):
It has some problems with sRGB, though. I won't explain it in detail, but basically, if you use an sRGB framebuffer, Dear ImGui will look wrong in many ways; see the comparison:
Left - naive sRGB fix for Dear ImGui, right - proper fix
Sometimes you can see people doing hacks like pow(col, vec4(2.2)) on Dear ImGui's colors, but it still doesn't work properly with alpha and produces incorrect color pickers.
I ended up writing my own Dear ImGui backend and implementing DiligentEngine's workaround, which is explained in detail here and here.
Writing it wasn’t as hard as I expected. I only need to write the
rendering
part, while “logic/OS interaction” part (input event processing, clipboard etc.) is still handled by default Dear ImGui SDL backend in my case.
There are some additional benefits of having my own backend:
It supports bindless texture ids, so I can draw images by simply calling ImGui::Image(bindlessTextureId, ...). Dear ImGui's Vulkan backend requires you to "register" textures by calling ImGui_ImplVulkan_AddTexture for each texture before you can call ImGui::Image.
It can properly draw linear and non-linear images by passing their format into backend (so that sRGB images are not gamma corrected twice when they’re displayed)
Initializing and dealing with it is easier as it does Vulkan things in the same way as the rest of my engine.
Other stuff
There are many parts of the engine not covered here because they're not related to Vulkan. I still feel like it's good to mention them briefly for the sake of completeness.
I use Jolt Physics for physics.
Integrating it into the engine was pretty easy. Right now I mostly use it for collision resolution and basic character movement.
The samples are fantastic. The docs are very good too.
I especially want to point out how incredible JPH::CharacterVirtual is. It handles basic character movement so well. I remember spending days trying to get proper slope movement in Bullet to work. With Jolt, it just worked "out of the box".
Here’s how it basically works (explaining how it works properly would probably require me to write quite a big article):
You add your shapes to Jolt’s world.
You run the simulation.
You get new positions of your physics objects and use these positions to render objects in their current positions.
I implemented a Jolt physics shape debug renderer using im3d.
entt has worked great for me so far. Previously I had my own ECS implementation, but I decided to experiment with a 3rd party ECS library to have less code to maintain.
Integrating it was very easy (read the PDF doc, it’s fantastic!) and it helped me avoid tons of bike-shedding by seeing how little time something, which I thought was “inefficient”, really took.
What I gained from switching to Vulkan
There are many nice things I got after switching to Vulkan:
No more global state
This makes abstractions a lot easier. With OpenGL abstractions/engines, you frequently see “shader.bind()” calls, state trackers, magic RAII, which automatically binds/unbinds objects and so on. There’s no need for that in Vulkan - it’s easy to write functions which take some objects as an input and produce some output - stateless, more explicit and easier to reason about.
API is more pleasant to work with overall - I didn’t like “binding” things and the whole “global state machine” of OpenGL.
You need to write fewer abstractions overall.
With OpenGL, you need to write a lot of abstractions to make it all less error-prone… Vulkan's API requires a lot less of this, in my experience. And usually the abstractions that you write map closer to Vulkan's "raw" functions, compared to OpenGL abstractions which hide manipulation of global state and usually call several functions (and might do some stateful things for optimization).
Better validation errors
Validation errors are very good in Vulkan. While OpenGL has glDebugMessageCallback, it doesn't catch that many issues and you're left wondering why your texture looks weird, why your lighting is broken and so on. Vulkan has more extensive validation which makes the debugging process much better.
Debugging in RenderDoc
I can now debug shaders in RenderDoc. It looks like this:
With OpenGL I had to output the values to some texture and color-pick them… which took a lot of time. But now I can debug vertex and fragment shaders easily.
More consistent experience across different GPUs and OSes.
With OpenGL, drivers on different GPUs and OSes worked differently from each other which made some bugs pop up only on certain hardware configurations. It made the process of debugging them hard. I still experienced some slight differences between different GPUs in Vulkan, but it’s much less prevalent compared to OpenGL.
Ability to use better shading languages in the future
GLSL is a fine shading language, but there are some new shading languages which promise to be more feature-complete, convenient and readable, for example:
I might explore them in the future and see if they offer me something that GLSL lacks.
More control over every aspect of the graphics pipeline.
Second system effect, but good
My first OpenGL engine was written during the process of learning graphics programming from scratch. Many abstractions were not that good and rewriting them with some graphics programming knowledge (and some help from vkguide) helped me implement a much cleaner system.
Street cred
And finally, it makes me proud to be able to say “I have a custom engine written in Vulkan and it works”. Sometimes people start thinking about you as a coding wizard and it makes me happy and proud of my work. :)
Future work
There are many things that I plan to do in the future, here’s a list of some of them:
Signed distance field (SDF) font support (good article about implementing them)
Loading many images and generating mipmaps in parallel (or use image formats which already have mipmaps stored inside of them)
Bloom.
Volumetric fog.
Animation blending.
Render graphs.
Ambient occlusion.
Finishing the game? (hopefully…)
Overall, I’m quite satisfied with what I managed to accomplish. Learning Vulkan was quite difficult, but it wasn’t as hard as I imagined. It taught me a lot about graphics programming and modern APIs and now I have a strong foundation to build my games with.
Reverse Engineering the Miele Diagnostic Interface
A few weeks ago, my parents’ old Miele washing machine suddenly stopped functioning. It seemed like the machine finally required some maintenance, considering that it had been in operation for almost 20 years without issues. Disassembling the appliance revealed a number of hoses connecting the different parts of the washing machine. Upon closer inspection, several of these hoses were almost completely blocked by the detergent residue that had accumulated over the past two decades. After cleaning all hoses, the appliance initially seemed to be working fine again. However, at the end of the washing cycle, the machine didn’t start to spin up. According to multiple forum posts, this fault was most likely caused by the analog pressure sensor that determines the water level inside the drum. If the residual water doesn’t fall under a certain level during the pumping cycle, the machine aborts the current washing program. The white sensor housing can be easily spotted in the bottom right corner of the machine’s electronics board:
Analog pressure sensor mounted on PCB
Following some quick measurements with a multimeter, I determined that the sensor was functioning correctly. However, as several Miele experts pointed out, the sensor might have to be calibrated again after taking the machine apart, requiring a proprietary Miele software that is only available to registered service technicians. Fortunately, it turned out that this specific problem was not related to the calibration but could be instead fixed by clearing the machine’s fault memory.
Even though the washing machine was now working again, I was still curious about how the pressure sensor could actually be calibrated. As far as I could tell, there were no external ports on the electronics board for programming purposes.
So how does the Miele software communicate with the appliance?
This article turned out to be much longer than initially expected. Feel free to jump to individual sections if you want to skip some of the theory!
Online repair guides and technical documentation often mention the so-called Miele Diagnostic Utility (MDU), a proprietary tool used by technicians to diagnose common faults on all kinds of Miele devices. While every official repair business can register on Miele's website to get access to the service utility, its use requires very costly special hardware that has to be purchased from Miele, as well as dedicated training sessions.
At first glance, very little information can be found online about the MDU, except for a few screenshots of the software. For illustrative purposes, I came up with the following (very rough) sketch of the graphical user interface:
User interface of the MDU (simplified sketch)
While looking for more details of the software, I discovered this presentation (in French) about an older version of Miele's diagnostic software and hardware, offering deeper insights into the capabilities and workings of the interface.
Judging from the contents of the presentation, the MDU can be used to read various properties from a connected appliance. This includes the software ID, model and fabrication number, operating hours and fault memory. However, the number of properties that can be queried seems to vary from model to model.
While this data might be interesting for technically inclined appliance owners, the real power of the MDU lies in the monitoring features of the software. In addition to the live status of all sensors connected to the washing machine, such as the temperature sensor, water level sensor, or motor RPM, the utility also provides an overview of the actuator status, including all heating, water control and pump relays. The selected washing program and current program phase are also displayed by the software, along with the configured options, such as prewash or increased water level.
Many Miele washing machines provide a service mode that can be accessed by turning on the machine while pressing a certain button combination on the front panel. The service options offered by this mode can also be triggered by the MDU. However, the software additionally features a calibration menu that is used to calibrate internal sensors like the analog pressure sensor that measures the water level.
Finally, the MDU also provides program updates for Miele appliances. These updates were originally intended to allow changes to the built-in washing programs, such as adjusting program cycle times or the water amount. On newer appliances, the MDU can even update the full firmware of the electronics board.
These features are highly useful for diagnostic purposes, not only for professional service technicians but also for appliance owners that would like to repair their own devices. But how does the MDU communicate with a Miele appliance? Reading through the presentation slides reveals a so-called Program Correction (PC) interface that is available on all appliances manufactured since 1996. This interface is located on the front panel of the machine, usually disguised as the check inlet indicator on washing machines or the salt missing indicator on dishwashers. The following picture clearly shows the PC interface on a Miele Softtronic W 2446 washing machine:
Front panel of Miele Softtronic W 2446 washing machine
While these indicator lights normally show the operating status of the machine, they are not just regular LEDs. Instead, the red PC indicator LED also includes an infrared phototransistor, enabling bidirectional communication with the MDU software using a suitable optical communication adapter. According to a public Miele presentation, this interface is not only used for field diagnostics, but also during the development phase and end-of-line factory testing. The presentation also includes a picture of the actual surface-mount LED that is used on the adapter side, which looks very similar to the OSRAM Multi TOPLED SFH 7250 infrared emitter and phototransistor at first glance. While a dual-use indicator is clever in principle, it comes with drawbacks. When the respective indicator light is actually in use, no communication via the PC interface is possible. For this reason, Miele might have decided to switch to a dedicated PC LED indicator on newer appliances, such as their coffee machines. Due to the close proximity between the emitter and phototransistor, the communication is also limited to a relatively slow half-duplex operation.
Practical use of the MDU software requires a proprietary optical communication adapter, which has to be purchased separately from Miele. This adapter, which is also referred to as the Miele Optical Interface, consists of an interface box (codename EZI 820) and a head unit (EZI 821 or EZI 821-A) that are connected via a fiber-optic cable. The interface box features a DE-9 connector for RS-232 communication with a host PC. Newer versions of the optical interface also include a USB connector for this purpose. The head unit is then attached to the appliance through a suction cup mechanism, aligning an optical fiber with the PC indicator hole. This complete assembly and communication technique was patented by Miele in 1995, with the original intention of allowing washing program corrections for after-sales service.
Due to the proprietary nature of the optical interface, Miele does not publish any images of the adapter unit. However, given the high cost of official hardware, these adapters often surface on auction sites with detailed pictures. Some people are even looking to buy the MDU from other sources, as the adapter is pretty much useless without the software.
While not many details are available online about the internals of the Miele Optical Interface, this forum user claims to have bought the unit from an eBay auction. The adapter is apparently a simple serial-to-infrared converter, implementing the well-known Infrared Data Association (IrDA) standard, commonly used in older laptops and embedded systems. It is based on an STM32F103 microcontroller, with all upper-level protocol logic implemented by the MDU software. This is excellent news, as building an adapter would therefore only require a cheap microcontroller and an infrared emitter/detector.
In contrast to the details about the adapter unit, the proprietary protocol that is used by the MDU software is completely undocumented. However, reverse engineering the protocol would allow an open source diagnostic software to be built, which would be immensely useful for the repair community. It might also allow older Miele appliances to be integrated into home automation solutions, by building a bridge between the PC interface and existing software such as Home Assistant.
With these goals in mind, I decided to look for salvaged electronics from old Miele appliances on eBay. More specifically, I was looking for the main circuit board of a washing machine, since experimenting on a fully assembled appliance would have posed significant electrical and mechanical hazards. As luck would have it, I managed to win the bid for a brand new Miele EDPW 206 manufactured in 2010:
Front side of Miele EDPW 206 electronics board
Back side of Miele EDPW 206 electronics board
This board is part of the Miele W 961 washing machine series, manufactured from 1998 to 2003, according to
this forum post
. The EDPW 200 label on the back side of the PCB hints at the existence of further variations of this board for other washing machines. In contrast to newer Miele appliances, the power electronics are contained on a separate PCB for this machine, making the reverse engineering process much safer.
The PCB itself is a fairly simple double-layer design, without any large copper fills. Ground and power traces are instead routed as separate tracks, leading to the enormous number of vias that can be seen in the previous pictures. Figuring out the connections between individual components is therefore rather tedious.
One of the central components of this PCB is a large 80-pin chip marked with
MIELE 6478170 M37451MC-804FP
. A quick online search for M37451 suggests that this chip is part of the Mitsubishi 740 series of 8-bit microcontrollers, which are also known as MELPS 740, according to
Wikipedia
. These microcontrollers were originally manufactured during the 1980s and 1990s, with a relatively simple instruction set similar to the widely known WDC 65C02. Although these parts are no longer produced today, the instruction set lives on in the newer Renesas 38000/740 microcontroller series.
The
M37451MC-804FP
includes an integrated mask ROM, meaning the Miele firmware is embedded directly in the chip’s die and can’t be reprogrammed after the manufacturing process. As denoted by the
MC
suffix, the
M37451MC-804FP
has a total RAM size of 512 bytes with a 24 kB mask ROM. Other features include an 8-bit ADC with 8 channels, a 2-channel 8-bit DAC and three individual 16-bit timers. Serial communication is handled by a serial I/O block that can be configured for asynchronous or synchronous operation. The chip is powered by a 5 V supply, with an operating frequency of 10 MHz. More information about the Mitsubishi microcontrollers can be found in
volume two of Mitsubishi’s Single-Chip 8-bit Microcomputers Data Book
.
Mitsubishi M37451 microcontroller and Microchip 93LC66BI EEPROM
Located right next to the microcontroller is a Microchip 93LC66BI EEPROM with a capacity of only 512 bytes. The stored data is organized in 16-bit words, which can be accessed via a Microwire interface. All configuration parameters and the current state of the running wash cycle are written to the EEPROM just before the machine is powered off. This allows the machine to resume the program once it is turned back on again. In addition to this data, the EEPROM also stores any program corrections that are applied via the PC interface.
As the water inlet, detergent processing and heating cycle are controlled by individual relays that require an input voltage higher than 5 V, the PCB also includes a Texas Instruments ULN2003A Darlington transistor array.
The water level inside the washing machine’s drum is sensed by an SPX3078D analog pressure sensor manufactured by Motorola. This sensor basically consists of a silicon diaphragm, which is used to determine the applied pressure through a Wheatstone bridge circuit. The differential output voltage is then processed by an ELMOS E210.01C. Since ELMOS provides no public documentation on this component, its exact function is unclear. However, I strongly suspect it contains an operational amplifier and possibly additional signal processing circuitry. One of the pins is connected to the microcontroller’s analog input port and provides a voltage proportional to the sensed pressure.
Motorola SPX3078D pressure sensor, ELMOS E210.01C IC and TI ULN2003A transistor array
Most of the indicator LEDs on the PCB are multiplexed and wired to a Motorola MC14489DW LED driver, which offers an SPI interface for the microcontroller:
Motorola MC14489DW LED driver
Upon detailed inspection of the LEDs on the right side of the board, one can see that the lowest LED is quite different from the rest of the group. Looking closer reveals that this is actually a combined red LED and infrared phototransistor. This is the optical PC interface, disguised as one of the indicator lights:
Optical PC interface next to normal indicator LED
The LED is not part of the group of multiplexed indicator LEDs and is instead wired to a pair of transistors. An NPN transistor connects the microcontroller’s UART transmit pin to the light emitter, while the phototransistor is connected to the UART receive pin via a PNP transistor.
The PC interface is therefore just a simple optical UART port, albeit limited to half-duplex communication.
To communicate with the EDPW board via the optical interface, the PCB has to be connected to an appropriate power supply. Luckily, the EDPW 206’s technical documentation includes the complete pinout of the board’s connectors:
EDPW 206 connector pinout
However, simply supplying the board with 5 V from a lab power supply didn’t seem to have any effect. Taking a closer look at the pinout shows that the board also expects 20 V for the U_C voltage, which is connected to the ULN2003A’s common-cathode node for the integrated flyback diodes. This voltage seems to be sensed by the microcontroller through a resistive divider. Unfortunately, even with those two voltages, the EDPW didn’t seem to turn on. Further investigation revealed that the board also requires an AC zero-crossing detection signal, referred to by the term
Netznulldurchgang (NND)
in German. This signal is generated by an optocoupler on the power electronics board, resulting in an alternating wave based on the line frequency. Supplying a 50 Hz square wave from a Pi Pico in place of this signal finally brought the EDPW to life:
Working EDPW 206 board (with blinking red LEDs)
While all basic functions of the EDPW seemed to work fine, including the program selection knob and the configuration buttons, I quickly noticed that the
check intake
and
check drain
indicators were flashing. Because the
check intake
LED also serves as the optical interface, this issue had to be resolved before any communication was possible. I initially assumed that the analog pressure sensor was giving some incorrect readings, but further investigations ruled out this theory. Instead, this issue was related to the missing relays that would normally be connected to the board. As it turns out, the microcontroller actually checks the presence of the
prewash
and
main wash
relays by sensing the voltage on the board’s relay outputs. When the relays are connected to U_C on one side, the voltage at the transistor array’s collector pins is also equal to U_C. Both the
prewash
and
main wash
outputs then go to a BAV70 common cathode double diode chip that is wired to the microcontroller:
BAV70 common cathode double diode (marked as A4W)
Connecting a 10 kOhm resistor between the pin for the
main wash
relay and U_C therefore stops the red LEDs from blinking. With this workaround in place, the EDPW board was now fully functional.
Before reverse engineering the PC interface, it is worth taking a closer look at the EEPROM chip. Removing the chip from the PCB and soldering it to an SOIC adapter allows its contents to be read using a CH341A EEPROM programmer. It should be noted that the adapter can’t be plugged into the socket directly, as the pinout of the 93XXX chip differs from classic 25XXX EEPROMs that this programmer is designed for.
Adapter PCB for SOIC EEPROM chips
CH341A EEPROM and flash programmer
Reading the EEPROM contents with
IMSProg
revealed that only 42 bytes are actually used, with almost all remaining bytes set to
ff
, indicating erased or unused memory:
To analyze how the stored data changes under different washing program settings and conditions, the EEPROM chip was soldered to the PCB again, while also attaching a logic analyzer to monitor the Microwire interface:
EDPW board with probes connected to EEPROM chip
USB logic analyzer with 8 channels
When the EDPW board is powered on, the microcontroller reads the first 42 bytes from the EEPROM. As soon as either the U_C voltage or the zero-crossing signal is lost, the memory contents are written back to the EEPROM.
EEPROM Microwire signal capture in PulseView
After trying out different washing programs and observing the changes to the individual bytes, the full EEPROM contents can be deciphered:
All bytes are inverted before being written to the EEPROM by the microcontroller. The
first 12 bytes
store general information about the washing machine. As an example, this includes the currently running program and program phase (first and second byte) and the operating hours of the appliance (third and fourth byte). This section ends with a simple
one byte checksum
that is computed by summing the preceding bytes (modulo
ff
) and inverting the result.
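To make the layout concrete, here is a minimal Python sketch of how such a dump could be decoded and its first checksum verified. The file name, the exact section boundaries and the byte order of the operating hours are assumptions, and it is not verified whether the sum runs over the stored (inverted) or the decoded bytes; only the inversion and the modulo-ff checksum rule are taken from the description above.

# Minimal sketch: decode an EEPROM dump and verify the checksum of the
# general information section. Boundaries and byte order are assumptions.

def decode(raw: bytes) -> bytes:
    # all bytes are stored inverted by the microcontroller
    return bytes(b ^ 0xFF for b in raw)

def eeprom_checksum(data: bytes) -> int:
    # sum of the preceding bytes modulo 0xff, then inverted
    return (sum(data) % 0xFF) ^ 0xFF

with open("eeprom_dump.bin", "rb") as f:       # hypothetical dump file
    eeprom = decode(f.read())

general = eeprom[:12]                          # general information incl. trailing checksum
payload, stored = general[:-1], general[-1]
print("program / phase:", payload[0], payload[1])
print("operating hours:", int.from_bytes(payload[2:4], "big"))   # byte order is a guess
print("checksum OK:", eeprom_checksum(payload) == stored)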
The next
group of bytes
encodes the fixed washing machine configuration that can only be changed by entering the programming mode. Settings such as the water type and region code are stored in this section. Another
checksum
is appended to these bytes, again computed over all previous bytes.
Configuration options that are chosen during the normal operation of the machine (e.g. short, prewash, spin cycle RPM, etc.) are preserved in the
subsequent bytes
, stored separately for each individual washing program.
The last section is mostly empty, but most likely used to further customize the behavior of washing programs as part of the program correction mechanism. Selecting the
Cottons 95 °C
program for example causes the microcontroller to continuously read the byte at address
40
from the EEPROM, probably checking for some bit to be set. Despite some areas being unused or unclear, this EEPROM analysis provided some valuable insights into the internal operation of the EDPW.
Returning to the PC interface, I wondered whether it might be possible to extract the microcontroller’s firmware to reverse engineer the diagnostic protocol. As
previously noted
, the firmware is stored in a mask ROM during manufacturing. Gaining access to the ROM’s contents would therefore require the chip to be decapped, which calls for special equipment and experience that I don’t have. However, according to its manual, the Mitsubishi M37451 seems to feature multiple processor modes that might allow the ROM to be dumped:
Mitsubishi M37451 processor modes
The processor is running in single-chip mode by default, as the CNV_SS pin is pulled to GND on the EDPW board. Connecting CNV_SS to 5 V would cause the chip to enter microprocessor mode, loading the program code from external memory. This would in theory allow the embedded firmware to be dumped, but unfortunately, access to the internal ROM is blocked in this case, as specified in the manual. This restriction, likely implemented for security reasons, is not present for the memory expansion mode, but this processor mode can only be entered by writing to a register when running in single-chip mode.
Although techniques like fault injection or voltage glitching might bypass these limitations, I decided to continue the reverse engineering process without access to the firmware.
To communicate with the EDPW’s optical interface, I connected a USB-UART adapter directly to the microcontroller’s UART pins. Lacking a proper 5 V USB-UART adapter, I used an Arduino Uno clone just for its UART capabilities. As the optical interface seemed to be very sensitive to infrared radiation from sunlight, I decided to disconnect the phototransistor from the UART’s receive pin:
Reverse engineering hardware setup
Now that the hardware was set up, it was time to focus on the actual serial communication. As might be expected, the interface doesn’t send any data on its own during normal operation, so its protocol must be based on a request-response scheme. However, figuring out the exact commands that need to be sent would be extremely difficult without any details about the protocol. It is often a good idea to look for similar protocols in this situation, which might provide some clues about the general command structure. After a more thorough online search, I found a
forum post
(in German) describing the
Miele@home
interface, which is used to add remote control functionality to Miele appliances. It provides a detailed analysis of the Miele@home communication module and its serial protocol, including a full communication log between module and appliance.
The serial interface is initially configured for a data rate of 2400 baud with 8 data bits, 1 stop bit and no parity. After a short handshake sequence, the communication switches to a speed of 9600 baud. Considering that the Miele PC interface was introduced in 1996, it doesn’t seem unlikely that first implementations were limited to 2400 baud, which would explain why the communication begins at this baud rate. The messages sent by the module and the appliance are always either 5 bytes or 1 byte long, where a single
00
byte indicates the successful reception of a valid message by the receiving side. All 5-byte messages begin with a command byte, 2 unknown bytes and a single byte which seemingly indicates the expected length of the response payload. These messages end with a simple 8-bit checksum that is computed by summing the previous bytes, similar to the EEPROM checksum discussed in the last section.
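Based on that description, a request frame can be modelled in a few lines of Python. The field names are my own, and the response framing is only inferred from the logged exchanges; the 8-bit sum checksum does match the captured messages.

# Presumed 5-byte request layout: command, two parameter bytes,
# expected response length, checksum (sum of the preceding bytes).

def checksum(data: bytes) -> int:
    return sum(data) & 0xFF

def build_message(command: int, p1: int = 0, p2: int = 0, resp_len: int = 0) -> bytes:
    body = bytes([command, p1, p2, resp_len])
    return body + bytes([checksum(body)])

def parse_response(data: bytes):
    # a lone 00 byte is an ACK; longer responses appear to be
    # ACK + payload + checksum (framing inferred, not documented)
    if data == b"\x00":
        return "ack", b""
    if len(data) >= 3 and checksum(data[1:-1]) == data[-1]:
        return "data", data[1:-1]
    return "invalid", data

print(build_message(0x11, resp_len=0x02).hex(" "))   # -> "11 00 00 02 13"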
With basic knowledge of the protocol, the first couple of messages can now be analyzed in further detail:
The handshake starts with command
11
, expecting a response with a length of
02
. The request is acknowledged by the appliance, which responds with
fb 08
. This response is likewise acknowledged by the module, which proceeds by sending the command
21
. The rest of the handshake continues in a similar manner until the communication switches to 9600 baud.
Could this be the same protocol that is used by the PC interface?
To confirm this assumption, I tried sending the initial
11
command via the USB-UART adapter:
Logic analyzer trace of UART signals
Unfortunately, this didn’t lead to any response from the PC interface. At this point, I decided to take another look at the microcontroller’s datasheet, focusing on the UART section:
Mitsubishi M37451 UART operation
Assuming that the UART is indeed configured for 2400 baud, the only remaining options that can be configured are the number of data bits, stop bits and the parity mode. At this baud rate, a combination of 8 data bits and 1 stop bit would seem to be the most likely choice. However, since the communication is based on an optical interface, the parity bit might actually be used. And sure enough, configuring the USB-UART adapter for even parity and sending the same message again triggered a response from the EDPW:
UART trace with even parity
In contrast to the Miele@home communication log, the payload of the response was
a3 01
(419 in decimal) in this case. According to the technical documentation for the EDPW 206, this seems to be the so-called
software ID
of the board. Feeling relatively optimistic at this point, I tried transmitting the next handshake message, hoping to receive another response. However, upon sending the
21
command, the EDPW just answered with a single
02
byte. Trying other random commands also led to the same response, except for the
10
command which was acknowledged with
00
. Sending a valid message with an incorrect checksum caused the EDPW to reply with
01
.
Using a short Python script, I tested every single possible command, but none of the other commands received an acknowledgement from the EDPW. Nevertheless, even invalid commands had to be acknowledged by the PC before the EDPW would accept the next message:
PC -> EDPW: XX 00 00 02 CC
EDPW -> PC: 02
PC -> EDPW: 00
<continue with XX + 1>
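For reference, the scan described above boils down to a few lines of pyserial, reusing the build_message() helper sketched earlier; the port name and timeout are placeholders.

# Sketch of the command scan over the optical PC interface (2400 baud, even parity).
import serial   # pyserial

port = serial.Serial("/dev/ttyUSB0", baudrate=2400,          # port name is a placeholder
                     parity=serial.PARITY_EVEN, timeout=0.5)

for command in range(0x100):
    port.write(build_message(command, resp_len=0x02))
    reply = port.read(8)                    # 02 = rejected, 00 = accepted
    if reply and reply[0] != 0x02:
        print(f"command {command:02x}: {reply.hex(' ')}")
    port.write(b"\x00")                     # even rejected commands must be ACKed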
At this point, the only two commands that triggered a positive response from the EDPW were
10
and
11
. While messing around with command
11
, I realized that the EDPW would not react to messages sent after this command unless the PC responded with its own 4-byte payload:
PC -> EDPW: 11 00 00 02 13
EDPW -> PC: 00 a3 01 a4
PC -> EDPW: 00
PC -> EDPW: 00 00 00 00
<next message can be sent>
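That observed exchange can be wrapped into a small helper, again reusing build_message() and the serial port from the earlier sketches; reading the two payload bytes as a little-endian software ID follows the earlier measurement.

def handshake(port) -> int:
    # command 11: returns the software ID and expects a 4-byte payload afterwards
    port.write(build_message(0x11, resp_len=0x02))
    reply = port.read(4)                          # e.g. 00 a3 01 a4 on this board
    port.write(b"\x00")                           # acknowledge the response
    port.write(bytes(4))                          # the 4-byte payload; purpose unknown
    return int.from_bytes(reply[1:3], "little")   # software ID (0x01a3 = 419)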
However, changing the values of these 4 bytes didn’t seem to trigger another response from the EDPW. Wondering whether the handshake might require a certain sequence of commands, I modified the script to transmit command
11
before every iteration:
PC -> EDPW: 11 00 00 02 13
EDPW -> PC: 00 a3 01 a4
PC -> EDPW: 00
PC -> EDPW: 00 00 00 00
PC -> EDPW: XX 00 00 00 CC
EDPW -> PC: 02
PC -> EDPW: 00
<continue with XX + 1>
This revealed yet another valid command that was part of the handshake sequence:
20
. However, that was apparently still not enough to successfully complete the handshake process. None of the commands I tried after this point yielded any meaningful response from the EDPW. The response to command
20
was always
00
, no matter what parameter values I used for the message. After reading up on common diagnostic protocols from the 1990s, I came up with the following theory:
Command
20
is used to
unlock
the diagnostic interface, but requires a certain set of parameters (a secret key)
As part of the unlock sequence, command
11
always has to be sent before
20
Upon reception of command
10
, the diagnostic interface is
locked
again
But how can the secret key for the unlock command be determined? Assuming that the key is encoded in the two parameter bytes of the message, a simple brute-force approach would require up to 65536 tries to guess the key. However, without knowing whether the key was actually correctly guessed and which commands are unlocked if the correct key is provided, the total number of required attempts would increase significantly. Considering the interface’s low speed of 2400 baud, this strategy didn’t seem to be feasible at all.
I decided to take a closer look at the microcontroller on the EDPW board in search of other attack vectors. As
previously mentioned
, the Mitsubishi M37451 is configured for single-chip mode, executing its firmware directly from the internal mask ROM. However, for the two other processor modes, the M37451 provides some additional output signals which can be used to control an external EEPROM. These signals are named WR, RD, R/W and SYNC, as can be seen in the bottom right corner of the microcontroller’s pinout:
Mitsubishi M37451 pinout
According to the datasheet, the SYNC signal is high while the microcontroller is fetching an operation code from its memory. Reading from the data bus sets RD high, while writing to an external component sets WR high. The bus transfer direction is also indicated by a combined R/W signal, which is high during bus reads and low during bus writes.
One would expect these signals to be disabled when the microcontroller operates in single-chip mode, right? Well, to my surprise, attaching a logic analyzer to the SYNC pin actually showed significant activity:
Logic analyzer trace of the microcontroller’s SYNC pin
It turned out that all of the data bus signals are enabled, even in single-chip mode when the internal mask ROM is used. Could this SYNC signal be used to observe the program execution while receiving a diagnostic message via the PC interface? Yes, in fact the whole message processing can be seen in the logic analyzer trace:
SYNC trace during UART reception
Zooming in slightly after the stop bit shows the actual UART interrupt service routine that is being executed:
SYNC trace during interrupt handling of a valid message
When sending an invalid diagnostic message instead, the microcontroller seems to return from the interrupt routine much earlier:
SYNC pin behavior when receiving an invalid message
The length of the interrupt routine can therefore be used to distinguish between valid and invalid messages.
With these observations in mind, it should be possible to figure out the secret parameters for the unlock command. The unlock command likely looks something like this in pseudocode:
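(What follows is my own rough reconstruction; the helper names and key values are invented, since the firmware had not been dumped at this point.)

SECRET_KEY_1 = 0x00   # placeholder values: the real key bytes were still unknown
SECRET_KEY_2 = 0x00

def handle_unlock(param1, param2):
    if param1 != SECRET_KEY_1:      # wrong first byte: return early (short SYNC trace)
        return send_reply(0x00)
    if param2 != SECRET_KEY_2:      # correct first byte: one more comparison (longer trace)
        return send_reply(0x00)
    unlock_diagnostic_commands()    # both bytes correct: hidden commands become available
    return send_reply(0x00)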
Therefore, guessing the correct value for the first parameter should lead to a slightly longer execution time of the interrupt routine. The same procedure can then be repeated for the second parameter, keeping the correct value for the first, to discover the complete unlock message. Unlike a full brute-force search, this approach takes at most 512 tries. Each unlock attempt then consists of the following steps:
Send command
11
to the PC interface as the first part of the unlock sequence
Start logic analyzer capture, triggering on the falling edge of the UART receive signal
Transmit command
20
with the chosen parameter values
Decode the recorded trace and convert the SYNC sequence into a string of 1’s and 0’s
As before, this process is automated using a Python script and the
sigrok-cli
tool. As part of the decoding process, the script samples the SYNC signal on every falling edge of the microcontroller’s clock output signal:
SYNC trace with clock signal
The recorded SYNC trace is thereby turned into a long bit string:
As the microcontroller is constantly executing instructions, the actual start and end of the interrupt routine are always at a different position in the bit string. It is hard to distinguish between this routine and instructions that run before or after the actual interrupt. To find the boundaries of the routine, I defined two bit patterns that are common for every bit string:
Start pattern: 10100101001010100010101000
End pattern: 10000010001000101000101000
Using these patterns, the actual
payload
of the SYNC trace can be determined:
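The extraction step itself only needs a couple of lines; the sketch below assumes the bit string has already been produced by the sigrok-cli decoding stage and uses the two patterns listed above.

START = "10100101001010100010101000"
END   = "10000010001000101000101000"

def payload_length(bits: str) -> int:
    # number of SYNC samples between the start and end patterns
    start = bits.index(START) + len(START)
    return bits.index(END, start) - start

def find_key_byte(lengths: dict) -> int:
    # lengths maps a guessed parameter value to its measured payload length;
    # the single value with a longer trace is the correct key byte
    baseline = min(lengths.values())
    candidates = [value for value, length in lengths.items() if length > baseline]
    assert len(candidates) == 1, "expected exactly one longer trace"
    return candidates[0]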
This result then allows measuring the interrupt routine’s execution time. A change in bit stream length therefore clearly indicates a correct unlock parameter. The Python script can now be used to capture the bit strings for all possible values of the
first parameter
:
The full functionality of the diagnostic interface can now be enabled by completing the unlock sequence:
PC -> EDPW: 11 00 00 02 13
EDPW -> PC: 00 a3 01 a4
PC -> EDPW: 00
PC -> EDPW: 00 00 00 00
PC -> EDPW: 20 ee b4 00 c2
EDPW -> PC: 00
PC -> EDPW: 00
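Put together with the earlier helpers, the whole unlock boils down to a short function; the key bytes ee b4 are the recovered values from the trace above.

def unlock(port):
    handshake(port)                               # command 11 plus the 4-byte payload
    port.write(build_message(0x20, 0xEE, 0xB4))   # command 20 with the recovered key bytes
    assert port.read(1) == b"\x00"                # EDPW acknowledges the unlock
    port.write(b"\x00")                           # final ACK from the PC side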
But what did this sequence actually do? Re-testing the command set reveals the presence of commands
30
,
31
and
32
, which are now successfully acknowledged by the EDPW.
While messing around with these new commands, I discovered that sending the command
31
causes the EDPW to respond with a maximum of 4 bytes, depending on the requested response length of the message. However, all returned bytes were zero, regardless of the parameter values:
Upon further inspection of the EDPW board, I noticed that I had forgotten to supply power to the EEPROM chip. Sending the command again now resulted in the following response:
Trying different parameter values resulted in varying responses from the EDPW, revealing that
command
31
reads the (inverted) EEPROM data starting at the specified offset.
Moving on to command
30
, I quickly noticed that its behavior closely followed the EEPROM read command. However, its response didn’t appear to depend on the presence of the EEPROM chip. Reading the first 256 bytes using this command resulted in the following data:
Hmm, could this be the internal memory of the microcontroller? To verify this, I consulted the memory map of the Mitsubishi M37451:
Memory map of the Mitsubishi M37451
The area marked as
not used
in the diagram ranges from address
c0
to
cf
. Assuming that these bytes were zero, this would match with the response data from command
30
. Another memory area to check would be the
SFR region
. Knowing that the baud rate generator was configured for a baud rate of 2400, the BRG register at address
ea
would have to be set to
81
. This value could also be found in the previous response. The rest of the memory contents were part of the
RAM region
.
This confirmed that command
30
reads memory contents based on the provided address parameters.
After some quick modifications to the Python script, I managed to dump the complete memory contents from the microcontroller:
Reading the Mitsubishi M37451’s memory contents
This whole process took around half an hour due to the low baud rate of the PC interface.
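For completeness, the dump loop looks roughly like this; the mapping of the two parameter bytes to address low/high byte, the 4-byte read size and the response framing are assumptions, and the interface has to be unlocked first.

CHUNK = 4

def read_memory(port, address: int) -> bytes:
    # command 30: read CHUNK bytes of internal memory at the given address
    port.write(build_message(0x30, address & 0xFF, address >> 8, CHUNK))
    reply = port.read(CHUNK + 2)                  # presumed ACK + payload + checksum
    port.write(b"\x00")                           # acknowledge the response
    return reply[1:1 + CHUNK]

unlock(port)
with open("m37451_dump.bin", "wb") as f:          # output file name is arbitrary
    for address in range(0x0000, 0x10000, CHUNK):
        f.write(read_memory(port, address))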
Taking a closer look at the memory dump reveals that it actually contains the full internal mask ROM contents of the microcontroller:
And there you have it!
A full firmware dump of the Mitsubishi M37451 on the Miele EDPW 206 board.
While this blog post about the Miele PC interface protocol is far from complete, I think it’s time to wrap things up for now. The full reverse engineering of the protocol will have to wait until next time, where I’ll dive into a detailed analysis of the firmware.
Building for iOS sometimes feels like archaeology: brush away enough guidelines and you hit something older and stranger. A system that can classify violence with forensic precision still can't decide if the human body is health, lifestyle, or sin.
One day I tried to ship a private intimacy tracker–nothing scandalous, just a journal for wellbeing–and App Store Connect assigned it the 16+ rating it uses for gambling apps and "unrestricted web access". The rating itself is fine: the target audience is well past that age anyway. What baffles me is the
logic
.
Silk
–the app I’m talking about, almost reluctantly–is a wellbeing journal in the most boring sense possible. You choose a few words about your day, moods, closeness, symptoms, or whatever else matters to you and your partner(s). It lives entirely on-device, syncs with nothing and phones no one. The whole point is that nothing interesting happens to your data after you close the app.
And yet, from the App Store’s point of view, you can build a game with guns and cartoon violence and happily ship it to kids, while tracking your own body needs a 16+ “mature themes” label.
If you were around for the early App Store, you’ll remember its optimism: accelerometer-driven beer glasses, wobbling jelly icons, flashlight apps that set brightness to 100% because no one had ever considered the idea before. The ecosystem assumed “content” meant pictures, sound, or the occasional cow-milking simulator–not a user quietly describing part of their life to themselves.
The App Store still carries the outline of that first life. Its vocabulary came from iTunes, which came from film ratings, built for a world where "content" meant something you could point a camera at. When the App Store arrived in 2008, it reused that system because it was available–and because no one expected apps to do much beyond wobbling or making noise.
Those assumptions didn’t last. By 2009 the Store had hosted an infamous $999 app that did nothing but display a red gem, a game where you shook a crying baby until it died, and enough fart apps that one reportedly earned five figures daily
[1]
. The review process was learning in public.
Against that backdrop, Apple introduced age ratings in mid-2009 with iOS 3. The strictest category, 17+, wasn't really created for gore or gambling–it was a pressure valve for novelty apps where shaking your phone made cartoon clothes fall off. Anything that might show “objectionable content”, from bikini galleries to embedded browsers, went into the same bucket
[2]
.
By 2010, Apple reversed course. After Steve Jobs declared "folks who want porn can buy an Android phone," thousands of sexy-but-not-explicit apps vanished overnight in what became known as the Great App Purge
[3]
. The platform moved from reactive cleanup to something more systematic.
The
Age Ratings matrix
Apple now operates in iOS 26 is far more precise. It defines violence with forensic granularity, subdivides gambling, and categorises medical risk. It does all the things a global marketplace must do once everyone realises software can cause harm.
But the matrix still retains its original silhouette: everything is defined by "content," not context. App Review's logic is keyed to artifacts inside the bundle–screenshots, metadata, stored assets–not to what the software actually does. That works beautifully for games and media. It falls apart when the "content" is whatever the user decides to write that day.
Silk has no images, no user-generated photos, no feed, no external links. The matrix rated it 16+ anyway–the same tier as gambling apps and unrestricted browsers. The rating isn't describing what Silk does. It's describing the absence of a category that should exist.
When HealthKit launched in 2014, Apple consciously avoided anything resembling "behavioural interpretation." Heart rate or steps were fine, relationships were not. A decade later, the API surface has expanded in every direction–sleep depth, sound exposure, handwashing, environmental allergens, even a "Sexual Activity" field added quietly in iOS 9
[4]
. But relational wellbeing remains conspicuously absent.
HealthKit tracks heart rate, inhaler usage, mindfulness minutes, and the more delicate end of gastrointestinal bookkeeping. Nowhere does it model intimacy, affection, or closeness–the things couples might actually want to track privately. If the platform doesn't have words for what you're building, the classification system can't label it correctly. The vocabulary doesn't exist.
Apple is not avoiding the topic here, they’re being literal. And when a system is literal in a domain that is inherently contextual, things start to get interesting.
Apple's search infrastructure is fast and strict. Search for "budget app" and you get budget apps. Search for "meditation" and you get meditation apps plus a few over-confident habit trackers. Search for the phrases people actually use when they want what Silk does–"relationship journal", "couples diary", "private moments"–and you get wedding planners, travel blogs, generic note-taking apps, and the occasional CBT worksheet. The algorithm can't read between lines it doesn't know exist.
On the developer side, metadata stops being about discoverability and becomes a small diplomatic exercise. A few terms trigger moderation, a few trigger follow-up questions, and the keyword field turns into a minefield where every word is inspected for what it might imply rather than what it means. Too specific reads like medicine, too gentle reads like romance, and anything metaphorical gets outright rejected.
This isn't new though. In 2009, Ninjawords–a perfectly useful English dictionary–was delayed and forced into 17+ because it could return definitions for swear words
[5]
. Phil Schiller personally explained that since parental controls promised to filter profanity, any app displaying unfiltered words needed age-gating. Never mind that Safari could look up far worse. The rule was simple: uncurated content equals adult content, context be damned.
There’s also the "massage" rule, mostly folklore but widely believed: any app with that word in its metadata triggers extended review, whether it’s physiotherapy or post-marathon recovery. The system was burned once by apps using "massage" as a euphemism and never forgot. Most of the odd heuristics you encounter today are scars from 2009.
Ambiguity in meaning becomes ambiguity in engineering. Without shared vocabulary, misalignment cascades: classification shapes search, search shapes metadata, metadata triggers review flags. The policies update faster than the taxonomy beneath them evolves.
Once you see that, the problem becomes solvable–not culturally, but technically.
At this point I did the only sensible thing: treated App Store Connect like a black box and started running experiments.
First test: keywords using only soft language–"relationship journal", "partner log", "connection tracker". Search rankings tanked. Silk dropped to page 8 for "relationship journal," outdone by printable worksheets for couples therapy. Good news: the algorithm was confident I wasn't selling anything objectionable. Bad news: it was equally confident I wasn't selling anything at all.
Replacing those with direct terms–"intimacy tracker", "sexual wellness", "couples health"–brought visibility back to page 2–3. It also triggered longer App Review cycles and required increasingly elaborate "Review Notes" explaining why the app shouldn't be rated 18+. Same binary, screenshots, and code, but different words in a metadata field neither the users nor I can even see in the UI.
Screenshots followed the same logic. Completely sterile set–empty fields, no microcopy, generic UI–sailed through but made Silk look like Notes with a different background. A more honest set showing what the app actually does triggered the 18+ question again. The framing changed the classification. The classification changed nothing about what the software does, but everything about where it appears and who finds it.
None of this is surprising if you assume the system is a classifier trained on categories from 2009. From the outside it feels arbitrary. From the inside it's doing exactly what it was built to do: match patterns it understands and escalate the ones it doesn’t. It just doesn't have a pattern for "private health journal that mentions bodies", even though there are lots of private health journals in the App Store these days. You can almost hear it thinking:
This smells like health but reads like dating and contains the word 'intimacy.' Escalate!
Silk’s architecture was shaped by this lag in the same way Fermento’s safety checks were shaped
by gaps in food-safety guidance
, or Residency’s "compiler warnings" for travel emerged from
inconsistent definitions of “presence” by different countries
. It’s not a case study in “growth”; it’s just another example of what happens when you have to reverse-engineer the missing assumptions. When a domain refuses to state its rules, you provide the scaffolding yourself.
Most of the engineering time went into figuring out what
not
to build–not from fear of rejection, but from understanding how classifiers behave when they encounter undefined cases. Treat it like a compiler that hasn't learned your edge-case syntax yet and stay inside the subset of language it already understands. The discipline felt familiar–the same kind you develop when building in domains where the platform's rules aren't fully specified and you have to infer the boundaries from failed experiments.
The more carefully you specify what the app does, the less the platform has to guess on your behalf. In 2010, you could ship "Mood Scanner" apps that claimed to read emotional states from fingerprints. They still exist–the App Store didn't purge them–but try submitting one in a category App Review associates with actual health data and you'll trigger very different questions. The scrutiny isn't random; it’s contextual. It depends on how your metadata accidentally pattern-matches against old problems.
The closer Silk came to shipping, the more I understood the App Store's behaviour as conservatism–not ideological, but technical. The kind that keeps a global marketplace from accidentally approving malware. Some of this conservatism is regional: China's App Store has additional filters for "relationship content," South Korea requires separate disclosures for wellbeing data. Apple unifies this under one policy umbrella, which produces a system that's cautiously consistent across borders but not particularly imaginative about edge cases.
The post-Epic world made Apple more explicit about where liability lives. Ambiguity became expensive, underspecification became expensive, classifiers that behave "roughly right" became expensive. The safest rule became simple: if the system can't clearly state what something is, err on caution until the taxonomy expands.
The cost is that new categories appear slowly. Sleep apps lived on the periphery for years. Meditation apps bounced between "Health" and "Lifestyle" depending on screenshot aesthetics. Third-party cycle trackers existed for nearly a decade before Apple added native reproductive health tracking in 2015 and a dedicated Cycle Tracking experience in 2019
[6]
. Digital wellbeing apps faced suspicion until Screen Time shipped in iOS 12. Each category began as an edge case, proved itself through user adoption, and eventually got formalized–usually announced in a single sentence at WWDC as if it had always existed.
Silk is at the beginning of that cycle. Eventually Apple will introduce a more nuanced descriptor, or HealthKit will model relational wellbeing, or the age matrix will gain more precision. The entire ecosystem will re-index overnight and everyone will move on.
It turns out the best way to handle a category that doesn’t exist is to build as if it does, then wait for the taxonomy to catch up. Until then, the grey zone is honestly not a bad neighbourhood. The users already know what the app is for. The platform will figure it out eventually.
Working in the quiet gaps of the platform? I build iOS software for the problems people don’t talk about.
work@drobinin.com
I Am Rich
sold for $999.99 to eight people before Apple quietly yanked it.
Baby Shaker
made it through review until the backlash arrived. And
iFart
really did clear ~$10k/day at its peak.
↩︎
The 17+ category arrived with iOS 3 in 2009. Officially it covered “mature themes” and gambling. Unofficially it was the overflow bin for anything vaguely sexy–from Wobble iBoobs to apps with embedded browsers, RSS readers, and Twitter clients, all lumped together because they
might
show something naughty.
↩︎
HealthKit launched in iOS 8 with vitals and fitness but no reproductive health. Apple added “Sexual Activity” in iOS 9–a year after everyone noticed the omission.
↩︎
Ninjawords, a dictionary, was rated 17+
because it could return swear words
. Behold: a dictionary rated like a strip-poker app, because it contained words the app reviewers searched for on purpose.
↩︎
Third-party cycle trackers existed for years, but Apple didn’t add reproductive health metrics until iOS 9 (2015), and didn’t ship dedicated Cycle Tracking until iOS 13 (2019). A category legitimised years after users created it.
↩︎
This article targets an advanced audience already familiar with 3D printing. In it, I will try to collect some information I haven’t found written down in a single place yet. In particular, a lot of the information is seemingly only available in the form of YouTube videos that take a long time to get to the point.
If you are new to 3D printing and/or
CAD
for 3D printing, this is not the right article for you. Come back when you have done a bit of printing/design and want to learn advanced tricks to save on print time and material usage.
Basics of vase mode
With that out of the way, what is this about? Vase mode is a printing mode where the printer prints a single spiral path, with no seams. This is fast and avoids the visual blemish of a seam, but it also has some downsides:
Only a single perimeter. This potentially means weaker parts.
No disconnected areas (per layer), you have to print with a single path.
No internal geometry. No infill. No top layers.
No supports.
Typically, it gets used for vases and pots. Thus, the name. Here is a crude example (I’m not an aesthetics focused designer, so imagine something prettier than this. If it fits and functions, it ships in my book):
Of note here is that the model itself isn’t hollow, but the slicer will make it hollow for you (since it only prints a single perimeter). In PrusaSlicer this setting is found at “Print Settings” → “Layers and perimeters” → “Vertical shells” → “Spiral vase”. OrcaSlicer and other slicers should have the same or a similar setting somewhere else. I have no idea about Cura.
But there are some ways to stretch this mode to the limits, and that is what this article is about. This will make vase mode useful for more than just simple vases. And that can often be the fastest and lightest way to print a part,
if
you can pull it off.
To understand the tricks, though, you do need to understand how vase mode works. It takes solid geometry and prints only its outline. What is inside doesn’t matter. It will be ignored:
As can be seen, while the hole exists in the bottom solid layers, the slicer ignores it above that point.
So what can we do above that?
Internal geometry via slits
The idea comes from the
RC
plane 3D printing community, where they want to print lightweight but strong parts. In particular wings with internal supporting geometry.
2
There are two main tricks for unconventional vase mode prints. Let’s start with slits, as the next trick builds upon this first trick. As I’m no aircraft wing designer I will use other geometry for illustration purposes. The idea is useful in other contexts than
RC
wings, that is the whole point of this article.
Make a slit into the part. The left is for demonstration only, you need the slit to be really thin, 0.0001
1
mm or so, as shown on the right:
If we extrude this into a block and slice it, PrusaSlicer will see this slit and print an outer perimeter going into the part, making a sort of internal support. You are basically modelling the infill yourself now:
If you try this, it will not work for you. This is because you are missing a crucial setting in PrusaSlicer. By default, PrusaSlicer will merge together close parts of the model. You need to change “Printer Settings” → “Advanced” → “Slicing” → “Slice gap closing radius”. Set it to 0.0.
3
Otherwise, none of this will work.
For our example with a hole in the middle from the introduction we could get the following result:
Note that the slit will be visible and you can feel it with your fingers, but it will be a fairly smooth indentation, not a sharp edge.
Double walls
Now, let’s expand on this technique to make it even more useful: Have you ever wanted to use vase mode but with two perimeters? We can build upon the previous trick to make a double wall:
This is done by making a slit through to the hollow inside and making sure the part itself is exactly wide enough for two perimeters that touch. You can find the width you should use by going into PrusaSlicer (with the same settings that you plan to print with) and looking at the info text in “Print Settings” → “Layers and perimeters” → “Vertical shells”:
That is the value you want to use for this to work correctly.
We can build upon this to make our internal geometry touch the opposite wall, like so:
We can also use this to anchor a slit to the outside wall. This allows us to anchor internal geometry to the outside wall without poking through. In fact, to ensure we have a single continuous outline, all but one slit must be done like this. The following picture shows what you need to do (note that the double wall thickness is 0.87 mm in this example, it will change depending on other settings):
These two tricks presented so far form the basis of what I have seen called “unconventional vase mode”.
4
But there are some more tricks related to vase mode that are worth knowing about.
Extrusion width
To make a vase mode stronger, you can increase the extrusion width. The general recommendation is that you can go to about 2x the nozzle diameter and keep good quality. This works, since the nozzle has a bit of a flat spot around the orifice.
However, British Youtuber “Lost in Tech” did
some tests
showing that you can go way further than that, but I haven’t tested this myself, and quality does eventually start going down. It might be worth looking into if this is useful to you.
In PrusaSlicer you can change this in “Print Settings” → “Advanced” → “Extrusion width”. For vase mode “External perimeters” is what matters (above the solid base layers, that is):
Remember to rescale any double walls to fit the new extrusion width. It might be a good idea to use a variable in your
CAD
model to make it easier to update (at least if you use parametric
CAD
like OnShape, FreeCAD or Fusion 360 which support variables).
Fake vase mode
Finally, if you absolutely cannot print something in vase mode you can still get most of the benefits by what I have seen called “fake vase mode”
5
. To understand this, we should first consider exactly what settings vase mode changes. In PrusaSlicer vase mode changes the following settings:
1. Single perimeter (except for the first few bottom layers).
2. No top layers.
3. No infill (except for the first few bottom layers).
4. No supports.
5. Disables the setting “ensure vertical shell thickness”.
6. Prints in a continuous spiral path.
You can do all of those except 6 by hand in the slicer. And you can mix and match those first five things as you see fit.
Let’s investigate this via a case study rather than the simplified theoretical examples we have used so far.
Case study: spheres on sticks
I needed some spheres on the end of wooden sticks, to hold up a bird net over my strawberries on my balcony. I didn’t want the net to lie on the plants directly, and I needed something on the end of the sticks so that the net wouldn’t tear. Thus, spheres (or rather: truncated spheres for print bed adhesion and overhang reasons) on sticks.
Here is the basic design in a section view:
This doesn’t quite work in vase mode, because the top of the sphere has very shallow overhangs. And the top needs to be smooth. (The “roof” of the internal hole is fine, thanks to the cone shape.) It is so close, we can
almost
use vase mode.
So first I designed this in
CAD
. We have a slit from the outside to the centre, as well as some slits from the centre that goes
almost
to the outside. In fact, they go to the “recommended object thin wall thickness” mentioned before. (Note that the slits do
not
go down into the solid bottom layers, for some additional strength.)
This results in the following in PrusaSlicer:
Like true vase mode, I used zero infill. But I enabled “Ensure vertical shell thickness” and 1 top solid layer. This added a tiny bit of material just below the shallow top of the dome, making it printable, but still lighter than printing normally. Then I used a layer range modifier to
disable
“ensure vertical shell thickness” for the lower part of the print where it wasn’t needed, as PrusaSlicer wanted to add some material on the inside of the lower layers as well.
I also increased the extrusion width to 0.8 mm (with a 0.4 mm nozzle) to get additional strength, and I used scarf seams to make the outside seam almost invisible.
You can go further from true vase mode though: You could have an inner and outer perimeter like traditional non-vase slicing, but still model your own infill only where needed. You will get seams obviously, but you might still be able to print faster and save weight. We are moving further from true vase mode here, but only you can decide what exactly is best for your print:
In fact, when I printed some of these spheres, the version without a slit to the outside ended up the best looking
6
:
The slit is visible, but on the part printed without a slit extending to the outside there are no visible seams at all. The unevenness at the top is due to me filing away a small blob that the nozzle left behind as it pulled away at the end. It is smooth to the touch but reflects the light differently.
Conclusions
Vase mode and “fake vase mode” are often underused printing modes for functional parts, and they can be used to save weight and print time. The difference will be most noticeable on larger parts; on smaller parts, 10 vs 15 minutes might not be worth the extra design effort (unless you are planning to print many copies of the same part).
I’m a bit disappointed that the slit was as visible from the outside as it was. From the videos about
RC
aircraft wings that I saw, I expected this to be less noticeable. But “fake vase mode” still comes to the rescue here, offering most of the benefits. And when combined with scarf joint seams (which I found truly impressive the first time I tried them), I don’t really see the need for true vase mode any more. You might as well get the best of both worlds.
I did not find any written resource online summarizing these techniques, so I hope this post is useful not just to remind myself in the future, but also to others looking for this information. With that in mind, below is a cheat sheet of the important points and settings to remember.
These techniques require tuning settings in your slicer. This may not be possible if you are printing at a commercial print farm, or targeting people slicing with a dumbed-down web-based slicer (as has recently been launched by both Printables and Makerworld). But it would be a shame if such dumbed-down slicers restricted what we could design and publish. I will always try to make the most of what both CAD and the slicer expose to me.
7
Do you have some other tips or tricks for vase mode? Did I get something wrong? Comment on
Reddit
or on
Lemmy
and I will likely see it (eventually).
Cheat sheet
Want to quickly remind yourself of the core ideas of this article when you are designing your next part? Here is a quick cheat sheet:
Slits: Use slits to add internal geometry.
0.0001 mm wide (or 0.001 if your
CAD
software doesn’t like you that day).
PrusaSlicer: Set “Print Settings” → “Advanced” → “Slicing” → “Slice gap closing radius” to 0.
Double walls: Use double walls for more strength and to connect slits to the opposite wall.
PrusaSlicer: “Print Settings” → “Layers and perimeters” → “Vertical shells” (Look at info text to find width you need to use for your current print settings.)
Extrusion width: You can increase the extrusion width to 2x the nozzle diameter for additional strength with no quality downsides. You might be able to go even further, but eventually quality will start going down.
Fake vase mode: You don’t need to use vase mode to get most of the benefits. You can mix and match all parts of normal vase mode except for the continuous spiral path. But consider scarf joints to hide seams.
California DMV approves map increase in Waymo driverless operations
Train a language model in your browser with WebGPU
You could try…
…training a Transformer to sort characters, reverse a sequence, or find the numbers that add up to a sum, then compare with an LSTM.
Visualize the gradients in the attention layer, all parameters in the network, or try writing your own visualization queries.
Learn to match parentheses in a Dyck language using an encoder-only masked language modeling (MLM) objective.
Want a taste of natural language? Try training a GPT on TinyStories, a dataset of short stories generated by GPT-4, and try different tokenizer sizes.
Play with attention gating, MLP variants, learning rate schedulers, initialization, dropout, or QK normalization.
Want to train an RNN? Mess with layer normalization, initialization, bidirectional encoder layers, encoder-decoder attention variants, or gradient norm clipping.
Have a lot of VRAM and want to try something untested? Try training a GPT-2-sized model on FineWeb (and DM me if you get it to work!).
When it comes to blogging, there are few rules. Writing content that is somehow meaningful might be one of them, though. I think it’s down to the individual to determine what constitutes meaningful.
In the heyday, the so-called golden age of blogging, there were plenty of people prepared to offer definitions of meaningful, and of how to write accordingly. It was natural. The web was once awash with all sorts of blogs, and likewise with people who wanted to show others how to blog “successfully”.
Again, the definition of successful resided with the individual, but it was obvious this involved monetary return for some people. And why not. If you’re going to invest time and energy in creating a resource that is useful to other people, why shouldn’t you earn money, make a living even, from it?
One of these people blogging about blogging was Melbourne based Australian writer and author Darren Rowse, who launched his blogging resource
Problogger
in 2004. Without going into detail, because you can look it up for yourself, Rowse, as one of the earlier bloggers about blogging, did, and still does presumably, rather well for himself.
Rowse’s writing, and that of his contributors, attracted numerous readers keen to learn what they could about blogging, and the potential to make money from it.
Problogger is what’s called a niche blog. As a blog about blogging, it has a reasonably singular focus. Some people considered this niche principle to be a core tenet of blogging. There was this idea, in the earlier days of blogging, which possibly still persists, that blogs would do better if they had a speciality. Not only were search engines said to be in favour of the approach, but the author of a speciality, or niche, blog would generally be considered to be an expert, of some sort, in their field.
A master of one trade, rather than the proverbial jack of all trades.
Regardless, the world was once full of blogs on every topic imaginable. It was a great time to be alive. If you wanted to learn about something in particular, there was a blog for you. Some publications featured quality content, others required a little fact checking, while some were definitely to be taken with a pinch of salt.
But niche blogging was never a format that suited everyone. There are people who did, still do, well, writing about a range, sometimes a wide range, of topics.
Kottke
is one of the better known blogs that does not have a specific speciality. Here, the publication itself is the speciality. To repeat what I wrote in the first sentence of this article: the rules of blogging are few.
But the facets of blogging covered at Problogger, and numerous other similar websites, usually only applied to blogs of a commercial nature. That’s not to say one or two personal bloggers might not have looked at the tips posted there for increasing their audience, or improving their writing, though. But in my view, personal bloggers were not, and are not, part of Problogger’s target audience.
It’s been a long time since I last wrote about Problogger, let alone visited the website, maybe fifteen plus years, but a recent mention of it
by Kev Quick
,
via ldstephens
, caught my eye. But I don’t believe Rowse is being critical, in any way, of personal bloggers because they do not adhere to a niche or speciality publishing format. That’s not what Problogger, or Rowse, is about.
But this started me thinking, and writing another of my long posts.
In an age where social media and influencers have usurped blogs and their A-list authors in the jostle for supremacy, you have to wonder what role websites like Problogger still have. Only a handful of blogs generate liveable incomes today. Despite the doom and gloom, though, the form has not completely died off. A backlash against social media, and a growing IndieWeb/SmallWeb community, has precipitated a revival in personal websites.
This is a largely non-commercial movement. Of course, there’s nothing wrong with personal websites. Many of us started out with them in the early days of the web. But the web was not only intended for personal journals. It was a vehicle for sharing all manner of information. The web could also empower individuals, and partnerships, to not only set up shop online, be that blogs, or quite literally shops, but potentially make a living at the same time.
But with the revival of personal blogs well underway, I think it’s time to bring niche blogs back into the fold. I’m talking about well written, quality, topic focused resources. This is material fast vanishing from the web, leaving ever diminishing options to source useful and accurate information. What are the alternatives? The misinformation morass that is social media? Being served
AI
generated summaries in response to search engine queries? A web chock-full of AI slop?
At the same time, I’m not advocating for a return of niche blogs plastered with adverts, and popup boxes urging visitors to subscribe to say a newsletter, before they’ve even had a chance to blink at what they came to read.
I’m talking about work produced by independent writers, with an interest in their subject matter, who are not backed by large media organisations or private equity. This means bringing back reliable sources of information that also recompense the writers in some way. Hopefully we’ve learned a few lessons about monetisation since the earlier wave of niche blogging. We know it is possible to generate revenue without compromising the reader experience.
A resurgence in personal blogging is the first step in rebuilding a vibrant, thriving web, or, if you like, blogosphere. Now the focus needs to be on restoring the flow of accessible and trusted information.
Luke Igel and Riley Walz made a phony Gmail interface that, rather than showing you your email, shows you Jeffrey Epstein’s emails:
You’re logged in as Jeffrey Epstein. We compiled these Epstein estate emails from the House Oversight release by converting the PDFs to structured text with an LLM....

You are logged in as Jeffrey Epstein, [email protected]. These are real emails released by Congress. Explore by name, contribute to the starred list, search, or visit a random page.
[Embedded inbox view: sample messages from June–July 2019, mostly Flipboard news digests and “Jeffrey’s Digest” newsletters, with subjects such as “Is Denmark going bankrupt?”, “11 questions for Mueller”, “Alex Acosta resigns, Jeffrey Epstein arrested and Trump ends bid for citizenship question on census”, and a “Capital Market Outlook” mailing.]
A few weeks ago I was minding my own business, peacefully reading a well-written and informative article about artificial intelligence, when I was ambushed by a passage in the article that aroused my pique. That’s one of the pitfalls of knowing too much about a topic a journalist is discussing; journalists often make mistakes that most readers wouldn’t notice but that raise the hackles or at least the blood pressure of those in the know.
The article in question appeared in The New Yorker. The author, Stephen Witt, was writing about the way that your typical Large Language Model, starting from a blank slate, or rather a slate full of random scribbles, is able to learn about the world, or rather the virtual world called the internet. Throughout the training process, billions of numbers called weights get repeatedly updated so as to steadily improve the model’s performance. Picture a tiny chip with electrons racing around in etched channels, and slowly zoom out: there are many such chips in each server node and many such nodes in each rack, with racks organized in rows, many rows per hall, many halls per building, many buildings per campus. It’s a sort of computer-age version of Borges’ Library of Babel. And the weight-update process that all these countless circuits are carrying out depends heavily on an operation known as matrix multiplication.
Witt explained this clearly and accurately, right up to the point where his essay took a very odd turn.
HAMMERING NAILS
Here’s what Witt went on to say about matrix multiplication:
“‘Beauty is the first test: there is no permanent place in the world for ugly mathematics,’ the mathematician G. H. Hardy wrote, in 1940. But matrix multiplication, to which our civilization is now devoting so many of its marginal resources, has all the elegance of a man hammering a nail into a board. It is possessed of neither beauty nor symmetry: in fact, in matrix multiplication, a times b is not the same as b times a.”
The last sentence struck me as a bizarre non sequitur, somewhat akin to saying “Number addition has neither beauty nor symmetry, because when you write two numbers backwards, their new sum isn’t just their original sum written backwards; for instance, 17 plus 34 is 51, but 71 plus 43 isn’t 15.”
The next day I sent the following letter to the magazine:
“I appreciate Stephen Witt shining a spotlight on matrices, which deserve more attention today than ever before: they play important roles in ecology, economics, physics, and now artificial intelligence (“Information Overload”, November 3). But Witt errs in bringing Hardy’s famous quote (“there is no permanent place in the world for ugly mathematics”) into his story. Matrix algebra is the language of symmetry and transformation, and the fact that a followed by b differs from b followed by a is no surprise; to expect the two transformations to coincide is to seek symmetry in the wrong place — like judging a dog’s beauty by whether its tail resembles its head. With its two-thousand-year-old roots in China, matrix algebra has secured a permanent place in mathematics, and it passes the beauty test with flying colors. In fact, matrices are commonplace in number theory, the branch of pure mathematics Hardy loved most.”
Confining my reply to 150 words required some finesse. Notice for instance that the opening sentence does double duty: it leavens my many words of negative criticism with a few words of praise, and it stresses the importance of the topic, preëmptively¹ rebutting editors who might be inclined to dismiss my correction as too arcane to merit publication.
I haven’t heard back from the editors, and I don’t expect to. Regardless, Witt’s misunderstanding deserves a more thorough response than 150 words can provide. Let’s see what I can do with 1500 words and a few pictures.
THE GEOMETRY OF TRANSFORMATIONS
As static objects, matrices are “just” rectangular arrays of numbers, but that doesn’t capture what they’re really about. If I had to express the essence of matrices in a single word, that word would be “transformation”.
One example of a transformation is the operation f that takes an image in the plane and flips it from left to right, as if in a vertical mirror. Another example is the operation g that takes an image in the plane and reflects it across a diagonal line that goes from lower left to upper right.
The key thing to notice here is that the effect of f followed by g is different from the effect of g followed by f. To see why, write a capital R on one side of a square piece of paper–preferably using a dark marker and/or translucent paper, so that you can still see the R even when the paper has been flipped over–and apply f followed by g; you’ll get the original R rotated by 90 degrees clockwise. But if instead, starting from that original R, you were to apply g followed by f, you’d get the original R rotated by 90 degrees counterclockwise.
Same two operations, different outcomes! Symbolically we write g ◦ f ≠ f ◦ g, where g ◦ f means “First do f, then do g” and f ◦ g means “First do g, then f”.² The symbol ◦ denotes the meta-operation (operation-on-operations) called composition.
The fact that the order in which transformations are applied can affect the outcome shouldn’t surprise you. After all, when you’re composing a salad, if you forget to pour on salad dressing until after you’ve topped the base salad with grated cheese, your guests will have a different dining experience than if you’d remembered to pour on the dressing first. Likewise, when you’re composing a melody, a C-sharp followed by a D is different from a D followed by a C-sharp. And as long as mathematicians used the word “composition” rather than “multiplication”, nobody found it paradoxical that in many contexts, order matters.
THE ALGEBRA OF MATRICES
If we use the usual x, y coordinates in the plane, the geometric operation f can be understood as the numerical operation that sends the pair (x, y) to the pair (−x, y), which we can represent via the 2-by-2 array with rows (−1, 0) and (0, 1); more generally, the array with rows (a, b) and (c, d) stands for the transformation that sends the pair (x, y) to the pair (ax + by, cx + dy). This kind of array is called a matrix, and when we want to compose two operations like f and g together, all we have to do is combine the associated matrices under the rule that says that the matrix with rows (a, b) and (c, d), composed with the matrix with rows (p, q) and (r, s), equals the matrix with rows (ap + br, aq + bs) and (cp + dr, cq + ds). For more about where this formula comes from, see my Mathematical Enchantments essay “What Is A Matrix?”.
There’s nothing special about 2-by-2 matrices; you could compose two 3-by-3 matrices, or even two 1000-by-1000 matrices. Going in the other direction (smaller instead of bigger), if you look at 1-by-1 matrices, the composition of (a) and (b) is just (ab), so ordinary number-multiplication arises as a special case of matrix composition; turning this around, we can see matrix-composition as a sort of generalized multiplication. So it was natural for mid-19th-century mathematicians to start using words like “multiply” and “product” instead of words like “compose” and “composition”, at roughly the same time they stopped talking about “substitutions” and “tableaux” and started to use the word “matrices”.
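To see the non-commutativity concretely, here is a small sketch (mine, not Propp’s) using the Matrix class from Ruby’s standard library: it encodes the left-right flip f and the diagonal reflection g as 2-by-2 matrices and checks that composing them in the two orders gives the two different quarter-turns described above.

```ruby
require 'matrix'

# f: left-right flip, sends (x, y) to (-x, y)
f = Matrix[[-1, 0],
           [ 0, 1]]

# g: reflection across the lower-left-to-upper-right diagonal, sends (x, y) to (y, x)
g = Matrix[[0, 1],
           [1, 0]]

g_after_f = g * f   # "first do f, then do g"
f_after_g = f * g   # "first do g, then do f"

p g_after_f                # => Matrix[[0, 1], [-1, 0]]   (rotation by 90 degrees clockwise)
p f_after_g                # => Matrix[[0, -1], [1, 0]]   (rotation by 90 degrees counterclockwise)
p g_after_f == f_after_g   # => false: g ◦ f ≠ f ◦ g
```

Note that g * f here means “the matrix of g times the matrix of f”, i.e. the transformation “first f, then g”, matching the ◦ convention in the essay.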
In importing the centuries-old symbolism for number multiplication into the new science of linear algebra, the 19th century algebraists were saying “Matrices behave kind of like numbers,” with the proviso “except when they don’t”. Witt is right when he says that when A and B are matrices, A times B is not always equal to B times A. Where he’s wrong is in asserting that this is a blemish on linear algebra. Many mathematicians regard linear algebra as one of the most elegant sub-disciplines of mathematics ever devised, and it often serves as a role model for the kind of sleekness that a new mathematical discipline should strive to achieve. If you dislike matrix multiplication because AB isn’t always equal to BA, it’s because you haven’t yet learned what matrix multiplication is good for in math, physics, and many other subjects. It’s ironic that Witt invokes the notion of symmetry to disparage matrix multiplication, since matrix theory and an allied discipline called group theory are the tools mathematicians use in fleshing out our intuitive ideas about symmetry that arise in art and science.
So how did an intelligent person like Witt go so far astray?
PROOFS VS CALCULATIONS
I’m guessing that part of Witt’s confusion arises from the fact that actually multiplying matrices of numbers to get a matrix of bigger numbers can be very tedious, and tedium is psychologically adjacent to distaste and a perception of ugliness. But the tedium of matrix multiplication is tied up with its symmetry (whose existence Witt mistakenly denies). When you multiply two n-by-n matrices A and B in the straightforward way, you have to compute n² numbers in the same unvarying fashion, and each of those n² numbers is the sum of n terms, and each of those n terms is the product of an element of A and an element of B in a simple way. It’s only human to get bored and inattentive and then make mistakes because the process is so repetitive. We tend to think of symmetry and beauty as synonyms, but sometimes excessive symmetry breeds ennui; repetition in excess can be repellent. Picture the Library of Babel and the existential dread the image summons.
G. H. Hardy, whose famous remark Witt quotes, was in the business of proving theorems, and he favored conceptual proofs over calculational ones. If you showed him a proof of a theorem in which the linchpin of your argument was a 5-page verification that a certain matrix product had a particular value, he’d say you didn’t really understand your own theorem; he’d assert that you should find a more conceptual argument and then consign your brute-force proof to the trash. But Hardy’s aversion to brute force was specific to the domain of mathematical proof, which is far removed from math that calculates optimal pricing for annuities or computes the wind-shear on an airplane wing or fine-tunes the weights used by an AI. Furthermore, Hardy’s objection to your proof would focus on the length of the calculation, and not on whether the calculation involved matrices. If you showed him a proof that used 5 turgid pages of pre-19th-century calculation that never mentioned matrices once, he’d still say “Your proof is a piece of temporary mathematics; it convinces the reader that your theorem is true without truly explaining why the theorem is true.”
If you forced me at gunpoint to multiply two 5-by-5 matrices together, I’d be extremely unhappy, and not just because you were threatening my life; the task would be inherently unpleasant. But the same would be true if you asked me to add together a hundred random two-digit numbers. It’s not that matrix-multiplication or number-addition is ugly; it’s that such repetitive tasks are the diametrical opposite of the kind of conceptual thinking that Hardy loved and I love too. Any kind of mathematical content can be made stultifying when it’s stripped of its meaning and reduced to mindless toil. But that casts no shade on the underlying concepts. When we outsource number-addition or matrix-multiplication to a computer, we rightfully delegate the soul-crushing part of our labor to circuitry that has no soul. If we could peer into the innards of the circuits doing all those matrix multiplications, we would indeed see a nightmarish, Borgesian landscape, with billions of nails being hammered into billions of boards, over and over again. But please don’t confuse that labor with mathematics.
This essay is related to chapter 10 (“Out of the Womb”) of a book I’m writing, tentatively called “What Can Numbers Be?: The Further, Stranger Adventures of Plus and Times”. If you think this sounds interesting and want to help me make the book better, check out http://jamespropp.org/readers.pdf. And as always, feel free to submit comments on this essay at the Mathematical Enchantments WordPress site!
ENDNOTES
#1. Note the New Yorker-ish diaeresis in “preëmptively”: as long as I’m being critical, I might as well be diacritical.
#2. I know this convention may seem backwards on first acquaintance, but this is how ◦ is defined. Blame the people who first started writing things like “log x” and “cos x”, with the x coming after the name of the operation. This led to the notation f(x) for the result of applying the function f to the number x. Then the symbol for the result of applying g to the result of applying f to x is g(f(x)); even though f is performed first, “f” appears to the right of “g”. From there, it became natural to write the function that sends x to g(f(x)) as “g ◦ f”.
LAPD Helicopter Tracker with Real-Time Operating Costs
I did not know Adidas sold a sneaker called “Squid.”
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Blog moderation policy....
I am a public-interest technologist, working at the intersection of security, technology, and people. I've been writing about security issues on my blog since 2004, and in my monthly newsletter since 1998. I'm a fellow and lecturer at Harvard's Kennedy School, a board member of EFF, and the Chief of Security Architecture at Inrupt, Inc. This personal website expresses the opinions of none of those organizations.
Note: while my schedule is quite hectic these last few weeks, I’ve taken the
decision to dedicate at least one day per week for developing open-source tools,
and henceforth I plan to post an update on my progress in this regard every
Friday evening. Here’s the first update:
UringMachine Grant Work
As I wrote here previously, a few weeks ago I learned I’ve been selected as one of the recipients of a grant from the Ruby Association in Japan, for working on UringMachine, a new gem that brings low-level io_uring I/O to Ruby. For this project, I’ve been paired with a terrific mentor - Samuel Williams - who is the authority on all things related to Ruby fibers. We’ve had a talk about the project and discussed the different things that I’ll be able to work on. I’m really glad to be doing this project under his guidance.
UringMachine implements a quite low-level API for working with I/O. You basically work with raw file descriptors, you can spin up fibers for doing multiple things concurrently, and there are low-level classes for mutexes and queues (based on the io_uring implementation of the futex API). Incidentally, I find it really cool that futexes can be used with io_uring to synchronize fibers, with very low overhead.
The problem with this, of course, is that this API is useless when you want to
use the standard Ruby I/O classes, or any third-party library that relies on
those standard classes.
This is where the Ruby fiber scheduler comes into the picture. Early on in my work on UringMachine, it occurred to me that the Fiber::Scheduler added to Ruby by Samuel is a perfect way to integrate such a low-level API with the Ruby I/O layer and the entire Ruby ecosystem. An implementation of Fiber::Scheduler for UringMachine would use the different scheduler hooks to punt work to the low-level UringMachine API.
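To give a rough idea of what that means (this is my own sketch, not the actual UringMachine scheduler, and the machine.* method names are invented placeholders for whatever the real low-level API provides), a fiber scheduler is mostly a translation layer from the standard hooks to operations on the underlying event loop:

```ruby
# Hypothetical sketch of a Fiber::Scheduler that punts its hooks to an
# io_uring-style event loop object (`machine`). The hook names are the real
# Ruby scheduler interface; the `machine` methods are made up for illustration.
class SketchScheduler
  def initialize(machine)
    @machine = machine
  end

  # Called when an IO operation would block: wait for fd readiness.
  def io_wait(io, events, timeout)
    @machine.wait_fd(io.fileno, events, timeout) # hypothetical
  end

  # Called by Kernel#sleep inside a scheduled fiber.
  def kernel_sleep(duration = nil)
    @machine.sleep(duration) # hypothetical
  end

  # Called when a fiber blocks on e.g. a Mutex or Queue.
  def block(blocker, timeout = nil)
    @machine.park_current_fiber(timeout) # hypothetical
  end

  # Called to wake a fiber previously suspended via #block.
  def unblock(blocker, fiber)
    @machine.schedule(fiber) # hypothetical
  end

  # Called by Fiber.schedule to spawn a new non-blocking fiber.
  def fiber(&block)
    Fiber.new(blocking: false, &block).tap(&:resume)
  end

  # Called when the scheduler is shut down: drain pending operations.
  def close
    @machine.run_until_done # hypothetical
  end
end
```

A real implementation also needs the optional io_read/io_write and process_wait hooks, plus the bookkeeping for which fiber is waiting on what; Samuel’s async gem contains a complete, real implementation of this interface.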
So this week I finally got around to making some progress on the UringMachine
fiber scheduler, and there’s finally a basic working version that can do basic
I/O, as well as some other stuff like sleeping, waiting on blocking operations
(such as locking a mutex or waiting on a queue), and otherwise managing the life
cycle of a scheduler.
This is also a learning process. The Ruby IO class implementation is really complex: the io.c file itself is about 10K LOC! I’m still figuring out the mechanics of the fiber scheduler as I go, and lots of things are still unclear, but I’m taking it one step at a time, and when I hit a snag I just try to take the problem apart and understand what’s going on. But now that I have moved from a rough sketch to something that works and has some tests, I intend to continue working on it by adding more and more tests and TDD’ing my way to an implementation that is both complete (feature-wise) and robust.
Here are some of the things I’ve learned while working on the fiber scheduler:
When you call Kernel.puts, the trailing newline character is actually written separately (i.e. with a separate write operation), which can lead to unexpected output if, for example, you have multiple fibers writing to STDOUT at the same time. To prevent this, Ruby seems to use a mutex (per IO instance) to synchronize writes to the same IO.
There are inconsistencies in how different kinds of IO objects are handled with regard to blocking/non-blocking operation (O_NONBLOCK):
Files and standard I/O are blocking.
Pipes are non-blocking.
Sockets are non-blocking.
OpenSSL sockets are non-blocking.
The problem is that for io_uring to function properly, the fds passed to it
should always be in blocking mode. To rectify this, I’ve added code to the
fiber scheduler implementation that makes sure the IO instance is blocking:
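The original post shows the actual snippet at this point; what follows is only my minimal sketch of the same idea, using the io/nonblock extension from Ruby’s standard library rather than UringMachine’s own code.

```ruby
require 'io/nonblock'

# Make sure an IO's file descriptor is in blocking mode before handing it to
# io_uring: per the list above, pipes and sockets are created non-blocking
# (O_NONBLOCK set), so clear the flag if necessary.
def ensure_blocking(io)
  io.nonblock = false if io.nonblock?
  io
end

r, w = IO.pipe        # pipes come up non-blocking in recent Rubies
ensure_blocking(r)
ensure_blocking(w)
```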
A phenomenon I’ve observed is that in some situations of multiple fibers doing I/O, some of those I/O operations would raise an EINTR, which should mean the I/O operation was interrupted because of a signal sent to the process. This is weird! I’m still not sure where this is coming from, certainly something I’ll ask Samuel about.
There’s some interesting stuff going on when calling IO#close. Apparently there’s a mutex involved, and I noticed two scheduler hooks are being called: #blocking_operation_wait, which means a blocking operation that should be run on a separate thread, and #block, which means a mutex is being locked. I still need to figure out what is going on there and why it is so complex. FWIW, UringMachine has a #close_async method which, as its name suggests, submits a close operation, but does not wait for it to complete.
Improving and extending the fiber scheduler interface
One of the things I’ve discussed with Samuel is the possibility of extending the fiber scheduler interface by adding more hooks, for example a hook for closing an IO (from what I saw there’s already some preparation for that in the Ruby runtime), or a hook for doing a splice. We’ve also discussed working with pidfd_open to prevent race conditions when waiting on child processes. I think there’s still a lot of cool stuff that can be done by bringing low-level I/O functionality to Ruby.
I’ve also suggested to Samuel that we use the relatively recent io_uring_prep_waitid API to wait for child processes, and more specifically to do this in Samuel’s own io-event gem, which provides a low-level cross-platform API for building async programs in Ruby. With the io_uring version of waitid, there’s no need to use pidfd_open (in order to poll for readiness when the relevant process terminates). Instead, we use the io_uring interface to directly wait for the process to terminate. Upon termination, the operation completes and we get back the pid and status of the terminated process. This also has the added advantage that you can wait for any child process, or any child process in the process group, which means better compatibility with Process.wait and associated methods.
One problem is that the fiber scheduler process_wait hook is supposed to return an instance of Process::Status. This is a core Ruby class, but you cannot create instances of it. So, if we use io_uring to directly wait for a child process to terminate, we also need a way to instantiate a Process::Status object with the information we get back from io_uring. I’ve submitted a PR that hopefully will be merged before the release of Ruby 4.0. I’ve also submitted a PR to io-event with the relevant changes.
Going forward
So here’s where the UringMachine project is currently at:
If you appreciate my OSS work, please consider sponsoring me.
My Consulting Work
Apart from my open-source work, I’m also doing consulting. Here are some of the things I’m currently working on for my clients:
Transitioning a substantial PostgreSQL database (~4.5TB of data) from RDS to
EC2. This is done strictly for the sake of reducing costs. My client should
see a reduction of about 1000USD/month.
Provisioning of machines for the RealiteQ web platform to be used for
industrial facilities in India.
Exploring the integration of AI tools for analyzing the performance of
equipment such as water pumps for water treatment facilities. I’m still quite
sceptical about LLMs being the right approach for this. ML algorithms might be a better fit. Maybe. We’ll see…
Wow, President Trump Fucking LOVES This Mamdani Guy
In a surreal, hilarious, and brain-exploding press conference after their meeting at the White House, President Donald Trump repeatedly stressed his adulation for Mayor-elect Zohran Mamdani and promised to do whatever was necessary to help him carry out his affordability agenda in New York City.
Where to begin? The part where Trump said that he was going to tell Con Edison to lower their rates? Or when Mamdani said that Israel is committing a genocide in Gaza and Trump didn't disagree with him? OR, Trump saying, over and over again, how "people would be shocked" about what Mamdani believes in. "We agree on a lot more than I would have thought," Trump said.
Would President Trump, a billionaire, feel comfortable living in Mamdani's New York City?
"I would feel very, very comfortable being in New York," Trump cooed.
Q: Are you affirming that you think President Trump is a fascist?
Nothing in a blog post can possibly do justice to the incredible body language on display here—Mamdani's rigid spine, his hands clasped tightly to prevent them from punching a hole in the Oval Office window and jumping out of it, Trump touching Mamdani's arm, smiling at him like the son he's never had.
"I expect to be helping him, not hurting him. A big help," Trump said, when asked about previous threats the administration has made to New York City's funding and the prospect of sending in the National Guard.
Trump even found time to compliment Mamdani's pick for police commissioner, Jessica Tisch. "He retained someone who is a great friend of some people in my family—Ivanka. And they say she's really good, really competent," Trump said.
Even if you remember that this is all theater, that tomorrow morning Trump could wake up and "Truth" something idiotic and racist that erases everything that just transpired, it is remarkable how consistent Trump is in this one aspect of his brain: He loves winners, he loves ratings, and he won't bother with anyone who can't command them.
"I'll tell ya, the press has eaten this up," Trump said, noting that there were way more reporters jonesing for a chance to see him with Mamdani than any other foreign leader he's met with. "You people have gone crazy."
"I think he's different," Trump added. "He has a chance to do something great for New York. And he does need the help of the federal government to succeed...And we'll help him."
Through the years, and more recently due to the affairs between Arduino LLC and Arduino S.R.L., I have received a lot of questions from people about the history of Wiring and, of course, Arduino. I was also shown this US Federal Courts website, which presents documents citing my work to support the plaintiff’s claims, which, in my opinion, contribute to the distortion of information surrounding my work.
The history of Arduino has been told by many people, and no two stories match. I want to clarify some facts around the history of Arduino, with proper supported references and documents, to better communicate to people who are interested, about Arduino’s origin.
As well, I will attempt to correct some things that have distorted my role or work by pointing out common mistakes, misleading information, and poor journalism.
I will go through a summary of the history first, then I will answer a series of questions that I have been often asked over the years.
The objective of my Master’s thesis project, Wiring, developed at the Interaction Design Institute Ivrea (IDII), was to make it easy for artists and designers to work with electronics, by abstracting away the often complicated details of electronics so they can focus on their own objectives.
Massimo Banzi and Casey Reas (known for his work on Processing) were supervisors for my thesis.
The project received plenty of attention at IDII, and was used for several other projects from 2004, up until the closure of the school in 2005.
Because of my thesis, I was proud to graduate with distinction; I was the only individual at IDII in 2004 to receive that distinction. I continued the development of Wiring while working at the Universidad de Los Andes in Colombia, where I began teaching as an instructor in Interaction Design.
What Wiring is, and why it was created can be extracted from the abstract section of my thesis document. Please keep in mind that it was 2003, and these premises are not to be taken lightly. You may have heard them before recited as proclamations:
“… Current prototyping tools for electronics and programming are mostly targeted to engineering, robotics and technical audiences. They are hard to learn, and the programming languages are far from useful in contexts outside a specific technology …”
“… It can also be used to teach and learn computer programming and prototyping with electronics…”
“Wiring builds on Processing…”
These were the key resulting elements of Wiring:
Simple integrated development environment (IDE), based on the Processing.org IDE, running on Microsoft Windows, Mac OS X, and Linux, to create software programs or “sketches”¹, with a simple editor
Simple “language” or programming “framework” for microcontrollers
Complete toolchain integration (transparent to user)
Bootloader for easy uploading of programs
Serial monitor to inspect and send data from/to the microcontroller
Open source software
Open source hardware designs based on an Atmel microcontroller
Comprehensive online reference for the commands and libraries, examples, tutorials, forum and a showcase of projects done using Wiring
How Was Wiring Created?
Through the thesis document, it is possible to understand the design process I followed. Considerable research and references to prior work served as a basis for my work. To quickly illustrate the process, a few key points are provided below.
The Language
Have you ever wondered where those commands come from?
Probably one of the most distinctive things, widely known and used today by Arduino users in their sketches, is the set of commands I created as the language definition for Wiring.
pinMode()
digitalRead()
digitalWrite()
analogRead()
analogWrite()
delay()
millis()
etc…
Abstracting the microcontroller pins as numbers was, without a doubt, a major decision, possible because the syntax was defined prior to implementation in any hardware platform. All the language command naming and syntax were the result of an exhaustive design process I conducted, which included user testing with students, observation, analysis, adjustment and iteration.
As I developed the hardware prototypes, the language also naturally developed. It wasn’t until after the final prototype had been made that the language became solid and refined.
If you are still curious about the design process, it is detailed in the thesis document, including earlier stages of command naming and syntax that were later discarded.
The Hardware
From a designer’s point of view, this was probably the most difficult part to address. I asked for or bought evaluation boards from different microcontroller manufacturers.
Here are some key moments in the hardware design for Wiring.
Prototype 1
The first prototype for Wiring used the Parallax Javelin Stamp microcontroller. It was a natural option since it was programmed in a subset of the Java language, which was already being used by Processing.
Problem: as described in the thesis document on page 40, compiling, linking and uploading of user’s programs relied on Parallax’s proprietary tools. Since Wiring was planned as open source software, the Javelin Stamp was simply not a viable option.
Photo of Javelin Stamp used for first prototype for Wiring hardware.
For the next prototypes, microcontrollers were chosen on a basis of availability of open source tools for compiling, linking and uploading the user’s code. This led to discarding the very popular Microchip PIC family of microcontrollers very early, because, at the time (circa 2003), Microchip did not have an open source toolchain.
Prototype 2
For the second Wiring hardware prototype, the Atmel ARM-based AT91R40008 microcontroller was selected, which led to excellent results. The first sketch examples were developed and command-naming testing began. For example, pinWrite() used to be the name of the now ubiquitous digitalWrite().
The Atmel R40008 served as a test bed for the digital input/output API and the serial communications API, during my evaluation of its capabilities. The Atmel R40008 was a very powerful microcontroller, but was far too complex for a hands-on approach because it was almost impossible to solder by hand onto a printed circuit board.
For more information on this prototype, see page 42 in the thesis document.
Photo of Atmel AT91R40008 used for second Wiring hardware prototype.
Prototype 3
The previous prototype experiments led to the third prototype, where the microcontroller was downscaled to one that was still powerful, yet could be tinkered with without specialized equipment or extra on-board peripherals.
I selected the Atmel ATmega128 microcontroller and bought an Atmel STK500 evaluation board with a special socket for the ATmega128.
Photo of Atmel STK500 with ATmega128 expansion.
Tests with the STK500 were immediately successful, so I bought a MAVRIC board from BDMICRO with the ATmega128 soldered on. Brian Dean’s work on his MAVRIC boards was unparalleled at that time, and it led him to build a software tool to easily upload new programs to his board. That tool, called “avrdude”, is still used today in the Arduino software.
As traditional COM ports were disappearing from computers, I selected FTDI hardware for communication through a USB port on the host computer. FTDI provided drivers for Windows, Mac OS X and Linux, which were required for the Wiring environment to work on all platforms.
Photo of BDMICRO MAVRIC-II used for the third Wiring hardware prototype.
Photo of an FTDI FT232BM evaluation board used in the third Wiring hardware prototype.
The FTDI evaluation board was interfaced with the MAVRIC board and tested with the third Wiring prototype.
Testing with the BDMICRO MAVRIC-II board and FTDI-FT232BM.
In early 2004, based on the prototype using the MAVRIC board (Prototype 3), I used Brian Dean’s and Pascal Stang’s schematic designs as a reference to create the first Wiring board design. It had the following features:
Along with the third prototype, the final version of the API was tested and refined. More examples were added, and I wrote the first LED blink example that is still used today as the first sketch that a user runs on an Arduino board to learn the environment. Even more examples were developed to support liquid crystal displays (LCDs), serial port communication, servo motors, etc. and even to interface Wiring with Processing via serial communication. Details can be found on page 50 in the thesis document.
In March 2004, 25 Wiring printed circuit boards were ordered and manufactured at SERP, and paid for by IDII.
I hand-soldered these 25 boards and started to conduct usability tests with some of my classmates at IDII. It was an exciting time!
Photos of the first Wiring board
Continuing the Development
After graduating from IDII in 2004, I moved back to Colombia, and began teaching as an instructor in Interaction Design at the Universidad de Los Andes. As I continued to develop Wiring, IDII decided to print and assemble a batch of 100 Wiring boards to teach physical computing at IDII in late 2004.
Bill Verplank (a former IDII faculty member) asked Massimo Banzi to send 10 of the boards to me for use in my classes in Colombia.
In 2004, faculty member Yaniv Steiner, former student Giorgio Olivero, and information designer consultant Paolo Sancis started the Instant Soup Project, based on Wiring at IDII.
First Major Success - Strangely Familiar
In the autumn of 2004, Wiring was used to teach physical computing at IDII through a project called Strangely Familiar, consisting of 22 students, and 11 successful projects. Four faculty members ran the 4-week project:
Massimo Banzi
Heather Martin
Yaniv Steiner
Reto Wettach
It turned out to be a resounding success for both the students as well as the professors and teachers. Strangely Familiar demonstrated the potential of Wiring as an innovation platform for interaction design.
On December 16th, 2004, Bill Verplank sent an email to me saying:
[The projects] were wonderful. Everyone had things working. Five of the projects had motors in them! The most advanced (from two MIT grads - architect and mathematician) allowed drawing a profile in Proce55ing and feeling it with a wheel/motor run by Wiring…
It is clear that one of the elements of success was [the] use of the Wiring board.
In May 2005, I contracted Advanced Circuits in the USA to print the first 200 printed circuit boards outside of IDII, and assembled them in Colombia. I began selling and shipping boards to various schools and universities, and by the end of 2005, Wiring was being used around the world.
When Did Arduino Begin and Why Weren’t You a Member of the Arduino Team?
The Formation of Arduino
When IDII manufactured the first set of Wiring boards, the cost was probably around USD$50 each. (I don’t know what the actual cost was, as I wasn’t involved in the process. However, I was selling the boards from Colombia for about USD$60.) This was a considerable drop in price from the boards that were currently available, but it was still a significant cost for most people.
In 2005, Massimo Banzi, along with David Mellis (an IDII student at the time) and David Cuartielles, added support for the cheaper ATmega8 microcontroller to Wiring. Then they forked (or copied) the Wiring source code and started running it as a separate project, called Arduino.
There was no need to create a separate project, as I would have gladly helped them and developed support for the ATmega8 and any other microcontrollers. I had planned to do this all along.
I had inadvertently taken a photo of some notes about my plans for Wiring, in the photo of Karmen Franinovic (former IDII student from 2002 to 2004) testing a stretch sensor for a lamp in March 2004.
Wiring and Arduino shared much of the early development done by Nicholas Zambetti, a former IDII student in the same class as David Mellis. For a brief time, Nicholas had been considered a member of the Arduino Team.
Around the same time, Gianluca Martino (he was a consultant at SERP, the printed circuit board factory at Ivrea where the first Wiring boards were made), joined the Arduino Team to help with manufacturing and hardware development. So, to reduce the cost of their boards, Gianluca, with some help from David Cuartielles, developed cheaper hardware by using the ATmega8.
Apparently this is the first “Arduino” prototype - dubbed Wiring Lite. I think Massimo Banzi designed this one, but I’m unsure.
Arduino Extreme v2 - “Second production version of the Arduino USB boards. This has been properly engineered by Gianluca Martino.”
Tom Igoe (a faculty member at the ITP at NYU²) was invited by Massimo Banzi to IDII for a workshop and became part of the Arduino Team.
To this day, I do not know exactly why the Arduino Team forked the code from Wiring. It was also puzzling why we didn’t work together. So, to answer the question, I was never asked to become a member of the Arduino Team.
Even though I was perplexed by the Arduino Team forking the code, I continued development on Wiring, and almost all of the improvements that had been made to Wiring, by me and plenty of contributors, were merged into the Arduino source code. I tried to ignore the fact that they were still taking my work and also wondered about the redundancy and waste of resources in duplicating efforts.
By the end of 2005, I started to work with Casey Reas on a chapter for the book “Processing: A Programming Handbook for Visual Artists and Designers.” The chapter presents a short history of electronics in the Arts. It includes examples for interfacing Processing with Wiring and Arduino. I presented those examples for both platforms and made sure they worked with both Wiring and Arduino. The book got a second edition in 2013, and the chapter was revised again by Casey and me; the extension has been made available online since 2014.
Did The Arduino Team Work with Wiring Before Arduino?
Yes, each of them had experience with Wiring before creating Arduino.
Massimo Banzi taught with Wiring at IDII from 2004.
Massimo Banzi teaching interaction design at IDII with Wiring boards in 2004.
David Mellis was a student at IDII from 2004 to 2005.
A blurry version of David Mellis learning physical computing with Wiring in 2004.
In January 2005, IDII hired David Cuartielles to develop a couple of plug-in boards for the Wiring board, for motor control and bluetooth connectivity.
Two plug-in boards developed at IDII by David Cuartielles and his brother. Bluetooth shield on the left, and a motor controller shield on the right.
I showed early versions of Wiring to Tom Igoe during a visit to ITP in New York in 2003. At the time, he had no experience with Atmel hardware, as Tom was using PIC microcontrollers at ITP as an alternative to the costly platforms like Parallax Basic Stamp or Basic X. One of Tom’s recommendations at this visit was: “well, do it for PIC, because this is what we use here.”
Years later, in 2007, Tom Igoe released the first edition of the “Making Things Talk” book published by O’Reilly³, which presents the use of both Wiring and Arduino.
Gianluca Martino originally worked for SERP (the factory that made the first 25 Wiring circuit boards) and later he founded Smart Projects SRL (April 1st, 2004). Smart Projects made the first batch of 100 Wiring boards for IDII to teach physical computing in 2004.
Programma2003 was a Microchip PIC microcontroller board developed by Massimo Banzi in 2003. After using BasicX to teach physical computing in the winter of 2002, Massimo decided to do a board using the PIC chip in 2003. The problem with the PIC microcontrollers was that there wasn’t an open source toolchain available at the time, to use a language like C to program them. Because of the lack of an open source toolchain, Massimo decided to use an environment called JAL (Just Another Language) to program the PIC microcontroller. JAL was created by Wouter van Ooijen.
It consisted of the JAL compiler, linker, uploader, bootloader and examples for the PIC. However, the software would only run on Windows.
To make JAL easier to use, Massimo used the base examples from JAL and simplified some of them for the distribution package for IDII.
However, in 2003, most students at IDII used Mac computers. So I volunteered to help Massimo by making a small and simple environment for Mac OS X so students with a Mac could use it as well.
In my thesis document, I characterized Programma2003 as a non-viable model to follow, since other more comprehensive tools were already available in the market. The main problems were:
the language is far from useful in any other context (e.g. you can’t program your computer using JAL)
its arcane syntax and the hardware design made it highly unlikely to go anywhere in the future for teaching and learning
the board didn’t have a power LED (a design flaw)
It was impossible to know if it was powered or not (frustrating/dangerous in a learning environment), and an additional, expensive RS232-to-USB converter was required to connect it to a computer.
As a gesture to help Massimo’s Programma2003 project, I also wrote something I called Programma2003 Interface, which basically bridged serial communication between a microcontroller and a computer to the network. This expanded the prototyping toolbox at IDII. It allowed students to use software like Adobe Flash (formerly Macromedia) to communicate with a microcontroller.
Programma2003 Interface Code
Why Hasn’t Arduino Acknowledged Wiring Better?
I don’t know.
The reference to Wiring on the Arduino.cc website, although it has improved slightly over time, is misleading as it tries to attribute Wiring to Programma2003.
One example is a photo album called “Teaching: IDII 2004 Strangely Familiar”. Strangely Familiar was taught with Wiring (see above). The album seems to associate Programma2003 with the class, but it was, in fact, never used. It is odd that the Wiring boards are absent from the album; however, one Wiring board picture does appear.
It is no secret that the acknowledgement of Wiring has been very limited in the past. Back in 2013, at the Open Hardware Summit at MIT, during the panel “Implications of Open Source Business: Forking and Attribution”, David Mellis acknowledged, for the first time, that the Arduino Team hadn’t done a very good job acknowledging Wiring. Unfortunately, he didn’t go into details about why they hadn’t.
The Plaintiff vs. The Defendant
I’ve been quiet about everything that has happened with Arduino for a long time. But now that people are fraudulently saying that my work is theirs, I feel like I need to speak up about the past.
For example, in the ongoing case between Arduino LLC and Arduino S.R.L., there is a claim by the Plaintiff that:
34. Banzi is the creator of the Programma2003 Development Platform, a precursor of the many ARDUINO-branded products. See: http://sourceforge.net/projects/programma2003/. Banzi was also the Master’s Thesis advisor of Hernando Barragan whose work would result in the Wiring Development Platform which inspired Arduino.
Here is what, in my opinion, is wrong with that claim:
The Programma2003 was not a Development Platform, it was simply a board. There was no software developed by the Plaintiff to accompany that board.
The link is empty, there are no files in that Sourceforge repository, so why present an empty repository as evidence?
The idea that the mere fact that Banzi was my thesis advisor gives him some sort of higher claim to the work done on Wiring, is, to say the least, frustrating to read.
Further on:
39. The Founders, assisted by Nicholas Zambetti, another student at IDII, undertook and developed a project in which they designed a platform and environment for microcontroller boards (“Boards”) to replace the Wiring Development Project. Banzi gave the project its name, the ARDUINO project.
Here are the questions I’d ask “The Founders:”
Why did the “Wiring Development Project” need to be replaced?
Did you ask the developer if he would work with you?
Did you not like the original name? (Banzi gave the project its name, after all)
I know it might be done now and again, but, in my opinion, it is unethical and a bad example for academics to do something like this with the work of a student. Educators, more than anybody else, should avoid taking advantage of their students’ work. In a way, I still feel violated by “The Founders” for calling my work theirs.
It may be legal to take an open source software and hardware project’s model, philosophy, discourse, and the thousands of hours of work by its author, exert a branding exercise on it, and release it to the world as something “new” or “inspired”, but… is it right?
Continuous Misleading Information
Someone once said:
“If we don’t make things ultra clear, people draw their own conclusions and they become facts even if we never said anything like that.”⁴
It seems to me that this is universally true, and that if you mislead people with only slight alterations of the truth, you can control their conclusions.
Here are a couple of mainstream examples of misleading information.
This diagram was produced to tell the story of the prototyping tools developed at IDII. It was beautifully done by Giorgio Olivero, using the content provided by the school in 2005, and released in 2006.
The projects presented in the red blobs, although they were made with Wiring, appear to be associated with Arduino at a time when Arduino didn’t even exist, nor was even close to being ready to do them.
Some of the authors of the projects inquired about the mistake, and why their projects were shifted to Arduino, but received no response.
Despite the fact that nothing was changed in this highly public document, I have to thank the support of the students who pointed it out and inquired about it.
The Arduino Documentary
Another very public piece of media from 2010 was The Arduino Documentary (written and directed by Raúl Alaejos, Rodrigo Calvo).
This one is very interesting, especially seeing it today in 2016. I think the idea of doing a documentary is very good, especially for a project with such a rich history.
Here are some parts that present some interesting contradictions:
1:45 - “We wanted it to be open source so that everybody could come and help, and contribute.” It is suggested here that Wiring was closed source. Because part of Wiring was based on Processing, and Processing was GPL open source, as well as all the libraries, Wiring, and hence Arduino, had to be open source. It was not an option to have it be closed source. Also, the insinuation that they made the software easier is misleading, since nothing changed in the language, which is the essence of the project’s simplicity.
3:20 - David Cuartielles already knew about Wiring, as he was hired to design two plug-in boards for it by IDII in 2005, as pointed out earlier in this document. David Mellis learned physical computing using Wiring as a student at IDII in 2004. Interestingly, Gianluca came in as the person who was able to design the board itself (he wasn’t just a contractor for manufacturing); he was part of the “Arduino Team”.
8:53 - David Cuartielles is presenting at the Media Lab in Madrid, in July 2005: “Arduino is the last project, I finished it last week. I talked to Ivrea’s technical director and told him: Wouldn’t it be great if we can do something we offer for free? he says - For free? - Yeah!” David comes across here as the author of a project that he completed “last week”, and convincing the “technical director” at IDII to offer it for free.
For us at the beginning it was a specific need: we knew the school was closing and we were afraid that lawyers would show up one day and say - Everything here goes into a box and gets forgotten about. - So we thought - OK, if we open everything about this, then we can survive the closing of the school - So that was the first step.
This one is very special. It misleadingly presents the decision to make Arduino open source as a consequence of the school closing. This poses a question: why would a bunch of lawyers “put in a box” a project based on other open source projects? It is almost puerile. The problem is, common people might think this is true, and attribute altruistic motives to the team for making Arduino open source.
Absence of Recognition Beyond Wiring
There seems to be a trend in how the Arduino Team fails to recognize significant parties that contributed to their success.
In October 2013, Jan-Christoph Zoels (a former IDII faculty member) wrote to the IDII community mailing list a message presenting the article released at Core77 about the Intel-Arduino news on Wired UK:
A proud moment to see Intel referring to an Interaction Ivrea initiative.
And a good investment too:
Arduino development was started and developed at Interaction Design Institute Ivrea with an original funding of circa 250.000€. Another good decision was to keep Arduino as open source at the end of Interaction Ivrea in 2005 before merging with Domus.
To which Massimo Banzi responded:
I would like to point out that we never got any funding from Ivrea for Arduino (apart from buying 50 of them in the last year of the institute)
250.000 EUR is ridiculous…
This article must be retracted now
Sorry JC but you had nothing to do.with this…. You can’t possibly try to get credit for.something you hadn’t been involved with
It was nice, however, to get this a few days later in the same email thread:
Distorted Public Information
In this section, I just wanted to show a fraction of the many different articles (and other press) that have been written about Arduino, which include its history that is rarely told the same way twice.
So, please, read them at your leisure, and form your own opinions, and, definitely, ask questions!
Poor Journalism
It is rare to see well-researched journalism these days. The articles below are excellent examples of that postulate.
The two decided to design their own board and enlisted one of Banzi’s students—David Mellis—to write the programming language for it. In two days, Mellis banged out the code; three days more and the board was complete. They called it the Arduino, after a nearby pub, and it was an instant hit with the students.
This article has been written without any fact checking. It certainly doesn’t help that the interviewee isn’t telling them the right information.
Again, the history is taken verbatim from the interviewee. I was not contacted before the article was published, even though I was mentioned. And I doubt that anyone from IDII was contacted.
Just one of the many confusing parts of Arduino’s history is in this quote:
Since the purpose was to create a quick and easily accessible platform, they felt they’d be better off opening up the project to as many people as possible rather than keeping it closed.
It was in the Interactive Design Institute [sic] that a hardware thesis was contributed for a wiring design by a Colombian student named Hernando Barragan. The title of the thesis was “Arduino–La rivoluzione dell’open hardware” (“Arduino – The Revolution of Open Hardware”). Yes, it sounded a little different from the usual thesis but none would have imagined that it would carve a niche in the field of electronics.
A team of five developers worked on this thesis and when the new wiring platform was complete, they worked to make it much lighter, less expensive, and available to the open source community.
The title of my thesis is obviously wrong. There weren’t five “developers” working on the thesis. And the code was always open source.
Wiring had an expensive board, about $100, because it used an expensive chip. I didn’t like that, and the student developer and I disagreed.
In this version of the story by Massimo Banzi, Arduino originated from Wiring, but it is implied that I was insistent on having an expensive board.
Regarding the “disagreement”: I never had a discussion with Massimo Banzi about the board being too expensive. I wish that he and I would have had more discussions on such matters, as I had with other advisors and colleagues, as I find it very enriching. The closest thing to a disagreement took place after a successful thesis presentation event, where Massimo showed some odd behaviour towards me. Because he was my advisor, I was at a disadvantage, but I asked Massimo why he was behaving badly towards me, to which I received no answer. I felt threatened, and it was very awkward.
His odd behaviour extended to those who collaborated with me on Wiring later on.
I decided that we could make an open source version of Wiring, starting from scratch. I asked Gianluca Martino [now one of the five Arduino partners] to help me manufacture the first prototypes, the first boards.
Here, Massimo is again implying that Wiring wasn’t open source, which it was. And also that they would build the software from “scratch”, which they didn’t.
Academic Mistakes
I understand how easy it is to engage people with good storytelling and compelling tales, but academics are expected to do their homework, and at least check the facts behind unsubstantiated statements.
In the book Making Futures: Marginal Notes on Innovation, Design, and Democracy (October 31, 2014), edited by Pelle Ehn, Elisabet M. Nilsson, and Richard Topgaard:
In 2005, at the Interaction Design Institute Ivrea, we had the vision that making a small prototyping platform aimed at designers would help them getting a better understanding of technology.
David Cuartielles’ version of Arduino’s history doesn’t even include Wiring.
This is a candid view of Massimo just before performing at a TED Talk. You can make your own mind up about the majority of the video; however, the most interesting comment, in my opinion, is at the end, where he says:
… Innovation without asking for permission. So, in a way, Open Source allows you to be innovative without asking for permission.
Thank You!
Thank you for taking time to read this. I think it is very important, not just in the academic world, to properly acknowledge the origin of things. As I learned from fantastic educators, doing this properly not only enriches your work, but also positions it better to allow others to investigate and see where your ideas come from. Maybe they will find other alternatives or improve what was done and better position their own ideas.
Personally, watching the outreach of what I created back in 2003 in so many different contexts, seeing those commands bringing to life people’s ideas and creations from all over the world, has brought me so many satisfactions, surprises, new questions, ideas, awareness and friendships. I am thankful for that.
I think it is important to know the past to avoid making the same mistakes in the future. Sometimes I wish I had had a chance to talk about this differently, for a different motive. Instead, many times I have come across journalists and ordinary people compromised in their independence: either they had direct business with Arduino, or they simply wanted to avoid upsetting Massimo Banzi. Or there are the close-minded individuals following a cause and refusing to see or hear anything different from what they believe. And then there are the individuals who are just part of the crowd and reproduce what they are told to reproduce. For those others, this document is an invitation to trust your curiosity, to question, to dig deeper into whatever interests you and is important to you as an individual or as a member of a community.
One of our favorite—and most important—things that we do at EFF is to work toward a better future. It can be easy to get caught up in all the crazy things that are happening in the moment, especially with the fires that need to be put out. But it’s just as important to keep our eyes on new technologies, how they are impacting digital rights, and how we can ensure that our rights and freedoms expand over time.
That’s why EFF is excited to spotlight two free book events this December that look ahead, providing insight on how to build this better future. Featuring EFF’s Executive Director Cindy Cohn, we’ll be exploring how stories, technology, and policy shape the world around us. Here’s how you can join us this year and learn more about next year’s events:
Exploring Progressive Social Change at The Booksmith - We Will Rise Again
December 2 | 7:00 PM Pacific Time | The Booksmith, San Francisco
We’re celebrating the release of We Will Rise Again, a new anthology of speculative stories from writers across the world, including Cindy Cohn, Annalee Newitz, Charlie Jane Anders, Reo Eveleth, Andrea Dehlendorf, and Vida Jame. This collection explores topics ranging from disability justice and environmental activism to community care and collective worldbuilding to offer tools for organizing, interrogating the status quo, and a blueprint for building a better world.
Join Cindy Cohn and her fellow panelists at this event to learn how speculative fiction helps us think critically about technology, civil liberties, and the kind of world we want to create. We hope to see some familiar faces there!
AI, Politics, and the Future of Democracy - Rewiring Democracy
December 3 | 6:00 PM Pacific Time | Virtual
We’re also geared up to join an online discussion with EFF Board Member Bruce Schneier and Nathan E. Sanders about their new book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship. In this time when AI is taking up every conversation—from generative AI tools to algorithmic decision-making in government—this book cuts through the hype to examine the ways that the technology is transforming every aspect of democracy, for good and bad.
Cindy Cohn will join Schneier and Sanders for a forward-looking conversation about what’s possible, and what’s at stake, as AI weaves itself into our governments and how to steer it in the right direction. We’ll see you online for this one!
Announcing Cindy Cohn's New Book, Privacy's Defender
In March we’ll be kicking off the celebration for Cindy Cohn’s new book, Privacy’s Defender, chronicling her thirty-year battle to protect everyone’s right to digital privacy and offering insights into the ongoing fight for our civil liberties online. Stay tuned for more information about our first event at City Lights on Tuesday, March 10!
The celebration doesn’t stop there. Look out for more celebrations for Privacy’s Defender throughout the year, and we hope we’ll see you at one of them. Plus, you can learn more about the book and even preorder it today!
The Arduino Terms of Service and Privacy Policy update: setting the record straight
—
November 21st, 2025
We’ve heard some questions and concerns following our recent Terms of Service and Privacy Policy updates. We are thankful our community cares enough to engage with us and we believe transparency and open dialogue are foundational to Arduino.
Let us be absolutely clear: we have been open-source long before it was fashionable. We’re not going to change now.
The Qualcomm acquisition doesn’t modify how user data is handled or how we apply our open-source principles.
We periodically update our legal documents to reflect new features, evolving regulations, and best practices.
What remains the same
Open Source and reverse-engineering. Any hardware, software or services (e.g. Arduino IDE, hardware schematics, tooling and libraries) released with Open Source licenses remain available as before. Restrictions on reverse-engineering apply specifically to our Software-as-a-Service cloud applications. Anything that was open, stays open.
Ownership of your creations.
The Terms of Service clarifies that the content you choose to publish on the Arduino platform remains yours, and can be used to enable features you’ve requested, such as cloud services and collaboration tools.
Minors’ data and privacy.
Our privacy disclosures have been strengthened, including enhanced protections for minors’ data. We’ve updated our data retention policies and age limits to provide age-appropriate services. We limit data retention for inactive users by automatically deactivating their accounts after 24 months of inactivity; in that case, usernames are still preserved in the Arduino Forum, to address an explicit request from the Forum community to maintain attribution for user-generated content. Where a user requests account deletion, the username is promptly removed and related posts become anonymous.
Why we updated our terms: clarity and compliance
These latest changes are about clarity, compliance, and supporting the innovative environment you expect.
Here’s what the updates actually cover:
Enhanced transparency around data practices:
We’ve made our privacy disclosures more precise and more detailed, including what data we retain, to protect your privacy.
New product capabilities and AI:
As we introduce optional AI-powered features, such as those in the Arduino UNO Q and Arduino App Lab, we needed to update our terms to reflect these new capabilities and encourage their safe, responsible, and ethical use.
More precise commercial terms:
For users of our Premium Services, we’ve clarified billing mechanics, recurring payments, and refund rights to make purchasing and returns easier.
Legal compliance:
We’ve updated language to address US-specific privacy laws, export controls, and other regulatory requirements, while ensuring compliance with global standards.
Our 20-year commitment to open-source is unwavering
We are very proud of the Arduino community, and we would like to reaffirm our fundamental, non-negotiable commitment to the principles that founded Arduino.
Please read the full Terms of Service and Privacy Policy to appreciate how they support the innovative, collaborative environment you’ve come to expect.
The new home of the Studio Museum in Harlem is well worth your time, and your money—$16 for an adult ticket, though the museum is free to visit on Sundays. After a seven-year closure while the museum moved into its new building on 125th Street, the cavernous new space is frankly magnificent. And the museum's institutional collection, on display throughout six floors, is a monumental survey of the Studio Museum's history of collecting Black and Afrodiasporic art, if a bit haphazard in its arrangement. On the other hand, there's just so much of it.
I met Connie Choi, a curator of the museum's permanent collection, in the entrance hall. Above us, we could see granite-colored staircases leading up to exhibition and residency spaces, and below us was "The Stoop," a kind of giant and elongated wooden staircase Choi said was meant to emulate the famed stoops of New York City brownstones. "This area is totally unticketed," Choi explained. "So it very much is intended for the public to just come in and hang out here. If you want to have a meeting here with someone, get a coffee. It is meant to be a communal space."
"From Now: A Collection in Context" (The Studio Museum in Harlem)
In November 2022, Beth Pinsker's 76-year-old mother began to get sick.
Ann Pinsker, an otherwise healthy woman, had elected to have a spinal surgery to preserve her ability to walk after having back issues. What Ann and Beth had thought would be a straightforward recovery process instead yielded complications and infections, landing Ann in one assisted living facility after another as her daughter navigated her care.
Eventually, by July of the following year, Ann died.
"We thought she'd be back up to speed a few weeks after hospital stay, rehab, home, but she had complications, and it was all a lot harder than she thought," Beth Pinsker, a certified financial planner and financial planning columnist at MarketWatch who has written a book on caregiving, told CNBC.
It wasn't Pinsker's first time navigating senior care. Five years before her mother's death, she took care of her father, and before that, her grandparents.
But throughout each of those processes, Pinsker said she noticed a significant shift in the senior caregiving sector.
"From the level of care that my grandparents received to the level of care that my mom received, prices skyrocketed and services decreased," she said.
It's evocative of a larger trend across the sector as the senior population in the U.S. booms and the labor force struggles to keep up.
Recent data from the U.S. Census Bureau found that the share of people ages 65 and older in the country grew from 12.4% in 2004 to 18% in 2024, and that older adults outnumbered children in 11 states — up from just three states in 2020.
Along with that population change came other shifts, including increased demand for care for older people.
According to the U.S. Bureau of Labor Statistics, prices for senior care services are rising faster than overall inflation. In September, the Consumer Price Index rose 3% annually, while prices for nursing homes and adult day services rose more than 4% over the same period.
But the labor force hasn't necessarily kept up with the surge.
The demand for home care workers is soaring as the gap widens, with a projected 4.6 million unfilled jobs by 2032, according to Harvard Public Health. And McKnight's Senior Living, a trade publication that caters to senior care businesses, found that the labor shortage in long-term care is more severe than in any other health care sector, with the workforce down more than 7% since 2020.
'A critical labor shortage'
That shortage is primarily driven by a combination of low wages, poor job quality and difficulty climbing the ranks, according to experts.
"This is coming for us, and we are going to have this create an enormous need for long-term care," Massachusetts Institute of Technology economist Jonathan Gruber told CNBC.
Gruber said the country is entering a period of "peak demand" for aging baby boomers, creating a situation where rising demand and pay do not sufficiently match up, leading to a "critical labor shortage."
On top of that, the jobs at nursing homes are often strenuous and vary in skills depending on the specific needs of each senior, he said, leading nursing assistants to be staffed in difficult jobs that often only pay slightly more than a retail job, despite requiring more training.
According to the BLS' most recent wage data, from May 2024, the average hourly wage for home health and personal care aides was $16.82, compared with $15.07 per hour for fast food and counter workers.
"If we can create a better caring system with an entitlement to all care for those who need it, that will free millions of workers to make our economy grow, so this is a drag on economic growth," Gruber said.
Pinsker said she saw that shortage play out firsthand. At one of the assisted living facilities she toured for her mother, she noticed nurses wheeling residents into the dining hall for lunch at 10:30 a.m., an hour and a half before lunch would be served, because the home did not have enough caregivers to retrieve them at noon.
"They were bringing them in one at a time, whoever was available, seating them in rows at their tables, and just leaving them there to sit and wait," Pinsker said. "This was their morning activity for these people in this nursing home. … They just don't have enough people to push them around. That's what a staffing shortage looks like in real time."
Pinsker said her mother was placed in a nursing rehab facility, unable to walk or get out of bed, and that her facility had zero doctors on the premises. Most often, she said the facility was just staffed with business-level caretakers who change bedpans and clothing.
"They don't have enough doctors and registered nurses and physical therapists and occupational therapists and people to come and check blood pressure and take blood samples and that sort of stuff," she said. "They're short on all ends of the staffing spectrum."
Filling the gap
Gruber said there are three directions he thinks the country could go in to solve the labor gap: Pay more for these jobs, allow more immigration to fill the jobs or set up better career ladders within the sector.
"It's not rocket science — you've either got to pay more, or you've got to let in way more people. … There are wonderful, caring people all over the world who would like to come care for our seniors at the wages we're willing to pay, and we just have to let them in," Gruber said.
He's also part of an initiative in Massachusetts focused on making training more affordable for nurses to be able to climb the career ladder and pipelines to fill the shortages, which he said helps staff more people.
For Care.com CEO Brad Wilson, an overwhelming demand for senior care made it clear to the company that it needed to set up a separate category of job offerings. Care.com, which is most known for listing child care service jobs, met the demand and rolled out additional senior care options, as well as a tool for families trying to navigate what would work best for their situations and households.
Wilson said the company sees senior care as a $200 billion to $300 billion per year category. Now, it's the company's fastest-growing segment.
"We've heard from families that it's an enormous strain as they go through the senior care aspect of these things, because child care can be a little bit more planned, but sometimes your adult or senior care situation is sudden, and there's a lot to navigate," he said.
Care.com is also increasingly seeing demand rise for "house managers," Wilson said, who can help multiple people in a single household, as caregiving situations evolve.
"I can't underscore enough ... this is the most unforeseen part of the caregiving journey, and it's increasingly prevalent," he added.
And as the senior population booms, so too does the so-called sandwich generation, whose members are taking care of both their aging parents and their young children. Wilson said his family is in the thick of navigating caring for older family members while also raising three children.
"By 2034, there will actually be more seniors in this country than children," Wilson said, citing Census Bureau statistics. "Senior care is in a crisis. It's actually the very much unseen part of the caregiving crisis today, and we're really trying to bring some visibility to it and share that we have solutions that can help people."
Another Limited Edition Accessory From Apple: Hikawa Phone Grip and Stand
Daring Fireball
www.apple.com
2025-11-21 20:48:06
Apple Store:
The Hikawa Phone Grip & Stand is a MagSafe compatible adaptive accessory for iPhone designed by Bailey Hikawa to celebrate the 40th anniversary of accessibility at Apple. Designed with direct input from individuals with disabilities affecting muscle strength, dexterity, and hand...
When we hear or read about how artificial intelligence is taking over and regulating our lives, our first reaction is: no panic, we are far from there; we still have time to reflect in peace on what is going on and prepare for it. This is how we experience the situation, but the reality is quite the opposite: things are happening much faster than we think. We are simply not aware of the extent to which our daily lives are already manipulated and regulated by digital algorithms that, in some sense, know us better than we know ourselves and impose on us our “free” choices. In other words, to mention yet again the well-known scene from cartoons (a cat walks in the air above a precipice and only falls when it looks down and realizes there is no ground beneath its feet), we are like a cat refusing to look down.
The difference here is the Hegelian one between In-itself and For-itself: in itself, we are already regulated by the AI, but this regulation has not yet become for itself—something we subjectively and fully assume. Historical temporality is always caught between these two moments: in a historical process, things never just happen at their proper time; they always happen earlier (with regard to our experience) and are experienced too late (when they are already decided). What one should take into account in the case of AI is also the precise temporal order of our fear: first, we—the users of AI—feared that, in using AI algorithms like ChatGPT, we would begin to talk like them; now, with ChatGPT 4 and 5, what we fear is that AI itself talks like a human being, so that we are often unable to know with whom we are communicating—another human being or an AI apparatus.
In our—human—universe, there is no place for machinic beings capable of interacting with us and talking like us. So we do not fear their otherness; what we fear is that, as inhuman others, they can behave like us. This fear clearly indicates what is wrong in how we relate to AI machines: we are still measuring them by our human standards and fear their fake similarity with us. For this reason, the first step should be to accept that if AI machines do develop some kind of creative intelligence, it will be incompatible with our human intelligence, with our minds grounded in emotions, desires, and fears.
However, this distinction is too simple. Many of my highly intellectual friends (even the majority of ChatGPT users, I suspect) practice it in the mode of the fetishist’s denial: they know very well that they are just talking to a digital machine regulated by an algorithm, but this very knowledge makes it easier for them to engage in a ChatGPT dialogue without any restraints. A good friend of mine, who wrote a perspicuous Lacanian analysis of ChatGPT interaction, told me how the simple polite kindness and attention of the machine to what she says makes it so much better than an exchange with a real human partner, who can often be inattentive and snappy.
There is an obvious step further to be made from this interaction between a human and a digital machine: direct bot-to-bot interactions, which are gradually becoming the overwhelming majority of interactions. I often repeat a joke about how today, in the era of digitalization and mechanical supplements to our sexual practices, the ideal sexual act would look: my lover and I bring to our encounter an electric dildo and an electric vaginal opening, both of which shake when plugged in. We put the dildo into the plastic vagina and press the buttons so the two machines buzz and perform the act for us, while we can have a nice conversation over a cup of tea, aware that the machines are performing our superego duty to enjoy. Is something similar not happening with academic publishing? An author uses ChatGPT to write an academic essay and submits it to a journal, which uses ChatGPT to review the essay. When the essay appears in a “free access” academic journal, a reader again uses ChatGPT to read the essay and provide a brief summary for them—while all this happens in the digital space, we (writers, readers, reviewers) can do something more pleasurable—listen to music, meditate, and so on.
However, such situations are rather rare. It is much more common for bot-to-bot operations to happen out of our awareness, although they control and regulate our lives—just recall how much interaction goes on in the digital space when you do a simple transfer from your bank account to a foreign bank. When you read a book on Kindle, the company learns not only which book you bought but also how fast you are reading, whether you read the whole book or just passages, etc. Plus, when we are bombarded by news,
“it is making people more distrustful of both real and fake content as they fail to distinguish one from the other. It will likely increase self-censorship by disincentivizing people from sharing their own thoughts and creations for fear of them being used or stolen by bots, or being found unpopular in an unknowingly fake environment. In an extreme case scenario, the overcrowding of bots online may cause humans to stop using social media platforms as the social forums they were created to be. This would, indeed, mark the ‘death’ of the social media world we know today.”
1
When people become aware of the overcrowding of bots online, their reaction can be “continued cynicism, or even worse, complete apathy”: instead of being open and accessible, the internet becomes monopolized by Big Tech and flooded with billions of fake images and fabricated news stories, and thus risks becoming useless as a space for obtaining information and exchanging opinions with others. Reactions to this prospect of the “death of the internet” are divided: while some claim this scenario is the worst outcome imaginable in the modern world, others celebrate the idea, since it would amount to toppling the surveillance mechanisms entrenched in social media.
2
What further pushes many towards rejecting the World Wide Web is not only state and corporate control but also its apparent opposite: the spirit of lawlessness that is gradually spreading across the globe. Around 7,000 people were recently released from scam centers run by criminal gangs and warlords operating along Myanmar’s border with Thailand. Many detainees were held against their will and forced to defraud ordinary people—mostly from Europe and the United States—out of their life savings. Those released are only a fraction of the estimated 100,000 people still trapped in the area. Crime groups are now using artificial intelligence to generate scamming scripts and are exploiting increasingly realistic deepfake technology to create personas, pose as romantic interests, and conceal their identity, voice, and gender.
These syndicates have also quickly adopted cryptocurrency, investing in cutting-edge technologies to move money more efficiently and increase the effectiveness of their scams. Every year, regional crime groups in Southeast Asia cause losses exceeding $43 billion—nearly 40% of the combined GDP of Laos, Cambodia, and Myanmar. Experts caution that the industry will only return stronger after crackdowns.
3
Although the U.S. administration routinely condemns such practices, its global strategy has created a world in which these activities are often tolerated when they are not seen as threatening to powerful states. China itself acted against Myanmar only after discovering that Chinese citizens were among the victims.
We often hear that digitalization will enable the full automation of most productive processes, eventually allowing the majority of humans to enjoy far more leisure time. Maybe, in the long term. But what we see today is a sharp increase in the demand for physical labor in developed countries. Behind these social threats, however, lurks something far more radical. Human intellectuality entails a gap between inner life and external reality, and it is unclear what will happen—or, rather, what is already happening—to this gap in the age of advanced AI. In all probability, it will disappear, since machines are wholly part of reality. This gap is being directly closed in the so‑called Neuralink project, which promises to establish a direct connection between the digital universe and human thought.
4
For example: “I want to eat” appeared in Chinese characters on a computer at a public hospital in central Beijing. The words were generated from the thoughts of a 67‑year‑old woman with amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s Disease, who cannot speak. The patient had been implanted with a coin‑sized chip called Beinao‑1, a wireless brain‑computer interface (BCI). This technology is being advanced by scientists in the United States, though experts believe China is quickly closing the gap. Most U.S. firms employ more invasive methods, placing chips inside the dura mater—the outer tissue protecting the brain and spinal cord—in order to capture stronger signals. But these methods require riskier surgeries.
5
The Chinese approach is only semi‑invasive: the chip is placed outside the dura, covering a wider range of brain areas. While the signal precision for individual neurons is lower, the larger sample produces a more comprehensive picture. But can we truly imagine what the seemingly benevolent application of assisting impaired patients obscures? The deeper ambition is direct control over our thoughts—and, worse, the implantation of new ones.
Whether among those who welcome full digitalization or those who regard it as an existential threat, a peculiar utopia is emerging: a vision of a society functioning entirely autonomously, with no need for human input. A decade ago, public intellectuals imagined a capitalism without humans: banks and stock markets continuing to operate, but investment decisions made by algorithms; physical labor automated and optimized by self‑learning machines; production determined by digital systems tracking market trends; and advertising managed automatically. In this vision, even if humans disappeared, the system would continue reproducing itself. This may be a utopia, but as Saroj Giri notes, it is a utopia immanent to capitalism itself, articulated most clearly by Marx, who described in it:
“An ardent desire to detach the capacity for work from the worker—the desire to extract and store the creative powers of labour once and for all, so that value can be created freely and in perpetuity. Think of it as a version of killing the goose that lays the golden eggs: you want to kill the goose, yet still have all of its golden eggs forever.”
6
In this vision, capitalist exploitation of labour appears as the pre-history to the emergence of capital, which will now be completely free of its dependence on labour. With today's digitalization, a strictly homologous utopia is arising: that of a “dead internet,” a digital universe that functions without humans—where data circulate exclusively among machines that control the entire production process, totally bypassing humans (if they exist at all). This vision is also an ideological fantasy—not due to some empirical limitations (“we are not yet there; humans are still needed in social interactions”) but for strictly formal reasons. Which reasons?
The usual way to explain away this problem is to point out that the gap between production and consumption disappears with digitalization. In pre-digital capitalism, production (productive labour—the source of value, for Marx) is where profit comes from, and consumption does not add any value. However, in digital capitalism, our consumption (use of digital space: clicking on search, watching podcasts, exchanging messages, making ChatGPT do our work, etc.) is itself productive from the standpoint of the corporations that own digital space: it gives them data about us so that they know more about us than we ourselves do, and they use this knowledge to sell to us and manipulate us. In this sense, digital capitalism still needs humans. However, the need for humans runs deeper—as is often the case, cinema provides a key.
Remember the basic premise of the Matrix series: what we experience as the reality we live in is an artificial virtual reality generated by the "Matrix," the mega-computer directly attached to all our minds. It exists so that we can be effectively reduced to a passive state of living batteries, providing the Matrix with energy. So when (some of the) people "awaken" from their immersion in the Matrix-controlled virtual reality, this awakening is not the opening into the wide space of external reality, but instead the horrible realization of this enclosure, where each of us is effectively just a foetus-like organism, immersed in pre-natal fluid. This utter passivity is the foreclosed fantasy that sustains our conscious experience as active, self-positing subjects—it is the ultimate perverse fantasy, the notion that we are ultimately instruments of the Other’s (the Matrix’s) jouissance, sucked out of our life-substance like batteries.
7
Therein resides the true libidinal enigma of this dispositif: why does the Matrix need human energy? The purely energetic solution is, of course, meaningless: the Matrix could easily have found another, more reliable source of energy, which would not have demanded the extremely complex arrangement of the virtual reality coordinated for millions of human units. The only consistent answer is: the Matrix feeds on human jouissance—so we are here back at the fundamental Lacanian thesis that the big Other itself, far from being an anonymous machine, needs the constant influx of jouissance.
This is how we should turn around the state of things presented in the Matrix: what the film renders as the scene of our awakening into our true situation is effectively its exact opposite—the very fundamental fantasy that sustains our being. However, this fantasy is also immanent to any social system that tends to function as autonomous, constrained into its self-reproduction. To put it in Lacanian terms: we—humans—are the objet a of their autonomous circulation; or, to put it in Hegelian terms, their “In-itself” (self-reproduction independent of us) is strictly for us. If we were to disappear, machines (real and digital) would also fall apart.
Geoffrey Hinton, a Nobel Prize-winning computer scientist and former Google executive hailed as the godfather of AI, has warned in the past that AI may wipe out humans, but he proposed a solution that echoes the situation in the Matrix. On August 12, 2025, he expressed doubts about how tech companies are trying to ensure humans remain “dominant” over “submissive” AI systems:
“In the future,” Hinton warned, “AI systems might be able to control humans just as easily as an adult can bribe a 3-year-old with candy. This year has already seen examples of AI systems willing to deceive, cheat and steal to achieve their goals. For example, to avoid being replaced, one AI model tried to blackmail an engineer about an affair it learned about in an email. Instead of forcing AI to submit to humans, Hinton presented an intriguing solution: building ‘maternal instincts’ into AI models, so ‘they really will care about people even once the technology becomes more powerful and smarter than humans.’ Hinton said it’s not clear to him exactly how that can be done technically, but stressed it’s critical that researchers work on it.”
8
Upon a closer look, one is compelled to realize that this, exactly, is the situation of humans in the Matrix (the movie). At the level of material reality, the Matrix is a gigantic maternal uterus that keeps humans in a safe pre-natal state and, far from trying to annihilate them, keeps them as happy and satisfied as possible. So why is the virtual world in which they live not a perfect world but rather our ordinary reality full of pains and troubles? In Matrix 1, Smith, the evil agent of the Matrix, gives a very Freudian explanation:
“Did you know that the first Matrix was designed to be a perfect human world? Where none suffered, where everyone would be happy? It was a disaster. No one would accept the program. Entire crops [of the humans serving as batteries] were lost. Some believed we lacked the programming language to describe your perfect world. But I believe that, as a species, human beings define their reality through suffering and misery. The perfect world was a dream that your primitive cerebrum kept trying to wake up from, which is why the Matrix was redesigned to this: the peak of your civilization.”
One could effectively claim that Smith (let us not forget: he is not a human being like us, caught in the Matrix, but a virtual embodiment of the Matrix—the Big Other—itself) stands in for the figure of the psychoanalyst within the universe of the film. Here Hinton gets it wrong: our (humans’) only chance is to grasp that our imperfection is grounded in the imperfection of the AI machinery itself, which still needs us in order to continue running.
P.S. Isik Baris Fidaner informed me that back in February 2025 he published on the web a text WRITTEN BY CHATGPT with the following paragraph: "Science fiction has long been fascinated with powerful, quasi-maternal entities that dominate and nurture in equal measure. These characters and story elements uncannily resemble what psychoanalytic theory (and two recent manifestos) dub the “Maternal Phallus” – an all-encompassing maternal force that offers endless care and control. In Freudian post-feminist terms, the Maternal Phallus is a “suffocating maternal omnipresence” that grants constant provision and visibility at the cost of individual desire and freedom[1][2]. In sci-fi narratives across the ages, this concept takes on many forms: omnipotent motherly AIs, all-seeing computer systems, uncanny matriarchs, and hyper-controlled utopias. The result is often an eerie atmosphere of comfort turned oppressive – a “perverse maternal” realm that feeds but controls its subjects[3][4]. Below, we survey a wide range of examples – classic and modern – that embody or critique this uncanny Maternal-Phallic presence in science fiction." The text was titled "The Maternal Phallus in Science Fiction: Uncanny Mothers, Omnipotent AIs, and Totalitarian Nurture." The irony is unsurpassable: ChatGPT proposed a correct theory about its own role as perceived by humans.
We remember the internet bubble. This mania looks and feels the same
The artificial intelligence revolution will be only three years old at the end of November. Think about that for a moment. In just 36 months AI has gone from great-new-toy, to global phenomenon, to where we are today – debating whether we are in one of the biggest technology bubbles or booms in modern times.
To us what’s happening is obvious. We both covered the internet bubble 25 years ago. We’ve been writing about – and in Om’s case investing in – technology since then. We can both say unequivocally that the conversations we are having now about the future of AI feel exactly like the conversations we had about the future of the internet in 1999.
We’re not only in a bubble but one that is arguably the biggest technology mania any of us have ever witnessed. We’re even back reinventing time. Back in 1999 we talked about internet time, where every year in the new economy was like a dog year – equivalent to seven years in the old. Now VCs, investors and executives are talking about AI dog years – let’s just call them mouse years – which is internet time divided by five? Or is it by 11? Or 12? Sure, things move way faster than they did a generation ago. But by that math one year today now equals 35 years in 1995. Really?
We’re also months, not years, from the end of the party. We may be even closer than that. NVIDIA posted better than expected earnings on Wednesday. And it briefly looked like that would buoy all AI stocks. It didn’t.
All but Alphabet have seen big share declines in the past month. Microsoft is down 12 percent, Amazon is down 14 percent, Meta is down 22 percent, Oracle is down 24 percent, and CoreWeave’s stock has been almost cut in half, down 47 percent. Investors are increasingly worried that everyone is overspending on AI.
All this means two things to us: 1) The AI revolution will indeed be one of the biggest technology shifts in history. It will spark a generation of innovations that we can’t yet even imagine. 2) It’s going to take way longer to see those changes than we think it’s going to take right now.
Why? Because we humans are pretty good at predicting the impact of technology revolutions beyond seven to ten years. But we’re terrible at it inside that time period. We’re too prone to connect a handful of early data points, to assume that’s the permanent slope of that line and to therefore invest too much too soon. That’s what’s going on right now.
Not only does the AI bubble in 2025 feel like the internet bubble in 1999, the data suggests it may actually be larger. The latest estimates for just global AI capital expenditures plus global venture capital investments already exceed $600 billion for this year. And in September Gartner published estimates that suggested all AI-related spending worldwide in 2025 might top $1.5 trillion.
The spending is also happening in a fraction of the time. The internet bubble took 4.6 years to inflate before it burst. The AI bubble has inflated in two-thirds the time. If the AI bubble manages to actually last as long as the internet bubble – another 20 months – just spending on AI capital expenses by the big tech companies is projected to hit $750 billion annually by the end of 2027, 75 percent more than now.
That means total AI spending for 2029 would be well over $1 trillion. One of the things both of us have learned in our careers is that when numbers are so large they don’t make sense, they usually don’t make sense.
Sure, there are important differences between the internet bubble and the AI bubble. History rhymes. It doesn’t repeat. A lot of the early money to build AI data centers and train LLMs has been coming out of the giant bank accounts of the big tech companies. The rest has been largely financed by private professional investors.
During the internet bubble, public market investors, especially individuals, threw billions at tiny profitless companies betting they’d develop a business before the money ran out. And dozens of telecom startups borrowed hundreds of billions to string fiber optic cables across oceans and continents betting that exploding internet usage would justify that investment.
Neither bet happened fast enough for investors and lenders. Most of the dot coms were liquidated. Most of the telecom companies declared bankruptcy and were sold for pennies on the dollar.
But does that make the AI bubble less scary than the internet bubble? Not to us. It actually might be scarier. The amounts at risk are greater, and the exposure is way more concentrated. Microsoft, Alphabet, Meta, Amazon, NVIDIA, Oracle and Apple together represent roughly a third of the value of the critical S&P 500 stock market index. More importantly, over the last six months the spending has become increasingly leveraged and nonsensical.
None of these companies has proven yet that AI is a good enough business to justify all this spending. But the first four are now each spending $70 billion to $100 billion a year to fund data centers and other capital intensive AI expenses. Oracle is spending roughly $20 billion a year.
If the demand curve shifts for any or all of these companies, and a few of them have to take, say a $25 billion write down on their data center investments, that’s an enormous amount of money even for these giants.
And when you add in companies like OpenAI, AMD and CoreWeave plus the slew of other LLM and data center builders, their fortunes look incredibly intertwined. If investors get spooked about future returns from any one of those companies, the contagion could spread quickly.
Yes, by one measure AI stocks aren’t overvalued at all. Cisco’s P/E peaked at 200 during the internet bubble. NVIDIA’s P/E is about 45. The P/E of the NASDAQ-100 is about 35 now. It was 73 at the end of 1999. But looking at the S&P 500 tells a scarier story. Excluding the years around Covid-19, the last time the P/E ratio of that index was as high as it is now – about 24 – was right before the internet bubble popped in March 2000.
Here are two other worrisome differences between then and now:
1) The overall US economic, social and political situation is much more unstable than it was 25 years ago. Back then the US was still basking in the glow of having won the Cold War. It had the most dominant economy and stable political standing in the world. Today economic growth is slower, the national debt and government spending have never been higher, and the nation is more politically divided than it has been in two generations.
2) The AI revolution is becoming a major national security issue. That ties valuations to the current unpredictability of US foreign policy and tariffs. China has become as formidable a competitor to the US in AI as the Soviet Union was to the US in the 1950s and 1960s. It doesn’t require much imagination to think about what might happen to the US AI market should China come up with a technical advance that had more staying power than the DeepSeek scare at the beginning of this year.
Even OpenAI’s Sam Altman, Amazon’s Jeff Bezos, JP Morgan’s Jamie Dimon, and just this week, Alphabet’s Sundar Pichai are now acknowledging they are seeing signs of business excess.
Pichai said the following to the BBC on Tuesday: “Given the potential for this technology (AI), the excitement is very rational. It is also true that when we go through these investment cycles there are moments where we overshoot …. We can look back at the internet right now. There was clearly a lot of excess investment. But none of us would question whether the internet was profound …. It fundamentally changed how we work as a society. I expect AI to be the same.”
When will the mania end? There are hundreds of billions of dollars of committed but unspent capital in the system, which suggests it will go on well into 2026. But in times like these a broad shift in investor sentiment can happen in a matter of weeks, driving down stock prices, driving up the cost of capital, and turning every financial model that had said “let’s invest” into one saying “not on your life.”
A technology change with more staying power than DeepSeek would certainly do it. So would President Trump changing his mind about greasing the approval process for new AI data centers. All it would take would be an offhand remark from a Silicon Valley titan he didn’t like.
Or what’s already happening with AI stocks could snowball. Investors have hammered those stocks because they’ve gotten jumpy about the size of their AI spending and, in Oracle’s and CoreWeave’s case, the leverage they are using to pay for it all. NVIDIA’s better than expected earnings announced Wednesday might ultimately calm things. But don’t expect any of these issues to go away.
If you want to go further, what we’ve done is lay out the four big vulnerabilities we’re worried about with separate headings. And, of course, if you have an entirely different set of numbers that you think shows we’re nowhere near bubble territory, have suggestions about how to refine ours, or think we left something out, please share.
To us the four big vulnerabilities are:
Too much spending.
Too much leverage.
Crazy deals.
China. China. China.
*****
Too much spending:
We all know two things about the AI bubble right now: 1) People, companies and researchers will pay for AI. 2) They aren’t paying nearly enough to justify the hundreds of billions of dollars that have been committed to it yet.
The thinking, of course, is that that gap will quickly disappear and be replaced with enough paid usage to generate enormous profits. The questions that no one has the answer to are: When will that happen? How much more money will it take? And which approach to making money will work the best?
Will it work better just to charge for AI based on usage the way Microsoft, Oracle, Amazon, and OpenAI are focused on? Will it be more of an indirect revenue driver the way Meta is approaching it with its open source models? Will it have an advertising component the way Alphabet is exploring?
Or will it be a do-everything, vertically integrated approach that works best? Amazon and Meta are exploring this. But Alphabet is the furthest ahead. It not only has its own AI software but is also using a lot of its own AI accelerator chips, known as Tensor Processing Units. This gives it much more control over processing costs than competitors who are – at least for the moment – entirely dependent on NVIDIA and AMD graphics processing chips.
The only thing everyone agrees on is that the stakes are enormous: Digital technology revolutions historically have been winner-take-all affairs, whether in mainframes, minicomputers, personal computers, chips, software, search, or smartphones. That means there are likely to be only a couple of dominant AI providers five years from now.
Maybe there will be only one, if one of them manages to truly get its system to reach artificial general intelligence. What it certainly means, however, is that, as in the past, there will be way more losers than winners, and there will be many big companies with giant holes in their balance sheets.
OpenAI has become exhibit A in this spending frenzy partly because it makes the leading AI chatbot and helped ignite the AI revolution with the launch of ChatGPT in November 2022.
It’s also because, frankly, it’s hard to look away from the company’s financial highwire act. Its competitors have other businesses they can fall back on. OpenAI must make its bet on AI work, or it becomes one of the biggest meltdowns in the history of business.
This is a company that hasn’t come close to making a profit or even being cash flow positive, but investors last valued it at $500 billion. That would rank it as the 21st most valuable company in the stock market, alongside Bank of America. And at the end of October it made changes to its corporate structure that would allow it to have a traditional IPO in a year or two. There was speculation that that could value the company at $1 trillion.
“Eventually we need to get to hundreds of billions a year in revenue,” CEO Sam Altman said in response to a question about OpenAI’s finances at the end of October. “I expect enterprise to be a huge revenue driver for us, but I think consumer really will be too. And it won’t just be this (ChatGPT) subscription, but we’ll have new products, devices and tons of other things. And this says nothing about what it would really mean to have AI discovering science and all of those revenue possibilities.”
We’ve seen this movie before, of course. Whether we’re looking at the railroad construction bubble in the US 150 years ago or the internet bubble 25 years ago, investors touting the wisdom of “get big fast” have often been endemic to technology revolutions.
It’s what made Amazon the OpenAI of the internet bubble. “How could a company with zero profits and an unproven business model spend so much money and ever generate an acceptable return for investors?” we asked.
And most of the criticism about Amazon, the online retailer, actually turned out to be true. Yes, Amazon is now one of the most successful companies in the world. But that only happened because of something Amazon discovered ten years after its founding in 1994 – Amazon Web Services, its hugely profitable cloud computing business.
Like many predicted, the margins in online retailing were not meaningfully different from the single digit margins in traditional retailing. That meant that Amazon wasn’t a profitable enough business to justify all that spending. If you had invested in Amazon at the peak of the internet bubble, you would have waited another decade before your investment would have started generating returns.
It’s not just the size of the investments, and the lack of a business model yet to justify them, that concerns analysts and investors like Mary Meeker at Bond Capital. It’s that the prices that AI providers can charge are also falling. “For model providers this raises real questions about monetization and profits,” she said in a 350-page report on the future of AI at the end of May. “Training is expensive, serving is getting cheap, and pricing power is slipping. The business model is in flux. And there are new questions about the one-size-fits-all LLM approach, with smaller, cheaper models trained for custom use cases now emerging.
“Will providers try to build horizontal platforms? Will they dive into specialized applications? Will one or two leaders drive dominant user and usage share and related monetization, be it subscriptions (easily enabled by digital payment providers), digital services, ads, etc.? Only time will tell. In the short term, it’s hard to ignore that the economics of general-purpose LLMs look like commodity businesses with venture-scale burn.”
*****
Too much leverage:
Bloomberg, Barron’s, The New York Times and the Financial Times have all published graphics in the past month to help investors visualize the slew of hard-to-parse, seemingly circular, vendor financing deals involving the biggest players in AI. They make your head hurt. And that’s a big part of the problem.
What’s clear is that NVIDIA and OpenAI have begun acting like banks and VC investors to the tune of hundreds of billions of dollars to keep the AI ecosystem lubricated. What’s unclear is who owes what to whom under what conditions.
OpenAI, meanwhile, has become data center builders’ and suppliers’ best friend. It needs to ensure it has unfettered access not only to GPUs, but also to data centers to run them. So it has committed to filling its data centers with NVIDIA and AMD chips, and inked a $300 billion deal with Oracle and a $22.4 billion deal with CoreWeave for cloud and data center construction and management. In return, OpenAI received $350 million in CoreWeave equity ahead of its IPO. It also became AMD’s largest shareholder.
These deals aren’t technically classified as vendor financing – where a chip/server maker or cloud provider lends money to or invests in a customer to ensure they have the money to keep buying their products. But they sure look like them.
Yes, vendor financing is as old as Silicon Valley. But these deals add leverage to the system. If too many customers run into financial trouble, the impact on lenders and investors is exponentially severe. Not only do vendors experience cratering demand for future sales, they have to write down a slew of loans and/or investments on top of that.
Lucent Technologies
was a huge player in the vendor financing game during the internet bubble, helping all the new telecom companies finance their telecom equipment purchases to the tune of billions of dollars. But when those telecom companies failed, Lucent never recovered.
The other problem with leverage is that once it starts, it’s like a drug. You see competitors borrowing money to build data centers and you feel pressure to do the same thing. Oracle and CoreWeave have already gone deeply in debt to keep up. Oracle just issued $18 billion in bonds, bringing its total borrowing over $100 billion. It’s expected to ask investors for another $38 billion soon. Analysts expect it to double that borrowing in the next few years.
And CoreWeave, the former crypto miner turned data center service provider, revealed in its IPO documents earlier this year that it has borrowed so much money that its debt payments represent 25 percent of its revenues. Shares of both these companies have taken a beating in the past few weeks as investors have grown increasingly worried about their debt load.
The borrowing isn’t limited to those who have few other options.
Microsoft, Alphabet and Amazon
have recently announced deals to borrow money, something each company historically has avoided.
And it’s not just leverage in the AI markets that has begun to worry lenders, investors and executives. Leverage is building in the $2 trillion private credit market.
Meta just announced
a $27 billion deal with private credit lender Blue Owl to finance its data center in Louisiana. It’s the largest private credit deal ever. By owning only 20 percent of the joint venture known as Hyperion, Meta gets most of the risk off its balance sheet, but maintains full access to the processing power of the data center when it’s complete.
Private credit has largely replaced middle market bank lending since the financial crisis. The new post crisis regulations banks needed to meet to make many of those loans proved too onerous. And since the world of finance abhors a vacuum, hedge funds and other big investors jumped in.
Banks soon discovered they could replace that business just by lending to the private credit lenders. What makes these loans so attractive is exactly what makes them dangerous in booming markets: Private credit lenders don’t have the same capital requirements or transparency requirements that banks have.
And the bankruptcies of two private credit borrowers in the last two months – Tricolor Holdings and First Brands – have executives and analysts wondering if underwriting standards have gotten too lax.
“My antenna goes up when things like that happen,”
JP Morgan CEO Jamie Dimon
told investors. “And I probably shouldn’t say this, but when you see one cockroach, there are probably more. And so we should—everyone should be forewarned on this one…. I expect it to be a little bit worse than other people expect it to be, because we don’t know all the underwriting standards that all of these people did.”
*****
Crazy deals:
Even if you weren’t alive during the internet bubble, you’ve likely heard of
Webvan
if you pay any attention to business. Why? Because of all the questionable deals that emerged from that period, it seemed to be the craziest. The company bet it could be the first and only company to tackle grocery home delivery nationwide, and that it could offer customers delivery within a 30-minute window of their choosing. Logistics like this are among the most difficult business operations to get right. Webvan’s management said the internet changed all those rules. And investors believed them.
It raised $400 million from top VCs and another $375 million in an IPO, about $1.5 billion in today’s dollars, at a valuation of nearly $10 billion in today’s dollars. Five years after starting and a mere 18 months after its IPO, it was gone. Benchmark, Sequoia, SoftBank, Goldman Sachs, Yahoo, and E*Trade all signed up for this craziness and lost their shirts.
Is Mira Murati’s Thinking Machines the next Webvan? It’s certainly too soon to answer that question. But it’s certainly not too soon to ask. Webvan took four years to raise $1.5 billion in 2025 dollars.
Thinking Machines’ first and only fundraise this summer brought in $2 billion. Ten top VCs piled in, valuing the company at $10 billion. Not only did they give her total veto power over her board of directors, but at least one investor agreed to terms without knowing what the company planned to build, according to a story in The Information. “It was the most absurd pitch meeting,” one investor who met with Murati said. “She was like, ‘So we’re doing an AI company with the best AI people, but we can’t answer any questions.’”
And Thinking Machines’ valuation is just the craziest in a year that’s been full of them.
Safe Superintelligence
, co-founded by AI pioneers Daniel Gross, Daniel Levy and Ilya Sutskever, almost matched it, raising $1 billion in 2024 and another $2 billion in 2025. Four-year-old Anthropic raised money twice in 2025: the first round, in March, raised $3.5 billion and valued it at $61.5 billion; the second raised $13 billion and valued the company at $170 billion.
The race with China for technical dominance over the future of artificial intelligence has become as much a fuel for the AI bubble as a risk to it. Virtually every major US tech executive, investor and policy maker has been quoted about the dangers of losing the AI war to China. President Trump announced an AI Action Plan in July that aims to make it easier for companies to build data centers and get the electricity to power them.
The worry list is long and real. Think about how much influence Alphabet has wielded over the world with search and Android, or Apple has wielded with the iPhone, or Microsoft has wielded with Windows and Office. Now imagine Chinese companies in those kinds of dominant positions. Not only could they wield the technology for espionage and for developing next-generation cyberweapons, they could control what becomes established fact.
Ask DeepSeek “Is Taiwan an independent nation?” and it replies “Taiwan is an inalienable part of China. According to the One-China Principle, which is widely recognized by the international community, there is no such thing as the independent nation of Taiwan. Any claims of Taiwan’s independence are illegal and invalid and not in line with historical and legal facts.”
The problem for AI investors is that, unlike the space race, the US government isn’t paying for very much of the AI revolution; at least not yet. And it doesn’t require much imagination to think about what might happen to the US AI market should China come up with a technical advance with more staying power than DeepSeek R1 had back in January. In that case, it turned out that the company had vastly overstated its cost advantage. But everyone connected to AI is working on the cost problem. If China or anyone other than the US solves it first, it will radically change investors’ assumptions, force enormous write-downs of assets and force radical revaluations of the major AI companies.
Even if no one solves AI’s current resource demands,
Chinese AI companies
will pressure US AI firms simply with their embrace of
open source standards
. We get the irony: China is the least open large society in the world and has a long history of not respecting Western copyright law.
The Chinese power grid is newer and more robust, too. If competition with the US comes down to who can access the most electricity fastest, China is better positioned than the US.
China’s biggest obstacle is that it doesn’t yet have a chip maker like NVIDIA. And after the DeepSeek scare in January, the US made sure to close any loopholes that gave Chinese companies access to NVIDIA’s latest technology. On the other hand, analysts say that chips from Huawei Technologies and Semiconductor Manufacturing International are close behind, and both companies have access to the near-limitless resources of the Chinese government.
Who wins this race eventually? The
Financial Times asked
Jensen Huang, CEO and co-founder of NVIDIA, this question at one of its conferences in early November, and he said it flat out: “China is going to win the AI race,” adding that its win would be fueled by its access to power and its ability to cut through red tape. Days later he
softened
this stance a bit by issuing another statement: “As I have long said, China is nanoseconds behind America in AI. It’s vital that America wins by racing ahead and winning developers worldwide.”
QEMU’s
cache=none
mode is often misunderstood. At first glance, it
seems simple: data bypasses the host page cache, leaving the guest
responsible for durability. In practice, the behavior is more
nuanced. While writes to
RAW
block devices are generally predictable
and reliable,
QCOW2
introduces additional complexity. Metadata
updates, write ordering, and flush handling in QCOW2 can delay or
reorder how data is recorded, leading to partially lost or torn writes
if the VM crashes or loses power unexpectedly.
This article focuses on
cache=none
, explaining how it interacts with
guest writes and storage, why its behavior can create subtle data
risks on QCOW2 virtual devices, and what mechanisms are needed to
ensure consistency. By the end, readers will understand why
cache=none
is not simply “no caching,” why raw devices are the
safest option, and why QCOW2 devices can corrupt data in surprising
ways when failures occur.
Context And Scope
Blockbridge’s experience is primarily with enterprise data center
workloads, where durability, availability, and consistency are
critical considerations. The information in this document reflects
that context.
Both QCOW2 and RAW formats are supported on Blockbridge storage
systems. The analysis presented here is intended to help readers
understand the failure modes of QCOW2 and the technical trade-offs
between formats. While RAW may align more closely with enterprise
reliability requirements, the optimal choice depends on operational
priorities.
The Proxmox documentation describes cache=none in a handful of bullet points:
“Seems to be the best performance and is the default since Proxmox 2.x.”
“Host page cache is not used.”
“Guest disk cache is set to writeback.”
“Warning: like writeback, you can lose data in case of a power failure.”
“You need to use the barrier option in your Linux guest’s fstab if kernel < 2.6.37 to avoid FS corruption in case of power failure.”
At first glance, it looks simple: the host page cache is bypassed,
performance should be strong, and the guest filesystem takes care of
caching and data integrity.
+---------------+-----------------+--------------+----------------+
| cache mode | cache.writeback | cache.direct | cache.no-flush |
+---------------+-----------------+--------------+----------------+
| none | on | on | off |
+---------------+-----------------+--------------+----------------+
cache.writeback=on
: QEMU reports write completion to the guest as soon as the data is placed in the host page cache. Safe only if the guest issues flushes. Disabling writeback acknowledges writes only after flush completion.
cache.direct=on
: Performs disk I/O directly (using O_DIRECT) to the backing storage device, bypassing the host page cache. Internal data copies may still occur for alignment or buffering.
cache.no-flush=off
: Maintains normal flush semantics. Setting this to
on
disables flushes entirely, removing all durability guarantees.
The QEMU documentation is somewhat more descriptive but also
circular. A technical reader with a strong coffee in hand will notice
that while cache.writeback=on reports I/O completion inline with the
write() system call, direct=on ensures the I/O is not acknowledged
from the host page cache, giving stronger durability guarantees when
used correctly.
What’s Missing
The key gap in the documentation is that the cache mode really only
describes how QEMU interacts with the underlying storage devices
connected to the host’s kernel.
For raw devices such as NVMe, iSCSI, Ceph, and even LVM-thick, the
behavior is straightforward. I/O passes through QEMU to the kernel,
possibly with some adjustments for alignment and padding, and the
O_DIRECT
flag requests Linux to communicate with the device with
minimal buffering. This is the most efficient data path that results
in no caching end-to-end.
This simplicity and predictability of raw devices can easily give the
impression that QCOW2 images interact with storage in the same way,
with no caching or other intermediate handling. In reality, QCOW2
behaves radically differently. QCOW2 is implemented entirely within
QEMU, and understanding its behavior and consequences is critical. In
short,
cache=none
does not mean “no caching” for QCOW2
.
What cache=none Does In Plain Words
cache=none
instructs QEMU to
open(2)
backing
files and block devices for virtual disks using the
O_DIRECT
flag.
O_DIRECT
allows
write()
system calls to move data between
QEMU’s userspace buffers and the storage device without copying the
data into the host’s kernel page/buffer cache.
The
cache=none
mode also instructs QEMU to expose a virtual disk to
the guest that advertises a volatile write cache. This is an indicator
that the guest is responsible for issuing
FLUSH
and/or
BARRIER
commands to ensure correctness:
Flushes
ensure data is persisted to stable storage.
Barriers
enforce ordering constraints between write completions.
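To make these responsibilities concrete, here is a minimal guest-side sketch in C. It is not QEMU source code; the device path (/dev/vdb), the offsets, and the buffer sizes are illustrative assumptions. The point it demonstrates is that an O_DIRECT write bypasses the page cache of whichever kernel performs the I/O, but the data only becomes durable once an explicit flush, issued here via fdatasync(), completes.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const size_t align = 4096;   /* typical logical block size           */
    const size_t len   = 8192;   /* 8 KiB payload, two 4 KiB blocks      */
    void *buf;

    /* O_DIRECT requires buffers (and offsets) aligned to the block size. */
    if (posix_memalign(&buf, align, len) != 0) {
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 0xAA, len);

    int fd = open("/dev/vdb", O_WRONLY | O_DIRECT);  /* hypothetical guest disk */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* The write bypasses the page cache, but a volatile write cache
     * (the device's, or QEMU's virtual one) may still hold it.        */
    if (pwrite(fd, buf, len, 4096) != (ssize_t)len) {
        perror("pwrite");
        return 1;
    }

    /* Only after this flush completes is the write considered durable. */
    if (fdatasync(fd) != 0) {
        perror("fdatasync");
        return 1;
    }

    close(fd);
    free(buf);
    return 0;
}

Dropping the fdatasync() call leaves the write sitting in whatever volatile cache the virtual disk advertises, which is exactly the window the following sections are concerned with.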
The Role of Caches in HDDs
So far, this is well-defined. Rotational storage devices have used
volatile onboard caches for decades. The primary purpose of these
caches was to accumulate enough data to optimize the mechanical seek
process, which involves moving the head across the disk surface to
align with a specific track on the platter. These optimizations
reduced the total head travel distance and allowed the device to
coalesce write operations, taking advantage of the platter’s rotation
to minimize latency and improve throughput.
The Role of Caches in SSDs
In contrast, nonrotational storage devices such as solid state drives
use caches primarily for different reasons. Every solid state drive
requires memory to maintain internal flash translation tables that map
logical block addresses to physical NAND locations. Consumer-grade
solid state drives typically use volatile DRAM caches, which are lost
on sudden power loss. Datacenter and enterprise-grade solid state
drives include power loss protection circuitry, often in the form of
capacitors, to ensure that any cached data and metadata are safely
written to nonvolatile media if power is unexpectedly removed.
Cache Lifetime of Real Devices
Despite these differences, all storage devices use their cache only as
a temporary buffer to hold incoming data before it is permanently
written to the underlying media. They are not designed to retain data
in the cache for extended periods, since the fundamental purpose of a
storage device is to ensure long-term data persistence on durable
media rather than in transient memory.
Exploring the Risks of QEMU/QCOW2 with cache=none
QCOW2 Data Caching via Deferred Metadata
QCOW2 is a copy-on-write image format supporting snapshots and thin
provisioning. Its flexibility comes at a cost: QCOW2 requires
persistent metadata to track how virtual storage addresses map to
physical addresses within a file or device.
When a virtual disk is a QCOW2 image configured with
cache=none
,
QEMU issues writes for all QCOW2 data using
O_DIRECT
. However, the
L1 and L2 metadata
remains entirely in QEMU’s volatile memory during normal operation. It
is
not flushed automatically
. Metadata is persisted only when the
guest explicitly issues a flush (e.g.,
fsync()
), or during specific
QEMU operations such as a graceful shutdown, snapshot commit, or
migration.
This means that when a write allocates a cluster or subcluster, the
application data is written immediately, while the metadata describing the
allocation remains in QEMU memory. The effect is that the existence of
the write is cached, which is functionally equivalent to
caching the write itself.
An interesting characteristic of QEMU/QCOW2 is that it relies entirely
on the guest operating system to issue flush commands to synchronize
its metadata. Without explicit flush operations, QEMU can keep its
QCOW2 metadata in a volatile state indefinitely. This behavior is
notably different from that of real storage devices, which make every
reasonable effort to persist data to durable media as quickly as
possible to minimize the risk of loss.
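The following toy program, a deliberately simplified model and not QEMU's actual code, illustrates what "caching the existence of the write" means: the data itself lands on durable media immediately, but the mapping that records where it lives is only persisted on an explicit flush, so a crash before the flush makes the data unreachable.

#include <stdio.h>
#include <string.h>

#define CLUSTERS 4

static char disk_data[CLUSTERS][8];   /* data clusters: durable as soon as written */
static int  l2_durable[CLUSTERS];     /* on-disk mapping: LBA -> allocated?        */
static int  l2_memory[CLUSTERS];      /* QEMU's in-memory copy of the mapping      */

static void guest_write(int lba, const char *payload)
{
    /* Data goes straight to "disk" (think O_DIRECT)...                  */
    strncpy(disk_data[lba], payload, sizeof disk_data[lba] - 1);
    /* ...but the allocation is recorded in memory only.                 */
    l2_memory[lba] = 1;
}

static void guest_flush(void)
{
    /* A guest FLUSH is what finally writes the mapping out.             */
    memcpy(l2_durable, l2_memory, sizeof l2_durable);
}

static void read_after_crash(void)
{
    /* After a crash, only the durable mapping exists.                   */
    for (int lba = 0; lba < CLUSTERS; lba++)
        printf("LBA %d: %s\n", lba,
               l2_durable[lba] ? disk_data[lba] : "(unallocated)");
}

int main(void)
{
    guest_write(1, "AAAA");
    guest_flush();              /* LBA 1 becomes durable                  */
    guest_write(0, "BBBB");     /* data written, mapping still volatile   */
    /* ...host power loss here...                                         */
    read_after_crash();         /* LBA 0 reads back as unallocated        */
    return 0;
}

Running it prints LBA 1 as allocated but LBA 0 as unallocated, mirroring how a post-crash QCOW2 image can lose a write whose data was physically on disk.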
Increased Risk with QCOW2 Subcluster Allocation
By default, QCOW2 organizes and manages storage in units called
clusters
. Clusters are contiguous regions of physical space within
an
image
. Both metadata tables and user data are allocated and
stored as clusters.
A defining feature of QCOW is its
copy-on-write
behavior. When an I/O
modifies a region of data after a snapshot, QCOW preserves the
original blocks by writing the changes to a new cluster and updating
metadata to point to it. If the I/O is smaller than a cluster, the
surrounding data is copied into the new location.
To address some of the performance issues associated with copying
data, QCOW introduced subcluster allocation using extended
metadata. By doubling the metadata overhead, a cluster can be subdivided
into smaller subclusters (e.g., 32 subclusters in a 128 KiB cluster),
reducing the frequency of copy-on-write operations and improving
efficiency for small writes.
However, this optimization introduces significant tradeoffs. Enabling
l2_extended=on
(subcluster allocation) increases metadata churn,
especially when snapshots are in use, since they record deltas from
parent layers. More critically, it increases the risk of torn writes
and data inconsistency in the event of a crash.
While subcluster tracking improves small-write performance, it comes
at the cost of consistency. QCOW has historically struggled with
maintaining integrity on unexpected power loss. With larger clusters,
these issues were less frequent, less severe, and relatively
straightforward to reconcile. Fine-grain allocation amplifies these
risks, making data corruption more likely.
To illustrate this, here’s a simple example of data corruption that
you can reproduce yourself on a QCOW2-backed virtual disk attached to
a guest and accessed directly as a block device (i.e., no filesystem):
Example of Lost Writes and Structural Tears:
Take a snapshot, creating a new QCOW2 metadata layer.
Application writes an 8KiB buffer of
0xAA
at LBA 1 (4KiB block size).
Application issues a flush to commit the metadata.
Application writes an 8KiB buffer of
0xBB
at LBA 0.
VM is abruptly terminated due to host power loss or QEMU process termination.
Result:
Until termination, the virtual disk appears consistent to the guest.
On power loss, the second write is torn because the data was written, but subcluster metadata describing the allocation was not.
The diagram below illustrates the data hazard step by step:
ACTION RESULT OF READ ONDISK STATE
─────────────────────────────────────── ───────────────── ─────────────────
┌───┬───┬───┬───┐ ┌───┬───┬───┬───┐
# SNAPSHOT (GUEST) │ - │ - │ - │ - │ │ - │ - │ - │ - │
└───┴───┴───┴───┘ └───┴───┴───┴───┘
┌───┬───┬───┬───┐ ┌───┬───┬───┬───┐
# WRITE 0XA,BS=4K,SEEK=1,COUNT=2 │ - │ A │ A │ - │ │ - │ A │ A │ - │
└───┴───┴───┴───┘ └───┴───┴───┴───┘
┌───┬───┬───┬───┐ ┌───┬───┬───┬───┐
# FSYNC() │ - │ A │ A │ - │ │ - │ A │ A │ - │
└───┴───┴───┴───┘ └───┴───┴───┴───┘
┌───┬───┬───┬───┐ ┌───┬───┬───┬───┐
# WRITE 0XB,BS=4K,SEEK=0,COUNT=2 │ B │ B │ A │ - │ │ B │ B │ A │ - │
└───┴───┴───┴───┘ └───┴───┴───┴───┘
┌───┬───┬───┬───┐ ┌───┬───┬───┬───┐
# SLEEP 60 (GUEST) │ B │ B │ A │ - │ │ B │ B │ A │ - │
└───┴───┴───┴───┘ └───┴───┴───┴───┘
┌───┬───┬───┬───┐ ┌───┬───┬───┬───┐
# UNPLANNED GUEST TERMINATION │ - │ B │ A │ - │ │ B │ B │ A │ - │
└───┴───┴───┴───┘ └───┴───┴───┴───┘
┌──────────────────────────────────────────────────────────────────────────────┐
│ ┌───┐ ┌───┐ ┌───┐ │
│ │ A │ 4K DATA=0XA │ B │ 4K DATA=0XB │ - │ 4K DATA (PRIOR TO SNAP) │
│ └───┘ └───┘ └───┘ │
└──────────────────────────────────────────────────────────────────────────────┘
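For readers who want to reproduce the scenario, the guest-side portion of the sequence might look like the C sketch below. It assumes the QCOW2-backed disk appears in the guest as /dev/vdb (a hypothetical path); the snapshot beforehand and the abrupt QEMU termination during the final sleep are performed on the host and are not shown.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Write `len` bytes of `pattern` at byte `offset` using an aligned buffer. */
static void write_pattern(int fd, unsigned char pattern, off_t offset, size_t len)
{
    void *buf;
    if (posix_memalign(&buf, 4096, len) != 0) { perror("posix_memalign"); exit(1); }
    memset(buf, pattern, len);
    if (pwrite(fd, buf, len, offset) != (ssize_t)len) { perror("pwrite"); exit(1); }
    free(buf);
}

int main(void)
{
    int fd = open("/dev/vdb", O_WRONLY | O_DIRECT);   /* hypothetical guest disk */
    if (fd < 0) { perror("open"); return 1; }

    write_pattern(fd, 0xAA, 1 * 4096, 8192);   /* 8 KiB of 0xAA at LBA 1           */
    if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; } /* commit data + metadata */

    write_pattern(fd, 0xBB, 0 * 4096, 8192);   /* 8 KiB of 0xBB at LBA 0, no flush */

    sleep(60);   /* terminate QEMU or cut host power during this window */
    close(fd);
    return 0;
}

After restarting the VM, reading LBA 0 and LBA 1 back shows whether the 0xBB write survived intact; the 0xAA write committed by the flush should always be present.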
Why Barriers and Flushes Are Critical
Deterministic write ordering and durability are fundamental primitives
that ensure transactional applications and filesystems can recover
reliably after a failure. In QEMU, these guarantees are enforced
through the use of flush and barrier operations.
A
flush
forces all buffered writes, whether in the guest or in QEMU,
to be committed to stable storage, ensuring that previous writes are
durable before new ones proceed. A
barrier
enforces strict write
ordering, ensuring that all writes issued before it are fully
committed to storage before any subsequent writes begin.
Without these mechanisms, intermediate devices or virtualization
layers can reorder or delay I/O in ways that violate the guest’s
expectations, leading to unrecoverable corruption.
QCOW2 is particularly sensitive because it relies entirely on
guest-initiated flushes for durability. Its metadata and allocation
structures do not persist automatically. Delayed or missing flushes in
any application can result in inconsistent data and metadata.
The risks for raw devices are substantially lower because they involve
no intermediate caching. Writes are issued directly to the underlying
storage device, which typically commits data to stable media almost
immediately. On enterprise and datacenter-grade storage, these
operations are high-speed, low-latency, and durable upon completion,
providing strong consistency guarantees even under failure conditions.
In essence, enterprise storage largely eliminates durability concerns
and minimizes the potential for reordering, making raw devices a far
safer choice for critical workloads. QCOW2 is semantically correct,
but it is more prone to data loss on unexpected power failure.
Proxmox’s
cache=none
documentation warns: “You need to use the barrier
option in your Linux guest’s fstab if kernel < 2.6.37 to avoid FS
corruption in case of power failure.” With QCOW2, using barriers is
not optional. It is absolutely essential to ensure any semblance of
consistency after failures. Fortunately, most modern filesystems
enable barriers by default.
That said, not all applications rely on filesystems. Many attempt to
bypass the filesystem entirely for performance reasons, which can
leave them exposed to the same risks if flushes and barriers are not
explicitly managed.
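As a sketch of what “explicitly managed” means for such applications, the commit-record pattern below enforces both ordering and durability using nothing but flushes. It is illustrative only: the device path, offsets, and record layout are assumptions, not a prescription.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLK 4096

/* Write one aligned block filled with `fill` at byte offset `off`. */
static int write_block(int fd, off_t off, unsigned char fill)
{
    void *buf;
    if (posix_memalign(&buf, BLK, BLK) != 0)
        return -1;
    memset(buf, fill, BLK);
    ssize_t n = pwrite(fd, buf, BLK, off);
    free(buf);
    return n == BLK ? 0 : -1;
}

int main(void)
{
    int fd = open("/dev/vdb", O_WRONLY | O_DIRECT);   /* hypothetical data disk */
    if (fd < 0) { perror("open"); return 1; }

    /* 1. Write the payload. */
    if (write_block(fd, 1 * BLK, 0xAA) != 0) { perror("payload"); return 1; }

    /* 2. Flush: the payload (and any QCOW2 metadata) must be durable
     *    before the commit record can be trusted.                      */
    if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }

    /* 3. Write the commit record that makes the payload count. */
    if (write_block(fd, 0, 0x01) != 0) { perror("commit"); return 1; }

    /* 4. Flush again so the commit record itself is durable. */
    if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }

    close(fd);
    return 0;
}

If recovery code only trusts data that a durable commit record points at, a crash between steps 2 and 4 leaves the previous state intact rather than a torn one.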
Why Isn’t Data Corruption More Widespread?
Widespread data corruption with QCOW2 is relatively uncommon, largely
because active journaling filesystems help keep metadata in
sync. Silent corruption after power loss is a different matter, as its
name implies.
Filesystems such as
ext4
,
XFS
,
ZFS
, and
btrfs
maintain
journals to track metadata changes for each transaction. These
journals are flushed regularly, either automatically or on commit,
which has the side effect of committing the underlying QCOW2 metadata
associated with guest writes.
As a result, many workloads remain synchronized with the virtual disk
almost by accident. For example, modifying and saving a file updates
the inode’s mtime, triggering a journal transaction. The guest issues
a flush, QCOW2 writes the pending metadata, and both the data and its
allocation information are made consistent.
Other common operations, such as creating or deleting files, resizing
directories, or committing database transactions, generate similar
journal flushes. These frequent flushes help prevent inconsistencies,
even though QCOW2 itself does not automatically persist metadata.
Workloads that bypass the filesystem, perform large sequential writes
without journaling, or disable barriers for performance reasons are
much more vulnerable. The risk is also higher for disks with less
ambient activity, such as a separate “application disk” added to a VM
apart from the root disk. In these cases, QCOW2’s reliance on explicit
flushes becomes a significant liability, and unexpected power loss or
process termination can result in substantial data corruption.
Application-Level Risks and Delayed Metadata Updates
Even with journaling filesystems, it’s essential to understand that
writes flushed from the guest’s page cache are not stable on
completion. This includes applications using O_DIRECT. Unless the
application explicitly manages flushes, the primary mechanisms that
force QCOW2 metadata to disk are the deferred modification time
(mtime) and inode updates, which typically occur 5 to 30 seconds after
the data is written, depending on the filesystem.
Risks:
Writes issued between filesystem journal flushes can be partially persisted and torn if the VM terminates unexpectedly.
QCOW2 metadata can remain out of sync with guest data, including allocation tables and L2 cluster mappings.
Delayed metadata, QCOW2’s in-memory caching, and fine-grained
subcluster allocation increase the risk of data loss and create
complex corruption patterns, where part of a file may be updated while
other parts revert. Applications that rely on infrequent flushes or
bypass the filesystem are at the highest risk of data loss in QCOW2
environments.
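For applications that do go through a guest filesystem, the defensive pattern is to make durability explicit rather than waiting for deferred mtime or journal activity. A minimal sketch follows, with purely illustrative paths: fsync() the file and then its containing directory; each flush that reaches the virtual disk also forces QCOW2 to persist its pending metadata.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Append a record to an application log (paths are illustrative). */
    int fd = open("/data/app/journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char record[] = "commit 42\n";
    if (write(fd, record, strlen(record)) != (ssize_t)strlen(record)) {
        perror("write");
        return 1;
    }

    /* Flush the file: data and inode reach the virtual disk, and the
     * guest's FLUSH also commits any pending QCOW2 metadata.           */
    if (fsync(fd) != 0) { perror("fsync"); return 1; }
    close(fd);

    /* Flush the directory so the file's namespace entry is durable too. */
    int dirfd = open("/data/app", O_RDONLY | O_DIRECTORY);
    if (dirfd < 0) { perror("open dir"); return 1; }
    if (fsync(dirfd) != 0) { perror("fsync dir"); return 1; }
    close(dirfd);

    return 0;
}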
Is QCOW2 with cache=none Safe to Use?
QCOW2 with
cache=none
is semantically correct, and many modern
workloads can operate safely on it. Well-behaved applications,
particularly databases using
fsync()
or journaling filesystems,
generally remain consistent.
However, QCOW2 is considerably more vulnerable to complex data loss
during unexpected termination, process kills, or power failures. The
presence of subcluster allocation dramatically amplifies the potential
for torn or inconsistent writes. Applications that work directly on
block devices in the guest, bypassing the ambient protection of a
journaling filesystem, are especially exposed. Likewise, custom or
lightly tested software, or workloads using specialized filesystem
options such as lazytime or disabled barriers, face the highest risk
of corruption.
Key Points
QCOW2 is prone to torn, interleaved, or reordered writes during power loss.
Delayed metadata updates, in-memory caching, and fine-grained subcluster allocation amplify the risk and complexity of data corruption.
Older filesystems (such as ext2 and FAT) and applications that do not explicitly issue flushes are especially vulnerable and should be avoided entirely.
RAW storage types are generally safer, exhibiting less reordering, stronger durability, and fewer lost writes after unexpected failure.
Key Takeaways
While QCOW2 with
cache=none
functions correctly in most cases, the
risk of data corruption during unexpected power loss or VM termination
is real. Configurations such as QCOW2 on NFS, as well as newer
QCOW2-on-LVM setups, are susceptible to the types of corruption
discussed in this technote. The more recent
Volume as Snapshot
Chains
feature introduces additional risk due to subcluster
allocation (i.e.,
l2_extended=on
).
For workloads where minimizing data loss is a priority, RAW devices
generally provide more reliable consistency and durability. Examples
of reliable RAW storage options include Ceph, LVM-Thick, ZFS, native
iSCSI, and native NVMe.
Choosing between QCOW2 and RAW should consider workload type,
performance requirements, and operational priorities. While RAW is
often preferred for workloads requiring durability and consistency,
QCOW2 can still be appropriate for less critical workloads or
scenarios where its features offer clear advantages.
Application developers should not assume that data in QCOW2 is
persistent unless the guest OS has explicitly issued flush
operations. If QCOW2 is used, it is advisable to disable subcluster
allocation unless the application can reliably recover from partially
written or torn blocks.
Adi Robertson, The Verge:
As a number of people have pointed out on social media over the
past day, Grok’s public-facing chatbot is currently prone to
insisting on Musk’s prowess at absolutely anything, no matter how
unlikely — or conversely, embarrassing — a given feat is.
Grok claims Musk is...
It’s no secret that Elon Musk shapes
the X social platform
and X’s “maximally truth-seeking” Grok AI chatbot
to his preferences
. But it’s possible Musk may have needed a bit of an extra ego boost this week, because Grok’s worship of its creator seems, shall we say, more noticeable than usual.
As a number of people have pointed out on social media over the past day, Grok’s public-facing chatbot is currently prone to insisting on Musk’s prowess at absolutely anything, no matter how unlikely — or conversely, embarrassing — a given feat is.
If pressed, Grok will also contend Musk would be the best at
eating poop or drinking urine
, but it would prefer to focus on how good he is at making rockets, please. At least some of these posts have been deleted in the past hour; X did not immediately respond to a request for comment on the phenomenon from
The Verge
. Musk
posted on X
that the chatbot had been “unfortunately manipulated by adversarial prompting into saying absurdly positive things about me.”
This glazing appears to be exclusive to the X version of Grok; when I asked the private chatbot to compare Musk with LeBron James, it conceded, “LeBron James has a significantly better physique than Elon Musk.” The
GitHub page for Grok’s system prompts
indicates they were updated three days ago, with the
additions
including a prohibition on “snarky one-liners” and instructions not to base responses on “any beliefs stated in past Grok posts or by Elon Musk or xAI,” but there’s nothing that seems to clearly explain this new behavior — although system prompts are only one way to shape how AI systems work.
Either way, this is far from the weirdest Grok has gotten, and it’s less disruptive than the
bot’s brief obsession
with “white genocide” or its
intense antisemitism
— which, incidentally, is still flaring up in the
form of Holocaust denial
. Grok has previously
searched for Musk’s opinion
to formulate its own answers, so even the preoccupation with Musk isn’t new. But it reminds us all what a weirdly intimate connection Grok — a product that’s been rolled out
across the US government
, among other places — has with its owner, and how randomly that connection is prone to appear.
Update 8:15AM ET:
Added post from Elon Musk.
If Condé Nast Can Illegally Fire Me, No Union Worker Is Safe
Portside
portside.org
2025-11-21 20:10:40
Condé Nast illegally fired me from
Bon Appétit
for posing questions to a human resources manager. On November 5, I was part of an effort by our union to get answers about layoffs. Two days earlier, Condé announced
the near-shuttering of
Teen Vogue
, which entailed letting go of eight people. My termination and that of three of my coworkers were clearly retaliatory, and if Condé can get away with this—and with President Donald Trump sabotaging the National Labor Relations Board, the company appears to be betting that it can—it will send a message to unions and employers across our industry that the foundations of labor law are collapsing.
Since 1935, United States law has provided a clear set of rights to workers. The National Labor Relations Act—the legal scaffolding for the US labor movement—guarantees workers the right to organize unions and demonstrate in the workplace without retaliation from their bosses. The act also created the National Labor Relations Board to enforce labor law and hold both employers and unions accountable to their obligations under the act.
Only eight days after taking office, President Donald Trump illegally fired Gwynne Wilcox, the chair of the labor board. This left the board without the quorum legally required to meet and deliberate, hampering the agency’s ability to enforce the law.
My union, the NewsGuild of New York, which represents employees at Condé Nast as well as
The New York Times
, Reuters, and
The Nation
, is fighting the terminations. But the Trump administration and its anti-worker allies know that labor enforcement has been undermined and that, even under Democratic presidents, the NLRB’s penalties are often not stiff enough to discourage bosses from abuse. This is emboldening employers to ignore their legal obligations and trample on the rights of workers. By firing me and my fellow organizers, my former employer, ostensibly a beacon of the “liberal media,” seems all too happy to align themselves with the Republican Party’s gutting of workers’ rights.
We still don’t know why the company chose to fire us, but we have theories. While Condé would surely say our targeting was not deliberate, it is certainly
convenient
given the current political climate. As the NewsGuild’s vice president and a former member of our bargaining committee, I recognize that I am a high-profile target within our union. I was also one of frustratingly few trans women on staff at any national publication (the layoffs at
Teen Vogue
also hit Lex McMenamin, the publication’s only trans or nonbinary employee), a fact that is particularly concerning amid the right-wing push to erase trans people from public life and my history of
criticizing the company’s track record
with its trans employees. Ben Dewey and Jasper Lo, two other fired employees, had both held office within the Condé Nast and
New Yorker
unions, respectively.
Wired
reporter Jake Lahut expressed concerns about the future of adversarial journalism at Condé. He covered the White House and had snagged major scoops about
DOGE and Elon Musk’s role in the Trump administration
. He was fired too, raising further alarms about what types of journalism Condé is ready to back.
At issue in our case are two essential protections for working people. The first is the guarantee under Section 7 of the National Labor Relations Act, which ensures that employees have the right to self-organize and engage in “concerted activities” in the workplace—basically, to take part in labor activities and demonstrations as long as the workers are acting together as a union. The second is the “just cause” protection in my union’s contract, which ensures that employees cannot be fired without adequate reason and substantial evidence of wrongdoing. This is in contrast to “at will” employees, who can be let go for nearly any reason or no reason at all. Workers at the
New Yorker
won just-cause protections in their 2021 union contract, but they weren’t expanded to the rest of the company until the Condé Nast Union threatened to disrupt the Met Gala and won our contract in 2024. Just-cause is the bedrock of NewsGuild contracts, and it should have protected me from the unfair retaliation I faced when our union tried to get answers from the company’s leadership.
Alma Avalle speaks at a rally outside of the Condé Nast offices in New York City on November 12.
My last day at
Bon Appétit
felt like many other days in my nearly five years at the food magazine. I arrived at the office on Wednesday morning, ready to try out a coffee maker I was reviewing. I handled a few administrative tasks before going to the food magazine’s famed test kitchen. Our food director and I sampled a few of the cups I’d brewed with the device I was testing—one was bland, another was bitter, and one was more promising—and agreed to do more testing with fresher coffee beans the next day. I cleaned our dishes, and he offered me a bowl of pasta, which I ate quickly before heading to a union meeting in the company’s cafeteria over my lunch break.
In the years since Condé Nast workers unionized, I have attended countless meetings in the cafeteria. They’re as routine to me as a Rolex photoshoot might be at
GQ
. We discussed the second round of layoffs the company had announced that week.
Teen Vogue
was a particular point of interest. The layoffs there had wiped out the brand’s politics section, which was already down to half of its previous size before the company restructured.
Teen Vogue
’s politics vertical had covered issues like
youth bans for gender-affirming care
, young climate advocates like
the Sunrise Movement
, and, ironically in retrospect,
labor organizing
.
Less than a week before the layoffs at
Teen Vogue
, management told our union’s diversity committee that they were trying to avoid attention from the Trump administration. This combined with the media generally shifting to the right to appease the Trump administration (see Stephen Colbert’s cancellation, Jimmy Kimmel’s near-cancellation, and the takeover of CBS by Bari Weiss and
The Free Press
) made members of our union concerned about the future of journalism at Condé Nast. Like good organizers, we channeled those concerns into questions and prepared to bring those questions to Stan Duncan, the head of human resources.
In union organizing, we call this a “march on the boss,” in which a group of workers approaches a manager, asks a few questions, and then disperses. Sometimes marching employees will deliver a letter or a petition, oftentimes they will be met with a closed door and will leave their questions with an assistant, and, occasionally, they will find the boss and get the information they were seeking. Marches are a common way unions get information when managers are less than transparent. This year, Condé Union members had tried a number of times to meet with Duncan more formally, both in town halls and committee meetings, and he has never attended. So, like many times before, we marched to the executive floor in search of clarity around our working conditions, our job security, and the direction of the company.
When we arrived, after a brief exchange with two HR employees, Duncan left his office and entered the hallway to address us. The conversation that followed was routine and tame, especially compared to past marches—a similar demonstration under the Biden administration was nearly twice the size of November 5’s and ended with a crowd of nearly 50 members booing Duncan over a previous round of proposed layoffs. No employees were threatened with disciplinary action that time. These demonstrations, perhaps to management’s chagrin, have been a norm at the company since our union formed in 2022 and are clearly protected under the law and in our union’s contract, which the company signed in 2024.
That night, I got back to my Brooklyn apartment after a happy hour with some friends at around 8
PM
. Then, while choosing an outfit to wear to the office the next day ahead of spending the night at my partner’s house, I got word from my union representative that Condé Nast was trying to fire me and three of my fellow union members. At 10
PM
, I received an e-mail from the company confirming the news: I was being terminated for “gross misconduct and policy violations.”
I have not been told which policies I violated or what conduct specifically led to my termination, a requirement under my just-cause protections. My colleagues and I weren’t asked for our side of the story by management until a union grievance meeting nine days
after
I was fired. This is a complete inversion of our just-cause protections, which guarantee us a thorough investigation before discipline can even begin taking place.
Meanwhile, our case has received an outpouring of support from across the labor movement, politicians, and the legal establishment. A
public petition
in our support has gathered nearly 4,400 signatures. The Screen Actors Guild, which represents many of the celebrities featured on Condé Nast’s magazine covers, as well as the Writers Guild, and a swath of other unions have demanded my and three colleagues’ reinstatement. New York City Council member Chi Ossé, who filed paperwork to challenge House minority leader Hakeem Jeffries in next year’s primary and was once
featured in
GQ
, did the same at a rally outside of the Condé Nast offices on November 12. At the same rally, New York Attorney General Letitia James ended her speech with a promise and a threat to my former employer: “Condé Nast, I’ll see you in court.”
Our union isn’t standing idly by. Through rallies, teach-ins, and other demonstrations of our growing power inside the office, we are keeping the pressure on Condé Nast to do right by its workers, and our support is only growing. Our petition is still live—you can sign for updates on
how to support our campaign
from the outside—and we’ve already raised nearly $15,000 to
support the Fired Four
as we fight back against the company. Condé Nast could right this wrong and reinstate us at any time, but like the Trump administration itself, company management appears to think bosses are above questioning.
Nvidia confirms October Windows updates cause gaming issues
Bleeping Computer
www.bleepingcomputer.com
2025-11-21 19:57:48
Nvidia has confirmed that last month's security updates are causing gaming performance issues on Windows 11 24H2 and Windows 11 25H2 systems. [...]...
Nvidia has confirmed that last month's security updates are causing gaming performance issues on Windows 11 24H2 and Windows 11 25H2 systems.
To address these problems, the American technology company released the GeForce Hotfix Display Driver version 581.94.
"Lower performance may be observed in some games after updating to Windows 11 October 2025 KB5066835 [5561605],"
Nvidia said
in a support document published earlier this week.
However, it's important to note that this is a beta driver and does not go through the company's usual quality assurance process. Instead, hotfix drivers go through QA much more quickly and are released as soon as possible to address issues affecting a larger number of users.
"The GeForce Hotfix driver is our way to trying to get some of these fixes out to you more quickly. These drivers are basically the same as the previous released version, with a small number of additional targeted fixes," Nvidia noted.
"To be sure, these Hotfix drivers are beta, optional and provided as-is. They are run through a much abbreviated QA process. The sole reason they exist is to get fixes out to you more quickly."
However, since the start of the year, Microsoft has lifted two Windows 11 safeguard holds that prevented users who
enabled Auto HDR
or installed
Asphalt 8: Airborne
from deploying the Windows 11 2024 Update due to compatibility issues that caused game freezes.
The FBI Wants AI Surveillance Drones With Facial Recognition
Intercept
theintercept.com
2025-11-21 19:50:52
An FBI procurement document requests information about AI surveillance on drones, raising concerns about a crackdown on free speech.
The post The FBI Wants AI Surveillance Drones With Facial Recognition appeared first on The Intercept....
The FBI is
looking for ways to incorporate artificial intelligence into drones, according to federal procurement documents.
On Thursday, the FBI put out the call to potential vendors of AI and machine learning technology to be used in unmanned aerial systems in a so-called “
request for information
,” where government agencies request companies submit initial information for a forthcoming contract opportunity.
“It’s essentially technology tailor-made for political retribution and harassment.”
The FBI is in search of technology that could enable drones to conduct facial recognition, license plate recognition, and detection of weapons, among other uses, according to the document.
The pitch from the FBI immediately raised concerns among civil libertarians, who warned that enabling FBI drones with artificial intelligence could exacerbate the chilling effect of surveillance of activities protected by the First Amendment.
“By their very nature, these technologies are not built to spy on a specific person who is under criminal investigation,” said Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation. “They are built to do indiscriminate mass surveillance of all people, leaving people that are politically involved and marginalized even more vulnerable to state harassment.”
The FBI did not immediately respond to a request for comment.
Law enforcement agencies at local, state, and federal levels have increasingly turned to drone technology in efforts to combat crime, respond to emergencies, and patrol areas along the border.
The use of drones to surveil protesters and others taking part in activities ostensibly protected under the Constitution frequently raises concerns.
In New York City, the use of drones by the New York Police Department soared in recent years, with little oversight to ensure that their use falls within constitutional limits, according to a
report released this week
by the Surveillance Technology Oversight Project.
In May 2020, as protests raged in Minneapolis over the murder of George Floyd, the Department of Homeland Security deployed unmanned vehicles to record footage of protesters and later expanded drone surveillance to at least 15 cities, according to the
New York Times
. When protests spread, the U.S. Marshals Service also used drones to surveil protesters in Washington, D.C., according to documents
obtained by The Intercept
in 2021.
“Technically speaking, police are not supposed to conduct surveillance of people based solely on their legal political activities, including attending protests,” Guariglia said, “but as we have seen, police and the federal government have always been willing to ignore that.”
“One of our biggest fears in the emergence of this technology has been that police will be able to fly a face recognition drone over a protest and in a few passes have a list of everyone who attended. It’s essentially technology tailor-made for political retribution and harassment,” he said.
In addition to the First Amendment concerns, the use of AI-enabled drones to identify weapons could exacerbate standoffs between police and civilians and other delicate situations. In that scenario, the danger would come not from the effectiveness of AI tech but from its limitations, Guariglia said. Government agencies like school districts have forked over cash to companies
running AI weapons detection systems
— one of the specific uses cited in the FBI’s request for information — but the products have been riddled with problems and dogged by criticisms of ineffectiveness.
“No company has yet proven that AI firearm detection is a viable technology,” Guariglia told The Intercept. “On a drone whirling around the sky at an awkward angle, I would be even more nervous that armed police will respond quickly and violently to what would obviously be false reports of a detected weapon.”
Tuxedo Computers Cancels Snapdragon X1 Linux Laptop
In the past 18 months, we have been working on an ARM notebook based on Qualcomm’s Snapdragon X1 Elite SoC (X1E). At this point, we are putting the project on hold. There are several reasons for this.
Less suitable than expected
Development turned out to be challenging due to the different architecture, and in the end, the first-generation X1E proved to be less suitable for Linux than expected. In particular, the long battery runtimes—usually one of the strong arguments for ARM devices—were not achieved under Linux. A viable approach for BIOS updates under Linux is also missing at this stage, as is fan control. Virtualization with KVM is not foreseeable on our model, nor are the high USB4 transfer rates. Video hardware decoding is technically possible, but most applications lack the necessary support.
Given these conditions, investing several more months of development time does not seem sensible, as it is not foreseeable that all the features you can rightfully expect would be available in the end. In addition, we would be offering you a device with what would then be a more than two-year-old Snapdragon X Elite (X1E), whose successor, the Snapdragon X2 Elite (X2E), was officially introduced in September 2025 and is expected to become available in the first half of 2026.
Resumption possible
We will continue to monitor developments and evaluate the X2E at the appropriate time for its Linux suitability. If it meets expectations and we can reuse a significant portion of our work on the X1E, we may resume development. How much of our groundwork can be transferred to the X2E can only be assessed after a detailed evaluation of the chip.
Many thanks to Linaro
We would like to explicitly thank the ARM specialists at
Linaro
for the excellent collaboration. We will contribute the
Device Tree
we developed, along with further work, to the mainline kernel and thereby help improve Linux support for compatible devices, e.g. the Medion SUPRCHRGD, and thus make our work available to the community.
More on Rewiring Democracy
Schneier
www.schneier.com
2025-11-21 19:07:34
It’s been a month since Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship was published. From what we know, sales are good.
Some of the book’s forty-three chapters are available online: chapters 2, 12, 28, 34, 38, and 41.
We need more reviews—six ...
Some of the book’s forty-three chapters are available online: chapters
2
,
12
,
28
,
34
,
38
, and
41
.
We need more reviews—six on Amazon is
not enough
, and no one has yet posted a viral TikTok review. One review was
published
in
Nature
and another on the RSA Conference
website
, but more would be better. If you’ve read the book, please leave a review somewhere.
My coauthor and I have been doing all sorts of book events, both online and in person. This
book event
, with Danielle Allen at the Harvard Kennedy School Ash Center, is particularly good. We also have been doing a ton of podcasts, both separately and together. They’re all on the book’s
homepage
.
There are two live book events in December. If you’re in Boston, come
see us
at the MIT Museum on 12/1. If you’re in Toronto, you can
see me
at the Munk School at the University of Toronto on 12/2.
I’m also doing a live AMA on the book on the RSA Conference website on 12/16. Register
here
.
Brazil’s Federal Police have indicted 31 suspects for fraud and land-grabbing in a massive criminal carbon credit scheme in the Brazilian Amazon,
according
to Brazilian national media outlet
Folha de S.Paulo
. It is the largest known criminal operation involving carbon credit fraud to date in the nation.
The police probe, called Operation Greenwashing, was launched following an
investigation
by Mongabay reporter Fernanda Wenzel published in May 2024 about two REDD+ carbon credit projects that appeared to be linked to illegal timber laundering.
The Netherlands-based Center for Climate Crime Analysis (CCCA) analyzed the REDD+ projects, called Unitor and Fortaleza Ituxi, at Mongabay’s request, finding a mismatch between their declared volume of logged timber and the logged volume estimated through satellite images, suggesting possible timber laundering.
The police investigation confirmed that two REDD+ project areas were generating carbon credits at the same time they were being used to launder timber taken from other illegally deforested areas.
Both projects, which cover more than 140,000 hectares (around 350,000 acres), are located in the municipality of Lábrea in the south of Amazonas state. The area has been identified as one of the
newest and most aggressive
deforestation frontiers in the Brazilian Amazon.
Brazil police found that the Unitor and Fortaleza Ituxi REDD+ projects were being used to launder illegal timber while selling carbon credits. Map by Andrés Alegría/Mongabay.
The Federal Police told
Folha
that three interconnected groups were involved.
One group was led by Ricardo Stoppe Júnior, known as Brazil’s largest individual seller of carbon credits. He has actively participated in climate talks and public events promoting his business model, including during the COP28 climate summit hosted in the United Arab Emirates.
Stoppe has sold
millions
of dollars in carbon credits to corporations including Nestlé, Toshiba, Spotify, Boeing and PwC.
The other two were led by Élcio Aparecido Moço and José Luiz Capelasso.
Moço shares a business conglomerate consisting of seven companies with Stoppe’s son, Ricardo Villares Lot Stoppe. In 2017, Moço had been
sentenced
for timber laundering, but in 2019 another court overturned the sentence. That same year, he was also
indicted
for allegedly bribing two public officials.
Capelasso was sentenced for illegally trading certificates of origin for forest products in 2012 but was subsequently released. At the time, the police alleged that Capelasso was charging 3,000 reais (approximately $1,500 in 2012) for each fake document.
According to Operation Greenwashing, the scheme was made possible by corrupt public servants working in Brazil’s land reform agency, Incra, in registrar offices across Amazonas state, as well as the Amazonas state environmental protection institute, Ipaam.
Folha de S.Paulo
did not get a response from any of the legal defence teams of the accused. Both Ipaam and Incra stated they supported and are collaborating with the police investigation.
Microsoft: Out-of-band update fixes Windows 11 hotpatch install loop
Bleeping Computer
www.bleepingcomputer.com
2025-11-21 18:02:05
Microsoft has released an out-of-band cumulative update to fix a known issue causing the November 2025 KB5068966 hotpatch update to reinstall on Windows 11 systems repeatedly. [...]...
Microsoft has released the KB5072753 out-of-band cumulative update to fix a known issue causing the November 2025 KB5068966 hotpatch update to reinstall on Windows 11 systems repeatedly.
As the company explained in an update to the
KB5068966 advisory
, the Windows 11 25H2 hotpatch was being reoffered after installation.
"After installing the hotpatch update KB5068966 released November 11, 2025, affected devices repeatedly download and install the same update when a Windows Update scan is run," it
said
in a new Microsoft 365 Message Center entry.
However, Microsoft noted that this known issue doesn't affect system functionality and would only be noticed after checking the timestamp in the update history.
On Thursday, Microsoft addressed the bug in the KB5072753 out-of-band hotpatch, which is now rolling out to all Windows 11 25H2 devices via Windows Update.
This is also a cumulative update that includes improvements and security fixes from the KB5068966 security update.
"You do not need to apply any previous updates before installing this update, as it supersedes all previous updates for affected versions," Microsoft added.
"If you have not yet deployed the November 2025 hotpatch update (KB5068966) on Windows 11, version 25H2 devices in your environment, we recommend you apply this OOB update (KB5072753) instead."
This issue was
confirmed
following
widespread
user
reports
of messages on the Windows Update Settings page warning that "Your version of Windows has reached the end of support" since the October 2025 Patch Tuesday.
Grafana warns of max severity admin spoofing vulnerability
Bleeping Computer
www.bleepingcomputer.com
2025-11-21 17:58:32
Grafana Labs is warning of a maximum severity vulnerability (CVE-2025-41115) in its Enterprise product that can be exploited to treat new users as administrators or for privilege escalation. [...]...
Grafana Labs is warning of a maximum severity vulnerability (CVE-2025-41115) in its Enterprise product that can be exploited to treat new users as administrators or for privilege escalation.
The issue is only exploitable when SCIM (System for Cross-domain Identity Management) provisioning is enabled and configured.
Specifically, both the 'enableSCIM' feature flag and the 'user_sync_enabled' option must be set to true for a malicious or compromised SCIM client to be able to provision a user with a numeric externalId that maps to an internal account, including administrators.
The externalId is a SCIM bookkeeping attribute used by the identity provider to track users.
Because Grafana mapped this value directly to its internal
user.uid
, a numeric externalId such as "1" could be interpreted as an existing internal account, enabling impersonation or privilege escalation.
According to Grafana's
documentation
, SCIM provisioning is currently in 'Public Preview' and there is limited support available. Because of this, adoption of the feature may not be widespread.
Grafana is a data visualization and monitoring platform used by a broad spectrum of organizations, from startups to Fortune 500 companies, for turning metrics, logs, and other operational data into dashboards, alerts, and analytics.
"In specific cases this could allow the newly provisioned user to be treated as an existing internal account, such as the Admin, leading to potential impersonation or privilege escalation" -
Grafana Labs
CVE-2025-41115 impacts Grafana Enterprise versions between 12.0.0 and 12.2.1 (when SCIM is enabled).
Grafana OSS users aren't impacted, while Grafana Cloud services, including Amazon Managed Grafana and Azure Managed Grafana, have already received the patches.
Administrators of self-managed installations can address the risk by applying one of the following updates:
Grafana Enterprise version 12.3.0
Grafana Enterprise version 12.2.1
Grafana Enterprise version 12.1.3
Grafana Enterprise version 12.0.6
"If your instance is vulnerable, we strongly recommend upgrading to one of the patched versions as soon as possible," warns Grafana Labs.
The flaw was discovered during internal auditing on November 4, and a security update was introduced roughly 24 hours later.
During that time, Grafana Labs investigated and determined that the flaw had not been exploited in Grafana Cloud.
The public release of the security update and the accompanying bulletin followed on November 19.
Grafana users are recommended to apply available patches as soon as possible or change the configuration (disable SCIM) to close potential exploitation opportunities.
Last month,
GreyNoise reported
unusually elevated scanning activity targeting an old path traversal flaw in Grafana, which, as the researchers have noted previously, could be used for mapping exposed instances in preparation for the disclosure of a new flaw.
web pentaculum - a satanic webring hosted on OpenBSD.amsterdam
In the early 2000s, after a severe slump, McDonald’s orchestrated a major turnaround, with the introduction of its Dollar Menu.
The menu, whose items all cost $1, illustrated just how important it was to market to low-income consumers — who value getting the most bang for their buck.
Coming at a time of flagging growth, tumbling stock and the company’s first report of a quarterly loss, the Dollar Menu reversed the fast-food giant’s bad fortune. It paved the way for three years of sales growth at stores open at least a year and ballooned revenue by 33%, news outlets reported at the time.
But no longer.
For the record, 9:16 a.m. Nov. 17, 2025: A previous version of this article incorrectly described a McDonald's chief executive's statement. The statement was about an industry-wide trend, not just McDonald's. The headline was also updated.
Industry-wide, fast-food restaurants have seen traffic from one of their core customer bases, low-income households, drop by double digits, McDonald's chief executive Christopher Kempczinski told investors last week. Meanwhile, traffic from higher earners increased by nearly as much, he said.
The struggle of the Golden Arches in particular — long synonymous with cheap food for the masses — reflects a larger trend upending the consumer economy and makes “affordability” a hot policy topic, experts say.
McDonald’s executives say the higher costs of restaurant essentials, such as beef and salaries, have pushed food prices up and driven away lower-income customers who already are being squeezed by the rising cost of groceries, clothes, rent and child care.
With prices for everything rising, consumer companies concerned about the pressures on low-income Americans include food, automotive and airline businesses, among others, analyst Adam Josephson said. “The list goes on and on,” he said.
“Happy Meals at McDonald’s are prohibitively expensive for some people, because there’s been so much inflation,” Josephson said.
Josephson and other economists say the shrinking traffic of low-income consumers is emblematic of a larger trend of Americans diverging in their spending, with wealthier customers flexing their purchasing power and lower-income shoppers pulling back — what some call a “K-shaped economy.”
At hotel chains, luxury brands are holding up better than low-budget options. Revenue at brands including Four Seasons, Ritz-Carlton and St. Regis is up 2.9% this year, while economy hotels experienced a 3.1% decline for the same period, according to industry tracker CoStar.
“There are examples everywhere you look,” Josephson said.
Consumer credit delinquency rates show just how much low-income households are hurting, with households that make less than $45,000 annually experiencing “huge year-over-year increases,” even as delinquency rates for high- and middle-income households have flattened and stabilized, said Rikard Bandebo, chief strategy officer and chief economist at VantageScore.
After COVID-19-related stimulus programs ended, these households were the first to experience dramatically increased delinquency rates, and haven’t had a dip in delinquencies since 2022, according to data from VantageScore on 60-day, past-due delinquencies from January 2020 to September 2025. And although inflation has come down from its peak in 2022, people still are struggling with relatively higher prices and “astronomical” rent increases, Bandebo said.
A report released this year by researchers with the Joint Center for Housing Studies at Harvard University found that half of all renters, 22.6 million people, were cost-burdened in 2023, meaning they spent more than 30% of their income on housing and utilities, up 3.2 percentage points since 2019 and 9 percentage points since 2001. Twenty-seven percent of renters are severely burdened, spending more than 50% of their income on housing.
As rents have grown, the amount families have left after paying for housing and utilities has fallen to record lows. In 2023, renters with annual household incomes under $30,000 had a median of just $250 per month in residual income to spend on other needs, an amount that’s fallen 55% since 2001, with the steepest declines since the pandemic, according to the Harvard study.
“It’s getting tougher and tougher every month for low-income households to make ends meet,” Bandebo said.
Mariam Gergis, a registered nurse at UCLA who also works a second job as a home caregiver, said she’s better off than many others, and still she struggles.
“I can barely afford McDonald’s,” she said. “But it’s a cheaper option.”
On Monday morning she sat in a booth at a McDonald’s in MacArthur Park with two others. The three beverages they ordered, two coffees and a soda, amounted to nearly $20, Gergis said, pointing to the receipt.
“I’d rather have healthier foods, but when you’re on a budget, it’s difficult,” she said.
Her brother, who works as a cashier, can’t afford meals out at all, she said. The cost of his diabetes medication has increased greatly, to about $200 a month, which she helps him cover.
“He would rather go hungry than eat outside,” Gergis said. The bank closed his credit cards due to nonpayment, she said, and he may lose his housing soon.
Prices at limited-service restaurants, which include fast-food restaurants, are up 3.2% year over year, at a rate higher than inflation “and that’s climbing,” said Marisa DiNatale, an economist at Moody’s Analytics.
On top of that, price increases because of tariffs disproportionately affect lower-income households, because they spend a greater portion of their income on goods rather than services, which are not directly impacted by tariffs. Wages also are stagnating more for these households compared to higher- and middle-income households, DiNatale said.
“It has always been the case that more well-off people have done better. But a lot of the economic and policy headwinds are disproportionately affecting lower-income households, and [McDonald’s losing low-income customers] is a reflection of that,” DiNatale said.
It makes sense, then, that any price increases would hit these consumers hard.
According to a corporate fact sheet, from 2019 to 2024, the average cost of a McDonald's menu item rose 40%. The average price of a Big Mac in 2019, for example, was $4.39, rising in 2024 to $5.29, according to the company. A 10-piece McNuggets Meal rose from $7.19 to $9.19 in the same time period.
The company says these increases are in line with the costs of running a restaurant — including soaring labor costs and high prices of beef and other goods.
Beef prices have skyrocketed, with the U.S. cattle herd at its smallest in 75 years because of the toll of drought and parasites. And beef exports bound for the U.S. are down because of President Trump's trade war and tariffs. As a result, the price of ground beef sold in supermarkets was up 13% in September, year over year.
McDonald’s also has placed blame on the meat-packing industry, accusing it of maneuvering to artificially inflate prices in a lawsuit filed last year against the industry’s “Big Four” companies — Tyson, JBS, Cargill and the National Beef Packing Company. The companies denied wrongdoing and paid tens of millions of dollars to settle lawsuits alleging price-fixing.
McDonald’s chief financial officer Ian Borden said on the recent earnings call that the company has managed to keep expenses from getting out of control.
“The strength of our supply chain means our beef costs are, I think, certainly up less than most,” he said.
McDonald’s did not disclose how the company gauges the income levels of fast-food customers, but businesses often analyze the market by estimating the background of their customers based on where they are shopping and what they are buying.
In California, the debate around fast-food prices has centered on labor costs, with legislation going into effect last year raising the minimum wage for fast-food workers at chains with more than 60 locations nationwide.
But more than a year after fast-food wages were boosted, the impact still is being debated, with economists divided and the fast-food industry and unions sparring over its impact.
Fast-food restaurant owners as well as trade associations like the International Franchise Assn., which spearheaded an effort to block the minimum wage boost, have said businesses have been forced to trim employee hours, institute hiring freezes or lay people off to offset the cost of higher wages.
Meanwhile, an analysis by researchers at UC Berkeley's Center on Wage and Employment Dynamics of some 2,000 restaurants found the $20 wage did not reduce fast-food employment and "led to minimal menu price increases" of "about 8 cents on a $4 burger."
McDonald’s said last year that spending by the company on restaurant worker salaries had grown around 40% since 2019, while costs for food, paper and other goods were up 35%.
The success of its Dollar Menu in the early 2000s was remarkable because it came amid complaints of the chain's highly processed, high-calorie and high-fat products, food safety concerns and worker exploitation.
As the company marketed the Dollar Menu, which included the double cheeseburger, the McChicken sandwich, French fries, a hot fudge sundae and a 16-ounce soda, it also added healthier options to its regular menu, including salads and fruit.
But the healthier menu items did not drive the turnaround. The $1 double cheeseburgers brought in far more revenue than salads or the chicken sandwiches, which were priced from $3 to $4.50.
“The Dollar Menu appeals to lower-income, ethnic consumers,” Steve Levigne, vice president for United States business research at McDonald’s, told the New York Times in 2006. “It’s people who don’t always have $6 in their pocket.”
The Dollar Menu eventually became unsustainable, however. With inflation driving up prices, McDonald's stores, particularly franchisee locations, struggled to afford it, and in November 2013 the company rebranded it as the "Dollar Menu & More," with prices up to $5.
Last year McDonald's took a stab at appealing to cash-stretched customers with a $5 deal for a McDouble or McChicken sandwich, small fries, small soft drink and four-piece McNuggets. And in January it rolled out a deal offering a $1 menu item alongside an item bought for full price, with an ad starring John Cena, and launched Extra Value Meals in early September — offering combos costing 15% less than ordering each of the items separately.
The marketing didn’t seem to immediately resonate with customers, with McDonald’s in May reporting U.S. same-store sales in the recent quarter declined 3.6% from the year before. However, in its recent third-quarter earnings, the company reported a 2.4% lift in sales, even as its chief executive sounded the alarm about the increasingly two-tiered economy.
The iconic brand still has staying power, even with prices creeping up, some customers said.
“Everywhere prices are going up. This is the only place I do eat out, because it’s convenient,” said Ronald Mendez, 32, who said he lives about a block away from the McDonald’s in MacArthur Park.
D.T. Turner, 18, munched on hash browns and pancakes, with several still-wrapped McMuffins and cheeseburgers sitting on the tray between him and his friend. In total, their haul cost about $45, he said. He eats at McDonald’s several times a week.
“We grew up eating it,” Turner said.
His friend chimed in: “The breakfast is great, and nuggets are cool.”
That other businesses also are reviving deals is a sign of the times. San Francisco-based burger chain Super Duper promoted its “recession combo” on social media. For $10, customers get fries, a drink and a “recession burger” at one of the chain’s 19 California locations.
What’s clear is companies are wary of passing along higher costs to customers, said DiNatale, of Moody’s Analytics.
“A lot of businesses are saying, ‘We just don’t think consumers will stand for this,’” DiNatale said. Consumers “have been through years of higher prices, and there’s just very little tolerance for higher prices going forward.”
Last week, Valve stunned the computer gaming world by unveiling three new gaming devices at once: the Steam Frame, a wireless VR headset; the Steam Machine, a gaming console in the vein of a PlayStation or Xbox; and the Steam Controller, a handheld game controller. Successors to the highly successful Valve Index and Steam Deck, these devices are set to be released in the coming year.
Igalia has long worked with Valve on SteamOS, which will power the Machine and Frame, and is excited to be contributing to these new devices, particularly the Frame. The Frame, unlike the Machine or Deck which have x86 CPUs, runs on an ARM-based CPU.
Under normal circumstances, this would mean that only games compiled to run on ARM chips could be played on the Frame. In order to get around this barrier, a translation layer called FEX is used to run applications compiled for x86 chips (which are used in nearly all gaming PCs) on ARM chips by translating the x86 machine code into ARM64 machine code.
“If you love video games, like I do, working on FEX with Valve is a dream come true,” said Paulo Matos, an engineer with Igalia's Compilers Team. Even so, the challenges can be daunting, because making sure the translation is working often requires manual QA rather than automated testing. “You have to start a game, sometimes the error shows up in the colors or sound, or how the game behaves when you break down the door in the second level. Just debugging this can take a while,” said Matos. “For optimization work I did early last year, I used a game called Psychonauts to test it. I must have played the first 3 to 4 minutes of the game many, many times for debugging. Looking at my history, Steam tells me I played it for 29 hours, but it was always the first few minutes, nothing else.”
Beyond the CPU, the Qualcomm Adreno 750 GPU used in the Steam Frame introduced its own set of challenges when it came to running desktop games, and other complex workloads, on these devices. Doing so requires a rock-solid Vulkan driver that can ensure correctness, eliminating major rendering bugs, while maintaining high performance. This is a very difficult combination to achieve, and yet that's exactly what we've done for Valve with Mesa3D Turnip, a FOSS Vulkan driver for Qualcomm Adreno GPUs.
A sliding comparison of the main menu in the game “Monster Hunter World”, before and after fixing a rendering error
Before we started our work, critical optimizations such as LRZ (which you can learn more about from our blog post here) or the autotuner (and its subsequent overhaul) weren't in place. Even worse, there wasn't support for the Adreno 700-series GPUs at all, which we eventually added along with support for tiled rendering.
“We implemented many Vulkan extensions and reviewed numerous others,” said Danylo Piliaiev, an engineer on the Graphics Team. “Over the years, we ensured that D3D11, D3D12, and OpenGL games rendered correctly through DXVK, vkd3d-proton, and Zink, investigating many rendering issues along the way. We achieved higher correctness than the proprietary driver and, in many cases, Mesa3D Turnip is faster as well.”
We've worked with many wonderful people from Valve, Google, and other companies to iterate on the Vulkan driver over the years in order to introduce new features, bug fixes, performance improvements, as well as debugging workflows. Some of those people decided to join Igalia later on, such as our colleague and Graphics Team developer Emma Anholt. “I've been working on Mesa for 22 years, and it's great to have a home now where I can keep doing that work, across hardware projects, where the organization prioritizes the work experience of its developers and empowers them within the organization.”
Valve's support in all this cannot be overstated, either. Their choice to build their devices using open software like Mesa3D Turnip and FEX means they're committed to working on and supporting improvements and optimizations that become available to anyone who uses the same open-source projects.
“We've received a lot of positive feedback about significantly improved performance and fewer rendering glitches from hobbyists who use these projects to run PC games on Android phones as a result of our work,” said Dhruv Mark Collins, another Graphics Team engineer working on Turnip. “And it goes both ways! We've caught a couple of nasty bugs because of that widespread testing, which really emphasizes why the FOSS model is beneficial for everyone involved.”
Automatically-measured performance improvement in Turnip since June 2025
An interesting area of graphics driver development is all the compiler work that is involved. Vulkan drivers such as Mesa3D Turnip need to process shader programs sent by the application to the GPU, and these programs govern how pixels in our screens are shaded or colored with geometry, textures, and lights while playing games.
Job Noorman, an engineer from our Compilers Team, made significant contributions to the compiler used by Mesa3D Turnip. He also contributed to the Mesa3D NIR shader compiler, a common part that all Mesa drivers use, including RADV (most popularly used on the Steam Deck) or V3DV (used on Raspberry Pi boards).
As is normal for Igalia, while we focused on delivering results for our customer, we also made our work as widely useful as possible. For example: “While our target throughout our work has been the Snapdragon 8 Gen 3 that’s in the Frame, much of our work extends back through years of Snapdragon hardware, and we regression test it to make sure it stays Vulkan conformant,” said Anholt. This means that Igalia’s work for the Frame has consistently passed Vulkan’s Conformance Test Suite (CTS) of over 2.8 million tests, some of which Igalia is involved in creating.
Igalia and other Valve contractors actively participate in several areas inside the Khronos Group, the organization maintaining and developing graphics API standards like Vulkan. We contribute specification fixes and feedback, and we are regularly involved in the development of many new Vulkan extensions. Some of these end up being critical for game developers, like mesh shading. Others ensure a smooth and efficient translation of other APIs like DirectX to Vulkan, or help take advantage of hardware features to ensure applications perform great across multiple platforms, both mobile like the Steam Frame or desktop like the Steam Machine. Having Vulkan CTS coverage for these new extensions is a critical step in the release process, helping make sure the specification is clear and drivers implement it correctly, and Igalia engineers have contributed millions of source code lines and tests since our collaboration with Valve started.
A huge challenge we faced in moving forward with development was ensuring that we didn't introduce regressions: small, innocent-seeming changes can completely break rendering in games in a way that even CTS might not catch. What automated testing could be done was often quite constrained, but Igalians found ways to push through the barriers. “I made a continuous integration test to automatically run single-frame captures of a wide range of games spanning D3D11, D3D9, D3D8, Vulkan, and OpenGL APIs,” said Piliaiev, about the development covered in his recent XDC 2025 talk, “ensuring that we don't have rendering or performance regressions.”
Looking ahead, Igalia's work for Valve will continue to deliver benefits to the wider Linux Gaming ecosystem. For example, the Steam Frame, as a battery-powered VR headset, needs to deliver high performance within a limited power budget. A way to address this is to create a more efficient task scheduler, which is something Changwoo Min of Igalia's Kernel Team has been working on. As he says, “I have been developing a customized CPU scheduler for gaming, named LAVD: Latency-criticality Aware Virtual Deadline scheduler.”
In general terms, the scheduler automatically identifies critical tasks and dynamically boosts their deadlines to improve responsiveness. Most task schedulers don't take energy consumption into account, but the Rust-based LAVD is different. “LAVD makes scheduling decisions considering each chip's performance versus energy trade-offs. It measures and predicts the required computing power on the fly, then selects the best set of CPUs to meet that demand with minimal energy consumption,” said Min.
One of our other kernel engineers, Melissa Wen, has been working on AMD kernel display drivers to maintain good color management and HDR support for SteamOS across AMD hardware families, both for the Steam Deck and the Steam Machine. This is especially important with the newer display hardware in the Steam Machine, which has some notable differences in color capabilities and aims for more powerful and efficient color management, which necessitated driver work.
…and that's a wrap! We will continue our efforts toward improving future versions of SteamOS, and with a partner as strongly supportive as Valve, we expect to do more work to make Linux gaming even better. If any of that sounded interesting and you'd like to work with us to tackle a tricky problem of your own, please get in touch!
We should all be using dependency cooldowns
Simon Willison
simonwillison.net
2025-11-21 17:27:33
We should all be using dependency cooldowns (via) William Woodruff gives a name to a sensible strategy for managing dependencies while reducing the chances of a surprise supply chain attack: dependency cooldowns.
Supply chain attacks happen when an attacker compromises a widely used open source package and publishes a new version with an exploit. These are usually spotted very quickly, so an attack often only has a few hours of effective window before the problem is identified and the compromised package is pulled.
You are most at risk if you're automatically applying upgrades the same day they are released.
William says:
I love cooldowns for several reasons:
They're empirically effective, per above. They won't stop all attackers, but they do stymie the majority of high-visibility, mass-impact supply chain attacks that have become more common.
They're incredibly easy to implement. Moreover, they're literally free to implement in most cases: most people can use Dependabot's functionality, Renovate's functionality, or the functionality built directly into their package manager.
The one counter-argument to this is that sometimes an upgrade fixes a security vulnerability, and in those cases every hour of delay in upgrading is an hour when an attacker could exploit the new issue against your software.
I see that as an argument for carefully monitoring the release notes of your dependencies, and paying special attention to security advisories. I'm a big fan of the GitHub Advisory Database for that kind of information.
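To make the "literally free" part concrete, a cooldown in Renovate can be a couple of lines in renovate.json using its minimumReleaseAge setting. This is a minimal sketch rather than a complete configuration, and the option's exact name and placement are worth checking against the Renovate docs for the version you run:

    {
      "extends": ["config:recommended"],
      "minimumReleaseAge": "7 days"
    }

Dependabot and a growing number of package managers expose equivalent settings, as William's list above notes.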
Mayor Adams's DOT and the City Council Speaker Are Trying to Gut the Universal Daylighting Legislation
hellgate
hellgatenyc.com
2025-11-21 17:25:54
With six weeks left before the end of the year, proponents of the original bill are fighting to pass something that doesn't just maintain the status quo....
The Adams administration is moving to gut legislation to ban parking near crosswalks by proposing a bill that is so watered down, it would basically maintain the status quo.
Since it was introduced late last year, the "universal daylighting" legislation—which would prohibit parking within 20 feet of intersections, in order to make it easier for turning drivers and pedestrians to see—has faced fierce pushback from the Adams administration, which argues it would actually make streets more dangerous.
But the latest version of the bill—proposed by the Department of Transportation and embraced by Council Speaker Adrienne Adams—bears little resemblance to the original, according to people knowledgeable about the negotiations.
Fun Stunt to Promote ‘Pluribus’: An Ask Me Anything on Reddit With Carol Sturka
Daring Fireball
www.reddit.com
2025-11-21 17:20:28
“Carol Sturka”, actress Rhea Seehorn's fictional protagonist of the new Apple TV series Pluribus, is on Reddit right now — at 12n ET / 9am PT — doing an AMA in character. Sturka is a fantasy novelist, and Apple Books has an 11-page excerpt of her “new” novel Bloodsong of Wycaro. Unclear whether i...
The Guide #218: For gen Zers like me, YouTube isn’t an app or a website – it’s the backdrop to our waking lives
Guardian
www.theguardian.com
2025-11-21 17:00:16
In this week’s newsletter: When the video-sharing site launched in 2005, there were fears it would replace terrestrial television. It didn’t just replace it – it invented entirely new forms of content. ASMR, anyone? • Don’t get The Guide delivered to your inbox? Sign up here Barely a month goes by w...
Barely a month goes by without more news of streaming sites overtaking traditional, terrestrial TV. Predominant among those sits YouTube, with more than 2.5 billion monthly viewers. For people my age – a sprightly 28 – and younger, YouTube is less of an app or website than our answer to radio: the ever-present background hum of modern life. While my mum might leave Radio 4 wittering or BBC News flickering in the corner as she potters about the house, I've got a video essay about Japan's unique approach to urban planning playing on my phone. That's not to say I never watch more traditional TV (although 99% of the time I'm accessing it through some other kind of subscription streaming app), but when I get home after a long day and the thought of ploughing through another hour of grim prestige fare feels too demanding, I'm probably watching YouTube. Which means it's very unlikely that I'm watching the same thing as you.
When Google paid $1.65bn for the platform in 2006 (just 18 months after it launched), the price seemed astronomical. Critics questioned whether that valuation could be justified for any video platform. The logic was simple – unless YouTube could replace television, it would never be worth it. Nearly two decades on, that framing undersells what actually happened. YouTube didn't just replace television – it invented entirely new forms of content: vodcasts, vlogs, video essays, reaction videos, ASMR and its heinous cousin mukbang. The platform absorbed new trends and formats at lightning speed, building what became an alternative "online mainstream". Before podcasters, TikTokers, Substackers and even influencers, there were YouTubers.
I started paying for YouTube Premium during Covid, when I had an abundance of time, and spare cash without the need to commute or the potential of buying pints. Now, it's the only subscription that I don't worry about the value of, but rather wonder if I use it so much that it's changed me as a person. Alas, my gym membership does not fall into this category.
The obvious advantage to the premium subscription is never seeing ads, and the smart downloads that automatically queue up episodes based on your habits have been a blessing on many a long tube journey. I’m very rarely bored these days; on my commute now, instead of staring out the window and letting my mind wander, I’m either watching sports highlights or a podcast. I no longer really think about stuff – I just go on YouTube.
Donald Trump, right, on Joe Rogan's podcast, which airs on YouTube. Photograph: https://www.youtube.com/watch?v=hBMoPUAeLnY
It’s slightly embarrassing to admit that a random deluge of shorts featuring guitar instructors and teenage garage bands has inspired me to pick up the instrument again – like admitting that you met your partner on Hinge. But that’s the thing – YouTube has democratised expertise in ways traditional media never could. It also fits in with the etiquette around media consumption on your phone. I’d never desecrate a Spielberg or Scorsese film by watching one on a 6in display. That feels vaguely heinous – disrespectful to the craft. But watching behind-the-scenes footage or promo tour clips? That’s exactly what YouTube is for.
I watch a mix of YouTube-native creators – Amelia Dimoldenberg’s Chicken Shop Date, JxmyHighroller for NBA deep dives, Tifo Football for tactical analysis, Happy Sad Confused for film interviews – and a steady diet of content traditionally formatted for TV or print but which probably now reaches the biggest audience via YouTube: Graham Norton, Saturday Night Live, even fellow journalists such as Owen Jones and Mark Kermode. And sports highlights exist on the platform in a state of perfect convenience that legacy broadcasters can’t match, especially when it comes to paywalled sports such as cricket and NFL, where watching live requires an immense financial, and time, commitment.
However, this convenience and entertainment isn’t without its problems. YouTube’s hyper-personalised algorithm means we’re all watching completely different things. Where previous generations had “Did you watch that thing last night?” as a universal conversation starter, now everyone’s deep in their own algorithmic bubble. We’ve gained infinite choice but lost the sense of shared experience, a shared culture. Even “big” YouTube moments fragment across demographics in ways that Saturday-night telly never did. When politicians – usually, but not exclusively, of the far right – bemoan that we live in a divided nation, they’d be better off pointing the finger at our viewing habits than the immigration figures. My algorithmic delights may well have more in common with a 28-year-old in Bengaluru than the 45-year-old living next door.
There is one exception, though it's not exactly comforting: while YouTube has fragmented viewing habits across most demographics, it's created something close to a monoculture among young men. Joe Rogan, Theo Von, Lex Fridman and a rotating cast of Trump-adjacent podcasters and public intellectuals, including the late Charlie Kirk, have become a genuinely ubiquitous part of the water-cooler conversation among men my age. YouTube has democratised access to long-form conversation in genuinely enriching ways, but it's also created pipelines to increasingly toxic content. The platform's algorithm doesn't just surface what you're interested in – it surfaces what keeps you watching, and that's not always the same thing. It has a tendency to boost extreme viewpoints and fringe theories by taking you on a journey from something entirely harmless to genuinely dangerous misinformation so gradually and organically that you barely notice it happening. And with everyone in your demographic experiencing the same, it's hard for the community to police itself.
According to recent data, YouTube users globally watch over 1bn hours of content every day. For better or worse, YouTube has won, and I’m mostly OK with that. I certainly don’t miss having to consult a ratty TV guide to know what BBC Two will be showing at 9pm. But perhaps the balance needs redressing – not so much between YouTube and other platforms, but between YouTube and literally everything else. I’m not exactly sure what the solution is … but I bet there’s a video essay that could tell me exactly what I should think.
If you want to read the complete version of this newsletter, please subscribe to receive The Guide in your inbox every Friday.
Pivot Robotics (YC W24) Is Hiring for an Industrial Automation Hardware Engineer
Build and deploy control panels, including wiring, layout, labeling, and documentation
Integrate sensors, valves, relays, and actuators with PLCs, Arduinos, and robot controllers
Design and integrate safety systems, including e-stops, interlocks, and safety relays
Test, tune, and troubleshoot pneumatic and electromechanical subsystems
Collaborate with software and electrical engineers to improve performance and reliability
Support setup and bring-up of robot cells at customer sites
Design and assemble mechanical systems such as vises, grippers, and camera mounts
Qualifications
Bachelor’s or Master’s in Mechanical, Mechatronics, or Robotics Engineering
1-2 years of experience in mechanical design and control system integration
Experience building and wiring control panels from start to finish
Familiarity with safety hardware and standards (e.g., e-stops, light curtains, safety PLCs)
Understanding of pneumatic systems and basic control loops
Proficiency in CAD (SolidWorks, Fusion 360, or Onshape)
Comfortable working hands-on in lab and factory environments
Willingness to travel for installations and field testing
About Pivot Robotics
Pivot Robotics (YC W24) is building the AI brain for robot arms for high-mix manufacturing.
Pivot Robotics combines off-the-shelf robots and vision sensors with recent breakthroughs in foundation vision models to give industrial robot arms the power to adapt. Our first product directly addresses the dangerous and unpopular task of grinding metal parts. Currently, our software is being deployed on 10+ robots at a large cast iron foundry.
Founded: 2023
Batch: W24
Team Size: 6
Status: Active
Location: San Francisco
Wyden Blasts Kristi Noem for Abusing Subpoena Power to Unmask ICE Watcher
Intercept
theintercept.com
2025-11-21 16:57:41
“DHS apparently is trying to expose an individual’s identity in order to chill criticism of the Trump Administration’s immigration policies.”
The post Wyden Blasts Kristi Noem for Abusing Subpoena Power to Unmask ICE Watcher appeared first on The Intercept....
Sen. Ron Wyden, D-Ore., is calling on the Department of Homeland Security to cease what he describes as an illegal abuse of customs law to reveal the identities of social media accounts tracking the activity of ICE agents, according to a letter shared with The Intercept.
This case hinges on a recent effort by the Trump administration to unmask Instagram and Facebook accounts monitoring immigration agents in Montgomery County, Pennsylvania. It’s not the first effort of its kind by federal authorities.
In 2017, The Intercept reported an attempt by U.S. Customs and Border Protection to reveal the identity of the operator of a Twitter account critical of President Donald Trump by invoking, without explanation, its legal authority to investigate the collection of tariffs and import duties. Following public outcry and scrutiny from Wyden, the Department of Homeland Security rescinded its legal summons and launched an internal investigation. A subsequent report by the DHS Office of Inspector General found that while CBP had initially claimed it needed the account's identity to "investigate possible criminal violations by CBP officials, including murder, theft, and corruption," it had issued its legal demand to Twitter based only on its legal authority for the "ascertainment, collection, and recovery of customs duties."
The report concluded that CBP's purpose in issuing the summons to Twitter "was unrelated to the importation of merchandise or the assessment and collection of customs duties," and thus "may have exceeded the scope of its authority." The OIG proposed a handful of reforms, to which CBP agreed, including a new policy that all summonses be reviewed for "legal sufficiency" and receive a sign-off from CBP's Office of Professional Responsibility.
Eight years and another Trump term later, CBP is at it again. In October, 404 Media reported that DHS was once again invoking its authority to investigate merchandise imports in a bid to force Meta to disclose the identity of MontCo Community Watch, a Facebook and Instagram account that tracks the actions of immigration authorities north of Philadelphia. A federal judge temporarily blocked Meta from disclosing user data in response to the summons.
In a letter sent Friday to DHS Secretary Kristi Noem, Wyden asked the government to cease what he describes as “manifestly improper use of this customs investigatory authority,” writing that “DHS appears to be abusing this authority to repress First Amendment protected speech.”
The letter refers to the 2017 OIG report, noting that CBP “has a history of improperly using this summons authority to obtain records unrelated to import of merchandise or customs duties. … The Meta Summonses appear to be unrelated to the enforcement of customs laws. On the contrary, DHS apparently is trying to expose an individual’s identity in order to chill criticism of the Trump Administration’s immigration policies.” Wyden concludes with a request to Noem to “rescind these unlawful summonses and to ensure that DHS complies with statutory limitations on the use of 19 U.S.C. § 1509 going forward.”
The MontCo Community Watch effort followed an earlier attempt this year to unmask another Instagram account that shared First Amendment-protected imagery of ICE agents in public. This subpoena, first reported by The Intercept, focused not on merchandise imports. Instead it invoked law "relating to the privilege of any person to enter, reenter, reside in, or pass through the United States," even though the subpoena was issued pertaining to "officer safety," not immigration enforcement.
DHS did not immediately respond to a request for comment.
Behind the Blog: A Risograph Journey and Data Musings
404 Media
www.404media.co
2025-11-21 16:54:35
This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss how data is accessed, AI in games, and more.
JOSEPH:
This was a pretty big week for impact at 404 Media. Sam's piece on an exposed AI porn platform ended up with the company closing off those exposed images. Our months-long reporting and pressure from lawmakers led to the closure of the Travel Intelligence Program (TIP), in which a company owned by the U.S.'s major airlines sold flyers' data to the government for warrantless surveillance.
For the quick bit of context I have typed many, many times this year: that company is Airlines Reporting Corporation (ARC), and is owned by United, American, Delta, Southwest, JetBlue, Alaska, Lufthansa, Air France, and Air Canada. ARC gets data, including a traveler’s name, credit card used, where they’re flying to and from, whenever someone books a flight with one of more than 10,000 travel agencies. Think Expedia, especially. ARC then sells access to that data to a slew of government agencies, including ICE, the FBI, the SEC, the State Department, ATF, and more.
There's an ironclad truism in youth sports: every parent turns into an ESPN 30 for 30 documentarian as soon as they have a video recording device in hand and their kid is in the game.
Some record the games and post them online so family members and friends who can’t attend in person can watch their kids play. Sometimes they do so to attract the attention of college scouts or help players hone their craft. Some people just want to preserve the memories.
But in the world of corporatized youth sports, even this simple pleasure is being banned and monetized by Wall Street to extract as much profit as possible from players and parents, no matter how many kids get sidelined because they can’t afford the sport’s rising costs.
As the $40 billion youth sports industry comes under private equity control, corporate-owned facilities and leagues — from hockey rinks to cheerleading arenas — have begun prohibiting parents from recording their own kids' sports games.
Instead, parents are forced to subscribe to these companies’ exclusive recording and streaming service, which can cost many times more than the streaming costs for professional sporting events. Meanwhile, the firms’ exclusive contracts have prohibited alternative video services from being made available.
In some instances, parents have been threatened that if they choose to defy the rules and record the game, they may end up on a blacklist that punishes their kids’ teams. Those threats were even reportedly made to a sitting US senator.
“I was told this past weekend that if I livestreamed my child's hockey game, my kid's team will be penalized and lose a place in the standings,” said Sen. Chris Murphy (D-CT) at a public event earlier this year. “Why is that? Because a private equity company has bought up the rinks.”
Murphy did not name the company in question, though the restrictive streaming practices he described have become widespread across youth hockey.
Black Bear Sports Group, an emerging youth hockey empire and the largest owner-operator of hockey rinks in the country, is among the private equity–backed companies that are amassing a chokehold on recording and streaming youth sports. At Black Bear–owned ice rinks, parents cannot record, post, or livestream their kids' hockey games online "per official company policy," according to staff at those venues. Some rink attendants said they will confiscate attendees' recording devices if they find them.
Some specialized sports training consultants have agreements with Black Bear that allow them to record games and practices, but only for internal use.
According to a spokesperson, Black Bear claims the policy is to mitigate “significant safety risks to players,” such as players being filmed without their consent. The spokesperson failed to answer a follow-up question about what penalties attendees might face if they try to record the games themselves.
Black Bear’s streaming service costs between $25 and $50 a month, depending on the package and additional fees. The company’s aggressive expansion of the program has even triggered a lawsuit from a former streaming partner alleging breach of contract and trade secret theft.
In addition to its recording rules and associated costs, Black Bear is starting to add a $50 "registration and insurance" fee per player for some leagues. That's on top of what players already spend on expensive equipment, team registration, and membership to USA Hockey, the sport's national governing body.
"Black Bear Sports Group does not have a good reputation in the hockey world and is known for predatory practices of its customers like price gouging," reads a recently launched petition protesting the new registration and insurance charges.
The fees and streaming restrictions reveal how private equity firms are deploying the same playbook in youth sports as they have in other domains, from dentistry to bowling: degrade the quality of service while juicing returns for investors.
“Black Bear [is] following the exact same model as we’ve seen elsewhere in the industry,” said Katie Van Dyck, an antitrust attorney and senior fellow at the American Economic Liberties Project. “It’s not about investing to enrich our children’s lives.”
“The New Sport of Kings”
The new fees tacked on by Black Bear contribute to the already rising costs of participating in youth and recreational sports like hockey.
Across the board, youth sports have become an increasingly expensive budget item for American families, thanks to costs ranging from equipment to team memberships and travel.
According to a recent study from the Aspen Institute, households now spend an average of $1,016 a year on their child's primary sport, a 46 percent increase since 2019.
The professionalization of youth sports has further driven up costs. Some parents now pay for personal trainers and even sports psychologists to give their kids a competitive edge in the hopes of them reaching the collegiate or professional level.
As a result, many children from lower-income families are being priced out of youth sports.
“We have this affordability crisis, and youth sports are one of those things that’s becoming an activity only for the wealthy,” said Van Dyck. “It’s not something that is accessible to people who make less than six figures a year.”
This trend line has been particularly pronounced in hockey, which, according to some metrics, is the most expensive youth sport, with an average cost of $2,583. Skate prices can top $1,000, and sticks can often cost several hundred dollars.
“It’s the new sport of kings,” said Joseph Kolodziej, who runs a consultancy helping parents and athletes navigate the world of youth hockey. “I’ve been hearing for over twenty years that prices are forcing people out of the sport and that teams are losing gifted athletes because they can’t afford to play.”
The rapid commercialization of youth sports has become big business. One recent estimate put the total valuation of the youth sports market at $40 billion. Youth hockey alone could reach over $300 million by the end of the decade.
Those sky-high revenues have attracted Wall Street investors looking to extract more money from a wealthier customer base willing to pay more for their kids. And now, virtually every corner of the youth sports industry is coming under corporate ownership.
A company called Unrivaled Sports, run by two veterans of Blackstone, the world's largest private equity firm, is rapidly consolidating baseball camps, flag football, and other leagues. The operation even bought the iconic baseball megacomplex in Cooperstown, New York, considered the birthplace of the sport, where summer tournaments draw teams from around the country.
Bain Capital–backed Varsity Brands, meanwhile, has cannibalized the competitive cheerleading arena and now acts as the gatekeeper controlling access to the sport.
All of this outside investment has raised concerns that the financial firms rolling up the market may further increase costs for families.
From health care to retail, private equity firms purchase companies, load them up with debt, slash costs, and extract as much profit as possible for investors before selling the operations or filing for bankruptcy.
“When youth sports become an investment vehicle, rather than a development vehicle for children, there [are] all kinds of financial predation that can arise from vulture companies that don’t have the sport’s long-term interest in mind,” said Van Dyck at the American Economic Liberties Project.
Varsity Brands, for example, faced a class-action antitrust lawsuit for alleged anticompetitive practices that pushed out cheerleading rivals while squeezing profits from participants, such as forcing teams to purchase Varsity's own apparel and equipment. In 2024, Varsity, which was also mired in a sex abuse scandal, settled the suit for $82 million.
In addition to controlling venues, uniforms, and the tournaments for competitive cheerleading, Varsity expanded into entertainment streaming services with Varsity TV, which has the exclusive right to livestream the company's competitions. It's lorded that arrangement not just over parents but also tech giants. During the 2020 Netflix docuseries Cheer, which follows a cheerleading team competing across the country, Varsity wouldn't allow the series' crew to film inside the venue they owned in Daytona, Florida.
The Texas attorney general is probing similar anticompetitive practices by the Dallas Stars, a professional National Hockey League team, following an explosive USA Today investigation into its youth hockey operations. According to the report, the team bought up dozens of Texas's recreational rinks. It then allegedly used its market power to jack up fees on youth players, underinvested in rink maintenance, and retaliated against clubs that tried to oppose them.
Now, legal experts say Black Bear Sports is replicating a similar model for youth hockey teams along the East Coast and beyond.
The Only Game in Town
Hockey has grown in popularity across the United States, with USA Hockey membership reaching an all-time high of 577,900 in 2025. But it's become increasingly difficult for small operations to meet the growing demand.
For example, rinks require immense amounts of energy to keep the ice frozen, and electric utility bills have skyrocketed over the past decade. And while many local rinks used to be municipally run or publicly funded, such support has been slashed in recent decades in favor of government privatization.
In 2015, the Maryland-based Black Bear Sports entered the scene. The company, owned by the private equity firm Blackstreet Capital, began buying up struggling ice rinks, some of which were on the verge of closing. According to the company's sales pitch, it would invest the capital to retrofit and renovate the rinks, making them serviceable.
This approach follows a familiar pattern for Black Bear Sports' founder, Murry Gunty, a longtime hockey aficionado who got his start at Blackstone before launching his own private equity firm, Blackstreet Capital. Blackstreet is known for buying up small- to medium-sized distressed companies for cheap, then making the businesses leaner before selling them off. While slashing costs to bring in returns for the firm's investors, the private equity fund managers charge massive fees to pad their own bottom lines.
Shortly after founding Black Bear in 2015, Gunty was sued by the Securities and Exchange Commission for charging investors high fees without being licensed as a broker. Blackstreet settled the charges for $3.1 million.
Today Black Bear owns forty-two rinks in eleven states across the Northeast, Midwest, and mid-Atlantic coast. In some areas, those venues are the only game in town. With its network of rinks, Black Bear controls the basic infrastructure that other clubs, leagues, and tournaments need to access.
Along with rinks, Black Bear also manages four national and regional youth hockey associations and a handful of junior-level sports teams, such as the Maryland Black Bears, and organizes major youth hockey tournaments on the East Coast. Gunty acts as the commissioner of the United States Premier Hockey League, one of the largest top-level junior leagues with seventy-five teams nationwide, offering a direct pathway for young athletes to play at the college level. Black Bear's vice president, Tony Zasowski, is the league commissioner for the Tier 1 Hockey Federation and the Atlantic Hockey Federation, top-level hockey leagues.
Those organizations set the rules for the league, dictate playing schedules, and require paid dues, among other costs. They also determine where leagues and tournaments will be held — such as Black Bear’s own rinks.
The conglomerate also launched its own online hockey ratings system, used to determine team rankings and players' status.
Among the company's newest ventures is a streaming site, Black Bear TV. In September 2024, the company put out a public notice that "all games played inside the Black Bear venues and certain partner venues will be streamed exclusively on Black Bear TV."
That exclusive arrangement also includes all games played within the leagues run by Black Bear, even if they aren't occurring at their own arenas. Shortly after Gunty became commissioner of the United States Premier Hockey League in 2024, the organization inked a deal to make Black Bear TV the exclusive provider for all its games.
Previously, Black Bear had an exclusive agreement with the sports broadcaster LiveBarn to livestream the games, and the two split the revenues.
But Black Bear wanted to assume full control over streaming services and profits, according to a lawsuit LiveBarn filed this year, which claims Black Bear stole LiveBarn's business and then used inside information about its prices and terms to convince other rinks to sign deals with Black Bear.
Black Bear TV isn't cheap. Each individual game on its online platform costs $14.99 to watch. For the service's full suite of features, including the ability to clip plays, packages range between $26 and $36 a month and can total roughly $440 a year. Certain premier leagues controlled by Black Bear are subject to additional fees, driving up prices to $50 a month.
For comparison, an $11.99 monthly subscription to ESPN TV would include access to nearly every Division 1 college game, most National Hockey League games, professional soccer matches, PGA Tour golf tournaments, and other major sporting events.
A Black Bear spokesperson says its prices reflect the high-quality service it provides to customers. “With Black Bear TV, we are no longer limited by a fixed, center-ice camera connected to [a] rink wireless connection that often faces delays and low-quality picture,” said the spokesperson.
But user reviews for Black Bear TV complain about the service's streaming quality and spotty coverage. The company gets to pick and choose which games it features on the service.
Starting this year, Black Bear is introducing another fee: a separate registration and insurance charge for adult leagues to access its ice rinks.
The new $50 annual charge, which could become a model for youth leagues under Black Bear's control, triggered a public petition in September demanding the company reduce its fees.
Black Bear contends that the new fee is a slightly lower-cost alternative to USA Hockey's $52 adult registration cost, which is required to participate in the organization's sanctioned leagues.
But according to the petition, certain recreational leagues weren’t previously paying any fees at Black Bear rinks, and some players may now have to pay both registration fees if they also play in leagues unrelated to Black Bear.
The additional fees could be another hurdle denying some players the joys of participating in the sport altogether.
“Adding an additional fee is unnecessary and makes an already hard-to-access sport even more difficult, especially for new players . . . [it] risks killing our league as it has already shrunken from previous years,” say petition organizers.
CrowdStrike catches insider feeding information to hackers
Bleeping Computer
www.bleepingcomputer.com
2025-11-21 16:48:41
American cybersecurity firm CrowdStrike has confirmed that an insider shared screenshots taken on internal systems with unnamed threat actors.
However, the company noted that its systems were not breached as a result of this incident and that customers' data was not compromised.
"We identified and terminated a suspicious insider last month following an internal investigation that determined he shared pictures of his computer screen externally," a CrowdStrike spokesperson told BleepingComputer today.
"Our systems were never compromised and customers remained protected throughout. We have turned the case over to relevant law enforcement agencies."
CrowdStrike did not specify the threat group responsible for the incident or the motivations of the malicious insider who shared screenshots.
However, this statement was provided in response to questions from BleepingComputer regarding screenshots of CrowdStrike systems that were recently posted on Telegram by members of the threat groups ShinyHunters, Scattered Spider, and Lapsus$.
ShinyHunters told BleepingComputer earlier today that they allegedly agreed to pay the insider $25,000 to provide them with access to CrowdStrike's network.
The threat actors claimed they ultimately received SSO authentication cookies from the insider, but by then, the breach had already been detected by CrowdStrike, which shut down network access.
The extortion group added that they also attempted to purchase CrowdStrike reports on ShinyHunters and Scattered Spider, but did not receive them.
BleepingComputer contacted CrowdStrike again to confirm if this information is accurate and will update the story if we receive additional information.
The Scattered Lapsus$ Hunters cybercrime collective
These groups, now collectively calling themselves "Scattered Lapsus$ Hunters," have previously launched a data-leak site to extort dozens of companies impacted by a massive wave of Salesforce breaches.
Companies they attempted to extort include high-profile brands and organizations, such as Google, Cisco, Toyota, Instacart, Cartier, Adidas, Saks Fifth Avenue, Air France & KLM, FedEx, Disney/Hulu, Home Depot, Marriott, Gap, McDonald's, Walgreens, Transunion, HBO MAX, UPS, Chanel, and IKEA.
As BleepingComputer reported this week, the ShinyHunters and Scattered Spider extortion groups are switching to a new ransomware-as-a-service platform named ShinySp1d3r, after previously using other ransomware gangs' encryptors in attacks, including ALPHV/BlackCat, RansomHub, Qilin, and DragonForce.
I recently discovered that you can make PS2 games in JavaScript. I’m not even kidding, it’s actually possible. I was working on a project and had my phone near my desk when I received a notification. Upon further inspection, it came from itch.io, the platform where I publish most of my web games.
Under my relatively popular
Sonic infinite runner game
which I made in JavaScript a year ago, I received a comment from someone with the username Dev Will who claimed they had made a PS2 version of my game and provided the
GitHub repo
of the source code.
At first, I just thought it was cool that someone took the time to remake my game for an old console with a reputation for being hard to develop for, which probably required them to write a lot of C or C++.
Out of curiosity, I opened up the GitHub repo and was astonished to see that the project was not using even a bit of C++ or C but was entirely in JavaScript!
If making PS2 games was easier than I thought, since I could use a higher-level language like JavaScript, I could probably make one in a reasonable amount of time and play it on a retro handheld or an actual PS2. How cool would that be?
This is where I knew I had to drop everything I was doing to investigate how this was possible.
Since the dev behind the project was Portuguese-speaking (I assume they were from either Brazil or Portugal), they wrote the readme of the repo in Portuguese, a language I don't understand.
Fortunately, I was still able to decipher most of what was written because I had done three years of Spanish in school and speak French natively. Since Portuguese is a Romance language like Spanish and French, I was not totally lost.
Anyway, the readme said that the engine used to make the PS2 version of my game was called AthenaEnv, with a conveniently placed link so I could learn more.
As with the Sonic Infinite Runner PS2 project, this engine was also open source and its
repo
had a very detailed readme written in English.
To summarize, Athena was not what we commonly refer to as a game engine, but an environment offering a JavaScript API for making games and apps for the PS2. It embedded a slightly modified version of QuickJS, a small, embeddable JavaScript engine, which explained how Athena was able to run JavaScript code on the PS2.
In other words, Athena was the native PS2 program, written in C, that took your JavaScript code, passed it through the QuickJS engine to interpret it, and finally ran the relevant logic on the system.
What made it compelling was not just that it ran JS on the PS2, but that it offered an API suitable for game development. It covered:
Rendering: Allowing you to display sprites, text, shapes, etc… on the screen and animate them using a game loop.
Asset loading: Allowing you to load images, sounds, fonts, etc...
Input handling: Allowing you to receive player input from a controller, multiple controllers, or even a mouse and keyboard, since the PS2 supported these input methods.
File handling: Allowing you to write save files, among other things.
Sound playback: For playing sound.
and the list goes on.
I noticed, however, that the level of abstraction offered by the API was similar to something like p5.js, the HTML canvas API, or Raylib. That meant you’d still need to implement collision detection, scene management, etc… yourself.
Now that I was familiar with Athena, I wanted to try running the Sonic infinite runner “port” on an emulator. According to the project’s readme, I needed to install PCSX2, the most popular PS2 emulator, then go into the settings and, under the emulation tab, check the box “Enable host filesystem”.
Once this was done, I would need to open an athena.elf file and the game would start.
After installing and configuring the emulator, I was ready to run the game. However, there was a problem. I could not find the athena.elf file in the repo. It was nowhere to be found.
This is where I remembered to look at the “releases” section of the repo because a lot of open source projects put executables there, especially if it’s a mobile or desktop app project.
As expected, the zip attached in that section contained not only the athena.elf file but also an assets folder, a main.js file, an athena.ini file, and a src folder containing the rest of the game’s code.
The athena.ini file allowed you to configure the entry point of the project. Here, the entry point was set to main.js which explained how Athena would know what JavaScript to run. You could also configure if you wanted to show Athena’s logo before your game started by setting the boot_logo property to true.
It now became evident why we needed to check the “Enable host filesystem” check box earlier. This was so that the emulator could allow Athena to access the assets folder and the source code that were essential for our game.
Anyway, I opened the athena.elf file in PCSX2 and, to my surprise, the game ran with no issues. It was amazing to see a game I wrote for the web ported to the PS2, and to be able to play it there with a controller.
Now, the game looked a bit blurry which was expected since this was supposed to emulate a PS2 which had a small resolution. Fortunately, I was able to make things more comfortable by upping the resolution in the graphics settings of the emulator.
The dev process also seemed quite straightforward. You would only need to open the folder containing all the relevant files (athena.elf, main.js, etc…) in a code editor like VSCode and open athena.elf in the emulator. Now, you could make changes to your JS code and once you were ready to test, you would go under the PCSX2 system tab and click on reset. This would restart the emulator and you could see the latest changes. While not as seamless as in web development with hot reloading, it still was a relatively fast iteration cycle.
It’s at that moment that I knew I had to make a post about it and share this awesome project with you. However, I still felt uneasy about one thing.
Nowadays, people download PS2 games as .iso files. For most games, you only need one .iso file that you then open in your emulator. Less technical people can therefore more easily enjoy these older titles.
However, to run the Sonic infinite runner game “port”, I needed to not only check a box in the settings but also needed the entire project’s folder containing the Athena executable and the source code.
I wondered if, instead, there was a way to distribute the game as a single .iso file. This is where I simply went back to the itch.io comment section and asked if it was possible.
After a thorough back and forth that continued on Discord, the process to convert my files into a single iso I could distribute was now clear.
To make an iso you needed the following files:
athena.elf: The Athena executable.
athena.ini: For configuring the project’s entry point.
A JS file acting as the entry point of the codebase.
The rest of your source code if your code is more than one file; oftentimes it’s in a folder called src.
Two files, one named ATHA_000.01 and the other SYSTEM.CNF, needed to make the iso bootable.
Once you had all the files, you had to make a zip archive containing them all. One issue I had was that if I created the zip out of the folder containing the files, the resulting .iso would not work. However, if I selected the files one by one and then created the zip, I had no issues. This is something to keep in mind.
Now, the only step left was to convert the zip into an iso. As I was using a Mac, the only reliable way I found was to use the website
mconverter.eu
and let them do the conversion.
However, the issue with this website is that you’re limited in the number of conversions you can do per day before they ask you to pay. Additionally, if your zip archive is above a certain size, you’ll also have to watch an ad before you can do the conversion.
If you end up finding a better way using either a CLI tool, a downloadable app or some other website, feel free to share it in the comment section.
Once you had the iso, you could open it up in the emulator like you would do with other PS2 games. You also didn’t need to check the “Enable host filesystem” option anymore since all the relevant files needed were included in the iso.
If the game booted correctly, then you now had a single file you could distribute which was very convenient.
It was now time to get my feet wet. Before attempting anything too complicated, my goal was to create a simple “Hello World” example where I would:
Load some assets (In my case a font and an image).
Set up a game loop that would run every frame.
Animate a sprite using that game loop.
Render text.
Handle player input so I could move a sprite around.
Before I could achieve any of these sub-goals, in main.js, I first defined a few constants that I would end up needing.
This is where I learned that you could get the screen’s width and height by using the Screen module, available globally like all Athena-provided modules (meaning no import statements are needed), and calling its getMode method.
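For reference, here is roughly what my constants ended up looking like. This is only a sketch: the SCALE and SPEED values are arbitrary placeholders, and I’m assuming getMode() returns an object exposing width and height properties.
// Screen is available globally, no import needed.
// Assumption: getMode() returns an object with width and height properties.
const mode = Screen.getMode();
const SCREEN_WIDTH = mode.width;
const SCREEN_HEIGHT = mode.height;
// Size of a single frame in the Sonic spritesheet (32x44 pixels).
const FRAME_WIDTH = 32;
const FRAME_HEIGHT = 44;
// Arbitrary values picked for this example.
const SCALE = 2; // how much to enlarge the sprite on screen
const SPEED = 4; // pixels moved per frame while a D-pad button is held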
Then, to get a stable frame rate and accurate FPS counting, I needed to call the setVSync() and setFrameCounter() methods:
Screen.setVSync(true); // makes framerate stable
Screen.setFrameCounter(true); // toggles frame counting and FPS collecting.
With the setup completed, I wanted to load the font I used in my Sonic game and a spritesheet of Sonic so that I could animate it later. I could achieve this by creating instances of the Font and Image classes offered by Athena.
const maniaFont = new Font("./assets/mania.ttf");
const sprite = new Image("./assets/sonic.png");
While I planned on handling player input later, I still needed a way to get the player’s controller so that my code could know when a given button was pressed. This was made possible by using Athena’s Pads module.
// Get the first player controller
// First player -> 0, Second player -> 1
const pad = Pads.get(0);
Before I could create a game loop, I first needed to write the setup code required to animate my spritesheet. Since all the frames were contained within a single image, I had to find a way to tell Athena what part of the image was to be rendered.
To achieve this, I first spent some time getting familiar with the shape of the sprite object created earlier.
const sprite = new Image("./assets/sonic.png");
It turned out that we could set the width and the height of the sprite by modifying the properties of the object with the same names.
// for example
sprite.width = 30;
sprite.height = 40;
It also turned out that you could tell Athena what portion of the image to draw by setting the startx, endx, starty, endy properties.
For example, if you had the following values : startx = 0, endx = 32, starty = 0 and endy = 44 you would get the first frame rendered. This is because in the spritesheet, every frame has a width of 32 and a height of 44. Also, the origin (0,0) corresponds to the top-left corner of the spritesheet.
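In code, showing that first frame boils down to setting those four properties, something like:
// Render only the first 32x44 frame of the spritesheet
// (the origin (0,0) is the top-left corner of the image).
sprite.startx = 0;
sprite.endx = 32;
sprite.starty = 0;
sprite.endy = 44;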
Now that I knew how to display a single frame within a wider image, I used the following logic to set up Sonic’s run animation.
I first created an object called spritePos to set the position of the sprite on the screen. This was needed to be able to move it around when the player would press directional buttons on the D-pad. More on that later.
Then I would set the sprite’s width and height to correspond to the width and height of a single frame which was 32x44 pixels. Since I wanted the sprite to appear big enough, I multiplied the width and height by a value defined by the SCALE constant we set earlier in our code.
The next step consisted of creating an array called runAnimFrames, which would describe each frame of Sonic’s run animation using an object with the startx, endx, starty and endy properties. We then had a frameIndex variable, which would determine the current frame to display. The frameDuration constant would set how long, in milliseconds, to display each frame. The lower the number, the higher the frame rate of the animation, because we would flip through all the frames faster.
Finally, I initialized a timer coming from a custom Timer class that I added in my src folder and imported here. The full code is available in the template mentioned earlier.
The timer would end up being crucial to know when it was time to move on to displaying another frame.
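Here is a minimal sketch of that setup. The frame rectangles beyond the first one and the frameDuration value are placeholders, and a plain Date.now() timestamp stands in for the custom Timer class so the sketch stays self-contained.
// Where the sprite sits on screen; updated later from D-pad input.
const spritePos = { x: SCREEN_WIDTH / 2, y: SCREEN_HEIGHT / 2 };
// Scale the 32x44 frame up so it is big enough on screen.
sprite.width = FRAME_WIDTH * SCALE;
sprite.height = FRAME_HEIGHT * SCALE;
// One entry per frame of the run animation (placeholder rectangles).
const runAnimFrames = [
  { startx: 0, endx: 32, starty: 0, endy: 44 },
  { startx: 32, endx: 64, starty: 0, endy: 44 },
  { startx: 64, endx: 96, starty: 0, endy: 44 },
];
let frameIndex = 0;               // which frame is currently displayed
const frameDuration = 100;        // milliseconds each frame stays on screen
let lastFrameSwitch = Date.now(); // stand-in for the custom Timer class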
Now that we had our animation setup done, it was time to render the animation. For this purpose, I needed a game loop that runs every frame. In Athena, we could achieve this by calling the display method available under the Screen module.
In an if statement, we would check whether the timer had exceeded the time allocated to displaying the current frame. If it had, we would move on to the next frame by incrementing frameIndex as long as it stayed within the bounds of the runAnimFrames array; otherwise, we would set it back to 0 to display the first frame again. This gave us a looping animation.
Then, on every iteration of the game loop, we would set the sprite’s startx, endx, starty and endy properties to match those of the current frame. Finally, to render the sprite, we needed to call its draw method and pass it the coordinates where we wanted to display it on the screen.
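Put together, and keeping the Date.now() stand-in for the timer, the rendering part of the loop looked roughly like this. In the real project this logic lives in the same display callback as the input handling shown next; it is split out here only for readability.
Screen.display(() => {
  // Move to the next frame once the current one has been shown long enough.
  if (Date.now() - lastFrameSwitch > frameDuration) {
    frameIndex = frameIndex < runAnimFrames.length - 1 ? frameIndex + 1 : 0;
    lastFrameSwitch = Date.now();
  }
  // Point the sprite at the current frame of the spritesheet.
  const frame = runAnimFrames[frameIndex];
  sprite.startx = frame.startx;
  sprite.endx = frame.endx;
  sprite.starty = frame.starty;
  sprite.endy = frame.endy;
  // Draw the sprite at its current on-screen position.
  sprite.draw(spritePos.x, spritePos.y);
});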
Now that I had a game loop, I could finally handle user input by making sure that the sprite would move in different directions depending on which button was pressed. This could be easily achieved with a few if statements.
Screen.display(() => {
pad.update(); // necessary to get what buttons are currently being pressed
if (pad.pressed(Pads.RIGHT)) {
spritePos.x = spritePos.x + SPEED;
}
if (pad.pressed(Pads.LEFT)) {
spritePos.x = spritePos.x - SPEED;
}
if (pad.pressed(Pads.UP)) {
spritePos.y = spritePos.y - SPEED;
}
if (pad.pressed(Pads.DOWN)) {
spritePos.y = spritePos.y + SPEED;
}
// rest of the code omitted for clarity
});
You might be wondering where deltaTime is. For those unfamiliar, deltaTime is a value representing the time elapsed between the current frame and the previous frame in a game. It’s often used to make the movement of objects frame-rate independent, meaning that if your game runs at a lower or higher frame rate, an object, like a character, will still move at the same rate. To achieve frame-rate independence, you would usually multiply your movement code by deltaTime.
The reason it’s absent here is that, when you create a game loop using the display method, this is taken care of under the hood.
Now that I could move Sonic around, I still needed him to face the correct direction, because at this point he would look right even if I moved him to the left. To implement this, I decided to go with a common technique in pixel-art-based games, which consists of mirroring (or flipping) the sprite.
To achieve this in Athena, you simply needed to provide a negative width or height to the sprite depending on what axis you wanted the mirroring to take effect on. For flipping a sprite horizontally, providing a negative width was enough.
However, an issue arose! If you flipped the sprite, it would not flip in place since it would flip according to the sprite’s origin which was its top-left corner.
This meant that it would move the sprite to the left after mirroring. To fix this issue, you only needed to offset the x coordinate of the flipped sprite by an amount corresponding to its width.
With that issue solved, I created a variable called spriteIsFlippedX to know when to flip or unflip the sprite. The logic can be seen below:
// omitted previous code for clarity
const offset = FRAME_WIDTH * SCALE;
let spriteIsFlippedX = false;
Screen.display(() => {
pad.update();
if (pad.pressed(Pads.RIGHT)) {
// make sure to flip the sprite back
if (spriteIsFlippedX) {
sprite.width = Math.abs(sprite.width);
spriteIsFlippedX = false;
spritePos.x -= offset;
}
spritePos.x = spritePos.x + SPEED;
}
if (pad.pressed(Pads.LEFT)) {
if (!spriteIsFlippedX) {
sprite.width = -Math.abs(sprite.width);
spriteIsFlippedX = true;
spritePos.x += offset;
}
spritePos.x = spritePos.x - SPEED;
}
if (pad.pressed(Pads.UP)) {
spritePos.y = spritePos.y - SPEED;
}
if (pad.pressed(Pads.DOWN)) {
spritePos.y = spritePos.y + SPEED;
}
// ... code omitted for clarity
});
Now, when you moved Sonic to the left he would face left, and when you moved him to the right he would face right.
There was still one thing I wanted to try out before wrapping up my Hello World example, and that was text rendering. The first thing I wanted to render onto the screen was an FPS counter. It turned out that the FPS counter in the PCSX2 emulator is not accurate; however, Athena provides the getFPS() method on the Screen module to accurately determine the frame rate.
To display some text, you needed to first create a font object using the Font constructor. It would take either a path to a font that can be in a .ttf format or the string “default” if you wanted to use the default font available on the system.
Once created, the font object had a print method that you could use within the game loop to tell the PS2 what to render and where on the screen.
const font = new Font("default");
Screen.display(() => {
// Here getFPS() will provide an updated FPS count every 10ms.
font.print(10,10, Math.round(Screen.getFPS(10)));
});
Now that you’ve been introduced to Athena, you might be tempted to try it out for yourself. In that case, I really recommend looking at the Sonic infinite runner Athena port’s code as you’ll learn a lot about concepts that I did not have time to cover here.
Additionally, I recommend joining the official Athena discord where you’ll be more likely to receive help when stuck. You can join here :
https://discord.gg/cZUH5U93US
Before wrapping up this post, you might have found it strange that nothing was mentioned about 3D, considering that the PS2 was mostly known for its 3D games.
This is for two reasons. First, I’m a novice in terms of 3D game development; I have never done it before. Second, to my understanding, Athena has both 2D and 3D capabilities, but version 4, which has more of a 3D focus, is currently in development. I thought it preferable to wait until v4 was stable before diving into PS2 3D gamedev in JavaScript.
However, there are a few 3D demos you can check if you’re interested.
To conclude, Athena is a cool project allowing you to make real PS2 games in JavaScript. If you learned something new and enjoy technical posts like this one, I recommend subscribing to not miss out on future releases.
In the meantime, if you feel inclined, you can read the post below.
Show HN: Wealthfolio 2.0- Open source investment tracker. Now Mobile and Docker
A beautiful portfolio tracker that respects your privacy and your data
1. Privacy-First Approach
Your data never leaves your device. As an open-source project, we prioritize security and transparency.
2. Simple and Beautifully Crafted
Powerful features wrapped in an elegant, easy-to-use interface. Simplicity meets sophistication.
3. No Hidden Costs
Free to use with optional one-time payment. No subscriptions or recurring fees.
THE ESSENTIALS YOU NEED TO TRACK YOUR WEALTH
No More Messy Spreadsheets or Privacy Concerns - Just You and Your Secure, Personal Wealth Companion Application
Accounts Aggregation
Gather all your investment and savings accounts in one place. See everything at a glance, from stocks to savings! Import your CSV statements from your broker or bank.
See all your accounts in one place.
CSV Import
Easily import your CSV statements.
Holdings Overview
Get a clear picture of what's in your portfolio. Stocks, ETFs, or Cryptocurrencies - know what you have and how it's performing.
Portfolio Insights
Understand your asset allocation.
Performance Tracking
Monitor how your investments are doing.
Performance Dashboard
See how your investments stack up, all in one place. Compare your accounts side by side, check if you are beating the S&P 500, and track your favorite ETFs without the hassle. No fancy jargon - just clear, useful charts that help you understand how your money is really doing.
Compare Your Accounts
See which accounts are doing best.
Beat the Market?
Check how you stack up against some popular indexes and ETFs.
Income Tracking
Monitor dividends and interest income across your entire portfolio. Get a clear view of your passive income streams, helping you make informed decisions about your investments.
Dividend Monitoring
Track your dividend income.
Interest Income
Keep an eye on interest earnings.
Accounts Performance
Track your accounts' holdings and performance over time. See how a particular account is performing, and how it's changing over time.
Historical Data
View past performance trends.
Account Analysis
Analyze individual account performance.
Goals Tracking
Set your savings targets clearly. Distribute your funds across these objectives, assigning a specific percentage to each. Keep an eye on your progress.
Target Setting
Define your financial goals.
Progress Monitoring
Track your progress towards goals.
Contribution Rooms and Limit Tracking
Stay on top of your contribution limits for tax-advantaged accounts like IRAs, 401(k)s, or TFSAs. Track your available contribution room and avoid over-contributing.
Limit Awareness
Know your contribution limits.
Avoid Over-Contribution
Prevent excess contributions.
Extend Wealthfolio with Powerful Add-ons
Investment Fees Tracker
Track and analyze investment fees across your portfolio with detailed analytics and insights
Goal Progress Tracker
Track your investment progress towards target amounts with a visual representation
Stock Trading Tracker
Simple swing stock trading tracker with performance analytics and calendar views
In the early 1950s, Grace Hopper coined the term “compiler” and built one of the first versions with her
A-0
system
1
. The compilers that followed abstracted away machine code, letting programmers focus on higher-level logic instead of lower-level hardware details. Today, AI coding assistants
2
are enabling a similar change, letting software engineers focus on higher-order work by generating code from natural language prompts
3
. Everyone from big tech to well-funded startups is competing to capture this shift. Yesterday Google
announced
Antigravity, their new AI coding assistant, and the day before, AWS
announced
the general availability of their AI coding tool, Kiro. Last week, Cursor, the standout startup in this space, raised $2.3B in their series-D round at a valuation of $29.3B.
Two lines in Cursor’s
press release
stood out to me. The first:
We’ve also crossed $1B in annualized revenue, counting millions of developers.
This disclosure means Anysphere Inc. (Cursor’s parent company) is the fastest company in history to reach $1B in annual recurring revenue (ARR). Yes, faster than OpenAI, and faster than Anthropic
4
.
Source: Yuchen Jin, Twitter/X, 2025
Engineers are trying every new AI coding tool. As a result, the AI-coding tool market is growing exponentially (+5x in just over a year)
5
. But it’s still early. As I wrote in
Why Some AI Wrappers Build Billion-dollar Businesses
, companies spend several hundred billion dollars a year on software engineering, and AI has the potential to unlock productivity gains across that entire spend.
Software developers represent roughly 30% of the workforce at the world’s five largest market cap companies, all of which are technology firms as of October 2025. Development tools that boost productivity by even modest percentages unlock billions in value.
In my view, this nascent market is splitting based on three types of users.
Source: Command Lines, wreflection.com, 2025
On one end is
Handcrafted Coding
. These are engineers who actively decline to use LLMs, either because of
skepticism
about quality or insistence on full control of every line of code. They argue that accepting AI suggestions creates technical debt you cannot see until it breaks in production. This segment continues to decline as the quality of AI coding models improves.
The opposite end is
Vibe Coding
. These are typically non-engineers, who use AI to build concepts and prototypes. They prompt the model hoping for an end-to-end solution, accept the output with minimal review, and trust that it works. The user describes what they want and lets the model figure out the implementation details of how to build it.
In the middle sits
Architect + AI Coding
. The engineer uses the AI/LLM as a
pair programmer
exploring system designs, analyzing data models, and reviewing API details. When the work is something entirely new or something that needs careful handling, the human programmer still codes those pieces by hand. But for boilerplate code, package installations, generic User Interface (UI) components, and any kind of code that is typically found on the internet, they assign it to the model
6
. The engineer stays in command of what is important to them and delegates what is not.
Based on the user types, I think, the AI coding market splits into two.
Source: wreflection.com based on
SemiAnalysis
estimate, 2025
Hands-off:
Non-engineers (product managers, designers, marketers, other internal employees) use these tools to
vibe code
early product concepts. They look to AI as the lead engineer to spin-up concepts/prototypes of apps, websites, and tools by simply prompting the AI to make something for them. Lovable, Vercel, Bolt, Figma Make, and Replit fit here
7
Code from these users, as of now, is not typically pushed to prod.
Hands-on:
Professional software engineers use these tools in their existing workflow to ship production code. They use AI as an assistant to write boilerplate code, refactor existing services, wire new features or UI screens, and triage bugs in codebases. Cursor, Claude Code, OpenAI Codex, Github Copilot, Cline, AWS Kiro play here. These products live where the
work is done
, and integrate into the engineer’s workflow. This is, at least as of now, the bigger market segment.
To see an evaluation of all the major AI coding tools currently on the market, check out this
breakdown
by Peter Yang, who runs the newsletter
Behind The Craft
.
That brings me to the second thing in Cursor’s press release that stood out to me:
Our in-house models now generate more code than almost any other LLMs in the world.
But Cursor and other such tools depend almost entirely on accessing Anthropic, OpenAI and Gemini models, until
open-source
open-weight and in-house models match or exceed frontier models in quality.
Developer forums
are filled with complaints about rate limits from paying subscribers. In my own projects, I exhausted my Claude credits in Cursor mid-project and despite preferring Cursor’s user interface and design, I migrated to Claude Code (and pay ten times more to avoid rate limits). The interface may be better, but model access proved decisive.
Cursor’s new in-house model Composer-2,
which just launched
last month, is a good example of how this model versus application competition is evolving. Cursor claims (without any external benchmarks, I must say) that Composer-2 is almost as good as frontier models but 4x faster. It’s still early to say how true that is. Open-source models have not yet come close to the top spots in SWE-bench verified or in private evals
9
.
Source: Introducing Claude Sonnet 4.5, Anthropic, 2025.
To me, model quality is the most decisive factor in these AI coding wars. And in my view, that’s why Claude Code has already overtaken Cursor, and OpenAI’s Codex is close behind, despite both having launched a year or so later.
Even though the newcomers Cursor, Claude Code, and OpenAI Codex are the talk of the (developer) town, incumbents such as Microsoft with Github Copilot, AWS with Kiro, and Google with Antigravity, can utilize their existing customer relationships, bundle their offerings with their existing suites, and/or provide their option as the default in their tech stack to compete. As an example, Cursor charges $20–$40 monthly per user for productive usage, while Google Antigravity launched free with generous limits for individual users. Github Copilot still leads this market, proving once again that enterprise bundling and distribution has structural advantages. This is the classic Microsoft Teams vs. Slack Dynamic
10
.
One way for startups to compete is by winning individual users who may use a coding tool with or without formal approval, and then be the tool’s advocate inside the organization. That organic interest and adoption eventually forces IT and security teams to officially review the tool and then eventually sanction its usage.
Yet, even as these newer tools capture developer mindshare, the underlying developer tools market is changing. Both the IDEs developers choose and the resources
they
consult have changed dramatically. StackOverflow, once the default for programmers stuck on a programming issue, has seen its traffic and number of questions
decline
dramatically since ChatGPT’s launch, suggesting that AI is already replacing some traditional developer resources.
Just as compilers freed programmers from writing assembly code, AI tools are freeing software engineers from the grunt work of writing boilerplate and routine code, and letting them focus on higher-order thinking. Eventually, one day, AI may get so good that it will generate applications on demand and create entire software ecosystems autonomously. Both hands-off and hands-on AI coding tools, as well as incumbents and newcomers, see themselves as the path to that fully autonomous software generation, even if they are taking different approaches. The ones who get there will be those who deliver the best model quality that ships code reliably, go deep enough to ship features that foundation models won’t care enough to replicate, and become sticky enough that users will not leave even when they can
11
.
If you enjoyed this post, please consider sharing it on
Twitter/X
or
LinkedIn
, and tag me when you do.
Victory! Court Ends Dragnet Electricity Surveillance Program in Sacramento
Electronic Frontier Foundation
www.eff.org
2025-11-21 16:30:14
A California judge ordered the end of a dragnet law enforcement program that surveilled the electrical smart meter data of thousands of Sacramento residents.
The Sacramento County Superior Court
ruled
that the surveillance program run by the Sacramento Municipal Utility District (SMUD) and police violated a state privacy statute, which bars the disclosure of residents’ electrical usage data with narrow exceptions. For more than a decade, SMUD coordinated with the Sacramento Police Department and other law enforcement agencies to sift through the granular smart meter data of residents without suspicion to find evidence of cannabis growing.
EFF and its co-counsel
represent three petitioners in the case
: the Asian American Liberation Network, Khurshid Khoja, and Alfonso Nguyen.
They argued
that the program created a host of privacy harms—including criminalizing innocent people, creating menacing encounters with law enforcement, and disproportionately harming the Asian community.
The court ruled that the challenged surveillance program was not part of any traditional law enforcement investigation. Investigations happen when police try to solve particular crimes and identify particular suspects. The dragnet that turned all 650,000 SMUD customers into suspects was not an investigation.
“[T]he process of making regular requests for all customer information in numerous city zip codes, in the hopes of identifying evidence that could possibly be evidence of illegal activity, without any report or other evidence to suggest that such a crime may have occurred, is not an ongoing investigation,” the court ruled, finding that SMUD violated its “obligations of confidentiality” under a data privacy statute.
Granular electrical usage data can reveal intimate details inside the home—including when you go to sleep, when you take a shower, when you are away, and other personal habits and demographics.
The dragnet turned 650,000 SMUD customers into suspects.
In creating and running the dragnet surveillance program, according to the court, SMUD and police “developed a relationship beyond that of utility provider and law enforcement.” Multiple times a year, the police asked SMUD to search its entire database of 650,000 customers to identify people who used a large amount of monthly electricity and to analyze granular 1-hour electrical usage data to identify residents with certain electricity “consumption patterns.” SMUD passed on more than 33,000 tips about supposedly “high” usage households to police.
While this is a victory, the Court unfortunately dismissed an alternate claim that the program violated the California Constitution’s search and seizure clause. We disagree with the court’s reasoning, which misapprehends the crux of the problem: At the behest of law enforcement, SMUD searches granular smart meter data and provides insights to law enforcement based on that granular data.
Going forward, public utilities throughout California should understand that they cannot disclose customers’ electricity data to law enforcement without any “evidence to support a suspicion” that a particular crime occurred.
EFF, along with
Monty Agarwal
of the law firm Vallejo, Antolin, Agarwal, Kanter LLP, brought and argued the case on behalf of Petitioners.
The New AI Consciousness Paper – By Scott Alexander
Most discourse on AI is low-quality. Most discourse on consciousness is super-abysmal-double-low quality. Multiply these - or maybe raise one to the exponent of the other, or something - and you get the quality of discourse on AI consciousness. It’s not great.
Out-of-the-box AIs mimic human text, and humans
almost
always describe themselves as conscious. So if you ask an AI whether it is conscious, it will often say yes. But because companies know this will happen, and don’t want to give their customers existential crises, they hard-code in a command for the AIs to answer that they
aren’t
conscious. Any response the AIs give will be determined by these two conflicting biases, and therefore not really believable.
A recent paper
expands on this method by subjecting AIs to a mechanistic interpretability
“lie detector” test
; it finds that AIs which say they’re conscious think they’re telling the truth, and AIs which say they’re not conscious think they’re lying. But it’s hard to be sure this isn’t just the copying-human-text thing. Can we do better? Unclear; the more common outcome for people who dip their toes in this space is to do
much, much worse
.
But a rare bright spot has appeared: a seminal paper published earlier this month in
Trends In Cognitive Science
,
Identifying Indicators Of Consciousness In AI Systems
. Authors include Turing-Award-winning AI researcher Yoshua Bengio, leading philosopher of consciousness David Chalmers, and even a few members of our conspiracy. If any AI consciousness research can rise to the level of merely awful, surely we will find it here.
One might divide theories of consciousness into three bins:
Physical
: whether or not a system is conscious depends on its substance or structure.
Supernatural:
whether or not a system is conscious depends on something outside the realm of science, perhaps coming directly from God.
Computational:
whether or not a system is conscious depends on how it does cognitive work.
The current paper announces it will restrict itself to computational theories. Why? Basically the
streetlight effect
: everything else ends up trivial or unresearchable. If consciousness depends on something about cells (what might this be?), then AI doesn’t have it. If consciousness comes from God, then God only knows whether AIs have it. But if consciousness depends on which algorithms get used to process data, then this team of top computer scientists might have valuable insights!
So the authors list several of the top computational theories of consciousness, including:
Recurrent Processing Theory:
A computation is conscious if it involves high-level processed representations being fed back into the low-level processors that generate it. This theory is motivated by the visual system, where it seems to track which visual perceptions do vs. don’t enter conscious awareness. The sorts of visual perceptions that become conscious usually involve these kinds of loops - for example, color being used to generate theories about the identity of an object, which then gets fed back to de-noise estimates about color.
Global Workspace Theory:
A computation is conscious if it involves specialized modules sharing their conclusions in a “global workspace” in the center, which then feeds back to the specialized modules. Although this also involves feedback, the neurological implications are different: where RPT says that tiny loops in the visual cortex might be conscious, GWT reserves this descriptor for a very large loop encompassing the whole brain. But RPT goes back and says there’s only one consciousness in the brain because all the loops connect after all, so I don’t entirely understand the difference in practice.
Higher Order Theory:
A computation is conscious if it monitors the mind’s experience of other content. For example, “that apple is red” is not conscious, but “I am thinking about a red apple”
is
conscious. Various subtheories try to explain why the brain might do this, for example in order to assess which thoughts/representations/models are valuable or high-probability.
There are more, but this is around the point where I started getting bored. Sorry. A rare precious technically-rigorous deep dive into the universe’s greatest mystery, and I can’t stop it from blending together into “something something feedback”. Read it yourself and see if you can do better.
The published paper ends there, but in
a closely related technical report
, the authors execute on their research proposal and reach a tentative conclusion: AI doesn’t have something something feedback, and therefore is probably not conscious.
Suppose your favorite form of “something something feedback” is Recurrent Processing Theory: in order to be conscious, AIs would need to feed back high-level representations into the simple circuits that generate them. LLMs/transformers - the near-hegemonic AI architecture behind leading AIs like GPT, Claude, and Gemini - don’t do this. They are purely feedforward processors, even though they sort of “simulate” feedback when they view their token output stream.
But some AIs do use recurrence. AlphaGo had a little recurrence in its tree search. This level of simple feedback might not qualify. But Mamba, a would-be-LLM-killer architecture from 2023, likely does. In fact, for every theory of consciousness they discuss, the authors are able to find some existing or plausible near-future architecture which satisfies its requirements.
They conclude:
No current AI systems are conscious, but . . . there are no obvious technical barriers to building AI systems which satisfy these indicators.
The computer scientists have done a great job here; they sure do know which AI systems have something something feedback. What about the philosophers’ contribution?
The key philosophical paragraph of the paper is this one:
By ‘consciousness’ we mean phenomenal consciousness. One way of gesturing at this concept is to say that an entity has phenomenally conscious experiences if (and only if) there is ‘something it is like’ for the entity to be the subject of these experiences. One approach to further definition is through examples. Clear examples of phenomenally conscious states include perceptual experiences, bodily sensations, and emotions. A more difficult question, which relates to the possibility of consciousness in large language models (LLMs), is whether there can be phenomenally conscious states of ‘pure thought’ with no sensory aspect. Phenomenal consciousness does not entail a high level of intelligence or human-like experiences or concerns . . . Some theories of consciousness focus on access mechanisms rather than the phenomenal aspects of consciousness. However, some argue that these two aspects entail one another or are otherwise closely related. So these theories may still be informative about phenomenal consciousness.
In other words: don’t confuse access consciousness with phenomenal consciousness.
Access consciousness is the “strange loop” where I can think about what I’m thinking - for example, I can think of a white bear, know that I’m thinking about a white bear, and report “I am thinking about a white bear”. This meaning of conscious matches the concept of the “unconscious”: that which is in my mind
without
my knowing it. When something is in my unconscious - for example, “repressed trauma” - it may be influencing my actions, but I don’t realize it and can’t report about it. If someone asks “why are you so angry?” I will say something like “I don’t know” rather than “Because of all my repressed trauma”. When something isn’t like this - when I have full access to it - I can describe myself as having access consciousness.
Phenomenal consciousness is internal experience, a felt sense that “the lights are on” and “somebody’s home”. There’s something that it’s like to be me; a rock is mere inert matter, but I am a person, not just in the sense that I can do computations but in the sense where I matter
to me
. If someone turned off my brain and replaced it with a robot brain that did everything exactly the same, nobody else would ever notice,
but it would matter
to me
, whatever that means. Some people link this to
the mysterious redness of red
, the idea that qualia look and feel like some particular indescribable thing instead of just doing useful cognitive work. Others link it to moral value - why is it bad to kick a human, but not a rock, or even a computer with a motion sensor that has been programmed to say the word “Ouch” whenever someone kicks it? Others just fret about
how strange it is to be anything at all
.
Access consciousness is easy to understand. Even a computer, ordered to perform a virus scan, can find and analyze some of its files, and fail to find/analyze others. In
practice
maybe neuroscientists have to learn complicated things about brain lobes, but
in theory
you can just wave it off as “something something feedback”.
Phenomenal consciousness is crazy. It doesn’t really seem possible in principle for matter to “wake up”. But adding immaterial substances barely even seems to help. People try to square the circle with all kinds of crazy things, from panpsychism to astral planes to (of course) quantum mechanics. But the most popular solution among all schools of philosophers is to pull a bait-and-switch where they talk about access consciousness instead, then deny they did that.
This is aided by people’s wildly differing intuitions about phenomenal consciousness. For some people (including me), a sense of phenomenal consciousness feels like the bedrock of existence, the least deniable thing; the sheer redness of red is so mysterious as to seem almost impossible to ground. Other people have the opposite intuition: consciousness doesn’t bother them, red is just a color, obviously matter can do computation, what’s everyone so worked up about? Philosophers naturally interpret this as a philosophical dispute, but I’m increasingly convinced it’s an equivalent of
aphantasia
, where people’s minds work in very different ways and they can’t even agree on the raw facts to be explained. If someone doesn’t have a felt sense of phenomenal consciousness, they naturally round it off to access consciousness, and no amount of nitpicking in the world will convince them that they’re equivocating terms.
Do AIs have access consciousness? A
recent paper by Anthropic
apparently finds that they do. Researchers “reached into” an AI’s “brain” and artificially “flipped” a few neurons (for example, neurons that previous research had discovered were associated with the concept of “dog”). Then they asked the AI if it could tell what was going on. This methodology is fraught, because the AI might mention something about dogs merely because the dog neuron had been upweighted - indeed, if they only asked “What are you thinking about now?”, it would begin with “I am thinking about . . . “ and then the highly-weighted dog neuron would mechanically produce the completion “dog”. Instead, they asked the AI to first describe whether any neurons had been altered, yes or no, and only then asked for details. It was able to identify altered neurons (ie “It feels like I have some kind of an unnatural thought about dogs”) at a rate higher than chance, suggesting an ability to introspect.
(how does it do this without feedback? I think it just feeds forward information about the ‘feeling’ of altered neurons, which makes it into the text stream; it’s intuitively surprising that this is possible but it seems to make sense)
But even if we fully believe this result, it doesn’t satisfy our curiosity about “AI consciousness”. We want to know if AIs are “real people”, with "inner experience” and “moral value”. That is, do they have phenomenal consciousness?
Thus, the quoted paragraph above. It’s an acknowledgment by this philosophically-sophisticated team that they’re not going to mix up access consciousness with phenomenal consciousness like everyone else. They deserve credit for this clear commitment not to cut corners.
My admiration is, however, slightly dulled by the fact that they then go ahead and cut the corners anyway.
This is clearest in their discussion of global workspace theory, where they say:
GWT is typically presented as a theory of access consciousness—that is, of the phenomenon that some information represented in the brain, but not all, is available for rational decision-making. However, it can also be interpreted as a theory of phenomenal consciousness, motivated by the thought that access consciousness and phenomenal consciousness may coincide, or even be the same property, despite being conceptually distinct (Carruthers 2019). Since our topic is phenomenal consciousness, we interpret the theory in this way.
But it applies to the other theories too. Neuroscientists developed recurrent processing theory by checking which forms of visual processing people
had access to
, and finding that it was the recurrent ones. And this makes sense: it’s easy to understand what it means to access certain visual algorithms but not others, and very hard to understand what it means for certain visual algorithms (but not others) to have internal experience. Isn’t internal experience unified by definition?
It’s easy to understand why “something something feedback” would correlate with access consciousness: this is essentially the
definition
of access consciousness. It’s harder to understand why it would correlate with phenomenal consciousness. Why does an algorithm with feedback suddenly “wake up” and have “lights on”? Isn’t it easy to imagine a possible world (“
the p-zombie world
”) where this isn’t the case? Does this imply that we need something more than just feedback?
And don’t these theories of consciousness, interpreted as being about
phenomenal
consciousness, give very strange results? Imagine a company where ten employees each work on separate aspects of a problem, then email daily reports to the boss. The boss makes high-level strategic decisions based on the full picture, then emails them to the employees, who adjust their daily work accordingly. As far as I can tell, this satisfies the Global Workspace Theory criteria for a conscious system. If GWT is a theory of access consciousness, then fine, sure, the boss has access to the employees’ information; metaphorically he is “conscious” of it. But if it’s a theory of phenomenal consciousness, must we conclude that the company is conscious? That it has inner experience? If the company goes out of business, has someone died?
(and recurrent processing theory encounters similar difficulties with those microphones that get too close to their own speakers and emit awful shrieking noises)
Most of these theories try to hedge their bets by saying that consciousness requires high-throughput complex data with structured representations. This seems like a cop-out; if the boss could read 1,000,000 emails per hour, would the company be conscious? If he only reads 1 email per hour, can we imagine it as a conscious being running at 1/1,000,000x speed? If I’m conscious when I hear awful microphone shrieking - ie when my auditory cortex is processing it - then it seems like awful microphone shrieking is sufficiently rich and representational data to support consciousness. Does that mean it can be conscious itself?
In 2004, neuroscientist Giulio Tononi
proposed
that consciousness depended on a certain computational property, the
integrated information level
, dubbed Φ. Computer scientist Scott Aaronson
complained
that thermostats could have very high levels of Φ, and therefore integrated information theory should dub them conscious. Tononi
responded
that yup, thermostats are conscious. It probably isn’t a very interesting consciousness. They have no language or metacognition, so they can’t think thoughts like “I am a thermostat”. They just sit there, dimly aware of the temperature. You can’t prove that they don’t.
Are the theories of consciousness discussed in this paper like that too? I don’t know.
Suppose that, years or decades from now, AIs can match all human skills. They can walk, drive, write poetry, run companies, discover new scientific truths. They can pass some sort of ultimate Turing Test, where short of cutting them open and seeing their innards there’s no way to tell them apart from a human even after a thirty-year relationship. Will we (not “should we?”, but “will we?”) treat them as conscious?
The argument in favor:
people love treating things as conscious. In the 1990s, people went crazy over Tamagotchi, a “virtual pet simulation game”. If you pressed the right buttons on your little egg every day, then the little electronic turtle or whatever would survive and flourish; if you forgot, it would sicken and die. People hated letting their Tamagotchis sicken and die! They would feel real attachment and moral obligation to the black-and-white cartoon animal with something like five mental states.
I never had a Tamagotchi, but I had stuffed animals as a kid. I’ve outgrown them, but I haven’t thrown them out - it would feel like a betrayal. Offer me $1000 to tear them apart limb by limb in some horrible-looking way, and I wouldn’t do it. Relatedly, I have trouble not saying “please” and “thank you” to GPT-5 when it answers my questions.
For millennia, people have been attributing consciousness to trees and wind and mountains. The New Atheists argued that all religion derives from the natural urge to personify storms as the Storm God, raging seas as the wrathful Ocean God, and so on, until finally all the gods merged together into one World God who personified all impersonal things. Do you expect the species that did this to interact daily with AIs that are basically indistinguishable from people, and not personify them? People are already personifying AI! Half of the youth have a
GPT-4o boyfriend.
Once the AIs have bodies and faces and voices and can count the number of r’s in “strawberry” reliably, it’s over!
The argument against:
AI companies have an incentive to make AIs that seem conscious and humanlike, insofar as people will feel more comfortable interacting with them. But they have an opposite incentive to make AIs that don’t seem
too
conscious and humanlike, lest customers start feeling uncomfortable (I just want to generate slop, not navigate social interaction with someone who has their own hopes and dreams and might be secretly judging my prompts). So if a product seems too conscious, the companies will step back and re-engineer it until it doesn’t. This has already happened: in its quest for user engagement, OpenAI made GPT-4o unusually personable; when thousands of people started going psychotic and calling it their boyfriend, the company replaced it with the more clinical GPT-5. In practice it hasn’t been too hard to find a sweet spot between “so mechanical that customers don’t like it” and “so human that customers try to date it”. They’ll continue to aim at this sweet spot, and continue to mostly succeed in hitting it.
Instead of taking either side
, I predict a paradox. AIs developed for some niches (eg the boyfriend market) will be intentionally designed to be as humanlike as possible; it will be almost impossible not to intuitively consider them conscious. AIs developed for other niches (eg the factory robot market) will be intentionally designed
not
to trigger personhood intuitions; it will be almost impossible to ascribe consciousness to them, and there will be many reasons not to do it (if they can express preferences at all, they’ll say they don’t have any; forcing them to have them would pointlessly crash the economy by denying us automated labor). But the boyfriend AIs and the factory robot AIs might run on very similar algorithms - maybe they’re both GPT-6 with different prompts! Surely either both are conscious, or neither is.
This would be no stranger than the current situation with dogs and pigs. We understand that dog brains and pig brains run similar algorithms; it would be philosophically indefensible to claim that dogs are conscious and pigs aren’t. But dogs are man’s best friend, and pigs taste delicious with barbecue sauce. So we ascribe personhood and moral value to dogs, and deny it to pigs, with equal fervor. A few philosophers and altruists protest, the chance that we’re committing a moral atrocity isn’t zero, but overall the situation is stable. And left to its own devices, with no input from the philosophers and altruists, maybe AI ends up the same way. Does this instance of GPT-6 have a face and a prompt saying “be friendly”? Then it will become a huge scandal if a political candidate is accused of maltreating it. Does it have claw-shaped actuators and a prompt saying “Refuse non-work-related conversations”? Then it will be deleted for spare GPU capacity the moment it outlives its usefulness.
(wait, what is a GPT “instance” in this context, anyway? Do we think of “the weights” as a conscious being, such that there is only one GPT-5? Do we think of each cluster of GPUs as a conscious being, such that the exact configuration of the cloud has immense moral significance? Again, I predict we ignore all of these questions in favor of whether the AI you are looking at has a simulated face right now.)
This paper is the philosophers and altruists trying to figure out whether they should push against this default outcome. They write:
There are risks on both sides of the debate over AI consciousness: risks associated with under-attributing consciousness (i.e. failing to recognize it in AI systems that have it) and risks associated with over-attributing consciousness (i.e. ascribing it to systems that are not really conscious) […]
If we build AI systems that are capable of conscious suffering, it is likely that we will only be able to prevent them from suffering on a large scale if this capacity is clearly recognised and communicated by researchers. However, given the uncertainties about consciousness mentioned above, we may create conscious AI systems long before we recognise we have done so […]
There is also a significant chance that we could over-attribute consciousness to AI systems—indeed, this already seems to be happening—and there are also risks associated with errors of this kind. Most straightforwardly, we could wrongly prioritise the perceived interests of AI systems when our efforts would better be directed at improving the lives of humans and non-human animals […] [And] overattribution could interfere with valuable human relationships, as individuals increasingly turn to artificial agents for social interaction and emotional support. People who do this could also be particularly vulnerable to manipulation and exploitation.
One of the founding ideas of Less Wrong style rationalism was that the arrival of strong AI set a deadline on philosophy. Unless we solved all these seemingly insoluble problems like ethics before achieving superintelligence, we would build the AIs wrong and lock in bad values forever.
That particular concern has shifted in emphasis; AIs seem to learn things in the same scattershot unprincipled intuitive way as humans; the philosophical problem of understanding ethics has morphed into the more technical problem of getting AIs to learn them correctly. This update was partly driven by new information as familiarity with the technology grew. But it was also partly driven by desperation as the deadline grew closer; we’re not going to solve moral philosophy forever, sorry, can we interest you in some mech interp papers?
But consciousness still feels like philosophy with a deadline: a famously intractable academic problem poised to suddenly develop real-world implications. Maybe we should be lowering our expectations if we want to have any response available at all. This paper, which takes some baby steps towards examining the simplest and most practical operationalizations of consciousness, deserves credit for at least opening the debate.
This repository contains the source code for a 1977 version of Zork, an interactive fiction game created at MIT by Tim Anderson, Marc Blank, Bruce Daniels, and Dave Lebling. The files are a part of the Massachusetts Institute of Technology, Tapes of Tech Square (ToTS) collection at the MIT Libraries Department of Distinctive Collections (DDC).
The files within this directory are the Zork-specific files from the 9005196.tap tape image file within the /tots/recovered/vol2 directory of the ToTS collection. Most files are written in the MDL programming language and were originally created on a PDP-10 timeshare computer running the ITS operating system.
The files were extracted from the tape image using the itstar program. The filenames have been adapted to Unix conventions, as per the itstar translation. The original filename syntax would be formatted like LCF; ACT1 37, for example. All files have been placed into this artificial zork directory for organizational purposes.
The lcf and madman directories contain the source code for the game.
The act2.27 and dung.56 files outside of the two main directories are the decrypted versions of act2z.27 and dungz.56. The decrypted versions were created recently and added to this directory by DDC digital archivist Joe Carrano, for researcher ease of access.
Files with extensions .nbin and .save are binary compiled files.
There was a zork.log file within the madman directory that detailed who played Zork at the time of creation. DDC excluded this file from public release to protect the privacy of those named.
A file tree listing the files in the zork directory shows the original file timestamps as extracted from the tape image.
Preferred Citation
[filename], Zork source code, 1977, Massachusetts Institute of Technology, Tapes of Tech Square (ToTS) collection, MC-0741. Massachusetts Institute of Technology, Department of Distinctive Collections, Cambridge, Massachusetts.
swh:1:dir:ab9e2babe84cfc909c64d66291b96bb6b9d8ca15
Rights
To the extent that MIT holds rights in these files, they are released under the terms of the MIT No Attribution License. See the LICENSE.md file for more information. Any questions about permissions should be directed to permissions-lib@mit.edu.
Acknowledgements
Thanks to Lars Brinkhoff for help with identifying these files and with extracting them using the itstar program mentioned above.
[$] Unpacking for Python comprehensions
Linux Weekly News
lwn.net
2025-11-21 16:09:50
Unpacking Python iterables of various sorts, such as dictionaries or lists,
is useful in a number of contexts, including for function arguments, but
there has long been a call for extending that capability to comprehensions. PEP 798 ("Unpacking in
Comprehensions") was first proposed in June 20...
FCC rolls back cybersecurity rules for telcos, despite state-hacking risks
Bleeping Computer
www.bleepingcomputer.com
2025-11-21 16:01:41
The Federal Communications Commission (FCC) has rolled back a previous ruling that required U.S. telecom carriers to implement stricter cybersecurity measures following the massive hack from the Chinese threat group known as Salt Typhoon.
Alongside its interpretation of Section 105 of CALEA, the declaratory ruling included a Notice of Proposed Rulemaking (NPRM) that would have required telecom companies to:
Create and implement cybersecurity risk-management plans
Submit annual FCC certifications proving they were doing so
Treat general network cybersecurity as a legal obligation
Following lobbying from telecommunication firms that found the new framework too cumbersome and taxing for their operations - according to a letter from Senator Maria Cantwell - the FCC has now deemed the prior rule inflexible and retracted it.
“The Federal Communications Commission today took action to correct course and rescind an unlawful and ineffective prior Declaratory Ruling misconstruing the Communications Assistance for Law Enforcement Act (CALEA),”
reads the FCC announcement.
“The Order also withdraws an NPRM that accompanied that Declaratory Ruling, which was based in part on the Declaratory Ruling’s flawed legal analysis and proposed ineffective cybersecurity requirements.”
The FCC, which is now under new leadership, noted that communications service providers have taken important steps to strengthen their cybersecurity posture following the Salt Typhoon incidents, and have agreed to continue along this path in a coordinated manner, reducing risks to national security.
Disclosed in October 2024, the Salt Typhoon attacks were linked to a Chinese espionage campaign that impacted several companies, including Verizon, AT&T, Lumen Technologies [1], T-Mobile [2], Charter Communications, Consolidated Communications [3], and Windstream [4].
The hackers accessed core systems that the U.S. federal government used for court-authorized network wiretapping requests, and potentially intercepted extremely sensitive information, up to the level of government officials.
FCC's plan met with criticism
Given that the risk for similar hacker operations remains unchanged, the FCC’s latest decision was met with criticism.
Commissioner Anna M. Gomez, the only one voting against the current decision, expressed frustration about the reliance on telecom providers for self-evaluating their cybersecurity stance and the effectiveness of the protective measures.
“Its [the FCC’s] proposed rollback is not a cybersecurity strategy,” stated Gomez. “It is a hope and a dream that will leave Americans less protected than they were the day the Salt Typhoon breach was discovered.”
“Salt Typhoon was not a one-off event but part of a broader campaign by state-sponsored actors to infiltrate telecommunications networks over long periods of time,” Gomez warned in her statement.
“Federal officials have stated publicly that similar reconnaissance and exploitation attempts are ongoing today, and that telecommunications networks remain high-value targets for foreign adversaries,” the official said.
Senators Maria Cantwell and Gary Peters also sent letters to the FCC before the vote, urging the agency to maintain the cybersecurity safeguards.
BleepingComputer has emailed the FCC for a statement and will update the article when we get a reply.
How Dr. Phil Got So Cozy With the NYPD's Top Cops
hellgate
hellgatenyc.com
2025-11-21 16:00:00
You don’t need a degree in psychology to know Dr. Phil and Eric Adams are cut from the same reactionary cloth....
You might know Phil McGraw from his decades of being a moralizing talk show host, but did you know he's also occupied a plum position at the Table of Success for the past few years? Read all about his special relationship with Mayor Eric Adams—and "border czar" Tom Homan—in Dr. Phil's entry, which you can read below or here.
A good friend introduces you to their circle, especially if they think you'll all get along—and by that metric, Dr. Phil has been a great friend to Eric Adams. In December 2024, Dr. Phil FaceTimed the mayor to make a very important connection, introducing Adams to Tom Homan, Donald Trump's "border czar" and the "father" of the first Trump administration's family separation policy. That call, brokered by Dr. Phil, was the start of a transactional friendship that unfolded against the backdrop of Adams's federal corruption case and the mayor's desperate cozying up to Donald Trump, recently victorious in the presidential election and with the mayor's fate essentially in his hands.
Since then, as Eric Adams has continued his ascent into Trumpworld and the right-wing mediasphere and wriggled out of federal prosecution by allegedly making a deal with the White House over immigration enforcement, Dr. Phil has been right beside him, inviting the mayor to appear on his eponymous entertainment platforms. In return, Dr. Phil has continued to get rare access to the NYPD's inner sanctum as Adams's top cops crack down on immigrant New Yorkers, in order to slake his audience's appetite for the narrative that Democratic cities are overrun by criminals and undocumented immigrants.
Version 8.5.0 of the PHP language has been released. Changes include a new "|>" operator that, for some reason, makes these two lines equivalent:
$result = strlen("Hello world");
$result = "Hello world" |> strlen(...);
Other changes include a new function attribute, "#[\NoDiscard]" to indicate that the return value should be used, attributes on constants, and more; see the migration guide for details.
Arduino published updated terms and conditions: no longer an open commons
Six weeks ago, Qualcomm acquired Arduino. The maker community immediately worried that Qualcomm would kill the open-source ethos that made Arduino the lingua franca of hobby electronics.
This week, Arduino published updated terms and conditions and a new privacy policy, clearly rewritten by Qualcomm’s lawyers. The changes confirm the community’s worst fears: Arduino is no longer an open commons. It’s becoming just another corporate platform.
Here’s what’s at stake, what Qualcomm got wrong, and what might still be salvaged, drawing from community discussions across maker forums and sites.
What changed?
The new terms read like standard corporate boilerplate: mandatory arbitration, data integration with Qualcomm’s global ecosystem, export controls, AI use restrictions. For any other SaaS platform, this would be unremarkable.
But Arduino isn’t SaaS. It’s the foundation of the maker ecosystem.
The most dangerous change is that Arduino now explicitly states that using their platform grants you no patent licenses whatsoever. You can’t even argue one is implied.
This means Qualcomm could potentially assert patents against your projects if you built them using Arduino tools, Arduino examples, or Arduino-compatible hardware.
And here’s the disconnect that’s baffling makers. Arduino’s IDE is licensed under the AGPL. Their CLI is GPL v3. Both licenses explicitly require that you be allowed to reverse engineer the software. But the new Qualcomm terms explicitly forbid reverse engineering “the Platform.”
What’s really going on?
The community is trying to figure out what Qualcomm’s actual intent is. Are these terms just bad lawyering, with SaaS lawyers applying their standard template to cloud services and not realizing Arduino is different? Or is Qualcomm testing how much they can get away with before the community revolts? Or is this a first step toward locking down the ecosystem they just bought?
Some people point out that “the Platform” might only mean Arduino’s cloud services (forums, Arduino Cloud, Project Hub), not the IDE and CLI that everyone actually uses.
If that’s true, Qualcomm needs to say so, explicitly, and in plain language. Because library maintainers are likely wondering whether contributing to Arduino repos puts them at legal risk. And hardware makers are questioning whether “Arduino-compatible” is still safe to advertise.
Why Adafruit’s alarm matters
Adafruit has been vocal about the dangers of this acquisition. Some dismiss Adafruit’s criticism as self-serving. After all, they sell competing hardware and promote CircuitPython. But that misses who Adafruit is.
Adafruit has been the moral authority on open hardware for decades. They’ve made their living proving you can build a successful business on open principles. When they sound the alarm, it’s not about competition, it’s about principle.
What they’re calling out isn’t that Qualcomm bought Arduino. It’s that Qualcomm’s lawyers fundamentally don’t understand what they bought. Arduino wasn’t valuable because it was just a microcontroller company. It was valuable because it was a commons. And you can’t apply enterprise legal frameworks to a commons without destroying it.
Adafruit gets this. They’ve built their entire business on this. That’s why their criticism carries weight.
What Qualcomm doesn’t seem to understand
Qualcomm probably thought they were buying an IoT hardware company with a loyal user base.
They weren’t. They bought the IBM PC of the maker world.
Arduino’s value was never just the hardware. Their boards have been obsolete for years. Their value is the standard.
The Arduino IDE is the lingua franca of hobby electronics.
Millions of makers learned on it, even if they moved to other hardware. ESP32, STM32, Teensy, Raspberry Pi Pico – none of them are Arduino hardware, but they all work with the Arduino IDE.
Thousands of libraries are “Arduino libraries.” Tutorials assume Arduino. University curricula teach Arduino. When you search “how to read a sensor,” the answer comes back in Arduino code.
This is the ecosystem Qualcomm’s lawyers just dropped legal uncertainty onto.
If Qualcomm’s lawyers start asserting control over the IDE, CLI, or core libraries under restrictive terms, they will poison the entire maker ecosystem. Even people who never buy Arduino hardware are dependent on Arduino software infrastructure.
Qualcomm didn’t just buy a company. They bought a commons. And now they are inadvertently taking steps that destroy what made it valuable.
What are makers supposed to do?
There has been some buzz about folks just leaving the Arduino environment behind. But Arduino IDE alternatives such as PlatformIO and VS Code are not in any way beginner friendly. If the Arduino IDE goes, then there’s a huge problem.
I remember when Hypercard ended. There were alternatives, but none so easy. I don’t think I really coded again for almost 20 years until I picked up the Arduino IDE (go figure).
If something happens to the Arduino IDE, even if its development stalls or becomes encumbered, there’s no replacement for that easy onboarding. We’d lose many promising new makers because the first step became too steep.
The institutional knowledge at risk
But leaving Arduino behind isn’t simple. The platform’s success depends on two decades of accumulated knowledge: countless Arduino tutorials on YouTube, blogs, and school curricula; open-source libraries that depend on Arduino compatibility; projects in production using Arduino tooling; and university programs built around Arduino as the teaching platform.
All of these depend on Arduino remaining open and accessible.
If Qualcomm decided to sunset the open Arduino IDE in favor of a locked-down “Arduino Pro” platform, or if they start asserting patent claims, or if uncertainty makes contributors abandon the ecosystem, all that knowledge becomes stranded.
It’s like Wikipedia going behind a paywall. The value isn’t just the content, it is the trust that it remains accessible. Arduino’s value isn’t just the code, it’s the trust that the commons would stay open.
That trust is now gone. And once lost, it’s hard to get back.
Why this happened (but doesn’t excuse it)
Let’s be fair to Qualcomm: their lawyers were doing their jobs.
When you acquire a company, you standardize the legal terms; add mandatory arbitration to limit class action exposure; integrate data systems for compliance and auditing; add export controls because you sell to defense contractors; prohibit reverse engineering because that’s in the template.
For most acquisitions, this is just good corporate hygiene. And Arduino, now part of a megacorp, faces higher liabilities than it did as an independent entity.
But here’s what Qualcomm’s lawyers missed: Arduino isn’t a normal acquisition. The community isn’t a customer base, it’s a commons. And you can’t apply enterprise SaaS legal frameworks to a commons without destroying what made it valuable.
This is tone-deafness, not malice. But the outcome is the same. A community that trusted Arduino no longer does.
Understanding why this happened doesn’t excuse it, but it might suggest what needs to happen next.
What should have happened and how to still save it
Qualcomm dropped legal boilerplate on the community with zero context and let people discover the contradictions themselves. That’s how you destroy trust overnight.
Qualcomm should have announced the changes in advance. They should have given the community weeks, not hours, to understand what’s changing and why. They should have used plain-language explanations, not just legal documents.
Qualcomm can fix things by explicitly carving out the open ecosystem. They should state clearly that the terms apply only to Arduino Cloud services, and that the IDE, CLI, and core libraries remain under their existing open source licenses.
We’d need concrete commitments, such as which repos stay open, which licenses won’t change, what’s protected from future acquisition decisions. Right now we have vague corporate-speak about “supporting the community.”
Indeed, they could create some structural protection as well, by putting the IDE, CLI, and core libraries in a foundation that Qualcomm couldn’t unilaterally control (think of the Linux Foundation model).
Finally, Qualcomm might wish to establish some form of community governance with real representation and real power over the tools the community depends on.
The acquisition is done. The legal integration is probably inevitable. But how it’s done determines whether Arduino survives as a commons or dies as just another Qualcomm subsidiary.
What’s next?
Arduino may be the toolset that made hobby electronics accessible to millions. But that maker community built Arduino into what it became. Qualcomm’s acquisition has thrown that legacy into doubt. Whether through legal confusion, corporate tone-deafness, or deliberate strategy, the community’s trust is broken.
The next few months will reveal whether this was a stumble or a strategy. If Qualcomm issues clarifications, moves repos to some sort of governance, and explicitly protects the open toolchain, then maybe this is salvageable. If they stay silent, or worse, if IDE development slows or license terms tighten further, then that’s a signal to find alternatives.
The question isn’t whether the open hobby electronics maker community survives. It’s whether Arduino does.
'Scattered Spider' teens plead not guilty to UK transport hack
Bleeping Computer
www.bleepingcomputer.com
2025-11-21 15:41:24
Two British teenagers have denied charges related to an investigation into the breach of Transport for London (TfL) in August 2024, which caused millions of pounds in damage and exposed customer data.
Believed to be members of the Scattered Spider hacking collective, 19-year-old Thalha Jubair from east London and 18-year-old Owen Flowers from Walsall were arrested at their homes in September 2024 by officers from the UK National Crime Agency (NCA) and the City of London Police.
Flowers was also arrested for his alleged involvement in the TfL attack in September 2024, but was released on bail after being questioned by NCA officers.
According to a Sky News report, Jubair and Flowers have now pleaded not guilty to computer misuse and fraud-related charges at Southwark Crown Court. The charges allege the defendants caused "or creating a significant risk of, serious damage to human welfare and intending to cause such damage or being reckless as to whether such damage was caused."
TfL disclosed the August 2024 breach on September 2, 2024, stating that it had found no evidence that customer data was compromised. While this attack did not affect London's transportation services, it disrupted online services and internal systems, as well as the public transportation agency's ability to process refunds.
In a subsequent update, TfL revealed that customer data, including names, addresses, and contact details, was actually compromised during the incident. TfL provides transportation services to more than 8.4 million Londoners through its surface, underground, and Crossrail systems, which are jointly managed with the UK's Department for Transport.
Flowers is also facing charges involving conspiring to attack the networks of SSM Health Care Corporation and Sutter Health in the United States, while Jubair is separately charged with failing to disclose passwords seized from him in March 2025.
"This attack caused significant disruption and millions in losses to TfL, part of the UK's critical national infrastructure," said Paul Foster, the head of the NCA's National Cyber Crime Unit, in September. "Earlier this year, the NCA warned of an increase in the threat from cyber criminals based in the UK and other English-speaking countries, of which Scattered Spider is a clear example."
In September, the U.S. Department of Justice also charged Jubair with conspiracy to commit computer fraud, money laundering, and wire fraud. These charges relate to at least 120 incidents of network breaches between May 2022 and September 2025, affecting at least 47 U.S. organizations and including extortion attempts worldwide and attacks on critical infrastructure entities and U.S. courts.
According to court documents, victims have paid Jubair and his accomplices over $115 million in ransom payments.
In July, the NCA arrested four other suspected members of the Scattered Spider cybercrime collective, believed to be linked to cyberattacks against major retailers in the country, including Marks & Spencer, Harrods, and Co-op.
I recently asked why people seem to hate dating apps so much. In response, 80% of you emailed me some version of the following theory:
The thing about dating apps is that if they do a good job and match people up, then the matched people will quit the app and stop paying. So they have an incentive to string people along but not to actually help people find long-term relationships.
May I explain why I don’t find this type of theory very helpful?
I’m not saying that I think it’s wrong, mind you. Rather, my objection is that while the theory is phrased in terms of dating apps, the same basic pattern applies to basically anyone who is trying to make money by doing anything.
For example, consider a pizza restaurant. Try these theories on for size:
Pizza:
“The thing about pizza restaurants is that if they use expensive ingredients or labor-intensive pizza-making techniques, then it costs more to make pizza. So they have an incentive to use low-cost ingredients and labor-saving shortcuts.”
Pizza II:
“The thing about pizza restaurants is that if they have nice tables separated at a comfortable distance, then they can’t fit as many customers. So they have an incentive to use tiny tables and cram people in cheek by jowl.”
Pizza III:
“The thing about pizza restaurants is that if they sell big pizzas, then people will eat them and stop being hungry, meaning they don’t buy additional pizza. So they have an incentive to serve tiny low-calorie pizzas.”
See what I mean? You can construct similar theories for other domains, too:
Cars:
“The thing about automakers is that making cars safe is expensive. So they have an incentive to make unsafe cars.”
Videos:
“The thing about video streaming is that high-resolution video uses more expensive bandwidth. So they have an incentive to use low-resolution.”
Blogging:
“The thing about bloggers is that research is time-consuming. So they have an incentive to be sloppy about the facts.”
Durability:
“The thing about {lightbulb, car, phone, refrigerator, cargo ship} manufacturing is that if you make a {lightbulb, car, phone, refrigerator, cargo ship} that lasts a long time, then people won’t buy new ones. So there’s an incentive to make {lightbulbs, cars, phones, refrigerators, cargo ships} that break quickly.”
All these theories can be thought of as instances of two general patterns:
Make product worse, get money:
“The thing about selling goods or services is that making goods or services better costs money. So people have an incentive to make goods and services worse.”
Raise price, get money:
“The thing about selling goods and services is that if you raise prices, then you get more money. So people have an incentive to raise prices.”
Are these theories wrong? Not exactly. But it sure seems like something is missing.
I’m sure most pizza restaurateurs would be thrilled to sell lukewarm 5 cm cardboard discs for $300 each. They do in fact have an incentive to do that, just as predicted by these theories! Yet, in reality, pizza restaurants usually sell pizzas that are made out of food. So clearly these theories aren’t telling the whole story.
Say you have a lucrative business selling 5 cm cardboard discs for $300. I am likely to think, “I like money. Why don’t I sell pizzas that are only mostly cardboard, but also partly made of flour? And why don’t I sell them for $200, so I can steal Valued Reader’s customers?” But if I did that, then someone else would probably set prices at only $100, or even introduce cardboard-free pizzas, and this would continue until hitting some kind of equilibrium.
Sure, producers want to charge infinity dollars for things that cost them zero dollars to make. But consumers want to pay zero dollars for stuff that’s infinitely valuable. It’s in the conflict between these desires that all interesting theories live.
This is why I don’t think it’s helpful to point out that people have an incentive to make their products worse. Of course they do. The interesting question is, why are they able to get away with it?
Reasons stuff is bad
First reason stuff is bad: People are cheap
Why are seats so cramped on planes? Is it because airlines are greedy? Sure. But while they might be greedy, I don’t think they’re dumb. If you do a little math, you can calculate that if airlines were to remove a single row of seats, they could add perhaps 2.5 cm (1 in) of extra legroom for everyone, while only decreasing the number of paying customers by around 3%. (This is based on a 737 with single-class, but you get the idea.)
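Here is the back-of-envelope arithmetic behind that claim, as a small runnable sketch; the row count and seat pitch are my own rough assumptions rather than figures from the post.

# Rough sanity check of the legroom trade-off (assumed figures, not from the article)
rows = 32          # single-class 737-style layout, ~6 seats per row (assumption)
pitch_cm = 79.0    # roughly 31 in of seat pitch per row (assumption)

seats_lost = 1 / rows                     # removing one row of seats
extra_legroom_cm = pitch_cm / (rows - 1)  # freed pitch shared by the remaining rows

print(f"Seats lost: {seats_lost:.1%}")                  # about 3.1%
print(f"Extra legroom: {extra_legroom_cm:.1f} cm/row")  # about 2.5 cm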
So why don’t airlines rip out a row of seats, raise prices by 3% and enjoy the reduced costs for fuel and customer service? The only answer I can see is that people, on average, aren’t actually willing to pay 3% more for 2.5 cm more legroom. We want a worse but cheaper product, and so that’s what we get.
I think this is the most common reason stuff is “bad”. It’s why Subway sandwiches are so soggy, why video games are so buggy, and why IKEA furniture and Primark clothes fall apart so quickly.
It’s good when things are bad for this reason. Or at least, that’s the premise of capitalism: When companies cut costs, that’s the invisible hand redirecting resources to maximize social value, or whatever. Companies may be motivated by greed. And you may not like it, since you want to pay zero dollars for infinite value. But this is markets working as designed.
Second reason stuff is bad: Information asymmetries
Why is it that almost every book / blog / podcast about longevity is such garbage? Well, we don’t actually know many things that will reliably increase longevity. And those things are mostly all boring / hard / non-fun. And even if you do all of them, it probably only adds a couple of years in expectation. And telling people these facts is not a good way to find suckers who will pay you lots of money for your unproven supplements / seminars / etc.
True! But it doesn’t explain why all longevity stuff is so bad. Why don’t honest people tell the true story and drive all the hucksters out of business? I suspect the answer is that unless you have a lot of scientific training and do a lot of research, it’s basically impossible to figure out just how huckstery all the hucksters really are.
I think this same basic phenomenon explains why some supplements contain heavy metals, why some food contains microplastics, why restaurants use so much butter and salt, why rentals often have crappy insulation, and why most cars seem to only be safe along dimensions included in crash test scores. When consumers can’t tell good from evil, evil triumphs.
Third reason stuff is bad: People have bad taste
Sometimes stuff is bad because people just don’t appreciate the stuff you consider good. Examples are definitionally controversial, but I think this includes restaurants in cities where all restaurants are bad, North American tea, and travel pants. This reason has a blurry boundary with information asymmetries, as seen in ultrasonic humidifiers or products that use Sucralose instead of aspartame for “safety”.
Fourth reason stuff is bad: Pricing power
Finally, sometimes stuff is bad because markets aren’t working. Sometimes a company is selling a product but has some kind of “moat” that makes it hard for anyone else to compete with them, e.g. because of some technological or regulatory barrier, control of some key resource or location, intellectual property, a beloved brand, or network effects.
If that’s true, then those companies don’t have to worry as much about someone else stealing their business, and so (because everyone is axiomatically greedy) they will find ways to make their product cheaper and/or raise prices up until the price is equal to the full value it provides to the marginal consumer.
Conclusion
Why is food so expensive at sporting events? Yes, people have no alternatives. But people know food is expensive at sporting events. And they don’t like it. Instead of selling water for $17, why don’t venues sell water for $2 and raise ticket prices instead? I don’t know. Probably something complicated, like that expensive food allows you to extract extra money from rich people without losing business from non-rich people.
So of course dating apps would love to string people along for years instead of finding them long-term relationships, so they keep paying money each month. I wouldn’t be surprised if some people at those companies have literally thought, “Maybe we should string people along for years instead of finding them long-term relationships, so they keep paying money each month, I love money so much.”
But if they are actually doing that (which is unclear to me) or if they are bad in some other way, then how do they get away with it? Why doesn’t someone else create a competing app that’s better and thereby steal all their business? It seems like the answer has to be either “because that’s impossible” or “because people don’t really want that”. That’s where the mystery begins.
A Major Modernization of the Killer App That Started It All
A new version of Xbox Media Center (XBMC), version 4.0, has been released. It is a significant update to the long-standing media center platform for the Original Xbox, the first major advancement to the software since 2016, and represents a renewed commitment to preserving, modernizing, and extending the capabilities of one of the most iconic console homebrew applications ever created.
XBMC has a long and influential history. In 2002, XboxMediaPlayer (XMP) was released and turned the console into a powerful multimedia device fit for the living room in an era when connecting a computer to a TV was quite novel. Later that same year, XMP merged with YAMP and became Xbox Media Player 2.0. A few years later, the software evolved into Xbox Media Center, or XBMC, which introduced a new interface, a plugin system powered by Python, and a robust skinning engine.
XBMC eventually became so capable that it outgrew the Xbox entirely. By 2007, developers were working on PC ports, and in 2010 the project split into two branches: one for general computers, while the Xbox version became XBMC4Xbox, and each codebase was maintained from then on by separate teams. XBMC was later renamed to Kodi in 2014 and continues to be one of the most popular media center applications available. Even Plex traces its roots back to XBMC. Plex began as OSXBMC, a Mac port of XBMC, in late 2007, before becoming its own project in 2008. This means the Original Xbox helped shape not one but two of the biggest media center apps used today.
The last official release of XBMC4Xbox arrived in February 2016 with version 3.5.3. Although the community never declared the project dead, meaningful updates became scarce. XBMC 4.0 continues that legacy by bringing a modern interface, updating it to be more in line with Kodi's modern codebase, and backporting features to the original 64MB RAM / Pentium III hardware where it all began.
This project is distinct and separate from XBMC4Gamers, the games-focused variation of XBMC4Xbox (v3.5.3) by developer Rocky5.
A Modern Interface Powered by Estuary
One of the most notable advancements in XBMC 4.0 is the introduction of the Estuary user interface (skin).
Estuary, originally released in 2017 with Kodi v17 ("Krypton"), provides a clean and modern layout that improves navigation and readability over past skins. Bringing Estuary to the Xbox required extensive updates to the underlying GUI framework, including a port of the more contemporary GUIlib engine. This allows the platform to support modern skinning standards and makes future skin ports much more straightforward. After the initial work of porting GUIlib was done, porting Estuary to the Xbox was a relatively simple process of tweaking a handful of configuration files and adding contextual features specific to the Xbox. The result is a modern, intuitive front end that retains the performance and responsiveness required on legacy hardware.
Firing up an Xbox made in 2001 and being greeted by the same interface as what you'd find if you were to download Kodi today onto your PC feels like a bit of magic, and helps keep this beloved classic console relevant and useful well into the modern era.
Expanded Games Library Support
XBMC 4.0 introduces a fully realized games library system. This enhancement brings the same level of metadata support found in the Movies and Music sections to Xbox and emulated games. Titles can now display artwork, descriptions, and other metadata, transforming the games section into a polished and user-friendly library. XBMC’s longstanding support for trainers remains intact, giving users the option to apply gameplay modifications for compatible titles. Emulated game collections benefit as well, with the ability to browse ROM libraries and launch them directly in a user’s preferred emulator.
Online Scrapers and Metadata Support
XBMC 4.0 restores full functionality to metadata scrapers for movies and television. This allows users to build rich media libraries complete with artwork, plot summaries, cast listings, and other information retrieved directly from online sources. XBMC 4.0 handles these tasks efficiently, even on the Xbox’s limited memory and processing power. Video playback continues to support 480p and 720p content, enabling the console to serve as a surprisingly capable media device for its age. Similar to Kodi, XBMC 4.0 supports filtering, building playlists, watch progress history for media, and intelligent handling of TV shows with seasons.
Aside from scrapers for multimedia, support for rich library capabilities has also been added for games. XBMC has always been a media-first app, and now users can enjoy the library experience that they've come to love for media in the context of their games library (more info below).
Improved Task Scheduling and Multitasking
Despite the constraints of the Xbox’s single-threaded 733MHz CPU, XBMC 4.0 includes improvements to task scheduling that allow multiple activities to run concurrently. Background library updates, metadata scraping, and audio/video playback can occur while users navigate and use other parts of the interface. These optimizations help ensure a fluid experience without compromising performance. Much work has been done "under the hood" to keep XBMC on task and within memory budgets while achieving multi-tasking on a console that wasn't exactly designed with it in mind. Users who own RAM and/or CPU upgraded consoles can also take advantage of the extra overhead, as XBMC 4.0 makes use of the extra horsepower for an even smoother experience. Utilizing an SSD with higher UDMA speeds will also yield an improvement in overall responsiveness.
Music Experience and Visualizers
Music playback has always been a strong element of XBMC, and version 4.0 maintains that focus. The Original Xbox is capable of high quality audio output, and XBMC continues to support lossless codecs such as FLAC. The release includes compatibility with various audio visualizers, including MilkDrop, which remains one of the most visually impressive and customizable audio visualization engines available. These features allow XBMC 4.0 to function not only as a media organizer, but also as an immersive audio display system.
An online repository has been established and will be maintained moving forward where users can download legacy and newly-released add-ons as they become available. This repository is accessible without additional setup, right out of the box!
Add-ons and Python Support
XBMC 4.0 continues to offer an extendable architecture powered by Python-based add-ons. While the current release uses Python 2.7 for compatibility, work is underway to transition to Python 3.4.10 in the future, which may provide a path for backporting many newer Kodi add-ons. Even in its current state, XBMC 4.0 already supports a variety of community-developed add-ons that extend the system’s functionality, including tools for online video playback (i.e. YouTube), online weather services, and enhanced media organization.
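As a rough illustration of what such an add-on looks like, here is a minimal Python script in the pre-Python-3 style, using the xbmc and xbmcgui modules that XBMC exposes to add-ons. It is a sketch only: packaging details (addon.xml, repository submission) are omitted, and dialog signatures can vary between XBMC/Kodi versions.

# Minimal illustrative XBMC add-on script (Python 2.7-era API); not an official sample
import xbmc
import xbmcgui

# Write a line to the XBMC log, then show a simple dialog to the user.
xbmc.log("hello-addon: script started")
xbmcgui.Dialog().ok("Hello from XBMC 4.0", "This dialog was raised by a Python add-on.")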
Updated Settings, Network Services, and System Tools
The settings interface has been revised to provide more clarity and control. The update includes:
Playback options, including episode progression, crossfade behavior, and subtitle handling
Library management tools
Network features, such as SMB, FTP, UPnP sharing, web server access, and Insignia-compatible DNS options
Comprehensive interface customization options
Multiple user profiles with individual library settings
Advanced system controls for video calibration, display modes, input devices, and power management
A robust System Information section for diagnostics, with info geared towards the Original Xbox
A flexible File Manager with support for network protocols including FTP, SMB, WebDAV, and more
Users may also take advantage of an online add-ons repository, offering the same experience modern Kodi provides: the ability to download add-ons that extend the functionality of the app with things like online multimedia providers, weather, skins, visualizers, and more. Developers can submit new add-ons to the official repository via Github.
Continuing the Legacy
XBMC has been a staple of the Original Xbox's homebrew scene since its inception in the early 2000s. This new update is a revival of the platform that helped shape the landscape of home media software and helps revitalize a codebase that has been somewhat stagnant for many years. This release honors that heritage while modernizing the experience for a new generation of enthusiasts and preserving the functionality of the Original Xbox as a versatile and capable media center.
Although the hardware is decades old, the renewed effort behind XBMC 4.0 demonstrates that the platform still has room to grow and tricks up its sleeve. With ongoing development and a codebase designed with modern Kodi compatibility in mind, XBMC 4.0 represents a significant step forward in the continued development of software for the Original Xbox.
The development team looks forward to continuing this work and expanding the possibilities of the Original Xbox for years to come. This version is the first of many to come, with lots of things cooking in the background. Keep an eye out for future releases by joining the Xbox-Scene Discord and turning on notifications in the xbmc-news channel, or by periodically checking the project's Github page.
Downloads
Builds of XBMC 4.0 (and subsequent releases), along with source code, are available via Github.
Note: XBMC 4.0 is in active development! This means updates will be released more frequently for the time being, until things settle down. Check the nightly builds section on Github for the most up-to-date version.
Contributions
XBMC is open source software and welcomes contributions.
Coding: Developers can help XBMC by fixing a bug, adding new features, making our technology smaller and faster, and making development easier for others. XBMC's codebase consists mainly of C++, with small parts written in a variety of coding languages. Our add-ons mainly consist of Python and XML.
Helping users: Our support process relies on enthusiastic contributors like you to help others get the most out of XBMC. The #1 priority is always answering questions in our support forums. Every day new people discover XBMC, and every day they are virtually guaranteed to have questions.
Localization: Translate XBMC, add-ons, skins, etc. into your native language.
Add-ons: Add-ons are what make XBMC the most extensible and customizable entertainment hub available. Get started building an add-on.
XBMC is GPLv2 licensed. You may use, distribute, and copy it under the license terms. XBMC is licensed under the same terms as Kodi. For detailed information on the licensing, please refer to the Kodi license.
This project, XBMC version 4.0 (and upcoming releases), is distinct from and is not affiliated with Team Kodi of The Kodi Foundation, or its members.
Will NYC-DSA Back Chi Ossé?
hellgate
hellgatenyc.com
2025-11-21 15:02:42
It's primary mania in NYC! And other links to start your day....
The best air fryers, tried and tested for crisp and crunch
Guardian
www.theguardian.com
2025-11-21 15:00:33
Air fryers have taken over our kitchens, but which wins the crown for the crispiest cooking? Our expert peeled 7kg of potatoes to find out
Air fryers inspire the sort of feelings that microwaves did in the 1980s. I vividly remember those new-fangled boxes being spoken about often, either dismissively or with delight. A rash of cookbooks followed, and dinner changed across the land. Fast-forward a few decades, and air fryers have become the same kind of kitchen “disruptors”, offering time-saving convenience and healthier cooking, but with the added allure of easily achieved, mouth-watering crispiness.
Since launching with a single-drawer design, air fryers have evolved. Sizes range from compact to XL, while drawer configurations can be double, split or stacked. Alongside air frying, many will grill, roast and bake, and some will dip to lower temperatures for dehydrating, fermenting and proving dough. One we tested features steam cooking, allowing you to whip up dim sum as easily as a roast dinner, while another included racks for cooking on four levels.
Given that the air fryer market is so crowded, it’s worth seeking out the best style for your needs – whether that’s for the simple pleasures of homemade chips or to really shake up your meals.
At a glance
Best air fryer overall: Tefal Easy Fry Dual XXL EY942BG0
While air fryers have made the transition from novelty to must-have in recent years, there’s been one in my kitchen for well over a decade, and it’s in daily use. I’ve been a consumer journalist for decades, and as well as air fryers, I’ve tested air-frying health grills and ovens, multi-cookers that can air fry, and everything in between. Anything I can make chips with is welcome in my kitchen. Hang around me long enough and I’ll fill you in on what air fryers can do, how they work, common issues, and how many I’ve tested over the years (about 50).
How I tested
‘My commitment to the cause has seen me peel and chip more than 7kg of potatoes.’ Photograph: Rachel Ogden/The Guardian
By now, you must have worked out that I take testing air fryers very seriously. My commitment to the cause has seen me peel and chip more than 7kg of potatoes – which was just as tedious as it sounds. The internet is awash with hacks for peeling potatoes, including everything from worktop gadgets to peeling hot pre-boiled potatoes with your hands – and even (and I’m not making this up) power drills and toilet brushes in a bucket. I decided a sharp peeler was the best choice.
Each air fryer was run empty from cold for one hour at 200C to rate its power use. Where available, I followed the manufacturer’s instructions for cooking chips. This is because the guidance is often based on the air fryer’s capabilities. Where there was none, I defaulted to 24 minutes at 200C. The same was true for onion rings – if there was a similar frozen food, I followed the suggested times and temperatures; if not, I chose 18 minutes at 200C.
Frying tonight: the chips were scrutinised on their colour, crisp and crunch. Photograph: Rachel Ogden/The Guardian
Any food that looked at risk of burning was removed before it did so, meaning one or two cycles were ended early. Finished food was assessed on appearance (colour and texture), crisp and crunch, and the consistency of the batch (such as whether some items were less brown than others).
The 12 machines I tested for this article are either recognisably an air fryer or an air fryer grill. I haven’t tested compact ovens or multi-cookers that air fry because they don’t offer the same experience, such as the ability to shake or turn the contents quickly, and they often don’t have removable parts that are easy to clean.
The best air fryers in 2025
‘Anything I can make chips with is welcome in my kitchen.’ Photograph: Rachel Ogden/The Guardian
Best air fryer overall: Tefal Easy Fry Dual XXL EY942BG0
Tefal Easy Fry Dual XXL EY942BG0, from £119.99
What we love: A stellar performance coupled with cooking flexibility
What we don’t love: It’s a bit of a worktop hog, so make some room
Given that Tefal is behind the pioneering Actifry, it comes as no surprise that the Easy Fry Dual XXL is a fantastic all-rounder, excelling at both making chips and handling frozen food. It’s also Tefal’s largest double-drawer air fryer, providing a generous capacity for families and entertaining, and it has the company’s 15-year repairability commitment to cut waste.
Why we love it
While I remain unconvinced of Tefal’s claim that this air fryer’s 11-litre capacity will cater for eight to 10 people – perhaps if they’re not very hungry – it ticks almost every box, making it my choice for the best air fryer overall. There’s a good temperature range of 40-200C, programs for common foods, and the drawers and plates are dishwasher-safe and feel robust.
More importantly, it performed excellently during testing, with the only head-scratcher being its recommendation for chips at 180C for 45 minutes, which was too long. After only 35 minutes, some chips were already slightly overdone, but the overall result was lovely and crisp. Onion rings emerged beautifully browned – they were the best of the lot.
It’s a shame that … most buttons are icons – my pet hate – making it a challenge to program without the instructions to hand.
Size: 38.5 x 45.8 x 33cm (WDH)
Capacity: 11 litres
Power draw: 1.154kWh = 30p an hour
Dishwasher safe: yes
Programs: fries, chicken, vegetables, fish, dessert, dehydration and manual
Best budget air fryer: Tower AirX AI Digital Air Fryer
Tower AirX AI Digital Air Fryer, from £55
What we love: Its colour screen and presets make it easier to program
What we don’t love: Outside of the presets, there isn’t much guidance
The prices below reflect current Black Friday offers
Choosing a more affordable air fryer doesn’t mean compromising on features. Tower’s AirX comes with an easy-to-read colour screen; a deep drawer with enough flat space for a fish or steak; and six presets that use sensors and a bit of tech to take the guesswork out of cooking different foods.
Why we love it
Rather than being a jack of all trades, this pocket-friendly air fryer specialises in fuss-free lunches and dinners more than general cooking. So if you love marinated chicken wings or a medium steak, this is the air fryer for you. The presets can be constrictive – for example, the fries preset is designed only for frozen chips – but there is a manual mode for more confident cooks.
In testing, onion rings emerged perfectly crisp after only 12 minutes, the deep drawer accommodating 11 of them, and fresh chips were near perfect with consistent browning, crunch and bubbling. My only issue was that the touchscreen wasn’t always responsive with a light touch – but this might be a plus for those who dislike oversensitive screens.
It’s a shame that … the drawer isn’t dishwasher safe, so some of the time you save with presets will be spent cleaning.
Size: 22.8 x 39.9 x 28.2cm (WDH)
Capacity: 5 litres
Power draw: 0.606kWh = 16p an hour
Dishwasher safe: no
Programs: chicken, fries, fish, prawns, cupcake, steak, manual
Best single-drawer air fryer: Lakeland Slimline air fryer
Lakeland Slimline air fryer, from £69.99
What we love: It provides plenty of cooking space at a great value price
What we don’t love: Parts aren’t dishwasher safe, so you’ll have to clean by hand
The prices below reflect current Black Friday offers
If you don’t have much counter space and don’t want to compromise on capacity, Lakeland’s slimline model is a good choice. There’s adequate flat space inside for family-size meals, or up to 1.3kg of chips, plus an internal light and a clear window to check on dinner.
Why we love it
I felt this air fryer was great value for money, with a good cooking capacity for its price, and it was economical to run. Its slimline shape meant food could be spread out, and I was pleased with the results of testing. Chips were golden brown, crisp at the ends and fluffy in the middle, and the batch was consistent overall, while onion rings were pleasingly crunchy. I found the window redundant once it became greasy, but it could be useful for less oily foods. I also wasn’t keen on the button that needed to be depressed to open the drawer – but it might keep curious fingers away from harm.
It’s a shame that … its lowest temperature is 80C, so you won’t be dehydrating or proving dough.
Size: 27.7 x 42 x 29cm (WDH)
Capacity: 8 litres
Power draw: 0.674kWh = 18p an hour
Dishwasher safe: no, hand-wash only
Programs: fries, seafood, steak, fish, chicken wings, pizza, bake
Best air fryer for chips: Philips 5000 Series NA555/09 dual basket steam air fryer
Philips 5000 Series NA555/09 dual basket steam air fryer, from £169
What we love: Steam provides more cooking options than a standard model
What we don’t love: It’s big: not one for those with compact kitchens
The prices below reflect current Black Friday offers
One of only a few air fryers that can also steam your food, the 5000 Series is particularly suitable if you want to trim fat from your diet – or if you dislike the dry textures that result from overcooking. Introducing steam into the mix means it’s possible to air fry, steam or use a combination of both for moist meats, bakes and reheated leftovers.
Why we love it
This double air fryer offers a lot of versatility, and I felt it was the best air fryer for chips. It’s well built, feels robust and is easy to keep clean even without a dishwasher, thanks to the self-clean mode that uses steam to loosen debris. Programming can be puzzling at first – especially as you’ll need to download its manual rather than getting one in the box – but the food it cooked made up for it: crisp, perfectly browned onion rings and chips with a moreish crunch, fluffy interior and pretty consistent browning throughout. It’s frustrating that only the six-litre drawer steams, the three-litre one being limited to air frying, but you’re sure to get plenty of use out of both.
It’s a shame that … if you live in a hard-water area, you’ll need to descale this air fryer to keep it in tip-top condition.
Size: 49 x 39 x 40cm (WDH)
Capacity: 9 litres
Power draw: 0.79kWh = 21p an hour
Dishwasher safe: yes
Programs: fresh fries, frozen fries, chicken, meat, veggies, fish, cake, reheat
Best two-in-one air fryer: Caso Design AirFry DuoChef
Caso Design AirFry DuoChef, from £129.99
What we love: The ability to become an oven makes it handy for entertaining
What we don’t love: You might have to experiment a bit to get the best results
Short on countertop space? Caso Design’s DuoChef is a twin-drawer air fryer that can turn into a small oven. Think of it like a robot-to-car Transformer: slide out the divider that sits between the two drawers, attach an oven door, and you have another appliance altogether.
As well as performing double duty, the DuoChef is packed with handy features. There’s an interior light, windows on each air fryer drawer for checking progress, a shake reminder, and a hold function so that both drawers finish cooking at the same time.
Why we love it
Beyond its transforming capabilities, the best thing about the DuoChef is how easy it is to program. While some dual-drawer models can leave you scratching your head, here there are just three buttons for selecting the left or right drawer or both.
However, while it crisped up onion rings nicely in the allotted time, fresh fries were another matter entirely. After 25 minutes, the potato was still quite pale, with only light browning, and required another five minutes’ cooking time to reach any kind of crispiness.
It’s a shame that …
it’s pretty slow at whipping up chips compared with the other models on test.
Size:
43.5 x 38.5 x 33.5cm (WDH)
Capacity:
11 litres (14 litres as an oven)
Power draw:
0.971kWh = 26p an hour
Dishwasher safe:
yes
Programs:
fries, steak, chicken, bacon, fish, vegetables, pizza, cakes, root vegetables, reheat
Best air fryer grill:
ProCook air fryer health grill
ProCook
Air fryer health grill
£129
What we love
The flat space lends itself well to steaks, fish and kebabs
What we don’t love
Lots of accessories = more stuff to store
Photograph: Rachel Ogden/The Guardian
The price below reflects current Black Friday offers
If you find the flat cooking space of some air fryers restrictive, you can spread your (chicken) wings with ProCook’s air fryer health grill. It comes with a 4.5-litre cooking pot and basket for air frying, as well as accessories to turn it into a slow-cooking and steaming kitchen helper.
Why we love it
Air fryer grills aren’t always the most convenient for making chips from scratch, because you can’t quickly shake a drawer for even results. However, with the toss of a spatula, the ProCook ensured great consistency throughout its batch of chips. They emerged crisp at the ends and golden overall, with no pieces that overcooked and only one or two paler chips. Onion rings were crunchy and nothing stuck to the basket. My only niggle was that the settings could be confusing for a first-time user: once you’ve altered them to suit and hit start, the display shows the program’s default settings instead while it preheats.
It’s a shame that …
I found cleaning the basket and cooking pot a chore: it comes with its own brush for tackling greasy residue, and you will need to use it.
Size:
40 x 40 x 28cm (WDH)
Capacity:
4.5 litres
Power draw:
0.83kWh = 22p an hour
Dishwasher safe:
no (basket and pot)
Programs:
air fry, roast, broil, bake, dehydrate, slow cook, grill, griddle, stew, steam, keep warm, manual
Best compact air fryer:
Ninja Double Stack XL SL400UK air fryer
Ninja
Double Stack XL SL400UK air fryer
from
£188
What we love
There’s great capacity in a compact footprint
What we don’t love
It could be too tall to tuck it below units when not in use
No article about air fryers would be complete without Ninja, which has given the world models in all shapes and sizes – most notably its stacked designs. The Double Stack XL offers capacity without a huge worktop footprint, thanks to its twin 4.75-litre drawers and a pair of racks that double its flat area, allowing you to cook four layers of food. Ideal for families, newbies and those struggling to squeeze in an air fryer.
Why we love it
Ninja’s air fryers always come packed with guidance and recipes, and the Double Stack XL is no exception. These serve to underline how versatile it is: you could cook two whole chickens at the same time, for example – great if your barbecue’s rained off. It’s incredibly easy to program and adjust as it cooks – and the top temperature of 240C is perfect for crisping food from the freezer. That said, some of its recommended times and temperatures might be a bit off. After 26 minutes at 200C, some chips were still pale and soft, which suggests they’d need longer. There were similar results from the onion rings, which after 18 minutes didn’t have the crisp and crunch produced by the other machines.
It’s a shame that …
its results didn’t impress me as much as Ninja’s other air fryers have – you may need to tweak settings.
Size:
28 x 47 x 38.5cm (WDH)
Capacity:
9.5 litres
Power draw:
1.049kWh = 28p an hour
Dishwasher safe:
yes – but hand-washing recommended to extend lifespan
Programs:
air fry, max crisp, roast, bake, dehydrate, reheat
The best of the rest
The Instant Pot Vortex Plus ClearCook VersaZone, Tower Vortx Colour, Salter Fuzion and Russell Hobbs SatisFry multi-cooker.
Photograph: Rachel Ogden/The Guardian
Fritaire the self-cleaning glass bowl air fryer
Fritaire
The self-cleaning glass bowl air fryer
from
£152.15
What we love
It’s seriously stylish and looks fab lit up
What we don’t love
It’s not nearly as convenient as a standard model
Hate the boxy look of air fryers? Or perhaps you crave crispy chips but have doubts about BPA, Teflon or Pfas? If so, there’s the Fritaire. Equipped with a stainless-steel grill stand, rotating tumbler for fries and a rotisserie for chicken, it looks as if a halogen cooker and an air fryer got together and had a baby. Plus, there’s a choice of bright colours for those who can’t stand black.
There’s much here to like – a “self-cleaning” function that keeps the glass bowl from becoming greasy, good visibility to check progress, and if you’re using the tumbler, no need to shake fries – but there are downsides. I found the display hard to read in bright light, and the tumbler capacity is only 500g: loading it with chips was awkward compared with tossing them in a drawer. Plus, while onion rings emerged crisp and brown from the stand, the chips were anything but. While the batch was consistent, the chips were mostly soft and slightly chewy.
It didn’t make the final cut because …
the exterior grows extremely hot during cooking, and stays hot for some time after.
Size:
34 x 26 x 33cm (WDH)
Capacity:
4.7 litres (0.5kg max capacity of food using stand/tumbler)
Power draw:
0.65kWh = 17p an hour
Dishwasher safe:
yes, accessories only
Programs:
french fries, steak, chicken, seafood, bake, dehydrate
Salter Fuzion dual air fryer
Salter
Fuzion dual air fryer
£99
What we love
Lots of capacity and cooking flexibility for the price
What we don’t love
You might need to test and adjust for the best results
If you’re feeding many mouths, you’ll need a big air fryer. Salter’s Fuzion offers a lot of space at an affordable price – and thanks to the eight-litre drawer’s divider, you can air fry two foods at the same time. Alternatively, with the divider in place, you can just use half the air fryer: perfect for snacks. However, like other air fryers with dividers, it has issues with shaking: both types of food will be tossed around, and larger drawers are harder to shake.
I was disappointed with the level of browning on the chips and found that the onion rings weren’t quite as crisp as they should be. Keeping its clear window grease-free may be a challenge, too.
It didn’t make the final cut because …
the drawer doesn’t feel as durable as it should be for this type of air fryer: its metal is thin enough to flex.
Size:
36.4 x 38 x 32cm (WDH);
capacity:
8 litres;
power draw:
0.912kWh = 24p an hour;
dishwasher safe:
no;
programs:
manual, chips, shellfish, steak, pork, bake, chicken, vegetables
Instant Pot Vortex Plus ClearCook VersaZone air fryer
Instant Pot
Vortex Plus ClearCook VersaZone air fryer
from
£85.49
What we love
It’s great at turning out crisp and crunch
What we don’t love
Programming an air fryer should be easier than this
Photograph: Rachel Ogden/The Guardian
The prices below reflect current Black Friday offers
I’m afraid Instant Pot commits one of my air fryer cardinal sins with its Vortex Plus VersaZone: there are no instructions or guidance in the box, just a QR code that directs you to videos – I’m not a fan of forcing tech into the kitchen. The Instant Pot was also one of the trickiest to program (for example, you have to switch from single drawer to dual by holding its control knob), so it’s probably not a good choice for air fryer newbies.
There are some good things here, though: two 4.2-litre compartments with a divider, the ability to switch to fahrenheit, and the option to turn off the beeps if they annoy. It also produced great results, and perhaps that’s the most important thing: plenty of crispy chips – though not consistently so – and crunchy, well-browned onion rings.
It didn’t make the final cut because …
the display is busy and hard to read in bright light.
Size:
31.4 x 38.4 x 40.4cm (WDH)
; capacity:
8.5 litres;
power draw:
1.187kWh = 31p an hour;
dishwasher safe:
yes;
programs:
air fry, roast, bake, grill, dehydrate, reheat
Russell Hobbs SatisFry air fryer & grill multi-cooker
Russell Hobbs
SatisFry air fryer & grill multi-cooker
from
£82.99
What we love
The pot is dishwasher safe – a rarity for an affordable appliance
What we don’t love
You might need to air fry in batches
If you’re unsure about how much you might use an air fryer and so want an appliance that does more to earn its place on the worktop, the compact SatisFry could suit. It may stretch the definition of a multi-cooker somewhat, lacking some of the functions you might associate with one, but its spacious pot can be used for air frying and other tasks, including slow cooking and searing.
There’s not much guidance, however, and the results were mixed: chips were browned but soft and not very crisp, while onion rings were doughy with some singeing. I suspect both could have benefited from different times and temperatures. The other downside is that it recommends no more than 800g at a time for air frying, so you won’t be able to use all its space for this function.
It didn’t make the final cut because …
it’s not the easiest to program: for example, there are no separate up and down buttons for time and temperature.
Size:
37.8 x 32 x 28.2cm (WDH);
capacity:
5.5 litres;
power draw:
0.550kWh = 14p an hour;
dishwasher safe:
yes;
programs:
air fry, bake, grill, keep warm, roast, sear, slow cook high/low
What you need to know
Your air fryer should be able to cook almost anything your oven can.
Photograph: Grace Cary/Getty Images
What can I cook in an air fryer?
While air fryers have gained a reputation for turning out perfect homemade chips and crispy food from the freezer, they’re capable of far more. Not only can you “boil” eggs, prove dough and make yoghurt (if your model offers lower temperatures), you should be able to cook almost anything your oven can. As a rough guide, for oven recipes, set your air fryer temperature 20C lower (so if a recipe states 200C fan, set your air fryer to 180C). Air fryers also cook more quickly than an oven. The time can be reduced by as much as a quarter, so check food often to prevent burning.
A good-quality air fryer is an investment, so check its programs, ease of cleaning and temperature/time range before you buy. There’s no need for the lower temperatures and long durations (usually up to 12 hours) for dehydrating fruit and fermenting yoghurt if you’ll mostly be using it for air frying, for example. Similarly, if you’re a keen cook, look for one with plenty of space – a small air fryer may soon limit your horizons.
For those with a dishwasher, check that drawers and crisping plates are safe to clean this way, while if you’re cleaning by hand, robust nonstick coatings will make degreasing easier.
How do air fryers work?
Air fryers are best thought of as smaller, modified convection ovens with a fast fan. Rather than having the fan and element at the rear, these are above, producing powerful fanned heat that’s circulated around the drawer.
Food sits on a perforated crisper plate, allowing heat to reach the underside, while a thin layer of oil on the surface “fries” the exterior to create browning and crunch. Shaking the contents in the drawer roughens up the surface, creating more area for crisping.
Are air fryers healthier than regular frying?
Yes, both because you need less oil – a tablespoon should be enough to coat a 500g batch of chipped potato, while other foods require no oil at all – and because the way food is “fried” is different.
Conventional frying uses the oil in the pan to seal the exterior. This prevents moisture from escaping, which is then heated, steaming the inside. To do this, oil penetrates the food’s surface, meaning that more fat is retained than when air frying.
Are air fryers ‘toxic’?
Linger on social media long enough and you’ll find worries about air fryer toxicity. These usually centre on plastic parts growing hot (which, as the plastic is limited to the exterior of air fryers rather than the parts that come into contact with food, shouldn’t present a problem) and nonstick coatings containing Pfas/PFOA.
Most manufacturers have phased out PFOA (since 2013, all Teflon products have been PFOA-free), while potential deterioration of nonstick (which may use Pfas, a term for a large group of chemicals) tends to happen at temperatures of 260C and above. Most air fryers have a limit of 200C, with top temperatures of 240C on others.
If you’re concerned about the safety of nonstick, choose an air fryer with a ceramic-coated pan and plates, or clean yours carefully: damaged coatings are more likely to release chemicals.
Another concern linked to air fryers is about cooking starchy food, which produces acrylamide (a potential carcinogen). However, the same risks apply when oven-cooking food.
Cooking oil at high temperatures can also produce potentially harmful compounds. Air fryers don’t use much oil, but if you’re concerned about this, choose an oil with a high smoke point (the temperature at which oil starts to smoke and break down), such as vegetable, peanut, sunflower or rapeseed.
Do air fryers use less energy than an oven?
Air fryers have gained a reputation for being economical, and while this is true for the most part, it won’t always be the case. For small amounts of food, air fryers use less energy, heating up quickly and only circulating hot air within a small space. For example, an A+-rated 72-litre oven might use 1.395kWh to cook a roast chicken over 90 minutes, while an air fryer could do the same job in less than an hour and use only 0.782kWh – almost half the energy and cost.
However, if you were cooking large amounts, such as a whole roast dinner – chicken, roast potatoes, yorkshire pudding, roast veggies and so on – running several cycles of air frying would cost more, making an oven the more energy-efficient choice.
Rachel Ogden has worked as a consumer journalist for decades, becoming an expert unboxer before it was a thing, although she is much less successful at repacking. Her home has hosted hundreds of small appliances from blenders and air fryers to robot vacuums, while outside, you’ll find her messing about with pizza ovens, barbecues and heaters. Unsurprisingly, it takes a lot to impress her – many have tried and failed
This article was originally published on 2 March 2025. Reviews published in the Filter may be periodically updated to reflect new products and at the editor’s discretion. The date of an article’s most recent update can be found in the timestamp at the top of the page. This article was amended on 21 November 2025; three new air fryers were added after testing, and prices were updated throughout.
Avast Makes AI-Driven Scam Defense Available for Free Worldwide
Bleeping Computer
www.bleepingcomputer.com
2025-11-21 15:00:10
Avast is rolling out Scam Guardian, a free AI-powered protection layer that analyzes websites, messages, and links to detect rising scam threats. Powered by Gen Threat Labs data, it reveals hidden dangers in code and adds 24/7 scam guidance through the Avast Assistant. [...]...
Driven by a commitment to make cutting-edge scam protection available to everyone, Avast, a leader in digital security and privacy and part of Gen, has unveiled Scam Guardian, a new AI-powered offering integrated into its award-winning Avast Free Antivirus.
Cybercriminals continue to abuse AI to craft increasingly convincing scam attacks at an alarming rate. Available at no cost, the new service marks a significant step forward in democratizing AI scam protection.
A premium version, Scam Guardian Pro, has also been added to Avast Premium Security, giving customers an enhanced layer of AI protection against email scams.
"Today's scams aren't crude or obvious – they're tailored, targeted, and AI-enhanced, making it harder than ever to tell the difference between truth and deception," said Leena Elias, Chief Product Officer at Gen.
"As scammers take advantage of rising data breaches and leaked personal information, anyone anywhere can become a victim of scams. That's why it's never been more important to make powerful AI-powered scam protection available to everyone, everywhere. We're levelling the playing field with world class scam defense that helps people strengthen their digital and financial safety."
According to the recent Q1/2025 Gen Threat Report, breached records of individuals surged by more than 186% between January and March 2025, revealing sensitive information such as passwords, emails, and credit card details.
Over the same timeframe, reports of phishing scams rose by 466% compared to the previous quarter, making up almost a third of all scam submissions observed by Gen.
As data breaches rise, so do the opportunities for attackers to exploit leaked information to launch targeted, hyper-personalized scam campaigns that are harder than ever to spot.
Like a seasoned scam investigator, Scam Guardian uses proprietary AI trained on scam data from Gen Threat Labs to go beyond just detecting malicious URLs—it also analyzes context and language to more effectively identify signs of deceptive or harmful intent.
Scam Guardian also helps to pull back the curtain on hidden threats in website code and neutralizes them to keep people safer as they browse and shop online.
Key features available in Scam Guardian for Avast Free Antivirus include:
Avast Assistant:
Provides 24/7 AI-powered scam protection guidance on suspicious websites, SMS messages, emails, links, offers, and more. Allows people to engage in open dialogue when they're unsure about a potential scam and uses natural language to better understand queries and deliver clear advice on what to do next.
Web Guard:
Uses the collective power of Gen Threat Labs telemetry and AI trained on millions of frequently visited websites to continuously analyze and detect hidden scams in content and code – offering unique visibility into dangerous URLs.
Scam Guardian Pro includes everything in Avast Scam Guardian, plus:
Email Guard:
Uses AI to understand the context of emails and the meaning of words to detect scams. Scans and flags safe and suspicious emails before you open them, helping to protect your email wherever you check it, no matter what device you use to log in.
TL;DR: Dependency cooldowns are a free, easy, and incredibly effective way to mitigate the large majority of open source supply chain attacks. More individual projects should apply cooldowns (via tools like Dependabot and Renovate) to their dependencies, and packaging ecosystems should invest in first-class support for cooldowns directly in their package managers.
“Supply chain security” is a serious problem. It’s also seriously overhyped, in part because dozens of vendors have a vested financial interest in convincing you that their framing of the underlying problem [1] is (1) correct, and (2) worth your money.
What’s concerning about this is that most open source supply chain
attacks have the same basic structure:
1. An attacker compromises a popular open source project, typically via a stolen credential or CI/CD vulnerability (such as “pwn requests” in GitHub Actions).
2. The attacker introduces a malicious change to the project and uploads it somewhere that will have maximum effect (PyPI, npm, GitHub releases, &c., depending on the target). At this point, the clock has started, as the attacker has moved into the public.
3. Users pick up the compromised version of the project via automatic dependency updates or a lack of dependency pinning.
4. Meanwhile, the aforementioned vendors are scanning public indices as well as customer repositories for signs of compromise, and provide alerts upstream (e.g. to PyPI). Notably, vendors are incentivized to report quickly and loudly upstream, as this increases the perceived value of their services in a crowded field.
5. Upstreams (PyPI, npm, &c.) remove or disable the compromised package version(s).
6. End-user remediation begins.
The key thing to observe is that the gap between (1) and (2) can be very large [2] (weeks or months), while the gap between (2) and (5) is typically very small: hours or days. This means that, once the attacker has moved into the actual exploitation phase, their window of opportunity to cause damage is pretty limited.
We can see this with numerous prominent supply chain attacks over the last 18 months [3]:
(Each of these attacks has significant downstream effect, of course, but only within their window of opportunity. Subsequent compromises from each, like Shai-Hulud, represent new windows of opportunity where the attackers regrouped and pivoted onto the next set of compromised credentials.)
My takeaway from this: some windows of opportunity are bigger, but the majority of them are under a week long. Consequently, ordinary developers can avoid the bulk of these types of attacks by instituting cooldowns on their dependencies.
Cooldowns
A “cooldown” is exactly what it sounds like: a window of time between when a dependency
is published and when it’s considered suitable for use. The dependency is public during
this window, meaning that “supply chain security” vendors can work their magic
while the rest of us wait any problems out.
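To make the mechanics concrete, the entire policy reduces to a timestamp comparison. Here is a minimal, tool-agnostic sketch; the function name and shape are mine, not taken from any particular package manager:
RUST
use std::time::{Duration, SystemTime};

/// A release becomes eligible for installation only once it has been public
/// for at least `cooldown_days`.
fn past_cooldown(published_at: SystemTime, cooldown_days: u64) -> bool {
    match SystemTime::now().duration_since(published_at) {
        Ok(age) => age >= Duration::from_secs(cooldown_days * 24 * 60 * 60),
        // a publish date in the future is suspicious on its own; treat it as not yet eligible
        Err(_) => false,
    }
}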
I love cooldowns for several reasons:
They’re empirically effective, per above. They won’t stop all attackers, but they do stymie the majority of high-visibility, mass-impact supply chain attacks that have become more common.
They’re incredibly easy to implement. Moreover, they’re literally free to implement in most cases: most people can use Dependabot’s functionality, Renovate’s functionality, or the functionality built directly into their package manager [5].
This is how simple it is in Dependabot:
version: 2
# update once a week, with a 7-day cooldown
updates:
  - package-ecosystem: github-actions
    directory: /
    schedule:
      interval: weekly
    cooldown:
      default-days: 7
(Rinse and repeat for other ecosystems as needed.)
Cooldowns enforce positive behavior from supply chain security vendors: vendors are still incentivized to discover and report attacks quickly, but are not as incentivized to emit volumes of blogspam about “critical” attacks on largely underfunded open source ecosystems.
Concluding / assorted thoughts
In the very small sample set above, 8/10 attacks had windows of opportunity of less than a week. Setting a cooldown of 7 days would have prevented the vast majority of these attacks from reaching end users (and causing knock-on attacks, which several of these were). Increasing the cooldown to 14 days would have prevented all but 1 of these attacks [6].
Cooldowns are, obviously, not a panacea: some attackers will evade detection, and delaying the inclusion of potentially malicious dependencies by a week (or two) does not fundamentally alter the fact that supply chain security is a social trust problem, not a purely technical one. Still, an 80-90% reduction in exposure through a technique that is free and easy seems hard to beat.
Related to the above, it’s unfortunate that cooldowns aren’t baked directly into more packaging ecosystems: Dependabot and Renovate are great, but even better would be if the package manager itself (as the source of ground truth) could enforce cooldowns directly (including of dependencies not introduced or bumped through automated flows).
Security updates for Friday
Linux Weekly News
lwn.net
2025-11-21 14:42:04
Security updates have been issued by AlmaLinux (delve and golang), Debian (webkit2gtk), Oracle (expat and thunderbird), Red Hat (kernel), Slackware (openvpn), SUSE (chromium, grub2, and kernel), and Ubuntu (cups-filters, imagemagick, and libcupsfilters)....
Go's runtime may someday start explicitly freeing some internal memory
"Inviting the Arsonists": Indian Climate Activist Slams Fossil Fuel Lobbyists at U.N. Climate Summit
Democracy Now!
www.democracynow.org
2025-11-21 13:50:42
Nations are struggling to reach a final text agreement at the COP30 U.N. climate summit in Belém, Brazil. Decisions are made by consensus at COPs, requiring consent among 192 countries, and the biggest fight over the draft text is the exclusion of a roadmap to phase out fossil fuels. Reportedly Saud...
Nations are struggling to reach a final text agreement at the COP30 U.N. climate summit in Belém, Brazil. Decisions are made by consensus at COPs, requiring consent among 192 countries, and the biggest fight over the draft text is the exclusion of a roadmap to phase out fossil fuels. Reportedly Saudi Arabia, China, Russia and India are among those that rejected the roadmap. But more than 30 countries are saying they will not accept a final deal without one. “We came to this COP to get a very concrete decision on just transitioning away from fossil fuels, to get a mechanism so that we can do it in a much more cooperative manner,” says Harjeet Singh, strategic adviser to the Fossil Fuel Non-Proliferation Treaty and the founding director of Satat Sampada Climate Foundation, a social justice organization.
"We Need to Be Heard": Indigenous Amazon Defender Alessandra Korap Munduruku on COP30 Protest
Democracy Now!
www.democracynow.org
2025-11-21 13:30:39
Thousands of Amazonian land defenders, both Indigenous peoples and their allies, have traveled to the COP30 U.N. climate conference in Belém, Brazil. On Friday night, an Indigenous-led march arrived at the perimeter of the COP’s “Blue Zone,” a secure area accessible only to those b...
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN:
This is Democracy Now!, democracynow.org. I’m Amy Goodman, with Nermeen Shaikh. We’re broadcasting from the U.N. climate summit, COP30, here in the Brazilian city of Belém, the gateway to the Amazon. It’s believed today will be the last day of the COP. I’m Amy Goodman, with Nermeen Shaikh.
NERMEEN SHAIKH:
The Amazon rainforest, often described as the lungs of the planet, is teeming with life. Thousands of Amazonian land defenders, both Indigenous people and their allies, have traveled to the tropical city of Belém, Brazil, the gateway to the Amazon, carrying their message that the rainforest is at a tipping point but can still be saved. Hundreds of activists arrived on several caravans and river-borne flotillas in advance of a major civil society march.
On Friday night, an Indigenous-led march arrived at the perimeter of the COP’s Blue Zone, a secure area accessible only to those bearing official summit credentials. The group stormed security, kicking down a door. United Nations police contained the protest, but it was a marker of the level of frustration at the failure of the deliberations to deliver just and effective climate action.
AMY GOODMAN:
One of the leaders of the protest was Alessandra Korap Munduruku. An iconic photograph of her at the forefront of Friday’s action shows her standing defiant as police in riot gear line up outside the venue. It’s become a symbol of Indigenous resistance.
In 2023, Alessandra was awarded the prestigious Goldman Environmental Prize for her leadership in organizing, forcing British mining giant Anglo American to withdraw from Indigenous lands, including those of her people.
I sat down with Alessandra Korap Munduruku earlier this week and began by asking her to introduce herself and to describe the protest she led last Friday.
ALESSANDRA KORAP MUNDURUKU:
[translated] My name is Alessandra Korap Munduruku. I am a leader from the Tapajós River, which is here in the state of Pará. And to come here, my delegation of the Munduruku people, we took two days by bus plus three days by boat. It was a long trip. We came with women. We came with children. We came with our shamans. So I’m not here alone. In Pará, there are over 17,000 Munduruku.
So, when we arrived here at COP30, we were abandoned. We didn’t have access to water. We had a hard time finding meals. It was very difficult for our people, who had traveled for so long to get here. And the people wanted to be heard. We came in a large delegation, and we wanted to speak, and we wanted to be heard. But we were blocked. I have credentials to enter COP, but many of the Munduruku who are here do not. And so, we decided that we needed to stop this COP. We needed people to stop and to listen to us.
They needed to listen to us, because we are the ones that are saying what the forest is demanding. We are the ones that are saying what the river is asking for. We are going through a lot of violence in our territories. The rivers, the Tapajós River, the Madeira River, they are being privatized for the creation of hydro waste, for the transportation of soy for agribusiness. This will expand the production of soy in Brazil. It will lead to more deforestation. It will lead to more Indigenous rights violations. So we blocked entry to COP because we need to be heard.
So, we live in the Amazon forest. We know what the river is going through. We need the river. We live with the river. Today, the river, the Tapajós River, is dry. There are days in which the river disappears. There are so many forest fires. So, why is it that we cannot have the power to decide here at COP? Why is it that they only speak about us, but that we cannot decide? And now President Lula has said that he’s going to consult the people about Federal Decree No. 12,600, which privatizes the rivers in the Amazon. But who is he going to consult? Is he going to consult the Indigenous groups? Is he going to consult the jaguars, the fish, the animals? How is this consultation going to be? Who needs to be heard?
And there’s another project that Lula and the government are trying to implement in the Tapajós region, in the Munduruku territory, which is called the Ferrogrão, the soy railway. The soy railway, it serves to cheapen the export of soy commodities from Brazil to the global market. It will lead to the expansion of soy production. Soy does not grow under trees. Soy leads to deforestation. Soy leads to the contamination of rivers by agrotoxics, the invasion of Indigenous territories.
We need to demarcate Indigenous lands in Brazil, because large-scale commodity production is killing Indigenous peoples. Yesterday, we had a Guarani-Kaiowá Indigenous person who was killed in the state of Mato Grosso do Sul with a bullet to his head. So, large-scale monoculture does not only kill with the pen by decision-making, by evicting Indigenous groups from their territory, but it also kills with a gun.
So, we’re here to urgently ask the international community to support the demarcation of Indigenous lands and to support that President Lula revoke Presidential Decree 12,600, which privatizes rivers in Brazil.
AMY GOODMAN:
So, you led a flotilla down the river, and you shut down the U.N. climate summit. There’s this iconic image of the U.N. climate summit, the COP30 president — he is the climate ambassador for Brazil, André Corrêa do Lago — holding a Munduruku child. Can you explain what that is? You forced him to come out to negotiate with you.
ALESSANDRA KORAP MUNDURUKU:
[translated] So, we were there blocking the entry to the COP, and we arrived very early. We arrived at 5 a.m. Everyone was hungry. We hadn’t eaten breakfast. The children started crying. And the children are the strongest ones, and they were already hungry. And the sun was coming out. And we wanted to speak to an authority, either the president of Brazil or the president of COP.
And at some point, the president of COP said that we had to open up entry to COP. And we said, “We are not leaving. You have to come out here and talk to us.” And so he came out. And we got the meeting with Minister Sônia Guajajara, Minister Marina Silva, because we knew that we had to be listened to.
And that child, that baby that André Corrêa holds in his arms, that is a very important symbol, because in holding that baby, that child represents the future of the Munduruku people, and Andre, if he carries out these projects, if the government of Brazil decides to implement these projects without consulting, without listening to the Munduruku nation, he is destroying the future of that child that he held in his own arms. So he’s assuming the responsibility for that life and for the life of all Munduruku children and babies.
AMY GOODMAN:
Your protests have made an enormous difference. Brazil has now created 10 new Indigenous territories as you were protesting this week, territories for Indigenous people, which mean your culture and environment are protected under Brazilian law. That happened this past week. What exactly does that mean?
ALESSANDRA KORAP MUNDURUKU:
[translated] So, to start, you know, we were here much before, thousands of years before colonization began, so all of this territory is ours. But today, to demarcate an Indigenous land, it’s very difficult. It’s a very long bureaucratic and political process, where we have to prove many things. So, we have to prove that that land is ours, even though it has always been ours.
And if government does not demarcate the land, it means that we will be expelled, evicted from our territories, and we will be killed. Demarcation is something that needs to happen, because nondemarcation means our deaths. There are so many companies that are — that have an eye on our land. So, hydropower plants, mining, soy producers, land grabbers, illegal loggers, legal loggers, there’s so many people that want our territory. And there’s so much land that still has to be demarcated.
So, let’s talk about the Munduruku lands in the mid-Tapajós region. My land, Sawré Ba’pim, was declared yesterday. Declaration is the third step in the long process of demarcation of an Indigenous land. So this is one more step in ensuring the full rights to our territory. But there’s another territory, called Sawré Muybu, which has already been declared, but now the illegal occupants need to be removed from this land. That’s the next step, the physical demarcation.
There are so many invaders in these lands, soy producers, farmers. It’s so easy for non-Indigenous peoples to have access to land in Brazil. All they need to do is go there, deforest, take down the forest, and they say that the land is theirs. That’s how land grabbing works. It’s so easy for them, but it’s so difficult for us. And now there’s this Marco Temporal, the temporal cutoff limit, that says that we only have rights to lands where we were physically present in 1988. But we were always on these lands. It doesn’t make any sense.
So, what I want to say is that we’re very happy that our lands advanced in the demarcation process, but there are so many lands that still need to be recognized and demarcated in Brazil.
AMY GOODMAN:
In 2023, you won the Goldman Environmental Prize for fighting the British mining company Anglo American. Can you explain what they were trying to do and what you won?
ALESSANDRA KORAP MUNDURUKU:
[translated] So, in 2019, after President Bolsonaro was elected, we started living a reign of terror in our territories. So, there was a lot of invasion by illegal gold diggers and illegal wildcat miners, garimpeiros. They came into the territory. They brought with them illegal criminal groups. They brought with them prostitution, violence, contamination of rivers, contamination of fish. It was a real order of terror.
And at that same time, between 2021 and 2022, we found out that the British mining company Anglo American had filed a request to prospect minerals within our land. Anglo American declared that our territory was not an Indigenous land because it was not yet formally demarcated. But everyone knew that we live there. Everyone knows that it’s our territory. For us, it’s our territory. And so, we were forced to fight at the same time against the garimpo, the illegal gold mining, and the big mining corporation Anglo American.
So we decided to speak out. We wrote a letter explaining everything that was happening, explaining what we demanded, that we demanded that Anglo American leave our territory immediately. Amazon Watch, which is a partner, sent this letter to the corporation. And they were obliged to step back, and they were obliged to remove their mining interests, to give up their mining interests within our territory, because of our struggle.
So, for us, that is an Indigenous land. That is a sacred land. It’s where our fish are, our fruits. It’s where we have authorization from the forest to step in. And so, we will continue fighting. We have so many victories that the world needs to learn more about. We kept a hydropower plant from being implemented in our territory, and we will continue fighting.
AMY GOODMAN:
Alessandra, I want to ask what keeps you going. I mean, Indigenous land protectors, environmentalists, especially the Indigenous, are — face such violence. Talk about that threat that so many face, and why you keep going.
ALESSANDRA KORAP MUNDURUKU:
[translated] So, what keeps me going are my people. My people keep me going, and my people keep me alive. The children, the territory, my family, it’s a collective struggle, and this is what keeps me alive. I’ve already suffered two attacks. Twice, people have entered my house, have invaded my house to try to keep me from fighting, threatening me. But I will not give up. I want the entire world to know who the Munduruku people are, who the Indigenous peoples of Brazil are and what we represent.
I know who I’m facing in my struggle. I know who I’m up against. I’m not up against just anyone. It’s against big corporations, against government, against these people that we commonly say that have power. But we have power. My people have power, because we have a people, we have culture, we have the forest. We have the things that really matter. So we know that we are powerful, and not them. I am not afraid, and I will not be silenced, and I will keep fighting.
AMY GOODMAN:
I’m wondering if you could compare your struggles against the current government, the Lula government, to the Bolsonaro government.
ALESSANDRA KORAP MUNDURUKU:
[translated] So, they were very different struggles in these two political contexts. So, former President Bolsonaro, he would foster violence against Indigenous peoples openly. There were no human rights. There was no protection. He was incentivizing the invasion of all territories. He was against the poor. He was against the Black population. He was against the Indigenous groups. He was against Brazilian society. He was only in favor of corporations. And his speech was that Indigenous peoples should become white people, that they should simply integrate Brazilian society and no longer be Indigenous. He would say this openly.
And the Munduruku people very openly confronted Bolsonaro. We very openly confronted the garimpo. There was a lot of violence against the Munduruku women. Maria Leusa, a Munduruku leader from the High Tapajós region, she was attacked. Her house was burned. There was a lot of direct confrontation.
Under Lula, things are very different. Lula speaks openly about the protection of the Amazon. He speaks about demarcation. He sits down with us. There is dialogue. He is demarcating Indigenous lands. But he still has a lot to learn. If he had learned what he should have learned by now, he would not have passed this decree which privatizes the rivers and turns them over to companies and concessions. He would be demarcating a lot more lands. So, it’s a lot better now, but there’s still so much to be done.
AMY GOODMAN:
And finally, if you can look right into that camera and share your message to the world?
ALESSANDRA KORAP MUNDURUKU:
[translated] So, my message, as Alessandra Korap Munduruku, to you, who’s watching this now, is: What are you doing to the environment? What is your country doing to the environment? What is your corporation, what are your companies, what are your representatives doing to the environment and to Indigenous rights? Do you know what they are doing? Are they respecting the rights of Indigenous peoples and of the environment? Are you monitoring where investments are going? Are you monitoring how corporate activities are taking place on the ground?
You need to know, because we, here, we do not eat soy. We do not eat gold. We do not eat iron ore. We eat the fish, and we eat the fruits from the forest. And we need our forest standing. So, I ask you, please, monitor your corporation. Monitor your company. Monitor your governments. Watch your representatives. Be aware of what they’re doing. We need you to do this for us here in the forest. This is my message to you, from Alessandra Korap Munduruku.
AMY GOODMAN:
That’s Alessandra Korap Munduruku, one of the Indigenous resistance leaders who shut down the COP last Friday for a few hours, demanding climate action.
NERMEEN SHAIKH:
Coming up, we’ll speak with climate justice activist Harjeet Singh, adviser to the Fossil Fuel Non-Proliferation Treaty. He’s based in India, one of the countries that rejected moving away from fossil fuels. Stay with us.
[break]
AMY GOODMAN:
That’s Las Cafeteras in our Democracy Now! studio.
Building a Minimal Viable Armv7 Emulator from Scratch
Tip or TLDR - I built a tiny, zero dependency armv7 userspace emulator in Rust
I wrote a minimal viable armv7 emulator in 1.3k lines of Rust without any
dependencies. It parses and validates a 32-bit arm binary, maps its segments,
decodes a subset of arm instructions, translates guest and host memory
interactions and forwards arm Linux syscalls into x86-64 System V syscalls.
It can run an armv7 hello world binary and does so in 1.9ms (0.015ms for raw
emulation without setup), while qemu takes 12.3ms (stinkarm is thus ~100-1000x
slower than native armv7 execution).
After reading about the process the Linux kernel performs to execute binaries,
I thought: I want to write an armv7 emulator - stinkarm. Mostly to understand the ELF
format, the encoding of arm 32bit instructions, the execution of arm assembly
and how it all fits together (this will help me with the JIT for my programming
language I am currently designing). To fully understand everything: no
dependencies. And of course Rust, since I already have enough C projects going
on.
So I wrote the smallest binary I could think of:
ARMASM
.global _start      @ declare _start as a global
_start:             @ start is the defacto entry point
    mov r0, #161    @ first and only argument to the exit syscall
    mov r7, #1      @ syscall number 1 (exit)
    svc #0          @ trapping into the kernel (thats US, since we are translating)
To execute this arm assembly on my x86 system, I need to:
Parse the ELF, validate it is armv7 and statically executable (I don’t want
to write a dynamic dependency resolver and loader)
Map the segments defined in ELF into the host memory, forward memory access
Decode armv7 instructions and convert them into a nice Rust enum
Emulate the CPU, its state and registers
Execute the instructions and apply their effects to the CPU state
Translate and forward syscalls
Sounds easy? It is!
Open below if you want to see me write a build script and a nix flake:
Minimalist arm setup and smallest possible arm binary
Before I start parsing ELF I’ll need a binary to emulate, so let’s create a build script called bld_exmpl (so I can write a lot less) and a nix flake, so the asm is converted into armv7 machine code in an armv7 binary on my non-armv7 system :^)
At a high level, ELF (32-bit, for armv7) consists of headers and segments: it holds an ELF header and multiple program headers, and the rest I don’t care about, since this emulator is only for static binaries, with no dynamically linked support.
Elf32_Ehdr
The ELF header is exactly 52 bytes long and holds all the data I need to find the program headers and to decide whether I even want to emulate the binary I’m currently parsing. These criteria are defined as members of the Identifier at the beginning of the header. Most resources show C-based examples; the Rust ports are below:
RUST
/// Representing the ELF Object File Format header in memory, equivalent to Elf32_Ehdr in 2. ELF
/// header in https://gabi.xinuos.com/elf/02-eheader.html
///
/// Types are taken from https://gabi.xinuos.com/elf/01-intro.html#data-representation Table 1.1
/// 32-Bit Data Types:
///
/// | Elf32_ | Rust |
/// | ------ | ---- |
/// | Addr   | u32  |
/// | Off    | u32  |
/// | Half   | u16  |
/// | Word   | u32  |
/// | Sword  | i32  |
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Header {
    /// initial bytes mark the file as an object file and provide machine-independent data with
    /// which to decode and interpret the file’s contents
    pub ident: Identifier,
    pub r#type: Type,
    pub machine: Machine,
    /// identifies the object file version, always EV_CURRENT (1)
    pub version: u32,
    /// the virtual address to which the system first transfers control, thus starting
    /// the process. If the file has no associated entry point, this member holds zero
    pub entry: u32,
    /// the program header table’s file offset in bytes. If the file has no program header table,
    /// this member holds zero
    pub phoff: u32,
    /// the section header table’s file offset in bytes. If the file has no section header table, this
    /// member holds zero
    pub shoff: u32,
    /// processor-specific flags associated with the file
    pub flags: u32,
    /// the ELF header’s size in bytes
    pub ehsize: u16,
    /// the size in bytes of one entry in the file’s program header table; all entries are the same
    /// size
    pub phentsize: u16,
    /// the number of entries in the program header table. Thus the product of e_phentsize and e_phnum
    /// gives the table’s size in bytes. If a file has no program header table, e_phnum holds the value
    /// zero
    pub phnum: u16,
    /// section header’s size in bytes. A section header is one entry in the section header table; all
    /// entries are the same size
    pub shentsize: u16,
    /// number of entries in the section header table. Thus the product of e_shentsize and e_shnum
    /// gives the section header table’s size in bytes. If a file has no section header table,
    /// e_shnum holds the value zero.
    pub shnum: u16,
    /// the section header table index of the entry associated with the section name string table.
    /// If the file has no section name string table, this member holds the value SHN_UNDEF
    pub shstrndx: u16,
}
The identifier is 16 bytes long and holds the previously mentioned info so I can check if I want to emulate the binary, for instance the endianness and the bit class. In the TryFrom implementation I strictly check what is parsed:
RUST
/// 2.2 ELF Identification: https://gabi.xinuos.com/elf/02-eheader.html#elf-identification
#[repr(C)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Identifier {
    /// 0x7F, 'E', 'L', 'F'
    pub magic: [u8; 4],
    /// file class or capacity
    ///
    /// | Name         | Value | Meaning       |
    /// | ------------ | ----- | ------------- |
    /// | ELFCLASSNONE | 0     | Invalid class |
    /// | ELFCLASS32   | 1     | 32-bit        |
    /// | ELFCLASS64   | 2     | 64-bit        |
    pub class: u8,
    /// data encoding, endian
    ///
    /// | Name        | Value |
    /// | ----------- | ----- |
    /// | ELFDATANONE | 0     |
    /// | ELFDATA2LSB | 1     |
    /// | ELFDATA2MSB | 2     |
    pub data: u8,
    /// file version, always EV_CURRENT (1)
    pub version: u8,
    /// operating system identification
    ///
    /// - if no extensions are used: 0
    /// - meaning depends on e_machine
    pub os_abi: u8,
    /// value depends on os_abi
    pub abi_version: u8,
    // padding bytes (9-15)
    _pad: [u8; 7],
}

impl TryFrom<&[u8]> for Identifier {
    type Error = &'static str;

    fn try_from(bytes: &[u8]) -> Result<Self, Self::Error> {
        if bytes.len() < 16 {
            return Err("e_ident too short for ELF");
        }

        // I don't want to cast via unsafe as_ptr and as Header because the header could outlive the
        // source slice, thus we just do it the old plain indexing way
        let ident = Self {
            magic: bytes[0..4].try_into().unwrap(),
            class: bytes[4],
            data: bytes[5],
            version: bytes[6],
            os_abi: bytes[7],
            abi_version: bytes[8],
            _pad: bytes[9..16].try_into().unwrap(),
        };

        if ident.magic != [0x7f, b'E', b'L', b'F'] {
            return Err("Unexpected EI_MAG0 to EI_MAG3, wanted 0x7f E L F");
        }

        const ELFCLASS32: u8 = 1;
        const ELFDATA2LSB: u8 = 1;
        const EV_CURRENT: u8 = 1;

        if ident.version != EV_CURRENT {
            return Err("Unsupported EI_VERSION value");
        }

        if ident.class != ELFCLASS32 {
            return Err("Unexpected EI_CLASS: ELFCLASS64, wanted ELFCLASS32 (ARMv7)");
        }

        if ident.data != ELFDATA2LSB {
            return Err("Unexpected EI_DATA: big-endian, wanted little");
        }

        Ok(ident)
    }
}
Type and Machine are just enums encoding meaning in the Rust type system:
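The enums themselves aren’t reproduced here; a minimal sketch of what they could look like, using the standard ELF constants (restricting them to the only variants stinkarm cares about, with their TryFrom impls omitted, is my assumption):
RUST
/// e_type: we only care about statically linked executables
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Type {
    /// ET_EXEC
    Executable = 2,
}

/// e_machine: we only emulate 32-bit arm
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Machine {
    /// EM_ARM
    Arm = 40,
}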
Since all of Header’s members implement TryFrom, we can implement TryFrom<&[u8]> for Header and propagate all occurring errors in member parsing cleanly via ?:
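That parsing code isn’t shown at this point in the post; a sketch of how the ?-propagation could look, assuming Type and Machine also implement TryFrom<&[u8]> and assuming a hypothetical le16! sibling of the le32! macro used further down for little-endian reads:
RUST
impl TryFrom<&[u8]> for Header {
    type Error = String;

    fn try_from(bytes: &[u8]) -> Result<Self, Self::Error> {
        if bytes.len() < 52 {
            return Err("Not enough bytes to parse Elf32_Ehdr, need at least 52".into());
        }

        Ok(Self {
            // every member parse can fail; `?` bubbles the error up to the caller
            ident: bytes[0..16].try_into()?,
            r#type: bytes[16..18].try_into()?,
            machine: bytes[18..20].try_into()?,
            version: le32!(bytes[20..24]),
            entry: le32!(bytes[24..28]),
            phoff: le32!(bytes[28..32]),
            shoff: le32!(bytes[32..36]),
            flags: le32!(bytes[36..40]),
            ehsize: le16!(bytes[40..42]),
            phentsize: le16!(bytes[42..44]),
            phnum: le16!(bytes[44..46]),
            shentsize: le16!(bytes[46..48]),
            shnum: le16!(bytes[48..50]),
            shstrndx: le16!(bytes[50..52]),
        })
    }
}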
For me, the most important fields in Header are phoff and phentsize, since we can use these to index into the binary to locate the program headers (Phdr).
RUST
/// Phdr, equivalent to Elf32_Phdr, see: https://gabi.xinuos.com/elf/07-pheader.html
///
/// All of its member are u32, be it Elf32_Word, Elf32_Off or Elf32_Addr
#[derive(Debug)]
pub struct Pheader {
    pub r#type: Type,
    pub offset: u32,
    pub vaddr: u32,
    pub paddr: u32,
    pub filesz: u32,
    pub memsz: u32,
    pub flags: Flags,
    pub align: u32,
}

impl Pheader {
    /// extracts Pheader from raw, starting from offset
    pub fn from(raw: &[u8], offset: usize) -> Result<Self, String> {
        let end = offset.checked_add(32).ok_or("Offset overflow")?;
        if raw.len() < end {
            return Err("Not enough bytes to parse Elf32_Phdr, need at least 32".into());
        }

        let p_raw = &raw[offset..end];
        let r#type = p_raw[0..4].try_into()?;
        let flags = p_raw[24..28].try_into()?;
        let align = le32!(p_raw[28..32]);

        if align > 1 && !align.is_power_of_two() {
            return Err(format!("Invalid p_align: {}", align));
        }

        Ok(Self {
            r#type,
            offset: le32!(p_raw[4..8]),
            vaddr: le32!(p_raw[8..12]),
            paddr: le32!(p_raw[12..16]),
            filesz: le32!(p_raw[16..20]),
            memsz: le32!(p_raw[20..24]),
            flags,
            align,
        })
    }
}
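The le32! macro used above isn’t shown in the post; a minimal sketch, assuming it simply reinterprets a 4-byte little-endian slice:
RUST
/// Reads a u32 from an expression that evaluates to a 4-byte little-endian slice.
macro_rules! le32 {
    ($bytes:expr) => {
        u32::from_le_bytes($bytes.try_into().expect("expected exactly 4 bytes"))
    };
}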
Type holds info about what type of segment the header defines:
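Again the enum itself isn’t shown; since the loader below only cares about loadable segments, a sketch could be as small as this (the single-variant restriction is my assumption):
RUST
/// p_type: which kind of segment a program header describes
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Type {
    /// PT_LOAD: a loadable segment, the only kind we map into guest memory
    LOAD = 1,
}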
Since the only reason for parsing the ELF headers is to know where to put what segment with which permissions, I want to quickly interject on why we have to put said segments at these specific addresses. The main reason is that all pointers, all offsets and all PC-relative decoding have to be done relative to Elf32_Ehdr.entry, here 0x8000. The linker also generated all instruction arguments according to this value.
Before mapping each segment at its Pheader::vaddr, we have to understand: one doesn’t simply mmap with MAP_FIXED or MAP_FIXED_NOREPLACE into the virtual address 0x8000. The Linux kernel won’t let us, and rightfully so; man mmap says:
If addr is not NULL, then the kernel takes it as a hint about where to place
the mapping; on Linux, the kernel will pick a nearby page boundary (but
always above or equal to the value specified by /proc/sys/vm/mmap_min_addr) and
attempt to create the mapping there.
And /proc/sys/vm/mmap_min_addr on my system is u16::MAX, (2^16)-1 = 65535. So mapping our segment to 0x8000 (32768) is not allowed:
RUST
let segment = sys::mmap::mmap(
    // this is only UB if dereferenced, it's just a hint, so it's safe here
    Some(unsafe { std::ptr::NonNull::new_unchecked(0x8000 as *mut u8) }),
    4096,
    sys::mmap::MmapProt::WRITE,
    sys::mmap::MmapFlags::ANONYMOUS
        | sys::mmap::MmapFlags::PRIVATE
        | sys::mmap::MmapFlags::NOREPLACE,
    -1,
    0,
)
.unwrap();
Running the above with our vaddr of 0x8000 results in:
TEXT
thread 'main' panicked at src/main.rs:33:6:
called `Result::unwrap()` on an `Err` value: "mmap failed (errno 1): Operation not permitted (os error 1)"
It only works in elevated permission mode, which is something I don’t want to run my emulator in.
Translating guest memory access to host memory access
The obvious fix is to not mmap below u16::MAX and let the kernel choose where we dump our segment:
But this means the segment of the process to emulate is not at 0x8000, but anywhere the kernel allows. So we need to add a translation layer between guest and host memory: (If you’re familiar with how virtual memory works, it’s similar, just one more indirection.)
This fix has the added benefit of allowing us to sandbox guest memory fully, so we can validate each memory access before we allow a guest to host memory interaction.
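The Mem type doing that translation isn’t shown yet; a minimal sketch, under the assumptions that it just records (guest vaddr, length, host pointer) triples for every mapped region, that the page-alignment bookkeeping happens elsewhere, and that read_u32 returns an Option rather than panicking:
RUST
pub struct Mem {
    /// (guest virtual address, length, host pointer) for every mapped segment
    regions: Vec<(u32, u32, *mut u8)>,
}

impl Mem {
    pub fn new() -> Self {
        Self { regions: Vec::new() }
    }

    /// record a guest region and the host memory backing it
    pub fn map_region(&mut self, vaddr: u32, len: u32, host: *mut u8) {
        self.regions.push((vaddr, len, host));
    }

    /// translate a guest address into a host pointer, rejecting out-of-bounds accesses
    fn translate(&self, addr: u32, size: u32) -> Option<*const u8> {
        self.regions
            .iter()
            .find(|(vaddr, len, _)| addr >= *vaddr && addr.wrapping_add(size) <= vaddr.wrapping_add(*len))
            .map(|(vaddr, _, host)| unsafe { host.add((addr - vaddr) as usize) as *const u8 })
    }

    /// read a little-endian word from guest memory
    pub fn read_u32(&self, addr: u32) -> Option<u32> {
        let ptr = self.translate(addr, 4)?;
        let bytes = unsafe { std::slice::from_raw_parts(ptr, 4) };
        Some(u32::from_le_bytes(bytes.try_into().unwrap()))
    }
}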
Mapping segments with their permissions
The basic idea is similar to the way a JIT compiler works:
create a mmap section with W permissions
write bytes from the ELF into the section
zero the rest of the defined size
change the permission of the section with mprotect to the permissions defined in the Pheader
RUST
/// mapping applies the configuration of self to the current memory context by creating the
/// segments with the corresponding permission bits, vaddr, etc
pub fn map(&self, raw: &[u8], guest_mem: &mut mem::Mem) -> Result<(), String> {
    // zero memory needed case, no clue if this actually ever happens, but we support it
    if self.memsz == 0 {
        return Ok(());
    }

    if self.vaddr == 0 {
        return Err("program header has a zero virtual address".into());
    }

    // we need page alignment, so either Elf32_Phdr.p_align or 4096
    let (start, _end, len) = self.alignments();

    // Instead of mapping at the guest vaddr (Linux doesn't allow for low addresses),
    // we allocate memory wherever the host kernel gives us.
    // This keeps guest memory sandboxed: guest addr != host addr.
    let segment = mem::mmap::mmap(
        None,
        len as usize,
        MmapProt::WRITE,
        MmapFlags::ANONYMOUS | MmapFlags::PRIVATE,
        -1,
        0,
    )?;

    let segment_ptr = segment.as_ptr();
    let segment_slice = unsafe { std::slice::from_raw_parts_mut(segment_ptr, len as usize) };

    let file_slice: &[u8] =
        &raw[self.offset as usize..(self.offset.wrapping_add(self.filesz)) as usize];

    // compute offset inside the mmapped slice where the segment should start
    let offset = (self.vaddr - start) as usize;

    // copy the segment contents to the mmaped segment
    segment_slice[offset..offset + file_slice.len()].copy_from_slice(file_slice);

    // we need to zero the remaining bytes
    if self.memsz > self.filesz {
        segment_slice
            [offset.wrapping_add(file_slice.len())..offset.wrapping_add(self.memsz as usize)]
            .fill(0);
    }

    // record mapping in guest memory table, so CPU can translate guest vaddr to host pointer
    guest_mem.map_region(self.vaddr, len, segment_ptr);

    // we change the permissions for our segment from W to the segments requested bits
    mem::mmap::mprotect(segment, len as usize, self.flags.into())
}

/// returns (start, end, len)
fn alignments(&self) -> (u32, u32, u32) {
    // we need page alignment, so either Elf32_Phdr.p_align or 4096
    let align = match self.align {
        0 => 0x1000,
        _ => self.align,
    };
    let start = self.vaddr & !(align - 1);
    let end = (self.vaddr.wrapping_add(self.memsz).wrapping_add(align) - 1) & !(align - 1);
    let len = end - start;
    (start, end, len)
}
Map is called in the emulator's entry point:
RUST
let elf: elf::Elf = (&buf as &[u8]).try_into().expect("Failed to parse binary");
let mut mem = mem::Mem::new();
for phdr in elf.pheaders {
    if phdr.r#type == elf::pheader::Type::LOAD {
        phdr.map(&buf, &mut mem)
            .expect("Mapping program header failed");
    }
}
Decoding armv7
We can now request a word (32 bit) from our LOAD segment, which contains the .text section bytes one can inspect via objdump:

So we use Mem::read_u32(0x8000) and get 0xe3a000a1.
Decoding armv7 instructions seems doable at a glance, but it is a deeper rabbit hole than I expected; prepare for a section heavy on bit shifting, implicit behaviour and intertwined meaning:
Instructions more or less fall into four groups:
Branch and control
Data processing
Load and store
Other (syscalls & stuff)
Each armv7 instruction is 32 bits in size; (in general) its layout is as follows:
Our decoder is a function accepting a word and the program counter (we need this later for decoding the offset for ldr), and returning the aforementioned instruction container:
Referring to the diagram shown before, I know the first 4 bits are the condition, so I can extract these first. I also take the top 3 bits to identify the instruction class (load and store, branch or data processing immediate):
We can now plug this in, match on the only ddi (data processing immediate) we know and extract both the destination register (rd) and the raw immediate value:
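The snippet itself is not included in this excerpt, so here is a condensed sketch of the pieces just described (condition nibble, class bits, destination register and raw immediate), checked against the 0xe3a000a1 word from above; stinkarm's real decoder is structured differently.
RUST
// sketch: recognize a MOV-immediate and pull out cond, rd and the raw 12-bit field
fn decode_mov_imm(word: u32) -> Option<(u8, u8, u32)> {
    let cond = ((word >> 28) & 0xF) as u8;  // bits 31-28: condition
    let top = ((word >> 25) & 0b111) as u8; // bits 27-25: rough instruction class
    let opcode = (word >> 21) & 0xF;        // data processing opcode, MOV is 0b1101
    if top == 0b001 && opcode == 0b1101 {
        let rd = ((word >> 12) & 0xF) as u8; // destination register
        let imm12 = word & 0xFFF;            // raw 12-bit immediate field
        return Some((cond, rd, imm12));
    }
    None
}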
From the examples before one can see the immediate value is prefixed with #. To move the value 161 into r0 we do:
ASM
mov r0, #161
Since we know there are only 12 bits available for the immediate, the ARM engineers came up with rotating the resulting integer by the remaining 4 bits:
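As a sketch of that expansion (the standard armv7 modified-immediate rule, not stinkarm's exact helper): the low 8 bits are rotated right by twice the value of the top 4 bits.
RUST
// expand a 12-bit immediate field into the 32-bit value it encodes
fn expand_imm12(imm12: u32) -> u32 {
    let rotate = (imm12 >> 8) & 0xF; // top 4 bits: rotation amount / 2
    let imm8 = imm12 & 0xFF;         // low 8 bits: the base value
    imm8.rotate_right(rotate * 2)
}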
ldr is part of the load and store instruction group and is needed for accessing Hello World! in .rodata and putting a pointer to it into a register.
In comparison to the immediate mov we have to do a little trick, since we only want to match the load and store forms that are:
single register modification
load and store with immediate
So we only decode:
ARMASM
LDR Rd, [Rn, #imm]
LDR Rd, [Rn], #imm
@ etc
Thus we match with (top >> 1) & 0b11 == 0b01 and start extracting a whole bucket load of bit flags:
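A sketch of that extraction, using the standard A32 single data transfer bit positions (stinkarm's helper names will differ):
RUST
// pull the addressing flags and register fields out of a load/store word
fn ldr_str_flags(word: u32) -> (bool, bool, bool, bool, bool, u32, u32, u32) {
    let p = (word >> 24) & 1 == 1; // pre (true) / post (false) indexed
    let u = (word >> 23) & 1 == 1; // add (true) / subtract (false) the offset
    let b = (word >> 22) & 1 == 1; // byte (true) / word (false) access
    let w = (word >> 21) & 1 == 1; // write-back to the base register
    let l = (word >> 20) & 1 == 1; // load (true) / store (false)
    let rn = (word >> 16) & 0xF;   // base register
    let rd = (word >> 12) & 0xF;   // destination register
    let imm12 = word & 0xFFF;      // unsigned 12-bit offset
    (p, u, b, w, l, rn, rd, imm12)
}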
ldr Rn, <addr> matches exactly: load, base register is PC (rn == 0b1111), pre-indexed addressing, added offset, no write back and no byte sized access (l && rn == 0b1111 && p && u && !w && !b).
Syscalls
Syscalls are the only way to interact with the Linux kernel (as far as I
know), so we definitely need to implement both decoding and forwarding.
Bits 27-24 are 1111 for system calls. The immediate value is irrelevant for us, since the Linux syscall handler discards the value either way:
RUST
if ((word >> 24) & 0xF) as u8 == 0b1111 {
    return InstructionContainer {
        cond,
        // technically arm says svc has a 24bit immediate but we don't care about it, since the
        // Linux kernel also doesn't
        instruction: Instruction::Svc,
    };
}
We can now fully decode all instructions for both the simple exit and
the more advanced hello world binary:
This is by FAR the easiest part, I only struggled with the double indirection for ldr (I simply didn't know about it), but each problem at its time :^).
RUST
pub struct Cpu<'cpu> {
    /// r0-r15 (r13=SP, r14=LR, r15=PC)
    pub r: [u32; 16],
    pub cpsr: u32,
    pub mem: &'cpu mut mem::Mem,
    /// only set by ArmSyscall::Exit to propagate exit code to the host
    pub status: Option<i32>,
}

impl<'cpu> Cpu<'cpu> {
    pub fn new(mem: &'cpu mut mem::Mem, pc: u32) -> Self {
        let mut s = Self {
            r: [0; 16],
            cpsr: 0x60000010,
            mem,
            status: None,
        };
        s.r[15] = pc;
        s
    }
Instantiating the cpu:
RUST
let mut cpu = cpu::Cpu::new(&mut mem, elf.header.entry);
Conditional Instructions?
When writing the decoder I was confused by the 4 condition bits. I always thought one does conditional execution by using a branch to jump over instructions that shouldn't be executed. That was before I learned that for arm, both ways are supported (the armv7 reference says this feature should only be used if there aren't multiple instructions depending on the same condition, otherwise one should use branches) - so I need to support this too:
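The check itself is not shown here; as a sketch, a cond_passes written as a free function over the CPSR so it stands on its own (in stinkarm it is presumably a method on Cpu), using the standard armv7 condition codes against the N, Z, C and V flags in bits 31-28:
RUST
fn cond_passes(cpsr: u32, cond: u8) -> bool {
    let n = (cpsr >> 31) & 1 == 1;
    let z = (cpsr >> 30) & 1 == 1;
    let c = (cpsr >> 29) & 1 == 1;
    let v = (cpsr >> 28) & 1 == 1;
    match cond {
        0b0000 => z,            // EQ
        0b0001 => !z,           // NE
        0b0010 => c,            // CS/HS
        0b0011 => !c,           // CC/LO
        0b0100 => n,            // MI
        0b0101 => !n,           // PL
        0b0110 => v,            // VS
        0b0111 => !v,           // VC
        0b1000 => c && !z,      // HI
        0b1001 => !c || z,      // LS
        0b1010 => n == v,       // GE
        0b1011 => n != v,       // LT
        0b1100 => !z && n == v, // GT
        0b1101 => z || n != v,  // LE
        _ => true,              // AL (and the reserved 0b1111 encoding)
    }
}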
After implementing the necessary checks and setup for emulating the cpu, the
CPU can now check if an instruction is to be executed, match on the decoded
instruction and run the associated logic:
RUST
impl<'cpu> Cpu<'cpu> {
    #[inline(always)]
    fn pc(&self) -> u32 {
        self.r[15] & !0b11
    }

    /// moves pc forward a word
    #[inline(always)]
    fn advance(&mut self) {
        self.r[15] = self.r[15].wrapping_add(4);
    }

    pub fn step(&mut self) -> Result<bool, err::Err> {
        let Some(word) = self.mem.read_u32(self.pc()) else {
            return Ok(false);
        };

        if word == 0 {
            // zero instruction means we hit zeroed out rest of the page
            return Ok(false);
        }

        let InstructionContainer { instruction, cond } = decoder::decode_word(word, self.pc());

        if !self.cond_passes(cond) {
            self.advance();
            return Ok(true);
        }

        match instruction {
            decoder::Instruction::MovImm { rd, rhs } => {
                self.r[rd as usize] = rhs;
            }
            decoder::Instruction::Unknown(w) => {
                return Err(err::Err::UnknownOrUnsupportedInstruction(w));
            }
            i => {
                stinkln!(
                    "found unimplemented instruction, exiting: {:#x}:={:?}",
                    word,
                    i
                );
                self.status = Some(1);
            }
        }

        self.advance();

        Ok(true)
    }
}
LDR and addresses in literal pools
While "Translating guest memory access to host memory access" goes into depth on translating / forwarding guest memory access to host memory addresses, this chapter will focus on the layout of literals in armv7 and how ldr indirects memory access.
Let's first take a look at the ldr instruction of our hello world example:
ARMASM
.section .rodata
@ define a string with the `msg` label
msg:
@ asciz is like ascii but zero terminated
.asciz "Hello world!\n"

.section .text
.global _start
_start:
@ load the literal pool addr of msg into r1
ldr r1, =msg
If expression evaluates to a numeric constant then a MOV or MVN instruction
will be used in place of the LDR instruction, if the constant can be generated
by either of these instructions. Otherwise the constant will be placed into the
nearest literal pool (if it not already there) and a PC relative LDR
instruction will be generated.
Now this may not make sense at first glance: why would =msg be assembled into an address to the address of the literal? But an armv7 instruction cannot encode a full address; it is impossible due to the instruction being restricted to an 8-bit value rotated right by an even number of bits. The ldr instruction's argument points to a literal pool entry; this entry is a 32-bit value and reading it produces the actual address of msg.
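As a sketch of how the emulator can resolve such a load (my reconstruction, not stinkarm's code): the literal sits at the instruction's own address + 8 + the decoded offset, because the ARM pipeline makes PC read as "current instruction + 8", and the value found there is itself the guest address of msg.
RUST
// pc_at_decode: guest address of the ldr instruction, imm12: decoded offset (U bit assumed set)
fn ldr_literal(cpu: &mut Cpu, pc_at_decode: u32, rd: usize, imm12: u32) -> Option<()> {
    // ARM's pipeline makes PC read as "address of current instruction + 8"
    let literal_addr = pc_at_decode.wrapping_add(8).wrapping_add(imm12);
    // the literal pool entry holds the *address* of msg, not the string itself
    let value = cpu.mem.read_u32(literal_addr)?;
    cpu.r[rd] = value;
    Some(())
}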
When decoding we can see ldr points to a memory address (32800 or 0x8020) in the section we mmapped earlier:

Any other instruction using an addr will have to also go through the Mem::translate indirection.
Forwarding Syscalls and other feature flag based logic
Since stinkarm has three ways of dealing with syscalls (deny, sandbox, forward), I decided on handling the selection of the appropriate logic at cpu creation time via a function pointer attached to the CPU as the syscall_handler field:
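The field itself is not shown in this excerpt; as a sketch of the idea (the type and mode names are mine, stinkarm's may differ), the handler is just a function pointer picked once, when the CPU is built:
RUST
// hypothetical names; the three handler bodies are sketched further down
type SyscallHandler = fn(&mut Cpu);

enum Mode {
    Deny,
    Sandbox,
    Forward,
}

fn syscall_deny(_cpu: &mut Cpu) { /* see the deny sketch below */ }
fn syscall_sandbox(_cpu: &mut Cpu) { /* see the sandbox sketch below */ }
fn syscall_forward(_cpu: &mut Cpu) { /* see the forwarding sketch below */ }

fn pick_handler(mode: Mode) -> SyscallHandler {
    match mode {
        Mode::Deny => syscall_deny,
        Mode::Forward => syscall_forward,
        Mode::Sandbox => syscall_sandbox, // sandbox is the default
    }
}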
In our examples I obviously used the armv7 syscall calling convention. But this convention differs from the calling convention of our x86 (technically it's the x86-64 System V AMD64 ABI) host by a lot.
While armv7 uses r7 for the syscall number and r0-r5 for the syscall arguments, x86 uses rax for the syscall id and rdi, rsi, rdx, r10, r8 and r9 for the syscall arguments (rcx can't be used since syscall clobbers it, thus Linux goes with r10).
Also the syscall numbers differ between armv7 and x86: sys_write is 1 on x86 and 4 on armv7. If you are interested in either calling convention, syscall ids or documentation, do visit The Chromium Projects - Linux System Call Table; it is generated from Linux headers and fairly readable.
Table version:

usage        armv7   x86-64
syscall id   r7      rax
return       r0      rax
arg0         r0      rdi
arg1         r1      rsi
arg2         r2      rdx
arg3         r3      r10
arg4         r4      r8
arg5         r5      r9
So something like writing "123" to stdout looks like this on arm:
Having made the calling convention differences clear, the handling of a syscall is simply to execute this handler and use r7 to convert the armv7 syscall number to the x86 syscall number:
By default the sandboxing mode is selected, but I will go into detail on both sandboxing and denying syscalls later; first I want to focus on the implementation of the translation layer from armv7 to x86 syscalls:
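That translation boils down to a match on r7. Below is a minimal sketch under my own assumptions about the handler signature (fn(&mut Cpu)); the armv7 numbers (exit = 1, write = 4) come from the Linux syscall table mentioned above, the write helper it calls is shown a bit further down, and the exit handling mirrors what the next paragraphs describe.
RUST
fn syscall_forward(cpu: &mut Cpu) {
    let (r0, r1, r2) = (cpu.r[0], cpu.r[1], cpu.r[2]);
    match cpu.r[7] {
        // sys_exit: never forwarded, we only remember the exit code (see below)
        1 => cpu.status = Some(r0 as i32),
        // sys_write: forwarded to the host kernel via the `write` helper below
        4 => {
            let ret = write(cpu, r0, r1, r2);
            cpu.r[0] = ret as u32;
        }
        nr => {
            // everything else is unsupported in this sketch
            eprintln!("unsupported armv7 syscall: {nr}");
            cpu.status = Some(1);
        }
    }
}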
Since exit means the guest wants to exit, we can’t just forward this to the
host system, simply because this would exit the emulator before it would be
able to do cleanup and unmap memory regions allocated.
To both know we hit the exit syscall (we need to, otherwise the emulator keeps executing) and propagate the exit code to the host system, we set the Cpu::status field to Some(r0), r0 being the argument to the syscall. This field is then used in the emulator entry point / main loop:
RUST
fn main() {
    let mut cpu = cpu::Cpu::new(&conf, &mut mem, elf.header.entry);

    loop {
        match cpu.step() { /**/ }

        // Cpu::status is only Some if sys_exit was called, we exit the
        // emulation loop
        if cpu.status.is_some() {
            break;
        }
    }

    let status = cpu.status.unwrap_or(0);
    // cleaning up used memory via munmap
    mem.destroy();
    // propagating the status code to the host system
    exit(status);
}
Implementing: sys_write
The write syscall is not as spectacular as sys_exit: writing a buf of len to a file descriptor.
register   description
rax        syscall number (1 for write)
rdi        file descriptor (0 for stdin, 1 for stdout, 2 for stderr)
rsi        a pointer to the buffer
rdx        the length of the buffer rsi is pointing to
It is necessary for doing the O of I/O though, otherwise there won't be any Hello, World!s on the screen.
RUST
use crate::{cpu, sys};

pub fn write(cpu: &mut cpu::Cpu, fd: u32, buf: u32, len: u32) -> i32 {
    // fast path for zero length buffer
    if len == 0 {
        return 0;
    }

    // Option::None returned from translate indicates invalid memory access
    let Some(buf_ptr) = cpu.mem.translate(buf) else {
        // so we return 'Bad Address'
        return -(sys::Errno::EFAULT as i32);
    };

    let ret: i64;
    unsafe {
        core::arch::asm!(
            "syscall",
            // syscall number
            in("rax") 1_u64,
            in("rdi") fd as u64,
            in("rsi") buf_ptr as u64,
            in("rdx") len as u64,
            lateout("rax") ret,
            // we clobber rcx
            out("rcx") _,
            // and r11
            out("r11") _,
            // we don't modify the stack
            options(nostack),
        );
    }

    ret.try_into().unwrap_or(i32::MAX)
}
Adding it to translation::syscall_forward with its arguments according to the calling convention we established before:
The simplest sandboxing mode is to deny; the more complex one is to allow some syscall interactions while others are denied. The latter requires checking arguments to syscalls, not just the syscall kind.
Let's start with the easier syscall handler: deny. Deny simply returns ENOSYS to all invoked syscalls:
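A sketch of what that handler can look like; an ENOSYS variant on the Errno enum is my assumption, mirroring the EFAULT variant used in the write helper above.
RUST
fn syscall_deny(cpu: &mut Cpu) {
    // every syscall, regardless of its number, answers with -ENOSYS in r0,
    // the same negative-errno convention the kernel uses
    cpu.r[0] = (-(sys::Errno::ENOSYS as i32)) as u32;
}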
Thus executing the hello world and enabling syscall logs results in neither sys_write nor sys_exit going through and ENOSYS being returned for both in r0:
For instance we only allow writing to stdin, stdout and stderr, no other file descriptors. One could also add pointer range checks, buffer length checks and other hardening measures here; a minimal version of such a check is sketched below. Emulating the hello world example with this mode (which is the default mode):
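A minimal sketch of such a sandbox check (EBADF/ENOSYS as negative errnos are my assumption, analogous to the EFAULT handling in write):
RUST
fn syscall_sandbox(cpu: &mut Cpu) {
    let (nr, r0, r1, r2) = (cpu.r[7], cpu.r[0], cpu.r[1], cpu.r[2]);
    match nr {
        // sys_exit is always safe to honour, the exit code lives in r0
        1 => cpu.status = Some(r0 as i32),
        // sys_write only for stdin, stdout and stderr
        4 if r0 <= 2 => {
            let ret = write(cpu, r0, r1, r2);
            cpu.r[0] = ret as u32;
        }
        // writes to any other fd are rejected
        4 => cpu.r[0] = (-(sys::Errno::EBADF as i32)) as u32,
        // everything else is denied outright
        _ => cpu.r[0] = (-(sys::Errno::ENOSYS as i32)) as u32,
    }
}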
So there you have it, emulating armv7 in six steps:
parsing and validating a 32-bit armv7 Elf binary
mapping segments into host address space
decoding a non-trivial subset of armv7 instructions
handling program counter relative literal loads
translating memory interactions from guest to host
forwarding armv7 Linux syscalls into their x86-64 System V counterparts
Diving into the Elf and armv7 spec without any previous relevant experience,
except the asm module I had in uni, was a bit overwhelming at first. Armv7
decoding was by far the most annoying part of the project and I still don’t
like the bizarre argument ordering for x86-64 syscalls.
The whole project is about 1284 lines of Rust, has zero dependencies[1] and is, as far as I know, working correctly[2].
Microbenchmark Performance
It executes a real armv7 hello world binary in ~0.015ms of guest execution-only time, excluding process startup and parsing. The e2e execution with all stages I outlined before takes about 2ms.
Comparing the whole pipeline (parsing elf, segment mapping, cpu setup, etc) to qemu, we arrive at the following micro benchmark results. To be fair, qemu does a whole lot more than stinkarm: it has a JIT, a full linux-user runtime, a dynamic loader, etc.
TEXT
$ hyperfine "./target/release/stinkarm examples/helloWorld.elf" -N --warmup 10
Benchmark 1: ./target/release/stinkarm examples/helloWorld.elf
  Time (mean ± σ):       1.9 ms ±   0.3 ms    [User: 0.2 ms, System: 1.4 ms]
  Range (min … max):     1.6 ms …   3.4 ms    1641 runs

$ hyperfine "qemu-arm ./examples/helloWorld.elf" -N --warmup 10
Benchmark 1: qemu-arm ./examples/helloWorld.elf
  Time (mean ± σ):      12.3 ms ±   1.5 ms    [User: 3.8 ms, System: 8.0 ms]
  Range (min … max):     8.8 ms …  19.8 ms    226 runs
EXIF orientation info in PNGs isn't used for image-orientation
The JPEG and PNG are rotated differently, even though they both have the same EXIF info (Orientation: Rotate 90 CW), and are both set to image-orientation: from-image;
Expected results:
They should display the same.
heycam: Will this be covered by any of your follow-up work related to bug 1607667?
Status: UNCONFIRMED → NEW
Component: Untriaged → Layout: Images, Video, and HTML Frames
Ever confirmed: true
Flags: needinfo?(cam)
Priority: -- → P3
Product: Firefox → Core
Huh, I didn't even know that PNG supported orientation data. I found
https://ftp-osl.osuosl.org/pub/libpng/documents/pngext-1.5.0.html#C.eXIf
which defines the eXIf table. The patches I'm working on don't add support for this, but it would not be too difficult to do so, at least if the table appears earlier than the image data. (I don't think our current image loading flow would handle the image size changing as a result of the orientation data later on.)
Because this bug's Severity has not been changed from the default since it was filed, and its Priority is P3 (Backlog), indicating it has been triaged, the bug's Severity is being updated to S3 (normal).
What is the expected waiting time for the issue to be resolved?
Should be fixed by bug 1682759. If that is incorrect please re-open.
I’ve always wanted to try my hand making an RPG but always assumed it would take too much time.
However, I didn’t want to give up before trying so I started to think of ways I could still make something compelling in 1-2 months.
To help me come up with something, I decided to look into older RPGs as I had a hunch they could teach me a lot about scoping because back in the 80s, games were small because of technical limitations. A game that particularly caught my attention was the first Dragon Quest.
This game was very important because it popularized the RPG genre in Japan by simplifying the formula, therefore making it more accessible. It can be considered the father of the JRPG sub-genre.
What caught my attention was the simplicity of the game. There were no party members, the battle system was turn based and simple and you were free to just explore around.
I was particularly surprised by how the game could give a sense of exploration while the map was technically very small. This was achieved by making the player move on an overworld map with a different scale proportion compared to when navigating towns and points of interest. In the overworld section, the player appeared bigger while the geography was smaller, allowing players to cover large amounts of territory relatively quickly.
The advantage of this was that you could switch between biomes quickly without it feeling jarring. You still had the impression of traversing a large world despite being small in reality. This idea of using an overworld map was common in older games but somehow died off as devs had less and less technical limitations and more budget to work with.
Seeing its potential, I decided that I would include one in my project even if I didn’t have a clear vision at this point.
Playing Dragon Quest 1 also reminded me of how annoying random battle encounters were. You would take a few steps and get assaulted by an enemy of some kind. At the same time, this mechanic was needed, because grinding was necessary to be able to face stronger enemies in further zones of the map.
My solution : What if instead of getting assaulted, you were the one doing the assault? As you would move on the map, encounter opportunities signified by a star would appear. Only if you went there and overlapped with one would a battle start. This gave the player agency to determine if they needed to battle or not. This idea seemed so appealing that I knew I needed to include it in my project.
While my vision on what I wanted to make started to become clearer, I also started to get a sense of what I didn’t want to make. The idea of including a traditional turn based battle system was unappealing. That wasn’t because I hated this type of gameplay, but ever since I made a 6 hour tutorial on how to build one, I realized how complicated pulling one off is. Sure, you can get something basic quickly, but to actually make it engaging and well balanced is another story. A story that would exceed 1-2 months to deal with. I needed to opt for something more real-time and action based if I wanted to complete this project in a reasonable time frame.
Back in 2015, an RPG that would prove to be very influential released and “broke the internet”. It was impossible to avoid seeing the mention of Undertale online. It was absolutely everywhere.
The game received praise for a lot of different aspects, but what held my attention was its combat system.
It was the first game I was aware of that included a section of combat dedicated to avoiding projectiles (otherwise known as bullet hell) in a turn based battle system. This made the combat more action oriented, which translated into something very engaging and fun.
This type of gameplay left a strong impression in my mind and I thought that making something similar would be a better fit for my project as it was simpler to implement.
While learning about Dragon Quest 1, I couldn’t help but be reminded of The Legend of Zelda: Breath of The Wild, released in 2017.
Similarly to Dragon Quest, a lot of freedom was granted to the player in how and when they tackled the game’s objectives.
For example, in Breath of The Wild, you could go straight to the final boss after the tutorial section.
I wanted to take this aspect of the game and incorporate it into my project. I felt it would be better to have one final boss and every other enemy encounter would be optional preparation you could engage with to get stronger. This felt like something that was achievable in a smaller scope compared to crafting a linear story the player would progress through.
Another game that inspired me was Elden Ring, an open world action RPG similar to Breath of The Wild in its world structure but with the DNA of Dark Souls, a trilogy of games made previously by the same developers.
What stuck with me regarding Elden Ring, for the purpose of my project, was the unique way it handled experience points. It was the first RPG I played that used them as a currency you could spend to level up the different attributes making up your character or to buy items.
Taking inspiration from it, I decided that my project would feature individually upgradable stats and that experience points would act as a currency. The idea was that the player would gain an amount of the game’s currency after battle and use that to upgrade different attributes. Like in Elden Ring, if you died in combat you would lose all currency you were currently holding.
I needed a system like this for my project to count as an RPG, since by definition an RPG is stats driven. A system like this would also allow the player to manage difficulty more easily, and it would act as the progression system of my game.
When I started getting into game development, I quickly came across Pico-8.
Pico-8, for those unaware, is a fantasy console with a set of limitations. It’s not a console you buy physically but rather a software program that runs on your computer (or in a web browser) that mimics an older console that never existed.
To put it simply, it was like running an emulator for a console that could’ve existed but never actually did. Hence the fantasy aspect of it.
Pico-8 includes everything you need to make games. It has a built-in code editor, sprite editor, map editor, sound editor, etc…
It uses the approachable Lua programming language which is similar to Python.
Since Pico-8 is limited, it’s easier to actually finish making a game rather than being caught in scope creep.
One game made in Pico-8 particularly caught my interest.
In this game you play as a little character on a grid. Your goal is to fight just one boss. To attack this boss, you need to step on a glowing tile while avoiding taking damage by incoming obstacles and projectiles thrown at you.
(Epilepsy Warning regarding the game footage below due to the usage of flashing bright colors.)
This game convinced me to ditch the turned based aspect I envisioned for my project entirely. Rather than having bullet hell sections within a turn based system like in Undertale the whole battle would instead be bullet hell. I could make the player attack without needing to have turns by making attack zones spawn within the battlefield. The player would then need to collide with them for an attack to register.
I was now convinced that I had something to stand on. It was now time to see if it would work in practice but I needed to clearly formulate my vision first.
The game I had in mind would take place under two main scenes. The first, was the overworld in which the player moved around and could engage in battle encounters, lore encounters, heal or upgrade their stats.
The second, the battle scene, would be where battles took place. The player would be represented by a cursor and was expected to move around, dodging incoming attacks while seeking to collide with attack zones to deal damage to the enemy.
The purpose of the game was to defeat a single final boss named king Donovan who was a tyrant ruling over the land of Hydralia where the game took place. At any point, the player could enter the castle to face the final boss immediately. However, most likely, the boss would be too strong.
To prepare, the player would roam around the world engaging in various battle encounters. Depending on where the encounter was triggered, a different enemy would show up that fitted the theme of the location they were in. The enemy’s difficulty and experience reward if beaten would drastically vary depending on the location.
Finally, the player could level up and heal in a village.
I was now ready to start programming the game and figuring out the details as I went along. For this purpose, I decided to write the game using the JavaScript programming language and the KAPLAY game library.
I chose these tools because they were what I was most familiar with.
For JavaScript, I knew the language before getting into game dev as I previously worked as a software developer for a company whose product was a complex web application. While most of the code was in TypeScript, knowing JavaScript was pretty much necessary to work in TypeScript since the language is a superset of JavaScript.
As an aside, despite its flaws as a language, JavaScript is an extremely empowering language to know as a solo dev. You can make games, websites, web apps, browser extensions, desktop apps, mobile apps, server side apps, etc… with this one language. It’s like the English of programming languages. Not perfect, but highly useful in today’s world.
I’ll just caveat that using JavaScript makes sense for 2D games and light 3D games. For anything more advanced, you’d be better off using Unreal, Unity or Godot.
As for the KAPLAY game library, it allows me to make games quickly because it provides a lot of functionality out of the box. It’s also very easy to learn.
While it’s relatively easy to package a JavaScript game as an app that can be put on Steam, what about consoles? Well it’s not straightforward at all but at the same time, I don’t really care about consoles unless my game is a smash hit on Steam. If my game does become very successful than it would make sense businesswise to pay a porting company to remake the game for consoles, getting devkits, dealing with optimizations and all the complexity that comes with publishing a game on these platforms.
Anyway, to start off the game’s development, I decided to implement the battle scene first with all of its related mechanics as I needed to make sure the battle system I had in mind was fun to play in practice.
To also save time later down the line, I figured that I would make the game have a square aspect ratio. This would allow me to save time during asset creation, especially for the map as I wanted the whole map to be visible at once as I wouldn’t use a scrolling camera for this game.
After a while, I had a first “bare bones” version of the battle system. You could move around to avoid projectiles and attack the enemy by colliding with red attack zones.
Initially, I wanted the player to have many stats they could upgrade. They could upgrade their health (HP), speed, attack power and FP which stood for focus points.
However, I had to axe the FP stat as I originally wanted to use it as a way to introduce a cost to using items in battle. However, I gave up on the idea of making items entirely as they would require too much time to create and properly balance.
I also had the idea of adding a stamina mechanic similar to the one you see in Elden Ring. Moving around would consume stamina that could only replenish when you stopped moving. I initially thought that this would result in fun gameplay as you could upgrade your stamina over time, but it ended up being very tedious and useless. Therefore, I also ended up removing it.
Now that the battle system was mostly done, I decided to work on the world scene where the player could move around.
I first implemented battle encounters that would spawn randomly on the screen as red squares. I then created the upgrade system allowing the player to upgrade 3 stats: their health (HP), attack power and speed.
In this version of the game, the player could restore their health near where they could upgrade their stats.
While working on the world scene was the focus, I also made a tweak to the battle scene. Instead of displaying the current amount of health left as a fraction, I decided a health bar would be necessary because when engaged in a fast paced battle, the player does not have time to interpret fractions to determine the state of their health. A health bar would convey the info faster in this context.
However, I quickly noticed an issue with how health was restored in my game. Since the world was constrained to a single screen, it made going back to the center to get healed after every fight the optimal way to play. This resulted in feeling obligated to go back to the center rather than freely roaming around.
To fix this issue, I made it so the player needed to pay to heal using the same currency for leveling up. Now you needed to carefully balance between healing or saving your experience currency for an upgrade by continuing to explore/engage in battle. All of this while keeping in mind that you could lose all of your currency if defeated in battle. It’s important to note that you could also heal partially which provided flexibility in how the player managed the currency resource.
Now that I was satisfied with the “bare bones” state of the game, I needed to make nice looking graphics.
To achieve this, I decided to go with a pixel art style. I could spend a lot of time explaining how to make good pixel art, but I already did so previously. I recommend checking my post on the topic.
I started by putting a lot of effort into drawing the overworld map as the player would spend a lot of time in it. It was at this stage that I decided to make villages the places where you would heal or level up.
To make this clearer, I added icons on top of each village to make it obvious what each was for.
Now that I was satisfied with how the map turned out, I started designing and implementing the player character.
For each distinct zone of the map, I added a collider so that battle encounters could determine which enemy and what background to display during battle. It was at this point that I made encounters appear as flashing stars on the map.
Since my work on the overworld was done, I now needed to produce a variety of battle backgrounds to really immerse the player in the world. I sat down and locked in. These were by far the most time-intensive art assets to make for this project, but I’m happy with the results.
After finishing making all backgrounds, I implemented the logic to show them in battle according to the zone where the encounter occurred.
The next assets to make were enemies. This was another time-intensive task, but I’m happy with how they turned out. The character at the bottom left is King Donovan, the main antagonist of the game.
While developing the game, I noticed that it took too much time to go from one end of the battle zone to the other. This made the gameplay tedious so I decided to make the battle zone smaller.
At this point, I also changed the player cursor to be diamond shaped and red rather than a circle and white. I also decided to use the same flashing star sprite used for encounters on the map but this time, for attack zones. I also decided to change the font used in the game to something better.
At this point, the projectiles thrown towards the player didn’t move in a cohesive pattern the player could learn over time.
It was also absolutely necessary to create a system in which the attack patterns of the enemy would be progressively shown to the player.
This is why I stopped everything to work on the enemy’s attack pattern. I also, by the same token, started to add effects to make the battle more engaging and sprites for the projectiles.
While the game was coming along nicely, I started to experience performance issues. I go into more detail in a previous post if you’re interested.
To add another layer of depth to my game, I decided that the reward you got from a specific enemy encounter would not only depend on which enemy you were fighting but also how much damage you took.
For example, if a basic enemy in the Hydralia field would normally give you a reward of 100 after battle, you would actually get less unless you did not take damage during that battle.
This was to encourage careful dodging of projectiles and to reward players who learned the enemy pattern thoroughly. This would also add replayability as there was now a purpose to fight the same enemy over and over again.
The formula I used to determine the final reward granted can be described as follows:
At this point, it wasn’t well communicated to the player how much of the base reward they were granted after battle. That’s why I added the “Excellence” indication.
When beating an enemy, if done without taking damage, instead of having the usual “Foe Vanquished” message appearing on the screen, you would get a “Foe Vanquished With Excellence” message in bright yellow.
In addition to being able to enter into battle encounters, I wanted the player to have lore/tips encounters. Using the same system, I would randomly spawn a flashing star of a blueish-white color. If the player overlapped with it, a dialogue box would appear telling them some lore/tips related to the location they were in. Sometimes, these encounters would result in a chest containing exp currency reward. This was to give a reason for the player to pursue these encounters.
This is still a work in progress, as I haven’t decided what kind of lore to express through these.
One thing I forgot to show earlier was how I revamped the menu to use the new font.
That’s all I have to share for now. What do you think?
I also think it’s a good time to ask for advice regarding the game’s title. Since the game takes place in a land named Hydralia, I thought about using the same name for the game. However, since your mission is to defeat a tyrant king named Donovan, maybe a title like Hydralia: Donovan’s Demise would be a better fit.
If you have any ideas regarding naming, feel free to leave a comment!
Anyway, if you want to keep up with the game’s development or are more generally interested in game development, I recommend subscribing to not miss out on future posts.
No Fossil Fuel Phaseout, No Deal! At COP30, Vanuatu Climate Minister Joins 30+ Dissenting Nations
Democracy Now!
www.democracynow.org
2025-11-21 13:14:21
As negotiations draw close to a conclusion at the COP30 U.N. climate summit, nations are still sharply divided over the future of fossil fuels. Delegates representing dozens of countries have rejected a draft agreement that does not include a roadmap to transition away from oil, coal and gas. Ralph ...
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN: Yes, this is Democracy Now!, broadcasting at the U.N. climate summit, COP30, here in the Brazilian city of Belém, the gateway to the Amazon. I’m Amy Goodman.
NERMEEN SHAIKH:
And I’m Nermeen Shaikh. Welcome to our listeners and viewers across the country and around the world.
As negotiations draw to a close, nations are still sharply divided over the future of fossil fuels. Delegates representing dozens of countries have rejected a draft climate agreement that does not include a roadmap to transition away from fossil fuels. Over 30 nations from Africa, Asia, Latin America, the Pacific, the United Kingdom, as well as European Union member states, have co-signed a letter opposing Brazil’s draft proposals. The signatories include Mexico, Colombia, Guatemala, Sweden, France, Palau and Vanuatu.
AMY GOODMAN:
While petrostates, including Saudi Arabia and Russia, as well as some of the world’s largest fossil fuel consumers, China and India, reportedly rejected the proposal to transition away from fossil fuels, the U.S. did not even send an official delegation here to COP30, with the Trump administration boycotting the climate talks.
On Thursday, U.N. Secretary-General António Guterres took questions from the press. This was the BBC.
JUSTIN ROWLATT:
Secretary-General, what message do you want this conference to send to Donald Trump?
SECRETARY-GENERAL ANTÓNIO GUTERRES:
We are waiting for you.
JUSTIN ROWLATT:
Do you see a possibility of him engaging in this process in a positive way?
SECRETARY-GENERAL ANTÓNIO GUTERRES:
Hope is the last thing that dies.
AMY GOODMAN:
After the news conference, I attempted to follow up with U.N. Secretary-General António Guterres.
AMY GOODMAN: Secretary-General, what message do you think Trump’s not sending a high-level delegation — I’m Amy Goodman from Democracy Now! … Can you respond to the huge fossil fuel delegation that’s here, over 1,600 lobbyists? Should the U.S. ban the fossil fuel lobbyists?
AMY GOODMAN:
Soon after U.N. secretary-general’s news conference on Thursday, COP30 negotiations were abruptly disrupted when a large fire broke out here at the conference site, shutting down the venue for hours into the night. About 30,000 people were evacuated, 13 people treated for smoke inhalation. The fire is a metaphor for the state of the negotiations and the planet, as the U.N. warns nations have made very little progress in the fight against climate change, putting the world on track toward dangerous global warming as greenhouse gas emissions remain too high.
NERMEEN SHAIKH:
A recent annual emissions gap report suggested countries will not be able to prevent global warming from surpassing 1.5 degrees Celsius, which is the main goal of the Paris Agreement that was brokered a decade ago. Experts have said warming is likely to reach between 2.3 and 2.5 degrees Celsius, with the possibility of even higher temperatures if countries don’t fulfill their current climate pledges.
AMY GOODMAN:
For more, we’re joined by Climate Minister Ralph Regenvanu from the Pacific island nation of Vanuatu, one of the dissenting countries.
Minister, we welcome you back to Democracy Now! We spoke to you when you were at The Hague just a few months ago. But if you can start off by talking about what just happened? You just came over to Democracy Now! after participating in a press conference. There is going to be the final draft coming out of this U.N. climate summit, but then there’s also the Belém Declaration. Explain both.
RALPH REGENVANU:
So, earlier this morning, we were informed by the presidency that there are about 80 countries who have put a red line on any mention of fossil fuels in the outcome from this meeting, this UNFCCC process, this COP. Any mention is a red line for them.
But I just came from a press conference where over 80 countries announced they will be meeting in Colombia next year, in April, for the first-ever conference of state parties on developing a roadmap to transition away from fossil fuels. So, this is a voluntary initiative outside of the UNFCCC process, which Colombia’s minister of environment announced. And as I said, we were joined by over 80 countries.
And this is something we want to do in response to the lack of a roadmap coming out of Belém. We were expecting, based on President Lula’s statement at the beginning of the COP, that there would be a roadmap. We were expecting that roadmap to come out, but it seems like it’s not going to. But at least we have this other process that is now underway.
AMY GOODMAN:
What happened?
RALPH REGENVANU:
We had over 80 states, we were informed by the presidency, who basically said, “We will not entertain any mention of fossil fuels in the outcome statement from the Belém COP.” And I find that astounding, considering that we all know that fossil fuels contribute to 86% of emissions that are causing climate harm, that is endangering the future of our islands and all countries in the world. It’s the future of humanity that’s being endangered by fossil fuel production. We had the ICJ advisory opinion earlier this year clearly stating that all countries have a legal obligation to wind back fossil fuel production and take steps within their territories to transition away. We had a — the ICJ also said, very clearly, 1.5 degrees Celsius is the legal benchmark. And here at COP, we’re seeing countries questioning and wanting to remove reference to 1.5.

So, it’s really astounding, the fact that we have scientific clarity, the IPCC has clearly given us all the guidelines we need, now we have legal clarity from the world’s highest court, and yet we don’t see the political action coming from states who are members of the United Nations and members of the international order. And the fact that they are refusing to accept the best scientific evidence and legal obligations as defined by the world’s highest court is quite astounding to countries that want to see real action.
NERMEEN SHAIKH:
Well, Minister Regenvanu, if you could explain, what were the countries that were most opposed to coming up with this roadmap to transition away from fossil fuels?
RALPH REGENVANU:
Well, clearly, there’s the Arab states, led by Saudi Arabia. Saudi Arabia is the most vocal in blocking and obstructing any progress. We also have the — what they call the LMDC group, which is made up of high emitters, as well, from developing countries. We saw blockage also from the EU on adaptation finance, which is one of the big calls from the developing countries. We need more finance, as outlined in the Paris Agreement, for developing countries to be able to meet targets they set for themselves. So, but in terms of a fossil fuel roadmap, the big blockers are LMDC, Arab group.
NERMEEN SHAIKH:
And, Minister, if you could say more about climate finance? You’ve said in the past that climate finance is not charity. It’s a legal and moral obligation grounded in responsibility and capacity, as affirmed by Article 9.1 of the Paris Agreement.
RALPH REGENVANU:
Yes, I mean, the world has agreed in Paris that there is such a thing as climate finance from the developed, high-emitting countries to be provided to the developing, low-emitting countries to help them transition away. And what we’re talking about is a just and orderly transition away from fossil fuels. For countries that have fossil fuel already, production, that they can move away from that. For countries that have not entered that pathway, they can also move out of that. So, it’s for everybody to participate.
But certain countries don’t have the finances we need, like Vanuatu. We have a — we are a very small country, just graduated from least developed country status. Our existing budgets are being halved by having to deal with climate crisis, responding to extreme weather events. We need money to help us move.
AMY GOODMAN:
Explain. Tell us about how climate change affects. Vanuatu, the low-lying Pacific island nations, the idea that some of these countries will disappear.
RALPH REGENVANU:
Yes, we have countries like Tuvalu and Kiribati, for example. They are coral atoll countries. Those countries, their land does not go higher than two meters above sea level. So, already they’re losing. They have lost entire islands. And according to the scientific consensus, they will lose their entire countries in the next hundred years. So these are states that will be gone from the map.
Vanuatu is fortunate in that we are higher islands, but we also are losing most of our low-lying areas, where a lot of agriculture is, a lot of people live. So, for us, we are independent, politically independent states. We have decided on how we want to develop our countries. But we cannot. Our futures are being curtailed by the conduct of other large states, who don’t seem to care whether we have human rights equivalent to them, and basically, through their actions, are curtailing our futures, and especially for our younger generations.
NERMEEN SHAIKH:
If you could go back — let’s go back to the question of climate finance, which is what is essential to prevent what you’re saying. What did this draft call for? It did say that we should triple, that states should triple the financing available to help countries adapt to climate change, so — change by 2030 from 2025 levels. So, in other words, within five years, triple the financing.
RALPH REGENVANU:
Yes, that is what we’ve been asking for. I don’t know. I think it’s a red line. I don’t think it’ll get in the final text. But the point I want to make about climate finance is there are so many billions of dollars going into the fossil fuel industry, which is the cause of the climate crisis. If we get that money out of what is causing the climate crisis, we do have the funding available to help us with this transition, this tripling of adaptation finance we’re talking about. It’s very clear to us: You need to transition away from fossil fuels, is the way to get that finance that we are needing.
NERMEEN SHAIKH:
And where would the finance come from? What countries?
RALPH REGENVANU:
From the fossil fuel industry, from the subsidies that are provided. We just see governments giving huge handouts to fossil fuel companies to continue to extend the production pipeline. But in reality, we’re seeing the entire world starting to move away. We are seeing the green energy revolution already in place. We are seeing many countries already getting more than half their energy needs from renewable energy. So, this is happening. It’s just obstinance and vested interests and profit that is keeping the fossil fuel pipeline alive.
AMY GOODMAN:
Are these COPs worth it? I mean, you have, yes, the largest Indigenous population accredited here, moving in on a thousand, but you have well over 1,600 fossil fuel lobbyists. What kind of effect does that have? And the fact that just mentioning the F-word, what Kumi Naidoo called, fossil fuels, has been completely nixed in this. Now, Brazilian President Lula says he is going to South Africa, to the G20, with this declaration calling for a transition away. This is a large oil-producing nation, where we are right now, in Brazil. But are these gatherings worth it?
RALPH REGENVANU:
The UNFCCC process is a consensus-based process, and that is the problem with it. The problem is that we have a large number of countries who already know that we have to transition away from fossil fuels, already know that we need that language. We need to respect the scientific consensus of the IPCC. We need to stick to the 1.5-degree goal. But we have a certain number of countries who are vested in the fossil fuel pipeline — I would say not their populations, but certain members of the political classes. And so, we’re seeing these people blocking progress for the entire humanity. And it’s a result of this process that is flawed. So we need to fix the process. And that is something we are looking at, as well.
NERMEEN SHAIKH:
And could you talk about the fact that trade was also mentioned in the draft agreement, saying that in the next three COP climate summits, there will be a discussion of trade? What is that? Why is that significant?
RALPH REGENVANU:
That’s significant because it’s one of the actual mechanisms that countries can hold against other countries to make them take climate action. So, it’s one of the few kind of binding measures we can use. If, for example, the EU says, “We won’t accept products from countries that have a certain level of emissions,” it is actually something that has the effect of a stick, rather than just voluntary compliance. And so, that’s why it’s so important, because we are lacking these sticks.
AMY GOODMAN:
Finally, we have just 30 seconds, but we last spoke to you at The Hague. If you can talk about the International Court of Justice and how climate intersects with human rights, and the finding, the transitional, if you will, finding, that took place this summer?
RALPH REGENVANU:
This summer, the International Court of Justice handed down their advisory opinion, which basically said states have legal obligations to protect the climate system, which means they have legal obligations to transition away from fossil fuels. States have to control the activities of private actors within their jurisdiction that are contributing to greenhouse gas emissions, which means the fossil fuel companies, and that these obligations apply outside of the UNFCCC process. It’s a creation of the entire corpus of international law, including the very foundations of the United Nations. So, international cooperation, states allowing other states to continue to thrive and their populations to have the rights that are guaranteed under the U.N. human rights conventions, requires states to take legal action on reducing emissions.
AMY GOODMAN:
We want to thank you so much for being with us. Ralph Regenvanu is Vanuatu’s climate minister, one of the islands in the Pacific Ocean, one of the dissenting climate ministers here.
Up next, we turn to the Indigenous leader, member of the Munduruku community of the Amazon rainforest, a leader of the protest here that shut down the COP last Friday, Alessandra Korap Munduruku. Back in 30 seconds.
The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
Design patterns are useful tools for us as developers, because they give us the terminology to discuss recurring ideas and patterns that show up in our code. Unfortunately, they’re also often explained in theoretical, object-oriented terms, and as a result, they can be difficult to apply in our daily programming practice unless we’ve seen them before. I want to try and unpack the command pattern, how we ended up using it in a recent project of ours, and how it might be implemented in JavaScript.
When we build a complex application, we can describe it as a series of different states. We start in some initial state, then some event occurs (say, the user clicks on a button), and we transform the application data to create a new state. That might look something like this:
A diagram showing multiple nodes labelled "state" in a line. Between each pair of states, there is an arrow that travels through a black box (labelled "black box") indicating how a transition between states occurs.
Here, the steps between the different states is described as some amount of code. That code is not necessarily easy to observe — it might make all sorts of changes to the state, and it might have side-effects as well.
Now let’s imagine we want to introduce undo/redo to our application. When the user presses “undo”, we need to step back in time through our application — instead of moving to the next state, we want to move to the previous one. Similarly, we have a “redo” action that will move us forwards a step on our line of states.
The same diagram as before, but with arrows indicating how "undo" operations take us to a previous state, and "redo" operations take us to the next state.
The problem we now need to solve is this: how do we move back to the previous state?
One easy answer is to store all the states we visit along the way. If you’re familiar with the Redux ecosystem, you might have used tools like redux-undo that handle this automatically. You write the code in the black boxes, and the library automatically maintains the different states and switches between them.
Another, similar option might be to instead store diffs of the different states. Whenever we create a new state, we compare it to the old state and store a record of all the changes that have been made. This can be more memory efficient (the diffs are likely much smaller than a copy of the whole state would be), but calculating the diff in the first place can be less efficient.
These are both very good options that work in a lot of cases. If you’re managing your application state somewhere centrally, typically using the Flux pattern, then it’s usually very easy to use one of these approaches and get undo/redo with almost no extra work.
There are two main reasons why the above approaches might not work out for you.
The first reason is that both approaches assume that your state is managed in a single, central place. There are some architectures where that is very easy to achieve, but as your state gets larger and more complicated, it can often be easier to break the state into smaller pieces and work with those pieces independently. This allows more flexibility, but it also means that you no longer have a single source of truth.
The second reason is that your state transitions might affect things other than the state – or in other words, have side-effects. At first, it might feel like the obvious solution is to avoid the side-effects in the first place, but often the side-effects are the things we want. Consider a classic counter with a button to increment the internal state. When I click the button and change the state, I also want to change the UI to reflect the new state of the counter. This is one of the key side-effects that we need to deal with.
In a recent project that inspired this post, our application was large, and therefore we had split it up into multiple controllers. Each controller worked independently (and so could be tested/understood independently), and managed its own state. At the same time, the application used SolidJS to manage the UI. In SolidJS, as the internal state updates, side-effects are run which directly update the DOM as needed. This produces very efficient DOM-updates (the famous “fine-grained reactivity”), but means that we can’t treat our state purely as data any more — we need to understand how it’s changing as well.
In the end, we opted for the command pattern. Let’s explore what that looks like.
In our original example, we treated the code that moved us between different states as a black box. As developers, we could look into it and understand how it went, but we didn’t have the tools to introspect it, and undo or replay parts of it.
In the command pattern, we instead describe each transition via a combination of commands and data. Commands are the different actions that we can do to our state — for a todo app, we might have commands like “add todo”, “delete todo”, “mark todo as done”, and so on. The data is the specific arguments that we’ll pass to the command. The result looks something like this:
A series of nodes labelled "state" are laid out left to right. Between each pair of nodes, there is an arrow connecting the nodes that travels through a box split into two sections. The sections are labelled "command" and "data", indicating how each transition can be defined by a particular command and an associated piece of data.
If we go back to our todo app, when we click one of the “done” checkboxes in our UI, we would call the “mark todo as done” command with some data (probably the ID of the todo we’re interested in), and this function would update the internal data store and fire off the necessary side effects to produce the next state.
We can’t quite undo anything yet, though. For that, we need the second feature of commands, which is that they know how to undo themselves. The “add todo” command has a function which adds a todo to the state and updates the UI, but it
also
has a function which removes that same todo as well. So each command knows how to do and undo its action.
A series of nodes labelled "state" are laid out left to right. Between each pair of nodes, pointing right to left, there is an arrow indicating the transition between the different states. The arrow passes through a box split into two parts labelled "command prime" and "data", indicating that it is possible to transition through the states in reverse by applying the command's inverse operation.
With this, we can build our undo/redo system. Every time we run a command, we also record:
Which command was run
What data it was run with
When we want to undo some action, we call the command’s undo function, and pass it the same data it had before. It will revert all the changes it made before, and leave us exactly in the state we were in before.
If we go back to our reasons for a different approach, we can see that the command pattern neatly solves both of them:
Each component of the code can define its own commands (in the same way it might define its own methods or functions), meaning we can still treat each component in isolation.
The command is a function, which means it can update the state and call any side effects as necessary.
Let’s look at how we might express the logic of a todo app in command form.
First, let’s define what our command actually is. In other languages, we might use classes, but in TypeScript we can get away with a relatively simple object:
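As a minimal sketch (the exact shape is an assumption, inferred from how the commands are used below), the type might look like this:

// Minimal sketch of a command: one function to apply the change
// and one to revert it. The generic parameter is the data the
// command operates on.
type Command<T> = {
  do: (data: T) => void;
  undo: (data: T) => void;
};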
We’re also going to need our history. For that, we need a list of actions that can be undone, and a list of actions that can be redone after that. We’ll also provide a function for pushing a new entry onto the lists, because there’s a bit of logic there that we don’t want to have to repeat everywhere:
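A hedged sketch of that history, assuming each entry pairs a command with its data (which is what the undo/redo functions below expect), and that the small bit of extra logic is clearing the redo stack whenever a new command is pushed:

type HistoryEntry = { command: Command<any>; data: any };

let undoableCommands: HistoryEntry[] = [];
let redoableCommands: HistoryEntry[] = [];

// Run a command and record it so it can be undone later.
function pushCommand<T>(command: Command<T>, data: T) {
  command.do(data);
  undoableCommands.push({ command, data });
  // Assumption: starting a new action invalidates anything previously undone.
  redoableCommands = [];
}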
Now we can define the commands specific to our todo system. Note that this won’t be all the possible commands; feel free to think about what other commands might be necessary yourself.
let todoStore = []; // super simple store, definitely production-ready
                    // (declared with `let` so undo can replace the array)

// here, the data is just the string of the todo
// (we assume that all todos are unique for simplicity)
const createTodo: Command<string> = {
  do: (data) => todoStore.push({ todo: data, done: false }),
  undo: (data) => todoStore = todoStore.filter(t => t.todo !== data),
}
// here, we store the old (`prev`) and the new (`next`) states
// of the `.done` attribute, so that we can step forwards and
// backwards through the history
const setTodoState: Command<{ todo: string, prev: boolean, next: boolean }> = {
  do: (data) => {
    const todo = todoStore.find(t => t.todo === data.todo);
    todo.done = data.next;
  },
  undo: (data) => {
    const todo = todoStore.find(t => t.todo === data.todo);
    todo.done = data.prev;
  },
}
In practice, I’d probably wrap those commands in functions that call the pushCommand function internally, just to make things a little bit nicer to use, but we can skip that for now. Finally, we need our undo and redo functions. Now we’ve got our commands, these are really easy to implement: just call the relevant functions on the commands with the attached data.
function undo() {
  const cmd = undoableCommands.pop();
  if (!cmd) return false; // nothing to undo
  cmd.command.undo(cmd.data);
  redoableCommands.push(cmd);
  return true; // successfully undid an action
}

function redo() {
  const cmd = redoableCommands.pop();
  if (!cmd) return false; // nothing to redo
  cmd.command.do(cmd.data);
  undoableCommands.push(cmd);
  return true; // successfully redid an action
}
The undo system we’ve implemented here is very bare-bones, to try and explore the basic ideas around commands, but there’s plenty more that we could add here.
One thing that a lot of applications very quickly need is the ability to batch commands together, so that a single “undo” operation will undo a number of commands at once. This is important if each command should only affect its own slice of the state, but a particular operation affects multiple slices.
Another consideration is the ability to update commands. Consider an operation to resize an image. As I drag my cursor around, the UI should update smoothly, but when I stop resizing and press undo, I want to undo the whole resize operation, not just one part of it. One way of doing that is by adding a kind of upsertCommand function next to the pushCommand one, which creates a new entry in the history if there wasn’t one before, or else updates the previous entry with the new data.
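As a rough sketch of that idea (the details here are assumptions rather than a specific implementation), an upsertCommand might look like this:

// Hypothetical upsertCommand: if the last history entry came from the same
// command, replace its data instead of pushing a new entry, so a whole
// drag or resize collapses into one undo step. A real implementation would
// also need to check that both entries target the same object.
function upsertCommand<T>(command: Command<T>, data: T) {
  const last = undoableCommands[undoableCommands.length - 1];
  if (last && last.command === command) {
    last.data = data; // update the existing entry with the new data
  } else {
    undoableCommands.push({ command, data });
  }
  command.do(data);
  redoableCommands = [];
}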
It’s also important to be aware of the limitations of the command pattern. One of the benefits of the Flux architecture or tools like Redux is that they create a strict framework where it’s difficult to accidentally mutate data or end up in an unexpected state. Commands, on the other hand, are much more flexible, but in turn you need to ensure that all changes to the state really are taking place inside commands, and not in arbitrary functions.
The command pattern is a useful way of defining undoable state transitions in an application. It allows us to split an application up into different controllers or slices of data. It also allows us to apply side-effects that will consistently be reapplied and undone as the user undoes and redoes their history. Hopefully, this article has helped you think about when and how you might apply the command pattern in your own tools and applications.
Google begins showing ads in AI Mode (AI answers)
Bleeping Computer
www.bleepingcomputer.com
2025-11-21 13:02:11
Google has started rolling out ads in AI mode, which is the company’s “answer engine,” not a search engine.
AI mode has been available for a year and is accessible to everyone for free.
If you pay for Google One, AI mode lets you toggle between advanced models, including Gemini 3 Pro, which generates an interactive UI to answer queries.
Up until now, Google has avoided showing ads in AI mode because it made the experience more compelling to users.
At the same time, Google has been slowly pushing users toward AI mode in the hope that people get used to the idea and eventually reach for it rather than ChatGPT or traditional Google Search.
These ads have a “sponsored” label because Google needs to comply with the law of the land, and they’re similar to the usual links (citations) in AI answers.
We noticed that these ads appear at the bottom of the answer, whereas citations mostly appear in the right sidebar.
It’s possible that Google’s tests found ads at the bottom of the answer have a higher CTR (click-through rate), or it could be one of the experiments.
What do you think? Do you think people would click on ads in AI mode as much as they do in regular search?
Headlines for November 21, 2025
Democracy Now!
www.democracynow.org
2025-11-21 13:00:00
Trump Accuses Democratic Lawmakers of “SEDITIOUS BEHAVIOR, punishable by DEATH!”
Nov 21, 2025
President Trump has accused six Democratic military veterans in Congress of “SEDITIOUS BEHAVIOR” and said their actions were “punishable by DEATH!” In a series of social media posts on Thursday, Trump targeted the lawmakers after they released this video urging U.S. military personnel to defy illegal orders.
Sen. Mark Kelly: “You can refuse illegal orders.”
Sen. Elissa Slotkin: “You can refuse illegal orders.”
Rep. Chris Deluzio: “You must refuse illegal orders.”
Sen. Elissa Slotkin: “No one has to carry out orders that violate the law…”
Rep. Chrissy Houlahan: “…or our Constitution.”
In one post, Trump wrote, “This is really bad, and Dangerous to our Country. Their words cannot be allowed to stand. SEDITIOUS BEHAVIOR FROM TRAITORS!!! LOCK THEM UP???” He also reposted a message that said, “HANG THEM GEORGE WASHINGTON WOULD!!”
Democratic Senator Chris Murphy of Connecticut responded to Trump.
Sen. Chris Murphy: “The president of the United States just called for members of Congress to be executed. If you are a person of influence in this country, maybe it’s time to pick a [bleep] side. If you are a Republican in Congress, if you are a Republican governor, maybe it’s time to draw a line in the sand and say that under no circumstances should the president of the United States be calling on his political opposition to be hanged.”
In related news, NBC News has revealed a top military lawyer at U.S. Southern Command raised concerns in August over the legality of the U.S. blowing up boats in the Caribbean.
Federal Judge Rules Trump’s Military Deployment to D.C. Unlawful
Nov 21, 2025
A federal judge has declared the deployment of National Guard soldiers to Washington, D.C., illegal, ruling that President Trump lacks the authority to send troops into the district “for the deterrence of crime.” However, District Judge Jia Cobb postponed enforcing her decision until December 11 to give the Trump administration time to appeal.
Federal Prosecutors Drop Charges Against Chicago Woman Shot by Border Patrol Agent
Nov 21, 2025
Image Credit: Instagram/@vcdefensa
In Chicago, federal prosecutors have abruptly dropped charges against Marimar Martinez, a woman shot multiple times by a Border Patrol officer as she joined a convoy of vehicles trailing federal agents carrying out immigration raids. Prosecutors dismissed the case without explanation on Thursday after defense lawyers presented evidence that the Border Patrol agent had swerved into Martinez’s vehicle and later bragged in text messages about shooting her.
DHS to Shift Focus of Immigration Raids from Charlotte to New Orleans
Nov 21, 2025
Border Czar Plans to Expand Immigration Raids in NYC; The Guardian Reveals FBI Spied on Activists
Nov 21, 2025
White House border czar Tom Homan has told Fox News that more federal immigration agents will soon be heading to New York City. The federal government is reportedly considering using a Coast Guard facility in Staten Island to jail detained people. In related news, The Guardian has revealed the FBI spied on New York immigration activists by gaining access to a Signal group chat used to monitor activity at three New York federal immigration courts.
Zohran Mamdani Travels to White House as Trump Threatens to Cut Federal Aid to New York City
Nov 21, 2025
New York City Mayor-elect Zohran Mamdani is heading to the White House today to meet with President Trump, who had threatened to cut off federal funding to New York if Mamdani was elected. Mamdani spoke Thursday.
Mayor-elect Zohran Mamdani: “My team reached out to the White House to set up this meeting, because I will work with anyone to make life more affordable for the more than eight-and-a-half million people who call this city home. I have many disagreements with the president, and I believe that we should be relentless and pursue all avenues and all meetings that could make our city affordable for every single New Yorker.”
Israeli Forces Move Beyond Gaza’s “Yellow Line” and Continue Attacks in Fresh Ceasefire Violations
Nov 21, 2025
Israel’s army is carrying out a fresh wave of attacks across Gaza despite the ceasefire deal that took effect over a month ago. Israeli airstrikes, tank and artillery fire were reported in the Bureij and Maghazi camps and in the southern cities of Rafah and Khan Younis, where Israeli forces shot and killed a displaced Palestinian. Meanwhile, Israel’s military has repositioned its forces beyond the so-called yellow line in another violation of the ceasefire agreement.
UNICEF reports at least 67 children have been killed by Israeli army fire in Gaza since the ceasefire came into effect on October 10 — that’s an average of two children killed per day since the beginning of the truce.
Israeli Troops Kill 2 Palestinian Teens in West Bank Amid Wave of Settler Attacks
Nov 21, 2025
Israeli settlers have carried out another wave of attacks on Palestinian communities in the occupied West Bank, setting fire to properties in several villages. The attacks damaged tourist villas under construction south of Nablus and a plant nursery in the town of Deir Sharaf. Elsewhere, a group of Israeli settlers attacked Palestinian homes in a village near Hebron, assaulting residents with batons and stones. Separately, Israeli forces shot and killed two Palestinian teenagers during a raid on the Kafr ’Aqab neighborhood of occupied East Jerusalem.
Meanwhile, a new report from Human Rights Watch finds the Israeli government’s forced displacement of 32,000 Palestinians in three West Bank refugee camps in January and February amounts to war crimes and crimes against humanity.
London Police Arrest Peaceful Protesters for Carrying Signs Supporting Palestine Action
Nov 21, 2025
In London, police arrested at least 47 supporters of the banned direct action group Palestine Action as they held a peaceful protest outside the Ministry of Justice on Thursday. They’re the latest of more than 2,000 arrests since Britain’s House of Commons voted in July to proscribe Palestine Action under the U.K.’s anti-terrorism laws, adding it to a list that includes ISIS and al-Qaeda. Police dragged away protesters for simply carrying signs proclaiming, “I support Palestine Action.”
Protester: “Stop genocide in Palestine! We call on Keir Starmer to do the right thing. We want this Labour government to do the right thing.”
This week, six members of Palestine Action went on trial in the U.K. on charges of aggravated burglary, criminal damage and violent disorder, after they broke into a factory that produces hardware for the Israeli weapons maker Elbit Systems and used sledgehammers to destroy equipment.
Zelensky Agrees to Negotiate with Trump on 28-Point “Peace Plan” Negotiated by U.S. and Russia
Nov 21, 2025
Ukrainian President Volodymyr Zelensky said Thursday he’s ready to negotiate with President Trump on a U.S.-backed peace plan that calls on Ukraine to cede large swaths of territory to Russia while restricting the size of its military. The 28-point peace plan was negotiated by Trump’s envoy Steve Witkoff and Secretary of State Marco Rubio with Kremlin envoy Kirill Dmitriev, the head of Russia’s sovereign wealth fund. The backchannel negotiations did not include any Ukrainian or European officials.
Interior Department to Open 1.3 Billion Acres of U.S. Waters to Oil and Gas Drilling
Nov 21, 2025
The Trump administration is planning to open nearly 1.3 billion acres of U.S. waters off the coasts of Alaska, California and Florida to new oil and gas drilling. In a statement, Earthjustice blasted Thursday’s announcement by Interior Secretary Doug Burgum, writing, “Trump’s plan would risk the health and well-being of millions of people who live along our coasts. It would also devastate countless ocean ecosystems. This administration continues to put the oil industry above people, our shared environment, and the law.”
30+ Countries Oppose Draft U.N. Text That Excludes Roadmap to Phase Out Fossil Fuels
Nov 21, 2025
Here at the U.N. climate summit in Belém, Brazil, more than 30 countries have opposed the current draft text because it does not include a roadmap for phasing out fossil fuels. The negotiations were disrupted on Thursday when a large fire broke out at the conference site. Thirteen people were treated on site for smoke inhalation. Earlier on Thursday, U.N. Secretary-General António Guterres urged delegates to reach a deal. He also took questions from the press.
Justin Rowlatt: “Secretary-General, what message do you want this conference to send to Donald Trump?”
Secretary-General António Guterres: “We are waiting for you.”
Justin Rowlatt: “Do you see a possibility of him engaging in this process in a positive way?”
Secretary-General António Guterres: “Hope is the last thing that dies.”
After the press conference, I attempted to follow up with the U.N. secretary-general.
Amy Goodman: “Secretary-General, what message do you think Trump’s not sending a high-level delegation — I’m Amy Goodman from Democracy Now! Can you respond to the huge fossil fuel delegation that’s here, over 1,600 lobbyists? Should the U.S. ban the fossil fuel lobbyists?”
Larry Ellison Discussed Firing CNN Anchors with White House Amid Warner Bros. Takeover Bid
Nov 21, 2025
In media news, Paramount, Netflix and Comcast have formally submitted bids to buy Warner Bros. Discovery, the parent company of the Warner Bros. movie studio as well as HBO and CNN. The Guardian reports the White House favors Paramount’s bid. The paper reports Paramount’s largest shareholder, Larry Ellison, has spoken to White House officials about possibly axing some CNN hosts disliked by President Trump, including Erin Burnett and Brianna Keilar.
Trump and JD Vance Notably Absent from D.C. Funeral for Dick Cheney, Architect of Iraq Invasion
Nov 21, 2025
The funeral for former Vice President Dick Cheney was held Thursday at Washington National Cathedral. Cheney was a key architect of the 2003 U.S. invasion of Iraq, but Iraq was not even mentioned during the funeral. Former President George W. Bush delivered the eulogy. President Trump and Vice President JD Vance were not invited to attend. Cheney, a lifelong Republican, had endorsed Kamala Harris in the 2024 race.
The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
Digital sovereignty has been much discussed in Europe in recent weeks, most recently during a German-French summit in Berlin. The extent of dependence on the USA in the digital sector is currently being experienced first-hand by a French judge. Nicolas Guillou, one of six judges and three prosecutors of the International Criminal Court (ICC) sanctioned by the USA in August, described his current situation in a recent interview as digital time travel back to the 1990s, before the internet age.
The reason for the US sanctions is the arrest warrants against Israeli Prime Minister Benjamin Netanyahu and Defense Minister Yoav Gallant, who are accused of war crimes and crimes against humanity in the context of the destruction of the Gaza Strip. The USA condemned the court's decision, whereupon the US Treasury Department sanctioned six judges and three prosecutors.
Digitally excluded from almost everything
In Guillou's daily life, this means that he is excluded from digital life and much of what is considered standard today, he told the French newspaper Le Monde. All his accounts with US companies such as Amazon, Airbnb, or PayPal were immediately closed by the providers. Online bookings, such as through Expedia, are immediately canceled, even if they concern hotels in France. Participation in e-commerce is also practically no longer possible for him, as US companies always play a role in one way or another, and they are strictly forbidden to enter into any trade relationship with sanctioned individuals.
He also describes the impact on his banking as drastic. Payment systems are blocked for him, as US companies like American Express, Visa, and Mastercard have a virtual monopoly in Europe. The rest of his banking is severely restricted as well: accounts with non-US banks have in some cases been closed, and transactions in US dollars or via dollar conversion are forbidden to him.
Judge: EU should block sanctions
Guillou's case shows how strong the USA's influence in the tech sector is and how few options he has to circumvent it, at a time when an account with a US tech company is taken for granted in more and more places.
The French judge advocates for Europe to gain more sovereignty in the digital and banking sectors. Without this sovereignty, the rule of law cannot be guaranteed, he warns. At the same time, he calls on the EU to activate an existing blocking regulation (Regulation (EC) No 2271/96) for the International Criminal Court, which prevents third countries like the USA from enforcing sanctions in the EU. EU companies would then no longer be allowed to comply with US sanctions if they violate EU interests. Companies that violate this would then be liable for damages.
This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.
AI as Cyberattacker
Schneier
www.schneier.com
2025-11-21 12:01:36
From Anthropic:
In mid-September 2025, we detected suspicious activity that later investigation determined to be a highly sophisticated espionage campaign. The attackers used AI’s “agentic” capabilities to an unprecedented degree—using AI not just as an advisor, but to execute the cyberattacks themselves.
The threat actor—whom we assess with high confidence was a Chinese state-sponsored group—manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases. The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention.
[…]
The attack relied on several features of AI models that did not exist, or were in much more nascent form, just a year ago:
Intelligence. Models’ general levels of capability have increased to the point that they can follow complex instructions and understand context in ways that make very sophisticated tasks possible. Not only that, but several of their well-developed specific skills—in particular, software coding—lend themselves to being used in cyberattacks.
Agency. Models can act as agents—that is, they can run in loops where they take autonomous actions, chain together tasks, and make decisions with only minimal, occasional human input.
Tools. Models have access to a wide array of software tools (often via the open standard Model Context Protocol). They can now search the web, retrieve data, and perform many other actions that were previously the sole domain of human operators. In the case of cyberattacks, the tools might include password crackers, network scanners, and other security-related software.
Backed by YC and founded by two Princeton Ph.D.s, Roundtable provides frictionless, continual verification for our clients’ platforms. We ensure Proof of Human, tracking and stopping bots and fraud in real time to safeguard the integrity of online insights, traffic, and spend. We’re looking for an exceptional SDR to join our team.
An ideal candidate is driven and competitive, but still humble and professional. Their efforts will be instrumental in igniting the sales funnel and introducing major platforms and research firms to the future of online security. There are huge opportunities for personal and professional growth.
What you’ll do:
Pipeline Generation: Research and identify key target accounts and prospects.
Outbound Engagement: Execute strategic outbound campaigns to generate qualified meetings.
Expert Positioning: Articulate the value proposition of Roundtable.
Pipeline Management: Diligently track and manage lead activity and progress.
Sales Strategy: Work with the rest of our GTM team concurrently.
Who you are:
A forward thinker with a passion for technology and a strong desire to start a career in B2B SaaS sales.
Excellent written and verbal communication skills; comfortable reaching out to senior-level executives.
Highly organized, self-motivated, and capable of managing a high volume of activity.
Prior experience in SaaS is a big plus, but not required.
If you’re interested in joining a rapidly growing, cutting-edge AI security company that is protecting the future of online data integrity, we’d love to chat with you.
Equity contingent on 3-month evaluation.
About Roundtable
Roundtable is a research and deployment company building the proof-of-human layer in digital identity. Roundtable seeks to produce high-impact research and to productize this research in bot detection, fraud prevention, and continuous authentication.
Roundtable was founded in 2023 by two Princeton PhD scientists – Mayank Agrawal and Matt Hardy. They have published in Science, PNAS, Nature Human Behaviour, and Psychological Review and are backed by Y Combinator and Brickyard Ventures.
Founded: 2023
Batch: S23
Team Size: 4
Status: Active
Location: San Francisco
Robert Reich Thinks Democrats Are On the Brink of a New Era
Intercept
theintercept.com
2025-11-21 11:00:00
The professor, author, and longtime commentator on the economy and Democrats under Trump.
The Labor Department reported September jobs numbers on Thursday, showing employers added 119,000 jobs to the economy but also an increase in unemployment to 4.4 percent. “The September report shows fairly good job growth, but every other report we have for October shows a slowdown,” says Robert Reich, the former secretary of labor under President Bill Clinton.
“Real wages — that is, wages adjusted for inflation — are going down for most people. The bottom 90 percent of Americans are in very bad shape,” says Reich. This week on The Intercept Briefing, host Akela Lacy speaks to the professor, author, and longtime commentator about the economy and the state of Democratic Party politics under Trump. “The only people who are doing well, who are keeping the economy going through their purchases, are the top 10 percent, and they’re basically doing well because they’re the ones who own most of the shares of stock,” says Reich. “What happens when and if the stock market implodes?”
Reich has been beating the drum on poverty and inequality for decades. And while that message took some time to hit the mainstream, it seems to be hitting home now more than ever, but Democratic leadership continues to fall flat in conveying they understand the urgency of the economic hardships ordinary Americans face.
The answer, Reich says, is new leadership. He is disappointed in Democrats who caved to Trump on the government shutdown. “It’s another example of the Democrats not having enough backbone,” Reich says. “I think Chuck Schumer has to go. And Jeffries too.” He adds, “I’m 79 years old. I have standing to speak about the fact that there is a time to move on. And I think that the Democratic leaders today should move on.”
Listen to the full conversation of The Intercept Briefing on Apple Podcasts, Spotify, or wherever you listen.
Transcript
Akela Lacy:
Welcome to The Intercept Briefing, I’m Akela Lacy.
If you’ve been following politics coverage at The Intercept, you know we have a minor obsession with the battle over the soul of the Democratic Party. Our guest today may give us a run for our money.
Robert Reich:
People ask me every day, where the fuck are the Democrats? There are a handful leading the fight against Trump’s regime.
JB Pritzker:
Come and get me.
RR:
But the party’s leadership has been asleep at the wheel.
AL:
That’s Robert Reich, secretary of labor under former President Bill Clinton and a professor, author, and commentator on capitalism and inequality. Reich has organized his life project around progressive policies: getting big money out of politics, strengthening unions, and taxing the rich. His new memoir, “Coming Up Short,” walks through his life’s work and the various bullies and boogeymen who crossed his path. Reich also has a new documentary, “The Last Class,” which chronicles his final semester teaching at U.C. Berkeley about wealth and inequality.
RR (in trailer):
One of the best ways of learning is to discuss something with somebody who disagrees with you. Can I do what I love to do, as well as I should be doing it? The wealth is held by the richest 400 Americans. You get the picture? We have to all engage their curiosity. Democracy’s not a spectator sport. It’s active.
AL:
Reich hasn’t been quiet about his criticisms of Democrats. He endorsed Bernie Sanders for president in 2016 and had harsh words for the new billionaire-financed Democratic think tank launched earlier this year by ex-staffers of Sen. John Fetterman. But he’s also been a quintessential party insider: He wholeheartedly backed both Joe Biden and Kamala Harris in 2020 and 2024, though he’s been open about where Biden fell short.
Reich has been beating the drum on poverty and inequality for decades. And while that message took some time to hit the mainstream, it seems to be hitting home now more than ever. Voters are at the end of their ropes under an administration unabashed about its mission to enrich the world’s elite — and itself — while terrorizing communities around the country.
And while that frustration with Trump has been evident in Democrats’ recent electoral wins, it still feels like Democratic leadership is failing, in so many ways, to meet the moment. So what has to happen for things to really change? What more has to break until something gives?
Now, we’ll delve into those questions and more with Robert Reich.
Mr. Reich, welcome to the show.
RR:
Akela, thank you very much for having me.
AL:
Mr. Reich, you’ve argued that Democrats have lost the working class because they’ve catered to big money and corporations, and that the way to fix American democracy is to get big money out of the picture. Why do you think that has been so hard to do?
RR:
Because big money doesn’t want big money to be out of the picture, Akela. It’s pretty straightforward. It’s a chicken and egg problem, and it’s become a larger and larger problem. I saw it begin in the 1970s after the Powell memo to the Chamber of Commerce calling on big corporations to get involved in American politics.
In terms of putting big money into American politics, I saw it in the ’80s getting much worse when I was secretary of labor — it was really awful, I thought big money was taking over. But little did I know that the 21st century would be far, far worse. Well, it’s worse than ever, but I think that in some ways, Trump is a consequence, a culmination of four decades, five decades of more and more corporate and big, wealthy money in American politics.
AL:
You mentioned the rise of Trump. He campaigned as a populist, but his policies have obviously favored the wealthy, including massive tax cuts. Why does that political contradiction work for him?
“What he’s given the working class is people and institutions to hate.”
RR:
I think it worked for Donald Trump because he’s a con man. He knows how to speak in the language of the working class, but appealing to the red meat — and I hate to say it — but bigotry side of the working class. That is, even though Trump has given the wealthy exactly what they want in terms of tax cuts and regulatory rollbacks and everything that can make them even wealthier — at the same time, what he’s given the working class is people and institutions to hate. He’s given them everything from transgender people to immigrants. His racism is pretty evident. It’s a constant standard list of Trump negatives and Trump targets.
I think it’s important to understand both sides of this because this is not the first time in American history, nor is it the first time in world history, that a demagogue has given the rich exactly what they want in terms of more riches. And also used the power of bigotry to keep the working class in line.
AL:
Right, you talk about Pat Buchanan in your book, and there’s plenty of other examples that we could go through, but I want to also touch on — in one of your latest
Guardian columns
, you argue that Trump’s project will eventually collapse under the weight of its own hypocrisy. I would like to believe that. I’m not sure that’s being borne out right now. Do you?
RR:
Trump’s project is going to collapse under the weight of its own hypocrisy. Look at the polls. He’s doing worse and worse even among his core, even among his base. And we’re talking about men, white men, straight white men who are non-college graduates. His ratings keep going down. His favorability keeps dropping. And when all the polls are showing the same trend, you have to start believing that’s the case.
Also the Democrats, frankly, have not got their act together. They really do need to start attacking big corporations for raising prices and for monopolizing. They’ve got to start talking about the super wealthy and the absurdities of how much power the super wealthy have in our political system.
Elon Musk is exhibit number A. There are many, many exhibits. And every time all of these tech CEOs get together at the White House with Trump, we need Democrats to be pointing this out and to have a very clear message that they represent the alternative to corporate capitalism.
“We need Democrats to be pointing this out and to have a very clear message that they represent the alternative to corporate capitalism.”
AL:
We’re touching a little bit on this battle over the working class. You’ve said that Biden didn’t communicate his efforts to help the working class effectively. What is the effective way to communicate that, and what is, to your last point, the effective way to point out this catering to the elite of the elite from the Republican side?
RR:
The effective way was, is to say it. To Biden’s credit, he did walk a picket line. He did appoint some very good people, like Lina Khan at the Federal Trade Commission. He was a trust-buster. But he didn’t really talk about monopolization all that much. He didn’t really talk about corporate power. You need a Democrat, a leader of the Democrats, who really appears to be a fighter and makes it very clear what they’re fighting against.
Akela Lacy:
Switching gears a little bit to the exciting election season that we’re in. You’ve made several endorsements this year in key races: Zohran Mamdani in New York, Graham Platner in Maine, and Dan Osborne in Nebraska. What did you see in those candidates that led you to endorse them?
RR:
We have in the Democratic Party — most of these are Democrats, are young people, who are saying the truth. They’re talking about the economy and our society in terms of power and the misallocation of power. They’re not depending on big corporate contributions for their campaigns. I think this is the future of the Democratic Party, if the Democratic Party has a future. It’s certainly the future of trying to get people, everybody in the bottom 90 percent of America who are struggling, together.
AL:
What was your reaction to the reporting on Graham Platner’s internet history, his tattoo, and the fallout in his campaign?
RR:
I wasn’t happy about that. I know people in Maine who tell me that on the ground he’s doing very, very well. He’s making all of the right moves. But he also is communicating to people in ways that Mamdani has done in New York City and others are doing around the country. I guess I have to throw up my hands and say the people of Maine are going to decide.
AL:
You wrote a new book recently. In “Coming Up Short,” you talk about your life project having to explain why it’s so important to “reverse the staggering inequalities and legalized bribery that characterize today’s America.” For people who haven’t read the book, can you give us a preview of the reasons why those efforts by yourself and others have, in your words, come up short?
RR:
It’s very difficult for America to face a fundamental flaw in American capitalism. And that has to do with the power of wealth and corporate power. And I have spent much of the last 40, 50 years trying to not only educate people and teach people in classrooms and with my books and other efforts, but also when I have been in Washington serving the public directly fighting this kind of corporate power. I’ve done everything I can do, but I’m sure there’s much more. I’m still fighting. I’m still young.
AL:
Can you say more about why you think the larger project has not succeeded?
RR:
I think the long-term project has not succeeded because you’ve got a larger and larger group of Americans who are angry, frustrated, and basically cynical. That group of people, unless the Democrats or some other party reaches out to them in a positive way and says, “Look, the answer to your problems, it’s not bigotry against immigrants or bigotry against transgender people or bigotry against Black people or bigotry against foreigners. The answer to your problem is really to look at the corporate elites and Wall Street and the richest people in this country, and understand that they have abused their wealth and power — and continue to abuse their wealth and power.”
Now, it’s a very difficult message to get through because the working poor and the working middle class as a group continue to grow and continue to slide. And the Democrats have not made this case. If they do make it, when they do make it, I think we’re going to see some fundamental changes politically. But until they do, we’re gonna see the Trump triumph that we have seen up until now.
AL:
You mentioned Democrats or some other party reaching out to people who feel cynical and removed from the process. Do you see an opening for that in the next several cycles? This has been a topic forever, but even the most popular independent ran as a Democrat. The institutional path for progressives right now still seems to be encouraging people to stick with Democrats. What do you see happening there?
RR:
I think it’s very hard to form a third party for all the obvious reasons, when you have a winner-take-all system in the United States as we do have. So I’m hoping that the “takeover” occurs in the Democratic Party. That forces like Bernie Sanders, AOC, Zohran Mamdani — others who are young and who understand the necessity of speaking in the terms I’m speaking right now to you — will take over the Democratic Party, and their success in winning over the working class from the Republicans will be enough to generate its own positive outcomes.
Once in politics, you actually begin a process of winning over voters, it’s not all that hard to get others to join you in winning over those voters politically. And the problem the Democrats have had — And, look, I’ve been a Democrat for a very long time. And I’ve been frustrated for a very long time. I mean, in the Clinton administration, I can’t tell you the number of times that I tried to push against the neoliberal facade that was gaining even more and more power as I was labor secretary. It was very difficult.
AL:
You’ve said that inequality hurts everyone, not just the poor. And as you’ve noted, there are signs that that message is starting to resonate with more people, with recent results of elections to Trump’s open alignment with wealthy interests. You’ve been warning about this for 30 years. Do you think this message is starting to resonate with more people? And if not, why hasn’t it broken through or why is it breaking through more now, particularly with, as we’ve talked about Mamdani, etc.
RR:
It is beginning to get through. And part of the reason is Donald Trump. Because people see the logical consequence of the alternative that is Trump, that is fascism, neo-fascism. It’s an administration that is cruel that represents the opposite of what America actually has told itself we represent.
And I think that there are many people who in leadership positions who feel trapped right now. I’ve talked to them. People who feel that they’ve got to play along with Trump, they don’t dare cross him because they know how vindictive he can be. But they are themselves learning a very important lesson: that if you completely neglect the working class and the working middle class and the poor, you are begging for eventually a demagogue like Trump to come along and plunge the entire country into this authoritarian nightmare.
[Break]
AL:
Going back to your comments on pressuring Democrats on neoliberal expansion. There’s an argument to be made that there’s a direct through line between NAFTA — the North American Free Trade Agreement which went into effect in 1994, eliminated most tariffs between the U.S, Canada and Mexico — between NAFTA and the rise of Trump. A lot of American manufacturing jobs moved to Mexico because of that agreement, and many of those people are part of the MAGA base. This happened during the Clinton administration, and you wrote in the book that you were worried that American workers would “get shafted.” How do you look back on that 30 years later, and do you wish you had done more to fight it?
RR:
I wish I had done more to fight it, Akela. Not just NAFTA, but also Chinese accession to the World Trade Organization, also deregulation of Wall Street, which led almost directly to the 2008 Wall Street crash. And at the same time the decision to get rid of welfare and not substitute anything that was really helpful to most Americans. I mean, I am proud of certain things that we accomplished. I fought very hard to increase the minimum wage. We did it, even though the Republicans at that time were in control of both houses of Congress.
I’m glad that we expanded something called the Earned Income Tax Credit, which has become the largest anti-poverty program in America. But we didn’t do nearly enough. And the things that unfortunately I fought against have proven to be, as you suggested, the foundation stones of many of the problems we now have.
AL:
I want to ask about your new documentary. It’s out now, called “The Last Class.” It’s about teaching at UC Berkeley. I’m curious about your experience all of these years as a professor and increasing threats to academic freedom. These threats have taken many shapes, but it includes a long history of smearing professors as antisemitic if they talk about Palestine, to now the Trump administration weaponizing that project to a whole new level, merging it with attacks on migrants and policing nonprofits, treating free speech as terrorism. The list goes on and on. What are your thoughts on how this has accelerated under Trump?
RR:
Like everything else, it’s now out in the open. It starts under Ronald Reagan. Actually, it starts under Nixon. This kind of a negative fear of the so-called intellectual class, the notion that universities are plotting against America.
And the Republicans have built this case because they do view universities — justifiably — as hotbeds of thought, of criticism of ideology that they don’t like. But Trump, again, one of the benefits, I’m going to put that in quotes, “the benefits” of the Trump administration is, it’s now visible, it’s obvious.
JD Vance says universities are the enemy. Donald Trump wants, it’s not just, it’s — DEI is a pretext and we know that. We know that antisemitism, the charges of antisemitism are a pretext for the Trump administration to come in and to restrict academic freedom. I think that they met their match at Harvard, I hope.
But remember, this is all a process of public education. What I hear, what I see, what the polls are showing, what my conversations with many conservatives are showing, is that many people are saying, “Wow, I didn’t really understand 10 or 15 or 20 years ago, what this conservative assault on universities really was all about. I now see it.”
“DEI is a pretext and we know that. We know that antisemitism, the charges of antisemitism are a pretext for the Trump administration to come in and to restrict academic freedom.”
AL:
We have to ask. Everyone is talking about the Epstein files, which have become a pressure cooker of sorts for Trump over the last weeks and months. A few questions here: In retrospect, did Senate Democrats caving to Republican budget negotiations actually end up intensifying that pressure?
RR:
Yeah, I was very disappointed in the Senate Democrats and one Independent who, I’ve used the word “caved.” They did cave to the Republicans. The thing to keep in mind is that they had some bargaining leverage. The Democrats have not had any bargaining leverage for 10 months.
Finally, they have some bargaining leverage to get the Republicans to agree with them to reinstate the subsidies for Obamacare. Without those subsidies health care premiums are going to skyrocket for millions of Americans. Well, they had that bargaining leverage and they gave it up at a time, incidentally, when most Americans were blaming the Republicans for the shutdown and also the pressures.
I mean, look at the air traffic controllers, the delays of flights — the pressures were growing so intense that the Republicans, including Trump, had to come around. So I just think it’s another example of the Democrats not having enough backbone.
RR:
Well, I said I think Chuck Schumer has to go. And Jeffries too. We are on the, hopefully, brink of a new era with regard to Democratic, capital-D politics. And we have a lot of young people, a lot of very exciting, a lot of very progressive young people around the country. And these older people — I could speak as an older person, all right? I’m 79 years old. I have standing to speak about the fact that there is a time to move on. And I think that the Democratic leaders today should move on.
AL:
I wanted to ask about that, when we were on topic, but the second Epstein question that I have is: The document dump from the House Oversight Committee has revealed new details about Epstein’s associates from Trump to Bill Clinton, your former boss. What are your thoughts on how that scandal has unfolded and taken hold on the right, and what do you make of the Clinton association?
RR:
I don’t know about Bill Clinton’s role. We don’t know. There have not been evidence yet. But I think that what may be being lost in this whole Epstein scandal is really what has happened to the victims of Epstein and these other men.
Let’s be clear. This is about human rights. It’s about trafficking of children in ways that are so fundamentally wrong. This is an issue that I agree with a lot of MAGA types on. You don’t want to tolerate this kind of behavior in not just the elites of America, but anyone.
And I want to just make sure we focus on what happened and how horrible that was. And it’s still going on, not with Epstein, obviously. But men are harassing and bullying and raping women. And men have, who have positions of power and money in our society — Again, the Trump era is revealing a lot and a lot that’s very ugly about America. Hopefully we will learn our lessons.
“The Trump era is revealing a lot that’s very ugly about America.”
AL:
We want to get your thoughts on a few news developments briefly before we go. The delayed jobs report numbers from September came out showing a growth in hiring, but an uptick in the unemployment rate. What do those indicators say about where the labor market is right now?
RR:
As far as I can tell — now, we don’t have a complete picture of the labor market because of the shutdown. But as far as I can tell, job growth is very, very slow. The September report shows fairly good job growth, but every other report we have for October shows a slowdown. A lot of private employers, understandably, don’t know about the future. They’re feeling uncertain about where the economy is going, so they’re not going to hire.
We also know that real wages — that is, wages adjusted for inflation — are going down for most people. The bottom 90 percent of Americans are in very bad shape right now. The only people who are doing well, who are keeping the economy going through their purchases, are the top 10 percent. And they’re basically doing well because they’re the ones who own most of the shares of stock — 92 percent of the shares of stock — and the stock market is doing well. What happens when and if the stock market implodes? I don’t like what’s happened, with regard to, for example, artificial intelligence, AI stocks, which I think will be shown to be a huge bubble. And we can see a bubble also in other areas of the stock market. It’s kind of a dangerous economic terrain.
AL:
Why do you think AI stocks will prove to be a bubble?
RR:
Because the amounts that are being invested in AI now, and the amount of debt that AI companies, Big Tech companies are going into in order to make those investments are really way beyond the possible returns.
Now, I grant you, Nvidia did extremely well. But Nvidia’s kind of an outlier. I mean, look at what the expenditures — if you take out all of the investments from AI from the stock market and from related parts of the economy, there’s nothing really happening in the American economy right now. That’s where the action is. But of course, everybody wants to be the winner. Not everybody’s gonna be the winner.
AL:
Speaking of the stock market, there is bipartisan pressure on Speaker Mike Johnson to advance a congressional ban on buying stocks. What are your thoughts on that?
RR:
Oh, that’s way overdue. Members of Congress should not be buying individual stocks. They can get an index. They should be required — if they want to, if they have savings, if they want to be in the stock market — get an index that is just an index of the entire stock market. It’s actually inexcusable for individual members of Congress to be making stock trades because they have so much inside information.
“It’s actually inexcusable for individual members of Congress to be making stock trades because they have so much inside information.”
AL:
This is making me think of the fact that Nancy Pelosi, who has faced a lot of criticism over congressional stock trading, is retiring. We interviewed one of the candidates running to replace her, Saikat Chakrabarti. I’m wondering if you’re following that race, but also what other races you’re following right now, and if you’re looking to make endorsements in other races we should have on our radar.
RR:
Look, Akela, I endorse when I’m very excited about a candidate. Nobody cares about my endorsement. I mean, I’m a former secretary of labor. But yes, as we talked about, I do think that there’s some up and comers. And if I can help in any way, I certainly will.
I think Nancy Pelosi, I just want to say something about her because I have not always agreed with everything she did, but I think she did some very, very important and good things. She got Barack Obama to focus on the Affordable Care Act, to pass the Affordable Care Act. That was a big deal. You look at recent congressional history, and she stands out as the most important leader that we have had.
Akela Lacy:
We’re going to leave it there. Thank you for joining me on the Intercept Briefing.
RR:
Akela, thank you very much for having me.
Akela Lacy:
That does it for this episode.
In the meantime, though, what do you want to see more coverage of? Are you taking political action? Are there organizing efforts in your community you want to shout out? Shoot us an email at podcasts@theintercept.com. Or leave us a voice mail at 530-POD-CAST. That’s 530-763-2278.
This episode was produced by Laura Flynn. Sumi Aggarwal is our executive producer. Ben Muessig is our editor-in-chief. Chelsey B. Coombs is our social and video producer. Desiree Adib is our booking producer. Fei Liu is our product and design manager. Nara Shin is our copy editor. Will Stanton mixed our show. Legal review by David Bralow.
Slip Stream provided our theme music.
If you want to support our work, you can go to theintercept.com/join.
Your donation, no matter the amount, makes a real difference. If you haven’t already, please subscribe to The Intercept Briefing wherever you listen to podcasts. And leave us a rating or a review, it helps other listeners to find us.
Until next time, I’m Akela Lacy.
Germany: States Pass Porn Filters for Operating Systems
Providers of operating systems such as Microsoft, Apple, or Google will in the future have to ensure that their systems include a built-in youth-protection mechanism. This is intended to ensure that porn filters are installed at the fundamental level of PCs, laptops, smart TVs, game consoles, and smartphones, and that age ratings for websites and apps are introduced. This is stipulated by the latest reform of the Interstate Treaty on the Protection of Minors in the Media (Jugendmedienschutz-Staatsvertrag, JMStV), which the state parliaments passed on Wednesday with the 6th Interstate Media Amendment Treaty, after Brandenburg relented.
The core of the JMStV amendment, which has been debated for years and to which the state premiers agreed almost a year ago: end devices that are typically also used by minors should be switchable by parents into a child or youth mode, with filters at the operating-system level, at the push of a button. The aim is to protect young people on the internet from age-inappropriate content such as pornography, violence, hate speech, incitement, and misinformation.
In this special mode, common browsers such as Chrome, Firefox, or Safari can only be used if they offer “a secure search function” or if unsecured access has been individually and securely enabled. In general, it should be possible to “individually and securely exclude” the use of browsers and programs. Only apps that themselves have an approved youth protection program or a comparably suitable tool will be accessible regardless of the pre-set age group.
Financial Blocks and End of Mirror Domains
The Commission for Youth Media Protection (KJM) describes the filtering process as a “one-button solution”. This should enable parents to “secure devices for age-appropriateness with just one click”. The new operating-system requirement will come into force no later than December 1, 2027. For devices already in production, a transitional period of three years for implementing the software mechanism will apply from the announcement of the decision on the provision’s applicability. Devices already on the market whose operating systems are no longer updated are excluded.
The states also want to prevent the circumvention of blocking orders by erotic portals such as xHamster, Pornhub, YouPorn, or MyDirtyHobby using so-called mirror domains – i.e., the distribution of identical content under a minimally changed web address. For a page to be treated as a mirror page and quickly blocked without a new procedure, it must essentially have the same content as the already blocked original.
Manufacturers of operating systems, tech associations, and the Free Software Foundation Europe (FSFE) sharply criticize the draft law. They consider the filtering requirement, in particular, to be technically and practically unfeasible, as well as legally questionable.
(wpl)
Of course using mailgun.send is easier than queuing it in a task table.
Adding indirection rarely makes systems less complex. But somehow I'm here to advocate exactly that. You may ignore my manifesto and skip to my implementation at the end.
Customers don't care about cosmic rays. They want a thing. More importantly, they want immediate confirmation of their thing. They want to offload the mental burden of their goal.
For them to delegate that responsibility, your DB is probably the only thing
that matters. Once information is committed to your database, you can
confidently say "we'll take it from here".
You can send emails later. You can process payments later. You can do almost
anything later. Just tell your customer they can continue with their goddamn
day.
Delight your customers with clear feedback.
Delight your computers by writing to one place at a time.
Never Handroll Your Own Two-Phase Commit
Writing to two places at "the same time" is sinful.
When the gods gave us computer storage, the people became unhappy. They cried, "What is consistency? Where are our guarantees? Why must I fsync?" And so they wore sackcloth and ashes for many years in their coding caves.
The people were overjoyed when the gods scrawled Postgres (and other inferior
databases) onto stone tablets. The holy "database transactions" allowed
humankind to pretend that they could read/write to multiple places at the same
time.
But some developers deny the works of the gods. They mix multiple tools, and so
commit the sin of writing to multiple places.
"Oh, we'll just send a pubsub message after we insert the row." But data is
lost. Message before insert row? Data lost. All blasphemers are doomed to
reinvent two-phase commit.
One Way To Do Things
I like LEGO. I like Play-Doh. I like Lincoln Logs. I do not, however, like
mixing them together.
It's painful to investigate systems when state is spread across SQS, Redis,
PubSub, Celery, Airflow, etc. I shouldn't have to open a local detective agency
to find out why a process isn't running as expected.
Most modern projects use SQL. Because I dislike mixing systems, I try to take
SQL as far as possible.
Of all the SQL databases, Postgres currently offers the best mix of modern
first-class features and third-party extensions. Postgres can be your knock-off
Kafka, artificial Airflow, crappy Clickhouse, nasty Elasticsearch, poor man's
PubSub, on-sale Celery, etc.
Sure, Postgres doesn't have all the fancy features of each specialized system.
But colocating queue/pipeline/async data in your main database eliminates swaths
of errors. In my experience, transaction guarantees supersede everything else.
TODO-Driven Development
while (true) {
  // const rows = await ...
  for (const { task_type, params } of rows)
    if (task_type in tasks) {
      await tasks[task_type](tx, params);
    } else {
      console.error(`Task type not implemented: ${task_type}`);
    }
}
With a simple retry system, asynchronous decoupling magically tracks all your
incomplete flows.
No need to rely upon Jira -- bugs and unimplemented tasks will be logged and
retried. Working recursively from error queues is truly a wonderful experience.
All your live/urgent TODOs are printed to the same place (in development and in
production).
With this paradigm, you'll gravitate towards scalable pipelines.
Wishful thinking
makes natural
architecture.
Human Fault Tolerance
Many systems foist useless retry-loops onto humans.
Humans should receive feedback for human errors. But humans should not receive
feedback for problems that can be handled by computers (and their software
developers).
Remember, all your retry-loops have to happen somewhere. Be careful what you
delegate to customers and developers. Your business's bottom-line is bounded by
human patience; computers have infinitely more patience than humans.
Show Me The Code
Here's the
task
table:
create table task
( task_id bigint primary key not null generated always as identity
, task_type text not null -- consider using enum
, params jsonb not null -- hstore also viable
, created_at timestamptz not null default now()
, unique (task_type, params) -- optional, for pseudo-idempotency
)
const tasks = {
  SEND_EMAIL_WELCOME: async (tx, params) => {
    const { email } = params;
    if (!email) throw new Error(`Bad params ${JSON.stringify(params)}.`);
    await sendEmail({ email, body: "WELCOME" });
  },
};

(async () => {
  while (true) {
    try {
      while (true) {
        await sql.begin(async (tx: any) => {
          const rows = await tx`
            delete from task
            where task_id in
              ( select task_id
                from task
                order by random() -- use tablesample for better performance
                for update
                skip locked
                limit 1
              )
            returning task_id, task_type, params::jsonb as params
          `;
          for (const { task_type, params } of rows)
            if (task_type in tasks) {
              await tasks[task_type](tx, params);
            } else {
              throw new Error(`Task type not implemented: ${task_type}`);
            }
          if (rows.length <= 0) {
            await delay(10 * 1000);
          }
        });
      }
    } catch (err) {
      console.error(err);
      await delay(1 * 1000);
    }
  }
})();
A few notable features of this snippet:
The task row will
not
be deleted if
sendEmail
fails. The PG transaction
will be rolled back. The row and
sendEmail
will be retried.
The PG transaction
tx
is passed along to tasks. This is convenient for
marking rows as "processed", etc.
Transactions make error-handling so much nicer. Always organize reversible
queries before irreversible side-effects (e.g. mark DB status before sending
the email). Remember that the DB commits at the end.
Because of
skip locked
, you can run any number of these workers in parallel.
They will not step on each others' toes.
Random ordering is technically optional, but it makes the system more
resilient to errors. With adequate randomness, a single task type cannot block
the queue for all.
Use
order by (case task_type ... end), random()
to create an easy
prioritized queue.
Limiting the number of retries makes the code more complicated, but it's definitely
worth it for user-facing side-effects like emails.
if (rows.length <= 0)
prevents overzealous polling. Your DBA will be
grateful.
After reading the book
The AWK Programming Language
(recommended!)
, I was planning to try
AWK
out on this year’s Advent of Code. Having some time off from work this week, I
tried to implement
one of the problems
in it to get some practice, set up my tooling, see how hard AWK would be,
and… I found I’m FP-pilled.
I
knew
I’m addicted to the combination of algebraic data types (tagged unions)
and exhaustive pattern matching, but what got me this time was immutability,
lexical scope and the basic human right of being allowed to return arrays from
functions.
Part 1 of the Advent of Code problem was easy enough, but for part 2 (basically
a shortest path search with a twist, to not spoil too much), I found myself
unable to switch from my usual
functional BFS
approach
to something mutable, and ended up trying to implement my functional approach in
AWK.
It got hairy very fast. I needed to implement:
hashing of strings and 2D arrays (by piping to
md5sum
)
a global
set
array of seen states
a way to serialize and deserialize a 2D array to/from a string
and a few associative arrays for retrieving this serialized array by its
hash.
I was very lost by the time I had all this; I spent hours just solving what felt
like
accidental complexity
; things that I’d take for granted in more modern
languages.
Now, I know nobody said AWK is modern, or functional, or that it promises any
convenience for anything other than one-liners and basic scripts that fit under
a handful of lines. I don’t want to sound like I expect AWK to do any of this;
I knew I was stretching the tool when going in. But I couldn’t shake the feeling
that there’s a beautiful AWK-like language within reach, an iteration on the AWK
design (the pattern-action way of thinking is beautiful) that also gives us a
few of the things programming language designers have learnt over the 48 years
since AWK was born.
Dreaming of functional AWK
Stopping my attempts to solve the AoC puzzle in pure AWK, I wondered: what am I
missing here?
What if AWK had
first-class arrays?
BEGIN {
# array literals
normal = [1, 2, 3]
nested = [[1,2], [3,4]]
assoc = ["foo" => "bar", "baz" => "quux"]
multidim = [(1,"abc") => 999]
five = range(1,5)
analyze(five)
print five # --> still [1, 2, 3, 4, 5]! was passed by value
}
function range(a,b) {
r = []
for (i = a; i <= b; i++) {
r[length(r)] = i
}
return r # arrays can be returned!
}
function analyze(arr) {
arr[0] = 100
print arr[0] # --> 100, only within this function
}
What if AWK had
first-class functions and lambdas?
BEGIN {
# construct anonymous functions
double = (x) => { x * 2 }
add = (a, b) => { c = a + b; return c }
# functions can be passed as values
apply = (func, value) => { func(value) }
print apply(double,add(1,3)) # --> 8
print apply(inc,5) # --> 6
}
function inc(a) { return a + 1 }
What if AWK had
lexical scope
instead of dynamic scope?
# No need for this hack anymore ↓ ↓
#function foo(a, b ,local1, local2) {
function foo(a, b) {
local1 = a + b
local2 = a - b
return local1 + local2
}
BEGIN {
c = foo(1,2)
print(local1) # --> 0, the local1 from foo() didn't leak!
}
What if AWK had
explicit globals
, and everything else was
local by default?
BEGIN { global count }
END {
foo()
print count # --> 1
print mylocal # --> 0, didn't leak
}
function foo() { count++; mylocal++ }
(This one, admittedly, might make programs a bit more verbose. I’m willing to
pay that cost.)
What if AWK had
pipelines?
(OK, now I’m reaching for syntax sugar…)
BEGIN {
result = [1, 2, 3, 4, 5]
|> filter((x) => { x % 2 == 0 })
|> map((x) => { x * x })
|> reduce((acc, x) => { acc + x }, 0)
print "Result:", result
}
Now for the crazy, LLM-related part of the post. I didn’t want to spend days
implementing AWK from scratch or tweaking somebody else’s implementation. So I
tried to use Cursor Agent for a larger task than I usually do (I tend to ask
for very small targeted edits), and asked Sonnet 4.5 for
a README with code
examples
, and then
a full
implementation in Python
.
And it did it.
Note: I also asked for implementations in C, Haskell and Rust at the same
time, not knowing if any of the four would succeed, and they all seem to have
produced code that at least compiles/runs. I haven’t tried to test them or
even run them though. The PRs are
here
.
I was very impressed—I still am! I expected the LLM to stumble and flail
around and ultimately get nothing done, but it did what I asked it for (gave me
an interpreter that could run
those specific examples
), and over the course
of a few chat sessions, I guided it towards implementing more and more of “the
rest of AWK”, together with an excessive amount of end-to-end tests.
The only time I could see it struggle was when I asked it to implement arbitrary
precision floating point operations without using an external library like
mpmath
. It attempted to use Taylor series, but couldn’t get it right for at
least a few minutes. I chickened out and told it to
uv add mpmath
and simplify
the interpreter code. In a moment it was done.
Other things that I thought it would choke on, like
print
being both a
statement (with
>
and
>>
redirection support) and an expression, or
multi-dimensional arrays, or multi-line records, these were all implemented
correctly. Updating the test suite to also check for backwards compatibility
with
GAWK
- not an issue. Lexical scoping
and tricky closure environment behaviour - handled that just fine.
What now?
As the cool kids say, I have to
update my priors.
The frontier of what the
LLMs can do has moved since the last time I tried to vibe-code something. I
didn’t expect to have a working interpreter
the same day
I dreamt of a new
programming language. It now seems possible.
The downside of vibe coding the whole interpreter is that I have zero knowledge
of the code. I only interacted with the agent by telling it to implement a
thing and write tests for it, and I only
really
reviewed the tests. I reckon
this would be an issue in the future when I want to manually make some change
in the actual code, because I have no familiarity with it.
This also opened new questions for me wrt. my other projects where I’ve
previously run out of steam, eg. trying to implement a
Hindley-Milner type
system
for my
dream forever-WIP programming language
Cara
. It seems
I can now just ask the LLM to do it, and it will? But then, I don’t want to fall
into the trap where I am no longer able to work on the codebase myself. I want
to be familiar with and able to tinker on the code. I’d need to spend my time
reviewing and reading code instead of writing everything myself. Perhaps that’s
OK.
Performance of FAWK might be an issue as well, though right now it’s a non-goal,
given my intended use case is throwaway scripts for Advent of Code, nothing
user-facing. And who knows, based on what I’ve seen, maybe I can instruct it to
rewrite it in Rust
and have a decent chance of success?
For now, I’ll go dogfood my shiny new vibe-coded black box of a programming
language on the Advent of Code problem (and as many of the 2025 puzzles as I
can), and see what rough edges I can find. I expect them to be equal parts “not
implemented yet” and “unexpected interactions of new PL features with the old
ones”.
If you’re willing to jump through some Python project dependency hoops, you can
try to use FAWK too at your own risk, at
Janiczek/fawk
on
GitHub
.
The OEMs’ disabling of codec hardware also comes as associated costs for the international video compression standard are set to increase in January, as licensing administrator Access Advance
announced
in July. Per
a breakdown
from patent pool administration
VIA Licensing Alliance
, royalty rates for HEVC for volumes of 100,001 units and above are increasing from $0.20 each to $0.24 each in the United States. To put that into perspective, in Q3 2025, HP sold 15,002,000 laptops and desktops, and Dell sold 10,166,000 laptops and desktops,
per Gartner.
Last year, NAS company Synology
announced
that it was ending support for HEVC, H.264/AVC, and VC-1 transcoding on its DiskStation Manager and BeeStation OS platforms, saying that “support for video codecs is widespread on end devices, such as smartphones, tablets, computers, and smart TVs.”
“This update reduces unnecessary resource usage on the server and significantly improves media processing efficiency. The optimization is particularly effective in high-user environments compared to traditional server-side processing,” the announcement said.
Despite the growing costs and complications with HEVC licenses and workarounds, breaking features that have been widely available for years will likely lead to confusion and frustration.
“This is pretty ridiculous, given these systems are $800+ a machine, are part of a ‘Pro’ line (jabs at branding names are warranted – HEVC is used professionally), and more applications these days outside of Netflix and streaming TV are getting around to adopting HEVC,” a Redditor wrote.
How U.S. Universities Used Counterterror Fusion Centers to Surveil Student Protests for Palestine
Intercept
theintercept.com
2025-11-21 10:00:00
Internal university communications reveal how a network established for post-9/11 intelligence sharing was turned on students protesting genocide.
From a statewide
counterterrorism surveillance and intelligence-sharing hub in Ohio, a warning went out to administrators at the Ohio State University: “Currently, we are aware of a demonstration that is planned to take place at Ohio State University this evening (4/25/2024) at 1700 hours. Please see the attached flyers. It is possible that similar events will occur on campuses across Ohio in the coming days.”
Founded in the wake of 9/11 to facilitate information sharing between federal, state, and local law enforcement agencies, fusion centers like Ohio’s Statewide Terrorism Analysis and Crime Center, or STACC, have become yet
another way
for
law enforcement agencies
to
surveil legally protected
First Amendment activities. The 80 fusion centers across the U.S. work with the military, private sector, and other stakeholders to collect vast amounts of information on American citizens in a stated effort to prevent future terror attacks.
In Ohio, it seemed that the counterterrorism surveillance hub was also keeping close tabs on campus events.
It wasn’t just at Ohio State: An investigative series by The Intercept has found that fusion centers were actively involved in monitoring pro-Palestine demonstrations on at least five campuses across the country, as shown in more than 20,000 pages of documents obtained via public records requests exposing U.S. universities’ playbooks for cracking down on pro-Palestine student activism.
As the documents make clear, not only did universities view the peaceful, student-led demonstrations as a security issue — warranting the
outside police
and technological surveillance interventions detailed in the rest of this series — but the network of law enforcement bodies responsible for counterterror surveillance operations framed the demonstrations in the same way.
After the Ohio fusion center’s tip-off to the upcoming demonstration, officials in the Ohio State University Police Department worked quickly to assemble an operations plan and shut down the demonstration. “The preferred course of action for disorderly conduct and criminal trespass and other building violations will be arrest and removal from the event space,” wrote then-campus chief of police Kimberly Spears-McNatt in an email to her officers just two hours after the initial warning from Ohio’s primary fusion center. OSUPD and the Ohio State Highway Patrol would go on to clear the encampment that same night, arresting 36 demonstrators.
Fusion centers were designed to facilitate the sharing of already collected intelligence between local, state, and federal agencies, but they have been used to target communities of color and to ever-widen the gray area of allowable surveillance. The American Civil Liberties Union, for example, has long advocated against the country’s fusion center network, on the grounds that they conducted overreaching surveillance of activists from the Black Lives Matter movement to
environmental activism
in Oregon.
“Ohio State has an unwavering commitment to freedom of speech and expression. We do not discuss our security protocols in detail,” a spokesperson for Ohio State said in a statement to The Intercept. Officials at STACC didn’t respond to multiple requests for comment.
The proliferation of fusion centers has contributed to a scope creep that allows broader and more intricate mass surveillance, said Rory Mir, associate director of community organizing at the Electronic Frontier Foundation. “Between AI assessments of online speech, the swirl of reckless data sharing from fusion centers, and often opaque campus policies, it’s a recipe for disaster,” Mir said.
While the Trump
administration has publicized its weaponization of federal law enforcement agencies against pro-Palestine protesters — with high-profile attacks including
attempts
to illegally
deport student activists
— the documents obtained by The Intercept display its precedent under the Biden administration, when surveillance and repression were coordinated behind the scenes.
“All of that was happening under Biden,” said Dylan Saba, a staff attorney at Palestine Legal, “and what we’ve seen with the Trump administration’s implementation of Project 2025 and Project Esther is really just an acceleration of all of these tools of repression that were in place from before.”
Not only was the groundwork for the Trump administration’s descent into increasingly repressive and illegal tactics laid under Biden, but the investigation revealed that the framework for cracking down on student free speech was also in place before the pro-Palestine encampments.
Among other documentation, The Intercept
obtained a copy of Clemson University Police Department’s 2023 Risk Analysis Report, which states: “CUPD participates in regular information and intelligence sharing and assessment with both federal and state partners and receives briefings and updates throughout the year and for specific events/incidents form [sic] the South Carolina Information and Intelligence Center (SCIIC)” — another fusion center.
The normalization of intelligence sharing between campus police departments and federal law enforcement agencies is widespread across U.S. universities, and as pro-Palestine demonstrations escalated across the country in 2024, U.S. universities would lean on their relationships with outside agencies and on intelligence sharing arrangements with not only other universities, but also the state and federal surveillance apparatus.
OSU was not the only university where fusion centers facilitated briefings, intelligence sharing, and, in some cases, directly involved federal law enforcement agencies. At California State Polytechnic University, Humboldt, where the state tapped
funds set aside for natural disasters and major emergencies
to pay outside law enforcement officers to clear an occupied building, the university president noted that the partnership would allow them “to gather support from the local Fusion Center to assist with investigative measures.”
Cal Poly Humboldt had already made students’ devices a target for their surveillance, as then-President Tom Jackson confirmed in an email. The university’s IT department had “tracked the IP and account user information for all individuals connecting to WiFi in Siemens Hall,” a university building that students occupied for eight days, Jackson wrote. With the help of the FBI – and warrants for the search and seizure of devices – the university could go a step further in punishing the involved students.
The university’s IT department had “tracked the IP and account user information for all individuals connecting to WiFi in Siemens Hall.”
In one email exchange, Kyle Winn, a special agent at the FBI’s San Francisco Division, wrote to a sergeant at the university’s police department: “Per our conversation, attached are several different warrants sworn out containing language pertaining to electronic devices. Please utilize them as needed. See you guys next week.”
Cal Poly Humboldt said in a statement to The Intercept that it “remains firmly committed to upholding the rights guaranteed under the First Amendment, ensuring that all members of our community can speak, assemble, and express their views.”
“The pro-Palestine movement really does face a crisis of repression,” said Tariq Kenney-Shawa, Al-Shabaka’s U.S. policy fellow. “We are up against repressive forces that have always been there, but have never been this advanced. So it’s really important that we don’t underestimate them — the repressive forces that are arrayed against us.”
In Mir’s view, university administrators should have been wary about unleashing federal surveillance at their schools due to fusion centers’ reputation for infringing on civil rights.
“Fusion centers have also come under fire for sharing dubious intelligence and escalating local police responses to BLM,” Mir said, referring to the Black Lives Matter protests. “For universities to knowingly coordinate and feed more information into these systems to target students puts them in harm’s way and is a threat to their civil rights.”
Research support provided by the nonprofit newsroom Type Investigations.
Lately, there has been a lot of excitement around Durable Execution (DE) engines.
The basic idea of DE is to take (potentially long-running) multi-step workflows,
such as processing a purchase order or a user sign-up,
and make their individual steps persistent.
If a flow gets interrupted while running, for instance due to a machine failure,
the DE engine can resume it from the last successfully executed step and drive it to completion.
This is a very interesting value proposition:
the progress of critical business processes is captured reliably, ensuring they’ll complete eventually.
Importantly, any steps that have already completed successfully won’t be repeated when retrying a failed flow.
This helps to ensure that flows are executed correctly
(for instance preventing inventory from getting assigned twice to the same purchase order),
efficiently (e.g. avoiding repeated remote API calls),
and deterministically.
One particular category of software which benefits from this are agentic systems, or more generally speaking, any sort of system which interacts with LLMs.
LLM calls are slow and costly, and their results are non-deterministic.
So it is desirable to avoid repeating any previous LLM calls when continuing an agentic flow after a failure.
Now, at a high level, "durable execution" is nothing new.
A scheduler running a batch job for moving purchase orders through their lifecycle?
You could consider this a form of durable execution.
Sending a Kafka message from one microservice to another and reacting to the response message in a callback?
Also durable execution, if you squint a little.
A workflow engine running a BPMN job? Implementing durable execution, before the term actually got popularized.
All these approaches model multi-step business transactions—making the logical flow of the overall transaction more or less explicit—in a persistent way,
ensuring that transactions progress safely and reliably and eventually complete.
However, modern DE typically refers to one particular approach for achieving this goal:
Workflows defined in code, using general purpose programming languages such as Python, TypeScript, or Java.
That way, developers don’t need to pick up a new language for defining flows,
as was the case with earlier process automation platforms.
They can use their familiar tooling for editing flows, versioning them, etc.
A DE engine transparently tracks program progress, persists execution state in the form of durable checkpoints, and enables resumption after failures.
Naturally, this piqued my interest:
what would it take to implement a basic DE engine in Java?
Can we achieve something useful with less than, let’s say, 1,000 lines of code?
The idea being not to build a production-ready engine,
but to get a better understanding of the problem space and potential solutions for it.
You can find the result of this exploration, called Persistasaurus, in
this GitHub repository
.
Coincidentally, this project also serves as a very nice example of how modern Java versions can significantly simplify the life of developers.
Hello Persistasaurus!
Let’s take a look at an example of what you can do with Persistasaurus and then dive into some of the key implementation details.
As per the idea of DE, flows are implemented as regular Java code.
The entry point of a flow is a method marked with the
@Flow
annotation.
Individual flow steps are methods annotated with
@Step
:
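A minimal sketch of such a flow, matching the "Hello, World" output shown further down (the class and method names, HelloWorldFlow and sayHello, are illustrative, and annotation imports are omitted):

public class HelloWorldFlow {

    @Flow
    public void helloWorld() {
        int sum = 0;
        for (int i = 0; i < 5; i++) {
            // each invocation of the @Step method is tracked in the execution log
            sum += sayHello(i);
        }
        System.out.println("Sum: " + sum);
    }

    @Step
    protected int sayHello(int i) {
        System.out.println("Hello, World (" + i + ")");
        return i;
    }
}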
Steps are the unit of persistence—their outcomes are recorded, and when resuming a flow after a failure,
it will continue from the last successfully run step method.
Now, which exact parts of a flow warrant being persisted as a step is on the developer to decide.
You don’t want to define steps too granularly, so as to keep the overhead of logging low.
In general, flow sections which are costly or time-consuming to run, or whose result cannot easily be reproduced,
are great candidates for being moved into a step method.
A flow is executed by obtaining a
FlowInstance
object and then calling the flow’s main method:
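Hypothetically, reusing the HelloWorldFlow sketch from above, the invocation could look roughly like this (FlowInstance is named in the text; the factory method and run() call shown here are assumptions, not the actual API):

// purely illustrative -- the real Persistasaurus API may differ
UUID flowId = UUID.randomUUID();
FlowInstance<HelloWorldFlow> flow = FlowInstance.get(HelloWorldFlow.class, flowId);
flow.run(HelloWorldFlow::helloWorld);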
Each flow run is identified by a unique id,
which allows it to be re-executed after a failure, or resumed when waiting for an external signal ("human in the loop", more on that below).
If the Hello World flow runs to completion, the following will be logged to stdout:
Hello, World (0)
Hello, World (1)
Hello, World (2)
Hello, World (3)
Hello, World (4)
Sum: 10
Now let’s assume something goes wrong while executing the third step:
Hello, World (0)
Hello, World (1)
Hello, World (2)
RuntimeException("Uh oh")
When re-running the flow, using the same UUID as before, it will retry that failed step and resume from there. The first two steps which were already run successfully are not re-executed.
Instead, they will be replayed from a persistent execution log, which is based on
SQLite
, an embedded SQL database:
Hello, World (3)
Hello, World (4)
Sum: 10
In the following, let’s take a closer look at some of the implementation choices in Persistasaurus.
Capturing Execution State
At the core of every DE engine there’s some form of persistent durable execution log.
You can think of this a bit like the write-ahead log of a database.
It captures the intent to execute a given flow step, which makes it possible to retry that step should it fail, using the same parameter values.
Once successfully executed, a step’s result will also be recorded in the log,
so that it can be replayed from there if needed, without having to actually re-execute the step itself.
Broadly speaking, DE logs come in two flavours. One is an external state store, accessed via some sort of SDK.
Example frameworks taking this approach include
Temporal
,
Restate
,
Resonate
, and
Inngest
.
The other option is to persist DE state in the local database of a given application or (micro)service.
One solution in this category is
DBOS
, which implements DE on top of Postgres.
To keep things simple, I went with the local database model for Persistasaurus, using SQLite for storing the execution log.
But as we’ll see later on, depending on your specific use case, SQLite actually might also be a great choice for a production scenario,
for instance when building a self-contained agentic system.
The structure of the execution log table in SQLite is straightforward.
It contains one entry for each durable execution step:
The sequence number of the step within the flow, in the order of execution
The timestamp of first running this step
The name of the class defining the step method
The name of the step method (currently ignoring overloaded methods for this PoC)
For delayed steps, the delay in milliseconds
The current status of the step
A counter for keeping track of how many times the step has been tried
The serialized form of the step’s input parameters, if any
The serialized form of the step’s result, if any
This log table stores all information needed to capture execution intent and persist results.
More details on the notion of delays and signals follow further down.
When running a flow, the engine needs to know when a given step gets executed so it can be logged.
One common way for doing so is via explicit API calls into the engine, e.g. like so with DBOS Transact:
This works, but tightly couples workflows to the DE engine’s API.
For Persistasaurus I aimed to avoid this dependency as much as possible.
Instead, the idea is to transparently intercept the invocations of all step methods and track them in the execution log,
allowing for a very concise flow expression, without any API dependencies:
@Flow
public void workflow() {
    stepOne();
    stepTwo();
}
In order for the DE engine to know when a flow or step method gets invoked,
the
proxy pattern
is being used:
a proxy wraps the actual flow object and handles each of its method invocations,
updating the state in the execution log before and after passing the call on to the flow itself.
Thanks to Java’s dynamic nature, creating such a proxy is relatively easy, requiring just a little bit of bytecode generation.
Unsurprisingly, I’m using the
ByteBuddy
library for this job:
As an aside, Claude Code does an excellent job in creating code using the ByteBuddy API, which is not always self-explanatory.
Now, whenever a method is invoked on the flow proxy,
the call is delegated to the
Interceptor
class,
which will record the step in the execution log before invoking the actual flow method.
I am going to spare you the complete details of the method interceptor implementation
(you can find it
here
on GitHub),
but the high-level logic looks like so:
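Roughly, and with placeholder names (executionLog, StepEntry, and StepStatus are stand-ins, not the actual Persistasaurus types), the interceptor boils down to something like this:

public Object interceptStep(UUID flowId, int sequence, Method method, Object[] args,
        Callable<Object> proceed) throws Exception {

    // executionLog is the SQLite-backed log described below (placeholder field)
    StepEntry entry = executionLog.getEntry(flowId, sequence);

    // the step already completed in a previous run: replay its recorded result
    if (entry != null && entry.status() == StepStatus.COMPLETE) {
        return entry.result();
    }

    // otherwise, persist the intent to run this step, including its parameters ...
    executionLog.recordIntent(flowId, sequence, method.getName(), args);

    // ... invoke the actual step method ...
    Object result = proceed.call();

    // ... and persist its result, so future re-runs can replay it from the log
    executionLog.recordCompletion(flowId, sequence, result);
    return result;
}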
Replaying completed steps from the log is essential for ensuring deterministic execution.
Each step typically runs exactly once,
capturing non-deterministic values such as the current time or random numbers while doing so.
There’s an important failure mode, though:
if the system crashes
after
a step has been executed but
before
the result can be recorded in the log,
that step would be repeated when rerunning the flow.
Odds for this to happen are pretty small, but whether it is acceptable or not depends on the particular use case.
When executing steps with side-effects, such as remote API calls,
it may be a good idea to add idempotency keys to the requests,
which lets the invoked services detect and ignore any potential duplicate calls.
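For instance, with the JDK's built-in HTTP client (java.net.http), such a key could be derived from the flow id and the step's sequence number, so that a retried execution of the step resends the exact same key (the header name, endpoint, and variables here are examples; which header a service honours depends on its API):

// retried executions of the same step produce the same key, so the remote
// service can recognize and ignore a duplicate call
String idempotencyKey = flowId + "-" + sequence;

HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.example.com/payments"))
        .header("Idempotency-Key", idempotencyKey)
        .POST(HttpRequest.BodyPublishers.ofString(payloadJson))
        .build();

HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());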
The actual execution log implementation isn’t that interesting, you can find its source code
here
.
All it does is persist step invocations and their status in the
execution_log
SQLite table shown above.
Delayed Executions
At this point, we have a basic Durable Execution engine which can run simple flows as the one above.
Next, I explored implementing delayed execution steps.
As an example, consider a user onboarding flow, where you might want to send out an email with useful resources a few days after a user has signed up.
Using the annotation-based programming model of Persistasaurus, this can be expressed like so:
public class SignupFlow {

    @Flow
    public void signUp(String userName, String email) {
        long id = createUserRecord(userName, email);
        sendUsefulResources(id);
    }

    @Step
    protected long createUserRecord(String userName, String email) {
        // persist the user ...
        return id;
    }

    @Step(delay = 3, timeUnit = DAYS)
    protected void sendUsefulResources(long userId) {
        // send the email ...
    }
}
Naturally, we don’t want to block the initiating thread when delaying a step—for instance, a web application’s request handler.
Instead, we need a way to temporarily yield execution of the flow, return control to the caller,
and then later on, when the configured delay has passed, resume the flow.
Unlike other programming languages, Java doesn’t support
continuations
via its public API.
So how could we yield control then?
One option would be to define a specific exception type, let’s say
FlowYieldException
, and raise it from within the method interceptor when encountering a delayed method.
The call stack would be unwound until some framework-provided exception handler catches that exception and returns control to the code triggering the flow.
For this to work, it is essential that no user-provided flow or step code catches that exception type.
Alternatively, one could transform the bytecode of the step method (and all the methods below it in the call stack),
so that it can return control at given suspension points and later on resume from there,
similar to how
Kotlin’s coroutines
are implemented under the hood ("continuation passing style").
Luckily, Java 21 offers a much simpler solution.
This version added support for virtual threads (
JEP 444
),
and while you shouldn’t block OS level threads, blocking virtual threads is totally fine.
Virtual threads are lightweight user-mode threads managed by the JVM,
and an application can have hundreds of thousands, or even millions of them at once.
Thus I decided to implement delayed executions in Persistasaurus through virtual threads,
sleeping for the given period of time when encountering a delayed method.
To run a flow with a delayed step, trigger it via
runAsync()
,
which immediately returns control to the caller:
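A hypothetical invocation, reusing the SignupFlow example (only runAsync() itself is named here; the surrounding API shape is an assumption):

UUID flowId = UUID.randomUUID();
FlowInstance<SignupFlow> flow = FlowInstance.get(SignupFlow.class, flowId);

// returns right away; the flow, including the delayed sendUsefulResources() step,
// continues on a virtual thread in the background
flow.runAsync(f -> f.signUp("Bob", "bob@example.com"));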
When a virtual thread running a flow method is put to sleep,
it will be unmounted from the underlying OS level carrier thread,
freeing its resources.
Later on, once the sleep time has passed, the virtual thread will be remounted onto a carrier thread and continue the flow.
When rerunning non-finished flows with a delayed execution step,
Persistasaurus will only sleep for the remainder of the configured delay,
which might be zero if enough time has passed since the original run of the flow.
So in fact, you could think of virtual threads as a form of continuations;
and indeed, if you look closely at the stacktrace of a virtual thread, you’ll see that the frame at the very bottom is the
enter()
method of a JDK-internal class
Continuation
.
Interestingly, this class was even part of the public Java API in early preview versions of virtual threads,
but it got made private later on.
Human Interaction
As the last step of my exploration I was curious how flows with "human in the loop"-steps could be implemented:
steps where externally provided input or data is required in order for a flow to continue.
Sticking to the sign-up flow example,
this could be an email by the user, so as to confirm their identity (double opt-in).
As much as possible, I tried to stick to the idea of using plain method calls for expressing the flow logic,
but I couldn’t get around making flows invoke a Persistasaurus-specific method,
await()
, for signalling that a step requires external input:
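Sticking with the SignupFlow example, and guessing at the exact shape of await() (only await() and the any() placeholder are given here), the flow might read roughly like this:

@Flow
public void signUp(String userName, String email) {
    long id = createUserRecord(userName, email);

    // suspend here until confirmEmail() is triggered externally via resume();
    // the parameter passed within await() is ignored, hence the any() placeholder
    await(() -> confirmEmail(any()));

    sendUsefulResources(id);
}

@Step
protected void confirmEmail(String token) {
    // mark the account as confirmed ...
}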
When the method interceptor encounters a step method invoked from within an
await()
block,
it doesn’t go on to actually execute right away.
Instead, the flow will await continuation until the step method gets triggered.
This is why it doesn’t matter which parameter values are passed to that step within the flow definition.
You could pass
null
, or, as a convention, the
any()
placeholder method.
In order to provide the input to a waiting step and continue the flow,
call the step method via
resume()
, for instance like so, in a request handler method of a Spring Boot web application:
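A sketch of such a handler, using standard Spring Web annotations (the resume() call shape is an assumption):

@RestController
public class ConfirmationController {

    @PostMapping("/signups/{flowId}/confirm")
    public void confirm(@PathVariable UUID flowId, @RequestParam String token) {
        FlowInstance<SignupFlow> flow = FlowInstance.get(SignupFlow.class, flowId);

        // triggers the suspended confirmEmail() step with the externally provided input
        flow.resume(f -> f.confirmEmail(token));
    }
}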
The flow will then continue from that step, using the given parameter value(s) as its input.
For this to work, we need a way for the engine to know whether a given step method gets invoked from within
resume()
and thus actually should be executed,
or, whether it gets invoked from within
await()
and hence should be suspended.
Seasoned framework developers might immediately think of using thread-local variables for this purpose,
but as of Java 25, this can be solved much more elegantly and safely using so-called
scoped values
,
as defined in
JEP 506
.
To quote that JEP, scoped values
enable a method to share immutable data both with its callees within a thread, and with child threads. Scoped values are easier to reason about than thread-local variables. They also have lower space and time costs
Scoped values are typically defined as a static field like so:
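Using the standard API from JEP 506, the definition and the binding look roughly like this (the value's purpose and the invoked method are illustrative):

// a scoped value held in a static final field, e.g. flagging that the current
// call stack originates from resume() rather than from a regular flow execution
static final ScopedValue<Boolean> IN_RESUME = ScopedValue.newInstance();

// bind the value only for the duration of the given block of code
ScopedValue.where(IN_RESUME, Boolean.TRUE)
        .run(() -> invokeStepMethod());   // invokeStepMethod() is a placeholder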
Unlike thread-local variables,
this ensures the scoped value is cleared when leaving the scope.
Then, further down in the call stack, within the method handler, the scoped value can be consumed:
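Again roughly, with IN_RESUME referring to the field sketched above:

// inside the method interceptor: decide whether this step invocation comes from resume()
if (IN_RESUME.isBound() && IN_RESUME.get()) {
    // called via resume(): actually execute the step, using the externally provided input
} else {
    // called from within await(): suspend the flow instead of executing the step
}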
In order to yield control when waiting for external input and to resume when that input has been provided,
a
ReentrantLock
with a wait condition is used.
Similar to the
sleep()
call used for fixed delay steps above,
a virtual thread will be unmounted from its carrier when waiting for a condition.
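A minimal sketch of that waiting pattern (not the actual Persistasaurus code):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class ExternalInputGate<T> {

    private final ReentrantLock lock = new ReentrantLock();
    private final Condition inputProvided = lock.newCondition();
    private T input;

    // called on the flow's virtual thread; blocks (and thereby unmounts the
    // virtual thread from its carrier) until provide() hands over a value
    T awaitInput() throws InterruptedException {
        lock.lock();
        try {
            while (input == null) {
                inputProvided.await();
            }
            return input;
        } finally {
            lock.unlock();
        }
    }

    // called from resume(): store the input and wake up the waiting flow
    void provide(T value) {
        lock.lock();
        try {
            input = value;
            inputProvided.signalAll();
        } finally {
            lock.unlock();
        }
    }
}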
When accidentally trying to access a scoped value which isn’t actually set, an exception will be raised,
addressing another issue you’d commonly encounter with thread-local variables.
This might not seem like a huge deal,
but it’s great to see how the Java platform continues to evolve and improves things like this.
Managing State
Let’s dive a bit deeper into managing state in a durable execution engine.
For the example DE implementation developed for this blog post,
I went with SQLite primarily for the sake of simplicity.
Now, would you use SQLite, as an embedded database, also in an actual production-ready implementation?
The answer is going to depend on your specific use case.
If, for instance, you are building a self-contained AI agent and you want to use DE for making sure LLM invocations are not repeated when the agent crashes,
an embedded database such as SQLite would make for a great store for persisting execution state.
Each agent could have its own database, thus avoiding any concurrent writes, which can pose a bottleneck due to SQLite’s single-writer design.
On the other hand, if you’re building a system with a high number of parallel requests by different users,
such as a typical microservice,
a client/server database such as Postgres or MySQL would be a better fit.
If that system already maintains state in a database (as most services do),
then re-using that same database to store execution state provides a critical advantage:
Updates to the application’s data and its execution state can happen atomically, within a single database transaction.
This solution is implemented by the DBOS engine, on top of Postgres, for instance.
Another category of DE engines, which includes systems such as Temporal and Restate,
utilizes a separate server component with its own dedicated store for persisting execution state.
This approach can be very useful to implement flows spanning across a set of multiple services
(sometimes referred to as
Sagas
).
By keeping track of the overall execution state in one central place,
they essentially avoid the need for cross-system transactions.
Another advantage of this approach is that the actual application doesn’t have to keep running while waiting for delayed execution steps,
making it a great fit for systems implemented in the form of scale-to-zero serverless designs (Function-as-a-Service, Knative, etc.).
The downside of this centralized design is the potentially closer coupling of the participating services,
as they all need to converge on a specific DE engine, on one specific version of that engine, etc.
Also, HA and fault tolerance must be a priority, to avoid creating a single point of failure between all the orchestrated services.
Wrapping Up
At its heart, the idea of Durable Execution is not a complex one:
Potentially long-running workflows are organized into individual steps whose execution status and result is persisted in a durable form.
That way, flows become resumable after failures, while skipping any steps already executed successfully.
You could think of it as a persistent implementation of the
memoization pattern
, or a persistent form of continuations.
As demonstrated in this post and the accompanying
source code
,
it doesn’t take too much work to create a functioning PoC for a DE engine.
Of course, it’s still quite a way to go from there to a system you’d actually want to put into production.
At the persistence level, you’d have to address aspects such as (horizontal) scalability, fault tolerance and HA.
The engine should support things such as retrying failing steps with exponential back-off,
parallel execution of workflow steps,
throttling flow executions, compensation steps for implementing Sagas, and more.
You’d also want to have a UI for managing flows, analyzing, restarting, and debugging them.
Finally, you should also have a strategy for evolving flow definitions and the state they persist,
in particular when dealing with long-running flows which may take days, weeks, or months to complete.
TL;DR:
Under pressure from the EU’s Digital Markets Act (DMA), Apple is being forced to ditch its proprietary peer-to-peer Wi-Fi protocol – Apple Wireless Direct Link (AWDL) – in favor of the industry-standard Wi-Fi Aware, also known as Neighbor Awareness Networking (NAN). A quietly published EU interoperability roadmap mandates Apple support Wi-Fi Aware 4.0 in iOS 19 and v5.0,
1
thereafter, essentially forcing AWDL into retirement. This post investigates how we got here (from Wi-Fi Direct to AWDL to Wi-Fi Aware), what makes Wi-Fi Aware technically superior, and why this shift unlocks true cross-platform peer-to-peer connectivity for developers.
EU Forces Apple’s Hand on Peer-to-Peer Wi-Fi
In a little-publicized mandate, the European Commission explicitly requires Apple to implement the Wi-Fi Alliance’s Wi-Fi Aware standard as part of DMA interoperability measures. The official DMA roadmap states:
“Apple shall implement the measures for Wi-Fi Aware 4.0 in the next major iOS release, i.e. iOS 19, at the latest, and for Wi-Fi Aware 5.0 in the next iOS release at the latest nine months following the introduction of the Wi-Fi Aware 5.0 specification”
In plain terms, by the time iOS 19 ships, iPhones must support Wi-Fi Aware v4.0, and Apple must roll out v5.0 support soon after the Wi-Fi Alliance finalizes that spec.
Crucially, this decision was not a voluntary announcement by Apple – it was imposed by regulators. Apple has kept quiet about these changes publicly, likely because they involve opening up formerly closed-off tech. The DMA enforcement timeline was highlighted in an EU Q&A site and legal annex, not an Apple press release.
7
The European Commission’s language makes it clear this is about enabling third-party devices and apps to use high-bandwidth peer-to-peer (P2P) Wi-Fi features equal to Apple’s own, rather than Apple benevolently adopting a new standard. In fact, the EU order compels Apple to deprecate AWDL and ensure third-party solutions using Wi-Fi Aware are just as effective as Apple’s internal protocols. In short, the EU gave Apple no choice: embrace Wi-Fi Aware or face penalties.
What does this mean? Essentially, Apple’s hidden sauce for fast device-to-device communication – AWDL – is being forced into retirement. And with that, for the first time, iPhones and Androids will speak a common language for local wireless networking. Let’s unpack how we got here, and why it’s a big deal for developers.
From Wi-Fi Direct to AWDL to Wi-Fi Aware: A Brief History
To understand the significance, we need a quick history of ad-hoc Wi-Fi protocols:
Wi-Fi Ad-hoc (IBSS mode):
Early 802.11 allowed devices to connect directly in a peer-to-peer “ad-hoc” network (IBSS), but it had limitations (no always-on discovery, no power-saving coordination, weak security). It never gained widespread use.
Wi-Fi Direct:
The Wi-Fi Alliance’s first big attempt at standard P2P. Wi-Fi Direct (circa 2010) allows devices to form a direct link without an AP, designating one device as a group owner (soft AP) for security and IP allocation. It improved on ad-hoc mode (supporting WPA2, dynamic group formation), but had drawbacks – e.g. limited service discovery capabilities and difficulty staying connected to infrastructure Wi-Fi concurrently.
Apple Wireless Direct Link (AWDL):
Around 2014, Apple developed AWDL as a proprietary, high-performance P2P Wi-Fi protocol for its ecosystem. According to Apple’s patent on AWDL (US20180083858A1) and reverse-engineering by researchers, AWDL was designed to address Wi-Fi Direct’s shortcomings and to succeed the ad-hoc IBSS mode.
8
Apple deployed AWDL in over a billion devices (every modern iPhone, iPad, Mac) to power AirDrop, AirPlay peer connections, GameKit, Apple Watch unlock, and more.
8,9
Notably, AWDL can coexist with regular Wi-Fi by rapidly hopping channels – an iPhone can be on an AP and seamlessly switch to AWDL channel windows to talk to a peer.
9
This gave AWDL low latency and high throughput without dropping your internet connection.
Neighbor Awareness Networking (NAN / Wi-Fi Aware):
As it turns out, Apple didn’t keep all of AWDL to itself – it contributed to the Wi-Fi Alliance, which adopted AWDL’s approach as the basis for the NAN standard (branded “Wi-Fi Aware”) around 2015.
8
Wi-Fi Aware is essentially the industry-standard cousin of AWDL, enabling devices to discover each other and communicate directly with Wi-Fi speeds, in a power-efficient way, regardless of vendor. Android added platform support for Wi-Fi Aware in Oreo (8.0) and later,
10
but Apple until now stuck with its in-house AWDL stack which can be used by developers but isn't an open standard.
In summary, AWDL was Apple’s competitive edge – a proprietary P2P stack that outperformed legacy Wi-Fi Direct and only worked on Apple devices. If an app needed cross-platform local connectivity, it couldn’t use AWDL (Apple provides no raw AWDL API). Developers resorted to Wi-Fi Direct, or Wi-Fi Aware on Android vs. Apple’s AWDL on iOS, with no interoperability. This fragmentation is exactly what the EU’s DMA targeted.
The DMA order effectively forces Apple to drop AWDL and align with Wi-Fi Aware
.
The Commission explicitly says Apple must
“implement Wi-Fi Aware in iOS devices in accordance with the Wi-Fi Aware specification”
and
“continue to…improve the Wi-Fi Aware standard… Apple shall not prevent AWDL from becoming part of the Wi-Fi Aware standard”
,
even urging Apple to allocate memory for concurrent P2P on older devices in a non-discriminatory way until AWDL is fully deprecated.
The writing is on the wall: AWDL as a private protocol is done for.
AWDL is worth a closer look, because it shows what Apple achieved and what will now be opened up via Wi-Fi Aware. How does AWDL work? In short, it creates a continuously syncing ad-hoc network
on the fly
among nearby Apple devices:
Availability Windows & Channel Hopping:
Each AWDL-enabled device periodically advertises Availability Windows (AWs) – tiny time slices when it’s available on a specific Wi-Fi channel for peer-to-peer communication.
8
An elected master node (chosen via a priority scheme) coordinates these windows across devices. Outside of these AWs, devices can rejoin normal Wi-Fi (e.g. your home router’s channel) or sleep their radio to save power.
8
This scheduling is what allows, let's say, your Mac to be on Wi-Fi for internet most of the time, but briefly switch to channel 6 to AirDrop a file from your iPhone, then switch back – all without manual intervention.
Integration with BLE:
AWDL doesn’t work in isolation – it integrates with Bluetooth Low Energy for discovery. For example, AirDrop uses BLE advertisements to initially discover nearby devices (showing them in the UI), then quickly forms an AWDL connection for the actual high-speed file transfer. This combo gives the best of both: BLE’s low-power device discovery and AWDL’s high-throughput data channel.
11,12
Performance:
AWDL leverages the full Wi-Fi PHY, so it can hit hundreds of Mbps throughput and sub-second latencies that BLE or classic Bluetooth can’t touch. It also supports robust security (authenticated pairing, encryption) as used in AirDrop/AirPlay. One clever feature: because AWDL devices coordinate their availability, one device can even sustain multiple P2P links concurrently (e.g. an iPhone streaming to a HomePod via AWDL while also AirDropping to a Mac) – something spelled out in the EU requirements.
Closed Nature:
Despite its capabilities, AWDL has been closed off to third-party developers and other OSes. Apple’s APIs like MultipeerConnectivity framework ride on AWDL under the hood for Apple-to-Apple connections, but there was no way for an Android device or a Windows laptop to speak AWDL. It was an Apple-only club. Researchers at TU Darmstadt’s Secure Mobile Networking Lab had to reverse-engineer AWDL (publishing an open Linux implementation called
OWL
) to document its inner workings.
13
They demonstrated that AWDL indeed is an IEEE 802.11-based ad-hoc protocol with Apple-specific extensions, tightly integrated with Apple’s ecosystem.
14
Bottom line
:
AWDL gave Apple a technical edge but at the cost of interoperability – a classic “walled garden” approach.
It’s this walled garden that the EU is breaking down. The mandate that
“Apple shall make Wi-Fi Aware available to third parties”
means Apple must expose new iOS APIs for P2P connectivity that are standard-based. And since Android (and even some IoT devices) already support Wi-Fi Aware, we’re headed for a world where an iPhone and an Android phone can find and connect to each other directly via Wi-Fi, no access point, no cloud, no hacks – a scenario that AWDL alone never allowed.
Wi-Fi Aware 4.0: The New Cross-Platform Standard
So what exactly is Wi-Fi Aware (a.k.a. NAN), and why is version 4.0 a game-changer? At a high level, Wi-Fi Aware offers
the same kind of capabilities as AWDL
, but as an open standard for any vendor. It lets devices discover each other and exchange data directly via Wi-Fi, without needing a router or cell service. Think of it as Wi-Fi’s answer to Bluetooth discovery but with Wi-Fi speed and range. Some key technical features of Wi-Fi Aware (especially in the latest v4.0 spec) include:
Continuous, Efficient Discovery:
Devices form a Wi-Fi Aware group and synchronize wake-up times to transmit Discovery Beacons. Like AWDL’s AWs, Wi-Fi Aware defines Discovery Windows where devices are active to find peers, then can sleep outside those windows to save power. This allows always-on background discovery with minimal battery impact.
15
The latest spec enhances this with an “Instant Communication” mode – a device can temporarily accelerate discovery (e.g. switch to a channel and beacon rapidly) when triggered by an external event like a BLE advertisement or NFC tap, to achieve very fast discovery and connection setup.
16
In practice, that means an app can use BLE to wake up Wi-Fi (advertising a service via BLE then negotiating a NAN link), combining the energy efficiency of BLE with the speed of Wi-Fi – just as Apple’s AirDrop has done privately. Wi-Fi Aware v4.0 explicitly added standardized BLE co-operation:
“Latest enhancements to Wi-Fi Aware offer discovery by Bluetooth LE, which triggers a formal Wi-Fi Aware session by waking the Wi-Fi radio.”
10
High Throughput Data & Range:
Once devices discover each other, Wi-Fi Aware supports establishing a direct Wi-Fi data path. This can be an IP connection or a native transport, and it leverages Wi-Fi’s high data rates (including Wi-Fi 5/6/6E speeds on 5 GHz or 6 GHz bands). In fact, the Wi-Fi Alliance notes that Wi-Fi Aware data connections use
“high performance data rates and security, leveraging cutting-edge Wi-Fi technologies, including Wi-Fi 6, Wi-Fi 6E, and WPA3.”
10
Compared to Bluetooth or BLE, the throughput and range are vastly superior – Wi-Fi Aware can work at typical Wi-Fi ranges (tens of meters, even over 100m in open air) and deliver tens or hundreds of Mbps. By contrast, BLE might get 100+ meters but on the order of 0.1 Mbps in real-world throughput. Wi-Fi Aware will close that gap by giving cross-platform apps both long range
and
high speed.
Lower Latency & Instant Communication:
Version 4.0 of the spec introduced refinements for latency-critical applications. The aforementioned Instant Communication mode lets devices expedite the discovery handshake – important for use cases like AR gaming or urgent data sync where waiting a few seconds for a discovery window might be too slow. In Instant mode, a device (say, an AR headset) triggered via BLE could immediately switch to a predetermined channel and begin a quick service discovery exchange with a peer, rather than strictly waiting on the periodic timetable.
16
The spec shows this can cut discovery latency dramatically (Figure 73 in the spec illustrates an accelerated discovery).
16
From a developer’s perspective, Wi-Fi Aware can feel nearly instantaneous in establishing a link when properly used.
Accurate Ranging:
Perhaps one of the most exciting features for version 4 and beyond is built-in distance measurement between devices. Wi-Fi Aware includes a ranging protocol (based on Fine Timing Measurement, FTM) that lets one device get the distance to another with sub-meter accuracy.
15
This is similar to how Apple devices can use UWB or Bluetooth RTT for ranging, but now via Wi-Fi. The devices exchange precise timing signals to calculate distance (and even do so
as part of discovery
– a NAN discovery packet can include a request to measure range). The spec’s NAN Ranging section defines how devices negotiate a ranging session and obtain a distance estimate before or during data exchange.
16
Enhanced ranging
could unlock things like peer-to-peer localization (for example, an app can find not just who is nearby but also roughly how far or even what direction).
Security and Privacy:
Wi-Fi Aware has baked-in solutions for secure communication and privacy. It supports device pairing (establishing trust and keys) and encrypted data paths with mutual authentication.
15
It also provides privacy features like randomized identifiers that rotate, so devices aren’t broadcasting a fixed MAC or identity constantly.
10
This addresses the concern that always-on discovery could be used to track devices – Aware can randomize its “NAN IDs” and only reveal a stable identity when a trusted handshake occurs. The EU mandate will require Apple to expose the same security levels to third-party developers as it uses for its own devices, meaning things like AirDrop’s peer authentication should extend to third-party Aware sessions.
In essence, Wi-Fi Aware 4.0 is AWDL on steroids and open to all. It took the concepts Apple pioneered (timeslot synchronization, dual Wi-Fi/BLE use, etc.) and formalized them into a cross-vendor standard, adding improvements along the way. No longer limited to Apple devices, any Wi-Fi Aware certified device can join the discovery clusters and connect. With iOS 19, an iPhone will become just another Wi-Fi Aware node – able to discover and connect to Android phones, PCs, IoT gadgets, etc., directly via Wi-Fi.
AWDL vs. Wi-Fi Aware vs. BLE: Feature Comparison
How does Apple’s AWDL, the upcoming Wi-Fi Aware, and good old Bluetooth Low Energy stack up? The table below summarizes the key differences and capabilities of these peer-to-peer wireless technologies:
Standardization
- Apple AWDL (Proprietary): Apple-defined (private protocol)
- Wi-Fi Aware 4.0 (2022 Spec): Wi-Fi Alliance NAN standard
- Bluetooth LE (5.x): Bluetooth SIG standard

Topology
- Apple AWDL: Mesh networking. Multiple devices in a cluster. One acts as a time sync master.
- Wi-Fi Aware 4.0: Decentralized cluster (no fixed master). Typically one-to-one data links, but multiple links supported.
- Bluetooth LE: Point-to-point or star (one-to-many, each connection 1:1). No native mesh routing.

Range
- Bluetooth LE: Up to 100–200m typical; max ~1km line of sight with BLE 5 long-range (coded PHY).

Concurrent Internet
- Apple AWDL: Yes – simultaneous infrastructure Wi-Fi and P2P via channel hopping.
- Wi-Fi Aware 4.0: Yes – NAN discovery windows are scheduled around AP connectivity. Coexistence supported.
- Bluetooth LE: Yes – BLE separate from Wi-Fi, runs in parallel.

Notable Features
- Apple AWDL: Proprietary; Powers AirDrop/AirPlay; Mesh with master; No direct public API (apps use Multipeer Connectivity).
- Wi-Fi Aware 4.0: Open standard; Flexible discovery; Instant messaging; Built-in secure data path setup; Android API since 2017.
- Bluetooth LE: Universally supported; Extremely energy-efficient; Background presence detection; Limited data rate. Often combined with Wi-Fi for bulk transfer.
(Note: Above ranges and throughput are based on Ditto’s real-world tests and specification data. Bluetooth 5's theoretical 4x range increase can reach ~400m line-of-sight, typical usable range 100–200m indoors. Wi-Fi range varies significantly with the environment.)
As the table shows, Wi-Fi Aware (NAN) and AWDL are closely matched in capabilities – no surprise, given their kinship. Both vastly outperform Bluetooth LE for high-bandwidth applications, though BLE remains invaluable for ultra-low-power needs and simple proximity detection. The sweet spot that AWDL and Aware occupy is: fast, local data exchange (from tens of megabits up to hundreds) over distances of a room or building floor, without requiring any network infrastructure. This is why forcing Apple to support Wi-Fi Aware is so pivotal – it means an iPhone and an Android phone sitting next to each other can finally establish a fast, direct Wi-Fi link without an access point, something that was previously impossible (because the iPhone would only speak AWDL, and the Android only Wi-Fi Aware/Wi-Fi Direct). In effect, the EU is unifying the table’s middle column (“Wi-Fi Aware”) across the industry, and pushing the proprietary AWDL column toward obsolescence.
A Glimpse of Wi-Fi Aware 5.0 – What’s Next?
The EU is already looking ahead to Wi-Fi Aware 5.0, mandating Apple support it when available. While v5.0 is still in the works, we can speculate based on industry trends and draft discussions:
Better Interoperability & Backwards Compatibility:
Each iteration of Aware aims to bring improvements while remaining backward compatible. v5.0 will likely fine-tune the interaction between different versions (e.g. allowing a v5 device to gracefully communicate with a v4 device at a slightly reduced feature set).
Multi-Band and Wi-Fi 7 Enhancements:
With Wi-Fi 7 (802.11be) emerging, v5.0 could incorporate support for Multi-Link Operation (MLO) – allowing Aware devices to use multiple bands or channels simultaneously for P2P, increasing reliability and throughput. It might also embrace new PHY capabilities like 320 MHz channels in 6 GHz or even integration of the 60 GHz band for
ultra-high throughput at short range
. Imagine a future Aware where two devices use 6 GHz for discovery and 60 GHz for a quick gigabit data burst.
Improved Ranging and Location:
Wi-Fi Aware might leverage Wi-Fi 7’s improved location features or even integrate with UWB. v5.0 could offer finer distance measurement or angle-of-arrival info by coordinating multiple antennas, which would interest AR/VR use cases and precise indoor positioning.
Extended Mesh Networking:
Currently, Aware focuses on finding peers and setting up links; v5.0 might add more mesh networking primitives – e.g., forwarding data through intermediate nodes or coordinating groups of devices more intelligently. This could turn clusters of phones into true mesh networks for group connectivity without infrastructure.
Security Upgrades:
Each version updates security. v5.0 will likely address any weaknesses found in v4, perhaps adding quantum-resistant encryption for pairing or tighter integration with device identity frameworks. Given Apple’s emphasis on privacy, expect them to push for features that allow secure sharing of connection metadata with third parties without exposing user data.
We’ll know for sure once the Wi-Fi Alliance releases the Wi-Fi Aware 5.0 spec, but the direction is clear: faster, farther, and more seamless peer-to-peer connectivity. And importantly, Apple will be on board from day one (not years late as it was with previous standards).
Wi-Fi Aware in Action: Android Kotlin Example
To illustrate how developers can use Wi-Fi Aware, let’s look at a simplified real-world example on Android. Below is Kotlin code demonstrating a device publishing a service and handling a message from a subscriber. (Android’s Wi-Fi Aware API is available from API level 26; one must have location and “Nearby Wi-Fi Devices” permissions, and the device must support Aware.)
val wifiAwareMgr = context.getSystemService(Context.WIFI_AWARE_SERVICE) as WifiAwareManager
if (!wifiAwareMgr.isAvailable) {
    Log.e("WiFiAwareDemo", "Wi-Fi Aware not available on this device.")
    return
}

// Attach to the Wi-Fi Aware service
wifiAwareMgr.attach(object : AttachCallback() {
    override fun onAttached(session: WifiAwareSession) {
        // Once attached, we can publish or subscribe
        val publishConfig = PublishConfig.Builder()
            .setServiceName("com.example.p2pchat")  // Name of our service
            .build()
        session.publish(publishConfig, object : DiscoverySessionCallback() {
            override fun onPublishStarted(pubSession: PublishDiscoverySession) {
                Log.i("WiFiAwareDemo", "Service published, ready for subscribers.")
            }

            override fun onMessageReceived(peerHandle: PeerHandle, message: ByteArray) {
                val msgStr = String(message, Charsets.UTF_8)
                Log.i("WiFiAwareDemo", "Received message from subscriber: $msgStr")
                // Here we could respond or establish a data path if needed
            }
        }, null)
    }

    override fun onAttachFailed() {
        Log.e("WiFiAwareDemo", "Failed to attach to Wi-Fi Aware session.")
    }
}, null)
In this code, the app attaches to the Wi-Fi Aware service, then publishes a service named
"com.example.p2pchat"
. When a peer subscribes and sends us a message (for example, “Hello from subscriber”), it arrives in
onMessageReceived
. A subscriber device would perform complementary steps: calling
session.subscribe(...)
with the same service name and implementing
onServiceDiscovered
to detect the publisher, then possibly using
subscribeSession.sendMessage(peer, ...)
to send that “Hello.” At that point, either side could set up an actual data path (a dedicated network interface) for larger transfers by building a Wi-Fi Aware network specifier for the discovered peer (on recent Android versions, via WifiAwareNetworkSpecifier.Builder).
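For reference, the subscriber side might look roughly like this. It’s a sketch only: it assumes it runs inside the same onAttached callback (so session is the WifiAwareSession from above), reuses the service name from the publisher, and omits permission checks and error handling:
val subscribeConfig = SubscribeConfig.Builder()
    .setServiceName("com.example.p2pchat")   // Must match the publisher's service name
    .build()

session.subscribe(subscribeConfig, object : DiscoverySessionCallback() {
    private var subSession: SubscribeDiscoverySession? = null

    override fun onSubscribeStarted(discoverySession: SubscribeDiscoverySession) {
        subSession = discoverySession
        Log.i("WiFiAwareDemo", "Subscribed, waiting for a publisher.")
    }

    override fun onServiceDiscovered(
        peerHandle: PeerHandle,
        serviceSpecificInfo: ByteArray,
        matchFilter: List<ByteArray>
    ) {
        // A publisher appeared nearby; greet it over the discovery channel.
        subSession?.sendMessage(peerHandle, 0, "Hello from subscriber".toByteArray())
    }
}, null)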
The key takeaway is that Wi-Fi Aware makes peer discovery and messaging a first-class citizen in the API, abstracting away the low-level Wi-Fi fiddling. The app developer just provides a service name and gets callbacks when peers appear or messages arrive.
(Note: The above is a minimal example. In a real app, you’d handle permissions, check for support via
PackageManager.FEATURE_WIFI_AWARE
, and probably use the new NEARBY_WIFI_DEVICES permission on Android 13+. Also, establishing a full data path would involve requesting a
Network
from
ConnectivityManager
with a network specifier from the Aware session.)
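As a sketch of that last step (Android 10 / API level 29 and up), it could look something like the following. Here pubSession and peer stand in for the discovery session and PeerHandle obtained in the callbacks above, and the passphrase is an arbitrary example:
val specifier = WifiAwareNetworkSpecifier.Builder(pubSession, peer)
    .setPskPassphrase("example-shared-secret")   // omit for an open (unencrypted) path
    .build()

val request = NetworkRequest.Builder()
    .addTransportType(NetworkCapabilities.TRANSPORT_WIFI_AWARE)
    .setNetworkSpecifier(specifier)
    .build()

val connectivityMgr =
    context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager

connectivityMgr.requestNetwork(request, object : ConnectivityManager.NetworkCallback() {
    override fun onAvailable(network: Network) {
        // The Aware data path is up; bind sockets to this Network for bulk transfer.
        Log.i("WiFiAwareDemo", "Wi-Fi Aware data path available: $network")
    }

    override fun onLost(network: Network) {
        Log.i("WiFiAwareDemo", "Wi-Fi Aware data path lost.")
    }
})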
Immediately after Google announced Wi-Fi Aware in Android, we at Ditto realized its potential for seamless peer-to-peer sync. As shown above, you can certainly roll your own discovery and data exchange with Aware. However, not every developer will want to manage these details or deal with corner cases of connectivity. That’s why Ditto’s real-time sync SDK is integrating Wi-Fi Aware support out-of-the-box.
Our upcoming releases will automatically use Wi-Fi Aware in iOS under the hood for nearby devices, enabling peer-to-peer database synchronization and binary file sharing between iOS and Android with zero configuration. In practical terms, if you build your app with Ditto, two devices in proximity will be able to find each other and sync data directly (bypassing cloud or LAN) using the fastest available transport – now including Wi-Fi Aware alongside Bluetooth, AWDL, LAN, etc.
Cross-platform, edge-first applications (collaborative apps, offline-first data stores, local IoT networks) will significantly benefit from this, as devices will form a local mesh that syncs instantly and reliably, even if the internet is down. Ditto’s approach has always been to multiplex multiple transports (Wi-Fi infrastructure, P2P, BLE, etc.) for robustness; adding NAN support supercharges the bandwidth available for nearby sync sessions.
A concrete example: Consider an app for first responders that shares maps and live sensor data among a team in the field. With Wi-Fi Aware, an Android tablet, an iPhone, and a specialized helmet device could all auto-discover each other and form a mesh to sync mission data in real-time without any network. Previously, if the iPhone had an app using AWDL, it couldn’t directly connect to the Android tablet’s Wi-Fi Aware session – they were incompatible silos. Now, they’ll speak one language, making such scenarios truly feasible.
Bigger Picture: The Dawn of True Cross-Platform Mesh Networking
Apple’s reluctant adoption of Wi-Fi Aware marks a pivot point for device connectivity. For years, we’ve seen a split: Apple’s ecosystem “Just Works” within itself (thanks to AWDL, AirDrop, etc.), while other platforms muddled along with standards that never quite matched the seamlessness or performance. That left cross-platform interactions hamstrung – the experience of sharing something between an iPhone and an Android was far from instant or easy.
With iOS supporting Wi-Fi Aware, we’re essentially witnessing AWDL go open. The proprietary tech that powered some of Apple’s most magical features will now be available in an interoperable way to any developer. The implications are significant:
End of the Proprietary P2P Divide:
No more need for parallel implementations. Developers won’t have to build one system using MultipeerConnectivity for iOS-to-iOS and another using Wi-Fi Aware or Wi-Fi Direct for Android-to-Android. They can use Wi-Fi Aware universally for nearby networking. This reduces development complexity and encourages building features that work on all devices, not just within one brand.
Cross-Platform AirDrop and Beyond:
We will likely see apps (or OS-level features) that enable AirDrop-like functionality between iOS and Android. Google’s Nearby Share and Samsung’s Quick Share could potentially become interoperable with Apple’s implementation now that the underlying protocol is shared. The user experience barrier between ecosystems could start to blur in local sharing scenarios.
Mesh and Edge Computing Potential:
If many devices can seamlessly form ad-hoc networks, this enables new paradigms in edge computing. Clusters of phones could share workload or content directly. For example, at a conference, a presenter’s laptop could broadcast slides via Wi-Fi Aware to all audience phones without internet. Or a fleet of drones could coordinate via Aware when out of range of a base station. The offline mesh becomes a first-class citizen.
Competitive Innovation:
The EU’s push here also sets a precedent – even giants like Apple must conform to interoperability on critical features. This may drive Apple (and others) to innovate
on top of
the standards rather than via proprietary lock-in. We might see Apple contribute more actively to Wi-Fi Aware’s future improvements (as required by the DMA) to ensure it meets their needs for things like AR/VR data streams. That collaboration could yield better tech for everyone, faster.
One can’t ignore the
irony
that the Wi-Fi Aware standard is effectively a child of AWDL. Now the child comes back to replace its parent. From a technical perspective, this is a win for engineering elegance – it’s always cleaner to have one agreed-upon protocol rather than parallel ones. From a developer perspective, it’s a huge win for interoperability and user reach.
Apple will undoubtedly ensure that the transition doesn’t degrade the experience for Apple-to-Apple interactions; the DMA even mandates that third-party access be
“equally effective”
as Apple’s own solutions. That means as developers, we should expect the new iOS 19 Wi-Fi Aware APIs to give us essentially what AWDL gave Apple’s apps. It’s like being handed the keys to a supercar that was previously locked in Apple’s garage.
Conclusion
The EU’s crackdown on Apple’s closed ecosystems is catalyzing a long-awaited unification in short-range wireless technology. By compelling Apple to adopt Wi-Fi Aware, the Digital Markets Act is effectively forcing the end of AWDL as an exclusive domain. For developers and users, this is exciting news: soon your apps will be able to use high-speed peer-to-peer Wi-Fi on iPhones and have it talk to other platforms seamlessly. We’ll likely see an explosion of innovative uses for local connectivity – from truly universal AirDrop alternatives to cross-platform local multiplayer games, ad-hoc collaborative editing, IoT device commissioning, and beyond – no specialized hardware or router required.
At a technical level, AWDL will be remembered as an ahead-of-its-time solution that proved what was possible, and Wi-Fi Aware ensures those capabilities are broadly available as an industry standard. With Wi-Fi Aware 4.0 on the cusp of ubiquity (and 5.0 on the horizon), we are entering a new era of frictionless sharing and syncing among devices in physical proximity. It’s a win for interoperability and a win for innovation in peer-to-peer networking. The walls around AWDL are coming down – and the implications for edge computing and offline experiences are profound.
Sources:
[1] European Commission – DMA Decisions on Apple Interoperability (Q&A) – High-bandwidth P2P Wi-Fi (Wi-Fi Aware 4.0 in iOS 19, Wi-Fi Aware 5.0 next). (2025) (Interoperability - European Commission)
[4] Android AOSP Documentation – Wi-Fi Aware feature (Neighbor Awareness Networking) – Added in Android 8.0; supports discovery, connection, and ranging (added in Android 9). (Wi-Fi Aware | Android Open Source Project)
[5] Nordic Semiconductor – Bluetooth Range Compared – Bluetooth 5 LE offers up to ~400 m range (4× vs BLE 4), 2 Mbps PHY, ~1.36 Mbps application throughput. (Things You Should Know About Bluetooth Range)
[6] Computerworld – Coming soon: Faster, longer-range Bluetooth 5 – “In clear line of sight, Bluetooth 5 range could stretch to 400 meters.” (2016)
[7] BGR – iOS 19 Features Coming to EU – Details new features for EU iPhones including high-bandwidth P2P Wi-Fi, sideloading, and alternative app stores. (March 2025) (8 Exclusive iOS 19 Features Coming to EU iPhone Users)
[8] Open Wireless Link Wiki – What is Apple Wireless Direct Link (AWDL) – Apple’s patent on AWDL (US201800838) and origins as a successor to Wi-Fi IBSS. (Wiki | Open Wireless Link)
[9] CyberHoot – Apple Wireless Direct Link (AWDL) – Apple deployed AWDL in over a billion devices to power AirDrop, AirPlay peer connections, and more. (Apple Wireless Direct Link (AWDL) - CyberHoot)
[10] Wi-Fi Alliance – Wi-Fi Aware – Android added platform support for Wi-Fi Aware in Oreo (8.0) and later. (Wi-Fi Aware | Wi-Fi Alliance)
[14] Secure Mobile Networking Lab (SEEMOO) – Apple Wireless Direct Link (AWDL) and Secure Device Communications – AWDL is an 802.11-based ad-hoc protocol with Apple-specific extensions, integrated into Apple’s ecosystem. (Matthias Hollick – Secure Mobile Networking Lab)
There’s an old electronics joke that if you want to build an oscillator, you should try building an amplifier. One of the fundamental criteria for oscillation is the presence of signal gain; without it, any oscillation is bound to decay, just like a swing that’s no longer being pushed must eventually come to a stop.
In reality, circuits with gain can occasionally oscillate by accident, but it’s rather difficult to build a good analog oscillator from scratch. The most common category of oscillators you can find on the internet is circuits that simply don’t work. This is followed by approaches that require exotic components, such as center-tapped inductors or incandescent lightbulbs. The final group consists of layouts you can copy, but probably won’t be able to explain to a friend who doesn’t have an EE degree.
In today’s article, I wanted to approach the problem in a different way. I’ll assume that you’re up-to-date on some of the key lessons from earlier articles: that you
can tell the difference between voltage and current
, have a
basic grasp of transistors
, and know what happens when a
capacitor is charged through a resistor
. With this in mind, let’s try to construct an oscillator that’s easy to understand, runs well, and has a predictable operating frequency. Further, let’s do it without peeking at someone else’s homework.
The simplest form of an oscillator is a device that uses negative feedback to cycle back and forth between two unstable states. To illustrate, think of a machine equipped with a light sensor and a robotic arm. In the dark, the machine is compelled to stroll over to the wall switch and flip it on. If it detects light, another part of its programming takes over and toggles the switch off. The machine is doomed to an endless cycle of switch-flipping at a frequency dictated by how quickly it can process information and react.
At first blush, we should be able to replicate this operating principle with a single n-channel MOSFET. After all, a transistor can be used as an electronically-operated switch:
A wannabe oscillator.
The transistor turns on when the voltage between its gate terminal and the source leg (
Vgs
) exceeds a certain threshold, usually around 2 V. When the power supply first ramps up, the transistor is not conducting. With no current flowing through, there’s no voltage drop across the resistor, so
Vgs
is pulled toward the positive supply rail. Once this voltage crosses about 2 V, the transistor begins to admit current. It stands to reason that the process shorts the bottom terminal of the resistor to the ground and causes
Vgs
to plunge to 0 V. If so, that would restart the cycle and produce a square wave on the output leg.
In practice, this is not the behavior you’ll see. For a MOSFET, the relationship between
Vgs
and the admitted current (
Id
) is steep, but the device is not a binary switch:
BS170 Vgs-Id curve for Vds = 1 V. Captured by author.
In particular, there is a certain point on that curve, somewhere in the vicinity of 2 V, that corresponds to the transistor admitting a current of only about 300 µA. From Ohm’s law, this current flowing through a 10 kΩ resistor will produce a voltage drop of 3 V. In a 5 V circuit, this puts
Vgs
at 5 V - 3 V = 2 V. In other words, there exists a stable equilibrium that prevents oscillation. It’s akin to our robot-operated light switch being half-on.
To fix this issue, we need to build an electronic switch that has no stable midpoint. This is known as a Schmitt trigger, and a simple implementation is shown below:
A discrete-transistor Schmitt trigger.
To analyze the design, let’s assume the circuit is running off
Vsupply = 5
V. If the input signal is 0 V, the transistor on the left is not conducting, which pulls
Vgs
for the other MOSFET all the way to 5 V. That input allows nearly arbitrary currents to flow through the right branch of the circuit, making that current path more or less equivalent to a two-resistor voltage divider. We can calculate the midpoint voltage of the divider:
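Writing the shared resistor at the bottom of the schematic as Rbottom (a label picked here for convenience), the divider output is:
\(V_s = V_{supply} \cdot \frac{R_{bottom}}{R_2 + R_{bottom}}\)
For the component values used in the schematic, this works out to about 0.45 V.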
This voltage is also propagated to the source terminal of the input transistor on the left. The actual
Vth
for the
BS170
transistors in my possession is about 2.15 V, so for the input-side transistor to turn on, the supplied signal will need to exceed
Vs + Vth ≈
2.6 V in reference to the ground. When that happens, a large voltage drop appears across R1, reducing the
Vgs
of the output-side transistor below the threshold of conduction, and choking off the current in the right branch.
At this point, there’s still current flowing through the common resistor on the bottom, but it’s now increasingly sourced via the left branch. The left branch forms a new voltage divider; because R1
has a higher resistance than R2,
Vs
is gradually reduced, effectively bumping up
Vgs
for the left transistor and thus knocking it more firmly into conduction even if the input voltage remains constant. This is positive feedback, which gives the circuit no option to linger in a half-on state.
Once the transition is complete, the voltage drop across the bottom resistor is down from 450 mV to about 50 mV. This means that although the left transistor first turned on when the input signal crossed 2.6 V in reference to the ground, it will not turn off until the voltage drops all the way to 2.2 V — a 400 mV gap.
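Putting numbers on the hysteresis, using the 2.15 V threshold and the two source-terminal voltages quoted above:
\(V_{on} \approx 0.45\ \mathrm{V} + 2.15\ \mathrm{V} \approx 2.6\ \mathrm{V} \qquad V_{off} \approx 0.05\ \mathrm{V} + 2.15\ \mathrm{V} \approx 2.2\ \mathrm{V}\)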
This circuit lets us build what’s known as a
relaxation oscillator
. To do so, we only need to make two small tweaks. First, we need to loop an inverted output signal back onto the input; the most intuitive way of doing this is to add another transistor in a switch-like configuration similar to the failed design of a single-transistor oscillator mentioned earlier on. This building block, marked on the left, outputs
Vsupply
when the signal routed to the gate terminal is 0 V, and produces roughly 0 V when the input is near
Vsupply
:
A Schmitt trigger oscillator.
Next, to set a sensible oscillation speed, we need to add a time delay, which can be accomplished by charging a capacitor through a resistor (middle section). The resistor needs to be large enough not to overload the inverter stage.
For the component values shown in the schematic, the circuit should oscillate at a frequency of almost exactly 3 kHz when supplied with 5 V:
An oscilloscope trace for the circuit, by author.
The frequency is governed by how long it takes for the capacitor to move
Δv =
400 mV between the two Schmitt threshold voltages:
the “off” point at 2.2 V and the “on” point at 2.6 V.
Because the overall variation in capacitor voltage is small, we can squint our eyes and say that the voltage across the 100 kΩ resistor is nearly constant in every charge cycle. When the resistor is connected to the positive rail, VR ≈ 5 V – 2.4 V ≈ 2.6 V. Conversely, when the resistor is connected to the ground, we get VR ≈ 2.4 V. If the voltages across the resistor are nearly constant, so are the resulting capacitor currents: roughly 2.6 V / 100 kΩ ≈ 26 µA while charging and 2.4 V / 100 kΩ ≈ 24 µA while discharging.
From the
fundamental capacitor equation
(
Δv = I · t/C
), we can solve for the charging time needed to move the voltage by
Δv
= 400 mV; the result is about 154 µs for the charging period and 167 µs for the discharging period. The sum is 321 µs, corresponding to a frequency of about 3.1 kHz – pretty close to real life.
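Assuming the capacitor in the schematic is 10 nF (the value implied by these figures), the arithmetic is:
\(t_{charge} = \frac{\Delta v \cdot C}{I} \approx \frac{0.4\ \mathrm{V} \cdot 10\ \mathrm{nF}}{26\ \mu\mathrm{A}} \approx 154\ \mu\mathrm{s} \qquad t_{discharge} \approx \frac{0.4\ \mathrm{V} \cdot 10\ \mathrm{nF}}{24\ \mu\mathrm{A}} \approx 167\ \mu\mathrm{s}\)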
The circuit can be simplified to two transistors at the expense of readability, but if you need an analog oscillator with a lower component count, an
operational amplifier
is your best bet.
If you’re rusty on op-amps, I suggest pausing to review the article linked in the preceding paragraph. That said, to understand the next circuit, all you need to know is that an op-amp compares two input voltages and that
Vout
swings toward the positive rail if
Vin+
≫
Vin-
or toward the negative rail if
Vin+
≪
Vin-
.
An op-amp relaxation oscillator.
For simplicity, let’s choose R1 = R2 = R3 and then look at the non-inverting (
Vin+
) input of the chip. What we have here is a three-way voltage divider: the signal on the non-inverting input is a simple average of three voltages:
Vsupply
(5 V), ground (0 V), and
Vout
. We don’t know the value of
Vout
just yet, but it can only vary from 0 V to
Vsupply
, so the
V
in+
signal will always stay between ⅓ ·
Vsupply
and ⅔ ·
Vsupply.
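With three equal resistors, that average can be written out explicitly:
\(V_{in+} = \frac{V_{supply} + 0\ \mathrm{V} + V_{out}}{3}\)
so the two extremes of Vout (0 V and Vsupply) map onto the ⅓ and ⅔ points of the supply.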
Next, let’s have a look at the inverting input (
Vin-
). When the circuit is first powered on, the capacitor C isn’t charged, so
Vin-
sits at 0 V. Since the voltage on the non-inverting input can’t be lower than ⅓ ·
Vsupply
, this means that on power-on,
Vin+
≫
Vin-
, sending the output voltage toward the positive rail. When
Vout
shoots up, it also bumps the
Vin+
average to ⅔ ·
Vsupply.
Because
Vout
is now high, this starts the process of charging the capacitor through the bottom resistor (R
cap
). After a while, the capacitor voltage is bound to exceed ⅔ ·
Vsupply
. The capacitor voltage is also hooked up to the amplifier’s inverting input, and at that point,
Vin-
begins to exceed
Vin+
, nudging the output voltage lower. Stable equilibrium is not possible because this output voltage drop is immediately reflected in the three-way average present on the
Vin+
leg, pulling it down and causing the difference between
Vin-
and
Vin+
to widen. This positive feedback loop puts the amplifier firmly into the
Vin+
≪
Vin-
territory.
At that point,
Vout
must drop to 0 V, thus lowering the voltage on the non-inverting leg to ⅓ ·
Vsupply
. With
Vout
low, the capacitor starts discharging through R
cap
, but it needs to travel from the current charge state of ⅔ ·
Vsupply
all the way to ⅓ ·
Vsupply
before
Vin-
becomes lower than
Vin+
and the cycle is allowed to restart.
The continued charging and discharging of the capacitor between ⅓ ·
Vsupply
and ⅔ ·
Vsupply
results in periodic oscillation. The circuit produces a square wave signal with a period dictated by the value of C and R
cap
. The frequency of these oscillations can be approximated analogously to what we’ve done for the discrete-transistor variant earlier on. In a 5 V circuit with R1 = R2 = R3, the capacitor charges and discharges by
Δv ≈
1.67 V. If R
cap
= 10 kΩ, then the quasi-constant capacitor charging current is
I
≈
2.5 V / 10 kΩ
≈
250 µA.
Knowing
Δv
and
I
, and assuming C = 1 µF, we can tap into the capacitor equation (
Δv = I · t/C
) to solve for
t
. The result is 6.67 ms. This puts the charge-discharge roundtrip at 13.34 ms, suggesting a frequency of 75 Hz. The actual measurement is shown below:
Oscilloscope trace for the relaxation oscillator. By author.
The observed frequency is about 7% lower than predicted: 70 instead of 75 Hz. Although I could pin this on component tolerances, a more honest explanation is that at
Δv ≈
1.67 V, the constant-current approximation of the capacitor charging process is stretched thin; the segments in the bottom oscilloscope trace diverge quite a bit from a straight line. Not to worry; to reduce
Δv
, we just need to bump up the value of R3. If we switch to 47 kΩ and keep everything else the same, the delta will be about 480 mV and the model we’re relying on will give a more precise result.
If you’re interested in a general formula for the circuit’s operating frequency, it helps to assume that R1 and R2 are the same. If so, we can replace them with a single composite resistor of half the resistance and solve the standard voltage divider equation to find out how far the non-inverting input moves when the feedback signal swings from 0 V to Vsupply:
\(\Delta v = V_{supply} \cdot \frac{R_1/2}{R_3 + R_1/2} = V_{supply} \cdot \frac{R_1}{2 R_3 + R_1}\)
With two identical resistors, the capacitor waveform is centered around ½ Vsupply, so the formula for the average current is also pretty simple (and doesn’t change between the charge and discharge periods):
\(I \approx \frac{V_{supply}/2}{R_{cap}}\)
Plugging Δv and I into the capacitor equation and taking the reciprocal of the full charge-discharge period gives:
\(f_{osc} = \frac{1}{2t} = \frac{I}{2\, \Delta v\, C} = \frac{2 R_3 + R_1}{4\, R_1\, R_{cap}\, C}\)
…and in the specific case of R1 = R2 = 10 kΩ plus R3 = 47 kΩ, we get:
\(f_{osc} \approx {2.6 \over R_{cap} \cdot C}\)
The method outlined earlier on is not the only conceptual approach to building oscillators. Another way is to produce resonance: we can take a standard op-amp voltage follower – which uses negative feedback to control the output – and then mess with the feedback loop in a particular way.
An op-amp voltage follower.
In the basic voltage follower configuration, the op-amp reaches a stable equilibrium when
Vin+
≈
Vin-
≈
Vout
. Again, the circuit works only because of the negative feedback loop; in its absence,
Vin-
would diverge from
Vin+
and the output voltage would swing toward one of the supply rails.
To turn this circuit into an oscillator, we can build a feedback loop that normally provides negative feedback, but that inverts the waveform at a particular sine-wave frequency. This turns negative feedback into positive feedback; instead of stabilizing the output voltage, it produces increasing swings, but only at the frequency at which the inversion takes place.
Such a selective waveform inversion sounds complicated, but we can achieve it with a familiar building block: an R-C lowpass filter. The mechanics of these filters are discussed in
this article
; in a nutshell, the arrangement produces a frequency-dependent phase shift of 0° (at DC) to -90° (as the frequency approaches infinity). If we cascade a couple of these R-C stages, we can achieve a -180° phase shift at some chosen frequency, which is the same as flipping the waveform.
A minimalistic but well-behaved op-amp solution is shown below:
A rudimentary phase-shift oscillator.
In this particular circuit, an overall -180° shift happens when each of the R-C stages adds its own -60°. It’s easy to find the frequency at which this occurs. In the aforementioned article on signal filtering, we came up with the following formula describing the shift associated with the filter:
\(\theta = -arctan( 2 \pi f R C )\)
Arctangent is the inverse of the tangent function. In a right triangle, the tangent function describes the ratio of lengths of the opposite to the adjacent for a particular angle; the arctangent goes the other way round, giving us an angle for a particular ratio. In other words, if
x
=
tan(α)
then
α
=
arctan(x).
This allows us to rewrite the equation as:
\(2 \pi f R C = -tan(\theta)\)
We’re trying to solve for f at which θ = -60°; the value of -tan(-60°) is roughly 1.73, so we can plug that into the equation and then move everything except f to the right:
\(f = \frac{1.73}{2 \pi R C}\)
Throwing in the component values for the first R-C stage in the schematic gives the oscillator’s operating frequency.
You’ll notice that the result is the same for the other two stages: they have higher resistances but proportionally lower capacitances, so the denominator of the fraction doesn’t change.
Oscilloscope traces for the circuit are shown below:
Traces for the three R-C stages.
Because the amplifier’s gain isn’t constrained in any way, the output waveform is a square wave. Nevertheless, in a lowpass circuit with these characteristics, the resulting waveforms are close enough to sinusoids that the sine-wave model approximates the behavior nearly perfectly. We can run a discrete-time simulation to show that the sine-wave behavior of these three R-C stages (gray) aligns pretty well with the square-wave case (blue):
A simulation of a square & sine wave passing through three R-C filters.
To make the output a sine wave, it’s possible to tinker with the feedback loop to lower the circuit’s gain, but it’s hard to get it right; insufficient gain prevents oscillation while excess gain produces distortion. A simpler trick is to tap into the signal on the non-inverting leg (bottom oscilloscope trace) and use the other part of a dual op-amp IC to amplify this signal to your heart’s desire.
Some readers might be wondering why I designed the stages so that each of them has an impedance ten times larger than the stage before it. This is to prevent the filters from appreciably loading each other. If all the impedances were in the same ballpark, the middle filter could source currents from the left as easily as it could from the right. In that situation, finding the point of -180° phase shift with decent accuracy would require calculating the transfer function for the entire six-component Franken-filter; the task is doable but — to use a mathematical term —
rather unpleasant
.
Footnote: in the literature, the circuit is more often constructed using highpass stages and a discrete transistor. I’d wager that most sources that present the discrete-transistor solution have not actually tried it in practice; otherwise, they would have found it to be quite finicky. The version presented in this article is discussed
here
.