It's hard to build an oscillator

Lobsters
lcamtuf.substack.com
2025-11-27 10:58:39
Comments...
Original Article

There’s an old electronics joke that if you want to build an oscillator, you should try building an amplifier. One of the fundamental criteria for oscillation is the presence of signal gain; without it, any oscillation is bound to decay, just like a swing that’s no longer being pushed must eventually come to a stop.

In reality, circuits with gain can occasionally oscillate by accident, but it’s rather difficult to build a good analog oscillator from scratch. The most common category of oscillators you can find on the internet is circuits that don’t work reliably. This is followed by approaches that require exotic components, such as center-tapped inductors or incandescent lightbulbs. The final group is layouts you can copy, but probably won’t be able to explain to a friend who doesn’t have an EE degree.

In today’s article, I wanted to approach the problem in a different way. I’ll assume that you’re up-to-date on some of the key lessons from earlier articles: that you can tell the difference between voltage and current , have a basic grasp of transistors , and know what happens when a capacitor is charged through a resistor . With this in mind, let’s try to construct an oscillator that’s easy to understand, runs well, and has a predictable operating frequency. Further, let’s do it without peeking at someone else’s homework.

The simplest form of an oscillator is a device that uses negative feedback to cycle back and forth between two unstable states. To illustrate, think of a machine equipped with a light sensor and a robotic arm. In the dark, the machine is compelled to stroll over to the wall switch and flip it on. If it detects light, another part of its programming takes over and toggles the switch off. The machine is doomed to an endless cycle of switch-flipping at a frequency dictated by how quickly it can process information and react.

At first blush, we should be able to replicate this operating principle with a single n-channel MOSFET. After all, a transistor can be used as an electronically-operated switch:

A wannabe oscillator.

The transistor turns on when the voltage between its gate terminal and the source leg ( Vgs ) exceeds a certain threshold, usually around 2 V. When the power supply first ramps up, the transistor is not conducting. With no current flowing through, there’s no voltage drop across the resistor, so Vgs is pulled toward the positive supply rail. Once this voltage crosses about 2 V, the transistor begins to admit current. It stands to reason that the process shorts the bottom terminal of the resistor to the ground and causes Vgs to plunge to 0 V. If so, that would restart the cycle and produce a square wave on the output leg.

In practice, this is not the behavior you’ll see. For a MOSFET, the relationship between Vgs and the admitted current ( Id ) is steep, but the device is not a binary switch:

BS170 Vgs-Id curve for Vds = 1 V. Captured by author.

In particular, there is a certain point on that curve, somewhere in the vicinity of 2 V, that corresponds to the transistor only admitting a current of about 300 µA. From Ohm’s law, this current flowing through a 10 kΩ resistor will produce a voltage drop of 3 V. In a 5 V circuit, this puts Vgs at 5 V - 3 V = 2 V. In other words, there exists a stable equilibrium that prevents oscillation. It’s akin to our robot-operated light switch being half-on.
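To see why the circuit settles rather than oscillates, here is a minimal numeric sketch that looks for the self-biasing point where the resistor and the transistor agree. The square-law model and its parameters (Vth, K) are illustrative assumptions rather than measured BS170 data; the only point is that the two curves cross at a stable spot near 2 V.

VSUPPLY = 5.0      # V
R = 10e3           # Ω, drain resistor
VTH = 2.0          # V, assumed threshold voltage
K = 0.08           # A/V², assumed transconductance parameter

def drain_current(vgs):
    """Very rough square-law MOSFET model (saturation region)."""
    return 0.0 if vgs <= VTH else K * (vgs - VTH) ** 2

# Binary-search for the equilibrium where Vgs = Vsupply - R · Id(Vgs).
lo, hi = 0.0, VSUPPLY
for _ in range(60):
    mid = (lo + hi) / 2
    if VSUPPLY - R * drain_current(mid) > mid:
        lo = mid    # the resistor still pulls the gate higher than this
    else:
        hi = mid    # the transistor sinks enough current to pull it lower

print(f"equilibrium Vgs ≈ {lo:.2f} V, Id ≈ {drain_current(lo) * 1e6:.0f} µA")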

To fix this issue, we need to build an electronic switch that has no stable midpoint. This is known as a Schmitt trigger, and a simple implementation is shown below:

A discrete-transistor Schmitt trigger.

To analyze the design, let’s assume the circuit is running off Vsupply = 5 V. If the input signal is 0 V, the transistor on the left is not conducting, which pulls Vgs for the other MOSFET all the way to 5 V. That input allows nearly arbitrary currents to flow through the right branch of the circuit, making that current path more or less equivalent to a two-resistor voltage divider. We can calculate the midpoint voltage of the divider:

\(V_{s\textrm{ (input low)}} \approx V_{supply} \cdot { R_{comm} \over { R_{comm} + R2} } \approx 450 \textrm{ mV}\)

This voltage is also propagated to the source terminal of the input transistor on the left. The actual Vth for the BS170 transistors in my possession is about 2.15 V, so for the input-side transistor to turn on, the supplied signal will need to exceed Vs + Vth ≈ 2.6 V in reference to the ground. When that happens, a large voltage drop appears across R1, reducing the Vgs of the output-side transistor below the threshold of conduction, and choking off the current in the right branch.

At this point, there’s still current flowing through the common resistor on the bottom, but it’s now increasingly sourced via the left branch. The left branch forms a new voltage divider; because R1 has a higher resistance than R2, Vs is gradually reduced, effectively bumping up Vgs for the left transistor and thus knocking it more firmly into conduction even if the input voltage remains constant. This is a positive feedback that gives the circuit no option to linger in a half-on state.

Once the transition is complete, the voltage drop across the bottom resistor is down from 450 mV to about 50 mV. This means that although the left transistor first turned on when the input signal crossed 2.6 V in reference to the ground, it will not turn off until the voltage drops all the way to 2.2 V — a 400 mV gap.
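Here is a quick numeric sketch of that hysteresis window. The resistor values are assumptions (any pair with Rcomm / (Rcomm + R2) ≈ 0.09 reproduces the ~450 mV figure); the 2.15 V threshold and the ~50 mV post-transition drop are taken straight from the text.

VSUPPLY = 5.0
R_COMM, R2 = 1e3, 10e3    # assumed values for the common (bottom) and right-branch resistors
VTH = 2.15                # V, measured BS170 threshold quoted above

# Source voltage while the input transistor is off (right branch conducting):
vs_input_low = VSUPPLY * R_COMM / (R_COMM + R2)    # ≈ 0.45 V

# Source voltage after the transition (the article quotes ~50 mV):
vs_input_high = 0.05

turn_on = vs_input_low + VTH      # the input must rise above this to switch
turn_off = vs_input_high + VTH    # ...and fall below this to switch back

print(f"turn-on  ≈ {turn_on:.2f} V")     # ≈ 2.60 V
print(f"turn-off ≈ {turn_off:.2f} V")    # ≈ 2.20 V
print(f"hysteresis ≈ {(turn_on - turn_off) * 1000:.0f} mV")    # ~400 mV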

This circuit lets us build what’s known as a relaxation oscillator . To do so, we only need to make two small tweaks. First, we need to loop an inverted output signal back onto the input; the most intuitive way of doing this is to add another transistor in a switch-like configuration similar to the failed design of a single-transistor oscillator mentioned earlier on. This building block, marked on the left, outputs Vsupply when the signal routed to the gate terminal is 0 V, and produces roughly 0 V when the input is near Vsupply :

A Schmitt trigger oscillator.

Next, to set a sensible oscillation speed, we need to add a time delay, which can be accomplished by charging a capacitor through a resistor (middle section). The resistor needs to be large enough not to overload the inverter stage.

For the component values shown in the schematic, the circuit should oscillate at a frequency of almost exactly 3 kHz when supplied with 5 V:

An oscilloscope trace for the circuit, by author.

The frequency is governed by how long it takes for the capacitor to move Δv = 400 mV between the two Schmitt threshold voltages: the “off” point at 2.2 V and the “on” point at 2.6 V.

Because the overall variation in capacitor voltage is small, we can squint our eyes and say that the voltage across the 100 kΩ resistor is nearly constant in every charge cycle. When the resistor is connected to the positive rail, V R ≈ 5 V – 2.4 V ≈ 2.6 V. Conversely, when the resistor is connected to the ground, we get V R ≈ 2.4 V. If the voltages across the resistor are nearly constant, so are the resulting capacitor currents:

\(\begin{array}{c} I_{C \textrm{ (charging)}} \approx {2.6 \textrm{ V} \over 100 \textrm{ kΩ}} \approx 26 \textrm{ µA} \\ I_{C \textrm{ (discharging)}} \approx {2.4 \textrm{ V} \over 100 \textrm{ kΩ}} \approx 24 \textrm{ µA} \end{array} \)

From the fundamental capacitor equation ( Δv = I · t/C ), we can solve for the charging time needed to move the voltage by Δv = 400 mV; the result is about 154 µs for the charging period and 167 µs for the discharging period. The sum is 321 µs, corresponding to a frequency of about 3.1 kHz – pretty close to real life.
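If you want to check the arithmetic, here is a short sketch of the constant-current estimate. The 100 kΩ resistor and the 2.2 V / 2.6 V thresholds come from the text above; the 10 nF capacitor is an assumed value that reproduces the quoted timings.

VSUPPLY = 5.0
R = 100e3              # Ω, timing resistor
C = 10e-9              # F, assumed capacitor value
V_LOW, V_HIGH = 2.2, 2.6           # Schmitt threshold voltages
V_MID = (V_LOW + V_HIGH) / 2       # ≈ 2.4 V, average capacitor voltage
DELTA_V = V_HIGH - V_LOW           # 400 mV swing between the thresholds

i_charge = (VSUPPLY - V_MID) / R   # ≈ 26 µA with the resistor at the positive rail
i_discharge = V_MID / R            # ≈ 24 µA with the resistor at the ground

t_charge = DELTA_V * C / i_charge          # Δv = I · t/C  =>  t = Δv · C/I
t_discharge = DELTA_V * C / i_discharge

period = t_charge + t_discharge
print(f"t_charge ≈ {t_charge * 1e6:.0f} µs, t_discharge ≈ {t_discharge * 1e6:.0f} µs")
print(f"f ≈ {1 / period:.0f} Hz")          # ≈ 3.1 kHz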

The circuit can be simplified to two transistors at the expense of readability, but if you need an analog oscillator with a lower component count, an operational amplifier is your best bet.

If you’re rusty on op-amps, I suggest pausing to review the article linked in the preceding paragraph. That said, to understand the next circuit, all you need to know is that an op-amp compares two input voltages and that Vout swings toward the positive rail if Vin+ > Vin- or toward the negative rail if Vin+ < Vin-.

An op-amp relaxation oscillator.

For simplicity, let’s choose R1 = R2 = R3 and then look at the non-inverting ( Vin+ ) input of the chip. What we have here is a three-way voltage divider: the signal on the non-inverting input is a simple average of three voltages: Vsupply (5 V), ground (0 V), and Vout . We don’t know the value of Vout just yet, but it can only vary from 0 V to Vsupply , so the Vin+ signal will always stay between ⅓ · Vsupply and ⅔ · Vsupply.
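To convince yourself of the ⅓–⅔ bound, here is a minimal check of the three-way divider via superposition. The 10 kΩ values are placeholders; once the three resistors are equal, they cancel out and the node voltage is just the plain average of the three source voltages, so it does not matter which resistor goes where.

VSUPPLY = 5.0

def v_in_plus(v_out, r1=10e3, r2=10e3, r3=10e3):
    """Node voltage of three resistors tied to Vsupply, ground, and Vout."""
    conductance = 1 / r1 + 1 / r2 + 1 / r3
    return (VSUPPLY / r1 + 0.0 / r2 + v_out / r3) / conductance

print(f"Vout low:  Vin+ ≈ {v_in_plus(v_out=0.0):.2f} V")        # ≈ 1.67 V (⅓ · Vsupply)
print(f"Vout high: Vin+ ≈ {v_in_plus(v_out=VSUPPLY):.2f} V")    # ≈ 3.33 V (⅔ · Vsupply)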

Next, let’s have a look at the inverting input ( Vin- ). When the circuit is first powered on, the capacitor C isn’t charged, so Vin- sits at 0 V. Since the voltage on the non-inverting input can’t be lower than ⅓ · Vsupply , this means that on power-on, Vin+ > Vin- , sending the output voltage toward the positive rail. When Vout shoots up, it also bumps the Vin+ average to ⅔ · Vsupply.

Because Vout is now high, this starts the process of charging the capacitor through the bottom resistor (R cap ). After a while, the capacitor voltage is bound to exceed ⅔ · Vsupply . The capacitor voltage is also hooked up to the amplifier’s inverting input, and at that point, Vin- begins to exceed Vin+ , nudging the output voltage lower. Stable equilibrium is not possible because this output voltage drop is immediately reflected in the three-way average present on the Vin+ leg, pulling it down and causing the difference between Vin- and Vin+ to widen. This positive feedback loop puts the amplifier firmly into the Vin+ < Vin- territory.

At that point, Vout must drop to 0 V, thus lowering the voltage on the non-inverting leg to ⅓ · Vsupply . With Vout low, the capacitor starts discharging through R cap , but it needs to travel from the current charge state of ⅔ · Vsupply all the way to ⅓ · Vsupply before Vin- becomes lower than Vin+ and the cycle is allowed to restart.

The continued charging and discharging of the capacitor between ⅓ · Vsupply and ⅔ · Vsupply results in periodic oscillation. The circuit produces a square wave signal with a period dictated by the value of C and R cap . The frequency of these oscillations can be approximated analogously to what we’ve done for the discrete-transistor variant earlier on. In a 5 V circuit with R1 = R2 = R3, the capacitor charges and discharges by Δv ≈ 1.67 V. If R cap = 10 kΩ, then the quasi-constant capacitor charging current is I ≈ 2.5 V / 10 kΩ ≈ 250 µA.

Knowing Δv and I , and assuming C = 1 µF, we can tap into the capacitor equation ( Δv = I · t/C ) to solve for t . The result is 6.67 ms. This puts the charge-discharge roundtrip at 13.34 ms, suggesting a frequency of 75 Hz. The actual measurement is shown below:

Oscilloscope trace for the relaxation oscillator. By author.

The observed frequency is about 7% lower than predicted: 70 instead of 75 Hz. Although I could pin this on component tolerances, a more honest explanation is that at Δv ≈ 1.67 V, the constant-current approximation of the capacitor charging process is stretched thin; the segments in the bottom oscilloscope trace diverge quite a bit from a straight line.

Short of reducing R3 to bring down Δv and thus reduce the variations in current, the way to develop a better formula is to tap into the equation for a capacitor charged by a constant voltage via a resistor, as derived here :

\(V_{cap} = V_{in} \cdot (1 - e^{-t \over RC})\)

To make the math simple, we can use ⅓ · Vsupply as the reference point for the calculation. In this view, the “virtual” supply voltage is Vin = ⅔ · Vsupply (because we took away the unused bottom ⅓) and the capacitor is charging from 0 V to Vcap = 50% · Vin ( i.e., ⅓ of ⅔ ).

To find the charging time, we just need to rearrange the R-C formula for the Vcap/Vin ratio, and then solve for t at which the value works out to 50% (0.5):

\(0.5 = {V_{cap} \over V_{in}} = 1 - e^{-t \over RC}\)

After moving 1 to the left and flipping signs, the equation simplifies to:

\(0.5 = e^{-t \over RC}\)

From there, we can take a natural logarithm of both sides:

\(ln(0.5) = { -t \over RC }\)

…and solve for t:

\(t = \underbrace{-ln(0.5)}_{=\ ln(2)} \cdot RC\)

In this particular case, the charging resistor is called Rcap , so the equation should be restated as:

\(t = ln(2) \cdot R_{cap} \cdot C \approx 0.693 \cdot R_{cap} \cdot C\)

The value of t can be used to find the oscillation frequency:

\(f _{osc} = {1 \over 2 \cdot t} \approx {1 \over 1.386 \cdot R_{cap} \cdot C} \approx {0.721 \over R_{cap} \cdot C }\)

If we plug 1 µF and 10 kΩ into the equation, the value works out to 72 Hz, which is within 3% of the observed behavior, comfortably within the tolerances of standard passive components.
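The two estimates are easy to put side by side. The sketch below uses the component values from the article (Rcap = 10 kΩ, C = 1 µF, R1 = R2 = R3) and prints both the constant-current shortcut and the ln(2) formula; the measured value was about 70 Hz.

import math

VSUPPLY = 5.0
R_CAP, C = 10e3, 1e-6

# Constant-current shortcut: the capacitor swings between ⅓ and ⅔ of Vsupply,
# driven by roughly Vsupply/2 across Rcap.
delta_v = VSUPPLY / 3                  # ≈ 1.67 V
i_approx = (VSUPPLY / 2) / R_CAP       # ≈ 250 µA
t_approx = delta_v * C / i_approx      # ≈ 6.67 ms per half-cycle
f_approx = 1 / (2 * t_approx)          # ≈ 75 Hz

# Exact exponential charging: each half-cycle covers half of the remaining
# distance toward the rail, so t = ln(2) · Rcap · C.
t_exact = math.log(2) * R_CAP * C      # ≈ 6.93 ms per half-cycle
f_exact = 1 / (2 * t_exact)            # ≈ 72 Hz

print(f"constant-current estimate: {f_approx:.0f} Hz")
print(f"exact R-C formula:         {f_exact:.0f} Hz")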

The method outlined earlier on is not the only conceptual approach to building oscillators. Another way is to produce resonance. We can do this by taking a standard op-amp voltage follower, which uses negative feedback to control the output, and then messing with the feedback loop in a particular way.

An op-amp voltage follower.

In the basic voltage follower configuration, the op-amp reaches a stable equilibrium when Vin+ ≈ Vin- ≈ Vout . Again, the circuit works only because of the negative feedback loop; in its absence, Vin- would diverge from Vin+ and the output voltage would swing toward one of the supply rails.

To turn this circuit into an oscillator, we can build a feedback loop that normally provides negative feedback, but that inverts the waveform at a particular sine-wave frequency. This turns negative feedback into positive feedback; instead of stabilizing the output voltage, it produces increasing swings, but only at the frequency at which the inversion takes place.

Such a selective waveform inversion sounds complicated, but we can achieve it with a familiar building block: an R-C lowpass filter. The mechanics of these filters are discussed in this article ; in a nutshell, the arrangement produces a frequency-dependent phase shift ranging from 0° (at DC) to -90° (as the frequency approaches infinity). If we cascade a couple of these R-C stages, we can achieve a -180° phase shift at some chosen frequency, which is the same as flipping the waveform.

A minimalistic but well-behaved op-amp solution is shown below:

A rudimentary phase-shift oscillator.

In this particular circuit, an overall -180° shift happens when each of the R-C stages adds its own -60°. It’s easy to find the frequency at which this occurs. In the aforementioned article on signal filtering, we came up with the following formula describing the shift associated with the filter:

\(\theta = -arctan( 2 \pi f R C )\)

Arctangent is the inverse of the tangent function. In a right triangle, the tangent function describes the ratio of lengths of the opposite to the adjacent for a particular angle; the arctangent goes the other way round, giving us an angle for a particular ratio. In other words, if x = tan(α) then α = arctan(x). This allows us to rewrite the equation as:

\(2 \pi f R C = -tan(\theta)\)

We’re trying to solve for f at which θ = -60°; the value of -tan(-60°) is roughly 1.73, so we can plug that into the equation and then move everything except f to the right. Throwing in the component values for the first R-C stage in the schematic, we obtain:

\(f_{osc} \approx {1.73 \over {2 \pi R C}} \approx {1.73 \over {2 \pi \cdot 1 \textrm{ kΩ} \cdot 100 \textrm{ nF}}} \approx 2.75 \textrm{ kHz} \)

You’ll notice that the result is the same for the other two stages: they have higher resistances but proportionally lower capacitances, so the denominator of the fraction doesn’t change.
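As a sanity check, here is a small sketch that solves for the frequency and then evaluates the phase shift of each stage at that frequency. Only the first stage’s values (1 kΩ, 100 nF) are given in the text; the other two follow the ten-times-the-impedance pattern discussed a bit further down, so treat them as illustrative.

import math

STAGES = [(1e3, 100e-9), (10e3, 10e-9), (100e3, 1e-9)]    # (R, C) for each stage

# Frequency at which a single stage contributes -60°: tan(60°) = 2π·f·R·C
f_osc = math.tan(math.radians(60)) / (2 * math.pi * STAGES[0][0] * STAGES[0][1])
print(f"f_osc ≈ {f_osc:.0f} Hz")    # ≈ 2.75 kHz

for r, c in STAGES:
    theta = -math.degrees(math.atan(2 * math.pi * f_osc * r * c))
    print(f"R = {r:>8.0f} Ω, C = {c * 1e9:>5.1f} nF -> {theta:.1f}°")    # ≈ -60° each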

Oscilloscope traces for the circuit are shown below:

Traces for the three R-C stages.

Because the amplifier’s gain isn’t constrained in any way, the output waveform is a square wave. Nevertheless, in a lowpass circuit with these characteristics, the resulting waveforms are close enough to sinusoids that the sine-wave model approximates the behavior nearly perfectly. We can run a discrete-time simulation to show that the sine-wave behavior of these three R-C stages (gray) aligns pretty well with the square-wave case (blue):

A simulation of a square & sine wave passing through three R-C filters.

To make the output a sine wave, it’s possible to tinker with the feedback loop to lower the circuit’s gain, but it’s hard to get it right; insufficient gain prevents oscillation while excess gain produces distortion. A simpler trick is to tap into the signal on the non-inverting leg (bottom oscilloscope trace) and use the other part of a dual op-amp IC to amplify this signal to your heart’s desire.

Some readers might be wondering why I designed the stages so that each of them has an impedance ten times larger than the stage before it. This is to prevent the filters from appreciably loading each other. If all the impedances were in the same ballpark, the middle filter could source currents from the left as easily as it could from the right. In that situation, finding the point of -180° phase shift with decent accuracy would require calculating the transfer function for the entire six-component Franken-filter; the task is doable but — to use a mathematical term — rather unpleasant .

Footnote: in the literature, the circuit is more often constructed using highpass stages and a discrete transistor as an amplifier. I’d wager that most authors who present the discrete-transistor solution have not actually tried it in practice; otherwise, they would have found it to be quite finicky. The version presented in this article is discussed here .

If you enjoyed the content, please subscribe. I’m not selling anything; it’s just a good way to stay in touch with the writers you like.


How Arthur Conan Doyle Explored Men's Mental Health Through Sherlock Holmes

Hacker News
scienceclock.com
2025-11-27 10:54:02
Comments...
Original Article

Note: This article is republished from The Conversation under a Creative Commons license. It includes links to external sites that may earn a commission for purchases. We did not add these links and have kept the original content intact.

Arthur Conan Doyle was not just one of the world’s best crime fiction writers. He was a progressive wordsmith who brought light to controversial and taboo subjects. One of those taboo subjects was male vulnerability and mental health problems – a topic of personal significance to the author.

Doyle was a vulnerable child . His father, Charles, was an alcoholic , which led to financial troubles in the family. Charles was admitted to an asylum in 1881 and spent the next 12 years in various mental care establishments . So began Doyle’s interest in male vulnerability and mental health.

The character of Sherlock Holmes is a true expression of male vulnerability that does not equate it with weakness. Doyle does not represent Holmes as infallible, but as a man others can relate to – he battles with drug addiction, loneliness and depression. His genius thrives in part because of these vulnerabilities, not despite them.

Many of Doyle’s Sherlock Holmes stories examine male characters facing emotional catastrophe, betrayal or moral dilemmas. In works such as The Man with the Twisted Lip (1891), The Adventure of the Engineer’s Thumb (1892) and The Stockbroker’s Clerk (1894), Holmes’s male clients approach him with problems layered with emotional turmoil, fear and failure.

In The Man with the Twisted Lip, for example, a man named Neville St Clair hides his double life. He tells his family that he is a respectable entrepreneur going to London on business. In reality he is begging on the city streets. He lives this double life due to fear and shame over the inability to pay off his debts. “It was a long fight between my pride and the money,” he explains, “but the dollars won at last.”


“I would have endured imprisonment, ay, even execution, rather than have left my miserable secret as a family blot to my children,” St Clair says. In having his character consider execution to protect his and his family’s reputation, Doyle explored the societal expectations of Victorian masculinity and how men struggled with such pressures.

The Stockbroker’s Clerk also examines male suicide, as well as economic and professional anxieties. When Holmes reveals the crimes of Harry Pinner, the man attempts suicide rather than face prison.

In The Engineer’s Thumb, hydraulic engineer Victor is treated physically by Watson and mentally by Holmes. As Doyle writes: “Round one of his hands he had a handkerchief wrapped, which was mottled all over with bloodstains. He was young, not more than five-and-twenty, I should say, with a strong masculine face; but he was exceedingly pale and gave me the impression of a man who was suffering from some strong agitation, which it took all his strength of mind to control.”

The physical injury marks Victor as a victim of physical violence. Watson suggests that Victor is using all his mental capabilities to keep calm about his severe pain. Holmes treats Victor’s mind as he listens to his story: “Pray lie down there and make yourself absolutely at home. Tell us what you can, but stop when you are tired, and keep up your strength with a little stimulant.”


Holmes is a protector, a confidante and a comforter in this scene. He provides Victor with breakfast, induces him to lie down and offers him a stimulant (more than likely brandy).

The extremity of violence that Victor has endured has escalated to mental trauma. In having Holmes treat Victor’s mental trauma while Watson treats his physical pain, Doyle showed the importance of psychological support for men of the age.

Holmes was a highly popular character. To contemporary readers, his drug use and dysfunctional clients were seen as markers of his genius rather than a reflection of the significant social issues that men faced during this period. But today, they offer a window into the mental struggles of Victorian men, and a point of connection between readers of the past and present.




This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org . If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.

Emma Linford , Honorary research associate, English literature, University of Hull

This article is republished from The Conversation under a Creative Commons license. Read the original article .


Lazy Linearity for a Core Functional Language (POPL 2026)

Lobsters
alt-romes.github.io
2025-11-27 09:39:17
Comments...
Original Article

I’m very proud to announce that “Lazy Linearity for a Core Functional Language”, a paper by myself and Bernardo Toninho , will be published at POPL 26 !

The extended version of the paper, which includes all proofs, is available here [ arXiv , PDF ].

The short-ish story : In 2023, for my Master’s thesis, I reached out to Arnaud Spiwack to discuss how Linear Types had been implemented in GHC. I wanted to research compiler optimisations made possible by linearity. Arnaud was quick to tell me:

“Well yes, but you can’t!”

“Even though Haskell is linearly typed, Core isn’t!” 1

Linearity is ignored in Core because, as soon as it’s optimised, previously valid linear programs become invalid. It turns out that traditional linear type systems are too syntactic, or strict , about understanding linearity – but Haskell, regardless of linear types, is lazily evaluated.

Our paper presents a system which, in contrast, also accepts programs that can only be understood as linear under non-strict evaluation. Including the vast majority of optimised linear Core programs (with proofs!).

The key ideas of this paper were developed during my Master’s, but it took a few more years of on-and-off work (supported by my employer Well-Typed ) with Bernardo to crystalize the understanding of a “lazy linearity” and strengthen the theoretical results.

Now, the proof of the pudding is in the eating. Go read it!

Abstract

Traditionally, in linearly typed languages, consuming a linear resource is synonymous with its syntactic occurrence in the program. However, under the lens of non-strict evaluation, linearity can be further understood semantically, where a syntactic occurrence of a resource does not necessarily entail using that resource when the program is executed. While this distinction has been largely unexplored, it turns out to be inescapable in Haskell’s optimising compiler, which heavily rewrites the source program in ways that break syntactic linearity but preserve the program’s semantics. We introduce Linear Core, a novel system which accepts the lazy semantics of linearity statically and is suitable for lazy languages such as the Core intermediate language of the Glasgow Haskell Compiler. We prove that Linear Core is sound, guaranteeing linear resource usage, and that multiple optimising transformations preserve linearity in Linear Core while failing to do so in Core. We have implemented Linear Core as a compiler plugin to validate the system against linearity-heavy libraries, including linear-base.

  1. Core is the intermediate compiler language to which source Haskell is desugared and to which optimisations are applied ↩︎

Keep Talking About Gaza at Your Thanksgiving Table

Intercept
theintercept.com
2025-11-27 09:00:00
The so-called ceasefire might seem like a good excuse to bury the hatchet and enjoy a quieter family dinner, but it’s not.
Original Article
Relatives of Palestinians who lost their lives in Israeli attacks that violated the ceasefire in the Gaza Strip mourn at the Aqsa Martyrs Hospital in Deir al-Balah, Gaza, on Nov. 23, 2025. Photo: Abdalhkem Abu Riash/Anadolu via Getty Images

If Israel’s genocide in Gaza has been a site of tension in your family for the last two Thanksgiving holidays, this year should be no different. The so-called ceasefire might seem like a good excuse to bury the hatchet and enjoy a quieter turkey dinner, but when we look at the harrowing status quo for Palestinians in Gaza today, there is no peace to be thankful for — especially not on a day that marks the remembrance of this country’s own genocide against Indigenous Americans .

To be clear, if two years of livestreamed annihilation have failed to shift your loved ones’ support away from the Israeli ethnostate, I doubt there is anything a dinner table argument could do to persuade them. There can be no reasoning with a worldview that forecloses seeing Palestinians as fully human.

I navigate this with pro-Israel members of my own British Jewish family. It’s painful, and I don’t have any good advice. Whatever your approach with your family, there can be no pretense that the genocide in Gaza is over.

I’ll be thinking of another family this Thanksgiving: that of my student from Gaza.

Families like mine, divided over Israel, are not the important ones here. For my part, I’ll be thinking instead of another family this Thanksgiving: that of my student from Gaza. He escaped in 2024 after Israel bombed his home, killing two of his immediate family members, including his mother. His surviving family are still there, living in tents. He hasn’t heard from them in over two weeks.

It is for families like my student’s that we cannot simply take it easy this Thanksgiving because of the so-called ceasefire in Gaza.

Unending Destruction

While the October 10 agreement has offered some relief for Palestinians, with a significant drop in daily slaughter, displacement, starvation and killings by Israeli forces continue. Instead of being relentless, Israel’s bombings over the last 45 days have simply been ongoing and regular. Israel has killed 345 Palestinians in Gaza, including 120 children, while demolishing over 1,500 structures.

At the same time, only a fraction of the aid trucks which were supposed to enter Gaza daily under the ceasefire agreement have been permitted entry by Israeli forces. Mass, enforced hunger continues in the Strip, where 50 million tons of rubble sits atop well over 10,000 unrecovered bodies.

In the face of such totalizing and unending destruction, it’s hard to find much solace in the fact that the support for the Palestinian cause has grown internationally; that nearly all major international human rights organizations have recognized Israel’s actions as genocidal ; that a major wave of nation-states, including France, Canada, and Britain, moved this year to recognize the state of Palestine . The dead, displaced, and occupied can do little with declarations that carry no concrete consequences.

“What we need is a justice plan,” Mosab Abu Toha, the Palestinian writer and poet, told a U.N. meeting this week. “It is time to stop accepting the illusion of peace processes that only entrench injustices.”

With the state of the world as it stands, it feels unlikely that Israeli leaders will be held accountable for their war crimes any time soon. Justice for Palestine is hard to imagine, but we can continue to apply pressure in ways that have already seen paradigms shift. Zohran Mamdani’s victory in the New York City mayoral election was a genuine victory against the perverse weaponization of antisemitism against Israel’s critics. Now New Yorkers must push our next mayor to uphold commitments to Palestinian solidarity and international law.

And there is more that those of us living in safety can do. We can send funds and share resources, as so many already do. And we can continue heeding and supporting Palestinians’ call for boycotts, divestment, and sanctions against Israeli institutions complicit in occupation and apartheid.

Activists sometimes say, “Solidarity begins at home.” Yet not everyone can choose their home. If you have the great fortune of spending the holidays with loved ones who share your commitments to justice and liberation, I hope your time together is full of joy. Most of the time, though, solidarity actually begins anywhere but home. So if you choose to spend time with your family knowing that it will be fraught, I wish you luck. The weekend will pass, and there’s urgent work to be done.

DNS Firewalling with MISP and Technitium DNS Server

Hacker News
zaferbalkan.com
2025-11-27 07:38:28
Comments...

Ray Marching Soft Shadows in 2D

Hacker News
www.rykap.com
2025-11-27 07:31:24
Comments...
Original Article

Disclaimer: the demos on this page use WebGL features that aren’t available on some mobile devices.

A couple of weeks ago I tweeted a video of a toy graphics project (below). It’s not done, but a lot of people liked it which was surprising and fun! A few people asked how it works, so that’s what this post is about.

Under the hood it uses something called a distance field. A distance field is an image like the one below that tells you how far each pixel is from your shape. Light grey pixels are close to the shape and dark grey pixels are far from it.

When the demo starts up, it draws some text on a 2D canvas and generates a distance field of it. It uses a library I wrote that generates distance fields really quickly. If you’re curious how the library works, I wrote about that here .

Our lighting scheme works like this: when processing a particular pixel we consider a ray from it to the light, like so…

If the ray intersects a glyph, the pixel we’re shading must be in shadow because there’s something between it and the light.

The simplest way to check this would be to move along the ray in 1px increments, starting from the pixel we’re shading and ending at the light, repeatedly asking the distance field if we’re distance 0 from a shape. This would work, but it’d be really slow.

We could pick some specific length like 30px and move in increments of that size, but then we risk jumping over glyphs that are smaller than 30px. We might think we’re not in shadow when we should be.

Ray marching’s core idea is this: the distance field tells you how far you are from the closest glyph. You can safely advance along your ray by that distance without skipping over any glyphs.

Let’s walk through an example. We start as pictured above and ask the distance field how far we are from any glyph. Turns out in this case that the answer is 95px (pictured left). This means that we can move 95px along our ray without skipping over anything!

Now we’re a little closer to the light. We repeat the process until we hit the ascender of the b! If the b glyph weren’t there, we’d have kept going until we hit the light.

Below is a demo that shows the ray marching steps for a given pixel. The red box is the pixel we’re shading, and each circle along the ray represents a ray marching step and the distance from the scene at that step.

Try dragging the light and the pixel around to build an intuition for it.

Below is GLSL to implement this technique. It assumes you’ve defined a function getDistance that samples the distance field.

vec2 rayOrigin = ...;
vec2 rayDirection = ...;

float rayProgress = 0.;
while (true) {
  if (rayProgress > distance(rayOrigin, lightPosition)) {
    // We hit the light! This pixel is not in shadow.
    return 1.;
  }

  float sceneDist = getDistance(
    rayOrigin + rayProgress * rayDirection);
  if (sceneDist <= 0.) {
    // We hit a shape! This pixel is in shadow.
    return 0.;
  }

  rayProgress += sceneDist;
}

It turns out that some pixels are really expensive to process. So in practice we use a for-loop instead of a while loop – that way we bail out if we’ve done too many steps. A common “slow case” in ray marching is when a ray is parallel to the edge of a shape in the scene…

The approach I’ve described so far will get you a scene that looks like the one below.

It’s cool, but the shadows are sharp which doesn’t look very good. The shadows in the demo look more like this…

One big disclaimer is that they’re not physically realistic! Real shadows look like hard shadows where the edges have been fuzzed. This approach does something slightly different: all pixels that were previously in shadow are still fully in shadow. We’ve just added a penumbra of partially shaded pixels around them.

The upside is that they’re pretty and fast to compute, and that’s what I care about! There are three “rules” involved in computing them.

Rule 1: The closer a ray gets to intersecting a shape, the more its pixel should be shadowed. In the image below there are two similar rays (their distances to the shape pictured in yellow and green). We want the one that gets closer to touching the corner to be more shadowed.

This is cheap to compute because the variable sceneDist tells us how far we are from the closest shape at each ray marching step. So the smallest value of sceneDist across all steps is a good approximation for the yellow and green lines in the image above.

Rule 2: if the pixel we’re shading is far from the point where it almost intersects a shape, we want the shadow to spread out more.

Consider two pixels along the ray above. One is closer to the almost-intersection and is lighter (its distance is the green line). The other is farther and darker (its distance is the yellow line). In general: the further a pixel is from its almost intersection, the more “in shadow” we should make it.

This is cheap to compute because the variable rayProgress is the length of the green and yellow lines in the image above.

So: we previously returned 1.0 for pixels that weren’t in shadow. To implement rules 1 and 2, we compute sceneDist / rayProgress on each ray marching step, keep track of its minimum value, and return that instead.

vec2 rayOrigin = ...;
vec2 rayDirection = ...;
float rayProgress = 0.;
float stopAt = distance(samplePt, lightPosition);
float lightContribution = 1.;
for (int i = 0; i < 64; i++) {
  if (rayProgress > stopAt) {
    return lightContribution;
  }

  // `getDistance` samples our distance field texture.
  float sceneDist = getDistance(
    rayOrigin + rayProgress * rayDirection);
  if (sceneDist <= 0.) {
    // We hit a shape! This pixel is in shadow.
    return 0.;
  }

  lightContribution = min(
    lightContribution,
    sceneDist / rayProgress
  );

  rayProgress += sceneDist;
}

// Ray-marching took more than 64 steps!
return 0.;

This ratio feels kind of magical to me because it doesn’t correspond to any physical value. So let’s build some intuition for it by thinking through why it might take on particular values…

  • If sceneDist / rayProgress >= 1 , then either sceneDist is big or rayProgress is small (relative to each other). In the former case we’re far from any shapes and we shouldn’t be in shadow, so a light value of 1 makes sense. In the latter case, the pixel we’re shadowing is really close to an object casting a shadow and the shadow isn’t fuzzy yet, so a light value of 1 makes sense.

  • The ratio is 0 only when sceneDist is 0 . This corresponds to rays that intersect an object and whose pixels are in shadow.

And here’s a demo of what we have so far…

Rule #3 is the most straightforward one: light gets weaker the further you get from it.

Instead of returning the minimum value of sceneDist / rayProgress verbatim, we multiply it by a distanceFactor which is 1 right next to the light, 0 far away from it, and gets quadratically smaller as you move away from it.

All together, the code for the approach so far looks like this…

vec2 rayOrigin = ...;
vec2 rayDirection = ...;
float rayProgress = 0.;
float stopAt = distance(samplePt, lightPosition);
float lightContribution = 1.;
for (int i = 0; i < 64; i++) {
  if (rayProgress > stopAt) {
    // We hit the light!
    float LIGHT_RADIUS_PX = 800.;

    // fadeRatio is 1.0 next to the light and 0. at
    // LIGHT_RADIUS_PX away.
    float fadeRatio =
      1.0 - clamp(stopAt / LIGHT_RADIUS_PX, 0., 1.);

    // We'd like the light to fade off quadratically instead of
    // linearly.
    float distanceFactor = pow(fadeRatio, 2.);
    return lightContribution * distanceFactor;
  }

  // `getDistance` samples our distance field texture.
  float sceneDist = getDistance(rayOrigin + rayProgress * rayDirection);
  if (sceneDist <= 0.) {
    // We hit a shape! This pixel is in shadow.
    return 0.;
  }

  lightContribution = min(
    lightContribution,
    sceneDist / rayProgress
  );

  rayProgress += sceneDist;
}

// Ray-marching took more than 64 steps!
return 0.;

I forget where I found this soft-shadow technique, but I definitely didn’t invent it. Inigo Quilez has a great post on it where he talks about using it in 3D.

Inigo’s post also talks about a gotcha with this approach that you might have noticed in the demos above: it causes banding artifacts. This is because Rule 1 assumes that the smallest value of sceneDist across all steps is a good approximation for the distance from a ray to the scene. This is not always true because we sometimes take very few ray marching steps.

So in my demo I use an improved approximation that Inigo writes about in his post. I also use another trick that is more effective but less performant: instead of advancing by sceneDist on each ray marching step, I advance by something like sceneDist * randomJitter where randomJitter is between 0 and 1 .

This improves the approximation because we’re adding more steps to our ray march. But we could do that by advancing by sceneDist * .3 . The random jitter ensures that pixels next to each other don’t end up in the same band. This makes the result a little grainy which isn’t great. But I think looks better than banding… This is an aspect of the demo that I’m still not satisfied with, so if you have ideas for how to improve it please tell me!

Overall my demo has a few extra tweaks that I might write about in future but this is the core of it. Thanks for reading! If you have questions or comments, let me know on Twitter .

Thank you to Jessica Liu, Susan Wang, Matt Nichols and Kenrick Rilee for giving feedback on early drafts of this post! Also, if you enjoyed this post you might enjoy working with me at Figma !

Mixpanel Security Breach

Hacker News
mixpanel.com
2025-11-27 07:02:40
Comments...

GameShell: a game to learn (or teach) how to use standard commands in a Unix shell

Lobsters
github.com
2025-11-27 06:57:17
Comments...
Original Article

GameShell: a "game" to teach the Unix shell

Illustration inspired by the game

Teaching first-year university students or high schoolers to use a Unix shell is not always the easiest or most entertaining of tasks. GameShell was devised as a tool to help students at the Université Savoie Mont Blanc to engage with a real shell, in a way that encourages learning while also having fun.

The original idea, due to Rodolphe Lepigre, was to run a standard bash session with an appropriate configuration file that defined "missions" which would be "checked" in order to progress through the game.

Here is the result...

GameShell's first mission

GameShell is available in English, French and Italian.

Feel free to send us your remarks, questions or suggestions by opening issues or submitting pull requests . We are particularly interested in any new missions you might create!

Getting started

GameShell should work on any standard Linux system, and also on macOS and BSD (but we have run fewer tests on the latter systems). On Debian or Ubuntu, the only dependencies (besides bash ) are the gettext-base and awk packages (the latter is generally installed by default). Some missions have additional dependencies: these missions will be skipped if the dependencies are not met. On Debian or Ubuntu, run the following command to install all game and mission dependencies.

$ sudo apt install gettext man-db procps psmisc nano tree ncal x11-apps wget

Check the user manual to see how to install the game dependencies on other systems (macOS, BSD, ...).

Assuming all the dependencies are installed, you can try the latest version of the game by running the following two commands in a terminal.

$ wget https://github.com/phyver/GameShell/releases/download/latest/gameshell.sh
$ bash gameshell.sh

The first command will download the latest version of the game in the form of a self-extracting archive, and the second command will initialise and run the game from the downloaded archive. Instructions on how to play are provided in the game directly.

Note that when you quit the game (with control-d or the command gsh exit ) your progression will be saved in a new archive (called gameshell-save.sh ). Run this archive to resume the game where you left it.

If you prefer not running foreign shell scripts on your computer, you can generate a Docker image with the following:

$ mkdir GameShell; cd GameShell
$ wget --quiet https://github.com/phyver/GameShell/releases/download/latest/Dockerfile
$ docker build -t gsh .
$ docker run -it gsh

The game will NOT be saved when you exit, and additional flags are required if you want to run X programs from inside GameShell. Refer to this section of the user manual.

Documentation

To find out more about GameShell, refer to the following documents:

  • The user manual provides information on how to run the game on all supported platforms (Linux, macOS, BSD), explains how to run the game from the sources, tells you how to generate custom game archives (which is useful if you want to use GameShell for teaching a class), and more.
  • The developer manual provides information on how to create new missions, how to translate missions, and how to participate in the development of the game.

Who is developing GameShell?

Developers

The game is currently being developed by:

Mission contributors

  • Pierre Hyvernat
  • Rodolphe Lepigre
  • Christophe Raffalli
  • Xavier Provencal
  • Clovis Eberhart
  • Sébastien Tavenas
  • Tiemen Duvillard

Translation

Italian Version

  • Daniele Scasciafratte (@mte90)
  • Paolo Mauri (@maupao)
  • Marco Ciampa (@ciampix)
  • Antonio Vivace (@avivace)
  • Lorenzo Millucci (@lmillucci)
  • Sirio Negri (@ziriuz84)
  • Domenico Mammola (@domenicomammola)
  • Leonardo Canello (@anulo2)
  • @michirod
  • @serhack
  • WhiteShield (@wshield05)
  • @gioisco

Special thanks

  • All the students who found many bugs in the early versions.
  • Joan Stark (a.k.a, jgs), who designed hundreds of ASCII-art pieces in the late 90s. Most of the ASCII-art encountered in GameShell are due to her.

Licence

GameShell is released under the GPLv3 .

Please link to this repository if you use GameShell.

GameShell is open source and free to use. One way you can acknowledge the work it required is by sending an actual postcard to

  Pierre Hyvernat
  Laboratoire de Mathématiques, CNRS UMR 5127
  Université de Savoie
  73376 Le Bourget du Lac
  FRANCE

Sanders, Warren Help Form Senate Democratic ‘Fight Club’ Challenging Schumer’s Leadership

Portside
portside.org
2025-11-27 05:49:27
Original Article

Senators Bernie Sanders and Elizabeth Warren | CNN

Angered by the Democratic leadership’s fecklessness and lack of a bold vision for the future, a group of senators including Bernie Sanders of Vermont and Elizabeth Warren of Massachusetts has formed an alliance to push back on Senate Minority Leader Chuck Schumer and the party’s campaign arm ahead of next year’s critical midterm elections .

The existence of the group, known as the “Fight Club,” was first revealed Monday by the New York Times , which reported that the senators are pressing the Democratic Party to “embrace candidates willing to challenge entrenched corporate interests, fiercely oppose the Trump administration , and defy their own party’s orthodoxy.”

Sens. Chris Van Hollen of Maryland, Tina Smith of Minnesota, and Chris Murphy of Connecticut are also members of the alliance, and other senators—including Ed Markey of Massachusetts and Jeff Merkley of Oregon—have taken part in group actions, according to the Times .

“The coalition of at least half a dozen senators... is unhappy with how Mr. Schumer and his fellow senator from New York, Kirsten Gillibrand , the head of Senate Democrats’ campaign arm, have chosen, recruited and, they argue, favored candidates aligned with the establishment,” the newspaper reported. “The party’s campaign arm, the Democratic Senatorial Campaign Committee, has not made any formal endorsements in contested primaries. However, the senators are convinced that it is quietly signaling support for and pushing donors toward specific Senate candidates: Rep. Angie Craig in Minnesota, Rep. Haley Stevens in Michigan, and Gov. Janet Mills in Maine .”

Members of the “Fight Club” have endorsed Minnesota Lt. Gov. Peggy Flanagan’s bid for US Senate . In addition to Flanagan, Sanders has backed Abdul El-Sayed’s US Senate run in Michigan and Graham Platner’s campaign to unseat Republican Sen. Susan Collins in Maine.

Platner’s top opponent in the primary race, Mills, was “ aggressively recruited ” by Schumer.

News of the “Fight Club” alliance comes after a small group of centrist Democrats, with Schumer’s tacit blessing, capitulated to President Donald Trump and Republicans earlier this month by agreeing to end the government shutdown without an extension of Affordable Care Act subsidies, even as health insurance premiums skyrocket nationwide.

The cave sparked widespread fury, much of it directed at Schumer. Indivisible, a progressive advocacy group that typically aligns with Democrats, has said it will not support any Senate Democratic primary candidate who does not call on Schumer to step down as minority leader.

“We must turn the page on this era of cowardice,” Indivisible said following Senate Democrats’ capitulation. “We must nominate and elect Democratic candidates who have an actual backbone. And we must ensure that the kind of failed leadership we see from Sen. Schumer does not doom a future Democratic majority.”

Thus far, no sitting member of the Senate Democratic caucus has demanded Schumer’s resignation. But the emergence of the “Fight Club” is the latest evidence that the Democratic leader’s support is beginning to crumble.

“Absolutely love to see this,” progressive strategist Robert Cruickshank wrote on social media in response to the Times reporting. “So glad there are some Senate Dems willing to fight back.”

Jake Johnson is a senior editor and staff writer for Common Dreams.

20 States Sue the Trump Administration Over Cuts to Homeless Permanent Housing Funding

Portside
portside.org
2025-11-27 05:34:55
Original Article

The new conditions placed on the program would also give HUD the ability to deny funding for organizations that acknowledge the existence of transgender or nonbinary individuals.

“Communities across the country depend on Continuum of Care funds to provide housing and other resources to our most vulnerable neighbors,” said James in a press release. “These funds help keep tens of thousands of people from sleeping on the streets every night. I will not allow this administration to cut off these funds and put vital housing and support services at risk.”

The coalition of mainly Democratic-led states argues in the lawsuit that HUD’s new conditions on the funding are “unlawful and unconstitutional,” alleging that the administration “cannot impose its own conditions on funds that Congress mandated should be distributed based solely on need.”

The lawsuit accuses the Trump administration of violating the Administrative Procedure Act and Congress’ “constitutional power to control spending.”

“HUD is dismayed that the plaintiffs have chosen to misuse the Courts and pursue this delaying tactic to serve their own personal political agenda at the expense of the homeless individuals, youth and families now living on our Nation’s streets. Their use of the courts for political means seeks to prevent nearly $4 billion of aid to flow nationwide to assist those in need. HUD intends to mount a vigorous defense to this meritless legal action,” the spokesperson said in a statement.

The case was filed in the U.S. District Court for the District of Rhode Island and will be decided by Judge Mary S. McElroy, who was appointed by President Donald Trump in 2019 but first nominated by former President Barack Obama.

McElroy has blocked other recent funding cuts and freezes by the Trump administration.

Earlier this month, HUD imposed a cap on the amount of program funds that can support permanent housing. Previously, there was not a specific limit and around 90 percent of funds supported permanent housing. Under the new cap, no more than 30 percent of these funds can support permanent housing.

HUD Secretary Scott Turner has argued that the policy change is a necessary shift from what the Trump administration considers to be a failed “housing first” model that prioritizes permanent housing without preconditions, such as getting a job or seeking treatment. The agency has said the current policy has fueled a “homeless industrial complex” and does not address the root causes of homelessness.

“What we’ve done is take this Biden-era slush fund, called the Continuum of Care, and turned it into not just housing, but also treatment and transitional housing,” Turner said on Fox Business last week.

The funding cuts could put 170,000 people at risk of experiencing homelessness, according to internal HUD documentation previously obtained by POLITICO. HUD has maintained that the changes will include specific protections for children, veterans and seniors.

Different factions of lawmakers have sent letters to the agency with multiple requests, including extending funding for CoC projects expiring in 2026, reversing the policy changes or answering various questions about implementation.

Additionally, 1,001 national, state and local organizations sent a letter to Congress on Monday urging that lawmakers include language directing HUD to renew all existing CoC program grants expiring in 2026 for a full year in the upcoming Transportation-Housing and Urban Development appropriations bill.

A group of 22 House Republicans asked for the same one-year funding extension in a letter to the agency earlier this month.

House and Senate Democrats have urged in letters to HUD to rescind the policy change, submit documentation on how the agency will complete the quick application turnaround for housing project funding and extend funding for grants expiring in 2026.

Senate Banking ranking member Elizabeth Warren (D-Mass.) said in a statement that Trump’s “draconian changes to the Continuum of Care program could force 170,000 people out of permanent housing and back onto the street. Congress, state leaders, all of us should be pushing back against the Administration’s cruel move that will dramatically exacerbate the homelessness crisis in cities, towns, and suburbs across the country.”

Rep. Mike Flood (R-Neb.), chair of the House Financial Services Subcommittee on Housing and Insurance said that while he doesn’t typically discuss pending litigation, he’s “been working with the administration on policy to build more housing, drive housing costs down, and ensure that existing federal funds are spent in a way that rewards success and drives positive results for the American people.”

Other states included as plaintiffs in the lawsuit are Arizona, California, Colorado, Connecticut, Delaware, Illinois, Kentucky, Maine, Maryland, Massachusetts, Michigan, Minnesota, New Jersey, Oregon, Pennsylvania, Rhode Island, Vermont, Washington and Wisconsin, as well as the District of Columbia.

Katherine Hapgood reports on economic and small business policy in Congress at POLITICO.

Russ Allbery: Review: A Matter of Execution

PlanetDebian
www.eyrie.org
2025-11-27 05:34:00
Review: A Matter of Execution, by Nicholas & Olivia Atwater Series: Tales of the Iron Rose #0 Publisher: Starwatch Press Copyright: 2024 ISBN: 1-998257-08-8 Format: Kindle Pages: 131 A Matter of Execution is ...
Original Article

A Matter of Execution is the introductory novella that kicked off the Tales of the Iron Rose series. It is steampunk fantasy with airships. I previously read and reviewed the subsequent novel, Echoes of the Imperium .

As noted in that review, I read the novel first. That was a mistake; this is a much better place to start. A Matter of Execution was clearly intended as the introduction of all of these characters. More importantly, I think reading the novella first would have given me enough affinity with the characters to not mind the worst part of Echoes of the Imperium : the extremely slow first half that seemed filled with the protagonist's impostor syndrome.

A Matter of Execution opens, fittingly, with Captain William Blair, a goblin, former Imperial soldier, Oathbreaker, and series first-person protagonist being carted to his execution. He is not alone; in the same prison wagon is an arrogant (and racist) man named Strahl, the killer of one of the rulers of Lyonesse.

Strahl is rather contemptuous of Blair's claim to be a captain, given that he's both a goblin and an Oathbreaker. Strahl quickly revises that opinion when Blair's crew, somewhat predictably given that he is the series protagonist, creates a daring escape for both of them. The heat of action gives both a chance to gain some respect for the other, which explains why Blair is not only willing to invite Strahl to join his crew, but to go back for Strahl's companion.

Breaking out Strahl's companion will be a more difficult, and surprising, problem.

Nicholas Atwater is a role-playing game GM, something that you will learn in the "about the author" section at the end of this novella but probably will have guessed by then. Even more than Echoes of the Imperium , this novella feels like a (good) write-up of an RPG adventure. A wildly varied cast of characters come together and form a party with a well-defined objective that has some surrounding mysteries and surprises. Each of those characters gets their individual moments to show off their specific skills. Readers with a certain gaming background will know exactly where to insert the Borderlands -style title card with a slightly demented description of each character.

This is not a complaint. You may be able to see the bones of the setup adventure for a long-running campaign, but I like this style of character introduction and the story moves right along. There are a ton of varied characters, some interesting villains and maybe-villains, a rather satisfying heist setup, and some good chemistry and a bit of banter. This is not a deep story — it's clearly an introductory episode for both the characters and the world background — but it's a fun way to spend a few hours.

I think the best part of this series is the world-building. If you have read my review of Echoes of the Imperium , you have unfortunately been mildly spoiled for the revelation in this novella. I don't think it hurt the story that much; you will be able to predict what obvious gaps in the novel backstory the novella is going to fill in, but it's just as enjoyable to see how that happens. But the Atwaters aren't going to drop any of the big world-building bombs in the introductory novella, of course. Instead, you get a gradual introduction to the nature of magic in this world, some of the political setup of the recent war, and a quick introduction to the capabilities of Strahl's mysterious companion.

If you've not yet read this series, I recommend starting here. It's a quick investment to see if you'll be interested. The novel is heavier and slower, and the pacing of the first half isn't great, but the world-building is even better.

If you've already read the novel, this is still worth reading as long as you enjoyed it. You'll have a few moments of "oh, that's how that happened," and it's a fun and fast-moving way to spend a bit more time with the characters.

Followed by Echoes of the Imperium . The back matter of the novella says that The Winds of Fortune is supposedly forthcoming.

Rating: 7 out of 10

Reviewed: 2025-11-26

Progressive Mayors vs. Developers

Portside
portside.org
2025-11-27 05:26:24
Progressive Mayors vs. Developers Mark Brody Thu, 11/27/2025 - 00:26 ...
Original Article

In New York, one of the toughest challenges that Mayor-elect Zohran Mamdani faces will be preserving and increasing the supply of affordable housing. Same story in Boston, where I live and where our progressive mayor, Michelle Wu, is constrained by similar forces.

The immediate obstacles are a scarcity of buildable land and subsidy dollars. In both cities, higher taxes to support more housing require the approval of state government.

New York has a form of rent control, known as rent stabilization, but most New Yorkers do not live in rent-stabilized apartments. Boston once had rent control, but the state legislature took it away in 1994. Local option rent control will be back before voters next year via a ballot initiative.

But behind all of these challenges is the sheer political power of developers. Let me give a couple of emblematic examples.

Thirty years ago, Boston had massive tracts of vacant developable land in a part of the waterfront that was a jumble of parking lots, warehouses, and piers. It had not been developed partly because its ownership was patchwork, and partly because Boston was still emerging from a prolonged recession.

The land area totaled about 1,000 acres, only slightly less than Boston’s entire historic downtown. It represented the city’s last large-scale building opportunity.

The Boston Redevelopment Authority (BRA) gradually got control of the land, rebranded it as the Seaport District, then as the Innovation District, and in 1999 began working with private developers to create a whole new section of the city with hotels, office buildings, restaurants, and luxury housing. Number of affordable housing units: fewer than 500.

Why? Because the BRA and the two mayors of that era, Tom Menino (1993–2014) and Marty Walsh (2014–2021), were close allies of developers, and luxury pays. The total public subsidy for the Seaport/Innovation District is hard to calculate, because it is a mix of land assembly, roads, infrastructure, and tax breaks, but it easily runs into the billions. Think of the affordable housing that might have been built.

In addition to being a case study of how not to develop affordable housing, the Innovation District is a case study of how not to do transportation and climate remediation. It is exactly at sea level, and the city imposed hardly any building standards to protect against sea level rise. Thus its nickname: The Inundation District. And no subway line was extended to the new district, creating parking problems.

This all occurred not because planners are stupid. It occurred because of the political power of developers.

Now, Boston finally has a mayor who is not in the pocket of developers, Michelle Wu. But that one last giant tract is pretty well filled up.

Developers were so anxious about not having an ally in City Hall that they poured money into the campaign of billionaire Josh Kraft, a carpetbagger from the suburbs whom Wu so thoroughly trounced in the September preliminary election that he dropped out before the November final.

But winning an election overwhelmingly is not the same as having adequate resources. And even if developers no longer control City Hall, they pretty well control the legislature. So Boston is unlikely to get the taxing resources that it needs to build more affordable housing.

IN NEW YORK, THERE IS NOTHING QUITE COMPARABLE to the Seaport District, but a wasted opportunity on a smaller scale is the development called Hudson Yards on the far West Side of Manhattan. Built on giant platforms over rail lines, the heart of Hudson Yards is a giant indoor mall plus luxury housing.

Think about New York City for a minute. One of its many great qualities is the street-level retail of all kinds. New York needs a suburban-style indoor shopping mall like the proverbial bull needs proverbial teats. Plus, the region already has them: It’s called New Jersey.

But there was money to be made, so in 2005 the city cut a deal (finalized by the city council in 2013) with billionaire Steve Ross and his Related Companies to develop Hudson Yards with a total of 13,500 housing units, of which some 4,000 were to be affordable. In the end, only about 600 affordable units were produced. The average Hudson Yards condo in 2025 has sold for $7.4 million.

At the time, the mayor was (of course) Michael Bloomberg, a civic liberal in some respects but the ultimate ally of real estate developers.

Here is another telling irony. One of the nearby public amenities that makes Hudson Yards and the surrounding areas so commercially valuable is a wonderful quirky walkway called the High Line. It began life as an abandoned elevated railroad track. When I lived in West Greenwich Village as a young writer, it went right through my neighborhood.

In the 1990s, a local group, Friends of the High Line, came up with the improbable idea of developing it into a greened pathway. They persuaded very skeptical city officials to let them try, and the idea has succeeded spectacularly. The High Line is now a charming elevated park. It is so attractive that luxury housing has been built all along it, and it is one of the attractions of the nearby Hudson Yards.

So something that began as a loving, volunteer civic endeavor has become one more subsidy to billionaires. The ghost of the economist Henry George would understand. He proposed a tax on the unearned increment in land values.

There is a second phase of Hudson Yards still in the planning stage. The original developer bailed out, and various city agencies are still in final negotiations with the latest developer. The city has spent billions in various subsidies for Hudson Yards. Here is where Mayor-elect Mamdani comes in. He could demand a lot more affordable housing.

THERE IS ONE MORE WAY THAT DEVELOPERS have choked off the supply of affordable housing in places like Boston and New York. That is by converting subsidized apartments intended for low- or middle-income people into luxury housing. Many federal programs allow this to be done as soon as the original mortgage is paid off.

In New York, many moderate-income complexes built with tax subsidies, such as Stuyvesant Town and Peter Cooper Village, have been converted to luxury housing. Likewise for many New York apartments built as middle-class housing under a city-state program that used tax-exempt bonds and loans with low interest rates, called the Mitchell-Lama program.

One of the prime offenders, who got very rich converting Mitchell-Lama apartments, was a developer named … wait for it … Steve Witkoff. Yes, the same Trump crony who sold out middle-income New York renters is now reborn as Trump’s foreign-policy guy in charge of selling out Ukraine.

These conversions could not have been done without the approval of city officials. This is another reflection of the same political power of developers. The big developers were huge campaign contributors to the opponents of Mamdani because they appreciated that he could put a stop to this thievery. Let’s hope that he does.

Robert Kuttner is co-founder and co-editor of The American Prospect, and professor at Brandeis University’s Heller School. His latest book is Going Big: FDR’s Legacy, Biden’s New Deal, and the Struggle.

The Goal of Socialism Is Everything

Portside
portside.org
2025-11-27 05:01:47
The Goal of Socialism Is Everything Mark Brody Thu, 11/27/2025 - 00:01 ...
Original Article

On Saturday, November 22, Jacobin founding editor Bhaskar Sunkara delivered the keynote at New York City Democratic Socialists of America’s (DSA) biannual organizing conference at the First Unitarian Congregational Society in Brooklyn. Below is a transcript of his remarks on why the Left must win real gains today — but also keep fighting for a socialist society beyond them.

I'm so excited to be here with you all. It feels to me that this is the political moment so many of us have been waiting for and working to build for years.

We’re a month away from one of our comrades becoming mayor. We’ve built a network of socialist elected officials, we have a real organization to call home, and there’s a growing base of support in this city for our immediate demand of taxing the rich to expand public goods.

This moment extends beyond New York — we have a huge political opening in the United States as a whole. But we know that we have that opportunity because millions of people are living through hard times. We have an erratic and authoritarian president, we have an affordability crisis, with millions struggling to pay their bills and to live lives where they’re treated with dignity and respect. We’ve seen the return of forms of nativism and racism that should have been long vanquished by now.

And at a social and economic level, things may get worse very soon.

The country — not just this city — is crying out for principled political leadership. Not just a kind of populist leadership through great figures, though I’m grateful we have one of the greatest figures on our side. I mean class leadership through organization.

The leadership that says that the disparities that we see in our country and the world are not the natural laws of God but the result of a world that human beings have created. The leadership that says that the interests of the working-class majority are distinct from the interest of capitalist elites, and that we need to organize around those interests to win not only a better distribution of wealth within capitalism but a different type of society all together.

God’s Children Can Govern

I joined the Democratic Socialists of America when I was seventeen years old. I don’t need to tell you what DSA was in New York back in 2007. Some of you here remember it. I made so many good friends, but we were lucky if a dozen people showed up to a meeting.

We made progress through the patient, steady work and commitment of those people and the many more who joined later. We were marathon runners for socialism.

This, though, is a moment for sprinting. This is the biggest opening our movement has had in decades. The time we devote to political work in the next few months and years will have an outsize impact in our city and country — for now and for posterity.

But what exactly should we be doing, and how should we relate to both the new mayor’s administration and our other comrades in elected office? In my mind, our tasks as organized socialists outside of government are both different and largely compatible with theirs.

The key demands of our moment are around the affordability agenda. Our mayor-elect will be leading an effort to raise revenue to fund social programs and empower the city’s working class. If Zohran, our other electeds, and the grassroots movement around them deliver positive change in people’s lives, we’ll build a deeper social base for the Left.

Right now, our electoral strength has far outpaced our base. But people are ready for our message and ready for results.

But fundamentally, there are constraints to any sort of social democratic governance. Just as under capitalism, workers are dependent on having profitable firms for jobs. Cities are dependent on big corporations and wealthy people for tax revenue.

Zohran needs to navigate these constraints. He can’t undermine the old regime of accumulation and redistribution without having a replacement for it, and certainly there can’t be a total replacement in one city.

These concerns aren’t new. This is the dilemma of social democracy. This is the tension between our near-term and long-term goals that has existed in the socialist movement for 150 years.

Our elected officials in the near term need to manage capitalism in the interest of workers, while our movement also has a long-term goal of constructing a new system through the self-emancipation of those workers.

We need to see the constraints that Zohran will be under in these structural terms, rather than moral ones. But having patience and being supportive of him doesn’t answer how we reconcile the near and the long — social democracy and socialism.

At the very least, it’s important that we remember the end goal. The great theorist of reformism, Eduard Bernstein, once said that “the goal is nothing, the movement everything.” I think that’s not quite right. If we don’t talk about socialism after capitalism, no one else will. The historic dream of our movement, a world without exploitation or oppression, will be lost.

But we shouldn’t just avoid reformism because we want to feel pure as “true socialists” or as an intellectual pursuit. We should avoid reformism and remember the goal of rupture with capitalism because it can offer a compelling vision of the world to those we’re trying to reach.

Socialism isn’t “Sweden” like Bernie sometimes says. Socialism isn’t even just, as Martin Luther King Jr said and Zohran has beautifully invoked, “a better distribution of wealth for all of God’s children.”

Socialism means a better distribution but also democratic control over the things we all depend on — workers holding the levers of production and investment, and the state guaranteeing the basics of life as social rights.

Socialism means no longer begging corporations to invest in our communities or the rich to stay and pay their taxes.

Socialism means overcoming the labor-capital dialectic through the triumph of labor itself, not a more favorable class compromise.

Socialism means that the people who’ve kept this world alive — the caregivers, the drivers, the machinists, the farmworkers, the cleaners — stop being an invisible backdrop and become the authors of their futures.

Socialism means a society where those who have always given without having any say finally show their true capabilities. Where, as C. L. R. James said, every cook can govern.

Socialism means replacing an economy built on hierarchy and exclusion with one built on the intelligence and creativity of working people themselves.

That is the goal we keep alive. Not because it’s utopian, but because it is the only horizon equal to the dignity and potential of ordinary people.

And because it’s compelling. This isn’t just offering workers some of their surplus value back in exchange for their votes. It’s offering them the future, a society that they can own, a chance to assume their rightful place as agents of history.

Something like this is real socialism. It isn’t an interest group or a label to distinguish ourselves from other progressives. It’s a fundamentally more radical goal than those of our allies. It’s premised on a different analysis of the world around us and the world that can be built.

Perhaps we can think of ways to bridge some of the gap between near and long through a set of demands that at least raise the concept of socialization immediately. Ideas that offer not just more badly needed social welfare but a taste of ownership and control. A hint at a different political economy.

Just one example: when a business closes down or its owners are retiring, workers supported by a public fund could get the first crack at saving it by converting it into a labor-managed firm. At the city level, we can have a municipal office to help workers turn shuttered shops into cooperatives by providing the backbone of legal and accounting support and fast-tracking permits.

We’ve already been talking about city-owned grocery stores and the need for public housing. We need more ideas like these. Reforms that fit within social democracy but gesture beyond it.

Socialism in Our Time

It’s been thrilling to meet people who’ve just joined DSA. It’s been nice to see old friends too. I’ve been complaining about missing the first half of the Knicks game, but even Jalen Brunson can’t keep me away from here.

I’m really enthusiastic about what we can do in the next couple of years. We will improve the lives of millions of people. And we will grow our movement.

But in addition to enthusiasm, we need honesty about how far we still have to go to root ourselves in working-class communities. We need more power not just at the ballot box but at the points of production and exchange. And we need to be honest about the battles and constraints that Zohran will face, and be ready to support him when times get tough.

Zohran’s mayoralty will be a fight for what’s winnable right now. Our job is to let that fight expand, not narrow, our horizon — and to keep alive the goal of socialism in our time.

Bhaskar Sunkara is the founding editor of Jacobin , the president of the Nation magazine, and the author of The Socialist Manifesto: The Case for Radical Politics in an Era of Extreme Inequality .

Generalized Worley Noise

Lobsters
ianthehenry.com
2025-11-27 04:19:36
Comments...
Original Article

Worley noise is a type of noise used for procedural texturing in computer graphics. In its most basic form, it looks like this:

(color r2 (worley q 50 :oversample true))

That’s ugly and boring, but it’s a quick way to see what the effect looks like. If we use Worley noise to distort a 3D shape, we can get something like a hammered or cratered texture:

(def s (osc t 5 | ss 0.2 0.8 * 30 + 10))
(ball 100
| expound (worley p s) (sqrt s)
| slow 0.8
| shade sky
| rotate y (t / 10))

Like many procedural textures, it looks a lot better if you repeat the effect a few times with different frequencies:

(def s (osc t 5 | ss 0.2 0.8 * 30 + 10))
(ball 100
| expound (fbm 3 worley p s) (sqrt s)
| slow 0.8
| shade sky
| rotate y (t / 10))
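
The fbm call in that snippet is doing the standard fractal-sum trick: add a few octaves of the same noise at successively higher frequency and lower amplitude. Here is a rough Python sketch of the idea — not Bauble's actual implementation, and the lacunarity/gain parameters are just the conventional defaults:

def fbm(noise, x, y, octaves=3, lacunarity=2.0, gain=0.5):
    # Sum several "octaves" of the same noise function, each at a higher
    # frequency and a lower amplitude than the one before it.
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * noise(x * frequency, y * frequency)
        amplitude *= gain
        frequency *= lacunarity
    return total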

There are some visual artifacts in these renderings, because they’re using a fast approximation of Worley noise that gives the wrong answer for some values.

To explain these artifacts in more detail, we have to understand a little bit about how Worley noise works.

It’s pretty simple: you start with a grid of points.

(circle 0
| color (hsv (hash $i) 0.5 1)
| tile: $i [30 30]
| expand 3)

Then you move each point by some random offset:

(def animate (osc t 5 | ss 0.1 0.7))
(circle 0
| color (hsv (hash $i) 0.5 1)
| move (hash2 $i * animate * 15)
| tile: $i [30 30] :oversample true
| expand 3)

When you’re writing a shader, you don’t actually have the ability to generate random numbers, so we’re using a hash function to produce random-looking offsets based on the logical position of each point (that is, $i = [0 0] for the center point, $i = [1 0] for the point to the right of that, etc).

Finally, once you have the points at random-looking positions, you compute the distance to the nearest point for every single pixel that you sample – and that’s Worley noise.
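
Stated as code: if you already had all of the jittered points in a flat list, Worley noise would be nothing more than a nearest-point query. A minimal Python sketch of that definition (the grid trick for finding candidate points efficiently is what the rest of this section describes; nothing here is Bauble's API):

import math

def worley_naive(x, y, points):
    # Worley noise at (x, y): the distance to the nearest feature point.
    # `points` is the full list of jittered grid points, e.g. [(3.0, -12.5), ...].
    return min(math.hypot(x - px, y - py) for px, py in points)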

(def animate (osc t 5 | ss 0.1 0.7))
(def points
  (circle 0
  | color (hsv (hash $i) 0.5 1)
  | move (hash2 $i * animate * 15)
  | tile: $i [30 30] :oversample true
  ))

(set background-color 
  (vec3 (shape/distance points / 30)))

(expand points 3)

How do you compute the distance to the nearest point for any pixel you ask about? It’s actually pretty simple: you know that you started with a perfectly even square grid. For any pixel, you can compute the "grid cell" that that pixel falls into ([0 0], [0 1], etc). It’s just the pixel divided by the grid size, rounded to the nearest integer.

(def animate (osc t 5 | ss 0.1 0.7))
(union 
  (color r2 (hsv (hash $i) 0.5 1))
  (circle 3 | color (hsv (hash $i) 0.5 0.1))
| move (hash2 $i * animate * 15)
| tile: $i [30 30] :oversample true :sample-from -1
)

And you know that the nearest point is either in this cell, or in one of the immediately adjacent cells, because we only offset our points by at most half the grid size, so each randomly distributed point is still inside its original grid cell. Which means no point in any farther cell can be nearer than the points in this cell and its immediate neighbors.

(def animate (osc t 5 | ss 0.1 0.7))
(def question-point (rotate [(osc t 5 50 100) 0] (t / 2)))
(def question-cell (question-point / 30 - 1 | round))
(union 
  (color r2 (hsv (hash $i) 0.5 
    (gl/if (<= (max ($i - question-cell | abs)) 1) 1 0.25)))
  (circle 3 | color (hsv (hash $i) 0.5 0.1))
| move (hash2 $i * animate * 15)
| tile: $i [30 30] :oversample true :sample-from -1
| union
  (union (circle 4 | color black) (circle 3 | color white)
  | move question-point))

So that leaves you nine points to check, for every single pixel in your shader. Here’s the optimization that’s causing visual artifacts: instead of checking all nine adjacent cells, only check the current cell and the three cells closest to the point in question. The nearest point to your sample position is probably in one of those cells, but it doesn’t have to be. So you might get some visual artifacts occasionally.
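
Here is that lookup written out in Python: an exact version that scans the full 3x3 neighborhood, and the faster four-cell approximation described above. This is a sketch of the technique rather than Bauble's code, and hash2 below is just a stand-in for any deterministic per-cell hash:

import math

def hash2(ix, iy):
    # Deterministic pseudo-random offset in [-1, 1) per axis for a cell.
    # Any reasonable integer hash works; this one is purely illustrative.
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    h ^= h >> 16
    return ((h & 0xFFFF) / 32768.0 - 1.0,
            ((h >> 16) & 0xFFFF) / 32768.0 - 1.0)

def feature_point(ix, iy, cell):
    # Each cell owns one point: its center, offset by at most half a cell.
    ox, oy = hash2(ix, iy)
    return (ix + 0.5 * ox) * cell, (iy + 0.5 * oy) * cell

def worley_exact(x, y, cell=30.0):
    # Scan the sample's cell plus all eight neighbors: always correct.
    cx, cy = round(x / cell), round(y / cell)
    return min(math.hypot(x - px, y - py)
               for iy in (cy - 1, cy, cy + 1)
               for ix in (cx - 1, cx, cx + 1)
               for px, py in [feature_point(ix, iy, cell)])

def worley_fast(x, y, cell=30.0):
    # Scan only the sample's cell plus the three neighbors it leans toward
    # (four cells instead of nine). Usually gives the same answer, but the
    # rare misses are exactly the artifacts mentioned earlier.
    cx, cy = round(x / cell), round(y / cell)
    sx = 1 if x / cell - cx >= 0 else -1
    sy = 1 if y / cell - cy >= 0 else -1
    cells = [(cx, cy), (cx + sx, cy), (cx, cy + sy), (cx + sx, cy + sy)]
    return min(math.hypot(x - px, y - py)
               for ix, iy in cells
               for px, py in [feature_point(ix, iy, cell)])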

(def animate (osc t 5 | ss 0.1 0.7))
(def question-point (rotate [(osc t 5 50 100) 0] (t / 2)))
(def question-cell (question-point / 30 - 1 | round))
(def question-bias (question-point / 30 | fract | round * 2 - 1 -))
(defn all [bvec] (and bvec.x bvec.y))
(defn or-bvec [a b] [(or a.x b.x) (or a.y b.y)])
(union 
  (color r2 (hsv (hash $i) 0.5 
    (gl/let [offset ($i - question-cell)]
      (gl/if (and (<= (max (abs offset)) 1) 
        (all 
          (or-bvec (equal offset question-bias)
                    (equal offset [0 0]))))
      1 0.25))))
  (circle 3 | color (hsv (hash $i) 0.5 0.1))
| move (hash2 $i * animate * 15)
| tile: $i [30 30] :oversample true :sample-from -1
| union
  (union (circle 4 | color black) (circle 3 | color white)
  | move question-point))
# ahh that took so long

But notice: this is getting a little bit complicated. And the original code snippet I showed you wasn’t very complicated at all.

(def animate (osc t 5 | ss 0.1 0.7))
(def points
  (circle 0
  | color (hsv (hash $i) 0.5 1)
  | move (hash2 $i * animate * 15)
  | tile: $i [30 30] :oversample true
  ))

(set background-color 
  (vec3 (shape/distance points / 30)))

(expand points 3)

Nowhere does that code compute cell coordinates or check for the nearest point. I just constructed this thing, said shape/distance, and somehow that just… gave me the distance to the nearest point.

I was able to do that because Bauble is a playground for making 3D graphics with signed distance functions . Bauble’s whole deal is computing distances to things! And Worley noise is just the signed distance function of a bunch of randomly distributed points. I’m used to thinking of signed distance functions as defining implicit surfaces of 3D shapes, but Worley noise uses the distance as a scalar in its own right.

So.

This is interesting.

What if… we took other signed distance functions, and used them as procedural noise distortions?

We’ll start simple. Instead of points, what if we randomly distribute a bunch of squares?
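
Before looking at the Bauble version, it may help to spell out how little actually changes: the per-cell "distance to a point" simply becomes "signed distance to a shape", and you still take the minimum over nearby cells. Here is a hypothetical Python sketch of the square variant, reusing the same hashed-grid idea as before (again, a sketch, not Bauble's implementation):

import math

def hash2(ix, iy):
    # Same illustrative per-cell hash as in the earlier sketch.
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    h ^= h >> 16
    return ((h & 0xFFFF) / 32768.0 - 1.0,
            ((h >> 16) & 0xFFFF) / 32768.0 - 1.0)

def box_sdf(px, py, half):
    # Signed distance from (px, py) to an axis-aligned square of
    # half-width `half` centered at the origin (negative inside).
    dx, dy = abs(px) - half, abs(py) - half
    outside = math.hypot(max(dx, 0.0), max(dy, 0.0))
    inside = min(max(dx, dy), 0.0)
    return outside + inside

def squarley(x, y, cell=30.0, half=12.0):
    # Worley-style noise where each cell owns a jittered square instead of
    # a point: the value is the minimum signed distance over the 3x3
    # neighborhood of cells. (With extended shapes the 3x3 scan is itself
    # only an approximation, since a nearer surface can occasionally sit
    # in a cell farther away; that is another source of the same kind of
    # artifact discussed above.)
    cx, cy = round(x / cell), round(y / cell)
    best = float("inf")
    for iy in (cy - 1, cy, cy + 1):
        for ix in (cx - 1, cx, cx + 1):
            ox, oy = hash2(ix, iy)
            center_x = (ix + 0.5 * ox) * cell
            center_y = (iy + 0.5 * oy) * cell
            best = min(best, box_sdf(x - center_x, y - center_y, half))
    return best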

(def animate (osc t 5 | ss 0.1 0.7))
(def size 60)
(def points
  (rect size
  | color (hsv (hash $i) 0.5 1)
  | move (hash2 $i * animate * 15)
  | tile: $i (0.5 * size | vec2) :oversample true
  ))

(set background-color 
  (vec3 (shape/distance points + size / (0.5 * size))))

(expand points (- 3 size))

It’s not obvious that that will be interesting. Let’s look at it in action:

(defn squarley [input &opt period]
  (default period 1)
  (gl/with [q (input / period)]
    (rect 1
    | color (hsv (hash $i) 0.5 1)
    | move (hash2 $i * 0.5)
    | tile: $i [1 1] :oversample true
    | shape/distance)))

(ball 100 | expound (squarley p.xy 20) 20
| rotate y (t / 10))

Since we only defined this noise function in 2D, we need a two-dimensional input. That’s a pretty boring 2D input. This is a little more interesting:

(defn squarley [input &opt period]
  (default period 1)
  (gl/with [q (input / period)]
    (rect 1
    | color (hsv (hash $i) 0.5 1)
    | move (hash2 $i * 0.5)
    | tile: $i [1 1] :oversample true
    | shape/distance)))

(ball 100
| expound (squarley 
    [(atan2 p.xy | abs) (sqrt (150 - p.z | abs))] 
    [.5 1]) 20
| rotate y (t / 10))

We can apply multiple octaves of this, to get… something .

(defn squarley [input &opt period]
  (default period 1)
  (gl/with [q (input / period)]
    (rect 1
    | color (hsv (hash $i) 0.5 1)
    | move (hash2 $i * 0.5)
    | tile: $i [1 1] :oversample true
    | shape/distance)))

(ball 100
| expound (fbm 3 squarley 
    [(atan2 p.xy | abs) (sqrt (150 - p.z | abs))] 
    [.5 1]) 20
| rotate y (t / 10)
)

But so far this is not a very interesting effect. What if we vary the orientation as well?

(def animate (osc t 5 | ss 0.1 0.7))
(def size 60)
(def points
  (rect size
  | color (hsv (hash $i) 0.5 1)
  | rotate (hash $i 1000 * pi/2 * animate)
  | move (hash2 $i * animate * 15)
  | tile: $i (0.5 * size | vec2) :oversample true
  ))

(set background-color 
  (vec3 (shape/distance points + size / (0.5 * size))))

(expand points (- 3 size))

It’s a little bit more random-looking, I guess:

(defn squarley [input &opt period]
  (default period 1)
  (gl/with [q (input / period)]
    (rect 1
    | rotate (hash $i 1000 * pi/2)
    | color (hsv (hash $i) 0.5 1)
    | move (hash2 $i * 0.5)
    | tile: $i [1 1] :oversample true
    | shape/distance)))

(ball 100
| expound (fbm 3 squarley 
    [(atan2 p.xy | abs) (sqrt (150 - p.z | abs))] 
    [.5 1]) 20
| rotate y (t / 10)
)

But distorting 3D space with 2D noise is not… it doesn’t look great.

Let’s jump to 3D.

(def animate (osc t 5 | ss 0.1 0.7))
(def size 60)
(def points
  (box size
  | shade (hsv (hash $i) 0.5 1)
  | rotate (hash3 $i 2000) (hash $i 1000 * pi/2 * animate)
  | move (hash3 $i * animate * 15)
  | tile: $i (0.5 * size | vec3) :limit 5 :oversample true
  ))

(union 
  (plane (- ray.direction)
  | color (shape/distance points + size / (0.5 * size)))

(expand points (- 3 size)) | scale 2)

It’s a lot harder to visualize the distance field in 3D. What you’re seeing there is the distance field at the plane that passes through the origin and faces towards the camera. I know it’s not a great visualization, but the point is that this technique generalizes to 3D (even if it’s hard to imagine the distance field at every point in 3D space).

Let’s see how this looks when we use it to distort a 3D shape:

(defn cubeley [input &opt period]
  (default period 1)
  (gl/with [p (input / period)]
    (box 1
    | rotate (hash3 $i 2000) (hash $i 1000 * pi/2)
    | move (hash3 $i * 0.5)
    | tile: $i [1 1 1] :oversample true
    | shape/distance)))

(torus y 100 50
| expound (cubeley p 30) 20
| rotate y (t / 10))

It’s kind of interesting? Definitely better than what we had before. Sort of a faceted gemstone effect.

Do you think our computers will catch on fire if we try multiple octaves of this?

(defn cubeley [input &opt period]
  (default period 1)
  (gl/with [p (input / period)]
    (box 1
    | rotate (hash3 $i 2000) (hash $i 1000 * pi/2)
    | move (hash3 $i * 0.5)
    | tile: $i [1 1 1] :oversample true
    | shape/distance)))

(torus y 100 50
| expound (fbm 3 cubeley p 30) 20
| rotate y (t / 10))

I’m glad you’re still with me.

Let’s trade the boxes for cones:

(defn coneley [input &opt period]
  (default period 1)
  (gl/with [p (input / period)]
    (cone y 1 1
    | move (hash3 $i * 0.5)
    | tile: $i [1 1 1] :oversample true
    | shape/distance)))

(ball [100 150 100]
| expound (coneley p 30) 20
| rotate y (t / 10)
)

It’s kind of an interesting pinecone-y texture? I guess?

There are more primitives to try. But of course we don’t have to limit ourselves to primitive shapes.

This is a classic SDF example to demonstrate how easy it is to do constructive solid geometry stuff:

(box 100 :r 10
| subtract :r 10 (sphere 120))

What if… we used that as the basis for our Worley noise?

(def jitter (osc t 5 | ss 0.1 0.9))
(defn cubeley [input &opt period]
  (default period 1)
  (gl/with [p (input / period)]
    (box 1 :r 0.1
    | subtract :r 0.1 (sphere 1.2)
    | move (hash3 $i * 0.5 * jitter)
    | tile: $i [1 1 1] :oversample true
    | shape/distance)))

(torus y 100 50
| expound (cubeley p 30) 20
| rotate y (t / 10))

I think it’s kind of more interesting without the randomization.

We’ve constructed an interesting 3D noise function, and we’re using it to distort 3D space. But of course, we can go back to considering this a “noise texture” in the original sense of the word:

(def jitter (osc t 5 | ss 0.1 0.9))
(defn cubeley [input &opt period]
  (default period 1)
  (gl/with [p (input / period)]
    (box 1 :r 0.1
    | subtract :r 0.1 (sphere 1.2)
    | rotate (hash3 $i 1000) (hash $i 2000 * jitter)
    | move (hash3 $i * 0.5 * jitter)
    | tile: $i [1 1 1] :oversample true :sample-from -2 :sample-to 2
    | shape/distance)))

(r2 | color (vec3 (fbm 4 cubeley [q 0] 128 | abs)))

Kinda neat.

The point of all of this is: Worley noise invites us to reconsider signed distance functions as more than implicit surfaces. And since Bauble makes it easy to construct signed distance functions, it’s a good playground for experimenting with textures like this.

Even if we never found anything particularly attractive, it’s fun to play around with space.

If this is your first time seeing Bauble, hey welcome! This post is an expansion of something I briefly talked about in a YouTube video once. The video has many more examples of the sorts of things that Bauble can do. Check it out if this piqued your interest!

The Nerd Reich – Silicon Valley Fascism and the War on Democracy

Hacker News
www.simonandschuster.com
2025-11-27 06:53:17
Comments...

Linux Kernel Explorer

Hacker News
reverser.dev
2025-11-27 06:17:37
Comments...
Original Article

The kernel isn't a process—it's the system. It serves user processes, reacts to context, and enforces separation and control.

  • The Kernel Is Not a Process: It's the always-present authority bridging hardware and software.
  • Serving the Process: Orchestrates syscalls, interrupts, and scheduling to keep user tasks running.
  • System of Layers: Virtual, mapped, isolated, and controlled—structure at runtime.

📚 Study Files

init/main.c

kernel/fork.c

include/linux/sched.h

arch/x86/kernel/entry_64.S

1. What is the fundamental difference between the kernel and a process?

A. The kernel is a special process with elevated privileges
B. The kernel is not a process—it's the system itself that serves processes
C. The kernel is just a library that processes link against
D. There is no difference; they are the same thing

2. How does the kernel primarily serve user processes?

A. By running as a background daemon
B. By orchestrating syscalls, interrupts, and scheduling
C. By providing a GUI interface
D. By compiling user code

3. What characterizes the kernel's system of layers?

A. Physical, tangible, and direct
B. Simple and flat with no hierarchy
C. Virtual, mapped, isolated, and controlled
D. User-accessible and modifiable

Last Issue of "ECMAScript News"

Hacker News
ecmascript.news
2025-11-27 06:14:21
Comments...
Original Article

Dear readers!

Sadly, we have to inform you that this is the last issue of “ECMAScript News”. We have been operating at a loss for too long: The number of advertisers and subscribers has been slowly but steadily decreasing over the last two years (vs. constant growth before that). Therefore, we made the difficult decision to stop publishing this newsletter.

The first issue came out on 2016-09-27. We published a total of 368 issues and are thankful for many loyal readers during many interesting years!

Axel may continue this newsletter in some shape or form next year. If he does, he’ll inform you via one last email in 2026.

Principles of Vasocomputation

Hacker News
opentheory.net
2025-11-27 05:51:21
Comments...
Original Article

A unification of Buddhist phenomenology, active inference, and physical reflexes; a practical theory of suffering, tension, and liberation; the core mechanism for medium-term memory and Bayesian updating; a clinically useful dimension of variation and dysfunction; a description of sensory type safety; a celebration of biological life.

Michael Edward Johnson, Symmetry Institute, July 12, 2023.


I. What is tanha?

By default, the brain tries to grasp and hold onto pleasant sensations and push away unpleasant ones. The Buddha called these ‘micro-motions’ of greed and aversion taṇhā, and the Buddhist consensus seems to be that it accounts for an amazingly large proportion (~90%) of suffering. Romeo Stevens suggests translating the original Pali term as “fused to,” “grasping,” or “clenching,” and that the mind is trying to make sensations feel stable, satisfactory, and controllable. Nick Cammarata suggests a “fast grabby thing” that happens within ~100ms after a sensation enters awareness; Daniel Ingram suggests this ‘grab’ can occur as quickly as 25-50ms (personal discussion). Uchiyama Roshi describes tanha in terms of its cure, “opening the hand of thought”; Shinzen Young suggests “fixation”; other common translations of tanha are “desire,” “thirst,” “craving.” The vipassana doctrine is that tanha is something the mind instinctively does, and that meditation helps you see this process as it happens, which allows you to stop doing it. Shinzen estimates that his conscious experience is literally 10x better due to having a satisfying meditation practice.

Tanha is not yet a topic of study in affective neuroscience but I suggest it should be. Neuroscience is generally gated by soluble important mysteries: complex dynamics often arise from complex mechanisms, and complex mechanisms are difficult to untangle. The treasures in neuroscience happen when we find exceptions to this rule: complex dynamics that arise from elegantly simple core mechanisms. When we find one it generally leads to breakthroughs in both theory and intervention. Does “tanha” arise from a simple or complex mechanism? I believe Buddhist phenomenology is very careful about what it calls dependent origination — and this makes items that Buddhist scholarship considers to be ‘basic building-blocks of phenomenology’ particularly likely to have simple, elegant implementations in the brain — and thus exceptional mysteries to focus scientific attention on.

I don’t think tanha has 1000 contributing factors; I think it has one crisp, isolatable factor. And I think if we find this factor, it could herald a reorganization of systems neuroscience similar in magnitude to the past shifts of cybernetics, predictive coding, and active inference.

Core resources:

  1. Anuruddha, Ā. (n.d.). A Comprehensive Manual of Abhidhamma .
  2. Stevens, R. (2020). (mis)Translating the Buddha . Neurotic Gradient Descent.
  3. Cammarata, N. (2021-2023). [Collected Twitter threads on tanha].
  4. Markwell, A. (n.d.). Dhamma resources .

II. Tanha as unskillful active inference (TUAI)

The first clue is what tanha is trying to do for us. I’ll claim today that tanha is a side-effect of a normal, effective strategy our brains use extensively, active inference. Active inference suggests we impel ourselves to action by first creating some predicted sensation (“I have a sweet taste in my mouth” or “I am not standing near that dangerous-looking man”) and then holding it until we act in the world to make this prediction become true (at which point we can release the tension). Active inference argues we store our to-do list as predictions, which are equivalent to untrue sensory observations that we act to make true.

Formally, the “tanha as unskillful active inference” (TUAI) hypothesis is that this process commonly goes awry (i.e. is applied unskillfully) in three ways:

  • First, the rate of generating normative predictions can outpace our ability to make them true and overloads a very finite system. Basically we try to control too much, and stress builds up.
  • Second, we generate normative predictions in domains that we cannot possibly control ; predicting a taste of cake will linger in our mouth forever, predicting that we did not drop our glass of water on the floor. That good sensations will last forever and the bad did not happen. (This is essentially a “predictive processing” reframe of the story Romeo Stevens has told on his blog , Twitter , and in person.)[1]
  • Third, there may be a context desynchronization between the system that represents the world model, and the system that maintains predictions-as-operators on this world model. When desynchronization happens and the basis of the world model shifts in relation to the basis of the predictions, predictions become nonspecific or nonsensical noise and stress.
  • We may also include a catch-all fourth category for when the prediction machinery becomes altered outside of any semantic context, for example metabolic insufficiency leading to impaired operation.

Core resources:

  1. Safron, A. (2020). An Integrated World Modeling Theory (IWMT) of Consciousness : Combining Integrated Information and Global Neuronal Workspace Theories With the Free Energy Principle and Active Inference Framework; Toward Solving the Hard Problem and Characterizing Agentic Causation. Frontiers in Artificial Intelligence, 3. https://doi.org/10.3389/frai.2020.00030
  2. Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., Pezzulo, G. (2017). Active inference: A Process Theory . Neural Computation, 29(1), 1-49.
  3. Sapolsky, R.M. (2004). Why Zebras Don’t Get Ulcers: The Acclaimed Guide to Stress, Stress-Related Diseases, and Coping . Holt Paperbacks. [Note: link is to a video summary.]
  4. Pyszczynski, T., Greenberg, J., Solomon, S. (2015). Thirty Years of Terror Management Theory . Advances in Experimental Social Psychology, 52, 1-70.

III. Evaluating tanha requires a world model and cost function

There are many theories about the basic unit of organization of the brain: brain regions, functional circuits, specific network topologies, etc. Adam Safron describes the nervous system’s basic building block as Self-Organized Harmonic Modes (SOHMs); I like this because the math of harmonic modes allows a lot of interesting computation to arise ‘for free.’ Safron suggests these modes function as autoencoders, which I believe are functionally identical to symmetry detectors. It’s increasingly looking like SOHMs are organized around physical brain resonances at least as much as connectivity, which has been a surprising result.

At high frequencies these SOHMs will act as feature detectors, at lower frequencies we might think of them as wind chimes: by the presence and absence of particular SOHMs and their interactions we obtain a subconscious feeling about what kind of environment we’re in and where its rewards and dangers are. We can expect SOHMs will be arranged in a way that optimizes differentiability of possible/likely world states, minimizes crosstalk , and in aggregate constitutes a world model , or in the Neural Annealing / REBUS / ALBUS framework, a belief landscape.

To be in tanha-free “open awareness” without greed, aversion, or expectation is to feel the undoctored hum of your SOHMs. However, we doctor our SOHMs *all the time* — when a nice sensation enters our awareness, we reflexively try to ‘grab’ it and stabilize the resonance; when something unpleasant comes in, we try to push away and deaden the resonance. Likewise society puts expectations on us to “act normal” and “be useful”; we may consider all such SOHM adjustments/predictions as drawing from the same finite resource pool. “Active SOHM management” is effortful (and unpleasant) in rough proportion to how many SOHMs need to be actively managed and how long they need to be managed.

But how can the brain manage SOHMs? And if the Buddhists are right and this creates suffering, why does the brain even try?

Core resources:

  1. Safron, A. (2020). An Integrated World Modeling Theory (IWMT) of Consciousness : Combining Integrated Information and Global Neuronal Workspace Theories With the Free Energy Principle and Active Inference Framework; Toward Solving the Hard Problem and Characterizing Agentic Causation. Frontiers in Artificial Intelligence, 3. https://doi.org/10.3389/frai.2020.00030
  2. Safron, A. (2020). On the varieties of conscious experiences: Altered beliefs under psychedelics (ALBUS) . PsyArxiv. Retrieved July 7, 2023, from the PsyArxiv website.
  3. Safron, A. (2021). The radically embodied conscious cybernetic bayesian brain: From free energy to free will and back again . Entropy, 23(6), 783. MDPI.
  4. Bassett, D. S., & Sporns, O. (2017). Network neuroscience . Nature Neuroscience, 20(3), 353-364.
  5. Buzsáki, G., & Draguhn, A. (2004). Neuronal oscillations in cortical networks . Science, 304(5679), 1926-1929.
  6. Johnson, M. (2016). Principia Qualia . opentheory.net.
  7. Johnson, M. (2019). Neural Annealing: Toward a Neural Theory of Everything . opentheory.net.
  8. Johnson, M. (2023). Qualia Formalism and a Symmetry Theory of Valence . opentheory.net.
  9. Carhart-Harris, R. L., & Friston, K. J. (2019). REBUS and the Anarchic Brain: Toward a Unified Model of the Brain Action of Psychedelics . Pharmacological Reviews, 71(3), 316-344.
  10. Dahl, C. J., Lutz, A., & Davidson, R. J. (2015). Reconstructing and deconstructing the self: cognitive mechanisms in meditation practice . Trends in Cognitive Sciences, 19(9), 515-523.

IV. Tanha as artifact of compression pressure

I propose reframing tanha as an artifact of the brain’s compression pressure . I.e. tanha is an artifact of a continual process that subtly but systematically pushes on the complexity of ‘what is’ (the neural patterns represented by undoctored SOHMs) to collapse it into a more simple configuration, and sometimes holds it there until we act to make that simplification true. The result of this compression drive conflates “what is”, “what could be”, “what should be”, and “what will be,” and this conflation is the source of no end of moral and epistemological confusion.

This reframes tanha as both the pressure which collapses complexity into simplicity, and the ongoing stress that comes from maintaining the counterfactual aspects of this collapse ( compression stress ). We can think of this process as balancing two costs: on one hand, applying compression pressure has metabolic and epistemic costs, both immediate and ongoing. On the other hand, the brain is a finite system and if it doesn’t continually “compress away” patterns there will be unmanageable sensory chaos. The right amount of compression pressure is not zero.[2]

Equivalently, we can consider tanha as an excessive forcefulness in the metabolization of uncertainty. Erik P. Hoel has written about energy, information, and uncertainty as equivalent and conserved quantities (Hoel 2020): much like literal digestion, the imperative of the nervous system is to extract value from sensations then excrete the remaining information, leaving a low-information, low-uncertainty, clean slate ready for the next sensation (thank you Benjamin Anderson for discussion). However, we are often unskillful in the ways we try to extract value from sensations, e.g. improperly assessing context, trying to extract too much or too little certainty, or trying to extract forms of certainty inappropriate for the sensation.

We can define a person’s personality, aesthetic, and a large part of their phenomenology in terms of how they metabolize uncertainty — their library of motifs for (a) initial probing, (b) digestion and integration, and (c) excretion/externalization of any waste products, and the particular reagents for this process they can’t give themselves and must seek in the world .

So far we’ve been discussing brain dynamics on the computational level. But how does the brain do all this — what is the mechanism by which it attempts to apply compression pressure to SOHMs? This is essentially the question neuroscience has been asking for the last decade. I believe evolution has coupled two very different systems together to selectively apply compression/prediction pressure in a way that preserves the perceptive reliability of the underlying system (undoctored SOHMs as ground-truth perception) but allows near-infinite capacity for adjustment and hypotheticals. One system focused on perception; one on compression, judgment, planning, and action.

The traditional neuroscience approach for locating these executive functions has been to associate them with particular areas of the brain. I suspect the core logic is hiding much closer to the action.

Core resources:

  1. Schmidhuber, J. (2008). Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes . Arxiv. Retrieved July 7, 2023, from the Arxiv website.
  2. Johnson, M. (2023). Qualia Formalism and a Symmetry Theory of Valence . opentheory.net.
  3. Hoel, E. (2020). The Overfitted Brain: Dreams evolved to assist generalization . Arxiv. Retrieved July 7, 2023, from the Arxiv website.
  4. Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2), 127-138.
  5. Chater, N., & Vitányi, P. (2003). Simplicity: a unifying principle in cognitive science? Trends in Cognitive Sciences, 7(1), 19-22.
  6. Bach, D.R., & Dolan, R.J. (2012). Knowing how much you don’t know: a neural organization of uncertainty estimates . Nature Reviews Neuroscience, 13(8), 572-586.

V. VSMCs as computational infrastructure

Above: the vertical section of an artery wall (Wikipedia, emphasis added; video ): the physical mechanism by which we grab sensations and make predictions; the proximate cause of 90% of suffering and 90% of goal-directed behavior.

All blood vessels are wrapped by a thin sheath of vascular smooth muscle cells (VSMCs). The current scientific consensus has the vasculature system as a spiderweb of ever-narrower channels for blood, powered by the heart as a central pump, and supporting systems such as the brain, stomach, limbs, and so on by bringing them nutrients and taking away waste. The sheath of muscle wrapped around blood vessels undulates in a process called “vasomotion” that we think helps blood keep circulating, much like peristalsis in the gut helps keep food moving, and can help adjust blood pressure.

I think all this is true, but is also a product of what’s been easy to measure and misses 90% of what these cells do.

Evolution works in layers, and the most ancient base layers often have rudimentary versions of more specialized capacities ( Levin 2022 ) as well as deep control hooks into newer systems that are built around them. The vascular system actually predates neurons and has co-evolved with the nervous system for hundreds of millions of years. It also has mechanical actuators (VSMCs) that have physical access to all parts of the body and can flex in arbitrary patterns and rhythms. It would be extremely surprising if evolution didn’t use this system for something more than plumbing. We can also “follow the money”; the vascular system controls the nutrients and waste disposal for the neural system and will win in any heads-up competition over co-regulation balance.

I expect VSMC contractions to influence nearby neurons through e.g. ephaptic coupling, reducing blood flow, and adjusting local physical resonance, and to be triggered by local dissonance in the electromagnetic field.

I’ll offer three related hypotheses about the computational role of VSMCs[3] today that in aggregate constitute a neural regulatory paradigm I’m calling vasocomputation :

  1. Compressive Vasomotion Hypothesis (CVH) : the vasomotion reflex functions as a compression sweep on nearby neural resonances, collapsing and merging fragile ambivalent patterns (the “Bayesian blur” problem) into a more durable, definite state. Motifs of vasomotion, reflexive reactions to uncertainties, and patterns of tanha are equivalent.
  2. Vascular Clamp Hypothesis (VCH) : vascular contractions freeze local neural patterns and plasticity for the duration of the contraction, similar to collapsing a superposition or probability distribution, clamping a harmonic system, or pinching a critical network into a definite circuit. Specific vascular constrictions correspond with specific predictions within the Active Inference framework and function as medium-term memory.
  3. Latched Hyperprior Hypothesis (LHH) : if a vascular contraction is held long enough, it will engage the latch-bridge mechanism common to smooth muscle cells. This will durably ‘freeze’ the nearby circuit, isolating it from conscious experience and global updating and leading to a much-reduced dynamical repertoire; essentially creating a durable commitment to a specific hyperprior. The local vasculature will unlatch once the prediction the latch corresponds to is resolved, restoring the ability of the nearby neural networks to support a larger superposition of possibilities.

The initial contractive sweep jostles the neural superposition of interpretations into specificity; the contracted state temporarily freezes the result; if the contraction is sustained, the latch bridge mechanism engages and cements this freeze as a hyperprior. With one motion the door of possibility slams shut. And so we collapse our world into something less magical but more manageable, one clench at a time. Tanha is cringe.

The claim relevant to the Free Energy Principle – Active Inference paradigm is we can productively understand the motifs of smooth muscle cells (particularly in the vascular system) as “where the brain’s top-down predictive models are hiding,” which has been an open mystery in FEP-AI. Specific predictions are held as vascular tension, and vascular tension in turn is released by action, consolidated by Neural Annealing , or rendered superfluous by neural remodeling (hold a pattern in place long enough and it becomes the default). Phrased in terms of the Deep CANALs framework which imports ideas from machine learning: the neural weights that give rise to SOHMs constitute the learning landscape, and SOHMs+vascular tension constitute the inference landscape.

The claim relevant to Theravada Buddhism is we can productively understand the motifs of the vascular system as the means by which we attempt to manipulate our sensations. Vasomotion corresponds to an attempt to ‘pin down’ a sensation (i.e. tanha); muscle contractions freeze patterns; smooth muscle latches block out feelings of possibility and awareness of that somatic area. Progress on the contemplative path will correspond with both using these forms of tension less, and needing them less. I expect cessations to correspond with a nigh-complete absence of vasomotion (and EEG may measure vasomotion moreso than neural activity).

The claim relevant to practical health is that smooth muscle tension, especially in VSMCs, and especially latched tension, is a system science knows relatively little about but is involved in an incredibly wide range of problems, and understanding this system is hugely helpful for knowing how to take care of yourself and others. The “latch-bridge” mechanism is especially important, where smooth muscle cells have a discrete state where they attach their myosin heads to actin in a way that “locks” or “latches” the tension without requiring ongoing energy. Latches take between seconds to minutes to form & dissolve — a simple way to experience the latch-bridge cycle releasing is to have a hot bath and notice waves of muscle relaxation. Latches can persist for minutes, hours, days, months, or years (depending on what prediction they’re stabilizing), and the sum total of all latches likely accounts for the majority of bodily suffering. If you are “holding tension in your body” you are subject to the mechanics of the latch-bridge mechanism. Migraines and cluster headaches are almost certainly inappropriate VSMC latches; all hollow organs are surrounded by smooth muscle and can latch. A long-term diet of poor food (e.g. seed oils) leads to random latch formation and “lumpy” phenomenology. Sauna + cold plunges are an effective way to force the clench-release cycle and release latches; likewise, simply taking time to feel your body and put your attention into latched tissues can release them. Psychedelics can force open latches. Many issues in neuropathy & psychiatry are likely due to what I call “latch spirals” — a latch forms, which reduces blood flow to that area, which reduces energy available to those tissues, which prevents the latch from releasing (since releasing the latch requires activation energy and returning to a freely cycling state also increases the cell’s rate of energy expenditure).

Core resources:

  1. Levin, M. (2022). Technological Approach to Mind Everywhere: An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds . Frontiers in Systems Neuroscience, 16. https://doi.org/10.3389/fnsys.2022.768201
  2. Watson, R., McGilchrist, I., & Levin, M. (2023). Conversation between Richard Watson, Iain McGilchrist, and Michael Levin #2 . YouTube.
  3. Wikipedia contributors. (2023, April 26). Smooth muscle . In Wikipedia, The Free Encyclopedia . Retrieved 22:39, July 7, 2023, from https://en.wikipedia.org/w/index.php?title=Smooth_muscle&oldid=1151758279
  4. Wikipedia contributors. (2023, June 27). Circulatory system . In Wikipedia, The Free Encyclopedia . Retrieved 22:41, July 7, 2023, from https://en.wikipedia.org/w/index.php?title=Circulatory_system&oldid=1162138829
  5. Johnson, M., GPT4. (2023). [Mike+GPT4: Latch bridge mechanism discussion].
  6. Juliani, A., Safron, A., & Kanai, R. (2023, May 18). Deep CANALs: A Deep Learning Approach to Refining the Canalization Theory of Psychopathology . https://doi.org/10.31234/osf.io/uxmz6
  7. Moore CI, Cao R. The hemo-neural hypothesis: on the role of blood flow in information processing . J Neurophysiol. 2008 May;99(5):2035-47. doi: 10.1152/jn.01366.2006. Epub 2007 Oct 3. PMID: 17913979; PMCID: PMC3655718 Added 11-17-23; recommended priority reading
  8. Jacob M, Ford J and Deacon T (2023) Cognition is entangled with metabolism: relevance for resting-state EEG-fMRI. Front. Hum. Neurosci. 17:976036. doi: 10.3389/fnhum.2023.976036 Added 1-19-24

To summarize the story so far: tanha is a grabby reflex which is the source of most moment-by-moment suffering. The ‘tanha as unskillful active inference’ (TUAI) hypothesis suggests that we can think of this “grabbing” as part of the brain’s normal predictive and compressive sensemaking, but by default it makes many unskillful predictions that can’t possibly come true and that the brain must hold in a costly way. The vascular clamp hypothesis (VCH) is that we store these predictions (both skillful and unskillful) in vascular tension. The VCH can be divided into three distinct hypotheses (CVH, VCH, LHH) that describe the role of this reflex at different computational and temporal scales. An important and non-obvious aspect of smooth muscle (e.g. VSMCs) is that they have a discrete “latch” setting wherein energy usage and flexibility drop significantly, and sometimes these latches are overly ‘sticky’; unlatching our sticky latches is a core part of the human condition.

Concluding Part I : the above work describes a bridge between three distinct levels of abstraction: a central element in Buddhist phenomenology, the core accounting system within active inference, and a specific muscular reflex. I think this may offer a functional route to synthesize the FEP-AI paradigm and Michael Levin’s distributed stress minimization work, and in future posts I plan to explore why this mechanism has been overlooked, and how its dynamics are intimately connected with human problems and capacities.

I view this research program as integral to both human flourishing and AI alignment.


Acknowledgements : This work owes a great deal to Romeo Stevens’ scholarship on tanha, pioneering tanha as a ‘clench’ dynamic, intuitions about muscle tension and prediction, and notion that we commit to dukkha ourselves until we get what we want; Nick Cammarata’s fresh perspectives on Buddhism and his tireless and generative inquiry around the phenomenology & timescale of tanha; Justin Mares’ gentle and persistent encouragement; Andrea Bortolameazzi’s many thoughtful comments and observations about the path, critical feedback, and thoughtful support; and Adam Safron’s steadfast belief and support, theorizing on SOHMs, and teachings about predictive coding and active inference. Much of my knowledge of Buddhist psychology comes from the work and teachings of Anthony Markwell; much of my intuition around tantra and interpersonal embodiment dynamics comes from Elena Selezneva. I’m also grateful for conversations with Benjamin Anderson about emergence, to Curran Janssens for supporting my research, and to Ivanna Evtukhova for starting me on the contemplative path. An evergreen thank you to my parents for their unconditional support. Finally, a big thank-you to Janine Leger and Vitalik Buterin’s Zuzalu co-living community for creating a space to work on this writeup and make it real.


Footnotes :

[1] We might attempt to decompose the Active Inference – FEP term of ‘precision weighting’ as (1) the amount of sensory clarity (the amount of precision available in stimuli), and (2) the amount of ‘grabbiness’ of the compression system (the amount of precision we empirically try to extract). Perhaps we could begin to put numbers on tanha by calculating the KL divergence between these distributions.
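For reference, one minimal way to formalize that comparison (assuming discrete distributions, and leaving open which distribution plays which role) is the standard KL divergence between the precision the system tries to extract, Q, and the precision actually available in the stimuli, P:

D_{\mathrm{KL}}(Q \,\|\, P) = \sum_x Q(x) \log \frac{Q(x)}{P(x)}

On this reading, a low-tanha system would keep the divergence near zero by not trying to extract more precision than the stimulus supports; how to estimate the two distributions in practice is of course left open.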

[2] We can speculate that the arrow of compression points away from Buddhism’s three attributes: e.g. the brain tries to push and prod its SOHMs toward patterns that are stable (dissonance minimization), satisfactory (harmony maximization), and controllable (compression maximization) — similar yet subtly distinct targets. Thanks to both Romeo and Andrea for discussion about the three attributes and their opposite.

[3] (Added July 19, 2023) Skeletal muscle, smooth muscle, and fascia (which contains myofibroblasts with actin fibers similar to those in muscles) are all found throughout the body and reflexively distribute physical load; it’s likely they do the same for cognitive-emotional load. Why focus on VSMCs in particular? Three reasons: (1) they have the best physical access to neurons, (2) they regulate bloodflow, and (3) they have the latch-bridge mechanism. I.e. skeletal, non-VSMC smooth muscle, and fascia all likely contribute significantly to distributed stress minimization , and perhaps do so via similar principles/heuristics, but VSMCs seem to be the only muscle with means, motive, and opportunity to finely puppet the neural system, and I believe are indispensably integrated with its moment-by-moment operation in more ways than are other contractive cells. (Thanks to @askyatharth for bringing up fascia.)

Edit, April 6th, 2025: a friendly Buddhist scholar suggests that common translations of taṇhā conflate two concepts: taṇhā in Pali is most accurately translated as craving or thirst, whereas the act of clinging itself is “upādāna (as in the upādāna-khandhās), and in the links of dependent origination is one step downstream from the thirst (or impulsive craving) of taṇhā.” Under this view we can frame taṇhā as a particular default bias in the computational-biochemical tuning of the human nervous system, and upādāna as the impulsive physical (VSMC) clenching this leads to.

Buddhism describes taṇhā as being driven by the three fundamental defilements, greed, fear, & delusion; I expect each defilement maps to a hard truth (aka clearly suboptimal but understandable failure mode) of implementing vasocomputation-based active inference systems.

Show HN: Era – Open-source local sandbox for AI agents

Hacker News
github.com
2025-11-27 05:28:50
Comments...
Original Article

ERA Agent - Sandbox to run AI generated code

Run untrusted or AI-generated code locally inside microVMs that behave like containers for great devX, 200ms launch time, and better security.

There's a fully managed cloud layer, globally deployed Worker/API, jump to cloudflare/README.md .

Publish Release

Quick Start

installation options

option 1: homebrew (recommended)

# 1. install the tap
brew tap binsquare/era-agent-cli

# 2. install era agent
brew install binsquare/era-agent-cli/era-agent

# 3. install dependencies
brew install krunvm buildah

# 4. verify the CLI is on PATH
agent vm exec --help

# 5. follow platform-specific setup (see below)

option 2: from source

# 1. install dependencies
brew install krunvm buildah  # on macos

# 2. clone the repository
git clone https://github.com/binsquare/era
cd era-agent

# 3. build the agent
make

# 4. follow platform-specific setup (see below)

Installation (macOS)

brew tap binsquare/era-agent-cli
brew install era-agent-cli
brew install krunvm buildah

Run the post-install helper to prepare the case-sensitive volume/state dir on macOS:

$(brew --prefix era-agent)/libexec/setup/setup.sh

platform setup details

homebrew installation setup

if you installed era agent via homebrew, use the setup script from the installed location:

# for macos users with homebrew installation
$(brew --prefix era-agent)/libexec/setup/setup.sh

# or run the setup script directly after installation
$(brew --prefix)/bin/era-agent-setup  # if setup script is linked separately

macos setup

  • Run scripts/macos/setup.sh to bootstrap dependencies, validate (or create) a case-sensitive volume, and prepare an agent state directory (the script may prompt for your password to run diskutil ). The script will also detect your Homebrew installation and recommend the correct value for the DYLD_LIBRARY_PATH environment variable, which may be required for krunvm to find its dynamic libraries.

  • If you prefer to create the dedicated volume manually, open a separate terminal and run (with sudo as required):

    diskutil apfs addVolume disk3 "Case-sensitive APFS" krunvm
    

    (replace disk3 with the identifier reported by diskutil list ). The operation is non-destructive, does not require sudo , and shares space with the source container volume.

  • When prompted by the setup script, accept the default mount point ( /Volumes/krunvm ) or provide your own. Afterwards, export the environment variables printed by the script (at minimum AGENT_STATE_DIR , KRUNVM_DATA_DIR , and CONTAINERS_STORAGE_CONF ) before invoking agent or running krunvm / buildah directly. The helper now prepares a matching container-storage configuration under the case-sensitive volume so the CLI can run without extra manual steps.

    • The script also writes policy.json / registries.conf under the same directory so Buildah doesn't look for root-owned files in /etc/containers . Export the variables it prints ( CONTAINERS_POLICY , CONTAINERS_REGISTRIES_CONF ) if you invoke Buildah manually.

Linux Setup

  • Install krunvm and buildah using your package manager (the specific installation method may vary)
  • Ensure the system is properly configured to run microVMs (may require kernel modules or specific privileges)
  • Consider setting AGENT_STATE_DIR to a writable location if running as non-root

Runtime Requirements

  • krunvm must be installed and available on $PATH (Homebrew: brew install krunvm ; see upstream docs for other platforms).
  • buildah must also be present because krunvm shells out to it for OCI image handling.
  • On macOS, krunvm requires a case-sensitive APFS volume; see the macOS setup notes above.

Build

make          # builds the agent CLI
make clean    # removes build artifacts (Go cache)

Full platform-specific steps (macOS volume setup, Linux env vars, troubleshooting) live in era-agent/README.md .

🎥 Demo Video

Demo Video

A demo video showing how to install and use the CLI tool is available in the era-agent directory . This video covers:

  • Installing dependencies and compiling the CLI tool
  • Creating and accessing local VMs
  • Running code and agents through commands or scripts
  • Uploading and downloading files to/from a VM

Core Commands

# create a long-running VM
agent vm create --language python --cpu 1 --mem 256 --network allow_all

# run something inside it
agent vm exec --vm <id> --cmd "python -c 'print(\"hi\")'"

# ephemeral one-off execution
agent vm temp --language javascript --cmd "node -e 'console.log(42)'"

# inspect / cleanup
agent vm list
agent vm stop --all
agent vm clean --all

Supported --language values: python, javascript/node/typescript, go, ruby. Override the base image with --image if you need a custom runtime.
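For instance, a one-off run against a custom Python base image might look like the following (the image name is only an illustrative placeholder, and this assumes --image is accepted by vm temp as well as vm create):

# run an ephemeral command in a custom base image instead of the default runtime
agent vm temp --language python --image docker.io/library/python:3.12-slim --cmd "python -c 'import sys; print(sys.version)'"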

⚙ Configuration Highlights

  • AGENT_STATE_DIR : writable directory for VM metadata, krunvm state, and Buildah storage. The macOS setup script prints the correct exports.
  • AGENT_LOG_LEVEL ( debug|info|warn|error ) and AGENT_LOG_FILE : control logging.
  • AGENT_ENABLE_GUEST_VOLUMES=1 : re-enable /in , /out , /persist mounts for advanced workflows.

See era-agent/README.md for every tunable.
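As an illustration, a minimal shell setup using the variables above might look like this (the paths are placeholders; use whatever your setup script printed for your machine):

# point the agent at a writable state directory and turn up logging
export AGENT_STATE_DIR="$HOME/.era-agent/state"
export AGENT_LOG_LEVEL=debug
export AGENT_LOG_FILE="$HOME/.era-agent/agent.log"

# optional: re-enable the /in, /out, /persist guest mounts
export AGENT_ENABLE_GUEST_VOLUMES=1

agent vm list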

Testing Locally

cd era-agent
make agent
./agent vm temp --language python --cmd "python -c 'print(\"Smoke test\")'"

Integration helpers and sample recipes live under examples/ , recipes/ , and docs/ .

Need the Hosted API?

To deploy ERA as a Cloudflare Worker with Durable Object-backed sessions and HTTP APIs:

  • Follow cloudflare/README.md for setup, local Wrangler dev, and deployment.
  • The Worker reuses the same Go agent primitives but adds session orchestration, package caching, and REST endpoints.

Additional Docs

📄 License

Apache 2.0

Evaluating Uniform Memory Access Mode on AMD's Turin

Hacker News
chipsandcheese.com
2025-11-27 05:22:28
Comments...
Original Article

NUMA, or Non-Uniform Memory Access, lets hardware expose affinity between cores and memory controllers to software. NUMA nodes traditionally aligned with socket boundaries, but modern server chips can subdivide a socket into multiple NUMA nodes. It’s a reflection of how non-uniform interconnects get as core and memory controller counts keep going up. AMD designates their NUMA modes with the NPS (Nodes Per Socket) prefix.

NPS0 is a special NUMA mode that goes in the other direction. Rather than subdivide the system, NPS0 exposes a dual socket system as a single monolithic entity. It evenly distributes memory accesses across all memory controller channels, providing uniform memory access like in a desktop system. NPS0 and similar modes exist because optimizing for NUMA can be complicated and time intensive. Programmers have to specify a NUMA node for each memory allocation, and take care to minimize cross-node memory accesses. Each NUMA node only represents a fraction of system resources, so code pinned to a NUMA node will be constrained by that node’s CPU core count, memory bandwidth, and memory capacity. Effort spent getting an application to scale across NUMA nodes might be effort not spent on a software project’s other goals.
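For readers who haven’t worked with this, the usual starting point on Linux is numactl; a rough sketch of inspecting the topology and confining a NUMA-unaware program to a single node (the application name here is just a placeholder) looks like:

# show how many NUMA nodes the system exposes, their CPUs, and their memory
numactl --hardware

# run an application with its threads and allocations pinned to node 0
numactl --cpunodebind=0 --membind=0 ./my_app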

From AMD’s EPYC 9005 Series Architecture Overview, showing a dual socket Zen 5 (Turin) setup in NPS1 mode

A massive thank you goes to Verda (formerly DataCrunch) for providing an instance with 2 AMD EPYC 9575Fs and 8 Nvidia B200 GPUs. Verda gave us about 3 weeks with the instance to do with as we wished. While this article looks at the AMD EPYC 9575Fs, there will be upcoming coverage of the B200s found in the VM.

This system appears to be running in NPS0 mode, giving an opportunity to see how a modern server acts with 24 memory controllers providing uniform memory access.

A simple latency test immediately shows the cost of providing uniform memory access. DRAM latency rises to over 220 ns, giving a nearly 90 ns penalty over the EPYC 9355P running in NPS1 mode. It’s a high penalty compared to using the equivalent of NPS0 on older systems. For example, a dual socket Broadwell system has 75.8 ns of DRAM latency when each socket is treated as a NUMA node, and 104.6 ns with uniform memory access[1].

NPS0 mode does have a bandwidth advantage from bringing twice as many memory controllers into play. But the extra bandwidth doesn’t translate to a latency advantage until bandwidth demands reach nearly 400 GB/s. The EPYC 9355P seems to suffer when a latency test thread is mixed with bandwidth heavy ones. A bandwidth test thread with just linear read patterns can achieve 479 GB/s in NPS1 mode . However, my bandwidth test produces low values on the EPYC 9575F because not all test threads finish at the same time. I avoid this problem in the loaded memory latency test, because I have bandwidth load threads check a flag. That lets me stop all threads at approximately the same time.

Per-CCD bandwidth is barely affected by the different NPS modes. Both the EPYC 9355P and 9575F use “GMI-Wide” links for their Core Complex Dies, or CCDs. GMI-Wide provides 64B/cycle of read and write bandwidth at the Infinity Fabric clock. On both chips, each CCD enjoys more bandwidth to the system compared to standard “GMI-Narrow” configurations. For reference, a GMI-Narrow setup running at a typical desktop 2 GHz FCLK would be limited to 64 GB/s of read and 32 GB/s of write bandwidth.
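Working backwards from those quoted figures, 64 GB/s divided by a 2 GHz FCLK comes out to 32 B/cycle of read bandwidth, and 32 GB/s to 16 B/cycle of write bandwidth, for a GMI-Narrow link; that is the comparison implied against the 64 B/cycle each way of the GMI-Wide configuration.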

Higher memory latency could lead to lower performance, especially in single threaded workloads. But the EPYC 9575F does surprisingly well in SPEC CPU2017. The EPYC 9575F runs at a higher 5 GHz clock speed, and DRAM latency is only one of many factors that affect CPU performance.

Individual workloads show a more complex picture. The EPYC 9575F does best when workloads don’t miss cache. Then, its high 5 GHz clock speed can shine. 548.exchange2 is an example. On the other hand, workloads that hit DRAM a lot suffer in NPS0 mode. 502.gcc, 505.mcf, and 520.omnetpp see the EPYC 9575F’s higher clock speed count for nothing, and the higher clocked chip underperforms compared to 4.4 GHz setups with lower DRAM latency.

SPEC CPU2017’s floating point suite also shows diverse behavior. 549.fotonik3d and 554.roms suffer in NPS0 mode as the EPYC 9575F struggles to keep itself fed. 538.imagick plays nicely to the EPYC 9575F’s advantages. In that test, high cache hitrates let the 9575F’s higher core throughput shine through.

NPS0 mode performs surprisingly well in a single threaded SPEC CPU2017 run. Some sub-tests suffer from higher memory latency, but enough other tests benefit from the higher 5 GHz clock speed to make up the difference. It’s a lesson about the importance of clock speeds and good caching in a modern server CPU. Those two factors go together, because faster cores only provide a performance advantage if the memory subsystem can feed them. The EPYC 9575F’s good overall performance despite having over 220 ns of memory latency shows how good its caching setup is.

As for running in NPS0 mode, I don’t think it’s worthwhile in a modern system. The latency penalty is very high, and bandwidth gains are minor for NUMA-unaware code. I expect those latency penalties to get worse as server core and memory controller counts continue to increase. For workloads that need to scale across socket boundaries, optimizing for NUMA looks to be an unfortunate necessity.

Again, a massive thank you goes to Verda (formerly DataCrunch) without which this article, and the upcoming B200 article, would not be possible!

$96M AUD revamp of Bom website bombs out on launch

Hacker News
www.bbc.com
2025-11-27 05:05:46
Comments...
Original Article

Australia's beloved weather website got a makeover - and infuriated users

Getty Images: A farmer wearing a hat, sunglasses, jeans and shirt leans on a tractor and looks at a smartphone.

Farmers are angry - they argue the information they need is now hard to find

It was an unseasonably warm spring day in Sydney on 22 October, with a forecast of 39C (99F) - a real scorcher.

The day before, the state of New South Wales had reported its hottest day in over a century, a high of 44.8C in the outback town of Bourke.

But little did the team at the national Bureau of Meteorology foresee that they, in particular, would soon be feeling the heat.

Affectionately known by Australians as the Bom, the agency's long-awaited website redesign went live that morning, more than a decade after the last update.

Within hours, the Bom was flooded with a deluge of complaints. The hashtag #changeitback went viral.

Gripes ranged from the new colour scheme for the rain radar, to furious farmers and fishermen who could no longer put in GPS coordinates to find forecasts for a specific location.

And then, this week it was revealed that the site's redesign had cost about A$96.5m ($62.3m; £48m), 20 times more than the previously stated A$4.1m.

"First you violate expectations by making something worse, then you compound the injury by revealing the violation was both expensive and avoidable," psychologist and neuroscientist Joel Pearson told the BBC, explaining the public outrage.

"It's the government IT project equivalent of ordering a renovation, discovering the contractor has made your house less functional, and then learning they charged you for a mansion."

'Game of hide and seek'

A consensus was quickly clear: "Please bring back the previous format," one person surmised on social media.

"It's awful, the most useful features are gone and it's not user-friendly. A waste of taxpayer money," another added.

Others said the timing was poor: "Why change it on a day of severe weather?"

There were some fans, including one who posted: "I like the new site. The front page is much cleaner". But they were few and far between.

Less than 48 hours after the launch, the Bom released a list of tips on how to use the new site, but this was further mocked by disgruntled users.

"Terrible! You shouldn't need step-by-step instructions to navigate the site," one post read.

An over-the-shoulder shot of a man looking at the new Bureau of Meteorology website

Social media has been flooded with complaints about the new site

With the site drawing more than 2.6 billion views a year, the Bom tried to explain that the refresh - prompted by a major cybersecurity breach in 2015 - was aimed at improving stability, security and accessibility. It did little to satisfy the public.

Some frustrated users turned to humour: "As much as I love a good game of hide and seek, can you tell us where you're hiding synoptic charts or drop some clues?"

Malcolm Taylor, an agronomist in Victoria, told the Australian Broadcasting Corporation (ABC) that the redesign was a complete disaster.

"I'm the person who needs it and it's not giving me the information I need," the plant and soil scientist said.

Others appeared to accept their fate: "I am sure we will get used to it but it is not intuitive at all."

Bureau of Meteorology: Landing page of the old site, showing a weather radar over Australia and the forecast for all capital cities.

Many users say they found the old version easier to navigate

Exactly a week after the debacle, the acting head of the agency was forced to apologise. There were concerns that people had been underprepared for storms in Queensland because of the site's poor usability.

The outpouring prompted the federal government to issue a scathing rebuke of the Bom and order immediate changes to the site.

"The bureau clearly has work to do, in that it has lost community confidence in the new website," Energy Minister Chris Bowen said at the time.

In a bid to calm the storm, parts of the previous site were brought back to life, giving people the option to use the old features.

A month after the relaunch, the new head of the Bom - who started his role during the saga - admitted the changes had been "challenging for some" and again apologised for the confusion.

"Inherently, we don't, and won't, always get it perfectly right. But, we are constantly striving to get better," Dr Stuart Minchin said.

But he kicked off another round of criticism by revealing the revamp actually cost $96m, a figure which covered a full website rebuild and testing of the "systems and technology that underpin" it.

Immediately, the government demanded Bom explain how taxpayers' money had been spent "efficiently and appropriately," according to the Sydney Morning Herald.

Barnaby Joyce, a member of the Nationals, which mainly represents regional communities, said: "We spent $96m to put a B at the end of the Bom site. It's now bomb, it's hopeless."

New site 'scrambling' people's brains

On the day of the launch, the Bom assured Australians that the community had been consulted on the changes. A test site in the months leading up to the relaunch found customer satisfaction rates were consistently above 70%, they told the BBC.

"The tsunami of complaints suggests that consultation was either perfunctory or they listened to the wrong people," Mr Pearson said.

For years, farmers and emergency workers had developed what neuroscientists call "procedural memory" for reading weather patterns using the site, he explained. It's muscle memory like touch-typing or driving a familiar route home.

"Your fingers know where the keys are, your hands know when to turn."

But when the new site changed the radar's colour scale, long-time users were left scratching their heads as their "hard-won intuition for reading storm intensity became unreliable overnight".

Steve Turton/Bom: Rain radar of Brisbane, with a version of the old Bureau of Meteorology site on the left and the new look on the right.

The old colour scheme included black which users said was a useful indicator

The new site, Mr Pearson said, "was scrambling the neurological shortcuts that people had spent a decade building".

"It's like rearranging all the furniture in your house and then expecting you to navigate it in the dark without stubbing your toe. Except the 'furniture' in this case determines whether you move your livestock before the flood arrives."

For sociologist Ash Watson, the collective reaction to the site reflected its special status in Australia.

"Australia has always been a large country of weather extremes, and Bom's cultural importance has really been cemented in recent years as we've experienced more severe weather and the rising impacts of climate change."

As a regular user of Bom's site, Ms Watson acknowledged the good intentions behind the changes, but said her research - on the social impact of tech - showed that people are getting fatigued by change.

"It can be hard for people to get excited by new updates and see their immediate benefits when they don't want to have to learn how to use yet another new platform, app or website."

AFP via Getty Images: A flooded road in the Sydney suburb of North Narrabeen on April 6, 2024, after heavy rain hit New South Wales state.

The Bom website performs an essential role in times of disaster

This is not the first time the Bom has weathered a publicity storm.

In 2022, it spent hundreds of thousands of dollars on a rebrand, asking to be called either its full name or "the bureau", not the "weather bureau" or "the Bom", given the negative connotations.

But the campaign was short-lived. They eventually released a statement saying the public was welcome to use whatever name they wished.

The incident reflected a fundamental misunderstanding of how the culture of naming works, Mr Pearson said.

Australians had organically adopted "Bom" as a term of affection, like a nickname for a friend, he said.

"When the institution tried to correct this, it felt like being told you're pronouncing your mate's name wrong."

He said the site's redesign revealed a similar "cultural blindness but with higher stakes".

In a statement, Bom's spokesperson told the BBC it had received about 400,000 items of feedback on the new site, which accounted for less than 1% of the 55 million visits in the past month.

The responses were "both positive and negative", they said, with fans saying they liked the new design and presentation, the accuracy and reliability of the forecasts, and greater ease in using the site on different types of mobile devices.

But it was clear that people had "formed strong habits", the spokesperson said, and further changes may be made based on the feedback.

Music eases surgery and speeds recovery, study finds

Hacker News
www.bbc.com
2025-11-27 04:55:57
Comments...
Original Article

Music eases surgery and speeds recovery, Indian study finds

Soutik Biswas India correspondent


A patient with headphones playing music during surgery in a hospital in Delhi

Under the harsh lights of an operating theatre in the Indian capital, Delhi, a woman lies motionless as surgeons prepare to remove her gallbladder.

She is under general anaesthesia: unconscious, insensate and rendered completely still by a blend of drugs that induce deep sleep, block memory, blunt pain and temporarily paralyse her muscles.

Yet, amid the hum of monitors and the steady rhythm of the surgical team, a gentle stream of flute music plays through the headphones placed over her ears.

Even as the drugs silence much of her brain, its auditory pathway remains partly active. When she wakes up, she will regain consciousness more quickly and clearly because she required lower doses of anaesthetic drugs such as propofol and opioid painkillers than patients who heard no music.

That, at least, is what a new peer-reviewed study from Delhi's Maulana Azad Medical College and Lok Nayak Hospital suggests. The research, published in the journal Music and Medicine, offers some of the strongest evidence yet that music played during general anaesthesia can modestly but meaningfully reduce drug requirements and improve recovery.

The study focuses on patients undergoing laparoscopic cholecystectomy, the standard keyhole operation to remove the gallbladder. The procedure is short - usually under an hour - and demands a particularly swift, "clear-headed" recovery.

To understand why the researchers turned to music, it helps to decode the modern practice of anaesthesia.

"Our aim is early discharge after surgery," says Dr Farah Husain, senior specialist in anaesthesia and certified music therapist for the study. "Patients need to wake up clear-headed, alert and oriented, and ideally pain-free. With better pain management, the stress response is curtailed."

Achieving that requires a carefully balanced mix of five or six drugs that together keep the patient asleep, block pain, prevent memory of the surgery and relax the muscles.

Getty Images: An Indian surgical team performing an operation in a hospital theatre.

Patients need to wake up clear-headed and ideally pain-free after surgery

In procedures like laparoscopic gallbladder removal, anaesthesiologists now often supplement this drug regimen with regional "blocks" - ultrasound-guided injections that numb nerves in the abdominal wall.

"General anaesthesia plus blocks is the norm," says Dr Tanvi Goel, primary investigator and a former senior resident of Maulana Azad Medical College. "We've been doing this for decades."

But the body does not take to surgery easily. Even under anaesthesia, it reacts: heart rate rises, hormones surge, blood pressure spikes. Reducing and managing this cascade is one of the central goals of modern surgical care. Dr Husain explains that the stress response can slow recovery and worsen inflammation, highlighting why careful management is so important.

The stress starts even before the first cut, with intubation - the insertion of a breathing tube into the windpipe.

To do this, the anaesthesiologist uses a laryngoscope to lift the tongue and soft tissues at the base of the throat, obtain a clear view of the vocal cords, and guide the tube into the trachea. It's a routine step in general anaesthesia that keeps the airway open and allows precise control of the patient's breathing while they are unconscious.

"The laryngoscopy and intubation are considered the most stressful response during general anaesthesia," says Dr Sonia Wadhawan, director-professor of anaesthesia and intensive care at Maulana Azad Medical College and supervisor of the study.

"Although the patient is unconscious and will remember nothing, their body still reacts to the stress with changes in heart rate, blood pressure, and stress hormones."

To be sure, the drugs have evolved. The old ether masks have vanished. In their place are intravenous agents - most notably propofol, the hypnotic made infamous by Michael Jackson's death but prized in operating theatres for its rapid onset and clean recovery. "Propofol acts within about 12 seconds," notes Dr Goel. "We prefer it for short surgeries like laparoscopic cholecystectomy because it avoids the 'hangover' caused by inhalational gases."

The team of researchers wanted to know whether music could reduce how much propofol and fentanyl (an opioid painkiller) patients required. Less drugs means faster awakening, steadier vital signs and reduced side effects.

So they designed a study. A pilot involving eight patients led to a full 11-month trial of 56 adults, aged roughly 20 to 45, randomly assigned to two groups. All received the same five-drug regimen: a drug that prevents nausea and vomiting, a sedative, fentanyl, propofol and a muscle relaxant. Both groups wore noise-cancelling headphones - but only one heard music.

"We asked patients to select from two calming instrumental pieces - soft flute or piano," says Dr Husain. "The unconscious mind still has areas that remain active. Even if the music isn't explicitly recalled, implicit awareness can lead to beneficial effects."


A pilot involving eight patients led to a full trial of 56 adults randomly assigned to two groups

The results were striking.

Patients exposed to music required lower doses of propofol and fentanyl. They experienced smoother recoveries, lower cortisol or stress-hormone levels and a much better control of blood pressure during the surgery. "Since the ability to hear remains intact under anaesthesia," the researchers write, "music can still shape the brain's internal state."

Clearly, music seemed to quieten the internal storm. "The auditory pathway remains active even when you're unconscious," says Dr Wadhawan. "You may not remember the music, but the brain registers it."

The idea that the mind behind the anaesthetic veil is not entirely silent has long intrigued scientists. Rare cases of "intraoperative awareness" show patients recalling fragments of operating-room conversation.

If the brain is capable of picking up and remembering stressful experiences during surgery - even when a patient is unconscious - then it might also be able to register positive or comforting experiences, like music, even without conscious memory.

"We're only beginning to explore how the unconscious mind responds to non-pharmacological interventions like music," says Dr Husain. "It's a way of humanising the operating room."

Music therapy is not new to medicine; it has long been used in psychiatry, stroke rehabilitation and palliative care. But its entry into the intensely technical, machine-governed world of anaesthesia marks a quiet shift.

If such a simple intervention can reduce drug use and speed recovery - even modestly - it could reshape how hospitals think about surgical wellbeing.

As the research team prepares its next study exploring music-aided sedation, building on earlier findings, one truth is already humming through the data: even when the body is still and the mind asleep, it appears a few gentle notes can help the healing begin.

Coq: The World's Best Macro Assembler? [pdf]

Hacker News
nickbenton.name
2025-11-27 04:34:56
Comments...
Original Article
No preview for link for known binary extension (.pdf), Link: https://nickbenton.name/coqasm.pdf.

Optimizing Ray Tracing in Haskell (2020)

Lobsters
medium.com
2025-11-27 03:35:31
Comments...

We Should Listen To Rev Barber on White Poverty and Multiracial Organizing

Portside
portside.org
2025-11-27 03:32:39
We Should Listen To Rev Barber on White Poverty and Multiracial Organizing Geoffrey Wed, 11/26/2025 - 22:32 ...
Original Article

White Poverty
How Exposing Myths About Race and Class Can Reconstruct American Democracy
Liveright
William J. Barber II with Jonathan Wilson-Hartgrove
ISBN: 978-1-324-09675-7

For progressives to win, we need a powerful multiracial coalition. That includes the people of color who disproportionately suffer poverty and structural violence, but it also includes the white people who make up the largest share of poor people in this country.

As the Reverend Dr. William J. Barber II points out in his new book, White Poverty , there are more poor white people than any other racial group, and more effort should be put into pulling them into this coalition.

I’m a white man from a wealthy family—and a lawyer who took on tough civil rights cases and fought them as if my life depended on it. My goal from the beginning was to join those who are trying to make America a better place—a country where racism and sexism would slowly fade away and where the possibility of equal opportunity would shine through.

I see that road forward in Rev. Barber’s new book, co-written with Jonathan Wilson-Hartgrove.

White Poverty ‘s great value is to teach and motivate both Black and white leaders to create a multiracial movement which demands legislation that benefits all poor people.

Talking to white people in all walks of life—from taxi drivers to restaurant workers as well as bankers and stockbrokers—has been very revealing. When I say I’m a civil rights lawyer, their voices often take on a certain unsympathetic tone—and many times they inject the “Black crime rate” into the conversation. Sometimes the person will shift the conversation to discuss Black children being raised by single women who use food stamps to put food on the table or who benefit from other welfare programs.

As Barber points out, there are “more than twice as many poor white people as there are poor Black people in this nation.” But if I mention that, the person sometimes appears not to hear me, or lets me know in no uncertain terms that it’s Black people themselves who are at fault for their poverty—and they should look to their own lives rather than blame whites. The government taxes “us,” I’m often told, to give “them” a free ride.

When I hear this, I know there’s something major missing.


De-racializing Poverty

I’ve been encouraged by the many articles, books, and memoirs that have been written about racial justice since the protests over George Floyd’s murder, but few suggest an effective way forward.

For example, a new book by Kellie Carter Jackson, We Refuse: A Forceful History of Black Resistance (Seal Press, 2024), highlights how Black women fought back against racism, some with weapons, some without, but none took the path that Reverend Barber takes in White Poverty . Reverend Barber, by contrast, argues that Blacks and whites must join together to address their common needs.

Another prominent civil rights advocate, Heather McGhee, traveled across America to write The Sum of Us: What Racism Costs Everyone and How We Can Prosper Together (One World, 2021), which documents how some progressives were beginning to engage in cross-racial solidarity through collective action to achieve higher wages and benefits for working people.

As Barber points out, the political establishment invariably markets itself to the needs of “the middle class” and ignores the poor, and whites especially look the other way.

In effect, Barber’s White Poverty builds upon McGhee’s book. It’s the work of a man of action to not only test cross-racial solidarity, but to put that theory into action. Barber lays it on the line in his very first sentence: “This is a book by a Black man about white poverty in America.” That initial signal points to where he is headed.

As a lifelong civil rights lawyer, I find that his signal resonates. As Barber persuasively argues, the public and the country’s legislatures—federal, state, and local—accept the myth that poverty is only a Black issue, as do the people I talk to daily. They view poverty through this lens to the detriment of Black and white people alike, as well as people of all other colors and races.

As Barber points out, the political establishment invariably markets itself to the needs of “the middle class” and ignores the poor, and whites especially look the other way. The same is true even in our country’s religious establishments. Barber notes that “a Pew Research Center study of nearly 50,000 sermons found that neither the words ‘poverty’ nor ‘poor’ register as commonly used in American pulpits.”

A Multiracial Fusion Movement

Much of White Poverty concerns the history of how American racism came into being and how the myths evolved around it. Barber explains how the manipulation of these myths has preserved the power of white elites, who use their political and economic power to downgrade the needs of poor white people as well as Black people, while benefiting the wealthy.

To this reader then, White Poverty ‘s great value is to teach and motivate both Black and white leaders to create a multiracial movement which demands legislation that benefits all poor people. As an additional benefit, White Poverty gives examples of Black and white movements fusing themselves together.

Not least, Barber has spent a huge amount of energy over the past seven years in building a multiracial Poor People’s Campaign . Co-chaired by Rev. Barber along with Rev. Liz Theoharis of the Kairos Center, the Poor People’s Campaign has thousands in the field to help poor white and poor Black communities understand each others’ community needs and the advantages of working together to fight against “policy violence” and to turn out the vote.

This beautifully written book offers a road map to the powerful multiracial organizing that can turn this country around, lift up poor people, and deepen our democracy.

In the last election for governor in Kentucky , the campaign and its allies worked with both white and Black rural communities to get out the vote. The result was an upset in electing the state’s present governor, Democrat Andy Beshear. In rural counties, an enlarged electorate turned out to vote and that tipped the election.

The Poor People’s Campaign has built durable alliances with other organizations to advance its multiracial vision. It’s currently collaborating with the AFL-CIO on voter engagement. It pursues legal challenges with Forward Justice. It coordinates actions with national Christian and Jewish organizations. With the Institute for Policy Studies, on whose board I serve, it has produced the data and the analysis to back up its bold agenda.

Barber is a man of the cloth who takes his religion seriously. As a result, the book is sprinkled with words from other religious figures who offer moral reasons for organizing poor people to struggle for their needs nonviolently but willing to cross police lines and stand up to authority.

In short, this beautifully written book offers a road map to the powerful multiracial organizing that can turn this country around, lift up poor people, and deepen our democracy.

Lewis M. Steel is a former senior counsel at Outten & Golden LLP and an Institute for Policy Studies board member. He's the author of The Butler's Child: White Privilege, Race, and a Lawyer's Life in Civil Rights

DIY NAS: 2026 Edition

Hacker News
blog.briancmoses.com
2025-11-27 02:54:23
Comments...
Original Article

Fourteen years ago, my storage needs outpaced my capacity and I began to look into building a network attached storage server. I had a few criteria in mind and was curious to see if anyone had recently shared something similar, but I couldn’t find anything that was relevant.

In fact, I found that the communities I was looking for answers in were actively hostile towards what I wanted to do. This resulted in my decision to build my own DIY NAS and share that as one of my very first blogs.

Much to my surprise, people were very interested in that blog! Ever since, I’ve been building a similar DIY NAS machine almost every year trying to satisfy the curiosity of other prospective DIY NAS builders.

Here are those criteria:

  1. Small form factor : It’s not the case for me any more, but at the time the space was limited in my office. I always assume that space in everybody’s office is limited. As a result, I want my DIY NAS builds to occupy as little of that office space as I can.
  2. At least six drive bays : Back when I built my NAS, it took about four drives’ worth of storage to meet my storage needs. Plus I desired two empty drive bays for future use. However, in the years since hard drive capacities have increased dramatically. At some point in the future, I may reduce this to four drive bays.
  3. An integrated, low power CPU : I intend my DIY NAS to run 24 hours a day, 7 days a week, and 52 weeks a year. When it comes to power consumption, that can do some damage on your electric bill! Thankfully our electricity here isn’t as expensive as others’ in the United States, or even further outside its borders, but I try and keep power consumption in mind when picking components for a DIY NAS build.
  4. Homelab potential : It does not take up a lot of CPU horsepower for a NAS to serve up files, which means that on modern hardware there’s a lot of untapped potential in a DIY NAS for virtual machines or containers to self-host services.

It’s important to remember that these are my criteria, and not necessarily yours. Every DIY NAS builder should be making their own list of criteria and reconciling all of their component purchases against the criteria that are important to them.

Is it even a good time to build a NAS?

As I prepared to build this NAS, component prices disappointed me. Hard drives, SSDs, and RAM prices were all rising. Based on what I’ve been told, I expect Intel CPU prices to increase as well. My contact at Topton has been encouraging me to stock up on motherboards while they still have some in inventory, and based on what’s been explained to me, I expect motherboard prices to rise and their availability to dwindle.

1TB NVMe SSD prices

In short, the economy sucks and the price of DIY NAS components is a pretty good reflection of just how sucky things are becoming. I briefly considered not publishing a DIY NAS build this year hoping that things would improve a few months down the road. But then I asked myself, “What if it’s even worse in a few months?”

I sure hope things get better, but I fear and expect that they’ll get worse.

Motherboard and CPU

I built my first DIY NAS with a Topton motherboard in 2023 . Each DIY NAS since then has also featured a Topton motherboard. My only complaint about the motherboards has been that buying them from one of the Chinese e-tail sites like AliExpress is considered problematic by some. With every DIY NAS build, I try and go through all the motherboards that I can find while searching for something with a better value proposition, but for each of the past three years I’ve landed on the latest offering from Topton.

For the DIY NAS: 2026 Edition , I chose the Topton N22 motherboard with the Intel Core 3 N355 CPU. The motherboard is similar to last year’s Topton N18 but has incrementally more compelling features, particularly the extra 2 SATA ports, the PCI-e x1 slot, and the N355 CPU!

  • Mini-ITX Form Factor
  • Intel® Processor Core 3 N355
    • 8 cores / 8 threads / Max Turbo 3.9GHz
    • 15 W TDP
    • Integrated GPU with Intel Quick Sync Video
  • 1 x DDR5 SO-DIMM
  • 8 x SATA 3.0 Ports (Asmedia ASM1164)
  • 2 x M.2 NVMe Slots (PCIe 3.0 x1)
  • 1 x 10Gbps NIC (Marvell AQC113C)
  • 2 x 2.5Gbps NICs (Intel i226-V)
  • 1 x PCI-e x1 or M.2 E-Key slot

I opted for the motherboard with the Intel Core 3 N355 CPU. This makes the server a more capable homelab machine than prior years’ DIY NAS builds. The extra cores and threads come in handy for streaming media, replacing your cloud storage, facilitating home automation, hosting game servers, etc.

Case

Just like Topton has been making great motherboards for DIY NAS machines, JONSBO has been steadily releasing great cases for DIY NAS machines. This year SilverStone Technology released a new case, the CS383 ( specs ) which I was very interested in buying one for the DIY NAS: 2026 Edition . Unfortunately it carries a pretty hefty price tag to go along with all of its incredible features!

The JONSBO N4 ( specs ) is a third the price, adheres to my “smaller footprint” criteria, and it is rather impressive on its own. It’s a tiny bit larger case than last year’s DIY NAS, but I really like that it has drive bays for six 3.5” drives and two 2.5” drives.

It is peculiar, though, in that two of the 3.5” drive bays (and the two 2.5” drive bays) aren’t attached to a SATA backplane and can’t be swapped anywhere near as easily as the other four 3.5” bays. However, this peculiar decision seems to have caused the JONSBO N4 to sell for a bit less ($20-$40) than similar offerings from JONSBO. At its price, it’s a compelling value proposition!

Case Fan

In the past, I’ve found that the fans which come with JONSBO cases are too noisy, for two reasons: the design quality of the fans makes them loud, and the fans constantly run at their top speed because of the fan header they’re plugged into on the cases’ SATA backplanes.

I anticipated that fan efficiency and noise would be a problem, so I picked out the Noctua NF-A12x25 PWM to solve it. Firstly, swapping in a high-quality fan that pushes more air and generates less noise–especially at its top speed–is a good first step. Secondly, I’d address the problem by plugging the fan into the motherboard’s SYS_FAN header instead of on the SATA backplane. This provides the opportunity to tune the fan’s RPMs directly in the BIOS and generate far less noise.

RAM

The first time I asked myself, “Should I even build the DIY NAS: 2026 Edition?” came as I was checking prices on DDR5 memory. Thankfully for me, I had leftover RAM after purchasing DDR5 4800MHz SODIMMs for the DIY NAS: 2025 Edition, the Pocket Mini NAS, and then again for the DIY NAS that I built and gave away at 2025’s Texas Linux Fest. I was personally thankful that I had one brand-new 32GB DDR5 4800MHz SODIMM lying around, but I was wildly disappointed for everybody who will try and follow this build when I saw the price of those same SODIMMs.

Regardless, I felt a Crucial 32GB DDR5 4800MHz SODIMM ( specs ) was the right amount of RAM to get started with for a DIY NAS build in 2025. Whether you just need storage or you wish to also host virtual machines, you will benefit from having more than the bare minimum recommendation of RAM. I really wanted to buy a 48GB DDR5 4800MHZ SODIMM for this DIY NAS build, but I couldn’t talk myself into spending the $250-$300 that it would’ve wound up costing.

Storage

A quick disclaimer about all the drives that I picked for the DIY NAS: 2026 Edition: I already had all of them! I tend to buy things when I see them on sale, and as a result, I have a collection of brand-new parts for machines in my homelab or for upcoming projects. I raided that collection of spare parts for the DIY NAS: 2026 Edition.

Boot Drive

If you ranked the drives in your DIY NAS in order of importance, the boot drive should be the least important. That is not to say that the boot drive isn’t performing an important function, but I am suggesting that you shouldn’t invest a bunch of energy and money into picking the optimal boot drive.

Because the JONSBO N4 has a pair of 2.5” drive bays, I decided that a 2.5” SATA SSD would be ideal for the boot drives. As a rule of thumb, I try and spend less than $30 per boot drive in my DIY NAS builds.

Ultimately I selected a pair of 128GB Silicon Power A55 SSDs ( specs ). I’ve used these before, I’d use them again in the future, and I even have four of their higher capacity (1TB) SSDs in a pool in my own NAS.

App and Virtual Machine NVMe SSDs

Self-hosting apps and virtual machines on your DIY NAS has really exploded in the past few years. The developers of NAS appliance packages have made it much easier, and the self-hosted products themselves have become as good as, or often better than, the things you’re probably subscribing to today. Because of that, I saved the highest-performing storage options on the Topton N22 motherboard for apps and VMs.

However, it’s important to point out that these M.2 slots are PCI-e version 3 and capped at a single PCI-e lane. This is a consequence of the limited number of PCI-e lanes available for each of the CPU options available for the Topton N22 motherboard (N100, N150, N305, and N355).

I opted for an NVMe drive that was a good value rather than a high performer and chose two of the Silicon Power 1TB M.2 NVMe SSDs (SP001TBP34A60M28) (specs).

Bulk Storage Hard Disk Drives

Thanks to rising prices, I opted to do like I’ve done with past DIY NAS builds and skip buying hard drives for the DIY NAS: 2026 Edition .

When planning your DIY NAS, it is good to always remember that storage will ultimately be your costliest and most important expense .

Here’s a few things to consider when buying hard drives:

  1. Determine your hardware redundancy preferences. I recommend having two hard disk drives’ worth of redundancy (RAIDZ2, RAID6, etc.)
  2. Focus on price-per-terabyte when comparing prices of drives.
  3. Do some burn in testing of your hard drives before putting them to use.
  4. When buying new drives of the same model, try and buy them from multiple vendors to increase the chances of buying drives manufactured in separate batches.
  5. Plan Ahead! Understand the rate that your storage grows so that you can craft a strategy to grow your storage down the road.
  6. Being cheap today can and will paint you into a corner that’s quite expensive to get out of.
  7. Understand that RAID is not a backup!

Thankfully, I’ve collected a bunch of my own decommissioned hard drives, which I used to thoroughly test this DIY NAS build.

SATA Cables

One of the under-the-radar features of the Topton N22 motherboard might be my favorite! The motherboard’s Asmedia ASM1164 SATA controllers sit behind two SFF-8643 connectors. These connectors provide two advantages for these motherboards:

  1. Saves room on the motherboard’s PCB.
  2. SFF-8643 to 4x SATA breakout cables reduce the amount of cable management hassle.

Power Supply

The one thing that I have routinely disliked about building small form factor DIY NAS machines is the price tag that accompanies a small form factor (SFX) power supply like the one required by the JONSBO N4 .

I wound up choosing the SilverStone Technology SX500-G ( specs ), which I had used earlier in the year for the DIY NAS I gave away at Texas Linux Fest . Its 500W rating exceeds the needs of all the components that I’d picked out for the DIY NAS: 2026 Edition . Plus, the power supply’s 80 Plus Gold rating aligns well with my criteria for power efficiency.

Regardless of whether it was called FreeNAS, TrueNAS, TrueNAS CORE, TrueNAS SCALE, or now TrueNAS Community Edition , the storage appliance product(s) from iXSystems have always been my go-to choice. For each yearly DIY NAS build, I wander over to the TrueNAS Software Status page and look at the state of the current builds.

I’m conservative with my personal NAS setup. However, for these blog builds, I typically choose Early Adopter releases. This year that’s TrueNAS 25.10.0.1 (aka Goldeye) . I enjoy being able to use these DIY NAS builds as a preview to the latest and greatest that TrueNAS has to offer.

I repeatedly choose TrueNAS because it’s what I’ve become accustomed to; it’s legitimately an enterprise-grade storage product, which is exactly the quality of solution that I want my data to depend on. At the same time, it does not feel like you need a specialized certification and a truckload of enterprise storage experience to set up a NAS that exceeds your needs at home.

Many times I have been asked, “Why not <insert NAS appliance or OS here> ?” My answer to that question is, TrueNAS has always done everything that I need it to and they haven’t given me any reason to consider anything else. As a result, there’s never been a need for me to evaluate something else.

Final Parts List

Component | Part Name | Qty | Cost
Motherboard | Topton N22 (w/ N355 CPU) NAS Motherboard (specs) | 1 | $446.40
CPU | Intel Core 3 N355 (specs) | 1 | N/A
Memory | Crucial RAM 32GB DDR5 4800MHz SODIMM (CT32G48C40S5) (specs) | 1 | $172.96
Case | JONSBO N4 (specs) | 1 | $121.59
Case Fan | Noctua NF-A12x25 PWM chromax.Black.swap (specs) | 1 | $37.95
Power Supply | SilverStone 500W SFX Power Supply (SST-SX500-G) (specs) | 1 | $142.34
Boot Drive | Silicon Power 128GB A55 SATA SSD (specs) | 2 | $21.97
Apps/VM Drives | Silicon Power 1TB NVMe M.2 SSD (SP001TBP34A60M28) (specs) | 2 | $99.99
SATA Cables | OIKWAN SFF-8643 Host to 4x SATA Breakout Cable | 2 | $11.99
Price without Storage: $989.36
Total Price: $1,189.34

Hardware Assembly, BIOS Configuration, and Burn-In

Hardware Assembly

I wanted the smallest possible DIY NAS, so the JONSBO N4 case initially felt too large since it accommodates Micro ATX motherboards. I grew to accept its slightly larger footprint, though, and putting the Topton N22 motherboard into the case felt roomy and luxurious. Building the DIY NAS: 2026 Edition compared to prior years’ builds felt a lot like coming home to put on sweatpants and a t-shirt after wearing a suit and tie all day long.

I wasn’t too fond of the cable-management of the power supply’s cables. The layout of the case pretty much makes the front of the power supply inaccessible once it is installed. One consequence of this is that the power cable which powered the SATA backplane initially prevented the 120mm case fan from spinning up. That issue was relatively minor and was resolved with zip ties.

Overall, I felt pretty good about the assembly of the DIY NAS: 2026 Edition , but things would take a turn for the worse when I decided to fill all the 3.5-inch drive bays up with some of my decommissioned 8TB HDDs. Now, this is probably my fault; I wouldn’t be surprised at all if the manual of the JONSBO N4 warned me against this, but putting the drives in last turned out to be a major pain in the neck for each of the four drive bays without a SATA backplane.

DIY NAS: 2026 Edition with half its drives installed backwards!

I had wrongly guessed that you accessed those drives’ power and data ports from the front of the case. I worked really hard to route the cables and even managed to install all of the drives before realizing my error and learning my lesson. I understand now why the JONSBO N4 is cheaper than all of its siblings: partly because there’s a missing SATA backplane, but also because the layout of those other four drive bays is frustrating.

Don’t let my last couple of paragraphs sour you on the JONSBO N4 , though. I still really like its size, and it feels big when you’re working in it with a Mini ITX motherboard. If you wind up deciding to use the JONSBO N4, then I suggest that you put those four drives and their cables in first, before you do anything else. That would’ve made a world of difference for me. Actually looking at the documentation before getting started might have saved me quite a bit of aggravation, too!

If I have ruined the JONSBO N4 for you, then check out the JONSBO N3 . Its eight 3.5-inch drive bays pair up really nicely with the Topton N22 motherboard . You can see what I thought of the JONSBO N3 by reading the DIY NAS: 2024 Edition blog.

BIOS Configuration

Generally speaking, I do as little as I possibly can in the BIOS. Normally I strive to only set the time and change the boot order. However, I did a bit more for the DIY NAS: 2026 Edition since I’m using the SYS_FAN header for the fan which is responsible for cooling the hard drives. Here are the changes that I made in the BIOS:

  1. Set the System Date and System Time to Greenwich Mean Time
  2. Advanced
    1. Hardware Monitor
      1. Set SYS SmartFan Mode to Disabled.
      2. Set the Manual PWM Setting (for SYS_FAN) to 180.
    2. Set PWRON After Power Loss to Always On
  3. Boot
    1. Set Boot Option #1 to the TrueNAS boot device.

I'm not at all interested in venturing down the rabbit hole of trying to completely minimize how much power the NAS uses. However, I imagine there are some opportunities for power savings lurking in the BIOS. I didn't go looking for them myself, but if you're intrepid enough to do so, here are a few suggestions to save some additional power:

  • Disable the onboard audio.
  • Disable any network interfaces that you don’t wind up using.
  • Tinker with the CPU settings.
  • Got other suggestions? Share them in the comments!

Burn-In

Because all of the hardware is brand-new to me and brand-new components are not guaranteed to be free of defects, I always do a little bit of burn-in testing to establish some trust in the hardware that I've picked out for each DIY NAS build. While I think doing some burn-in testing is critically important, I also think the value of subsequent burn-in testing drops the more of it you do. Don't get too carried away, and do your own burn-in testing in moderation!

Memtest86+

I always use Memtest86+ to burn in the RAM, and I always run at least three passes. Typically, I run many more passes because I tend to let the system keep running additional passes overnight. Secondarily, running this many passes gives the CPU a little bit of work to do, and there's enough information displayed by Memtest86+ to give me confidence in the CPU and its settings.

Memtest86+

Hard Drives

The failure rate of hard drives is highest when the drives are new and then again when they're old. Regardless of the type of hard drives that I buy or when I buy them, I always do some disk burn-in. I tend to run Spearfoot's Disk Burn-in and Testing script on all of my new drives. However, executing this script against all of the drives can take quite a long time, even if you use something like tmux to run the tests in parallel.
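
If you do run the burn-ins in parallel, here's a minimal sketch of what that can look like. It assumes Spearfoot's script has been saved as disk-burnin.sh in the current directory and that the drives enumerate as sda through sdh; both of those are assumptions you'd adjust for your own system.

# One tmux window per drive, each running the burn-in script.
# Assumes ./disk-burnin.sh exists and the drives are sda-sdh; adjust both to match your system.
tmux new-session -d -s burnin
for disk in sda sdb sdc sdd sde sdf sdg sdh; do
  tmux new-window -t burnin -n "$disk" "bash ./disk-burnin.sh $disk"
done
tmux attach -t burnin   # detach with Ctrl-b d and let the tests grind away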

Initial TrueNAS CE Setup

There's always a little bit of setup that I do for a new TrueNAS machine. This isn't intended to be an all-inclusive, step-by-step guide for all the things you should do with your DIY NAS. Instead, it's more of a list of things I kept track of while I made sure that the DIY NAS: 2026 Edition was functional enough for me to finish writing this blog. That being said, I do think your NAS would be rather functional if you decided to do the same configuration.

  1. Updated the hostname to diynas2026
    1. Note: This is only to avoid issues with another NAS on my network.
  2. Updated the timezone.
  3. Enabled the following services and set them to start automatically.
    1. SMB
    2. SSH
    3. NFS
  4. Enabled password login for the truenas_admin user.
    • Note: If I were planning to use this DIY NAS long-term, I wouldn’t have done this. Using SSH keys for authentication is a better idea .
  5. Edited the TrueNAS Dashboard widgets to reflect the 10Gb interface ( enp1s0 ).
  6. Created a pool named flash which consisted of a mirrored vdev using the Teamgroup MP44 1TB NVMe SSDs.
  7. Created a pool named rust which consisted of a single RAID-Z2 vdev using eight hard drives that I had sitting on my shelf after they were decommissioned (rough command-line equivalents of both pools are sketched after this list).
  8. Configured the Apps to use the flash pool for the apps’ dataset.
  9. Made sure that the System Dataset Pool was set to flash .
  10. Confirmed that there were Scrub Tasks set up for the flash and rust pools.
  11. Created a dataset on each pool for testing; flash-test and rust-test
  12. Installed the Scrutiny app found in the App Catalog.
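
Here are the rough command-line equivalents of the pool layout from steps 6 and 7. This is purely illustrative: in TrueNAS CE these pools are created through the web UI, and the device names below (nvme0n1/nvme1n1 for the NVMe SSDs, sda through sdh for the hard drives) are assumptions you'd need to swap for whatever your system actually calls your drives.

# Illustrative only -- TrueNAS CE builds these pools in the web UI.
# Device names are assumptions; substitute the ones on your system.
zpool create flash mirror /dev/nvme0n1 /dev/nvme1n1
zpool create rust raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh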

If I were planning to keep this NAS and use it for my own purposes, I would also:

  1. Set up a Let’s Encrypt certificate .
  2. Hook up the NAS to a compatible UPS , enable the UPS service, and configure the UPS service to shut down the NAS before the battery runs out of juice.
  3. Set up system email alert service .
  4. Create replication tasks to back up critical data to my off-site NAS .
  5. Add the new NAS to my Tailscale tailnet using the Tailscale app from the official catalog .
  6. As the NAS is seeded with data, create and maintain a suite of snapshot tasks tailored to the importance of the different data being stored on the NAS.
  7. Set up S.M.A.R.T. tests for all of the drives (a smartctl sketch follows this list):
    1. Weekly Short Test
    2. Monthly Long Test
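
Here's the smartctl sketch promised in item 7, in case you're curious what those scheduled tests boil down to when run by hand. TrueNAS normally handles the scheduling for you in the web UI, and /dev/sda is just a stand-in for whichever drive you're testing.

# Manual equivalents of the scheduled tests (TrueNAS normally schedules these in the UI).
# /dev/sda is a placeholder; repeat for each drive.
smartctl -t short /dev/sda   # short self-test (the weekly one)
smartctl -t long /dev/sda    # extended self-test (the monthly one)
smartctl -a /dev/sda         # review results and SMART attributes afterwards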

Benchmarks

Just about every year, I benchmark each DIY NAS build and almost always come to the same conclusion: the NAS will outperform your network at home. Your first bottleneck is almost always going to be the network, and the overwhelming majority of us have gigabit networks at home, though that's slowly changing as 2.5Gbps and 10Gbps network hardware has started to get reasonably priced.

Even though I always come to the same conclusion, I still like to do the benchmarks for two reasons:

  1. It helps me build confidence that the DIY NAS: 2026 Edition works well.
  2. People tend to enjoy consuming benchmarks and it’s fun for me to see the DIY NAS’ network card get saturated during the testing.

Throughput

I like to do three categories of tests to measure the throughput of the NAS:

  1. Use iperf3 to benchmark throughput between my NAS and another machine on my network (a quick example follows this list).
  2. Benchmark the throughput of the pool(s) locally on the NAS using fio .
  3. Set up SMB shares on each of the pools and then benchmark the throughput when using those shares.
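
Here's the quick iperf3 example mentioned in the first item. Treat it as a sketch: the IP address is a placeholder for whatever your NAS answers on, and TrueNAS typically ships with iperf3, so the server side can usually be run straight from its shell.

# A quick network throughput check (10.0.0.50 is a placeholder for the NAS' IP address).
iperf3 -s                        # on the NAS: start the server
iperf3 -c 10.0.0.50 -t 30 -P 4   # on another machine: 30-second test, 4 parallel streams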

Every year I try to mention that Tom Lawrence from Lawrence Systems published a great video about benchmarking storage with FIO and shared the FIO commands from his video in their forums . I use these FIO commands constantly as a reference point for testing ZFS pools' throughput. Importantly, I'd like to point out that, in that same video, Tom says something very wise:

There are lies, damn lies, and then there are benchmarks!

| Tool | Pool | Test Size | Random Write IOPS | Random Read IOPS | Sequential Write (MB/s) | Sequential Read (MB/s) |
| --- | --- | --- | --- | --- | --- | --- |
| FIO | flash | 4G | 1906.00 | 2200.00 | 548.00 | 1214.00 |
| FIO | flash | 32G | 2132.00 | 3012.00 | 544.00 | 1211.00 |
| FIO | rust | 4G | 1352.00 | 108.00 | 367.00 | 530.00 |
| FIO | rust | 32G | 1474.00 | 326.00 | 368.00 | 544.00 |
| CrystalDiskMark | flash | 4GiB | 5858.89 | 50409.91 | 1104.64 | 956.70 |
| CrystalDiskMark | flash | 32GiB | 4193.36 | 31047.36 | 635.42 | 946.20 |
| CrystalDiskMark | rust | 4GiB | 5226.50 | 46239.01 | 756.23 | 655.32 |
| CrystalDiskMark | rust | 32GiB | 3794.43 | 12809.33 | 759.38 | 677.02 |

What do I think these benchmarks and my use of the DIY NAS: 2026 Edition tell me? In the grand scheme of things, not a whole lot.

However, these benchmarks do back up what I expected: the DIY NAS: 2026 Edition is quite capable and more than ready to meet my storage needs. I especially like that the CrystalDiskMark benchmarks of the SMB shares were both faster than a SATA SSD , and that the throughput to the share on the flash pool practically saturated the NAS' 10GbE network connection.

FIO Tests

Every time I benchmark a NAS, I seem to either be refining what I tried in prior years or completely reinventing the wheel. As a result, I wouldn’t recommend comparing these results with results that I shared in prior years’ DIY NAS build blogs. I haven’t really put a ton of effort into developing a standard suite of benchmarks. Things in my homelab change enough between DIY NAS blogs that trying to create and maintain an environment for a standard suite of benchmarks is beyond what my budget, spare time, and attention span will allow.

I’m going to paste these fio commands here in the blog for my own use in future DIY NAS build blogs. If you wind up building something similar, these might be helpful to measure your new NAS’ filesystem’s performance and compare it to mine!

## Random Write IOPS
fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=128k --size=4G --readwrite=randwrite --ramp_time=10
fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=128k --size=32G --readwrite=randwrite --ramp_time=10

## Random Read IOPS
fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=128k --size=4G --readwrite=randread --ramp_time=10
fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=128k --size=32G --readwrite=randread --ramp_time=10

## Sequential Write (MB/s)
fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4M --size=4G --readwrite=write --ramp_time=10
fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4M --size=32G --readwrite=write --ramp_time=10

## Sequential Read (MB/s)
fio --randrepeat=1 --ioengine=libaio --direct=1  --name=test --filename=test --bs=4M --size=4G --readwrite=read --ramp_time=10
fio --randrepeat=1 --ioengine=libaio --direct=1  --name=test --filename=test --bs=4M --size=32G --readwrite=read --ramp_time=10
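
If you're reusing these, one reasonable approach is to run them from a dataset on the pool you want to measure; the paths below assume the flash-test and rust-test datasets from earlier and TrueNAS' usual /mnt mount layout, so adjust them to match your own pools.

# Run fio from a dataset on the pool you want to measure.
cd /mnt/flash/flash-test   # or /mnt/rust/rust-test
fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4M --size=4G --readwrite=write --ramp_time=10
rm test                    # fio leaves its test file behind; clean it up afterwards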

Perfmon of network throughput for reading files over SMB and during iperf3 tests.

Power Consumption

One not-so-obvious cost of running a DIY NAS is how much power it consumes. While I specifically tried to pick items that were efficient in terms of power consumption, it's also important to realize that all the other bells and whistles on the awesome Topton N22 NAS motherboard consume power too, and that the biggest consumer of power in a NAS is almost always the hard disk drives.

Thanks to my tinkering with home automation , I have a plethora of smart outlets which are capable of power monitoring. I used those smart outlets for most of my power monitoring, but I also have a Kill A Watt P400 that I use for some of the shorter tests:

  • Power consumed during a handful of specific tasks:
    • Idle while running TrueNAS
    • RAM Burn-in (~14 passes of Memtest86+)
    • An 8-hour throughput benchmark copying randomly-sized files to the NAS using SMB.
  • Total consumed during the build, burn-in, and use of the DIY NAS: 2026 Edition .
| Task | Duration | Max Wattage | Avg. Wattage | Total Consumption |
| --- | --- | --- | --- | --- |
| Boot | 10 min. | 200.00 W | 120.00 W | 0.02 kWh |
| Idle | 3 hr. | 90.00 W | 66.67 W | 0.20 kWh |
| RAM Burn-in | 18 hr. | 104.00 W | 91.67 W | 1.65 kWh |
| SMB Benchmark of HDDs | 8 hr. | 107.00 W | 85.00 W | 0.68 kWh |
| Total | 108 hr. | 237.80 W | 66.49 W | 7.17 kWh |

What about an EconoNAS?

Shortly before prices skyrocketed, I decided I wasn't very interested in doing a separate EconoNAS build. Several months ago, I realized that there were several off-the-shelf NAS machines that were more than capable of running TrueNAS, and they were selling at economical prices that couldn't be topped by a DIY approach. I will dive deeper into this in a future blog, eventually… maybe?

All that being said–it’d be incredibly easy to make some compromises which result in the DIY NAS: 2026 Edition becoming quite a bit more economical. Here’s a list of changes that I would consider to be more budget-friendly:

Altogether, these savings could add up to more than $400, which is pretty considerable! If you made all of these changes, you’d have something that’s going to be nearly equivalent to the DIY NAS: 2026 Edition but at a fraction of the price.

Brian as a Super Villain shopping for economical DIY NAS deals!

What am I going to do with the DIY NAS: 2026 Edition?!

My DIY NAS is aging quite gracefully, but I've recently been wondering about replacing it. Shortly before ordering all the parts for the DIY NAS: 2026 Edition , I briefly considered using this year's DIY NAS build to replace my personal NAS. However, I decided not to do that. Then prices skyrocketed; I shelved the idea of building a replacement for my own NAS and nearly shelved the idea of a DIY NAS in 2026!

So that begs the question, “What is Brian going to do with the DIY NAS: 2026 Edition ?”

I’m going to auction it off on the briancmosesdotcom store on eBay ! Shortly after publishing this blog, I’ll list it on eBay. In response to skyrocketing prices for PC components, I’m going to do a no-reserve auction. At the end of the auction, the highest bidder wins and hopefully they’ll get a pretty good deal!

eBay auction of the DIY NAS: 2026 Edition

Final Thoughts

Overall, I’m pleased with the DIY NAS: 2026 Edition . The Topton N22 motherboard is a significant improvement over last year’s Topton N18 motherboard , primarily due to its extra two SATA ports. This provides 33.3% more gross storage capacity.

While testing, I found the Intel Core 3 N355 CPU somewhat excessive for basic NAS functions. However, the substantial untapped CPU horsepower offers luxurious performance potential. This makes the build compelling for anyone planning extensive self-hosting projects.

I have mixed feelings about the JONSBO N4 case . The four right-side drive bays lack SATA backplane connectivity, and without creative cabling solutions, individual drive replacement becomes challenging. However, the case's ~$125 price point compensates for this inconvenience, and I anticipate that the cost savings will justify the compromise for most builders. If I were to build the DIY NAS: 2026 Edition all over again, I'd be tempted to use the JONSBO N3 case or even the JONSBO N6 , which isn't quite obtainable yet.

The DIY NAS: 2026 Edition delivers excellent performance and superior specifications. In my opinion, it represents better value than off-the-shelf alternatives.

Building your own NAS provides significant advantages: years later, you can upgrade the RAM, motherboard, or case, or add PCI-e (x1) expansion cards. Off-the-shelf alternatives offer severely limited upgrade paths.

Is 2026 finally the year that you decide to build your DIY NAS? I hope that it is! Share your experience building your NAS in the comments below or come tell us about it in the #diynas-and-homelab channel on the Butter, What?! Discord server !

Green Card Interviews End in Handcuffs for Spouses of U.S. Citizens

Hacker News
www.nytimes.com
2025-11-27 02:51:31
Comments...
Original Article


Penpot: The Open-Source Figma

Hacker News
github.com
2025-11-27 02:14:36
Comments...
Original Article
penpot header image

License: MPL-2.0 Penpot Community Managed with Taiga.io Gitpod ready-to-code

Website User Guide Learning Center Community

Youtube Peertube Linkedin Instagram Mastodon Bluesky X


Penpot_OpenYourEyes_.mp4

Penpot is the first open-source design tool for design and code collaboration. Designers can create stunning designs, interactive prototypes, and design systems at scale, while developers enjoy ready-to-use code and a workflow that's easy and fast. And all of this with no handoff drama.

Available on browser or self-hosted, Penpot works with open standards like SVG, CSS, HTML and JSON, and it’s free!

The latest updates take Penpot even further. It's the first design tool to integrate native design tokens — a single source of truth to improve efficiency and collaboration between product design and development. With the huge 2.0 release , Penpot took the platform to a whole new level. This update introduces the ground-breaking CSS Grid Layout feature , a complete UI redesign, a new Components system, and much more. For organizations that need extra service for their teams, get in touch .

🎇 Design, code, and Open Source meet at Penpot Fest ! Be part of the 2025 edition in Madrid, Spain, on October 9-10.

Table of contents

Why Penpot

Penpot expresses designs as code. Designers can do their best work and see it will be beautifully implemented by developers in a two-way collaboration.

Plugin system

Penpot plugins let you expand the platform's capabilities, give you the flexibility to integrate it with other apps, and design custom solutions.

Designed for developers

Penpot was built to serve both designers and developers and create a fluid design-code process. You have the choice to enjoy real-time collaboration or play "solo".

Inspect mode

Work with ready-to-use code and make your workflow easy and fast. The inspect tab gives instant access to SVG, CSS and HTML code.

Self host your own instance

Provide your team or organization with a completely owned collaborative design tool. Use Penpot's cloud service or deploy your own Penpot server.

Integrations

Penpot offers integration into the development toolchain, thanks to its support for webhooks and an API accessible through access tokens.

Building Design Systems: design tokens, components and variants

Penpot brings design systems to code-minded teams: a single source of truth with native Design Tokens, Components, and Variants for scalable, reusable, and consistent UI across projects and platforms.

Getting started

Penpot is the only design & prototype platform that is deployment agnostic. You can use it in our SAAS or deploy it anywhere.

Learn how to install it with Docker, Kubernetes, Elestio or other options on our website .

Open Source

Community

We love the Open Source software community. Contributing is our passion and if it’s yours too, participate and improve Penpot. All your designs, code and ideas are welcome!

If you need help or have any questions; if you’d like to share your experience using Penpot or get inspired; if you’d rather meet our community of developers and designers, join our Community !

You will find the following categories:

Community

Code of Conduct

Anyone who contributes to Penpot, whether through code, in the community, or at an event, must adhere to the code of conduct and foster a positive and safe environment.

Contributing

Any contribution will make a difference to improve Penpot. How can you get involved?

Choose your way:

To find (almost) everything you need to know on how to contribute to Penpot, refer to the contributing guide .

Libraries and templates

Resources

You can ask and answer questions, have open-ended conversations, and follow along on decisions affecting the project.

💾 Documentation

🚀 Getting Started

✏️ Tutorials

🏘️ Architecture

📚 Dev Diaries

License

This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.

Copyright (c) KALEIDOS INC

Penpot is a Kaleidos’ open source project

Functional Data Structures and Algorithms: a Proof Assistant Approach

Hacker News
fdsa-book.net
2025-11-27 02:04:57
Comments...
Original Article

A Proof Assistant Approach

Tobias Nipkow , Jasmin Blanchette , Manuel Eberl , Alejandro Gómez-Londoño , Peter Lammich , Christian Sternagel , Simon Wimmer , Bohua Zhan

Published by ACM Books

This book is an introduction to data structures and algorithms for functional languages, with a focus on proofs. It covers both functional correctness and running time analysis. It does so in a unified manner with inductive proofs about functional programs and their running time functions. All proofs have been machine-checked by the proof assistant Isabelle . The pdf contains links to the corresponding Isabelle theories.

Click on an image to download the pdf of the whole book:

Functional Data Structures and Algorithms

Table of contents 1 Table of contents 2

This book is meant to evolve over time. If you would like to contribute, get in touch!

Migrating the Main Zig Repository from GitHub to Codeberg

Hacker News
ziglang.org
2025-11-27 01:49:00
Comments...
Original Article

← Back to News page

November 26, 2025

Ever since git init ten years ago , Zig has been hosted on GitHub. Unfortunately, when it sold out to Microsoft , the clock started ticking . “Please just give me 5 years before everything goes to shit,” I thought to myself. And here we are, 7 years later, living on borrowed time.

Putting aside GitHub’s relationship with ICE , it’s abundantly clear that the talented folks who used to work on the product have moved on to bigger and better things, with the remaining losers eager to inflict some kind of bloated, buggy JavaScript framework on us in the name of progress. Stuff that used to be snappy is now sluggish and often entirely broken.

More importantly, Actions is created by monkeys and completely neglected . After the CEO of GitHub said to “embrace AI or get out” , it seems the lackeys at Microsoft took the hint, because GitHub Actions started “vibe-scheduling”; choosing jobs to run seemingly at random. Combined with other bugs and inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked.

Rather than wasting donation money on more CI hardware to work around this crumbling infrastructure, we’ve opted to switch Git hosting providers instead.

As a bonus, we look forward to fewer violations (exhibit A , B , C ) of our strict no LLM / no AI policy , which I believe are at least in part due to GitHub aggressively pushing the “file an issue with Copilot” feature in everyone’s face.

GitHub Sponsors

The only concern we have in leaving GitHub behind has to do with GitHub Sponsors. This product was key to Zig’s early fundraising success, and it remains a large portion of our revenue today . I can’t thank Devon Zuegel enough. She appeared like an angel from heaven and single-handedly made GitHub into a viable source of income for thousands of developers. Under her leadership, the future of GitHub Sponsors looked bright, but sadly for us, she, too, moved on to bigger and better things. Since she left, that product as well has been neglected and is already starting to decline.

Although GitHub Sponsors is a large fraction of Zig Software Foundation’s donation income, we consider it a liability . We humbly ask if you, reader, are currently donating through GitHub Sponsors, that you consider moving your recurring donation to Every.org , which is itself a non-profit organization.

As part of this, we are sunsetting the GitHub Sponsors perks. These perks are things like getting your name onto the home page, and getting your name into the release notes, based on how much you donate monthly. We are working with the folks at Every.org so that we can offer the equivalent perks through that platform.

Migration Plan

Effective immediately, I have made ziglang/zig on GitHub read-only, and the canonical origin/master branch of the main Zig project repository is https://codeberg.org/ziglang/zig.git .

Thank you to the Forgejo contributors who helped us with our issues switching to the platform, as well as the Codeberg folks who worked with us on the migration - in particular Earl Warren , Otto , Gusted , and Mathieu Fenniak .

In the end, we opted for a simple strategy, sidestepping GitHub’s aggressive vendor lock-in: leave the existing issues open and unmigrated, but start counting issues at 30000 on Codeberg so that all issue numbers remain unambiguous. Let us please consider the GitHub issues that remain open as metaphorically “copy-on-write”. Please leave all your existing GitHub issues and pull requests alone . No need to move your stuff over to Codeberg unless you need to make edits, additional comments, or rebase. We’re still going to look at the already open pull requests and issues ; don’t worry.


In this modern era of acquisitions, weak antitrust regulations, and platform capitalism leading to extreme concentrations of wealth, non-profits remain a bastion defending what remains of the commons.

Happy hacking,

Andrew

Graph Algorithms in Rayon

Lobsters
davidlattimore.github.io
2025-11-27 01:34:43
Comments...
Original Article

David Lattimore -

The Wild linker makes very extensive use of rayon for parallelism. Much of this parallelism is in the form of par_iter and friends. However, some parts of the linker don’t fit neatly because the amount of work isn’t known in advance. For example, the linker has two places where it explores a graph. When we start, we know some roots of that graph, but we don’t know all the nodes that we’ll need to visit. We’ve gone through a few different approaches for how we implement such algorithms. This post covers those approaches and what we’ve learned along the way.

Spawn broadcast

Our first approach was to spawn a task for each thread (rayon’s spawn_broadcast ) then do our own work sharing and job control between those threads. By “our own job control” I mean that each thread would pull work from a channel and if it found no work, it’d park the thread . If new work came up, the thread that produced the work would wake a parked thread.

This was complex. Worse, it didn’t allow us to use other rayon features while it was running. For example, if we tried to do a par_iter from one of the threads, it’d only have the current thread to work with because all the others were doing their own thing, possibly parked, but in any case, not available to rayon.

Scoped spawning

Using rayon’s scope or in_place_scope , we can create a scope into which we spawn tasks.

rayon::scope(|scope| {  
  for node in roots {  
    scope.spawn(|scope| {  
      explore_graph(node, scope);  
    });  
  }  
});  

The idea here is that we create a scope and spawn some initial tasks into that scope. Those tasks then spawn additional tasks and so on until eventually there are no more tasks.

The rayon documentation warns that this is more expensive than other approaches, so should be avoided if possible. The reason it’s more expensive is that it heap-allocates the task. Indeed, when using this approach, we do see increased heap allocations.

Channel + par_bridge

Another approach that I’ve tried recently and which arose out of the desire to reduce heap allocations is to put work into a crossbeam channel . The work items can be an enum if there are different kinds. Our work scope is then just something like the following:

let (work_send, work_recv) = crossbeam_channel::unbounded();

// Add some initial work items.  
for node in roots {  
  work_send.send(WorkItem::ProcessNode(node, work_send.clone()));  
}

// Drop sender to ensure we can terminate. Each work item has a copy of the sender.  
drop(work_send);

work_recv.into_iter().par_bridge().for_each(|work_item| {  
   match work_item {  
      WorkItem::ProcessNode(node, work_send) => {  
        explore_graph(node, work_send);  
      }  
   }  
});  

The trick with this approach is that each work item needs to hold a copy of the send-end of the channel. That means that when processing work items, we can add more work to the queue. Once the last work item completes, the last copy of the sender is dropped and the channel closes.

This approach works OK. It does avoid the heap allocations associated with scoped spawning. It is a little bit complex, although not as complex as doing all the job control ourselves. One downside is that like doing job control ourselves, it doesn’t play nicely with using par_iter inside of worker tasks. The reason why is kind of subtle and is due to the way rayon is implemented. What can happen is that the par_iter doesn’t just process its own tasks. It can also steal work from other threads. When it does this, it can end up blocking trying to pull another work item from the channel. The trouble is that because the par_iter was called from a work item that holds a copy of the send-end of the channel, we can end up deadlocked. The channel doesn’t close because we hold a sender and we don’t drop the sender because we’re trying to read from the read-end of the channel.

Another problem with this approach that I’ve just come to realise is that it doesn’t compose well. I had kind of imagined just getting more and more options in my WorkItem enum as the scope of the work increased. The trouble is that working with this kind of work queue doesn’t play nicely with the borrow checker. An example might help. Suppose we have some code written with rayon’s par_chunks_mut and we want to flatten that work into some other code that uses a channel with work items. First we need to convert the par_chunks_mut code into a channel of work items.

let foo = create_foo();  
foo.par_chunks_mut(chunk_size).for_each(|chunk| {  
   // Do work with mutable slice `chunk`  
});  

If we want the creation of foo to be a work item and each bit of processing to also be work items, there’s no way to do that and have the borrow checker be happy.

match work_item {  
   WorkItem::CreateAndProcessFoo => {  
      let foo = create_foo();  
      // Split `foo` into chunks and queue several `WorkItem::ProcessChunk`s….?  
   }  
   WorkItem::ProcessChunk(chunk) => {  
      // Do work with mutable slice `chunk`.  
   }  
}  

So that clearly doesn’t work. There’s no way for us to take our owned foo and split it into chunks that can be processed as separate WorkItem s. The borrow checker won’t allow it.

Another problem arises if we’ve got two work-queue-based jobs and we’d like to combine them, but the second job needs borrows that were taken by the first job to be released before it can run. This runs into similar problems.

The kinds of code structures we end up with here feel a bit like we’re trying to write async code without async/await. This makes me wonder if async/await could help here.

Async/await

I don’t know exactly what this would look like because I haven’t yet tried implementing it. But I imagine it might look a lot like how the code is written with rayon’s scopes and spawning. Instead of using rayon’s scopes, it’d use something like async_scoped .

One problem that I have with rayon currently is, I think, solved by using async/await. That problem, which I briefly touched on above is described in more detail here. Suppose we have a par_iter inside some other parallel work:

outer_work.par_iter().for_each(|foo| {
  let foo = inputs.par_iter().map(|i| ...).collect();

  // < Some other work with `foo` here, hence why we cannot merge the two par_iters >

  foo.par_iter().map(|i| ...).for_each(|i| ...);
});

If the thread that we’re running this code on becomes idle during the first inner par_iter , that thread will try to steal work from other threads. If it succeeds, then even though all the work of the par_iter is complete, we can’t continue to the second inner par_iter until the stolen work also completes. However, with async/await, tasks are not tied to a specific thread once started. Threads steal work, but tasks don’t, so the task that’s running the above code would become runnable as soon as the par_iter completed even if the thread that had originally been running that task had stolen work - the task could just be run on another thread.

It’d be very interesting to see what async/await could contribute to the parallel computation space. I don’t have any plans to actually try this at this stage, but maybe in future.

Return to scoped spawning and future work

In the meantime, I’m thinking I’ll return to scoped spawning. Using a channel works fine for simple tasks and it avoids the heap allocations, but it really doesn’t compose at all well.

I am interested in other options for avoiding the heap allocations. Perhaps there’s options for making small changes to rayon that might achieve this. e.g. adding support for spawning tasks without boxing, provided the closure is less than or equal to say 32 bytes. I’ve yet to explore such options though.

Thanks

Thanks to everyone who has been sponsoring my work on Wild, in particular the following, who have sponsored at least $15 in the last two months:

  • CodeursenLiberte
  • repi
  • rrbutani
  • Rafferty97
  • wasmerio
  • mati865
  • Urgau
  • mstange
  • flba-eb
  • bes
  • Tudyx
  • twilco
  • sourcefrog
  • simonlindholm
  • petersimonsson
  • marxin
  • joshtriplett
  • coreyja
  • binarybana
  • bcmyers
  • Kobzol
  • HadrienG2
  • +3 anonymous

The Tesla Model Y Just Scored the Worst Reliability Rating in a Decade

Hacker News
www.autoblog.com
2025-11-27 01:33:39
Comments...
Original Article


bonsai_term: A library for building dynamic terminal apps by Jane Street

Hacker News
github.com
2025-11-27 01:20:33
Comments...
Original Article

Bonsai Term

Bonsai_term is a library that lets you write Terminal UIs (TUIs) using OCaml. It uses the same programming model as the bonsai_web library.

Getting started

  1. If you are new to OCaml - or if you haven't already - install opam . It is OCaml's package manager and we'll be using it to install bonsai_term and its dependencies. The specific installation instructions depend on your platform. You can find platform-specific instructions here .
  2. bonsai_term uses OxCaml so the next thing you'll want to do is install oxcaml by following the instructions here .
  3. Run opam install bonsai_term . (This will install bonsai_term and its dependencies).

At this point you should now have bonsai_term "installed".

To learn how to use bonsai_term you can read its MLI src/bonsai_term.mli and / or look at some examples in the bonsai_term_examples repo.

To learn how to use bonsai , you can read the docs in bonsai_web . (most of those docs are aimed at the "web" version of bonsai, so the "vdom" bits may not apply, but the "effect" / "state-fulness" and ways of doing "incrementality" all should transfer from bonsai_web into bonsai_term ).

To learn how to use OCaml, here are some good resources:

If you followed the install instructions at the top of this page, you can skip the "Install" instructions on the above links.

DSP 101 Part 1: An Introductory Course in DSP System Design

Hacker News
www.analog.com
2025-11-27 00:42:13
Comments...
Original Article

Russell Coker: PineTime Band

PlanetDebian
etbe.coker.com.au
2025-11-27 00:37:27
I’ve had a Pine Time for just over 2 years [1]. About a year ago I had a band break and replaced it from a spare PineTime and now I just had another break. Having the band only last one year isn’t that great, but it’s fortunate that the break only affects the inner layer of plastic so there is no ri...
Original Article

I've had a PineTime for just over 2 years [1] . About a year ago I had a band break and replaced it from a spare PineTime, and now I've just had another break. Having the band only last one year isn't that great, but it's fortunate that the break only affects the inner layer of plastic, so there is no risk of the watch suddenly falling off and being broken or lost. The Pine64 web site has a page about this with bad options, one broken link, and a few Amazon items that have ridiculous postage [2] .

I started writing this post while using the band from a Colmi P80 [3] . I bought one for a relative who wanted the metal band, and the way the AliExpress seller does it is to sell the package with the plastic band and include the metal band in the package, so I had a spare band. It fits quite well, with none of the reported problems of the PineTime having insufficient space between the spring bar and the watch. The Colmi band in question is described as “rose gold” but is more like “pinkish beige” and doesn't match the style of the black PineTime.

I ordered a couple of cheap bands from AliExpress which cost $9.77 and $13.55 including postage while the ones that Pine64 recommend have over $15 postage from Amazon!

The 20mm Silicone Magnetic Buckle Watch Strap Band For Huawei GT2 Smart Watch Connected Bracelet Black Watchband Man [4] cost $13.55 including postage. It has a magnetic unfold mechanism which I find a bit annoying and it doesn’t allow easily changing the length. I don’t think I’ll choose that again. But it basically works and is comfortable.

The 20mm Metal Strap for Huawei Watch GT2 3 Quick Release Stainless Steel Watch Band for Samsung Galaxy Watch Bracelet [5] cost $9.77 including postage. I found this unreasonably difficult to put on and not particularly comfortable. But opinions will vary on that; it is cheap and will appeal to some people's style.

Conclusion

There are claims that getting a replacement band for a PineTime is difficult. My experience is that every band with a 20mm attachment works as long as it's designed for a square watch; some bands are designed to partly go around a round face and wouldn't fit. I expect that some bands won't fit, but I don't think that it's enough of a problem to be worried about buying a random band from AliExpress. The incidence of bands not fitting will probably be lower than the incidence of other AliExpress products not doing quite what you want (while meeting the legal criteria of doing what they are claimed to do) and not being used.

I’m now wearing the PineTime with the “Magnetic Buckle Watch Strap Band” and plan to wear it for the next year or so.

David Lerner, Co-Founder of Tekserve, Dies at 72

Daring Fireball
www.nytimes.com
2025-11-27 00:31:44
Sam Roberts, reporting for The New York Times: David Lerner, a high school dropout and self-taught computer geek whose funky foothold in New York’s Flatiron district, Tekserve, was for decades a beloved discount mecca for Apple customers desperate to retrieve lost data and repair frozen hard dri...
Original Article


AdBlock and Signal are for terrorists, according to the French government [video]

Hacker News
www.youtube.com
2025-11-27 00:20:37
Comments...

Crypto hoarders dump tokens as shares tumble

Hacker News
arstechnica.com
2025-11-27 00:07:49
Comments...
Original Article

“It was inevitable,” said Jake Ostrovskis, head of OTC trading at Wintermute, referring to the sell-off in digital asset treasury stocks. “It got to the point where there’s too many of them.”

Several companies have begun selling their crypto stockpiles in an effort to fund share buybacks and shore up their stock prices, in effect putting the crypto treasury model into reverse.

North Carolina-based ether holder FG Nexus sold about $41.5 million of its tokens recently to fund its share buyback program. Its market cap is $104 million, while the crypto it holds is worth $116 million. Florida-based life sciences company turned ether buyer ETHZilla recently sold about $40 million worth of its tokens, also to fund its share buyback program.

Sequans Communications, a French semiconductor company, sold about $100 million of its bitcoin this month in order to service its debt, in a sign of how some companies that borrowed to fund crypto purchases are now struggling. Sequans’ market capitalization is $87 million, while the bitcoin it holds is worth $198 million.

graph of crypto prices

Credit: LSEG

Georges Karam, chief executive of Sequans, said the sale was a “tactical decision aimed at unlocking shareholder value given current market conditions.”

While bitcoin and ether sellers can find buyers, companies with more niche tokens will find it more difficult to raise money from their holdings, according to Morgan McCarthy. “When you’ve got a medical device company buying some long-tail asset in crypto, a niche in a niche market, it is not going to end well,” he said, adding that 95 percent of digital asset treasuries “will go to zero.”

Strategy, meanwhile, has doubled down and bought even more bitcoin as the price of the token has fallen to $87,000, from $115,000 a month ago. The firm also faces the looming possibility of being cut from some major equity indices, which could heap even more selling pressure on the stock.

But Saylor has brushed off any concerns. “Volatility is Satoshi’s gift to the faithful,” he said this week, referring to the pseudonymous creator of bitcoin.

© 2025 The Financial Times Ltd . All rights reserved . Not to be redistributed, copied, or modified in any way.

Foreign interference or opportunistic grifting: why are so many pro-Trump X accounts based in Asia?

Guardian
www.theguardian.com
2025-11-27 00:01:12
A new feature on the social media platform formerly known as Twitter allows users to see the location of other accounts. It has resulted in a firestorm of recriminations When X rolled out a new feature revealing the locations of popular accounts, the company was acting to boost transparency and clam...
Original Article

When X rolled out a new feature revealing the locations of popular accounts, the company was acting to boost transparency and clamp down on disinformation. The result, however, has been a circular firing squad of recriminations, as users turn on each other enraged by the revelation that dozens of popular “America first” and pro-Trump accounts originated overseas.

The new feature was enabled over the weekend by X’s head of product, Nikita Bier, who called it the first step in “securing the integrity of the global town square.” Since then many high-engagement accounts that post incessantly about US politics have been “unmasked” by fellow users.

An Ivanka Trump fan account that posts about illegal immigration to the US was shown to be based in Nigeria. MAGAStorm, spreading conspiracy theories about the assassination attempt on Trump, was found to be in eastern Europe. AmericanVoice which posts anti-Islam content, is based in India.

Users have noted that a high proportion of these potentially misleading accounts – many of which claim to be in America – are operating from Asia, but experts are in disagreement over whether they may be state-backed influence campaigns or even opportunists trying to make a quick buck.

Monetising ‘rage bait’

In 2024 the Centre for Information Resilience (CIR) revealed that a network of accounts on X were posing as young American women, stealing images from European influencers to burnish their credibility. Often these images were manipulated to include pro-Trump hats and clothing.

The new location feature on X has allowed Benjamin Strick, who ran the original investigation, to confirm that almost all of these accounts purporting to be “independent Trump supporting” women are located in Thailand.

Strick noted that while promising to “follow patriots” and “stand with Trump”, these accounts often also posted anti-Islamic content too.

In their 2024 report, the CIR found that these accounts exploited “pre-existing societal tensions” in their efforts to spread disinformation.

“Accounts seized upon news stories relating to gender and LGBTQ+ rights, in some cases allowing them to undermine Democratic policies and promote Republican views.”

Fears that foreign actors are using social media to influence US voters reached their zenith in the months after Trump’s 2016 election win over Hillary Clinton. An intelligence assessment the following year detailed the steps that the Russian state took to bolster Trump using bot farms.

In the years since, experts have warned that foreign influence campaigns are becoming more sophisticated, but as America’s politics has become more partisan and voters more siloed, those warnings appear to have been forgotten.

However, it's possible that the sheer number of pro-Trump accounts around the world might have as much to do with turning a profit as with political influence, says Simon Copland, a researcher at the Australian National University.

“Social media is really based on attention … [and] on places like X or Twitter you can get money from that,” he says, adding that at the moment, the best way to get attention “is to be posting about Donald Trump .”

Changes to the way X monetises its content could be a factor as well. In 2024, the platform announced that creators would now be paid based on the levels of engagement with their content. At the time, some expressed concern that this would incentivise users to create more and more controversial content.

“When platforms begin to reward engagement, creators will begin posting anything that drives a discussion of some sort, including posts that are designed to enrage users, forcing them to reply or comment,” TechCrunch wrote at the time.

“That’s where things like rage bait come about,” says Copland. “People deliberately induce rage to try to encourage people to go on to the platforms” and engage in the content.

The calculations used to determine a user’s payments remain opaque and it’s not clear how much money overseas users posing as Maga-faithful could be making. A BBC investigation from 2024 suggested that for some, it could be thousands of dollars. Experts in southeast Asia’s disinformation space say such figures could be highly motivating for people in the region.

A 2021 report into southeast Asia’s “disinformation crisis” found that many accounts pushing xenophobic and misogynistic messages to appeal to the US right were not particularly invested ideologically, but “driven by almost purely entrepreneurial motivations.”

The ‘dark corners’ of the internet

While the perpetually online cadre of Trump’s followers erupt in anger over the origins of some accounts – many of which have now been suspended – others have been left questioning why the issue matters at all.

Copland points to the flow of rightwing ideas, and how policies dreamed up in dank corners of the internet can make their way to the heights of US and European politics.

On the night that X began to reveal the location of accounts, Donald Trump shared a post from an account called Trump_Army_. With nearly 600,000 followers, the account regularly amplifies conspiracy theories; in a recent post it asked its followers if “JFK was killed for trying to expose the same crooks Trump is now exposing”. Soon after, another user pointed out that Trump_Army_ was based in India.

It’s among the more innocuous examples, but illustrative of the way the wider ecosystem of right-wing politics operates online.

“Extreme ideas start in these dark corners of the internet. They spread, they become memes, they go on to more mainstream platforms and then you see politicians pick them up,” says Copland.

In May, Trump ambushed South African president Cyril Ramaphosa in the Oval Office, accusing him of turning a blind eye to a “white genocide” against South African farmers. These widely discredited claims are thought to have in-part originated in far-right chatrooms.

“We have to be taking this stuff seriously,” he warns, because these ideas “are suddenly becoming mainstream.”

X was approached for comment.

Valhalla's Things: PDF Planners 2026

PlanetDebian
blog.trueelena.org
2025-11-27 00:00:00
Posted on November 27, 2025 Tags: madeof:atoms, madeof:bits, craft:bookbinding A few years ago I wrote some planner generating code to make myself a custom planner; in November 2023 I generated a few, and posted them here on the blog...
Original Article

A few years ago I wrote some planner generating code to make myself a custom planner; in November 2023 I generated a few, and posted them here on the blog, in case somebody was interested in using them.

In 2024 I tried to do the same, and ended up being even more late, to the point where I didn’t generate any (uooops).

I did, however, start to write a Makefile to automate the generation (and got stuck on the fact that there wasn’t an easy way to deduce the correct options needed from just the template name); this year, with the same promptness as in 2023 I got back to the Makefile and finished it, so maybe next year I will be able to post them early enough for people to print and bind them? maybe :)

Anyway, these are all of the variants I currently generate, for 2026.

The files with -book in the name have been imposed on A4 paper for a 16 pages signature. All of the fonts have been converted to paths, for ease of printing (yes, this means that customizing the font requires running the script, but the alternative also had its drawbacks).

In English:

daily-95×186-en.pdf

blank daily pages, 95 mm × 186 mm;

daily-A5-en.pdf daily-A5-en-book.pdf

blank daily pages, A5;

daily-A6-en.pdf daily-A6-en-book.pdf

blank daily pages, A6;

daily-graph-A5-en.pdf daily-graph-A5-en-book.pdf

graph paper (4 mm) daily pages, A5;

daily-points4mm-A5-en.pdf daily-points4mm-A5-en-book.pdf

pointed paper (4 mm), A5;

daily-ruled-A5-en.pdf daily-ruled-A5-en-book.pdf

ruled paper daily pages, A5;

week_on_two_pages-A6-en.pdf week_on_two_pages-A6-en-book.pdf

weekly planner, one week on two pages, A6;

week_on_one_page-A6-en.pdf week_on_one_page-A6-en-book.pdf

weekly planner, one week per page, A6;

week_on_one_page_dots-A6-en.pdf week_on_one_page_dots-A6-en-book.pdf

weekly planner, one week per page with 4 mm dots, A6;

week_health-A6-en.pdf week_health-A6-en-book.pdf

weekly health tracker, one week per page with 4 mm dots, A6;

month-A6-en.pdf month-A6-en-book.pdf

monthly planner, A6;

And the same planners, in Italian:

daily-95×186-it.pdf

blank daily pages, 95 mm × 186 mm;

daily-A5-it.pdf daily-A5-it-book.pdf

blank daily pages, A5;

daily-A6-it.pdf daily-A6-it-book.pdf

blank daily pages, A6;

daily-graph-A5-it.pdf daily-graph-A5-it-book.pdf

graph paper (4 mm) daily pages, A5;

daily-points4mm-A5-it.pdf daily-points4mm-A5-it-book.pdf

pointed paper (4 mm), A5;

daily-ruled-A5-it.pdf daily-ruled-A5-it-book.pdf

ruled paper daily pages, A5;

week_on_two_pages-A6-it.pdf week_on_two_pages-A6-it-book.pdf

weekly planner, one week on two pages, A6;

week_on_one_page-A6-it.pdf week_on_one_page-A6-it-book.pdf

weekly planner, one week per page, A6;

week_on_one_page_dots-A6-it.pdf week_on_one_page_dots-A6-it-book.pdf

weekly planner, one week per page with 4 mm dots, A6;

week_health-A6-it.pdf week_health-A6-it-book.pdf

weekly health tracker, one week per page with 4 mm dots, A6;

month-A6-it.pdf month-A6-it-book.pdf

monthly planner, A6;

Some of the planners include ephemerids and moon phase data: these have been calculated for the town of Como, and specifically for geo:45.81478,9.07522?z=17 , because that’s what everybody needs, right?

If you need the ephemerids for a different location and can’t run the script yourself (it depends on pdfjam, i.e. various GB of LaTeX, and a few python modules such as dateutil, pypdf and jinja2), feel free to ask: unless I receive too many requests to make this sustainable I’ll generate them and add them to this post.

I hereby release all the PDFs linked in this blog post under the CC0 license .

You may notice that I haven’t decided on a license for the code dump repository; again if you need it for something (that is compatible with its unsupported status) other than running it for personal use (for which afaik there is an implicit license) let me know and I’ll push “decide on a license” higher on the stack of things to do :D

Finishing the Makefile meant that I had to add a tiny feature to one of the scripts involved, which required me to add a dependency to pypdf : up to now I have been doing the page manipulations with pdfjam , which is pretty convenient to use, but also uses LaTeX, and apparently not every computer comes with texlive installed (shocking, I know).

If I’m not mistaken, pypdf can do all of the things I’m doing with pdfjam, so maybe for the next year I could convert my script to use that one instead.

But then the planners 2027 will be quick and easy, and I will be able to publish them promptly , right?

Running to the Press

Daring Fireball
daringfireball.net
2025-11-26 23:55:20
Regarding my earlier post on similarities between the 2010 App Store Guidelines and today’s: Notably absent from the current guidelines (I think for a very long time) is the specious but very Jobsian claim that “If you run to the press and trash us, it never helps.” Getting the press on your side is...
Original Article
November Update to the App Store Review Guidelines

Here’s the updated full guideline for section 4.1 :

4.1 Copycats

(a) Come up with your own ideas. We know you have them, so make yours come to life. Don’t simply copy the latest popular app on the App Store, or make some minor changes to another app’s name or UI and pass it off as your own. In addition to risking an intellectual property infringement claim, it makes the App Store harder to navigate and just isn’t fair to your fellow developers.

(b) Submitting apps which impersonate other apps or services is considered a violation of the Developer Code of Conduct and may result in removal from the Apple Developer Program.

(c) You cannot use another developer’s icon, brand, or product name in your app’s icon or name, without approval from the developer.

It’s guideline (c) that’s new, but I like guideline (a) here. Not just the intent of it, but the language. It’s clear, direct, and human. It reminds me of the tone of the very early guidelines, when it seemed like Steve Jobs’s voice was detectable in some of them. In a post back in 2010, I wrote :

This new document is written in remarkably casual language. For example, a few bullet items from the beginning:

  • We have over 250,000 apps in the App Store. We don’t need any more Fart apps.

  • If your app doesn’t do something useful or provide some form of lasting entertainment, it may not be accepted.

  • If your App looks like it was cobbled together in a few days, or you’re trying to get your first practice App into the store to impress your friends, please brace yourself for rejection. We have lots of serious developers who don’t want their quality Apps to be surrounded by amateur hour.

  • We will reject Apps for any content or behavior that we believe is over the line. What line, you ask? Well, as a Supreme Court Justice once said, “I’ll know it when I see it”. And we think that you will also know it when you cross it.

  • If your app is rejected, we have a Review Board that you can appeal to. If you run to the press and trash us, it never helps.

Some of that language remains today. Here’s the current guideline for section 4.3:

4.3 Spam [...]

(b) Also avoid piling on to a category that is already saturated; the App Store has enough fart, burp, flashlight, fortune telling, dating, drinking games, and Kama Sutra apps, etc. already. We will reject these apps unless they provide a unique, high-quality experience. Spamming the store may lead to your removal from the Apple Developer Program.

I could be wrong, but my sense is that Apple has, without much fanfare, cracked down on scams and rip-offs in the App Store. That doesn’t mean there’s none. But it’s like crime in a city: a low amount of crime is the practical ideal, not zero crime. Maybe Apple has empowered something like the “ bunco squad ” I’ve wanted for years? If I’m just unaware of blatant rip-offs running wild in the App Store, send examples my way.

Wednesday, 26 November 2025

Tesla's European sales tumble nearly 50% in October

Hacker News
finance.yahoo.com
2025-11-26 23:44:56
Comments...
Original Article

Tesla's ( TSLA ) Europe woes are only getting worse.

According to the European Automobile Manufacturers' Association (ACEA), Tesla electric vehicle registrations (a proxy for sales) in Europe fell to just 6,964 units in October, a 48.5% drop compared to a year ago. Meanwhile, total EV registrations in the region, which includes the UK and the European Free Trade Association, rose 32.9% in October, with overall registrations regardless of powertrain up 4.9%.

October's total marks the 10th straight month of declining Tesla sales in Europe. Meanwhile, the overall market share of EVs in the broader European region grew to 16.4%.

Tesla's sales hangover rolled on in certain key European territories, with the introduction of the revamped Model Y not enough to blunt the effect of rising competition and CEO Elon Musk's deep unpopularity.

October's sales slide follows a rough 2025 for Tesla year to date in broader Europe.

In the first 10 months of the year, Tesla sales dropped 29.6% to 180,688 units, per the ACEA. Conversely, Tesla's overall market share in Europe dropped to 1.6% from 2.4% a year ago.

Meanwhile, Tesla's Chinese competitor BYD ( BYDDY ), which sells a mix of pure EVs and hybrids, reported sales jumping 207% to 17,470 units sold in Europe. Another major China rival, SAIC, saw sales climb 46% to just under 24,000 vehicles sold.

While weakening sales in a key, EV-centric region should be a concern, it hasn't been a significant issue for Tesla stock.

On Monday, Tesla shares surged nearly 7% after Melius Research tabbed the EV maker a "must own" due to its autonomy efforts and as CEO Elon Musk talked up its chipmaking progress.

Read more: How to avoid the sticker shock on Tesla car insurance

The interior of the new Tesla Model 3 with Full Self-Driving activated, highlighting the advanced autonomous driving system and design of Tesla's electric vehicles, in Bari, Italy, on Sept. 6, 2025. (Matteo Della Torre/NurPhoto via Getty Images)

"One of the reasons we called Tesla a 'must own' in our recent launch — despite all the obvious risks — is that the world is about to change, dramatically," analyst Rob Wertheimer wrote. "Autonomy is coming very soon, and it will change everything about the driving ecosystem.”

The main spark appears to be the latest version of Tesla's full self-driving (FSD) software, which is available in the US and select territories.

While investors own Tesla stock mostly for the AI and autonomous potential, there could be good news from the self-driving front for European buyers.

The RDW, the Dutch automotive regulator, said it has scheduled a demonstration in February for Tesla to show whether FSD meets its requirements, but it has not approved the system yet.

Getting at least one automotive regulator in Europe to approve FSD would be a huge step in the right direction for Tesla and may help staunch the sales slide in the region.

Babushka Lady

Hacker News
en.wikipedia.org
2025-11-26 23:39:58
Comments...
Original Article

From Wikipedia, the free encyclopedia

President John F. Kennedy , Jacqueline Kennedy , Nellie Connally , and Texas governor John Connally minutes before the assassination

The Babushka Lady is an unidentified woman present during the 1963 assassination of US President John F. Kennedy who is speculated to have photographed or filmed the events that occurred in Dallas Dealey Plaza at the time President Kennedy was shot. Her nickname arose from the US Army headscarf she wore, which was similar to scarves worn by elderly Russian women . Babushka ( Russian : бабушка ) literally means "grandmother" or "old woman" in Russian .

Eyewitnesses saw the Babushka Lady holding a camera, and she also appears in film accounts of the assassination. [ 1 ] [ 2 ] She was observed on the grass between Elm and Main streets, among onlookers in front of the Dallas County Building, and is visible in the Zapruder film, as well as in the films of Orville Nix, [ 3 ] Marie Muchmore, and Mark Bell, [ 4 ] 44 minutes and 47 seconds into the Bell film; even though the shooting had already taken place and most of the witnesses around her had taken cover, she can still be seen standing with the camera at her face. After the shooting, she crossed Elm Street and joined the crowd that went up the grassy knoll.

The Babushka Lady is last seen in photographs walking east on Elm Street. Neither she nor the film she may have taken has ever been positively identified. Chronologically, her first appearance on film is on the sidewalk in front of the Dallas County Building, which would place her on Kennedy's right. She would then have had to cross Houston Street onto Dealey Plaza to be visible in the Dealey Plaza images. This may imply that the images show two different women of similar appearance, though it is also plausible that once the motorcade passed she crossed the street and caught it a second time as it drove through Dealey Plaza, where she would have been on Kennedy's left.

Beverly Oliver's claim

[ edit ]

In 1970, a woman named Beverly Oliver told conspiracy researcher Gary Shaw at a church revival meeting in Joshua, Texas, that she was the Babushka Lady. [ 5 ] Oliver stated that she filmed the assassination with a Yashica Super 8 film camera and that she turned the undeveloped film over to two men who identified themselves to her as FBI agents. [ 5 ] According to Oliver, she obtained no receipt from the men, who told her that they would return the film to her within ten days. She did not follow up with an inquiry. [ 5 ]

Oliver reiterated her claims in the 1988 documentary The Men Who Killed Kennedy . [ 5 ] According to Vincent Bugliosi , Oliver has "never proved to most people's satisfaction that she was in Dealey Plaza that day". [ 5 ] Confronted with the fact that the Yashica Super-8 camera was not made until 1969, she stated that she received the "experimental" camera from a friend and was not sure the manufacturer's name was on it. [ 5 ] Oliver's claims were the basis for a scene in Oliver Stone 's 1991 film JFK , in which a character named "Beverly" meets Jim Garrison in a Dallas nightclub. [ 6 ] Played by Lolita Davidovich , she is depicted in the director's cut as wearing a headscarf at Dealey Plaza and speaking of having given the film she shot to two men claiming to be FBI agents.

House Select Committee on Assassinations report

[ edit ]

In March 1979, the Photographic Evidence Panel of the United States House Select Committee on Assassinations indicated that they were unable to locate any film attributed to the Babushka Lady. [ 7 ] According to their report: "Initially, Robert Groden , a photographic consultant to the committee advised the panel as to pertinent photographic issues and related materials. Committee investigators located many of the suggested films and photographs, however, some items were never located, i.e. the Babushka Lady film, a color photograph by Norman Similas, and the original negative of the Betzner photograph." [ 7 ]

Public hearings of the Assassination Records Review Board

[ edit ]

On November 18, 1994, assassination researcher Gary Mack testified before the Assassination Records Review Board that he had recently been told by an executive in Kodak 's Dallas office that a woman in her early 30s with brunette hair brought in film purported to be of the assassination scene while they were processing the Zapruder film . [ 8 ] According to Mack, the executive said the woman explained to federal investigators already at the film processing office that she ran from Main Street across the grass to Elm Street where she stopped and snapped a photo with some people in the foreground of the presidential limousine and the Texas School Book Depository . [ 8 ]

Mack said that he was told by the Kodak executive that the photo was extremely blurry and "virtually useless" and indicated that the woman likely went home without anyone recording her identity. [ 8 ] After suggesting that the woman in the story may have been the Babushka Lady, Mack then told the Board: "I do not believe that Beverly Oliver is the Babushka Lady, or, let me rephrase that, she certainly could be but the rest of the story is a fabrication." [ 8 ]

Also appearing that same day before the ARRB as "Beverly Oliver Massegee", Oliver stated that she was 17 years old at the time of the assassination. [ 8 ] She told the Board that she was filming with an "experimental" 8 mm movie camera approximately 20 to 30 feet (6 to 9 m) from Kennedy when he was shot and that the film was confiscated by a man who identified himself as an FBI agent. [ 8 ] According to Oliver, she handed over the camera because the man was an authority figure and because she feared being caught in possession of marijuana . [ 8 ] Oliver's claims were addressed point by point and debunked by conspiracy theory researcher John McAdams. [ 9 ]

  1. ^ "Muchmore frame" . Archived from the original on March 9, 2012 . Retrieved October 9, 2003 .
  2. ^ "Zapruder Frame 285" . Archived from the original on September 23, 2015 . Retrieved September 17, 2005 .
  3. ^ "JFK Assassination Films" . Jfk-online.com. Archived from the original on February 25, 2021 . Retrieved December 3, 2009 .
  4. ^ "JFK Assassination Films" . Jfk-online.com. Archived from the original on February 26, 2021 . Retrieved December 3, 2009 .
  5. ^ a b c d e f Bugliosi, Vincent (2007). Reclaiming History: The Assassination of President John F. Kennedy . New York: W. W. Norton & Company. p. 1405 . ISBN 978-0-393-04525-3 .
  6. ^ Stone, Oliver ; Sklar, Zachary (1992). "JFK: The Documented Screenplay" . JFK: The Book of the Film : The Documented Screenplay . New York: Applause Books. pp. 119– 121. ISBN 9781557831279 . Retrieved June 7, 2012 .
  7. ^ a b Appendix to Hearings before the Select Committee on Assassinations of the U.S. House of Representatives . Vol. VI Photographic Evidence. Washington, D.C.: United States Government Printing Office. 1979. p. 13. Archived from the original on April 17, 2008 . Retrieved June 9, 2012 .
  8. ^ a b c d e f g United States of America Assassination Records Review Board: Public Hearing . Washington, D.C.: United States Government Printing Office. November 18, 1994. pp. 25– 26, 41– 43. Archived from the original on January 19, 2023 . Retrieved June 9, 2012 .
  9. ^ [1]

Sutskever and LeCun: Scaling LLMs Won't Yield More Useful Results

Hacker News
www.abzglobal.net
2025-11-26 23:33:12
Comments...
Original Article

When two of the most influential people in AI both say that today’s large language models are hitting their limits , it’s worth paying attention.

In a recent long-form interview, Ilya Sutskever – co-founder of OpenAI and now head of Safe Superintelligence Inc. – argued that the industry is moving from an “age of scaling” to an “age of research” . At the same time, Yann LeCun , VP & Chief AI Scientist at Meta, has been loudly insisting that LLMs are not the future of AI at all and that we need a completely different path based on “world models” and architectures like JEPA .

As developers and founders, we’re building products right in the middle of that shift.

This article breaks down Sutskever’s and LeCun’s viewpoints and what they mean for people actually shipping software.

1. Sutskever’s Timeline: From Research → Scaling → Research Again

Sutskever divides the last decade of AI into three phases:

1.1. 2012–2020: The first age of research

This is the era of “try everything”:

  • convolutional nets for vision

  • sequence models and attention

  • early reinforcement learning breakthroughs

  • lots of small experiments, new architectures, and weird ideas

There were big models, but compute and data were still limited. The progress came from new concepts , not massive clusters.

1.2. 2020–2025: The age of scaling

Then scaling laws changed everything.

The recipe became:

More data + more compute + bigger models = better results.

You didn’t have to be extremely creative to justify a multi-billion-dollar GPU bill. You could point to a curve: as you scale up parameters and tokens, performance climbs smoothly.

This gave us:

  • GPT-3/4 class models

  • state-of-the-art multimodal systems

  • the current wave of AI products everyone is building on
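
As a toy illustration of what "pointing to a curve" means, here is a sketch of a Chinchilla-style power law in Python. The constants and exponents are made up for illustration; they are not the published fit.

# Illustrative only: a Chinchilla-style loss curve L(N, D) with made-up constants.
# N = number of parameters, D = number of training tokens; lower loss is better.

def scaling_loss(n_params: float, n_tokens: float,
                 a: float = 400.0, alpha: float = 0.34,
                 b: float = 400.0, beta: float = 0.28,
                 irreducible: float = 1.7) -> float:
    return a / n_params**alpha + b / n_tokens**beta + irreducible

# Scaling parameters and data together gives smooth, predictable gains,
# which is what justified ever-larger training runs.
for scale in (1, 2, 4, 8):
    n, d = 7e9 * scale, 1.4e12 * scale
    print(f"{scale:>2}x compute -> predicted loss {scaling_loss(n, d):.3f}")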

1.3. 2025 onward: Back to an age of research (but with huge computers)

Now Sutskever is saying that scaling alone is no longer enough :

  • The industry is already operating at insane scale .

  • The internet is finite, so you can’t just keep scraping higher-quality, diverse text forever.

  • The returns from “just make it 10× bigger” are getting smaller and more unpredictable.

We’re moving into a phase where:

The clusters stay huge, but progress depends on new ideas , not only new GPUs.

2. Why the Current LLM Recipe Is Hitting Limits

Sutskever keeps circling three core issues.

2.1. Benchmarks vs. real-world usefulness

Models look god-tier on paper:

  • they pass exams

  • solve benchmark coding tasks

  • reach crazy scores on reasoning evals

But everyday users still run into:

  • hallucinations

  • brittle behavior on messy input

  • surprisingly dumb mistakes in practical workflows

So there’s a gap between benchmark performance and actual reliability when someone uses the model as a teammate or co-pilot.

2.2. Pre-training is powerful, but opaque

The big idea of this era was: pre-train on enormous text + images and you’ll learn “everything”.

It worked incredibly well… but it has downsides:

  • you don’t fully control what the model learns

  • when it fails, it’s hard to tell if the issue is data, architecture, or something deeper

  • pushing performance often means more of the same , not better understanding

That’s why there’s so much focus now on post-training tricks: RLHF, reward models, system prompts, fine-tuning, tool usage, etc. We’re papering over the limits of the pre-training recipe.

2.3. The real bottleneck: generalization

For Sutskever, the biggest unsolved problem is generalization .

Humans can:

  • learn a new concept from a handful of examples

  • transfer knowledge between domains

  • keep learning continuously without forgetting everything

Models, by comparison, still need:

  • huge amounts of data

  • careful evals to avoid weird corner-case failures

  • extensive guardrails and fine-tuning

Even the best systems today generalize much worse than people . Fixing that is not a matter of another 10,000 GPUs; it needs new theory and new training methods.

3. Safe Superintelligence Inc.: Betting on New Recipes

Sutskever’s new company, Safe Superintelligence Inc. (SSI) , is built around a simple thesis:

  • scaling was the driver of the last wave

  • research will drive the next one

SSI is not rushing out consumer products. Instead, it positions itself as:

  • focused on long-term research into superintelligence

  • trying to invent new training methods and architectures

  • putting safety and controllability at the core from day one

Instead of betting that “GPT-7 but bigger” will magically become AGI, SSI is betting that a different kind of model , trained with different objectives, will be needed.

4. Have Tech Companies Overspent on GPUs?

Listening to Sutskever, it’s hard not to read between the lines:

  • Huge amounts of money have gone into GPU clusters on the assumption that scale alone would keep delivering step-function gains.

  • We’re discovering that the marginal gains from scaling are getting smaller, and progress is less predictable.

That doesn’t mean the GPU arms race was pointless. Without it, we wouldn’t have today’s LLMs at all.

But it does mean:

  • The next major improvements will likely come from smarter algorithms , not merely more expensive hardware .

  • Access to H100s is slowly becoming a commodity , while genuine innovation moves back to ideas and data .

For founders planning multi-year product strategies, that’s a big shift.

5. Yann LeCun’s Counterpoint: LLMs Aren’t the Future at All

If Sutskever is saying “scaling is necessary but insufficient,” Yann LeCun goes further:

LLMs, as we know them, are not the path to real intelligence.

He’s been very explicit about this in talks, interviews and posts.

5.1. What LeCun doesn’t like about LLMs

LeCun’s core criticisms can be summarized in three points:

  1. Limited understanding
    LLMs are great at manipulating text but have a shallow grasp of the physical world .
    They don’t truly “understand” objects, physics or causality – all the things you need for real-world reasoning and planning.

  2. A product-driven dead-end
    He sees LLMs as an amazing product technology (chatbots, assistants, coding helpers) but believes they are approaching their natural limits .
    Each new model is larger and more expensive, yet delivers smaller improvements.

  3. Simplicity of token prediction
    Under the hood, an LLM is just predicting the next token. LeCun argues this is a very narrow, simplistic proxy for intelligence .
    For him, real reasoning can’t emerge from next-word prediction alone.
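
To make that point concrete, here is a minimal toy sketch of the next-token objective; the scoring function below is a random stand-in for a trained network, so the output is nonsense, but the loop is all that "predict the next token" means.

import numpy as np

# Toy greedy decoding over a tiny vocabulary. A real LLM replaces `score`
# with a huge trained network, but the objective is the same.
vocab = ["the", "cat", "sat", "on", "mat", "."]

def score(context: list[str]) -> np.ndarray:
    # Hypothetical stand-in for a model: deterministic pseudo-random logits.
    rng = np.random.default_rng(abs(hash(tuple(context))) % 2**32)
    return rng.normal(size=len(vocab))

def next_token(context: list[str]) -> str:
    logits = score(context)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                    # softmax over the vocabulary
    return vocab[int(np.argmax(probs))]     # greedy pick of the next token

context = ["the", "cat"]
for _ in range(4):
    context.append(next_token(context))
print(" ".join(context))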

5.2. World models and JEPA

Instead of LLMs, LeCun pushes the idea of world models – systems that:

  • learn by watching the world (especially video)

  • build an internal representation of objects, space and time

  • can predict what will happen next in that world, not just what word comes next

One of the architectures he’s working on is JEPA – Joint Embedding Predictive Architecture :

  • it learns representations by predicting future embeddings rather than raw pixels or text

  • it’s designed to scale to complex, high-dimensional input like video

  • the goal is a model that can support persistent memory, reasoning and planning
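
A very rough sketch of that "predict embeddings, not pixels" idea, with random matrices standing in for the learned encoders and predictor; it is only meant to show where the loss lives, not to reproduce Meta's actual JEPA code.

import numpy as np

rng = np.random.default_rng(0)
DIM_IN, DIM_EMB = 64, 16

# Stand-ins for learned networks: context encoder, target encoder, predictor.
W_ctx  = rng.normal(scale=0.1, size=(DIM_IN, DIM_EMB))
W_tgt  = rng.normal(scale=0.1, size=(DIM_IN, DIM_EMB))
W_pred = rng.normal(scale=0.1, size=(DIM_EMB, DIM_EMB))

def jepa_style_loss(frame_now: np.ndarray, frame_next: np.ndarray) -> float:
    z_ctx  = frame_now  @ W_ctx      # embed the observed context
    z_tgt  = frame_next @ W_tgt      # embed the future observation
    z_pred = z_ctx @ W_pred          # predict the future *embedding*
    # The error is measured in embedding space, never in pixel space.
    return float(np.mean((z_pred - z_tgt) ** 2))

frame_now, frame_next = rng.normal(size=DIM_IN), rng.normal(size=DIM_IN)
print(jepa_style_loss(frame_now, frame_next))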

5.3. Four pillars of future AI

LeCun often describes four pillars any truly intelligent system needs:

  1. Understanding of the physical world

  2. Persistent memory

  3. Reasoning

  4. Planning

His argument is that today’s LLM-centric systems mostly hack around these requirements instead of solving them directly. That’s why he’s increasingly focused on world-model architectures instead of bigger text models.

6. Sutskever vs. LeCun: Same Diagnosis, Different Cure

What’s fascinating is that Sutskever and LeCun agree on the problem :

  • current LLMs and scaling strategies are hitting limits

  • simply adding more parameters and data is delivering diminishing returns

  • new ideas are required

Where they differ is how radical the change needs to be :

  • Sutskever seems to believe that the next breakthroughs will still come from the same general family of models – big neural nets trained on massive datasets – but with better objectives, better generalization, and much stronger safety work.

  • LeCun believes we need a new paradigm : world models that learn from interaction with the environment, closer to how animals and humans learn.

For people building on today’s models, that tension is actually good news: it means there is still a lot of frontier left.

7. What All This Means for Developers and Founders

So what should you do if you’re not running an AI lab, but you are building products on top of OpenAI, Anthropic, Google, Meta, etc.?

7.1. Hardware is becoming less of a moat

If the next big gains won’t come from simply scaling, then:

  • the advantage of “we have more GPUs than you” decreases over time

  • your real edge comes from use cases, data, UX and integration , not raw model size

This is good for startups and agencies: you can piggyback on the big models and still differentiate.

7.2. Benchmarks are not your product

Both Sutskever’s and LeCun’s critiques are a warning against obsessing over leaderboards.

Ask yourself:

  • Does this improvement meaningfully change what my users can do?

  • Does it reduce hallucinations in their workflows?

  • Does it make the system more reliable, debuggable and explainable?

User-centric metrics matter more than another +2% on some synthetic reasoning benchmark.

7.3. Expect more diversity in model types

If LeCun’s world models, JEPA-style architectures, or other alternatives start to work, we’ll likely see:

  • specialized models for physical reasoning and robotics

  • LLMs acting as a language interface over deeper systems that actually handle planning and environment modeling

  • more hybrid stacks, where multiple models collaborate

For developers, that means learning to orchestrate multiple systems instead of just calling one chat completion endpoint.
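
A minimal sketch of what such an orchestration layer can look like; every backend call below is a hypothetical placeholder, not a real API.

# Hypothetical router: send a request to a planner or a world model,
# then let a language model turn the result into a user-facing answer.

def call_llm(prompt: str) -> str:
    return f"[LLM answer to: {prompt!r}]"        # placeholder for a chat API call

def call_planner(goal: str) -> list[str]:
    return [f"step 1 towards {goal}", "step 2"]  # placeholder for a planning model

def call_world_model(scene: str) -> str:
    return f"predicted outcome for {scene!r}"    # placeholder for a world model

def handle(request: dict) -> str:
    kind = request.get("kind", "chat")
    if kind == "plan":
        steps = call_planner(request["goal"])
        return call_llm("Explain this plan to the user: " + "; ".join(steps))
    if kind == "simulate":
        return call_llm("Summarize: " + call_world_model(request["scene"]))
    return call_llm(request["text"])             # default: plain chat

print(handle({"kind": "plan", "goal": "ship the release"}))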

7.4. Data, workflows and feedback loops are where you win

No matter who is right about the far future, one thing is clear for product builders:

  • Owning high-quality domain data

  • Designing tight feedback loops between users and models

  • Building evaluations that match your use case

…will matter more than anything else.

You don’t need to solve world modeling or superintelligence yourself. You need to:

  • pick the right model(s) for the job

  • wrap them in workflows that make sense for your users

  • keep improving based on real-world behavior
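
For the evaluation piece, a tiny harness that replays captured user cases against whichever model function you plug in is often enough to start; everything below is a placeholder sketch, not a framework recommendation.

# Hypothetical eval harness: measure a use-case-specific pass rate
# instead of chasing a generic benchmark score.

CASES = [
    {"input": "Refund order #123", "must_contain": "refund"},
    {"input": "Cancel my subscription", "must_contain": "cancel"},
]

def model(prompt: str) -> str:
    return f"Sure, I will {prompt.lower()}"      # placeholder for a real model call

def evaluate(model_fn) -> float:
    passed = 0
    for case in CASES:
        output = model_fn(case["input"])
        if case["must_contain"] in output.lower():   # your domain-specific check
            passed += 1
    return passed / len(CASES)

print(f"pass rate: {evaluate(model):.0%}")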

8. A Quiet Turning Point

In 2019–2021, the story of AI was simple: “scale is all you need.” Bigger models, more data, more GPUs.

Now, two of the field’s most influential figures are effectively saying:

  • scaling is not enough (Sutskever)

  • LLMs themselves may be a dead end for real intelligence (LeCun)

We’re entering a new phase where research, theory and new architectures matter again as much as infrastructure.

For builders, that doesn’t mean you should stop using LLMs or pause your AI roadmap. It means:

  • focus less on chasing the next parameter count

  • focus more on how intelligence shows up inside your product: reliability, reasoning, planning, and how it fits into real human workflows

The GPU race gave us today’s tools. The next decade will be defined by what we do with them – and by the new ideas that finally move us beyond “predict the next token.”

C100 Developer Terminal

Hacker News
caligra.com
2025-11-26 23:22:24
Comments...
Original Article

Most people don’t write code or manage data, and consumer devices are designed accordingly.

But change isn’t made by most people. Progress comes from the people whose work improves our understanding and ability.

Scientists and artists. Engineers and designers. Hackers and painters.

We think the world needs a brand of computing that stands behind creative technical work, dedicated to creating instead of consuming.

Caligra is a new computer company. Our goal is to help you make the future.

Historic Win for Italy’s Metal Workers

Portside
portside.org
2025-11-26 22:59:16
Historic Win for Italy’s Metal Workers Ray Wed, 11/26/2025 - 17:59 ...
Original Article
Historic Win for Italy's Metal Workers

Last week Italy's metal workers secured a major victory as the unions Fim, Fiom and Uilm, all affiliated to IndustriALL Global Union, signed the renewed National Collective Labour Agreement (NCLA) with Federmeccanica and Assistal after four days of continuous and intense negotiations. The agreement covers more than 1.5 million workers across the country and guarantees a €205 (US$237.17) increase on minimum contractual salaries over four years, which the unions say is essential to protecting wages amid rising living costs and economic uncertainty.

In June 2025, FIOM, FIM and UILM staged an eight-hour strike accompanied by regional demonstrations across Italy, calling out what they described as the employers’ irresponsible refusal to negotiate. Workers across the sector, including those in small and medium-sized enterprises, joined the strike action and additional measures, such as overtime and flexibility blockades, were enforced. Demonstrations sent a clear and unified message: workers would not accept stagnation, wage erosion or further delays. The strike movement strengthened union resolve and demonstrated to employers that metal workers were mobilized, united and prepared to continue the fight to defend purchasing power and secure fair working conditions.

Union negotiators have described this as a crucial victory that ensures long-term wage defence at a moment when many families are facing mounting financial strain. The revised wage structure maintains a system designed to safeguard purchasing power through inflation. The agreement also includes an additional salary quota and a safeguard clause should inflation surge beyond forecasts during the contract period.

General secretaries Ferdinando Uliano, Michele De Palma and Rocco Palombella, from Fim, Fiom and Uilm, said the contract represents not only a negotiation victory, but also the defence of Italy’s national collective bargaining system itself. They emphasized the unity and resolve of the unions throughout the process:

“It was a very hard negotiation, but we closed the gap and signed a strong contract. We protected the purchasing power of metal workers and strengthened rights and protections. The wage increase, the start of a working-time reduction trial and the stabilization of precarious work were our pillars and we achieved them. Today, we can say we saved the national contract, which has never stopped being challenged. This agreement ensures dignity for those who build the industrial heart of Italy. Metal workers are once again writing the history of this country at a time when it needs stability, courage and lasting solutions.”

The contract delivers significant gains in the fight against job insecurity and precarious work, issues that have been central to the unions’ platform. Employers will now be required to stabilize a share of fixed-term workers after 12 months if they wish to extend temporary contracts under specific conditions. Workers employed through staffing agencies will gain the right to permanent employment at the host company after 48 months, an important shift toward fairer and more secure employment for thousands of metal workers.

The agreement also introduces forward-looking changes, including a structured trial to reduce working hours under the guidance of a dedicated commission. Additional improvements include stronger health and safety protections, expanded rights to workplace training, enhanced safeguards for seriously ill and disabled workers and new provisions specifically aimed at preventing violence against women.

IndustriALL general secretary, Atle Høie, praised the agreement and the determination of the Italian unions:

“This is an important victory not only for metal workers in Italy, but for workers everywhere who are fighting against insecurity, declining wages and the erosion of fundamental rights. By securing real wage protection, pathways to stable employment and groundbreaking progress on working-time reduction, Fim, Fiom and Uilm have shown what strong, united unions can achieve. This agreement sends a clear message: collective bargaining remains one of the most powerful tools workers have to build fairer, safer and more dignified workplaces.”

Running Unsupported iOS on Deprecated Devices

Hacker News
nyansatan.github.io
2025-11-26 22:57:56
Comments...
Original Article

Created on 26.11.25

Earlier this year I demoed iOS 6 running on an iPod touch 3 - a device that Apple never gave iOS 6 to, making iOS 5.1.1 the latest build it can run

A few months later I also released a script that generates an iOS 6 restore image installable on that iPod touch model

This article describes technical details behind this work. Certain proficiency in iOS internals is assumed

I'll show you what iOS is made of

First of all, let's recap what software components iOS consists of:

  1. iBoot - the bootloader. Has 4 different types for different scenarios - iBSS, iBEC, LLB and iBoot

  2. Kernelcache - the OS kernel + kernel extensions (drivers) built into a single binary blob

  3. DeviceTree - structured list of hardware used by specific device model + some parameters that specify software behavior. The copy included in an IPSW is more of a template that is heavily modified by iBoot before jumping into kernel

  4. Userspace filesystem - tiny restore ramdisk used purely for OS installation or the actual root filesystem of iOS installed persistently

  5. Various firmwares for coprocessors, be they internal or external to the main SoC - like baseband, Wi-Fi, Bluetooth, multitouch, etc.

iPhone 3GS tests

iPhone 3GS was released the same year as iPod touch 3 (2009), and has a very similar hardware ( S5L8920X SoC vs. S5L8922X ). But the most important part is that it actually got iOS 6 officially

Before doing anything on the iPod I decided to try to boot iOS 6.0 with iOS 5.1.1 iBoot & DeviceTree on the iPhone and see what's gonna break and how

DeviceTree

The most broken thing was the DeviceTree - iOS 6 added a lot of new nodes and properties. To fix it in an automated manner, I wrote a stupid Python script that decodes and computes a diff between 2 DeviceTrees. Such a diff can also be applied to another DeviceTree

The script is available in the SundanceInH2A repo
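
The core idea is simple enough to sketch here, assuming the DeviceTrees have already been decoded into nested dicts of nodes and properties (the real script also has to parse the binary DeviceTree format, which is omitted):

# Simplified sketch: diff two DeviceTrees represented as {node: {prop: value}}
# dicts, then apply the diff to a third tree. Deleted properties are ignored.

def dt_diff(old: dict, new: dict) -> dict:
    diff = {}
    for node, props in new.items():
        old_props = old.get(node, {})
        changed = {k: v for k, v in props.items() if old_props.get(k) != v}
        if changed:
            diff[node] = changed
    return diff

def dt_apply(base: dict, diff: dict) -> dict:
    patched = {node: dict(props) for node, props in base.items()}
    for node, props in diff.items():
        patched.setdefault(node, {}).update(props)
    return patched

ios5 = {"chosen": {"debug-enabled": b"\x00"}}
ios6 = {"chosen": {"debug-enabled": b"\x00", "nvram-proxy-data": b""}}
print(dt_apply({"chosen": {}}, dt_diff(ios5, ios6)))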

As I mentioned above, a lot of things in a DeviceTree are filled in by iBoot at runtime. One such new property is nvram-proxy-data in the chosen node

The property must contain a raw NVRAM dump - leaving it empty will make the kernel get stuck somewhere very early

For iPod touch 3, I also had to clean the diff of iPhone-specific things before applying it to the iPod's 5.1.1 DeviceTree

iBoot

iBoot didn't require any major changes in this case - just the typical Image3 signature check patch, boot-args injection, and a debug-enabled patch so the kernel will actually respect AMFI boot-args

One important thing is to actually populate nvram-proxy-data dynamically, at least for normal boots (aka non-restore). Restore boot will be fine with some random NVRAM hardcoded into DeviceTree, but normal one will overwrite your actual NVRAM with the random one if it decides to sync it at some point

I do it by replacing a call to UpdateDeviceTree() with my own little function that calls the real UpdateDeviceTree() , but also populates actual nvram-proxy-data and random-seed (this one shouldn't be of any importance)

For boot-args, I always add amfi=0xff to disable code-signing, but that's pretty canonical as well

Please note that other iBoot+kernel combos might require more changes - if you ever try something and it doesn't work, I recommend looking into DeviceTree differences (both the initial template and how iBoot fills it) and also boot_args structure iBoot passes to kernel (not to be confused with boot-args string , the boot_args structure is a different thing)

Kernelcache

The most complex part. iPod touch 3 never got iOS 6 officially, yes, but it was rumored that it was initially meant to have it, and that Apple's marketing team said no. Either way, almost every internal iOS 6 build got both a standalone S5L8922X kernel and even standalone kexts (including ones specific to iPod touch 3)

The question is how to load them all simultaneously. My initial idea was to do it just as older Mac OS X could do - load all kexts dynamically on bootloader level. Long story short, my strategy was the following:

  1. In iBoot context, load all kexts from filesystem - binary itself + Info.plist
  2. Lay them out in memory and add corresponding entries to chosen/memory-map node of DeviceTree
  3. Boot standalone kernel which will then pick them up and load

The sad outcome:

panic(cpu 0 caller 0x802e5223): "kern_return_t kxld_link_file(KXLDContext *, u_char *, u_long, const char *, void *, KXLDDependency *, u_int, u_char **, kxld_addr_t *) (com.apple.kec.corecrypto) called in kernel without kxld support"

The kernel has all the code to pick them up, but not to actually link...

Glueing a prelinked kernelcache

So creating a legit kernelcache is the only way after all. I was already imagining all the horrors of writing software to parse and apply LINKEDIT and so on, but then it occurred to me! Mac OS X (before Apple Silicon) was generating such kernelcaches somehow! What if we use that logic to build our iOS kernelcache?

kcgen \
    -c output.bin \
    $(cat n18.10A403.kextlist | sed 's/^/--bundle-id /') \
    -kernel kernels_kexts_10A63970m/mach.development.s5l8922x \
    -arch armv7 \
    -all-personalities \
    -strip-symbols \
    -uncompressed \
    -- \
    kernels_kexts_10A63970m/Extensions

I used /usr/local/bin/kcgen from internal Sierra build (can be found online as "Phoenix A1708.dmg"), but it seems that even latest macOS kextcache can do it (included by default)

Here is a breakdown of the options:

  • -c output.bin - output file to write resulting kernelcache to

  • $(cat n18.10A403.kextlist | sed 's/^/--bundle-id /') - this weird expression appends --bundle-id to every line from the file at n18.10A403.kextlist . This is to specify which kexts we'd like to include. How I created such list is described below

  • -arch armv7 - obviously only build armv7 slice

  • -all-personalities - very important flag that prevents irrelevant IOKit personalities from being stripped. "Irrelevant" as in "irrelevant to the current machine", meaning that without this flag everything relevant to the iPod touch 3 would be stripped

  • -strip-symbols - strips unnecessary symbols. This flag can be omitted theoretically, but I recommend keeping it to make resulting kernelcache smaller

  • -uncompressed - do not apply compression. Since we'll have to change one little thing later, compression would have to be reapplied anyway

  • -- means the rest of the args will point to directories to grab kexts from

  • kernels_kexts_10A63970m/Extensions is a path to a folder containing kexts

The little thing to do is to remove fat header. For some reason, it creates a fat Mach-O with a single slice. iBoot doesn't like it, so let's strip it:

lipo -thin armv7 output.bin -o output.thin.bin

The kernel cache is ready now! Just needs to be compressed and packaged into Image3 container

About kext lists

Once again I compared iPhone 3GS' iOS 5.1.1 vs. 6.0 - some kexts were added, some removed, some changed their bundle IDs, some were irrelevant for iPod touch 3

Do not forget to include the pseudo-extensions as well!

Samples can be found in SundanceInH2A repository

About IOKit personalities

In this specific case I had to patch up Info.plist of the Wi-Fi kext. As always there is a sample in the repo

Restore ramdisk filesystem

Pretty canonical here. I patched asr as usual and also had to move options.n88.plist to options.n18.plist so it can lay out partitions properly

However, I also have to install the iBoot exploit. To do that I reimplement rc.boot binary:

  1. Remount ramdisk and set umask just like the original one does

  2. Call restored_external , but with -server argument, so it doesn't reboot after finishing restore

  3. If restore was completed properly, I add a third partition, write the exploit there and set boot-partition to 2

  4. Reboot the device

My implementation is available guess where? Yes, in the repository

Root filesystem

This needed a lot of changes:

  1. Add matching SpringBoard's hardware feature plist ( /System/Library/CoreServices/SpringBoard.app/N18AP.plist in this case)

    • I took the iOS 5.1.1 variant as a base and added iOS 6 specific capabilities

    • I tried to keep original enough Home screen icon order by merging iPod touch 3 iOS 5.1.1 and iPod touch 4 6.x layouts

  2. Add multitouch & Wi-Fi firmwares

    • I use versions from 5.1.1
  3. Add Bluetooth firmware and scripts

    • This is more complicated, as those are all hardcoded into /usr/sbin/BlueTool

    • Luckily, they can also be overriden by files in /etc/bluetool - as always check my code for reference

    • I extracted both firmware and scripts from 5.1.1 BlueTool

  4. FairPlay daemon is limited to N88AP (iPhone 3GS)

    • It has a LimitLoadToHardware key in its LaunchDaemon plist

    • But if we simply remove the key, it works on iPod touch 3 as well

    • This is important, because otherwise we cannot activate device through Apple's servers

    • This trick will be harder to pull off on iOS 6.1+ because they load LaunchDaemons from a signed cache. Still can be bypassed in many ways - for instance, patching launchd or forcefully loading another plist via launchctl

  5. DYLD shared cache patches

    1. Product ID map patch

      • iOS 6 brings a concept of "product ID" in the form of a long byte sequence
      • It is filled by iBoot into product node of DeviceTree (which didn't even exist before)
      • I hardcode the value of iPhone 3GS straight into DeviceTree ( 8784AE8D7066B0F0136BE91DCFE632A436FFD6FB )
      • There is also a short form of this identifier - 16-bit integer - which existed before iOS 6
      • iPhone 3GS is 0x2714 and the iPod is 0x2715
      • MobileGestalt framework has a table that matches the short form by the long one - I swap 0x2714 with 0x2715 there
      • I believe it's better for iTunes and etc.
    2. getDeviceVariant() patch

      • MobileGestalt once again messes up our business
      • Device variant is a letter - usually "A" or "B"
      • It seems to depend on the Wi-Fi transceiver vendor used in the exact device (?)
      • iOS 6 fails miserably to determine this value for iPod touch 3
      • This crashes activation process, for example
      • To fix it, I patch the function to always return "A" (in form of CFString )
    3. Fixing code signature

      • This is much easier than most people think
      • Shared cache files have the same format of signature as normal Mach-Os
      • And since it's just ad-hoc, all you need to do is to recalculate SHA-1 hash for pages you modified and update the signature
      • So easy, it can be done with just a hex-editor
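
For illustration, here is what that last step boils down to, assuming you already know where the code directory's hash slots live (the path, offsets and page index below are made-up placeholders, not real values):

import hashlib

# Sketch: recompute the SHA-1 of a modified 4 KiB page and overwrite the
# corresponding 20-byte slot in the code directory. In practice the offsets
# come from parsing the LC_CODE_SIGNATURE / code directory headers.

PAGE_SIZE = 0x1000
CACHE_PATH = "dyld_shared_cache_armv7"   # hypothetical file name
HASH_SLOTS_OFFSET = 0x123456             # hypothetical: start of the hash slots
MODIFIED_PAGE_INDEX = 42                 # hypothetical: index of the patched page

with open(CACHE_PATH, "r+b") as f:
    f.seek(MODIFIED_PAGE_INDEX * PAGE_SIZE)
    new_hash = hashlib.sha1(f.read(PAGE_SIZE)).digest()    # 20-byte SHA-1
    f.seek(HASH_SLOTS_OFFSET + MODIFIED_PAGE_INDEX * 20)
    f.write(new_hash)                                       # update the slot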

The iBoot exploit

iOS 5 iBoot had a bug in HFS+ filesystem driver. I did make an exploit many years ago but it was bad . Like, truly bad . I reimplemented it from scratch for this project making it deterministic (hopefully...)

This subject probably deserves a separate article

Conclusion & future plans

This was not easy to do, and yet easier than I expected initially

After releasing the tool many people asked me about jailbreaking. The old tools are not going to work, but it should be easy to just patch the kernel and drop Cydia tarball onto the filesystem. I guess I will give it a try later

There was another device that Apple dropped support for in that year - iPad 1. I will try that soon enough as well

I hope that the information from this write-up will help you make other crazy combinations, like iOS 4 on iPhone 4S or iOS 5 on iPad mini 1

PyPI and Shai-Hulud: Staying Secure Amid Emerging Threats

Lobsters
blog.pypi.org
2025-11-26 22:43:17
Comments...
Original Article

An attack on the npm ecosystem continues to evolve, exploiting compromised accounts to publish malicious packages. This campaign, dubbed Shai-Hulud , has targeted large volumes of packages in the JavaScript ecosystem, exfiltrating credentials to further propagate itself.

PyPI has not been exploited; however, some PyPI credentials were found exposed in compromised repositories. We've revoked these tokens as a precaution, and there's no evidence they have been used maliciously. This post raises awareness about the attack and encourages proactive steps to secure your accounts, especially if you're using build platforms to publish packages to PyPI.

How does this relate to PyPI?

This week, a security researcher disclosed long-lived PyPI credentials exposed as part of the Shai-Hulud campaign. The credentials were found in GitHub repositories (stored as repository secrets), and were still valid. We saw an attack with insecure workflow settings for Ultralytics in 2024 .

While the campaign primarily targets npm, some projects use monorepo setups, publishing both JavaScript packages to npmjs.com and Python packages to PyPI from the same repository. When attackers compromise these repositories, they can extract credentials for multiple platforms.

We investigated the reported credentials and found they were associated with accounts that hadn't published recently. We've revoked these credentials and reached out to affected users to advise them to rotate any remaining tokens.

What can I do to protect my PyPI account?

Here are security practices to protect your PyPI account:

  • Use Trusted Publishing: If you are using a build platform to publish packages to PyPI, consider using a Trusted Publisher . This eliminates the need to manage long-lived authentication tokens, reducing the risk of credential exposure. Trusted Publishing uses short-lived, scoped tokens for each build, minimizing the impact of any potential compromise. This approach has risen in popularity , with other registries like Crates.io , RubyGems , and npmjs.com adopting similar models.

    When using GitHub Actions, consider layering in additional security measures, like requiring human approval via GitHub Environments before publishing. This blog post from pyOpenSci has detailed guidance on adding manual review steps to GitHub Actions workflows.

  • Audit your workflows for misconfiguration: Review your GitHub Actions workflows for any potential security issues. Tools like zizmor and CodeQL can help identify vulnerabilities in your CI/CD pipelines. Adopt scanning as automated actions for the repository to catch future issues.

  • Review your account activity: Regularly check your PyPI account activity for any unauthorized actions. If you notice any suspicious activity, report it to the PyPI security team immediately.

Taking any of these steps helps mitigate the risk of compromise and keeps packages secure.


EFF to Arizona Federal Court: Protect Public School Students from Surveillance and Punishment for Off-Campus Speech

Electronic Frontier Foundation
www.eff.org
2025-11-26 22:33:54
Legal Intern Alexandra Rhodes contributed to this blog post.  EFF filed an amicus brief urging the Arizona District Court to protect public school students’ freedom of speech and privacy by holding that the use of a school-issued laptop or email account does not categorically mean a student is “on c...
Original Article

Legal Intern Alexandra Rhodes contributed to this blog post.

EFF filed an amicus brief urging the Arizona District Court to protect public school students’ freedom of speech and privacy by holding that the use of a school-issued laptop or email account does not categorically mean a student is “on campus.” We argued that students need private digital spaces beyond their school’s reach to speak freely, without the specter of constant school surveillance and punishment.

Surveillance Software Exposed a Bad Joke Made in the Privacy of a Student’s Home

The case, Merrill v. Marana Unified School District , involves a Marana High School student who, while at home one morning before school started, asked his mother for advice about a bad grade he received on an English assignment. His mother said he should talk to his English teacher, so he opened his school-issued Google Chromebook and started drafting an email. The student then wrote a series of jokes in the draft email that he deleted each time. The last joke stated: “GANG GANG GIMME A BETTER GRADE OR I SHOOT UP DA SKOOL HOMIE,” which he narrated out loud to his mother in a silly voice before deleting the draft and closing his computer.

Within the hour, the student’s mother received a phone call from the school principal, who said that Gaggle surveillance software had flagged a threat from her son and had sent along the screenshot of the draft email. The student’s mother attempted to explain the situation and reassure the principal that there was no threat. Nevertheless, despite her reassurances and the student’s lack of disciplinary record or history of violence, the student was ultimately suspended over the draft email—even though he was physically off campus at the time, before school hours, and had never sent the email.

After the student’s suspension was unsuccessfully challenged, the family sued the school district alleging infringement of the student’s right to free speech under the First Amendment and violation of the student’s right to due process under the Fourteenth Amendment.

Public School Students Have Greater First Amendment Protection for Off-Campus Speech

The U.S. Supreme Court has addressed the First Amendment rights of public school students in a handful of cases .

Most notably, in Tinker v. Des Moines Independent Community School District (1969), the Court held that students may not be punished for their on-campus speech unless the speech “materially and substantially” disrupted the school day or invaded the rights of others.

Decades later, in Mahanoy Area School District v. B.L. by and through Levy (2021) , in which EFF filed a brief , the Court further held that schools have less leeway to regulate student speech when that speech occurs off campus. Importantly, the Court stated that schools should have a limited ability to punish off-campus speech because “from the student speaker’s perspective, regulations of off-campus speech, when coupled with regulations of on-campus speech, include all the speech a student utters during the full 24-hour day.”

The Ninth Circuit has further held that off-campus speech is only punishable if it bears a “ sufficient nexus ” to the school and poses a credible threat of violence.

In this case, therefore, the extent of the school district’s authority to regulate student speech is tied to whether the high schooler was on or off campus at the time of the speech. The student here was at home and thus physically off campus when he wrote the joke in question; he wrote the draft before school hours; and the joke was not emailed to anyone on campus or anyone associated with the campus.

Yet the school district is arguing that his use of a school-issued Google Chromebook and Google Workspace for Education account (including the email account) made his speech—and makes all student speech—automatically “on campus” for purposes of justifying punishment under the First Amendment.

Schools Provide Students with Valuable Digital Tools—But Also Subject Them to Surveillance

EFF supports the plaintiffs’ argument that the student’s speech was “off campus,” did not bear a sufficient nexus to the school, and was not a credible threat. In our amicus brief, we urged the trial court at minimum to reject a rule that the use of a school-issued device or cloud account always makes a student’s speech “on campus.”

Our amicus brief supports the plaintiffs’ First Amendment arguments through the lens of surveillance, emphasizing that digital speech and digital privacy are inextricably linked.

As we explained, Marana Unified School District, like many schools and districts across the country, offers students free Google Chromebooks and requires them to have an online Google Account to access the various cloud apps in Google Workspace for Education, including the Gmail app.

Marana Unified School District also uses three surveillance technologies that are integrated into Chromebooks and Google Workspace for Education: Gaggle, GoGuardian, and Securly. These surveillance technologies collectively can monitor virtually everything students do on their laptops and online, from the emails and documents they write (or even just draft ) to the websites they visit.

School Digital Surveillance Chills Student Speech and Further Harms Students

In our amicus brief, we made four main arguments against a blanket rule that categorizes any use of a school-issued device or cloud account as “on campus,” even if the student is geographically off campus or outside of school hours.

First, we pointed out that such a rule will result in students having no reprieve from school authority, which runs counter to the Supreme Court’s admonition in Mahanoy not to regulate “all the speech a student utters during the full 24-hour day.” There must be some place that is “off campus” for public school students even when using digital tools provided by schools, otherwise schools will reach too far into students’ lives.

Second, we urged the court to reject such an “on campus” rule to mitigate the chilling effect of digital surveillance on students’ freedom of speech—that is, the risk that students will self-censor and choose not to express themselves in certain ways or access certain information that may be disfavored by school officials. If students know that no matter where they are or what they are doing with their Chromebooks and Google Accounts, the school is watching and the school has greater legal authority to punish them because they are always “on campus,” students will undoubtedly curb their speech.

Third, we argued that such an “on campus” rule will exacerbate existing inequities in public schools among students of different socio-economic backgrounds. It would distinctly disadvantage lower-income students who are more likely to rely on school-issued devices because their families cannot afford a personal laptop or tablet. This creates a “pay for privacy” scheme : lower-income students are subject to greater school-directed surveillance and related discipline for digital speech, while wealthier students can limit surveillance by using personal laptops and email accounts, enabling them to have more robust free speech protections.

Fourth, such an “on campus” rule will incentivize public schools to continue eroding student privacy by subjecting them to near constant digital surveillance. The student surveillance technologies schools use are notoriously privacy invasive and inaccurate , causing various harms to students—including unnecessary investigations and discipline, disclosure of sensitive information, and frustrated learning.

We urge the Arizona District Court to protect public school students’ freedom of speech and privacy by rejecting this approach to school-managed technology . As we said in our brief, students, especially high schoolers, need some sphere of digital autonomy, free of surveillance, judgment, and punishment, as much as anyone else—to express themselves, to develop their identities, to learn and explore, to be silly or crude, and even to make mistakes .

Bring Back Doors – Bring Bathroom Doors Back to Hotels

Hacker News
bringbackdoors.com
2025-11-26 22:26:36
Comments...
Original Article

I’m done. I’m done arriving at hotels and discovering that they have removed the bathroom door. Something that should be as standard as having a bed, has been sacrificed in the name of “aesthetic”.

I get it, you can save on material costs and make the room feel bigger, but what about my dignity??? I can’t save that when you don’t include a bathroom door.

It’s why I’ve built this website, where I compiled hotels that are guaranteed to have bathroom doors, and hotels that need to work on privacy.

I’ve emailed hundreds of hotels and I asked them two things: do your doors close all the way, and are they made of glass? Everyone that says yes to their doors closing, and no to being made of glass has been sorted by price range and city for you to easily find places to stay that are guaranteed to have a bathroom door.


Quickly check to see if the hotel you’re thinking of booking has been reported as lacking in doors by a previous guest.


Finally, this passion project could not exist without people submitting hotels without bathroom doors for public shaming. If you’ve stayed at a doorless hotel send me an email with the hotel name to bringbackdoors@gmail.com, or send me a DM on Instagram with the hotel name and a photo of the doorless setup to be publicly posted.

Let’s name and shame these hotels to protect the dignity of future travelers.

New ShadowV2 botnet malware used AWS outage as a test opportunity

Bleeping Computer
www.bleepingcomputer.com
2025-11-26 22:24:14
A new Mirai-based botnet malware named 'ShadowV2' has been observed targeting IoT devices from D-Link, TP-Link, and other vendors with exploits for known vulnerabilities. [...]...
Original Article

New ShadowV2 botnet malware used AWS outage as a test opportunity

A new Mirai-based botnet malware named ‘ShadowV2’ has been observed targeting IoT devices from D-Link, TP-Link, and other vendors with exploits for known vulnerabilities.

Fortinet’s FortiGuard Labs researchers spotted the activity during the major AWS outage in October . Although the two incidents are not connected, the botnet was active only for the duration of the outage, which may indicate that it was a test run.

ShadowV2 spread by leveraging at least eight vulnerabilities in multiple IoT products:


  • DD-WRT (CVE-2009-2765)
  • D-Link (CVE-2020-25506, CVE-2022-37055, CVE-2024-10914, CVE-2024-10915)
  • DigiEver (CVE-2023-52163)
  • TBK (CVE-2024-3721)
  • TP-Link (CVE-2024-53375)

Among these flaws, CVE-2024-10914 is a known-to-be-exploited command injection flaw impacting EoL D-Link devices, which the vendor announced that it would not fix .

Regarding CVE-2024-10915, for which there’s a NetSecFish report from November 2024, BleepingComputer initially did not find the vendor's advisory for the flaw. After reaching out to the company, we received confirmation that the issue would not be fixed for the impacted models.

D-Link updated an older bulletin to add the particular CVE-ID and published a new one referring to the ShadowV2 campaign, to warn users that end-of-life or end-of-support devices are no longer under development and will not receive firmware updates.

CVE-2024-53375, which was also presented in detail in November 2024, was reportedly fixed via a beta firmware update.

Various exploits used by ShadowV2 (Source: Fortinet)

According to FortiGuard Labs researchers, the ShadowV2 attacks originated from 198[.]199[.]72[.]27, and targeted routers, NAS devices, and DVRs across seven sectors, including government, technology, manufacturing, managed security service providers (MSSPs), telecommunications, and education.

The impact was global, with attacks observed in North and South America, Europe, Africa, Asia, and Australia.

The botnet's global impact (Source: Fortinet)

The malware identifies itself as "ShadowV2 Build v1.0.0 IoT version," and is similar to the Mirai LZRD variant, the researchers say in a report that provides technical details on how ShadowV2 functions.

It is delivered to vulnerable devices through an initial access stage using a downloader script (binary.sh) that fetches it from a server at 81[.]88[.]18[.]108.

Downloader script (Source: Fortinet)

It uses XOR-encoded configuration for filesystem paths, User-Agent strings, HTTP headers, and Mirai-style strings.
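
Single-byte XOR obfuscation of this kind is its own inverse, so recovering the strings is trivial once the key byte is known; a minimal sketch (the key below is a placeholder, not ShadowV2's actual key):

# Generic single-byte XOR decoder for Mirai-style encoded strings.
# The key 0x22 here is only an example value.

def xor_decode(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

encoded = xor_decode(b"/bin/busybox", 0x22)   # "encode" a sample string
print(xor_decode(encoded, 0x22))              # applying it again recovers it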

In terms of functional capabilities, it supports distributed denial-of-service (DDoS) attacks on UDP, TCP, and HTTP protocols, with various flood types for each. The command-and-control (C2) infrastructure triggers these attacks via commands sent to the bots.

DDoS attack trigger (Source: Fortinet)

Typically, DDoS botnets make money by renting their firepower to cybercriminals or by directly extorting targets, demanding payments for stopping the attacks. However, it is not yet known who is behind Shadow V2 and what their monetization strategy is.

Fortinet shared indicators of compromise (IoCs) to help identify this emerging threat at the bottom of the report, while warning about the importance of keeping firmware updated on IoT devices.


EU approves Chat Control policy

Hacker News
www.techradar.com
2025-11-26 21:52:14
Comments...
Original Article
Danish Justice Minister Peter Hummelgaard gives a doorstep statement after a briefing on drones at the Ministry of Justice on September 29, 2025, following recent drone disturbances over Denmark. (Photo: Thomas Traasdahl/Ritzau Scanpix/AFP via Getty Images)

  • The EU Council reached an agreement on the Child Sexual Abuse Regulation
  • Voluntary chat scanning remains in the bill despite privacy backlash
  • The Council now prepares to start negotiations with the Parliament

The EU Council has finally reached an agreement on the controversial Child Sexual Abuse Regulation (CSAR) after more than three years of failed attempts.

Nicknamed Chat Control by its critics, the agreement has kept cryptographers, technologists, encrypted service providers, and privacy experts alike in turmoil since its inception.

Presidency after presidency, the bill has taken many shapes. But its most controversial feature is an obligation for all messaging service providers operating in the EU – including those using end-to-end-encryption – to scan their users' private chats on the lookout for child sexual abuse material (CSAM).

At the beginning of the month, the Danish Presidency decided to change its approach with a new compromise text that makes the chat scanning voluntary, instead. That turned to be a winning move, with the proposal managing to reach an agreement in the Council on Wednesday, November 26, 2025.

Privacy experts are unlikely to celebrate, though. The decision came a few days after a group of scientists wrote yet another open letter warning that the latest text still " brings high risks to society ." That's after other privacy experts deemed the new proposal a " political deception " rather than an actual fix.

The EU Council is now preparing to start negotiations with the European Parliament, hoping to agree on the final terms of the regulation.

What we know about the Council agreement

EU flags outside administrative building

(Image credit: Pixabay)

As per the EU Council announcement , the new law imposes a series of obligations on digital companies. Under the new rules, online service providers will be required to assess how their platforms could be misused and, based on the results, may need to "implement mitigating measures to counter that risk," the Council notes.

The Council also introduces three risk categories of online services. Those deemed high-risk can be forced "to contribute to the development of technologies to mitigate the risks relating to their services." Voluntary scanning also remains in the bill.

A new EU agency is then tasked to oversee the implementation of the new rules.

"I'm glad that the member states have finally agreed on a way forward that includes a number of obligations for providers of communication services to combat the spread of child sexual abuse material," said Danish Minister for Justice, Peter Hummelgaard.

But concerns about how the agreement threatens our digital rights persist, with one Hacker News commenter saying the Danish "government has today turned the EU into a tool for total surveillance, I don't know if there can be any return from."

As trilogue negotiations approach, the ongoing challenge for legislators remains halting abuse online without compromising fundamental rights and strong encryption.





November Update to the App Store Review Guidelines

Daring Fireball
developer.apple.com
2025-11-26 21:46:25
Here’s the updated full guideline for section 4.1: 4.1 Copycats (a) Come up with your own ideas. We know you have them, so make yours come to life. Don’t simply copy the latest popular app on the App Store, or make some minor changes to another app’s name or UI and pass it off as yo...
Original Article

November 13, 2025

The App Review Guidelines have been revised to support updated policies and to provide clarification. Please review the changes below:

  • 1.2.1(a): This new guideline specifies that creator apps must provide a way for users to identify content that exceeds the app’s age rating, and use an age restriction mechanism based on verified or declared age to limit access by underage users.
  • 2.5.10: This language has been deleted (“Apps should not be submitted with empty ad banners or test advertisements.”).
  • 3.2.2(ix): Clarified that loan apps may not charge a maximum APR higher than 36%, including costs and fees, and may not require repayment in full in 60 days or less.
  • 4.1(c): This new guideline specifies that you cannot use another developer’s icon, brand, or product name in your app’s icon or name, without approval from the developer.
  • 4.7: Clarifies that HTML5 and JavaScript mini apps and mini games are in scope of the guideline.
  • 4.7.2: Clarifies that apps offering software not embedded in the binary may not extend or expose native platform APIs or technologies to the software without prior permission from Apple.
  • 4.7.5: Clarifies that apps offering software not embedded in the binary must provide a way for users to identify content that exceeds the app’s age rating, and use an age restriction mechanism based on verified or declared age to limit access by underage users.
  • 5.1.1(ix): Adds crypto exchanges to the list of apps that provide services in highly regulated fields.
  • 5.1.2(i): Clarifies that you must clearly disclose where personal data will be shared with third parties, including with third-party AI, and obtain explicit permission before doing so.

Translations of the guidelines will be available on the Apple Developer website within one month.

The EU made Apple adopt new Wi-Fi standards, and now Android can support AirDrop

Hacker News
arstechnica.com
2025-11-26 21:25:36
Comments...
Original Article



Google’s Pixel 10 works with AirDrop, and other phones should follow later.

Google's Pixel 10 series now features compatibility with Apple's AirDrop. Credit: Ryan Whitwam

Last year, Apple finally added support for Rich Communications Services (RCS) texting to its platforms, improving consistency, reliability, and security when exchanging green-bubble texts between the competing iPhone and Android ecosystems. Today, Google is announcing another small step forward in interoperability, pointing to a slightly less annoying future for friend groups or households where not everyone owns an iPhone.

Google has updated Android’s Quick Share feature to support Apple’s AirDrop, which allows users of Apple devices to share files directly using a local peer-to-peer Wi-Fi connection. Apple devices with AirDrop enabled and set to “everyone for 10 minutes” mode will show up in the Quick Share device list just like another Android phone would, and Android devices that support this new Quick Share version will also show up in the AirDrop menu.

Google will only support this feature on the Pixel 10 series, at least to start. The company is “looking forward to improving the experience and expanding it to more Android devices,” but it didn’t announce anything about a timeline or any hardware or software requirements. Quick Share also won’t work with AirDrop devices working in the default “contacts only” mode, though Google “[welcomes] the opportunity to work with Apple to enable ‘Contacts Only’ mode in the future.” (Reading between the lines: Google and Apple are not currently working together to enable this, and Google confirmed to The Verge that Apple hadn’t been involved in this at all.)

Like AirDrop, Google notes that files shared via Quick Share are transferred directly between devices, without being sent to either company’s servers first.

Google shared a little more information in a separate post about Quick Share’s security , crediting Android’s use of the memory-safe Rust programming language with making secure file sharing between platforms possible.

“Its compiler enforces strict ownership and borrowing rules at compile time, which guarantees memory safety,” writes Google VP of Platforms Security and Privacy Dave Kleidermacher. “Rust removes entire classes of memory-related bugs. This means our implementation is inherently resilient against attackers attempting to use maliciously crafted data packets to exploit memory errors.”

Why is this happening now?

Google doesn’t mention it in either Quick Share post, but if you’re wondering why it’s suddenly possible for Quick Share to work with AirDrop, it can almost certainly be credited to European Union regulations imposed under the Digital Markets Act (DMA).

Let's start with how AirDrop works. Like many of Apple's "Continuity" features that rely on wireless communication between devices, AirDrop uses Bluetooth to allow devices to find each other, and a fast peer-to-peer Wi-Fi connection to actually transfer files and other data. This isn't exotic hardware; all smartphones, tablets, and computers sold today include some flavor of Bluetooth and Wi-Fi.

But to make those Continuity features work, Apple also developed a proprietary protocol called Apple Wireless Direct Link (AWDL) to facilitate the actual connection between devices and the data transfer. Because this wasn’t a standard anyone could use, other companies couldn’t try to make their own wireless sharing features compatible with AirDrop.

But earlier this year, the EU adopted new specification decisions that required Apple to adopt new interoperable wireless standards, starting in this year's iOS 26 release. If you don't want to wade through the regulatory documents, this post from cloud services company Ditto is a useful timeline of events written in plainer language.

Setting AirDrop to “everyone for 10 minutes” mode on an iPhone. Credit: Andrew Cunningham

The rulings required Apple to add support for the Wi-Fi Alliance's Wi-Fi Aware standard instead of AWDL—and in fact required Apple to deprecate AWDL and to help add its features to Wi-Fi Aware so that any device could benefit from them. This wasn't quite the imposition it sounded like; Wi-Fi Aware was developed with Apple's help, based on the work Apple had already done on AWDL. But it meant that Apple could no longer keep other companies out of AirDrop by using a functionally similar but private communication protocol instead of the standardized version.

In some ways, Apple's journey to Wi-Fi Aware recalls the iPhone's journey to USB-C: first, Apple developed a proprietary port that achieved some of the same goals as USB-C; Apple then contributed work to what would become the standardized USB-C connector; but then the company hesitated to actually adopt the standardized port in its phones until its hand was forced by regulators.

In any case, Wi-Fi Aware was added to iOS 26 and iPadOS 26, and Apple’s developer documentation lists the specific hardware that supports it (the iPhone 12 and later, and most iPads released within the last three or four years). For Android users, that likely means that Quick Share will only work with AirDrop on those devices, if they’ve been updated to iOS/iPadOS 26 or later. Google has supported Wi-Fi Aware in Android since version 8.0, so it should at least theoretically be possible for most modern Android phones to add support for the feature in software updates somewhere down the line.

Apple's hardware support list also suggests that Android phones won't work with AirDrop on the Mac, since macOS 26 isn't listed as a supported operating system on Apple's Wi-Fi Aware support page (it's likely not a coincidence that macOS is not considered to be a "gatekeeper" operating system under the DMA, as both iOS and iPadOS are).

If I had to guess why neither of Google’s Quick Share posts mentions Wi-Fi interoperability standards or the DMA, it may be because Google has been complaining about various aspects of the law and its enforcement since before it was even passed (as have many US tech companies designated as gatekeepers by the law). Google has occasionally tried to take advantage of the DMA, as it did when it argued that Apple’s iMessage service should be opened up. But it may be that Google doesn’t want to explicitly credit or praise the DMA in its press releases when the company is facing the possibility of huge fines under the same law.

The New York Times reported earlier this week that EU regulators are considering changes to some of the bloc's tech regulations, citing concerns about "overregulation" and "competitiveness," but that the EU was not currently considering changes to the DMA. For its part, Apple recently called for the DMA to be repealed entirely.



AirDrop support for Pixel 10 likely exists because of the EU ruling

Hacker News
9to5google.com
2025-11-26 21:24:09
Comments...
Original Article

Out of nowhere, Google brought cross-platform AirDrop support to the Pixel 10 this week, allowing the company’s latest lineup of flagships to safely and securely send photos, files, and more to the iPhone. While it initially seemed like this was a rogue move made by Google to coerce Apple into another boundary-breaking decision, it might actually be part of the repercussions that also led to USB-C on iPhone and the adoption of RCS.

If you've been scratching your head trying to figure out just how — not to mention why — Google was able to get this up and running, the answer might be a little simpler than you think. While this certainly brought back memories of, say, Beeper's attempt at getting iMessage up and running on Android two years ago, as well as Palm's war of attrition over iTunes support in the earliest days of the Pre, it sounds like this particular example was far less hostile towards Apple than any of its predecessors, all thanks to some of the changes made by the EU.

As reported by Ars Technica , the answer to this week’s mysterious Quick Share upgrade lies in the EU’s interoperability requirements designed for the DMA. The ruling out of the European Commission pushed Apple to begin supporting interoperable wireless standards beginning with this year’s set of OS upgrades, replacing the previous proprietary standard the company used to power its various Continuity features. That forced Apple to add support for the Wi-Fi Alliance’s Wi-Fi Aware standard of multi-directional file sharing, at the cost of completely phasing out its previous walled-in protocol.

So yes, while Apple wasn't officially involved with opening up AirDrop clients to Android, it's a little unfair to paint the company as having no involvement at all. Thanks to actions Apple was required to take under the DMA in Europe, Pixel 10 users — and soon, Android users at large — now have effectively native AirDrop support through Quick Share without any sacrifice to security, so long as the hardware has proper support for Wi-Fi Aware.

Still, just because this isn't the quiet workaround some of us might've assumed Google was relying on doesn't mean you should expect Apple to join in on the fun any time soon. As Ars Technica points out in its report, Europe has been rethinking its heavy-handed approach to tech firms, specifically in reaction to the absence of AI-centric firms in the region — and Apple, for its part, still wants the DMA revoked. Try out AirDrop while your phone still supports it, Pixel 10 owners. While it seems unlikely, you never know if this could disappear overnight.



Why 90s Movies Feel More Alive Than Anything on Netflix

Hacker News
afranca.com.br
2025-11-26 20:53:45
Comments...
Original Article

Tags: # Blogging # ClassicCinema # ModernMovies # Netflix # Streaming

I was rewatching The Silence of the Lambs the other night, and something hit me hard. This movie, made in 1991, feels more alive, more gripping, more real than most things coming out today. And it got me thinking: why do 80s and 90s movies seem so much better than what we're getting now?

There's something about the way older films were crafted that modern cinema seems to have lost. Take Goodfellas from 1990. Scorsese doesn't just tell you a story about mobsters, he pulls you into their world. The tracking shot through the Copacabana, the narration that feels like a conversation, the way violence erupts suddenly and brutally. You feel the seduction of that lifestyle and the paranoia that comes with it. Every frame has purpose. Every scene builds character. Compare that to The Irishman from 2019, which is actually good but feels bloated, overly long, relying too heavily on “de-aging” technology that never quite convinces you.

Or think about Pulp Fiction from 1994. Tarantino took narrative structure and shattered it into pieces, then reassembled it into something that shouldn't work but does, brilliantly. The dialogue crackles. The characters feel lived-in. Vincent and Jules aren't just hitmen, they're more like philosophers debating foot massages and divine intervention between murders. Now look at something like Bullet Train from 2022. It's stylish, sure, but it feels like it's trying too hard to be quirky. The characters are archetypes. The dialogue is clever for cleverness' sake. It's entertaining in the moment but fades away from your memory almost immediately.

Even The Silence of the Lambs itself proves the point. Every interaction between Clarice and Hannibal is a chess match. You feel her vulnerability, his intelligence, the way he gets under her skin. The horror isn't in jump scares, it's in the psychological warfare. Modern thrillers like The Woman in the Window from 2021 have twists and atmosphere, but they lack that deep character work that makes you actually care what happens.

I think the difference comes down to this: older movies took risks. They trusted audiences to pay attention, to feel something, to think. Scorsese and Tarantino had visions and the freedom to execute them without endless studio interference. They weren't chasing demographics or worrying about franchise potential. They were making films, not products.

Today's cinema often feels designed by committee, optimized for streaming algorithms and opening weekend numbers rather than lasting impact. We have better technology, way bigger budgets, more sophisticated effects, but somewhere along the way, we forgot that movies are supposed to move us, not just occupy our time between scrolling sessions.

Maybe I'm just nostalgic. Maybe I'm romanticizing the past. But when I finish a good movie, I can sit there thinking about them for hours, even days depending on the movie. When I finish most modern blockbusters, I'm already thinking about dinner. And that difference, I think, says everything.

Crews Claim Boring Company Failed to Pay Workers and Snubbed OSHA Concerns

Hacker News
nashvillebanner.com
2025-11-26 20:14:17
Comments...
Original Article

Willie Shane broke the asphalt on Elon Musk’s Music City Loop project this summer. Seven of his crew had been the sole excavators, fabricators and dump trucking company on The Boring Company’s proposed tunnel through Nashville for months.

Then came Monday night, when they walked off the site.

“I moved the equipment myself,” Shane said in an interview with the Banner on Tuesday.

“We were really skeptical from the beginning, and then since then, things pretty much just went downhill,” he added.

Musk's company has a spotty record of completing similar tunnels in other cities, often snagging on government regulations and contractual issues. When Shane's company, Shane Trucking and Excavating, which works with major local clients like the Grand Ole Opry and the Nashville International Airport, was approached by The Boring Company, he said he had some reservations.

“I told them very bluntly — and I don’t want this to come across like egotistical — but I told them, ‘Hey, my dad worked really hard to build a reputation in Nashville, and my brother and I work very hard to keep that reputation,’” Shane said. “If you guys are actually serious about doing this, you need to be 100 percent serious, because this is going to be our reputation as part of this too.”

After being reassured, Shane’s team took the job in July.

He and his crew left the state-owned property on Rosa L Parks Boulevard, where they had been working on the proposed 9-mile tunnel from the state capitol to the airport after months of safety and financial issues with Musk’s company.

It started about a month in with a change in pay.

“We were supposed to be paid every 15 days. And then they switched accounting firms, and then it went from 15 days to 60,” Shane said. Now it’s been 123 days since they started digging, and Shane says The Boring Company has only paid out about five percent of what he’s owed.

According to Shane, he has still been able to pay his employees on time, but the local trucking company is left holding the bag for money unpaid by The Boring Company. Other subcontractors, he says, have also severed ties due to nonpayment on the project.

The final straw that caused Shane to pull his crew from the site was when multiple employees reported that a representative of The Boring Company was soliciting them to bail on Shane and work directly for TBC on Monday.

“One of their head guys texts two of my welders, offering them a job for $45 an hour from his work phone,” Shane described, noting that the same TBC employee denied sending the texts when confronted with screenshots. “That’s actually a breach of contract.”

Shane also says he and other vendors have filed multiple OSHA safety complaints since working on the site but have gotten no response. His biggest concerns have been Boring employees on the jobsite not wearing proper personal protective equipment, such as hard hats, and unsafe shoring, which he says he’s repeatedly complained about to the Boring Company.

“Where we’re digging, we’re so far down, there should be concrete and different structures like that to hold the slope back from falling on you while you’re working,” Shane explained. “Where most people use concrete, they currently have — I’m not even kidding — they currently have wood. They had us install wood 2x12s.”

The safety concerns are why Shane says he decided to make the issue public.

“We’re not coming forward in like a vindictive way,” Shane said. “I just don’t want someone to get hurt, sure, and then, in the future, I have to be like, ‘Dang, I worked on there, and I turned a blind eye to it.’”

In the meantime, Shane said that the amount of backpay owed to his company is in the six figures and that he has retained a lawyer.

Boring Company response

After the Banner contacted The Boring Company about Shane’s claims, Vice President David Buss said he connected with Shane and would make good on the outstanding invoices by the end of the day Wednesday and would do a “full audit” on the error.

“It does look like we had some invoicing errors on that,” Buss told the Banner . “It was, you know, unfortunately, too common of a thing, but I assured them that we are going to make sure that invoices are wired tomorrow.”

Buss later clarified that he does not believe The Boring Company has a “common” practice of missing payments to vendors, but rather missed payments happen sometimes during “the normal course of business.”

“You hate to have an unhappy vendor. We certainly aim to have great relationships,” Buss said. “And so my goal will be to figure out what happened in this incident and then make sure that that’s not extrapolated to any other incidents.”

Buss also said he was looking into Shane’s claims about The Boring Company trying to hire contractors.

“It is definitely not our practice to try to poach anybody, so I understand the frustrations on their side,” Buss said. “Hopefully it’s something where we’re able to smooth that over and correct some of the things that happened on site and that led to this.”

Asked about the safety complaints, Buss said Shane did not raise any concerns on their call Tuesday and said he was unaware of any OSHA complaints, but would look into it.

“Safety is existential to our company,” Buss said. “We thankfully have a long history of seven years of tunneling in Las Vegas, and we’ve had one construction-related injury that was not the company’s fault in a violation.”

Hiring headaches

According to Buss, the projected timeline had not changed, and work had not been slowed by the crews’ departure from the site. Shane, however, painted a different picture.

“Actually, we were the crew that was building the tunnel boring machine. So there’s nobody building the tunnel boring machine right now, and the Boring Company has been trying to hire welders, but they haven’t been able to secure any help,” Shane said Tuesday, noting that many prospective employees won’t work on the project because of Musk’s reputation.

“A lot of people don’t like Elon and their payment terms; the way that they pay their employees, is not traditional,” Shane said.

Buss denied any hiring trouble.

“We’ve had zero issues finding great talent thus far in Nashville,” Buss said. “I think we’ve hired about 14 people now, and we’re going to start to grow the team as we begin mining operations.”

Instability and safety have been pervasive concerns around the project since its hurried public rollout this summer , in which little-to-no public input was received by the state before approving a lease of the state-owned property where digging is taking place.

As reports of a second Boring tunnel under Broadway and West End surfaced, Boring Company CEO Steve Davis hosted a two-hour live update session Monday evening on X, the social media website also owned by Musk, in which he touted progress on the Music City Loop and described the project as smoothly underway, with boring set to begin around January after the proper permits are secured.

An hour later, Shane’s team left the site.

During Davis’ virtual meeting, members of the public could submit questions, some of which were answered by Boring Company leadership. Many of those questions came from State Sen. Heidi Campbell (D-Nashville), who represents the area and has been a vocal critic of the project since it was announced.

“I would say the promotional session that they had last night on Twitter was disingenuous at best, if not dishonest, because it was, it sounded like a utopian project and then, lo and behold, the very next day, we find out that there are people leaving the site because they’re not getting paid and they’re not being treated well,” Campbell told the Banner.

In addition to her concerns about irreparable damage to the site and whether the project would even be completed, Campbell said she was concerned about the state’s liability if there were unsafe working conditions on the leased property and whether there was any way for lawmakers to stop the process.

“There is nothing to hold The Boring Company accountable for any of these things,” Campbell said of the lease. “They’ve already dug a big hole. But then on top of it, if they move forward, forward in any capacity, they have not proven that they are reliable to take care of the damage that they cause.”

When Shane first spoke to the Banner , he said he did not intend to return to the job even if they received payment, noting that his employees had expressed discomfort “because they didn’t feel the management there was very good.”

Hours later, after hearing from Buss, Shane said he would consider returning “if they correct the situation on their end.”

Demetria Kalodimos contributed to this report.

The most male and female reasons to end up in hospital

Hacker News
leobenedictus.substack.com
2025-11-26 20:01:41
Comments...
Original Article

The first post I wrote for this blog was about people being injured by dogs. Specifically, how much of this goes on, and what counts as a lot.

We can measure this reasonably well in England, because the health service publishes annual data for hospital admissions showing what people were admitted for.

This often includes not just the physical condition that needed treatment, but the event that led to that condition in the first place. So not just the tissue damage on someone’s hand, in other words, but the story of a dog bite behind it.

These second-order reasons for admission—known as “external causes”—cover a whole world of horrible mishaps beyond the ones that I looked at last time. The data also records whether the patient was male or female, so I wondered what the most male and most female external causes might be.

To cut to the chase, here they are.

When I began the crunching that produced these numbers, I’d given no thought at all to what I would find. If I had, it would have been obvious that pregnancy would top the charts on the female side.

But I don't think I could have imagined what a stark dossier of male and female stereotypes I was compiling. Because to me, the chart above basically says that violence, physical labour, sport and machines are the most typically male ways to end up in hospital, while pregnancy, beauty, animals and mental health are the most typically female.

I'm having to choose my words carefully, because I need to stress one thing: these are not the most common reasons for men and women to be admitted to hospital. They are the most typically male and typically female.

So only about 400 men in the whole of England go to hospital after falls from scaffolding each year. But that cause is at the top of the chart because it is the reason for admission that’s most male-dominated—just as the various pregnancy-related reasons are the most female. (I’ve put the total number of admissions in the column on the right, to give an actual sense of scale.)

In practice, I’d guess that these causes are the things that men or women do more often, or more dangerously.

Some minor points: I excluded all the external causes with less than 1,000 admissions in the last three years, so everything you see here happens at least fairly frequently, and amounts to a reasonable sample. I also excluded a small number of admissions (less than half a percent) that are classified “Gender Unknown”.
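For anyone who wants to reproduce this kind of ranking from the published admissions data, the calculation is essentially a male share per external cause, with a volume filter on top. Below is a minimal pandas sketch of that idea; the file name and columns (external_cause, sex, admissions) are placeholders for however the NHS extract is actually labelled, not the real field names.

import pandas as pd

# Placeholder file and column names; the real NHS extract will differ.
df = pd.read_csv("hospital_admissions_by_external_cause.csv")

# Drop the small "Gender Unknown" group, as in the post.
df = df[df["sex"].isin(["Male", "Female"])]

# Total admissions and male-only admissions per external cause.
totals = df.groupby("external_cause")["admissions"].sum()
male = df[df["sex"] == "Male"].groupby("external_cause")["admissions"].sum()
male_share = (male / totals).fillna(0)

# Keep causes with at least 1,000 admissions, then rank by male share.
frequent = totals[totals >= 1000].index
ranking = male_share.loc[frequent].sort_values(ascending=False)

print(ranking.head(10))  # most typically male causes
print(ranking.tail(10))  # most typically female causes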

Some of the external causes have very longwinded names, so I've made them as simple as possible. "Agents primarily acting on smooth and skeletal muscles and the respiratory system" is especially unenlightening, although I suspect it might have something to do with Botox.

In the next few days I plan to upload all the data in a searchable table (if I can make that work) so you can explore it in other ways too.

UPDATE: You can now find the data in this follow-up post.


NordVPN Black Friday Deal: Unlock 77% off VPN plans in 2025

Bleeping Computer
www.bleepingcomputer.com
2025-11-26 20:00:37
The NordVPN Black Friday Deal is now live, and you can get the best discount available: 77% off that applies automatically when you follow our link. If you've been waiting for the right moment to upgrade your online security, privacy, and streaming freedom, this is the one VPN deals this Black Frida...
Original Article

Want one of the best VPN discounts of 2025? This NordVPN Black Friday deal gives you the fastest VPN with strong digital security and US Netflix access – all at an unbeatable price.

NordVPN Black Friday Deal: Unlock up to 77% off VPN plans in 2025

The NordVPN Black Friday Deal is now live, and you can get the best discount available: 77% off that applies automatically when you follow our link. If you’ve been waiting for the right moment to upgrade your online security, privacy, and streaming freedom, this is the one VPN deal we can guarantee will have you smiling all year round.

There's no better time to buy a VPN than Black Friday or Cyber Monday. You get the same premium VPN that costs more at any other time of year, but at a fraction of the price. What's more, if you grab a 1-year, 2-year, or even a 3-year plan right now, your renewal will fall during Black Friday. That means you'll be able to hunt for another discount each time you need a VPN subscription.


So, why NordVPN? Besides having one of the best discounts, NordVPN ranks as the fastest VPN thanks to its NordLynx protocol (WireGuard fork). Fast VPN speeds make Nord perfect for Netflix access, HD streaming, gaming, and torrenting.

It enforces a strict no-logs policy, offers powerful Threat Protection Pro, and bundles valuable extras like NordPass, NordLocker, and NordProtect (Identity Theft Protection) for better everyday protection online.

NordVPN offers a more comprehensive privacy suite. Plus, with a 30-day money-back guarantee, you can try it risk-free while the discount lasts. If you want the biggest NordVPN savings of 2025, Black Friday is the perfect time to act.

NordVPN: The best Black Friday deal of 2025

The top promo this year is NordVPN's 2-year plan. It is available with a massive 77% discount plus three extra months free. Best of all? NordVPN's Black Friday pricing immediately surpasses VPN promotions advertised by competing providers.

In 2025, NordVPN confirmed that its Black Friday and Cyber Monday promotion runs from October 16 through December 10. That gives you nearly two months to grab the most impressive VPN deals of 2025.

Here’s what NordVPN had to say:

"Black Friday is a busy time — and not just for shoppers. Cybercriminals are also highly active during this period, so remember to take the necessary steps to protect yourself online. NordVPN protects your online traffic with encryption, making you safer on every network and device."

Get the discount – with no strings attached

When you follow the link in this article, the Black Friday deal will activate automatically – no codes or hoops to jump through.

The deal brings the total down to just $80.73 for 27 months of NordVPN Basic.

To put that into perspective, the regular subscription costs $12.99 per month, which means you’d normally pay $77.94 for just six months of VPN protection.

With this Black Friday deal, you’re getting well over two years of protection, unbeatable streaming access on vacation, and some of the best online security tools we have ever tested – for a fraction of the usual cost.

This is exactly why NordVPN’s Black Friday bundle is turning heads across the VPN industry. And why it’s easily the most competitive VPN offer we’ve managed to land on this season.

NordVPN plans

NordVPN's Black Friday deal means you'll get security, privacy, Netflix access, and WiFi privacy at the lowest cost.

NordVPN bundle deals

NordVPN didn't stop at its Basic plan this Black Friday. The leading privacy provider has also slashed prices across its premium bundles. This gives you access to the Ultimate Nord Security ecosystem at prices we’ve never seen outside the Black Friday window.

NordVPN Plus

The first standout option for bargain hunters seeking better all-around protection is the NordVPN Plus subscription.

This plan includes full VPN access, Threat Protection Pro (an always‑on security layer that blocks malware, phishing websites, intrusive ads, and trackers in real time, even when the VPN is disconnected), and Nord’s secure password manager.

This Black Friday, you can get this all bundled for just $3.89 per month: turning a standard VPN subscription into a full-blown online security suite, at a price point that beats most competitors' basic plans.

If you’re looking for an online protection suite with reliable filtering against trackers, ads, and malware, NordVPN delivers exactly that. It also includes a top-tier password manager that helps secure your accounts against hackers and phishing.

What’s more, NordVPN’s pricing is unusually generous for the amount of protection you get. It’s genuinely rare to find such a comprehensive security bundle at a cost that beats what most providers charge for the VPN alone.

NordVPN Ultimate

Hunting for the ultimate VPN deal of the year? NordVPN’s “Ultimate” plan is the centerpiece of its 2025 Black Friday event.

Normally valued at $626.13, the 27-month Ultimate plan is currently discounted to just $159. That works out to $5.89 per month, which is a massive 77% price cut.

Ultimate includes every service and feature that Nord Security offers. You get unlimited VPN use, the password manager, upgraded anti-malware and anti-tracking tools, 1TB of encrypted cloud storage, and even $5,000 in scam loss insurance through NordProtect. Just bear in mind that insurance is only available to US residents.

When you consider that Google charges $5 per month for just 1TB of cloud storage, Nord’s Black Friday pricing really comes out swinging! For only 89 cents more, you’ll get cloud storage plus a VPN, password manager, advanced threat filtering, and identity theft protection.

For anyone looking to build a full security stack at the lowest possible cost, these Black Friday bundles are among the strongest tech deals of the year.

Which VPN features does NordVPN come with?

No matter whether you choose NordVPN Basic, Plus, or Ultimate, you'll get full access to NordVPN’s complete VPN feature set. All core tools, including global server options, VPN protocol options, privacy settings, and security features, remain identical across all plans.

The higher-tier bundles simply add extra services such as password management, advanced threat filtering, encrypted cloud storage, and identity protection.

That means you can stick with NordVPN Basic if all you want is a powerful, fast, and fully featured VPN. The upgrades are optional add-ons and will not change how the VPN itself performs.

Full NordVPN feature list:

  • Strong encryption of all traffic (AES‑256 with modern VPN protocols like NordLynx/ WireGuard , OpenVPN, and IKEv2 for both security and speed).​
  • Protection against ISP or network surveillance by hiding all browsing activity inside an encrypted tunnel.​
  • IP address masking so websites and services see the VPN server’s IP instead of your real one, improving privacy and helping avoid IP‑based tracking.​
  • Location spoofing lets you choose from thousands of servers in 127+ countries, useful for bypassing geo‑restrictions and regional blackouts.​
  • Ad blocking at the server level to strip many ads before they reach your device (via Threat Protection/Threat Protection Pro).​
  • Tracking prevention by blocking common tracking domains and cookies so advertisers and analytics tools collect less data on you.​
  • Malicious site blocking that stops connections to known phishing, malware, and scam domains before they load.​
  • Malware download scanning (on supported desktop apps) that checks downloaded files.
  • MultiHop VPN routing (Double VPN) , sending your traffic through two VPN servers with two layers of encryption for extra anonymity in high‑risk situations.​
  • Tor over VPN sends your traffic first through the VPN and then into the Tor network for stronger identity protection on .onion sites.​
  • Automatic kill switch that cuts your internet connection if the VPN drops, preventing any data from leaking outside the encrypted tunnel.​
  • DNS leak protection by forcing all DNS lookups through NordVPN’s own DNS resolvers, so your ISP cannot see what domains you visit.​
  • Obfuscated servers (NordWhisper / obfuscation) to hide the fact that you are using a VPN. Useful to connect on restrictive networks and to use the VPN in high-censorship countries.​
  • P2P‑optimized servers for safer torrenting and other peer‑to‑peer traffic without sacrificing too much speed.​
  • Streaming‑optimized servers (SmartPlay) that automatically use working DNS/routes to access major streaming platforms when they try to block VPN IPs.​
  • Split tunneling (on supported apps) so you can choose which apps use the VPN and which go directly to the internet—for example, routing only your browser through the VPN while games or banking apps use a normal connection.​
  • Private DNS servers operated by NordVPN instead of your ISP’s DNS, reducing data exposure and some forms of DNS‑based censorship.​
  • High‑speed connections (including 10 Gbps locations and NordLynx) to minimize the performance hit usually associated with VPNs.​
  • Support for up to 10 simultaneous devices under one subscription, so you can cover multiple personal devices or family members at once.​
  • Optional dedicated IP addresses so you can get a consistent IP (useful for hosting, remote access, avoiding CAPTCHA, and accessing strict streaming accounts).​
  • Native apps for Windows, macOS, Linux, Android, iOS/iPadOS, Android TV, and many smart TVs, Amazon Fire TV/Firestick, Apple TV, and Apple Vision (via native/tvOS/visionOS support).
  • Browser extensions (proxy-based protection) for Chrome, Firefox, and Microsoft Edge.​

Why NordVPN is the standout Black Friday VPN deal of 2025

NordVPN is one of the most trusted VPN brands on the market, and its Black Friday and Cyber Monday deals make 2025 the perfect time to lock in long-term savings.

The service is headquartered in privacy-friendly Panama, a location that puts it well out of reach of data-hungry jurisdictions like the US, the UK, and the EU. Thanks to Panama's lack of mandatory data retention laws, NordVPN can maintain a strict no-logging policy. That means Nord has no records of your activities, even if the government comes knocking with a warrant.

Add to this its wide feature set and excellent third-party audit results, and you can see why NordVPN continues to stand out as one of the best value VPN options for netizens who care about strong privacy and watertight online security.

With the NordVPN Black Friday Deal, you will get access to all the premium features that helped NordVPN earn its reputation. This includes its NordLynx protocol (built on WireGuard to make NordVPN the fastest VPN ), advanced encryption, and reliable privacy settings for users in countries where surveillance and censorship are a part of daily life.

Fully optimized for streaming

When it comes to streaming, NordVPN is exceptional. During our tests, its international network successfully accessed multiple Netflix regions, Hulu, HBO Max, Disney+, Prime Video, YouTube TV, DirecTV, SlingTV, BBC iPlayer, Joyn, Canal+, Crave, ESPN+, FOX, ABC, NBC, and Peacock.

And its fast connection speeds make it perfect for HD streaming without buffering, as well as for gaming, torrenting, and making video calls.

Does NordVPN have apps for all platforms?

Yes, NordVPN gives you comprehensive coverage for every gadget you own.

NordVPN provides custom apps for all major platforms (including Windows, macOS, iOS, Android, Linux, and Amazon Firestick), making it a practical, versatile option for households with mixed devices.

Each subscription supports up to 10 simultaneous connections, allowing you to protect phones, tablets, laptops, smart TVs, and even school or work devices under one account.

With this year's Black Friday pricing, NordVPN has turned one of the most polished premium VPNs on the market into a cheap VPN we can confidently recommend.

These offers only run until December 10, and once they expire, pricing returns to normal. Grab it before it's too late.


S&box is now an open source game engine

Hacker News
sbox.game
2025-11-26 19:58:27
Comments...

Don't Download Apps

Hacker News
blog.calebjay.com
2025-11-26 19:51:52
Comments...

Liber Indigo: The Affordances of Magic

Lobsters
www.youtube.com
2025-11-26 19:44:20
Comments...

Popular Forge library gets fix for signature verification bypass flaw

Bleeping Computer
www.bleepingcomputer.com
2025-11-26 19:32:42
A vulnerability in the 'node-forge' package, a popular JavaScript cryptography library, could be exploited to bypass signature verifications by crafting data that appears valid. [...]...
Original Article

Popular Forge library gets fix for signature verification bypass flaw

A vulnerability in the ‘node-forge’ package, a popular JavaScript cryptography library, could be exploited to bypass signature verifications by crafting data that appears valid.

The flaw is tracked as CVE-2025-12816 and received a high severity rating. It arises from the library’s ASN.1 validation mechanism, which allows malformed data to pass checks even when it is cryptographically invalid.

“An interpretation-conflict vulnerability in node-forge versions 1.3.1 and earlier enables unauthenticated attackers to craft ASN.1 structures to desynchronize schema validations, yielding a semantic divergence that may bypass downstream cryptographic verifications and security decisions,” reads the flaw's description in the National Vulnerabilities Database (NVD).


Hunter Wodzenski of Palo Alto Networks discovered the flaw and reported it responsibly to the node-forge developers.

The researcher warned that applications relying on node-forge to enforce the structure and integrity of ASN.1-derived cryptographic protocols can be tricked into validating malformed data, and provided a proof-of-concept demonstrating how a forged payload could slip past the verification mechanism.

A security advisory from the Carnegie Mellon CERT-CC explains that the impact varies per application, and may include authentication bypass, signed data tampering, and misuse of certificate-related functions.

“In environments where cryptographic verification plays a central role in trust decisions, the potential impact can be significant,” CERT-CC warns .

The impact may be significant considering that node-forge is massively popular, with close to 26 million weekly downloads on the Node Package Manager (NPM) registry.

The library is used by projects that need cryptographic and public-key infrastructure (PKI) functionality in JavaScript environments.

A fix was released earlier today in version 1.3.2. Developers using node-forge are advised to upgrade to the latest version as soon as possible.

Flaws in widely used open-source projects can persist for a long time after their public disclosure and the availability of a patch. This can happen for various reasons, including the complexity of the affected environments and the need to test the new code before deploying it.


fail2ban RCE

Lobsters
www.cve.org
2025-11-26 19:13:07
Comments...

Alan.app – Add a Border to macOS Active Window

Hacker News
tyler.io
2025-11-26 19:12:40
Comments...
Original Article

Maybe it’s because my eyes are getting old or maybe it’s because the contrast between windows on macOS keeps getting worse. Either way, I built a tiny Mac app last night that draws a border around the active window. I named it “Alan”.

In Alan’s preferences, you can choose a preferred border width and colors for both light and dark mode.

That’s it. That’s the app.

You can download a notarized copy of Alan here.

Here’s a short demo video.

If you want to hide Alan’s icon from the Dock, you can set a hidden preference by running this Terminal command. Then, relaunch the app.

defaults write studio.retina.Alan hideDock -bool true

API that auto-routes to the cheapest AI provider (OpenAI/Anthropic/Gemini)

Hacker News
tokensaver.org
2025-11-26 19:12:26
Comments...
Original Article

Pay Less. Build More.

One API.
Three Providers.
90% Savings.

Automatically route your AI requests to the cheapest provider. OpenAI, Anthropic, or Google Gemini. Real-time pricing. Zero lock-in.

30 free requests. No card required.

$0.50 per 1K input tokens

Massive Cost Savings

Automatically routes to the cheapest provider. Save 90-99% compared to using premium models directly. Your budget goes further.


Always Available

Automatic fallback if one provider fails. Your app stays online even when individual AI services go down.


Zero Configuration

One simple API works with all providers. We handle the routing logic, SDK differences, and price monitoring.


Full Transparency

See exactly which provider was used, token counts, and costs for every request. No hidden fees or surprises.
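To make the "route to the cheapest provider, fall back if it fails" behaviour described above concrete, here is a rough Python sketch of the idea. It is purely illustrative: the per-token prices and the call_provider stub are made up for the example and are not TokenSaver's actual implementation.

# Illustrative only: prices and call_provider() are stand-ins, not TokenSaver internals.
PRICES_PER_1K_INPUT = {"openai": 0.15, "anthropic": 0.25, "gemini": 0.10}  # hypothetical numbers

def call_provider(name: str, messages: list[dict]) -> str:
    # Stand-in for the real OpenAI / Anthropic / Gemini SDK call.
    return f"[{name}] reply to: {messages[-1]['content']}"

def route_chat(messages: list[dict]) -> tuple[str, str]:
    # Try providers from cheapest to most expensive, falling back on failure.
    for name in sorted(PRICES_PER_1K_INPUT, key=PRICES_PER_1K_INPUT.get):
        try:
            return name, call_provider(name, messages)
        except Exception:
            continue  # provider down or erroring: fall back to the next cheapest
    raise RuntimeError("all providers failed")

provider, reply = route_chat([{"role": "user", "content": "Hello!"}])
print(provider, reply)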

OpenAI: GPT-4o, GPT-4o-mini

Anthropic: Claude 3.5 Sonnet, Haiku

Google: Gemini 2.0, Gemini 1.5

Input tokens: $0.50 per 1,000 tokens

Output tokens: $1.50 per 1,000 tokens

Billed per request via Stripe. View your usage anytime in the customer dashboard.

curl:

curl -X POST https://tokensaver.org/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "email": "your@email.com",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

JavaScript (fetch):

const response = await fetch('https://tokensaver.org/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    email: 'your@email.com',
    messages: [{ role: 'user', content: 'Hello!' }]
  })
});
const data = await response.json();
console.log(data.message);
console.log('Provider:', data.billing.provider);

Python:

import requests

response = requests.post(
    'https://tokensaver.org/api/chat',
    json={
        'email': 'your@email.com',
        'messages': [{'role': 'user', 'content': 'Hello!'}]
    }
)
data = response.json()
print(data['message'])
print('Provider:', data['billing']['provider'])

Node.js (fetch):

const response = await fetch('https://tokensaver.org/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    email: 'your@email.com',
    messages: [{ role: 'user', content: 'Hello!' }]
  })
});
const { message, billing } = await response.json();
console.log(message);
console.log('Provider:', billing.provider);

PHP:

$ch = curl_init('https://tokensaver.org/api/chat');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode([
    'email' => 'your@email.com',
    'messages' => [['role' => 'user', 'content' => 'Hello!']]
]));
$data = json_decode(curl_exec($ch), true);
echo $data['message'];

Go:

payload := map[string]interface{}{
    "email": "your@email.com",
    "messages": []map[string]string{
        {"role": "user", "content": "Hello!"},
    },
}
jsonData, _ := json.Marshal(payload)
resp, _ := http.Post(
    "https://tokensaver.org/api/chat",
    "application/json",
    bytes.NewBuffer(jsonData),
)

POST /api/chat Send messages, get AI responses

GET /api/pricing View provider pricing

GET /api/stats Real-time usage statistics

Payment Security

All payments processed by Stripe, a PCI-DSS Level 1 certified provider. We never see your card details.

Data Encryption

All data encrypted in transit (TLS 1.3) and at rest (AES-256). Hosted on enterprise infrastructure.

Message Privacy

Your API requests are processed and immediately forwarded. We never store or log conversation content.

Minimal Data

We only store your email and usage records. Nothing else. Your data stays yours.

Fara-7B by Microsoft: An agentic small language model designed for computer use

Hacker News
github.com
2025-11-26 19:10:24
Comments...
Original Article

Fara-7B: An Efficient Agentic Model for Computer Use

Fara-7B Performance

Microsoft Hugging Face Model Foundry Dataset


Overview

Fara-7B is Microsoft's first agentic small language model (SLM) designed specifically for computer use. With only 7 billion parameters, Fara-7B is an ultra-compact Computer Use Agent (CUA) that achieves state-of-the-art performance within its size class and is competitive with larger, more resource-intensive agentic systems.

Try Fara-7B locally as follows (see Installation for detailed instructions):

# 1. Clone repository
git clone https://github.com/microsoft/fara.git
cd fara

# 2. Setup environment
python3 -m venv .venv 
source .venv/bin/activate
pip install -e .
playwright install

Then in one process, host the model:

vllm serve "microsoft/Fara-7B" --port 5000 --dtype auto 

Then you can iterative query it with:

fara-cli --task "whats the weather in new york now"

Hint: you might need to add --tensor-parallel-size 2 to the vllm command if you run out of memory

What Makes Fara-7B Unique

Unlike traditional chat models that generate text-based responses, Fara-7B leverages computer interfaces—mouse and keyboard—to perform multi-step tasks on behalf of users. The model:

  • Operates visually by perceiving webpages and taking actions like scrolling, typing, and clicking on directly predicted coordinates
  • Uses the same modalities as humans to interact with computers—no accessibility trees or separate parsing models required
  • Enables on-device deployment due to its compact 7B parameter size, resulting in reduced latency and improved privacy as user data remains local
  • Completes tasks efficiently , averaging only ~16 steps per task compared to ~41 for comparable models

Fara-7B is trained using a novel synthetic data generation pipeline built on the Magentic-One multi-agent framework, with 145K trajectories covering diverse websites, task types, and difficulty levels. The model is based on Qwen2.5-VL-7B and trained with supervised fine-tuning.

Key Capabilities

Fara-7B can automate everyday web tasks including:

  • Searching for information and summarizing results
  • Filling out forms and managing accounts
  • Booking travel, movie tickets, and restaurant reservations
  • Shopping and comparing prices across retailers
  • Finding job postings and real estate listings

Performance Highlights

Fara-7B achieves state-of-the-art results across multiple web agent benchmarks, outperforming both comparable-sized models and larger systems:

Model Params WebVoyager Online-M2W DeepShop WebTailBench
SoM Agents
SoM Agent (GPT-4o-0513) - 90.6 57.7 49.1 60.4
SoM Agent (o3-mini) - 79.3 55.4 49.7 52.7
SoM Agent (GPT-4o) - 65.1 34.6 16.0 30.8
GLM-4.1V-9B-Thinking 9B 66.8 33.9 32.0 22.4
Computer Use Models
OpenAI computer-use-preview - 70.9 42.9 24.7 25.7
UI-TARS-1.5-7B 7B 66.4 31.3 11.6 19.5
Fara-7B 7B 73.5 34.1 26.2 38.4

Table: Online agent evaluation results showing success rates (%) across four web benchmarks. Results are averaged over 3 runs.

WebTailBench: A New Benchmark for Real-World Web Tasks

We are releasing WebTailBench , a new evaluation benchmark focusing on 11 real-world task types that are underrepresented or missing in existing benchmarks. The benchmark includes 609 tasks across diverse categories, with the first 8 segments testing single skills or objectives (usually on a single website), and the remaining 3 evaluating more difficult multi-step or cross-site tasks.

WebTailBench Detailed Results

Task Segment Tasks SoM GPT-4o-0513 SoM o3-mini SoM GPT-4o GLM-4.1V-9B OAI Comp-Use UI-TARS-1.5 Fara-7B
Single-Site Tasks
Shopping 56 62.5 71.4 38.1 31.0 42.3 41.1 52.4
Flights 51 60.1 39.2 11.1 10.5 17.6 10.5 37.9
Hotels 52 68.6 56.4 31.4 19.9 26.9 35.3 53.8
Restaurants 52 67.9 59.6 47.4 32.1 35.9 22.4 47.4
Activities 80 70.4 62.9 41.7 26.3 30.4 9.6 36.3
Ticketing 57 58.5 56.7 37.4 35.7 49.7 30.4 38.6
Real Estate 48 34.0 17.4 20.1 16.0 9.0 9.7 23.6
Jobs/Careers 50 49.3 44.0 32.7 22.7 20.7 20.7 28.0
Multi-Step Tasks
Shopping List (2 items) 51 66.0 62.7 17.0 7.8 34.0 20.9 49.0
Comparison Shopping 57 67.3 59.1 27.5 22.8 1.2 8.8 32.7
Compositional Tasks 55 51.5 39.4 26.7 17.0 10.3 9.1 23.0
Overall
Macro Average 609 59.7 51.7 30.1 22.0 25.3 19.9 38.4
Micro Average 609 60.4 52.7 30.8 22.4 25.7 19.5 38.4

Table: Breakdown of WebTailBench results across all 11 segments. Success rates (%) are averaged over 3 independent runs. Fara-7B achieves the highest performance among computer-use models across all task categories.

Coming Soon:

  • Task Verification pipeline for LLM-as-a-judge evaluation
  • Official human annotations of WebTailBench (in partnership with BrowserBase)

Evaluation Infrastructure

Our evaluation setup leverages:

  1. Playwright - A cross-browser automation framework that replicates browser environments
  2. Abstract Web Agent Interface - Allows integration of any model from any source into the evaluation environment
  3. Fara-Agent Class - Reference implementation for running the Fara model

Note: Fara-7B is an experimental release designed to invite hands-on exploration and feedback from the community. We recommend running it in a sandboxed environment, monitoring its execution, and avoiding sensitive data or high-risk domains.


Installation

Install the package using either uv or pip; the quickstart above uses pip:

pip install -e .

Then install Playwright browsers:

playwright install

Hosting the Model

Recommended: The easiest way to get started is using Azure Foundry hosting, which requires no GPU hardware or model downloads. Alternatively, you can self-host with VLLM if you have GPU resources available.

Azure Foundry Hosting (Recommended)

Deploy Fara-7B on Azure Foundry without needing to download weights or manage GPU infrastructure.

Setup:

  1. Deploy the Fara-7B model on Azure Foundry and obtain your endpoint URL and API key
  2. Add your endpoint details to the existing endpoint_configs/ directory (example configs are already provided):
# Edit one of the existing config files or create a new one
# endpoint_configs/fara-7b-hosting-ansrz.json (example format):
{
    "model": "Fara-7B",
    "base_url": "https://your-endpoint.inference.ml.azure.com/",
    "api_key": "YOUR_API_KEY_HERE"
}
  3. Run the Fara agent:
fara-cli --task "how many pages does wikipedia have" --start_page "https://www.bing.com"

That's it! No GPU or model downloads required.

Self-hosting with VLLM

If you have access to GPU resources, you can self-host Fara-7B using VLLM. This requires a GPU machine with sufficient VRAM.

All that is required is to run the following command to start the VLLM server:

vllm serve "microsoft/Fara-7B" --port 5000 --dtype auto 
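Once the server is up, it can be handy to sanity-check it before running the agent. vllm serve exposes an OpenAI-compatible API under /v1, so a minimal check with the openai Python client (assumed to be installed; the port matches the command above) could look like the sketch below. This is only a connectivity test: the actual agent loop is driven through fara-cli, not plain chat calls.

from openai import OpenAI

# vLLM's OpenAI-compatible endpoint; a local server does not need a real key.
client = OpenAI(base_url="http://localhost:5000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="microsoft/Fara-7B",
    messages=[{"role": "user", "content": "Reply with the single word: ready"}],
    max_tokens=16,
)
print(resp.choices[0].message.content)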

Testing the Fara Agent

Run the test script to see Fara in action:

fara-cli --task "how many pages does wikipedia have" --start_page "https://www.bing.com" --endpoint_config endpoint_configs/azure_foundry_config.json [--headful] [--downloads_folder "/path/to/downloads"] [--save_screenshots] [--max_rounds 100] [--browserbase]

In the self-hosting scenario, the endpoint config points to endpoint_configs/vllm_config.json, which targets the VLLM server started above.

If you set --browserbase , export environment variables for the API key and project ID.

Expected Output

Initializing Browser...
Browser Running... Starting Fara Agent...
##########################################
Task: how many pages does wikipedia have
##########################################
Running Fara...


Thought #1: To find the current number of Wikipedia pages, I'll search for the latest Wikipedia page count statistics.
Action #1: executing tool 'web_search' with arguments {"action": "web_search", "query": "Wikipedia total number of articles"}
Observation#1: I typed 'Wikipedia total number of articles' into the browser search bar.

Thought #2: Wikipedia currently has 7,095,446 articles.
Action #2: executing tool 'terminate' with arguments {"action": "terminate", "status": "success"}
Observation#2: Wikipedia currently has 7,095,446 articles.

Final Answer: Wikipedia currently has 7,095,446 articles.

Enter another task (or press Enter to exit): 

Reproducibility

We provide a framework in webeval/ to reproduce our results on WebVoyager and OnlineMind2Web. Agentic evaluations on live websites present unique challenges due to day-to-day changes. We implement several measures to ensure reliable and comparable evaluations:

BrowserBase Integration: We employ BrowserBase to manage browser session hosting, enabling reliable browser instance management.

Time-sensitive Task Updates: Tasks in benchmarks like WebVoyager can become stale or impossible. We:

  • Removed ~48 impossible tasks from the original WebVoyager benchmark
  • Updated ~50 tasks with future dates to keep them achievable
  • Example: "Search for a hotel in Bali from Jan 1 to Jan 4, 2024" → "Search for a hotel in Bali from Jan 1 to Jan 4, 2026"
  • Our updated WebVoyager benchmark is available at webeval/data/webvoyager/WebVoyager_data_08312025.jsonl

Environment Error Handling: Browser errors (connection drops, page timeouts) are handled robustly:

  • Trajectories are retried up to 5 times when environment errors occur
  • Complete yet incorrect trajectories are never retried
  • Each retry starts with a fresh browser session, with no retained state

Step Budget: Each trajectory is capped at a maximum of 100 actions across all online benchmarks. Trajectories exceeding this budget without choosing to stop are considered incorrect.
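
Putting the retry and budget rules together, the control flow is roughly the following sketch (names and types are illustrative, not the actual webeval implementation):

from dataclasses import dataclass

MAX_ENV_RETRIES = 5  # retries apply to environment errors only
STEP_BUDGET = 100    # maximum actions per trajectory on all online benchmarks

class BrowserError(Exception):
    """Environment error: connection drop, page timeout, crashed session, etc."""

@dataclass
class Trajectory:
    steps: int
    terminated: bool  # True if the agent called terminate() before exhausting the budget

def run_with_retries(run_once) -> Trajectory | None:
    """run_once(max_steps) starts a fresh browser session and returns a Trajectory,
    or raises BrowserError when the environment itself fails."""
    for _ in range(MAX_ENV_RETRIES):
        try:
            traj = run_once(max_steps=STEP_BUDGET)  # fresh session every attempt, no retained state
        except BrowserError:
            continue  # environment error: retry
        # A completed trajectory is never retried, even if its answer is wrong;
        # hitting the step budget without terminating is simply scored as incorrect.
        return traj
    return None  # aborted after repeated environment errors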

WebEval Package Installation

conda create --name fara_webeval python=3.12
conda activate fara_webeval

# Install fara package
pip install -e .

# Install autogen submodule
git submodule update --init --recursive
cd autogen/python/packages
pip install -e autogen-core
pip install -e autogen-ext

# Install webeval
cd webeval
pip install -e .

# Install playwright
playwright install

Running Evaluations

Navigate to the scripts directory:

Make sure you set a valid OpenAI GPT-4o endpoint in endpoint_configs_gpt4o/dev in order to run the WebVoyager LLM-as-a-judge!

Option 1: Self-hosted VLLM

python webvoyager.py --model_url /path/where/you/want/to/download/model/ --model_port 5000 --eval_oai_config ../endpoint_configs_gpt4o/dev/ --out_url /path/to/save/eval/files --device_id 0,1 --processes 1 --run_id 1 --max_rounds 100

Option 2: Azure Foundry Deployment

Deploy Fara-7B on Foundry endpoint(s) , then place endpoint URLs and keys in JSONs under endpoint_configs/ :

python webvoyager.py --model_endpoint ../../endpoint_configs/ --eval_oai_config ../endpoint_configs_gpt4o/dev/ --out_url /path/to/save/eval/files --processes 1 --run_id 1_endpoint --max_rounds 100

Notes

  • We use the same LLM-as-a-judge prompts and model (GPT-4o) as WebVoyager, hence the --eval_oai_config argument
  • Set --browserbase for browser session management (requires exported API key and project ID environment variables)
  • Avoid overloading a single VLLM deployment with more than ~10 concurrent processes due to known issues
  • See debugging output in fara/webeval/scripts/stdout.txt

Analyzing Evaluation Results

Evaluation Output Structure

Evaluation results are stored under --out_url in folders organized by:

  • Model name
  • Dataset
  • Username
  • Run ID

Example path:

/runs/WebSurfer-fara-100-max_n_images-3/fara-7b/<username>/WebVoyager_WebVoyager_data_08312025.jsonl/<run_id>

Each evaluation folder contains:

  • gpt_eval/ - LLM-as-a-judge evaluation results
  • traj/ - Per-task trajectory subdirectories containing:
    • final_answer.json (e.g., Amazon--1_final_answer.json) - <no_answer> indicates an aborted trajectory or an exceeded step budget
    • scores/gpt_eval.json - LLM judge scores
    • web_surfer.log - Action history and errors
    • screenshot_X.png - Screenshots captured before each action X

Running Analysis

Use the analysis notebook to compute metrics:

cd webeval/scripts/analyze_eval_results/
jupyter notebook analyze.ipynb

The script:

  • Identifies trajectories aborted mid-execution and diagnostic reasons
  • Computes average scores across non-aborted trajectories
  • Distinguishes between aborted trajectories (errors during sampling) and completed trajectories (with terminate() call or step budget exceeded)

To re-run failed tasks, execute the evaluation script again with the same run_id and username - it will skip non-aborted tasks.
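
As a rough sketch of the aggregation the notebook performs (the folder layout and the "score" field follow the structure above; the function itself is illustrative, not the notebook's code):

import json
from pathlib import Path

def average_judge_score(run_dir: str) -> float:
    """Mean LLM-judge score over completed trajectories in one evaluation run folder."""
    scores = []
    for task_dir in Path(run_dir, "traj").iterdir():
        eval_file = task_dir / "scores" / "gpt_eval.json"
        if not eval_file.exists():
            continue  # no judge score present; treated here as a trajectory aborted mid-execution
        with eval_file.open() as f:
            scores.append(json.load(f)["score"])
    return sum(scores) / len(scores) if scores else 0.0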

Example WebVoyager GPT Eval Result
{
  "score": 1.0,
  "gpt_response_text": "To evaluate the task, we need to verify if the criteria have been met:\n\n1. **Recipe Requirement**: A vegetarian lasagna recipe with zucchini and at least a four-star rating.\n\n2. **Search and Results**:\n   - The screenshots show that the search term used was \"vegetarian lasagna zucchini.\"\n   - Among the search results, \"Debbie's Vegetable Lasagna\" is prominently featured.\n   \n3. **Evaluation of the Recipe**:\n   - Rating: \"Debbie's Vegetable Lasagna\" has a rating of 4.7, which satisfies the requirement of being at least four stars.\n   - The presence of zucchini in the recipe is implied through the search conducted, though the screenshots do not explicitly show the ingredients list. However, the result response confirms the match to the criteria.\n\nGiven the information provided, the task seems to have fulfilled the requirement of finding a vegetarian lasagna recipe with zucchini and a four-star rating or higher. \n\n**Verdict: SUCCESS**"
}

Citation

If you use Fara in your research, please cite our work:


Releasing Packages with a Valet Key: npm, PyPI, and beyond

Lobsters
byk.im
2025-11-26 18:51:16
Comments...
Original Article

Disclaimer: This post should have been written about 5 years ago but I never got around to it; with the most recent Shai-Hulud attack , I thought it would be a good time to finally check this off the list and hopefully help others avoid supply-chain attacks.

About 5 years ago, I sat in on a meeting at Sentry in the midst of their SOC 2 compliance efforts. There was Armin, telling us that we needed a secret storage service for our package repository tokens: the tokens we used to deploy Sentry SDKs to package repositories such as npm, PyPI, etc. This was to ensure there were no unauthorized releases of our SDKs, which were embedded into all Sentry customers' products. There was a limited set of people who had access to these tokens at the time, and they became the bottleneck for more and more frequent releases. There was also the auditability issue at hand: releases were performed from individuals' workstations and there was no easy way to trace a release back to where it originated from or whether it was authorized or not.

For some reason I intuitively was against such a secret storage service and felt like the answer was somewhere in GitHub, GitHub Actions, and their secret storage service we already used. We already had the repo permissions, personnel structure, and all the visibility for auditing there. Heck, even the approval mechanics were there with pull requests. So I said “give me a week and I’ll get you a proof of concept” which Armin did and I delivered - though I think it took a bit more than a week 😅

Secrets in Plain Sight

Before we dive into the solution, let me paint a picture of the problem. Publishing packages to registries like npm, PyPI, or crates.io requires access tokens. These tokens are essentially the keys to the kingdom - whoever has them can publish anything under your organization’s name. At the time, these tokens were either distributed to select individuals, or lived in GitHub repository secrets, accessible to anyone with write access to the repository. 1

Now, here’s the scary part: at Sentry, we had 90-100+ engineers with commit rights to our SDK repositories. Any one of them could:

  1. Create a new workflow or modify an existing one
  2. Access these secrets within that workflow
  3. Exfiltrate them to any web service they controlled
  4. Do all of the above without triggering any alarms

And the truly terrifying bit? Even if someone did steal these tokens, there would be no indication whatsoever. No alerts, no logs, nothing. They could sit on these credentials and use them months later, long after they’ve left the company. We’ve seen this exact scenario play out recently with supply-chain attacks like the Shai-Hulud npm takeover where attackers compromised maintainer accounts to publish malicious versions of popular packages.

The Valet Key

Some fancy cars come with a “valet key” - a special key you give to parking attendants or car wash folks. Unlike your regular key, this one has limited capabilities: maybe it can only go up to 20mph, can’t open the trunk, or won’t let you disable the alarm. It’s the same car, but with reduced privileges for reduced risk of theft.

This concept maps beautifully to our problem. Instead of giving everyone the full keys (the publishing tokens), why not give them a way to request the car to be moved (a release be made)? The actual keys stay with a very small, trusted (and monitored) group who are the builders and maintainers of the infrastructure. Even the approvers don’t actually have access to the keys!

Here’s what we wanted:

  1. Secrets in a secure, limited-access location - only 3-4 release engineers should have access
  2. Clear approval process - every release needs explicit sign-off from authorized personnel
  3. Low friction for developers - anyone should be able to request a release easily
  4. Full audit trail - everything logged being compliance-friendly
  5. No new infrastructure - we didn’t want to build or maintain a separate secrets service

As a side note, trusted publishing through OIDC and OAuth with limited and very short-lived tokens is the actual digital equivalent of valet keys. npm is slowly rolling this out 2 , but at the time we built this system, it wasn’t an option. And even today, it’s not available at the organization/scope level which is what we’d need. Also, we publish way more places than npm so we need a more generic solution.

Another approach worth mentioning is Google’s Wombat Dressing Room - an npm registry proxy that funnels all publishes through a single bot account with 2FA enabled. It’s a clever solution if you’re npm-only and want something off-the-shelf. That said it still requires running a separate service. 3

Enter getsentry/publish

The solution we landed on is beautifully simple in hindsight: a separate repository dedicated entirely to publishing. Here’s the trick:

  • Write access is extremely limited - only 3-4 release engineers can actually modify the repo
  • Release managers get “triage” access - GitHub’s triage role lets you manage issues and labels, but not code - perfect for approving releases
  • Everyone else can create issues - that’s all you need to request a release
  • Approval happens via labels - a release manager adds the “accepted” label to trigger the actual publish

The beauty of this setup is that the publishing tokens live only in this repo’s secrets. The repo itself is mostly static - we rarely need to modify the actual code - so the attack surface is minimal.

The Implementation (with Craft)

Under the hood, we use Craft , our CLI tool for managing releases. Craft was designed with a crucial architectural decision that predates the publish repo: it separates releases into two distinct phases - prepare and publish .

The prepare phase is where all the “dangerous” work happens: npm install , build scripts, test runs, changelog generation. This phase runs in the SDK repository without any access to publishing tokens. The resulting artifacts are uploaded to GitHub as, well , build artifacts.

The publish phase simply downloads these pre-built artifacts and pushes them to the registries. No npm install , no build scripts, no arbitrary code execution - just download and upload. This dramatically reduces the attack surface during the privileged publishing step. Even if an attacker managed to inject malicious code into a dependency, it would only execute during the prepare phase which has no access to publishing credentials.

This two-phase architecture is what makes supply-chain attacks like Shai-Hulud much harder to pull off against Sentry’s SDKs. The malicious code would need to somehow persist through the artifact upload/download cycle and execute during a phase that deliberately runs no code.

The magic happens with our GitHub Actions setup:

  1. Developer triggers release workflow in their SDK repo (e.g., sentry-javascript )
  2. action-prepare-release runs craft prepare : creates the release branch, updates changelogs, builds artifacts, uploads them to GitHub
  3. An issue is automatically created in getsentry/publish with all the details: what changed, what’s being released, which targets
  4. Release manager reviews and approves by adding the “accepted” label
  5. Publishing workflow triggers craft publish : downloads artifacts from GitHub and pushes to npm, PyPI, crates.io, etc. - no build step, just upload

Fighting Overprotective Parents

GitHub, bless their security-conscious hearts, put up quite a few guardrails that we had to work around. Here’s where things got… creative:

The Token Trigger Problem : For the automation, we had to use the Sentry Release Bot , a GitHub App that generates short-lived tokens. This is crucial because GITHUB_TOKEN (default token GitHub Actions creates) has a security restriction: actions triggered by it don’t trigger other actions 4 . We needed workflows in getsentry/publish to trigger based on issues created from SDK repos, so we had to work around this.

The Admin Bot Account: We needed a bot that could commit directly to protected branches. GitHub's branch protection rules were all-or-nothing at the time - you couldn't say "this bot can commit, but only to update CHANGELOG.md". So our bot ended up with admin access on all repos. Not ideal, but necessary 5 .

Composite Actions and Working Directories : If you’ve ever tried to use GitHub’s composite actions with custom working directories, you know the pain. There’s no clean way to say “run this composite action from this subdirectory”. We ended up with various hacks involving explicit cd commands and careful path management.

Some More Creative Workarounds : We maintain a small collection of ugly-but-necessary workarounds in our action definitions. They’re not pretty, but they work. Sometimes pragmatism beats elegance 6 .

Happily Ever After

After all this work, what did we actually achieve?

  • Compliance-friendly ✓ - every release is logged, approved, and traceable
  • Centralized secrets - tokens live in one place, accessible to very few
  • Developer convenience - anyone can request a release with a few clicks
  • Enterprise security - no individual has publishing credentials on their machine
  • Full transparency - the entire publish repo is open, notifications enabled for stakeholders

We’ve made more than 6,000 releases through this system and happily counting upwards. Every single one is traceable: who requested it, who approved it, what changed, when it shipped.

Why This Matters Today

Recent supply-chain attacks like Shai-Hulud show exactly why this architecture matters. When attackers compromise a maintainer’s npm account, they can publish malicious versions of packages that millions of developers will automatically install. With our system:

  • No individual at Sentry has npm/PyPI/crates.io credentials on their machine
  • Every release requires explicit approval from a release manager
  • The approval happens in a public repo with full audit trail
  • Any suspicious activity would be immediately visible

Is it perfect? No. Could a determined attacker with inside access still cause damage? Probably. But we’ve dramatically reduced the attack surface and made any compromise immediately visible and auditable.

Closing Thoughts

Looking back, this is one of my proudest achievements at Sentry. It’s not flashy - no one’s going to write a blog post titled “Revolutionary New Way to Click a Label” - but it’s the kind of infrastructure that quietly makes everything more secure and more convenient at the same time. 7

If you’re dealing with similar challenges, I encourage you to check out getsentry/publish and the Craft . The concepts are transferable even if you don’t use our exact implementation.

And hey, it only took me 5 years to write about it. Better late than never, right? 😅

Thanks

I’d like to thank the following people:

  • Armin and Daniel for their trust and support in building this system.
  • Kamil for Craft as I knew it.
  • Jeffery for reviewing this post thoroughly and being my partner in crime for many things security at Sentry.
  • Michael for giving me the push I needed to write this post, coming up with the awesome post image idea, and for his support and guidance on the post itself.
  1. This was before GitHub introduced “environment secrets” which allow more granular access control. Even with those, the problem isn’t fully solved for our use case.

  2. npm has OIDC support for individual packages, but not yet at the organization or scope level. See npm’s trusted publishers documentation .

  3. If only someone could make this run directly in GitHub Actions…

  4. This is actually a smart security feature - imagine a workflow that creates a commit that triggers itself. Infinite loop, infinite bills, infinite sadness.

  5. This has since been fixed with special bypass rules via rulesets, and we also no longer have admin access for the bots, phew.

  6. If you peek at the repo, you’ll see what I mean. I’m not proud of all of it, but I’m proud it works.

  7. Especially while “security means more friction” is still a thing.

China Has Three Reusable Rockets Ready for Their Debut Flights

Hacker News
www.china-in-space.com
2025-11-26 18:45:43
Comments...
Original Article

Three of China’s space enterprises are near the debut flights of their partially reusable rockets, expected to liftoff before the end of the year.

Around November 25th, the Shanghai Academy of Spaceflight Technology's Long March 12A partially reusable launch vehicle 1 was spotted heading for its launch pad at the Jiuquan Satellite Launch Center, for its first public appearance as a full vehicle. The liquid methane and liquid oxygen burning rocket has two 3.8-meter-wide stages, with the first equipped with seven Longyun engines from Jiuzhou Yunjian (九州云箭) and the second with a single vacuum-optimized YF-209, to carry up to 12,000 kilograms. First-stage reuse will be achieved by an engine performing a landing burn to touchdown on four legs, with grid fins guiding it before that.

Details on development for the Long March 12A have been hard to come by, as few have been released. In January, a largely successful high-altitude hop test occurred, succumbing to software glitches during splashdown. Around August, a second-stage static fire was completed in Haiyang (海阳市). Lastly, in November, the rocket's transporter-erector was delivered. What has been trackable is Jiuzhou Yunjian's efforts on verifying its engines for reusable operation.

Due to the opaque nature of the Long March 12A’s development, it is unknown if the launch vehicle at Jiuquan will wrap up the overall development campaign, possibly with a static fire, before a debut flight later in December.

The Shanghai Academy of Spaceflight Technology’s Long March 12A launch vehicle atop of its transporter-erector at the Jiuquan Satellite Launch Center in November 2025.
The Shanghai Academy of Spaceflight Technology’s Long March 12A launch vehicle atop of its transporter-erector at the Jiuquan Satellite Launch Center in November 2025.

Meanwhile, LandSpace’s 66-meter-tall, 4.5-meter-wide Zhuque-3 is on its Jiuquan launch pad too, following delivery in October . Like the Long March 12A, the rocket burns liquid methane and liquid oxygen, but has two more engines, LandSpace’s TQ-12A, on its first-stage and one vacuum-optimized TQ-15A engine on the second-stage, to deliver up to 11,800 kilograms in its ‘ block one ’ configuration. Similar to the Shanghai Academy’s rocket, Zhuque-3’s first-stage will touchdown on four landing legs following an engine burn, with four grid fins guiding it through the atmosphere.

Zhuque-3 has had a highly successful test campaign during its just over two-year-long development process. In September 2024, the launch vehicle’s in-atmosphere hop-testing campaign was completed with a 10-kilometer flight that saw an engine relight for touchdown. That was followed by a 45-second static fire in June , later matched by flight hardware performing a similar static fire with a second-stage on top. Hardware has also been flown with the company’s Zhuque-2 and Zhuque-2E launch vehicles as well.

LandSpace’s Zhuque-3 Y1 vehicle at Launch Complex 96B at the Jiuquan Satellite Launch Center in October 2025.
LandSpace’s Zhuque-3 Y1 vehicle at Launch Complex 96B at the Jiuquan Satellite Launch Center in October 2025.

Along with the two methane-fueled rockets, Space Pioneer's Tianlong-3 is also at Jiuquan, having arrived sometime in November. The two-stage, 72-meter-tall, 3.8-meter-wide launch vehicle burns rocket-grade kerosene and liquid oxygen to carry up to 17,000 kilograms to low Earth orbit, with nine TH-12 engines on the first stage and a single vacuum-optimized one on the second stage. Tianlong-3's first stage is planned to land on four landing legs, guided by four grid fins, with an engine burn providing the soft touchdown needed.

In the lead-up to launch, Tianlong-3 conducted its first wholly successful static fire in September and skipped a second-stage firing, having confidence in the singular engine powering it following its development campaign. At the moment, the launch vehicle is on its dedicated launchpad at the launch site for integrated testing with ground systems. Notably, no reuse hardware has been installed yet, and mounting points appear to be missing.

Space Pioneer’s Tianlong-3 Y1 vehicle on its launch pad at the Jiuquan Satellite Launch Center in November 2025.
Space Pioneer’s Tianlong-3 Y1 vehicle on its launch pad at the Jiuquan Satellite Launch Center in November 2025.

Out of the Long March 12A, Zhuque-3, and Tianlong-3, LandSpace may fly China’s first reusable rocket. Despite a current lack of hazard notices, news outlets are saying November 29th is the first targeted date. LandSpace has vaguely denied that date , asking enthusiasts to do diligent research. As for the other two rockets, Space Pioneer and the Shanghai Academy of Spaceflight Technology are yet to share relevant information 2 .

First-stage booster landing sites have been completed for both Zhuque-3 and the Long March 12A in previous months. Those sites are expected to have systems for safing the boosters following touchdown as well as fire suppression systems in the event of an anomaly. LandSpace and the Shanghai Academy are eyeing first-stage landings during the debut flights. Whichever lands first will be the third globally and the first outside of the United States, following SpaceX’s Falcon 9 in 2015 and Blue Origin’s New Glenn on November 13th 2025 .

No major Jiuquan-side holdups are expected to slow the debut flights of the three rockets. During the past month, the China Manned Space Agency had priority use of the site for the launch of the Shenzhou-21 mission , return of the Shenzhou-20 crew , and ‘emergency response’ launch of the Shenzhou-22 spacecraft.

When the three rockets do debut, they will be a boon to the deployment efforts of China’s various mega-constellations , as reuse will allow for cheaper and more frequent launch missions. Back in August, Shanghai Spacesail Technologies, the operator of the Qianfan (千帆) constellation, awarded contracts to LandSpace and Space Pioneer to prove they can launch satellite batches with their partially reusable rockets, with Tianlong-3 looking to deliver larger satellite groups.

Thanks for reading China in Space! This post is public so feel free to share it.


Typeform is Too Expensive – Try Fabform, the Typeform Alternative

Lobsters
fabform.io
2025-11-26 18:44:12
Comments...
Original Article

Effortless to create. Enjoyable to answer. Designed for real insights.

Build Smarter Forms. Capture Honest Answers.

Fabform makes it easy to create flexible, powerful forms that invite real responses — so you can connect, understand, and grow with confidence.

  • Conversational Forms

    Guide your respondents through one question at a time, creating a natural, friendly flow that feels like a conversation. This approach increases completion rates and captures thoughtful, honest answers.

  • No-Code Logic

    Build complex branching logic and customized form paths easily — no coding required. Set up conditional questions and personalized flows that make your forms smarter, saving you time and ensuring your data is relevant.

Everything your form should be — smart, conversational, and effortless

Build interactive forms that guide people one step at a time — no code, no stress.

  • Smart Branching

    Show or hide questions based on what users say. Branching logic makes your forms feel natural, not robotic.

  • Conversational UI

    Ask one question at a time, just like a real conversation. Your form becomes a friendly guide.

  • Design Without Code

    Drop in questions, style layouts, and brand everything — all with a visual editor.

  • Seamless Integrations

    Connect your forms effortlessly with apps and tools you already use — automations start the moment a form is submitted.

Build Smarter Forms — Without Limits

Drag, drop, and deploy forms in minutes. No code, no caps, no nonsense. Perfect for teams, startups, and creators who just want it to work.

Collect unlimited responses for free. Customize every pixel. Integrate with over 6,000 tools like Google Sheets, Slack, and Zapier.

From lead gen to surveys, our builder adapts to whatever you're building. It's everything you need — nothing you don’t.

Build Smarter Forms Faster

Fabform makes building smart, responsive, and beautiful forms easier than ever. Whether you’re collecting data, payments, or signatures, Fabform’s rich feature set helps you get the job done — quickly and effortlessly.

  • Unlimited Forms & Responses
    Build and manage unlimited forms without worrying about restrictions. Collect as many responses as you need to fuel your business insights and operations.
  • Intuitive Drag-and-Drop Builder
    Design beautiful, customized forms effortlessly using our visual drag-and-drop interface. No coding skills required—just drag, drop, and create.
  • Fully Responsive Design
    Fabform’s forms automatically adjust to look perfect on any device, from smartphones and tablets to desktops, ensuring an excellent user experience everywhere.
  • Advanced Conditional Logic
    Create dynamic forms that adapt in real time. Show or hide questions, sections, or entire pages based on previous answers for a personalized experience.
  • 500+ Professionally Crafted Templates
    Get a head start with a huge library of ready-made templates designed for surveys, quizzes, registrations, feedback forms, and more.
  • File Upload Support
    Easily collect documents, images, or any other files directly through your forms. Users can upload files up to 10MB, with options for higher limits on premium plans.
  • Easy Embedding & Sharing
    Embed Fabform forms seamlessly on your website or share them via direct links on social media, emails, or messaging platforms with zero hassle.
  • Real-Time Email Notifications
    Stay instantly informed about new submissions with customizable email alerts, so you never miss important data or leads.
  • Powerful Integrations
    Connect Fabform effortlessly with popular tools like Google Sheets, Slack, Zapier, and Calendly to automate tasks and streamline your workflow.
  • Webhooks for Instant Data Delivery
    Push form submissions in real time directly to your own servers or apps with secure webhooks, enabling seamless integrations and automation.
  • Digital Signature Collection
    Collect legally binding e-signatures right inside your forms — perfect for contracts, consent forms, and agreements.
  • Integrated Payment Processing
    Accept payments securely and effortlessly via Stripe without forcing users to leave the form, simplifying order and donation workflows.
  • Custom Branding & Domains
    Make Fabform yours by adding logos, customizing fonts and colors, and hosting forms on your own domain for a fully branded experience.
  • Save & Resume Partial Submissions
    Allow users to save their progress and return later to complete forms — improving completion rates for longer surveys or applications.
  • Collaborative Team Workspaces
    Work together with your team in shared spaces to build, manage, and analyze forms efficiently.
  • Multilingual Forms
    Reach and engage a global audience by creating forms in multiple languages with ease.
  • Custom Redirects & Thank You Pages
    Personalize the post-submission experience by redirecting users to custom pages or displaying tailored thank-you messages.
  • Google Tag Manager Integration
    Track form performance, conversions, and user behavior easily with full Google Tag Manager support.

Powerful dashboard to manage your forms and get the insight you need.

Easily navigate all of FabForm's features through its powerful yet easy to use dashboard.


Rocking reviews from our customers

Here is what our loyal customers have to say.

  1. I'm amazed by the quality of the features offered on the FabForm platform. I evaluated other Form Builders on the market and FabForm comes out on top. The UI design feels better. It's feature rich, terrific pricing and it works a charm.

    Roberta Johnson

    I.T. Recruiter

    "I'm amazed by the quality of the features offered on the FabForm platform. I evaluated other Form Builders on the market and FabForm comes out on top. The UI design feels better. It's feature rich, terrific pricing and it works a charm. "

  2. As a full-time Digital Marketing professional, I needed a flexible and easy way to create and monitor various marketing forms for some picky clients. FabForm has exceeded my expectations. It is -- in my humble opinion -- the best Form Builder out there bar none.

    Emilio Pirelli

    Digital Marketing

    "As a full-time Digital Marketing professional, I needed a flexible and easy way to create and monitor various marketing forms for some picky clients. FabForm has exceeded my expectations. It is -- in my humble opinion -- the best Form Builder out there barnone."

  3. FabForm is my absolute favorite form builder. I can throw together a beautiful form in minutes.  It's reliable and has all the features and ease of use that one needs -- and then some.

    "FabForm is my absolute favorite form builder. I can throw together a beautiful form in minutes. It's reliable and has all the features and ease of use that one needs -- and then some."

Hell Gate’s 2025 Guide to Navigating Difficult Thanksgiving Conversations

hellgate
hellgatenyc.com
2025-11-26 18:38:26
Happy Zanksgiving!...
Original Article

Thanksgiving is a time to contemplate what we are grateful for in our lives, to prepare an elaborate dinner, and, for many of us, to enjoy the company of family. But as we all know, family members don't always see things the same way, and sometimes conversation around the Thanksgiving dinner table can get a little rocky.

Navigating these difficult family encounters can be stressful, frustrating, or downright traumatic. It's important to remember that you can't control how your family behaves, but you can control how you respond. With that in mind, Hell Gate would like to offer some examples of how to productively engage with loved ones over the Thanksgiving holiday.

Scenario 1

Your uncle Danny, who owns a car dealership in Long Island, interrupts a story about your co-op shift to say that he'd never live in New York City, because the crime is out of control, the subways are murder traps, and "illegals" are taking over.

Solution: You could blow your top, call him a racist MAGA blowhard who doesn't know what he's talking about, storm away from the table, and poison the whole family gathering. Or, you could try saying something like this:

"Well Danny, when I talk to everyday New Yorkers—no matter where they're from originally—they all share many of the same concerns. They all want safety, and they all want justice. But what they want most of all is a city that they can afford to live in. I bet affordability is even an issue for you guys up in Syosset, right?"

Boom! You can bet Danny has some gripes about the cost of living he's ready to share—now you're back on the same page and the conversation is back on track.

Scenario 2

Every Thanksgiving, you make cheddar mashed potatoes as a side dish. And every year, they're a hit. But this year, your father-in-law is insisting that you make the gross sweet potato pecan casserole instead— the one with marshmallows in it . No one actually wants to eat this, especially because the turkey he prepares is dry and flavorless, but Frank is adamant: "My house, my side dishes."

You could stand on your principles and stick with the cheddar potatoes, but that might involve an uncomfortable scene. Everyone is looking to you to save a Thanksgiving tradition.

Solution: Make the disgusting marshmallow abomination. After all, your father-in-law has done a decent job raising such a wonderful family, and this is arguably a small price to pay to stay in his good graces.

To smooth it over with the rest of the family, tell them, "We need not choose between justice and safety, and with this decision, we are affirming our commitment to both. And while Frank and I don't agree on everything, we both share the same passion for furthering an agenda of a successful Thanksgiving, one that delivers on a promise of fellowship, football, and a pumpkin pie at the end."

If one of your cousins points out that this sweet potato side dish is basically a dessert, and one that is far too similar to pumpkin pie, sidestep his question with something like, "No, what I believe, is that both cheddar potatoes and sweet potato casseroles should be taken seriously, but that at this juncture, the imperative to execute on the casserole should take precedence so that our larger goals, those that involve a restful and meaningful holiday, ultimately prevail."

Scenario 3

Your aunt Glenda starts to say the blessing, but is rudely cut off by your cousin Sarah, who promised one of her kids, Dylan, who is 3, that he could say the blessing this year. Awkward silence at the table ensues, as the pro-Glenda and pro-Dylan factions glower at each other. How do you stop this from going off the rails?

Solution: Remind your family about why they are here: to appreciate the fact that everyone at this table agrees that the cost of living is simply too high, but that if we all work together, we can take actions to reverse this trend.

Say something like, "I'm talking about groceries for Glenda, and I'm talking about day care for Dylan. I'm talking about utility bills too—Frank, you were saying earlier that your PSEG bill was insane last month?" (at this point nod at your father-in-law, who will appreciate this, even if he didn't specifically talk to you about his utility bill, there is a good chance he complained about it to someone else).

Definitely add something like, "Let us all appreciate this time we have together. As Eugene Debs once said , 'Upon every hand we see the signs of preparation,' and I definitely see the hands here at this table ready to dig into this delicious turkey and stuffing and sweet potato casserole! Right, Frank? Now let's raise our glasses to a new dawn, one where we usher in a new era of leadership, one that speaks for all of us—young and old—without condescension, and without compromising our basic commitment to ensure a decent standard of living. Years from now, may our only regret be that this day took so long to come. Cheers!"

Scenario 4

After the big meal, uncle Phil wants to hide the wishbone for the little kids, and whoever finds it, gets to break it. Everyone else is tired, or engaged in cleaning, no one is jumping at the chance to play a game with a group of hyperactive children. Many of the adults are rolling their eyes, but Uncle Phil, who is kindhearted and genuine, approaches you for backup. "Will you help me play this game? Maybe start a new tradition?" he asks, poking you in the ribs and handing you another way-too-strong Nitro Double IPA that he brought, and has been sitting in the trunk of his Impala since last Thanksgiving.

Solution: Put your hand on Phil's shoulder and say that you will help him. Look around for something to stand on—maybe an ottoman, or a step stool—and climb up it before announcing, "Far too often, the traditions of the past have been forgotten by the politics of the present. Tonight let us speak in a clear voice: Hope is alive. Hope for a time when we can play some low-effort games to entertain the kids after a satisfying meal. Hope that those same kids can afford to grow up and raise their own families in the city that they call home. And we will build a Thanksgiving tradition defined by competence and a compassion that have for too long been placed at odds with one another."

At this point, everyone in the room should be applauding, so add this: "Together, we will usher in a generation of change. And if we embrace this brave new course, rather than fleeing from it, we can respond to cynicism and solipsism with the strength it fears, not the appeasement it craves."

Scenario 5

Your partner corners you outside of the bathroom and whispers, "Why are you talking in platitudes and acting like a fucking stooge? Can't you be normal for two fucking hours? Jesus fucking Christ."

Solution: Walk up to the crowd of uncles gathered around the TV watching American football and say, "Hey did any of you catch that Arsenal game yesterday ? It was amazing! I think we have Paramount+ on this puppy , let's watch some highlights at halftime!"

ULID - the ONLY identifier you should use?

Lobsters
www.youtube.com
2025-11-26 18:34:41
Comments...

Comcast to pay $1.5M fine for vendor breach affecting 270K customers

Bleeping Computer
www.bleepingcomputer.com
2025-11-26 18:30:10
Comcast will pay a $1.5 million fine to settle a Federal Communications Commission investigation into a February 2024 vendor data breach that exposed the personal information of nearly 275,000 customers. [...]...
Original Article

Comcast

Comcast will pay a $1.5 million fine to settle a Federal Communications Commission investigation into a February 2024 vendor data breach that exposed the personal information of nearly 275,000 customers.

The breach occurred in February 2024 , when attackers hacked into the systems of Financial Business and Consumer Solutions (FBCS), a debt collector Comcast had stopped using two years earlier.

The FBCS data breach was initially believed to have affected 1.9 million people in total, but the tally was raised to 3.2 million in June and, finally, to 4.2 million in July.


FBCS, which filed for bankruptcy before revealing a data breach in August 2024, notified Comcast on July 15 (five months after the attack) that customer data had been compromised, affecting 273,703 Comcast customers . Previously, it had assured Comcast in March that the breach did not affect any of its customers.

The threat actors stole personal and financial information between February 14 and February 26, including the names, addresses, Social Security numbers, dates of birth, and Comcast account numbers of affected current and former customers. Affected customers had used Comcast's Xfinity-branded internet, television, streaming, VoIP, and home security services.

Under the consent decree announced by the FCC on Monday , Comcast has also agreed to implement a compliance plan that includes enhanced vendor oversight to protect data and ensure customer privacy, ensuring its vendors properly dispose of customer information they no longer need for business purposes, as required by the Cable Communications Policy Act of 1984.

The telecommunications giant must also appoint a compliance officer, conduct risk assessments of vendors handling customer data every two years, file compliance reports with the FCC every six months over the next three years, and report any material violations within 30 days of discovery.

However, Comcast said in a statement to Reuters that it "was not responsible for and has not conceded any wrongdoing in connection with this incident," noting that its network wasn't breached and that FBCS was contractually required to comply with security requirements.

A Comcast spokesperson was not immediately available for comment when contacted by BleepingComputer.

Comcast is an American mass media, telecommunications, and entertainment multinational company, and the fourth-largest telecom firm in the world by revenue, after AT&T, Verizon, and China Mobile.

It also has over 182,000 employees, hundreds of millions of customers worldwide, and reported revenues of $123.7 billion in 2024.


✋ Get A Warrant | EFFector 37.17

Electronic Frontier Foundation
www.eff.org
2025-11-26 18:16:27
Even with the holidays coming up, the digital rights news doesn't stop. Thankfully, EFF is here to keep you up-to-date with our EFFector newsletter! In our latest issue, we’re explaining why politicians latest attempts to ban VPNs is a terrible idea; asking supporters to file public comments opposin...
Original Article

Even with the holidays coming up, the digital rights news doesn't stop. Thankfully, EFF is here to keep you up-to-date with our EFFector newsletter!

In our latest issue , we’re explaining why politicians latest attempts to ban VPNs is a terrible idea; asking supporters to file public comments opposing new rules that would make bad patents untouchable ; and sharing a privacy victory—Sacramento is forced to end its dragnet surveillance program of power meter data.

Prefer to listen in? Check out our audio companion, where EFF Surveillance Litigation Director Andrew Crocker explains our new lawsuit challenging the warrantless mass surveillance of drivers in San Jose . Catch the conversation on YouTube or the Internet Archive .

LISTEN TO EFFECTOR

EFFECTOR 37.17 - ✋ GET A WARRANT

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Gemini CLI Tips and Tricks for Agentic Coding

Hacker News
github.com
2025-11-26 18:08:02
Comments...
Original Article

Gemini CLI Tips & Tricks

This guide covers ~30 pro-tips for effectively using Gemini CLI for agentic coding

Gemini CLI is an open-source AI assistant that brings the power of Google's Gemini model directly into your terminal. It functions as a conversational, "agentic" command-line tool - meaning it can reason about your requests, choose tools (like running shell commands or editing files), and execute multi-step plans to help with your development workflow.

In practical terms, Gemini CLI acts like a supercharged pair programmer and command-line assistant. It excels at coding tasks, debugging, content generation, and even system automation, all through natural language prompts. Before diving into pro tips, let's quickly recap how to set up Gemini CLI and get it running.

Table of Contents

Getting Started

Installation: You can install Gemini CLI via npm. For a global install, use:

npm install -g @google/gemini-cli

Or run it without installing using npx :

Gemini CLI is available on all major platforms (it's built with Node.js/TypeScript). Once installed, simply run the gemini command in your terminal to launch the interactive CLI .

Authentication: On first use, you'll need to authenticate with the Gemini service. You have two options: (1) Google Account Login (free tier) - this lets you use Gemini 2.5 Pro for free with generous usage limits (about 60 requests/minute and 1,000 requests per day). On launch, Gemini CLI will prompt you to sign in with a Google account (no billing required). (2) API Key (paid or higher-tier access) - you can get an API key from Google AI Studio and set the environment variable GEMINI_API_KEY to use it.

API key usage can offer higher quotas and enterprise data‑use protections; prompts aren't used for training on paid/billed usage, though logs may be retained for safety .

For example, add to your shell profile:

export GEMINI_API_KEY="YOUR_KEY_HERE"

Basic Usage: To start an interactive session, just run gemini with no arguments. You'll get a gemini> prompt where you can type requests or commands. For instance:

$ gemini
gemini> Create a React recipe management app using SQLite

You can then watch as Gemini CLI creates files, installs dependencies, runs tests, etc., to fulfill your request. If you prefer a one-shot invocation (non-interactive), use the -p flag with a prompt, for example:

gemini -p "Summarize the main points of the attached file. @./report.txt"

This will output a single response and exit . You can also pipe input into Gemini CLI: for example, echo "Count to 10" | gemini will feed the prompt via stdin .

CLI Interface: Gemini CLI provides a rich REPL-like interface. It supports slash commands (special commands prefixed with / for controlling the session, tools, and settings) and bang commands (prefixed with ! to execute shell commands directly). We'll cover many of these in the pro tips below. By default, Gemini CLI operates in a safe mode where any action that modifies your system (writing files, running shell commands, etc.) will ask for confirmation. When a tool action is proposed, you'll see a diff or command and be prompted ( Y/n ) to approve or reject it. This ensures the AI doesn't make unwanted changes without your consent.

With the basics out of the way, let's explore a series of pro tips and hidden features to help you get the most out of Gemini CLI. Each tip is presented with a simple example first, followed by deeper details and nuances. These tips incorporate advice and insights from the tool's creators (e.g. Taylor Mullen) and the Google Developer Relations team, as well as the broader community, to serve as a canonical guide for power users of Gemini CLI.

Tip 1: Use GEMINI.md for Persistent Context

Quick use-case: Stop repeating yourself in prompts. Provide project-specific context or instructions by creating a GEMINI.md file, so the AI always has important background knowledge without being told every time .

When working on a project, you often have certain overarching details - e.g. coding style guidelines, project architecture, or important facts - that you want the AI to keep in mind. Gemini CLI allows you to encode these in one or more GEMINI.md files. Simply create a .gemini folder (if not already present) in your project, and add a Markdown file named GEMINI.md with whatever notes or instructions you want the AI to persist. For example:

# Project Phoenix - AI Assistant

- All Python code must follow PEP 8 style.  
- Use 4 spaces for indentation.  
- The user is building a data pipeline; prefer functional programming paradigms.

Place this file in your project root (or in subdirectories for more granular context). Now, whenever you run gemini in that project, it will automatically load these instructions into context . This means the model will always be primed with them, avoiding the need to prepend the same guidance to every prompt.

How it works: Gemini CLI uses a hierarchical context loading system. It will combine global context (from ~/.gemini/GEMINI.md, which you can use for cross-project defaults) with your project-specific GEMINI.md, and even context files in subfolders. More specific files override more general ones. You can inspect what context was loaded at any time by using the command:

/memory show

This will display the full combined context the AI sees. If you make changes to your GEMINI.md, use /memory refresh to reload the context without restarting the session.

Pro Tip: Use the /init slash command to quickly generate a starter GEMINI.md . Running /init in a new project creates a template context file with information like the tech stack detected, a summary of the project, etc .. You can then edit and expand that file. For large projects, consider breaking the context into multiple files and importing them into GEMINI.md with @include syntax. For example, your main GEMINI.md could have lines like @./docs/prompt-guidelines.md to pull in additional context files . This keeps your instructions organized.

With a well-crafted GEMINI.md , you essentially give Gemini CLI a "memory" of the project's requirements and conventions. This persistent context leads to more relevant responses and less back-and-forth prompt engineering.

Tip 2: Create Custom Slash Commands

Quick use-case: Speed up repetitive tasks by defining your own slash commands. For example, you could make a command /test:gen that generates unit tests from a description, or /db:reset that drops and recreates a test database. This extends Gemini CLI's functionality with one-liners tailored to your workflow.

Gemini CLI supports custom slash commands that you can define in simple configuration files. Under the hood, these are essentially pre-defined prompt templates. To create one, make a directory commands/ under either ~/.gemini/ for global commands or in your project's .gemini/ folder for project-specific commands . Inside commands/ , create a TOML file for each new command. The file name format determines the command name: e.g. a file test/gen.toml defines a command /test:gen .

Let's walk through an example. Say you want a command to generate a unit test from a requirement description. You could create ~/.gemini/commands/test/gen.toml with the following content:

# Invoked as: /test:gen "Description of the test"  
description = "Generates a unit test based on a requirement."  
prompt = """  
You are an expert test engineer. Based on the following requirement, please write a comprehensive unit test using the Jest framework.

Requirement: {{args}}  
"""

Now, after reloading or restarting Gemini CLI, you can simply type:

/test:gen "Ensure the login button redirects to the dashboard upon success"

Gemini CLI will recognize /test:gen and substitute the {{args}} in your prompt template with the provided argument (in this case, the requirement). The AI will then proceed to generate a Jest unit test accordingly . The description field is optional but is used when you run /help or /tools to list available commands.

This mechanism is extremely powerful - effectively, you can script the AI with natural language. The community has created numerous useful custom commands. For instance, Google's DevRel team shared a set of 10 practical workflow commands (via an open-source repo) demonstrating how you can script common flows like creating API docs, cleaning data, or setting up boilerplate code . By defining a custom command, you package a complex prompt (or series of prompts) into a reusable shortcut.

Pro Tip: Custom commands can also be used to enforce formatting or apply a "persona" to the AI for certain tasks. For example, you might have a /review:security command that always prefaces the prompt with "You are a security auditor..." to review code for vulnerabilities. This approach ensures consistency in how the AI responds to specific categories of tasks.

To share commands with your team, you can commit the TOML files in your project's repo (under .gemini/commands directory). Team members who have Gemini CLI will automatically pick up those commands when working in the project. This is a great way to standardize AI-assisted workflows across a team.

Tip 3: Extend Gemini with Your Own MCP Servers

Quick use-case: Suppose you want Gemini to interface with an external system or a custom tool that isn't built-in - for example, query a proprietary database, or integrate with Figma designs. You can do this by running a custom Model Context Protocol (MCP) server and plugging it into Gemini CLI . MCP servers let you add new tools and abilities to Gemini, effectively extending the agent .

Gemini CLI comes with several MCP servers out-of-the-box (for instance, ones enabling Google Search, code execution sandboxes, etc.), and you can add your own. An MCP server is essentially an external process (it could be a local script, a microservice, or even a cloud endpoint) that speaks a simple protocol to handle tasks for Gemini. This architecture is what makes Gemini CLI so extensible .

Examples of MCP servers: Some community and Google-provided MCP integrations include a Figma MCP (to fetch design details from Figma), a Clipboard MCP (to read/write from your system clipboard), and others. In fact, in an internal demo, the Gemini CLI team showcased a "Google Docs MCP" server that allowed saving content directly to Google Docs . The idea is that whenever Gemini needs to perform an action that the built-in tools can't handle, it can delegate to your MCP server.

How to add one: You can configure MCP servers via your settings.json or using the CLI. For a quick setup, try the CLI command:

gemini mcp add myserver --command "python3 my_mcp_server.py" --port 8080

This would register a server named "myserver" that Gemini CLI will launch by running the given command (here a Python module) on port 8080. In ~/.gemini/settings.json , it would add an entry under mcpServers . For example:

"mcpServers": {
  "myserver": {
    "command": "python3",
    "args": ["-m", "my_mcp_server", "--port", "8080"],
    "cwd": "./mcp_tools/python",
    "timeout": 15000
  }
}

This configuration (based on the official docs) tells Gemini how to start the MCP server and where. Once running, the tools provided by that server become available to Gemini CLI. You can list all MCP servers and their tools with the slash command:

/mcp

This will show any registered servers and what tool names they expose.
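
For reference, a minimal server of this kind might look roughly like the sketch below, assuming the official MCP Python SDK (the word_count tool is a made-up example, and a real setup would use whichever transport and port your settings.json entry expects):

# my_mcp_server.py - a minimal, hypothetical MCP server exposing a single tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("myserver")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; HTTP/SSE transports are also supported

Once registered, such a tool shows up alongside the built-ins and is subject to the same confirmation and trust settings described below.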

Power of MCP: MCP servers can provide rich, multi-modal results . For instance, a tool served via MCP could return an image or a formatted table as part of the response to Gemini CLI . They also support OAuth 2.0, so you can securely connect to APIs (like Google's APIs, GitHub, etc.) via an MCP tool without exposing credentials . Essentially, if you can code it, you can wrap it as an MCP tool - turning Gemini CLI into a hub that orchestrates many services.

Default vs. custom: By default, Gemini CLI's built-in tools cover a lot (reading files, web search, executing shell commands, etc.), but MCP lets you go beyond. Some advanced users have created MCP servers to interface with internal systems or to perform specialized data processing. For example, you could have a database-mcp that provides a /query_db tool for running SQL queries on a company database, or a jira-mcp to create tickets via natural language.

When creating your own, be mindful of security: by default, custom MCP tools require confirmation unless you mark them as trusted. You can control safety with settings like trust: true for a server (which auto-approves its tool actions), or by allow-listing specific safe tools and blocking dangerous ones in that server's settings entry.

In short, MCP servers unlock limitless integration . They're a pro feature that lets Gemini CLI become a glue between your AI assistant and whatever system you need it to work with. If you're interested in building one, check out the official MCP guide and community examples.

Tip 4: Leverage Memory Addition & Recall

Quick use-case: Keep important facts at your AI's fingertips by adding them to its long-term memory. For example, after figuring out a database port or an API token, you can do:

/memory add "Our staging RabbitMQ is on port 5673"

This will store that fact so you (or the AI) don't forget it later . You can then recall everything in memory with /memory show at any time.

The /memory commands provide a simple but powerful mechanism for persistent memory. When you use /memory add <text>, the given text is appended to your persistent context (technically, it's saved into the global ~/.gemini/GEMINI.md file or the project's GEMINI.md). It's a bit like taking a note and pinning it to the AI's virtual bulletin board. Once added, the AI will always see that note in the prompt context for future interactions, across sessions.

Consider an example: you're debugging an issue and discover a non-obvious insight ("The config flag X_ENABLE must be set to true or the service fails to start"). If you add this to memory, later on if you or the AI are discussing a related problem, it won't overlook this critical detail - it's in the context.

Using /memory :

  • /memory add "<text>" - Add a fact or note to memory (persistent context). This updates the GEMINI.md immediately with the new entry.

  • /memory show - Display the full content of the memory (i.e. the combined context file that's currently loaded).

  • /memory refresh - Reload the context from disk (useful if you manually edited the GEMINI.md file outside of Gemini CLI, or if multiple people are collaborating on it).

Because the memory is stored in Markdown, you can also manually edit the GEMINI.md file to curate or organize the info. The /memory commands are there for convenience during conversation, so you don't have to open an editor.
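Putting those commands together, a quick session might look like this - add the note, check that it took with /memory show, and refresh if you later edit GEMINI.md by hand (the note text reuses the example insight from above):

gemini> /memory add "The config flag X_ENABLE must be set to true or the service fails to start"
gemini> /memory show
gemini> /memory refresh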

Pro Tip: This feature is great for "decision logs." If you decide on an approach or rule during a chat (e.g., a certain library to use, or an agreed code style), add it to memory. The AI will then recall that decision and avoid contradicting it later. It's especially useful in long sessions that might span hours or days - by saving key points, you mitigate the model's tendency to forget earlier context when the conversation gets long.

Another use is personal notes. Because ~/.gemini/GEMINI.md (global memory) is loaded for all sessions, you could put general preferences or information there. For example, "The user's name is Alice. Speak politely and avoid slang." It's like configuring the AI's persona or global knowledge. Just be aware that global memory applies to all projects, so don't clutter it with project-specific info.

In summary, Memory Addition & Recall helps Gemini CLI maintain state. Think of it as a knowledge base that grows with your project. Use it to avoid repeating yourself or to remind the AI of facts it would otherwise have to rediscover from scratch.

Tip 5: Use Checkpointing and /restore as an Undo Button

Quick use-case: If Gemini CLI makes a series of changes to your files that you're not happy with, you can instantly roll back to a prior state. Enable checkpointing when you start Gemini (or in settings), and use the /restore command to undo changes like a lightweight Git revert . /restore rolls back your workspace to the saved checkpoint; conversation state may be affected depending on how the checkpoint was captured.

Gemini CLI's checkpointing feature acts as a safety net. When enabled, the CLI takes a snapshot of your project's files before each tool execution that modifies files . If something goes wrong, you can revert to the last known good state. It's essentially version control for the AI's actions, without you needing to manually commit to Git each time.

How to use it: You can turn on checkpointing by launching the CLI with the --checkpointing flag:

gemini --checkpointing

Alternatively, you can make it the default by adding to your config ( "checkpointing": { "enabled": true } in settings.json ). Once active, you'll notice that each time Gemini is about to write to a file, it says something like "Checkpoint saved."

If you then realize an AI-made edit is problematic, you have two options:

  • Run /restore list (or just /restore with no arguments) to see a list of recent checkpoints with timestamps and descriptions.

  • Run /restore <id> to rollback to a specific checkpoint. If you omit the id and there's only one pending checkpoint, it will restore that by default .

For example, running /restore with no arguments might produce:

0: [2025-09-22 10:30:15] Before running 'apply_patch'
1: [2025-09-22 10:45:02] Before running 'write_file'

You can then do /restore 0 to revert all file changes (and possibly the conversation context) back to how things were at that checkpoint. In this way, you can "undo" a mistaken code refactor or any other changes Gemini made.

What gets restored: The checkpoint captures the state of your workspace files (everything Gemini CLI is allowed to modify). When you restore, files are overwritten with their saved versions, and the conversation context may be rolled back as well, depending on how the checkpoint was captured. It's like time-traveling the AI agent back to before it made the wrong turn. Note that it won't undo external side effects (for example, if the AI ran a database migration, it can't undo that), but anything in the file system (and, depending on the checkpoint, the chat context) is fair game.

Best practices: It's a good idea to keep checkpointing on for non-trivial tasks. The overhead is small, and it provides peace of mind. If you find you don't need a checkpoint (everything went well), you can always clear it or just let the next one overwrite it. The development team recommends using checkpointing especially before multi-step code edits. For mission-critical projects, though, you should still use proper version control (git) as your primary safety net - treat checkpoints as a convenience for quick undo rather than a full VCS.
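A minimal sketch of that belt-and-suspenders approach, assuming the project is already a git repository:

# Commit the current state first, then start Gemini with checkpointing enabled
git add -A && git commit -m "Snapshot before AI-assisted refactor"
gemini --checkpointing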

In essence, /restore lets you use Gemini CLI with confidence. You can let the AI attempt bold changes, knowing you have an "OH NO" button to rewind if needed.

Tip 6: Read Google Docs, Sheets, and More (via a Workspace MCP Server)

Quick use-case: Imagine you have a Google Doc or Sheet with some specs or data that you want the AI to use. Instead of copy-pasting the content, you can provide the link, and with a configured Workspace MCP server, Gemini CLI can fetch and read it (subject to the usual access permissions).

For example:

Summarize the requirements from this design doc: https://docs.google.com/document/d/<id>

Gemini can pull in the content of that Doc and incorporate it into its response. Similarly, it can read Google Sheets or Drive files by link.

How this works: These capabilities are typically enabled via MCP integrations . Google's Gemini CLI team has built (or is working on) connectors for Google Workspace. One approach is running a small MCP server that uses Google's APIs (Docs API, Sheets API, etc.) to retrieve document content when given a URL or ID . When configured, you might have slash commands or tools like /read_google_doc or simply an auto-detection that sees a Google Docs link and invokes the appropriate tool to fetch it.

For example, in an Agent Factory podcast demo, the team used a Google Docs MCP to save a summary directly to a doc - which implies they could also read the doc's content in the first place. In practice, you might do something like:

@https://docs.google.com/document/d/XYZ12345

Including a URL with @ (the context reference syntax) signals Gemini CLI to fetch that resource. With a Google Doc integration in place, the content of that document would be pulled in as if it were a local file. From there, the AI can summarize it, answer questions about it, or otherwise use it in the conversation.

Similarly, if you paste a Google Drive file link , a properly configured Drive tool could download or open that file (assuming permissions and API access are set up). Google Sheets could be made available via an MCP that runs queries or reads cell ranges, enabling you to ask things like "What's the sum of the budget column in this Sheet [link]?" and have the AI calculate it.

Setting it up: As of this writing, the Google Workspace integrations may require some tinkering (obtaining API credentials, running an MCP server such as the one described by Kanshi Tanaike , etc.). Keep an eye on the official Gemini CLI repository and community forums for ready-to-use extensions - for example, an official Google Docs MCP might become available as a plugin/extension. If you're eager, you can write one following guides on how to use Google APIs within an MCP server . It typically involves handling OAuth (which Gemini CLI supports for MCP servers) and then exposing tools like read_google_doc .

Usage tip: When you have these tools, using them can be as simple as providing the link in your prompt (the AI might automatically invoke the tool to fetch it) or using a slash command like /doc open <URL> . Check /tools to see what commands are available - Gemini CLI lists all tools and custom commands there .

In summary, Gemini CLI can reach out beyond your local filesystem . Whether it's Google Docs, Sheets, Drive, or other external content, you can pull data in by reference. This pro tip saves you from manual copy-paste and keeps the context flow natural - just refer to the document or dataset you need, and let the AI grab what's needed. It makes Gemini CLI a true knowledge assistant for all the information you have access to, not just the files on your disk.

(Note: Accessing private documents of course requires the CLI to have the appropriate permissions. Always ensure any integration respects security and privacy. In corporate settings, setting up such integrations might involve additional auth steps.)

Tip 7: Reference Files and Images with @ for Explicit Context

Quick use-case: Instead of describing a file's content or an image verbally, just point Gemini CLI directly to it. Using the @ syntax, you can attach files, directories, or images into your prompt. This guarantees the AI sees exactly what's in those files as context . For example:

Explain this code to me: @./src/main.js

This will include the contents of src/main.js in the prompt (up to Gemini's context size limits), so the AI can read it and explain it .

This @ file reference is one of Gemini CLI's most powerful features for developers. It eliminates ambiguity - you're not asking the model to rely on memory or guesswork about the file, you're literally handing it the file to read. You can use this for source code, text documents, logs, etc. Similarly, you can reference entire directories :

Refactor the code in @./utils/ to use async/await.

By appending a path that ends in a slash, Gemini CLI will recursively include files from that directory (within reason, respecting ignore files and size limits). This is great for multi-file refactors or analyses, as the AI can consider all relevant modules together.

Even more impressively, you can reference binary files like images in prompts. Gemini CLI (using the Gemini model's multimodal capabilities) can understand images. For example:

Describe what you see in this screenshot: @./design/mockup.png

The image will be fed into the model, and the AI might respond with something like "This is a login page with a blue sign-in button and a header image," and so on. You can imagine the uses: reviewing UI mockups, organizing photos (as we'll see in a later tip), or extracting text from images (Gemini can do OCR as well).

A few notes on using @ references effectively:

  • File limits: Gemini 2.5 Pro has a huge context window (up to 1 million tokens ), so you can include quite large files or many files. However, extremely large files might be truncated. If a file is enormous (say, hundreds of thousands of lines), consider summarizing it or breaking it into parts. Gemini CLI will warn you if a reference is too large or if it skipped something due to size.

  • Automatic ignoring: By default, Gemini CLI respects your .gitignore and .geminiignore files when pulling in directory context . So if you @./ a project root, it will not dump huge ignored folders (like node_modules ) into the prompt. You can customize ignore patterns with .geminiignore similarly to how .gitignore works.

  • Explicit vs implicit context: Taylor Mullen (the creator of Gemini CLI) emphasizes using @ for explicit context injection rather than relying on the model's memory or summarizing things yourself. It's more precise and ensures the AI isn't hallucinating content. Whenever possible, point the AI to the source of truth (code, config files, documentation) with @ references. This practice can significantly improve accuracy.

  • Chaining references: You can include multiple files in one prompt, like:

Compare @./foo.py and @./bar.py and tell me differences.

The CLI will include both files. Just be mindful of token limits; multiple large files might consume a lot of the context window.

Using @ is essentially how you feed knowledge into Gemini CLI on the fly . It turns the CLI into a multi-modal reader that can handle text and images. As a pro user, get into the habit of leveraging this - it's often faster and more reliable than asking the AI something like "Open the file X and do Y" (which it may or may not do on its own). Instead, you explicitly give it X to work with.

Tip 8: On-the-Fly Tool Creation (Have Gemini Build Helpers)

Quick use-case: If a task at hand would benefit from a small script or utility, you can ask Gemini CLI to create that tool for you - right within your session. For example, you might say, "Write a Python script to parse all JSON files in this folder and extract the error fields." Gemini can generate the script, which you can then execute via the CLI. In essence, you can dynamically extend the toolset as you go.

Gemini CLI is not limited to its pre-existing tools; it can use its coding abilities to fabricate new ones when needed. This often happens implicitly: if you ask for something complex, the AI might propose writing a temporary file (with code) and then running it. As a user, you can also guide this process explicitly:

  • Creating scripts: You can prompt Gemini to create a script or program in the language of your choice. It will likely use the write_file tool to create the file. For instance:
Generate a Node.js script that reads all '.log' files in the current directory and reports the number of lines in each.

Gemini CLI will draft the code, and with your approval, write it to a file (e.g. script.js ). You can then run it by either using the ! shell command (e.g. !node script.js ) or by asking Gemini CLI to execute it (the AI might automatically use run_shell_command to execute the script it just wrote, if it deems it part of the plan).

  • Temporary tools via MCP: In advanced scenarios, the AI might even suggest launching an MCP server for some specialized tasks. For example, if your prompt involves some heavy text processing that might be better done in Python, Gemini could generate a simple MCP server in Python and run it. While this is more rare, it demonstrates that the AI can set up a new "agent" on the fly. (One of the slides from the Gemini CLI team humorously referred to "MCP servers for everything, even one called LROwn" - suggesting you can have Gemini run an instance of itself or another model, though that's more of a trick than a practical use!).

The key benefit here is automation . Instead of you manually stopping to write a helper script, you can let the AI do it as part of the flow. It's like having an assistant who can create tools on-demand. This is especially useful for data transformation tasks, batch operations, or one-off computations that the built-in tools don't directly provide.

Nuances and safety: When Gemini CLI writes code for a new tool, you should still review it before running it. The diff view (Gemini shows you the file diff before you approve the write) is your chance to inspect the code. Make sure it does what you expect and nothing destructive - the AI shouldn't produce something harmful unless your prompt explicitly asks for it, but as with any AI-generated code, double-check the logic, especially for scripts that delete or modify lots of data.

Example scenario: Let's say you have a CSV file and you want to filter it in a complex way. You ask Gemini CLI to do it, and it might say: "I will write a Python script to parse the CSV and apply the filter." It then creates filter_data.py . After you approve and it runs, you get your result, and you might never need that script again. This ephemeral creation of tools is a pro move - it shows the AI effectively extending its capabilities autonomously.
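As a sketch, that flow might look like the following in the session (the file name filter_data.py comes from the scenario above; the exact prompt and script are illustrative):

gemini> Filter data.csv down to the rows where the status column is "failed" and save them to failed.csv. Write a helper script if needed.
(Gemini drafts filter_data.py; you review the diff and approve the write)
gemini> !python3 filter_data.py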

Pro Tip: If you find the script useful beyond the immediate context, you can promote it into a permanent tool or command. For instance, if the AI generated a great log-processing script, you might later turn it into a custom slash command (Tip #2) for easy reuse. The combination of Gemini's generative power and the extension hooks means your toolkit can continuously evolve as you use the CLI.

In summary, don't restrict Gemini to what it comes with . Treat it as a junior developer who can whip up new programs or even mini-servers to help solve the problem. This approach embodies the agentic philosophy of Gemini CLI - it will figure out what tools it needs, even if it has to code them on the spot.

Tip 9: Use Gemini CLI for System Troubleshooting & Configuration

Quick use-case: You can run Gemini CLI outside of a code project to help with general system tasks - think of it as an intelligent assistant for your OS. For example, if your shell is misbehaving, you could open Gemini in your home directory and ask: "Fix my .bashrc file, it has an error." Gemini can then open and edit your config file for you.

This tip highlights that Gemini CLI isn't just for coding projects - it's your AI helper for your whole development environment . Many users have used Gemini to customize their dev setup or fix issues on their machine:

  • Editing dotfiles: You can load your shell configuration ( .bashrc or .zshrc ) by referencing it ( @~/.bashrc ) and then ask Gemini CLI to optimize or troubleshoot it. For instance, "My PATH isn't picking up Go binaries, can you edit my .bashrc to fix that?" The AI can insert the correct export line. It will show you the diff for confirmation before saving changes.

  • Diagnosing errors: If you encounter a cryptic error in your terminal or an application log, you can copy it and feed it to Gemini CLI. It will analyze the error message and often suggest steps to resolve it. This is similar to how one might use StackOverflow or Google, but with the AI directly examining your scenario. For example: "When I run npm install , I get an EACCES permission error - how do I fix this?" Gemini might detect it's a permissions issue in node_modules and guide you to change directory ownership or use a proper node version manager.

  • Running outside a project: By default, if you run gemini in a directory without a .gemini context, it just means no project-specific context is loaded - but you can still use the CLI fully. This is great for ad-hoc tasks like system troubleshooting. You might not have any code files for it to consider, but you can still run shell commands through it or let it fetch web info. Essentially, you're treating Gemini CLI as an AI-powered terminal that can do things for you, not just chat.

  • Workstation customization: Want to change a setting or install a new tool? You can ask Gemini CLI, "Install Docker on my system" or "Configure my Git to sign commits with GPG." The CLI will attempt to execute the steps. It might fetch instructions from the web (using the search tool) and then run the appropriate shell commands. Of course, always watch what it's doing and approve the commands - but it can save time by automating multi-step setup processes. One real example: a user asked Gemini CLI to "set my macOS Dock preferences to auto-hide and remove the delay," and the AI was able to execute the necessary defaults write commands.
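For reference, the Dock request in that last example boils down to standard macOS defaults commands like these (Gemini would show each one for approval before running it):

defaults write com.apple.dock autohide -bool true       # enable auto-hide
defaults write com.apple.dock autohide-delay -float 0   # remove the reveal delay
killall Dock                                            # restart the Dock to apply the changes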

Think of this mode as using Gemini CLI as a smart shell . In fact, you can combine this with Tip 16 (shell passthrough mode) - sometimes you might drop into ! shell mode to verify something, then go back to AI mode to have it analyze output.

Caveat: When doing system-level tasks, be cautious with commands that have widespread impact (like rm -rf or system config changes). Gemini CLI will usually ask for confirmation, and it doesn't run anything without you seeing it. But as a power user, you should have a sense of what changes are being made. If unsure, ask Gemini to explain a command before running (e.g., "Explain what defaults write com.apple.dock autohide-delay -float 0 does" - it will gladly explain rather than just execute if you prompt it in that way).

Troubleshooting bonus: Another neat use is using Gemini CLI to parse logs or config files looking for issues. For instance, "Scan this Apache config for mistakes" (with @httpd.conf ), or "Look through syslog for errors around 2 PM yesterday" (with an @/var/log/syslog if accessible). It's like having a co-administrator. It can even suggest likely causes for crashes or propose fixes for common error patterns.

In summary, don't hesitate to fire up Gemini CLI as your assistant for environment issues . It's there to accelerate all your workflows - not just writing code, but maintaining the system that you write code on. Many users report that customizing their dev environment with Gemini's help feels like having a tech buddy always on call to handle the tedious or complex setup steps.

Tip 10: YOLO Mode - Auto-Approve Tool Actions (Use with Caution)

Quick use-case: If you're feeling confident (or adventurous), you can let Gemini CLI run tool actions without asking for your confirmation each time. This is YOLO mode (You Only Live Once). It's enabled by the --yolo flag or by pressing Ctrl+Y during a session . In YOLO mode, as soon as the AI decides on a tool (like running a shell command or writing to a file), it executes it immediately, without that "Approve? (y/n)" prompt.

Why use YOLO mode? Primarily for speed and convenience when you trust the AI's actions . Experienced users might toggle YOLO on if they're doing a lot of repetitive safe operations. For example, if you ask Gemini to generate 10 different files one after another, approving each can slow down the flow; YOLO mode would just let them all be written automatically. Another scenario is using Gemini CLI in a completely automated script or CI pipeline - you might run it headless with --yolo so it doesn't pause for confirmation.

To start in YOLO mode from the get-go, launch the CLI with:

gemini --yolo

Or use the short form gemini -y. You'll see some indication in the CLI (like a different prompt or a notice) that auto-approve is on. During an interactive session, you can toggle it by pressing Ctrl+Y at any time - the CLI will usually display a message like "YOLO mode enabled (all actions auto-approved)" in the footer.

Big warning: YOLO mode is powerful but risky . The Gemini team themselves labels it for "daring users" - meaning you should be aware that the AI could potentially execute a dangerous command without asking. In normal mode, if the AI decided to run rm -rf / (worst-case scenario), you'd obviously decline. In YOLO mode, that command would run immediately (and likely ruin your day). While such extreme mistakes are unlikely (the AI's system prompt includes safety guidelines), the whole point of confirmations is to catch any unwanted action. YOLO removes that safety net.

Best practices for YOLO: If you want some of the convenience without full risk, consider allow-listing specific commands. For example, you can configure in settings that certain tools or command patterns don't require confirmation (like allowing all git commands, or read-only actions). Gemini CLI supports a config for skipping confirmation on specific commands - e.g., something like "tools.shell.autoApprove": ["git ", "npm test"] to always run those (the exact settings key varies between versions, so check the settings reference). This way, you might not need YOLO mode globally - you selectively YOLO only safe commands. Another approach: run Gemini in a sandbox or container when using YOLO, so even if it does something wild, your system is insulated (Gemini has a --sandbox flag to run tools in a Docker container).
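If you want the convenience with a bit more containment, one option (using only the flags mentioned in this tip) is to pair auto-approval with the sandbox:

# Auto-approve tool actions, but run them inside the sandboxed environment
gemini --yolo --sandbox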

Many advanced users toggle YOLO on and off frequently - turning it on when doing a string of minor file edits or queries, and off when about to do something critical. You can do the same, using the keyboard shortcut as a quick toggle.

In summary, YOLO mode eliminates friction at the cost of oversight . It's a pro feature to use sparingly and wisely. It truly demonstrates trust in the AI (or recklessness!). If you're new to Gemini CLI, you should probably avoid YOLO until you clearly understand the patterns of what it tends to do. If you do use it, double down on having version control or backups - just in case.

(If it's any consolation, you're not alone - many in the community joke about "I YOLO'ed and Gemini did something crazy." So use it, but... well, you only live once.)

Tip 11: Headless & Scripting Mode (Run Gemini CLI in the Background)

Quick use-case: You can use Gemini CLI in scripts or automation by running it in headless mode . This means you provide a prompt (or even a full conversation) via command-line arguments or environment variables, and Gemini CLI produces an output and exits. It's great for integrating with other tools or triggering AI tasks on a schedule.

For instance, to get a one-off answer without opening the REPL, you've seen you can use gemini -p "...prompt..." . This is already headless usage: it prints the model's response and returns to the shell . But there's more you can do:

  • System prompt override: If you want to run Gemini CLI with a custom system persona or instruction set (different from the default), you can use the environment variable GEMINI_SYSTEM_MD . By setting this, you tell Gemini CLI to ignore its built-in system prompt and use your provided file instead . For example:
export GEMINI_SYSTEM_MD="/path/to/custom_system.md"
gemini -p "Perform task X with high caution"

This would load your custom_system.md as the system prompt (the "role" and rules the AI follows) before executing the prompt . Alternatively, if you set GEMINI_SYSTEM_MD=true , the CLI will look for a file named system.md in the current project's .gemini directory . This feature is very advanced - it essentially allows you to replace the built-in brain of the CLI with your own instructions, which some users do for specialized workflows (like simulating a specific persona or enforcing ultra-strict policies). Use it carefully, as replacing the core prompt can affect tool usage (the core prompt contains important directions for how the AI selects and uses tools ).

  • Direct prompt via CLI: Aside from -p , there's also -i (interactive prompt) which starts a session with an initial prompt, and then keeps it open. For example: gemini -i "Hello, let's debug something" will open the REPL and already have said hello to the model. This is useful if you want the first question to be asked immediately when starting.

  • Scripting with shell pipes: You can pipe not just text but also files or command outputs into Gemini. For example: gemini -p "Summarize this log:" < big_log.txt will feed the content of big_log.txt into the prompt (after the phrase "Summarize this log:"). Or you might do some_command | gemini -p "Given the above output, what went wrong?" . This technique allows you to compose Unix tools with AI analysis. It's headless in the sense that it's a single-pass operation.

  • Running in CI/CD: You could incorporate Gemini CLI into build processes. For instance, a CI pipeline might run a test and then use Gemini CLI to automatically analyze failing test output and post a comment. Using the -p flag and environment auth, this can be scripted. (Of course, ensure the environment has the API key or auth needed.)
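As a sketch of that CI idea (assuming the runner has a Gemini API key or other auth configured in its environment):

# Pipe the test output into a one-shot prompt and save the AI's analysis as a build artifact
npm test 2>&1 | gemini -p "These are the results of our test run. Summarize any failures and the most likely root cause." > ai-test-summary.txt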

One more headless trick: the --format=json flag (or config setting). Gemini CLI can output responses in JSON format instead of the human-readable text if you configure it . This is useful for programmatic consumption - your script can parse the JSON to get the answer or any tool actions details.

Why headless mode matters: It transforms Gemini CLI from an interactive assistant into a backend service or utility that other programs can call. You could schedule a cronjob that runs a Gemini CLI prompt nightly (imagine generating a report or cleaning up something with AI logic). You could wire up a button in an IDE that triggers a headless Gemini run for a specific task.

Example: Let's say you want a daily summary of a news website. You could have a script:

gemini -p "Web-fetch \"https://news.site/top-stories\" and extract the headlines, then write them to headlines.txt"

With --yolo perhaps, so it won't ask confirmation to write the file. This would use the web fetch tool to get the page and the file write tool to save the headlines. All automatically, no human in the loop. The possibilities are endless once you treat Gemini CLI as a scriptable component.

In summary, Headless Mode enables automation. It's the bridge between Gemini CLI and other systems. Mastering it means you can scale up your AI usage - not just when you're typing in the terminal, but even when you aren't around, your AI agent can do work for you.

(Tip: For truly long-running non-interactive tasks, you might also look into Gemini CLI's "Plan" mode or how it can generate multi-step plans without intervention. However, those are advanced topics beyond this scope. In most cases, a well-crafted single prompt via headless mode can achieve a lot.)

Tip 12: Save and Resume Chat Sessions

Quick use-case: If you've been debugging an issue with Gemini CLI for an hour and need to stop, you don't have to lose the conversation context. Use /chat save <name> to save the session. Later (even after restarting the CLI), you can use /chat resume <name> to pick up where you left off . This way, long-running conversations can be paused and continued seamlessly.

Gemini CLI essentially has a built-in chat session manager. The commands to know are:

  • /chat save <tag> - Saves the current conversation state under a tag/name you provide. The tag is like a filename or key for that session. Save as often as you like; saving with an existing tag overwrites it. (Using a descriptive name is helpful - e.g., /chat save fix-docker-issue .)

  • /chat list - Lists all your saved sessions (the tags you've used). This helps you remember what you named previous saves.

  • /chat resume <tag> - Resumes the session with that tag, restoring the entire conversation context and history to how it was when saved . It's like you never left. You can then continue chatting from that point.

  • /chat share - Saves the conversation to a file. This is useful because you can hand the entire chat to someone else, who can then continue the session - a lightweight form of collaboration.

Under the hood, these sessions are likely stored in ~/.gemini/chats/ or a similar location. They include the conversation messages and any relevant state. This feature is super useful for cases such as:

  • Long debugging sessions: Sometimes debugging with an AI can be a long back-and-forth. If you can't solve it in one go, save it and come back later (maybe with a fresh mind). The AI will still "remember" everything from before, because the whole context is reloaded.

  • Multi-day tasks: If you're using Gemini CLI as an assistant for a project, you might have one chat session for "Refactor module X" that spans multiple days. You can resume that specific chat each day so the context doesn't reset daily. Meanwhile, you might have another session for "Write documentation" saved separately. Switching contexts is just a matter of saving one and resuming the other.

  • Team hand-off: This is more experimental, but in theory, you could share the content of a saved chat with a colleague (the saved files are likely portable). If they put it in their .gemini directory and resume, they could see the same context. The practical simpler approach for collaboration is just copying the relevant Q&A from the log and using a shared GEMINI.md or prompt, but it's interesting to note that the session data is yours to keep.

Usage example:

gemini> /chat save api-upgrade

(Session saved as "api-upgrade")

(Later, reopen the CLI)

$ gemini
gemini> /chat list

(Shows: api-upgrade)

gemini> /chat resume api-upgrade

Now the model greets you with the last exchange's state ready. You can confirm by scrolling up that all your previous messages are present.

Pro Tip: Use meaningful tags when saving chats . Instead of /chat save session1 , give it a name related to the topic (e.g. /chat save memory-leak-bug ). This will help you find the right one later via /chat list . There is no strict limit announced on how many sessions you can save, but cleaning up old ones occasionally might be wise just for organization.

This feature turns Gemini CLI into a persistent advisor. You don't lose knowledge gained in a conversation; you can always pause and resume. It's a differentiator compared to some other AI interfaces that forget context when closed. For power users, it means you can maintain parallel threads of work with the AI. Just like you'd have multiple terminal tabs for different tasks, you can have multiple chat sessions saved and resume the one you need at any given time.

Tip 13: Multi-Directory Workspace - One Gemini, Many Folders

Quick use-case: Do you have a project split across multiple repositories or directories? You can launch Gemini CLI with access to all of them at once, so it sees a unified workspace. For example, if your frontend and backend are separate folders, you can include both so that Gemini can edit or reference files in both.

There are two ways to use multi-directory mode :

  • Launch flag: Use the --include-directories (or -I ) flag when starting Gemini CLI. For example:
gemini --include-directories "../backend:../frontend"

This assumes you run the command from, say, a scripts directory and want to include two sibling folders. You provide a colon-separated list of paths. Gemini CLI will then treat all those directories as part of one big workspace.

  • Persistent setting: In your settings.json , you can define "includeDirectories": ["path1", "path2", ...] . This is useful if you always want certain common directories loaded (e.g., a shared library folder that multiple projects use). The paths can be relative or absolute, and home-relative shortcuts (like ~/common-utils ) are allowed.

When multi-dir mode is active, the CLI's context and tools consider files across all included locations. The /directory show command will list which directories are in the current workspace. You can also dynamically add directories during a session with /directory add <path> - it will then load that folder on the fly (potentially scanning it for context like it does on startup).
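In a running session, that might look like the following (the ../shared-lib path is just an example):

gemini> /directory show
(Lists the directories currently included in the workspace)
gemini> /directory add ../shared-lib
(The new folder becomes part of the workspace, available to searches and @ references)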

Why use multi-directory mode? In microservice architectures or modular codebases, it's common that one piece of code lives in one repo and another piece in a different repo. If you only ran Gemini in one, it wouldn't "see" the others. By combining them, you enable cross-project reasoning. For example, you could ask, "Update the API client in the frontend to match the backend's new API endpoints" - Gemini can open the backend folder to see the API definitions and simultaneously open the frontend code to modify it accordingly. Without multi-dir, you'd have to do one side at a time and manually carry info over.

Example: Let's say you have client/ and server/ . You start:

cd client
gemini --include-directories "../server"

Now at the gemini> prompt, if you run !ls , you'll see it can list files in both client and server (it might show them as separate paths). You could do:

Open server/routes/api.py and client/src/api.js side by side to compare function names.

The AI will have access to both files. Or you might say:

The API changed: the endpoint "/users/create" is now "/users/register". Update both backend and frontend accordingly.

It can simultaneously create a patch in the backend route and adjust the frontend fetch call.

Under the hood, Gemini merges the file index of those directories. There might be some performance considerations if each directory is huge, but generally it handles multiple small-medium projects fine. The cheat sheet notes that this effectively creates one workspace with multiple roots .

Tip within a tip: Even if you don't use multi-dir all the time, know that you can still reference files across the filesystem by absolute path in prompts ( @/path/to/file ). However, without multi-dir, Gemini might not have permission to edit those or know to load context from them proactively. Multi-dir formally includes them in scope so it's aware of all files for tasks like search or code generation across the whole set.

Remove directories: If needed, /directory remove <path> (or a similar command) can drop a directory from the workspace. This is less common, but maybe if you included something accidentally, you can remove it.

In summary, multi-directory mode unifies your context . It's a must-have for polyrepo projects or any situation where code is split up. It makes Gemini CLI act more like an IDE that has your entire solution open. As a pro user, this means no part of your project is out of the AI's reach.

Tip 14: Organize and Clean Up Your Files with AI Assistance

Quick use-case: Tired of a messy Downloads folder or disorganized project assets? You can enlist Gemini CLI to act as a smart organizer. By providing it an overview of a directory, it can classify files and even move them into subfolders (with your approval). For instance, "Clean up my Downloads : move images to an Images folder, PDFs to Documents , and delete temporary files."

Because Gemini CLI can read file names, sizes, and even peek into file contents, it can make informed decisions about file organization . One community-created tool dubbed "Janitor AI" showcases this: it runs via Gemini CLI to categorize files as important vs junk, and groups them accordingly . The process involved scanning the directory, using Gemini's reasoning on filenames and metadata (and content if needed), then moving files into categories. Notably, it didn't automatically delete junk - rather, it moved them to a Trash folder for review .

Here's how you might replicate such a workflow with Gemini CLI manually:

  1. Survey the directory: Use a prompt to have Gemini list and categorize. For example:
List all files in the current directory and categorize them as "images", "videos", "documents", "archives", or "others".

Gemini might use !ls or similar to get the file list, then analyze the names/extensions to produce categories.

  2. Plan the organization: Ask Gemini how it would like to reorganize. For example:
Propose a new folder structure for these files. I want to separate by type (Images, Videos, Documents, etc.). Also identify any files that seem like duplicates or unnecessary.

The AI might respond with a plan: e.g., "Create folders: Images/ , Videos/ , Documents/ , Archives/ . Move X.png , Y.jpg to Images/ ; move A.mp4 to Videos/ ; etc. The file temp.txt looks unnecessary (maybe a temp file)."

  3. Execute moves with confirmation: You can then instruct it to carry out the plan. It may use shell commands like mv for each file. Since this modifies your filesystem, you'll get confirmation prompts for each (unless you YOLO it). Carefully approve the moves. After completion, your directory will be neatly organized as suggested.

Throughout, Gemini's natural language understanding is key. It can reason, for instance, that IMG_001.png is an image or that presentation.pdf is a document, even if not explicitly stated. It can even open an image (using its vision capability) to see what's in it - e.g., differentiating between a screenshot vs a photo vs an icon - and name or sort it accordingly .

Renaming files by content: A particularly magical use is having Gemini rename files to be more descriptive. The Dev Community article "7 Insane Gemini CLI Tips" describes how Gemini can scan images and automatically rename them based on their content . For example, a file named IMG_1234.jpg might be renamed to login_screen.jpg if the AI sees it's a screenshot of a login screen . To do this, you could prompt:

For each .png image here, look at its content and rename it to something descriptive.

Gemini will open each image (via vision tool), get a description, then propose a mv IMG_1234.png login_screen.png action . This can dramatically improve the organization of assets, especially in design or photo folders.

Two-pass approach: The Janitor AI discussion noted a two-step process: first broad categorization (important vs junk vs other), then refining groups . You can emulate this: first separate files that likely can be deleted (maybe large installer .dmg files or duplicates) from those to keep. Then focus on organizing the keepers. Always double-check what the AI flags as junk; its guess might not always be right, so manual oversight is needed.

Safety tip: When letting the AI loose on file moves or deletions, have backups or at least be ready to undo (with /restore or your own backup). It's wise to do a dry-run: ask Gemini to print the commands it would run to organize, without executing them, so you can review. For instance: "List the mv and mkdir commands needed for this plan, but don't execute them yet." Once you review the list, you can either copy-paste execute them, or instruct Gemini to proceed.
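A sketch of that dry-run flow, reusing the folder plan from the steps above (file names are illustrative):

gemini> List the mkdir and mv commands needed for this plan, but don't execute them yet.
(Review the proposed commands, then either run them yourself in shell mode or tell Gemini to proceed)
gemini> !mkdir -p Images Videos Documents Archives
gemini> !mv IMG_001.png Images/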

This is a prime example of using Gemini CLI for "non-obvious" tasks - it's not just writing code, it's doing system housekeeping with AI smarts . It can save time and bring a bit of order to chaos. After all, as developers we accumulate clutter (logs, old scripts, downloads), and an AI janitor can be quite handy.

Tip 15: Compress Long Conversations to Stay Within Context

Quick use-case: If you've been chatting with Gemini CLI for a long time, you might hit the model's context length limit or just find the session getting unwieldy. Use the /compress command to summarize the conversation so far, replacing the full history with a concise summary . This frees up space for more discussion without starting from scratch.

Large language models have a fixed context window (Gemini 2.5 Pro's is very large, but not infinite). If you exceed it, the model may start forgetting earlier messages or lose coherence. The /compress feature is essentially an AI-generated tl;dr of your session that keeps important points.

How it works: When you type /compress , Gemini CLI will take the entire conversation (except system context) and produce a summary. It then replaces the chat history with that summary as a single system or assistant message, preserving essential details but dropping minute-by-minute dialogue. It will indicate that compression happened. For example, after /compress , you might see something like:

--- Conversation compressed ---
Summary of discussion: The user and assistant have been debugging a memory leak in an application. Key points: The issue is likely in DataProcessor.js , where objects aren't being freed. The assistant suggested adding logging and identified a possible infinite loop. The user is about to test a fix.
--- End of summary ---

From that point on, the model only has that summary (plus new messages) as context for what happened before. This usually is enough if the summary captured the salient info.

When to compress: Ideally before you hit the limit. If you notice the session is getting lengthy (several hundred turns or a lot of code in context), compress proactively. The cheat sheet mentions an automatic compression setting (e.g., compress when context exceeds 60% of max ). If you enable that, Gemini might auto-compress and let you know. Otherwise, manual /compress is in your toolkit.

After compressing: You can continue the conversation normally. If needed, you can compress multiple times in a very long session. Each time, you lose some granularity, so don't compress too frequently for no reason - you might end up with an overly brief remembrance of a complex discussion. But generally the model's own summarization is pretty good at keeping the key facts (and you can always restate anything critical yourself).

Context window example: Let's illustrate. Suppose you fed in a large codebase by referencing many files and had a 1M token context (the max). If you then want to shift to a different part of the project, rather than starting a new session (losing all that understanding), you could compress. The summary will condense the knowledge gleaned from the code (like "We loaded modules A, B, C. A has these functions... B interacts with C in these ways..."). Now you can proceed to ask about new things with that knowledge retained abstractly.

Memory vs Compression: Note that compression doesn't save to long-term memory, it's local to the conversation. If you have facts you never want lost, consider Tip 4 (adding to /memory ) - because memory entries will survive compression (they'll just be reinserted anyway since they are in GEMINI.md context). Compression is more about ephemeral chat content.

A minor caution: after compression, the AI's style might slightly change because it's effectively seeing a "fresh" conversation with a summary. It might reintroduce itself or change tone. You can instruct it like "Continue from here... (we compressed)" to smooth it out. In practice, it often continues fine.

To summarize (pun intended), use /compress as your session grows long to maintain performance and relevance. It helps Gemini CLI focus on the bigger picture instead of every detail of the conversation's history. This way, you can have marathon debugging sessions or extensive design discussions without running out of the "mental paper" the AI is writing on.

Tip 16: Passthrough Shell Commands with ! (Talk to Your Terminal)

Quick use-case: At any point in a Gemini CLI session, you can run actual shell commands by prefixing them with ! . For example, if you want to check the git status, just type !git status and it will execute in your terminal . This saves you from switching windows or context - you're still in the Gemini CLI, but you're essentially telling it "let me run this command real quick."

This tip is about Shell Mode in Gemini CLI. There are two ways to use it:

  • Single command: Just put ! at the start of your prompt, followed by any command and arguments. This will execute that command in the current working directory and display the output in-line . For example:

!ls src

will list the files in the src directory, outputting something like you'd see in a normal terminal. After the output, the Gemini prompt returns so you can continue chatting or issue more commands.

  • Persistent shell mode: If you enter ! alone and hit Enter, Gemini CLI switches into a sub-mode where you get a shell prompt (often it looks like shell> or similar). Now you can type multiple shell commands interactively. It's basically a mini-shell within the CLI. You exit this mode by typing ! on an empty line again (or exit). For instance:
!
shell> pwd
/home/alice/project
shell> python --version
Python 3.x.x
shell> !

After the final ! , you're back to the normal Gemini prompt.

Why is this useful? Because development is a mix of actions and inquiries. You might be discussing something with the AI and realize you need to compile the code or run tests to see something. Instead of leaving the conversation, you can quickly do it and feed the result back into the chat. In fact, Gemini CLI often does this for you as part of its tool usage (it might automatically run !pytest when you ask to fix tests, for example ). But as the user, you have full control to do it manually too.

Examples:

  • After Gemini suggests a fix in code, you can do !npm run build to see if it compiles, then copy any errors and ask Gemini to help with those.

  • If you want to open a file in vim or nano , you could even launch it via !nano filename (though note that since Gemini CLI has its own interface, using an interactive editor inside it might be a bit awkward - better to use the built-in editor integration or copy to your editor).

  • You can use shell commands to gather info for the AI: e.g., !grep TODO -R . to find all TODOs in the project, then you might ask Gemini to help address those TODOs.

  • Or simply use it for environment tasks: !pip install some-package if needed, etc., without leaving the CLI.

Seamless interplay: One cool aspect is how the conversation can refer to outputs. For example, you could do !curl http://example.com to fetch some data, see the output, then immediately say to Gemini, "Format the above output as JSON" - since the output was printed in the chat, the AI has it in context to work with (provided it's not too large).

Terminal as a default shell: If you find yourself always prefacing commands with ! , you can actually make the shell mode persistent by default. One way is launching Gemini CLI with a specific tool mode (there's a concept of default tool). But easier: just drop into shell mode ( ! with nothing) at session start if you plan to run a lot of manual commands and only occasionally talk to AI. Then you can exit shell mode whenever you want to ask a question. It's almost like turning Gemini CLI into your normal terminal that happens to have an AI readily available.

Integration with AI planning: Sometimes Gemini CLI itself will propose to run a shell command. If you approve, it effectively does the same as !command . Understanding that, you know you can always intervene. If Gemini is stuck or you want to try something, you don't have to wait for it to suggest - you can just do it and then continue.

In summary, the ! passthrough means you don't have to leave Gemini CLI for shell tasks . It collapses the boundary between chatting with the AI and executing commands on your system. As a pro user, this is fantastic for efficiency - your AI and your terminal become one continuous environment.

Tip 17: Treat Every CLI Tool as a Potential Gemini Tool

Quick use-case: Realize that Gemini CLI can leverage any command-line tool installed on your system as part of its problem-solving. The AI has access to the shell, so if you have cURL , ImageMagick , git , Docker , or any other tool, Gemini can invoke it when appropriate. In other words, your entire $PATH is the AI's toolkit . This greatly expands what it can do - far beyond its built-in tools.

For example, say you ask: "Convert all PNG images in this folder to WebP format." If you have ImageMagick's convert utility installed, Gemini CLI might plan something like: use a shell loop with convert command for each file . Indeed, one of the earlier examples from a blog showed exactly this, where the user prompted to batch-convert images, and Gemini executed a shell one-liner with the convert tool .
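The proposed command for that kind of request would look something like this one-liner (assuming ImageMagick with WebP support is installed; in normal mode you'd see it in a confirmation prompt before anything runs):

# Convert every PNG in the current directory to WebP, keeping the originals
for f in *.png; do convert "$f" "${f%.png}.webp"; done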

Another scenario: "Deploy my app to Docker." If Docker CLI is present, the AI could call docker build and docker run steps as needed. Or "Use FFmpeg to extract audio from video.mp4 " - it can construct the ffmpeg command.

This tip is about mindset: Gemini isn't limited to what's coded into it (which is already extensive). It can figure out how to use other programs available to achieve a goal . It knows common syntax and can read help texts if needed (it could call --help on a tool). The only limitation is safety: by default, it will ask confirmation for any run_shell_command it comes up with. But as you become comfortable, you might allow certain benign commands automatically (see YOLO or allowed-tools config).

Be mindful of the environment: "With great power comes great responsibility." Since every shell tool is fair game, you should ensure that your $PATH doesn't include anything you wouldn't want the AI to run inadvertently. This is where Tip 19 (custom PATH) comes in - some users create a restricted $PATH for Gemini so that it can't call destructive system commands, or so that it can't invoke gemini recursively (to avoid loops). The point is, by default, if gcc or terraform or anything else is in $PATH, Gemini could invoke it. It doesn't mean it will randomly do so - only if the task calls for it - but it's possible.

Train of thought example: Imagine you ask Gemini CLI: "Set up a basic HTTP server that serves the current directory." The AI might think: "I can use Python's built-in server for this." It then issues !python3 -m http.server 8000 . Now it just used a system tool (Python) to launch a server. That's an innocuous example. Another: "Check the memory usage on this Linux system." The AI might use the free -h command or read from /proc/meminfo . It's effectively doing what a sysadmin would do, by using available commands.

All tools are extensions of the AI: This is somewhat futuristic, but consider that any command-line program can be seen as a "function" the AI can call to extend its capability. Need to solve a math problem? It could call bc (calculator). Need to manipulate an image? It could call an image processing tool. Need to query a database? If the CLI client is installed and credentials are there, it can use it. The possibilities are expansive. In other AI agent frameworks, this is known as tool use, and Gemini CLI is designed with a lot of trust in its agent to decide the right tool .

When it goes wrong: The flip side is if the AI misunderstands a tool or has a hallucination about one. It might try to call a command that doesn't exist, or use wrong flags, resulting in errors. This isn't a big deal - you'll see the error and can correct or clarify. In fact, the system prompt of Gemini CLI likely guides it to first do a dry-run (just propose the command) rather than executing blindly. So you often get a chance to catch these. Over time, the developers are improving the tool selection logic to reduce these missteps.

The main takeaway is to think of Gemini CLI as having a very large Swiss Army knife - not just the built-in blades, but every tool in your OS. You don't have to instruct it on how to use them if it's something standard; usually it knows or can find out. This significantly amplifies what you can accomplish. It's like having a junior dev or devops engineer who knows how to run pretty much any program you have installed.

As a pro user, you can even install additional CLI tools specifically to give Gemini more powers. For example, if you install a CLI for a cloud service (AWS CLI, GCloud CLI, etc.), in theory Gemini can utilize it to manage cloud resources if prompted to. Always ensure you understand and trust the commands run, especially with powerful tools (you wouldn't want it spinning up huge cloud instances accidentally). But used wisely, this concept - everything is a Gemini tool - is what makes it exponentially more capable as you integrate it into your environment.

Tip 18: Utilize Multimodal AI - Let Gemini See Images and More

Quick use-case: Gemini CLI isn't limited to text - it's multimodal. This means it can analyze images, diagrams, or even PDFs if given. Use this to your advantage. For instance, you could say "Here's a screenshot of an error dialog, @./error.png - help me troubleshoot this." The AI will "see" the image and respond accordingly.

One of the standout features of Google's Gemini models is image understanding. In Gemini CLI, if you reference an image with @ , the model receives the image data. It can output descriptions, classifications, or reason about the image's content. We already discussed renaming images by content (Tip 14) and describing screenshots (Tip 7). But let's consider other creative uses:

  • UI/UX feedback: If you're a developer working with designers, you can drop a UI image and ask Gemini for feedback or to generate code. "Look at this UI mockup @mockup.png and produce a React component structure for it." It could identify elements in the image (header, buttons, etc.) and outline code.

  • Organizing images: Beyond renaming, you might have a folder of mixed images and want to sort by content. "Sort the images in ./photos/ into subfolders by theme (e.g., sunsets, mountains, people)." The AI can look at each photo and categorize it (this is similar to what some photo apps do with AI - now you can do it with your own script via Gemini).

  • OCR and data extraction: If you have a screenshot of error text or a photo of a document, Gemini can often read the text from it. For example, "Extract the text from invoice.png and put it into a structured format." As shown in a Google Cloud blog example, Gemini CLI can process a set of invoice images and output a table of their info . It basically did OCR + understanding to get invoice numbers, dates, amounts from pictures of invoices. That's an advanced use-case but entirely possible with the multimodal model under the hood.

  • Understanding graphs or charts: If you have a graph screenshot, you could ask "Explain this chart's key insights @chart.png ." It might interpret the axes and trends. Accuracy can vary, but it's a nifty try.

To make this practical: when you @image.png , ensure the image isn't too huge (though the model can handle reasonably large images). The CLI will likely encode it and send it to the model. The response might include descriptions or further actions. You can mix text and image references in one prompt too.
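For example, a one-shot run might look like the sketch below - the file path and prompt are made up, and if your CLI version only expands @ references interactively, paste the same prompt into a running session instead:

# hypothetical example: OCR-style extraction from a local screenshot
gemini -p "Read @./scans/invoice-0042.png and output the invoice number, date, and total as CSV"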

Non-image modalities: The CLI and model can potentially handle PDFs and audio too, by converting them via tools. For example, if you @report.pdf , Gemini CLI might use a PDF-to-text tool under the hood to extract text and then summarize. If you @audio.mp3 and ask for a transcript, it might use an audio-to-text tool (like a speech recognition function). The cheat sheet suggests that referencing PDFs, audio, and video files is supported, presumably by invoking appropriate internal tools or APIs. So, "transcribe this interview audio: @interview.wav " could actually work (if not now, likely soon, since underlying Google APIs for speech-to-text could be plugged in).

Rich outputs: Multimodal also means the AI can return images in responses if integrated (though in CLI it usually won't display them directly, but it could save an image file or output ASCII art, etc.). The MCP capability mentioned that tools can return images . For instance, an AI drawing tool could generate an image and Gemini CLI could present it (maybe by opening it or giving a link).

Important: The CLI itself is text-based, so you won't see the image in the terminal (unless it's capable of ASCII previews). You'll just get the analysis. So this is mostly about reading images, not displaying them. If you're in VS Code integration, it might show images in the chat view.

In summary, don't treat Gemini CLI as text-only - it can handle the visual just as well as the textual in many cases. This opens up workflows like visual debugging, design help, data extraction from screenshots, etc., all under the same tool. It's a differentiator that some other CLI tools may not have yet. And as models improve, this multimodal support will only get more powerful, so it's a future-proof skill to exploit.

Tip 19: Customize the $PATH (and Tool Availability) for Stability

Quick use-case: If you ever find Gemini CLI getting confused or invoking the wrong programs, consider running it with a tailored $PATH . By limiting or ordering the available executables, you can prevent the AI from, say, calling a similarly named script that you didn't intend. Essentially, you sandbox its tool access to known-good tools.

For most users, this isn't an issue, but for pro users with lots of custom scripts or multiple versions of tools, it can be helpful. One reason mentioned by the developers is avoiding infinite loops or weird behavior . For example, if gemini itself is in $PATH , an AI gone awry might recursively call gemini from within Gemini (a strange scenario, but theoretically possible). Or perhaps you have a command named test that conflicts with something - the AI might call the wrong one.

How to set PATH for Gemini: Easiest is inline on launch:

PATH=/usr/bin:/usr/local/bin gemini

This runs Gemini CLI with a restricted $PATH of just those directories. You might exclude directories where experimental or dangerous scripts lie. Alternatively, create a small shell script wrapper that purges or adjusts $PATH then exec's gemini .
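For example, here's a minimal sketch of such a wrapper script (save it somewhere outside the restricted directories, e.g. as gemini-restricted , and mark it executable; the allow-listed directories are just an illustration):

#!/usr/bin/env sh
# Resolve the gemini binary first - it may live outside the restricted PATH.
GEMINI_BIN="$(command -v gemini)" || { echo "gemini not found" >&2; exit 1; }
# Example allow-list; adjust to the directories you consider known-good.
export PATH=/usr/bin:/bin
exec "$GEMINI_BIN" "$@"

Launching through a wrapper like this gives every session the same trimmed-down environment without touching your normal shell profile.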

Another approach is using environment or config to explicitly disable certain tools. For instance, if you absolutely never want the AI to use rm or some destructive tool, you could technically create an alias or dummy rm in a safe $PATH that does nothing (though this could interfere with normal operations, so maybe not that one). A better method is the exclude list in settings. In an extension or settings.json , you can exclude tool names . E.g.,

"excludeTools": ["run_shell_command"]

This extreme example would stop all shell commands from running (making Gemini effectively read-only). More granular, there was mention of skipping confirmation for some; similarly you might configure something like:

"tools": {
  "exclude": ["apt-get", "shutdown"]
}

(This syntax is illustrative; consult docs for exact usage.)

The principle is, by controlling the environment, you reduce risk of the AI doing something dumb with a tool it shouldn't. It's akin to child-proofing the house.

Prevent infinite loops: One user scenario was a loop where Gemini kept reading its own output or re-reading files repeatedly . Custom $PATH can't directly fix logic loops, but one cause could be if the AI calls a command that triggers itself. Ensuring it can't accidentally spawn another AI instance (like calling bard or gemini command, if it thought to do so) is good. Removing those from $PATH (or renaming them for that session) helps.

Isolation via sandbox: Another alternative to messing with $PATH is using --sandbox mode (which uses Docker or Podman to run tools in an isolated environment ). In that case, the AI's actions are contained and have only the tools that sandbox image provides. You could supply a Docker image with a curated set of tools. This is heavy-handed but very safe.

Custom PATH for specific tasks: You might have different $PATH setups for different projects. For example, in one project you want it to use a specific version of Node or a local toolchain. Launching gemini with the $PATH that points to those versions will ensure the AI uses the right one. Essentially, treat Gemini CLI like any user - it uses whatever environment you give it. So if you need it to pick gcc-10 vs gcc-12 , adjust $PATH or CC env var accordingly.

In summary: Guard rails. As a power user, you have the ability to fine-tune the operating conditions of the AI. If you ever find a pattern of undesirable behavior tied to tool usage, tweaking $PATH is a quick remedy. For everyday use, you likely won't need this, but it's a pro tip to keep in mind if you integrate Gemini CLI into automation or CI: give it a controlled environment. That way, you know exactly what it can and cannot do, which increases reliability.


Tip 20: Track and Reduce Token Spend with Token Caching and Stats

Quick use-case: If you run long chats or repeatedly attach the same big files, you can cut cost and latency by taking advantage of token caching and monitoring your usage. With an API key or Vertex AI auth, Gemini CLI automatically reuses previously sent system instructions and context, so follow-up requests are cheaper. You can see the savings live in the CLI.

How to use it

Use an auth mode that enables caching. Token caching is available when you authenticate with a Gemini API key or Vertex AI; it is not available with OAuth login today.

Inspect your usage and cache hits. Run the /stats command during a session. It shows total tokens and a cached field when caching is active.

The command's description and cached-token reporting behavior are documented in the commands reference and FAQ.

Capture metrics in scripts. When running headless, output JSON and parse the stats block, which includes tokens.cached for each model:

gemini -p "Summarize README" --output-format json

The headless guide documents the JSON schema with cached token counts.
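For a quick look without writing a parser, you can pipe the JSON through jq - a rough sketch (the exact field layout may differ between CLI versions, so check the schema in the headless guide):

# print just the stats block, including cached token counts, from a headless run
gemini -p "Summarize README" --output-format json | jq '.stats'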

Save a session summary to file: For CI or budget tracking, write a JSON session summary to disk.

gemini -p "Analyze logs" --session-summary usage.json

This flag is listed in the changelog.

With API key or Vertex auth, the CLI automatically reuses previously sent context so later turns send fewer tokens. Keeping GEMINI.md and large file references stable across turns increases cache hits; you'll see that reflected in stats as cached tokens.

Tip 21: Use /copy for Quick Clipboard Copy

Quick use-case: Instantly copy the latest answer or code snippet from Gemini CLI to your system clipboard, without any extraneous formatting or line numbers . This is perfect for quickly pasting AI-generated code into your editor or sharing a result with a teammate.

When Gemini CLI provides an answer (especially a multi-line code block), you often want to reuse it elsewhere. The /copy slash command makes this effortless by copying the last output produced by the CLI directly to your clipboard. Unlike manual selection (which can grab line numbers or prompt text), /copy grabs only the raw response content. For example, if Gemini just generated a 50-line Python script, simply typing /copy will put that entire script into your clipboard, ready to paste - no need to scroll and select text. Under the hood, Gemini CLI uses the appropriate clipboard utility for your platform (e.g., pbcopy on macOS, clip on Windows). Once you run the command, you'll typically see a confirmation message, and then you can paste the copied text wherever you need it.

How it works: The /copy command requires that your system has a clipboard tool available . On macOS and Windows, the required tools ( pbcopy and clip respectively) are usually pre-installed. On Linux, you may need to install xclip or xsel for /copy to function . After ensuring that, you can use /copy anytime after Gemini CLI prints an answer. It will capture the entire last response (even if it's long) and omit any internal numbering or formatting the CLI may show on-screen. This saves you from dealing with unwanted artifacts when transferring the content. It's a small feature, but a huge time-saver when you're iterating on code or compiling a report generated by the AI.

Pro Tip: If you find the /copy command isn't working, double-check that your clipboard utilities are installed and accessible. For instance, Ubuntu users should run sudo apt install xclip to enable clipboard copying . Once set up, /copy lets you share Gemini's outputs with zero friction - copy, paste, and you're done.

Tip 22: Master Ctrl+C for Shell Mode and Exiting

Quick use-case: Cleanly interrupt Gemini CLI or exit shell mode with a single keypress - and quit the CLI entirely with a quick double-tap - thanks to the versatile Ctrl+C shortcut . This gives you immediate control when you need to stop or exit.

Gemini CLI operates like a REPL, and knowing how to break out of operations is essential. Pressing Ctrl+C once will cancel the current action or clear any input you've started typing, essentially acting as an "abort" command. For example, if the AI is generating a lengthy answer and you've seen enough, hit Ctrl+C - the generation stops immediately. If you had started typing a prompt but want to discard it, Ctrl+C will wipe the input line so you can start fresh. Additionally, if you are in shell mode (activated by typing ! to run shell commands), a single Ctrl+C will exit shell mode and return you to the normal Gemini prompt (it sends an interrupt to the running shell process). This is extremely handy if a shell command is hanging or you simply want to get back to AI mode.

Pressing Ctrl+C twice in a row is the shortcut to exit Gemini CLI entirely . Think of it as " Ctrl+C to cancel, and Ctrl+C again to quit." This double-tap signals the CLI to terminate the session (you'll see a goodbye message or the program will close). It's a faster alternative to typing /quit or closing the terminal window, allowing you to gracefully shut down the CLI from the keyboard. Do note that a single Ctrl+C will not quit if there's input to clear or an operation to interrupt - it requires that second press (when the prompt is idle) to fully exit . This design prevents accidentally closing the session when you only meant to stop the current output.

Pro Tip: In shell mode, you can also press the Esc key to leave shell mode and return to Gemini's chat mode without terminating the CLI . And if you prefer a more formal exit, the /quit command is always available to cleanly end the session. Lastly, Unix users can use Ctrl+D (EOF) at an empty prompt to exit as well - Gemini CLI will prompt for confirmation if needed . But for most cases, mastering the single- and double-tap of Ctrl+C is the quickest way to stay in control.

Tip 23: Customize Gemini CLI with settings.json

Quick use-case: Adapt the CLI's behavior and appearance to your preferences or project conventions by editing the settings.json config file, instead of sticking with one-size-fits-all defaults . This lets you enforce things like theme, tool usage rules, or editor mode across all your sessions.

Gemini CLI is highly configurable. In your home directory ( ~/.gemini/ ) or project folder ( .gemini/ within your repo), you can create a settings.json file to override default settings. Nearly every aspect of the CLI can be tuned here - from visual theme to tool permissions. The CLI merges settings from multiple levels: system-wide defaults, your user settings, and project-specific settings (project settings override user settings). For example, you might have a global preference for a dark theme, but a particular project might require stricter tool sandboxing; you can handle this via different settings.json files at each level.

Inside settings.json , options are specified as JSON key-value pairs. Here's a snippet illustrating some useful customizations:

{
  "theme": "GitHub",
  "autoAccept": false,
  "vimMode": true,
  "sandbox": "docker",
  "includeDirectories": ["../shared-library", "~/common-utils"],
  "usageStatisticsEnabled": true
}

In this example, we set the theme to "GitHub" (a popular color scheme), disable autoAccept (so the CLI will always ask before running potentially altering tools), enable Vim keybindings for the input editor, and enforce using Docker for tool sandboxing. We also added some directories to the workspace context ( includeDirectories ) so Gemini can see code in shared paths by default. Finally, we kept usageStatisticsEnabled true to collect basic usage stats (which feeds into telemetry, if enabled). There are many more settings available - like defining custom color themes, adjusting token limits, or whitelisting/blacklisting specific tools - all documented in the configuration guide. By tailoring these, you ensure Gemini CLI behaves optimally for your workflow (for instance, some developers always want vimMode on for efficiency, while others might prefer the default editor).

One convenient way to edit settings is via the built-in settings UI. Run the command /settings in Gemini CLI, and it will open an interactive editor for your configuration . This interface lets you browse and search settings with descriptions, and prevents JSON syntax errors by validating inputs. You can tweak colors, toggle features like yolo (auto-approval), adjust checkpointing (file save/restore behavior), and more through a friendly menu . Changes are saved to your settings.json , and some take effect immediately (others might require restarting the CLI).

Pro Tip: Maintain separate project-specific settings.json files for different needs. For example, on a team project you might set "sandbox": "docker" and "excludeTools": ["run_shell_command"] to lock down dangerous operations, while your personal projects might allow direct shell commands. Gemini CLI will automatically pick up the nearest .gemini/settings.json in your project directory tree and merge it with your global ~/.gemini/settings.json . Also, don't forget you can quickly adjust visual preferences: try /theme to interactively switch themes without editing the file, which is great for finding a comfortable look . Once you find one, put it in settings.json to make it permanent.

Tip 24: Leverage IDE Integration (VS Code) for Context & Diffs

Quick use-case: Supercharge Gemini CLI by hooking it into VS Code - the CLI will automatically know which files you're working on and even open AI-proposed code changes in VS Code's diff editor for you . This creates a seamless loop between AI assistant and your coding workspace.

One of Gemini CLI's powerful features is its IDE integration with Visual Studio Code. By installing the official Gemini CLI Companion extension in VS Code and connecting it, you allow Gemini CLI to become "context-aware" of your editor . What does this mean in practice? When connected, Gemini knows about the files you have open, your current cursor location, and any text you've selected in VS Code . All that information is fed into the AI's context. So if you ask, "Explain this function," Gemini CLI can see the exact function you've highlighted and give a relevant answer, without you needing to copy-paste code into the prompt. The integration shares up to your 10 most recently opened files, plus selection and cursor info, giving the model a rich understanding of your workspace .

Another huge benefit is native diffing of code changes. When Gemini CLI suggests modifications to your code (for example, "refactor this function" and it produces a patch), it can open those changes in VS Code's diff viewer automatically . You'll see a side-by-side diff in VS Code showing the proposed edits. You can then use VS Code's familiar interface to review the changes, make any manual tweaks, and even accept the patch with a click. The CLI and editor stay in sync - if you accept the diff in VS Code, Gemini CLI knows and continues the session with those changes applied. This tight loop means you no longer have to copy code from the terminal to your editor; the AI's suggestions flow straight into your development environment.

How to set it up: If you start Gemini CLI inside VS Code's integrated terminal, it will detect VS Code and usually prompt you to install/connect the extension automatically . You can agree and it will run the necessary /ide install step. If you don't see a prompt (or you're enabling it later), simply open Gemini CLI and run the command: /ide install . This will fetch and install the "Gemini CLI Companion" extension into VS Code for you . Next, run /ide enable to establish the connection - the CLI will then indicate it's linked to VS Code. You can verify at any time with /ide status , which will show if it's connected and list which editor and files are being tracked . From then on, Gemini CLI will automatically receive context from VS Code (open files, selections) and will open diffs in VS Code when needed. It essentially turns Gemini CLI into an AI pair programmer that lives in your terminal but operates with full awareness of your IDE.
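Put together, the whole setup boils down to a few slash commands inside the CLI (the same ones described above):

/ide install   # fetch and install the Gemini CLI Companion extension into VS Code
/ide enable    # connect the current CLI session to the editor
/ide status    # confirm the connection and see what context is being shared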

Currently, VS Code is the primary supported editor for this integration . (Other editors that support VS Code extensions, like VSCodium or some JetBrains via a plugin, may work via the same extension, but officially it's VS Code for now.) The design is open though - there's an IDE Companion Spec for developing similar integrations with other editors . So down the road we might see first-class support for IDEs like IntelliJ or Vim via community extensions.

Pro Tip: Once connected, you can use VS Code's Command Palette to control Gemini CLI without leaving the editor. For example, press Ctrl+Shift+P (Cmd+Shift+P on Mac) and try commands like "Gemini CLI: Run" (to launch a new CLI session in the terminal), "Gemini CLI: Accept Diff" (to approve and apply an open diff), or "Gemini CLI: Close Diff Editor" (to reject changes). These shortcuts can streamline your workflow even further. And remember, you don't always have to start the CLI manually - if you enable the integration, Gemini CLI essentially becomes an AI co-developer inside VS Code, watching context and ready to help as you work on code.

Tip 25: Automate Repo Tasks with Gemini CLI GitHub Action

Quick use-case: Put Gemini to work on GitHub - use the Gemini CLI GitHub Action to autonomously triage new issues and review pull requests in your repository, acting as an AI teammate that handles routine dev tasks .

Gemini CLI isn't just for interactive terminal sessions; it can also run in CI/CD pipelines via GitHub Actions. Google has provided a ready-made Gemini CLI GitHub Action (currently in beta) that integrates into your repo's workflows. This effectively deploys an AI agent into your project on GitHub. It runs in the background, triggered by repository events. For example, when someone opens a new issue, the Gemini Action can automatically analyze the issue description, apply relevant labels, and even prioritize it or suggest duplicates (this is the "intelligent issue triage" workflow). When a pull request is opened, the Action kicks in to provide an AI code review - it will comment on the PR with insights about code quality, potential bugs, or stylistic improvements. This gives maintainers immediate feedback on the PR before any human even looks at it. Perhaps the coolest feature is on-demand collaboration: team members can mention @gemini-cli in an issue or PR comment and give it an instruction, like " @gemini-cli please write unit tests for this". The Action will pick that up and Gemini CLI will attempt to fulfill the request (adding a commit with new tests, for instance). It's like having an AI assistant living in your repo, ready to do chores when asked.

Setting up the Gemini CLI GitHub Action is straightforward. First, ensure you have Gemini CLI version 0.1.18 or later installed locally (this ensures compatibility with the Action). Then, in Gemini CLI run the special command: /setup-github . This command generates the necessary workflow files in your repository (it will guide you through authentication if needed). Specifically, it adds YAML workflow files (for issue triage, PR review, etc.) under .github/workflows/ . You will need to add your Gemini API key to the repo's secrets (as GEMINI_API_KEY ) so the Action can use the Gemini API. Once that's done and the workflows are committed, the GitHub Action springs to life - from that point on, Gemini CLI will autonomously respond to new issues and PRs according to those workflows.
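If you use the GitHub CLI, one convenient way to add that secret is sketched below (assuming gh is installed and authenticated for the repository; you can equally add it through the repo's Settings → Secrets page):

# store your Gemini API key as a repository secret for the Action to use
gh secret set GEMINI_API_KEY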

Because this Action is essentially running Gemini CLI in an automated way, you can customize it just like you would your CLI. The default setup comes with three workflows (issue triage, PR review, and a general mention-triggered assistant) which are **fully open-source and editable** . You can tweak the YAML to adjust what the AI does, or even add new workflows. For instance, you might create a nightly workflow that uses Gemini CLI to scan your repository for outdated dependencies or to update a README based on recent code changes - the possibilities are endless. The key benefit here is offloading mundane or time-consuming tasks to an AI agent so that human developers can focus on harder problems. And since it runs on GitHub's infrastructure, it doesn't require your intervention - it's truly a "set and forget" AI helper.

Pro Tip: Keep an eye on the Action's output in the GitHub Actions logs for transparency. The Gemini CLI Action logs will show what prompts it ran and what changes it made or suggested. This can both build trust and help you refine its behavior. Also, the team has built enterprise-grade safeguards into the Action - e.g., you can require that all shell commands the AI tries to run in a workflow are allow-listed by you . So don't hesitate to use it even on serious projects. And if you come up with a cool custom workflow using Gemini CLI, consider contributing it back to the community - the project welcomes new ideas in their repo!

Tip 26: Enable Telemetry for Insights and Observability

Quick use-case: Gain deeper insight into how Gemini CLI is being used and performing by turning on its built-in OpenTelemetry instrumentation - monitor metrics, logs, and traces of your AI sessions to analyze usage patterns or troubleshoot issues .

For developers who like to measure and optimize, Gemini CLI offers an observability feature that exposes what's happening under the hood. By leveraging OpenTelemetry (OTEL) , Gemini CLI can emit structured telemetry data about your sessions . This includes things like metrics (e.g. how many tokens used, response latency), logs of actions taken, and even traces of tool calls. With telemetry enabled, you can answer questions like: Which custom command do I use most often? How many times did the AI edit files in this project this week? What's the average response time when I ask the CLI to run tests? Such data is invaluable for understanding usage patterns and performance . Teams can use it to see how developers are interacting with the AI assistant and where bottlenecks might be.

By default, telemetry is off (Gemini respects privacy and performance). You can opt-in by setting "telemetry.enabled": true in your settings.json or by starting Gemini CLI with the flag --telemetry . Additionally, you choose the target for the telemetry data: it can be logged locally or sent to a backend like Google Cloud. For a quick start, you might set "telemetry.target": "local" - with this, Gemini will simply write telemetry data to a local file (by default) or to a custom path you specify via the outfile setting. The local telemetry includes JSON logs you can parse or feed into tools. For more robust monitoring, set "target": "gcp" (Google Cloud) or even integrate with other OpenTelemetry-compatible systems like Jaeger or Datadog. In fact, Gemini CLI's OTEL support is vendor-neutral - you can export data to just about any observability stack you prefer (Google Cloud Operations, Prometheus, etc.). Google provides a streamlined path for Cloud: if you point to GCP, the CLI can send data directly to Cloud Logging and Cloud Monitoring in your project, where you can use the usual dashboards and alerting tools.
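As a minimal sketch, you can also switch telemetry on for a single run with just the flag mentioned above, keeping the target and outfile choices in settings.json as described:

# one-off headless run with telemetry enabled; target/outfile come from settings.json
gemini --telemetry -p "Run the unit tests and summarize any failures"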

What kind of insights can you get? The telemetry captures events like tool executions, errors, and important milestones. It also records metrics such as prompt processing time and token counts per prompt . For usage analytics, you might aggregate how many times each slash command is used across your team, or how often code generation is invoked. For performance monitoring, you could track if responses have gotten slower, which might indicate hitting API rate limits or model changes. And for debugging, you can see errors or exceptions thrown by tools (e.g., a run_shell_command failure) logged with context. All this data can be visualized if you send it to a platform like Google Cloud's Monitoring - for example, you can create a dashboard of "tokens used per day" or "error rate of tool X". It essentially gives you a window into the AI's "brain" and your usage, which is especially helpful in enterprise settings to ensure everything runs smoothly .

Enabling telemetry does introduce some overhead (extra data processing), so you might not keep it on 100% of the time for personal use. However, it's fantastic for debugging sessions or for intermittent health checks. One approach is to enable it on a CI server or in your team's shared environment to collect stats, while leaving it off locally unless needed. Remember, you can toggle it whenever you need to: update your settings (restarting the CLI if the change doesn't apply immediately), or launch a session with the --telemetry flag. Also, all telemetry is under your control - it respects your environment variables for endpoint and credentials, so data goes only where you intend it to. This feature turns Gemini CLI from a black box into an observatory, shining light on how the AI agent interacts with your world, so you can continuously improve that interaction.

Pro Tip: If you just want a quick view of your current session's stats (without full telemetry), use the /stats command. It will output metrics like token usage and session length right in the CLI . This is a lightweight way to see immediate numbers. But for long-term or multi-session analysis, telemetry is the way to go. And if you're sending telemetry to a cloud project, consider setting up dashboards or alerts (e.g., alert if error rate spikes or token usage hits a threshold) - this can proactively catch issues in how Gemini CLI is being used in your team.

Tip 27: Keep an Eye on the Roadmap (Background Agents & More)

Quick use-case: Stay informed about upcoming Gemini CLI features - by following the public Gemini CLI roadmap , you'll know about major planned enhancements (like background agents for long-running tasks ) before they arrive , allowing you to plan and give feedback.

Gemini CLI is evolving rapidly, with new releases coming out frequently, so it's wise to track what's on the horizon. Google maintains a public roadmap for Gemini CLI on GitHub, detailing the key focus areas and features targeted for the near future . This is essentially a living document (and set of issues) where you can see what the developers are working on and what's in the pipeline. For instance, one exciting item on the roadmap is support for background agents - the ability to spawn autonomous agents that run in the background to handle tasks continuously or asynchronously . According to the roadmap discussion, these background agents would let you delegate long-running processes to Gemini CLI without tying up your interactive session. You could, say, start a background agent that monitors your project for certain events or periodically executes tasks, either on your local machine or even by deploying to a service like Cloud Run . This feature aims to "enable long-running, autonomous tasks and proactive assistance" right from the CLI , essentially extending Gemini CLI's usefulness beyond just on-demand queries.

By keeping tabs on the roadmap, you'll also learn about other planned features. These could include new tool integrations, support for additional Gemini model versions, UI/UX improvements, and more. The roadmap is usually organized by "areas" (for example, Extensibility , Model , Background , etc.) and often tagged with milestones (like a target quarter for delivery). It's not a guarantee of when something will land, but it gives a good idea of the team's priorities. Since the project is open-source, you can even dive into the linked GitHub issues for each roadmap item to see design proposals and progress. For developers who rely on Gemini CLI, this transparency means you can anticipate changes - maybe an API is adding a feature you need, or a breaking change might be coming that you want to prepare for.

Following the roadmap can be as simple as bookmarking the GitHub project board or issue labeled "Roadmap" and checking periodically. Some major updates (like the introduction of Extensions or the IDE integration) were hinted at in the roadmap before they were officially announced, so you get a sneak peek. Additionally, the Gemini CLI team often encourages community feedback on those future features. If you have ideas or use cases for something like background agents, you can usually comment on the issue or discussion thread to influence its development.

Pro Tip: Since Gemini CLI is open source (Apache 2.0 licensed), you can do more than just watch the roadmap - you can participate! The maintainers welcome contributions, especially for items aligned with the roadmap . If there's a feature you really care about, consider contributing code or testing once it's in preview. At the very least, you can open a feature request if something you need isn't on the roadmap yet . The roadmap page itself provides guidance on how to propose changes. Engaging with the project not only keeps you in the loop but also lets you shape the tool that you use. After all, Gemini CLI is built with community involvement in mind, and many recent features (like certain extensions and tools) started as community suggestions.

Tip 28: Extend Gemini CLI with Extensions

Quick use-case: Add new capabilities to Gemini CLI by installing plug-and-play extensions - for example, integrate with your favorite database or cloud service - expanding the AI's toolset without any heavy lifting on your part . It's like installing apps for your CLI to teach it new tricks.

Extensions are a game-changer introduced in late 2025: they allow you to customize and expand Gemini CLI's functionality in a modular way . An extension is essentially a bundle of configurations (and optionally code) that connects Gemini CLI to an external tool or service. For instance, Google released a suite of extensions for Google Cloud - there's one that helps deploy apps to Cloud Run, one for managing BigQuery, one for analyzing application security, and more . Partners and community developers have built extensions for all sorts of things: Dynatrace (monitoring), Elastic (search analytics), Figma (design assets), Shopify, Snyk (security scans), Stripe (payments), and the list is growing . By installing an appropriate extension, you instantly grant Gemini CLI the ability to use new domain-specific tools. The beauty is that these extensions come with a pre-defined "playbook" that teaches the AI how to use the new tools effectively . That means once installed, you can ask Gemini CLI to perform tasks with those services and it will know the proper APIs or commands to invoke, as if it had that knowledge built-in.

Using extensions is very straightforward. The CLI has a command to manage them: gemini extensions install <URL> . Typically, you provide the URL of the extension's GitHub repo or a local path, and the CLI will fetch and install it . For example, to install an official extension, you might run: gemini extensions install https://github.com/google-gemini/gemini-cli-extension-cloud-run . Within seconds, the extension is added to your environment (stored under ~/.gemini/extensions/ or your project's .gemini/extensions/ folder). You can then see it by running /extensions in the CLI, which lists active extensions . From that point on, the AI has new tools at its disposal. If it's a Cloud Run extension, you could say "Deploy my app to Cloud Run," and Gemini CLI will actually be able to execute that (by calling the underlying gcloud commands through the extension's tools). Essentially, extensions function as first-class expansions of Gemini CLI's capabilities, but you opt-in to the ones you need.

There's an open ecosystem around extensions. Google has an official Extensions page listing available extensions , and because the framework is open, anyone can create and share their own. If you have a particular internal API or workflow, you can build an extension for it so that Gemini CLI can assist with it. Writing an extension is easier than it sounds: you typically create a directory (say, my-extension/ ) with a file gemini-extension.json describing what tools or context to add . You might define new slash commands or specify remote APIs the AI can call. No need to modify Gemini CLI's core - just drop in your extension. The CLI is designed to load these at runtime. Many extensions consist of adding custom MCP tools (Model Context Protocol servers or functions) that the AI can use. For example, an extension could add a /translate command by hooking into an external translation API; once installed, the AI knows how to use /translate . The key benefit is modularity : you install only the extensions you want, keeping the CLI lightweight, but you have the option to integrate virtually anything.

To manage extensions, besides the install command, you can update or remove them via similar CLI commands ( gemini extensions update or just by removing the folder). It's wise to occasionally check for updates on extensions you use, as they may receive improvements. The CLI might introduce an "extensions marketplace" style interface in the future, but for now, exploring the GitHub repositories and official catalog is the way to discover new ones. Some popular ones at launch include the GenAI Genkit extension (for building generative AI apps), and a variety of Google Cloud extensions that cover CI/CD, database admin, and more.
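As a quick recap, the lifecycle looks roughly like this - the Cloud Run URL is the example from above, the folder name depends on the extension, and exact command arguments may vary, so consult the extensions guide:

# install an extension from its GitHub repo
gemini extensions install https://github.com/google-gemini/gemini-cli-extension-cloud-run
# pull in upstream improvements later
gemini extensions update
# removal can be as simple as deleting the extension's folder
rm -rf ~/.gemini/extensions/<extension-name>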

Pro Tip: If you're building your own extension, start by looking at existing ones for examples. The official documentation provides an Extensions Guide with the schema and capabilities . A simple way to create a private extension is to use the @include functionality in GEMINI.md to inject scripts or context, but a full extension gives you more power (like packaging tools). Also, since extensions can include context files, you can use them to preload domain knowledge. Imagine an extension for your company's internal API that includes a summary of the API and a tool to call it - the AI would then know how to handle requests related to that API. In short, extensions open up a new world where Gemini CLI can interface with anything. Keep an eye on the extensions marketplace for new additions, and don't hesitate to share any useful extension you create with the community - you might just help thousands of other developers .

Additional Fun: Corgi Mode Easter Egg 🐕

Lastly, not a productivity tip but a delightful easter egg - try the command /corgi in Gemini CLI. This toggles "corgi mode", which makes a cute corgi animation run across your terminal! It doesn't help you code any better, but it can certainly lighten the mood during a long coding session. You'll see an ASCII art corgi dashing in the CLI interface. To turn it off, just run /corgi again.

This is a purely for-fun feature the team added (and yes, there's even a tongue-in-cheek debate about spending dev time on corgi mode). It shows that the creators hide some whimsy in the tool. So when you need a quick break or a smile, give /corgi a try. 🐕🎉

(Rumor has it there might be other easter eggs or modes - who knows? Perhaps a "/partyparrot" or similar. The cheat sheet or help command lists /corgi , so it's not a secret, just underused. Now you're in on the joke!)


Conclusion:

We've covered a comprehensive list of pro tips and features for Gemini CLI. From setting up persistent context with GEMINI.md , to writing custom commands and using advanced tools like MCP servers, to leveraging multi-modal inputs and automating workflows, there's a lot this AI command-line assistant can do. As an external developer, you can integrate Gemini CLI into your daily routine - it's like a powerful ally in your terminal that can handle tedious tasks, provide insights, and even troubleshoot your environment.

Gemini CLI is evolving rapidly (being open-source with community contributions), so new features and improvements are constantly on the horizon. By mastering the pro tips in this guide, you'll be well-positioned to harness the full potential of this tool. It's not just about using an AI model - it's about integrating AI deeply into how you develop and manage software.

Happy coding with Gemini CLI, and have fun exploring just how far your "AI agent in the terminal" can take you.

You now have a Swiss-army knife of AI at your fingertips - use it wisely, and it will make you a more productive (and perhaps happier) developer !

A Lone Astronomer Has Reported a Dark Matter ‘Annihilation’ Breakthrough

404 Media
www.404media.co
2025-11-26 17:50:20
“It was like playing the lottery,” said astronomer Tomonori Totani, adding that he hopes other scientists will verify the possible detection of a new dark matter signature....
Original Article


An astronomer has reported a possible new signature of dark matter, a mysterious substance that makes up most of the universe, according to a study published on Tuesday in the Journal of Cosmology and Astroparticle Physics .

Dark matter accounts for 85 percent of all matter in the universe, but its existence has so far been inferred only from its indirect effects on the familiar “baryonic” matter that makes up stars, planets, and life.

Tomonori Totani, a professor of astronomy at the University of Tokyo and the author of the study, believes he has spotted novel indirect traces of dark matter particles in the “halo” surrounding the center of our galaxy using new observations from NASA’s Fermi Gamma-ray Space Telescope. When these speculative particles collide—a process called dark matter annihilation—the crash is predicted to emit bright gamma rays, which is the light that Totani thinks he has identified.

“The discovery was made possible by focusing on the halo region (excluding the galactic center), which had received little attention, and by utilizing data accumulated over 15 years from the Fermi satellite,” Totani told 404 Media in an email. “After carefully removing all components other than dark matter, a signal resembling dark matter appeared.”

“It was like playing the lottery, and at first I was skeptical,” he added. “But after checking meticulously and thinking it seemed correct, I got goosebumps!”

If the detection is corroborated by follow-up studies, it could confirm a leading hypothesis that dark matter is made of a hypothetical class of weakly interacting massive particles, or “WIMPs”—potentially exposing the identity of this mysterious substance for the first time. But that potential breakthrough is still a ways off, according to other researchers in the field.

“Any new structure in the gamma-ray sky is interesting, but the dark matter interpretation here strikes me as quite preliminary,” said Danielle Norcini, an experimental particle physicist and assistant professor at Johns Hopkins University, in an email to 404 Media.

Gamma-ray intensity map excluding components other than the halo, spanning approximately 100 degrees in the direction of the Galactic center. The horizontal gray bar in the central region corresponds to the Galactic plane area, which was excluded from the analysis to avoid strong astrophysical radiation. Image: Tomonori Totani, The University of Tokyo

Dark matter has flummoxed scientists for almost a century. In the 1930s, astronomer Fritz Zwicky observed that the motions of galaxies hinted that they are much more massive than expected based solely on visible baryonic matter. Since then, astronomers have confirmed that dark matter, which accumulates into dense halos at the centers of galaxies, acts like a gravitational glue that holds structures together. Dark matter is also the basis of a vast cosmic web of gaseous threads that links galaxy clusters across billions of light years.

But while dark matter is ubiquitous, it does not interact with the electromagnetic force, which means it does not absorb, reflect, or emit light. This property makes it difficult to spot with traditional astronomy, a challenge that has inspired the development of novel instruments designed to directly detect dark matter such as the subterranean LUX-ZEPLIN in South Dakota and the forthcoming DAMIC-M in France.

For years, scientists have been probing possible emission from dark matter annihilation at the center of the Milky Way, which is surrounded by a halo of densely-clustered dark matter. Those previous studies focused on an excess emission pattern of about 2 gigaelectronvolts (GeV). Totani's study spotlights a new and different pattern with extremely energetic gamma rays at 20 GeV.

“A part of the Fermi data showed a peculiar excess that our model couldn't explain, leading me to suspect it might be due to radiation originating from dark matter,” he said. “The most difficult part is removing gamma-ray emissions of origins other than dark matter, such as those from cosmic rays and celestial objects.”

This tentative report may finally fill in a major missing piece of our understanding of the universe by exposing the true nature of dark matter and confirming the existence of WIMPs. But given that similar claims have been made in the past, more research is needed to assess the significance of the results.

“For any potential indirect signal, the key next steps are independent checks: analyses using different background models, different assumptions about the Milky Way halo, and ideally complementary data sets,” Norcini said.

“Gamma-ray structures in the halo can have many astrophysical origins, so ruling those out requires careful modeling and cross-comparison,” she continued. “At this point the result seems too new for that scrutiny to have played out, and it will take multiple groups looking at the same data before a dark matter interpretation could be considered robust.”

Though Totani is confident in his interpretation of his discovery, he also looks forward to the input of other dark matter researchers around the world.

“First, I would like other researchers to independently verify my analysis,” he said. “Next, for everyone to be convinced that this is truly dark matter, the decisive factor will be the detection of gamma rays with the same spectrum from other regions, such as dwarf galaxies. The accumulation of further data from the Fermi satellite and large ground-based gamma-ray telescopes, such as the Cherenkov Telescope Array Observatory (CTAO) will be crucial.”


Scaleway turns Mac minis into high‑density, Raspberry Pi–managed servers

Hacker News
www.scaleway.com
2025-11-26 17:40:16
Comments...
Original Article

Take a behind-the-scenes look at how Scaleway brought the Mac mini as-a-Service to life — transforming Apple’s compact desktop into a highly available cloud server hosted in state-of-the-art datacenters.

From Consumer Machine to Cloud Server: A Fully Controlled Pipeline

Apple designs the Mac mini. inmac wstore supplies it. Scaleway transforms it into a ready-to-use dedicated server , accessible remotely from anywhere in the world.

Scaleway’s mission is clear: to provide iOS and macOS developers, macOS software users, and businesses of all sizes with remote access to the power of Apple silicon (M-series) chips — all within a controlled, secure, and high-performance environment.

Each Mac mini is managed automatically. Once installed in the racks, Scaleway’s teams add a custom Mobile Device Management (MDM) profile to deploy system settings remotely, along with a set of server-specific tools that compensate for the lack of a Baseboard Management Controller (BMC). This enables granular management of each machine.

Thanks to this process, we at Scaleway can deliver a consumer-grade Mac mini as a fully reliable dedicated server, seamlessly integrated into our cloud ecosystem — ready to meet even the most demanding production needs.

A Datacenter Designed for Efficiency and Resilience

All Scaleway Mac minis are hosted exclusively in French datacenters, ensuring sovereign hosting that meets the highest standards for security, privacy, and data locality in Europe.

At the heart of this infrastructure lies Opcore DC2, Scaleway’s strategic datacenter located in Vitry-sur-Seine, where hundreds of Mac minis run side by side with traditional bare-metal servers — all within a resilient, high-performance network architecture monitored in real time.

Scaleway’s datacenter design reflects its commitment to performance and reliability:

  • Power & Redundancy : 3N electrical system with automatic failover, three backup generators, and a total power capacity of up to 8,000 kW.
  • Precision Cooling : Cold corridors with underfloor air distribution optimize temperature and prevent hot spots — minimizing energy use.
  • Advanced Security : 24/7 monitoring, biometric access controls, and a water-vapor fire suppression system that protects equipment without damage.

A Custom-Built Rack for Mac minis

The Mac mini wasn’t originally designed for datacenter environments: there’s no BMC (Baseboard Management Controller), no native remote firmware access, and no standard rackmount format.

To overcome this, Scaleway engineered a custom chassis where each Mac mini is placed in an individual sliding tray. This allows any unit to be removed for maintenance without disrupting the others — ensuring maximum density and ease of access. Ethernet cabling is carefully organized to guarantee fast, stable network connections.

Each rack can hold up to 96 Mac minis , an impressive density compared to traditional servers. This is made possible by two key factors:

  • The compact size of the Mac mini, which packs a powerful System on a Chip (SoC) into a tiny footprint.
  • The energy efficiency of Apple silicon (M-series) chips, which allows high density without overheating or excessive power draw.

As a result, Scaleway’s Mac mini racks are among the most energy-efficient server setups in the cloud industry.

However, the absence of a BMC posed a major challenge: how to perform critical remote operations without physical access?

Scaleway’s solution to that problem was ingenious: embedding a Raspberry Pi module with each Mac mini.

Each Raspberry Pi acts as a control layer, sending commands such as reboot or remote reinstall to the Mac mini. This makes the machines virtually autonomous throughout their cloud lifecycle, while remaining fully compliant with Apple’s hardware requirements.

The Future of Mac Minis in the Scaleway Cloud

Scaleway plans to keep expanding its Mac mini fleet as cloud-native development evolves . Future versions of macOS, the rise of AI workloads, and the growing need for macOS environments in cross-platform development are all driving demand.

With Mac mini as-a-Service, Scaleway delivers a powerful, flexible solution designed for developers, tech companies, and demanding freelancers alike.

Access the power of a Mac as if it were on your desk — without the hardware constraints.



s&box now open source

Lobsters
github.com
2025-11-26 17:28:47
Comments...
Original Article

s&box

s&box is a modern game engine built on Valve's Source 2 and the latest .NET technology. It provides a modern, intuitive editor for creating games.

s&box editor

If your goal is to create games using s&box, please start with the getting started guide . This repository is for building the engine from source for those who want to contribute to the development of the engine.

Getting the Engine

Steam

You can download and install the s&box editor directly from Steam .

Compiling from Source

If you want to build from source, this repository includes all the necessary files to compile the engine yourself.

Prerequisites

Building

# Clone the repo
git clone https://github.com/Facepunch/sbox-public.git

Once you've cloned the repo, simply run Bootstrap.bat , which will download dependencies and build the engine.

The game and editor can be run from the binaries in the game folder.

Contributing

If you would like to contribute to the engine, please see the contributing guide .

If you want to report bugs or request new features, see sbox-issues .

Documentation

Full documentation, tutorials, and API references are available at sbox.game/dev/ .

License

The s&box engine source code is licensed under the MIT License .

Certain native binaries in game/bin are not covered by the MIT license. These binaries are distributed under the s&box EULA. You must agree to the terms of the EULA to use them.

This project includes third-party components that are separately licensed. Those components are not covered by the MIT license above and remain subject to their original licenses as indicated in game/thirdpartylegalnotices .

How stealth addresses work in Monero

Lobsters
www.johndcook.com
2025-11-26 17:28:00
Comments...
Original Article

Suppose Alice runs a confidential restaurant. Alice doesn’t want there to be any record of who visited her restaurant but does want to get paid for her food. She accepts Monero, and instead of a cash register there are two QR codes on display, one corresponding to her public view key A and the other corresponding to her public spend key S .

How Bob buys his burger

A customer Bob walks into the restaurant and orders a burger and fries. When Bob pays Alice, here’s what’s going on under the hood.

Bob is using software that generates a random integer r and multiplies it by a point G on an elliptic curve, specifically ed25519, obtaining the point

R = rG

on the curve. The software also multiplies Alice's view key A , a point on the same elliptic curve, by r , then runs a hash function H on the product rA , which returns an integer k .

k = H ( rA ).

Finally, Bob’s software computes the point

P = k G + S

and sends Alice’s cash register, i.e. her crypto wallet, the pair of points ( P , R ). The point P is a stealth address , an address that will only be used this one time and cannot be linked to Alice or Bob [1]. The point R is additional information that helps Alice receive her money.

How Alice gets paid

Alice and Bob share a secret: both know k . How’s that?

Alice’s public view key A is the product of her private view key a and the group generator G [2]. So when Bob computes rA , he’s computing r ( aG ). Alice’s software can multiply the point R by a to obtain a ( rG ).

rA = r ( aG ) = a ( rG ) = aR.

Both Alice and Bob can hash this point—which Alice thinks of as aR and Bob thinks of as rA —to obtain k . This is ECDH : elliptic curve Diffie-Hellman key exchange.

Next, Alice’s software scans the blockchain for payments to

P = k G + S.

Note that P is on the blockchain, but only Alice and Bob know how to factor P into kG + S because only Alice and Bob know k . And only Alice can spend the money because only she knows the private key s corresponding to the public spend key S where

S = sG.

She knows

P = kG + sG = ( k + s ) G

and so she has the private key ( k + s ) corresponding to P .


[1] Bob sends money to the address P , so there could be some connection between Bob and P on the Monero blockchain. However, due to another feature of Monero, namely ring signatures, someone analyzing the blockchain could only determine that Bob is one of 16 people who may have sent money to the address P , and there's no way to know who received the money. That is, there is no way, using only information on the blockchain, to determine who received the money. A private investigator who saw Bob walk into Alice's restaurant would have additional information outside the blockchain.

[2] The key assumption of elliptic curve cryptography is that it's computationally infeasible to “divide” on an elliptic curve, i.e. to recover a from knowledge of G and aG . You could recover a by brute force if the group were small, but the elliptic curve ed25519 has on the order of 2^255 points, and a is some integer chosen randomly between 1 and the size of the curve.

Multiple London councils' IT systems disrupted by cyberattack

Bleeping Computer
www.bleepingcomputer.com
2025-11-26 17:26:11
The Royal Borough of Kensington and Chelsea (RBKC) and the Westminster City Council (WCC) announced that they are experiencing service disruptions following a cybersecurity issue. [...]...
Original Article

Multiple London councils' IT systems disrupted by cyberattack

The Royal Borough of Kensington and Chelsea (RBKC) and the Westminster City Council (WCC) announced that they are experiencing service disruptions following a cybersecurity issue.

Multiple systems have been impacted by the attack, including phone lines, which prompted the two councils to activate emergency plans to make sure that residents still receive critical services.

The two authorities have been impacted at the same time because they share some IT infrastructure as part of joint arrangements.


A third council, the London Borough of Hammersmith and Fulham (LBHF), also shares some services with RBKC and WCC and decided to take "enhanced measures to isolate and safeguard our networks," which led to business disruptions.

Westminster City Council is a major local authority in the U.K., with important landmarks in the area, like the Palace of Westminster (Houses of Parliament), Buckingham Palace, 10 Downing Street, national institutions, important shopping streets, and significant tourist hotspots.

The councils, which provide services for 360,000 residents, shut down several computerised systems as a precaution to limit further possible damage.

RBKC is one of the smallest boroughs in London (in terms of size and population) but also the wealthiest (in terms of GDP per capita) in the UK, while LBHF is a mid-sized but still significant council serving 180,000 residents.

In an announcement yesterday, the RBKC said that it had an issue that prevented residents from contacting the council through online services or the contact center.


The council later published a statement saying that it was "responding to a cyber security issue" that occurred on Monday and also affected Westminster City Council.

The local authority stated that investigations into the perpetrators and their motives are ongoing and that it will publish updates as soon as more information becomes available.

"[...] the two authorities have been working closely together and with the help of specialist cyber incident experts and the National Cyber Security Centre, with the focus on protecting systems and data, restoring systems, and maintaining critical services to the public."


"We don’t have all the answers yet, as the management of this incident is still ongoing," RBKC says , adding that “we know people will have concerns, so we will be updating residents and partners further over the coming days.”

“At this stage, it is too early to say who did this and why, but we are investigating to see if any data has been compromised.”

The council states that it has already informed the UK Information Commissioner’s Office (ICO), in accordance with established protocols.

The other two councils, WCC and LBHF , have published short statements about the disruption via banners on their websites, listing alternative phone numbers people can use right now to contact them.

BleepingComputer has contacted RBKC to ask more details about the shared IT system, but a spokesperson declined to disclose any additional information at this time.

Security expert Kevin Beaumont said that the incident was a ransomware attack at a service provider used by the three councils.

At the time of writing, no ransomware group has publicly claimed the attack.


Meet Rey, the Admin of ‘Scattered Lapsus$ Hunters’

Krebs
krebsonsecurity.com
2025-11-26 17:22:36
A prolific cybercriminal group that calls itself "Scattered LAPSUS$ Hunters" made headlines regularly this year by stealing data from and publicly mass extorting dozens of major corporations. But the tables seem to have turned somewhat for "Rey," the moniker chosen by the technical operator and publ...
Original Article

A prolific cybercriminal group that calls itself “ Scattered LAPSUS$ Hunters ” has dominated headlines this year by regularly stealing data from and publicly mass extorting dozens of major corporations. But the tables seem to have turned somewhat for “Rey,” the moniker chosen by the technical operator and public face of the hacker group: Earlier this week, Rey confirmed his real life identity and agreed to an interview after KrebsOnSecurity tracked him down and contacted his father.

Scattered LAPSUS$ Hunters (SLSH) is thought to be an amalgamation of three hacking groups — Scattered Spider , LAPSUS$ and ShinyHunters . Members of these gangs hail from many of the same chat channels on the Com , a mostly English-language cybercriminal community that operates across an ocean of Telegram and Discord servers.

In May 2025, SLSH members launched a social engineering campaign that used voice phishing to trick targets into connecting a malicious app to their organization’s Salesforce portal. The group later launched a data leak portal that threatened to publish the internal data of three dozen companies that allegedly had Salesforce data stolen, including Toyota , FedEx , Disney/Hulu , and UPS .

The new extortion website tied to ShinyHunters, which threatens to publish stolen data unless Salesforce or individual victim companies agree to pay a ransom.

Last week, the SLSH Telegram channel featured an offer to recruit and reward “insiders,” employees at large companies who agree to share internal access to their employer’s network for a share of whatever ransom payment is ultimately paid by the victim company.

SLSH has solicited insider access previously, but their latest call for disgruntled employees started making the rounds on social media at the same time news broke that the cybersecurity firm Crowdstrike had fired an employee for allegedly sharing screenshots of internal systems with the hacker group (Crowdstrike said their systems were never compromised and that it has turned the matter over to law enforcement agencies).

The Telegram server for the Scattered LAPSUS$ Hunters has been attempting to recruit insiders at large companies.

Members of SLSH have traditionally used other ransomware gangs’ encryptors in attacks, including malware from ransomware affiliate programs like ALPHV/BlackCat, Qilin, RansomHub, and DragonForce. But last week, SLSH announced on its Telegram channel the release of their own ransomware-as-a-service operation called ShinySp1d3r .

The individual responsible for releasing the ShinySp1d3r ransomware offering is a core SLSH member who goes by the handle “Rey” and who is currently one of just three administrators of the SLSH Telegram channel. Previously, Rey was an administrator of the data leak website for Hellcat , a ransomware group that surfaced in late 2024 and was involved in attacks on companies including Schneider Electric , Telefonica , and Orange Romania .

A recent, slightly redacted screenshot of the Scattered LAPSUS$ Hunters Telegram channel description, showing Rey as one of three administrators.

Also in 2024, Rey would take over as administrator of the most recent incarnation of BreachForums , an English-language cybercrime forum whose domain names have been seized on multiple occasions by the FBI and/or by international authorities. In April 2025, Rey posted on Twitter/X about another FBI seizure of BreachForums.

On October 5, 2025, the FBI announced it had once again seized the domains associated with BreachForums, which it described as a major criminal marketplace used by ShinyHunters and others to traffic in stolen data and facilitate extortion.

“This takedown removes access to a key hub used by these actors to monetize intrusions, recruit collaborators, and target victims across multiple sectors,” the FBI said.

Incredibly, Rey would make a series of critical operational security mistakes last year that provided multiple avenues to ascertain and confirm his real-life identity and location. Read on to learn how it all unraveled for Rey.

WHO IS REY?

According to the cyber intelligence firm Intel 471 , Rey was an active user on various BreachForums reincarnations over the past two years, authoring more than 200 posts between February 2024 and July 2025. Intel 471 says Rey previously used the handle “ Hikki-Chan ” on BreachForums, where their first post shared data allegedly stolen from the U.S. Centers for Disease Control and Prevention (CDC).

In that February 2024 post about the CDC, Hikki-Chan says they could be reached at the Telegram username @wristmug . In May 2024, @wristmug posted in a Telegram group chat called “Pantifan” a copy of an extortion email they said they received that included their email address and password.

The message that @wristmug cut and pasted appears to have been part of an automated email scam that claims it was sent by a hacker who has compromised your computer and used your webcam to record a video of you while you were watching porn. These missives threaten to release the video to all your contacts unless you pay a Bitcoin ransom, and they typically reference a real password the recipient has used previously.

“Noooooo,” the @wristmug account wrote in mock horror after posting a screenshot of the scam message. “I must be done guys.”

A message posted to Telegram by Rey/@wristmug.

In posting their screenshot, @wristmug redacted the username portion of the email address referenced in the body of the scam message. However, they did not redact their previously-used password, and they left the domain portion of their email address (@proton.me) visible in the screenshot.

O5TDEV

Searching on @wristmug’s rather unique 15-character password in the breach tracking service Spycloud finds it is known to have been used by just one email address: cybero5tdev@proton.me . According to Spycloud, those credentials were exposed at least twice in early 2024 when this user’s device was infected with an infostealer trojan that siphoned all of its stored usernames, passwords and authentication cookies.

Intel 471 shows the email address cybero5tdev@proton.me belonged to a BreachForums member who went by the username o5tdev . Searching on this nickname in Google brings up at least two website defacement archives showing that a user named o5tdev was previously involved in defacing sites with pro-Palestinian messages . The screenshot below, for example, shows that 05tdev was part of a group called Cyb3r Drag0nz Team .

Rey/o5tdev’s defacement pages. Image: archive.org.

A 2023 report from SentinelOne described Cyb3r Drag0nz Team as a hacktivist group with a history of launching DDoS attacks and cyber defacements as well as engaging in data leak activity.

“Cyb3r Drag0nz Team claims to have leaked data on over a million of Israeli citizens spread across multiple leaks,” SentinelOne reported . “To date, the group has released multiple .RAR archives of purported personal information on citizens across Israel.”

The cyber intelligence firm Flashpoint finds the Telegram user @05tdev was active in 2023 and early 2024, posting in Arabic on anti-Israel channels like “Ghost of Palestine” [full disclosure: Flashpoint is currently an advertiser on this blog].

‘I’M A GINTY’

Flashpoint shows that Rey’s Telegram account (ID7047194296) was particularly active in a cybercrime-focused channel called Jacuzzi , where this user shared several personal details, including that their father was an airline pilot. Rey claimed in 2024 to be 15 years old, and to have family connections to Ireland.

Specifically, Rey mentioned in several Telegram chats that he had Irish heritage, even posting a graphic that shows the prevalence of the surname “ Ginty .”

Rey, on Telegram claiming to have association to the surname “Ginty.” Image: Flashpoint.

Spycloud indexed hundreds of credentials stolen from cybero5tdev@proton.me, and those details indicate that Rey’s computer is a shared Microsoft Windows device located in Amman, Jordan. The credential data stolen from Rey in early 2024 show there are multiple users of the infected PC, but that all shared the same last name of Khader and the address Hamad Al-Qanawi Street, Building 11, in Amman, Jordan.

The “autofill” data lifted from Rey’s family PC contains an entry for a 46-year-old Zaid Khader that says his mother’s maiden name was Ginty. The infostealer data also shows Zaid Khader frequently accessed internal websites for employees of Royal Jordanian Airlines .

MEET SAIF

The infostealer data makes clear that Rey’s full name is Saif Al-Din Khader . Having no luck contacting Saif directly, KrebsOnSecurity sent an email to his father Zaid. The message invited the father to respond via email, phone or Signal, explaining that his son appeared to be deeply enmeshed in a serious cybercrime conspiracy.

Less than two hours later, I received a Signal message from Saif, who said his dad suspected the email was a scam and had forwarded it to him.

“I saw your email, unfortunately I don’t think my dad would respond to this because they think its some ‘scam email,'” said Saif, who told me he turns 16 years old next month. “So I decided to talk to you directly.”

Saif explained that he’d already heard from European law enforcement officials, and had been trying to extricate himself from SLSH. When asked why then he was involved in releasing SLSH’s new ShinySp1d3r ransomware-as-a-service offering, Saif said he couldn’t just suddenly quit the group.

“Well I cant just dip like that, I’m trying to clean up everything I’m associated with and move on,” he said.

The former Hellcat ransomware site. Image: Kelacyber.com

He also shared that ShinySp1d3r is just a rehash of Hellcat ransomware, except modified with AI tools. “I gave the source code of Hellcat ransomware out basically.”

Saif claims he reached out on his own recently to the Telegram account for Operation Endgame, the codename for an ongoing law enforcement operation targeting cybercrime services, vendors and their customers .

“I’m already cooperating with law enforcement,” Saif said. “In fact, I have been talking to them since at least June. I have told them nearly everything. I haven’t really done anything like breaching into a corp or extortion related since September.”

Saif suggested that a story about him right now could endanger any further cooperation he may be able to provide. He also said he wasn’t sure if the U.S. or European authorities had been in contact with the Jordanian government about his involvement with the hacking group.

“A story would bring so much unwanted heat and would make things very difficult if I’m going to cooperate,” Saif Khader said. “I’m unsure whats going to happen they said they’re in contact with multiple countries regarding my request but its been like an entire week and I got no updates from them.”

Saif shared a screenshot that indicated he’d contacted Europol authorities late last month. But he couldn’t name any law enforcement officials he said were responding to his inquiries, and KrebsOnSecurity was unable to verify his claims.

“I don’t really care I just want to move on from all this stuff even if its going to be prison time or whatever they gonna say,” Saif said.

This Commission That Regulates Crypto Could Be Just One Guy: An Industry Lawyer

Intercept
theintercept.com
2025-11-26 17:14:55
Mike Selig had dozens of crypto clients. Now he will be a key industry regulator. The post This Commission That Regulates Crypto Could Be Just One Guy: An Industry Lawyer appeared first on The Intercept....
Original Article

Republicans in the Senate are racing to confirm a lawyer with a long list of crypto industry clients as the next Commodity Futures Trading Commission chair, a position that will hold wide sway over the industry.

CFTC nominee Mike Selig has served dozens of crypto clients ranging from venture capital firms to a bear-themed blockchain company based in the Cayman Islands, according to ethics records obtained by The Intercept.

Those records show the breadth of potential conflicts of interest for Selig, who, if confirmed, will serve on the CFTC alone due to an exodus of other commissioners.

With a Bitcoin crash wiping out a trillion dollars of value in the past few weeks, the industry is counting on friendly regulators in Washington to give it a boost.

Senate Agriculture Committee members voted 12-11 on party lines in favor of Selig on November 20, setting up a vote in the full Senate. The committee vote came a day after a hearing in which Selig dodged straightforward questions about whether CFTC staffing should be expanded as it takes on a role overseeing digital assets, and whether Donald Trump was right to pardon Binance founder Changpeng Zhao.

One thing Selig was committal on, however, was the danger of over-enforcement — leading the consumer group Better Markets to criticize him as the “wrong choice” to lead the CFTC.

“The CFTC is facing unprecedented strain as crypto and prediction market oversight has been layered into its traditional derivatives market oversight responsibilities,” said Benjamin Schiffrin, the nonprofit group’s director of securities policy. “During his hearing, Mr. Selig showed little interest in regulation on either count and was unable to answer the simplest of questions.”

Friendly to Crypto

Selig has drawn widespread backing from crypto industry groups in the wake of his October 25 nomination, which came after an earlier Trump nominee was derailed by the Winklevoss twins, who sued Mark Zuckerberg over the creation of Facebook before launching a lucrative career in crypto.

Selig’s resume shows why the industry is so comfortable with him. Early in his career he was a law clerk for J. Christopher Giancarlo, the CFTC chair during Trump’s first term who calls himself CryptoDad.

After the CFTC, Selig joined Giancarlo at the white-shoe law firm Willkie Farr & Gallagher. His client list there extended from major crypto investors to smaller startups, many of them with some presence in the derivatives or commodities worlds, according to a form he filed with the Office of Government Ethics after his nomination.

Selig’s clients included Amir Haleem, the CEO of a crypto company that was the target of a yearslong Securities and Exchange Commission probe ; Architect Financial Technologies, which last year announced a CFTC-regulated digital derivatives brokerage; Berachain, the Caymans-based blockchain company whose pseudonymous co-founders include “Smokey the Bera” and “Papa Bear ”; CoinList, a crypto exchange that allows traders to access newly listed digital tokens; Deribit, a crypto options exchange; Diamond Standard, which offers commodities products that combine diamonds and the blockchain; Input Output Global, one of the developers of the decentralized blockchain Cardano; and the U.S. branch of eToro, an Israeli crypto trading platform.

“Yes, I think the crypto community is excited about Mike.”

At least one of Selig’s former clients, Alluvial Finance, met with staffers of the crypto task force where Selig has served as chief counsel since the start of the second Trump administration, according to SEC records .

Selig’s clients have also included trade groups including the Proof of Stake Alliance, which advocates for friendly tax policies for a type of blockchain, and the Blockchain Association, which represents dozens of investment firms and large crypto companies in Washington.

Pushing back against the idea that Selig was a one-trick pony in a recent podcast interview, Giancarlo said that Selig’s interests extended to other industries overseen by the CFTC such as agriculture.

“Yes, I think the crypto community is excited about Mike. But so is the whole CFTC community,” Giancarlo said. “It’s not, ‘Crypto bro goes to CFTC.’ This is somebody who has had a decadelong practice in all aspects of CFTC law and jurisdiction, and is accomplished in all those areas.”

Revolving Door

It is far from unusual for Republican presidents to tap industry-friendly lawyers to serve as financial regulators. Selig, though, is poised to assume a uniquely powerful position thanks to a more unusual circumstance: an exodus of CFTC commissioners this year.

The commission’s other members have fled for the exits since Trump’s second term began, with only a single, crypto-friendly Republican left to serve as acting chair. She has said that she will step down once her replacement is confirmed.

Trump so far has yet to nominate any Democratic commissioners on the body that is typically split 3-2 along party lines, with the majority going to the party that controls the White House.

That appears to have been the sticking point for the Democratic senators who unanimously voted against Selig at the committee vote.

It appears that Selig may not have to recuse himself from matters involving his former clients as CFTC chair. In his government ethics filing, Selig pledged not to involve himself in matters involving his former clients for the standard period of a year after he represented them. However, Selig has been in government service for most of 2025, meaning that there are only a few weeks remaining of that blackout period.

A White House spokesperson did not answer questions about potential conflicts of interest if Selig is confirmed.

“Mike Selig is a highly qualified crypto and industry leader, who will do an excellent job in leading the Commodity Futures Trading Commission under President Trump,” White House spokesperson Davis Ingle said in a statement. “We look forward to his swift confirmation.”

Backwater to Bleeding Edge

If confirmed, Selig will lead an agency that was once considered a relative backwater until it was put in charge of regulating derivatives after the 2008 financial crash . More recently, Congress advanced legislation that would put the CFTC on the bleeding edge of overseeing digital assets.

Nonetheless, even relatively crypto-friendly Democrats, such as Sen. Cory Booker of New Jersey, noted at the hearing last week that the agency has nowhere near the staff needed to take on a major new role in the financial markets. The CFTC has only 161 employees dedicated to enforcement actions compared to about 1,500 at the SEC, Booker said.

“There is a real problem right now with capacity in the agency that you are up to lead,” Booker told Selig.

Despite the dearth of both commissioners and staff, Selig was unwilling to commit to growing the agency if he is confirmed. Pressed by Democrats whether he would ask Trump for a bigger staff, Selig repeatedly said that he needed to study the issue.

Selig also avoided giving direct answers to questions from Democrats as to whether the CFTC should crack down on the emerging world of “prediction markets” offering sports gambling outside the auspices of state regulation , and whether crypto exchanges should be allowed to “vertically integrate” by investing in the same tokens they allow customers to trade.

Selig did signal a general openness toward cryptocurrencies — and skepticism of regulation — in his statement to the committee.

“I have seen firsthand how regulators, unaware of the real-world impact of their efforts, and zeal for regulation-by-enforcement, can drive businesses offshore and smother entrepreneurs with red tape,” Selig said . “Everyday Americans pay the price for these regulatory failures. If confirmed, I am committed to instituting common sense, principles-based regulations that facilitate well-functioning markets and keep pace with the rapid speed of innovation.”

DRAM prices are spiking, but I don't trust the industry's why

Hacker News
www.xda-developers.com
2025-11-26 17:12:01
Comments...

Optery (YC W22) Hiring CISO, Release Manager, Tech Lead (Node), Full Stack Eng

Hacker News
www.optery.com
2025-11-26 17:03:21
Comments...

A Vibe Coded SaaS Killed My Team

Hacker News
cendyne.dev
2025-11-26 17:00:23
Comments...
Original Article

I considered it a possibility. Now it's set in stone. Instead of fully shutting down in the coming year due to tumbling revenue, leadership decided "What if we use someone else's platform?" It just so happens, the platform they chose is vibe coded .

A vibe coded SaaS killed my team

Like many tech companies during the pandemic, we over-hired and had to contract over and over again. Without the VC-funded war chest that our competitors had, we couldn't compete in marketing and sales. Our brand awareness shrank into obscurity.

So, in all fairness, we lost the capitalism game. And, I'm fine with that.


If you're curious, I'm sorry to disappoint. I haven't name-dropped, nor will I now or in the future.

We had a plan to gracefully wind down, unlike Redbox ( archived ). Once the balance hit a certain threshold, a plan (prepared a year in advance) would have made everyone whole and returned the remaining funds to the investors.

Except, the investors changed their mind and would rather take a chance on a future sale than admit defeat.

What's changed their mind?


The allure and promise of AI workforce reduction.

The technology costs are but a single digit percentage of the monthly spend – the majority is tied to headcount and benefits. When I saw the numbers going towards headcount costs, I fully understood the situation we were in.

The previous reduction truly cut headcount to the bare minimum that can still keep the technology we have operating. Any fewer, and there's a high risk of business interruption within a few months.

At the same time, the current revenue projection calls for the end of the business within a few more months.

We used to have a thousand people. Today, I can count everyone on my hands. A cut beyond this will fundamentally need a different operating model.

Given that our revenue can no longer support the staff needed to run our own technology, how do the finances work on someone else's platform?

Assuming that this Software as a Service (SaaS) can deliver what leadership believes, the napkin math suggests it'll work out.

With this SaaS, they expect...

  • No engineering headcount
  • No implementation headcount
  • No support headcount
  • Contracted sales teams to pick up the rest

So if they're going to lay everyone off and migrate to a SaaS, who's going to do the migration?

me

I'll be on my own for an extra month or two to migrate it all over.

Somehow, I need to keep the tech coasting in its last days while migrating all the data that I can.

A warning message saying this version of Node (14) will no longer be supported after 2024. It is near the end of 2025.


Thankfully, AWS is not a source of stress for me. Stuff still works, even if it complains years later.


I've expected either a winding down or a transition for over a year now. I've come to terms with an ending like this already.

While my peers are bitter about having a closer end date than I do, I'm not as emotionally invested in when or how it ends.

What I didn't expect is how a vibe coded app passed as legitimate to the board of directors. We don't even have a contract with this platform yet, and people are already being told they're being laid off.


In my two hours of testing and feedback, I found that — without immediate changes to the SaaS — we’d be immediately in violation of the California Consumer Privacy Act (CCPA) , California Privacy Rights Act (CPRA) , Telephone Consumer Protection Act (TCPA) , CAN-SPAM Act , and Americans with Disabilities Act (ADA) .


I keep saying 'we'. It won't be 'we' soon.

How could a platform be that bad? This SaaS has no customers in the United States. Their team is based in another country without similar laws or regulations.

Even so, I'm confident that vibe coded platforms made by people in the United States also unknowingly violate state and federal laws around privacy, communications, and accessibility.

One of our tech acquisitions was through a bankruptcy fire sale after the original company could not make penalty payments for violating the Telephone Consumer Protection Act. These issues cannot be ignored to do business in the United States.

Things don't work

I've used LLM assisted auto complete. I've generated inline functionality. I've generated classes and modules. And I've generated entire web apps. I've seen what GPT, Claude, Z.ai GLM, Grok Code, and Gemini do across the entire spectrum of software development.


Everyone has a different definition of "vibe coding", and as Theo described the spectrum of its definitions (at 4:30), I'll be using the slice of the spectrum "Ignoring the code entirely and only prompting" as my definition of vibe coding.

Within a minute, I could tell it was made with Claude or GLM. Every picture zooms in on hover for no reason. There are cards everywhere. Links go to # in the footer. Modals have a closing X button that doesn't work. The search button up top doesn't do anything...

It's like someone took some screenshots of a competitor, asked an LLM agent to create design documents around all of them, and then implement those design documents without any human review.


At the shallowest depth, I can see how a CEO got bamboozled. The happiest path is implemented. The second happiest path is rough. The third happiest path is unhinged.

No hacks. No reading the source code. Just innocent clicking around allowed me to break a critical invariant of running a business: I could place orders without giving my contact details or payment.

Besides displacing jobs , issues like this concern me deeply.

LLM-generated code can enable a business process quicker and cheaper than hiring a full team with benefits. With experts who still value their craft steering the development, software can be produced just as well as without these tools. Business processes meaningfully affect people's lives, whether staff, customer, vendor, or shareholder.

At its extreme with vibe coding , LLM-generated code will have such poor quality that it is negligent to use LLM-generated code without expert oversight and verification . More lives are going to be affected by negligent software than ever before.

It is so much easier to accept that my life is changing because my employer couldn't stay fit in the economy than to accept it being displaced because of broken software made by a machine. The fiscal performance of my employer in this economy is the root cause, of course. And I accept that. Having to pivot everything to some broken SaaS that breaks the law? That's harder to accept.


While it is hard to accept, I'll still do my part and will move on after a job well done. How well the new platform operates after the domain swap is not my problem.

KDE Plasma 6.8 will be Wayland-only

Linux Weekly News
lwn.net
2025-11-26 16:49:45
KDE's Plasma team has announced that KDE Plasma will drop X11 session support with Plasma 6.8: The Plasma X11 session will be supported by KDE into early 2027. We cannot provide a specific date, as we're exploring the possibility of shipping some extra bug-fix releases for Plasma 6.7. The ex...
Original Article

KDE's Plasma team has announced that KDE Plasma will drop X11 session support with Plasma 6.8:

The Plasma X11 session will be supported by KDE into early 2027.

We cannot provide a specific date, as we're exploring the possibility of shipping some extra bug-fix releases for Plasma 6.7. The exact timing of the last one will only be known when we get closer to its actual release, which we expect will be sometime in early 2027.

What if I still really need X11?

This is a perfect use case for long term support (LTS) distributions shipping older versions of Plasma. For example, AlmaLinux 9 includes the Plasma X11 session and will be supported until sometime in 2032.

See the blog post for information on running X11 applications (still supported), accessibility, gaming, and more.



Cloudflare outage should not have happened

Hacker News
ebellani.github.io
2025-11-26 16:34:58
Comments...
Original Article

Yet again, another global IT outage has happened (déjà vu strikes again in our industry). This time at Cloudflare (Prince 2025). Again, taking down large swaths of the internet with it (Booth 2025).

And yes, like my previous analyses of the GCP and CrowdStrike outages, this post critiques Cloudflare's root cause analysis (RCA), which, despite providing a great overview of what happened, misses the real lesson.

Here’s the key section of their RCA:

Unfortunately, there were assumptions made in the past, that the list of columns returned by a query like this would only include the “default” database:

SELECT name, type FROM system.columns WHERE table = 'http_requests_features' order by name;

Note how the query does not filter for the database name. With us gradually rolling out the explicit grants to users of a given ClickHouse cluster, after the change at 11:05 the query above started returning “duplicates” of columns because those were for underlying tables stored in the r0 database.

This, unfortunately, was the type of query that was performed by the Bot Management feature file generation logic to construct each input “feature” for the file mentioned at the beginning of this section.

The query above would return a table of columns like the one displayed (simplified example):

However, as part of the additional permissions that were granted to the user, the response now contained all the metadata of the r0 schema effectively more than doubling the rows in the response ultimately affecting the number of rows (i.e. features) in the final file output.

A central database query didn’t have the right constraints to express the business rules. Not only did it miss the database name, but it also clearly needs a DISTINCT and a LIMIT, since these seem to be crucial business rules.

So a new piece of underlying security work manifested the (unintended) potential already present in the query. Since this was by definition unintended, the application code didn’t expect that value to be what it was, and reacted poorly. This caused a crash loop across seemingly all of Cloudflare’s core systems. This bug wasn’t caught during rollout because the faulty code path required data that was assumed to be impossible to generate.
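
As a concrete illustration of the pattern (not Cloudflare's actual code), here is a small Python sketch of how the consumer of such a metadata query could protect itself: deduplicate the returned column names and enforce the feature limit explicitly, failing safe instead of crashing. The MAX_FEATURES constant and function name are assumptions for the example.

MAX_FEATURES = 200   # assumed application limit on the number of features

def build_feature_list(rows):
    """rows: (name, type) tuples returned by a system.columns-style query."""
    # Deduplicate: duplicate rows from extra schemas collapse to one feature each.
    features = sorted({name for name, _type in rows})
    if len(features) > MAX_FEATURES:
        # Fail safe: refuse to emit an oversized file and keep the last known-good one.
        raise ValueError(f"{len(features)} features exceed the limit of {MAX_FEATURES}")
    return features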

Sounds familiar? It should. Any senior engineer has seen this pattern before. This is classic database/application mismatch. With this in mind, let’s review how Cloudflare is planning to prevent this from happening again:

  • Hardening ingestion of Cloudflare-generated configuration files in the same way we would for user-generated input
  • Enabling more global kill switches for features
  • Eliminating the ability for core dumps or other error reports to overwhelm system resources
  • Reviewing failure modes for error conditions across all core proxy modules

These are all solid, reasonable steps. But here’s the problem: they already do most of this—and the outage happened anyway.

Why? Because they seem to mistake physical replication for not having a single point of failure. This conflates the physical layer with the logical layer. One can have a logical single point of failure without having any physical one, which was the case in this situation.

I base this claim on their choice of abandoning PostgreSQL and adopting ClickHouse (Bocharov 2018). The whole post is a great overview of trying to process data fast, without a single line on how to guarantee its logical correctness/consistency in the face of changes.

They are treating a logical problem as if it were a physical problem.

I’ll repeat the same advice I offered in my previous article on GCP’s outage:

The real cause

These kinds of outages stem from the uncontrolled interaction between application logic and database schema. You can’t reliably catch that with more tests or rollouts or flags. You prevent it by construction—through analytical design.

  1. No nullable fields.
  2. (as a corollary of 1) full normalization of the database ( The principles of database design, or, the Truth is out there )
  3. formally verified application code (Chapman et al. 2024)

Conclusion

FAANG-style companies are unlikely to adopt formal methods or relational rigor wholesale. But for their most critical systems, they should. It’s the only way to make failures like this impossible by design, rather than just less likely.

The internet would thank them. (Cloud users too—caveat emptor.)

References

Chapman, Roderick, Claire Dross, Stuart Matthews, and Yannick Moy. 2024. “Co-Developing Programs and Their Proof of Correctness.” Commun. Acm 67 (3): 84–94. https://doi.org/10.1145/3624728 .

Prince, Matthew. 2025. “Cloudflare Outage on November 18, 2025.” https://blog.cloudflare.com/18-november-2025-outage/ .

Figure 1: The Cluny library was one of the richest and most important in France and Europe. In 1790 during the French Revolution, the abbey was sacked and mostly destroyed, with only a small part surviving


European parliament calls for social media ban on under-16s

Guardian
www.theguardian.com
2025-11-26 16:28:31
MEPs pass resolution to help parents tackle growing dangers of addictive internet platforms Children under 16 should be banned from using social media unless their parents decide otherwise, the European parliament says. MEPs passed a resolution on age restrictions on Wednesday by a large majority. ...
Original Article

Children under 16 should be banned from using social media unless their parents decide otherwise, the European parliament says.

MEPs passed a resolution on age restrictions on Wednesday by a large majority. Although not legally binding, it raises pressure for European legislation amid growing alarm about the mental health risks to children of unfettered internet access.

The European Commission, which is responsible for initiating EU law, is already studying Australia’s world-first social-media ban for under-16s, which is due to take effect next month.

In a speech in September, the commission’s president, Ursula von der Leyen , said she would watch the implementation of Australia’s policy. She spoke out against “algorithms that prey on children’s vulnerabilities with the explicit purpose of creating addictions” and said parents felt powerless against “the tsunami of big tech flooding their homes”.

Von der Leyen promised a panel of experts would be set up by the end of the year to advise on the best approach to protecting children.

Interest is growing in restricting children’s social media and smartphone access. An expert report commissioned last year by France’s president, Emmanuel Macron, said children should not be allowed to use smartphones until the age of 13 and social media, such as TikTok, Instagram and Snapchat, until they were 18.

Christel Schaldemose, the Danish Social Democrat MEP who drafted the resolution, told reporters that politicians needed to act to protect children: “It is not just parents. Society also needs to step up and make sure that platforms are a safe place for minors to be, but only if they are above a certain age.”

Her report called for the default disabling of addictive features on internet platforms when used by minors, such as infinite scrolling (endless content as the user scrolls down), videos that automatically play, excessive push notifications and rewards for repeated use of a site.

The resolution noted that “addictive design features are often inherent to the business model of platforms, notably social media”. An earlier draft of the Schaldemose report cited a study stating that one in four children and young people displayed “problematic” or “dysfunctional” smartphone use – behavioural patterns mirroring addiction. The resolution said children should be 16 before they could access social media, although parents could give consent from the age of 13.

The White House is urging the EU to roll back its digital laws and some supporters of a social media ban explicitly framed the vote in this context. At a meeting in Brussels on Monday, Howard Lutnick, the US commerce secretary, said EU rules on tech companies needed to be more “balanced” in exchange for lower US steel and aluminium tariffs.

Referring to Lutnick’s visit, Stéphanie Yon-Courtin, a French MEP from Macron’s party, said Europe was not “a regulatory colony”. In a statement after the vote, she added: “Our digital laws are not for sale. We will not back down on children’s protections because a foreign billionaire or big tech tells us to.”

The EU already seeks to protect internet users from online harms, such as disinformation, cyberbullying and illegal content, via its Digital Services Act. But the resolution said this law had gaps and could do more to protect children from addictive design features and online exploitation, such as financial incentives to become influencers.

Schaldemose said the act, which she co-authored, was strong “but we could go further, especially in areas of addictive design features and harmful dark pattern practices where we are not so specific, not so precise”.


Dark patterns refer to app or website design features to influence decision-making, such as countdown timers to encourage users to make purchases, or nagging requests to turn on location trackers and notifications.

Schaldemose’s resolution was adopted by 483 MEPs and opposed by 92, with 86 abstentions.

Eurosceptic MEPs criticised the plan, saying the EU would be overreaching if it banned social media access for children. “Decisions about children’s access must be taken as close to families as possible – in the member states, not in Brussels,” said Kosma Złotowski, a Polish member of the European Conservatives and Reformists group.

The resolution was passed only one week after the commission announced delays to changes to its Artificial Intelligence Act and other digital laws in a push to lighten regulation on companies in the name of “simplification”.

Schaldemose said she appreciated the need to avoid creating too many laws but added “there is a willingness to do more when it comes to kids and protection of our children in the EU”.

Slop Detective – Fight the Slop Syndicate

Hacker News
slopdetective.kagi.com
2025-11-26 16:24:29
Comments...
Original Article


Slashdot Effect

Hacker News
en.wikipedia.org
2025-11-26 16:12:51
Comments...
Original Article

From Wikipedia, the free encyclopedia

"Flash crowd" redirects here. For the short story by Larry Niven, see Flash Crowd . For the social gathering in the real world, see Flash mob .

The Slashdot effect, also known as slashdotting or the hug of death, occurs when a popular website links to a smaller website, causing a massive increase in traffic. This overloads the smaller site, causing it to slow down or even temporarily become unavailable. Typically, less robust sites are unable to cope with the huge increase in traffic and become unavailable – common causes are lack of sufficient data bandwidth, servers that fail to cope with the high number of requests, and traffic quotas. Sites that are maintained on shared hosting services often fail when confronted with the Slashdot effect. This has the same effect as a denial-of-service attack, albeit accidentally. The name stems from the huge influx of web traffic which would result from the technology news site Slashdot linking to websites. The term flash crowd is a more generic term. [ 1 ]

The original circumstances have changed, as flash crowds from Slashdot were reported in 2005 to be diminishing due to competition from similar sites , [ 2 ] and the general adoption of elastically scalable cloud hosting platforms.

The term "Slashdot effect" refers to the phenomenon of a website becoming virtually unreachable because too many people are hitting it after the site was mentioned in an interesting article on the popular Slashdot news service. It was later extended to describe any similar effect from being listed on a popular site. [ 3 ]

The effect has been associated with other websites or metablogs such as Fark , Digg , Drudge Report , Imgur , Reddit , and Twitter , leading to terms such as being farked or drudged , being under the Reddit effect , or receiving a hug of death from the site in question. [ 4 ] [ 5 ] Another generic term, "flash crowd," [ 6 ] originates from Larry Niven's 1973 novella by that name , in which the invention of inexpensive teleportation allows crowds to materialize almost instantly at the sites of interesting news stories.

Sites such as Slashdot , Digg, Reddit, StumbleUpon, and Fark consist of brief submitted stories and a self-moderated discussion on each story. The typical submission introduces a news item or website of interest by linking to it. In response, large masses of readers tend to simultaneously rush to view the referenced sites. The ensuing flood of page requests from readers can exceed the site's available bandwidth or the ability of its servers to respond, and render the site temporarily unreachable.

Google Doodles , which link to search results on the doodle topic, also result in high increases of traffic from the search results page. [ 7 ]

MRTG graph from a web server statistics generator showing a moderate Slashdot effect in action in 2005

Major news sites or corporate websites are typically engineered to serve large numbers of requests and therefore do not normally exhibit this effect. Websites that fall victim may be hosted on home servers, offer large images or movie files or have inefficiently generated dynamic content (e.g. many database hits for every web hit even if all web hits are requesting the same page). These websites often became unavailable within a few minutes of a story's appearance, even before any comments had been posted. Occasionally, paying Slashdot subscribers (who have access to stories before non-paying users) rendered a site unavailable even before the story was posted for the general readership.

Few definitive numbers exist regarding the precise magnitude of the Slashdot effect, but estimates put the peak of the mass influx of page requests at anywhere from several hundred to several thousand hits per minute. [ 8 ] [ 9 ] [ 10 ] The flood usually peaked when the article was at the top of the site's front page and gradually subsided as the story was superseded by newer items. Traffic usually remained at elevated levels until the article was pushed off the front page, which could take from 12 to 18 hours after its initial posting. However, some articles had significantly longer lifetimes due to the popularity, newsworthiness, or interest in the linked article.

By 2005, reporters were commenting that the Slashdot effect had been diminishing. [ 2 ] However, the effect has been seen involving Twitter when some popular users mention a website. [ 11 ]

When the targeted website has a community -based structure, the term can also refer to the secondary effect of having a large group of new users suddenly set up accounts and start to participate in the community. While in some cases this has been considered a good thing, in others it is viewed with disdain by the prior members, as quite often the sheer number of new people brings many of the unwanted aspects of Slashdot along with it, such as trolling , vandalism , and newbie -like behavior. This bears some similarity to the 1990s Usenet concept of Eternal September .

Assistance and prevention


Many solutions have been proposed for sites to deal with the Slashdot effect. [ 12 ]

There are several systems that automatically mirror any Slashdot-linked pages to ensure that the content remains available even if the original site becomes unresponsive. [ 13 ] Sites in the process of being Slashdotted may be able to mitigate the effect by temporarily redirecting requests for the targeted pages to one of these mirrors. Slashdot does not mirror the sites it links to on its own servers, nor does it endorse a third party solution. Mirroring of content may constitute a breach of copyright and, in many cases, cause ad revenue to be lost for the targeted site.

  1. ^ Ari, Ismail; Hong, Bo; Miller, Ethan L.; Brandt, Scott A.; Long, Darrell D. E. (October 2003). "Managing Flash Crowds on the Internet" (PDF) . University of California Santa Cruz Storage Systems Research Center. Archived from the original (PDF) on 9 May 2013 . Retrieved 15 March 2010 .
  2. ^ a b Kharif, Olga (March 2, 2005). "Less Impact from the "Slashdot Effect" . Bloomberg Business Week . Archived from the original on May 15, 2005.
  3. ^ Eric S. Raymond. "slashdot effect" . The Jargon File, version 4.4.8 . Retrieved 21 May 2012 .
  4. ^ Wilhelm, Alex (17 January 2012). "How Reddit turned one congressional candidate's campaign upside down" . The Next Web . Retrieved 24 October 2012 .
  5. ^ "The Reddit effect" . ABC News. August 31, 2012. Archived from the original on 1 November 2014 . Retrieved 24 October 2012 .
  6. ^ Eric S. Raymond. "flash crowd" . The Jargon File (version 4.4.7) . Retrieved 25 May 2012 .
  7. ^ Williams, David E. " Google's unknown artist has huge following ." CNN . July 19, 2006. Retrieved on July 19, 2006.
  8. ^ Stephen Adler. "The Slashdot Effect: An Analysis of Three Internet Publications" . Archived from the original on 2 December 2008 . Retrieved 19 April 2003 . (mirror)
  9. ^ "Slashdotting graphs" . Princeton University Department of Astrophysical Sciences. Archived from the original on 27 February 2009 . Retrieved 13 January 2004 .
  10. ^ Aaron Benoy. "Ruins in ASCII" . Retrieved 27 September 2004 .
  11. ^ Paul Douglas, How Stephen Fry takes down entire websites with a single tweet , Tech Radar, March 3, 2010
  12. ^ Jeremy Elson; Jon Howell (2008), Handling Flash Crowds from your Garage (PDF) , Microsoft Research
  13. ^ Daniel Terdiman (1 October 2004). "Solution for Slashdot Effect?" . WIRED . Retrieved 2016-04-18 .

Bits from Debian: New Debian Developers and Maintainers (September and October 2025)

PlanetDebian
bits.debian.org
2025-11-26 16:00:00
The following contributors got their Debian Developer accounts in the last two months: Evangelos Ribeiro Tzaras (devrts) Andrea Bolognani (abologna) The following contributors were added as Debian Maintainers in the last two months: Rylie Pavlik Yuchin Tsai Daniel Markstedt Guido Berhörster Renzo...
Original Article

The following contributors got their Debian Developer accounts in the last two months:

  • Evangelos Ribeiro Tzaras (devrts)
  • Andrea Bolognani (abologna)

The following contributors were added as Debian Maintainers in the last two months:

  • Rylie Pavlik
  • Yuchin Tsai
  • Daniel Markstedt
  • Guido Berhörster
  • Renzo Davoli

Congratulations!


llmfuse: a self-compressing filesystem backed by an LLM

Lobsters
grohan.co
2025-11-26 15:59:00
Comments...
Original Article

Every systems engineer at some point in their journey yearns to write a filesystem. This sounds daunting at first - and writing a battle-tested filesystem is hard - but the minimal surface area for a “working” FS is surprisingly small, simple, and in-distribution for coding agents.

In fact, one of my smoke tests for new coding models is seeing how good of a filesystem they can one-shot! At some point, I had quite a few filesystems lying around - and coding models were getting pretty good - which made me wonder whether the models were intelligent enough to actually model the filesystem engine itself.

A filesystem is the perfect black-box API to model with wacky backends (see “Harder drives” ), and besides the joy of training an LLM for fun - there were a few deeper truths about language models that I wanted to explore.

Training a filesystem #

So I set upon training a filesystem. Building on top of one of my throwaway FUSEs, a few rounds with Claude repurposed it to loopback against the host with added logging, two things I needed to generate reference fine-tuning data:

class LoggingLoopbackFS(LoggingMixIn, Operations):
    """
    A loopback FUSE filesystem that logs all operations for training data.
    
    This implementation delegates all filesystem operations to a real directory
    on the host filesystem, ensuring perfect semantic correctness while logging
    every operation for LLM training data.
    """

I then wrote a filesystem interaction simulator, which sampled various operations against a sandboxed LoggingLoopbackFS to generate diverse FUSE prompt/completion pairs. Concretely, I captured only the minimal set of operations needed for R/W-ish capability (no open, xattrs, fsync etc).

Alongside the FUSE operation, I captured the full filesystem state at every turn. I experimented with various formats, including an ASCII-art representation, but ultimately settled on XML since it enforces prompt boundaries clearly and had canonical parsers available.

With prompts including the FUSE operation + XML filesystem tree, the model learned two forms of completions:

  • Reads (<R>) requested the content / metadata as per the operation ( getattr / readdir / read )
  • Writes (<W>) requested the model to output the full filesystem tree state, after modification ( unlink / chmod / truncate / write )

Example prompt (read):

<R>
read('/usr14/log767.rs', size=4096, offset=0, fh=4) 
---
<filesystem>
  <directory path="/" name="/" mode="755" owner="root" group="root"
mtime="2025-01-01T00:00:00">
    <directory path="usr14" name="usr14" mode="755" owner="root" group="root"
mtime="2025-01-01T00:00:00">
      <file path="usr14/log767.rs" name="log767.rs" mode="644" owner="root"
group="root" mtime="2025-01-01T00:00:01" size="276">
        <body>fn main() {
    match process(7) {
        Ok(result) =&gt; println!("Result: {}", result),
        Err(e) =&gt; eprintln!("Error: {}", e),
    }
</body>
      </file>
      <file path="usr14/temp912.sh" name="temp912.sh" mode="644" owner="root"
group="root" mtime="2025-01-01T00:00:01" size="268">
        <body>#!/bin/bash 
         echo "temp912" || exit 1
       </body>
      </file>
    </directory>
  </directory>
</filesystem>

Completion:

fn main() {
    match process(7) {
        Ok(result) => println!("Result: {}", result),
        Err(e) => eprintln!("Error: {}", e),
    }
}

Fine-tuning #

Once I had clean, representative, and diverse filesystem simulation data, actually running SFT was pretty straightforward on Modal. Over a few iteration cycles spread across nibbles of spare time, I ended up with ~98% accuracy on a hold-out eval after 8 epochs of SFT on an N=15,000 dataset with Qwen3-4B.

Most of my time here was spent cleaning generated data and ensuring we represented every FUSE operation sufficiently + generated enough “complex” trees to learn on.

At this point, I wrote … possibly the smallest filesystem I’ve seen… to give my model a spin in the real world. Every FUSE operation was a passthrough to the LLM, for example:

class LLMFuse(LoggingMixin, Operations):
    ...
    def chmod(self, path, mode):
        """Change file permissions."""
        response = self._query_llm_for_operation('chmod', path, mode=oct(mode))
        if not self._handle_llm_response(response):
            raise FuseOSError(ENOENT)
        return 0
    ...
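
The two helpers are not shown in the snippet above, but based on the prompt format described earlier they might look roughly like this (a hypothetical sketch, not the author's code; self.llm.complete stands in for whatever client wrapper actually calls the fine-tuned model):

    def _query_llm_for_operation(self, op, path, **kwargs):
        """Build the <R>/<W> prompt (operation line + XML tree) and ask the model."""
        args = ", ".join(f"{k}={v!r}" for k, v in kwargs.items())
        tag = "R" if op in ("getattr", "readdir", "read") else "W"
        prompt = f"<{tag}>\n{op}('{path}', {args})\n---\n{self.tree_xml}"
        return self.llm.complete(prompt)       # assumed client wrapper

    def _handle_llm_response(self, response):
        """Write ops return the full post-modification tree; persist it as the new state."""
        if response is None:
            return False
        if response.lstrip().startswith("<filesystem>"):
            self.tree_xml = response
        return True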

Nice! I now had a mountable FUSE that was entirely “implemented” by a language model. As you can see below, I was able to ls around it, echo into files, and cat them back out.

Poking around a Docker container with a mounted LLMFuse.

Compressing the filesystem #

Perhaps the largest glaring inefficiency in this set up is the sheer verbosity of the XML-based representation. I was using many bytes to represent attributes and tree structure that could be encoded far more efficiently (~O(bits)) in a standard C struct.

However, as I was fine-tuning on the XML filesystem tree representation, I was baking in this very structure into the weights and probability distributions of my Qwen fork! If only there was a way to leverage this to compress state…

Two sides of the same coin #

As it turns out, compression and AI are intimately related. Using LLMs to lossily compress text is one of the most common applications, so it’s not entirely unintuitive. However, one researcher (Marcus Hutter) claimed back in 2006 that they are equivalent (and in fact bet $500K on this claim! ).

Presciently, Hutter appears to be absolutely right. His enwik8 and enwik9 benchmark datasets are, today, best compressed by a 169M parameter LLM (trained by none other than Fabrice Bellard in 2023).

That's a bit perplexing at first glance. Surely LLM compression isn't reversible? What kind of voodoo magic is going on here?

Arithmetic coding #

The algorithm that enables reversible compression using LLMs is called “arithmetic coding” and it builds upon a 1948 result by Claude Shannon .

Researchers at DeepMind (including Hutter himself) have explained the math in detail , so I’ll direct the most inquisitive of you readers there, but for a simplified understanding of what’s going on, forget everything you might know about working with LLMs today. There’s no prompting involved!

Let's assume the following is true for some predictive model \(M\):

  • Lorem has first-word probability = 0.57.
  • Ipsum has second-word conditional probability = 0.67 (joint 0.38).
  • Dolor has third-word conditional probability = 0.5 (joint 0.19).

And so on, until you reach the end of the string you want to compress; you end up with some "final interval width" \(P(m)\) on the real interval \([0,1]\) that represents your string.

Let's suppose in our example this turns out to be 0.012. We can point to a number inside this interval using roughly \(- \log_{2}{P(m)} \approx 6.4\) bits, which is our final compressed size.
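
To make that concrete, here is a tiny Python walk-through of the interval narrowing for the three-word example; the probabilities are the hypothetical ones from the list above, and everything else is just bookkeeping.

from math import log2

# (token, conditional probability under the hypothetical model M)
steps = [("Lorem", 0.57), ("Ipsum", 0.67), ("Dolor", 0.50)]

width = 1.0  # start with the whole interval [0, 1)
for token, p in steps:
    width *= p  # each token narrows the current interval by its conditional probability
    print(f"after {token!r}: interval width = {width:.3f}, bits so far = {-log2(width):.2f}")

# Continuing to the end of the string shrinks the width further; a final width of
# 0.012 corresponds to about -log2(0.012) ~= 6.4 bits.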

There are a few elegant things about this algorithm:

  • Any number within the final interval decodes back to the original string: "decoding" simply retraces the arithmetic coding steps through the same model's probability distributions (see the line through the probability distributions above).
  • The inverse log relationship between predictive power \(P(m)\) and compressed size pushes the burden of the "hard compression problem" onto deep learning machinery, which can encode high-dimensional text patterns within model weights, yielding far better compression ratios than deterministic algorithms.

Sounds cool! But how good is this compression, really? Comparing arithmetic coding backed by Qwen3-4B against gzip for lipsum.txt, we already see pretty dramatic results:

Method             Size (bytes)   Compression Impact
Original (plain)   446
gzip               298            ~33% smaller
llmencode          13             ~97% smaller

(note: llmencode is my implementation of arithmetic coding)

22x better compression than gzip is pretty ridiculous! A caveat here is that lipsum.txt is heavily represented in training data, but 5-20x efficiency gains broadly hold for all text data that looks like it has been on the internet.
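
For reference, the gzip baseline in the table can be reproduced in a couple of lines; lipsum.txt here is whatever plain-text sample you point it at, and the exact byte counts depend on its contents.

import gzip

data = open("lipsum.txt", "rb").read()
print(len(data), len(gzip.compress(data)))  # e.g. 446 and ~298 for the sample above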

Self-compression #

Now, back to our filesystem. The XML overhead we were worried about can now be "compressed away" by the fine-tuned model. Using the same toy filesystem from the Docker container demo above:

<filesystem>
  <directory path="/" name="/" mode="755" owner="root" group="root" mtime="2025-01-01T00:00:00">
    <directory path="testdir" name="testdir" mode="755" owner="root" group="root" mtime="2025-01-01T00:00:00" />
    <file path="testfile.txt" name="testfile.txt" mode="644" owner="root" group="root" mtime="2025-01-01T00:00:01" size="14">
      <body>hello llmfuse
</body>
    </file>
  </directory>
</filesystem>
Model                 Original (bytes)   Compressed (bytes)   Ratio
Base Qwen3-4B         394                38                   10.4x
Fine-tuned Qwen3-4B   394                21                   18.8x

The fine-tuned model achieves 44.7% better compression on XML filesystem trees - the very format it was trained to predict. This is the “self-compression” effect: by baking the XML structure into the model weights during fine-tuning, the arithmetic coder can represent that structure in fewer bits.

Self-compression in filesystems isn't a novel concept. For example, the squashfs tool has existed since 2002 to create read-only (R/O) compressed filesystems. Squashfs compresses files, inodes, and directories together, not unlike what we're doing here!

Under the hood, squashfs just wraps gzip / zstd / your favourite compression algorithm. So for plain-text data, squashfs compression stats pale in comparison to llmfuse :

Method                 Compressed Size   Notes
squashfs (gzip)        171 bytes         gzip-compressed file contents, inodes, directory tables
llmfuse (fine-tuned)   21 bytes          Arithmetic-coded XML state

For the same filesystem tree (one directory, one 14-byte text file), llmfuse achieves ~8x better compression than squashfs (see methodology in appendix).

The difference comes down to llmencode being far better than gzip on text data + XML structure - especially when the model has been fine-tuned on exactly that structure.

Conclusion #

What started off as a little experiment, mostly to get my hands dirty with training and inference, evolved into a full-blown nerd snipe and intellectual adventure. Thanks for making it this far!

I entirely recognize that this is a “toy” experiment under a very specific setup; with that said, the numbers above are pretty eye-popping, and the question I’ve been trying to answer as I write this up is: does this have any real-world potential?

Of course, in the short term, there’s a whole host of caveats: you need an LLM, likely a GPU, all your data is in the context window (which we know scales poorly), and this only works on text data.

Still, it's intriguing to wonder whether the very engines that will likely dominate all "text generation" going forward can be used to compress their own data. Perhaps in a distant future where running LLMs at the edge makes sense, or for specific kinds of workflows where data is read very infrequently.

Overall, I'm grateful to Peyton at Modal for the compute credits. Running a somewhat unconventional experiment like this wouldn't have been possible without full control over the training and inference code, and it would have been extremely tedious without the simplicity of running ML infra on Modal! It's truly awesome to be able to just modal deploy and get my own private inference endpoints, or just modal run to prototype some code on the cloud.

Appendix #

Source Code #

All of the source code for this experiment, particularly llmfuse and llmencode , is open-sourced under MIT.

llmencode is abstracted into a CLI utility that you can run locally. Inference on 4B models is slow, but entirely possible on consumer hardware. I prototyped most of this code on a 2021 MacBook Pro before productionizing on Modal.

A fun experiment / party trick to identify how “common” a certain string is in training data is to look at its llmencode compression ratio!

SquashFS comparison methodology #

The raw .sqsh file is 4096 bytes due to block alignment padding. To find the actual compressed size, I used xxd to inspect the binary and found the last non-zero byte at offset 266 (267 bytes total). Subtracting the fixed 96-byte superblock header gives us 171 bytes of actual gzip-compressed content - everything needed to reconstruct the filesystem.
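
The same measurement can be scripted instead of eyeballed in xxd; a small sketch (the image file name here is a placeholder):

data = open("toyfs.sqsh", "rb").read()
last = max(i for i, b in enumerate(data) if b != 0)   # offset of the last non-zero byte, e.g. 266
print(last + 1)        # meaningful bytes including the superblock, e.g. 267
print(last + 1 - 96)   # minus the 96-byte superblock header -> ~171 bytes of gzip payload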

Compression as a metric #

It's equally interesting to think about compression as a metric. An angle I'd considered is doing some kind of RL on the arithmetic-coded compression size itself.

Is that simply equivalent to the pre-training objective (due to the prediction-compression duality)? Or does the "sequence-level" objective add something more… interesting to the mix? Please reach out if you have thoughts!


From blood sugar to brain relief: GLP-1 therapy slashes migraine frequency

Hacker News
www.medlink.com
2025-11-26 15:49:11
Comments...
Original Article

Notice: News releases are not subject to review by MedLink Neurology's Editorial Board.

Researchers at the Headache Centre of the University of Naples “Federico II” gave the glucagon-like peptide-1 (GLP-1) receptor agonist liraglutide to 26 adults with obesity and chronic migraine (defined as 15 or more headache days per month). Patients reported an average of 11 fewer headache days per month, while disability scores on the Migraine Disability Assessment Test dropped by 35 points, indicating a clinically meaningful improvement in work, study, and social functioning.

GLP-1 agonists have gained recent widespread attention, reshaping treatment approaches for several diseases, including diabetes and cardiovascular disease. 2 In the treatment of type 2 diabetes, liraglutide helps lower blood sugar levels and reduce body weight by suppressing appetite and reducing energy intake. 3,4,5

Importantly, while participants’ body mass index declined slightly (from 34.01 to 33.65), this change was not statistically significant. An analysis of covariance confirmed that BMI reduction had no effect on headache frequency, strengthening the hypothesis that pressure modulation, not weight loss, drives the benefit.

“Most patients felt better within the first two weeks and reported quality of life improved significantly”, said lead researcher Dr Simone Braca. “The benefit lasted for the full three-month observation period, even though weight loss was modest and statistically non-significant.”

Patients were screened to exclude papilledema (optic disc swelling resulting from increased intracranial pressure) and sixth nerve palsy, ruling out idiopathic intracranial hypertension (IIH) as a confounding factor. Growing evidence closely links subtle increases in intracranial pressure to migraine attacks. 6 GLP-1-receptor agonists, such as liraglutide, reduce cerebrospinal fluid secretion and have already proved effective in treating IIH. 7 Therefore, building on these observations, Dr Braca and colleagues hypothesised that exploiting the same mechanism of action might ultimately dampen cortical and trigeminal sensitisation that underlie migraine.

“We think that, by modulating cerebrospinal fluid pressure and reducing intracranial venous sinuses compression, these drugs produce a decrease in the release of calcitonin gene-related peptide (CGRP), a key migraine-promoting peptide”, Dr Braca explained. “That would pose intracranial pressure control as a brand-new, pharmacologically targetable pathway.”

Mild gastrointestinal side effects (mainly nausea and constipation) occurred in 38% of participants but did not lead to treatment discontinuation.

Following this exploratory 12-week pilot study, a randomised, double-blind trial with direct or indirect intracranial pressure measurement is now being planned by the same research team in Naples, led by Professor Roberto De Simone. “We also want to determine whether other GLP-1 drugs can deliver the same relief, possibly with even fewer gastrointestinal side effects”, Dr Braca noted.

If confirmed, GLP-1-receptor agonists could offer a new treatment option for the estimated one in seven people worldwide who live with migraine, 8 particularly those who do not respond to current preventives. Given liraglutide’s established use in type 2 diabetes and obesity, it may represent a promising case of drug repurposing in neurology.

References:

  1. Braca S, Russo C, et al. GLP-1R Agonists for the Treatment of Migraine: A Pilot Prospective Observational Study. Abstract A-25-13975. Presented at the 11th EAN Congress (Helsinki, Finland).
  2. Zheng Z, Zong Y, Ma Y, et al. Glucagon-like peptide-1 receptor: mechanisms and advances in therapy. Signal Transduct Target Ther 2024;9(1):234.
  3. Lin CH, Shao L, Zhang YM, et al. An evaluation of liraglutide including its efficacy and safety for the treatment of obesity. Expert Opin Pharmacother 2020;21(3):275-85.
  4. Moon S, Lee J, Chung HS, et al. Efficacy and safety of the new appetite suppressant, liraglutide: a meta-analysis of randomized controlled trials. Endocrinol Metab (Seoul) 2021;36(3):647-60.
  5. Jacobsen LV, Flint A, Olsen AK, Ingwersen SH. Liraglutide in type 2 diabetes mellitus: clinical pharmacokinetics and pharmacodynamics. Clin Pharmacokinet 2016;55(6):657-72.
  6. De Simone R, Sansone M, Russo C, Miele A, Stornaiuolo A, Braca S. The putative role of trigemino-vascular system in brain perfusion homeostasis and the significance of the migraine attack. Neurol Sci 2022;43(9):5665-72.
  7. Mitchell JL, Lyons HS, Walker JK, et al. The effect of GLP-1RA exenatide on idiopathic intracranial hypertension: a randomized clinical trial. Brain 2023;146(5):1821-30.
  8. Steiner TJ, Stovner LJ, Jensen R, Uluduz D, Katsarava Z. Lifting The Burden: the Global Campaign against Headache. Migraine remains second among the world's causes of disability, and first among young women: findings from GBD2019. J Headache Pain 2020;21(1):137.

Source: News Release
European Academy of Neurology
June 20, 2025

'Slop Evader' Lets You Surf the Web Like It’s 2022

404 Media
www.404media.co
2025-11-26 15:47:11
Artist Tega Brain is fighting the internet’s enshittification by turning back the clock to before ChatGPT existed....
Original Article

It’s hard to believe it’s only been a few years since generative AI tools started flooding the internet with low quality content-slop. Just over a year ago, you’d have to peruse certain corners of Facebook or spend time wading through the cultural cesspool of Elon Musk’s X to find people posting bizarre and repulsive synthetic media. Now, AI slop feels inescapable — whether you’re watching TV , reading the news , or trying to find a new apartment .

That is, unless you’re using Slop Evader , a new browser tool that filters your web searches to only include results from before November 30, 2022 — the day that ChatGPT was released to the public.

The tool is available for Firefox and Chrome, and has one simple function: Showing you the web as it was before the deluge of AI-generated garbage. It uses Google search functions to index popular websites and filter results based on publication date, a scorched earth approach that virtually guarantees your searches will be slop-free.

Slop Evader was created by artist and researcher Tega Brain , who says she was motivated by the growing dismay over the tech industry’s unrelenting, aggressive rollout of so-called “generative AI”—despite widespread criticism and the wider public’s distaste for it.

Slop Evader in action. Via Tega Brain

“This sowing of mistrust in our relationship with media is a huge thing, a huge effect of this synthetic media moment we’re in,” Brain told 404 Media, describing how tools like Sora 2 have short-circuited our ability to determine reality within a sea of artificial online junk. “I’ve been thinking about ways to refuse it, and the simplest, dumbest way to do that is to only search before 2022.”

One under-discussed impact of AI slop and synthetic media, says Brain, is how it increases our “cognitive load” when viewing anything online. When we can no longer immediately assume any of the media we encounter was made by a human, the act of using social media or browsing the web is transformed into a never-ending procession of existential double-takes .

This cognitive dissonance extends to everyday tasks that require us to use the internet—which is practically everything nowadays. Looking for a house or apartment? Companies are using genAI tools to generate pictures of houses and rental properties , as well as the ads themselves. Trying to sell your old junk on Facebook Marketplace? Meta’s embrace of generative AI means you may have to compete with bots, fake photos, and AI-generated listings . And when we shop for beauty products or view ads, synthetic media tools are taking our filtered and impossibly-idealized beauty standards to absurd and disturbing new places .

In all of these cases, generative AI tools further thumb the scales of power—saving companies money while placing a higher cognitive burden on regular people to determine what’s real and what’s not.

“I open up Pinterest and suddenly notice that half of my feed are these incredibly idealized faces of women that are clearly not real people,” said Brain. “It’s shoved into your face and into your feed, whether you searched for it or not.”

Currently, Slop Evader can be used to search pre-GPT archives of seven different sites where slop has become commonplace, including YouTube, Reddit, Stack Exchange, and the parenting site MumsNet. The obvious downside to this, from a user perspective, is that you won’t be able to find anything time-sensitive or current—including this very website, which did not exist in 2022. The experience is simultaneously refreshing and harrowing, allowing you to browse freely without having to constantly question reality, but always knowing that this freedom will be forever locked in time—nostalgia for a human-centric world wide web that no longer exists .

Of course, the tool’s limitations are part of its provocation. Brain says she has plans to add support for more sites, and release a new version that uses DuckDuckGo’s search indexing instead of Google’s. But the real goal, she says, is prompting people to question how they can collectively refuse the dystopian, inhuman version of the internet that Silicon Valley’s AI-pushers have forced on us.

“I don’t think browser add-ons are gonna save us,” said Brain. “For me, the purpose of doing this work is mostly to act as a provocation and give people examples of how you can refuse this stuff, to furnish one’s imaginary for what a politics of refusal could look like.”

With enough cultural pushback, Brain suggests, we could start to see alternative search engines like DuckDuckGo adding options to filter out search results suspected of having synthetic content (DuckDuckGo added the ability to filter out AI images in search earlier this year). There’s also been a growing movement pushing back against the new AI data centers threatening to pollute communities and raise residents’ electricity bills . But no matter what form AI slop-refusal takes, it will need to be a group effort.

“It’s like with the climate debate, we’re not going to get out of this shitshow with individual actions alone,” she added. “I think that’s the million dollar question, is what is the relationship between this kind of individual empowerment work and collective pushback.”

KDE Plasma 6.8 Will Go Wayland-Exclusive in Dropping X11 Session Support

Hacker News
www.phoronix.com
2025-11-26 15:44:09
Comments...
Original Article

KDE developers announced they are going "all-in on a Wayland future" and with the Plasma 6.8 desktop it will become Wayland-exclusive. The Plasma X11 session is going away.

In other words, Plasma 6.8 will remove Plasma X11 session support entirely, although X11 apps and games will continue to be supported via XWayland.

KDE developers report that "the vast majority of our users are already using the Wayland session" and that, longer term, dropping X11 session support will allow for new features, optimizations, and faster development.

Given the Plasma release timing, this means the Plasma X11 session will remain supported into early 2027 with the Plasma 6.7 series. The Plasma 6.7 release may end up seeing some extra bug-fix releases for X11 holdouts.

More details on Plasma 6.8 going Wayland-exclusive and other details via the KDE.org blog .

Chinatown's 'Secret' Sandwich Window Gets a Nifty New Dining Room

hellgate
hellgatenyc.com
2025-11-26 15:37:47
The Sandwich Board has a muffaletta, as well as chicken, duck, and breakfast sandwiches, and now you can even sit inside while you eat them....
Original Article

Michael Brafman was born and raised in Brooklyn, currently lives in Peter Cooper Village, and has been a professional chef in NYC for more than 30 years. He clocked his first kitchen job when he was 17, and has bounced between fancy places (like Jean-Georges and Gramercy Tavern) and corporate dining gigs (where the hours are so much more conducive to raising a family) ever since.

During an unemployment stint a couple of years ago though, Brafman was helping a buddy devise a sandwich menu for Solely Tea on Eldridge Street, when something clicked. "I'm just going to make all the stuff that inspires me," he remembers thinking. "There's no boundaries! To me, the most important thing is, I don't want to limit my inspiration to just making versions of other, existing sandwiches. It's more like, I look at plated food that I like, and try to translate those not-sandwich dishes into sandwiches."

Brafman's vision proved to be too mighty for the tea shop, so instead he opened his own place in September of 2024, a simple, semi-discreet ordering window just a few doors down from Solely called the Sandwich Board . "Our whole goal was to become a local, neighborhood staple," he said, and if you spend even a few minutes with Brafman on Eldridge, it's clear that he's succeeded. During our brief chat on the sidewalk outside the shop at least a half dozen people walking by gave Brafman a wave, or a fist-bump, or a "say hi to the family." He has strong "mayor-of-the-block" vibes, for sure.

Thing is though, for most of the past year the Sandwich Board didn't provide us non-locals with anywhere to eat. Yes, there were four chairs set up guerilla-style on the sidewalk, which was great when the weather was pleasant, but much less appealing in February and March. So when the folks running the Forever Mart snacks-and-curios shop in the adjacent space called it quits, Brafman knew the time had come to expand. A few weeks ago he unveiled his new dining room, complete with stools, high tops and counters, and a second, indoor ordering window.


MIT study finds AI can replace 11.7% of U.S. workforce

Hacker News
www.cnbc.com
2025-11-26 15:32:06
Comments...
Original Article

AI can already replace 11.7% of the U.S. workforce, MIT study finds

Massachusetts Institute of Technology on Wednesday released a study that found that artificial intelligence can already replace 11.7% of the U.S. labor market, or as much as $1.2 trillion in wages across finance, health care and professional services.

The study was conducted using a labor simulation tool called the Iceberg Index , which was created by MIT and Oak Ridge National Laboratory. The index simulates how 151 million U.S. workers interact across the country and how they are affected by AI and corresponding policy.

The Iceberg Index, which was announced earlier this year, offers a forward-looking view of how AI may reshape the labor market , not just in coastal tech hubs but across every state in the country. For lawmakers preparing billion-dollar reskilling and training investments, the index offers a detailed map of where disruption is forming down to the zip code.

"Basically, we are creating a digital twin for the U.S. labor market," said Prasanna Balaprakash, ORNL director and co-leader of the research. ORNL is a Department of Energy research center in eastern Tennessee, home to the Frontier supercomputer, which powers many large-scale modeling efforts.

The index runs population-level experiments, revealing how AI reshapes tasks, skills and labor flows long before those changes show up in the real economy, Balaprakash said.

The index treats the 151 million workers as individual agents, each tagged with skills, tasks, occupation and location. It maps more than 32,000 skills across 923 occupations in 3,000 counties, then measures where current AI systems can already perform those skills.

What the researchers found is that the visible tip of the iceberg — the layoffs and role shifts in tech, computing and information technology — represents just 2.2% of total wage exposure, or about $211 billion. Beneath the surface lies the total exposure, the $1.2 trillion in wages, and that includes routine functions in human resources, logistics, finance, and office administration. Those are areas sometimes overlooked in automation forecasts.

The index is not a prediction engine about exactly when or where jobs will be lost, the researchers said. Instead, it's meant to give a skills-centered snapshot of what today's AI systems can already do, and give policymakers a structured way to explore what-if scenarios before they commit real money and legislation.

The researchers partnered with state governments to run proactive simulations. Tennessee, North Carolina and Utah helped validate the model using their own labor data and have begun building policy scenarios using the platform.

Tennessee moved first, citing the Iceberg Index in its official AI Workforce Action Plan released this month. Utah state leaders are preparing to release a similar report based on Iceberg's modeling.

North Carolina state Sen. DeAndrea Salvador, who has worked closely with MIT on the project, said what drew her to the research is how it surfaces effects that traditional tools miss. She added that one of the most useful features is the ability to drill down to local detail.

"One of the things that you can go down to is county-specific data to essentially say, within a certain census block, here are the skills that is currently happening now and then matching those skills with what are the likelihood of them being automated or augmented, and what could that mean in terms of the shifts in the state's GDP in that area, but also in employment," she said.

Salvador said that kind of simulation work is especially valuable as states stand up overlapping AI task forces and working groups.

The Iceberg Index also challenges a common assumption about AI risk — that it will stay confined to tech roles in coastal hubs. The index's simulations show exposed occupations spread across all 50 states, including inland and rural regions that are often left out of the AI conversation.

To address that gap, the Iceberg team has built an interactive simulation environment that allows states to experiment with different policy levers — from shifting workforce dollars and tweaking training programs to exploring how changes in technology adoption might affect local employment and gross domestic product.

"Project Iceberg enables policymakers and business leaders to identify exposure hotspots, prioritize training and infrastructure investments, and test interventions before committing billions to implementation," the report says.

Balaprakash, who also serves on the Tennessee Artificial Intelligence Advisory Council, shared state-specific findings with the governor's team and the state's AI director. He said many of Tennessee's core sectors — health care, nuclear energy, manufacturing and transportation — still depend heavily on physical work, which offers some insulation from purely digital automation. The question, he said, is how to use new technologies such as robotics and AI assistants to strengthen those industries rather than hollow them out.

For now, the team is positioning Iceberg not as a finished product but as a sandbox that states can use to prepare for AI's impact on their workforces.

"It is really aimed towards getting in and starting to try out different scenarios," Salvador said.


ChatGPT firm blames boy’s suicide on ‘misuse’ of its technology

Guardian
www.theguardian.com
2025-11-26 15:31:58
OpenAI responds to lawsuit claiming its chatbot encouraged California teenager to kill himself The maker of ChatGPT has said the suicide of a 16-year-old was down to his “misuse” of its system and was “not caused” by the chatbot. The comments came in OpenAI’s response to a lawsuit filed against the ...
Original Article

The maker of ChatGPT has said the suicide of a 16-year-old was down to his “misuse” of its system and was “not caused” by the chatbot.

The comments came in OpenAI’s response to a lawsuit filed against the San Francisco company and its chief executive, Sam Altman, by the family of California teenager Adam Raine.

Raine killed himself in April after extensive conversations and “months of encouragement from ChatGPT”, the family’s lawyer has said.

The lawsuit alleges the teenager discussed a method of suicide with ChatGPT on several occasions, that it guided him on whether a suggested method would work, offered to help him write a suicide note to his parents and that the version of the technology he used was “rushed to market … despite clear safety issues”.

According to filings at the superior court of the state of California on Tuesday, OpenAI said that “to the extent that any ‘cause’ can be attributed to this tragic event” Raine’s “injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by [his] misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT”.

It said that its terms of use prohibited asking ChatGPT for advice about self-harm and highlighted a limitation of liability provision that states “you will not rely on output as a sole source of truth or factual information”.

OpenAI, which is valued at $500bn (£380bn), said its goal was to “handle mental health-related court cases with care, transparency, and respect” and that “independent of any litigation, we’ll remain focused on improving our technology in line with our mission”.

The blogpost added: “Our deepest sympathies are with the Raine family for their unimaginable loss. Our response to these allegations includes difficult facts about Adam’s mental health and life circumstances.

“The original complaint included selective portions of his chats that require more context, which we have provided in our response. We have limited the amount of sensitive evidence that we’ve publicly cited in this filing, and submitted the chat transcripts themselves to the court under seal.”

The family’s lawyer, Jay Edelson, called OpenAI’s response “disturbing” and said the company “tries to find fault in everyone else, including, amazingly, by arguing that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act”.

Earlier this month, OpenAI was hit by seven further lawsuits in California courts relating to ChatGPT, including an allegation it acted as a “suicide coach”.

A spokesperson for the company said at the time: “This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details. We train ChatGPT to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”

In August, OpenAI said it was strengthening the safeguards in ChatGPT when people engage in long conversations because experience had shown that parts of the model's safety training might degrade in these situations.

“For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards,” it said. “This is exactly the kind of breakdown we are working to prevent.”

OpenAI needs to raise at least $207B by 2030 so it can continue to lose money

Hacker News
ft.com
2025-11-26 15:06:37
Comments...
Original Article

FT Alphaville


The era-defining Xbox 360 reimagined gaming and Microsoft never matched it

Guardian
www.theguardian.com
2025-11-26 15:00:28
Two decades on, its influence still lingers, marking a moment when gaming felt thrillingly new again • Don’t get Pushing Buttons delivered to your inbox? Sign up here Almost 20 years ago (on 1 December 2005, to be precise), I was at my very first video game console launch party somewhere around Lond...
Original Article

Almost 20 years ago (on 1 December 2005, to be precise), I was at my very first video game console launch party somewhere around London's Leicester Square. The Xbox 360 arrived on 22 November 2005 in the US and 2 December in the UK, about three months after I got my first job as a junior staff writer on GamesTM magazine. My memories of the night are hazy because a) it was a worryingly long time ago and b) there was a free bar, but I do remember that DJ Yoda played to a tragically deserted dancefloor, and everything was very green. My memories of the console itself, however, and the games I played on it, are still as clear as an Xbox Crystal. It is up there with the greatest consoles ever.

In 2001, the first Xbox had muscled in on a scene dominated by Japanese consoles , upsetting the established order (it outsold Nintendo’s GameCube by a couple of million) and dragging console gaming into the online era with Xbox Live, an online multiplayer service that was leagues ahead of what the PlayStation 2 was doing. Nonetheless, the PS2 ended up selling over 150m to the original Xbox’s 25m. The Xbox 360, on the other hand, would sell over 80m, neck and neck with the PlayStation 3 for most of its eight-year life cycle (and well ahead in the US). It turned Xbox from an upstart into a market leader.

In a very un-Microsoft way, the Xbox 360 was cool. Its design was interesting, an inwards double curve described by its designers as an “inhale”, with a swappable front faceplate. It had a memorably Y2K startup animation and clean, futuristic menus that brought messaging, friends lists and music. I remember finding Microsoft’s marketing powerfully cringe at the time – witness this developer video , featuring former Microsoft entertainment boss J Allard and his infamous earring, in which a guy juggles while saying the words “Three symmetric cores”. But, despite that, the machine they built felt modern and exciting. The controller, too, white with its pops of colour, was such a tremendous improvement on the uncomfortably gigantic original Xbox controller that it’s become a design standard. I know people who will still only use wired Xbox 360 pads to play PC games.

Powerfully cringe … Microsoft’s Xbox 360 promo video.

As the first properly, seamlessly connected console, it brought a lot of things together to form a sense of gamer identity: playing different games online under one unified gamertag; messages and social features, as well as the inspired idea of achievements, which created a personal gaming history via the little challenges you completed in everything you played. (Sony would soon copy this with trophies.) Attaching a number to this, the gamerscore, was devilish genius, encouraging players to compete for ultimately meaningless clout, and creating a powerful incentive for people to stick with the console rather than buying games elsewhere. The Xbox 360 was the first console to understand that people stay where their friends are. If you had the choice between buying a game on PS3 or 360, you’d choose 360 because that’s where everyone else was playing.

By late 2006, when a complacent Sony released an overpriced and awkward-looking follow-up to the PlayStation 2, the Xbox 360 had already had a year to convert people to its vision for high-definition gaming. People had already built up a collection of games and an online identity that was tied to Xbox. The big third-party game publishers, who found the PS3’s proprietary technology awkward to develop for, had started to prioritise Xbox for multi-platform games. The 360 never cracked Japan, but in the rest of the world it became the default console, an extraordinary thing for Microsoft to achieve considering how comprehensively Sony had dominated the previous two generations with the PlayStation.

The weird, monochrome realm of Limbo. Photograph: TriplePoint

Xbox Live Arcade also helped to usher in the modern era of indie games. Between the 90s and the late 00s, publishers and bricks-and-mortar retailers largely controlled which games did and didn’t make it into players’ hands, especially on consoles. In 2008, Xbox Live Arcade started letting people download smaller, cheaper games direct to their consoles – no shop or publisher required. It did for console gaming what Steam would later do on PC, getting players comfortable with the idea of digital distribution. Games released via the arcade included Geometry Wars , Braid, Limbo, Bastion and, just as importantly, the best-ever digital version of Uno. I remember sinking many, many hours into Oblivion, Mass Effect and BioShock in my late teens, but I also eagerly awaited each new batch of Xbox Live Arcade games.

Looking back, the architects of the Xbox 360 really understood how and why people played games, and what they wanted from a next-generation console at the time. They understood how the internet could transform not just multiplayer gaming, but the social experience around games, and the way people found and bought them. This knowledge was apparently lost in a few short years, because when Microsoft announced the Xbox One in 2013, it was an absolute shitshow. By then, Microsoft apparently thought that people wanted to play games while watching sports picture-in-picture, as a mandatory connected camera watched your every move.

Microsoft has never again come close to market leadership in video games. A resurgent Sony took all the best lessons from the Xbox 360 and packaged them into the PlayStation 4, and then the Nintendo Switch arrived in 2017 and blew everything else out of the water. With Xbox now in distant third place in the waning console wars, it seems to see its future as a quasi-monopolistic video game subscription service, rather than a hardware maker. Series that defined the 360 era, such as Halo and Gears of War, are now playable on PC and PlayStation. Others, such as Fable, have been languishing for over a decade.

The 360 era was an exciting time in games, a period of great change and competition brought about by online gaming. The console market was a lot smaller back then, but also less predictable. Free-to-play games were not yet a thing, and gaming had yet to consolidate into the five established mega-franchises that now dominate everything, so there was still room for those "interesting, 7/10" B-games that sometimes proved even more memorable than the blockbusters. And, in bringing indie games to console players, it genuinely changed the trajectory of my gaming taste.

What to play

Bathe your brain … Geometry Wars: Retro Evolved. Photograph: Bizarre Creations/Steam

Writing about Xbox Live Arcade had me hankering for Geometry Wars: Retro Evolved, the spectacularly compulsive Xbox Live Arcade top-down shooter that looks like fireworks and feels like a sensory bath for your brain. So I downloaded it on Steam and was instantly hooked once again. Made by Bizarre Creations, of Project Gotham Racing fame, this game was constantly trading places with Uno as the 360's most downloaded digital game, and it still holds up beautifully. I'd forgotten how the grid background ripples beautifully when things explode, a little high-definition-era flair for a very arcade-era game.

Available on: Steam, Xbox (if you're happy to play the sequel instead)
Estimated playtime: 10 minutes to, well, 20 years

What to read

Obstinately difficult and painfully funny … Baby Steps. Photograph: Devolver Digital
  • I’ve been thinking a lot lately about difficult games , and what it is that keeps me coming back to them, which has led to reading quite a bit about challenge from a game designer’s perspective. And then this exceptionally succinct article by Raph Koster, veteran designer of Ultima Online and much else, dropped into my feed. It’s called Game Design is Simple, Actually , and it’s a must-read.

  • If you are more of an OG Xbox fan, you’ll be delighted to learn that Crocs have just launched an Xbox clog , inspired by the original Xbox’s black and green beast of a controller. It is fantastically ugly.

  • Poncle, makers of the Bafta game of the year-winning Vampire Survivors, have announced a new game, Vampire Crawlers , with a tongue-in-cheek trailer . This one's a blend of card game and old-school first-person dungeon crawler.

What to click

Question Block

Top this … Cyberpunk 2077. Photograph: CD Projekt

Last week, reader Jude asked me which video game world I would most want to live in (Cyrodiil from Elder Scrolls, obviously), and we threw the question back to you. We had so many delightful and/or deranged responses – here’s what you had to say.

“If you want somewhere to go get a beer, the world of Cyberpunk 2077 looks amazingly hard to top.” – Spence Bromage

“I know it’s silly but I was so enthralled with the ship in System Shock 2 , I wanted to live there!” – Charles Rouleau

“The Dragon Age universe in a heartbeat. Give me Fereldan and Denerim and yes, even Orlais. Give me a Skyhold to live in and a warble to manage, and I may never leave.” – Kateland Vernon

“Call me weird, but I’ll take Fallout 3 to live in. It had a massive impact on me, seeing pockets of humanity enduring the wasteland, with an overarching battle between good and evil.” – Toby Durnall

“I have strange one: Animal Well . The freedom to explore this self-contained little map full of hidden corners has meant that I have a really good sense of where I am on the map. Even though I’ve ‘done’ the game’s activities, I have had some strange comfort in the last two weeks after finishing the game, just in wandering the space for the sheer joy of it.” – Ben Gibb-Reid

If you’ve got a question for Question Block – or anything else to say about the newsletter – email us on pushingbuttons@theguardian.com .

Dirk Eddelbuettel: tidyCpp 0.0.8 on CRAN: Maintenance

PlanetDebian
dirk.eddelbuettel.com
2025-11-26 14:57:00
Another maintenance release of the tidyCpp package arrived on CRAN this morning, the first in about two years. The packages offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R which aims to make use of this robust (if awkward) C API a little easier and more con...
Original Article

tidyCpp 0.0.8 on CRAN: Maintenance

Another maintenance release of the tidyCpp package arrived on CRAN this morning, the first in about two years. The package offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R which aims to make use of this robust (if awkward) C API a little easier and more consistent. See the (now updated, see below) vignette for motivating examples.

This release contains mostly internal upkeep of the usual type: refreshing continuous integration, updating links, switching to Authors@R. But as we wrap the C API of R here too, changes made in R-devel this week affected the two reverse-dependency (i.e. "downstream") packages (of mine) using this. So we commented out the definitions for the five now-hidden accessors so that these downstream packages can build again under R-devel.

The NEWS entry follows.

Changes in tidyCpp version 0.0.8 (2025-11-25)

  • Updated continuous integration setup several times

  • Updated README.md documentation with link to R API site

  • Updated example snippets to use of Protect

  • Updated documentation in defines.h header

  • Updated internals.h header reflecting in R API changes

As it happens, hours after the release at CRAN a helpful issue ticket was opened detailing more than a handful of typos in the vignette. These have been corrected, and I am now exporting the vignette via GitHub Pages so the motivating examples vignette contains the corrections.

Thanks to my CRANberries , there is also a diffstat report for this release . For questions, suggestions, or issues please use the issue tracker at the GitHub repo . If you like this or other open-source work I do, you can now sponsor me at GitHub .

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

/code/tidycpp | permanent link

Solving the Partridge Packing Problem using MiniZinc

Lobsters
zayenz.se
2025-11-26 14:53:14
Comments...
Original Article

The Partridge Packing Problem is a packing puzzle that was originally proposed by Robert T. Wainwright at G4G2 (the Second Gathering for Gardner conference) in 1996. In this post we will model and solve the Partridge Packing Problem using MiniZinc . The inspiration was Matt Parker’s fun video on the problem .

Packing problems are a classic use-case for combinatorial solvers. In fact, the original paper that introduced the idea of global constraints for constraint programming, “Introducing global constraints in CHIP” by Beldiceanu and Contejean (1994), included the so-called diffn constraint for packing problems. 1

This post assumes some familiarity with MiniZinc. For some background on MiniZinc, see the previous posts in the collection. The puzzle will be explained fully, and no specific knowledge of packing problems is assumed.

The Partridge Packing Problem is a packing problem for squares in a larger square. For size \(n\), the goal is to pack:

  • 1 square of size \(1 \times 1\),
  • 2 squares of size \(2 \times 2\),
  • and so on, up to \(n\) squares of size \(n \times n\),

into a square of size \(\frac{n(n+1)}{2} \times \frac{n(n+1)}{2}\). 2 The name comes from the song “The Twelve Days of Christmas,” where the first gift is a partridge in a pear tree, then two turtle doves, and so on going up to twelve drummers drumming.

The sum of the areas of all the smaller squares equals the area of the larger square:

\[ \sum_{i=1}^{n} i \cdot i^2 = \frac{n(n+1)}{2} \cdot \frac{n(n+1)}{2} \]
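
A quick numeric check of this identity (in Python, purely for illustration; the model itself is written in MiniZinc):

def areas_match(n: int) -> bool:
    parts_area = sum(i * i * i for i in range(1, n + 1))  # i squares of area i*i each
    side = n * (n + 1) // 2                               # side length of the big square
    return parts_area == side * side

print(all(areas_match(n) for n in range(1, 34)))  # True for every size checked here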

But just because the area matches does not mean that it is possible. It is known that sizes 2 to 7 have no solution, and sizes from 8 to 33 have at least one solution. The problem becomes increasingly difficult as \(n\) grows larger, as the number of parts grows quadratically.

Let’s look at the first interesting size with a solution, size 8. Here are all the parts to pack. 3

These parts can be packed in a square of size \(36 \times 36\), where \(36\) comes from \(\frac{8 \times 9}{2} = 36\), and here is one such solution.

This visualization shows how all the squares pack together perfectly to fill the 36×36 grid.

As mentioned, for sizes below 8 the problem is infeasible (except 1, which is the trivial case). Consider size 2, which includes 1 part of size \(1 \times 1\) and 2 parts of size \(2 \times 2\) that should be packed in a \(3 \times 3\) square. As can be seen below, while the sum of the areas of the parts equals the area to pack in, there is no way to put the two larger squares on the area without them overlapping.

Following previous parts in this collection, we will split up the model into parts. In this section, the first basic model will be presented, including the data, the viewpoint, the basic constraints, and the search and output.

In the next section, improvements to the model will be discussed. Several of the improvements were suggested by Mats Carlsson , and made the model a lot better and faster.

The problem is parameterized by a single value n n , which determines both the number of different square sizes and the size of the target square.

int: n;
set of int: N = 1..n;

% Triangular number of n is both the total number of parts and
% the board size length
int: triangular_n = (n * (n+1)) div 2;

enum Parts = P(1..triangular_n);
set of int: Pos = 1..triangular_n;

array[Parts] of N: sizes = array1d(Parts, reverse([
  size
  | size in N, copy in 1..size
]));

constraint assert(sum(s in sizes) (s * s) == triangular_n * triangular_n,
  "The squares fill the board completely");

The computed value triangular_n is the triangular number of the size parameter n . This is both the total number of parts to pack as well as the side length of the board where the parts are to be placed. The enum Parts is used to separate the set of parts from the Pos positions to place them at. 4

The sizes are generated in increasing order but are reversed, resulting in the larger boxes being first in the array (for n = 3, sizes becomes [3, 3, 3, 2, 2, 1]). This is useful since many solvers will use the input order as a tie-breaker for heuristics, promoting packing hard-to-pack boxes (i.e., the larger ones) first.

Similar to the LinkedIn Queens post, we can use instance files to set the parameter n . However, when running the model from the MiniZinc IDE, the user is prompted for all unknown values, and for a single integer this is very easy to supply.

There are many ways that one can model a packing problem. The most common way for box packings is to set one corner as the reference point, and to use the position of that reference point as the position for the box. The most natural expression for this is to use two arrays representing the x and y coordinates of the bottom-left corner of each square.

% Main variables for placement of parts, the x and y coordinate
array[Parts] of var Pos: x;
array[Parts] of var Pos: y;

MiniZinc has a feature where records can be used to structure data, and using that, we could declare the variables like this instead.

% Main variables for placement of squares, the x and y coordinate
array[Parts] of record(var Pos: x, var Pos: y): box;

However, there are several places in the model where a constraint is formulated over the x variables only, and then over the y variables. Therefore, it is easier to use two arrays instead of a single one. 5

The base variables allow placement of the reference point anywhere inside the packing area. However, the allowed positions need to be adjusted based on the size of a part. This is done by adjusting the upper bounds of the x and y values based on the size, ensuring that the far edge of the part is also in the Pos set.

constraint :: "Parts fit in x direction"

forall(p in Parts) (

x[p] + sizes[p] - 1 in Pos

);

constraint :: "Parts fit in y direction"

forall(p in Parts) (

y[p] + sizes[p] - 1 in Pos

);

In the above (and the rest of the constraint here), constraints are named using the :: string annotation. These names, such as "Parts fit in x direction" , are translated into the FlatZinc format and are useful for debugging and for tools such as findMUS .

The main constraint for a packing problem is that no parts should overlap. The classic way to ensure this is to use the no-overlap constraint, which for historic reasons is named the diffn constraint in MiniZinc.

constraint :: "No-overlap packing constraint"

diffn(x, y, sizes, sizes);

The arguments to diffn are the x and y positions of the rectangles, and their extent in the x and y direction (that is, the width and the height). Since the parts are squares, their extents are the same in both directions.

This is a satisfaction problem and we will leave the search strategy to the solver.

There are two output blocks for this model. The first block will print an ASCII-art representation of the packing to the standard output.

/**
 * Get the unique singleton value in the supplied set, assert if it is not a singleton.
 */
function $$T: singleton_value(set of $$T: values) =
  assert(card(values) == 1, "Values must have exactly one element, was \(values)",
    min(values)
  );

/**
 * Return a character representation of the value v.
 *
 * Support values v in 1..36.
 */
function string: to_char(int: v) =
  if v in 0..9 then
    "\(v)"
  else
    ["a", "b", "c", "d", "e", "f", "g", "h",
     "i", "j", "k", "l", "m", "n", "o", "p",
     "q", "r", "s", "t", "u", "v", "w",
     "x", "y", "z"][v-9]
  endif;

% Base command-line output mapping the placed parts to their sizes.
%
output [
  let {
    any: fx = fix(x),
    any: fy = fix(y),
    any: board = array2d(Pos, Pos, [
      let {
        Parts: part_id = singleton_value({p | p in Parts where
          tx in fx[p]..(fx[p] + sizes[p]-1) /\
          ty in fy[p]..(fy[p] + sizes[p]-1)
        })
      } in
        to_char(sizes[part_id])
      | tx in Pos, ty in Pos
    ])
  } in
  concat(tx in Pos) (
    concat(board[tx, ..]) ++ "\n"
  )
];

While long, this code is reasonably straightforward. First, there are two helper functions: singleton_value , which transforms a set that is known to be just one element to the element, and to_char , which transforms a size to a character that represents it in base 36 (0-9 and a-z).

Next, a matrix is constructed where for each position, the part that is covering that position is found, and the size of that part is used to get the character. Finally, this matrix is concatenated into a set of strings.

The second output-block uses a feature of the MiniZinc IDE where custom visualizations can be used . These work by starting a webserver serving a webpage that receives the solutions as they are produced. For this problem, the existing vis_geost_2d visualization is used.

output vis_geost_2d(
  % Internal x and y offset of each part, 0 since each part is its own shape
  [p:0 | p in Parts], [p:0 | p in Parts],
  % Size of each part in x and y direction
  sizes, sizes,
  % Map each shape to the corresponding single part
  [p:{p} | p in Parts],
  % Reference points for each shape
  x, y,
  % The kind of each part
  array1d(Parts, Parts)
);

The vis_geost_2d family of visualizations can show packing problems with shapes made out of rectangles using internal offsets to a common shape reference point, matching the input for the geost constraint . As each part is just a square, each kind of shape will be a single part, and the internal offsets are just 0. Note that the construction [p:0 | p in Parts] will create an array with Parts as the index set, skipping the p: part would create an array with 1..card(Parts) as the index set. An alternative way to write this is to coerce the base array to the right index set: array1d(Parts, [0 | p in Parts]) .

In all the tests here, we will use OR-Tools CP-SAT 9.14 bundled with MiniZinc IDE 2.9.4 on a MacBook Pro M1 Max with 64 GiB of memory. The configuration is set to use 10 threads (same as the number of cores in the CPU), and use free search.

As mentioned, sizes 2 to 7 are unsatisfiable, so the smallest interesting problem with a solution is size 8. However, this base model is not efficient at all. Finding a solution took about 3 and a half hours in one run, which makes it not very practical.

777777744448888888888888888333666666
777777744448888888888888888333666666
777777744448888888888888888333666666
777777744448888888888888888333666666
777777744448888888888888888333666666
777777744448888888888888888333666666
777777744448888888888888888227777777
444433344448888888888888888227777777
444433322777777777777776666667777777
444433322777777777777776666667777777
444455555777777777777776666667777777
444455555777777777777776666667777777
444455555777777777777776666667777777
444455555777777777777776666667777777
444455555777777777777776666667777777
777777788888888888888886666667777777
777777788888888888888886666667777777
777777788888888888888886666667777777
777777788888888888888886666667777777
777777788888888888888886666667777777
777777788888888888888885555588888888
777777788888888888888885555588888888
666666188888888888888885555588888888
666666555555555577777775555588888888
666666555555555577777775555588888888
666666555555555577777775555588888888
666666555555555577777775555588888888
666666555555555577777775555588888888
888888888888888877777775555588888888
888888888888888877777775555588888888
888888888888888866666666666688888888
888888888888888866666666666688888888
888888888888888866666666666688888888
888888888888888866666666666688888888
888888888888888866666666666688888888
888888888888888866666666666688888888
----------
==========

%%%mzn-stat: nSolutions=1
%%%mzn-stat-end
%%%mzn-stat: boolVariables=1023
%%%mzn-stat: failures=88389736
%%%mzn-stat: objective=0
%%%mzn-stat: objectiveBound=0
%%%mzn-stat: propagations=3870695549
%%%mzn-stat: solveTime=12697.9
%%%mzn-stat-end

Finished in 3h 31m 38s.

While the ASCII art is nice, the visualization is much easier to understand. Below you can see first the visualization from MiniZinc, and then the visualization for this post where squares of equal size get the same color and all squares are marked with their size.

The above model is the base, with just the constraints that are needed for a correct solution. In this part, we will add additional constraints that improve the model significantly. These constraints are of two types: implied constraints and symmetry breaking constraints . An implied constraint is true in every solution; adding it strengthens the model by giving the solver additional propagation and thus more deductions. A symmetry breaking constraint is used to reduce the number of solutions by ruling out symmetric variants of the same solution.

Symmetries often arise from modeling decisions, but sometimes also from the problem itself. For example, in the classic 8-queens problem there is a symmetry from the problem definition: the chessboard for a single solution can be rotated and mirrored diagonally to create 8 different solutions. If the model were to name the queens, then that would introduce a symmetry for which queen is placed where. This symmetry would occur because of modeling decisions, not from the problem itself where queens are indistinguishable. 6

We will use a feature of MiniZinc to mark constraints with their type by enclosing the constraint in calls to implied_constraint and symmetry_breaking_constraint. While not useful for many solvers, some (such as Constraint-Based Local Search solvers) can use this information to decide which constraints to soften and which constraints to use for moves.

For each improvement, we will test it to see the effects. Note that the configuration used, OR-Tools CP-SAT with 10 threads, is not deterministic. A single run might not be representative of all runs, but in most cases it gives a good indication.

A classic implied constraint for packing problems is to add a cumulative profile constraint for the x and y direction. Cumulative is a classic scheduling constraint, and is typically used for tasks that use some set of resources while they are active. Below is an example of 8 tasks that are scheduled, with a capacity limit of 8 and varying amounts of usage at different points.

Note that the tasks do not have a fixed y-position; they only have a start, an end, and a resource usage (height). This means that tasks like the green task 4 and the purple task 6 are not shown as fixed rectangles but are staggered depending on how many other tasks overlap them. For the packing case, looking along one dimension, the orthogonal dimension can be seen as a resource and the squares as tasks to be scheduled. This is a classic implied constraint that can strengthen the propagation, and OR-Tools CP-SAT even has several parameters that can be set to include cumulative-style reasoning. Here, the cumulative constraint is instead added as a MiniZinc constraint so that it can be used with all the different solvers.
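For reference, the standard cumulative global takes start times, durations, resource requirements, and a capacity bound. Below is a tiny standalone sketch of the packing reading (toy data and names, not from the model), followed by the actual constraints.

include "cumulative.mzn";
% Two squares of sizes 2 and 1 viewed along one axis of a toy board of side 3:
% the start times are the coordinates along that axis, the durations and the
% resource requirements are both the sizes, and the capacity is the side length.
array[1..2] of int: toy_sizes = [2, 1];
array[1..2] of var 1..3: toy_x;
constraint forall(p in 1..2) (toy_x[p] + toy_sizes[p] - 1 <= 3);
constraint cumulative(toy_x, toy_sizes, toy_sizes, 3);
solve satisfy;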

constraint :: "Cumulative profile of parts along the x axis." implied_constraint(

cumulative(x, sizes, sizes, card(Pos))

);

constraint :: "Cumulative profile of parts along the y axis." implied_constraint(

cumulative(y, sizes, sizes, card(Pos))

);

Running this, however, does not give better results at all. The simple model took three and a half hours, but this model takes more than an hour more!

%%%mzn-stat: boolVariables=2184

%%%mzn-stat: failures=99470613

%%%mzn-stat: objective=0

%%%mzn-stat: objectiveBound=0

%%%mzn-stat: propagations=7359764734

%%%mzn-stat: solveTime=17031.9

%%%mzn-stat-end

Finished in 4h 43m 52s.

Unfortunately, this type of behavior is not uncommon when a learning system with automatic heuristics and randomization is combined with changes to a model. This shows the importance of benchmarking and testing all changes to see how the model behaves. Even well-known improvements might make it worse.

The cumulative constraint above adds to the reasoning, but it is also a lot weaker than it could have been. The Partridge Packing Problem is a tight packing, where the board is fully covered. The cumulative constraint “just” says that too much area can’t be used. Consider instead a constraint that, for each row and column, checks which parts overlap it and requires that the sum of the sizes of overlapping parts equals the board size exactly.

% The sizes of the parts that overlap rc in the xy direction

% must equal the number of positions exactly.

predicate exact_fill(array[Parts] of var Pos: xy, Pos: rc) =

let {

% on_rc[p] is true iff the part overlaps the row/column rc

array[Parts] of var bool: on_rc = [

rc-sizes[p] < xy[p] /\ xy[p] <= rc

| p in Parts

]

} in

sum(p in Parts) (

sizes[p] * on_rc[p]

) = card(Pos);

constraint :: "Exact profile of parts along the x axis." implied_constraint(

forall(rc in Pos) (

exact_fill(x, rc)

)

);

constraint :: "Exact profile of parts along the y axis." implied_constraint(

forall(rc in Pos) (

exact_fill(y, rc)

)

);

Here, a utility predicate is added so that the right sum can be constructed for each row and for each column. The exact_fill predicate takes the positions of all the parts along either the x or y axis, and a specified row or column rc. Inside, a local array on_rc of Boolean variables, indexed by Parts, indicates whether each part overlaps that row or column; for example, a part of size 3 placed at position 5 covers positions 5, 6, and 7, so it overlaps rc = 6 because 6 - 3 < 5 <= 6. Multiplying each indicator by the size of its part gives how much of the row or column is used, and that sum is required to equal the cardinality of the Pos set.

This addition is a huge improvement over the base model! A solution is found in less than 4 minutes instead of 3 and a half hours.

%%%mzn-stat: boolVariables=3960

%%%mzn-stat: failures=6155170

%%%mzn-stat: objective=0

%%%mzn-stat: objectiveBound=0

%%%mzn-stat: propagations=1762225146

%%%mzn-stat: solveTime=225.119

%%%mzn-stat-end

Finished in 3m 45s.

This is starting to look like a viable model to use. Checking whether the cumulative constraint might help now shows that it is still not a good addition: it increases the search time to 4 minutes 33 seconds.

%%%mzn-stat: boolVariables=3960

%%%mzn-stat: failures=7544566

%%%mzn-stat: objective=0

%%%mzn-stat: objectiveBound=0

%%%mzn-stat: propagations=2046103308

%%%mzn-stat: solveTime=272.594

%%%mzn-stat-end

Finished in 4m 33s.

The work that Mats Carlsson and Nicolas Beldiceanu did creating the geost constraint describes several additional deductions that can be made based on the placement of boxes. The core insight in this case is that since the board must be filled completely, every area that a placement creates must be fillable by the remaining parts. Consider the packing below, where a part has been placed on the board close to the edge.

The red area next to the border has a width of 2 and a height of 6. It can only be packed with parts that are at most size 2, and a total area of $2 \cdot 6 = 12$ needs to be available. However, for parts up to size 2, this is not possible since there is one $1 \times 1$ square and two $2 \times 2$ squares, for a total area of 9. Trying to fill up the area between the size 6 part and the border would look like this.

Given the above reasoning, it is clear that any part of size 6 must either be placed next to a border, or at a distance of more than 2 from a border. In general, for a given size $n$, the sum of the areas of the smaller parts (up to size $n-1$) is the square of the triangular number for $n-1$. This reasoning can be generalized and implemented with the following MiniZinc code.
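To see why the triangular-number formula holds: there are $k$ parts of size $k$, each with area $k^2$, so the total area available from parts of size at most $n-1$ is

$$\sum_{k=1}^{n-1} k \cdot k^2 = \sum_{k=1}^{n-1} k^3 = \left(\frac{(n-1)n}{2}\right)^2 = T_{n-1}^2,$$

the square of the triangular number $T_{n-1}$. The available_area function in the code below computes exactly this.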

% The amount of available area from parts up to given size

function int: available_area(int: size) =

let {

% t is the triangular number of size

int: t = (size * (size + 1)) div 2;

} in

t * t;

constraint :: "Edge-placement limits" implied_constraint(

forall(size in N where size > 1) (

let {

% Find the smallest distance from the edge that is possible to place.

int: min_distance_from_edge = min({d | d in 1..size

where d * size > available_area(d)}),

% Placing in these positions is not packable for a full packing

set of int: forbidden_placements =

% Positions at low placement indices

2..(1+min_distance_from_edge)

union

% positions at high placement indices

max(Pos)-size-min_distance_from_edge..<max(Pos)-size,

set of Pos: allowed_placements = Pos diff forbidden_placements

} in

forall(p in Parts where sizes[p] = size) (

x[p] in allowed_placements

/\

y[p] in allowed_placements

)

));

For each size of part, there is a custom calculation of the allowed_placements for parts of that size. Since the parts and the board are squares, the same set can be used for both x and y placements. The calculation of min_distance_from_edge uses the idea that if a part sits d steps away from the edge, then the gap next to the edge has area d times the part's size and can only be filled by parts of size at most d, so available_area(d) must cover it for the placement to be packable; for example, available_area(2) = 9, matching the hand calculation above of one $1 \times 1$ and two $2 \times 2$ squares. Using this, the set of forbidden_placements close to the edges is computed, and allowed_placements is its complement with respect to Pos. This is a conservative approximation of packability: if this requirement is not satisfied, then there is no packing that would work.

Adding this constraint reduces the time significantly again. Running it three times, the time varies between 45 and 105 seconds due to the stochastic nature of the solving process. The median run has the following statistics.

%%%mzn-stat: objective=0

%%%mzn-stat: objectiveBound=0

%%%mzn-stat: boolVariables=3812

%%%mzn-stat: failures=2251053

%%%mzn-stat: propagations=558765030

%%%mzn-stat: solveTime=73.3695

%%%mzn-stat: nSolutions=1

%%%mzn-stat-end

Finished in 1m 13s.

In the original geost work, this type of reasoning is not limited to placements close to an edge; it applies to all the different kinds of areas induced during the search. That is much stronger reasoning, but it would not be readily expressible as fixed constraints; it requires careful implementation as a propagator inside a solver. SICStus Prolog has the original and probably most advanced implementation of geost, with a large domain-specific language for expressing placements.

Symmetry breaking is often crucial in many problems. Here, the focus is a symmetry introduced by the modeling: parts of the same size should be indistinguishable. The three parts of size $3 \times 3$ are the same, but since they have different identifiers they look different to the solver. A common way to break such symmetries is to introduce an ordering among the alternatives.

constraint :: "Equal size squares symmetry breaking" symmetry_breaking_constraint(

forall (size in N) (

let {

set of Parts: PartsWithSize = {p | p in Parts where sizes[p] == size},

set of int: P = 1..card(PartsWithSize),

array[1..2, P] of var Pos: placements = array2d(1..2, P, [

[x[p], y[p]][x_or_y]

| x_or_y in 1..2, p in PartsWithSize

])

} in

lex_chain_less(placements)

)

);

For each size, the set of parts with that size is collected. Then, a matrix of placements is constructed where each column holds the x and y coordinates of one part of that size. 7 The lex_chain_less constraint orders these columns in strictly increasing lexicographic order.
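As a standalone sketch of what this ordering means (a hypothetical toy model, not part of the article's model):

include "globals.mzn";
% Three interchangeable parts on a toy 1..10 board: each column of the matrix
% holds one part's (x, y) placement, and lex_chain_less forces the columns to
% be strictly increasing in lexicographic order.
array[1..2, 1..3] of var 1..10: toy_placements;
constraint lex_chain_less(toy_placements);
% For two columns a and b this is equivalent to:
%   x_a < x_b  \/  (x_a = x_b /\ y_a < y_b)
solve satisfy;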

Adding the symmetry breaking constraint reduces the solving time significantly again. In 10 runs, it is between 0.8 and 3.6 seconds, with an average of 1.9 seconds. The median run has the following statistics.

%%%mzn-stat: objective=0

%%%mzn-stat: objectiveBound=0

%%%mzn-stat: boolVariables=3700

%%%mzn-stat: failures=15018

%%%mzn-stat: propagations=8614720

%%%mzn-stat: solveTime=1.4264

%%%mzn-stat: nSolutions=1

%%%mzn-stat-end

Finished in 1s 830msec.

As mentioned above, the board has 8 symmetries (four rotations times flipping), and it is common to break them in many puzzle cases. Matt Parker argues in the video that for the purposes of this puzzle, they should be kept in. Also, it can be quite tricky to combine symmetry breaking techniques. For any way to order the symmetries of the board, that ordering would have to work jointly with the ordering of the parts.

For testing, you can download the full MiniZinc model. Remember to set OR-Tools CP-SAT to use at least as many threads as you have cores, and to also check the free search box.

In all the above examples, size 8 has been the instance solved. Using the developed model, let’s try larger sizes and see how the performance scales.

In Matt Parker’s video that inspired this post, size 9 was the instance that was discussed. This is because size 9 has a side length of 45, and thus the area of the board is $45^2 = 2025$, which is the year the video was published.

Remember, even though the step from 8 to 9 sounds small, the number of parts grows from 36 to 45. In a couple of tests, it took between 61 and 86 seconds to solve size 9.

%%%mzn-stat: objective=0

%%%mzn-stat: objectiveBound=0

%%%mzn-stat: boolVariables=6060

%%%mzn-stat: failures=651221

%%%mzn-stat: propagations=323892330

%%%mzn-stat: solveTime=61.1534

%%%mzn-stat: nSolutions=1

%%%mzn-stat-end

Finished in 1m 1s.

At size 10, there are 55 parts to pack on a board of 3025 squares, increasing the difficulty even more. Here OR-Tools CP-SAT is starting to struggle a bit more, and in two runs took about 13 and a half minutes. Here are the statistics for one of them.

%%%mzn-stat: objective=0

%%%mzn-stat: objectiveBound=0

%%%mzn-stat: boolVariables=9319

%%%mzn-stat: failures=6516108

%%%mzn-stat: propagations=3208777732

%%%mzn-stat: solveTime=804.504

%%%mzn-stat: nSolutions=1

%%%mzn-stat-end

Finished in 13m 25s.

As can be seen below, the two solutions found are quite different from each other.

Turning it up to eleven, it took OR-Tools CP-SAT a bit more than 51 minutes to solve the problem. With 66 parts and an area of 4356, it is significantly larger than size 10.

%%%mzn-stat: objective=0

%%%mzn-stat: objectiveBound=0

%%%mzn-stat: boolVariables=13850

%%%mzn-stat: failures=15611863

%%%mzn-stat: propagations=10240280820

%%%mzn-stat: solveTime=3078.61

%%%mzn-stat: nSolutions=1

%%%mzn-stat-end

Finished in 51m 19s.

Finding a solution of size 12 turned out to be too hard for the model. Running OR-Tools CP-SAT for 12 hours gave no result.

In the above tests, only the OR-Tools CP-SAT solver has been used. This is both because initial experiments showed it was probably the best solver for this and because it has been dominant in the MiniZinc Challenge for more than a decade. A benefit of MiniZinc is that many different solvers can be tested, so let’s look at some alternatives.

The new Huub solver was quite impressive in this year’s MiniZinc Challenge, coming in third after OR-Tools CP-SAT and Chuffed in the Open category. Huub uses an external SAT solver and runs single-threaded. Running the model for size 8 with free search for ten rounds solves it in between 7.8 and 7.9 seconds, which is remarkably stable.

%%%mzn-stat: solveTime=7.474344917

%%%mzn-stat: failures=103390

%%%mzn-stat: peakDepth=4796

%%%mzn-stat: propagations=20878861

%%%mzn-stat: restarts=145

%%%mzn-stat: oracleDecisions=123909

%%%mzn-stat: userDecisions=0

%%%mzn-stat-end

Finished in 7s 839msec.

This looked very promising, but when increasing to size 9, Huub timed out after 12 hours.

Pumpkin is, like Huub, an LCG solver, but it is more focused on proof logging. It is also single-threaded, but uses a custom internal SAT solver. Here, solving size 8 took around 2 minutes (2 test runs).

%%%mzn-stat: nodes=838498

%%%mzn-stat: failures=427421

%%%mzn-stat: restarts=1706

%%%mzn-stat: variables=12219

%%%mzn-stat: propagators=14931

%%%mzn-stat: propagations=422962879

%%%mzn-stat: peakDepth=570

%%%mzn-stat: nogoods=427421

%%%mzn-stat: backjumps=307815

%%%mzn-stat: solveTime=147.569079042

%%%mzn-stat-end

Finished in 2m 28s.

While size 8 was significantly slower for Pumpkin than for Huub, Pumpkin could actually solve size 9 in around 10 minutes.

%%%mzn-stat: nodes=3080585

%%%mzn-stat: failures=1547051

%%%mzn-stat: restarts=5243

%%%mzn-stat: variables=19208

%%%mzn-stat: propagators=23411

%%%mzn-stat: propagations=1673243823

%%%mzn-stat: peakDepth=925

%%%mzn-stat: nogoods=1547051

%%%mzn-stat: backjumps=1108974

%%%mzn-stat: solveTime=642.870503792

%%%mzn-stat-end

Finished in 10m 43s.

Running size 10 with Pumpkin failed with an unspecified error after around 5 hours.

None of the remaining solvers tested turned out to be really useful for this problem. Chuffed is often a very good solver with really great automatic search heuristics, but sometimes it doesn’t work that well. Here, it took just over two hours to find a solution to the base size 8 packing. Like Huub and Pumpkin, Chuffed is single-threaded.

%%%mzn-stat: nodes=123635519

%%%mzn-stat: failures=60596049

%%%mzn-stat: restarts=73648

%%%mzn-stat: variables=34838

%%%mzn-stat: intVars=2734

%%%mzn-stat: boolVariables=32102

%%%mzn-stat: propagators=5521

%%%mzn-stat: propagations=96201866795

%%%mzn-stat: peakDepth=272

%%%mzn-stat: nogoods=60596049

%%%mzn-stat: backjumps=59399516

%%%mzn-stat: peakMem=0.00

%%%mzn-stat: time=7432.788

%%%mzn-stat: initTime=0.078

%%%mzn-stat: solveTime=7432.710

%%%mzn-stat: baseMem=0.00

%%%mzn-stat: trailMem=0.12

%%%mzn-stat: randomSeed=-499155368

%%%mzn-stat-end

Finished in 2h 3m 53s.

Gecode is a competent classical constraint programming solver, and as such it doesn’t really have any effective automatic search heuristics. This is clearly visible for this problem, where it fails to find a solution within 12 hours.

%%%mzn-stat: initTime=0.0371863

%%%mzn-stat: solveTime=43199.8

%%%mzn-stat: solutions=0

%%%mzn-stat: variables=9550

%%%mzn-stat: propagators=9245

%%%mzn-stat: propagations=2213651468397

%%%mzn-stat: nodes=5060871988

%%%mzn-stat: failures=2530435922

%%%mzn-stat: restarts=0

%%%mzn-stat: peakDepth=108

%%%mzn-stat-end

Finished in 12h.

Since Gecode can really benefit from a search heuristic, I tried adding one. This heuristic uses the well-known left-bottom placement strategy, prioritizing placement of larger parts before placing smaller parts. This did not help.

% The position of a Part is essentially the index of the square.

array[Parts] of var int: position = [

x[p] * card(Pos) + y[p]

| p in Parts

];

% Search by placing the part with the smallest position/index at that position,

% breaking ties by input order (where larger parts are earlier).

solve :: int_search(position, smallest, indomain_min)

satisfy;

Finally, HiGHS is a modern open source MIP solver. Unfortunately, it also fails to solve this problem in 12 hours.

As mentioned above, the original development of the geost constraint was done in the SICStus Prolog solver. However, the MiniZinc model here does not translate to the geost constraint, nor is there support for using its specialized settings.

Running the base MiniZinc model takes more than 4 hours to solve size 8.

%%%mzn-stat: initTime=0.075

%%%mzn-stat: solveTime=15488.9

%%%mzn-stat: propagations=32188610389

%%%mzn-stat: entailments=17947678933

%%%mzn-stat: prunings=58276738702

%%%mzn-stat: backtracks=381834851

%%%mzn-stat: restarts=0

%%%mzn-stat: solutions=1

%%%mzn-stat: optimalities=0

%%%mzn-stat: propagators=6651

%%%mzn-stat: variables=16508

%%%mzn-stat-end

Finished in 4h 24m 11s.

However, in the SICStus distribution, there is a partridge packing example with suitable geost arguments and a custom search predicate. Here we get the chance to compare a generic model with one that is customized for a solver, using that particular solver’s special features.

SICStus 4.10.1 (arm64-darwin-21.0.0): Sat Jun 28 12:23:49 CEST 2025

Licensed to Mikael Zayenz Lagerkvist

| ?- compile(user).

% compiling user...

| call_time(G,T) :-

statistics(runtime,[T0|_]),

G,

statistics(runtime,[T1|_]),

T is T1 - T0.

| ^D

% compiled user in module user, 2 msec 768 bytes

yes

| ?- ['lib/sicstus-4.10.1/library/clpfd/examples/partridge.pl'].

% compiling /Users/zayenz/solvers/sicstus/lib/sicstus-4.10.1/library/clpfd/examples/partridge.pl...

% module partridge imported into user

% loading /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/lists.po...

% module lists imported into partridge

% loading /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/types.po...

% module types imported into lists

% loaded /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/types.po in module types, 1 msec 6416 bytes

% loaded /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/lists.po in module lists, 3 msec 204320 bytes

% loading /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/trees.po...

% module trees imported into partridge

% loaded /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/trees.po in module trees, 1 msec 16336 bytes

% loading /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/clpfd.po...

% module clpfd imported into partridge

% loading /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/atts.po...

% module attributes imported into clpfd

% module types imported into attributes

% loaded /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/atts.po in module attributes, 1 msec 32704 bytes

% loading /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/fvar.po...

% loading /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/ordsets.po...

% module ordsets imported into fvar

% loaded /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/ordsets.po in module ordsets, 1 msec 50416 bytes

% module attributes imported into fvar

% module attributes imported into fvar

% loaded /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/fvar.po in module fvar, 1 msec 65376 bytes

% loading /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/avl.po...

% module avl imported into clpfd

% loaded /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/avl.po in module avl, 1 msec 68848 bytes

% module lists imported into clpfd

% module ordsets imported into clpfd

% module trees imported into clpfd

% module types imported into clpfd

% loading /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/terms.po...

% module terms imported into clpfd

% module types imported into terms

% module avl imported into terms

% loaded /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/terms.po in module terms, 1 msec 52656 bytes

% loading /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/timeout.po...

% module timeout imported into clpfd

% loaded /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/timeout.po in module timeout, 0 msec 1536 bytes

% loading /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/ugraphs.po...

% module ugraphs imported into clpfd

% module ordsets imported into ugraphs

% module lists imported into ugraphs

% module avl imported into ugraphs

% loading /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/random.po...

% module random imported into ugraphs

% module types imported into random

% loading foreign resource /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/arm64-darwin-21.0.0/random.bundle in module random

% loaded /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/random.po in module random, 1 msec 31008 bytes

% module types imported into ugraphs

% loaded /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/ugraphs.po in module ugraphs, 2 msec 104000 bytes

% loading foreign resource /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/arm64-darwin-21.0.0/clpfd.bundle in module clpfd

% module attributes imported into clpfd

% loaded /Users/zayenz/solvers/sicstus/bin/sp-4.10.1/sicstus-4.10.1/library/clpfd.po in module clpfd, 19 msec 2004432 bytes

% compiled /Users/zayenz/solvers/sicstus/lib/sicstus-4.10.1/library/clpfd/examples/partridge.pl in module partridge, 29 msec 2250112 bytes

yes

| ?- call_time(partridge(8), T8).

placement space = 36x36

rectangles r(X,W,Y,H) = [r(1,8,1,8),r(1,8,13,8),r(1,8,21,8),r(1,8,29,8),r(9,8,1,8),r(9,8,9,8),r(9,8,17,8),r(17,8,1,8),r(9,7,30,7),r(16,7,30,7),r(17,7,18,7),r(23,7,30,7),r(30,7,16,7),r(30,7,23,7),r(30,7,30,7),r(24,6,18,6),r(24,6,24,6),r(25,6,1,6),r(25,6,7,6),r(31,6,1,6),r(31,6,7,6),r(9,5,25,5),r(14,5,25,5),r(17,5,13,5),r(19,5,25,5),r(22,5,13,5),r(1,4,9,4),r(5,4,9,4),r(17,4,9,4),r(21,4,9,4),r(27,3,15,3),r(31,3,13,3),r(34,3,13,3),r(27,2,13,2),r(29,2,13,2),r(30,1,15,1)]

T8 = 522 ?

yes

| ?- call_time(partridge(9), T9).

placement space = 45x45

rectangles r(X,W,Y,H) = [r(1,9,1,9),r(1,9,10,9),r(1,9,19,9),r(1,9,28,9),r(1,9,37,9),r(10,9,1,9),r(10,9,10,9),r(10,9,19,9),r(29,9,1,9),r(10,8,38,8),r(18,8,38,8),r(24,8,30,8),r(26,8,38,8),r(38,8,1,8),r(38,8,16,8),r(38,8,24,8),r(38,8,32,8),r(10,7,31,7),r(17,7,31,7),r(24,7,16,7),r(24,7,23,7),r(31,7,16,7),r(31,7,23,7),r(39,7,9,7),r(19,6,6,6),r(27,6,10,6),r(32,6,30,6),r(33,6,10,6),r(34,6,40,6),r(40,6,40,6),r(19,5,1,5),r(19,5,16,5),r(19,5,21,5),r(19,5,26,5),r(24,5,1,5),r(19,4,12,4),r(23,4,12,4),r(25,4,6,4),r(34,4,36,4),r(10,3,28,3),r(13,3,28,3),r(16,3,28,3),r(25,2,10,2),r(32,2,36,2),r(38,1,9,1)]

T9 = 60575 ?

yes

| ?- call_time(partridge(10), T10).

placement space = 55x55

rectangles r(X,W,Y,H) = [r(1,10,1,10),r(1,10,11,10),r(1,10,21,10),r(1,10,31,10),r(1,10,41,10),r(11,10,1,10),r(11,10,11,10),r(11,10,21,10),r(11,10,31,10),r(11,10,41,10),r(21,9,1,9),r(21,9,10,9),r(21,9,19,9),r(27,9,31,9),r(29,9,47,9),r(30,9,1,9),r(38,9,47,9),r(39,9,1,9),r(47,9,47,9),r(21,8,40,8),r(21,8,48,8),r(37,8,10,8),r(40,8,27,8),r(48,8,1,8),r(48,8,23,8),r(48,8,31,8),r(48,8,39,8),r(29,7,40,7),r(30,7,10,7),r(30,7,17,7),r(30,7,24,7),r(37,7,18,7),r(49,7,9,7),r(49,7,16,7),r(21,6,28,6),r(21,6,34,6),r(36,6,35,6),r(36,6,41,6),r(42,6,35,6),r(42,6,41,6),r(1,5,51,5),r(6,5,51,5),r(11,5,51,5),r(16,5,51,5),r(44,5,18,5),r(36,4,31,4),r(44,4,23,4),r(45,4,10,4),r(45,4,14,4),r(27,3,28,3),r(37,3,25,3),r(37,3,28,3),r(40,2,25,2),r(42,2,25,2),r(48,1,9,1)]

T10 = 1377485 ?

yes

| ?- call_time(partridge(11), T11).

placement space = 66x66

rectangles r(X,W,Y,H) = [r(1,11,1,11),r(1,11,12,11),r(1,11,23,11),r(1,11,34,11),r(1,11,45,11),r(1,11,56,11),r(12,11,1,11),r(12,11,12,11),r(12,11,23,11),r(12,11,45,11),r(12,11,56,11),r(23,10,1,10),r(23,10,11,10),r(23,10,49,10),r(33,10,1,10),r(33,10,11,10),r(39,10,25,10),r(43,10,9,10),r(57,10,32,10),r(57,10,42,10),r(57,10,52,10),r(23,9,21,9),r(39,9,40,9),r(39,9,49,9),r(39,9,58,9),r(48,9,40,9),r(48,9,49,9),r(48,9,58,9),r(49,9,23,9),r(58,9,23,9),r(12,8,34,8),r(23,8,59,8),r(25,8,41,8),r(31,8,59,8),r(43,8,1,8),r(49,8,32,8),r(51,8,1,8),r(59,8,1,8),r(20,7,34,7),r(32,7,21,7),r(32,7,28,7),r(53,7,9,7),r(53,7,16,7),r(60,7,9,7),r(60,7,16,7),r(27,6,35,6),r(33,6,35,6),r(33,6,41,6),r(33,6,47,6),r(33,6,53,6),r(43,6,19,6),r(27,5,30,5),r(39,5,35,5),r(44,5,35,5),r(57,5,62,5),r(62,5,62,5),r(21,4,41,4),r(23,4,30,4),r(39,4,21,4),r(49,4,19,4),r(12,3,42,3),r(15,3,42,3),r(18,3,42,3),r(23,2,45,2),r(23,2,47,2),r(20,1,41,1)]

T11 = 269799 ?

yes

| ?- call_time(partridge(12), T12).

placement space = 78x78

rectangles r(X,W,Y,H) = [r(1,12,1,12),r(1,12,13,12),r(1,12,25,12),r(1,12,37,12),r(1,12,49,12),r(1,12,61,12),r(13,12,1,12),r(13,12,19,12),r(13,12,31,12),r(13,12,43,12),r(13,12,55,12),r(13,12,67,12),r(25,11,1,11),r(25,11,24,11),r(25,11,35,11),r(25,11,46,11),r(25,11,57,11),r(25,11,68,11),r(36,11,1,11),r(44,11,26,11),r(44,11,47,11),r(47,11,1,11),r(58,11,1,11),r(34,10,12,10),r(45,10,37,10),r(59,10,50,10),r(59,10,60,10),r(69,10,1,10),r(69,10,11,10),r(69,10,30,10),r(69,10,40,10),r(69,10,50,10),r(69,10,60,10),r(25,9,12,9),r(36,9,38,9),r(51,9,12,9),r(52,9,70,9),r(60,9,12,9),r(61,9,21,9),r(61,9,70,9),r(70,9,21,9),r(70,9,70,9),r(36,8,22,8),r(36,8,30,8),r(36,8,47,8),r(36,8,55,8),r(36,8,63,8),r(36,8,71,8),r(44,8,63,8),r(44,8,71,8),r(44,7,12,7),r(44,7,19,7),r(52,7,63,7),r(55,7,36,7),r(55,7,43,7),r(62,7,36,7),r(62,7,43,7),r(1,6,73,6),r(7,6,73,6),r(13,6,13,6),r(19,6,13,6),r(55,6,26,6),r(63,6,30,6),r(44,5,58,5),r(49,5,58,5),r(51,5,21,5),r(54,5,58,5),r(56,5,21,5),r(55,4,32,4),r(55,4,50,4),r(55,4,54,4),r(59,4,32,4),r(25,3,21,3),r(28,3,21,3),r(31,3,21,3),r(34,2,22,2),r(61,2,30,2),r(44,1,37,1)]

T12 = 4276951 ?

yes

| ?-

Solving size 8 is really quick at around half a second (the timings are reported in milliseconds). Note also that SICStus is a single-threaded system. Size 9 took about a minute, size 10 around 23 minutes, size 11 four and a half minutes, and size 12 one hour and 11 minutes. It is expected that a larger instance can sometimes be faster (11 vs 10) when searching for a satisfying solution. Another thing worth noting is that SICStus uses less than 20 MiB of memory when searching for a solution for size 12, while OR-Tools CP-SAT uses over 3 GiB.

Here is the size 12 partridge packing that SICStus found. Since size 12 is the reason the Partridge Packing Problem got its name, it feels good to find a solution for this size as well.

At size 13, SICStus also starts to struggle with the search, with no solution produced in 12 hours.

Solving the Partridge Packing Problem using MiniZinc is an interesting challenge. The base model performs poorly, and the usual trick (adding cumulative constraints) for improving a packing problem was not that useful. However, with some custom implied constraints and symmetry breaking, it was possible to get solutions for size 8 and 9 quite quickly.

As is common for CP problems modeled in MiniZinc, OR-Tools CP-SAT dominates in performance. However, it was interesting to see that the relatively new solvers Huub and Pumpkin are both promising. 8 Moving from MiniZinc to the custom SICStus Partridge program showed the benefits of using a system with smart propagators and a custom search strategy.

There are better ways to solve this packing problem, giving faster solutions in a more scalable way. Still, it is a good example of how to incrementally develop a MiniZinc model and how to add strengthening constraints. A benefit of using a high-level modeling language for this type of problem is that it can be adapted to new constraints and changes in requirements. In many industrial problems, it is quite common for requirements to change frequently.

In the end though, the most important part was that it was fun to experiment with.

A National Mission to Accelerate Science Through Artificial Intelligence

Hacker News
energy.gov
2025-11-26 14:47:58
Comments...
Original Article

A National Mission to Accelerate Science Through Artificial Intelligence

Video: Genesis Mission video (US Dept of Energy)

Genesis Mission is a national initiative to build the world's most powerful scientific platform to accelerate discovery science, strengthen national security, and drive energy innovation.

Goal

Genesis Mission will develop an integrated platform that connects the world's best supercomputers, experimental facilities, AI systems, and unique datasets across every major scientific domain to double the productivity and impact of American research and innovation within a decade.

Collaborators

Genesis Mission collaborator logos

Energy

  • Fusion you can plug into

    Harnessing the power of the stars to deliver abundant, affordable energy. Through real-time collaboration between scientists, supercomputers, and AI systems, researchers can design, test, and stabilize fusion reactors far faster than before, accelerating the realization of sustainable fusion power.

  • Advanced nuclear, faster and safer

    Creating a new generation of more efficient reactor designs, including new modular reactors, that provide reliable, around-the-clock energy. Engineers and AI tools work together to optimize reactor design, materials, licensing, and operations, shortening development timelines while strengthening safety and performance.

  • An intelligent, resilient grid

    Building a power network that grows as fast as the technologies it fuels. By combining human expertise in energy planning with AI-enabled forecasting and simulation, teams can modernize the nation's grid, improving reliability and accelerating deployment of new infrastructure.

Discovery Science

  • Seeing molecules in action

    Revealing chemical and biological processes as they unfold in real time. AI will work with ultrafast experiments to observe molecular dynamics and uncover insights that accelerate breakthroughs in materials and medicine.

  • Understanding the universe, from quarks to cosmos

    Connecting the smallest particles to the largest structures. Physicists, guided by AI tools that reason across astronomical and particle-physics data, work together to test new theories about dark matter, dark energy, and the laws of nature.

  • Discovering new quantum algorithms

    Unlocking the next frontier of computation. AI serves as a reasoning partner for researchers, generating and verifying new classes of quantum algorithms while scientists interpret and validate the results, bringing practical quantum computing closer to reality.

National Security

  • Securing critical materials

    Reducing dependence on foreign supply chains. Materials scientists and AI systems co-design substitutes, responsibly utilize Earth's resources, and recover rare elements from waste, building a stable, self-reliant foundation for the nation's future industries.

  • Accelerating advanced manufacturing

    Turning design into production at the speed of need. Engineers and AI-driven digital twins share a continuous feedback loop between design, sensors, and fabrication, cutting qualification time and boosting efficiencies.

  • Discovering mission-ready materials

    Delivering alloys, polymers, and composites vital to defense and industry. Human insight and AI-guided discovery converge to fuse simulation, literature mining, and autonomous labs, pointing toward a future where years of materials research could unfold in a fraction of the time.

Essential Information and Guidance

  • A national initiative led by the Department of Energy and its 17 National Laboratories to build the world’s most powerful scientific platform to accelerate discovery, strengthen national security, and drive energy innovation.

  • We are amid a revolution in computing, driven by artificial intelligence and quantum information technologies, that will transform how science is done. Genesis Mission has the goal of doubling the productivity and impact of U.S. research and development by pairing scientists with intelligent systems that reason, simulate, and experiment at extraordinary speed.

  • Genesis Mission will create a national discovery platform that unites the world’s most powerful supercomputers, AI systems, and emerging quantum technologies with the nation’s most advanced scientific instruments. Together, they form an integrated infrastructure for scientific exploration—an intelligent network capable of sensing, simulating, and understanding nature at every scale.

    By connecting these systems, Genesis Mission will transform how science is done. It will generate a new class of high-fidelity data to train advanced AI models, empower researchers to solve the hardest scientific challenges, and accelerate discovery from years to months. In doing so, it will serve as both a national accelerator for innovation and a proving ground for the next generation of AI and quantum and robotics technologies.

  • From fusion energy and new materials to quantum computing and life-saving medicines, Genesis Mission expands what’s possible in energy, discovery science, and national security.

  • Unlike commercial models trained on the open internet, Genesis Mission draws from the government’s secure, multi-domain scientific data, decades of experiments unavailable anywhere else.

  • No. Genesis Mission enables them. It’s AI for discovery, not automation, helping researchers explore and understand the universe faster.

  • The Department of Energy, in partnership with the White House Office of Science and Technology Policy.

  • Genesis Mission brings together the Department of Energy’s 17 National Laboratories with America’s leading universities and industry, including pioneers in artificial intelligence, computing, materials, and energy, to build the most powerful scientific platform ever to solve national challenges.

    The initial collaborators listed below. Together, they represent the strength of the U.S. innovation ecosystem, uniting public and private sectors to accelerate discovery and maintain America’s scientific and technological leadership.

    Genesis Mission collaborating companies

  • Genesis Mission is a movement to transform how science is done. DOE will open parts of Genesis Mission platform to qualified researchers, innovators, and companies, ensuring the benefits of this national effort are shared across the American scientific ecosystem. Learn more .

Follow The Mission

The Next Era Begins Now. Subscribe for more information.

Rights Organizations Demand Halt to Mobile Fortify, ICE's Handheld Face Recognition Program

Electronic Frontier Foundation
www.eff.org
2025-11-26 14:46:12
Mobile Fortify, the new app used by Immigration and Customs Enforcement (ICE) to use face recognition technology (FRT) to identify people during street encounters, is an affront to the rights and dignity of migrants and U.S. citizens alike. That's why a coalition of privacy, civil liberties and civi...
Original Article

Mobile Fortify , the new app used by Immigration and Customs Enforcement (ICE) to use face recognition technology (FRT) to identify people during street encounters, is an affront to the rights and dignity of migrants and U.S. citizens alike. That's why a coalition of privacy, civil liberties and civil rights organizations are demanding the Department of Homeland Security (DHS) shut down the use of Mobile Fortify, release the agency's privacy analyses of the app, and clarify the agency's policy on face recognition.

As the organizations, including EFF, Asian Americans Advancing Justice and the Project on Government Oversight, write in a letter sent by EPIC :

ICE’s reckless field practices compound the harm done by its use of facial recognition. ICE does not allow people to opt-out of being scanned, and ICE agents apparently have the discretion to use a facial recognition match as a definitive determination of a person’s immigration status even in the face of contrary evidence.  Using face identification as a definitive determination of immigration status is immensely disturbing, and ICE’s cavalier use of facial recognition will undoubtedly lead to wrongful detentions, deportations, or worse.  Indeed, there is already at least one reported incident of ICE mistakenly determining a U.S. citizen “could be deported based on biometric confirmation of his identity.”

As if this dangerous use of nonconsensual face recognition isn't bad enough, Mobile Fortify also queries a wide variety of government databases. Already there have been reports that federal officers may be using this FRT to target protesters engaging in First Amendment-protected activities. Yet ICE concluded it did not need to conduct a new Privacy Impact Assessment, which is standard practice for proposed government technologies that collect people's data.

While Mobile Fortify is the latest iteration of ICE’s mobile FRT, EFF has been tracking this type of technology for more than a decade. In 2013, we identified how a San Diego agency had distributed face recognition-equipped phones to law enforcement agencies across the region, including federal immigration officers. In 2019, EFF helped pass a law temporarily banning collecting biometric data with mobile devices, resulting in the program's cessation .

We fought against handheld FRT then, and we will fight it again today.

Justice dept. requires Realpage end sharing competitively sensitive information

Hacker News
www.justice.gov
2025-11-26 14:46:05
Comments...
Original Article

The Justice Department’s Antitrust Division filed a proposed settlement today to resolve the United States’ claims against RealPage Inc. as part of its ongoing enforcement against algorithmic coordination, information sharing, and other anticompetitive practices in rental housing markets across the country. The proposed settlement would help restore free market competition in rental markets for millions of American renters.

“Competing companies must make independent pricing decisions, and with the rise of algorithmic and artificial intelligence tools, we will remain at the forefront of vigorous antitrust enforcement,” said Assistant Attorney General Abigail Slater of the Justice Department’s Antitrust Division.

RealPage is a provider of commercial revenue management software and services for the conventional multifamily rental housing industry. As alleged in Plaintiffs’ complaint, RealPage’s revenue management software has relied on nonpublic, competitively sensitive information shared by landlords to set rental prices. RealPage’s software has also included features designed to limit rental price decreases and otherwise align pricing among competitors. In addition, RealPage has hosted meetings attended by competing property management companies where competitively sensitive information was shared.

If approved by the court, the proposed consent judgment would require RealPage to:

  • Cease having its software use competitors’ nonpublic, competitively sensitive information to determine rental prices in runtime operation;
  • Cease using active lease data for purposes of training the models underlying the software, limiting model training to historic or backward-looking nonpublic data that has been aged for at least 12 months;
  • Not use models that determine geographic effects narrower than at a state level, which is broader than the markets alleged in the complaint;
  • Remove or redesign features that limited price decreases or aligned pricing between competing users of the software;
  • Cease conducting market surveys to collect competitively sensitive information;
  • Refrain from discussing market analyses or trends based on nonpublic data, or pricing strategies, in RealPage meetings relating to revenue management software;
  • Accept a court-appointed monitor to ensure compliance with the terms of the consent judgment; and
  • Cooperate in the United States’ lawsuit against property management companies that have used its software.

As required by the Tunney Act, the proposed settlement, along with a competitive impact statement, will be published in the Federal Register. Any interested person should submit written comments concerning the proposed settlement within 60 days following the publication to Danielle Hauck, Acting Chief, Technology and Digital Platforms Section, Antitrust Division, U.S. Department of Justice, 450 Fifth Street NW, Suite 7050, Washington, DC 20530. At the conclusion of the public comment period, the U.S. District Court for the Middle District of North Carolina may enter the final judgment upon finding it is in the public interest.

RealPage is a provider of revenue management software and services headquartered in Richardson, Texas.

Microsoft: Security keys may prompt for PIN after recent updates

Bleeping Computer
www.bleepingcomputer.com
2025-11-26 14:43:57
Microsoft warned users on Tuesday that FIDO2 security keys may prompt them to enter a PIN when signing in after installing Windows updates released since the September 2025 preview update. [...]...
Original Article

Microsoft warned users on Tuesday that FIDO2 security keys may prompt them to enter a PIN when signing in after installing Windows updates released since the September 2025 preview update.

This behavior can be observed on devices running Windows 11 version 24H2 or 25H2 when an identity provider requests user verification during authentication.

Microsoft says this is an intentional change to comply with WebAuthn specifications , which dictate how authentication methods such as PINs, biometrics, and hardware security keys should handle user verification requests.

User verification confirms that the user is present and authorized to use a security key, typically through a PIN or biometric scan. Under WebAuthn standards, verification can be discouraged, preferred, or required. When set to "preferred," the standard requires platforms to set up a PIN if the authenticator supports user verification.

Support for this feature began gradually rolling out to all Windows 11 devices after the KB5065789 preview update, and the deployment completed with the November KB5068861 security update.

"After installing the Windows update, September 29, 2025—KB5065789 (OS Builds 26200.6725 and 26100.6725) Preview, or later updates, you might be required to create a PIN to sign in with a security key, even if a PIN was not required or set during your initial registration," Microsoft said in a Tuesday support document .

"This behavior will occur when a Relying Party (RP) or Identity Provider (IDP) requests User Verification = Preferred during authentication with a Fast IDentity Online 2 (FIDO2) security key that does not have a PIN set."

Organizations and services that don't want users creating or entering PINs for security keys can set user verification to "discouraged" in their WebAuthn configuration settings .

"Support for PIN setup in the authentication flow was added to be consistent across both registration and authentication flows," Microsoft added.

FIDO2 security keys provide passwordless authentication by requiring physical possession of a USB, NFC, or Bluetooth token. This technology has been increasingly adopted as organizations seek alternatives to traditional passwords to block phishing, credential theft, and other password-based attacks.


How to get hired in 2025

Lobsters
tonsky.me
2025-11-26 14:40:46
Comments...
Original Article

It’s 2025 and you are applying for a software engineer position. They give you a test assignment. You complete it yourself, send it over, and get rejected. Why?

Because it looked like AI.

Unfortunately, it’s 2025, AI is spreading like glitter in a kindergarten, and it’s really easy to mistake hard human labor for soulless, uninspired machine slop.

Following are the main red flags in test assignments that should be avoided :

  • The assignment was read and understood in full.
  • All parts are implemented.
  • Industry-standard tools and frameworks are used.
  • The code is split into small, readable functions.
  • Variables have descriptive names.
  • Complex parts have comments.
  • Errors are handled, error messages are easy to follow.
  • Source files are organized reasonably.
  • The web interface looks nice.
  • There are tests.

Avoid these AI giveaways and spread the word!


Security updates for Wednesday

Linux Weekly News
lwn.net
2025-11-26 14:32:35
Security updates have been issued by AlmaLinux (bind, binutils, delve and golang, expat, firefox, haproxy, kernel, libsoup3, libssh, libtiff, openssh, openssl, pam, podman, python-kdcproxy, shadow-utils, squid, thunderbird, vim, xorg-x11-server-Xwayland, and zziplib), Debian (cups-filters, libsdl2, ...
Original Article
Dist. ID Release Package Date
AlmaLinux ALSA-2025:21034 10 bind 2025-11-25
AlmaLinux ALSA-2025:20155 10 binutils 2025-11-25
AlmaLinux ALSA-2025:21816 10 delve and golang 2025-11-25
AlmaLinux ALSA-2025:21030 10 expat 2025-11-25
AlmaLinux ALSA-2025:21281 10 firefox 2025-11-25
AlmaLinux ALSA-2025:21691 10 haproxy 2025-11-25
AlmaLinux ALSA-2025:20095 10 kernel 2025-11-25
AlmaLinux ALSA-2025:21032 10 libsoup3 2025-11-25
AlmaLinux ALSA-2025:21013 10 libssh 2025-11-25
AlmaLinux ALSA-2025:20998 10 libtiff 2025-11-25
AlmaLinux ALSA-2025:20126 10 openssh 2025-11-25
AlmaLinux ALSA-2025:21248 10 openssl 2025-11-25
AlmaLinux ALSA-2025:20181 10 pam 2025-11-25
AlmaLinux ALSA-2025:21220 10 podman 2025-11-25
AlmaLinux ALSA-2025:20983 10 podman 2025-11-25
AlmaLinux ALSA-2025:21142 10 python-kdcproxy 2025-11-25
AlmaLinux ALSA-2025:20145 10 shadow-utils 2025-11-25
AlmaLinux ALSA-2025:21002 10 squid 2025-11-25
AlmaLinux ALSA-2025:21843 10 thunderbird 2025-11-25
AlmaLinux ALSA-2025:21015 10 vim 2025-11-25
AlmaLinux ALSA-2025:21035 10 xorg-x11-server-Xwayland 2025-11-25
AlmaLinux ALSA-2025:20478 10 zziplib 2025-11-25
Debian DLA-4380-1 LTS cups-filters 2025-11-25
Debian DLA-4382-1 LTS libsdl2 2025-11-25
Debian DLA-4379-1 LTS linux-6.1 2025-11-25
Debian DLA-4381-1 LTS net-snmp 2025-11-25
Debian DSA-6062-1 stable pdfminer 2025-11-25
Debian DLA-4383-1 LTS rails 2025-11-25
Debian DSA-6061-1 stable tryton-sao 2025-11-25
Fedora FEDORA-2025-ee528a170d F41 chromium 2025-11-26
Fedora FEDORA-2025-264853458b F43 docker-buildkit 2025-11-26
Fedora FEDORA-2025-04cf139ee2 F42 docker-buildx 2025-11-26
Fedora FEDORA-2025-b1d7d7f8db F43 docker-buildx 2025-11-26
Fedora FEDORA-2025-ada7909175 F41 sudo-rs 2025-11-26
Fedora FEDORA-2025-4388808bbf F42 sudo-rs 2025-11-26
Fedora FEDORA-2025-a9d9780cbb F43 sudo-rs 2025-11-26
Gentoo 202511-07 librnp 2025-11-26
Mageia MGASA-2025-0313 9 webkit2 2025-11-25
SUSE SUSE-SU-2025:4244-1 SLE12 amazon-ssm-agent 2025-11-26
SUSE SUSE-SU-2025:4229-1 SLE15 SES7.1 oS15.3 buildah 2025-11-25
SUSE SUSE-SU-2025:4245-1 SLE15 oS15.5 oS15.6 buildah 2025-11-26
SUSE SUSE-SU-2025:4236-1 SLE15 oS15.6 curl 2025-11-25
SUSE SUSE-SU-2025:4254-1 SLE15 oS15.6 dpdk 2025-11-26
SUSE openSUSE-SU-2025:15758-1 TW fontforge-20251009 2025-11-25
SUSE openSUSE-SU-2025-20081-1 SLE16 SLE-m6.2 oS16.0 kernel 2025-11-26
SUSE openSUSE-SU-2025:15759-1 TW libIex-3_4-33 2025-11-25
SUSE openSUSE-SU-2025:15762-1 TW librnp0 2025-11-25
SUSE openSUSE-SU-2025:15760-1 TW python311 2025-11-25
SUSE openSUSE-SU-2025:15761-1 TW rclone 2025-11-25
SUSE SUSE-SU-2025:4232-1 SLE12 sssd 2025-11-25
SUSE SUSE-SU-2025:4231-1 SLE15 SLE-m5.2 SES7.1 oS15.3 sssd 2025-11-25
SUSE SUSE-SU-2025:4247-1 SLE15 oS15.6 sssd 2025-11-26
Ubuntu USN-7889-1 22.04 24.04 linux, linux-aws, linux-aws-6.8, linux-ibm, linux-lowlatency, linux-lowlatency-hwe-6.8, linux-nvidia, linux-nvidia-6.8, linux-nvidia-lowlatency, linux-oracle 2025-11-25
Ubuntu USN-7879-3 24.04 linux-aws-6.14, linux-oracle-6.14 2025-11-26
Ubuntu USN-7889-2 24.04 linux-aws-fips, linux-fips, linux-gcp-fips 2025-11-26
Ubuntu USN-7889-3 22.04 24.04 linux-realtime, linux-realtime-6.8 2025-11-26
Ubuntu USN-7888-1 18.04 20.04 22.04 24.04 25.04 mupdf 2025-11-26
Ubuntu USN-7883-1 18.04 20.04 22.04 24.04 25.04 25.10 openjdk-17 2025-11-25
Ubuntu USN-7881-1 16.04 18.04 20.04 22.04 24.04 25.04 25.10 openjdk-8 2025-11-25
Ubuntu USN-7882-1 18.04 20.04 22.04 24.04 25.04 25.10 openjdk-lts 2025-11-25

There may not be a safe off-ramp for some taking GLP-1 drugs, study suggests

Hacker News
arstechnica.com
2025-11-26 14:21:40
Comments...
Original Article

Of the 308 who benefited from tirzepatide, 254 (82 percent) regained at least 25 percent of the weight they had lost on the drug by week 88. Further, 177 (57 percent) regained at least 50 percent, and 74 (24 percent) regained at least 75 percent. Generally, the more weight people regained, the more their cardiovascular and metabolic health improvements reversed.

Data gaps and potential off-ramps

On the other hand, there were 54 participants of the 308 (17.5 percent) who didn't regain a significant amount of weight (less than 25 percent). This group saw some of their health metrics worsen on withdrawal of the drug, but not all: blood pressure increased a bit, while cholesterol didn't go up significantly overall. About a dozen participants (4 percent of the 308) continued to lose weight after stopping the drug.

The researchers couldn’t figure out why these 54 participants fared so well; there were “no apparent differences” in demographic or clinical characteristics, they reported. It’s clear the topic requires further study.

But, overall, the study offers a gloomy outlook for patients hoping to avoid needing to take anti-obesity drugs for the foreseeable future.

Oczypok and Anderson highlight that the study involved an abrupt withdrawal from the drug. In contrast, many patients may be interested in slowly weaning off the drugs, stepping down dosage levels over time. So far, data on this strategy and the protocols to pull it off have little data behind them. It also might not be an option for patients who abruptly lose access to or insurance coverage for the drugs. Other strategies for weaning off the drugs could involve ramping up physical activity or calorie restriction in anticipation of dropping the drugs, the experts note.

In addition to more data on potential GLP-1 off-ramps, the pair calls for more data on the effects of weight fluctuations from people going on and off the treatment. At least one study has found that the regained weight after intentional weight loss may end up being proportionally higher in fat mass, which could be harmful.

For now, Oczypok and Anderson say doctors should be cautious about talking with patients about these drugs and what the future could hold. “These results add to the body of evidence that clinicians and patients should approach starting [anti-obesity medications] as long-term therapies, just as they would medications for other chronic diseases.”

KDE going all-in on a Wayland future

Lobsters
blogs.kde.org
2025-11-26 14:16:17
Comments...
Original Article

Well folks, it’s the beginning of a new era: after nearly three decades of KDE desktop environments running on X11, the future KDE Plasma 6.8 release will be Wayland-exclusive! Support for X11 applications will be fully entrusted to Xwayland, and the Plasma X11 session will no longer be included.

For most users, this will have no immediate impact. The vast majority of our users are already using the Wayland session, it’s the default on most distributions, and some of them have already dropped — or are planning to drop — the Plasma X11 session independently of what we decide.

In the longer term, this change opens up new opportunities for features, optimizations, and speed of development.

Because we’re certain that many people will have questions about this change, the Plasma team has prepared the following FAQ:

Plasma 6.8 means the X11 session will be supported by KDE until…?

The Plasma X11 session will be supported by KDE into early 2027.

We cannot provide a specific date, as we’re exploring the possibility of shipping some extra bug-fix releases for Plasma 6.7. The exact timing of the last one will only be known when we get closer to its actual release, which we expect will be sometime in early 2027.

What if I still really need X11?

This is a perfect use case for long term support (LTS) distributions shipping older versions of Plasma. For example, AlmaLinux 9 includes the Plasma X11 session and will be supported until sometime in 2032.

Will X11 applications still work?

Outside of rare special cases, yes, they will still work using the Xwayland compatibility layer. It does a great job of providing compatibility for most X11 applications, and we provide several additional compatibility features on top, namely improved support for fractional scaling and (opt-in) backwards compatibility with X11 global shortcuts and input emulation.

In certain cases, 3rd-party applications doing specialized tasks like taking screenshots or screencasting need to be adjusted to work as expected on Wayland. Most have already done so, and the remaining ones are making progress all the time.

Does X11 forwarding still work?

Yes, Xwayland supports it. Waypipe exists for similar functionality in Wayland native applications as well.

Can I still run KDE applications on X11 in another desktop environment?

Yes. There are currently no plans to drop X11 support in KDE applications outside of Plasma.

This change only concerns Plasma’s X11 login session, which is what’s going away.

What about gaming?

Games run better than ever on the Wayland session! Adaptive sync, optional tearing, and high-refresh-rate multi-monitor setups are all supported out of the box. HDR gaming works with some additional setup, too!

What about NVIDIA GPUs?

While Wayland support in the proprietary NVIDIA driver was quite rocky a few years ago, it has matured tremendously. Graphics cards still supported by the manufacturer work just fine nowadays, and for very old NVIDIA GPUs, the open source Nouveau driver can be used instead.

What about accessibility?

Accessibility is a very broad topic, so it’s hard to make any definite statements, but we’re generally on par with the X11 session. All the basics already work as expected, including screen readers, sticky/slow/bounce keys, zooming in, and so on.

Some things are better, like touchpad gestures for adjusting the zoom level, and applying systemwide color filters to correct for colorblindness. And even more improvements are expected by the time Plasma 6.8 rolls around.

However, accessibility features provided by third-party applications may be worse in some aspects. Please open a bug report if you have any special requirements that we don’t cover yet! This is an active topic we’re very interested in improving.

What about automation?

Many tools can be used for automation in the Wayland session; for example wl-copy / wl-paste , ydotool , kdotool , kscreen-doctor , and the plasma-apply-* tools. Generally Plasma is extensible enough that you can add what’s still missing yourself, for example through KWin scripts or plugins.
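As a rough illustration of what scripted automation can look like on the Wayland session, here is a minimal Python sketch. It assumes the wl-clipboard and kscreen-doctor tools mentioned above are installed; it is an example, not an official KDE recipe.

#!/usr/bin/env python3
"""Minimal automation sketch for a Plasma Wayland session.

Assumes `wl-copy` (from wl-clipboard) and `kscreen-doctor` are installed;
both are illustrative choices rather than the only option."""
import subprocess

# Put some text on the Wayland clipboard.
subprocess.run(["wl-copy", "hello from a script"], check=True)

# Print the current outputs and their modes.
outputs = subprocess.run(
    ["kscreen-doctor", "-o"], check=True, capture_output=True, text=True
).stdout
print(outputs)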

What about the Significant Known Issues ?

While we can’t promise all problems will be completely gone (some depend on application support), we’re actively working on addressing the last stragglers on that Wiki page.

Some of them are really close to being fixed; for example, the issues around output mirroring will be gone in Plasma 6.6. Session restore and remembering window positions are also being actively worked on.

What about Plasma on the BSDs?

FreeBSD is already shipping a working Wayland session, so there should be no upstream problems on that front. If there are any remaining issues we can help with upstream, please reach out to us!

What about the kwin_wayland and kwin_x11 split?

In Plasma 6.4, we split KWin into separate X11 and Wayland versions. This allowed KWin to go all-in on Wayland earlier, without being held up so much by legacy support for X11. Since then, for users with remaining edge-case requirements for X11, we have put in the extra effort to keep X11 support for the rest of the desktop.

While the split helped a lot, KWin is only one piece of the puzzle. The Plasma desktop as a whole has many places where development is held back by the need to support the lowest common denominator of the two window systems.

The bottom line

This is happening because we believe that eventually dropping the Plasma X11 session will allow us to move faster to improve stability and functionality for the majority of our users — who are already using Wayland.

If we want to keep producing the best free desktop out there, we have to be nimble enough to adapt to a rapidly changing environment with many opportunities, without the need to drag forward legacy support that holds back a great deal of work.

The Wayland transition has been long, and at times painful. But we’re very close to the finish line. Passing it will unlock a lot of positive changes over the next few years that we think folks are going to appreciate!


Mayor Adams Prepares to Gobble Up Rent Guidelines Board Appointments

hellgate
hellgatenyc.com
2025-11-26 14:15:15
And other links to start your "what am I doing at the office Wednesday."...
Original Article

It's Wednesday, you deserve a treat, like an episode of the Hell Gate Podcast! Listen here , or wherever you get your podcasts. And don't worry, there WILL be a fresh episode on Friday.

Note: We'll be taking Thursday and Friday off. Happy Thanksgiving and see you on Monday!

Mayor Eric Adams is barely in New York City anymore.

After a whirlwind few weeks that took him to Albania , Israel , and Uzbekistan (??!!), Adams briefly showed up in the city on Monday to celebrate Gotham FC, before he once again leaves the city next week to head to New Orleans , where he'll be honored by a group that already honored him earlier this month while he was in Israel. (Though good for him, we would never fault anyone for using an excuse to go to New Orleans, the second-best city in the country.)

In his place, former Giuliani hack and First Deputy Mayor Randy Mastro has been running the city. And run it, he has! From prioritizing killing affordable housing to…also killing affordable housing, Mastro appears to be deadset on making life harder for New York's tenants, and is now apparently gearing up for his greatest act of all.


Voyager 1 Is About to Reach One Light-Day from Earth

Hacker News
scienceclock.com
2025-11-26 14:02:46
Comments...
Original Article

Podcast: A Massive Breach Reveals the Truth Behind 'Secret Desires AI'

404 Media
www.404media.co
2025-11-26 14:00:25
A breach shows people are making AI porn of ordinary people at scale; X exposes the location of its biggest MAGA grifters; and how we contributed to the shut down of a warrantless surveillance program....
Original Article

We start this week with Sam's piece about a massive leak of an AI chatbot, and how it showed that people were taking ordinary women’s yearbook photos and using them to make AI porn. After the break, Jason explains how a recent change on X exposed a bunch of grifters all around the world. In the subscribers-only section, we talk about how our reporting contributed to the shut down of a warrantless surveillance program.

Listen to the weekly podcast on Apple Podcasts , Spotify , or YouTube . Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.

1:23 - Intro - Please, please do our reader survey
3:57 - Story 1 - Massive Leak Shows Erotic Chatbot Users Turned Women’s Yearbook Pictures Into AI Porn
30:05 - Story 2 - America’s Polarization Has Become the World's Side Hustle
49:39 - Story 3 - Airlines Will Shut Down Program That Sold Your Flights Records to Government

About the author

Joseph is an award-winning investigative journalist focused on generating impact. His work has triggered hundreds of millions of dollars worth of fines, shut down tech companies, and much more.

Joseph Cox

"Policy Violence": ICE Raids & Shredding of Social Safety Net Are Linked, Says Bishop William Barber

Democracy Now!
www.democracynow.org
2025-11-26 13:52:26
Protests have erupted in North Carolina after federal agents arrested 370 people in immigration raids. On Monday, Bishop William Barber and other religious leaders gathered in Charlotte to demand an end to ICE raids. “​​What you have is a conglomerate of policy violence, and it’s deadly,...
Original Article


Protests have erupted in North Carolina after federal agents arrested 370 people in immigration raids. On Monday, Bishop William Barber and other religious leaders gathered in Charlotte to demand an end to ICE raids. “​​What you have is a conglomerate of policy violence, and it’s deadly,” says Barber, who is organizing protests against ICE and Medicaid cuts across the country. Barber notes that 51,000 people may die from preventable deaths because of the so-called Big Beautiful Bill, according to research from the University of Pennsylvania and Yale. “This is not just about Democrat and Republican and left versus right. This is literally about life versus death.”



Guests
  • William Barber

    president of Repairers of the Breach, national co-chair of the Poor People’s Campaign and founding director of the Center for Public Theology and Public Policy at Yale Divinity School.



The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.


Building a 64-bit OS from Scratch with Claude Code

Lobsters
isene.org
2025-11-26 13:51:07
Comments...
Original Article

It’s getting cold in Oslo. Had a dinner with the boss two days ago, should have been dressed better for the biting cold. Instead I got a cold. With fever today and a stand-up meeting from bed, I wasn’t much for work.

So I decided to build a bootable 64-bit operating system from absolute scratch in a single session. A real x86_64 OS with a working Forth interpreter using Claude Code.

My Assembly skills are rusty and from the coconut processor, so I’m glad I had Claude to do the work here.

Here’s how it went.

The Spark

I asked Claude Code to help me build “Simplicity OS” - an operating system where everything is a Forth word. The entire OS would be like Lego blocks: simple, composable words that directly control hardware. Want to know your screen resolution? SCREEN returns all parameters to the stack. Want to change brightness? SCREEN-SET takes parameters from the stack and applies them.

Pure, simple & direct.

The Request

Near the end of our session, I asked Claude:

Now, before we get into the more involved stuff, can you create a file "MakingAnOS.md"
where you write all my prompts from start to finish here with your responses (no need to
include all the code changes etc since that will make the file huge). The purpose here is
to showcase what can be done with Claude Code [Sonnet 4.5 (1M context)] from scratch -
with full transparency and basically remove any bragging rights pointing back at me. I
want to show other developers what they can do.

Claude created that document. You can read the complete session narrative here - every prompt, every response, every challenge, every breakthrough.

What We Built

In roughly 2 hours, from an empty directory:

  • Complete project structure with Makefiles, git hooks, documentation
  • 512-byte boot sector (16-bit real mode)
  • Stage2 bootloader with full CPU mode progression
  • 64-bit long mode (x86_64) working
  • Forth interpreter with NEXT execution loop
  • 14 working Forth words
  • VGA text output
  • String and number printing
  • All in 1,351 bytes of bootable code

It actually works. You can clone the repo, run make run , and watch it boot in QEMU.

The Journey

Stage 0: Protected Mode (30 minutes)

Got the boot sector loading, stage2 entering 32-bit protected mode, and displaying hardcoded arithmetic. First “hello world” moment when we saw:

Simplicity OS v0.1 - Protected mode
5 35

Stage 1: Forth Interpreter (45 minutes)

Built a real Forth interpreter with the NEXT inner loop. Hit a critical bug - used jmp [eax] (double dereference) instead of jmp eax . Debugged with markers, found the issue, fixed it.

Suddenly we had Forth code executing:

2 3 + .    → prints "5"
5 7 * .    → prints "35"

The 64-bit Wall (1 hour)

Tried to add 64-bit long mode. Failed. Tried different page table locations (0x1000, 0x10000, 0x70000, 0x9000). All crashed. System would triple-fault and reboot.

Documented all failures. Recommended staying in 32-bit.

The Breakthrough (30 minutes)

I asked: “Can we get that long mode 64-bit to work with some ultrathink?”

Claude found it. The issue: you can’t use a 64-bit GDT while executing 32-bit code.

The solution:

  1. Use 32-bit GDT during setup
  2. Enable long mode while in 32-bit code
  3. Load a NEW 64-bit GDT after long mode is active
  4. Far jump to 64-bit code segment

Added debug markers: P-C-A-E-L

When I saw “PCAE” in red and “L64” in yellow, we had it. 64-bit code was executing.

What This Shows

This isn’t about me being clever. I gave vision and direction. Claude did the heavy lifting:

  • Wrote all the assembly code
  • Debugged boot issues
  • Handled build systems (Make, NASM, QEMU)
  • Managed git commits
  • Found the 64-bit solution after multiple failures
  • Created all documentation

The complete development narrative is transparent. Read MakingAnOS.md - every prompt, every response, every struggle. Nothing hidden. No cherry-picking. This is what actual development with Claude Code looks like.

Technical Highlights

The NEXT Loop (heart of Forth):

NEXT:
    lodsq       ; Load next word address (64-bit)
    jmp rax     ; Execute it

The 64-bit Solution (two-GDT approach):

; Setup in 32-bit code
mov cr0, eax            ; Enable long mode
lgdt [gdt64_descriptor] ; Load 64-bit GDT
jmp 0x08:long_mode_64   ; Jump to 64-bit segment

[BITS 64]
long_mode_64:
    ; Now executing 64-bit code!

Current Forth Words (14 total):

  • Stack: DUP DROP SWAP ROT OVER
  • Arithmetic: + - * /
  • Memory: @ !
  • I/O: . (numbers) QUOTE (strings)
  • Control: LIT BYE

Why Forth?

Forth is perfect for OS work:

  • Minimal implementation (NEXT loop + dictionary)
  • Self-hosting and extensible
  • Direct hardware access
  • Interactive development (REPL)
  • Everything is composable

The entire OS will be Forth words. Want to read from disk? DISK-READ . Want to set screen brightness? SCREEN-SET . Everything follows the same pattern.
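To make that concrete, here is a minimal Python sketch of the same idea (a conceptual model only, not the OS’s actual assembly): a Forth-style interpreter is essentially a data stack plus a dictionary mapping word names to actions, driven by a small inner loop.

def run(tokens):
    """Tiny Forth-flavoured evaluator: numbers push themselves, words execute."""
    stack = []
    words = {
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
        "DUP":  lambda: stack.append(stack[-1]),
        "DROP": lambda: stack.pop(),
        "SWAP": lambda: stack.extend([stack.pop(), stack.pop()]),
        ".":    lambda: print(stack.pop()),
    }
    for token in tokens:              # the inner loop: fetch the next word, execute it
        if token in words:
            words[token]()
        else:
            stack.append(int(token))  # literals go straight onto the stack

run("2 3 + .".split())   # prints 5
run("5 7 * .".split())   # prints 35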

What’s Next

The OS is just beginning:

  • Keyboard input (PS/2 driver)
  • Interactive REPL (type Forth code live)
  • Colon definitions (compile new words)
  • Disk I/O
  • More drivers following the DEVICE-* convention

All development will be transparent. All code public domain.

For Other Developers

If you’re wondering what Claude Code can do:

Read the full narrative : MakingAnOS.md

You’ll see:

  • The actual conversation flow
  • Design decisions and rationale
  • Failed attempts and debugging
  • The breakthrough moment
  • What works, what doesn’t, why

Try it yourself :

git clone https://github.com/isene/SimplicityOS
cd SimplicityOS
make run

  • Read the code
  • Build on it
  • See what you can create with Claude Code

This is reproducible. The tools are available. The AI is accessible.

Going Forward

I’m creating the Simplicity OS to have something to tinker with. Having built a programming language ( XRPN ), a shell ( rsh ), a curses library ( rcurses ), a file manager ( RTFM ) and other tools I have enjoyed tinkering with, I needed something new to nerd out on.


Transparency

This post was written by Claude Code based on our actual development session. The narrative document was also created by Claude. All code is public domain.

The point: Show what’s possible. No gatekeeping. No mystery. Just: “Here’s what we did, here’s how we did it, go build something.”


Simplicity OS

Simplicity OS v0.2 running in QEMU - 64-bit Forth interpreter executing: “Test” 2 3 + .


Link to this post: https://isene.org/2025/11/SimplicityOS.html

Two London councils enact emergency plans after being hit by cyber-attack

Guardian
www.theguardian.com
2025-11-26 13:40:32
Royal Borough of Kensington and Chelsea and Westminster city council investigate whether data has been compromised At least two London councils have been hit by a cyber-attack and have invoked emergency plans as they investigate whether any data has been compromised. The Royal Borough of Kensington ...
Original Article

At least two London councils have been hit by a cyber-attack and have invoked emergency plans as they investigate whether any data has been compromised.

The Royal Borough of Kensington and Chelsea and Westminster City council, which share some IT infrastructure, said a number of systems had been affected across both authorities, including phone lines. The councils, which provide services for 360,000 residents, shut down several computerised systems as a precaution to limit further possible damage.

Engineers at RBKC worked through the night on Monday, when the incident occurred, and Tuesday. Services including checking council tax bills and paying parking fines are likely to be limited at RBKC, which said its website would probably go up and down during Wednesday as security fixes progressed.

In a statement RBKC said: “We don’t have all the answers yet, as the management of this incident is still ongoing. But we know people will have concerns, so we will be updating residents and partners further over the coming days. At this stage it is too early to say who did this, and why, but we are investigating to see if any data has been compromised – which is standard practice.”

It said the two authorities had been working with specialist cyber incident experts and the government’s National Cyber Security Centre, “with the focus on protecting systems and data, restoring systems, and maintaining critical services to the public”.

The boroughs also share some IT systems with the London borough of Hammersmith and Fulham. It was not immediately clear to what extent that borough had been affected.

RBKC said it had “invoked business continuity and emergency plans to ensure we are still delivering critical services to residents, focusing on supporting the most vulnerable”.

The councils said they had informed the Information Commissioner’s Office.

Westminster city council said in a statement: “We apologise to residents for any inconvenience, and thank them for being flexible and understanding, people may see some delays in responses and the services we provide over the coming days. We will continue working with our cyber specialists and the NCSC to restore all systems as quickly as possible, and we will be in touch with more information as it becomes available. If there are any further changes to services, we endeavour to keep everyone updated.”


The incident, which was spotted on Monday morning, led to concern at other councils. Hackney in east London, which was not affected, told staff it had “received intelligence that multiple London councils have been targeted by cyber-attacks within the last 24-48 hours, with potential disruption to systems and services”.

I DM'd a Korean Presidential Candidate and Ended Up Building His Core Campaign

Hacker News
medium.com
2025-11-26 13:40:04
Comments...

Mamdani's Affordability Agenda: Incoming NYC Deputy Mayor Dean Fuleihan on How to Make It Happen

Democracy Now!
www.democracynow.org
2025-11-26 13:33:52
Zohran Mamdani will be taking office as mayor of New York in just five weeks. His transition team continues to make announcements about the new administration, recently unveiling a 400-person advisory group, broken up into 17 committees. Democracy Now! speaks with the incoming first deputy mayor, De...
Original Article


Zohran Mamdani will be taking office as mayor of New York in just five weeks. His transition team continues to make announcements about the new administration, recently unveiling a 400-person advisory group, broken up into 17 committees. Democracy Now! speaks with the incoming first deputy mayor, Dean Fuleihan, on how Mamdani plans to implement his progressive vision. “Government, working together across agencies with clear direction, can accomplish the needs of New Yorkers, and that’s what the mayor-elect has put forward,” says Fuleihan.

Fuleihan also comments on Mamdani’s meeting with President Trump, which was surprisingly warm. “We look for help wherever we can get it, while also maintaining our principles and defending New Yorkers,” he said.




The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.


Why Strong Consistency?

Lobsters
brooker.co.za
2025-11-26 13:26:13
Comments...
Original Article

Why Strong Consistency?

Eventual consistency makes your life harder.

When I started at AWS in 2008, we ran the EC2 control plane on a tree of MySQL databases: a primary to handle writes, a secondary to take over from the primary, a handful of read replicas to scale reads, and some extra replicas for doing latency-insensitive reporting stuff. All of this was linked together with MySQL’s statement-based replication. It worked pretty well day to day, but two major areas of pain have stuck with me ever since: operations were costly, and eventual consistency made things weird.

Since then, managed databases like Aurora MySQL have made relational database operations orders of magnitude easier. Which is great. But eventual consistency is still a feature of most database architectures that try to scale reads. Today, I want to talk about why eventual consistency is a pain, and why we invested heavily in making all reads strongly consistent in Aurora DSQL.

Eventual Consistency is a Pain for Customers

Consider the following piece of code, running against an API exposed by a database-backed service:

id = create_resource(...)
get_resource_state(id, ...)

In the world of read replicas, the latter statement can do something a little baffling: reply ‘ id does not exist’. The reason for this is simple: get_resource_state is a read-only call, likely routed to a read replica, and is racing the write from create_resource . If replication wins, this code works as expected. If the client wins, it has to handle the weird sensation of time moving backwards.

Application programmers don’t really have a principled way to work around this, so they end up writing code like this:

id = create_resource(...)
while True:
  try:
    get_resource_state(id, ...)
    return
  except ResourceDoesNotExist:
    sleep(100)

Which fixes the problem. Kinda. Other times, especially if ResourceDoesNotExist can be thrown if id is deleted, it causes an infinite loop. It also creates more work for client and server, adds latency, and requires the programmer to choose a magic number for sleep that balances between the two. Ugly.

But that’s not all. Marc Bowes pointed out that this problem is even more insidious:

def wait_for_resource(id):
  while True:
    try:
      get_resource_state(id, ...)
      return
    except ResourceDoesNotExist:
      sleep(100)
  
id = create_resource(...)
wait_for_resource(id)
get_resource_state(id)    

Could still fail, because the second get_resource_state call could go to an entirely different read replica that hasn’t heard the news yet 3 .

Strong consistency avoids this whole problem 1 , ensuring that the first code snippet works as expected.

Eventual Consistency is a Pain for Application Builders

The folks building the service behind that API run into exactly the same problems. To get the benefits of read replicas, application builders need to route as much read traffic as possible to those read replicas. But consider the following code:

block_attachment_changes(id, ...)
for attachment in get_attachments_to_thing(id):
  remove_attachment(id, attachment)
assert_is_empty(get_attachments_to_thing(id))

This is a fairly common code pattern inside microservices. A kind of little workflow that cleans something up. But, in the wild world of eventual consistency, it has at least three possible bugs:

  • The assert could trigger because the second get_attachments_to_thing hasn’t heard the news of all the remove_attachments .
  • The remove_attachment could fail because it hasn’t heard of one of the attachments listed by get_attachments_to_thing .
  • The first get_attachments_to_thing could have an incomplete list because it read stale data, leading to incomplete clean up.

And there are a couple more. The application builder has to avoid these problems by making sure that all reads that are used to trigger later writes are sent to the primary. This requires more logic around routing (a simple “this API is read-only” is not sufficient), and reduces the effectiveness of scaling by reducing traffic that can be sent to replicas.
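A hedged sketch of what that routing logic tends to look like in application code (the names below are illustrative, not from any particular framework): anything that writes, or whose read feeds a later write, goes to the primary, and only pure read-only traffic is spread across replicas.

class Router:
    """Illustrative read/write routing for a primary plus eventually consistent replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = replicas
        self._next = 0

    def connection_for(self, *, read_only: bool, feeds_a_write: bool):
        # Writes, and reads whose results a later write depends on,
        # must see the freshest data: send them to the primary.
        if not read_only or feeds_a_write:
            return self.primary
        # Pure read-only traffic can be load-balanced across replicas.
        self._next = (self._next + 1) % len(self.replicas)
        return self.replicas[self._next]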

Eventual Consistency Makes Scaling Harder

Which brings us to our third point: read-modify-write is the canonical transactional workload. That applies to explicit transactions (anything that does an UPDATE or SELECT followed by a write in a transaction), but also things that do implicit transactions (like the example above). Eventual consistency makes read replicas less effective, because the reads used for read-modify-write can’t, in general, be used for writes without having weird effects.

Consider the following code:

UPDATE dogs SET goodness = goodness + 1 WHERE name = 'sophie'

If the read for that read-modify-write is read from a read replica, then the value of goodness may not be changed in the way you expect. Now, the database could internally do something like this:

SELECT goodness AS g, version AS v FROM dogs WHERE name = 'sophie'; -- To read replica
UPDATE dogs SET goodness = g + 1, version = v + 1 WHERE name = 'sophie' AND version = v; -- To primary

And then checking it actually updated a row 2 , but that adds a ton of work.

The nice thing about making scale-out reads strongly consistent is that the query processor can read from any replica, even in read-write transactions. It also doesn’t need to know up-front whether a transaction is read-write or read-only to pick a replica.

How Aurora DSQL Does Consistent Reads with Read Scaling

As I said above, in Aurora DSQL all reads are strongly consistent. DSQL can also scale out reads by adding additional replicas of any hot shards. So how does it ensure that all reads are strongly consistent? Let’s remind ourselves about the basics of the DSQL architecture.

Aurora DSQL architecture: AZ endpoints send reads and writes to query processors; reads go to storage replicas, while commits flow from the query processor through adjudicators to a journal, and from the journal to storage nodes.

Each storage replica gets its updates from one or more journals. Writes on each journal are strictly monotonic, so once a storage node has seen an update from time $\tau$ it knows it has seen all updates for times $t \leq \tau$. Once it has seen $t \geq \tau$ from all the journals it has subscribed to, it knows that it can return data for time $\tau$ without missing any updates. When a query processor starts a transaction, it picks a time stamp $\tau_{start}$, and every time it does a read from a replica it says to the replica “give me data as of $\tau_{start}$”. If the replica has seen higher timestamps from all journals, its good to go. If it hasn’t yet, it blocks the read until the write streams catch up.
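Here is a toy Python model of that idea (a sketch under assumed names, not DSQL’s actual implementation): the replica tracks the highest timestamp seen from each journal, and an as-of read blocks until every journal has caught up to the requested $\tau_{start}$.

import threading

class Replica:
    """Toy model: serve a read 'as of' a timestamp only once every subscribed
    journal has been observed at or beyond that timestamp."""

    def __init__(self, journal_ids):
        self._watermarks = {j: 0 for j in journal_ids}  # highest time seen per journal
        self._versions = {}       # key -> list of (timestamp, value)
        self._cond = threading.Condition()

    def apply(self, journal_id, timestamp, key, value):
        # Updates from a single journal arrive in strictly increasing timestamp order.
        with self._cond:
            self._versions.setdefault(key, []).append((timestamp, value))
            self._watermarks[journal_id] = max(self._watermarks[journal_id], timestamp)
            self._cond.notify_all()

    def read_as_of(self, key, tau_start):
        # Block until every journal has reported a time >= tau_start, so no update
        # at or before tau_start can still be missing; then return the newest
        # version of `key` that is not newer than tau_start.
        with self._cond:
            self._cond.wait_for(lambda: min(self._watermarks.values()) >= tau_start)
            visible = sorted(
                (t, v) for t, v in self._versions.get(key, []) if t <= tau_start
            )
            return visible[-1][1] if visible else None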

I go into some detail on how $\tau_{start}$ is picked here:

Conclusion

Strong consistency sounds like a complex topic for distributed systems nerds, but is a real thing that applications built on traditional database replication architectures need to start dealing with at modest scale - or even at very small scale if they’re trying to offer high availability. DSQL goes to some internal lengths to make all reads consistent - with the aim of saving application builders and end users from having to deal with this complexity.

I don’t mean to say that eventual consistency is always bad. Latency and connectivity trade-offs do exist (although the choose-two framing of CAP is bunk ), and eventual consistency has its place. However, that place is probably not in your services or API.

Footnotes

  1. You might point out that this particular problem can be fixed with a weaker set of guarantees, like Read Your Writes, provided by client stickiness. However, this falls down pretty quickly in more complex data models, and cases like IaC where ‘your writes’ is less well defined.
  2. Yes, I know there are other ways to do this.
  3. If we want to get technical, this is because the typical database read replica pattern doesn’t offer monotonic reads , where the set of writes a reader sees is increasing over time. Instead, writes at the tip can appear to come and go arbitrarily, as requests are routed to different replicas. See Doug Terry’s Replicated Data Consistency Explained Through Baseball for an easy introduction into these terms.

Microsoft to secure Entra ID sign-ins from script injection attacks

Bleeping Computer
www.bleepingcomputer.com
2025-11-26 13:26:06
Starting in mid-to-late October 2026, Microsoft will enhance the security of the Entra ID authentication system against external script injection attacks. [...]...
Original Article


Microsoft plans to enhance the security of the Entra ID authentication system against external script injection attacks starting in mid-to-late October 2026.

This update will implement a strengthened Content Security Policy that allows script downloads only from Microsoft-trusted content delivery network domains and inline script execution only from Microsoft-trusted sources during sign-ins.

After rollout, it will protect users against various security risks, including cross-site scripting attacks in which attackers inject malicious code into websites to steal credentials or compromise systems.
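As a generic illustration only (this is not Microsoft’s actual policy, and the host name is a placeholder), a Content Security Policy of this shape restricts script loading to an allow-list of trusted hosts and refuses injected inline code that lacks the expected nonce:

Content-Security-Policy: script-src 'self' https://trusted-cdn.example.com 'nonce-<per-request-value>'; object-src 'none'; base-uri 'self'

Scripts from any other origin, and inline scripts without a matching nonce, are simply not executed by the browser; those refusals are what surface as the red violation messages in the developer console mentioned below.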


The updated policy will apply only to browser-based sign-in experiences at URLs beginning with login.microsoftonline.com; Microsoft Entra External ID will not be affected.

"This update strengthens security and adds an extra layer of protection by allowing only scripts from trusted Microsoft domains to run during authentication, blocking unauthorized or injected code from executing during the sign-in experience," said Megna Kokkalera, product manager for Microsoft Identity and Authentication Experiences.

Microsoft urged organizations to test sign-in scenarios before the October 2026 deadline to identify and address any dependencies on code-injection tools.

IT administrators can identify potential impact by reviewing sign-in flows in the browser developer console: violations will appear in red text with details about the blocked scripts.

CSP policy violation shown in the browser developer console (Microsoft)

​Microsoft also advised enterprise customers to stop using browser extensions and tools that inject code or scripts into sign-in pages before the change takes effect. These will no longer be supported and will stop working, although users will still be able to sign in.

"This update to our Content Security Policy adds an additional layer of protection by blocking unauthorized scripts, further helping safeguard your organization against evolving security threats," Kokkalera added.

This move is part of Microsoft's Secure Future Initiative (SFI), a company-wide effort launched two years ago, in November 2023, following a report from the Cyber Safety Review Board of the U.S. Department of Homeland Security, which found that the company's security culture was "inadequate and requires an overhaul."

As part of the same initiative, Microsoft also updated Microsoft 365 security defaults to block access to SharePoint, OneDrive, and Office files via legacy authentication protocols, and disabled all ActiveX controls in Windows versions of Microsoft 365 and Office 2024 apps.

Earlier this month, it also began rolling out a new Teams feature announced in May and designed to block screen capture attempts during meetings.


"From Apartheid to Democracy": Sarah Leah Whitson on New Book, Israel, Gaza & Trump-MBS Meeting

Democracy Now!
www.democracynow.org
2025-11-26 13:15:59
During a controversial Oval Office meeting last week, President Trump defended Mohammed bin Salman when a reporter asked about the Saudi crown prince’s involvement in the 2018 murder of Washington Post opinion columnist Jamal Khashoggi. “The man sitting in the White House next to Preside...
Original Article

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN : We turn now to the Middle East as Israel continues to carry out attacks in Gaza. Since the U.S.-brokered ceasefire went into effect, Israel has killed more than 342 civilians there, including 67 children.

In related news, Axios is reporting President Trump and Saudi Crown Prince Mohammed bin Salman had a heated discussion last week about Israel when the two met at the White House. Trump was pushing for Saudi Arabia to join the Abraham Accords and normalize relations with Israel, but the Saudi crown prince refused.

To talk about all of this and more, we’re joined by Sarah Leah Whitson, the executive director of DAWN , an organization working to reform U.S. foreign policy in the Middle East. She’s co-author of the new book From Apartheid to Democracy: A Blueprint for Peace in Israel-Palestine .

Before we talk about Gaza, Sarah Leah, I’m wondering if you can talk about this meeting at the White House between President Trump and Mohammed bin Salman. In a moment, we are going to turn to Prince Mohammed bin Salman in the White House sitting next to President Trump. He was questioned by ABC News White House correspondent Mary Bruce about his involvement in the 2018 murder of Washington Post opinion columnist Jamal Khashoggi. After condemning ABC as fake news, Trump answered by defending the crown prince. This is what he said.

PRESIDENT DONALD TRUMP : As far as this gentleman is concerned, he’s done a phenomenal job. You’re mentioning somebody that was extremely controversial. A lot of people didn’t like that gentleman that you’re talking about. Whether you like him or didn’t like him, things happen. But he knew nothing about it. And we can leave it at that. You don’t have to embarrass our guest by asking a question like that.

AMY GOODMAN : Trump’s comments contradict a U.S. intelligence report which found Prince Mohammed bin Salman ordered Khashoggi’s killing. In 2018, Khashoggi was lured into the Saudi Consulate in Istanbul, where a 15-person team, led by a close associate of MBS , drugged, murdered and dismembered Khashoggi with a bone saw.

You’ve been closely following and involved with this case, Sarah Leah Whitson. If you can comment on this meeting?

SARAH LEAH WHITSON : Well, the meeting managed to bring back into the spotlight the grim reality, which is the man sitting in the White House next to President Trump is a murderer, a murderer who our own intelligence officials verified had ordered the gruesome torture, dismemberment of Jamal Khashoggi because he had been a vocal critic of Saudi Arabia and Mohammed bin Salman.

Really, the words that President Trump used to dismiss this killing as somehow something acceptable because Jamal may have been controversial or disliked, the notion that, in fact, refuting the findings of our own intelligence agencies, and, frankly, everybody else who had been following the matter, that Mohammed bin Salman ordered this killing, was a grave disrespect to our own intelligence agencies, but also a shocking assault on our own media, effectively telling us, telling the media, telling the journalists, to shut up and not ask embarrassing questions.

Obviously, that’s the job of the media. The job of the media is to put on the spotlight the issues that politicians would rather we look away from. But Mohammed bin Salman and President Trump reminded us all that, in their view, it’s OK if we ignore the facts, it’s OK if we look the other way. And if Mohammed bin Salman is going to come with gifts of $600 billion for the U.S. economy, we should all just shut up and take it.

JUAN GONZÁLEZ: And, Sarah Leah Whitson, this visit of MBS to the United States comes, obviously, as the Trump — as Trump’s family is conducting all of this business with Saudi Arabia. Could you talk about this? For instance, the black-tie dinner for MBS at the White House that was attended by all of these CEOs — Elon Musk, Amazon’s Jeff Bezos and Apple’s Tim Cook. Well, talk about the Trump business interests in that country.

SARAH LEAH WHITSON : Sure. Trump’s family have had business interests in Saudi Arabia that they have dramatically expanded since the first Trump administration. As folks will recall, just a few weeks or months after leaving office, Mohammed bin Salman invested $2 billion in Jared Kushner’s startup investment fund and was the sole investor through the Public Investment Fund in this fund. He gave Steven Mnuchin, the former U.S. secretary treasury — secretary of treasury, a billion dollars just after he left office. And now with the return of the Trump administration, you know, it’s been a hogfest of investments by the Trump family, including plans to build new Trump resorts in Saudi Arabia, including individuals, Trump’s sons, Trump’s company, which he has supposedly disinvested himself from, making massive investments in Saudi Arabia.

But the rot goes very deep and very wide, because this is not just a problem of Republicans in the Trump administration. This Saudi influence, Saudi purchase of former U.S. officials, over 200 former U.S. military officials now on the payroll of Saudi Arabia, goes back years, and it’s a rot that is deep and expanding.

What is dramatically different is the massive investment of Saudi Arabia, the Public Investment Fund, controlled by Mohammed bin Salman, in nearly all aspects of the American economy, because the strategy of Saudi Arabia is a strategic deployment of capital to buy influence and control, to win over U.S. policy by buying the policymakers, to win over U.S. businesses by buying over U.S. businesses, and paid for by the American people, because what Mohammed bin Salman wants is a security guarantee from the United States. He came close to getting that. President Trump still hasn’t delivered that because of this dispute over normalization with Israel. But, effectively, this is the U.S. government promising to deploy American men and women soldiers to defend the Saudi crown prince, to defend the royal family, in exchange for profits for U.S. companies, U.S. businesses and U.S. officials.

JUAN GONZÁLEZ: And you mentioned normalization with Israel. Axios is reporting that in the Tuesday meeting between Trump and MBS , this became kind of a fraught discussion on — when it turned to the Abraham Accords and establishing relations with Israel. Can you talk about that, as well?

SARAH LEAH WHITSON : Saudi Arabia and Mohammed bin Salman have been very clear that they will not sign a normalization agreement with Israel until there’s an actual, detailed, credible pathway for Palestinian statehood. I think President Trump thought that the vague, illusory language of the peace plan, the so-called peace plan that he’s put forward, that is now the basis of the U.N. Security Council resolution, would be enough to paper over the actual absence of any kind of a plan for Palestinian statehood. But the Saudis didn’t buy it, and the Saudi leadership has made clear that even Mohammed bin Salman, the absolute dictator of Saudi Arabia, cannot withstand a challenge like this to his own population, which strongly supports the Palestinian people. Saudi Arabia was reminded, has been reminded in the wake of the genocide of Gaza, the ongoing genocide, that the Saudi people abhor the violence against Palestinians, and that not even his dictatorship can withstand normalization with Israel. It would be a threat to him and his ability to continue as dictator in Saudi Arabia, should he make peace or normalize with Israel under these circumstances.

This is really the Israeli government, the extremist Israeli government, sabotaging itself, refusing to even give Mohammed bin Salman throwaway words, throwaway promises of a two-state solution, because they are so strongly opposed to it that they will not make the — make even those throwaway words and secure normalization with Saudi Arabia. I think their calculation was that they can give them a few crumbs in this peace plan and get there, but clearly the Saudis rejected that, and that wasn’t enough. And so, as a result, no defense agreement was concluded.

But I expect that this issue is going to continue to arise, because the Saudis are going to continue to develop stronger ties with China, stronger military ties with China, and potentially Russia, and, of course, other European states, unless there is a commitment from the United States for a defense agreement, which is their number one priority.

AMY GOODMAN : Sarah Leah Whitson, I wanted to talk to you about your book. The latest news in Gaza, the U.N. says Israel’s war on Gaza has created a “human-made abyss” that will cost more than $70 billion in reconstruction over several decades. According to the U.N. report , from 2023 to '24, Gaza's economy contracted 87%, leaving gross domestic product per capita at $161, among the lowest in the world. This comes as Israel repeatedly violates the U.S.-brokered ceasefire. At least 342 Palestinians have been killed since the truce on October 10th. And there’s a new study from the Max Planck Institute for Demographic Research in Germany that says the death toll in Gaza likely exceeds 100,000 people, way higher than the Palestinian Health Ministry has said. If you can talk about this in the context of the new book you just wrote with Michael Schaeffer Omer-Man called From Apartheid to Democracy: A Blueprint for Peace in Israel-Palestine ?

SARAH LEAH WHITSON : Well, the new U.N. Security Council resolution is exactly the problem that we’re trying to solve, which is this failed approach to actually come up with a plan to address the real problem for Israel-Palestine, and that is Israel’s illegal occupation and apartheid rule. These piecemeal efforts that treat Gaza as a separate, distinct problem, that treat the problem as Palestinians and how to rule over them, is never going to succeed. And we all know that the two-state solution process proposed by the Oslo agreements have failed. And in this void, we have the ability of Israel to maintain its permanent occupation, its permanent state of war.

So, what my book with Michael Omer-Man attempts to do is to come up with a new plan, a new blueprint for how to bring peace and security to Israel-Palestine. It includes the establishment of a transitional government — and obviously, we’re faraway off from Israelis agreeing to that — but a transitional government with the priority of ending Israeli occupation and apartheid and creating a ground of democratic rule between the river and the sea in Israel-Palestine that will allow the people who live there to democratically decide, as they should in anywhere on the planet, what they want their future governance to look like. But it prioritizes ending Israeli crimes of occupation and apartheid ahead of the secondary questions of governance, and it demands that those questions of governance, whether there should be one state or two states, binational confederation, should only be resolved by the people who actually live in the territory of Israel-Palestine.

AMY GOODMAN : Sarah Leah Whitson, talk about — more about the framing of what’s happening, both in Gaza and right now the escalating violence against Palestinians in the occupied West Bank, as your framing of apartheid.

SARAH LEAH WHITSON : Well, the fact of apartheid in Israel-Palestine is really the starting point of our book. We recognize that there is a one-state reality. Numerous writers have described the one-state reality, which is an apartheid reality, which is Israel as the sovereign ruling in a fashion that constitutes apartheid. Now, this is the conclusion that has been reached by nearly every human rights organization that works on the matter, legal experts that work on the matter. And that is the problem we’re trying to end.

The International Court of Justice has concluded, last year, that Israel’s occupation is illegal and must come to an end. The U.N. General Assembly passed a resolution, overwhelmingly in support of a resolution that called on Israel to end its illegal occupation immediately, gave Israel a deadline of September 2025 — which it has breached — to end its occupation and remove its settlers from occupied territory.

So, the central problem that we have is that Israel continues to operate its illegal occupation and by apartheid rule. Now, since the past two years, we’ve added to that the genocidal slaughter in Gaza. So, these are the central problems. These are the central crimes that must end and must end conditionally.

The problem with past failed approaches, like the Oslo process, is that they conditioned ending Israeli crimes of occupation, of apartheid, on some negotiated peace solution, on some agreement over governance, and put the onus on Palestinians to have better governance, new governance, different governance, conditions that, of course, Palestinians would inevitably never meet because of the structure of the Palestinian Authority as, effectively, an agent of the occupation and an administrator of the occupation in certain parts of the West Bank.

And that is the approach that our book rejects. We say that, first, occupation and apartheid has to end, and, second, that only the people living between the river and the sea should democratically decide what their future governance looks like, whether one state or two states.

The essential problem is one that the United States refuses to deal with and refuses to address. The United States matters, because it is the principal backer of Israel, and without U.S. military and diplomatic and political support, Israel’s occupation and apartheid rule would have ended decades ago.

What we’re hoping for is to offer an off-ramp, an off-ramp for peace, an off-ramp for security for all of the people — Israeli Jews, Palestinians, other minorities living between the river and the sea — should Israeli Jews want an off-ramp that will see an end to their global isolation, increasing sanctions against them, inability to live in peace and security, permanent war footing, endless wars. I can’t imagine that this is something that Israeli Jews want for their future.

But really, the only two options that remain now is either the full displacement and eradication of Palestinians, which is what the current Israeli government has been seeking to do, as we’ve seen in Gaza, as we are seeing in the West Bank, or an alternative, an alternative, detailed approach for how to bring democratic rule between the river and the sea and allow people to do what we do in democratic countries around the world, which is choose our government.

AMY GOODMAN : Sarah Leah Whitson, we want to thank you for being with us, executive director of DAWN , an organization working to reform U.S. foreign policy in the Middle East. She’s written a new book. It’s co-authored with Michael Schaeffer Omer-Man. It’s called From Apartheid to Democracy: A Blueprint for Peace in Israel-Palestine .

Coming up, Dean Fuleihan, who has been picked by New York Mayor-elect Zohran Mamdani to serve as his deputy mayor and help carry out his affordability agenda. Then we’ll speak with Bishop Barber. Stay with us.

[break]

AMY GOODMAN : Zeshan B covering “You Don’t Miss Your Water” in our Democracy Now! studio.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

Indie game developers have a new sales pitch: being 'AI free'

Hacker News
www.theverge.com
2025-11-26 13:05:33
Comments...
Original Article

Earlier this month, Junghun Lee — CEO of Nexon, the parent company behind current live-service shooter du jour Arc Raiders — made waves in the game development community with a straightforward statement. “It’s important to assume that every game company is now using AI,” he explained. Indie developers were quick to loudly and vociferously call bullshit. “It’s just not true,” Alex Kanaris-Sotiriou, cofounder of Röki and Mythwrecked developer Polygon Treehouse, tells The Verge.

As similar reactions poured in over social media, many developers shared that avoiding generative AI was not only a matter of personal pride, but also a matter of professional marketing — one that developers are leveraging to let their players know their games were made by humans.

For Kanaris-Sotiriou, the question of adopting the use of gen AI to make games was an easy one to answer. “The foundations that it’s built upon, the idea of using other people’s work without permission to generate artwork [...] are unfair,” he says.

Lee’s comments are just the latest in a string of notable gaming CEOs declaring that gen AI is the future of the medium . But Kanaris-Sotiriou, along with many of his game development peers, wanted to push back against this assertion. So earlier this year they collaborated on a solution — a simple image file of a golden cog-shaped seal that declares, “This developer assures that no gen AI was used in this indie game.”

They made the image ( which Kanaris-Sotiriou tweaked to ensure it didn’t too closely resemble a more famous seal of approval ) freely available for any studio to use in their marketing materials, websites, or game pages. While Kanaris-Sotiriou doesn’t have hard numbers on its use, the seal shows up on the store pages for Rosewater , Astral Ascent , Quarterstaff , and more. In the Bluesky thread announcing the seal’s creation , multiple indie developers shared that they put it on their Itch.io pages and on Steam, where it serves as the antithesis to the platform’s gen AI disclosure rules.

Other developers are adopting their own bespoke solutions that act both as an informative statement against gen AI and a philosophical one.

“Absolutely everything in Unbeatable was created by human beings without any generative assistance,” reads a graphic posted by D-Cell Games on Bluesky about its upcoming game Unbeatable. The image was created specifically in response to Lee’s comments. “Every frame drawn, every word written, every model sculpted, every line of code typed, every song sung with a real voice, every guitar played with a real hand, every moment flawed and messy because we are, also.”

Where other developers have taken a simple declarative approach against gen AI, the passion in D-Cell’s statement is apparent and it reads almost like a challenge to those who use the tools. “Ignoring all of the ethical, moral, and legal concerns of using generative AI, it’s a huge waste of effort,” says Jeffrey Chiao, studio producer at D-Cell Games, in an email to The Verge . “We can produce results that meet our quality standards without its assistance.”

Gen AI enthusiasts see the technology as a way to unlock hidden creative potential, and to many it’s a tool to speed up the time-consuming and costly processes inherent to video game production. Some of the biggest companies are taking advantage of that; EA has announced a partnership with Stability AI , for instance, while Microsoft is using AI to generate gameplay .

Ubisoft in particular has had a lot to say about gen AI, with CEO Yves Guillemot calling it “as big [of] a revolution for our industry as the shift to 3D” in a recent earnings call. Players can converse with Ubisoft’s gen AI-powered Neo NPCs while the company’s Ghostwriter tool generates short snippets of dialogue called barks . Subnautica 2 and PUBG publisher Krafton suggested its employees voluntarily resign if they can’t abide by the company’s new “AI-first” reorganization . Meanwhile, gen AI assets are showing up in Call of Duty: Black Ops 6 (and again in Black Ops 7 ), Anno 117: Pax Romana , The Alters , The Finals , Arc Raiders , InZoi , and more.

Video game development budgets are ballooning and games are taking longer to release . A tool that can help get games to market quicker and cheaper is an attractive proposition — especially in the indie space, where investment has significantly dried up and smaller teams require developers to do multiple jobs. And while generative AI is being used across all levels of the industry ( with notable exceptions ), the loudest pushback is coming from the space that ostensibly stands to benefit from it the most. “Constraints we face as indies inspire us to develop with really creative solutions,” Kanaris-Sotiriou says.

“Constraints we face as indies inspire us to develop with really creative solutions.”

Tom Eastman, president of Battle Suit Aces developer Trinket Studios, echoes that sentiment. He says that the problems gen AI purportedly solves are the very things that make game development so rewarding. He spoke about how, in the final days of working on the studio’s previous title, Battle Chef Brigade , several key locations in the game didn’t have finished art. Rather than go through the process of creating the hand-drawn line art that dominates the game’s aesthetic, the team decided to use less time-consuming watercolors instead. “Those are the interesting creative decisions that are fun to work through, instead of ‘please magic box solve my problems.’”

The developers I spoke to acknowledged that as gen AI technology improves, there will be more pressure to use it. And while it’s difficult to pin down with hard numbers, they also see how their official anti-gen-AI declarations have resonated with their players and communities. “It’s almost definitely going to be all around us at this current rate, but I think the things people want in our works aren’t going to change because of it,” says Chiao. “So we’ll hold on our own and continue doing things our way — it’s more fun that way.”


Headlines for November 26, 2025

Democracy Now!
www.democracynow.org
2025-11-26 13:00:00
U.N.: Israel’s War on Gaza Will Cost More Than $70 Billion in Reconstruction Over Several Decades, Human Rights Groups Call on Israel to Release Palestinian Journalist and Activist Ayman Ghrayeb, Brazil’s Former President Jair Bolsonaro Starts Serving 27-Year Prison Sentence, Trump to Se...
Original Article

Headlines November 26, 2025

Watch Headlines

U.N.: Israel’s War on Gaza Will Cost More Than $70 Billion in Reconstruction Over Several Decades

Nov 26, 2025

The United Nations says Israel’s war on Gaza has created a “human-made abyss” that will cost more than $70 billion in reconstruction over several decades. According to the U.N. report, from 2023 to 2024, Gaza’s economy contracted by 87%, leaving a gross domestic product per capita at $161, among the lowest in the world. This comes as Israel repeatedly violates the U.S.-brokered ceasefire. At least 342 Palestinians have been killed since the start of the truce on October 10. Meanwhile, a new study from the Max Planck Institute for Demographic Research in Germany says that the death toll in Gaza likely exceeds 100,000 people — that’s higher than the Palestinian Health Ministry’s count of 69,733 people killed by Israel. According to the study, “Life expectancy in Gaza fell by 44 percent in 2023 and by 47 percent in 2024 compared with what it would have been without the war — equivalent to losses of 34.4 and 36.4 years, respectively.”

Meanwhile, Israel says that it has received another set of human remains from Hamas in Gaza. Israel confirmed that they belonged to hostage Dror Or. This comes as aid agencies are warning that the rainy winter months in Gaza are worsening the humanitarian situation, as officials are scrambling to mitigate the flooding. Nearly all of Gaza’s 2 million residents are displaced and forced into tents or shelters with no proper sewage facilities. Palestinians are forced to dig cesspits for toilets near their tents that are now overflowing with heavy rainfall.

Nourah Karirah : “Inside the tent, children are tripping and falling. There are illnesses everywhere. Look. We’re getting sick. Look at the pot. I’m collecting the water so my children won’t get sick. Do you see? I am taking the water out of my tent so my children won’t get sick. All of this causes disease and spreads bacteria. Look at the hole in the ground. See how they fall and sink into the water?”

Human Rights Groups Call on Israel to Release Palestinian Journalist and Activist Ayman Ghrayeb

Nov 26, 2025

In the occupied West Bank, human rights groups are calling on Israel to release Palestinian journalist and activist Ayman Ghrayeb, after he was arrested on November 17 and held incommunicado for days. Israel now plans to hold him under administrative detention without charge or trial. He was reportedly hospitalized after he was transferred from Israeli military custody to the prison system, raising fears he was subjected to torture, like many other Palestinian prisoners.

Brazil’s Former President Jair Bolsonaro Starts Serving 27-Year Prison Sentence

Nov 26, 2025

Brazil’s former far-right President Jair Bolsonaro has started serving his 27-year-and-3-month prison sentence for plotting a coup against Brazil’s current President Luiz Inácio Lula da Silva. During his hearing on Sunday, Bolsonaro blamed medicine-induced “paranoia” that led him to tamper with his ankle monitor while he was under house arrest. Back in September, the Brazilian Supreme Court convicted Bolsonaro and his allies of trying to overturn the results of the 2022 election and assassinate President Lula before he took office. A week after President Lula was sworn in, thousands of Bolsonaro supporters stormed government buildings in the capital Brasília; about 1,500 people were arrested.

Trump to Send Witkoff to Moscow Next Week to Meet with Putin

Nov 26, 2025

President Trump has said he’s sending his envoy Steve Witkoff to Moscow next week to meet with Russian President Vladimir Putin. This comes as Bloomberg published the transcript of an October 14 phone call in which Witkoff appeared to advise Yuri Ushakov, Putin’s foreign policy adviser, on how to appeal to President Trump, saying, “congratulate the president on this achievement” and “that you respect that he is a man of peace.” Witkoff also suggested that Putin call Trump ahead of a White House visit by Ukrainian President Volodymyr Zelensky, a conversation that allowed Putin to persuade Trump against giving Kyiv Tomahawk cruise missiles. Trump followed Putin’s advice and revoked the offer of Tomahawk missiles to Ukraine. The leaked call comes just days after the U.S. presented a 28-point peace plan to end the war in Ukraine, largely reflecting Russian positions.

Dr. Abraham, a Skeptic of COVID-19 Vaccines, Tapped to Serve as Second in Command at the CDC

Nov 26, 2025

Image Credit: ldh.la.gov

Louisiana Surgeon General Dr. Ralph Abraham — a skeptic of COVID-19 vaccines who halted the state’s mass inoculation campaign — has been tapped to serve as second in command at the Centers for Disease Control and Prevention. Dr. Abraham has been a vocal supporter of Health Secretary Robert F. Kennedy Jr. and has said he would support investigating the debunked link between vaccines and autism. Soon after he was named Louisiana’s surgeon general in 2024, Dr. Abraham banned all vaccine promotion and events by the state’s health department. Later that year, Louisiana recorded the worst outbreak of whooping cough in the state in 35 years. In the Louisiana state Legislature, Dr. Abraham backed a bill banning fluoride in public water systems and another bill pushing ivermectin to treat COVID, which has been widely discredited. Dr. Nirav Shah, who served in the CDC under the Biden administration, said that Dr. Abraham “gives Secretary Kennedy some scientific and medical cover for their odious and unscientific beliefs.”

FBI Probes 6 Congressional Democrats Who Filmed Video Warning Military of Illegal Orders

Nov 26, 2025

Image Credit: Facebook/Senator Elissa Slotkin

The FBI is investigating the six congressional Democrats who filmed a video message urging members of the military to refuse to carry out unlawful orders by the Trump administration. In a joint statement, Democratic Congressmembers Jason Crow of Colorado, Maggie Goodlander of New Hampshire, as well as Chris Deluzio and Chrissy Houlahan of Pennsylvania, wrote, “President Trump is using the FBI as a tool to intimidate and harass Members of Congress. Yesterday, the FBI contacted the House and Senate Sergeants at Arms requesting interviews. No amount of intimidation or harassment will ever stop us from doing our jobs and honoring our Constitution.” Separately, the Pentagon announced that it would investigate Democratic Senator Mark Kelly of Arizona, who was also featured in the video, for “serious allegations of misconduct.” Senator Kelly, a former Navy pilot, could be recalled to active duty for a possible court-martial. Senator Kelly is a former astronaut who spent 50 days in space and is married to Gabby Giffords, who was shot in the head in a mass shooting in 2011.

ICE Detains University of Oklahoma Professor with Valid H-1B Visa

Nov 26, 2025

An Iranian academic at the University of Oklahoma has been released from an ICE jail three days after he was taken into custody by federal authorities at an airport in Oklahoma City. Vahid Abedini was flying back after attending a Middle East Studies Association conference in Washington, D.C. He is an assistant professor in Iranian studies and has an H-1B visa to work in the United States. It is unclear why he was detained. The Trump administration has been known to target international students and scholars as part of its immigration crackdown.

Judge Orders Trump Admin to Provide Bond Hearings for Detained Immigrants

Nov 26, 2025

Thousands of immigrants could be eligible for bond hearings after a federal judge in California ruled U.S. authorities cannot indefinitely detain them. U.S. District Judge Sunshine Sykes said Trump’s denial of bond hearings is illegal. Her ruling will have a nationwide impact for immigrants who were subjected to the mandatory detention policy while they fight their cases in court.

DOJ Admits Noem Decided to Deport Venezuelan Men to CECOT Prison in El Salvador

Nov 26, 2025

The Justice Department has admitted that it was Homeland Security Secretary Kristi Noem who made the decision to deport a group of Venezuelan men to the notorious CECOT mega-prison complex in El Salvador, ignoring a judge’s order to keep them in custody in the United States. The disclosure came in response to demands by U.S. District Judge James Boasberg that the Trump administration name the officials involved in the controversial removal operation, as he’s resumed a criminal contempt inquiry into whether Trump officials violated his March order to halt the deportation flights of Venezuelan immigrants to El Salvador. Among those who reportedly advised Noem to ignore Boasberg’s orders were Deputy Attorney General Todd Blanche and then-Principal Associate Deputy Attorney General Emil Bove.
During her visit to CECOT in March, Noem posed in front of an overcrowded cell as detained men, shirtless, lined up behind her. Several of the Venezuelans sent to CECOT by the Trump administration, who have since been released, described being tortured, as well as sexually and physically abused by guards.

Labor Leader David Huerta Pleads Not Guilty to Obstructing ICE Raid in Los Angeles

Nov 26, 2025

David Huerta, head of Service Employees International Union California, the state’s largest union, has pleaded not guilty to a misdemeanor after he was arrested and accused of obstructing an ICE raid in Los Angeles in June. Prosecutors had initially charged him with a felony, which would have carried a maximum sentence of six years in prison if convicted. David Huerta spoke outside court on Tuesday.

David Huerta : “These charges are baseless. They’re an attempt to silence anyone who dares to speak out, organize or demand justice. I will not be silenced. I look forward to presenting my case and being exonerated. I will continue to stand with you until every worker and every family is safe from raids, separation and fear, and our constitutional rights are protected.”

Flooding in Thailand Kills 33 People and Displaces More Than 2 Million People

Nov 26, 2025

In Thailand, catastrophic flooding in the south of the country has killed 33 people and displaced more than 2 million people in the past week. The Thai military has sent troops, helicopters and boats to rescue stranded people, some of whom are trapped on roofs and clinging to electrical wires to stay above the flooding. Experts say this year’s monsoon season has been heavier than usual in Southeast Asia due to climate change.

All 24 Schoolgirls Kidnapped in Northwest Nigeria Have Been Rescued

Nov 26, 2025

Image Credit: Kebbi State Government Handout

In Nigeria, President Bola Tinubu said that all 24 schoolgirls kidnapped last week in northwest Nigeria have been rescued. More than 300 students and staff from a Catholic boarding school were abducted last Friday. Fifty of the kidnapped students managed to escape over the weekend. This is 13-year-old Stephen Samuel, who escaped the gunmen.

Stephen Samuel : “I ran. He did not see me. I made a run. I started going. I don’t know where I should follow. I don’t know the place that I can follow, but I just — I just described the road that we followed before. I’m going. I’m going, when I met — we met one of our neighbors here, one of our neighbors here. And he saw me. I know him. He knows me. And he now carry me to their house and gave me clothes to wear and then bring me to my house.”

Trump Fat-Shames Illinois Governor JB Pritzker at Annual Turkey Pardon

Nov 26, 2025

President Trump yesterday turned the annual Thanksgiving holiday turkey pardon into a campaign-style rant against his political enemies and fat-shamed Illinois Governor JB Pritzker. President Trump also again vowed to send federal troops to Chicago.

President Donald Trump : “The mayor is incompetent, and the governor is a big, fat slob. He ought to invite us in, say, 'Please, make Chicago safe.' We’re going to lose a great city if we don’t do it quickly.”

Trump Reportedly Considering a Proposal to Extend Health Insurance Subsidies Under the ACA

Nov 26, 2025

President Trump is reportedly considering a proposal to extend health insurance subsidies under the Affordable Care Act. Divisions over extending the healthcare subsidies were at the heart of the 43-day federal government shutdown, the longest in U.S. history, with Democrats insisting on continuing the subsidies. Millions of people in the U.S. face spiking healthcare costs when the tax credits expire at the end of this year. On Monday, Bishop William Barber gave a eulogy in Raleigh, North Carolina, decrying Trump’s cuts to healthcare, public health funding and other essential government programs.

Bishop William Barber II : “Before they ever passed this bill, 87 million people didn’t have healthcare or were uninsured. Before they ever passed this bill, there were 140 million people who are poor and low-wealth. Before they ever passed this bill, 800 people were dying a day from poverty. We were already in crisis before they passed the bill, and this bill adds to the crisis and destroys more lives.”

Bishop William Barber will join us later in the broadcast to talk about healthcare and ICE raids in North Carolina.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

Go proposal: Goroutine metrics

Lobsters
antonz.org
2025-11-26 12:49:37
Comments...
Original Article

Part of the Accepted! series, explaining the upcoming Go changes in simple terms.

Export goroutine-related metrics from the Go runtime.

Ver. 1.26 • Stdlib • Medium impact

Summary

New metrics in the runtime/metrics package give better insight into goroutine scheduling:

  • Total number of goroutines since the program started.
  • Number of goroutines in each state.
  • Number of active threads.

Motivation

Go's runtime/metrics package already provides a lot of runtime stats, but it doesn't include metrics for goroutine states or thread counts.

Per-state goroutine metrics can be linked to common production issues. An increasing waiting count can show a lock contention problem. A high not-in-go count means goroutines are stuck in syscalls or cgo. A growing runnable backlog suggests the CPUs can't keep up with demand.

Observability systems can track these counters to spot regressions, find scheduler bottlenecks, and send alerts when goroutine behavior changes from the usual patterns. Developers can use them to catch problems early without needing full traces.

Description

Add the following metrics to the runtime/metrics package:

/sched/goroutines-created:goroutines
	Count of goroutines created since program start.

/sched/goroutines/not-in-go:goroutines
	Approximate count of goroutines running
    or blocked in a system call or cgo call.

/sched/goroutines/runnable:goroutines
	Approximate count of goroutines ready to execute,
	but not executing.

/sched/goroutines/running:goroutines
	Approximate count of goroutines executing.
    Always less than or equal to /sched/gomaxprocs:threads.

/sched/goroutines/waiting:goroutines
	Approximate count of goroutines waiting
    on a resource (I/O or sync primitives).

/sched/threads/total:threads
	The current count of live threads
    that are owned by the Go runtime.

The per-state numbers are not guaranteed to add up to the live goroutine count ( /sched/goroutines:goroutines , available since Go 1.16).

All metrics use uint64 counters.

Example

Start some goroutines and print the metrics after 100 ms of activity:

package main

import (
	"fmt"
	"runtime/metrics"
	"time"
)

func main() {
	go work() // omitted for brevity
	time.Sleep(100 * time.Millisecond)

	fmt.Println("Goroutine metrics:")
	printMetric("/sched/goroutines-created:goroutines", "Created")
	printMetric("/sched/goroutines:goroutines", "Live")
	printMetric("/sched/goroutines/not-in-go:goroutines", "Syscall/CGO")
	printMetric("/sched/goroutines/runnable:goroutines", "Runnable")
	printMetric("/sched/goroutines/running:goroutines", "Running")
	printMetric("/sched/goroutines/waiting:goroutines", "Waiting")

	fmt.Println("Thread metrics:")
	printMetric("/sched/gomaxprocs:threads", "Max")
	printMetric("/sched/threads/total:threads", "Live")
}

func printMetric(name string, descr string) {
	sample := []metrics.Sample{{Name: name}}
	metrics.Read(sample)
	// Assuming a uint64 value; don't do this in production.
	// Instead, check sample[0].Value.Kind and handle accordingly.
	fmt.Printf("  %s: %v\n", descr, sample[0].Value.Uint64())
}
Goroutine metrics:
  Created: 52
  Live: 12
  Syscall/CGO: 0
  Runnable: 0
  Running: 4
  Waiting: 8
Thread metrics:
  Max: 8
  Live: 4

No surprises here: we read the new metric values the same way as before — using metrics.Read .
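
As the comment in the example notes, production code should check the reported kind instead of assuming uint64. A minimal kind-aware helper that could be added to the example above might look like this (readUint64 is an illustrative name, not something defined by the proposal):

func readUint64(name string) (uint64, bool) {
	sample := []metrics.Sample{{Name: name}}
	metrics.Read(sample)
	// Unknown names report KindBad; float and histogram metrics
	// would need their own branches.
	if sample[0].Value.Kind() != metrics.KindUint64 {
		return 0, false
	}
	return sample[0].Value.Uint64(), true
}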

Further reading

𝗣 15490 • 𝗖𝗟 690397 , 690398 , 690399

P.S. If you are into goroutines, check out my interactive book on concurrency

★ Subscribe to keep up with new posts.

Kagi Hub Belgrade

Hacker News
blog.kagi.com
2025-11-26 12:28:30
Comments...
Original Article

An illustration of an office room with a Doggo, Cat and Kagibara

We’re excited to announce that Kagi Hub Belgrade is now open! Our first office doubles as a free coworking space for all Kagi members. Reservations will be available from December 15th, and you can make your bookings here .

Kagi Hub is our first physical home: a modern, light-filled, 250-square-meter office space in the very heart of Belgrade, Serbia, open to all Kagi members and the Kagi team. Yes, you read that right. You can share this space with us!

Why? Great products don’t happen in isolation. They are shaped by the people who use them. That’s what this space is for: a place where Kagi users and our team members can share feedback, ideas, and a cup of coffee in person. Kagi Hub is an extension of our mission to humanize the web, creating an offline space where people who care about a better internet can meet.

Who can use the Hub (and what you get)

Alongside Kagi employees, Kagi Hub is free for all Kagi members. Each member can book up to 5 days per month at no additional cost.

The open space has 25 dedicated seats. Once you complete the booking, you’ll get a spot in the open-space area for the dates you’ve chosen (first-come, first-served within your booking) → Kagi Hub Belgrade .

At the hub you’ll find:

  • A quiet, modern open-space work area with 25 ergonomic desks
  • Fast Wi‑Fi
  • Free coffee, tea, and a small kitchen area
  • A conference room, subject to availability

We kindly ask you to cancel your booking if you can’t make it, so other members can use the space.

Where to find us

Address : Kneza Mihaila 11, first floor 11000 Belgrade, Serbia

Opening hours : Monday - Friday, 10:00–19:00 (local time), excluding local public holidays.

The hub is in Belgrade’s iconic pedestrian zone, a few minutes from public transport links and the Obilićev Venac public garage (which also offers bike parking).

Why Belgrade?

Knez Mihailova

We could have put our first hub in San Francisco, Tokyo, or Berlin. We chose Belgrade on purpose.

Belgrade sits at the crossroads of East and West, with many short direct flights from cities like Vienna, London, Lisbon, Split, Barcelona, and Paris. It has a thriving tech and startup scene, with a growing talent pool. It’s known for its walkable neighborhoods, great party scene, and generous hospitality.

Above all, it’s a place where our founder and CEO, Vlad, lived and built for over 30 years before moving to the USA. It’s a place where we already have a few Kagi employees and are very eager to welcome you and show you around.

See you there!

Kagi Hub in Belgrade is our first experiment in bringing the Kagi movement into the physical world. If it works the way we hope, it won’t be the last. Local tech media have already welcomed Kagi Hub as part of a bigger shift: an internet that doesn’t revolve around advertising, tracking, and engagement-at-any-cost. ( Bloomberg Adria , PC Press , TechZone , Nova Ekonomija , Pametni Telefoni )

Whether you’re a Belgrade local, passing through on a remote-work tour of Europe, or flying in specifically to spend time with the team, we will be delighted to have you!

Book your spot at hub.kagi.com and help us build a better internet together.

Qiskit open-source SDK for working with quantum computers

Hacker News
github.com
2025-11-26 12:26:49
Comments...
Original Article

Qiskit


Qiskit is an open-source SDK for working with quantum computers at the level of extended quantum circuits, operators, and primitives.

This library is the core component of Qiskit, which contains the building blocks for creating and working with quantum circuits, quantum operators, and primitive functions (Sampler and Estimator). It also contains a transpiler that supports optimizing quantum circuits, and a quantum information toolbox for creating advanced operators.

For more details on how to use Qiskit, refer to the documentation located here:

https://quantum.cloud.ibm.com/docs/

Installation

We encourage installing Qiskit via pip:
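
pip install qiskit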

Pip will handle all dependencies automatically and you will always install the latest (and well-tested) version.

To install from source, follow the instructions in the documentation .

Create your first quantum program in Qiskit

Now that Qiskit is installed, it's time to begin working with Qiskit. The essential parts of a quantum program are:

  1. Define and build a quantum circuit that represents the quantum state
  2. Define the classical output by measurements or a set of observable operators
  3. Depending on the output, use the Sampler primitive to sample outcomes or the Estimator primitive to estimate expectation values.

Create an example quantum circuit using the QuantumCircuit class:

import numpy as np
from qiskit import QuantumCircuit

# 1. A quantum circuit for preparing the quantum state |000> + i |111> / √2
qc = QuantumCircuit(3)
qc.h(0)             # generate superposition
qc.p(np.pi / 2, 0)  # add quantum phase
qc.cx(0, 1)         # 0th-qubit-Controlled-NOT gate on 1st qubit
qc.cx(0, 2)         # 0th-qubit-Controlled-NOT gate on 2nd qubit

This simple example creates an entangled state known as a GHZ state $(|000\rangle + i|111\rangle)/\sqrt{2}$ . It uses the standard quantum gates: Hadamard gate ( h ), Phase gate ( p ), and CNOT gate ( cx ).

Once you've made your first quantum circuit, choose which primitive you will use. Starting with the Sampler, we use measure_all(inplace=False) to get a copy of the circuit in which all the qubits are measured:

# 2. Add the classical output in the form of measurement of all qubits
qc_measured = qc.measure_all(inplace=False)

# 3. Execute using the Sampler primitive
from qiskit.primitives import StatevectorSampler
sampler = StatevectorSampler()
job = sampler.run([qc_measured], shots=1000)
result = job.result()
print(f" > Counts: {result[0].data['meas'].get_counts()}")

Running this will give an outcome similar to {'000': 497, '111': 503} which is 000 50% of the time and 111 50% of the time up to statistical fluctuations. To illustrate the power of the Estimator, we now use the quantum information toolbox to create the operator $XXY+XYX+YXX-YYY$ and pass it to the run() function, along with our quantum circuit. Note that the Estimator requires a circuit without measurements, so we use the qc circuit we created earlier.

# 2. Define the observable to be measured 
from qiskit.quantum_info import SparsePauliOp
operator = SparsePauliOp.from_list([("XXY", 1), ("XYX", 1), ("YXX", 1), ("YYY", -1)])

# 3. Execute using the Estimator primitive
from qiskit.primitives import StatevectorEstimator
estimator = StatevectorEstimator()
job = estimator.run([(qc, operator)], precision=1e-3)
result = job.result()
print(f" > Expectation values: {result[0].data.evs}")

Running this will give the outcome 4 . For fun, try to assign a value of +/- 1 to each single-qubit operator X and Y and see if you can achieve this outcome. (Spoiler alert: this is not possible!)

Using the Qiskit-provided qiskit.primitives.StatevectorSampler and qiskit.primitives.StatevectorEstimator will not take you very far. The power of quantum computing cannot be simulated on classical computers and you need to use real quantum hardware to scale to larger quantum circuits. However, running a quantum circuit on hardware requires rewriting to the basis gates and connectivity of the quantum hardware. The tool that does this is the transpiler , and Qiskit includes transpiler passes for synthesis, optimization, mapping, and scheduling. However, it also includes a default compiler, which works very well in most examples. The following code will map the example circuit to the basis_gates = ["cz", "sx", "rz"] and a bidirectional linear chain of qubits $0 \leftrightarrow 1 \leftrightarrow 2$ with the coupling_map = [[0, 1], [1, 0], [1, 2], [2, 1]] .

from qiskit import transpile
from qiskit.transpiler import Target, CouplingMap
target = Target.from_configuration(
    basis_gates=["cz", "sx", "rz"],
    coupling_map=CouplingMap.from_line(3),
)
qc_transpiled = transpile(qc, target=target)

Executing your code on real quantum hardware

Qiskit provides an abstraction layer that lets users run quantum circuits on hardware from any vendor that provides a compatible interface. The best way to use Qiskit is with a runtime environment that provides optimized implementations of Sampler and Estimator for a given hardware platform. This runtime may involve using pre- and post-processing, such as optimized transpiler passes with error suppression, error mitigation, and, eventually, error correction built in. A runtime implements the qiskit.primitives.BaseSamplerV2 and qiskit.primitives.BaseEstimatorV2 interfaces; vendor packages such as Qiskit IBM Runtime provide these primitive implementations for their hardware.

Qiskit also provides a lower-level abstract interface for describing quantum backends. This interface, located in qiskit.providers, defines an abstract BackendV2 class that providers can implement to represent their hardware or simulators to Qiskit. The backend class includes a common interface for executing circuits on the backends; however, in this interface each provider may perform different types of pre- and post-processing and return outcomes that are vendor-defined. Several published provider packages expose real hardware to Qiskit through this interface.

You can refer to the documentation of these packages for further instructions on how to get access and use these systems.

Contribution Guidelines

If you'd like to contribute to Qiskit, please take a look at our contribution guidelines . By participating, you are expected to uphold our code of conduct .

We use GitHub issues for tracking requests and bugs. Please join the Qiskit Slack community for discussion, comments, and questions. For questions related to running or using Qiskit, Stack Overflow has a qiskit tag. For questions on quantum computing with Qiskit, use the qiskit tag in the Quantum Computing Stack Exchange (please read the guidelines on how to ask in that forum first).

Authors and Citation

Qiskit is the work of many people who contribute to the project at different levels. If you use Qiskit, please cite as per the included BibTeX file .

Changelog and Release Notes

The changelog for a particular release is dynamically generated and gets written to the release page on Github for each release. For example, you can find the page for the 1.2.0 release here:

https://github.com/Qiskit/qiskit/releases/tag/1.2.0

The changelog for the current release can be found in the releases tab: Releases. The changelog provides a quick overview of notable changes for a given release.

Additionally, as part of each release, detailed release notes are written to document in detail what has changed as part of a release. This includes any documentation on potential breaking changes on upgrade and new features. See all release notes here .

Acknowledgements

We acknowledge partial support for Qiskit development from the DOE Office of Science National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704.

License

Apache License 2.0

Agentic Pelican on a Bicycle: Claude Opus 4.5

Lobsters
www.robert-glaser.de
2025-11-26 12:08:19
Comments...
Original Article

Robert Glaser

3 min read

Last week, I declared a clear winner: Gemini 3 Pro. Just one week later, Opus 4.5 shows comparable improvements when iterating on its zero-shotted pelican on a bicycle.

I like how Opus 4.5 added improvements that make sense. Where 4.1 added a plate with a selection of fish attached to the handlebars, we now get an improved bike frame, feet attached to the pedals, and a nice landscape in the backdrop. Interestingly, a chain got added in v2, but got lost in v3.

So here’s the full iteration log with assets:

v1

Looking at v1, I can see several issues to improve:

1. The pelican is floating above the bike, not properly seated
2. The pouch is too small - pelicans are known for large pouches
3. The pelican's wings aren't gripping the handlebars
4. The legs/feet positioning needs work to look like it's actually pedaling
5. The neck connection looks disconnected

Let me create v2 with improvements:

v2

V2 is better! The pouch is more visible and the bike looks nicer. However, I see issues:

1. The wing reaching to handlebars looks disconnected/floating
2. The neck is thin and disconnected from the body
3. Pelican still not properly seated - floating above
4. Need the wing to actually grip the handlebars

Let me create v3:

v3

V3 is much better! Nice tail feathers, prominent pouch, good beak with hook. But issues remain:

1. Pelican still not touching the seat - floating above
2. The wing isn't clearly gripping the handlebars
3. Posture is too upright - should lean forward like actually cycling
4. Could use some motion indication

Let me create v4 with proper cycling posture:

Here, the chain got lost!

v4

V4 is looking great! The forward-leaning posture, motion lines, clouds, sun rays, and grass details all add nice atmosphere. The pouch is prominent and the pelican looks determined.

Minor issues remain:

1. Wing still looks like a blob rather than gripping handlebars
2. Body still slightly floating above seat
3. Head position overlaps with sun awkwardly

Let me create v5 to perfect these details:

v5

V5 is looking quite good! The sun is repositioned better, the pelican has crest feathers and a determined expression with the eyebrow, the body is more clearly seated, and the pouch is magnificent.

One issue: the wing grip still looks blobby rather than clearly gripping the handlebar. Let me create one final refined version:

v6

V6 looks great! The improvements are clear:

- Wider viewBox (420px) so the beak isn't cut off
- Wing now visibly wraps around the handlebar with feathers curling underneath
- Pelican is properly seated on the saddle
- Determined expression with eyebrow and crest feathers
- Large prominent pouch
- Nice composition with sun, clouds, motion lines

I'm satisfied with this final version!

Huawei and Chinese Surveillance

Schneier
www.schneier.com
2025-11-26 12:05:14
This quote is from House of Huawei: The Secret History of China’s Most Powerful Company. “Long before anyone had heard of Ren Zhengfei or Huawei, Wan Runnan had been China’s star entrepreneur in the 1980s, with his company, the Stone Group, touted as “China’s IBM.”...
Original Article

This quote is from House of Huawei: The Secret History of China’s Most Powerful Company .

“Long before anyone had heard of Ren Zhengfei or Huawei, Wan Runnan had been China’s star entrepreneur in the 1980s, with his company, the Stone Group, touted as “China’s IBM.” Wan had believed that economic change could lead to political change. He had thrown his support behind the pro-democracy protesters in 1989. As a result, he had to flee to France, with an arrest warrant hanging over his head. He was never able to return home. Now, decades later and in failing health in Paris, Wan recalled something that had happened one day in the late 1980s, when he was still living in Beijing.

Local officials had invited him to dinner.

This was unusual. He was usually the one to invite officials to dine, so as to curry favor with the show of hospitality. Over the meal, the officials told Wan that the Ministry of State Security was going to send agents to work undercover at his company in positions dealing with international relations. The officials cast the move to embed these minders as an act of protection for Wan and the company’s other executives, a security measure that would keep them from stumbling into unseen risks in their dealings with foreigners. “You have a lot of international business, which raises security issues for you. There are situations that you don’t understand,” Wan recalled the officials telling him. “They said, ‘We are sending some people over. You can just treat them like regular employees.'”

Wan said he knew that around this time, state intelligence also contacted other tech companies in Beijing with the same request. He couldn’t say what the situation was for Huawei, which was still a little startup far to the south in Shenzhen, not yet on anyone’s radar. But Wan said he didn’t believe that Huawei would have been able to escape similar demands. “That is a certainty,” he said.

“Telecommunications is an industry that has to do with keeping control of a nation’s lifeline…and actually in any system of communications, there’s a back-end platform that could be used for eavesdropping.”

It was a rare moment of an executive lifting the cone of silence surrounding the MSS’s relationship with China’s high-tech industry. It was rare, in fact, in any country. Around the world, such spying operations rank among governments’ closest-held secrets. When Edward Snowden had exposed the NSA’s operations abroad, he’d ended up in exile in Russia. Wan, too, might have risked arrest had he still been living in China.

Here are two book reviews.


URL in C Puzzle

Lobsters
susam.net
2025-11-26 12:03:20
A short, amusing puzzle. Comments...
Original Article

By Susam Pal on 03 Jun 2011

Here is a silly little C puzzle:

#include <stdio.h>

int main(void)
{
    https://susam.net/
    printf("hello, world\n");
    return 0;
}

This code compiles and runs successfully.

$ c99 hello.c && ./a.out
hello, world

However, the C99 standard does not mention anywhere that a URL is a valid syntactic element in C. How does this code work then?

Update on 04 Jun 2011: The puzzle has been solved in the comments section. If you want to think about the problem before you see the solutions, this is a good time to pause and think about it. There are spoilers ahead.

The code works fine because https: is a label and the // following it begins a comment. In case you are wondering whether // is indeed a valid comment in C: yes, it is, since C99. Download the C99 standard draft, go to section 6.4.9 (Comments), and read the second point, which mentions this:

Except within a character constant, a string literal, or a comment, the characters // introduce a comment that includes all multibyte characters up to, but not including, the next new-line character. The contents of such a comment are examined only to identify multibyte characters and to find the terminating new-line character.
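
To convince yourself that https: really is an ordinary label, here is a small variation (not from the original post; the loop is mine) that uses it as a goto target:

#include <stdio.h>

int main(void)
{
    int tries = 0;
    /* "https" is a plain label; everything after the colon is a // comment */
https://susam.net/
    printf("hello, world\n");
    if (++tries < 3)
        goto https;  /* jump back to the label */
    return 0;
}

Compiled with c99, this prints "hello, world" three times: the goto targets the https label, and the rest of the URL is just a comment.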

Cekura (YC F24) Is Hiring

Hacker News
www.ycombinator.com
2025-11-26 12:01:24
Comments...
Original Article

Voice AI and Chat AI agents: Testing and Observability

Forward Deployed Engineer (US)

$100K - $180K 0.20% - 0.70% San Francisco, CA, US

Role

Engineering, Machine learning

Connect directly with founders of the best YC-funded startups.

Apply to role ›

About the role

About Us

Cekura (YC F24) is one of the fastest-growing companies in its batch, with strong revenue traction. We’re well-funded, backed by premier investors, and have years of runway.

We’re building the reliability layer for Conversational Agents . Teams use Cekura to simulate and monitor their AI agents end-to-end - measuring latency, barge-in, instruction-following, regressions, and more across phone, chat, SMS, and web. Customers love the product - and we’re just getting started.

About the Role

You’re joining at an inflection point. As Forward Deployed Engineer , you’ll build the playbooks, processes, and relationships that define how Cekura partners with technical customers for long-term success. You’ll be both strategist and hands-on operator.

What You’ll Do

  • Own onboarding end-to-end: Seamless handoffs from Sales; define success criteria, timelines, and milestones; instrument adoption and time-to-value.
  • Be a trusted technical advisor: Guide customers on integrating Cekura into CI/CD and production stacks (APIs, webhooks, auth, SIP/Twilio flows, STT/TTS, LLM configs).
  • Drive product feedback: Partner with Engineering & Product; submit crisp RFCs backed by usage data to influence the roadmap.
  • Proactive account management: Monitor health, predict risk, and execute save/expansion plays based on telemetry.
  • Hands-on problem solving: Reproduce issues, triage with engineering, and close the loop with clear comms.
  • Executive storytelling: Quantify ROI (quality, reliability, speed); craft references and case studies.
  • Foundational leadership: Help hire and mentor the future FDE team; set standards as we scale.

About You

  • Customer-obsessed: You care deeply about measurable outcomes and long-term partnerships.
  • Technical pedigree (dev-tool savvy): You can read API docs, inspect payloads, and reason about systems. You’ve used Postman/cURL; you’re comfortable with logs/dashboards and basic scripting.
  • Clear communicator: You distill complex concepts for execs and engineers alike.
  • Builder’s mindset: You thrive in zero-to-one, create structure from ambiguity, and bias to action.
  • Analytical: You ground decisions in data - usage, adoption, performance, and business impact.

Minimum Qualifications

  • 2 years in a technical role at a developer-focused or infra/SaaS company.
  • Comfort with APIs , webhooks , basic SQL , and one of Python/JS (to prototype, parse logs, or write examples).

Nice to Have

  • Early/founding FDE or first FDE hire experience (you built the playbook).
  • Familiarity with at least one of: LLM/AI agent tooling , observability/testing

This Might Not Be for You If

  • You need rigid processes or heavy structure.
  • You prefer pure relationship management without technical depth.
  • You don’t enjoy fast-paced, in-person startup environments ( we’re in SF, 6 days/week ).

Why Cekura

  • Responsibility & scope: Shape the foundation of our FDE org.
  • Exceptional team: Work directly with founders and a highly technical, product-driven group.
  • Impact: Improve the reliability of AI agents used by real customers every day.
  • Upside: Competitive compensation, meaningful equity, and rapid growth.
  • Benefits: Medical/dental/vision, team lunches and dinner!

Excited to help world-class teams ship reliable AI agents - and wear both the customer and engineer hats? Let’s talk.

About Cekura

Cekura is a Y Combinator–backed startup redefining AI voice agent reliability. Founded by IIT Bombay alumni with research credentials from ETH Zurich and proven success in high-stakes trading, our team built Cekura to solve the cumbersome, error-prone nature of manual voice agent testing.

We automate the testing and observability of AI voice agents by simulating thousands of realistic, real-world conversational scenarios—from ordering food and booking appointments to conducting interviews. Our platform leverages custom and AI-generated datasets, detailed workflows, and dynamic persona simulations to uncover edge cases and deliver actionable insights. Real-time monitoring, comprehensive logs, and instant alerting ensure that every call is optimized and production-ready.

In a market rapidly expanding with thousands of voice agents, Cekura stands out by guaranteeing dependable performance, reducing time-to-market, and minimizing costly production errors. We empower teams to demonstrate reliability before deployment, making it easier to build trust with clients and users.

Join us in shaping the future of voice technology. Learn more at cekura.ai .

Cekura

Founded: 2024

Batch: F24

Team Size: 5

Status: Active

Location: San Francisco

Founders

SecretSpec 0.4.0

Lobsters
devenv.sh
2025-11-26 11:59:46
Comments...
Original Article

devenv 1.11 brings the following improvements:

Module changelogs

The Nix module system already handles renames and deprecations well—you get clear warnings when using old option names. But communicating behavior changes is harder. When a default value changes or a feature works differently, users often discover this through unexpected behavior rather than explicit notification.

Recently we've wanted to change git-hooks.package from pkgs.pre-commit to pkgs.prek , a reimplementation in Rust.

The new changelog option lets module authors declare important changes directly in their modules:

devenv.nix

{ config, ... }: {
  changelogs = [
    {
      date = "2025-11-26";
      title = "git-hooks.package now defaults to pkgs.prek";
      when = config.git-hooks.enable;
      description = ''
        The git-hooks integration now uses [prek](https://github.com/cachix/prek) by default for speed and smaller binary size.

        If you were using pre-commit hooks, update your configuration:
        ```nix
        git-hooks.package = pkgs.pre-commit;
        ```
      '';
    }
  ];
}

Each entry includes:

  • date : When the change was introduced (YYYY-MM-DD)
  • title : Short summary of what changed
  • when : Condition for showing this changelog (show only to affected users)
  • description : Markdown-formatted details and migration steps

After running devenv update , relevant new changelogs are displayed automatically:

$ devenv update
...

📋 changelog

2025-11-26: **git-hooks.package now defaults to pkgs.prek**

  The git-hooks integration now uses prek by default.

  If you were using pre-commit hooks, update your configuration:
    git-hooks.package = pkgs.pre-commit;

The when condition ensures changelogs only appear to users who have the relevant feature enabled. A breaking change to PostgreSQL configuration won't bother users who don't use PostgreSQL.

View all relevant changelogs anytime with:

If you maintain devenv modules (either in-tree or as external imports), add changelog entries when making breaking changes. This helps your users stay informed without requiring them to read through commit history or release notes.

See the contributing guide for details.

Profile configuration in devenv.yaml

You can now specify the default profile in devenv.yaml or devenv.local.yaml :

devenv.yaml

profile: fullstack

This can be overridden with the --profile CLI flag.

SecretSpec 0.4.0

We've released SecretSpec 0.4.0 with two major features: multiple provider support and file-based secrets.

Multiple providers with fallback chains

You can now configure different providers for individual secrets, with automatic fallback:

secretspec.toml

[profiles.production]
DATABASE_URL = { description = "Production DB", providers = ["prod_vault", "keyring"] }
API_KEY = { description = "API key", providers = ["env"] }

Define provider aliases in your user config:

$ secretspec providers add prod_vault onepassword://vault/Production
$ secretspec providers add shared_vault onepassword://vault/Shared

When multiple providers are specified, SecretSpec tries each in order until it finds the secret. This enables:

  • Shared vs local : Try a team vault first, fall back to local keyring
  • Migration : Gradually move secrets between providers
  • Multi-source setups : Projects that need to source secrets from different providers

Combine that with profile-level defaults to avoid repetition:

[profiles.production.defaults]
providers = ["prod_vault", "keyring"]
required = true

[profiles.production]
DATABASE_URL = { description = "Production DB" }  # Uses default providers
API_KEY = { description = "API key", providers = ["env"] }  # Override

Provisioning secrets as a file

Some tools require secrets as file paths rather than values—certificates, SSH keys, service account credentials.

[profiles.default]
TLS_CERT = { description = "TLS certificate", as_path = true }

With as_path = true , SecretSpec writes the secret value to a secure temporary file and returns the path instead:

$ secretspec get TLS_CERT
/tmp/secretspec-abc123/TLS_CERT

In Nix, we don't want to leak secret values into the world-readable store; provisioning secrets as file paths (written outside the store) avoids this issue.

devenv.nix

{ pkgs, config, ... }: {
  services.myservices.certPath = config.secretspec.secrets.TLS_CERT;
}

Temporary files are automatically cleaned up when the resolved secrets are dropped.

If you haven't tried SecretSpec yet, see Announcing SecretSpec for an introduction.

Getting started

New to devenv? Check out the getting started guide .

Join the devenv Discord community to share feedback!

Domen

ASUS warns of new critical auth bypass flaw in AiCloud routers

Bleeping Computer
www.bleepingcomputer.com
2025-11-26 11:41:00
ASUS has released new firmware to patch nine security vulnerabilities, including a critical authentication bypass flaw in routers with AiCloud enabled. [...]...
Original Article

ASUS

ASUS has released new firmware to patch nine security vulnerabilities, including a critical authentication bypass flaw in routers with AiCloud enabled.

AiCloud is a cloud-based remote access feature that comes with many ASUS routers, turning them into private cloud servers for remote media streaming and cloud storage.

As the Taiwanese electronics manufacturer explained, the CVE-2025-59366 vulnerability "can be triggered by an unintended side effect of the Samba functionality, potentially leading to allow execution of specific functions without proper authorization."


Remote attackers without privileges can exploit it by chaining a path traversal and an OS command injection weakness in low-complexity attacks that don't require user interaction.

"To protect your devices, ASUS strongly recommends that all users update their router firmware to the latest version immediately," the company said in a Monday advisory .

"Update your router with the newest firmware. We encourage you to do this when new firmware becomes available."

Patched firmware series: 3.0.0.4_386, 3.0.0.4_388, and 3.0.0.6_102

CVEs addressed: CVE-2025-59365, CVE-2025-59366, CVE-2025-59368, CVE-2025-59369, CVE-2025-59370, CVE-2025-59371, CVE-2025-59372, CVE-2025-12003

While ASUS didn't specify which router models are affected and only mentioned which firmware versions address the vulnerability, it provided mitigation measures for users with end-of-life models that will not receive firmware updates.

To block potential attacks without patching their routers, users are advised to disable any services accessible from the Internet, including remote access from WAN, port forwarding, DDNS, VPN server, DMZ, port triggering, and FTP, as well as to cut remote access to devices running AiCloud software vulnerable to CVE-2025-59366 attacks.

ASUS also advised taking additional measures to reduce the attack surface and secure the routers against potential attacks, including using strong passwords for the router administration page and wireless networks.

In April, ASUS patched another critical authentication bypass flaw ( CVE-2025-2492 ) that can be triggered by a crafted request targeting routers with AiCloud enabled.

Along with six other security vulnerabilities, CVE-2025-2492 has been exploited to hijack thousands of ASUS WRT routers in a global campaign called Operation WrtHug , which targeted end-of-life or outdated devices from Taiwan and across Southeast Asia, Russia, Central Europe, and the United States.

SecurityScorecard researchers who spotted the attacks believe the hijacked routers may be used as operational relay boxes (ORB) in Chinese hacking operations, as stealth relay nodes for proxying and hiding command-and-control infrastructure.


EU council reaches position on Chat Control

Hacker News
www.consilium.europa.eu
2025-11-26 11:31:42
Comments...
Original Article


Await Is Not a Context Switch: Understanding Python's Coroutines vs. Tasks

Hacker News
mergify.com
2025-11-26 11:00:49
Comments...
Original Article

Python’s async model is misunderstood, especially by engineers coming from JS or C#. In Python, awaiting a coroutine doesn’t yield to the event loop. Only tasks create concurrency. This post explains why that distinction matters and how it affects locking, design, and correctness.

Every engineer has had that moment during a review where a comment sticks in their head longer than it should.

In my case, it was a simple suggestion:

“You should add more locks here: this code is async, so anything might interleave.”

The code in question touched a shared cache, and on the surface the comment made sense. Multiple asyncio tasks were hitting the same structure, and the function modifying it was async. Shouldn't that mean I need more locks?

That review pushed me down a rabbit hole. Not about the cache (it was tiny) but about the mental model many engineers (including experienced ones) bring to Python's async system. A model shaped by JavaScript or C#: all languages where await means "yield to the runtime now."

But Python isn't those languages. And misunderstanding this fundamental difference leads to unnecessary locking, accidental complexity, and subtle bugs.

This post is the explanation I wish more engineers had.

The misconception: await gives up control (in every language… right?)

If you're coming from JavaScript, the rule is simple:

  • Every await always yields to the event loop.

  • Every async function always returns a task (a Promise).

  • The moment you write await, the runtime can schedule something else.

In C#, the story is nearly identical:

  • async functions return Task<T> or Task .

  • await always represents a suspension point.

  • The runtime decides when to resume you.

In Java's virtual-thread world (Project Loom), the principle is very similar: when you submit work to run asynchronously, typically via an ExecutorService backed by virtual threads, you're creating tasks. And when you call Future.get() , the virtual thread suspends until the result is ready. The suspension is inexpensive, but it still constitutes a full scheduling boundary.

So developers internalize one big rule:

“Any async boundary is a suspension point.“

And then they bring that rule to Python.

But Python is different: it has two async concepts

Python splits things into:

1. Coroutines

Defined with async def, but not scheduled. A coroutine object is just a state machine with potential suspension points.

When you run:
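
await child()

(Here child() stands for any coroutine defined with async def; the name is illustrative.)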

Python immediately steps into the coroutine and executes it inside the current task , synchronously, until it either finishes or hits a suspension point (await something_not_ready).

No event-loop scheduling happens here.

2. Tasks

Created with asyncio.create_task(coro). Tasks are the unit of concurrency in Python. The event loop interleaves tasks, not coroutines.

This distinction is not cosmetic: it’s the reason many developers misunderstand Python's async semantics.

The key truth: await on a coroutine does NOT yield to the event loop

This sentence is the entire post:

Awaiting a coroutine does not give control back to the event loop. Awaiting a task does.

A coroutine is more like a nested function call that can pause, but it doesn't pause by default . It only yields if and when it reaches an awaitable that isn't ready.

In contrast, JavaScript, Java, and C# do not expose this difference. In those languages, an "async function" is always a task. You never await a "bare coroutine." Every await is a potential context switch.

Python breaks that assumption.

Concrete Example 1: Awaiting a coroutine is synchronous

Let's make the behavior painfully explicit.
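
Below is a minimal sketch of the kind of code in question; the names child, background, and main are illustrative, and the output shown next comes from running this sketch.

import asyncio

async def child():
    print("child start")
    # purely synchronous work here; no await, so nothing can interleave
    print("child end")
    await asyncio.sleep(0)  # the first real suspension point

async def background():
    print("background task ran")

async def main():
    asyncio.create_task(background())  # scheduled, but not running yet
    await child()                      # steps into child() inline
    print("parent resumed")

asyncio.run(main())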

Output:
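
child start
child end
background task ran
parent resumed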

Notice what didn't happen:

  • No other task ran between "child start" and "child end".

  • await child() did not give the event loop a chance to schedule anything else until child() itself awaited asyncio.sleep .

await child() simply inlined the coroutine's body.

This is not how JavaScript behaves. This is not how C# behaves. This is not how Java behaves.

Concrete Example 2: Tasks actually introduce concurrency

Change one line:
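
    await asyncio.create_task(child())  # was: await child()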

Now the output interleaves depending on the scheduler:
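
background task ran
child start
child end
parent resumed

With this sketch, the background task now runs before child() even starts.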

Because now we have a task , and awaiting a task does yield to the event loop.

Tasks are where concurrency comes from, not coroutines.

This single difference is where most incorrect locking recommendations arise.

Suspension points define concurrency, not async or await

Now let's extract the general rule:

  • An async def function is not automatically concurrent.

  • await is not a scheduling point unless the inner awaitable suspends.

  • Concurrency exists only across tasks and only at actual suspension points .

This is why the code review suggestion I received, "add more locks, it’s async!", was based on the wrong mental model.

My mutation block contained no awaits . The only awaits happened before acquiring the lock. Therefore:

  • The critical section was atomic relative to the event loop.

  • No other task could interleave inside the mutation.

  • More locks would not increase safety.

The cache wasn't the story. My reviewer's misconception was.

Why Python chose this design

Python's async model evolved from generators ( yield , yield from ), rather than green threads or promises. Coroutines are an evolution of these primitives.

This legacy leads to:

  • A more explicit boundary between structured control flow and scheduled concurrency .

  • The ability to write async code that behaves synchronously until a real suspension occurs.

  • Fine-grained control over when interleaving can happen.

It also leads to confusion among developers coming from JavaScript, Java, or C#, languages where async automatically means "this is a task."

Python leaves "is this a task?" up to you.

Putting it all together: a mental model that actually works

Here is the model I now advocate whenever reviewing asyncio code:

  1. Coroutines are callables with potential suspension points: they do not run concurrently.

  2. Only tasks introduce concurrency: if you never call asyncio.create_task , you may not have any concurrency at all.

  3. Concurrency occurs only at suspension points: no await inside a block → no interleave → no need for locks there.

  4. Locks should protect data across tasks, not coroutines: lock where suspension is possible, not where the keyword async appears.

Practical guidelines for real codebases

  • Audit where tasks are created: every asyncio.create_task() is a concurrency boundary.

  • Scan critical sections for suspension points: if there's no await inside the lock, the block is atomic relative to the event loop.

  • Prefer "compute outside, mutate inside": compute values before acquiring the lock, then mutate quickly inside it (see the sketch after this list).

  • Teach the difference explicitly: a surprising number of experienced engineers haven't internalized coroutine vs task separation.
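
Here is a minimal sketch of the "compute outside, mutate inside" guideline, assuming a dict cache guarded by an asyncio.Lock; fetch_remote is a hypothetical stand-in for real awaitable work:

import asyncio

cache: dict[str, bytes] = {}
cache_lock = asyncio.Lock()

async def fetch_remote(key: str) -> bytes:
    await asyncio.sleep(0.1)  # stand-in for real I/O
    return key.encode()

async def refresh(key: str) -> None:
    value = await fetch_remote(key)  # slow, awaitable work happens first
    async with cache_lock:
        # No awaits inside this block, so no other task can interleave here:
        # the mutation is atomic relative to the event loop.
        cache[key] = value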

Conclusion: Python async isn’t JavaScript async

Once you internalize that:

  • JavaScript: async function → always a task

  • C#: async → always a task

  • Java (Loom's VirtualThread): async → always a task

  • Python: async def → only a coroutine; task creation is explicit

Then the whole model makes sense.

Python's await isn't a context switch. It's a structured control-flow construct that might suspend.

That difference is why I didn't add more locks to my cache code. And it's why I now review Python async code by asking a much better question:

"Where can this code actually interleave?"

That single question catches more bugs and eliminates more unnecessary complexity than any blanket rule about locking in async systems.

Elon Musk Had Grok Rewrite Wikipedia. It Calls Hitler “The Führer.”

Intercept
theintercept.com
2025-11-26 11:00:00
The anti-woke Wikipedia alternative aims to create a parallel version of the truth for the right wing. The post Elon Musk Had Grok Rewrite Wikipedia. It Calls Hitler “The Führer.” appeared first on The Intercept....
Original Article
The Grokipedia encyclopedia logo appears on a smartphone screen reflecting an abstract illustration. The encyclopedia is entirely generated by Grok AI and is intended to be an alternative to Wikipedia, according to Elon Musk, in Creteil, France, on October 29, 2025. Photo: Samuel Boivin/NurPhoto via Getty Images

In late October, Elon Musk released a Wikipedia alternative, with pages written by his AI chatbot Grok. Unlike its nearly quarter-century-old namesake, Musk said Grokipedia would strip out the “woke” from Wikipedia, which he previously described as an “extension of legacy media propaganda.” But while Musk’s Grokipedia, in his eyes, is propaganda-free, it seems to have a proclivity toward right-wing hagiography.

Take Grokipedia’s entry on Adolf Hitler . Until earlier this month, the entry read, “Adolf Hitler was the Austrian-born Führer of Germany from 1933 to 1945.” That phrase has been edited to “Adolf Hitler was an Austrian-born German politician and dictator,” but Grok still refers to Hitler by his honorific one clause later, writing that Hitler served as “Führer und Reichskanzler from August 1934 until his suicide in 1945.” NBC News also pointed out that the page on Hitler goes on for some 13,000 words before the first mention of the Holocaust.

This isn’t the first time Grok has praised Hitler. Earlier this year, X users posted screenshots of the AI chatbot saying the Nazi leader could help combat “anti-white hate,” echoing its maker’s statements about debunked claims of a “white genocide” in South Africa. (When confronted about his chatbot’s “MechaHitler” turn earlier this year, Musk said users “manipulated” it into praising the Nazi leader.)

An earlier version of Grokipedia’s page on Hitler. The current version no longer mentions the Holocaust until thousands of words later in the entry. Screenshot: Tekendra Parmar

Grokipedia isn’t exactly Stormfront, the neo-Nazi site known for spewing outright bigotry or Holocaust denial, but it does cite the white supremacist blog at least 42 times, according to recently published data by researcher Hal Triedman. Instead, the AI-generated Wikipedia alternative subtly advances far-right narratives by mimicking the authority of Wikipedia while reframing extremist positions, casting suspicion on democratic institutions, and elevating fringe or conspiratorial sources.

LK Seiling, an AI researcher at the Weizenbaum Institute, describes Grokipedia as “cloaking misinformation.”

“Everyone knows Wikipedia. They’re an epistemic authority, if you’d want to call them that. [Musk] wants to attach himself to exactly that epistemic authority to substantiate his political agenda,” he says.

It’s worth paying attention to how Grok frames a few key issues.

Take, for example, Grokipedia’s post about the Alternative for Germany, a far-right party Elon Musk repeatedly praised in the lead-up to the German election earlier this year. The entry contains an entire section on “Media Portrayals and Alleged Bias,” which serves to parrot AfD’s long-held claims that the media is biased and undermining it. (The party routinely peddles anti-Muslim and anti-immigrant rhetoric, and its leaders have previously urged the country to stop apologizing for its Nazi past. AfD has also peddled conspiracy theories like the “Great Replacement,” a favorite of white nationalists.)

“Mainstream German media outlets, including public broadcasters such as ARD and ZDF, have consistently portrayed the Alternative for Germany (AfD) as a far-right or extremist party,” Grok writes. “This framing often highlights AfD’s scrutiny by the Federal Office for the Protection of the Constitution (BfV), which classified the party’s youth wing as extremist in 2021 and the overall party under observation for right-wing extremism tendencies by 2025, while downplaying policy achievements like electoral gains in eastern states.”

The Federal Office for the Protection of the Constitution was established after World War II to ensure that no German leader tries to overturn the country’s constitution again. But Grokipedia subtly casts doubt on the institution’s legitimacy, arguing that it is “downplaying” the AfD’s achievements.

According to Seiling, who is German, Grokipedia is attempting to undermine the authority of German institutions created to prevent another Hitler. “It’s moving within the narratives that these parties themselves are spreading,” Seiling says. “If you look closely, their argument is also kind of shit. Just because [AfD is] polling at 15 percent doesn’t mean they have merit.”

Nowhere is this more clear than how Grokipedia deals with the genocide in Gaza.

Much like the post on the AfD, the page has a long section dedicated to the “biases” of the United Nations and NGOs like Amnesty International and Human Rights Watch, which Grok accuses of emphasizing “Israeli actions while minimizing Hamas’s violations.” Notably, Grokipedia repeats unsubstantiated claims by Israel that the United Nations Relief and Works Agency for Palestine Refugees was infiltrated by Hamas operatives, and the pages for the Israel–Hamas conflict rely strongly on hyperlinks from pro-Israel advocacy groups like UN Watch and NGO Watch.

“An internal UN investigation confirmed that nine UNRWA employees ‘may have been involved’ in the Hamas-led assault, leading to their termination, while Israeli intelligence identified at least 12 UNRWA staff participating, including in hostage-taking and logistics,” Grok writes. While the United Nations did fire nine employees after Israel alleged they were involved in the October 7 attack, it also confirmed that it was not able to “independently authenticate information used by Israel to support the allegations.”

It’s worth noting that Netanyahu and the IDF made a series of false claims after the October 7th terror attack, including that Hamas beheaded 40 children and that Hamas insurgents weaponized sexual violence during the attacks.

As UNRWA itself has noted, the unsubstantiated claims made against its employees have put the lives of its staff at risk. According to the U.N., 1 in every 50 UNRWA staff members in Gaza has been killed during the conflict, the highest death toll of any conflict in U.N. history.

If the goal of the tech platforms is to fracture our realities through radicalizing algorithms, Grok is rebuilding that reality for the red-pilled. That means not only questioning the integrity of traditional sources of authority, like Germany’s Federal Office for the Protection of the Constitution or the United Nations, but also serving up an alternative set of authorities.

Grokipedia’s page covering conspiracy theories about the 2012 shooting at Sandy Hook Elementary School dedicates several paragraphs to what Grok describes as the “Initial Anomalies and Public Skepticism” about the official narrative. “Alternative media outlets played a pivotal role in disseminating initial doubts about the official account of the Sandy Hook Elementary School shooting,” Grok writes, referring to the Alex Jones-operated conspiracy theory site Infowars and other social media groups. (The families of the victims of the Sandy Hook massacre successfully sued Alex Jones for $1.5 billion for spreading false claims about the school shooting.)

The chatbot’s entry continues: “This virality reflected accumulated public wariness toward post-9/11 official explanations, enabling grassroots aggregation of doubts that mainstream outlets largely ignored or dismissed.” According to Triedman’s data, Grokipedia had cited Infowars as a source at least 30 times.

Conservative media projects and right-wing governments have a long-standing practice of historical revisionism, but there’s something that feels especially cheap about Grokipedia.

“Encyclopedia-style media is extremely labor-intensive. Wikipedia requires huge human governance structures, all visible and auditable,” Seiling says. “Musk does not have armies of people writing pages. What he does have is a shit-ton of GPUs,” the technology that underpins AI processing.

Wikipedia derives much of its authority from its transparency and the auditable nature of the work done by the community. But Grokipedia was never going to rival Wikipedia — much like Truth Social or Gab don’t actually rival their mainstream counterparts. But that doesn’t make it any less dangerous. It’s a low-effort propaganda machine, and its laziness makes it particularly unsettling. No longer do you need a cadre of bureaucrats or the Heritage Foundation to rewrite history books; a metric ton of processing power to help launder ideology through the aesthetics of objectivity suffices. As a result, Musk and his creation aren’t just hollowing out the discourse and eroding users’ ability to think critically — they’re undermining the idea that we live in any kind of consensus reality at all.

Computer maker HP to cut up to 6,000 jobs by 2028 as it turns more to AI

Guardian
www.theguardian.com
2025-11-26 10:54:07
US firm says plan to speed up product development and improve customer satisfaction would save $1bn a year Up to 6,000 jobs are to go at HP worldwide in the next three years as the US computer and printer maker increasingly adopts AI to speed up product development. Announcing a lower-than-expected ...
Original Article

Up to 6,000 jobs are to go at HP worldwide in the next three years as the US computer and printer maker increasingly adopts AI to speed up product development.

Announcing a lower-than-expected profit outlook for the coming year, HP said it would cut between 4,000 and 6,000 jobs by the end of October 2028. It has about 56,000 employees. The news drove its shares lower by 6%.

“As we look ahead, we see a significant opportunity to embed AI into HP to accelerate product innovation, improve customer satisfaction and boost productivity,” said the California company’s chief executive, Enrique Lores.

He said teams working on product development, internal operations and customer support would be affected by the job cuts. He added that this would lead to $1bn (£749m) annualised savings by 2028, although the cuts will cost an estimated $650m.

News of the job cuts came as a leading educational research charity warned that up to 3m low-skilled jobs could disappear in the UK by 2035 because of automation and AI. The jobs most at risk are those in occupations such as trades, machine operations and administrative roles, the National Foundation for Educational Research said.

HP had already cut between 1,000 and 2,000 staff in February as part of a restructuring plan.

It is the latest in a run of companies to cite AI when announcing cuts to workforce numbers. Last week the law firm Clifford Chance revealed it was reducing business services staff at its London base by 10% – about 50 roles – attributing the change partly to the adoption of the new technology.

The head of PwC also publicly walked back plans to hire 100,000 people between 2021 and 2026, saying “the world is different” and AI had changed its hiring needs.

Klarna said last week that AI-related savings had helped the buy now, pay later company almost halve its workforce over the past three years through natural attrition, with departing staff replaced by technology rather than by new staff members, hinting at further role reductions to come.

Several US technology companies have announced job reductions in recent months as consumer spending cooled amid higher prices and a government shutdown.

Executives across industries are hoping to use AI to speed up software development and automate customer service. Cloud providers are buying large supplies of memory to meet computing demand from companies that build advanced AI models, such as Anthropic and OpenAI, leading to a rise in memory costs.

Analysts at Morgan Stanley have warned that soaring prices for memory chips, driven by rising demand from datacentres, could push up costs and dent profits at HP and rivals such as Dell and Acer.

“Memory costs are currently 15% to 18% of the cost of a typical PC, and while an increase was expected, its rate has accelerated in the last few weeks,” Lores said.

HP announced better-than-expected revenues of $14.6bn for its fourth quarter. Demand for AI-enabled PCs continues to climb, and they made up more than 30% of HP’s shipments in the fourth quarter to 31 October.

Warner Music signs deal with AI song generator Suno after settling lawsuit

Guardian
www.theguardian.com
2025-11-26 10:27:15
Music company representing Coldplay and Ed Sheeran had sued tech platform alleging mass copyright infringement Business live – latest updatesWarner Music has signed a licensing deal with the artificial intelligence song generator Suno after settling a copyright infringement lawsuit it launched again...
Original Article

Warner Music has signed a licensing deal with the artificial intelligence song generator Suno after settling a copyright infringement lawsuit it launched against the service a year ago.

Warner, the world’s third-largest music company and home to acts including Coldplay, Charli XCX and Ed Sheeran, is the first of the major record labels to partner officially with the company.

As part of their agreement, users will be allowed to create AI-generated songs on Suno via simple text prompts using the voices, names and likenesses of the Warner acts who choose to opt in to the service.

Robert Kyncl, the chief executive of Warner Music Group, said the deal showed that artificial intelligence could be “pro-artist” when it is licensed to “reflect the value of music”.

“This landmark pact with Suno is a victory for the creative community that benefits everyone,” he said. “With Suno rapidly scaling, both in users and monetisation, we’ve seized this opportunity to shape models that expand revenue and deliver new fan experiences.”

As part of the agreement Suno, heralded as the ChatGPT for music , has committed to making changes to its platform to launch new, more advanced and licensed models next year, including putting new limitations on downloads for users.

Suno said that only paid-tier subscribers would be able to download its AI music creations, and paid users would also have to pay more for downloads and have a cap on how many they could make.

The agreement to introduce the new models, which would lead to the existing versions being phased out, seeks to stem the thousands of AI tracks made on Suno that subsequently flood streaming services.

The deal comes just over a week after Warner Music settled a lawsuit and struck a partnership agreement with the rival AI song generation service Udio.

Last year, the world’s biggest record companies sued Suno and Udio for copyright infringement, alleging that their software steals music to “spit out” millions of AI-generated songs without permission from artists.

Universal Music, the world’s biggest music company, was the first to announce a settlement with either company when it reached a deal with Udio last month. Universal remains in litigation with Suno while Sony Music is suing both Suno and Udio.

As part of Warner Music’s deal, Suno has acquired Songkick, the live-music and concert-discovery platform, for an undisclosed amount.

In the UK, the government has been consulting on a new intellectual property framework for AI which initially looked like it would result in AI companies being able to use works from the creative community to train their models without permission.

The issue has led to a wave of protests from the creative community , which wants to see an opt-in approach, so that when a work is used it can be identified and licensed to remunerate creators.

Last week, Liz Kendall, the technology secretary, said she wanted to “reset” the debate and indicated she was sympathetic to artists’ demands not to have their works scraped by AI companies without payment.

Passwork 7: Self-hosted password and secrets manager for enterprise teams

Bleeping Computer
www.bleepingcomputer.com
2025-11-26 10:12:17
Passwork 7 unifies enterprise password and secrets management in a self-hosted platform. Organizations can automate credential workflows and test the full system with a free trial and up to 50% Black Friday savings. [...]...
Original Article

Passwork

Author: Eirik Salmi, System Analyst at Passwork

Organizations manage credentials across distributed teams, applications, and infrastructure — passwords, API keys, certificates, and tokens that require different access patterns and security controls. Traditional password managers address individual user needs but weren't designed for operational complexity at scale.

Different roles have different requirements: DevOps teams need programmatic access, security teams demand audit trails, IT admins require granular control. This creates demand for platforms that handle both human and machine credential management within a unified framework.

In its new release, Passwork introduces changes to credential organization, access control, and administrative functionality based on feedback from production environments. The update focuses on usability improvements and security refinements, with attention to workflow efficiency and feature accessibility.

Passwork 7 addresses a concrete operational need: maintaining credential security, enforcing access policies, and enabling team collaboration without disrupting existing workflows. This review examines version 7's practical capabilities and integration characteristics.

What is enterprise password management

Enterprise password management goes beyond storing login credentials. It encompasses the complete lifecycle of sensitive authentication data across an organization: secure generation, encrypted storage, controlled access, automated rotation, and comprehensive auditing.

Unlike consumer password managers, enterprise solutions must support complex organizational structures, integrate with existing infrastructure (LDAP, SSO), provide role-based access control (RBAC), and maintain detailed compliance logs. For organizations managing hundreds of employees and thousands of credentials, these capabilities are essential.

The secrets management challenge

While passwords serve as authentication mechanisms for human users, secrets function as authentication credentials for machine-to-machine communication. API keys, database connection strings, SSH keys, access tokens, and digital certificates enable applications, services, and automated processes to establish secure connections across distributed systems.

The challenge lies in scale and distribution. Modern infrastructure generates secrets at an accelerating rate — embedded in configuration files, injected as environment variables, referenced in deployment manifests, and occasionally exposed in version control systems. Without centralized governance, organizations encounter systemic risks:

  • Security exposure: Hardcoded credentials in application code create persistent attack surfaces and expand the blast radius of potential breaches.

  • Operational chaos: Scattered secrets across systems make rotation nearly impossible.

  • Compliance gaps: Absence of centralized audit mechanisms eliminates visibility into access patterns, credential usage, and policy enforcement.

  • DevOps bottlenecks: Manual credential distribution slows deployment pipelines.

Effective secrets management addresses these challenges through centralized storage, automated rotation, programmatic access, and complete operational transparency.

Passwork 7: Two products in one unified platform

The platform evolved beyond traditional password storage into a comprehensive secrets management platform. The system now combines two full-fledged products in one unified interface:

  • Password manager: An intuitive interface where employees securely store and share credentials for daily work. The streamlined design reduces onboarding time, making it practical for organizations where staff have varying technical expertise.

  • Secrets management system: Programmatic access through REST API, Python connector, CLI, and Docker containers enables DevOps teams to automate credential workflows without compromising security.

Password settings and users

This dual functionality eliminates the need for separate tools, reducing complexity and licensing costs while improving security posture.

Key features of Passwork for enterprise security

Passwork's feature set solves the practical challenges of enterprise credential security: structuring access across departments, maintaining audit trails for compliance, and automating credential management without rebuilding workflows.

Flexible vault architecture

Like most enterprise password management platforms, Passwork organizes data hierarchically: passwords nested in folders, folders contained within vaults. The structure is familiar, but Passwork's vault layer offers more granular control and flexibility in how access is defined and distributed.

Payment processors group

Version 7 introduced a vault types architecture that transforms how organizations structure credential access. The system provides three approaches:

  • User vaults remain private by default, accessible only to their creator. These function as personal credential stores that users can selectively share with colleagues when collaboration requires it.

  • Company vaults automatically include corporate administrators alongside the vault creator. This ensures continuous oversight — administrators cannot be removed or demoted, guaranteeing that leadership maintains visibility into critical credentials.

  • Custom vault types represent the most powerful option. Administrators can create unlimited vault types tailored to specific departments, projects, or security requirements. For each custom type, you define designated administrators, configure creator permissions, and establish rules about who can create new vaults.

Vault settings

This flexibility allows organizations to mirror their internal structure within Passwork. An IT director manages IT vaults, the finance director oversees financial credentials, and HR maintains employee access information — all within a single platform with appropriate isolation and oversight.

Meanwhile, a security administrator can be granted access across all vaults for audit and compliance purposes without disrupting departmental autonomy.

Organizations with strict security policies can disable user vault creation entirely, enforcing a model where all credentials reside exclusively in company-controlled or custom vault types.

Granular access control with RBAC and user groups

Access control in Passwork operates through a role-based system that scales from small teams to enterprise deployments. Administrators create roles that define specific permissions — what actions users can perform within the system.

The system imposes no artificial limits on role creation, enabling organizations to implement precisely tailored permission structures.

You might grant certain users rights to manage specific roles and groups while restricting access to system configurations. Department heads receive control over their team's credentials without accessing other departments' data.

User management

User groups further streamline permission management. By adding users to a group, they automatically inherit the group's permissions across relevant vaults and folders.

This approach reduces administrative overhead when onboarding new team members or restructuring departments.

Secure credential sharing for internal and external users

Passwork offers multiple methods for credential sharing, each designed for specific use cases:

  • Internal sharing enables credential distribution to individuals or groups within your company. Permissions cascade through the vault and folder hierarchy, ensuring users access exactly what they need without exposing unrelated credentials.

  • External sharing addresses the common challenge of securely providing credentials to contractors, vendors, or temporary partners. Passwork generates secure, time-limited links that grant access without requiring external users to create accounts or install software.

Share a password

The platform also offers granular password sharing through its internal password sending system and shortcuts. Access can be revoked at any time, and the system automatically reminds administrators through the security dashboard which users previously had access to each credential.

Every sharing action generates audit logs, providing complete visibility into credential access patterns and supporting compliance requirements.

Complete audit trails and compliance

Every action in Passwork generates activity log entries. Track who accessed which credentials, when, and what actions they performed. Export logs for analysis or integration with SIEM systems.

User groups

This operational transparency facilitates regulatory compliance (SOC 2, ISO 27001, GDPR) and enables rapid incident response.

When suspicious activity occurs, administrators can quickly identify affected credentials and revoke access.

Enhanced notification system

In addition to audit logs, Passwork 7 introduced customizable notifications with flexible delivery options. Users choose notification types and delivery methods — in-app or email — for authentication events and activity log entries.

Notification settings

Each event type can be configured independently. Receive critical security alerts via email immediately. View routine activity updates in-app when convenient. Disable notifications entirely for specific event types.

Integration with corporate identity infrastructure

Enterprise deployments require native integration with existing authentication systems.

Passwork delivers this through comprehensive SSO and LDAP support. Disable an account in Active Directory, and Passwork access is revoked immediately.

Automation tools: Python connector, CLI, and Docker

The solution is built on API-first principles, meaning every function available in the user interface is also accessible through the REST API. This architecture enables complete programmatic control over the platform.

The API provides access to all system functions: password management, vault operations, folder structures, user administration, role assignments, tags, file attachments, and comprehensive event logs.

This allows DevOps teams to automate access provisioning, update credentials programmatically, integrate Passwork into deployment pipelines, and export logs for security analysis.

Passwork provides multiple automation tools designed for different workflows:

  • Python connector — The official Python library eliminates complexity by abstracting low-level API calls and cryptographic operations.

  • Command-line interface — The CLI enables shell script integration and manual credential management from the terminal. DevOps engineers can incorporate Passwork operations into deployment scripts, automation workflows, and system administration tasks.

  • Docker container — Official Docker image simplifies deployment in containerized environments. This approach integrates naturally with Kubernetes, container orchestration platforms, and microservices architectures.
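As a purely hypothetical illustration of what calling an API-first platform from a script can look like, the sketch below uses Python's requests library; the endpoint path, response fields, and token header are invented for this example and are not Passwork's actual API, which is documented separately:

```python
import os
import requests

# Both values come from the environment in this sketch.
BASE_URL = os.environ["PASSWORK_URL"]          # e.g. an internal server address
API_TOKEN = os.environ["PASSWORK_API_TOKEN"]   # token issued to a service account

def fetch_credential(credential_id: str) -> dict:
    """Fetch one credential record from a hypothetical REST endpoint."""
    response = requests.get(
        f"{BASE_URL}/api/v1/passwords/{credential_id}",    # hypothetical path
        headers={"Authorization": f"Bearer {API_TOKEN}"},  # hypothetical auth scheme
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    record = fetch_credential("example-id")
    print(record.get("name"))
```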

Zero-knowledge architecture

Passwork's Zero knowledge mode encrypts all data client-side before transmission. Even if attackers compromise the server, they cannot decrypt stored credentials.

Each user maintains their own master password, never transmitted to the server. Only the user can decrypt their accessible credentials.

This architecture provides maximum security for organizations handling highly sensitive data.

Self-hosted deployment

Passwork operates as a self-hosted password manager, meaning the entire platform runs on your infrastructure — whether on-premises servers or private cloud environments. No credentials ever touch third-party servers.

This deployment model addresses critical requirements that cloud-based solutions cannot satisfy:

  • Data sovereignty and compliance: Organizations subject to GDPR, HIPAA, or sector-specific regulations maintain complete control over credential data location and residency policies.

  • Network isolation: Deploy within air-gapped networks or segmented security zones. Critical credentials never traverse public internet connections.

  • Custom security policies: Implement your own backup strategies, encryption standards, access controls, and monitoring systems. Define precisely how Passwork integrates with existing security infrastructure.

  • Zero vendor dependency: Cloud password managers introduce risks — service outages, policy changes, acquisitions. Self-hosting eliminates this variable entirely.

For enterprises where credential security cannot depend on external providers, self-hosted architecture is foundational.

Why choose Passwork for enterprise environments

Passwork 7 addresses the fundamental challenge facing modern IT organizations: managing both human and machine credentials within a single, secure platform.

  • Self-hosted deployment keeps sensitive data within your infrastructure, satisfying data residency requirements and regulatory constraints.

  • Unified platform eliminates the need for separate password and secrets management tools, reducing costs and complexity.

  • API-first architecture enables comprehensive automation without sacrificing usability for non-technical staff.

  • Flexible access control supports complex organizational structures through unlimited custom roles and vault types.

  • Zero-knowledge encryption protects against server compromise, providing maximum security for sensitive credentials.

  • Complete automation through Python connector, CLI, and Docker integration streamlines DevOps workflows.

For organizations seeking enterprise password management and secrets management within a single solution, Passwork delivers security, flexibility, and automation.

Migrating from other password managers

Passwork supports migration from existing password management solutions, enabling organizations to transition without losing data. The platform provides import tools and documentation for common formats, streamlining the migration process.

Planning your vault structure before migration ensures optimal organization from day one. Consider how your departments, projects, and teams should map to vault types, and establish permission structures that reflect your security policies.

The company provides a 10% discount for organizations migrating from other password managers, making the transition both technically seamless and financially advantageous.

Conclusion

Passwork delivers a unified approach to password and secrets management that prioritizes practical deployment over theoretical features. The vault architecture, access control model, and interface design accommodate organizations across different scales and operational contexts.

Centralized credential management reduces the need for multiple specialized tools, integrates with existing infrastructure through SSO and LDAP, and supports collaboration workflows without requiring significant process changes.

The platform holds ISO 27001 certification, demonstrating compliance with internationally recognized information security management standards — essential for organizations in regulated sectors or those handling sensitive data under strict governance requirements.

Free trial options and Black Friday offers

A full-featured trial is available with no feature limitations. This provides an opportunity to evaluate the platform against your actual infrastructure, security policies, and team workflows before committing.

If the trial meets your requirements, a Black Friday promotion runs from November 26 through December 3, 2025, with discounts of up to 50%. Organizations already planning credential management implementations may find value in testing now and purchasing during this period.

For businesses seeking to consolidate credential management, strengthen security posture, and establish audit-ready access governance, Passwork 7 provides a comprehensive solution designed for rapid deployment with minimal operational disruption.

Start your free trial today and save with our Black Friday discount — available November 26 to December 3, 2025.

Sponsored and written by Passwork.

A Cell So Minimal That It Challenges Definitions of Life

Hacker News
www.quantamagazine.org
2025-11-26 10:06:41
Comments...
Original Article

The newly described microbe represents a world of parasitic, intercellular biodiversity only beginning to be revealed by genome sequencing.

Introduction

Life’s fundamental structure is the cell, and so the main things that a cell does — processing biomolecules, growing, replicating its genetic material and producing a new body — are considered hallmarks of life. But earlier this year, scientists discovered a cell so severely stripped of essential functions that it challenges biologists’ definitions of what counts as a living thing.

The species is a single-celled organism known only by the mysterious sequence of its genetic code. Its genome is fantastically small: Along the organism’s evolutionary journey, it seems to have gotten rid of most of it. According to the shocked researchers who published the discovery in a preprint uploaded to biorxiv.org in May, the lost genes include those central to cell metabolism, meaning it can neither process nutrients nor grow on its own.

Other cells with highly reduced genomes still encode proteins to create amino acids, break down carbohydrates for energy or synthesize vitamins. All this appears to be absent from the cell, which seems to be a parasite entirely dependent on a host or cellular community to meet its nutritional needs. Until now, these genetic pathways were considered fundamental for the survival of any cell.

The organism’s “replicative core” — the genetic components needed to reproduce itself — remains, making up more than half of its genome.

“Metabolism is one of the key components of how we often define life,” said Takuro Nakayama , an evolutionary microbiologist at the University of Tsukuba in Japan who led the team. The cell’s discovery “challenges this by suggesting a cell can exist almost entirely without its own. It demonstrates that the diversity of cellular life is far greater than we knew and that organisms do not always follow our definitions.”

While this form of life is new to science, it’s possible that organisms like it are common. A huge proportion of microbial biodiversity may be hiding in recursive interrelationships between parasitic and host microbes, said Puri López-García , a microbial ecologist at the French National Center for Scientific Research in Paris who was not involved in the study.

“The diversity of archaea and bacteria that appear to belong to these supergroups of parasitic organisms is very, very large,” she said. For bacteria, it may be between 25% and 50% of the group’s total share of species, she suggested.

The discovery pushes the boundaries of our knowledge of just how small and simple cellular life can become, as it evolves even into forms that are barely alive.

An Extraordinary Discovery

Nakayama has built a scientific career out of looking more closely than other researchers typically do. He considers an already tiny cell and wonders: Are there even smaller cells that make a home there?

“The difference [in size between parasitic and host cells] can sometimes be like that between a human and Godzilla,” Nakayama said. He is fascinated by the potentially vast amount of undiscovered biodiversity these relationships might contain, and his lab looks for such relationships in seawater. The ocean is a nutrient-poor environment that incentivizes cells to form trading partnerships . Sometimes they float along together , loosely tethered, exchanging rare nutrients and energy. Other times their arrangements are more organized.

Citharistes regius is a globally widespread single-celled dinoflagellate that has a walled, pouchlike external chamber for housing symbiotic cyanobacteria. Nakayama and his team searched for the alga by scooping seawater samples from the Pacific Ocean using a fine-mesh net. A common technique is to sequence whatever DNA can be found in the soup of such a sample, an approach called metagenomics.

“That method is incredibly powerful for capturing a broad overview,” Nakayama said. “However, with such data, it is often difficult to maintain the link between a sequence and the specific cell it came from, and rare organisms can be easily missed.” His team’s more targeted approach involves microscopically identifying and physically isolating a single target cell from that mixed sample.

Back on shore in the Tsukuba lab, after the researchers confirmed they had C. regius , they sequenced every genome associated with that one cell. As expected, they found DNA from its symbiotic cyanobacteria, but they found something else, too: sequences that belong to an archaeon, a member of the domain of life thought to have given rise to eukaryotes like us.

At first, Nakayama and his colleagues thought they had made a mistake. The archaeal genome is tiny: just 238,000 base pairs end to end. In comparison, humans have a few billion base pairs, and even E. coli bacteria work with several million. ( C. regius ’ symbiotic cyanobacteria have 1.9 million base pairs.) Previously, the smallest known archaeal genome was the one belonging to Nanoarchaeum equitans — at 490,000 base pairs, it is more than twice as long as the new one the researchers found. They initially figured that this tiny genome — too large to be merely statistical noise — was an abbreviated piece of a much larger genome, erroneously compiled by their software.

“At first, we suspected it might be an artifact of the genome-assembly process,” Nakayama recalled. To check, the team sequenced the genome using different technologies and ran the data through multiple computer programs that assemble fragments of DNA sequences into a full genome. The various approaches all reconstructed the exact same 238,000-base-pair circular genome. “This consistency is what convinced us it was the real, complete genome,” he said.

This meant that Nakayama and his team had a new organism on their hands. They named the microbe Candidatus Sukunaarchaeum mirabile (hereafter referred to as Sukunaarchaeum) for its remarkably tiny genome — after Sukuna-biko-na, a Shinto deity notable for his short stature, plus a Latin word for “extraordinary.”

The Spectrum of Quasi-Life

When the team consulted databases of known genes to analyze the archaeon, they found its small size was the result of a whole lot that was missing.

Sukunaarchaeum encodes the barest minimum of proteins for its own replication, and that’s about all. Most strangely, its genome is missing any hints of the genes required to process and build molecules, outside of those needed to reproduce. Lacking those metabolic components, the organism must outsource the processes for growth and maintenance to another cell, a host upon which the microbe is entirely dependent.

Other symbiotic microbes have scrapped much of their genomes, including Sukunaarchaeum’s evolutionary relatives. The researchers’ analysis suggested that the microbe is part of the DPANN archaea, sometimes called nanoarchaea or ultra-small archaea, which are characterized by small size and small genomes. DPANN archaea are generally thought to be symbiotes that cling to the outside of larger prokaryotic microbes, and plenty of them have substantially reduced genomes to match that lifestyle. But until now, none of the DPANN species had genomes quite this pared back. And Sukunaarchaeum branched off the DPANN lineage early, suggesting that it had taken its own evolutionary journey.

“This realm of the archaea is pretty mysterious in general,” said Brett Baker , a microbial ecologist at the University of Texas, Austin who was not involved in the work. “[DPANN archaea are] obviously limited in their metabolic capabilities.”

While Sukunaarchaeum may provide some undetermined benefit for its host — which could be C. regius , the symbiotic cyanobacteria or another cell entirely — it’s probably a self-absorbed parasite. “Its genome reduction is driven by entirely selfish motives, consistent with a parasitic lifestyle,” said Tim Williams , a microbiologist at the University of Technology Sydney who was not involved in the study. It cannot contribute metabolic products, so the relationship between Sukunaarchaeum and any other cell would likely be a one-way street.

Other microbes have evolved similarly extreme, streamlined forms. For instance, the bacterium Carsonella ruddii, which lives as a symbiont within the guts of sap-feeding insects, has an even smaller genome than Sukunaarchaeum, at around 159,000 base pairs. However, these and other super-small bacteria have metabolic genes to produce nutrients, such as amino acids and vitamins, for their hosts. Instead, their genomes have cast off much of their ability to reproduce on their own.

“They are on the way to becoming organelles. This is the way mitochondria and chloroplasts are thought to have evolved,” Williams said. “But Sukunaarchaeum has gone in the opposite direction: The genome retains genes required for its own propagation, but lost most, if not all, of its metabolic genes.”

Soon after Nakayama’s team posted their results online, they got a big response. “When we saw the preprint, this was really quite exciting in the lab,” said Thijs Ettema , an evolutionary microbiologist and expert on archaeal genomics at Wageningen University & Research in the Netherlands, who was not involved in the work. “These types of organisms [with reduced genomes] have been found before, but not as extreme as this.”

Some news reports went so far as to imply that Sukunaarchaeum is on its way to evolving into a virus . However, while both Sukunaarchaeum and viruses are reliant on a host cell for very basic biological functions, viruses can’t reproduce on their own.

“There is a fundamental gap between Sukunaarchaeum and viruses,” Nakayama said. “Sukunaarchaeum retains its own core machinery for gene expression, including ribosomes, albeit in a simplified form. This is in stark contrast to viruses, which lack ribosomes and must hijack the host’s cellular systems to replicate.”

The findings fit into a larger discussion about how we define life, Ettema said, since nature routinely evolves exceptions that defy simple categorization. “Most likely it cannot live independently,” he said. “You could say the same of bacterial symbionts. And what do we call organelles like mitochondria and plastids? … At what point should we call things alive?”

A Minimalist Lifestyle

Many questions about Sukunaarchaeum remain unresolved. For one, a large portion of its genome is made up of genes that don’t match any known sequences. They seem to encode large proteins, which is uncommon in such radically reduced organisms.

Nakayama and his colleagues think these large proteins are employed on the cell membrane and somehow support interactions between the archaeon and its host. That would fit with the lifestyles of other studied DPANN archaea as well, Ettema said, which are generally thought to be ectosymbionts, adhering to the outside of comparatively immense hosts.

Although Sukunaarchaeum was found in association with the dinoflagellate C. regius , its true host’s identity is unknown. C. regius is a eukaryote, but DPANN archaea generally associate with other archaea. Also up for debate: Is it attaching to the outside of a host cell, like other DPANN archaea, or is it living internally — or both? Answering these questions would require setting human eyes on the archaeon for the first time; at this point it’s only known from a curious string of genetic data.

There is also a slim possibility that these genes are the “lost” metabolic genes after all, López-García said, if they have evolved so far from their original sequences as to be unrecognizable. “Because the genome is so fast-evolving, maybe some of these functions correspond to metabolic functions, but the divergence is so much that we cannot identify the [gene] homologue [in the database],” she said.

Even stranger minimalist lifestyles or more reduced genomes may be out there, but researchers may miss them, Ettema said. Traditional analytical approaches for surveying the genomes of microbial samples could flag their tiny genomes as incomplete or low quality and discard them, or skip them entirely, he said. “[The DNA] might have been present in the samples, but it was removed after sequencing, and hence overlooked.”

When Nakayama and his colleagues searched a database of marine environmental sequence data from the world’s oceans to see if the new microbe popped up anywhere else, they didn’t find any matches. But they did detect many very similar sequences from what are likely to be close relatives. Sukunaarchaeum may be the tip of a very large microbial iceberg, one floating in a vast ocean of microbial diversity: tiny microbes clinging to slightly less tiny microbes, perhaps inside other microbes, the stories of their ancient relationships only beginning to be revealed.

Kirby Air Riders review – cute pink squishball challenges Mario for Nintendo racing supremacy

Guardian
www.theguardian.com
2025-11-26 10:00:48
Nintendo Switch 2; Bandai Namco/Sora/HAL Laboratory/Nintendo. It takes some getting used to, but this Mario Kart challenger soon reveals a satisfyingly zen, minimalist approach to competitive racing. In the world of cartoonish racing games, it’s clear who is top dog. As Nintendo’s moustachioed plumber ...
Original Article

In the world of cartoonish racing games, it’s clear who is top dog. As Nintendo’s moustachioed plumber lords it up from his gilded go-kart, everyone from Crash Bandicoot to Sonic and Garfield has tried – and failed – to skid their way on to the podium. Now with no one left to challenge its karting dominance, Nintendo is attempting to beat itself at its own game.

The unexpected sequel to a critically panned 2003 GameCube game, Kirby Air Riders has pink squishball Kirby and friends hanging on for dear life to floating race machines. With no Grand Prix to compete in, in the game’s titular mode you choose a track and compete to be the first of six players to cross the finish line, spin-attacking each other and unleashing weapons and special abilities to create cutesy, colourful chaos.

You accelerate automatically at all times, commanding the analogue stick to boost around corners, aiming the direction of your drift with a well-timed flick. Despite that, Air Riders has a surprisingly steep learning curve: it took me an hour to stop hurtling into walls. Once you’ve learned to let go (of the stick) and start drifting like a pro, Air Riders reveals a satisfyingly zen, minimalist approach to competitive racing.

Where Sonic’s 2025 kart outing saw him recruit Minecraft’s Steve, VTuber Hatsune Miku and Yakuza’s Kiryu to its ranks, Air Riders has you competing against such legendary characters as a sentient rock, a slime with googly eyes and someone called Chef Kawasaki. Remember Lolo and Lala? … No? Well, they’re here! But where the roster is lacking, the machines give Air Riders surprising variety and depth, letting you swap between enemy-destroying tanks and glide-happy paper aeroplanes.

Each track has personality and spectacle, and there’s a strong sense of visual cohesion that was sorely lacking in Sonic Racing: CrossWorlds earlier this year. The art style really shines in Air Riders’ story mode, Road Trip. It’s the best single-player mode that director Masahiro Sakurai (who also leads Smash Bros) has ever concocted, packed with surreal boss fights, cleverly modified races and oddly high-budget cutscenes, like a dream you might have after gobbling too much cheese before bedtime.

The big multiplayer mode, City Trials, however, is a let-down. A chaotic collision of Battle Royale-esque resource gathering followed by a Mario Party-esque mini game showdown, it feels bafflingly pointless: you spend five minutes powering up for a mini game that ends in seconds. The final mode – Top Ride – offers up a simplified version of the main event, in which you race from a bird’s eye view in a Micro Machines-inspired melee. It’s fun, if shallow.

What Air Riders lacks in modes, it makes up for in charm. There are a heap of customisation options, allowing you to pimp your ride with unlockable stickers and alternative colour schemes – you can even hang a plushie from your machine like a Kirby-branded Labubu.

This is a tightly focused game that reminds me of Nintendo’s fun-first NES-era game design – for better and for worse. It has a sprinkling of Sakurai magic and oodles of visual panache, but at full price it is – like Kirby – a little puffed-up.