Search Results: "troy"

3 November 2025

Russ Allbery: Review: The Raven Scholar

Review: The Raven Scholar, by Antonia Hodgson
Series: Eternal Path Trilogy #1
Publisher: Orbit
Copyright: April 2025
ISBN: 0-316-57723-5
Format: Kindle
Pages: 651
The Raven Scholar is an epic fantasy and the first book of a projected trilogy. It is Antonia Hodgson's first published fantasy novel; her previous published novels are historical mysteries. I would classify this as adult fantasy (the main character is thirty-four with a stable court position), but it has strong YA vibes because of the generational-turnover feel of the main plot.

Eight years before the start of this book, Andren Valit attempted to assassinate the emperor and failed. Since then, his widow and three children (twins Yana and Ruko, and infant Nisthala) have been living in disgrace in a cramped apartment, subject to constant inspections and suspicion. As the story opens, they have been summoned to appear before the emperor, escorted by a young and earnest Hound (essentially the state security services) named Shal Worthy. The resulting interrogation is full of dangerous traps. Not all of them will be avoided.

The formalization of the consequences of that imperial summons falls to an unpopular Junior Archivist (Third Class) whose one notable skill is her penmanship. A meeting that was disastrous for the Valits becomes unexpectedly fortunate for the archivist, albeit with a poisonous core.

Eight years later, Neema Kraa is High Scholar, and Emperor Bersun's twenty-four years of permitted reign are coming to an end. The Festival is about to begin. One representative from each of the empire's eight anats (religious schools) will compete in seven days of Trials, save for the Dragons, who do not want the throne and will send a proxy. The victor according to the Trials' scoring system will become emperor and reign unquestioned for twenty-four years or until resignation. This is the system that put an end to the era of chaos and has been in place for over a thousand years.

On the eve of the Trials, the Raven contender is found murdered. Neema is immediately a suspect; she even has reasons to suspect herself. She volunteers to lead the investigation because she has to know what happened. She is also volunteered to be the replacement Raven contender. There is no chance that she will become emperor; she doesn't even know how to fight. But agnostic Neema has a rather unexpected ally.
As the last chime fades we drop neatly on to the balcony's rusting hand rail, folding our wings with a soft shuffle. Noon, on the ninth day of the eighth month, 1531. Neema Kraa's lodgings. We are here, exactly where we should be, at exactly the right moment, because we are the Raven, and we are magnificent.
The Raven Scholar is a rather good epic fantasy, with some caveats that I'll get to in a moment, but I found it even more fascinating as a genre artifact. I've read my share of epic fantasy over the years, although most of my familiarity with the current wave of new adult fairy epics comes from reviews rather than personal experience. The Raven Scholar is epic fantasy, through and through. There is court intrigue, a main character who is a court functionary unexpectedly thrown into the middle of some problem, civilization-wide stakes, dramatic political alliances, detailed magic and mythological systems, and gods. There were moments that reminded me of a Guy Gavriel Kay novel, although Hodgson's characters tend more towards disarming moments of humanization instead of Kay's operatic scenes of emotional intensity.

But The Raven Scholar is also a murder mystery, complete with a crime scene, clues, suspects, evidence, an investigation, a possibly compromised detective, and a morass of possible motives and red herrings. I'm not much of a mystery reader, but this didn't feel like the sort of ancillary mystery that might crop up in the course of a typical epic fantasy. It felt like a full-fledged investigation with an amateur detective; one can tell that Hodgson's previous four books were historical mysteries.

And then there's the Trials, which are the centerpiece of the book. This book helped me notice that people (okay, me, I'm the people) have been sleeping on the influence of The Hunger Games, Battle Royale, and reality TV (specifically Survivor) on genre fiction, possibly because the more obvious riffs on the idea (Powerless, The Selection) have been young adult or new adult. Once I started looking, I realized this idea is everywhere now: Throne of Glass, Fourth Wing, even The Night Circus to some extent. Competitions with consequences are having a moment.

I suspect having a competition to decide the next emperor is going to strike some traditional fantasy readers as sufficiently absurd and unbelievable that it will kick them out of the book. I had a moment of "okay, this is weird, why would anyone stick with this system for so long" myself. But I would encourage such readers to interrogate whether that's only a response from unfamiliarity; after all, strange women lying in ponds distributing swords is no basis for a system of government either. This is hardly the most unrealistic epic fantasy trope, and it has the advantage of being a hell of a plot generator when handled well. Hodgson handles it well.

Society in this novel is structured around the anats and the eight Guardians, gods who, according to myth, have returned seven times previously to save the world, but who will destroy the world when they return again. Each Guardian represents a group of characteristics and useful societal functions: the Ox is trustworthy, competent, and hard-working; the Fox is a trickster and a rule-bender; the Raven is shrewd and careful and is the Guardian of scholars and lawyers. Each Trial is organized by one of the anats and tests the contenders for the skills most valued by that Guardian, often in subtle and rather ingenious ways. There are flaws here that you could poke at if you wanted to, but I was charmed and thoroughly entertained by how well Hodgson weaves the story around the Trials and uses the conflicting values to create character conflict, unexpected alliances, and engrossing plot.

Most importantly for a book of this sort, I liked Neema.
She has a charming combination of competence, quirks (she is almost physically unable to not correct people's factual errors), insecurity, imposter syndrome, and determination. She is way out of her depth and knows it, but she has an ethical core and an insatiable curiosity that won't let her leave the central mysteries of the book alone. And the character dynamics are great; there are a lot of characters, including the competition problem of having to juggle eight contenders and give them all sufficient characterization to be meaningful, but this book uses its length to give each character some room to breathe. This is a long book, well over 600 pages, but it felt packed with events and plot twists. After every chapter I had to fight the urge to read just one more.

The biggest drawback of this book is that it is very much the first book of a trilogy, none of the other volumes are out yet, and the ending is rather nasty. This is the sort of trilogy that opens with a whole lot of bad things happening, and while I am thoroughly hooked and will purchase the next volume as soon as it's available, I wish Hodgson had found a way to end the book on a somewhat more positive or hopeful note. The middle of the book was great; the end was a bit of an emotional slog, alas. The writing is good enough here that I'm fairly sure the depression will be worth it, but if you need your endings to be triumphant (and who could blame you in this moment in history), you may want to wait on this one until more volumes are out.

Apart from that, though, this was a lot of fun. The Guardians felt like they came from a different strand of fantasy than you usually see in epic, more of a traditional folk tale vibe, which adds an intriguing twist to the epic fantasy setting. The characters all work, and Hodgson even pulls off some Game of Thrones-style twists that make you sympathetic to characters you previously hated. The magic system apart from the Guardians felt underbaked, but the politics had more depth than a lot of fantasy novels. If you want the truly complex and twisty politics you would get from one of Guy Gavriel Kay's historical rewrites, you will come away disappointed, but it was good enough for me. And I did enjoy the Raven.
Respect, that's all we demand. Recognition of our magnificence. Offerings. Love. Fear. Trembling awe. Worship. Shiny things. Blood sacrifice, some of us very much enjoy blood sacrifice. Truly, we ask for so little.
Followed by an as-yet-untitled sequel that I hope will materialize.

Rating: 7 out of 10

30 September 2025

Antoine Beaupré: Proper services

During 2025-03-21-another-home-outage, I reflected on what a properly run service looks like and blurted out something that turned out to be important enough to outline on its own. So here it is, again, for my own future reference.
Typically, I tend to think of a properly functioning service as having four things:
  1. backups
  2. documentation
  3. monitoring
  4. automation
  5. high availability (HA)
Yes, I miscounted. This is why you need high availability. A service doesn't properly exist if it doesn't at least have the first 3 of those. It will be harder to maintain without automation, and inevitably suffer prolonged outages without HA.

The five components of a proper service

Backups Duh. If data is maliciously or accidentally destroyed, you need a copy somewhere. Preferably in a way that malicious Joe can't get to. This is harder than you think.
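As a sketch of what "malicious Joe can't get to" can look like in practice, assuming restic backing up to a rest-server instance running in append-only mode (hostnames and paths are illustrative):

# assumption: backup.example.com runs rest-server with --append-only,
# so a compromised client cannot delete or rewrite old snapshots
restic -r rest:https://backup.example.com/myhost init
restic -r rest:https://backup.example.com/myhost backup /etc /home /var
restic -r rest:https://backup.example.com/myhost check    # verify repository integrity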

Documentation I have an entire template for this. Essentially, it boils down to using https://diataxis.fr/ and this "audit" guide. For me, the most important parts are:
  • disaster recovery (includes backups, probably)
  • playbook
  • install/upgrade procedures (see automation)
You probably know this is hard, and this is why you're not doing it. Do it anyways: you'll think it sucks and it will grow out of sync with reality, but you'll be really grateful for whatever scraps you wrote when you're in trouble. Any docs, in other words, are better than no docs, but they are no excuse for not doing the work correctly.

Monitoring If you don't have monitoring, you'll learn about failures too late, and you won't know when things recover. Consider high availability, work hard to reduce noise, and don't have machines wake people up: that's literally torture and is against the Geneva Convention. Consider predictive algorithms to prevent failures, like "add storage within 2 weeks before this disk fills up". This is also harder than you think.
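The disk-fill prediction, for example, fits in a single Prometheus rule; a sketch assuming node_exporter metrics (the rule path, windows, and thresholds are illustrative):

# illustrative rule file: extrapolate a week of history 14 days forward
cat > /etc/prometheus/rules/capacity.yml <<'EOF'
groups:
  - name: capacity
    rules:
      - alert: DiskFullIn2Weeks
        expr: predict_linear(node_filesystem_avail_bytes[7d], 14 * 86400) < 0
        for: 12h
        labels:
          severity: warning
EOF
promtool check rules /etc/prometheus/rules/capacity.yml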

Automation Make it easy to redeploy the service elsewhere. Yes, I know you have backups. That is not enough: backups typically restore data, and while they can also include configuration, you're going to need to change things when you restore, which is what automation (or call it "configuration management" if you will) will do for you anyways. This also means you can do unit tests on your configuration, otherwise you're building legacy. This is probably as hard as you think.
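A sketch of what testing your configuration can mean with Puppet (paths are illustrative):

puppet parser validate manifests/site.pp                      # syntax check
puppet apply --noop --modulepath=modules manifests/site.pp    # dry run: show what would change
puppet apply --modulepath=modules manifests/site.pp           # apply for real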

High availability Make it not fail when one part goes down. Eliminate single points of failure. This is easier than you think, except for storage and DNS ("naming things", that is, not "HA DNS", which is easy), which, I guess, means it's harder than you think too.

Assessment In the above 5 items, I currently check two in my lab:
  1. backups
  2. documentation
And barely: I'm not happy about the offsite backups, and my documentation is much better at work than at home (and even there, I have a 15-year backlog to catch up on). I barely have monitoring: Prometheus is scraping parts of the infra, but I don't have any sort of alerting -- by which I don't mean "electrocute myself when something goes wrong", I mean "there's a set of thresholds and conditions that define an outage and I can look at it".

Automation is wildly incomplete. My home server is a random collection of old experiments and technologies, ranging from Apache with Perl and CGI scripts to Docker containers running Golang applications. Most of it is not Puppetized (but the ratio is growing). Puppet itself introduces a huge attack vector, with kind of catastrophic lateral movement if the Puppet server gets compromised.

And, fundamentally, I am not sure I can provide high availability in the lab. I'm just this one guy running my home network, and I'm growing older. I'm thinking more about winding things down than building things now, and that's just really sad, because I feel we're losing (well, that escalated quickly).

Side note about Tor The above applies to my personal home lab, not work! At work, of course, it's another (much better) story:
  1. all services have backups
  2. lots of services are well documented, but not all
  3. most services have at least basic monitoring
  4. most services are Puppetized, but not crucial parts (DNS, LDAP, Puppet itself), and there are important chunks of legacy coupling between various services that make the whole system brittle
  • most websites, DNS and large parts of email are highly available, but key services like the Forum, GitLab and similar applications are not HA, although most services run under replicated VMs that can trivially survive a total, single-node hardware failure (through Ganeti and DRBD)

27 August 2025

Russell Coker: ZRAM and VMs

I've just started using zram for swap on VMs. The use of compression for swap in Linux isn't new; it's been in the Linux kernel since version 3.2 (2012). But until recent years I hadn't used it. When I started using Mobian (the Debian distribution for phones), zram was in the default setup; it basically works and I never needed to bother with it, which is exactly what you want from such a technology. After seeing its benefits in Mobian I started using it on my laptops, where it worked well.

Benefits of ZRAM ZRAM means that instead of paging data to storage, it is compressed into another part of RAM. That means no access to storage, which is a significant benefit if storage is slow (typical for phones) or if storage wearing out is a problem. For servers you typically have SSDs that are fast and last for significant write volumes; for example, the 120G SSDs referenced in my blog post about swap (not) breaking SSD [1] are running well in my parents' PC because they outlasted all the other hardware connected to them, and 120G isn't usable for anything more demanding than my parents' use nowadays. Those are Intel 120G 2.5 inch DC-grade SATA SSDs. For most servers ZRAM isn't a good choice, as you can just keep doing IO on the SSDs for years.

A server that runs multiple VMs is a special case, because you want to isolate the VMs from each other. Support for quotas for storage IO in Linux isn't easy to configure, while limiting the number of CPU cores is very easy. If a system or VM using ZRAM for swap starts paging excessively, the bottleneck will be CPU. This probably isn't going to be great on a phone with a slow CPU, but on a server-class CPU it will be less of a limit. Whether compression is slower or faster than SSD is a complex issue, but it will definitely be a limit only for that VM. When I set up a VM server I want to have some confidence that a DoS attack or configuration error on one VM isn't going to destroy the performance of other VMs. If the VM server has 4 cores (the smallest VM server I run) and no VM has more than 2 cores, then I know that the system can still run adequately if half the CPU performance is being wasted.

Some servers I run have storage limits that make saving the disk space for swap useful. For servers I run at Hetzner (currently only one server, but I have run up to 6 at various times in the past) the storage is often limited; Hetzner seems to typically have storage that is 8× the size of RAM, so if you have many VMs configured with the swap that they might need, in the expectation that usually at most one of them will actually be swapping, then it can make a real difference to usable storage. 5% of storage used for swap files isn't uncommon or unreasonable.

Big Servers I am still considering the implications of zram on larger systems. If I have a ML server with 512G of RAM, would it make sense to use it? It seems plausible that a system might need 550G of RAM, and zram could make the difference between jobs being killed with OOM and the jobs just completing. The CPU overhead of compression shouldn't be an issue: when you have dozens of cores in the system, having one or two used for compression is no big deal. If a system is doing strictly ML work there will be a lot of data that can't be compressed, so the question is how much of the memory is raw input data and the weights used for calculations, and how much is arrays with zeros and other things that are easy to compress.
With a big server, nothing less than 32G of swap will make much difference to the way things work, and if you have 32G of data being actively paged then even the fastest NVMe devices probably won't be enough to give usable performance. As zram uses one stream per CPU core, if you have 44 cores that means 44 compression streams, which should handle greater throughput. I'll write another blog post if I get a chance to test this.
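For reference, a minimal sketch of the VM-side setup on Debian, assuming the zram-tools package (the algorithm and size are illustrative choices):

# assumes zram-tools; ALGO and PERCENT are illustrative
apt install zram-tools
printf 'ALGO=zstd\nPERCENT=25\n' > /etc/default/zramswap
systemctl restart zramswap.service
swapon --show    # /dev/zram0 should now be listed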

28 July 2025

Russ Allbery: Review: Cyteen

Review: Cyteen, by C.J. Cherryh
Series: Cyteen #1
Publisher: Warner Aspect
Copyright: 1988
Printing: September 1995
ISBN: 0-446-67127-4
Format: Trade paperback
Pages: 680
The main text below is an edited version of my original review of Cyteen written on 2012-01-03. Additional comments from my re-read are after the original review.

I've reviewed several other C.J. Cherryh books somewhat negatively, which might give the impression I'm not a fan. That is an artifact of when I started reviewing. I first discovered Cherryh with Cyteen some 20 years ago, and it remains one of my favorite SF novels of all time. After finishing my reading for 2011, I was casting about for what to start next, saw Cyteen on my parents' shelves, and decided it was past time for my third reading, particularly given the recent release of a sequel, Regenesis.

Cyteen is set in Cherryh's Alliance-Union universe following the Company Wars. It references several other books in that universe, most notably Forty Thousand in Gehenna but also Downbelow Station and others. It also has mentions of the Compact Space series (The Pride of Chanur and sequels). More generally, almost all of Cherryh's writing is loosely tied together by an overarching future history. One does not need to read any of those other books before reading Cyteen; this book will fill you in on all of the politics and history you need to know. I read Cyteen first and have never felt the lack.

Cyteen was at one time split into three books for publishing reasons: The Betrayal, The Rebirth, and The Vindication. This is an awful way to think of the book. There are no internal pauses or reasonable volume breaks; Cyteen is a single coherent novel, and Cherryh has requested that it never be broken up that way again. If you happen to find all three portions as your reading copy, they contain all the same words and are serviceable if you remember it's a single novel under three covers, but I recommend against reading the portions in isolation.

Human colonization of the galaxy started with slower-than-light travel sponsored by the private Sol Corporation. The inhabitants of the far-flung stations and the crews of the merchant ships that supplied them have formed their own separate cultures, but initially remained attached to Earth. That changed with the discovery of FTL travel and a botched attempt by Earth to reassert its authority. At the time of Cyteen, there are three human powers: distant Earth (which plays little role in this book), the merchanter Alliance, and Union. The planet Cyteen is one of only a few Earth-like worlds discovered by human expansion, and is the seat of government and the most powerful force in Union. This is primarily because of Reseune: the Cyteen lab that produces the azi.

If Cyteen is about any one thing, it's about azi: genetically engineered human clones who are programmed via intensive psychological conditioning starting before birth. The conditioning uses a combination of drugs to make them receptive and "tape," specific patterns of instruction and sensory stimulation. They are designed for specific jobs or roles, they're conditioned to be obedient to regular humans, and they're not citizens. They are, in short, slaves.

In a lot of books, that's as deep as the analysis would go. Azi are slaves, and slavery is certainly bad, so there would probably be a plot around azi overthrowing their conditioning, or around the protagonists trying to free them from servitude. But Cyteen is not just any SF novel, and azi are considerably more complex and difficult than that analysis.
We learn over the course of the book that the immensely powerful head of Reseune Labs, Ariane Emory, has a specific broader purpose in mind for the azi. One of the reasons why Reseune fought for and gained the role of legal protector of all azi in Union, regardless of where they were birthed, is so that Reseune could act to break any permanent dependence on azi as labor. And yet, they are slaves; one of the protagonists of Cyteen is an experimental azi, which makes him the permanent property of Reseune and puts him in constant jeopardy of being used as a political prisoner and lever of manipulation against those who care about him.

Cyteen is a book about manipulation, about programming people, about what it means to have power over someone else's thoughts, and what one can do with that power. But it's also a book about connection and identity, about what makes up a personality, about what constitutes identity and how people construct the moral codes and values that they hold at their core. It's also a book about certainty. Azi are absolutely certain, and are capable of absolute trust, because that's part of their conditioning. Naturally-raised humans are not. This means humans can do things that azi can't, but the reverse is also true. The azi are not mindless slaves, nor are they mindlessly programmed, and several of the characters, both human and azi, find a lot of appeal in the core of certainty and deep self-knowledge of their own psychological rules that azi can have. Cyteen is a book about emotions, and logic, and where they come from and how to balance them. About whether emotional pain and uncertainty is beneficial or damaging, and about how one's experiences make up and alter one's identity.

This is also a book about politics, both institutional and personal. It opens with Ariane Emory, Councilor for Science for five decades and the head of the ruling Union Expansionist party. She's powerful, brilliant, dangerously good at reading people, and dangerously willing to manipulate and control people for her own ends. What she wants, at the start of the book, is to completely clone a Special (the legal status given to the most brilliant minds of Union). This was attempted before and failed, but Ariane believes it's now possible, with a combination of tape, genetic engineering, and environmental control, to reproduce the brilliance of the original mind. To give Union another lifespan of work by their most brilliant thinkers.

Jordan Warrick, another scientist at Reseune, has had a long-standing professional and personal feud with Ariane Emory. As the book opens, he is fighting to be transferred out from under her to the new research station that would be part of the Special cloning project, and he wants to bring his son Justin and Justin's companion azi Grant with them. Justin is a PR, a parental replicate, meaning he shares Jordan's genetic makeup but was not an attempt to reproduce the conditions of Jordan's rearing. Grant was raised as his brother. And both have, for reasons that are initially unclear, attracted the attention of Ariane, who may be using them as pawns.

This is just the initial setup, and along with this should come a warning: the first 150 pages set up a very complex and dangerous political situation and build the tension that will carry the rest of the book, and they do this by, largely, torturing Justin and Grant. The viewpoint jumps around, but Justin and Grant are the primary protagonists for this first section of the book.
While one feels sympathy for both of them, I have never, in my multiple readings of the book, particularly liked them. They're hard to like, as opposed to pity, during this setup; they have very little agency, are in way over their heads, are constantly making mistakes, and are essentially having their lives destroyed. Don't let this turn you off on the rest of the book. Cyteen takes a dramatic shift about 150 pages in. A new set of protagonists are introduced who are some of the most interesting, complex, and delightful protagonists in any SF novel I have read, and who are very much worth waiting for. While Justin has his moments later on (his life is so hard that his courage can be profoundly moving), it's not necessary to like him to love this book. That's one of the reasons why I so strongly dislike breaking it into three sections; that first section, which is mostly Justin and Grant, is not representative of the book.

I can't talk too much more about the plot without risking spoiling it, but it's a beautiful, taut, and complex story that is full of my favorite things in both settings and protagonists. Cyteen is a book about brilliant people who think on their feet. Cherryh succeeds at showing this through what they do, which is rarely done as well as it is here. It's a book about remembering one's friends and remembering one's enemies, and waiting for the most effective moment to act, but it also achieves some remarkable transformations. About 150 pages in, you are likely to loathe almost everyone in Reseune; by the end of the book, you find yourself liking, or at least understanding, nearly everyone. This is extremely hard, and Cherryh pulls it off in most cases without even giving the people she's redeeming their own viewpoint sections. Other than perhaps George R.R. Martin, I've not seen another author do this as well.

And, more than anything else, Cyteen is a book with the most wonderful feeling of catharsis. I think this is one of the reasons why I adore this book and have difficulties with some of Cherryh's other works. She's always good at ramping up the tension and putting her characters in awful, untenable positions. Less frequently does she provide the emotional payoff of turning the tables, where you get to watch a protagonist do everything you've been wanting them to do for hundreds of pages, except even better and more delightfully than you would have come up with. Cyteen is one of the most emotionally satisfying books I've ever read.

I could go on and on; there is just so much here that I love. Deep questions of ethics and self-control, presented in a way that one can see the consequences of both bad decisions and good ones and contrast them. Some of the best political negotiations in fiction. A wonderful look at friendship and loyalty from several directions. Two of the best semi-human protagonists I've seen, who one can see simultaneously as both wonderful friends and utterly non-human, and who put nearly all of the androids in fiction to shame by being something trickier and more complex. A wonderful unfolding sense of power. A computer that can somewhat anticipate problems and somewhat can't, and that encapsulates much of what I love about semi-intelligent bases in science fiction.

Cyteen has that rarest of properties of SF novels: both the characters and the technology meld in a wonderful combination where neither could exist without the other, where the character issues are illuminated by the technology and the technology supports the characters.
I have, for this book, two warnings. The first, as previously mentioned, is that the first 150 pages of setup are necessary but painful to read, and I never fully warmed to Justin and Grant throughout. I would not be surprised to hear that someone started this book but gave up on it after 50 or 100 pages. I do think it's worth sticking out the rocky beginning, though. Justin and Grant continue to be a little annoying, but there's so much other good stuff going on that it doesn't matter.

The other warning is that part of the setup of the story involves the rape of an underage character. This is mostly off-camera, but the emotional consequences are significant (as they should be) and are frequently discussed throughout the book. There is also rather frank discussion of adolescent sexuality later in the book. I think both of these are relevant to the story and handled in a way that isn't gratuitous, but they made me uncomfortable, and I don't have any past history with those topics.

Those warnings notwithstanding, this is simply one of the best SF novels ever written. It uses technology to pose deep questions about human emotions, identity, and interactions, and it uses complex and interesting characters to take a close look at the impact of technology on lives. And it does this with a wonderfully taut, complicated plot that sustains its tension through all 680 pages, and with characters whom I absolutely love. I have no doubt that I'll be reading it for a fourth and fifth time some years down the road.

Followed by Regenesis, although Cyteen stands well entirely on its own and there's no pressing need to read the sequel.

Rating: 10 out of 10

Some additional thoughts after re-reading Cyteen in 2025: I touched on this briefly in my original review, but I was really struck during this re-read by how much the azi are a commentary on and a complication of the role of androids in earlier science fiction. Asimov's Three Laws of Robotics were an attempt to control the risks of robots, but can also be read as turning robots into slaves. Azi make the slavery more explicit and disturbing by running the programming on a human biological platform, but they're more explicitly programmed and artificial than a lot of science fiction androids. Artificial beings and their relationship to humans have been a recurring theme of SF since Frankenstein, but I can't remember a novel that makes the comparison to humans this ambiguous and conflicted. The azi not only like being azi, they can describe why they prefer it. It's clear that Union made azi for many of the same reasons that humans enslave other humans, and that Ariane Emory is using them as machinery in a larger (and highly ethically questionable) plan, but Cherryh gets deeper into the emergent social complications and societal impact than most SF novels manage. Azi are apparently closer to humans than famous SF examples such as Commander Data, but the deep differences are both more subtle and more profound.

I've seen some reviewers who are disturbed by the lack of a clear moral stance by the protagonists against the creation of azi. I'm not sure what to think about that. It's clear the characters mostly like the society they've created, and the groups attempting to "free" azi from their "captivity" are portrayed as idiots who have no understanding of azi psychology. Emory says she doesn't want azi to be a permanent aspect of society, but clearly has no intention of ending production any time soon. The book does seem oddly unaware that the production of azi is unethical per se and that, unlike with androids, there is an obvious exit ramp: continue cloning gene lines as needed to maintain a sufficient population for a growing industrial civilization, but raise the children as children rather than using azi programming. If Cherryh included some reason why that was infeasible, I didn't see it, and I don't think the characters directly confronted it. I don't think societies in books need to be ethical, or that Cherryh intended to defend this one. There are a lot of nasty moral traps that civilizations can fall into that make for interesting stories. But the lack of acknowledgment of the problem within the novel did seem odd this time around.

The other part of this novel that was harder to read past in this re-read is the sexual ethics. There's a lot of adolescent sexuality in this book, and even apart from the rape scene (which was more on-the-page than I had remembered, and which is quite intentionally disturbing) there is a whole lot of somewhat dubious consent. Maybe I've gotten older or just more discriminating, but it felt weirdly voyeuristic to know this much about the sex lives of characters who are, at several critical points in the story, just a bunch of kids.

All that being said, and with the repeated warning that the first 150 pages of this novel are just not very good, there is still something magic about the last two-thirds of this book.
It has competence porn featuring a precociously brilliant teenager who I really like, it has one of the more interesting non-AI programmed computer systems that I've read in SF, it has satisfying politics that feel like modern politics (media strategy and relationships and negotiated alliances, rather than brute force and ideology), and it has a truly excellent feeling of catharsis. The plot resolution is a bit too abrupt and a bit insufficiently explained (there's more in Regenesis), but even though this was my fourth time through this book, the pacing grabbed me again and I could barely put down the last part of the story.

Ethics aside (and I realize that's quite the way to start a sentence), I find the azi stuff fascinating. I know the psychology in this book is not real and is hopelessly simplified compared to real psychology, but there's something in the discussions of value sets and flux and self-knowledge that grabs my interest and makes me want to ponder. I think it's the illusion of simplicity and control, the what-if premise of thought where core motivations and moral rules could be knowable instead of endlessly fluid the way they are in us humans. Cherryh's azi are some of the most intriguing androids in science fiction to me precisely because they don't start with computers and add the humanity in, but instead start with humanity and overlay a computer-like certainty of purpose that's fully self-aware. The result is more subtle and interesting than anything Star Trek managed.

I was not quite as enamored with this book this time around, but it's still excellent once the story gets properly started. I still would recommend it, but I might add more warnings about the disturbing parts.

Re-read rating: 9 out of 10

31 May 2025

Russell Coker: Links May 2025

Christopher Biggs gave an informative Everything Open lecture about voice recognition [1]. We need this on Debian phones. Guido wrote an informative blog post about booting a custom Android kernel on a Pixel 3a [2]. Good work in writing this up, but a pity that Google made the process so difficult. It was interesting to read about an expert being a victim of a phishing attack [3]. It can happen to anyone; everyone has moments when they aren't concentrating. Interesting advice on how to leak to a journalist [4]. Brian Krebs wrote an informative article about the ways that Trump is deliberately reducing the cyber security of the US government [5]. Brian Krebs wrote an interesting article about the main smishing groups from China [6]. Louis Rossmann (who is known for high quality YouTube videos about computer repair) made an informative video about a scammy Australian company run by a child sex offender [7]. The Helmover was one of the wildest engineering projects of WW2: an over-the-horizon guided torpedo that could one-shot a battleship [8]. Interesting blog post about DDoSecrets and the utter failure of the company TeleMessage, which was used by the US government [9]. Jonathan McDowell wrote an interesting blog post about developing a free software competitor to Alexa etc.; the listening hardware costs US$13 per node [10]. Noema Magazine published an insightful article about rewilding the Internet; it has some great ideas [11].

1 May 2025

Guido Günther: Free Software Activities April 2025

Another short status update of what happened on my side last month. Notable might be the Cell Broadcast support for Qualcomm SoCs; the rest is smaller fixes and QoL improvements, spread across phosh, phoc, phosh-mobile-settings, pfs, feedbackd, feedbackd-device-themes, gmobile, Debian, git-buildpackage, wlroots, ModemManager, Libqmi, gnome-clocks, gnome-calls, qmi-parse-kernel-dump, xwayland-run, osmo-cbc, and phosh-nightly, plus blog posts and bug reports.

Reviews This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

Help Development If you want to support my work, see donations. Comments? Join the Fediverse thread

11 April 2025

Gunnar Wolf: Culture as a positive freedom

This post is an unpublished review for La cultura libre como libertad positiva (free culture as a positive freedom)
Please note: This review is not meant to be part of my usual contributions to ACM's Computing Reviews. I do want, though, to share it with people who follow my general interests and such stuff.
This article was published almost a year ago, and I read it just after relocating from Argentina back to Mexico. I came from a country starting to realize the shock it meant to be ruled by an autocratic, extreme right-wing president willing to overrun its Legislative branch and bent on destroying the State itself, not too different from what we are now witnessing on a global level.

I have been a strong proponent and defender of Free Software and of Free Culture throughout my adult life. And I have been a Socialist since my early teenage years. I cannot say there is a strict correlation between them, but there is a big intersection of people and organizations who align with both, and Ártica (and Mariana Fossatti) are clearly among them.

Freedom is a word that has brought us many misunderstandings throughout the past many decades. We will say that Freedom can only come hand in hand with Equality, Fairness and Tolerance. But the extreme right wing (is it still bordering Fascism, or has it finally embraced it as its true self?) that has grown so much in many countries over the last years also seems to have appropriated the term, even taking it as their definition. In English (particularly, in US English), liberty is a more patriotic term, and freedom is more personal (although the term used for the market is free market); in Spanish, we conflate them both under libre.

Mariana refers to a third blog, by Rolando Astarita, where the author introduces the concepts of positive and negative freedom/liberties. Astarita characterizes negative freedom as an individual's possibility to act without interference or coercion, limited by other people's freedom, while positive freedom is the real capacity to exercise one's autonomy and achieve self-realization; this does not depend on a person on their own, but on different social conditions. Astarita understands the Marxist tradition to emphasize positive freedom.

Mariana brings this definition to our usual discussion on licensing: if we follow negative freedom, we will understand free licenses as the idea of access without interference to cultural or information goods, as long as it's legal (in order not to infringe others' property rights). Licensing is seen as a private contract, and each individual can grant access and use of their works at will.

The previous definition might be enough for many, but, she says, it is missing something important. The practical effect of many individuals renouncing a bit of control over their property rights produces, collectively, the common goods. These constitute a pool of knowledge or culture that is no longer an individual, contractual issue, but grows and becomes social, collective. Negative freedom does not go further, but positive liberty allows broadening the horizon, and takes us to a notion of free culture that, by strengthening the commons, widens social rights.

She closes the article by stating (and I'll happily sign as if they were my own words) that we are Free Culture militants not only because it affirms the individual sovereignty to deliver and receive cultural resources, in an intellectual property framework guaranteed by the state. Our militancy is about widening cultural enjoyment and participation for the collective through the defense of common cultural goods (…) We want to build Free Culture for a Free Society. But a Free Society is not a society of free owners, but a society emancipated from the structures of economic power and social privilege that block this potential collective.

22 March 2025

Antoine Beaupré: Minor outage at Teksavvy business

This morning, the internet was down at home. The last time I had such an issue was in February 2023, when my provider was Oricom. Now I'm with a business service at Teksavvy Internet (TSI), to which I pay $100 per month for a 250/50 Mbps business package with a static IP address, on which I run, well, everything: email services, this website, and so on.

Mitigation

Email The main problem when the service goes down like this for prolonged outages is email. Mail is pretty resilient to failures like this, but after some delay (which varies according to the other end), mail starts to drop. I am actually not sure what the various settings are among different providers, but I would assume mail is typically kept for about 24h, so that's our mark.

Last time, I set up VMs at Linode and Digital Ocean to deal better with this. I have actually kept those VMs running as DNS servers until now, so that part is already done. I had fantasized about Puppetizing the mail server configuration so that I could quickly spin up mail exchangers on those machines. But now I am realizing that my Puppet server is one of the services that's down, so this would not work, at least not unless the manifests can be applied without a Puppet server (say with puppet apply).

Thankfully, my colleague groente did amazing work to refactor our Postfix configuration in Puppet at Tor, and that gave me the motivation to reproduce the setup in the lab. So I have finally Puppetized part of my mail setup at home. That used to be hand-crafted experimental stuff documented in a couple of pages in this wiki, but is now being deployed by Puppet. It's not complete yet: spam filtering (including DKIM checks and graylisting) is not implemented yet, but that's the next step, presumably to do during the next outage. The setup should be deployable with puppet apply, however, and I have refined that mechanism a little bit, with the run script. Heck, it's not even deployed yet. But the hard part / grunt work is done.
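A sketch of what that serverless bootstrap can look like on a rescue VM (the repository and manifest names are illustrative):

# assumption: the manifests live in a git repository reachable from the VM
git clone https://git.example.com/puppet.git && cd puppet
puppet apply --modulepath=modules manifests/mail.pp    # no Puppet server needed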

Other The outage was "short" enough (5 hours) that I didn't take the time to deploy the other mitigations I had used in the previous incident. But I'm starting to seriously consider deploying a web (and caching) reverse proxy so that I can endure such problems more gracefully.
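The idea, sketched with nginx on an external VM (hostnames are illustrative): cached pages keep being served while the origin is unreachable.

cat > /etc/nginx/conf.d/home-cache.conf <<'EOF'
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=home:10m
                 max_size=1g inactive=7d;
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://home.example.com;
        proxy_cache home;
        # serve stale content when the home server is down
        proxy_cache_use_stale error timeout updating http_502 http_503 http_504;
    }
}
EOF
nginx -t && systemctl reload nginx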

Side note on proper services Typically, I tend to think of a properly functioning service as having four things:
  1. backups
  2. documentation
  3. monitoring
  4. automation
  5. high availability
Yes, I miscounted. This is why you need high availability.

Backups Duh. If data is maliciously or accidentally destroyed, you need a copy somewhere. Preferably in a way that malicious Joe can't get to. This is harder than you think.

Documentation I have an entire template for this. Essentially, it boils down to using https://diataxis.fr/ and this "audit" guide. For me, the most important parts are:
  • disaster recovery (includes backups, probably)
  • playbook
  • install/upgrade procedures (see automation)
You probably know this is hard, and this is why you're not doing it. Do it anyways: you'll think it sucks, but you'll be really grateful for whatever scraps you wrote when you're in trouble.

Monitoring If you don't have monitoring, you'll learn about failures too late, and you won't know when things recover. Consider high availability, work hard to reduce noise, and don't have machines wake people up: that's literally torture and is against the Geneva Convention. Consider predictive algorithms to prevent failures, like "add storage within 2 weeks before this disk fills up". This is harder than you think.

Automation Make it easy to redeploy the service elsewhere. Yes, I know you have backups. That is not enough: that typically restores data and while it can also include configuration, you're going to need to change things when you restore, which is what automation (or call it "configuration management" if you will) will do for you anyways. This also means you can do unit tests on your configuration, otherwise you're building legacy. This is probably as hard as you think.

High availability Make it not fail when one part goes down. Eliminate single points of failure. This is easier than you think, except for storage and DNS (which, I guess, means it's harder than you think too).

Assessment In the above 5 items, I check two:
  1. backups
  2. documentation
And barely: I'm not happy about the offsite backups, and my documentation is much better at work than at home (and even there, I have a 15-year backlog to catch up on). I barely have monitoring: Prometheus is scraping parts of the infra, but I don't have any sort of alerting -- by which I don't mean "electrocute myself when something goes wrong", I mean "there's a set of thresholds and conditions that define an outage and I can look at it".

Automation is wildly incomplete. My home server is a random collection of old experiments and technologies, ranging from Apache with Perl and CGI scripts to Docker containers running Golang applications. Most of it is not Puppetized (but the ratio is growing). Puppet itself introduces a huge attack vector, with kind of catastrophic lateral movement if the Puppet server gets compromised.

And, fundamentally, I am not sure I can provide high availability in the lab. I'm just this one guy running my home network, and I'm growing older. I'm thinking more about winding things down than building things now, and that's just really sad, because I feel we're losing (well, that escalated quickly).

Resolution In the end, I didn't need any mitigation and the problem fixed itself. I did do quite a bit of cleanup so that feels somewhat good, although I despaired quite a bit at the amount of technical debt I've accumulated in the lab.

Timeline Times are in UTC-4.
  • 6:52: IRC bouncer goes offline
  • 9:20: called TSI support, waited on the line 15 minutes then was told I'd get a call back
  • 9:54: outage apparently detected by TSI
  • 11:00: no response, tried calling back support again
  • 11:10: confirmed bonding router outage, no official ETA but "today", source of the 9:54 timestamp above
  • 12:08: TPA monitoring notices service restored
  • 12:34: call back from TSI; service restored, problem was with the "bonder" configuration on their end, which was "fighting between Montréal and Toronto"

Antoine Beaupré: Losing the war for the free internet

Warning: this is a long ramble I wrote after an outage of my home internet. You'll get your regularly scheduled programming shortly.

I didn't realize this until relatively recently, but we're at war. Fascists and capitalists are trying to take over the world, and it's bringing utter chaos. We're more numerous than them, of course: this is only a handful of people screwing everyone else over, but they've accumulated so much wealth and media control that it's getting really, really hard to move around.

Everything is surveilled: people are carrying tracking and recording devices in their pockets at all times, or they drive around in surveillance machines. Payments are all turning digital. There are cameras everywhere, including in cars. Personal data leaks are so common that people kind of assume their personal address, email address, and other personal information has already been leaked.

The internet itself is collapsing: most people are using the network only as a channel to reach a "small" set of "hyperscalers": mind-bogglingly large datacenters that don't really operate like the old internet. Once you reach the local endpoint, you're not on the internet anymore. Netflix, Google, Facebook (Instagram, WhatsApp, Messenger), Apple, Amazon, Microsoft (Outlook, Hotmail, etc.): all those things are not really the internet anymore. Those companies operate over the "internet" (as in the TCP/IP network), but they are not an "interconnected network" as much as their own gigantic silos, so much bigger than everything else that they essentially dictate how the network operates, regardless of standards. You access it over "the web" (as in "HTTP"), but the fabric is not made of interconnected links that cross sites: all those sites are trying really hard to keep you captive on their platforms. Besides, you think you're writing an email to the state department, for example, but you're really writing to Microsoft Outlook. That app your university or border agency tells you to install, the backend is not hosted by those institutions, it's on Amazon. Heck, even Netflix is on Amazon.

Meanwhile, I've been operating my own mail server, first under my bed (yes, really) and then in a cupboard or the basement, for almost three decades now. And what for? So I can tell people I can? Maybe! I guess the reason I'm doing this is the same reason people are suddenly asking me about the (dead) mesh again. People are worried and scared that the world has been taken over, and they're right: we have gotten seriously screwed. It's the same reason I keep doing radio, minimally know how to grow food, ride a bike, build a shed, paddle a canoe, archive and document things, talk with people, host an assembly. Because, when push comes to shove, there's no one else who's going to do it for you, at least not in a way that benefits the people.

The Internet is one of humanity's greatest accomplishments. Obviously, oligarchs and fascists are trying to destroy it. I just didn't expect the tech bros to be flipping to that side so easily. I thought we were friends, but I guess we are, after all, enemies.

That said, that old internet is still around. It's getting harder to host your own stuff at home, but it's not impossible. Mail is tricky because of reputation, but it's also tricky in the cloud (don't get fooled!), so it's not that much easier (or cheaper) there. So there are things you can do, if you're into tech. Share your wifi with your neighbours. Build a LAN. Throw a wire over to your neighbour too, it works better than wireless. Use Tor.
Run a relay, a snowflake, a webtunnel. Host a web server. Build a site with a static site generator and throw it in the wind. Download and share torrents, and why not run a tracker. Run an IRC server (or Matrix, if you want to federate and lose high availability). At least use Signal, not WhatsApp or Messenger. And yes, why not, run a mail server, join a mesh. Don't write new software, there's plenty of that around already. (Just kidding, you can write code, cypherpunk.) You can do many of those things just by setting up a FreedomBox.

That is, after all, the internet: people doing their own thing for their own people. Otherwise, it's just like sitting in front of the television and watching the ads. Opium of the people, like the good old times.

Let a billion droplets build the biggest multitude of clouds that will storm over this world and rip apart this fascist conspiracy. Disobey. Revolt. Build. We are more than them.

17 March 2025

Vincent Bernat: Offline PKI using 3 YubiKeys and an ARM single board computer

An offline PKI enhances security by physically isolating the certificate authority from network threats. A YubiKey is a low-cost solution to store a root certificate. You also need an air-gapped environment to operate the root CA.
Offline PKI backed up by 3 YubiKeys
This post describes an offline PKI system using the following components:
  • the offline-pki software to manage the PKI,
  • three YubiKeys: two for the root CA and one for the intermediate CA, and
  • an ARM64 single board computer to provide the air-gapped environment.
It is possible to add more YubiKeys as a backup of the root CA if needed. This is not needed for the intermediate CA, as you can generate a new one if the current one gets destroyed.

The software part offline-pki is a small Python application to manage an offline PKI. It relies on yubikey-manager to manage YubiKeys and cryptography for cryptographic operations not executed on the YubiKeys. The application has some opinionated design choices. Notably, the cryptography is hard-coded to use the NIST P-384 elliptic curve. The first step is to reset all your YubiKeys:
$ offline-pki yubikey reset
This will reset the connected YubiKey. Are you sure? [y/N]: y
New PIN code:
Repeat for confirmation:
New PUK code:
Repeat for confirmation:
New management key ('.' to generate a random one):
WARNING[pki-yubikey] Using random management key: e8ffdce07a4e3bd5c0d803aa3948a9c36cfb86ed5a2d5cf533e97b088ae9e629
INFO[pki-yubikey]  0: Yubico YubiKey OTP+FIDO+CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.management] Device config written
INFO[yubikit.piv] PIV application data reset performed
INFO[yubikit.piv] Management key set
INFO[yubikit.piv] New PUK set
INFO[yubikit.piv] New PIN set
INFO[pki-yubikey] YubiKey reset successful!
Then, generate the root CA and create as many copies as you want:
$ offline-pki certificate root --permitted example.com
Management key for Root X:
Plug YubiKey "Root X"...
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.piv] Data written to object slot 0x5fc10a
INFO[yubikit.piv] Certificate written to slot 9C (SIGNATURE), compression=True
INFO[yubikit.piv] Private key imported in slot 9C (SIGNATURE) of type ECCP384
Copy root certificate to another YubiKey? [y/N]: y
Plug YubiKey "Root X"...
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.piv] Data written to object slot 0x5fc10a
INFO[yubikit.piv] Certificate written to slot 9C (SIGNATURE), compression=True
INFO[yubikit.piv] Private key imported in slot 9C (SIGNATURE) of type ECCP384
Copy root certificate to another YubiKey? [y/N]: n
You can inspect the result:
$ offline-pki yubikey info
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[pki-yubikey] Slot 9C (SIGNATURE):
INFO[pki-yubikey]   Private key type: ECCP384
INFO[pki-yubikey]   Public key:
INFO[pki-yubikey]     Algorithm:  secp384r1
INFO[pki-yubikey]     Issuer:     CN=Root CA
INFO[pki-yubikey]     Subject:    CN=Root CA
INFO[pki-yubikey]     Serial:     1
INFO[pki-yubikey]     Not before: 2024-07-05T18:17:19+00:00
INFO[pki-yubikey]     Not after:  2044-06-30T18:17:19+00:00
INFO[pki-yubikey]     PEM:
-----BEGIN CERTIFICATE-----
MIIBcjCB+aADAgECAgEBMAoGCCqGSM49BAMDMBIxEDAOBgNVBAMMB1Jvb3QgQ0Ew
HhcNMjQwNzA1MTgxNzE5WhcNNDQwNjMwMTgxNzE5WjASMRAwDgYDVQQDDAdSb290
IENBMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAERg3Vir6cpEtB8Vgo5cAyBTkku/4w
kXvhWlYZysz7+YzTcxIInZV6mpw61o8W+XbxZV6H6+3YHsr/IeigkK04/HJPi6+i
zU5WJHeBJMqjj2No54Nsx6ep4OtNBMa/7T9foyMwITAPBgNVHRMBAf8EBTADAQH/
MA4GA1UdDwEB/wQEAwIBhjAKBggqhkjOPQQDAwNoADBlAjEAwYKy/L8leJyiZSnn
xrY8xv8wkB9HL2TEAI6fC7gNc2bsISKFwMkyAwg+mKFKN2w7AjBRCtZKg4DZ2iUo
6c0BTXC9a3/28V5aydZj6rvx0JqbF/Ln5+RQL6wFMLoPIvCIiCU=
-----END CERTIFICATE-----
Then, you can create an intermediate certificate with offline-pki yubikey intermediate and use it to sign certificates by providing a CSR to offline-pki certificate sign. Be careful and inspect the CSR before signing it, as only the subject name can be overridden. Check the documentation for more details. Get the available options using the --help flag.
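For example, a CSR on the same P-384 curve can be generated and inspected with OpenSSL before being handed over for signature; a sketch where the file names and subject are illustrative, and the exact offline-pki invocation may differ (check --help):

# names and subject are illustrative
openssl req -new -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 -nodes \
    -subj "/CN=www.example.com" -keyout www.key -out www.csr
openssl req -in www.csr -noout -text    # inspect before signing!
offline-pki certificate sign www.csr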

The hardware part To ensure the operations on the root and intermediate CAs are air-gapped, a cost-efficient solution is to use an ARM64 single board computer. The Libre Computer Sweet Potato SBC is a more open alternative to the well-known Raspberry Pi.1
Libre Computer Sweet Potato SBC, powered by the AML-S905X SOC
I interact with it through a USB-to-TTL UART converter:
$ tio /dev/ttyUSB0
[16:40:44.546] tio v3.7
[16:40:44.546] Press ctrl-t q to quit
[16:40:44.555] Connected to /dev/ttyUSB0
GXL:BL1:9ac50e:bb16dc;FEAT:ADFC318C:0;POC:1;RCY:0;SPI:0;0.0;CHK:0;
TE: 36574
BL2 Built : 15:21:18, Aug 28 2019. gxl g1bf2b53 - luan.yuan@droid15-sz
set vcck to 1120 mv
set vddee to 1000 mv
Board ID = 4
CPU clk: 1200MHz
[...]

The Nix glue To bring everything together, I am using Nix with a Flake providing:
  • a package for the offline-pki application, with shell completion,
  • a development shell, including an editable version of the offline-pki application,
  • a NixOS module to set up the offline PKI, resetting the system at each boot,
  • a QEMU image for testing, and
  • an SD card image to be used on the Sweet Potato or another ARM64 SBC.
# Execute the application locally
nix run github:vincentbernat/offline-pki -- --help
# Run the application inside a QEMU VM
nix run github:vincentbernat/offline-pki\#qemu
# Build an SD card for the Sweet Potato or for the Raspberry Pi
nix build --system aarch64-linux github:vincentbernat/offline-pki\#sdcard.potato
nix build --system aarch64-linux github:vincentbernat/offline-pki\#sdcard.generic
# Get a development shell with the application
nix develop github:vincentbernat/offline-pki

  1. The key for the root CA is not generated by the YubiKey. Using an air-gapped computer is all the more important. Put it in a safe with the YubiKeys when done!

20 February 2025

Paul Tagliamonte: boot2kier

I can't remember exactly the joke I was making at the time in my work's Slack instance (I'm sure it wasn't particularly funny, though; and not even worth re-reading the thread to work out), but it wound up with me writing a UEFI binary for the punchline. Not to spoil the ending, but it worked - no pesky kernel, no messing around with userland. I guess the only part of this you really need to know for the setup here is that it was a Severance joke, which is some fantastic TV. If you haven't seen it, this post will seem perhaps weirder than it actually is. I promise I haven't joined any new cults. For those who have seen it, the payoff to my joke is that I wanted my machine to boot directly to an image of Kier Eagan. As for how to do it, I figured I'd give the uefi crate a shot, and see how it is to use, since this is a low-stakes way of trying it out. In general, this isn't the sort of thing I'd usually post about, except this wound up being easier and way cleaner than I thought it would be. That alone is worth sharing, in the hopes someone comes across this in the future and feels like they, too, can write something fun targeting the UEFI. First things first: gotta create a rust project (I'll leave that part to you depending on your life choices), and add the uefi crate to your Cargo.toml. You can either use cargo add or add a line like this by hand:
uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }
We also need to teach cargo about how to go about building for the UEFI target, so we need to create a rust-toolchain.toml with one (or both) of the UEFI targets we're interested in:
[toolchain]
targets = ["aarch64-unknown-uefi", "x86_64-unknown-uefi"]
Unfortunately, I wasn't able to use the image crate, since it won't build against the uefi target. This looks like it's because rustc had no way to compile the required floating point operations within the image crate without hardware floating point instructions specifically. Rust tends to punt a lot of that to libm usually, so this isn't entirely shocking given we're no_std for a non-hardfloat target. So-called softening requires a software floating point implementation that the compiler can use to polyfill (feels weird to use the term polyfill here, but I guess it's spiritually right?) the lack of hardware floating point operations, which rust hasn't implemented for this target yet. As a result, I changed tactics and figured I'd use ImageMagick to pre-compute the pixels from a jpg, rather than doing it at runtime. A bit of a bummer, since I need to do more out-of-band pre-processing and hardcoding, and updating the image kinda sucks as a result - but it's entirely manageable.
$ convert -resize 1280x900 kier.jpg kier.full.jpg
$ convert -depth 8 kier.full.jpg rgba:kier.bin
This will take our input file (kier.jpg), resize it to get as close to the desired resolution as possible while maintaining aspect ratio, then convert it from a jpg to a flat array of 4-byte RGBA pixels. Critically, it's also important to remember that the size of the kier.full.jpg file may not actually be the requested size - it will not change the aspect ratio - so be sure to make a careful note of the resulting size of the kier.full.jpg file. Last step with the image is to compile it into our Rust binary, since we don't want to struggle with trying to read this off disk, which is thankfully real easy to do.
const KIER: &[u8] = include_bytes!("../kier.bin");
const KIER_WIDTH: usize = 1280;
const KIER_HEIGHT: usize = 641;
const KIER_PIXEL_SIZE: usize = 4;
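One way to double-check the dimensions you hardcode here: ImageMagick's identify prints the post-resize geometry of the converted file (output trimmed):
$ identify kier.full.jpg
kier.full.jpg JPEG 1280x641 1280x641+0+0 8-bit sRGB ...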
Remember to use the width and height from the final kier.full.jpg file as the values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we have 4-byte wide values for each pixel as a result of our conversion step into RGBA. We'll only use RGB, and if we ever drop the alpha channel, we can drop that down to 3. I don't entirely know why I kept alpha around, but I figured it was fine. My kier.full.jpg image winds up shorter than the requested height (which is also qemu's default resolution for me), which means we'll get a semi-annoying black band under the image when we go to run it - but it'll work. Anyway, now that we have our image as bytes, we can get down to work, and write the rest of the code to handle moving bytes around from in-memory as a flat block of pixels, and request that they be displayed using the UEFI GOP. We'll just need to hack up a container for the image pixels and teach it how to blit to the display.
/// RGB Image to move around. This isn't the same as an
/// `image::RgbImage`, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
    /// Size of the image as a tuple, as the
    /// (width, height)
    size: (usize, usize),
    /// raw pixels we'll send to the display.
    inner: Vec<BltPixel>,
}

impl RgbImage {
    /// Create a new `RgbImage`.
    fn new(width: usize, height: usize) -> Self {
        RgbImage {
            size: (width, height),
            inner: vec![BltPixel::new(0, 0, 0); width * height],
        }
    }

    /// Take our pixels and request that the UEFI GOP
    /// display them for us.
    fn write(&self, gop: &mut GraphicsOutput) -> Result {
        gop.blt(BltOp::BufferToVideo {
            buffer: &self.inner,
            src: BltRegion::Full,
            dest: (0, 0),
            dims: self.size,
        })
    }
}

impl Index<(usize, usize)> for RgbImage {
    type Output = BltPixel;
    fn index(&self, idx: (usize, usize)) -> &BltPixel {
        let (x, y) = idx;
        &self.inner[y * self.size.0 + x]
    }
}

impl IndexMut<(usize, usize)> for RgbImage {
    fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
        let (x, y) = idx;
        &mut self.inner[y * self.size.0 + x]
    }
}
We also need to do some basic setup to get a handle to the UEFI GOP via the uefi crate (using uefi::boot::get_handle_for_protocol and uefi::boot::open_protocol_exclusive for the GraphicsOutput protocol), so that we have the object we need to pass to RgbImage in order for it to write the pixels to the display. The only trick here is that the display on the booted system can really be any resolution, so we need to do some capping to ensure that we don't write more pixels than the display can handle. Writing fewer than the display's maximum seems fine, though.
fn praise() -> Result {
    let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
    let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;
    // Get the (width, height) that is the minimum of
    // our image and the display we're using.
    let (width, height) = gop.current_mode_info().resolution();
    let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));
    let mut buffer = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
            let pixel = &mut buffer[(x, y)];
            pixel.red = KIER[idx_r];
            pixel.green = KIER[idx_r + 1];
            pixel.blue = KIER[idx_r + 2];
        }
    }
    buffer.write(&mut gop)?;
    Ok(())
}
Not so bad! A bit tedious - we could solve some of this by turning KIER into an RgbImage at compile-time using some clever Cow and const tricks, and implement blitting a sub-image of the image - but this will do for now. This is a joke, after all, let's not go nuts. All that's left with our code is for us to write our main function and try and boot the thing!
#[entry]
fn main() -> Status {
    uefi::helpers::init().unwrap();
    praise().unwrap();
    boot::stall(100_000_000);
    Status::SUCCESS
}
If you're following along at home and so interested, the final source is over at gist.github.com. We can go ahead and build it using cargo (as is our tradition) by targeting the UEFI platform.
$ cargo build --release --target x86_64-unknown-uefi

Testing the UEFI Blob While I can definitely get my machine to boot these blobs to test, I figured I'd save myself some time by using QEMU to test without a full boot. If you've not done this sort of thing before, we'll need two packages, qemu and ovmf. It's a bit different than most invocations of qemu you may see out there, so I figured it'd be worth writing this down, too.
$ doas apt install qemu-system-x86 ovmf
qemu has a nice feature where it'll create us an EFI partition as a drive and attach it to the VM off a local directory - so let's construct an EFI partition file structure, and drop our binary into the conventional location. If you haven't done this before, and are only interested in running this in a VM, don't worry too much about it; a lot of it is convention and this layout should work for you.
$ mkdir -p esp/efi/boot
$ cp target/x86_64-unknown-uefi/release/*.efi \
 esp/efi/boot/bootx64.efi
With all this in place, we can kick off qemu, booting it in UEFI mode using the ovmf firmware, attaching our EFI partition directory as a drive to our VM to boot off of.
$ qemu-system-x86_64 \
 -enable-kvm \
 -m 2048 \
 -smbios type=0,uefi=on \
 -bios /usr/share/ovmf/OVMF.fd \
 -drive format=raw,file=fat:rw:esp
If all goes well, soon you'll be met with the all-knowing gaze of the Chosen One, Kier Eagan. The thing that really impressed me about all this is this program worked first try - it all went so boringly normal. Truly, kudos to the uefi crate maintainers, it's incredibly well done.

Booting a live system Sure, we could stop here, but anyone can open up an app window and see a picture of Kier Eagan, so I knew I needed to finish the job and boot a real machine up with this. In order to do that, we need to format a USB stick. BE SURE /dev/sda IS CORRECT IF YOU'RE COPY AND PASTING. All my drives are NVMe, so BE CAREFUL - if you use SATA, it may very well be your hard drive! Please do not destroy your computer over this.
$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Once that looks good (depending on your flavor of udev you may or may not need to unplug and replug your USB stick), we can go ahead and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR USB STICK) and write our EFI directory to it.
$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi
Of course, naturally, devotion to Kier shouldn't mean backdooring your system. Disabling Secure Boot runs counter to the Core Principals, such as Probity, and not doing this would surely run counter to Verve, Wit and Vision. This bit does require that you've taken the step to enroll a MOK and know how to use it; right about now is when we can use sbsign to sign the UEFI binary we want to boot from, to continue enforcing Secure Boot. The details of how this command should be run specifically are likely something you'll need to work out depending on how you've decided to manage your MOK.
$ doas sbsign \
 --cert /path/to/mok.crt \
 --key /path/to/mok.key \
 target/x86_64-unknown-uefi/release/*.efi \
 --output esp/efi/boot/bootx64.efi
I figured I'd leave a signed copy of boot2kier at /boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13, with Secure Boot enabled and enforcing; it just took a matter of going into my BIOS to add the right boot option, which was no sweat. I'm sure there is a way to do it using efibootmgr, but I wasn't smart enough to do that quickly (a sketch of that follows below). I let 'er rip, and it booted up and worked great! It was a bit hard to get a video of my laptop, though - but lucky for me, I have a Minisforum Z83-F sitting around (which, until a few weeks ago, was running the annual http server to control my christmas tree), so I grabbed it out of the christmas bin, wired it up to a video capture card I have sitting around, and figured I'd grab a video of me booting a physical device off the boot2kier USB stick.
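For reference, the efibootmgr incantation would look something like this sketch; the disk, partition number, and loader path here are assumptions that must match your ESP:
$ doas efibootmgr --create --disk /dev/nvme0n1 --part 1 \
    --label boot2kier --loader '\EFI\BOOT\KIER.efi'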
Attentive readers will notice the image of Kier is smaller than the qemu-booted system - which just means our real machine has a larger GOP display resolution than qemu, which makes sense! We could write some fancy resize code (sounds annoying), center the image (can't be assed but should be the easy way out here) or resize the original image (pretty hardware-specific workaround). Additionally, you can make out the image being written to the display before us (the Minisforum logo) behind Kier, which is really cool stuff. If we were real fancy we could write blank pixels to the display before blitting Kier, but, again, I don't think I care to do that much work.
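For what it's worth, centering would only be a few lines on top of the code above; a sketch (centered_dest is a hypothetical helper, untested):
fn centered_dest(gop: &GraphicsOutput, image: &RgbImage) -> (usize, usize) {
    // Compute a blit offset that centers the image on the display,
    // instead of pinning it to (0, 0).
    let (disp_w, disp_h) = gop.current_mode_info().resolution();
    let (img_w, img_h) = image.size;
    // saturating_sub falls back to 0 when the display is smaller than
    // the image (the capping in praise() already handles that case).
    (disp_w.saturating_sub(img_w) / 2, disp_h.saturating_sub(img_h) / 2)
}
Passing the result as dest in BltOp::BufferToVideo (instead of (0, 0)) would do the rest.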

But now I must away If I wanted to keep this joke going, I'd likely try and find a copy of the original video when Helly 100%s her file and boot into that, or maybe play a terrible MIDI PC speaker rendition of Kier, Chosen One, Kier after rendering the image. I, unfortunately, don't have any friends involved with production (yet?), so I reckon all that's out for now. I'll likely stop playing with this - the joke was done, and I'm only writing this post because of how great everything was along the way. All in all, this reminds me so much of building a homebrew kernel to boot a system into - but like, good, though - and it's a nice reminder of both how fun this stuff can be, and how far we've come. UEFI protocols are light-years better than how we did it in the dark ages, and the tooling for this is SO much more mature. Booting a custom UEFI binary is miles ahead of trying to boot your own kernel, and I can't believe how good the uefi crate is specifically. Praise Kier! Kudos to everyone involved in making this so delightful.

9 February 2025

Antoine Beaupré: Qalculate hacks

This is going to be a controversial statement because some people are absolute nerds about this, but I need to say it: Qalculate is the best calculator that has ever been made. I am not going to try to convince you of this, I just wanted to put my bias out there before writing down those notes. I am a total fan. This page will collect my notes of cool hacks I do with Qalculate. Most examples are copy-pasted from the command-line interface (qalc(1)), but I typically use the graphical interface as it's slightly better at displaying complex formulas. Discoverability is obviously also better for the cornucopia of features this fantastic application ships.

Qalc commandline primer On Debian, Qalculate's CLI interface can be installed with:
apt install qalc
Then you start it with the qalc command, and end up on a prompt:
anarcat@angela:~$ qalc
> 
Then it's a normal calculator:
anarcat@angela:~$ qalc
> 1+1
  1 + 1 = 2
> 1/7
  1 / 7 ≈ 0.1429
> pi
  pi ≈ 3.142
> 
There's a bunch of variables to control display, approximation, and so on:
> set precision 6
> 1/7
  1 / 7 ≈ 0.142857
> set precision 20
> pi
  pi ≈ 3.1415926535897932385
When I need more, I typically browse around the menus. One big issue I have with Qalculate is there are a lot of menus and features. I had to fiddle quite a bit to figure out that set precision command above. I might add more examples here as I find them.

Bandwidth estimates I often use the data units to estimate bandwidths. For example, here's what 1 megabit per second is over a month ("about 300 GiB"):
> 1 megabit/s * 30 day to gibibyte 
  (1 megabit/second) × (30 days) ≈ 301.7 GiB
Or, "how long will it take to download X", in this case, 1GiB over a 100 mbps link:
> 1GiB/(100 megabit/s)
  (1 gibibyte) / (100 megabits/second) ≈ 1 min + 25.90 s

Password entropy To calculate how much entropy (in bits) a given password structure has, you count the number of possibilities in each entry (say, [a-z] is 26 possibilities, "one word in a 8k dictionary" is 8000), take the base-2 logarithm, and multiply by the number of entries. For example, an alphabetic 14-character password is:
> log2(26*2)*14
  log₂(26 × 2) × 14 ≈ 79.81
... 80 bits of entropy. To get the equivalent in a Diceware password with a 8000 word dictionary, you would need:
> log2(8k)*x = 80
  (log₂(8 × 1000) × x) = 80
  x ≈ 6.170
... about 6 words, which gives you:
> log2(8k)*6
  log₂(8 × 1000) × 6 ≈ 77.79
78 bits of entropy.
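The general formula behind both computations, for s possibilities per element and n independent elements, is:
E = n \log_2 s
so the alphabetic example above is E = 14 log₂(52) ≈ 79.8 bits, matching qalc.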

Exchange rates You can convert between currencies!
> 1 EUR to USD
  1 EUR ≈ 1.038 USD
Even fake ones!
> 1 BTC to USD
  1 BTC ≈ 96712 USD
This relies on a database pulled from the internet (typically the European Central Bank rates, see the source). It will prompt you if it's too old:
It has been 256 days since the exchange rates last were updated.
Do you wish to update the exchange rates now? y
As a reader pointed out, you can set the refresh rate for currencies, as some countries will require way more frequent exchange rates. The graphical version has a little graphical indicator that, when you mouse over, tells you where the rate comes from.

Other conversions Here are other neat conversions extracted from my history:
> teaspoon to ml
  teaspoon = 5 mL
> tablespoon to ml
  tablespoon = 15 mL
> 1 cup to ml 
  1 cup ≈ 236.6 mL
> 6 L/100km to mpg
  (6 liters) / (100 kilometers) ≈ 39.20 mpg
> 100 kph to mph
  100 kph ≈ 62.14 mph
> (108km - 72km) / 110km/h
  ((108 kilometers) − (72 kilometers)) / (110 kilometers/hour) ≈
  19 min + 38.18 s

Completion time estimates This is a more involved example I often do.

Background Say you have started a long running copy job and you don't have the luxury of having a pipe you can insert pv(1) into to get a nice progress bar. For example, rsync or cp -R can have that problem (but not tar!). (Yes, you can use --info=progress2 in rsync, but that estimate is incremental and therefore inaccurate unless you disable the incremental mode with --no-inc-recursive, but then you pay a huge up-front wait cost while the entire directory gets crawled.)

Extracting a process start time First step is to gather data. Find the process start time. If you were unfortunate enough to forget to run date --iso-8601=seconds before starting, you can get a similar timestamp with stat(1) on the process tree in /proc with:
$ stat /proc/11232
  File: /proc/11232
  Size: 0               Blocks: 0          IO Block: 1024   directory
Device: 0,21    Inode: 57021       Links: 9
Access: (0555/dr-xr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2025-02-07 15:50:25.287220819 -0500
Modify: 2025-02-07 15:50:25.287220819 -0500
Change: 2025-02-07 15:50:25.287220819 -0500
 Birth: -
So our start time is 2025-02-07 15:50:25; we shave off the nanoseconds there, they're below our precision noise floor. If you're not dealing with an actual UNIX process, you need to figure out a start time: this can be a SQL query, a network request, whatever; exercise for the reader.
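As an aside, ps can print a human-readable start time for a PID directly, if you prefer that to stat (output format here is indicative):
$ ps -o lstart= -p 11232
Fri Feb  7 15:50:25 2025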

Saving a variable This is optional, but for the sake of demonstration, let's save this as a variable:
> start="2025-02-07 15:50:25"
  save("2025-02-07T15:50:25"; start; Temporary; ; 1) =
  "2025-02-07T15:50:25"

Estimating data size Next, estimate your data size. That will vary wildly with the job you're running; it can be anything: number of files, documents being processed, rows to be destroyed in a database, whatever. In this case, rsync tells me how many bytes it has transferred so far:
# rsync -ASHaXx --info=progress2 /srv/ /srv-zfs/
2.968.252.503.968  94%    7,63MB/s    6:04:58  xfr#464440, ir-chk=1000/982266) 
Strip off the weird dots in there, because that will confuse qalculate, which will count this as:
  2.968252503968 bytes ≈ 2.968 B
Or, essentially, three bytes. We actually transferred almost 3TB here:
  2968252503968 bytes ≈ 2.968 TB
So let's use that. If you had the misfortune of making rsync silent, but were lucky enough to transfer entire partitions, you can use df (without -h! we want to be more precise here), in my case:
Filesystem              1K-blocks       Used  Available Use% Mounted on
/dev/mapper/vg_hdd-srv 7512681384 7258298036  179205040  98% /srv
tank/srv               7667173248 2870444032 4796729216  38% /srv-zfs
(Otherwise, of course, you use du -sh $DIRECTORY.)

Digression over bytes Those are 1K blocks, which are actually (and rather unfortunately) Ki, or "kibibytes" (1024 bytes), not "kilobytes" (1000 bytes). Ugh.
> 2870444032 KiB
  2870444032 kibibytes ≈ 2.939 TB
> 2870444032 kB
  2870444032 kilobytes ≈ 2.870 TB
At this scale, those details matter quite a bit, we're talking about a 69GB (64GiB) difference here:
> 2870444032 KiB - 2870444032 kB
  (2870444032 kibibytes) − (2870444032 kilobytes) ≈ 68.89 GB
Anyways. Let's take 2968252503968 bytes as our current progress. Our entire dataset is 7258298064 KiB, as seen above.

Solving a cross-multiplication We have three out of four variables for our equation here, so we can already solve:
> (now-start)/x = (2996538438607 bytes)/(7258298064 KiB) to h
  ((actual − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes))
  x ≈ 59.24 h
The entire transfer will take about 60 hours to complete! Note that's not the time left, that is the total time. To break this down step by step, we could calculate how long it has taken so far:
> now-start
  now − start ≈ 23 h + 53 min + 6.762 s
> now-start to s
  now − start ≈ 85987 s
... and do the cross-multiplication manually, it's basically:
x/(now-start) = (total/current)
so:
x = (total/current) * (now-start)
or, in Qalc:
> ((7258298064  kibibytes) / ( 2996538438607 bytes) ) *  85987 s
  ((7258298064 kibibytes) / (2996538438607 bytes)) × (85987 seconds) ≈
  2 d + 11 h + 14 min + 38.81 s
It's interesting it gives us different units here! Not sure why.

Now and built-in variables The now here is actually a built-in variable:
> now
  now ≈ "2025-02-08T22:25:25"
There is a bewildering list of such variables, for example:
> uptime
  uptime = 5 d + 6 h + 34 min + 12.11 s
> golden
  golden ≈ 1.618
> exact
  golden = (√(5) + 1) / 2

Computing dates In any case, yay! We know the transfer is going to take roughly 60 hours total, and we've already spent around 24h of that, so, we have 36h left. But I did that all in my head, we can ask more of Qalc yet! Let's make another variable, for that total estimated time:
> total=(now-start)/x = (2996538438607 bytes)/(7258298064 KiB)
  save(((now − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes)); total; Temporary; ; 1) ≈
  2 d + 11 h + 14 min + 38.22 s
And we can plug that into another formula with our start time to figure out when we'll be done!
> start+total
  start + total ≈ "2025-02-10T03:28:52"
> start+total-now
  start + total − now ≈ 1 d + 11 h + 34 min + 48.52 s
> start+total-now to h
  start + total − now ≈ 35 h + 34 min + 32.01 s
That transfer has ~1d left, or 35h34m32s, and should complete around 4 in the morning on February 10th. But that's icing on top. I typically only do the cross-multiplication and calculate the remaining time in my head. I mostly did the last bit to show Qalculate could compute dates and time differences, as long as you use ISO timestamps. Although it can also convert to and from UNIX timestamps, it cannot parse arbitrary date strings (yet?).

Other functionality Qalculate can:
  • Plot graphs;
  • Use RPN input;
  • Do all sorts of algebraic, calculus, matrix, statistics, trigonometry functions (and more!);
  • ... and so much more!
I have a hard time finding things it cannot do. When I get there, I typically need to resort to programming code in Python, use a spreadsheet, and others will turn to more complete engines like Maple, Mathematica or R. But for daily use, Qalculate is just fantastic. And it's pink! Use it!

Further reading and installation This is just scratching the surface, the fine manual has more information, including more examples. There is also of course a qalc(1) manual page which also ships an excellent EXAMPLES section. Qalculate is packaged for over 30 Linux distributions, but also ships packages for Windows and MacOS. There are third-party derivatives as well including a web version and an Android app.

Updates Colin Watson liked this blog post and was inspired to write his own hacks, similar to what's here, but with extras, check it out!

26 January 2025

Russ Allbery: Review: Dark Matters

Review: Dark Matters, by Michelle Diener
Series: Class 5 #4
Publisher: Eclipse
Copyright: October 2019
ISBN: 0-6454658-6-0
Format: Kindle
Pages: 307
Dark Matters is the fourth book in the science fiction semi-romance Class 5 series. There are spoilers for all of the previous books, and although enough is explained that you could make sense of the story starting here, I wouldn't recommend it. As with the other books in the series, it follows new protagonists, but the previous protagonists make an appearance. You will be unsurprised to hear that the Tecran kidnapped yet another Earth woman. The repetitiveness of the setup would be more annoying if the book took itself too seriously, but it doesn't, and so I mostly find it entertaining. I thought Diener was going to dodge the obvious series structure, but now I am wondering if we're going to end up with one woman per Class 5 ship after all. Lucy is not on a ship, however, Tecran or otherwise. She is a captive in a military research facility on the Tecran home world. The Tecran are in very deep trouble given the events of the previous book and have decided that Lucy's existence is a liability. Only the intervention of some sympathetic Tecran scientists she partly befriended during her captivity lets her escape the facility before it's destroyed. Now she's alone, on an alien world, being hunted by the military. It's not entirely the fault of this book that it didn't tell the story that I wanted to read. The setup for Dark Matters implies this book will see the arrival of consequences for the Tecran's blatant violations of the Sentient Beings Agreement. I was looking forward to a more political novel about how such consequences could be administered. This is the sort of problem that we struggle with in our politics: Collective punishment isn't acceptable, but there have to be consequences sufficient to ensure that a state doesn't repeat the outlawed behavior, and yet attempting to deliver those consequences feels like occupation and can set off worse social ruptures and even atrocities. I wasn't expecting that deep of political analysis of what is, after all, a lighthearted SF adventure series, but Diener has been willing to touch on hard problems. The ethics of violence has been an ongoing theme of the series. Alas for me, this is not what we get. The arriving cavalry, in the form of a Class 5 and the inevitable Grih hunk to serve as the love interest du jour, quickly become more interested in helping Lucy elude pursuers (or escape captors) than in the delicate political situation. The conflict between the local population is a significant story element, but only as backdrop. Instead, this reads like a thriller or an action movie, complete with alien predators and a cinematic set piece finale. The political conflict between the Tecran and the United Council does reach a conclusion of sorts, but it's not that satisfying. Perhaps some of the political fallout will happen in future books, but here Diener simplifies the morality of the story in the climax and dodges out of the tricky ethical and social challenge of how to punish a sovereign nation. One of the things I like about this series is that it takes moral indignation seriously, but now that Diener has raised the (correct) complication that people have strong motivations to find excuses for the actions of their own side, I hope she can find a believable political resolution that isn't simple brute force. This entry in the series wasn't bad, but it didn't grab me. 
Lucy was fine as a protagonist; her ability to manipulate the Tecran into making mistakes fits the longer time she's had to study them and keeps her distinct from the other protagonists. But the small bit of politics we do see is unsatisfying and conveniently simplistic, and this book mostly degenerates into generic action sequences. Bane, the Class 5 ship featured in this story, is great when he's active, and I continue to be entertained by the obsession the Class 5 ships have with Earth women, but he's sidelined for too much of the story. I felt like Diener focused on the least interesting part of the story setup. If you've read this far, there's nothing wrong with this entry. You'll probably want to keep reading. But it felt like a missed opportunity. Followed in publication order by Dark Ambitions, a novella that returns to Rose to tell a side story. The next novel is Dark Class, in which we'll presumably see the last kidnapped Earth woman. Rating: 6 out of 10

18 January 2025

Dominique Dumont: How we solved storage API throttling on our Azure Kubernetes clusters

Hi! This issue was quite puzzling, so I'm sharing how we investigated it. I hope it can be useful for you. My client informed me that he was no longer able to install new instances of his application. k9s showed that only some pods could not be created: the ones that create a physical volume (PV). The description of these pods showed an HTTP error 429 when creating the PVC: new PVCs could not be created because we were throttled by the Azure storage API. This issue was confirmed by the Azure diagnostic console on Kubernetes (menu Diagnose and solve problems → Cluster and Control Plane Availability and Performance → Azure Resource Request Throttling). We had a lot of throttling:
2025-01-18_11-01-k8s-throttles.png
Which were explained by the high call rate:
2025-01-18_11-01-k8s-calls.png
The first clue was found at the bottom of the Azure diagnostic page:
2025-01-18_11-27-throttles-by-user-agent.png
According to this page, throttling is done by services whose user agent is:
Go/go1.23.1 (amd64-linux) go-autorest/v14.2.1 Azure-SDK-For-Go/v68.0.0
storage/2021-09-01microsoft.com/aks-operat azsdk-go-armcompute/v1.0.0 (go1.22.3; linux)
The main information is Azure-SDK-For-Go, which means the program making all these calls to the storage API is written in Go. All our services are written in TypeScript or Rust, so they are not suspect. That leaves the controllers running in the kube-system namespace. I could not find anything suspect in the logs of these services. At that point I was convinced that a component in the Kubernetes control plane was making all those calls. Unfortunately, AKS is managed by Microsoft and I don't have access to the control plane logs. However, we realized that we had quite a lot of volumesnapshots that are created in our clusters using k8s-scheduled-volume-snapshotter. We suspected that the Kubernetes reconciliation loop was being throttled when checking the status of all these snapshots. Maybe so, but we also had the same issues and throttle rates on preprod and prod, where the number of snapshots was quite different. We tried to get more information using the Azure console on our snapshot account, but it was also broken by the throttling issue. We were so puzzled that we decided to try Léodagan's advice ("tout crémer pour repartir sur des bases saines", loosely translated as "burn everything down to start from scratch") and we destroyed our dev cluster piece by piece while checking if the throttling stopped. First, we removed all our applications: no change. Then all ancillary components like RabbitMQ and cert-manager were removed: no change. Then we tried to remove the namespace containing our applications. But we faced another issue: Kubernetes was unable to remove the namespace because it could not destroy some PVC and volumesnapshots. That was actually good news, because it meant that we were close to the actual issue. We managed to destroy the PVC and volumesnapshots by removing their finalizers. Finalizers are a kind of marker that tells Kubernetes that something needs to be done before actually deleting a resource. The finalizers were removed with a command like:
kubectl patch volumesnapshots ${volumesnapshot} \
  -p '{"metadata":{"finalizers":null}}' --type merge
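To sweep every volumesnapshot in one go, a loop along these lines works; a sketch, add namespace flags as needed:
for vs in $(kubectl get volumesnapshots -o name); do
  kubectl patch "$vs" -p '{"metadata":{"finalizers":null}}' --type merge
done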
Then we made our first progress: the throttling and the high call rate stopped on our dev cluster. To make sure that the snapshots were the issue, we re-installed the ancillary components and our applications. Everything was copacetic. So the problem was indeed with PVC and snapshots. Even though we have backups outside of Azure, we weren't really thrilled at trying Léodagan's method on our prod cluster, so we looked for a better fix to try on our preprod cluster. Poking around in PVC and volumesnapshots, I finally found this error message in the description of a volumesnapshotcontent:
Code="ShareSnapshotCountExceeded" Message="The total number of snapshots
for the share is over the limit."
The number of snapshots found in our cluster was not that high. So I wanted to check the snapshots present in our storage account using the Azure console, which was still broken. Fortunately, the Azure CLI is able to retry HTTP calls when getting 429 errors. I managed to get a list of snapshots with:
az storage share list --account-name [redacted] --include-snapshots \
      | tee preprod-list.json
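To count how many of the returned entries are snapshots rather than live shares, a jq filter over that JSON can help; a sketch, where the snapshot field name is an assumption about the az output:
$ jq '[.[] | select(.snapshot != null)] | length' preprod-list.json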
There, I found a lot of snapshots dating back to 2024. These were no longer managed by Kubernetes and should have been cleaned up. That was our smoking gun. I guess that we had a chain of events leading there. To make things worse, k8s-scheduled-volume-snapshotter creates new snapshots when it cannot list the old ones. So we had 4 new snapshots per day instead of one. Since we had the chain of events, fixing the issue was not too difficult (but quite long):
  1. stop k8s-scheduled-volume-snapshotter by disabling its cron job
  2. delete all volumesnapshots and volumesnapshotcontents from k8s.
  3. since Azure API was throttled, we also had to remove their finalizers
  4. delete all snapshots from Azure using the az command and a Perl script (this step took several hours)
  5. re-enable k8s-scheduled-volume-snapshotter
After these steps, preprod was back to normal. I'm now applying the same recipe on prod. We still don't know why we had all these stale snapshots. It may have been a human error or a bug in k8s-scheduled-volume-snapshotter. Anyway, we will take steps to avoid this problem in the future. My name is Dominique Dumont, I'm a devops freelance. You can find the devops and audit services I propose on my website or reach out to me on LinkedIn. All the best.

1 December 2024

Russ Allbery: Review: Unexploded Remnants

Review: Unexploded Remnants, by Elaine Gallagher
Publisher: Tordotcom
Copyright: 2024
ISBN: 1-250-32522-6
Format: Kindle
Pages: 111
Unexploded Remnants is a science fiction adventure novella. The protagonist and world background would support an episodic series, but as of this writing it stands alone. It is Elaine Gallagher's first professional publication. Alice is the last survivor of Earth: an explorer, information trader, and occasional associate of the Archive. She scouts interesting places, looks for inconsistencies in the stories the galactic civilizations tell themselves, and pokes around ruins for treasure. As this story opens, she finds a supposedly broken computer core in the Alta Sidoie bazaar that is definitely not what the trader thinks it is. Very shortly thereafter, she's being hunted by a clan of dangerous Delosi while trying to decide what to do with a possibly malevolent AI with frightening intrusion abilities. This is one of those stories where all the individual pieces sounded great, but the way they were assembled didn't click for me. Unusually, I'm not entirely sure why. Often it's the characters, but I liked Alice well enough. The Lewis Carroll allusions were there but not overdone, her computer agent Bugs is a little too much of a Warner Brothers cartoon but still interesting, and the world building has plenty of interesting hooks. I certainly can't complain about the pacing: the plot moves briskly along to a somewhat predictable but still adequate conclusion. The writing is smooth and competent, and the world is memorable enough that I'm still thinking about it. And yet, I never connected with this story. I think it may be because both Alice and the tight third-person narrator tend towards breezy confidence and matter-of-fact descriptions. Alice does, at times, get scared or angry, but I never felt those emotions. They were just events that were described to me. There wasn't an emotional hook, a place where the character grabbed me, and so it felt like everything was happening at an odd remove. The advantage of this approach is that there are no overwrought emotional meltdowns or brooding angstful protagonists, just an adventure story about a competent and thoughtful character, but I think I wanted a bit more emotional involvement than I got. The world background is the best part and feels like it could be part of a larger series. The Milky Way is connected by an old, vast, and only partly understood network of teleportation portals, which had cut off Earth for unknown reasons and then just as mysteriously reactivated when Alice, then Andrew, drunkenly poked at a standing stone while muttering an old prayer in Gaelic. The Archive spent a year sorting out her intellectual diseases (capitalism was particularly alarming) and giving her a fresh start with a new body. Humanity subsequently destroyed itself in a paroxysm of reactionary violence, leaving Alice a free agent, one of a kind in a galaxy of dizzying variety and forgotten history. Gallagher makes great use of the weirdness of the portal network to create a Star Wars style of universe: the focus is more on the diversity of the planets and alien species than on a coherent unifying structure. The settings of this book are not prone to Planet of the Hats problems. They instead have the contrasts that one would get if one dropped portals near current or former Earth population centers and then took a random walk through them (or, in other words, what playing GeoGuessr on a world map feels like). I liked this effect, but I have to admit that it also added to that sense of sliding off the surface of the story. 
The place descriptions were great bits of atmosphere, but I never cared about them. There isn't enough emotional coherence to make them memorable. One of the more notable quirks of this story is the description of ideologies and prejudices as viral memes that can be cataloged, cured, and deployed like weapons. This is a theme of the world-building as well: this society, or at least the Archive-affiliated parts of it, classifies some patterns of thought as potentially dangerous but treatable contagious diseases. I'm not going to object too much to this as a bit of background and characterization in a fairly short novella stuffed with a lot of other world-building and plot, but there was something about treating ethical systems like diseases that bugged me in much the same way that medicalization of neurodiversity bugs me. I think some people will find that sense of moral clarity relaxing and others will find it vaguely irritating, and I seem to have ended up in the second group. Overall, I would classify this as an interesting not-quite-success. It felt like a side story in a larger universe, like a story that would work better if I already knew Alice from other novels and had an established emotional connection with her. As is, I would not really recommend it, but there are enough good pieces here that I would be interested to see what Gallagher does next. Rating: 6 out of 10

1 November 2024

Russ Allbery: Review: Overdue and Returns

Review: Overdue and Returns, by Mark Lawrence
Publisher: Mark Lawrence
Copyright: June 2023
Copyright: February 2024
ASIN: B0C9N51M6Y
ASIN: B0CTYNQGBX
Format: Kindle
Pages: 99
Overdue is a stand-alone novelette in the Library Trilogy universe. Returns is a collection of two stories, the novelette "Returns" and the short story "About Pain." All of them together are about the length of a novella, so I'm combining them into a single review. These are ancillary stories in the same universe as the novels, but not necessarily in the same timeline. (Trying to fit "About Pain" into the novel timeline will give you a headache and I am choosing to read it as author's fan fiction.) I'm guessing they're part of the new fad for releasing short fiction on Amazon to tide readers over and maintain interest between books in a series, a fad about which I have mixed feelings. Given the total lack of publisher metadata in either the stories or on Amazon, I'm assuming they were self-published even though the novels are published by Ace, but I don't know that for certain. There are spoilers for The Book That Wouldn't Burn, so don't read these before that novel. There are no spoilers for The Book That Broke the World, and I don't think the reading order would matter. I found all three of these stories irritating and thuddingly trite. "Returns" is probably the best of the lot in terms of quality of storytelling, but I intensely dislike the structural implications of the nature of the book at its center and am therefore hoping that it's non-canonical. I would not waste your time with these even if you are enjoying the novels. "Overdue": Three owners of the same bookstore at different points in time have encounters with an albino man named Yute who is on a quest. One of the owners is trying to write a book, one of them is older, depressed, and closed off, and one of them has regular conversations with her sister's ghost. The nature of the relationship between the three is too much of a spoiler, but it involves similar shenanigans as The Book That Wouldn't Burn. Lawrence uses my least favorite resolution of benign ghost stories. The story tries very hard to sell it as a good thing, but I thought it was cruel and prefer fantasy that rejects both branches of that dilemma. Other than that, it was fine, I guess, although the moral was delivered with all of the subtlety of the last two minutes of a Saturday morning cartoon. (5) "Returns": Livira returns a book deep inside the library and finds that she can decipher it, which leads her to a story about Yute going on a trip to recover another library book. This had a lot of great Yute lines, plus I always like seeing Livira in exploration mode. The book itself is paradoxical in a causality-destroying way, which is handwaved away as literal magic. I liked this one the best of the three stories, but I hope the world-building of the main series does not go in this direction and I'm a little afraid it might. (6) "About Pain": A man named Holden runs into a woman named Clovis at the gym while carrying a book titled Catcher that his dog found and that he's returning to the library. I thoroughly enjoy Clovis and was happy to read a few more scenes about her. Other than that, this was fine, I guess, although it is a story designed to deliver a point and that point is one that appears in every discussion of classics and re-reading that has ever happened on the Internet. Also, I know I'm being grumpy, but Lawrence's puns with authors and character names are chapter-epigraph amusing but not short-story-length funny. Yes, yes, his name is Holden, we get it. (5) Rating: 5 out of 10

21 August 2024

Russ Allbery: Review: These Burning Stars

Review: These Burning Stars, by Bethany Jacobs
Series: Kindom Trilogy #1
Publisher: Orbit
Copyright: October 2023
ISBN: 0-316-46342-6
Format: Kindle
Pages: 430
These Burning Stars is a science fiction thriller with cyberpunk vibes. It is Bethany Jacobs's first novel and the first of an expected trilogy, and won the 2024 Philip K. Dick Award for the best SF paperback original published in the US. Generation starships brought humanity to the three star systems of the Treble, where they've built a new and thriving culture of billions. The Treble is ruled by the Kindom, a tripartite government structure built around the worship of six gods and the aristocratic power of the First Families. The Clerisy handle religion, the Secretaries run the bureaucracy, and the Cloaksaan enforce the decisions of the other branches. The Nightfoots are one of the First Families. They control sevite, the propellant required to move between the systems of the Treble now that the moon Jeve and the sole source of natural jevite has been destroyed. Esek Nightfoot is a cleric, theoretically following the rules of the Clerisy, but she has made a career of training cloaksaan. She is mercurial, powerful, ruthless, ambitious, politically well-connected, and greatly feared. She is also obsessed with a person named Six: an orphan she first encountered at a training school who was too young to have a gender or a name but who was already one of the best fighters in the school. In the sort of manipulative challenge typical of Esek, she dangled the offer of a place as a student and challenged the child to learn enough to do something impressive. The subsequent twenty years of elusive taunts and mysterious gifts from the impossible-to-locate Six have driven Esek wild. Cleric Chono was beside Esek for much of that time. One of Six's classmates and another of Esek's rescues, Chono is the rare student who became a cleric rather than a cloaksaan. She is pious, cautious, and careful, the opposite of Esek's mercurial rage, but it's impossible to spend that much time around the woman and not be affected and manipulated by her. As this story opens, Chono is summoned by the First Cleric to join Esek on an assignment: recover a data coin that was stolen from a pirate raid on the Nightfoot compound. He refuses to tell them what data is on it, only saying that he believes it could be used to undermine public trust in the Nightfoot family. Jun is a hacker with considerably fewer connections to power or government and no desire to meet any of these people. She and her partner Liis make a dubiously legal living from smaller, quieter jobs. Buying a collection of stolen data coins for an archivist consortium is riskier than she prefers, but she's been tracking down rumors of this coin for months. The deal is worth a lot of money, enough to make a huge difference for her family. This is the second book I've read recently with strong cyberpunk vibes, although These Burning Stars mixes them with political thriller. This is a messy world with complicated political and religious systems, a lot of contentious history, and vast inequality. The story is told in two interleaved time sequences: the present-day fight over the data coin and the information that it contains, and a sequence of flashbacks telling the history of Esek's relationship with Six and Chono. Jun's story is the most cyberpunk and the one I found the most enjoyable to read, but Chono is a good viewpoint character for Esek's vicious energy and abusive charisma. Six is not a viewpoint character. For most of the book, they're present mostly in shadows, glimpses, and consequences, but they're the strongest character of the book.
Both Esek and Six are larger than life, creatures of legend stuffed into mundane politics but too full of strong emotions, both good and bad, to play by any of the rules. Esek has the power base and access to the levers of government, but Six's quiet competence and mercilessly targeted morality may make them the more dangerous of the pair. I found the twisty political thriller part of this book engrossing and very difficult to put down, but it was also a bit too much drama for me in places. Jacobs has some surprises in store, one of which I did not expect at all, and they're set up beautifully and well-done within the story, but Esek and Six become an emotional star that the other characters orbit around and are in danger of getting pulled into. Chono is an accomplished and powerful character in her own right, but she's also an abuse victim, and while those parts are realistic, I didn't entirely enjoy reading them. There is quiet competence here alongside the drama, but I think I wanted the balance of emotion to tip a bit more towards the competence. There is one thing that Jacobs does with the end of the book that greatly impressed me. Unfortunately I can't even hint at it for fear of spoilers, but the ending is unsettling in a way that I found surprising and thought-provoking. I think what I can say is that this book respects the intelligence and skill of secondary characters in a way that I think is rare in a story with such overwhelming protagonists. I'm still thinking about that, and it's going to pull me right into the sequel. This is not going to be to everyone's taste. Esek is a viewpoint character and she can be very nasty. There's a lot of violence and abuse, including one rather graphic fight scene that I thought dragged on much longer than it needed to. But it's a satisfying, complex story with a true variety of characters and some real surprises. I'm glad I read it. Followed by On Vicious Worlds, not yet published as I write this. Content warnings: emotional and physical abuse, graphic violence, off-screen rape and sexual abuse of minors. Rating: 7 out of 10

21 July 2024

Russell Coker: SE Linux Policy for Dell Management

The recent issue of Windows security software killing computers has reminded me about the issue of management software for Dell systems. I wrote policy for the Dell management programs that extract information from iDRAC and store it in Linux. After the break I've pasted in the policy. It probably needs some changes for recent software; it was last tested on a PowerEdge T320, and prior to that was used on a PowerEdge R710, both of which are old hardware and use different management software to the recent hardware. One would hope that the recent software would be much better, but usually such hope is in vain. I deliberately haven't submitted this for inclusion in the reference policy because it's for proprietary software and also it permits many operations that we would prefer not to permit. The policy is after the break because it's larger than you want on a Planet feed. But first I'll give a few selected lines that are bad in a noteworthy way:
  1. sys_admin means the ability to break everything
  2. dac_override means breaking Unix permissions
  3. mknod means a daemon creates devices due to a lack of udev configuration
  4. sys_rawio means someone didn't feel like writing a device driver; maintaining a device driver for DKMS is hard and getting a driver accepted upstream requires writing quality code. In any case this is a bad sign.
  5. self:lockdown is being phased out, but used to mean bypassing some integrity protections, usually related to sys_rawio or similar.
  6. dev_rx_raw_memory is bad: reading raw memory allows access to pretty much everything, and execution of raw memory is something I can't imagine a good use for. The Reference Policy doesn't use this anywhere!
  7. dev_rw_generic_chr_files usually means a lack of udev configuration as udev should do that.
  8. storage_raw_write_fixed_disk shouldn't be needed for this sort of thing; it doesn't do anything that involves managing partitions.
Now without network access or other obvious ways of remote control, this level of access, while excessive, isn't necessarily going to allow bad things to happen due to outside attack. But if there are bugs in the software, there's nothing to stop it from giving the worst results.
allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)
storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)
allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:capability { sys_admin sys_rawio };
dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)
The best thing that Dell could do for their customers is to make this free software and allow the community to fix some of these issues.
Here is dellsrvadmin.te:
policy_module(dellsrvadmin,1.0.0)
require {
  type dmidecode_exec_t;
  type udev_t;
  type device_t;
  type event_device_t;
  type mon_local_test_t;
}
type dellsrvadmin_t;
type dellsrvadmin_exec_t;
init_daemon_domain(dellsrvadmin_t, dellsrvadmin_exec_t)
type dell_datamgrd_t;
type dell_datamgrd_exec_t;
init_daemon_domain(dell_datamgrd_t, dell_datamgrd_t)
type dellsrvadmin_var_t;
files_type(dellsrvadmin_var_t)
domain_transition_pattern(udev_t, dellsrvadmin_exec_t, dellsrvadmin_t)
modutils_domtrans(dellsrvadmin_t)
allow dell_datamgrd_t device_t:dir rw_dir_perms;
allow dell_datamgrd_t device_t:chr_file create;
allow dell_datamgrd_t event_device_t:chr_file { read write };
allow dell_datamgrd_t self:process signal;
allow dell_datamgrd_t self:fifo_file rw_file_perms;
allow dell_datamgrd_t self:sem create_sem_perms;
allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
allow dell_datamgrd_t self:unix_dgram_socket create_socket_perms;
allow dell_datamgrd_t self:netlink_route_socket r_netlink_socket_perms;
modutils_domtrans(dell_datamgrd_t)
can_exec(dell_datamgrd_t, dmidecode_exec_t)
allow dell_datamgrd_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:file manage_file_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:lnk_file read;
allow dell_datamgrd_t dellsrvadmin_var_t:sock_file manage_file_perms;
kernel_read_network_state(dell_datamgrd_t)
kernel_read_system_state(dell_datamgrd_t)
kernel_search_fs_sysctls(dell_datamgrd_t)
kernel_read_vm_overcommit_sysctl(dell_datamgrd_t)
# for /proc/bus/pci/*
kernel_write_proc_files(dell_datamgrd_t)
corecmd_exec_bin(dell_datamgrd_t)
corecmd_exec_shell(dell_datamgrd_t)
corecmd_shell_entry_type(dell_datamgrd_t)
dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)
files_search_tmp(dell_datamgrd_t)
files_read_etc_files(dell_datamgrd_t)
files_read_etc_symlinks(dell_datamgrd_t)
files_read_usr_files(dell_datamgrd_t)
logging_search_logs(dell_datamgrd_t)
miscfiles_read_localization(dell_datamgrd_t)
storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)
can_exec(mon_local_test_t, dellsrvadmin_exec_t)
allow mon_local_test_t dellsrvadmin_var_t:dir search;
allow mon_local_test_t dellsrvadmin_var_t:file read_file_perms;
allow mon_local_test_t dellsrvadmin_var_t:file setattr;
allow mon_local_test_t dellsrvadmin_var_t:sock_file write;
allow mon_local_test_t dell_datamgrd_t:unix_stream_socket connectto;
allow mon_local_test_t self:sem { create read write destroy unix_write };
allow dellsrvadmin_t self:process signal;
allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:sem create_sem_perms;
allow dellsrvadmin_t self:fifo_file rw_file_perms;
allow dellsrvadmin_t self:packet_socket create;
allow dellsrvadmin_t self:unix_stream_socket { connectto create_stream_socket_perms };
allow dellsrvadmin_t self:capability { sys_admin sys_rawio };
dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)
allow dellsrvadmin_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:file manage_file_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:lnk_file read;
allow dellsrvadmin_t dellsrvadmin_var_t:sock_file write;
allow dellsrvadmin_t dell_datamgrd_t:unix_stream_socket connectto;
kernel_read_network_state(dellsrvadmin_t)
kernel_read_system_state(dellsrvadmin_t)
kernel_search_fs_sysctls(dellsrvadmin_t)
kernel_read_vm_overcommit_sysctl(dellsrvadmin_t)
corecmd_exec_bin(dellsrvadmin_t)
corecmd_exec_shell(dellsrvadmin_t)
corecmd_shell_entry_type(dellsrvadmin_t)
files_read_etc_files(dellsrvadmin_t)
files_read_etc_symlinks(dellsrvadmin_t)
files_read_usr_files(dellsrvadmin_t)
logging_search_logs(dellsrvadmin_t)
miscfiles_read_localization(dellsrvadmin_t)
Here is dellsrvadmin.fc:
/opt/dell/srvadmin/sbin/.*        --        gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/sbin/dsm_sa_datamgrd        --        gen_context(system_u:object_r:dell_datamgrd_exec_t,s0)
/opt/dell/srvadmin/bin/.*        --        gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/var(/.*)?                        gen_context(system_u:object_r:dellsrvadmin_var_t,s0)
/opt/dell/srvadmin/etc/srvadmin-isvc/ini(/.*)?        gen_context(system_u:object_r:dellsrvadmin_var_t,s0)
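For anyone who wants to try a policy like this, here is a minimal sketch of building and loading it, assuming the refpolicy development Makefile (from your distribution's SELinux policy development package) is installed at /usr/share/selinux/devel/Makefile:

# compile dellsrvadmin.te and dellsrvadmin.fc into a loadable module
make -f /usr/share/selinux/devel/Makefile dellsrvadmin.pp
# install the module into the running policy
semodule -i dellsrvadmin.pp
# relabel the Dell files according to the new file contexts
restorecon -R /opt/dell/srvadmin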

1 July 2024

Russell Coker: Links June 2024

Modos Labs have released the design of an e-ink display connected by USB-C [1]. They have provided a lot of background information on e-ink displays which isn't available elsewhere. Excellent work!
Informative article about a company giving renters insecure locks while facilitating collusion to raise rents [2].
Insightful video by JimmyTheGiant about the destruction of housing estates in the UK [3]. I wonder how much of this was deliberate by the Tories.
Insightful video by Modern Vintage Gamer about the way Nintendo is destroying history by preventing people playing old games [4].
Interesting video by Louis Rossmann about the low quality of products and reviews on Amazon [5]. We all know about Enshittification, but it seems that Amazon is getting to the stage of being unusable for some products.
Amusing video by Folding Ideas about Decentraland, an attempt at a blockchain-based Second Life type thing which failed as you expect blockchain things to fail [6]. The top comment is a transcription of the actions of the speaker's pet cat. ;)

12 May 2024

Elana Hashman: I am very sick

I have not been able to walk since February 18, 2023. When people ask me how I'm doing, this is the first thing that comes to mind. "Well, you know, the usual, but also I still can't walk," I think to myself. If I dream at night, I often see myself walking or running. In conversation, if I talk about going somewhere, I'll imagine walking there. Even though it's been over a year, I remember walking to the bus, riding to see my friends, going out for brunch, cooking community dinners. But these days, I can't manage going anywhere except by car, and I can't do the driving, and I can't dis/assemble and load my chair. When I'm resting in bed and follow a guided meditation, I might be asked to imagine walking up a staircase, step by step. Sometimes, I do. Other times, I imagine taking a little elevator in my chair, or wheeling up ramps. I feel like there is little I can say that can express the extent of what this illness has taken from me, but it's worth trying. To an able-bodied person, seeing me in a power wheelchair is usually "enough." One of my acquaintances cried when they last saw me in person. But frankly, I love my wheelchair. I am not "wheelchair-bound"; I am bed-bound, and the wheelchair gets me out of bed. My chair hasn't taken anything from me.

***

In October of 2022, I was diagnosed with myalgic encephalomyelitis. Scientists and doctors don't really know what myalgic encephalomyelitis (ME) is. Diseases like it have been described for over 200 years.1 It primarily affects women between the ages of 10-39, and the primary symptom is "post-exertional malaise" or PEM: debilitating, disproportionate fatigue following activity, often delayed by 24-72 hours and not relieved by sleep. That fatigue has earned the illness the misleading name of "Chronic Fatigue Syndrome" or CFS, as though we're all just very tired all the time. But tired people respond to exercise positively. People with ME/CFS do not.2 Given the dearth of research and complete lack of on-label treatments, you may think this illness is at least rare, but it is actually quite common: in the United States, an estimated 836k-2.5m people3 have ME/CFS. It is frequently misdiagnosed, and it is estimated that as many as 90% of cases are missed,4 due to mild or moderate symptoms that mimic other diseases. Furthermore, over half of Long COVID cases likely meet the diagnostic criteria for ME,5 so these numbers have increased greatly in recent years. That is, ME is at least as common as rheumatoid arthritis,6 another delightful illness I have. But while any doctor knows what rheumatoid arthritis is, not enough7 have heard of "myalgic encephalomyelitis." Despite a high frequency and disease burden, post-viral associated conditions (PASCs) such as ME have been neglected for medical funding for decades.8 Indeed, many people, including medical care workers, find it hard to believe that after the acute phase of illness, severe symptoms can persist. PASCs such as ME and Long COVID defy the typical narrative around common illnesses. I was always told that if I got sick, I should expect to rest for a bit, maybe take some medications, and a week or two later, I'd get better, right? But I never got better. These are complex, multi-system diseases that do not neatly fit into the Western medical system's specializations. I have seen nearly every specialty because ME/CFS affects nearly every system of the body: cardiology, nephrology, pulmonology, neurology, ophthalmology, and many, many more.
You'd think they'd hand out frequent flyer cards, or a medical passport with fun stamps, but nope. Just hundreds of pages of medical records. And when I don't fit neatly into one particular specialist's box, then I'm sent back to my primary care doctor to regroup while we try to troubleshoot my latest concerning symptoms. "Sorry, can't help you. Not my department." With little available medical expertise, a lot of my disease management has been self-directed in partnership with primary care. I've read hundreds of articles, papers, publications, CME material normally reserved for doctors. It's truly out of necessity, and I'm certain I would be much worse off if I lacked the skills and connections to do this; there are so few ME/CFS experts in the US that there isn't one in my state or any adjacent state.9 So I've done a lot of my own work, much of it while barely being able to read. (A text-to-speech service is a real lifesaver.) To facilitate managing my illness, I've built a mental model of how my particular flavour of ME/CFS works based on the available research I've been able to read and how I respond to treatments. Here is my best attempt to explain it: The best way I have learned to manage this is to prevent myself from doing activities where I will exceed that aerobic threshold by wearing a heart rate monitor,12 but the amount of activity that permits in my current state of health is laughably restrictive. Most days I'm unable to spend more than one to two hours out of bed. Over time, this has meant worsening from a persistent feeling of tiredness all the time and difficulty commuting into an office or sitting at a desk, to being unable to sit at a desk for an entire workday even while working from home and avoiding physically intense chores or exercise without really understanding why, to being unable to leave my apartment for days at a time, and finally, being unable to stand for more than a minute or two or walk. But it's not merely that I can't walk. Many folks in wheelchairs are able to live excellent lives with adaptive technology. The problem is that I am so fatigued, any activity can destroy my remaining quality of life. In my worst moments, I've been unable to read, move my arms or legs, or speak aloud. Every single one of my limbs burned, as though I had caught fire. Food sat in my stomach for hours, undigested, while my stomach seemingly lacked the energy to do its job. I currently rely on family and friends for full-time caretaking, plus a paid home health aide, as I am unable to prep meals, shower, or leave the house independently. This assistance has helped me slowly improve from my poorest levels of function. While I am doing better than I was at my worst, I've had to give up essentially all of my hobbies with physical components. These include singing, cooking, baking, taking care of my houseplants, cross-stitching, painting, and so on. Doing any of these results in post-exertional malaise so I've had to stop; this reduction of activity to prevent worsening the illness is referred to as "pacing." I've also had to cut back essentially all of my volunteering and work in open source; I am only cleared by my doctor to work 15h/wk (from bed) as of writing.

***

CW: severe illness, death, and suicide (skip this section)

The difficulty of living with a chronic illness is that there's no light at the end of the tunnel.
Some diseases have a clear treatment path: you take the medications, you complete the procedures, you hit all the milestones, and then you're done, perhaps with some long-term maintenance work. But with ME, there isn't really an end in sight. The median duration of illness reported in one 1997 study was over 6 years, with some patients reporting 20 years of symptoms.13 While a small number of patients spontaneously recover, and many improve, the vast majority of patients are unable to regain their baseline function.14 My greatest fear since losing the ability to walk is getting worse still. Because, while I already require assistance with nearly every activity of daily living, there is still room for decline. The prognosis for extremely ill patients is dismal, and many require feeding tubes and daily nursing care. This may lead to life-threatening malnutrition;15 a number of these extremely severe patients have died, either due to medical neglect or suicide.16 Extremely severe patients cannot tolerate light, sound, touch, or cognitive exertion,17 and often spend most of their time lying flat in a darkened room with ear muffs or an eye mask.18 This is all to say, my prognosis is not great. But while I recognize that the odds aren't exactly in my favour, I am also damn stubborn. (A friend once cheerfully described me as "stubbornly optimistic!") I only get one shot at life, and I do not want to spend the entirety of it barely able to perceive what's going on around me. So while my prognosis is uncertain, there's lots of evidence that I can improve somewhat,19 and there's also lots of evidence that I can live 20+ years with this disease. It's a bitter pill to swallow, but it also means I might have the gift of time, something that not all my friends with severe complex illnesses have had. I feel like I owe it to myself to do the best I can to improve; to try to help others in a similar situation; and to enjoy the time that I have. I already feel like my life has been moving in slow motion for the past 4 years; there's no need to add more suffering. Finding joy, as much as I can, every day, is essential to keep up my strength for this marathon. Even if it takes 20 years to find a cure, I am convinced that the standard of care is going to improve. All the research and advocacy that's been happening over the past decade is plenty to feel hopeful about.20 Hope is a discipline,21 and I try to remind myself of this on the hardest days.

***

I'm not entirely sure why I decided to write this. Certainly, today is International ME/CFS Awareness Day, and I'm hoping this post will raise awareness in spaces that aren't often thinking about chronic illnesses. But I think there is also a part of me that wants to share, reach out in some way to the people I've lost contact with while I've been treading water, managing the day-to-day of my illness. I experience this profound sense of loss, especially when I think back to the life I had before. Everyone hits limitations in what they can do and accomplish, but there is so little I can do with the time and energy that I have. And yet, I understand even this precious little could still be less. So I pace myself. Perhaps I can inspire you to take action on behalf of those of us too fatigued to do the advocacy we need and deserve. Should you donate to a charity or advocacy organization supporting ME/CFS research?
In the US, there are many excellent organizations, such as ME Action, the Open Medicine Foundation, SolveME, the Bateman Horne Center, and the Workwell Foundation. I am also happy to match any donations through the end of May 2024 if you send me your receipts. But charitable giving only goes so far, and I think this problem deserves the backing of more powerful organizations. Proportionate government funding and support is desperately needed. It's critical for us to push governments22 to provide the funding required for research that will make an impact on patients' lives now. Many organizers are running campaigns around the world, advocating for this investment. There is a natural partnership between ME advocacy and Long COVID advocacy, for example, and we have an opportunity to make a great difference to many people by pushing for research and resources inclusive of all PASCs. Some examples I'm aware of include:

But outside of collective organizing, there are a lot of sick individuals out there that need help, too. Please, don't forget about us. We need you to visit us, care for us, be our confidantes, show up as friends. There are a lot of people who are very sick out here and need your care. I'm one of them.
