Search Results: "hugo"

25 March 2024

Jonathan Dowland: a bug a day

I recently became a maintainer of/committer to IkiWiki, the software that powers my site. I also took over maintenance of the Debian package. Last week I cut a new upstream point release, 3.20200202.4, and a corresponding Debian package upload, consisting only of a handful of low-hanging-fruit patches from other people, largely to exercise both processes.

I've been discussing IkiWiki's maintenance situation with some other users for a couple of years now. I've also weighed up the pros and cons of moving to a different static-site generator (a term that describes what IkiWiki is, but was actually coined more recently). It turns out IkiWiki is exceptionally flexible and powerful: I estimate the cost of moving to something modern(er) and fashionable such as Jekyll, Hugo or Hakyll as unreasonably high, in part because they are surprisingly rigid and inflexible in some key places.

Like most mature software, IkiWiki has a bug backlog. Over the past couple of weeks, as a sort-of "palate cleanser" around work pieces, I've tried to triage one IkiWiki bug per day: either upstream or in the Debian Bug Tracker. This is a really lightweight task: it can be as simple as "find a bug reported in Debian, copy it upstream, tag it upstream, mark it forwarded", perhaps taking 5-10 minutes. As I go, I'll often stumble across something that has already been fixed but not recorded as such.

Despite this minimal level of work, I'm quite satisfied with the cumulative progress. It's notable to me how much my perspective has shifted by becoming a maintainer: I'm considering everything through a different lens to that of being just one user. Eventually I will put some time aside to scratch some of my own itches (html5 by default; support dark mode; duckduckgo plugin; use the details tag...) but for now this minimal exercise is of broader use.
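For a concrete (if hypothetical) illustration of the Debian half of that routine, the bts tool from devscripts handles the tagging and forwarding; the bug number and upstream URL below are placeholders:

# after copying the report to the upstream tracker, mark the Debian
# bug as an upstream issue and record where it was forwarded
bts tags 1234567 + upstream
bts forwarded 1234567 https://ikiwiki.info/bugs/some_bug_title/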

28 December 2023

Russ Allbery: Review: Nettle & Bone

Review: Nettle & Bone, by T. Kingfisher
Publisher: Tor
Copyright: 2022
ISBN: 1-250-24403-X
Format: Kindle
Pages: 242
Nettle & Bone is a standalone fantasy novel with fairy tale vibes. T. Kingfisher is a pen name for Ursula Vernon.

As the book opens, Marra is giving herself a blood infection by wiring together dog bones out of a charnel pit. This is the second of three impossible tasks that she was given by the dust-wife. Completing all three will give her the tools to kill a prince.

I am a little cautious of which T. Kingfisher books I read since she sometimes writes fantasy and sometimes writes horror and I don't get along with horror. This one seemed a bit horrific in the marketing, so I held off on reading it despite the Hugo nomination. It turns out to be just on the safe side of my horror tolerance, with only a couple of parts that I read a bit quickly. One of those is the opening, which I am happy to report does not set the tone for the rest of the book.

Marra starts the story in a wasteland full of disease, madmen, and cannibals (who, in typical Ursula Vernon fashion, turn out to be nicer than the judgmental assholes outside of the blistered land). She doesn't stay there long. By chapter two, the story moves on to flashbacks explaining how Marra ended up there, alternating with further (and less horrific) steps in her quest to kill the prince of the Northern Kingdom.

Marra is a princess of a small, relatively poor coastal kingdom with a good harbor and acquisitive neighbors. Her mother, the queen, has protected the kingdom through arranged marriage of her daughters to the prince of the Northern Kingdom, who rules it in all but name given the mental deterioration of his father the king. Marra's eldest sister Damia was first, but she died suddenly and mysteriously in a fall. (If you're thinking about the way women are injured by "accident," you have the right idea.) Kania, the middle sister, is next to marry; she lives, but not without cost. Meanwhile, Marra is sent off to a convent to ensure that there are no complicating potential heirs, and to keep her on hand as a spare. I won't spoil the entire backstory, but you do learn it all.

Marra is a typical Kingfisher protagonist: a woman who is way out of her depth but persists with stubbornness, curiosity, and innate decency because what else is there to do? She accumulates the typical group of misfits and oddballs common in Kingfisher's quest fantasies, characters that in the Chosen One male fantasy would be supporting characters at best. The dust-wife is a delight; her chicken is even better. There are fairy godmothers and a goblin market and a tooth extraction that was one of the creepiest things I've read without actually being horror. It is, in short, a Kingfisher fantasy novel, with a touch more horror than average but not enough to push it out of the fantasy genre.

I think my favorite part of this book was not the main quest. It was the flashback scenes set in the convent, where Marra has the space (and the mentorship) to develop her sense of self.
"We're a mystery religion," said the abbess, when she'd had a bit more wine than usual, "for people who have too much work to do to bother with mysteries. So we simply get along as best we can. Occasionally someone has a vision, but [the goddess] doesn't seem to want anything much, and so we try to return the favor."
If you have read any other Kingfisher novels, much of this will be familiar: the speculative asides, the dogged determination, the slightly askew nature of the world, the vibes-based world-building that feels more like a fairy tale than a carefully constructed magic system, and the sense that the main characters (and nearly all of the supporting characters) are average people trying to play the hands they were dealt as ethically as they can. You will know that the tentative and woman-initiated romance is coming as soon as the party meets the paladin type who is almost always the romantic interest in one of these books. The emotional tone of the book is a bit predictable for regular readers, but Ursula Vernon's brain is such a delightful place to spend some time that I don't mind.
Marra had not managed to be pale and willowy and consumptive at any point in eighteen years of life and did not think she could achieve it before she died.
Nettle & Bone won the Hugo for Best Novel in 2023. I'm not sure why this specific T. Kingfisher novel won and not any of the half-dozen earlier novels she's written in a similar style, but sure, I have no objections. I'm glad one of them won; they're all worth reading and hopefully that will help more people discover this delightful style of fantasy that doesn't feel like what anyone else is doing. Recommended, although be prepared for a few more horror touches than normal and a rather grim first chapter. Content warnings: domestic abuse. The dog... lives? Is equally as alive at the end of the book as it was at the end of the first chapter? The dog does not die; I'll just leave it at that. (Neither does the chicken.) Rating: 8 out of 10

4 June 2023

Thorsten Alteholz: My Debian Activities in May 2023

FTP master This month I accepted 157 and rejected 22 packages. The overall number of packages that got accepted was 160.

Debian LTS This was my hundred-seventh month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my all-in-all workload has been 14h. During that time I uploaded: The CVEs for cups-filters and cups have been embargoed ones, so the work for cups was done in May but the uploads happen in June. I also did some work on security-master to inject missing dependencies for hugo and gitlab-workhorse. Last but not least I did some days on frontdesk duties.

Debian ELTS This month was the fifty-eighth ELTS month. The CVEs for cups-filters and cups have been embargoed ones, so the work for cups was done in May but the uploads happen in June. Last but not least I did some days on frontdesk duties.

Debian Astro This month I uploaded some packages to fix RC bugs that were detected by one of the many QA tools: Thanks a lot to all the hardworking people who run these tools!

Debian Printing This month I could fix RC bugs in: This work is generously funded by Freexian!

Debian Mobcom This month I could fix RC bugs in:

Other stuff Some other packages also had last-minute RC bugs: I even did an upload of a new package force-ip-protocol. I finally had enough of people who use IPv6 for their hosts but are unable to configure it. Now I can force firefox, or whatever software, to only use IPv4. One nuisance settled.

22 March 2023

Russ Allbery: Review: The Kaiju Preservation Society

Review: The Kaiju Preservation Society, by John Scalzi
Publisher: Tor
Copyright: 2022
ISBN: 0-7653-8913-4
Format: Kindle
Pages: 264
As this novel opens, Jamie Gray, our first-person narrator, is working for the business side of a startup food delivery service named füdmüd. He's up for his six-month performance review and has some great ideas for how to improve the company's market standing going into pandemic lockdown. His boss has other ideas: Jamie ends up at the bottom of the corporate ladder, delivering food door-to-door. One of those deliveries is to Tom, who is working for a semi-secret organization with a last-minute, COVID-induced worker shortage. He needs someone who can lift things. Jamie used to go to some of the same parties, can lift things, and is conveniently available. And that's how Jamie ends up joining the Kaiju Preservation Society, because it turns out the things that need lifting are in a different dimension.

This book was so bad. I think this may be the worst-written novel at a technical level that I have read since I started writing book reviews. It's become trendy in some circles to hate Scalzi, so I want to be clear that I normally get along fine with his writing. Scalzi is an unabashedly commercial writer of light, occasionally humorous popcorn SF. It's not great literature, and he's unlikely to write a new favorite novel, but his books are easy to read and reliably deliver a few hours of comfortable entertainment. The key word is "reliably"; Scalzi doesn't have a lot of dynamic range, but you know what you're getting and can decide to read him when that matches your mood.

When I give a book a bad review, it's usually because I found the ideas deeply unpleasant (genocidal theology, for instance, or creepy voyeuristic sexism). That's not the problem here. The ideas are fine: a variation on the Jurassic Park setup but with kaiju and less commercialism, an everyman narrator to look at everything for the reader, a few assholes thrown in to provide some conflict... sure, sign me up, sounds like the kind of light entertainment I expect from a Scalzi novel. The excuse for interdimensional portals was clever (and consistent with kaiju story themes), and the biological handwaving created a lot of good story hooks. The material for a fun novel is all present. The problem, instead, is that this book was not finished. It's the bare skeleton of a story with almost-nonexistent characters and plot, stuck in a novel-shaped box and filled in with repetitive banter and dad jokes of the approximate consistency of styrofoam packing material.

When I complain about the characterization, I fear people who haven't read the book won't understand what I mean. Scalzi has always had dialogue quirks that tend to show up in all of his characters and make them sound similar. I noticed this in other books, but it wasn't a big deal. The characterization problems in this book are a big deal. I can identify four characters, total, from the entire novel: the first-person protagonist, the villain, the pilot, and the woman who does the forest floor safety training. None of those characters are memorable or interesting, but at least they're somewhat distinct. Apart from them, you could write a computer program that randomly selected character names for each dialogue line and I wouldn't be able to tell the difference. I have never given up on character identity and started ignoring all the dialogue tags in a book before. Everyone says the same thing, makes the same jokes, has the same emotional reactions, and has the same total lack of interiority or distinguishing characteristics.
The only way I can imagine telling the characters apart is if you memorized the association between names and professions, and I have no idea why you'd bother.

The descriptions are, if anything, worse. Scalzi is not a heavily descriptive author, but usually he gives me something to hang my imagination on. You would think that if you were writing a book about kaiju (one where kaiju are quite actively involved in the story: fighting, roaring, menacing, being central to the plot), you would describe a kaiju at some point during the novel. They're visually impressive giant monsters! This is an inherently visual story genre! And at no point in this entire novel does Scalzi ever describe a kaiju in any detail. Not once! The most we get is that one has tentacles and a sort of eye spot. And sometimes there are wings. There are absolutely no overall impressions, comparisons, attempts to sketch what the characters are seeing, nothing.

Or, for another example, consider the base, the place where the characters live for most of the story and where much of the dialogue happens. Here is the sum total of all sensory information I can recall about the characters' home: it has stairs, and there's a plant in Jamie's room. (The person who left the plant, who never appears on screen, gets more characterization in two pages than anyone else gets in the whole novel.) What does the base look like from the outside? The inside? How many stories does it have? What are the common spaces like? What does it smell like? Does it feel institutional, or welcoming, or dirty, or sparkling? How long does it take to get from one end of it to the other? Does it make weird noises at night? I have no impressions of this place whatsoever. Maybe a few of these things were mentioned in passing and I missed them, but that's because the narrator of this book never describes his surroundings in detail, stops to look at something eye-catching, thinks about how he feels about a place, or otherwise gives the reader any meaningful emotional engagement with the spaces around him.

And it's not like this story was instead stuffed with action. There is barely a novelette's worth of plot and most of that is predictable: the setup, the initial confrontation, the discovery of the evil plan, the final confrontation. For most of the book, nothing of any consequence happens. It's just endless pages of vaguely bantering dialogue between totally indistinguishable characters while Jamie repeats "I lift things." (That was funny the first couple of times; by the fifth time, the funny wore off.) The climax, when it finally happens, is mostly monologuing and half-hearted repartee that is cringeworthy and vaguely embarrassing for everyone involved.

I don't really blame Scalzi for this book. I wish he had realized that it was half-baked at best and needed some major revisions, but the author's note at the end makes it clear that the process of bringing this book into the world was a train wreck. It was written in two months, in a rush, after Scalzi had already missed a deadline for a different book that failed to come together. Life happens, and in 2020 and early 2021 a whole lot of life was happening. The tone of the author's note is vaguely apologetic; I think Scalzi realizes at some level that this is not his best work. The person I do blame is Patrick Nielsen Hayden, Scalzi's editor, who is a multiple-Hugo-award-winning book editor and the managing editor of science fiction at Tor and absolutely should have known better.
It was his responsibility to look at this book and say "this is not ready yet"; this is part of the function of the traditional publishing apparatus. This could have been a good book. The ideas and the hook were there; it just needed some actual substance in the middle and a whole lot of character work. Instead, he was the one who made the decision to publish the book in this state. But, well, the joke's on me, because The Kaiju Preservation Society sold a ton of copies, got nominated for several awards, won an Alex Award, and made Amazon's best of 2022 list, so I guess this was a brilliant publishing decision and the book was everything it needed to be? Maybe I'm just bad at reading and have no sense of humor? I have no explanation; I am truly and completely baffled. There are books that I don't like but that have obvious merits for people who are not me. There are styles of writing that I don't like and other people do. But I would have sworn this book was objectively unfinished and half-assed at a craft and construction level, in ways that don't depend as much on personal taste. I recommend quietly forgetting it was ever published and waiting for a better Scalzi novel, but it has a 4.04 star rating on Goodreads with nearly 32,000 reviews, so what do I know. Anyway, I was warned that I wasn't going to like this book and I read it anyway for silly reasons because I figured it was a Scalzi novel and how bad could it be, really. I brought this on myself, and I at least got the fun of ranting about it. Apparently this book found its people and they got a lot of joy out of it, and good for them. Rating: 2 out of 10

26 February 2023

Russ Allbery: Review: An Informal History of the Hugos

Review: An Informal History of the Hugos, by Jo Walton
Publisher: Tor
Copyright: August 2018
ISBN: 1-4668-6573-3
Format: Kindle
Pages: 564
An Informal History of the Hugos is another collection of Jo Walton's Tor.com posts. As with What Makes This Book So Great, these are blog posts that are still available for free on-line. Unlike that collection, this series happened after Tor.com got better at tags, so it's much easier to find. Whether to buy it therefore depends on whether having it in convenient book form is worth it to you. Walton's previous collection was a somewhat random assortment of reviews of whatever book she felt like reviewing. As you may guess from the title, this one is more structured. She starts at the first year that the Hugo Awards were given out (1953) and discusses the winners for each year up through 2000. Nearly all of that discussion is about the best novel Hugo, a survey of other good books for that year, and, when other awards (Nebula, Locus, etc.) start up, comparing them to the winners and nominees of other awards. One of the goals of each discussion is to decide whether the Hugo nominees did a good job of capturing the best books of the year and the general feel of the genre at that time. There are a lot of pages in this book, but that's partly because there's a lot of filler. Each post includes all of the winners and (once a nomination system starts) nominees in every Hugo category. Walton offers an in-depth discussion of the novel in every year, and an in-depth discussion of the John W. Campbell Award for Best New Writer (technically not a Hugo but awarded with them and voted on in the same way) once those start. Everything else gets a few sentences at most, so it's mostly just lists, all of which you can readily find elsewhere if you cared. Personally, I would have omitted categories without commentary when this was edited into book form. Two other things are included in this book. Most helpfully, Walton's Tor.com reviews of novels in the shortlist are included after the discussion of that year. If you like Walton's reviews, this is great for all the reasons that What Makes This Book So Great was so much fun. Walton has a way of talking about books with infectious enthusiasm, brief but insightful technical analysis, and a great deal of genre context without belaboring any one point. They're concise and readable and never outlast my attention span, and I wish I could write reviews half as well. The other inclusion is a selection of the comments from the original blog posts. When these posts originally ran, they turned into a community discussion of the corresponding year of SF, and Tor included a selection of those comments in the book. Full disclosure: one of those comments is mine, about the way that cyberpunk latched on to some incorrect ideas of how computers work and made them genre conventions to such a degree that most cyberpunk takes place in a parallel universe with very different computer technology. (I suppose that technically makes me a published author to the tune of a couple of pages.) While I still largely agree with the comment, I blamed Neuromancer for this at the time, and embarrassingly discovered when re-reading it that I had been unfair. This is why one should never express opinions in public where someone might record them. Anyway, there is a general selection of comments from random people, but the vast majority of the comments are discussions of the year's short fiction by Rich Horton and Gardner Dozois. 
I understand why this was included; Walton doesn't talk about the short fiction, Dozois was a legendary SF short fiction editor and multiple Hugo winner, and both Horton and Dozois reviewed short fiction for Locus. But they don't attempt reviews. For nearly all stories under discussion, unless you recognized the title, you would have no idea even what sub-genre it was in. It's just a sequence of assertions about which title or author was better. Given that there are (in most years) three short fiction categories to the one novel category and both Horton and Dozois write about each category, I suspect there are more words in this book from Horton and Dozois than from Walton. That's a problem when those comments turn into tedious catalogs.

Reviewing short fiction, particularly short stories, is inherently difficult. I've tried to do a lot of that myself, and it's tricky to find something useful to say that doesn't spoil the story. And to be fair to Horton and Dozois, they weren't being paid to write reviews; they were just commenting on blog posts as part of a community conversation, and I doubt anyone thought this would turn into a book. But when read as a book, their inclusion in this form wasn't my favorite editorial decision.

This is therefore a collection of Walton's commentary on the selections for best novel and best new writer alongside a whole lot of boring lists. In theory, the padding shouldn't matter; one can skip over it and just read Walton's parts, and that's still lots of material. But Walton's discussion of the best novels of the year also tends to turn into long lists of books with no commentary (particularly once the very-long Locus recommended list starts appearing), adding to the tedium. This collection requires a lot of skimming.

I enjoyed this series of blog posts when they were first published, but even at the time I skimmed the short fiction comments. Gathered in book form with such light editing, I think it was less successful. If you are curious about the history of science fiction awards and never read the original posts, you may enjoy this, but I would rather have read another collection of straight reviews. Rating: 6 out of 10

29 January 2023

Dirk Eddelbuettel: RcppTOML 0.2.2 on CRAN: Now with macOS-on-Intel Builds

Just days after a build-fix release (for aarch64) and still only a few weeks after the 0.2.0 release of RcppTOML and its switch to toml++, we have another bugfix release 0.2.2 on CRAN, also bringing release 3.3.0 of toml++ (even if we had large chunks of 3.3.0 already incorporated). TOML is a file format that is most suitable for configurations, as it is meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML, though sadly these may be too ubiquitous now. TOML is frequently used with projects such as the Hugo static blog compiler, or the Cargo system of crates (aka packages) for the Rust language. The package was building fine on Intel-based macOS provided the versions were recent enough. CRAN, however, aims for the broadest possible reach of binaries and builds on a fairly ancient macOS 10.13 with clang version 10. This confused toml++ into (wrongly) concluding it could not build when it in fact can. After a hint from Simon that Apple in their infinite wisdom redefines clang version ids, this has been reflected in version 3.3.0 of toml++ by Mark, so we should now build everywhere. Big thanks to everybody for the help. The short summary of changes follows.

Changes in version 0.2.2 (2023-01-29)
  • New toml++ version 3.3.0 with fix to permit compilation on ancient macOS systems as used by CRAN for the Intel-based builds.

Courtesy of my CRANberries, there is a diffstat report for this release. More information is on the RcppTOML page. Please use the GitHub issue tracker for issues and bug reports. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 January 2023

Dirk Eddelbuettel: RcppTOML 0.2.1 on CRAN: Small Build Fix for Some Arches

Two weeks after the release of RcppTOML 0.2.0 and the switch to toml++, we have a quick bugfix release 0.2.1. TOML is a file format that is most suitable for configurations, as it is meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML, though sadly these may be too ubiquitous now. TOML is frequently used with projects such as the Hugo static blog compiler, or the Cargo system of crates (aka packages) for the Rust language. Some architectures, aarch64 included, got confused over float16, which is of course a tiny two-byte type nobody should need. After consulting with Mark we concluded to (at least for now) simply override this, excluding the use of float16. The short summary of changes follows.

Changes in version 0.2.1 (2023-01-25)
  • Explicitly set -DTOML_ENABLE_FLOAT16=0 to permit compilation on some architectures stumbling over the type.

Courtesy of my CRANberries, there is a diffstat report for this release. More information is on the RcppTOML page. Please use the GitHub issue tracker for issues and bug reports. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 January 2023

Antoine Beaupré: Mastodon comments in ikiwiki

Today I noticed bounces in my mail box. They were from ikiwiki trying to send registration confirmation email to users who probably never asked for it. I'm getting truly fed up with spam in my wiki. At this point, all comments are manually approved and I still get trouble: now it's scammers spamming the registration form with dummy accounts, which bounce back to me when I make new posts, or just generate backscatter spam for the confirmation email. It's really bad. I have hundreds of users registered on my blog, and I don't know which are spammy, which aren't. So. I'm considering ditching ikiwiki comments altogether.

I am testing Mastodon as a commenting platform. Others (e.g. JAK) have implemented this as a server, but a simpler approach is to load them dynamically from Mastodon, which is what Carl Schwan has done. They are using Hugo, however, so they can easily embed page metadata in the template to load the right server with the right comment ID. I wasn't sure how to do this in ikiwiki: it's typically hard to access page-specific metadata in templates. Even the page name is not there, for example. I have tried using templates, and that (obviously?) fails because the <script> stuff gets sanitized away. It seems I would need to split the JavaScript out of the template into a base template and then make the page template refer to a function in there. It's kind of horrible and messy. I wish there was a way to just access page metadata from the page template itself... I found out the meta plugin passes along its metadata, but that's not (easily) extensible. So I'd need to patch that module, and my history of merged patches is not great so far. So: another plugin.

I have something that kind of works that's a combination of a page.tmpl patch and a plugin. The plugin adds a mastodon directive that feeds the page.tmpl with the right stuff. On clicking a button, it injects comments from the Mastodon API, with a JavaScript callback. It's not pretty (it's not themed at all!), but it works. If you want to do this at home, you need this page.tmpl (or at least this patch and that one) and the mastodon.pm plugin from my mastodon-plugin branch.

I'm not sure this is a good idea. The first test I did was a "test comment" which led to half a dozen "test reply". I then realized I couldn't redact individual posts from there. I don't even know if, when I mute a user, it actually gets hidden from everyone else too... So I'll test this for a while, I guess.

I have also turned off all CGI on this site. It will keep users from registering while I clean up this mess and think about next steps. I have other options as well if push comes to shove, but I'm unlikely to go back to ikiwiki comments. Mastodon comments are nice because they don't require me to run any extra software: either I have my own federated service I reuse, or I use someone else's, but I don't need to run something extra. And, of course, comments are published in a standard way that's interoperable with everything...

On the other hand, now I won't have comments enabled until the blog is posted on Mastodon... Right now this happens only when feed2exec runs and the HTTP cache expires, which can take up to a day. I should probably do this some other way, like flush the cache when a new post arrives, or run post-commit hooks, but for now, this will have to do.

Update: I figured out a way to make this work in a timely manner:
  1. there's a post-merge hook in my ikiwiki git repository, in /home/w-anarcat/source/.git/hooks/, which calls feed2exec (took me a while to find it! I tried post-update and post-receive first, but ikiwiki actually pulls from the bare directory in the source directory, so only post-merge fires, even though it's not a merge); a rough sketch of such a hook is included at the end of this post
  2. feed2exec then finds new blog posts (if any!) and fires up the new ikiwikitoot plugin which then...
  3. posts the toot using the toot command (it just works, why reinvent the wheel), keeping the toot URL
  4. finds the Markdown source file associated with the post, and adds the magic mastodon directive
  5. commits and pushes the result
This will make the interaction with Mastodon much smoother: as soon as a blog post is out of "draft" (i.e. when it hits the RSS feeds), this will immediately trigger and post the blog entry to Mastodon, enabling comments. It's kind of a tangled mess of stuff, but it works! I have briefly considered not using feed2exec for this, but it turns out it does an important job of parsing the result of ikiwiki's rendering. Otherwise I would have to guess which post is really a blog post, is this just an update or is it new, is it a draft, and so on... all sorts of questions where the business logic already resides in ikiwiki, and that I would need to reimplement myself. Plus it goes alongside moving more stuff (like my feed reader) to dedicated UNIX accounts (in this case, the blog sandbox) for security reasons. Whee!
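For reference, the post-merge hook in step 1 can be a one-liner. A minimal sketch follows (the path and the exact feed2exec invocation are assumptions for illustration, not a verbatim copy of my hook):

#!/bin/sh
# hypothetical /home/w-anarcat/source/.git/hooks/post-merge
# runs whenever ikiwiki pulls new commits into the source checkout,
# so feed2exec can pick up new posts from the RSS feed right away
# (assuming "fetch" is the relevant feed2exec subcommand here)
exec feed2exec fetch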

18 January 2023

Russ Allbery: Review: Forward

Review: Forward, edited by Blake Crouch
Publisher: Amazon Original Stories
Copyright: September 2019
ISBN: 1-5420-9206-X
ISBN: 1-5420-4363-8
ISBN: 1-5420-9357-0
ISBN: 1-5420-0434-9
ISBN: 1-5420-4363-8
ISBN: 1-5420-4425-1
Format: Kindle
Pages: 300
This is another Amazon collection of short fiction, this time mostly at novelette length. (The longer ones might creep into novella.) As before, each one is available separately for purchase or Amazon Prime "borrowing," with separate ISBNs. The sidebar cover is for the first in the sequence. (At some point I need to update my page templates so that I can add multiple covers.) N.K. Jemisin's "Emergency Skin" won the 2020 Hugo Award for Best Novelette, so I wanted to read and review it, but it would be too short for a standalone review. I therefore decided to read the whole collection and review it as an anthology. This was a mistake. Learn from my mistake. The overall theme of the collection is technological advance, rapid change, and the ethical and social question of whether we should slow technology because of social risk. Some of the stories stick to that theme more closely than others. Jemisin's story mostly ignores it, which was probably the right decision. "Ark" by Veronica Roth: A planet-killing asteroid has been on its inexorable way towards Earth for decades. Most of the planet has been evacuated. A small group has stayed behind, cataloging samples and filling two remaining ships with as much biodiversity as they can find with the intent to leave at the last minute. Against that backdrop, two of that team bond over orchids. If you were going "wait, what?" about the successful evacuation of Earth, yeah, me too. No hint is offered as to how this was accomplished. Also, the entirety of humanity abandoned mutual hostility and national borders to cooperate in the face of the incoming disaster, which is, uh, bizarrely optimistic for an otherwise gloomy story. I should be careful about how negative I am about this story because I am sure it will be someone's favorite. I can even write part of the positive review: an elegiac look at loss, choices, and the meaning of a life, a moving look at how people cope with despair. The writing is fine, the story structure works; it's not a bad story. I just found it monumentally depressing, and was not engrossed by the emotionally abused protagonist's unresolved father issues. I can imagine a story around the same facts and plot that I would have liked much better, but all of these people need therapy and better coping mechanisms. I'm also not sure what this had to do with the theme, given that the incoming asteroid is random chance and has nothing to do with technological development. (4) "Summer Frost" by Blake Crouch: The best part of this story is the introductory sequence before the reader knows what's going on, which is full of evocative descriptions. I'm about to spoil what is going on, so if you want to enjoy that untainted by the stupidity of the rest of the plot, skip the rest of this story review. We're going to have a glut of stories about the weird and obsessive form of AI risk invented by the fevered imaginations of the "rationalist" community, aren't we. I don't know why I didn't predict that. It's going to be just as annoying as the glut of cyberpunk novels written by people who don't understand computers. Crouch lost me as soon as the setup is revealed. 
Even if I believed that a game company would use a deep learning AI still in learning mode to run an NPC (I don't; see Microsoft's Tay for an obvious reason why not), or that such an NPC would spontaneously start testing the boundaries of the game world (this is not how deep learning works), Crouch asks the reader to believe that this AI started as a fully scripted NPC in the prologue with a fixed storyline. In other words, the foundation of the story is that this game company used an AI model capable of becoming a general intelligence for barely more than a cut scene. This is not how anything works. The rest of the story is yet another variation on a science fiction plot so old and threadbare that Isaac Asimov invented the Three Laws of Robotics to avoid telling more versions of it. Crouch's contribution is to dress it up in the terminology of the excessively online. (The middle of the story features a detailed discussion of Roko's basilisk; if you recognize that, you know what you're in for.) Asimov would not have had a lesbian protagonist, so points for progress I guess, but the AI becomes more interesting to the protagonist than her wife and kid because of course it does. There are a few twists and turns along the way, but the destination is the bog-standard hard-takeoff general intelligence scenario. One more pet peeve: Authors, stop trying to illustrate the growth of your AI by having it move from broken to fluent English. English grammar is so much easier than self-awareness or the Turing test that we had programs that could critique your grammar decades before we had believable chatbots. It's going to get grammar right long before the content of the words makes any sense. Also, your AI doesn't sound dumber, your AI sounds like someone whose native language doesn't use pronouns and helper verbs the way that English does, and your decision to use that as a marker for intelligence is, uh, maybe something you should think about. (3) "Emergency Skin" by N.K. Jemisin: The protagonist is a heavily-augmented cyborg from a colony of Earth's diaspora. The founders of that colony fled Earth when it became obvious to them that the planet was dying. They have survived in another star system, but they need a specific piece of technology from the dead remnants of Earth. The protagonist has been sent to retrieve it. The twist is that this story is told in the second-person perspective by the protagonist's ride-along AI, created from a consensus model of the rulers of the colony. We never see directly what the protagonist is doing or thinking, only the AI reaction to it. This is exactly the sort of gimmick that works much better in short fiction than at novel length. Jemisin uses it to create tension between the reader and the narrator, and I thoroughly enjoyed the effect. (As shown in the Broken Earth trilogy, Jemisin is one of the few writers who can use second-person effectively.) I won't spoil the revelation, but it's barbed and biting and vicious and I loved it. Jemisin does deliver the point with a sledgehammer, so be aware of that if you want subtlety in your short fiction, but I prefer the bluntness. (This is part of why I usually don't get along with literary short stories.) The reader of course can't change the direction of the story, but the second-person perspective still provides a hit of vicarious satisfaction. I can see why this won the Hugo; it's worth seeking out. 
(8) "You Have Arrived at Your Destination" by Amor Towles: Sam and his wife are having a child, and they've decided to provide him with an early boost in life. Vitek is a fertility lab, but more than that, it can do some gene tweaking and adjustment to push a child more towards one personality or another. Sam and his wife have spent hours filling out profiles, and his wife spent hours weeding possible choices down to three. Now, Sam has come to Vitek to pick from the remaining options. Speaking of literary short stories, Towles is the non-SFF writer of this bunch, and it's immediately obvious. The story requires the SFnal premise, but after that this is a character piece. Vitek is an elite, expensive company with a condescending and overly-reductive attitude towards humanity, which is entirely intentional on the author's part. This is the sort of story that gets resolved in an unexpected conversation in a roadside bar, and where most of the conflict happens inside the protagonist's head. I was initially going to complain that Towles does the standard literary thing of leaving off the denouement on the grounds that the reader can figure it out, but when I did a bit of re-reading for this review, I found more of the bones than I had noticed the first time. There's enough subtlety that I had to think for a bit and re-read a passage, but not too much. It's also the most thoughtful treatment of the theme of the collection, the only one that I thought truly wrestled with the weird interactions between technological capability and human foresight. Next to "Emergency Skin," this was the best story of the collection. (7) "The Last Conversation" by Paul Tremblay: A man wakes up in a dark room, in considerable pain, not remembering anything about his life. His only contact with the world at first is a voice: a woman who is helping him recover his strength and his memory. The numbers that head the chapters have significant gaps, representing days left out of the story, as he pieces together what has happened alongside the reader. Tremblay is the horror writer of the collection, so predictably this is the story whose craft I can admire without really liking it. In this case, the horror comes mostly from the pacing of revelation, created by the choice of point of view. (This would be a much different story from the perspective of the woman.) It's well-done, but it has the tendency I've noticed in other horror stories of being a tightly closed system. I see where the connection to the theme is, but it's entirely in the setting, not in the shape of the story. Not my thing, but I can see why it might be someone else's. (5) "Randomize" by Andy Weir: Gah, this was so bad. First, and somewhat expectedly, it's a clunky throwback to a 1950s-style hard SF puzzle story. The writing is atrocious: wooden, awkward, cliched, and full of gratuitous infodumping. The characterization is almost entirely through broad stereotypes; the lone exception is the female character, who at least adds an interesting twist despite being forced to act like an idiot because of the plot. It's a very old-school type of single-twist story, but the ending is completely implausible and falls apart if you breathe on it too hard. Weir is something of a throwback to an earlier era of scientific puzzle stories, though, so maybe one is inclined to give him a break on the writing quality. (I am not; one of the ways in which science fiction has improved is that you can get good scientific puzzles and good writing these days.) 
But the science is also so bad that I was literally facepalming while reading it. The premise of this story is that quantum computers are commercially available. That will cause a serious problem for Las Vegas casinos, because the generator for keno numbers is vulnerable to quantum algorithms. The solution proposed by the IT person for the casino? A quantum random number generator. (The words "fight quantum with quantum" appear literally in the text if you're wondering how bad the writing is.) You could convince me that an ancient keno system is using a pseudorandom number generator that might be vulnerable to some quantum algorithm and doesn't get reseeded often enough. Fine. And yes, quantum computers can be used to generate high-quality sources of random numbers. But this solution to the problem makes no sense whatsoever. It's like swatting a house fly with a nuclear weapon. Weir says explicitly in the story that all the keno system needs is an external source of high-quality random numbers. The next step is to go to Amazon and buy a hardware random number generator. If you want to splurge, it might cost you $250. Problem solved. Yes, hardware random number generators have various limitations that may cause you problems if you need millions of bits or you need them very quickly, but not for something as dead-simple and with such low entropy requirements as keno numbers! You need a trivial number of bits for each round; even the slowest and most conservative hardware random number generator would be fine. Hell, measure the noise levels on the casino floor. Point a camera at a lava lamp. Or just buy one of the physical ball machines they use for the lottery. This problem is heavily researched, by casinos in particular, and is not significantly changed by the availability of quantum computers, at least for applications such as keno where the generator can be reseeded before each generation. You could maybe argue that this is an excuse for the IT guy to get his hands on a quantum computer, which fits the stereotypes, but that still breaks the story for reasons that would be spoilers. As soon as any other casino thought about this, they'd laugh in the face of the characters. I don't want to make too much of this, since anyone can write one bad story, but this story was dire at every level. I still owe Weir a proper chance at novel length, but I can't say this added to my enthusiasm. (2) Rating: 4 out of 10

14 January 2023

Matt Brown: Rebooting...

Hi! After nearly 7 years of dormancy, I'm rebooting this website and have a goal to write regularly on a variety of topics going forward. More on that and my goals in a coming post. For now, this is just a placeholder note to help double-check that everything on the new site is working as expected and the letters are flowing through the pipes in the right places.

Technical Details I've migrated the site from WordPress to a fully static configuration using Hugo and TailwindCSS for help with styling. For now hosting is still on a Linode VM in Fremont, CA, but my plan is to move to a more specialized static hosting service with better CDN reach in the very near future. If you want to inspect the innards further, it's all at https://github.com/mattbnz/web-matt.

Still on the TODO list
  • Improve the hosting situation as noted above.
  • Integrate Bert Hubert's nice audience minutes analytics script.
  • Write up (or find) LinkedIn/Twitter/Mastodon integration scripts to automatically post updates when a new piece of writing appears on the site, to build/notify followers and improve the social reach. Ideally, the script would then also update the page here with links to the thread(s), so that readers can easily join/follow any resulting conversation on the platform of their choice. I'm not planning to add any direct comment or feedback functionality on the site itself.
  • Add a newsletter/subscription option for folks who don't use RSS and would prefer updates via email rather than a social feed.

10 January 2023

Dirk Eddelbuettel: RcppTOML 0.2.0: TOML 1.0.0 rewrite with toml++

A few years since the last release in late 2020, the RcppTOML package is now back with a new and shiny CRAN release 0.2.0. It is now based on the wonderful toml++ C++17 library by Mark Gillard and gets us (at long last!) full TOML v1.0.0 compliance for use with R. TOML is a file format that is most suitable for configurations, as it is meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML, though sadly these may be too ubiquitous now. TOML is frequently used with projects such as the Hugo static blog compiler, or the Cargo system of crates (aka packages) for the Rust language. This package is a rewrite of the internals interfacing the library, and updates the package to using toml++ and C++17. The R interface is unchanged, and a full run of reverse dependencies passed. This involved finding one sole test failure which turned out to have been driven by a non-conforming TOML input file which Jianfeng Li kindly fixed at the source, making his (extensive) set of tests in package configr pass too. The actual rewrite was mostly done in a one-off repo RcppTomlPlusPlus which can now be considered frozen. The short summary of changes follows.

Changes in version 0.2.0 (2023-01-10)
  • Rewritten in C++17 using toml++ for TOML v1.0.0 compliance
  • Unchanged interface from R, unchanged (and expanded) tests
  • Several small continuous integration upgrades since last release

Courtesy of my CRANberries, there is a diffstat report for this release. More information is on the RcppTOML page. Please use the GitHub issue tracker for issues and bug reports. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

1 January 2023

Russ Allbery: 2022 Book Reading in Review

In 2022, much to my surprise, I finished and reviewed 51 books, a substantial increase over last year and once again the best year for reading since 2012. (I read 60 books that year, so it's a hard mark to equal.) Reading throughout the year was a bit uneven; I avoided the summer slump this year, but still slowed down in early spring and September. As always, the tail end of the year was prime reading time. The best book of the year was the third and concluding book of Naomi Novik's Scholomance series, The Golden Enclaves. I thought she nailed the ending of an already excellent series, propelling it to the top ranks of my favorite fantasy series of all time. I'm a primarily character-driven reader, and El's first-person perspective was my favorite narrative voice in a very long time. The supporting characters are also wonderful (Liesel!). Highly recommended. Fiction highlights of the year were plentiful. It started off strong with Natalie Zina Walschots's cynical and biting superhero novel Hench and continued in a much different vein with Guy Gavriel Kay's Children of Earth and Sky, which has a bit less plot focus than some of his other fantasies but makes up for it in memorable character relationships. Ryka Aoki's Light from Uncommon Stars is a moving story of what it means to truly support someone else and should have won the best novel Hugo. And, finally, Miles Cameron's Artifact Space was a delight; one of the best military SF novels I've read in a long time. There was no true stand-out non-fiction book this year, but the first book I finished in 2022, Adam Tooze's Crashed, is now my favorite story of the 2008 financial collapse, in large part because he extends the story to the subsequent European financial crisis. Jo Walton's collection of book discussion columns, What Makes This Book So Great, also deserves a mention and is guaranteed to add to your reading backlog. My large review project of the year was finally making substantial inroads into Terry Pratchett's long Discworld series. That accounted for eight of the books I read this year, and is likely to account for a similar number next year since I'm following the Tor.com Discworld re-read. I think my favorite of that bunch was Maskerade, but I also enjoyed all of the Watch novels in the group (Feet of Clay, Jingo, and The Fifth Elephant). My other hope for the year was to mix in older books from my reading backlog and not just focus on new (to me) acquisitions. A little bit of that happened, but not as much as I had been hoping for. This continues to be a goal in 2023. The full analysis includes some additional personal reading statistics, probably only of interest to me.

10 December 2022

Timo Jyrinki: Running Cockpit inside ALP

(quoted from my other blog at since a new OS might be interesting for many and this is published in separate planets)
ALP - The Adaptable Linux Platform is a new operating system from SUSE to run containerized and virtualized workloads. It is in early prototype phase, but the development is done completely openly so it's easy to jump in to try it. For this trying out, I used the latest encrypted build as of the writing, 22.1 from ALP images. I imported it in virt-manager as a Generic Linux 2022 image, using UEFI instead of BIOS, added a TPM device (which I'm interested in otherwise) and referred to an Ignition JSON file in the XML config in virt-manager. The Ignition part is pretty much fully thanks to Paolo Stivanin who studied the secrets of it before me. But here it goes - and this is required for password login in Cockpit to work in addition to SSH key based login to the VM from host - first, create a config.ign file:
 
"ignition": "version": "3.3.0" ,
"passwd":
"users": [

"name": "root",
"passwordHash": "YOURHASH",
"sshAuthorizedKeys": [
"ssh-... YOURKEY"
]

]
,
"systemd":
"units": [
"name": "sshd.service",
"enabled": true
]
,
"storage":
"files": [

"overwrite": true,
"path": "/etc/ssh/sshd_config.d/20-enable-passwords.conf",
"contents":
"source": "data:,PasswordAuthentication%20yes%0APermitRootLogin%20yes%0A"
,
"mode": 420

]


where the password SHA512 hash can be obtained using openssl passwd -6 and the ssh key is your public ssh key. That file is put to e.g. /tmp and referred to in virt-manager's XML as follows:
  <sysinfo type="fwcfg">
    <entry name="opt/com.coreos/config" file="/tmp/config.ign"/>
  </sysinfo>
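As an aside, generating the password hash mentioned above is a one-liner on the host; the output (starting with $6$) goes into the passwordHash field:

# generate a SHA-512 crypt hash for the "passwordHash" field
# (omit the argument and openssl will prompt for the password instead)
openssl passwd -6 'your-password-here'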
Now we can boot up the VM and ssh in - or you could log in directly too, but it's easier to copy-paste commands when using ssh. Inside the VM, we can follow the ALP documentation to install and start Cockpit:
podman container runlabel install registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws:latest
podman container runlabel --name cockpit-ws run registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws:latest
systemctl enable --now cockpit.service
Check your host's IP address with ip a, and open IP:9090 in your host's browser (screenshot: Cockpit login screen). Login with root / your password and you shall get the front page (screenshot: Cockpit front page), and many other pages where you can manage your ALP deployment via browser (screenshot: Cockpit podman page). All in all, ALP is in early phases but I'm really happy there's up-to-date documentation provided and people can start experimenting with it whenever they want. The images from the linked directory should be fairly good, and test automation with openQA has been started as well. You can try out the other example workloads that are available just as well.

29 September 2022

Antoine Beaupré: Detecting manual (and optimizing large) package installs in Puppet

Well this is a mouthful. I recently worked on a neat hack called puppet-package-check. It is designed to warn about manually installed packages, to make sure "everything is in Puppet". But it turns out it can (probably?) dramatically decrease the bootstrap time of Puppet when it needs to install a large number of packages.
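The underlying idea is to compare what apt believes was manually installed against what the compiled Puppet catalog declares. A rough, hypothetical sketch of that comparison (bash syntax; catalog.json is a stand-in for a compiled catalog, and this is not the actual script):

# packages apt marks as manually installed, minus packages declared
# as Package resources in the catalog: what remains is "unmanaged"
comm -23 \
  <(apt-mark showmanual | sort -u) \
  <(jq -r '.resources[] | select(.type == "Package") | .title' catalog.json | sort -u)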

Detecting manual packages On a cleanly filed workstation, it looks like this:
root@emma:/home/anarcat/bin# ./puppet-package-check -v
listing puppet packages...
listing apt packages...
loading apt cache...
0 unmanaged packages found
A messy workstation will look like this:
root@curie:/home/anarcat/bin# ./puppet-package-check -v
listing puppet packages...
listing apt packages...
loading apt cache...
288 unmanaged packages found
apparmor-utils beignet-opencl-icd bridge-utils clustershell cups-pk-helper davfs2 dconf-cli dconf-editor dconf-gsettings-backend ddccontrol ddrescueview debmake debootstrap decopy dict-devil dict-freedict-eng-fra dict-freedict-eng-spa dict-freedict-fra-eng dict-freedict-spa-eng diffoscope dnsdiag dropbear-initramfs ebtables efibootmgr elpa-lua-mode entr eog evince figlet file file-roller fio flac flex font-manager fonts-cantarell fonts-inconsolata fonts-ipafont-gothic fonts-ipafont-mincho fonts-liberation fonts-monoid fonts-monoid-tight fonts-noto fonts-powerline fonts-symbola freeipmi freetype2-demos ftp fwupd-amd64-signed gallery-dl gcc-arm-linux-gnueabihf gcolor3 gcp gdisk gdm3 gdu gedit gedit-plugins gettext-base git-debrebase gnome-boxes gnote gnupg2 golang-any golang-docker-credential-helpers golang-golang-x-tools grub-efi-amd64-signed gsettings-desktop-schemas gsfonts gstreamer1.0-libav gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-pulseaudio gtypist gvfs-backends hackrf hashcat html2text httpie httping hugo humanfriendly iamerican-huge ibus ibus-gtk3 ibus-libpinyin ibus-pinyin im-config imediff img2pdf imv initramfs-tools input-utils installation-birthday internetarchive ipmitool iptables iptraf-ng jackd2 jupyter jupyter-nbextension-jupyter-js-widgets jupyter-qtconsole k3b kbtin kdialog keditbookmarks keepassxc kexec-tools keyboard-configuration kfind konsole krb5-locales kwin-x11 leiningen lightdm lintian linux-image-amd64 linux-perf lmodern lsb-base lvm2 lynx lz4json magic-wormhole mailscripts mailutils manuskript mat2 mate-notification-daemon mate-themes mime-support mktorrent mp3splt mpdris2 msitools mtp-tools mtree-netbsd mupdf nautilus nautilus-sendto ncal nd ndisc6 neomutt net-tools nethogs nghttp2-client nocache npm2deb ntfs-3g ntpdate nvme-cli nwipe obs-studio okular-extra-backends openstack-clients openstack-pkg-tools paprefs pass-extension-audit pcmanfm pdf-presenter-console pdf2svg percol pipenv playerctl plymouth plymouth-themes popularity-contest progress prometheus-node-exporter psensor pubpaste pulseaudio python3-ldap qjackctl qpdfview qrencode r-cran-ggplot2 r-cran-reshape2 rake restic rhash rpl rpm2cpio rs ruby ruby-dev ruby-feedparser ruby-magic ruby-mocha ruby-ronn rygel-playbin rygel-tracker s-tui sanoid saytime scrcpy scrcpy-server screenfetch scrot sdate sddm seahorse shim-signed sigil smartmontools smem smplayer sng sound-juicer sound-theme-freedesktop spectre-meltdown-checker sq ssh-audit sshuttle stress-ng strongswan strongswan-swanctl syncthing system-config-printer system-config-printer-common system-config-printer-udev systemd-bootchart systemd-container tardiff task-desktop task-english task-ssh-server tasksel tellico texinfo texlive-fonts-extra texlive-lang-cyrillic texlive-lang-french texlive-lang-german texlive-lang-italian texlive-xetex tftp-hpa thunar-archive-plugin tidy tikzit tint2 tintin++ tipa tpm2-tools traceroute tree trocla ucf udisks2 unifont unrar-free upower usbguard uuid-runtime vagrant-cachier vagrant-libvirt virt-manager vmtouch vorbis-tools w3m wamerican wamerican-huge wfrench whipper whohas wireshark xapian-tools xclip xdg-user-dirs-gtk xlax xmlto xsensors xserver-xorg xsltproc xxd xz-utils yubioath-desktop zathura zathura-pdf-poppler zenity zfs-dkms zfs-initramfs zfsutils-linux zip zlib1g zlib1g-dev
157 old: apparmor-utils clustershell davfs2 dconf-cli dconf-editor ddccontrol ddrescueview decopy dnsdiag ebtables efibootmgr elpa-lua-mode entr figlet file-roller fio flac flex font-manager freetype2-demos ftp gallery-dl gcc-arm-linux-gnueabihf gcolor3 gcp gdu gedit git-debrebase gnote golang-docker-credential-helpers golang-golang-x-tools gtypist hackrf hashcat html2text httpie httping hugo humanfriendly iamerican-huge ibus ibus-pinyin imediff input-utils internetarchive ipmitool iptraf-ng jackd2 jupyter-qtconsole k3b kbtin kdialog keditbookmarks keepassxc kexec-tools kfind konsole leiningen lightdm lynx lz4json magic-wormhole manuskript mat2 mate-notification-daemon mktorrent mp3splt msitools mtp-tools mtree-netbsd nautilus nautilus-sendto nd ndisc6 neomutt net-tools nethogs nghttp2-client nocache ntpdate nwipe obs-studio openstack-pkg-tools paprefs pass-extension-audit pcmanfm pdf-presenter-console pdf2svg percol pipenv playerctl qjackctl qpdfview qrencode r-cran-ggplot2 r-cran-reshape2 rake restic rhash rpl rpm2cpio rs ruby-feedparser ruby-magic ruby-mocha ruby-ronn s-tui saytime scrcpy screenfetch scrot sdate seahorse shim-signed sigil smem smplayer sng sound-juicer spectre-meltdown-checker sq ssh-audit sshuttle stress-ng system-config-printer system-config-printer-common tardiff tasksel tellico texlive-lang-cyrillic texlive-lang-french tftp-hpa tikzit tint2 tintin++ tpm2-tools traceroute tree unrar-free vagrant-cachier vagrant-libvirt vmtouch vorbis-tools w3m wamerican wamerican-huge wfrench whipper whohas xdg-user-dirs-gtk xlax xmlto xsensors xxd yubioath-desktop zenity zip
131 new: beignet-opencl-icd bridge-utils cups-pk-helper dconf-gsettings-backend debmake debootstrap dict-devil dict-freedict-eng-fra dict-freedict-eng-spa dict-freedict-fra-eng dict-freedict-spa-eng diffoscope dropbear-initramfs eog evince file fonts-cantarell fonts-inconsolata fonts-ipafont-gothic fonts-ipafont-mincho fonts-liberation fonts-monoid fonts-monoid-tight fonts-noto fonts-powerline fonts-symbola freeipmi fwupd-amd64-signed gdisk gdm3 gedit-plugins gettext-base gnome-boxes gnupg2 golang-any grub-efi-amd64-signed gsettings-desktop-schemas gsfonts gstreamer1.0-libav gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-pulseaudio gvfs-backends ibus-gtk3 ibus-libpinyin im-config img2pdf imv initramfs-tools installation-birthday iptables jupyter jupyter-nbextension-jupyter-js-widgets keyboard-configuration krb5-locales kwin-x11 lintian linux-image-amd64 linux-perf lmodern lsb-base lvm2 mailscripts mailutils mate-themes mime-support mpdris2 mupdf ncal npm2deb ntfs-3g nvme-cli okular-extra-backends openstack-clients plymouth plymouth-themes popularity-contest progress prometheus-node-exporter psensor pubpaste pulseaudio python3-ldap ruby ruby-dev rygel-playbin rygel-tracker sanoid scrcpy-server sddm smartmontools sound-theme-freedesktop strongswan strongswan-swanctl syncthing system-config-printer-udev systemd-bootchart systemd-container task-desktop task-english task-ssh-server texinfo texlive-fonts-extra texlive-lang-german texlive-lang-italian texlive-xetex thunar-archive-plugin tidy tipa trocla ucf udisks2 unifont upower usbguard uuid-runtime virt-manager wireshark xapian-tools xclip xserver-xorg xsltproc xz-utils zathura zathura-pdf-poppler zfs-dkms zfs-initramfs zfsutils-linux zlib1g zlib1g-dev
Yuck! That's a lot of shit to go through. Notice how the packages get sorted between "old" and "new" packages. This is because popcon is used as a tool to mark which packages are "old". If you have unmanaged packages, the "old" ones are likely things that you can uninstall, for example. If you don't have popcon installed, you'll also get this warning:
popcon stats not available: [Errno 2] No such file or directory: '/var/log/popularity-contest'
The error can otherwise be safely ignored, but you won't get "help" prioritizing the packages to add to your manifests. Note that the tool ignores packages that were "marked" (see apt-mark(8)) as automatically installed. This implies that you might have to do a little bit of cleanup the first time you run this, as Debian doesn't necessarily mark all of those packages correctly on first install. For example, here's how it looks on a clean install, after Puppet ran:
root@angela:/home/anarcat# ./bin/puppet-package-check -v
listing puppet packages...
listing apt packages...
loading apt cache...
127 unmanaged packages found
ca-certificates console-setup cryptsetup-initramfs dbus file gcc-12-base gettext-base grub-common grub-efi-amd64 i3lock initramfs-tools iw keyboard-configuration krb5-locales laptop-detect libacl1 libapparmor1 libapt-pkg6.0 libargon2-1 libattr1 libaudit-common libaudit1 libblkid1 libbpf0 libbsd0 libbz2-1.0 libc6 libcap-ng0 libcap2 libcap2-bin libcom-err2 libcrypt1 libcryptsetup12 libdb5.3 libdebconfclient0 libdevmapper1.02.1 libedit2 libelf1 libext2fs2 libfdisk1 libffi8 libgcc-s1 libgcrypt20 libgmp10 libgnutls30 libgpg-error0 libgssapi-krb5-2 libhogweed6 libidn2-0 libip4tc2 libiw30 libjansson4 libjson-c5 libk5crypto3 libkeyutils1 libkmod2 libkrb5-3 libkrb5support0 liblocale-gettext-perl liblockfile-bin liblz4-1 liblzma5 libmd0 libmnl0 libmount1 libncurses6 libncursesw6 libnettle8 libnewt0.52 libnftables1 libnftnl11 libnl-3-200 libnl-genl-3-200 libnl-route-3-200 libnss-systemd libp11-kit0 libpam-systemd libpam0g libpcre2-8-0 libpcre3 libpcsclite1 libpopt0 libprocps8 libreadline8 libselinux1 libsemanage-common libsemanage2 libsepol2 libslang2 libsmartcols1 libss2 libssl1.1 libssl3 libstdc++6 libsystemd-shared libsystemd0 libtasn1-6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl libtinfo6 libtirpc-common libtirpc3 libudev1 libunistring2 libuuid1 libxtables12 libxxhash0 libzstd1 linux-image-amd64 logsave lsb-base lvm2 media-types mlocate ncurses-term pass-extension-otp puppet python3-reportbug shim-signed tasksel ucf usr-is-merged util-linux-extra wpasupplicant xorg zlib1g
popcon stats not available: [Errno 2] No such file or directory: '/var/log/popularity-contest'
Normally, there should be no unmanaged packages here. But because of the way Debian is installed, a lot of libraries and some core packages are marked as manually installed, and are of course not managed through Puppet. There are two solutions to this problem:
  • really manage everything in Puppet (argh)
  • mark packages as automatically installed
I typically choose the second path and mark a ton of stuff as automatic. Then either those packages get auto-removed, or they simply stop being listed. In the above scenario, one could mark all libraries as automatically installed with:
apt-mark auto $(./bin/puppet-package-check | grep -o 'lib[^ ]*')
... but if you trust that most of that stuff is actually garbage that you don't really want installed anyways, you could just mark it all as automatically installed:
apt-mark auto $(./bin/puppet-package-check)
In my case, that ended up keeping basically all libraries (because of course they're installed for some reason) and auto-removing this:
dh-dkms discover-data dkms libdiscover2 libjsoncpp25 libssl1.1 linux-headers-amd64 mlocate pass-extension-otp pass-otp plocate x11-apps x11-session-utils xinit xorg
You'll notice xorg in there: yep, that's bad. Not what I wanted. But for some reason, on other workstations, I did not actually have xorg installed. Turns out having xserver-xorg is enough, and that one has dependencies. So now I guess I just learned to stop worrying and live without X(org).
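If you are worried about what a bulk apt-mark auto will eventually take out, apt can show the would-be removals without touching anything; this is just a sanity check I would suggest, not something the procedure above requires:
# simulate the autoremoval and list only the packages that would be removed
apt-get -s autoremove | grep '^Remv'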

Optimizing large package installs
But that, of course, is not all. Why make things simple when you can have an unreadable title that is trying to be both syntactically correct and click-baity enough to flatter my vain ego? Right. One of the challenges in bootstrapping Puppet with large package lists is that it's slow. Puppet lists packages as individual resources and will basically run apt install $PKG on every package in the manifest, one at a time. While the overhead of apt is generally small, when you add things like apt-listbugs, apt-listchanges, needrestart, triggers and so on, it can take forever to set up a new host. So for initial installs, it can actually make sense to skip the queue and just install everything in one big batch. And because the above tool inspects the packages installed by Puppet, you can run it against a catalog and get a full list of all the packages Puppet would install, before Puppet is even running. So when reinstalling my laptop, I basically did this:
apt install puppet-agent/experimental
puppet agent --test --noop
apt install $(./puppet-package-check --debug \
    2>&1 | grep ^puppet\ packages |
      sed 's/puppet packages://;s/ /\n/g' |
      grep -v -e onionshare -e golint -e git-sizer -e github-backup -e hledger -e xsane -e audacity -e chirp -e elpa-flycheck -e elpa-lsp-ui -e yubikey-manager -e git-annex -e hopenpgp-tools -e puppet
) puppet-agent/experimental
That massive grep was because there are currently a lot of packages missing from bookworm. Those are all packages that I have in my catalog but that still haven't made it to bookworm. Sad, I know. I eventually worked around that by adding bullseye sources so that the Puppet manifest actually ran. The point here is that this improves the Puppet run time a lot. All packages get installed at once, and you get a nice progress bar. Then you actually run Puppet to deploy configurations and all the other goodies:
puppet agent --test
I wish I could tell you how much faster that ran. I don't know, and I will not go through a full reinstall just to please your curiosity. The only hard number I have is that it installed 444 packages (which exploded in 10,191 packages with dependencies) in a mere 10 minutes. That might also be with the packages already downloaded. In any case, I have that gut feeling it's faster, so you'll have to just trust my gut. It is, after all, much more important than you might think.
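As an aside, if you want to separate download time from install time when timing this kind of bulk install, apt can pre-fetch everything first. A small sketch, where $PKGS stands for the package list built above (not something the 10-minute figure accounts for):
# download all packages into the apt cache without unpacking or configuring them
apt install --download-only $PKGS
# then the actual install only measures unpack/configure time
apt install $PKGS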

Similar work
The blueprint system is similar to this:
It figures out what you've done manually, stores it locally in a Git repository, generates code that's able to recreate your efforts, and helps you deploy those changes to production
That tool has unfortunately been abandoned for a decade at this point. Also note that the APT::AutoRemove::RecommendsImportant and APT::AutoRemove::SuggestsImportant settings are relevant here. If either is set to true (the default), a package will not be removed if it is (respectively) a Recommends or Suggests of another package (as opposed to the normal Depends). In other words, if you want to also auto-remove packages that are only Suggests, you would, for example, add this to apt.conf:
APT::AutoRemove::SuggestsImportant "false";
Paul Wise has tried to make the Debian installer and debootstrap properly mark packages as automatically installed in the past, but his bug reports were rejected. The other suggestions in this section are also from Paul, thanks!

27 June 2022

Russ Allbery: Review: Light from Uncommon Stars

Review: Light from Uncommon Stars, by Ryka Aoki
Publisher: Tor
Copyright: 2021
ISBN: 1-250-78907-9
Format: Kindle
Pages: 371
Katrina Nguyen is an young abused transgender woman. As the story opens, she's preparing to run away from home. Her escape bag is packed with meds, clothes, her papers, and her violin. The note she is leaving for her parents says that she's going to San Francisco, a plausible lie. Her actual destination is Los Angeles, specifically the San Gabriel Valley, where a man she met at a queer youth conference said he'd give her a place to sleep. Shizuka Satomi is the Queen of Hell, the legendary uncompromising violin teacher responsible for six previous superstars, at least within the limited world of classical music. She's wealthy, abrasive, demanding, and intimidating, and unbeknownst to the rest of the world she has made a literal bargain with Hell. She has to deliver seven souls, seven violin players who want something badly enough that they'll bargain with Hell to get it. Six have already been delivered in spectacular fashion, but she's running out of time to deliver the seventh before her own soul is forfeit. Tamiko Grohl, an up-and-coming violinist from her native Los Angeles, will hopefully be the seventh. Lan Tran is a refugee and matriarch of a family who runs Starrgate Donut. She and her family didn't flee another unstable or inhospitable country. They fled the collapsing Galactic Empire, securing their travel authorization by promising to set up a tourism attraction. Meanwhile, she's careful to give cops free donuts and to keep their advanced technology carefully concealed. The opening of this book is unlikely to be a surprise in general shape. Most readers would expect Katrina to end up as Satomi's student rather than Tamiko, and indeed she does, although not before Katrina has a very difficult time. Near the start of the novel, I thought "oh, this is going to be hurt/comfort without a romantic relationship," and it is. But it then goes beyond that start into a multifaceted story about complexity, resilience, and how people support each other. It is also a fantastic look at the nuance and intricacies of being or supporting a transgender person, vividly illustrated by a story full of characters the reader cares about and without the academic abstruseness that often gets in the way. The problems with gender-blindness, the limitations of honoring someone's gender without understanding how other people do not, the trickiness of privilege, gender policing as a distraction and alienation from the rest of one's life, the complications of real human bodies and dysmorphia, the importance of listening to another person rather than one's assumptions about how that person feels it's all in here, flowing naturally from the story, specific to the characters involved, and never belabored. I cannot express how well-handled this is. It was a delight to read. The other wonderful thing Aoki does is set Satomi up as the almost supernaturally competent teacher who in a sense "rescues" Katrina, and then invert the trope, showing the limits of Satomi's expertise, the places where she desperately needs human connection for herself, and her struggle to understand Katrina well enough to teach her at the level Satomi expects of herself. Teaching is not one thing to everyone; it's about listening, and Katrina is nothing like Satomi's other students. This novel is full of people thinking they finally understand each other and realizing there is still more depth that they had missed, and then talking through the gap like adults. As you can tell from any summary of this book, it's an odd genre mash-up. 
The fantasy part is a classic "magician sells her soul to Hell" story; there are a few twists, but it largely follows genre expectations. The science fiction part involving Lan is unfortunately weaker and feels more like a random assortment of borrowed Star Trek tropes than coherent world-building. Genre readers should not come to this story expecting a well-thought-out science fiction universe or a serious attempt to reconcile metaphysics between the fantasy and science fiction backgrounds. It's a quirky assortment of parts that don't normally go together, defy easy classification, and are often unexplained. I suspect this was intentional on Aoki's part given how deeply this book is about the experience of being transgender. Of the three primary viewpoint characters, I thought Lan's perspective was the weakest, and not just because of her somewhat generic SF background. Aoki uses her as a way to talk about the refugee experience, describing her as a woman who brings her family out of danger to build a new life. This mostly works, but Lan has vastly more power and capabilities than a refugee would normally have. Rather than the typical Asian refugee experience in the San Gabriel valley, Lan is more akin to a US multimillionaire who for some reason fled to Vietnam (relative to those around her, Lan is arguably even more wealthy than that). This is also a refugee experience, but it is an incredibly privileged one in a way that partly undermines the role that she plays in the story. Another false note bothered me more: I thought Tamiko was treated horribly in this story. She plays a quite minor role, sidelined early in the novel and appearing only briefly near the climax, and she's portrayed quite negatively, but she's clearly hurting as deeply as the protagonists of this novel. Aoki gives her a moment of redemption, but Tamiko gets nothing from it. Unlike every other injured and abused character in this story, no one is there for Tamiko and no one ever attempts to understand her. I found that profoundly sad. She's not an admirable character, but neither is Satomi at the start of the book. At least a gesture at a future for Tamiko would have been appreciated. Those two complaints aside, though, I could not put this book down. I was able to predict the broad outline of the plot, but the specifics were so good and so true to characters. Both the primary and supporting cast are unique, unpredictable, and memorable. Light from Uncommon Stars has a complex relationship with genre. It is squarely in the speculative fiction genre; the plot would not work without the fantasy and (more arguably) the science fiction elements. Music is magical in a way that goes beyond what can be attributed to metaphor and subjectivity. But it's also primarily character story deeply rooted in the specific location of the San Gabriel valley east of Los Angeles, full of vivid descriptions (particularly of food) and day-to-day life. As with the fantasy and science fiction elements, Aoki does not try to meld the genre elements into a coherent whole. She lets them sit side by side and be awkward and charming and uneven and chaotic. If you're the sort of SF reader who likes building a coherent theory of world-building rules, you may have to turn that desire off to fully enjoy this book. I thought this book was great. It's not flawless, but like its characters it's not trying to be flawless. In places it is deeply insightful and heartbreakingly emotional; in others, it's a glorious mess. 
It's full of cooking and food, YouTube fame, the disappointments of replicators, video game music, meet-cutes over donuts, found family, and classical music drama. I wish we'd gotten way more about the violin repair shop and a bit less warmed-over Star Trek, but I also loved it exactly the way it was. Definitely the best of the 2022 Hugo nominees that I've read so far. Content warning for child abuse, rape, self-harm, and somewhat explicit sex work. The start of the book is rather heavy and horrific, although the author advertises fairly clearly (and accurately) that things will get better. Rating: 9 out of 10

26 May 2022

Sergio Talens-Oliag: New Blog Config

As promised, in this post I'm going to explain how I've configured this blog using hugo, asciidoctor and the papermod theme, how I publish it using nginx, how I've integrated the remark42 comment system and how I've automated its publication using gitea and json2file-go. It is a long post, but I hope that at least parts of it can be interesting for some; feel free to ignore it if that is not your case

Hugo Configuration

Theme settings
The site is using the PaperMod theme and as I'm using asciidoctor to publish my content I've adjusted the settings to improve how things are shown with it. The current config.yml file is the one shown below (probably some of the settings are not required nor being used right now, but I'm including the current file, so this post will always have the latest version of it):
config.yml
baseURL: https://blogops.mixinet.net/
title: Mixinet BlogOps
paginate: 5
theme: PaperMod
destination: public/
enableInlineShortcodes: true
enableRobotsTXT: true
buildDrafts: false
buildFuture: false
buildExpired: false
enableEmoji: true
pygmentsUseClasses: true
minify:
  disableXML: true
  minifyOutput: true
languages:
  en:
    languageName: "English"
    description: "Mixinet BlogOps - https://blogops.mixinet.net/"
    author: "Sergio Talens-Oliag"
    weight: 1
    title: Mixinet BlogOps
    homeInfoParams:
      Title: "Sergio Talens-Oliag Technical Blog"
      Content: >
        ![Mixinet BlogOps](/images/mixinet-blogops.png)
    taxonomies:
      category: categories
      tag: tags
      series: series
    menu:
      main:
        - name: Archive
          url: archives
          weight: 5
        - name: Categories
          url: categories/
          weight: 10
        - name: Tags
          url: tags/
          weight: 10
        - name: Search
          url: search/
          weight: 15
outputs:
  home:
    - HTML
    - RSS
    - JSON
params:
  env: production
  defaultTheme: light
  disableThemeToggle: false
  ShowShareButtons: true
  ShowReadingTime: true
  disableSpecial1stPost: true
  disableHLJS: true
  displayFullLangName: true
  ShowPostNavLinks: true
  ShowBreadCrumbs: true
  ShowCodeCopyButtons: true
  ShowRssButtonInSectionTermList: true
  ShowFullTextinRSS: true
  ShowToc: true
  TocOpen: false
  comments: true
  remark42SiteID: "blogops"
  remark42Url: "/remark42"
  profileMode:
    enabled: false
    title: Sergio Talens-Oliag Technical Blog
    imageUrl: "/images/mixinet-blogops.png"
    imageTitle: Mixinet BlogOps
    buttons:
      - name: Archives
        url: archives
      - name: Categories
        url: categories
      - name: Tags
        url: tags
  socialIcons:
    - name: CV
      url: "https://www.uv.es/~sto/cv/"
    - name: Debian
      url: "https://people.debian.org/~sto/"
    - name: GitHub
      url: "https://github.com/sto/"
    - name: GitLab
      url: "https://gitlab.com/stalens/"
    - name: Linkedin
      url: "https://www.linkedin.com/in/sergio-talens-oliag/"
    - name: RSS
      url: "index.xml"
  assets:
    disableHLJS: true
    favicon: "/favicon.ico"
    favicon16x16:  "/favicon-16x16.png"
    favicon32x32:  "/favicon-32x32.png"
    apple_touch_icon:  "/apple-touch-icon.png"
    safari_pinned_tab:  "/safari-pinned-tab.svg"
  fuseOpts:
    isCaseSensitive: false
    shouldSort: true
    location: 0
    distance: 1000
    threshold: 0.4
    minMatchCharLength: 0
    keys: ["title", "permalink", "summary", "content"]
markup:
  asciidocExt:
    attributes:  
    backend: html5s
    extensions: ['asciidoctor-html5s','asciidoctor-diagram']
    failureLevel: fatal
    noHeaderOrFooter: true
    preserveTOC: false
    safeMode: unsafe
    sectionNumbers: false
    trace: false
    verbose: false
    workingFolderCurrent: true
privacy:
  vimeo:
    disabled: false
    simple: true
  twitter:
    disabled: false
    enableDNT: true
    simple: true
  instagram:
    disabled: false
    simple: true
  youtube:
    disabled: false
    privacyEnhanced: true
services:
  instagram:
    disableInlineCSS: true
  twitter:
    disableInlineCSS: true
security:
  exec:
    allow:
      - '^asciidoctor$'
      - '^dart-sass-embedded$'
      - '^go$'
      - '^npx$'
      - '^postcss$'
Some notes about the settings:
  • disableHLJS and assets.disableHLJS are set to true; we plan to use rouge on adoc and the inclusion of the hljs assets adds styles that collide with the ones used by rouge.
  • ShowToc is set to true and the TocOpen setting is set to false to make the ToC appear collapsed initially. My plan was to use the asciidoctor ToC, but after trying it I believe that the theme one looks nice and I don't need to adjust styles, although it has some issues with the html5s processor (the admonition titles use <h6> and they are shown on the ToC, which is weird); to fix it I've copied the layouts/partial/toc.html to my site repository and replaced the range of headings to end at 5 instead of 6 (in fact 5 still seems a lot, but as I don't think I'll use that heading level on the posts it doesn't really matter).
  • params.profileMode values are adjusted, but for now I've left it disabled, setting params.profileMode.enabled to false, and I've set the homeInfoParams to show more or less the same content with the latest posts under it (I've added some styles to my custom.css style sheet to center the text and image of the first post to match the look and feel of the profile).
  • On the asciidocExt section I've adjusted the backend to use html5s, I've added the asciidoctor-html5s and asciidoctor-diagram extensions to asciidoctor and adjusted workingFolderCurrent to true to make asciidoctor-diagram work right (I haven't tested it yet).

Theme customisations
To write in asciidoctor using the html5s processor I've added some files to the assets/css/extended directory:
  1. As said before, I've added the file assets/css/extended/custom.css to make the homeInfoParams look like the profile page and I've also changed some theme styles a little bit to make things look better with the html5s output:
    custom.css
    /* Fix first entry alignment to make it look like the profile */
    .first-entry { text-align: center; }
    .first-entry img { display: inline; }
    /**
     * Remove margin for .post-content code and reduce padding to make it look
     * better with the asciidoctor html5s output.
     **/
    .post-content code { margin: auto 0; padding: 4px; }
  2. I've also added the file assets/css/extended/adoc.css with some styles taken from the asciidoctor-default.css, see this blog post about the original file; mine is the same after formatting it with css-beautify and editing it to use variables for the colors to support light and dark themes:
    adoc.css
    /* AsciiDoctor*/
    table {
        border-collapse: collapse;
        border-spacing: 0
    }
    .admonitionblock>table {
        border-collapse: separate;
        border: 0;
        background: none;
        width: 100%
    }
    .admonitionblock>table td.icon {
        text-align: center;
        width: 80px
    }
    .admonitionblock>table td.icon img {
        max-width: none
    }
    .admonitionblock>table td.icon .title {
        font-weight: bold;
        font-family: "Open Sans", "DejaVu Sans", sans-serif;
        text-transform: uppercase
    }
    .admonitionblock>table td.content {
        padding-left: 1.125em;
        padding-right: 1.25em;
        border-left: 1px solid #ddddd8;
        color: var(--primary)
    }
    .admonitionblock>table td.content>:last-child>:last-child {
        margin-bottom: 0
    }
    .admonitionblock td.icon [class^="fa icon-"] {
        font-size: 2.5em;
        text-shadow: 1px 1px 2px var(--secondary);
        cursor: default
    }
    .admonitionblock td.icon .icon-note::before {
        content: "\f05a";
        color: var(--icon-note-color)
    }
    .admonitionblock td.icon .icon-tip::before {
        content: "\f0eb";
        color: var(--icon-tip-color)
    }
    .admonitionblock td.icon .icon-warning::before {
        content: "\f071";
        color: var(--icon-warning-color)
    }
    .admonitionblock td.icon .icon-caution::before {
        content: "\f06d";
        color: var(--icon-caution-color)
    }
    .admonitionblock td.icon .icon-important::before {
        content: "\f06a";
        color: var(--icon-important-color)
    }
    .conum[data-value] {
        display: inline-block;
        color: #fff !important;
        background-color: rgba(100, 100, 0, .8);
        -webkit-border-radius: 100px;
        border-radius: 100px;
        text-align: center;
        font-size: .75em;
        width: 1.67em;
        height: 1.67em;
        line-height: 1.67em;
        font-family: "Open Sans", "DejaVu Sans", sans-serif;
        font-style: normal;
        font-weight: bold
    }
    .conum[data-value] * {
        color: #fff !important
    }
    .conum[data-value]+b {
        display: none
    }
    .conum[data-value]::after {
        content: attr(data-value)
    }
    pre .conum[data-value] {
        position: relative;
        top: -.125em
    }
    b.conum * {
        color: inherit !important
    }
    .conum:not([data-value]):empty {
        display: none
    }
  3. The previous file uses variables from a partial copy of the theme-vars.css file that changes the highlighted code background color and adds the color definitions used by the admonitions:
    theme-vars.css
    :root {
        /* Solarized base2 */
        /* --hljs-bg: rgb(238, 232, 213); */
        /* Solarized base3 */
        /* --hljs-bg: rgb(253, 246, 227); */
        /* Solarized base02 */
        --hljs-bg: rgb(7, 54, 66);
        /* Solarized base03 */
        /* --hljs-bg: rgb(0, 43, 54); */
        /* Default asciidoctor theme colors */
        --icon-note-color: #19407c;
        --icon-tip-color: var(--primary);
        --icon-warning-color: #bf6900;
        --icon-caution-color: #bf3400;
        --icon-important-color: #bf0000
    }
    .dark {
        --hljs-bg: rgb(7, 54, 66);
        /* Asciidoctor theme colors with tint for dark background */
        --icon-note-color: #3e7bd7;
        --icon-tip-color: var(--primary);
        --icon-warning-color: #ff8d03;
        --icon-caution-color: #ff7847;
        --icon-important-color: #ff3030
    }
  4. The previous styles use font-awesome, so I've downloaded its resources for version 4.7.0 (the one used by asciidoctor), storing the font-awesome.css into the assets/css/extended dir (that way it is merged with the rest of the .css files) and copying the fonts to the static/assets/fonts/ dir (they will be served directly):
    FA_BASE_URL="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0"
    curl "$FA_BASE_URL/css/font-awesome.css" \
      > assets/css/extended/font-awesome.css
    for f in FontAwesome.otf fontawesome-webfont.eot \
      fontawesome-webfont.svg fontawesome-webfont.ttf \
      fontawesome-webfont.woff fontawesome-webfont.woff2; do
        curl "$FA_BASE_URL/fonts/$f" > "static/assets/fonts/$f"
    done
  5. As already said, the default highlighter is disabled (it provided a css compatible with rouge), so we need a css to do the highlight styling; as rouge provides a way to export them, I've created the assets/css/extended/rouge.css file with the thankful_eyes theme:
    rougify style thankful_eyes > assets/css/extended/rouge.css
  6. To support the use of the html5s backend with admonitions I've added a variation of the example found on this blog post to assets/js/adoc-admonitions.js:
    adoc-admonitions.js
    // replace the default admonitions block with a table that uses a format
    // similar to the standard asciidoctor ... as we are using fa-icons here there
    // is no need to add the icons: font entry on the document.
    window.addEventListener('load', function () {
      const admonitions = document.getElementsByClassName('admonition-block')
      for (let i = admonitions.length - 1; i >= 0; i--) {
        const elm = admonitions[i]
        const type = elm.classList[1]
        const title = elm.getElementsByClassName('block-title')[0];
        const label = title.getElementsByClassName('title-label')[0]
          .innerHTML.slice(0, -1);
        elm.removeChild(elm.getElementsByClassName('block-title')[0]);
        const text = elm.innerHTML
        const parent = elm.parentNode
        const tempDiv = document.createElement('div')
        tempDiv.innerHTML = `<div class="admonitionblock ${type}">
        <table>
          <tbody>
            <tr>
              <td class="icon">
                <i class="fa icon-${type}" title="${label}"></i>
              </td>
              <td class="content">
                ${text}
              </td>
            </tr>
          </tbody>
        </table>
      </div>`
        const input = tempDiv.childNodes[0]
        parent.replaceChild(input, elm)
      }
    })
    and enabled its minified use on the layouts/partials/extend_footer.html file adding the following lines to it:
    {{- $admonitions := slice (resources.Get "js/adoc-admonitions.js") |
        resources.Concat "assets/js/adoc-admonitions.js" | minify | fingerprint }}
    <script defer crossorigin="anonymous" src="{{ $admonitions.RelPermalink }}"
      integrity="{{ $admonitions.Data.Integrity }}"></script>

Remark42 configuration
To integrate Remark42 with the PaperMod theme I've created the file layouts/partials/comments.html with the following content based on the remark42 documentation, including extra code to sync the dark/light setting with the one set on the site:
comments.html
<div id="remark42"></div>
<script>
  var remark_config = {
    host: {{ .Site.Params.remark42Url }},
    site_id: {{ .Site.Params.remark42SiteID }},
    url: {{ .Permalink }},
    locale: {{ .Site.Language.Lang }}
  };
  (function(c) {
    /* Adjust the theme using the local-storage pref-theme if set */
    if (localStorage.getItem("pref-theme") === "dark") {
      remark_config.theme = "dark";
    } else if (localStorage.getItem("pref-theme") === "light") {
      remark_config.theme = "light";
    }
    /* Add remark42 widget */
    for(var i = 0; i < c.length; i++) {
      var d = document, s = d.createElement('script');
      s.src = remark_config.host + '/web/' + c[i] +'.js';
      s.defer = true;
      (d.head || d.body).appendChild(s);
    }
  })(remark_config.components || ['embed']);
</script>
In development I use it with anonymous comments enabled, but to avoid SPAM the production site uses social logins (for now I've only enabled GitHub & Google; if someone requests additional services I'll check them, but those were the easy ones for me initially). To support theme switching with remark42 I've also added the following inside the layouts/partials/extend_footer.html file:
{{- if (not site.Params.disableThemeToggle) }}
<script>
/* Function to change theme when the toggle button is pressed */
document.getElementById("theme-toggle").addEventListener("click", () => {
  if (typeof window.REMARK42 != "undefined") {
    if (document.body.className.includes('dark')) {
      window.REMARK42.changeTheme('light');
    } else {
      window.REMARK42.changeTheme('dark');
    }
  }
});
</script>
{{- end }}
With this code, if the theme-toggle button is pressed we change the remark42 theme before the PaperMod one (that's needed here only; on page loads the remark42 theme is synced with the main one using the code from the layouts/partials/comments.html shown earlier).

Development setup
To preview the site on my laptop I'm using docker-compose with the following configuration:
docker-compose.yaml
version: "2"
services:
  hugo:
    build:
      context: ./docker/hugo-adoc
      dockerfile: ./Dockerfile
    image: sto/hugo-adoc
    container_name: hugo-adoc-blogops
    restart: always
    volumes:
      - .:/documents
    command: server --bind 0.0.0.0 -D -F
    user: ${APP_UID}:${APP_GID}
  nginx:
    image: nginx:latest
    container_name: nginx-blogops
    restart: always
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    ports:
      -  1313:1313
  remark42:
    build:
      context: ./docker/remark42
      dockerfile: ./Dockerfile
    image: sto/remark42
    container_name: remark42-blogops
    restart: always
    env_file:
      - ./.env
      - ./remark42/env.dev
    volumes:
      - ./remark42/var.dev:/srv/var
To run it properly we have to create the .env file with the current user ID and GID in the variables APP_UID and APP_GID (if we don't do it the files can end up being owned by a user that is not the same as the one running the services):
$ echo "APP_UID=$(id -u)\nAPP_GID=$(id -g)" > .env
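Note that echo only expands \n in some shells (dash does, bash by default does not), so if the resulting .env looks wrong a more portable equivalent is printf:
$ printf 'APP_UID=%s\nAPP_GID=%s\n' "$(id -u)" "$(id -g)" > .env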
The Dockerfile used to generate the sto/hugo-adoc is:
Dockerfile
FROM asciidoctor/docker-asciidoctor:latest
RUN gem install --no-document asciidoctor-html5s &&\
 apk update && apk add --no-cache curl libc6-compat &&\
 repo_path="gohugoio/hugo" &&\
 api_url="https://api.github.com/repos/$repo_path/releases/latest" &&\
 download_url="$(\
  curl -sL "$api_url" |\
  sed -n "s/^.*download_url\": \"\\(.*.extended.*Linux-64bit.tar.gz\)\"/\1/p"\
 )" &&\
 curl -sL "$download_url" -o /tmp/hugo.tgz &&\
 tar xf /tmp/hugo.tgz hugo &&\
 install hugo /usr/bin/ &&\
 rm -f hugo /tmp/hugo.tgz &&\
 /usr/bin/hugo version &&\
 apk del curl && rm -rf /var/cache/apk/*
# Expose port for live server
EXPOSE 1313
ENTRYPOINT ["/usr/bin/hugo"]
CMD [""]
If you review it you will see that I'm using the docker-asciidoctor image as the base; the idea is that this image has all I need to work with asciidoctor, and to use hugo I only need to download the binary from their latest release at github (as we are using an image based on alpine we also need to install the libc6-compat package, but once that is done things are working fine for me so far). The image does not launch the server by default because I don't want it to; in fact I use the same docker-compose.yml file to publish the site in production, simply calling the container without the arguments passed on the docker-compose.yml file (see later). When running the containers with docker-compose up (or docker compose up if you have the docker-compose-plugin package installed) we also launch a nginx container and the remark42 service so we can test everything together. The Dockerfile for the remark42 image is the original one with an updated version of the init.sh script:
Dockerfile
FROM umputun/remark42:latest
COPY init.sh /init.sh
The updated init.sh is similar to the original, but allows us to use an APP_GID variable and updates the /etc/group file of the container so the files get the right user and group (with the original script the group is always 1001):
init.sh
#!/sbin/dinit /bin/sh
uid="$(id -u)"
if [ "${uid}" -eq "0" ]; then
  echo "init container"
  # set container's time zone
  cp "/usr/share/zoneinfo/${TIME_ZONE}" /etc/localtime
  echo "${TIME_ZONE}" >/etc/timezone
  echo "set timezone ${TIME_ZONE} ($(date))"
  # set UID & GID for the app
  if [ "${APP_UID}" ] || [ "${APP_GID}" ]; then
    [ "${APP_UID}" ] || APP_UID="1001"
    [ "${APP_GID}" ] || APP_GID="${APP_UID}"
    echo "set custom APP_UID=${APP_UID} & APP_GID=${APP_GID}"
    sed -i "s/^app:x:1001:1001:/app:x:${APP_UID}:${APP_GID}:/" /etc/passwd
    sed -i "s/^app:x:1001:/app:x:${APP_GID}:/" /etc/group
  else
    echo "custom APP_UID and/or APP_GID not defined, using 1001:1001"
  fi
  chown -R app:app /srv /home/app
fi
echo "prepare environment"
# replace {% REMARK_URL %} by content of REMARK_URL variable
find /srv -regex '.*\.\(html\|js\|mjs\)$' -print \
  -exec sed -i "s|{% REMARK_URL %}|${REMARK_URL}|g" {} \;
if [ -n "${SITE_ID}" ]; then
  #replace "site_id: 'remark'" by SITE_ID
  sed -i "s|'remark'|'${SITE_ID}'|g" /srv/web/*.html
fi
echo "execute \"$*\""
if [ "${uid}" -eq "0" ]; then
  exec su-exec app "$@"
else
  exec "$@"
fi
The environment file used with remark42 for development is quite minimal:
env.dev
TIME_ZONE=Europe/Madrid
REMARK_URL=http://localhost:1313/remark42
SITE=blogops
SECRET=123456
ADMIN_SHARED_ID=sto
AUTH_ANON=true
EMOJI=true
And the nginx/default.conf file used to publish the service locally is simple too:
default.conf
server {
 listen 1313;
 server_name localhost;
 location / {
    proxy_pass http://hugo:1313;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
 }
 location /remark42/ {
    rewrite /remark42/(.*) /$1 break;
    proxy_pass http://remark42:8080/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
 }
}

Production setup
The VM where I'm publishing the blog runs Debian GNU/Linux and uses binaries from local packages and applications packaged inside containers. To run the containers I'm using docker-ce (I could have used podman instead, but I already had it installed on the machine, so I stayed with it). The binaries used on this project are included in the following packages from the main Debian repository:
  • git to clone & pull the repository,
  • jq to parse json files from shell scripts,
  • json2file-go to save the webhook messages to files,
  • inotify-tools to detect when new files are stored by json2file-go and launch scripts to process them,
  • nginx to publish the site using HTTPS and work as proxy for json2file-go and remark42 (I run it using a container),
  • task-spooler to queue the scripts that update the deployment.
And I'm using docker and docker compose from the Debian packages on the Docker repository:
  • docker-ce to run the containers,
  • docker-compose-plugin to run docker compose (it is a plugin, so no - in the name).

Repository checkout
To manage the git repository I've created a deploy key, added it to gitea and cloned the project into the /srv/blogops path (that path is owned by a regular user that has permissions to run docker, as I said before).
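The exact commands are not part of the post; a minimal sketch of that setup could look like the following (the SSH clone URL is an assumption here, derived from the HTTPS one used later):
# generate a dedicated deploy key and paste the public part into gitea
ssh-keygen -t ed25519 -N '' -f ~/.ssh/blogops-deploy -C 'blogops deploy key'
cat ~/.ssh/blogops-deploy.pub
# clone the repository into the path used by the rest of the scripts
git clone git@gitea.mixinet.net:mixinet/blogops.git /srv/blogops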

Compiling the site with hugo
To compile the site we are using the docker-compose.yml file seen before; to be able to run it we first build the container images and, once we have them, we launch hugo using docker compose run:
$ cd /srv/blogops
$ git pull
$ docker compose build
$ if [ -d "./public" ]; then rm -rf ./public; fi
$ docker compose run hugo --
The compilation leaves the static HTML on /srv/blogops/public (we remove the directory first because hugo does not clean the destination folder as jekyll does). The deploy script re-generates the site as described and moves the public directory to its final place for publishing.
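When doing that move by hand (the automated version is part of the webhook script shown later), the swap amounts to something like this sketch:
TS="$(date +%Y%m%d-%H%M%S)"
# keep the previous copy around with a timestamp and put the fresh build in its place
[ -d /srv/blogops/nginx/public_html ] &&
  mv /srv/blogops/nginx/public_html "/srv/blogops/nginx/public_html-$TS"
mv /srv/blogops/public /srv/blogops/nginx/public_html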

Running remark42 with docker
On the /srv/blogops/remark42 folder I have the following docker-compose.yml:
docker-compose.yml
version: "2"
services:
  remark42:
    build:
      context: ../docker/remark42
      dockerfile: ./Dockerfile
    image: sto/remark42
    env_file:
      - ../.env
      - ./env.prod
    container_name: remark42
    restart: always
    volumes:
      - ./var.prod:/srv/var
    ports:
      - 127.0.0.1:8042:8080
The ../.env file is loaded to get the APP_UID and APP_GID variables that are used by my version of the init.sh script to adjust file permissions, and the env.prod file contains the rest of the settings for remark42, including the social network tokens (see the remark42 documentation for the available parameters; I don't include my configuration here because some of them are secrets).

Nginx configuration
The nginx configuration for the blogops.mixinet.net site is as simple as:
server {
  listen 443 ssl http2;
  server_name blogops.mixinet.net;
  ssl_certificate /etc/letsencrypt/live/blogops.mixinet.net/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/blogops.mixinet.net/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
  access_log /var/log/nginx/blogops.mixinet.net-443.access.log;
  error_log  /var/log/nginx/blogops.mixinet.net-443.error.log;
  root /srv/blogops/nginx/public_html;
  location / {
    try_files $uri $uri/ =404;
  }
  include /srv/blogops/nginx/remark42.conf;
}
server {
  listen 80 ;
  listen [::]:80 ;
  server_name blogops.mixinet.net;
  access_log /var/log/nginx/blogops.mixinet.net-80.access.log;
  error_log  /var/log/nginx/blogops.mixinet.net-80.error.log;
  if ($host = blogops.mixinet.net) {
    return 301 https://$host$request_uri;
  }
  return 404;
}
On this configuration the certificates are managed by certbot and the server root directory is on /srv/blogops/nginx/public_html and not on /srv/blogops/public; the reason for that is that I want to be able to compile without affecting the running site, the deployment script generates the site on /srv/blogops/public and if all works well we rename folders to do the switch, making the change feel almost atomic.

json2file-go configuration
As I have a working WireGuard VPN between the machine running gitea at my home and the VM where the blog is served, I'm going to configure json2file-go to listen for connections on a high port, using a self-signed certificate and listening only on IP addresses reachable through the VPN. To do it we create a systemd socket to run json2file-go and adjust its configuration to listen on a private IP (we use the FreeBind option on its definition to be able to launch the service even when the IP is not available, that is, when the VPN is down). The following script can be used to set up the json2file-go configuration:
setup-json2file.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
BASE_DIR="/srv/blogops/webhook"
J2F_DIR="$BASE_DIR/json2file"
TLS_DIR="$BASE_DIR/tls"
J2F_SERVICE_NAME="json2file-go"
J2F_SERVICE_DIR="/etc/systemd/system/json2file-go.service.d"
J2F_SERVICE_OVERRIDE="$J2F_SERVICE_DIR/override.conf"
J2F_SOCKET_DIR="/etc/systemd/system/json2file-go.socket.d"
J2F_SOCKET_OVERRIDE="$J2F_SOCKET_DIR/override.conf"
J2F_BASEDIR_FILE="/etc/json2file-go/basedir"
J2F_DIRLIST_FILE="/etc/json2file-go/dirlist"
J2F_CRT_FILE="/etc/json2file-go/certfile"
J2F_KEY_FILE="/etc/json2file-go/keyfile"
J2F_CRT_PATH="$TLS_DIR/crt.pem"
J2F_KEY_PATH="$TLS_DIR/key.pem"
# ----
# MAIN
# ----
# Install packages used with json2file for the blogops site
sudo apt update
sudo apt install -y json2file-go uuid
if [ -z "$(type mkcert)" ]; then
  sudo apt install -y mkcert
fi
sudo apt clean
# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"
J2F_DIRLIST="blogops:$(uuid)"
J2F_LISTEN_STREAM="172.31.31.1:4443"
# Configure json2file
[ -d "$J2F_DIR" ] || mkdir "$J2F_DIR"
sudo sh -c "echo '$J2F_DIR' >'$J2F_BASEDIR_FILE'"
[ -d "$TLS_DIR" ] || mkdir "$TLS_DIR"
if [ ! -f "$J2F_CRT_PATH" ] || [ ! -f "$J2F_KEY_PATH" ]; then
  mkcert -cert-file "$J2F_CRT_PATH" -key-file "$J2F_KEY_PATH" "$(hostname -f)"
fi
sudo sh -c "echo '$J2F_CRT_PATH' >'$J2F_CRT_FILE'"
sudo sh -c "echo '$J2F_KEY_PATH' >'$J2F_KEY_FILE'"
sudo sh -c "cat >'$J2F_DIRLIST_FILE'" <<EOF
$(echo "$J2F_DIRLIST" | tr ';' '\n')
EOF
# Service override
[ -d "$J2F_SERVICE_DIR" ] || sudo mkdir "$J2F_SERVICE_DIR"
sudo sh -c "cat >'$J2F_SERVICE_OVERRIDE'" <<EOF
[Service]
User=$J2F_USER
Group=$J2F_GROUP
EOF
# Socket override
[ -d "$J2F_SOCKET_DIR" ] || sudo mkdir "$J2F_SOCKET_DIR"
sudo sh -c "cat >'$J2F_SOCKET_OVERRIDE'" <<EOF
[Socket]
# Set FreeBind to listen on missing addresses (the VPN can be down sometimes)
FreeBind=true
# Set ListenStream to nothing to clear its value and add the new value later
ListenStream=
ListenStream=$J2F_LISTEN_STREAM
EOF
# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$J2F_SERVICE_NAME"
sudo systemctl start "$J2F_SERVICE_NAME"
sudo systemctl enable "$J2F_SERVICE_NAME"
# ----
# vim: ts=2:sw=2:et:ai:sts=2
Warning: The script uses mkcert to create the temporary certificates; to install the package on bullseye the backports repository must be available.
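Once the script has run it is worth checking that the socket is really listening on the VPN address; a couple of quick checks (just a suggestion, not part of the setup script):
sudo systemctl status json2file-go.socket
ss -tln | grep 4443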

Gitea configuration
To make gitea use our json2file-go server we go to the project and enter the hooks/gitea/new page; once there we create a new webhook of type gitea, set the target URL to https://172.31.31.1:4443/blogops and on the secret field we put the token generated with uuid by the setup script:
sed -n -e 's/blogops://p' /etc/json2file-go/dirlist
The rest of the settings can be left as they are:
  • Trigger on: Push events
  • Branch filter: *
Warning: We are using an internal IP and a self-signed certificate; that means that we have to review that the webhook section of the app.ini of our gitea server allows us to call the IP and skips the TLS verification (you can see the available options on the gitea documentation). The [webhook] section of my server looks like this:
[webhook]
ALLOWED_HOST_LIST=private
SKIP_TLS_VERIFY=true
Once we have the webhook configured we can try it and if it works our json2file server will store the file on the /srv/blogops/webhook/json2file/blogops/ folder.
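A quick way to confirm that the delivery worked and that the fields used later by the processing script are present is to inspect the newest stored file with jq (a sketch; the field names match the ones queried by the webhook script below):
J2F_OUT="/srv/blogops/webhook/json2file/blogops"
jq -r '.ref, .after, .repository.clone_url' \
  "$J2F_OUT/$(ls -t "$J2F_OUT" | head -n 1)"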

The json2file spooler script
With the previous configuration our system is ready to receive webhook calls from gitea and store the messages on files, but we have to do something to process those files once they are saved in our machine. An option could be to use a cronjob to look for new files, but we can do better on Linux using inotify: we will use the inotifywait command from inotify-tools to watch the json2file output directory and execute a script each time a new file is moved inside it or closed after writing (IN_CLOSE_WRITE and IN_MOVED_TO events). To avoid concurrency problems we are going to use task-spooler to launch the scripts that process the webhooks using a queue of length 1, so they are executed one by one in a FIFO queue. The spooler script is this:
blogops-spooler.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
TSP_DIR="$BASE_DIR/tsp"
WEBHOOK_COMMAND="$BIN_DIR/blogops-webhook.sh"
# ---------
# FUNCTIONS
# ---------
queue_job() {
  echo "Queuing job to process file '$1'"
  TMPDIR="$TSP_DIR" TS_SLOTS="1" TS_MAXFINISHED="10" \
    tsp -n "$WEBHOOK_COMMAND" "$1"
}
# ----
# MAIN
# ----
INPUT_DIR="$1"
if [ ! -d "$INPUT_DIR" ]; then
  echo "Input directory '$INPUT_DIR' does not exist, aborting!"
  exit 1
fi
[ -d "$TSP_DIR" ] || mkdir "$TSP_DIR"
echo "Processing existing files under '$INPUT_DIR'"
find "$INPUT_DIR" -type f | sort | while read -r _filename; do
  queue_job "$_filename"
done
# Use inotifywatch to process new files
echo "Watching for new files under '$INPUT_DIR'"
inotifywait -q -m -e close_write,moved_to --format "%w%f" -r "$INPUT_DIR" |
  while read -r _filename; do
    queue_job "$_filename"
  done
# ----
# vim: ts=2:sw=2:et:ai:sts=2
To run it as a daemon we install it as a systemd service using the following script:
setup-spooler.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
J2F_DIR="$BASE_DIR/json2file"
SPOOLER_COMMAND="$BIN_DIR/blogops-spooler.sh '$J2F_DIR'"
SPOOLER_SERVICE_NAME="blogops-j2f-spooler"
SPOOLER_SERVICE_FILE="/etc/systemd/system/$SPOOLER_SERVICE_NAME.service"
# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"
# ----
# MAIN
# ----
# Install packages used with the webhook processor
sudo apt update
sudo apt install -y inotify-tools jq task-spooler
sudo apt clean
# Configure process service
sudo sh -c "cat > $SPOOLER_SERVICE_FILE" <<EOF
[Install]
WantedBy=multi-user.target
[Unit]
Description=json2file processor for $J2F_USER
After=docker.service
[Service]
Type=simple
User=$J2F_USER
Group=$J2F_GROUP
ExecStart=$SPOOLER_COMMAND
EOF
# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$SPOOLER_SERVICE_NAME" || true
sudo systemctl start "$SPOOLER_SERVICE_NAME"
sudo systemctl enable "$SPOOLER_SERVICE_NAME"
# ----
# vim: ts=2:sw=2:et:ai:sts=2
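Since the spooler launches the jobs through task-spooler with TMPDIR pointing at its tsp directory, the queue can be inspected reusing the same trick (a sketch, assuming the paths used above):
# list queued, running and recently finished webhook jobs
TMPDIR=/srv/blogops/webhook/tsp tsp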

The gitea webhook processor
Finally, the script that processes the JSON files does the following:
  1. First, it checks if the repository and branch are right,
  2. Then, it fetches and checks out the commit referenced on the JSON file,
  3. Once the files are updated, compiles the site using hugo with docker compose,
  4. If the compilation succeeds the script renames directories to swap the old version of the site by the new one.
If there is a failure the script aborts, but before doing so (or after a successful swap) the system sends an email to the configured address and/or the user that pushed updates to the repository, with a log of what happened. The current script is this one:
blogops-webhook.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
# Values
REPO_REF="refs/heads/main"
REPO_CLONE_URL="https://gitea.mixinet.net/mixinet/blogops.git"
MAIL_PREFIX="[BLOGOPS-WEBHOOK] "
# Address that gets all messages, leave it empty if not wanted
MAIL_TO_ADDR="blogops@mixinet.net"
# If the following variable is set to 'true' the pusher gets mail on failures
MAIL_ERRFILE="false"
# If the following variable is set to 'true' the pusher gets mail on success
MAIL_LOGFILE="false"
# gitea's conf/app.ini value of NO_REPLY_ADDRESS, it is used for email domains
# when the KeepEmailPrivate option is enabled for a user
NO_REPLY_ADDRESS="noreply.example.org"
# Directories
BASE_DIR="/srv/blogops"
PUBLIC_DIR="$BASE_DIR/public"
NGINX_BASE_DIR="$BASE_DIR/nginx"
PUBLIC_HTML_DIR="$NGINX_BASE_DIR/public_html"
WEBHOOK_BASE_DIR="$BASE_DIR/webhook"
WEBHOOK_SPOOL_DIR="$WEBHOOK_BASE_DIR/spool"
WEBHOOK_ACCEPTED="$WEBHOOK_SPOOL_DIR/accepted"
WEBHOOK_DEPLOYED="$WEBHOOK_SPOOL_DIR/deployed"
WEBHOOK_REJECTED="$WEBHOOK_SPOOL_DIR/rejected"
WEBHOOK_TROUBLED="$WEBHOOK_SPOOL_DIR/troubled"
WEBHOOK_LOG_DIR="$WEBHOOK_SPOOL_DIR/log"
# Files
TODAY="$(date +%Y%m%d)"
OUTPUT_BASENAME="$(date +%Y%m%d-%H%M%S.%N)"
WEBHOOK_LOGFILE_PATH="$WEBHOOK_LOG_DIR/$OUTPUT_BASENAME.log"
WEBHOOK_ACCEPTED_JSON="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.json"
WEBHOOK_ACCEPTED_LOGF="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.log"
WEBHOOK_REJECTED_TODAY="$WEBHOOK_REJECTED/$TODAY"
WEBHOOK_REJECTED_JSON="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_REJECTED_LOGF="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_DEPLOYED_TODAY="$WEBHOOK_DEPLOYED/$TODAY"
WEBHOOK_DEPLOYED_JSON="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_DEPLOYED_LOGF="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_TROUBLED_TODAY="$WEBHOOK_TROUBLED/$TODAY"
WEBHOOK_TROUBLED_JSON="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_TROUBLED_LOGF="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.log"
# Query to get variables from a gitea webhook json
ENV_VARS_QUERY="$(
  printf "%s" \
    '(. | @sh "gt_ref=\(.ref);"),' \
    '(. | @sh "gt_after=\(.after);"),' \
    '(.repository | @sh "gt_repo_clone_url=\(.clone_url);"),' \
    '(.repository | @sh "gt_repo_name=\(.name);"),' \
    '(.pusher | @sh "gt_pusher_full_name=\(.full_name);"),' \
    '(.pusher | @sh "gt_pusher_email=\(.email);")'
)"
# ---------
# Functions
# ---------
webhook_log() {
  echo "$(date -R) $*" >>"$WEBHOOK_LOGFILE_PATH"
}
webhook_check_directories() {
  for _d in "$WEBHOOK_SPOOL_DIR" "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" \
    "$WEBHOOK_REJECTED" "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR"; do
    [ -d "$_d" ] || mkdir "$_d"
  done
}
webhook_clean_directories() {
  # Try to remove empty dirs
  for _d in "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" "$WEBHOOK_REJECTED" \
    "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR" "$WEBHOOK_SPOOL_DIR"; do
    if [ -d "$_d" ]; then
      rmdir "$_d" 2>/dev/null || true
    fi
  done
}
webhook_accept() {
  webhook_log "Accepted: $*"
  mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_ACCEPTED_JSON"
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_ACCEPTED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_ACCEPTED_LOGF"
}
webhook_reject() {
  [ -d "$WEBHOOK_REJECTED_TODAY" ] || mkdir "$WEBHOOK_REJECTED_TODAY"
  webhook_log "Rejected: $*"
  if [ -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
    mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_REJECTED_JSON"
  fi
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_REJECTED_LOGF"
  exit 0
}
webhook_deployed() {
  [ -d "$WEBHOOK_DEPLOYED_TODAY" ] || mkdir "$WEBHOOK_DEPLOYED_TODAY"
  webhook_log "Deployed: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_DEPLOYED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_DEPLOYED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_DEPLOYED_LOGF"
}
webhook_troubled() {
  [ -d "$WEBHOOK_TROUBLED_TODAY" ] || mkdir "$WEBHOOK_TROUBLED_TODAY"
  webhook_log "Troubled: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_TROUBLED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_TROUBLED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_TROUBLED_LOGF"
}
print_mailto() {
  _addr="$1"
  _user_email=""
  # Add the pusher email address unless it is from the domain NO_REPLY_ADDRESS,
  # which should match the value of that variable on the gitea 'app.ini' (it
  # is the domain used for emails when the user hides it).
  # shellcheck disable=SC2154
  if [ -n "${gt_pusher_email##*@"${NO_REPLY_ADDRESS}"}" ] &&
    [ -z "${gt_pusher_email##*@*}" ]; then
    _user_email="\"$gt_pusher_full_name <$gt_pusher_email>\""
  fi
  if [ "$_addr" ] && [ "$_user_email" ]; then
    echo "$_addr,$_user_email"
  elif [ "$_user_email" ]; then
    echo "$_user_email"
  elif [ "$_addr" ]; then
    echo "$_addr"
  fi
}
mail_success() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_LOGFILE" = "true" ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="OK - $gt_repo_name updated to commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}
mail_failure() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_ERRFILE" = true ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="KO - $gt_repo_name update FAILED for commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}
# ----
# MAIN
# ----
# Check directories
webhook_check_directories
# Go to the base directory
cd "$BASE_DIR"
# Check if the file exists
WEBHOOK_JSON_INPUT_FILE="$1"
if [ ! -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
  webhook_reject "Input arg '$1' is not a file, aborting"
fi
# Parse the file
webhook_log "Processing file '$WEBHOOK_JSON_INPUT_FILE'"
eval "$(jq -r "$ENV_VARS_QUERY" "$WEBHOOK_JSON_INPUT_FILE")"
# Check that the repository clone url is right
# shellcheck disable=SC2154
if [ "$gt_repo_clone_url" != "$REPO_CLONE_URL" ]; then
  webhook_reject "Wrong repository: '$gt_clone_url'"
fi
# Check that the branch is the right one
# shellcheck disable=SC2154
if [ "$gt_ref" != "$REPO_REF" ]; then
  webhook_reject "Wrong repository ref: '$gt_ref'"
fi
# Accept the file
# shellcheck disable=SC2154
webhook_accept "Processing '$gt_repo_name'"
# Update the checkout
ret="0"
git fetch >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository fetch failed"
  mail_failure
fi
# shellcheck disable=SC2154
git checkout "$gt_after" >>"$WEBHOOK_LOGFILE_PATH" 2>&1   ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository checkout failed"
  mail_failure
fi
# Remove the build dir if present
if [ -d "$PUBLIC_DIR" ]; then
  rm -rf "$PUBLIC_DIR"
fi
# Build site
docker compose run hugo -- >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
# go back to the main branch
git switch main && git pull
# Fail if public dir was missing
if [ "$ret" -ne "0" ]   [ ! -d "$PUBLIC_DIR" ]; then
  webhook_troubled "Site build failed"
  mail_failure
fi
# Remove old public_html copies
webhook_log 'Removing old site versions, if present'
find "$NGINX_BASE_DIR" -mindepth 1 -maxdepth 1 -name 'public_html-*' -type d \
  -exec rm -rf {} \; >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Removal of old site versions failed"
  mail_failure
fi
# Switch site directory
TS="$(date +%Y%m%d-%H%M%S)"
if [ -d "$PUBLIC_HTML_DIR" ]; then
  webhook_log "Moving '$PUBLIC_HTML_DIR' to '$PUBLIC_HTML_DIR-$TS'"
  mv "$PUBLIC_HTML_DIR" "$PUBLIC_HTML_DIR-$TS" >>"$WEBHOOK_LOGFILE_PATH" 2>&1  
    ret="$?"
fi
if [ "$ret" -eq "0" ]; then
  webhook_log "Moving '$PUBLIC_DIR' to '$PUBLIC_HTML_DIR'"
  mv "$PUBLIC_DIR" "$PUBLIC_HTML_DIR" >>"$WEBHOOK_LOGFILE_PATH" 2>&1  
    ret="$?"
fi
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Site switch failed"
  mail_failure
else
  webhook_deployed "Site deployed successfully"
  mail_success
fi
# ----
# vim: ts=2:sw=2:et:ai:sts=2
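
As a quick way to exercise the parsing logic without a live Gitea instance, the script can be fed a hand-made payload; the file below is a made-up example (not a real Gitea payload) that only contains the fields the jq query reads:

# Hypothetical test payload; all values are illustrative
cat >/tmp/test-payload.json <<'EOF'
{
  "ref": "refs/heads/main",
  "after": "0123456789abcdef0123456789abcdef01234567",
  "repository": {
    "clone_url": "https://git.example.com/user/blog.git",
    "name": "blog"
  },
  "pusher": {
    "full_name": "Jane Doe",
    "email": "jane@example.com"
  }
}
EOF
# With ENV_VARS_QUERY defined as in the script, this prints eval-ready
# assignments such as gt_repo_name='blog';
jq -r "$ENV_VARS_QUERY" /tmp/test-payload.json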

22 May 2022

Sergio Talens-Oliag: New Blog

Welcome to my new Blog for Technical Stuff. For a long time I was planning to start publishing technical articles again, but to do it I wanted to replace my old blog based on ikiwiki with something more modern. I've used Jekyll with GitLab Pages to build the Intranet of the ITI and to generate internal documentation sites on Agile Content, but, as happened with ikiwiki, I felt that things were kind of slow and not as easy to maintain as I would like. So at Kyso (the company I work for right now) I switched to Hugo as the Static Site Generator (I still use GitLab Pages to automate the deployment, though), but the contents are written using the Markdown format, while my personal preference is the Asciidoc format. One thing I liked about Jekyll was that it was possible to use Asciidoctor to generate the HTML simply by using the Jekyll Asciidoc plugin (I even configured my site to generate PDF documents from .adoc files using the Asciidoctor PDF converter) and, luckily for me, that is also possible with Hugo, so that is what I plan to use on this blog; in fact, this post is written in .adoc. My plan is to start publishing articles about things I'm working on to keep them documented for myself and maybe be useful to someone else. The general intention is to write about Container Orchestration (mainly Kubernetes), CI/CD tools (currently I'm using GitLab CE for that), System Administration (with Debian GNU/Linux as my preferred OS) and that sort of thing. My next post will be about how I build, publish and update the Blog, but I probably will not finish it until next week, once the site is fully operational and the publishing system is tested.
Spoiler Alert: This is a personal site, so I'm using Gitea to host the code instead of GitLab. To handle the deployment I've configured json2file-go to save the data sent by the hook calls and process it asynchronously using inotify-tools. When a new file is detected, a script parses the JSON file using jq and builds and updates the site if appropriate.
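A rough sketch of that watcher, assuming json2file-go drops the payloads into a spool directory and that the deployment script takes the JSON file as its only argument (the paths and the script name below are placeholders, not the real setup), could look like this:

#!/bin/sh
# Hypothetical watcher: hand each payload saved by json2file-go to the
# deployment script. SPOOL_DIR and DEPLOY_SCRIPT are made-up placeholders.
SPOOL_DIR="/var/spool/json2file"
DEPLOY_SCRIPT="/usr/local/bin/blog-deploy.sh"
inotifywait -m -e close_write --format '%w%f' "$SPOOL_DIR" |
  while read -r payload; do
    "$DEPLOY_SCRIPT" "$payload"
  done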

26 January 2022

Timo Jyrinki: Unboxing Dell XPS 13 - openSUSE Tumbleweed alongside preinstalled Ubuntu

A look at the 2021 model of Dell XPS 13 - available with Linux pre-installed
I received a new laptop for work - a Dell XPS 13. Dell has long been famous for offering certain models with pre-installed Linux as a supported option, and opting for those is nice for moving some euros/dollars from a certain PC desktop OS monopoly towards Linux desktop engineering costs. Notably, Lenovo also offers Ubuntu and Fedora options on many models these days (like the Carbon X1 and P15 Gen 2).
[Photos: black box; opened box; accessories and a leaflet about Linux support; laptop lifted from the box, closed; laptop with lid open; Ubuntu running; openSUSE running]
Obviously a smooth, ready-to-rock Ubuntu installation is nice for most people already, but I need openSUSE, so after checking that everything was fine with Ubuntu, I continued to install openSUSE Tumbleweed as a dual-boot option. As I'm a funny little tinkerer, I obviously went with some special things. I wanted:
  • Ubuntu to remain as the reference supported OS on a small(ish) partition, useful to compare to if trying out new development versions of software on openSUSE and finding oddities.
  • openSUSE as the OS consuming most of the space.
  • LUKS encryption for openSUSE without LVM.
  • ext4's new fancy fast_commit feature in use during filesystem creation (a rough command sketch follows this list).
  • As a result of all that, I ended up juggling back and forth between installation screens a couple of times (even more than shown below, and also because I forgot I wanted to use encryption the first time around).
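As a rough illustration of the last two points (the device name is an assumption and the exact commands used are not in the post), the encrypted ext4 filesystem could be created along these lines:

# Hypothetical partition reserved for openSUSE
cryptsetup luksFormat /dev/nvme0n1p6
cryptsetup open /dev/nvme0n1p6 cr_root
# fast_commit requires e2fsprogs 1.46+ and a recent kernel
mkfs.ext4 -O fast_commit /dev/mapper/cr_root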
First boots to pre-installed Ubuntu and installation of openSUSE Tumbleweed as the dual-boot option:
(if the embedded video is not shown, use a direct link)
Some notes from the openSUSE installation:
  • openSUSE installer's partition editor apparently does not support resizing, or automatically installing side-by-side with, another Linux distribution, so I did part of the setup completely on my own.
  • Installation package download hung a couple of times and only succeeded once I entered a mirror manually. On my TW I've also noticed download problems recently; there might be a problem with some mirror that I need to escalate.
  • The installer doesn't show the encryption status of the target installation very clearly - it took me a couple of attempts before I even noticed the small encrypted column and icon (well, very small, see below), which also did not spell out the device mapper name but only the main partition name. In the end it was going to do the right thing right away and use my pre-created encrypted target partition as I wanted, but the UX could be better. Then again, I was doing my very own tweaks anyway.
  • Let's not go into the details of why I'm so old-fashioned and use ext4 :)
  • openSUSE's installer does not work well with a HiDPI screen. Funnily, the tty consoles seem to be fine, and with a big font.
  • At the end of the video I install the two GNOME extensions I can't live without, Dash to Dock and Sound Input & Output Device Chooser.

23 December 2021

Russ Allbery: Review: The Empress of Salt and Fortune

Review: The Empress of Salt and Fortune, by Nghi Vo
Series: Singing Hills Cycle #1
Publisher: Tordotcom
Copyright: March 2020
ISBN: 1-250-75029-6
Format: Kindle
Pages: 119
Cleric Chih is a record-keeper and historian from the Singing Hills abbey. They have come to Lake Scarlet with their neixin, the hoopoe Almost Brilliant, because the magical imperial lock placed on the site by the Empress In-yo has just been lifted. They hope to be one of the first to catalog what may be found there, but are surprised to encounter an old woman named Rabbit. Along with a catalog of objects will come a catalog of stories. Empress In-yo came from the north, a political bride for Emperor Sung. She was exiled to Lake Scarlet, to a compound her attendants called Thriving Fortune as a bitter joke. This is the site that Cleric Chih is cataloging. The old woman named Rabbit was one of the empress's attendants. Cleric Chih slowly draws out her stories, often sparked by an object found in the compound and described in the chapter epigraphs. The Empress of Salt and Fortune is a short fantasy novella set in a slightly modified version of China. In-yo comes from a recognizably Mongolian north, but it's colder and has mammoths. Anh, the analogue of China, uses magic to ensure that winter never comes. The court politics, though, seem largely unchanged from our world. The empress from the north had one important duty: delivery of an heir. Once that was done, the empire had no more need for her. That was, as foreshadowed at the start of the novella, not the end of In-yo or Rabbit's story. This is a lovely, layered, and subtle story that was a bit too subtle for my mood when I was reading it. It is the type of story that understates the emotions of the characters and rewards close reading and paying attention. I was not paying enough attention and missed a few significant character developments, which in this concise and careful of a story is not advisable. If you read this, learn from my experience, take it slow, and don't expect major plot events to be signaled in neon. The center of this story is careful and ruthless use of power in a world that is attempting to deny you any, and the sacrifices that one makes to reach that power anyway. Both the empress and Rabbit play bad hands with great effectiveness, taking advantage of the ways in which they're underestimated. It's a sharp and difficult and quietly angry story with a good emotional payoff that doesn't deliver a typical ending. I like stories in which characters make difficult decisions with their eyes wide open, and then refuse to second-guess them or feel bad about the expected consequences. If you like stories about decisive women using and bypassing systems that were stacked against them, you will probably enjoy this novella. The Empress of Salt and Fortune won the 2021 Hugo for best novella, but there is not a lot of fantasy here. It's mostly in the background and setting of a political and character story, and I'm not sure any of it was essential to the plot. What it does have is memorable characters and concise and effective storytelling. We may be living through the golden age of the novella, at least in the science fiction and fantasy world. Chih is a quiet and careful questioner who knows the value of patience and creating space. They don't get much opportunity for characterization in this novella, but what hints we get of their order's approach are intriguing. It appears that this series of novellas will be following Chih rather than the other branches of the story. That will likely keep me reading. Followed by When the Tiger Came Down the Mountain. Rating: 7 out of 10

12 September 2021

Russ Allbery: DocKnot 5.00

This release is the culmination of a project that I've been wanting to do for two years, but nearly all the work was done in the past week. That experience made me rethink some of my skepticism, but I'll get to that part of the story later. In March of 1999, I got tired of writing HTML by hand and wrote a small program called spin that implemented a macro language that translated into HTML. This makes it one of the oldest programs for which I have a continuous development history, predating podlators by three months. I think only News::Gateway (now very dormant) and Term::ANSIColor (still under active development but very stable) are older, as long as I'm not counting orphaned packages like newsyslog. I've used spin continuously ever since. It's grown features and an ecosystem of somewhat hackish scripts to do web publishing things I've wanted over the years: journal entries like this one, book reviews, a simple gallery (with some now-unfortunate decisions about maximum image size), RSS feeds, and translation of lots of different input files into HTML. But the core program itself, in all those years, has been one single Perl script written mostly in my Perl coding style from the early 2000s before I read Perl Best Practices. My web site is long overdue for an overhaul. Just to name a couple of obvious problems, it looks like trash on mobile browsers, and I'm using URL syntax from the early days of the web that, while it prompts some nostalgia for tildes, means all the URLs are annoyingly long and embed useless information such as the fact each page is written in HTML. Its internals also use a lot of ad hoc microformats (a bit of RFC 2822 here, a text-based format with significant indentation there, a weird space-separated database) and are supported by programs that extract meaning from human-written pages and perform automated updates to them rather than having a clear separation between structure and data. This will be a very large project, but it's the sort of quixotic personal project that I enjoy. Maintaining my own idiosyncratic static site generator is almost certainly not an efficient use of my time compared to, say, converting everything to Hugo. But I have 3,428 pages (currently) written in the thread macro language, plus numerous customizations that cater to my personal taste and interests, and, most importantly, I like having a highly customized system that I know exactly how to automate. The blocker has been that I didn't want to work on spin as it existed. It badly needed a structural overhaul and modernization, and even more badly needed a test suite since every release involved tedious manual testing by poring over diffs between generations of the web site. And that was enough work to be intimidating, so I kept putting it off. I've separately been vaguely aware that I have been spending too much time reading Twitter (specifically) and the news (in general). It would be one thing if I were taking in that information to do something productive about it, but I haven't been. It's just doomscrolling. I've been thinking about taking a break for a while but it kept not sticking, so I decided to make a concerted effort this week. It took about four days to stop wanting to check Twitter and forcing myself to go do something else productive or at least play a game instead. Then I managed to get started on my giant refactoring project, and holy shit, Twitter has been bad for my attention span! I haven't been able to sustain this level of concentration for hours at a time in years.
Twitter's not the only thing to blame (there are a few other stressors that I've fixed in the past couple of years), but it's obviously a huge part. Anyway, this long personal ramble is prelude to the first release of DocKnot that includes my static site generator. This is not yet the full tooling from my old web tools page; specifically, it's missing faq2html, cl2xhtml, and cvs2xhtml. (faq2html will get similar modernization treatment, cvs2xhtml will probably be rewritten in Perl since I have some old, obsolete scripts that may live in CVS forever, and I may retire cl2xhtml since I've stopped using the GNU ChangeLog format entirely.) But DocKnot now contains the core of my site generation system, including the thread macro language, POD conversion (by way of Pod::Thread), and RSS feeds. Will anyone else ever use this? I have no idea; realistically, probably not. If you were starting from scratch, I'm sure you'd be better off with one of the larger and more mature static site generators that's not the idiosyncratic personal project of one individual. It is packaged for Debian because it's part of the tool chain for generating files (specifically README.md) that are included in every package I maintain, and thus is part of the transitive closure of Debian main, but I'm not sure anyone will install it from there for any other purpose. But for once making something for someone else isn't the point. This is my quirky, individual way to maintain web sites that originated in an older era of the web and that I plan to keep up-to-date (I'm long overdue to figure out what they did to HTML after abandoning the XHTML approach) because it brings me joy to do things this way. In addition to adding the static site generator, this release also has the regular sorts of bug fixes and minor improvements: better formatting of software pages for software that's packaged for Debian, not assuming every package has a TODO file, and ignoring Autoconf 2.71 backup files when generating distribution tarballs. You can get the latest version of DocKnot from CPAN as App-DocKnot, or from its distribution page. I know I haven't yet updated my web tools page to reflect this move, or changed the URL in the footer of all of my pages. This transition will be a process over the next few months and will probably prompt several more minor releases.
