Search Results: "agi"

18 December 2024

Simon Josefsson: Guix Container Images for GitLab CI/CD

I am using GitLab CI/CD pipelines for several upstream projects (libidn, libidn2, gsasl, inetutils, libtasn1, libntlm, ...) and a long-time concern for these has been that there is too little testing on GNU Guix. Several attempts have been made, and earlier this year Ludo came really close to finishing this. My earlier effort to idempotently rebuild Debian recently led me to think about re-bootstrapping Debian. Since Debian is a binary distribution, it re-uses earlier binary packages when building new packages. The prospect of re-bootstrapping Debian in a reproducible way by rebuilding all of those packages going back to the beginning of time does not appeal to me. Instead, wouldn't it be easier to build Debian trixie (or some future release of Debian) from Guix, by creating a small bootstrap sandbox that can start to build Debian packages, and then make sure that the particular Debian release can idempotently rebuild itself in a reproducible way? Then you will eventually end up with a reproducible and re-bootstrapped Debian, which paves the way for a trustworthy release of Trisquel. Fortunately, such an endeavour appears to offer many rabbit holes. Preparing Guix container images for use in GitLab pipelines is one that I jumped into in the last few days, and just came out of. Let's go directly to the point of this article: here is a GitLab pipeline job that runs in a native Guix container image that builds libksba after installing the libgpg-error dependency from Guix using the pre-built substitutes.
test-amd64-latest-wget-configure-make-libksba:
  image: registry.gitlab.com/debdistutils/guix/container:latest
  before_script:
  - lndir /gnu/store/*profile/etc/ /etc
  - rm -f /etc/group
  - groupadd --system guixbuild
  - for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
  - export HOME=/
  - export LANG=C.UTF-8
  - guix-daemon --disable-chroot --build-users-group=guixbuild &
  - guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
  - guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
  - guix describe
  - guix package -i libgpg-error
  - GUIX_PROFILE="//.guix-profile"
  - . "$GUIX_PROFILE/etc/profile"
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1
You can put that in a .gitlab-ci.yml and push it to GitLab and you will end up with a nice pipeline job output. As you may imagine, there are several things that are sub-optimal in the before_script above that ought to be taken care of by the Guix container image, and I hope to be able to remove as much of the ugliness as possible. However, that doesn't change the fact that these images are useful now, and I wanted to announce this work to allow others to start testing them and possibly offer help. I have started to make use of these images in some projects, see for example the libntlm commit for that. You are welcome to join me in the Guix container images for GitLab CI/CD project! Issues and merge requests are welcome; happy hacking, folks!
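If you want to poke at the image before wiring it into a pipeline, a quick local smoke test along these lines should work (using podman here is my assumption, not something from the post; docker run behaves the same way):
# Pull the image referenced in the job above and ask which Guix revision it ships.
podman run --rm registry.gitlab.com/debdistutils/guix/container:latest guix describe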

16 December 2024

Dirk Eddelbuettel: #45: Some r-ci Updates

Welcome to post 45 in the $R^4 series! We introduced r-ci in post #32 nearly four years ago. It has found pretty widespread use and adoption, and we received a few kind words then (in the linked issue) and also more recently (in a follow-up comment) from which we merrily quote:
[...] almost 3 years later on and I have had zero problems with this CI setup. For people who want reliable R software, resources like these are invaluable.
And while we followed up with post #41 about r2u for simple continuous integration, we may not have posted when we based r-ci on r2u (for the obvious Linux usage case). So let's make time now for a (comparatively smaller) update, and an update to the usage examples. We made two changes in the last few days. One is an (obvious in hindsight) simplification. Given that the bootstrap step was always executed, and needed no parameters, we pulled it into a new aggregated setup simply called r-ci that includes it so that it can be omitted as a step in the yaml file. Second, we recently needed Fortran on macOS too, and realized it was not installed by default, so we just added that too. With that, a real and used example is now as simple as the screenshot to the left (and hence one paragraph shorter). The trained eye will no doubt observe that there is nothing specific to a given repo. And that is basically the key feature: we can simply copy this file around and get fast and easy and reliable CI by taking advantage of the underlying robustness of r2u solving all dependencies automagically and reliably. The option to enable macOS is also solid and compelling as the GitHub runners are fast (but more expensive in how they count against the limit of minutes, so again a tradeoff to make), as is the option to run coverage if one so desires. Some of my repos do too. Take a look at the r-ci website which has more examples for the other supported CI services it can be used with, and feel free to ask questions as issues in the repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. Please report excessive re-aggregation in third-party for-profit settings.

Russ Allbery: Review: Finders

Review: Finders, by Melissa Scott
Series: Firstborn, Lastborn #1
Publisher: Candlemark & Gleam
Copyright: 2018
ISBN: 1-936460-87-4
Format: Kindle
Pages: 409
Finders is a far future science fiction novel with cyberpunk vibes. It is the first of a series, but the second (and, so far, only other) book of the series is a prequel. It stands alone reasonably well (more on that later). Cassilde Sam is a salvor. That means she specializes in exploring ancient wrecks and ruins left behind by the Ancients and salvaging materials that can be reused. The most important of those are what are called Ancestral elements: BLUE, which can hold programming; GOLD, which reacts to BLUE instructions; RED, which produces actions or output; and GREEN, the rarest and most valuable, which powers everything else. Cassilde and her partner Dai Winter file claims on newly-discovered or incompletely salvaged Ancestor sites and then extract elemental material and anything else of value in their small salvage ship. Cassilde is also dying. She has Lightman's, an incurable degenerative disease that can only be treated with ever-increasing quantities of GREEN. It's hard to sleep, hard to get warm, hard to breathe, and eventually she'll run out of money to pay for the GREEN and she'll die. To push that day off into the future, she and Dai need work. The good news is that the wreckage of a new Ancestor sky palace was discovered in a long orbit and will create enough salvage work for every experienced salvor in the system. The bad news is that they're not qualified to bid on it. They need a scholar with a class-one license to bid on the best sections, and they haven't had a reliable scholar since their former partner and lover Summerland Ashe picked the opposite side in the Troubles and left the Fringe for the Entente, the more densely settled and connected portion of human space. But, unexpectedly and suspiciously, Ashe may be back and offering to work with them again. So, first, I love this setting. This is far from the first SF novel that is set in the aftermath of a general collapse of human civilization and revolving around discovering lost mysteries. Most examples of that genre are post-apocalyptic novels limited to Earth or the local solar system, but Kate Elliott's Unconquerable Sun comes immediately to mind. It's also not the first space archaeology series I've read; Kristine Kathryn Rusch's story series starting with "Diving into the Wreck" also came to mind. But I don't recall the last time I've seen the author sell the setting so effectively. This is a world with starships and spaceports and clearly advanced technology, but it feels like a post-collapse society that's built on ruins. It's not just that technology runs on half-understood Ancestral elements and states fight over control of debris fields. It's also that the society repurposes Ancestral remnants in ways that both they and the reader know weren't originally intended, and that sometimes are more ingenious or efficient than how the Ancestors probably used them. There's a creative grittiness here that reminds me of good cyberpunk. It's not just good atmospheric writing, though. Scott makes a world-building decision that is going to sound trivial when I say it, but that has brilliant implications for the rest of the setting. There was not just one collapse; there were two. The Ancestor civilization, presumed to be the first human civilization, has passed into myth, quite literally when it comes to the stories around its downfall in the aftermath of a war against AIs.
After the Ancestors came the Successors, who followed a similar salvage and rebuild approach and got as far as inventing their own warp drive technology that was based on but different than the Ancestor technology. Then they also collapsed, leaving their adapted technology and salvage operations layered over Ancestor sites. Cassilde's civilization is the third human starfaring civilization, and it is very specifically the third, neither the second nor one of dozens. This has so many small but effective implications that improve this story. A fall happened twice, so it feels like a pattern that makes Cassilde's civilization paranoid, but it happened for two very different reasons, so there is room to argue against it being a pattern. Salvage is harder because of the layering of Ancestor and Successor activity. Successors had their own way of controlling technology that is not accessible to Cassilde and her crew but is also not how the technology was intended to be used, which sends small ripples of interesting complexity through the background. And salvors are competing not only against each other but also against Successor salvage operations for which they have fragmentary records. It's a beautifully effective touch. Melissa Scott has been publishing science fiction for forty years, and it shows in this book. The protagonists are older characters: established professionals with resource problems but also social connections and an earned reputation, people who are trying to do a job and live their lives, not change the world. The writing is competent, deft, and atmospheric, with the confidence of long practice, but it also has the feel of an earlier era of science fiction. I mentioned the cyberpunk influence, which shows in the grittiness of the descriptions, the marginality of the characters in society, and the background theme of repurposing and reusing technology in unintended ways. This is the sort of book that feels solidly in the center of science fiction, without the genre mixing into either fantasy or romance that has become somewhat more common, and also without the dramatics of space opera (although the reader discovers that the stakes of this novel may be higher than anyone realized). And yet, so much of this book is about navigating a complicated romantic relationship, and that's where the story structure felt a bit odd. Cassilde, Dai, and Ashe were a polyamorous triad (polyamory also shows up in Scott's excellent Roads of Heaven series), and much of the first third of the book deals with the fracturing of trust with Ashe and their renegotiation of that relationship given his return. This is refreshingly written as the thoughtful interaction of three adults who take issues of trust seriously, but that also means it's much less dramatic than it sounds, and that means this book starts exceptionally slow. Scott is going somewhere, and the slow build became engrossing around the midpoint of the book, but I had to fight to stick with it at the start. About 80% of the way through this book, I had no idea how Scott was going to wrap things up in the pages remaining and was bracing myself for some sort of series cliffhanger. This is not what happens; the plot is not fully resolved in every detail, but it reaches a conclusion of sorts that does not mandate a sequel. I did think the end was a little bit unsatisfying, though, and I want another book that explores the implications of the ending. I think it would have to be a much different book, and the tonal shift might be stark. 
I've had this book on my to-read list for a while and kept putting it off because I wasn't sure I was in the mood for something precarious and gritty. This turned out to be an accurate worry: this is literally a book about salvaging the pieces of something full of wonders inextricably connected to dangers. You have to be in a cyberpunk sort of mood. But I've never read a bad Melissa Scott book, and this is no exception. The simplicity and ALL-CAPSNESS of the Ancestral elements grated a bit, but apart from that, the world-building is exceptional and well worth the trip. Recommended, although be warned that, if you're like me, it may not grab you from the first page. Followed by Fallen, but that book is a prequel that does not share any protagonists. Content notes: disability and degenerative illness in a universe where magical cures are possible, so be warned if that specific thematic combination is not what you're looking for. Rating: 7 out of 10

13 December 2024

Emanuele Rocca: Murder Mystery: GCC Builds Failing After sbuild Refactoring

This is the story of an investigation conducted by Jochen Sprickerhof, Helmut Grohne, and myself. It was true teamwork, and we would not have reached the bottom of the issue working individually. We think you will find it as interesting and fun as we did, so here is a brief writeup. A few of the steps mentioned here took several days, others just a few minutes. What is described as a natural progression of events did not always look at all obvious in the moment.
Let us go through the Six Stages of Debugging together.

Stage 1: That cannot happen
Official Debian GCC builds start failing on multiple architectures in late November.
The build error happens on the build servers when running the testsuite, but we know this cannot happen. GCC builds are not meant to fail in case of testsuite failures! Return codes are not making the build fail, make is being called with -k, it just cannot happen.
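As a toy illustration of that -k behaviour (mine, not from the post): make keeps going past a failing command and still builds the remaining targets, and only the final exit status, which the packaging does not let fail the build, reflects the failure.
# Hypothetical two-target Makefile: target a fails, target b is still built thanks to -k.
printf 'all: a b\na:\n\tfalse\nb:\n\ttrue\n' > Makefile
make -k ; echo "make exited with status $?"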
A lot of the GCC tests are always failing in fact, and an extensive log of the results is posted to the debian-gcc mailing list, but the packages always build fine regardless.
On the build daemons, build failures take several hours.

Stage 2: That does not happen on my machine
Building on my machine running Bookworm is just fine. The Build Daemons run Bookworm and use a Sid chroot for the build environment, just like I do. Same kernel.
The only obvious difference between my setup and the Debian buildds is that I am using sbuild 0.85.0 from bookworm, and the buildds have 0.86.3~bpo12+1 from bookworm-backports. Trying again with 0.86.3~bpo12+1, the build fails on my system too. The build daemons were updated to the bookworm-backports version of sbuild at some point in late November. Ha.

Stage 3: That should not happen
There are quite a few sbuild versions in between 0.85.0 and 0.86.3~bpo12+1, but looking at recent sbuild bugs shows that sbuild 0.86.0 was breaking "quite a number of packages". Indeed, with 0.86.0 the build still fails. Trying the version immediately before, 0.85.11, the build finishes correctly. This took more time than it sounds: one run including the tests takes several hours. We need a way to shorten this somehow.
The Debian packaging of GCC allows you to specify which languages you may want to skip, and by default it builds Ada, Go, C, C++, D, Fortran, Objective C, Objective C++, M2, and Rust. When running the tests sequentially, the build logs stop roughly around the tests of a runtime library for D, libphobos. So can we still reproduce the failure by skipping everything except for D? With DEB_BUILD_OPTIONS=nolang=ada,go,c,c++,fortran,objc,obj-c++,m2,rust the build still fails, and it fails faster than before. Several minutes, not hours. This is progress, and time to file a bug. The report contains massive spoilers, so no link. :-)
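For concreteness, a rebuild narrowed down this way would look roughly like the following; the gcc-14 source package name and the use of plain dpkg-buildpackage are my assumptions, the investigation itself used sbuild.
# Fetch the GCC source and rebuild only the D bits by skipping the other languages.
apt-get source gcc-14
cd gcc-14-*/
DEB_BUILD_OPTIONS="nolang=ada,go,c,c++,fortran,objc,obj-c++,m2,rust" dpkg-buildpackage -us -uc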

Stage 4: Why does that happen?
Something is causing the build to end prematurely. It's not the OOM killer, and the kernel does not have anything useful to say in the logs. Can it be that the D language tests are sending signals to some process, and that is what's killing make? We start tracing signals sent with bpftrace by writing the following script, signals.bt:
tracepoint:signal:signal_generate {
    printf("%s (PID: %d) sent signal %d to PID %d\n", comm, pid, args->sig, args->pid);
}
And executing it with sudo bpftrace signals.bt.
The build takes its sweet time, and it fails. Looking at the trace output there's a suspicious process.exe terminating stuff.
process.exe (PID: 2868133) sent signal 15 to PID 711826
That looks interesting, but we have no clue what PID 711826 may be. Let's change the script a bit, and trace signals received as well.
tracepoint:signal:signal_generate {
    printf("PID %d (%s) sent signal %d to %d\n", pid, comm, args->sig, args->pid);
}

tracepoint:signal:signal_deliver {
    printf("PID %d (%s) received signal %d\n", pid, comm, args->sig);
}
The working version of sbuild was using dumb-init, whereas the new one features a little init in perl. We patch the current version of sbuild by making it use dumb-init instead, and trace two builds: one with the perl init, one with dumb-init.
Here are the signals observed when building with dumb-init.
PID 3590011 (process.exe) sent signal 2 to 3590014
PID 3590014 (sleep) received signal 9
PID 3590011 (process.exe) sent signal 15 to 3590063
PID 3590063 (std.process tem) received signal 9
PID 3590011 (process.exe) sent signal 9 to 3590065
PID 3590065 (std.process tem) received signal 9
And this is what happens with the new init in perl:
PID 3589274 (process.exe) sent signal 2 to 3589291
PID 3589291 (sleep) received signal 9
PID 3589274 (process.exe) sent signal 15 to 3589338
PID 3589338 (std.process tem) received signal 9
PID 3589274 (process.exe) sent signal 9 to 3589340
PID 3589340 (std.process tem) received signal 9
PID 3589274 (process.exe) sent signal 15 to 3589341
PID 3589274 (process.exe) sent signal 15 to 3589323
PID 3589274 (process.exe) sent signal 15 to 3589320
PID 3589274 (process.exe) sent signal 15 to 3589274
PID 3589274 (process.exe) received signal 9
PID 3589341 (sleep) received signal 9
PID 3589273 (sbuild-usernsex) sent signal 9 to 3589320
PID 3589273 (sbuild-usernsex) sent signal 9 to 3589323
There are a few additional SIGTERMs being sent when using the perl init; that's helpful. At this point we are fairly convinced that process.exe is worth additional inspection. The source code of process.d shows something interesting:
1221 @system unittest
1222 {
[...]
1247     auto pid = spawnProcess(["sleep", "10000"],
[...]
1260     // kill the spawned process with SIGINT
1261     // and send its return code
1262     spawn((shared Pid pid) {
1263         auto p = cast() pid;
1264         kill(p, SIGINT);
So yes, there's our sleep and the SIGINT (signal 2) right in the unit tests of process.d, just like we have observed in the bpftrace output.
Can we study the behavior of process.exe in isolation, separately from the build? Indeed we can. Let's take the executable from a failed build, and try running it under /usr/libexec/sbuild-usernsexec.
First, we prepare a chroot inside a suitable user namespace:
unshare --map-auto --setuid 0 --setgid 0 mkdir /tmp/rootfs
cd /tmp/rootfs
cat /home/ema/.cache/sbuild/unstable-arm64.tar | unshare --map-auto --setuid 0 --setgid 0 tar xf -
unshare --map-auto --setuid 0 --setgid 0 mkdir /tmp/rootfs/whatever
unshare --map-auto --setuid 0 --setgid 0 cp process.exe /tmp/rootfs/
Now we can run process.exe on its own using the perl init, and trace signals at will:
/usr/libexec/sbuild-usernsexec --pivotroot --nonet u:0:100000:65536  g:0:100000:65536 /tmp/rootfs ema /whatever -- /process.exe
We can compare the behavior of the perl init vis-a-vis the one using dumb-init in milliseconds instead of minutes.

Stage 5: Oh, I see.
Why process.exe sends more SIGTERMs when using the perl init is now the big question. We have a simple reproducer, so this is where using strace becomes possible.
sudo strace --user ema --follow-forks -o sbuild-dumb-init.strace ./sbuild-usernsexec-dumb-init --pivotroot --nonet u:0:100000:65536  g:0:100000:65536 /tmp/dumbroot ema /whatever -- /process.exe
We start comparing the strace output of dumb-init with that of perl-init, looking in particular for different calls to kill.
Here is what process.exe does under dumb-init:
3593883 kill(-2, SIGTERM)               = -1 ESRCH (No such process)
No such process. Under perl-init instead:
3593777 kill(-2, SIGTERM <unfinished ...>
The process group is there under perl-init!
That is a kill with negative pid. From the kill(2) man page:
If pid is less than -1, then sig is sent to every process in the process group whose ID is -pid.
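(Not from the post, but the same mechanism is easy to see from a shell: kill -- -<pgid> signals every member of a process group, exactly like kill(2) with a negative pid.)
# Start a background process, look up its process group, then signal the whole group.
sleep 300 &
pgid=$(ps -o pgid= -p $! | tr -d ' ')
kill -TERM -- -"$pgid"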
It would have been very useful to see this kill with negative pid in the output of bpftrace; why didn't we? The tracepoint used, tracepoint:signal:signal_generate, shows when signals are actually being sent, and not the syscall being called. To confirm, one can trace tracepoint:syscalls:sys_enter_kill and see the negative PIDs, for example:
PID 312719 (bash) sent signal 2 to -312728
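A one-liner along these lines (my sketch, not the exact command used) produces that kind of output; the cast keeps negative PIDs printed as signed values:
sudo bpftrace -e 'tracepoint:syscalls:sys_enter_kill { printf("PID %d (%s) sent signal %d to %d\n", pid, comm, args->sig, (int32)args->pid); }'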
The obvious question at this point is: why is there no process group 2 when using dumb-init?

Stage 6: How did that ever work?
We know that process.exe sends a SIGTERM to every process in the process group with ID 2. To find out what this process group may be, we spawn a shell with dumb-init and observe under /proc PIDs 1, 16, and 17. With perl-init we have 1, 2, and 17. When running dumb-init, there are a few forks before launching the program, explaining the difference. Looking at /proc/2/cmdline we see that it's bash, i.e. the program we are running under perl-init. When building a package, that is dpkg-buildpackage itself.
The test is accidentally killing its own process group.
Now where does this -2 come from in the test?
2363     // Special values for _processID.
2364     enum invalid = -1, terminated = -2;
Oh. -2 is used as a special value for PID, meaning "terminated". And there's a call to kill() later on:
2694     do { s = tryWait(pid); } while (!s.terminated);
[...]
2697     assertThrown!ProcessException(kill(pid));
What sets pid to terminated, you ask?
Here is tryWait:
2568 auto tryWait(Pid pid) @safe
2569 {
2570     import std.typecons : Tuple;
2571     assert(pid !is null, "Called tryWait on a null Pid.");
2572     auto code = pid.performWait(false);
And performWait:
2306         _processID = terminated;
The solution, dear reader, is not to kill.
PS: the bug report with spoilers for those interested is #1089007.

9 December 2024

Freexian Collaborators: Debian Contributions: OpenMPI transitions, cPython 3.12.7+ update uploads, Python 3.13 Transition, and more! (by Anupa Ann Joseph, Stefano Rivera)

Debian Contributions: 2024-11 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Transition management, by Emilio Pozuelo Monfort Emilio has been helping finish the mpi-defaults switch to mpich on 32-bit architectures, and the openmpi transitions. This involves filing bugs for the reverse dependencies, doing NMUs, and requesting removals for outdated (Not Built from Source) binaries on 32-bit architectures where openmpi is no longer available. Those transitions got entangled with a few others, such as the petsc stack, and were blocking many packages from migrating to testing. These transitions were completed in early December.

cPython 3.12.7+ update uploads, by Stefano Rivera Python 3.12 had failed to build on mips64el, due to an obscure dh_strip failure. The mips64el porters never figured it out, but the missing build on mips64el was blocking migration to Debian testing. After waiting a month, enough changes had accumulated in the upstream 3.12 maintenance git branch that we could apply them in the hope of changing the output enough to avoid breaking dh_strip. This worked. Of course there were other things to deal with too. A test started failing due to a Debian-specific patch we carry for python3.x-minimal, and it needed to be reworked. And Stefano forgot to strip the trailing + from PY_VERSION, which confuses some python libraries. This always requires another patch when applying git updates from the maintenance branch. Stefano added a build-time check to catch this mistake in the future. Python 3.12.7 migrated.
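The actual check lives in the Debian python3.12 packaging; a minimal stand-in for the idea might look like this, assuming PY_VERSION is the version string defined in Include/patchlevel.h (as it is in upstream CPython):
# Hypothetical build-time guard: abort if PY_VERSION still carries the upstream '+' suffix.
if grep -Eq '#define PY_VERSION[[:space:]]+"[^"]*\+"' Include/patchlevel.h; then
    echo "PY_VERSION still ends in '+'; strip it before building" >&2
    exit 1
fi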

Python 3.13 Transition, by Stefano Rivera and Colin Watson During November the Python 3.13-add transition started. This is the first stage of supporting a new version of Python in the Debian archive (after preparatory work), adding it as a new supported but non-default version. All packages with compiled Python extensions need to be re-built to add support for the new version. We have covered the lead-up to this transition in the past. Due to preparation, many of the failures we hit were expected and we had patches waiting in the bug tracker. These could be NMUed to get the transition moving. Others had been known about but hadn't been worked on yet. Some other packages ran into new issues, as we got further into the transition than we'd been able to in preparation. The whole Debian Python team has been helping with this work. The rebuild stage of the 3.13-add transition is now over, but many packages need work before britney will let python3-defaults migrate to testing.

Limiting build concurrency based on available RAM, by Helmut Grohne In recent years, the concurrency of CPUs has been increasing, as has the demand for RAM by linkers. What has not been increasing as quickly is the RAM supply in typical machines. As a result, we more frequently run into situations where package builds exhaust memory when building at full concurrency. Helmut initiated a discussion about generalizing an approach to this in Debian packages. He researched existing code that limits concurrency, as well as possible extensions to debhelper and dpkg that would provide concurrency limits based on available system RAM. Thus far there is consensus on the need for a more general solution, but ideas are still being collected for the precise solution.
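As a back-of-the-envelope sketch of the idea (the 2 GiB-per-job figure and the use of MemAvailable are placeholders of mine, not anything agreed in that discussion):
# Cap parallelism by available RAM as well as CPU count, assuming ~2 GiB per build job.
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
jobs_by_ram=$(( avail_kb / (2 * 1024 * 1024) ))
[ "$jobs_by_ram" -lt 1 ] && jobs_by_ram=1
cpus=$(nproc)
jobs=$(( cpus < jobs_by_ram ? cpus : jobs_by_ram ))
make -j"$jobs"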

MiniDebConf Toulouse at Capitole du Libre The whole Freexian Collaborator team attended MiniDebConf Toulouse, part of the Capitole du Libre event. Several members of the team gave talks. Stefano and Anupa worked as part of the video team, streaming and recording the event's talks.

Miscellaneous contributions
  • Stefano looked into packaging the latest upstream python-falcon version in Debian, in support of the Python 3.13 transition. This appeared to break python-hug, which is sadly looking neglected upstream, and the best course of action is probably its removal from Debian.
  • Stefano uploaded videos from various 2024 Debian events to PeerTube and YouTube.
  • Stefano and Santiago visited the site for DebConf 2025 in Brest, after the MiniDebConf in Toulouse, to meet with the local team and scout out the venue. The ongoing DebConf 25 organization work of last month also included handling the logo and artwork call for proposals.
  • Stefano helped the press team to edit a post for bits.debian.org on OpenStreetMap s migration to Debian.
  • Carles implemented multiple language support on po-debconf-manager and tested it using Brazilian Portuguese during MiniDebConf Toulouse. The system was also tested and improved by reviewing more than 20 translations to Catalan, creating merge requests for those packages, and providing user support to new users. Additionally, Carles implemented better status transitions, configuration key management and other small improvements.
  • Helmut sent 32 patches for cross build failures. The wireplumber one was an interactive collaboration with Dylan Aïssi.
  • Helmut continued to monitor the /usr-move, sent a patch for lib64readline8 and continued several older patch conversations. lintian now reports some aliasing issues in unstable.
  • Helmut initiated a discussion on the semantics of *-for-host packages. More feedback is welcome.
  • Helmut improved the crossqa.debian.net infrastructure so that running lintian on larger packages fails less often.
  • Helmut continued maintaining rebootstrap, mostly dropping applied patches and continuing discussions of submitted patches.
  • Helmut prepared a non-maintainer upload of gzip for several long-standing bugs.
  • Colin came up with a plan for resolving the multipart vs. python-multipart name conflict, and began work on converting reverse-dependencies.
  • Colin upgraded 42 Python packages to new upstream versions. Some were complex: python-catalogue had some upstream version confusion, pydantic and rpds-py involved several Rust package upgrades as prerequisites, and python-urllib3 involved first packaging python-quart-trio and then vendoring an unpackaged test-dependency.
  • Colin contributed Incus support to needrestart upstream.
  • Lucas set up a machine to do a rebuild of all ruby reverse dependencies to check what will be broken by adding ruby 3.3 as an alternative interpreter. The tool used for this is mass-rebuild and the initial rebuilds have already started. The ruby interpreter maintainers are planning to experiment with debusine next time.
  • Lucas is organizing a Debian Ruby sprint towards the end of January in Paris. The team's plan is to finish any remaining bits of the Ruby 3.3 transition by then, try to push the Rails 7 transition, and fix RC bugs affecting the Ruby ecosystem in Debian.
  • Anupa attended a Debian Publicity team meeting in-person during MiniDebCamp Toulouse.
  • Anupa moderated and posted in the Debian Administrator group in LinkedIn.

8 December 2024

Dirk Eddelbuettel: pinp 0.0.11 on CRAN: Maintenance

A new version of our pinp package arrived on CRAN today, and is the first release in four years. The pinp package allows for snazzier one or two column Markdown-based pdf vignettes, and is now used by a few packages. A screenshot of the package vignette can be seen below. Additional screenshots are at the pinp page. This release contains no new features or new user-facing changes but reflects the standard package and repository maintenance over the four-year window since the last release: updating of actions, updating of URLs and addressing small packaging changes spotted by ever-more-vigilant R checking code. The NEWS entry for this release follows.

Changes in pinp version 0.0.11 (2024-12-08)
  • Standard package maintenance for continuous integration, URL updates, and packaging conventions
  • Correct two minor nags in the Rd file

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russ Allbery: Review: Why Buildings Fall Down

Review: Why Buildings Fall Down, by Matthys Levy & Mario Salvadori
Illustrator: Kevin Woest
Publisher: W.W. Norton
Copyright: 1992
Printing: 1994
ISBN: 0-393-31152-X
Format: Trade paperback
Pages: 314
Why Buildings Fall Down is a non-fiction survey of the causes of structure collapses, along with some related topics. It is a sequel of sorts to Why Buildings Stand Up by Mario Salvadori, which I have not read. Salvadori was, at the time of writing, Professor Emeritus of Architecture at Columbia University (he died in 1997). Levy is an award-winning architectural engineer, and both authors were principals at the structural engineering firm Weidlinger Associates. There is a revised and updated 2002 edition, but this review is of the original 1992 edition. This is one of those reviews that comes with a small snapshot of how my brain works. I got fascinated by the analysis of the collapse of Champlain Towers South in Surfside, Florida in 2021, thanks largely to a random YouTube series on the tiny channel of a structural engineer. Somewhere in there (I don't remember where, possibly from that channel, possibly not) I saw a recommendation for this book and grabbed a used copy in 2022 with the intent of reading it while my interest was piqued. The book arrived, I didn't read it right away, I got distracted by other things, and it migrated to my shelves and sat there until I picked it up on an "I haven't read nonfiction in a while" whim. Two years is a pretty short time frame for a book to sit on my shelf waiting for me to notice it again. The number of books that have been doing that for several decades is, uh, not small. Why Buildings Fall Down is a non-technical survey of structure failures. These are mostly buildings, but also include dams, bridges, and other structures. It's divided into 18 fairly short chapters, and the discussion of each disaster is brisk and to the point. Most of the structures discussed are relatively recent, but the authors talk about the Meidum Pyramid, the Parthenon (in the chapter on intentional destruction by humans), and the Pavia Civic Tower (in the chapter about building death from old age). If you are someone who has already been down the structural failure rabbit hole, you will find chapters on the expected disasters like the Tacoma Narrows Bridge collapse and the Hyatt Regency walkway collapse, but there are a lot of incidents here, including a short but interesting discussion of the Leaning Tower of Pisa in the chapter on problems caused by soil properties. What you're going to get, in other words, is a tour of ways in which structures can fail, which is precisely what was promised by the title. This wasn't quite what I was expecting, but now I'm not sure why I was expecting something different. There is no real unifying theme here; sometimes the failure was an oversight, sometimes it was a bad design, sometimes it was a last-minute change, and sometimes it was something unanticipated. There are a lot of factors involved in structure design and any of them can fail. The closest there is to a common pattern is a lack of redundancy and sufficient safety factors, but that lack of redundancy was generally not deliberate and therefore this is not a guide to preventing a collapse. The result is a book that feels a bit like a grab-bag of structural trivia that is individually interesting but only occasionally memorable. The writing style I suspect will be a matter of taste, but once I got used to it, I rather enjoyed it. In a co-written book, it's hard to separate the voices of the authors, but Salvadori wrote most of the chapter on the law in the first person and he's clearly a character. 
(That chapter is largely the story of two trials he testified in, which, from his account, involved him verbally fencing with lawyers who attempted to claim his degrees from the University of Rome didn't count as real degrees.) If this translates to his speaking style, I suspect he was a popular lecturer at Columbia. The explanations of the structural failures are concise and relatively clear, although even with Kevin Woest's diagrams, it's hard to capture the stresses and movement in a written description. (I've found from watching YouTube videos that animations, or even annotations drawn while someone is talking, help a lot.) The framing discussion, well, sometimes that is bombastic in a way that I found amusing:
But we, children of a different era, do not want our lives to be enclosed, to be shielded from the mystery. We are eager to participate in it, to gather with our brothers and sisters in a community of thought that will lift us above the mundane. We need to be together in sorrow and in joy. Thus we rarely build monolithic monuments. Instead, we build domes.
It helps that passages like this are always short and thus don't wear out their welcome. My favorite line in the whole book is a throwaway sentence in a discussion of building failures due to explosions:
With a similar approach, it can be estimated that the chance of an explosion like that at Forty-fifth Street was at most one in thirty million, and probably much less. But this is why life is dangerous and always ends in death.
Going hard, structural engineering book! It's often appealing to learn about things from their failures because the failures are inherently more dramatic and thus more interesting, but if you were hoping for an introduction to structural engineering, this is probably not the book you want. There is an excellent and surprisingly engaging appendix that covers the basics of structural analysis in 45 pages, but you would probably be better off with Why Buildings Stand Up or another architecture or structural engineering textbook (or maybe a video course). The problem with learning by failure case study is that all the case studies tend to blend together, despite the authors' engaging prose, and nearly every collapse introduces a new structural element with new properties and new failure modes and only the briefest of explanations. This book might make you a slightly more informed consumer of the news, but for most readers I suspect it will be a collection of forgettable trivia told in an occasionally entertaining style. I think the book I wanted to read was something that went deeper into the process of forensic engineering, not just the outcomes. It's interesting to know what the cause of a failure was, but I'm more interested in how one goes about investigating a failure. What is the process, how do you organize the investigation, and how does the legal system around engineering failures work? There are tidbits and asides here, but this book is primarily focused on the structural analysis and elides most of the work done to arrive at those conclusions. That said, I was entertained. Why Buildings Fall Down is a bit dated (the opening chapter on airplanes hitting buildings reads much differently now than when it was written in 1992, and I'm sure it was updated in the 2002 edition), but it succeeds in being clear without being soulless or sounding like a textbook. I appreciate an occasional rant about nuclear weapons in a book about architecture. I'm not sure I really recommend this, but I had a good time with it. Also, I'm now looking for opportunities to say "this is why life is dangerous and always ends in death," so there is that. Rating: 6 out of 10

7 December 2024

Dominique Dumont: New cme command to update Debian Standards-Version field

Hi! While updating my Debian packages, I often have to update a field in the debian/control file. This field is named Standards-Version, and it declares which version of Debian policy the package complies with. When updating this field, one must follow the upgrading checklist. That being said, I maintain a lot of similar packages and I often have to update this Standards-Version field. This field can be updated manually with cme fix dpkg (see Managing Debian packages with cme). But this command may make other changes and does not commit the result. So I've created a new update-standards-version cme script that updates this field and commits the result. For instance:
$ cme run update-standards-version 
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Connecting to api.ftp-master.debian.org to check 31 package versions. Please wait...
Got info from api.ftp-master.debian.org for 31 packages.
Warning in 'source Standards-Version': Current standards version is '4.7.0'. Please read https://www.debian.org/doc/debian-policy/upgrading-checklist.html for the changes that may be needed on your package
to upgrade it from standard version '4.6.2' to '4.7.0'.
Offending value: '4.6.2'
Changes applied to dpkg-control configuration:
- source Standards-Version: '4.6.2' -> '4.7.0'
[master 552862c1] control: declare compliance with Debian policy 4.7.0
 1 file changed, 1 insertion(+), 1 deletion(-)
Here's the generated commit. Note that the generated log mentions the new policy version:
$ git show
commit 552862c1f24479b1c0c8c35a6289557f65e8ff3b (HEAD -> master)
Author: Dominique Dumont <dod[at]debian.org>
Date:   Sat Dec 7 19:06:14 2024 +0100
    control: declare compliance with Debian policy 4.7.0
diff --git a/debian/control b/debian/control
index cdb41dc0..e888012e 100644
--- a/debian/control
+++ b/debian/control
@@ -48,7 +48,7 @@ Build-Depends-Indep: dh-sequence-bash-completion,
                      libtext-levenshtein-damerau-perl,
                      libyaml-tiny-perl,
                      po-debconf
-Standards-Version: 4.6.2
+Standards-Version: 4.7.0
 Vcs-Browser: https://salsa.debian.org/perl-team/modules/packages/libconfig-model-perl
 Vcs-Git: https://salsa.debian.org/perl-team/modules/packages/libconfig-model-perl.git
 Homepage: https://github.com/dod38fr/config-model/wiki
Notes: I hope this will be useful to all my fellow Debian developers in reducing the boring parts of packaging activities. All the best

5 December 2024

Russ Allbery: Review: Paladin's Hope

Review: Paladin's Hope, by T. Kingfisher
Series: The Saint of Steel #3
Publisher: Red Wombat Studio
Copyright: 2021
ISBN: 1-61450-613-2
Format: Kindle
Pages: 303
Paladin's Hope is a fantasy romance novel and the third book of The Saint of Steel series. Each book of that series features different protagonists, closer to the romance series style than the fantasy series style, and stands alone reasonably well. There are a few spoilers for the previous books here, so you probably want to read the series in order. Galen is one of the former paladins of the Saint of Steel, left bereft and then adopted by the Temple of the Rat after their god dies. Even more than the paladin protagonists of the previous two books, he reacted very badly to that death and has ongoing problems with nightmares and going into berserker rages when awakened. As the book opens, he's the escort for a lich-doctor named Piper who is examining a corpse found in the river.
The last of the five was the only one who did not share a certain martial quality. He was slim and well-groomed and would be considered handsome, but he was also extraordinarily pale, as if he lived his life underground. It was this fifth man who nudged the corpse with the toe of his boot and said, "Well, if you want my professional opinion, this great goddamn hole in his chest is probably what killed him."
As it turns out, slim and well-groomed and exceedingly pale is Galen's type. This is another paladin romance, this time between two men. It's almost all romance; the plot is barely worth mentioning. About half of the book is an exploration of a puzzle dungeon of the sort that might be fun in a video game or tabletop RPG, but that I found rather boring and monotonous in a novel. This creates a lot more room for the yearning and angst. Kingfisher tends towards slow-burn romances. This romance is a somewhat faster burn than some of her other books, but instead implodes into one of the most egregiously stupid third-act breakups that I've read in a romance plot. Of all the Kingfisher paladin books, I think this one was hurt the most by my basic difference in taste from the author. Kingfisher finds constant worrying and despair over being good enough for the romantic partner to be an enjoyable element, and I find it incredibly annoying. I think your enjoyment of this book will heavily depend on where you fall on that taste divide. The saving grace of this book are the gnoles, who are by far the best part of this world. Earstripe, a gnole constable, is the one who found the body that the book opens with and he drives most of the plot, such that it is. He's also the source of the best banter in the book, which is full of pointed and amused gnole observations about humans and their various stupidities. Given that I was also grumbling about human stupidities for most of the book, the gnole viewpoint and I got along rather well.
"God's stripes." Earstripe shook his head in disbelief. "Bone-doctor would save some gnole, yes? If some gnole was hurt." "Of course," said Piper. "If I could." "And tomato-man would save some gnole?" He swung his muzzle toward Galen. "If some gnome needed big human with sword?" "Yes, of course." Earstripe spread his hands, claws gleaming. "A gnole saves some human. Same thing." He took a deep breath, clearly choosing his words carefully. "A gnole's compassion does not require fur."
We learn a great deal more about gnole culture, all of which I found fascinating, and we get a rather satisfying amount of gnole acerbic commentary. Kingfisher is very good at banter, and dialogue in general, which also smoothes over the paucity of detailed plot. There was no salvaging the romance, at least for me, but I did at least like Piper, and Galen wasn't too bad when he wasn't being annoyingly self-destructive. I had been wondering a little if gay romance would, like sapphic romance, avoid my dislike of heterosexual gender roles. I think the jury is still out, but it did not work in this book because Galen is so committed to being the self-sacrificing protector who is unable to talk about his feelings that he single-handedly introduced a bunch of annoying pieces of the male gender role anyway. I will have to try that experiment with a book that doesn't involve hard-headed paladins. I have yet to read a bad T. Kingfisher novel, but I thought this one was on the weaker side. The gnoles are great and kept me reading, but I wish there had been a more robust plot, a lot less of the romance, and no third-act breakup. As is, I recommend the other Saint of Steel books over this one. Ah well. Followed by Paladin's Faith. Rating: 6 out of 10

3 December 2024

Russ Allbery: Review: Astrid Parker Doesn't Fail

Review: Astrid Parker Doesn't Fail, by Ashley Herring Blake
Series: Bright Falls #2
Publisher: Berkley Romance
Copyright: November 2022
ISBN: 0-593-33644-5
Format: Kindle
Pages: 365
Astrid Parker Doesn't Fail is a sapphic romance novel and a sequel to Delilah Green Doesn't Care. This is a romance style of sequel, which means that it spoils the previous book but involves a different set of protagonists, one of whom was a supporting character in the previous novel. I suppose the title is a minor spoiler for Delilah Green Doesn't Care, but not one that really matters. Astrid Parker's interior design business is in trouble. The small town of Bright Falls doesn't generate a lot of business, and there are limits to how many dentist office renovations she's willing to do. The Everwood Inn is her big break: Pru Everwood has finally agreed to remodel and, even better, Innside America wants to feature the project. The show always works with local designers, and that means Astrid. National TV exposure is just what she needs to turn her business around and avoid an unpleasant confrontation with her domineering, perfectionist mother. Jordan Everwood is an out-of-work carpenter and professional fuck-up. Ever since she lost her wife, nothing has gone right either inside or outside of her head. Now her grandmother is renovating the favorite place of her childhood, and her novelist brother had the bright idea of bringing her to Bright Falls to help with the carpentry work. The remodel and the HGTV show are the last chance for the inn to stay in business and stay in the family, and Jordan is terrified that she's going to fuck that up too. And then she dumps coffee all over the expensive designer dress of a furious woman because she wasn't watching where she was going, and that woman turns out to be the designer of the Everwood Inn renovation. A design that Jordan absolutely loathes. The reader met Astrid in Delilah Green Doesn't Care (which you definitely want to read first). She's a bit better than she was there, but she's still uptight and unhappy and determined not to think too hard about why. When Jordan spills coffee down her favorite dress in their first encounter, shattering her fragile professional calm, it's not a meet-cute. Astrid is awful to her. Her subsequent regret, combined with immediately having to work with her and the degree to which she finds Jordan surprisingly attractive (surprising in part because Astrid thinks she's straight), slowly crack open Astrid's too-controlled life. This book was, once again, just compulsively readable. I read most of it the same day that I started it, staying up much too late, and then finished it the next day. It also once again made me laugh in delight at multiple points. I am a sucker for stories about someone learning how to become a better person, particularly when it involves a release of anxiety, and oh my does Blake ever deliver on that. Jordan's arc is more straightforward than Astrid's (she just needs to get her confidence back), but her backstory is a lot more complex than it first appears, including a morally ambiguous character who I would hate in person but who I admired as a deft and tricky bit of characterization. The characters from Delilah Green Doesn't Care of course play a significant role. Delilah in particular is just as much of a delight here as she was in the first book, and I enjoyed seeing the development of her relationship with her step-sister. But the new characters, both the HGTV film crew and the Everwoods, are also great. I think Blake has a real knack for memorable, distinct supporting characters that add a lot of depth to the main romance plot.
I thought this book was substantially more sex-forward than Delilah Green Doesn't Care, with some lust at first or second sight, a bit more physical description of bodies, and an extended section in the middle of the book that's mostly about sex. If this is or is not your thing in romance novels, you may have a different reaction to this book than the previous one. There is, unfortunately, another third-act break-up, and this one annoyed me more than the one in Delilah Green Doesn't Care because it felt more unnecessary and openly self-destructive. The characters felt like they were headed towards a more sensible and less dramatic resolution, and then that plot twist caught me by surprise in an unpleasant way. After two books, I'm getting the sense that Blake has a preferred plot arc, at least in this series, and I wish she'd varied the story structure a bit more. Still, the third-act conflict was somewhat believable and the resolution was satisfying enough to salvage it. If it weren't for some sour feelings about the shape of that plot climax, I would have said that I liked this book even better than Delilah Green Doesn't Care, and that's a high bar. This series is great, and I will definitely be reading the third one. I'm going to be curious how that goes since it's about Iris, who so far has worked better for me as a supporting character than a protagonist. But Blake has delivered compulsively readable and thoroughly enjoyable books twice now, so I'm definitely here for the duration. If you like this sort of thing, I highly recommend this whole series. Followed by Iris Kelly Doesn't Date in the romance series sense, but as before this book is a complete story with a satisfying ending. Rating: 9 out of 10

2 December 2024

Bits from Debian: Bits from the DPL

This is bits from DPL for November. MiniDebConf Toulouse I had the pleasure of attending the MiniDebConf in Toulouse, which featured a range of engaging talks, complementing those from the recent MiniDebConf in Cambridge. Both events were preceded by a DebCamp, which provided a valuable opportunity for focused work and collaboration. DebCamp During these events, I participated in numerous technical discussions on topics such as maintaining long-neglected packages, team-based maintenance, FTP master policies, Debusine, and strategies for separating maintainer script dependencies from runtime dependencies, among others. I was also fortunate that members of the Publicity Team attended the MiniDebCamp, giving us the opportunity to meet in person and collaborate face-to-face. Independent of the ongoing lengthy discussion on the Debian Devel mailing list, I encountered the perspective that unifying Git workflows might be more critical than ensuring all packages are managed in Git. While I'm uncertain whether these two questions--adopting Git as a universal development tool and agreeing on a common workflow for its use--can be fully separated, I believe it's worth raising this topic for further consideration. Attracting newcomers In my own talk, I regret not leaving enough time for questions--my apologies for this. However, I want to revisit the sole question raised, which essentially asked: Is the documentation for newcomers sufficient to attract new contributors? My immediate response was that this question is best directed to new contributors themselves, as they are in the best position to identify gaps and suggest improvements that could make the documentation more helpful. That said, I'm personally convinced that our challenges extend beyond just documentation. I don't get the impression that newcomers are lining up to join Debian only to be deterred by inadequate documentation. The issue might be more about fostering interest and engagement in the first place. My personal impression is that we sometimes fail to convey that Debian is not just a product to download for free but also a technical challenge that warmly invites participation. Everyone who respects our Code of Conduct will find that Debian is a highly diverse community, where joining the project offers not only opportunities for technical contributions but also meaningful social interactions that can make the effort and time truly rewarding. In several of my previous talks (you can find them on my talks page; just search for "team," and don't be deterred if you see "Debian Med" in the title; it's simply an example), I emphasized that the interaction between a mentor and a mentee often plays a far more significant role than the documentation the mentee has to read. The key to success has always been finding a way to spark the mentee's interest in a specific topic that resonates with their own passions. Bug of the Day In my presentation, I provided a brief overview of the Bug of the Day initiative, which was launched with the aim of demonstrating how to fix bugs as an entry point for learning about packaging. While the current level of interest from newcomers seems limited, the initiative has brought several additional benefits. I must admit that I'm learning quite a bit about Debian myself. I often compare it to exploring a house's cellar with a flashlight: you uncover everything from hidden marvels to things you might prefer to discard.
I've also come across traces of incredibly diligent people who have invested their spare time polishing these hidden treasures (what we call NMUs). The janitor, a service in Salsa that automatically updates packages, fits perfectly into this cellar metaphor, symbolizing the ongoing care and maintenance that keep everything in order. I hadn't realized the immense amount of silent work being done behind the scenes--thank you all so much for your invaluable QA efforts. Reproducible builds It might be unfair to single out a specific talk from Toulouse, but I'd like to highlight the one on reproducible builds. Beyond its technical focus, the talk also addressed the recent loss of Lunar, whom we mourn deeply. It served as a tribute to Lunar's contributions and legacy. Personally, I've encountered packages maintained by Lunar and bugs he had filed. I believe that taking over his packages and addressing the bugs he reported is a meaningful way to honor his memory and acknowledge the value of his work. Advent calendar bug squashing I'd like to promote an idea originally introduced by Thorsten Alteholz, who in 2011 proposed a Bug Squashing Advent Calendar for the Debian Med team. (For those unfamiliar with the concept of an Advent Calendar, you can find an explanation on Wikipedia.) While the original version included a fun graphical element, which we've had to set aside due to time constraints (volunteers, anyone?), we've kept the tradition alive by tackling one bug per day from December 1st to 24th each year. This initiative helps clean up issues that have accumulated over the year. Regardless of whether you celebrate the concept of Advent, I warmly recommend this approach as a form of continuous bug-squashing party for every team. Not only does it contribute to the release readiness of your team's packages, but it's also an enjoyable and bonding activity for team members. Best wishes for a cheerful and productive December
Andreas.

1 December 2024

Guido Günther: Free Software Activities November 2024

Another short status update of what happened on my side last month. The larger blocks are the Phosh 0.43 release, the initial file chooser portal, phosh-osk-stub now handling the digit, number, phone and PIN input purposes via special layouts, as well as Phoc mostly catching up with wlroots 0.18 and the current development version targeting 0.19.
phosh, phoc, phosh-mobile-settings, libphosh-rs, phosh-osk-stub, phosh-tour, pfs, xdg-desktop-portal-phosh, meta-phosh, Debian, Calls, libcall-ui, git-buildpackage, wlroots, python-dbusmock, xdg-spec, ashpd, govarnam, varnam-schemes

Reviews
This is not code by me but reviews I did on other people's code. The list is incomplete, but I hope to improve on this in the upcoming months. Thanks for the contributions!

Help Development
If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

Comments?
Join the Fediverse thread

Colin Watson: Free software activity in November 2024

Most of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay.

Conferences
I attended MiniDebConf Toulouse 2024, and the MiniDebCamp before it. Most of my time was spent with the Freexian folks working on debusine; Stefano gave a talk about its current status with a live demo (frantically fixed up over the previous couple of days, as is traditional) and with me and others helping to answer questions at the end. I also caught up with some people I haven't seen in ages, ate a variety of delicious cheeses, and generally had a good time. Many thanks to the organizers and sponsors! After the conference, Freexian collaborators spent a day and a half doing some planning for next year, and then went for an afternoon visiting the Cité de l'espace.

Rust team
I upgraded these packages to new upstream versions, as part of upgrading pydantic and rpds-py:

Python team
Last month, I mentioned that we still need to work out what to do about the multipart vs. python-multipart name conflict in Debian (#1085728). We eventually managed to come up with an agreed plan; Sandro has uploaded a renamed binary package to experimental, and I've begun work on converting reverse-dependencies (asgi-csrf, fastapi, python-curies, and starlette done so far). There's a bit more still to do, but I expect we can finish it soon. I fixed problems related to adding Python 3.13 support in: I fixed some packaging problems that resulted in failures any time we add a new Python version to Debian: I fixed other build/autopkgtest failures in: I packaged python-quart-trio, needed for a new upstream version of python-urllib3, and contributed a small packaging tweak upstream. I backported a twisted fix that caused problems in other packages, including breaking debusine's tests. I disentangled some upstream version confusion in python-catalogue, and upgraded to the current upstream version. I upgraded these packages to new upstream versions:

Other small fixes
I contributed Incus support to needrestart upstream. In response to Helmut's Cross building talk at MiniDebConf Toulouse, I fixed libfilter-perl to support cross-building (5b4c2e10, f9788c27). I applied a patch to move aliased files from / to /usr in iprutils (#1087733). I adjusted debconf to use the new /usr/lib/apt/apt-extracttemplates path (#1087523). I upgraded putty to 0.82.

Russ Allbery: Review: Unexploded Remnants

Review: Unexploded Remnants, by Elaine Gallagher
Publisher: Tordotcom
Copyright: 2024
ISBN: 1-250-32522-6
Format: Kindle
Pages: 111
Unexploded Remnants is a science fiction adventure novella. The protagonist and world background would support an episodic series, but as of this writing it stands alone. It is Elaine Gallagher's first professional publication. Alice is the last survivor of Earth: an explorer, information trader, and occasional associate of the Archive. She scouts interesting places, looks for inconsistencies in the stories the galactic civilizations tell themselves, and pokes around ruins for treasure. As this story opens, she finds a supposedly broken computer core in the Alta Sidoie bazaar that is definitely not what the trader thinks it is. Very shortly thereafter, she's being hunted by a clan of dangerous Delosi while trying to decide what to do with a possibly malevolent AI with frightening intrusion abilities. This is one of those stories where all the individual pieces sounded great, but the way they were assembled didn't click for me. Unusually, I'm not entirely sure why. Often it's the characters, but I liked Alice well enough. The Lewis Carroll allusions were there but not overdone, her computer agent Bugs is a little too much of a Warner Brothers cartoon but still interesting, and the world building has plenty of interesting hooks. I certainly can't complain about the pacing: the plot moves briskly along to a somewhat predictable but still adequate conclusion. The writing is smooth and competent, and the world is memorable enough that I'm still thinking about it. And yet, I never connected with this story. I think it may be because both Alice and the tight third-person narrator tend towards breezy confidence and matter-of-fact descriptions. Alice does, at times, get scared or angry, but I never felt those emotions. They were just events that were described to me. There wasn't an emotional hook, a place where the character grabbed me, and so it felt like everything was happening at an odd remove. The advantage of this approach is that there are no overwrought emotional meltdowns or brooding angstful protagonists, just an adventure story about a competent and thoughtful character, but I think I wanted a bit more emotional involvement than I got. The world background is the best part and feels like it could be part of a larger series. The Milky Way is connected by an old, vast, and only partly understood network of teleportation portals, which had cut off Earth for unknown reasons and then just as mysteriously reactivated when Alice, then Andrew, drunkenly poked at a standing stone while muttering an old prayer in Gaelic. The Archive spent a year sorting out her intellectual diseases (capitalism was particularly alarming) and giving her a fresh start with a new body. Humanity subsequently destroyed itself in a paroxysm of reactionary violence, leaving Alice a free agent, one of a kind in a galaxy of dizzying variety and forgotten history. Gallagher makes great use of the weirdness of the portal network to create a Star Wars style of universe: the focus is more on the diversity of the planets and alien species than on a coherent unifying structure. The settings of this book are not prone to Planet of the Hats problems. They instead have the contrasts that one would get if one dropped portals near current or former Earth population centers and then took a random walk through them (or, in other words, what playing GeoGuessr on a world map feels like). I liked this effect, but I have to admit that it also added to that sense of sliding off the surface of the story. 
The place descriptions were great bits of atmosphere, but I never cared about them. There isn't enough emotional coherence to make them memorable. One of the more notable quirks of this story is the description of ideologies and prejudices as viral memes that can be cataloged, cured, and deployed like weapons. This is a theme of the world-building as well: this society, or at least the Archive-affiliated parts of it, classifies some patterns of thought as potentially dangerous but treatable contagious diseases. I'm not going to object too much to this as a bit of background and characterization in a fairly short novella stuffed with a lot of other world-building and plot, but there was something about treating ethical systems like diseases that bugged me in much the same way that medicalization of neurodiversity bugs me. I think some people will find that sense of moral clarity relaxing and others will find it vaguely irritating, and I seem to have ended up in the second group. Overall, I would classify this as an interesting not-quite-success. It felt like a side story in a larger universe, like a story that would work better if I already knew Alice from other novels and had an established emotional connection with her. As is, I would not really recommend it, but there are enough good pieces here that I would be interested to see what Gallagher does next. Rating: 6 out of 10

30 November 2024

Dima Kogan: Strava track filtering validation

After years of seeing people's strava tracks, I became convinced that they insufficiently filter the data, resulting in over-estimating the effort. Today I did a bit of lazy analysis, and half-confirmed this: in the one case I looked at, strava reported reasonable elevation gain numbers, but greatly overestimated the distance traveled. I looked at a single gps track of a long bike ride. This was uploaded to strava manually, as a .gpx file. I can imagine that different things happen if you use the strava app or some device that integrates with the service (the filtering might happen before the data hits the server, and the server could decide to not apply any more filtering). I processed the data with a simple hysteretic filter, ignoring small changes in position and elevation, trying out different thresholds in the process. I completely ignore the timestamps, and only look at the differences between successive points. This handles the usual GPS noise; it does not handle GPS jumps, which I completely ignore in this analysis. Ignoring these would produce inflated elevation/gain numbers, but I'm working with a looong track, so hopefully this is a small effect. Clearly this is not scientific, but it's something.

The code
Parsing .gpx is slow (this is a big file), so I cache that into a .vnl:
import sys
import gpxpy
filename_in  = 'INPUT.gpx'
filename_out = 'OUTPUT.vnl'
with open(filename_in, 'r') as f:
    gpx = gpxpy.parse(f)
f_out = open(filename_out, 'w')
tracks = gpx.tracks
if len(tracks) != 1:
    print("I want just one track", file=sys.stderr)
    sys.exit(1)
track = tracks[0]
segments = track.segments
if len(segments) != 1:
    print("I want just one segment", file=sys.stderr)
    sys.exit(1)
segment = segments[0]
time0 = segment.points[0].time
print("# time lat lon ele_m")
for p in segment.points:
    print(f" (p.time - time0).seconds   p.latitude   p.longitude   p.elevation ",
          file = f_out)
And I process this data with the different filters (this is a silly Python loop, and is slow):
#!/usr/bin/python3
import sys
import numpy as np
import numpysane as nps
import gnuplotlib as gp
import vnlog
import pyproj
geod = None
def dist_ft(lat0,lon0, lat1,lon1):
    global geod
    if geod is None:
        geod = pyproj.Geod(ellps='WGS84')
    return \
        geod.inv(lon0,lat0, lon1,lat1)[2] * 100./2.54/12.
f = 'OUTPUT.vnl'
track,list_keys,dict_key_index = \
    vnlog.slurp(f)
t      = track[:,dict_key_index['time' ]]
lat    = track[:,dict_key_index['lat'  ]]
lon    = track[:,dict_key_index['lon'  ]]
ele_ft = track[:,dict_key_index['ele_m']] * 100./2.54/12.
@nps.broadcast_define( ( (), ()),
                       (2,))
def filter_track(ele_hysteresis_ft,
                 dxy_hysteresis_ft):
    dist        = 0.0
    ele_gain_ft = 0.0
    lon_accepted = None
    lat_accepted = None
    ele_accepted = None
    for i in range(len(lat)):
        if ele_accepted is not None:
            dxy_here  = dist_ft(lat_accepted,lon_accepted, lat[i],lon[i])
            dele_here = np.abs( ele_ft[i] - ele_accepted )
            if dxy_here < dxy_hysteresis_ft and dele_here < ele_hysteresis_ft:
                continue
            if ele_ft[i] > ele_accepted:
                ele_gain_ft += dele_here;
            dist += np.sqrt(dele_here * dele_here +
                            dxy_here  * dxy_here)
        lon_accepted = lon[i]
        lat_accepted = lat[i]
        ele_accepted = ele_ft[i]
    # lose the last point. It simply doesn't matter
    dist_mi = dist / 5280.
    return np.array((ele_gain_ft, dist_mi))
Nele_hysteresis_ft    = 20
ele_hysteresis0_ft    = 5
ele_hysteresis1_ft    = 100
ele_hysteresis_ft_all = np.linspace(ele_hysteresis0_ft,
                                    ele_hysteresis1_ft,
                                    Nele_hysteresis_ft)
Ndxy_hysteresis_ft = 20
dxy_hysteresis0_ft = 5
dxy_hysteresis1_ft = 1000
dxy_hysteresis_ft  = np.linspace(dxy_hysteresis0_ft,
                                 dxy_hysteresis1_ft,
                                 Ndxy_hysteresis_ft)
# shape (Nele,Ndxy,2)
gain,distance = \
    nps.mv( filter_track( nps.dummy(ele_hysteresis_ft_all,-1),
                          dxy_hysteresis_ft),
            -1,0 )
# Stolen from mrcal
def options_heatmap_with_contours( plotoptions, # we update this on output
                                   *,
                                   contour_min           = 0,
                                   contour_max,
                                   contour_increment     = None,
                                   do_contours           = True,
                                   contour_labels_styles = 'boxed',
                                   contour_labels_font   = None):
    r'''Update plotoptions, return curveoptions for a contoured heat map'''
    gp.add_plot_option(plotoptions,
                       'set',
                       ('view equal xy',
                        'view map'))
    if do_contours:
        if contour_increment is None:
            # Compute a "nice" contour increment. I pick a round number that gives
            # me a reasonable number of contours
            Nwant = 10
            increment = (contour_max - contour_min)/Nwant
            # I find the nearest 1eX or 2eX or 5eX
            base10_floor = np.power(10., np.floor(np.log10(increment)))
            # Look through the options, and pick the best one
            m   = np.array((1., 2., 5., 10.))
            err = np.abs(m * base10_floor - increment)
            contour_increment = -m[ np.argmin(err) ] * base10_floor
        gp.add_plot_option(plotoptions,
                           'set',
                           ('key box opaque',
                            'style textbox opaque',
                            'contour base',
                            f'cntrparam levels incremental {contour_max},{contour_increment},{contour_min}'))
        if contour_labels_font is not None:
            gp.add_plot_option(plotoptions,
                               'set',
                               f'cntrlabel format "%d" font " contour_labels_font "' )
        else:
            gp.add_plot_option(plotoptions,
                               'set',
                               f'cntrlabel format "%.0f"' )
        plotoptions['cbrange'] = [contour_min, contour_max]
        # I plot 3 times:
        # - to make the heat map
        # - to make the contours
        # - to make the contour labels
        _with = np.array(('image',
                          'lines nosurface',
                          f'labels {contour_labels_styles} nosurface'))
    else:
        gp.add_plot_option(plotoptions, 'unset', 'key')
        _with = 'image'
    using = \
        f'({dxy_hysteresis0_ft}+$1*{float(dxy_hysteresis1_ft-dxy_hysteresis0_ft)/(Ndxy_hysteresis_ft-1)}):' + \
        f'({ele_hysteresis0_ft}+$2*{float(ele_hysteresis1_ft-ele_hysteresis0_ft)/(Nele_hysteresis_ft-1)}):3'
    plotoptions['_3d']     = True
    plotoptions['_xrange'] = [dxy_hysteresis0_ft,dxy_hysteresis1_ft]
    plotoptions['_yrange'] = [ele_hysteresis0_ft,ele_hysteresis1_ft]
    plotoptions['ascii']   = True # needed for using to work
    gp.add_plot_option(plotoptions, 'unset', 'grid')
    return \
        dict( tuplesize=3,
              legend = "", # needed to force contour labels
              using = using,
              _with=_with)
contour_granularity = 1000
plotoptions = dict()
curveoptions = \
    options_heatmap_with_contours( plotoptions, # we update this on output
                                   # round down to the nearest contour_granularity
                                   contour_min = (np.min(gain) // contour_granularity)*contour_granularity,
                                   # round up to the nearest contour_granularity
                                   contour_max = ((np.max(gain) + (contour_granularity-1)) // contour_granularity) * contour_granularity,
                                   do_contours = True)
gp.add_plot_option(plotoptions, 'unset', 'key')
gp.add_plot_option(plotoptions, 'set', 'size square')
gp.plot(gain,
        xlabel  = "Distance hysteresis (ft)",
        ylabel  = "Elevation hysteresis (ft)",
        cblabel = "Elevation gain (ft)",
        wait = True,
        **curveoptions,
        **plotoptions,
        title    = 'Computed gain vs filtering parameters')
contour_granularity = 10
plotoptions = dict()
curveoptions = \
    options_heatmap_with_contours( plotoptions, # we update this on output
                                   # round down to the nearest contour_granularity
                                   contour_min = (np.min(distance) // contour_granularity)*contour_granularity,
                                   # round up to the nearest contour_granularity
                                   contour_max = ((np.max(distance) + (contour_granularity-1)) // contour_granularity) * contour_granularity,
                                   do_contours = True)
gp.add_plot_option(plotoptions, 'unset', 'key')
gp.add_plot_option(plotoptions, 'set', 'size square')
gp.plot(distance,
        xlabel  = "Distance hysteresis (ft)",
        ylabel  = "Elevation hysteresis (ft)",
        cblabel = "Distance (miles)",
        wait = True,
        **curveoptions,
        **plotoptions,
        title    = 'Computed distance vs filtering parameters')

Results: gain
Strava says the gain was 46307ft. The analysis says:
strava-gain.png
strava-gain-zoom.png
These show the filtered gain for different values of the distance and gain hysteresis thresholds. The same data is shown at different zoom levels. There's no sweet spot, but we get 46307ft with a reasonable amount of filtering. Maybe 46307ft is a bit low even.

Results: distance
Strava says the distance covered was 322 miles. The analysis says:
strava-distance.png
strava-distance-zoom.png
Once again, there's no sweet spot, but we get 322 miles only if we apply no filtering at all. That's clearly too high, and is not reasonable. From the map (and from other people's strava routes) the true distance is closer to 305 miles. Why those people's strava numbers are more believable is anybody's guess.

29 November 2024

Freexian Collaborators: Tryton 7.0 LTS reaches Debian trixie (by Mathias Behrle, Raphaël Hertzog and Anupa Ann Joseph)

Tryton is a FOSS software suite which is highly modular and scalable. Tryton along with its standard modules can provide a complete ERP solution, or it can be used for specific functions of a business like accounting, invoicing etc. Debian packages for Tryton are being maintained by Mathias Behrle. You can follow him on Mastodon or get his help on Tryton related projects through MBSolutions (his own consulting company). Freexian has been sponsoring Mathias's packaging work on Tryton for a while, so that Debian gets all the quarterly bug fix releases as well as the security releases in a timely manner.

About Tryton 7.0 LTS Lately Mathias has been busy packaging Tryton 7.0 LTS. As the LTS tag implies, this release is recommended for production deployments since it will be supported until November 2028. This release brings numerous bug fixes, performance improvements and various new features. As part of this work, 41 new Tryton modules and 3 dependency packages have been added to Debian, significantly broadening the options available to Debian users and improving integration with Tryton systems.

Running different versions of Tryton on different Debian releases To provide extended compatibility, a dedicated Tryton mirror is being managed and is available at https://debian.m9s.biz/debian/. This mirror hosts backports for all supported Tryton series, ensuring availability for a variety of Debian releases and deployment scenarios. These initiatives highlight MBSolutions' technical contributions to the Tryton community, made possible by Freexian's financial backing. Together, we are advancing the Tryton ecosystem for Debian users.

27 November 2024

Bits from Debian: OpenStreetMap migrates to Debian 12

You may have seen this toot announcing OpenStreetMap's migration to Debian on their infrastructure.
After 18 years on Ubuntu, we've upgraded the @openstreetmap servers to Debian 12 (Bookworm). openstreetmap.org is now faster using Ruby 3.1. Onward to new mapping adventures! Thank you to the team for the smooth transition. #OpenStreetMap #Debian
We spoke with Grant Slater, the Senior Site Reliability Engineer for the OpenStreetMap Foundation. Grant shares: Why did you choose Debian?
There is a large overlap between OpenStreetMap mappers and the Debian community. Debian also has excellent coverage of OpenStreetMap tools and utilities, which helped with the decision to switch to Debian. The Debian package maintainers do an excellent job of maintaining their packages - e.g.: osm2pgsql, osmium-tool etc. Part of our reason to move to Debian was to get closer to the maintainers of the packages that we depend on. Debian maintainers appear to be heavily invested in the software packages that they support and we see critical bugs get fixed.
What drove this decision to migrate?
OpenStreetMap.org is primarily run on actual physical hardware that our team manages. We attempt to squeeze as much performance from our systems as possible, with some services being particularly I/O bound. We ran into some severe I/O performance issues with kernels ~6.0 to < ~6.6 on systems with NVMe storage. This pushed us onto newer mainline kernels, which led us toward Debian. On Debian 12 we could simply install the backport kernel and the performance issues were solved.
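For illustration only (these are not commands from the interview): installing the backports kernel on Debian 12 typically looks something like the following, assuming bookworm-backports is already enabled in APT's sources and an amd64 machine; the metapackage name varies by architecture.
  sudo apt update
  sudo apt install -t bookworm-backports linux-image-amd64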
How was the transition managed?
Thankfully we manage our server setup nearly completely with code. We also use Test Kitchen with inspec to test this infrastructure code. Tests run locally using Podman or Docker containers, but also run as part of our git code pipeline. We added Debian as a test target platform and fixed up the infrastructure code until all the tests passed. The changes required were relatively small, simple package name or config filename changes mostly.
What was your timeline of transition?
In August 2024 we moved the www.openstreetmap.org Ruby on Rails servers across to Debian. We haven't yet finished moving everything across to Debian, but we will upgrade the rest when it makes sense. Some systems may wait until the next hardware upgrade cycle. Our focus is to build a stable and reliable platform for OpenStreetMap mappers.
How has the transition from another Linux distribution to Debian gone?
We are still in the process of fully migrating between Linux distributions, but we can share that we recently moved our frontend servers to Debian 12 (from Ubuntu 22.04) which bumped the Ruby version from 3.0 to 3.1 which allowed us to also upgrade the version of Ruby on Rails that we use for www.openstreetmap.org. We also changed our chef code for managing the network interfaces from using netplan (default in Ubuntu, made by Canonical) to directly using systemd-networkd to manage the network interfaces, to allow commonality between how we manage the interfaces in Ubuntu and our upcoming Debian systems. Over the years we've standardised our networking setup to use 802.3ad bonded interfaces for redundancy, with VLANs to segment traffic; this setup worked well with systemd-networkd. We use netboot.xyz for PXE networking booting OS installers for our systems and use IPMI for the out-of-band management. We remotely re-installed a test server to Debian 12, and fixed a few minor issues missed by our chef tests. We were pleasantly surprised how smoothly the migration to Debian went. In a few limited cases we've used Debian Backports for a few packages where we've absolutely had to have a newer feature. The Debian package maintainers are fantastic. What definitely helped us is our code is libre/free/open-source, with most of the core OpenStreetMap software like osm2pgsql already in Debian and well packaged. In some cases we do run pre-release or custom patches of OpenStreetMap software; with Ubuntu we used launchpad.net's Personal Package Archives (PPA) to build and host deb repositories for these custom packages. We were initially perplexed by the myriad of options in Debian (see this list - eeek!), but received some helpful guidance from a Debian contributor and we now manage our own deb repository using aptly. For the moment we're currently building deb packages locally and pushing to aptly; ideally we'd like to replace this with a git driven pipeline for building the custom packages in the future.
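As a rough illustration of the aptly workflow mentioned above (repository and package names here are made up, and a real setup would configure GPG signing rather than skipping it):
  aptly repo create -distribution=bookworm -component=main osm-custom
  aptly repo add osm-custom ./example-package_1.0-1_amd64.deb
  aptly publish repo -skip-signing osm-custom
  # later, after adding newer builds to the repo:
  aptly publish update -skip-signing bookworm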
Thank you for taking the time to share your experience with us.
Thank you to all the awesome people who make Debian!

We are overjoyed to share this in-use case which demonstrates our commitment to stability, development, and long term support. Debian offers users, companies, and organisations the ability to plan, scope, develop, and maintain at their own pace using a rock solid stable Linux distribution with responsive developers. Does your organisation use Debian in some capacity? We would love to hear about it and your use of 'The Universal Operating System'. Reach out to us at Press@debian.org - we would be happy to add your organisation to our 'Who's Using Debian?' page and to share your story! About Debian The Debian Project is an association of individuals who have made common cause to create a free operating system. This operating system that we have created is called Debian. Installers and images, such as live systems, offline installers for systems without a network connection, installers for other CPU architectures, or cloud instances, can be found at Getting Debian.

26 November 2024

Sandro Knauß: Akademy 2024 in Würzburg

In order to prepare for Akademy I started some days before to give my Librem 5 (an open hardware phone) another try, and ended up with a non-starting Plasma 6. Actually this issue was known already, but hadn't been addressed. In the end I reached Akademy with my Librem 5 running phosh (which is GNOME based), in order to have something working. I met Bhushan and Bart who took care of it, and the issue was fixed; two days later I could finally install Plasma 6 on it. The last time I tested my Librem 5 with Plasma 5 it felt sluggish and did not work well. But this time I was impressed how well the system reacts. Sure, there are some things here and there, but in the bigger picture it is quite usable. One annoying issue is that the camera only works with one app, and the other issue is the battery capacity: you have to charge it once a day. Because a QR reader that can use the camera is missing, getting data onto the phone was quite challenging. Unfortunately the conference Wifi separated the devices and I couldn't use KDE Connect to transfer data. In the end the only way to import data was taking five photos of the QR code to import my D-Ticket into Itinerary. With a device running Plasma Mobile at hand, it was directly used for an experiment: how well does Dolphin work on a Plasma Mobile device? Together with Felix Ernst we tried it out and were quite impressed that Dolphin works very well on Plasma Mobile, after some simple modifications to the UI. That resulted in a patch to add a mobile UI for Dolphin !826. With more time to play with my Librem 5 I also found a bug in KWeather, which is missing a Refresh option when used in a Plasma Mobile environment #493656. Akademy is a good place to identify and solve such issues. It is always like that: you chat with someone and they can tell you who to ask to answer the concrete question, and in the end you can solve things that seemed unsolvable in the beginning. There was also time to look into the travelling app Itinerary. A lot of people are faced with a lot of real-world issues when not in their home town. Itinerary is the best travelling app I know about. It can import nearly every ticket you have, can get location information from restaurant websites, and allows routing to that place. It adds a lot of useful information while travelling, like current delays, platform changes, live updates for elevators, weather information at the destination, a station map, and all those features with a strong focus on privacy. In detail I found some small things to improve: I additionally learned that it has a lot of details that help people who have special needs. That is the reason why Daniel Kraut wants to get Itinerary available for iOS. Once Daniel had spoken out that he wants to reach this goal, others already started to implement the first steps to build apps for iOS. This year I volunteered to help out at Akademy. For me it was a lot of fun to meet everyone at the infodesk or help the speakers set up the beamer and microphone. It is also a good opportunity to meet many new faces and get in contact with them. I also see room for improvement. As we were quite busy at the Welcome Event getting the badges out to everyone, I couldn't answer the questions from newcomers, as the queue was too long. I propose that some people volunteer to be available for questions from newcomers. Often it is hard for newcomers to get their first contact(s) in a new community. There is a lot of space for improvement to make it easier for newcomers to join.
Some ideas in my head are: make an event for the newcomers to give them some links into the community and show that everyone is friendly. The tables at the BoFs should form a circle, so everyone can see each other. It was also hard for me to understand everyone, as they mostly spoke towards the front. And then BoFs are sometimes full of very specific words, and if you are not already deep into the topic you are lost. I can see the problem: on the one side, BoFs are also the place where the people who already know the topic want to get things done. On the other side, newcomers join BoFs, are overwhelmed by too many new words, get frustrated and think that they are not welcome. Maybe at least everyone should introduce themselves by name and ask new faces why they joined the BoF, to help them join in. I'm happy that the food provided for the attendees was very delicious, and that I'm not the only one who is mostly vegetarian, with a good share being vegan. At the conference the KDE Eco initiative really caught me, as I see a lot of new possibilities in giving more reasons to switch to an Open Source system. The talk from Natalie was great to see how pupils get excited about Open Source and also help their grandparents to move to a Linux system. As I will also start to work as a teacher, I really got ideas about what I can do at school. Together with Joseph and Nicole, we finally started to think about how to drive an exploration of what kind of old hardware KDE software is still running on. The ones with the oldest hardware will get an old KDE shirt. For more information see #40. The conference was very motivating for me; I still had energy in the evening to do some Debian packaging, and finally pushed kweathercore to Debian and started to work on KWeather. Now I'm even more interested in the KDE apps targeting the mobile world, as I now have some hardware that can actually use those apps. I really enjoyed the workshop on how to contribute to Qt by Volker Hilsheimer, especially the way Volker explained things in a very friendly way, answered every question, sometimes postponed some questions but came back to them later. All in all I now have a good overview of how Qt development is done and how I can fix bugs. The day trip to Rothenburg ob der Tauber was very interesting for me. It was the first time I visited the village. But in my memory it feels like I know the village already. I grew up reading a lot of comic albums, including the good sci-fi comic album series "Yoko Tsuno" created by the Belgian writer Roger Leloup. Yoko Tsuno is an electronics engineer, raised in Japan but now living in Belgium. In "On the edge of life" she helps her friend Ingrid, who actually lives in Rothenburg. Leloup invested a lot of time travelling to make his drawings as accurate as possible.
A comic page with Yoko Tsuno in Rothenburg ob der Tauber.
In order not to have a hard cut from Akademy to normal life, I had lunch with Carlos to discuss KDE Neon and how we can improve the interaction with Debian. In the future this should have less friction and make both communities work together more smoothly. Additionally, as I used to develop on KDEPIM with the help of Docker images based on Neon, I asked for a kf6 dev meta package. That should help to get rid of most hand-written lists of dev packages in the Docker file, in order to make it simpler for new contributors to start hacking on KDEPIM.
The rest of the day I finally found time to do the normal tourist stuff: going to the wine bridge and taking a walk up to the castle of Würzburg. Unfortunately you hear a lot of car noise up there, but I could finally relax in a Japanese-designed garden. Finally, on Saturday I started my trip back. The trains towards Eberswalde were broken and I needed to find an alternative route. I got a little bit nervous, as it was the first time I travelled with my Librem 5 and Itinerary only, and needed to reach the next train in less than two minutes. With the indoor maps provided, I could prepare my run through the train station, so I successfully reached my next train. By the way, also if you only use KDE software, I would recommend everyone to join Akademy ;)

22 November 2024

Matthew Palmer: Your Release Process Sucks

For the past decade-plus, every piece of software I write has had one of two release processes. Software that gets deployed directly onto servers (websites, mostly, but also the infrastructure that runs Pwnedkeys, for example) is deployed with nothing more than git push prod main. I'll talk more about that some other day. Today is about the release process for everything else I maintain: Rust / Ruby libraries, standalone programs, and so forth. To release those, I use the following, extremely intricate process:
  1. Create an annotated git tag, where the name of the tag is the software version I m releasing, and the annotation is the release notes for that version.
  2. Run git release in the repository.
  3. There is no step 3.
Yes, it absolutely is that simple. And if your release process is any more complicated than that, then you are suffering unnecessarily. But don't worry. I'm from the Internet, and I'm here to help.

Sidebar: annotated what-now?!?
The annotated tag is one of git's best-kept secrets. They've been available in git for practically forever (I've been using them since at least 2014, which is practically forever in software development), yet almost everyone I mention them to has never heard of them. A "tag", in git parlance, is a repository-unique named label that points to a single commit (as identified by the commit's SHA1 hash). Annotating a tag is simply associating a block of free-form text with that tag. Creating an annotated tag is simple-sauce: git tag -a tagname will open up an editor window where you can enter your annotation, and git tag -a -m "some annotation" tagname will create the tag with the annotation "some annotation". Retrieving the annotation for a tag is straightforward, too: git show tagname will display the annotation along with all the other tag-related information. Now that we know all about annotated tags, let's talk about how to use them to make software releases freaking awesome.
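To make the sidebar concrete, here are those commands together, using a hypothetical tag name:
  git tag -a v1.2.3                       # opens $EDITOR for the annotation
  git tag -a -m "some annotation" v1.2.3  # or supply the annotation inline
  git show v1.2.3                         # display the annotation and tag details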

Step 1: Create the Annotated Git Tag
As I just mentioned, creating an annotated git tag is pretty simple: just add a -a (or --annotate, if you enjoy typing) to your git tag command, and WHAM! annotation achieved. Releases, though, typically have unique and ever-increasing version numbers, which we want to encode in the tag name. Rather than having to look at the existing tags and figure out the next version number ourselves, we can have software do the hard work for us. Enter: git-version-bump. This straightforward program takes one mandatory argument: major, minor, or patch, and bumps the corresponding version number component in line with Semantic Versioning principles. If you pass it -n, it opens an editor for you to enter the release notes, and when you save out, the tag is automagically created with the appropriate name. Because the program is called git-version-bump, you can call it as a git command: git version-bump. Also, because version-bump is long and unwieldy, I have it aliased to vb, with the following entry in my ~/.gitconfig:
[alias]
    vb = version-bump -n
Of course, you don't have to use git-version-bump if you don't want to (although why wouldn't you?). The important thing is that the only step you take to go from "here is our current codebase in main" to "everything as of this commit is version X.Y.Z of this software", is the creation of an annotated tag that records the version number being released, and the metadata that goes along with that release.
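In practice, cutting a release with git-version-bump then looks something like this (version numbers are illustrative):
  git version-bump -n minor   # e.g. v1.2.3 -> v1.3.0; opens $EDITOR for the release notes
  git vb patch                # the same tool via the alias above; bumps the patch level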

Step 2: Run git release
As I said earlier, I've been using this release process for over a decade now. So long, in fact, that when I started, GitHub Actions didn't exist, and so a lot of the things you'd delegate to a CI runner these days had to be done locally, or in a more ad-hoc manner on a server somewhere. This is why step 2 in the release process is "run git release". It's because historically, you can't do everything in a CI run. Nowadays, most of my repositories have this in the .git/config:
[alias]
    release = push --tags
Older repositories which, for one reason or another, haven't been updated to the new hawtness, have various other aliases defined, which run more specialised scripts (usually just rake release, for Ruby libraries), but they're slowly dying out. The reason why I still have this alias, though, is that it standardises the release process. Whether it's a Ruby gem, a Rust crate, a bunch of protobuf definitions, or whatever else, I run the same command to trigger a release going out. It means I don't have to think about how I do it for this project, because every project does it exactly the same way.

The Wiring Behind the Button
It wasn't the button that was the problem. It was the miles of wiring, the hundreds of miles of cables, the circuits, the relays, the machinery. The engine was a massive, sprawling, complex, mind-bending nightmare of levers and dials and buttons and switches. You couldn't just slap a button on the wall and expect it to work. But there should be a button. A big, fat button that you could press and everything would be fine again. Just press it, and everything would be back to normal.
  • Red Dwarf: Better Than Life
Once you've accepted that your release process should be as simple as creating an annotated tag and running one command, you do need to consider what happens afterwards. These days, with the near-universal availability of CI runners that can do anything you need in an isolated, reproducible environment, the work required to go from annotated tag to release artifacts can be scripted up and left to do its thing. What that looks like, of course, will probably vary greatly depending on what you're releasing. I can't really give universally-applicable guidance, since I don't know your situation. All I can do is provide some of my open source work as inspirational examples. For starters, let's look at a simple Rust crate I've written, called strong-box. It's a straightforward crate that provides ergonomic and secure cryptographic functionality inspired by the likes of NaCl. As it's just a crate, its release script is very straightforward. Most of the complexity is working around Cargo's inelegant mandate that crate version numbers are specified in a TOML file. Apart from that, it's just a matter of building and uploading the crate. Easy! Slightly more complicated is action-validator. This is a Rust CLI tool which validates GitHub Actions and Workflows (how very meta) against a published JSON schema, to make sure you haven't got any syntax or structural errors. As not everyone has a Rust toolchain on their local box, the release process helpfully builds binaries for several common OSes and CPU architectures that people can download if they choose. The release process in this case is somewhat larger, but not particularly complicated. Almost half of it is actually scaffolding to build an experimental WASM/NPM build of the code, because someone seemed rather keen on that. Moving away from Rust, and stepping up the meta another notch, we can take a look at the release process for git-version-bump itself, my Ruby library and associated CLI tool which started me down the "Just Tag It Already" rabbit hole many years ago. In this case, since gemspecs are very amenable to programmatic definition, the release process is practically trivial. Remove the boilerplate and workarounds for GitHub Actions bugs, and you're left with about three lines of actual commands. These approaches can certainly scale to larger, more complicated processes. I've recently implemented annotated-tag-based releases in a proprietary software product, that produces Debian/Ubuntu, RedHat, and Windows packages, as well as Docker images, and it takes all of the information it needs from the annotated tag. I'm confident that this approach will successfully serve them as they expand out to build AMIs, GCP machine images, and whatever else they need in their release processes in the future.

Objection, Your Honour!
I can hear the howl of the "but, actually"s coming over the horizon even as I type. People have a lot of Big Feelings about why this release process won't work for them. Rather than overload this article with them, I've created a companion article that enumerates the objections I've come across, and answers them. I'm also available for consulting if you'd like a personalised, professional opinion on your specific circumstances.

DVD Bonus Feature: Pre-releases
Unless you're addicted to surprises, it's good to get early feedback about new features and bugfixes before they make it into an official, general-purpose release. For this, you can't go past the pre-release. The major blocker to widespread use of pre-releases is that cutting a release is usually a pain in the behind. If you've got to edit changelogs, and modify version numbers in a dozen places, then you're entirely justified in thinking that cutting a pre-release for a customer to test that bugfix that only occurs in their environment is too much of a hassle. The thing is, once you've got releases building from annotated tags, making pre-releases on every push to main becomes practically trivial. This is mostly due to another fantastic and underused Git command: git describe. How git describe works is, basically, that it finds the most recent commit that has an associated annotated tag, and then generates a string that contains that tag's name, plus the number of commits between that tag and the current commit, with the current commit's hash included, as a bonus. That is, imagine that three commits ago, you created an annotated release tag named v4.2.0. If you run git describe now, it will print out v4.2.0-3-g04f5a6f (assuming that the current commit's SHA starts with 04f5a6f). You might be starting to see where this is going. With a bit of light massaging (essentially, removing the leading v and replacing the -s with .s), that string can be converted into a version number which, in most sane environments, is considered newer than the official 4.2.0 release, but will be superseded by the next actual release (say, 4.2.1 or 4.3.0). If you're already injecting version numbers into the release build process, injecting a slightly different version number is no work at all. Then, you can easily build release artifacts for every commit to main, and make them available somewhere they won't get in the way of the official releases. For example, in the proprietary product I mentioned previously, this involves uploading the Debian packages to a separate component (prerelease instead of main), so that users that want to opt-in to the prerelease channel simply modify their sources.list to change main to prerelease. Management have been extremely pleased with the easy availability of pre-release packages; they've been gleefully installing them willy-nilly for testing purposes since I rolled them out. In fact, even while I've been writing this article, I was asked to add some debug logging to help track down a particularly pernicious bug. I added the few lines of code, committed, pushed, and went back to writing. A few minutes later (next week's job is to cut that in-process time by at least half), the person who asked for the extra logging ran apt update; apt upgrade, which installed the newly-built package, and was able to progress in their debugging adventure. Continuous Delivery: It's Not Just For Hipsters.
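A minimal sketch of that massaging step (the tag and hash shown are just the article's example):
  git describe                 # e.g. prints v4.2.0-3-g04f5a6f
  version=$(git describe | sed -e 's/^v//' -e 's/-/./g')
  echo "$version"              # 4.2.0.3.g04f5a6f, which sorts after 4.2.0 in most schemes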

+1, Informative
Hopefully, this has spurred you to commit your immortal soul to the Church of the Annotated Tag. You may tithe by buying me a refreshing beverage. Alternately, if you're really keen to adopt more streamlined release management processes, I'm available for consulting engagements.

Matthew Palmer: Invalid Excuses for Why Your Release Process Sucks

In my companion article, I made the bold claim that your release process should consist of no more than two steps:
  1. Create an annotated Git tag;
  2. Run a single command to trigger the release pipeline.
As I have been on the Internet for more than five minutes, I'm aware that a great many people will have a great many objections to this simple and straightforward idea. In the interests of saving them a lot of wear and tear on their keyboards, I present this list of common reasons why these objections are invalid. If you have an objection I don't cover here, the comment box is down the bottom of the article. If you think you've got a real stumper, I'm available for consulting engagements, and if you turn out to have a release process which cannot feasibly be reduced to the above two steps for legitimate technical reasons, I'll waive my fees.

But I automatically generate my release notes from commit messages!
This one is really easy to solve: have the release note generation tool feed directly into the annotation. Boom! Headshot.
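A hedged sketch of what that plumbing could look like, with placeholder tag names; git tag's -F - reads the annotation from standard input:
  git log --format='* %s' v1.2.2..HEAD | git tag -a -F - v1.2.3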

But all these files need to be edited to make a release!
No, they absolutely don't. But I can see why you might think you do, given how inflexible some packaging environments can seem, and since "that's how we've always done it".

Language Packages
Most languages require you to encode the version of the library or binary in a file that you want to revision control. This is teh suck, but I'm yet to encounter a situation that can't be worked around some way or another. In Ruby, for instance, gemspec files are actually executable Ruby code, so I call code (that's part of git-version-bump, as an aside) to calculate the version number from the git tags. The Rust build tool, Cargo, uses a TOML file, which isn't as easy, but a small amount of release automation is used to take care of that.

Distribution Packages
If you're building Linux distribution packages, you can easily apply similar automation faffery. For example, Debian packages take their metadata from the debian/changelog file in the build directory. Don't keep that file in revision control, though: build it at release time. Everything you need to construct a Debian (or RPM) changelog is in the tag: version numbers, dates, times, authors, release notes. Use it for much good.
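As a rough sketch (not the author's actual tooling; the package name and versioning scheme are placeholders), a debian/changelog can be generated at build time from the most recent release tag like this:
  tag=$(git describe --abbrev=0)                                    # nearest release tag, e.g. v1.2.3
  notes=$(git for-each-ref "refs/tags/$tag" --format='%(contents)') # the tag annotation
  dch --create --package mypkg --newversion "${tag#v}-1" \
      --distribution unstable "$notes"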

The Dreaded Changelog
Finally, there's the CHANGELOG file. If it's maintained during the development process, it typically has an archive of all the release notes, under version numbers, with an "Unreleased" heading at the top. It's one more place to remember to have to edit when making that "preparing release X.Y.Z" commit, and it is a gift to the Demon of Spurious Merge Conflicts if you follow the policy of "every commit must add a changelog entry". My solution: just burn it to the ground. Add a line to the top with a link to wherever the contents of annotated tags get published (such as GitHub Releases, if that's your bag) and never open it ever again.

But I need to know other things about my release, too!
For some reason, you might think you need some other metadata about your releases. You're probably wrong; it's amazing how much information you can obtain or derive from the humble tag, so think creatively about your situation before you start making unnecessary complexity for yourself. But, on the off chance you're in a situation that legitimately needs some extra release-related information, here's the secret: structured annotation. The annotation on a tag can be literally any sequence of octets you like. How that data is interpreted is up to you. So, require that annotations on release tags use some sort of structured data format (say YAML or TOML or even XML if you hate your release manager), and mandate that it contain whatever information you need. You can make sure that the annotation has a valid structure and contains all the information you need with an update hook, which can reject the tag push if it doesn't meet the requirements, and you're sorted.
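For instance, a server-side update hook could reject release tags whose annotation is not valid YAML. This is only a sketch, and it assumes PyYAML is available on the server:
  #!/bin/sh
  # update hook: $1 = ref name, $2 = old object id, $3 = new object id
  refname=$1 newrev=$3
  case "$refname" in
    refs/tags/v*)
      # strip the tag object headers, then check the annotation parses as YAML
      git cat-file tag "$newrev" | sed '1,/^$/d' \
        | python3 -c 'import sys, yaml; yaml.safe_load(sys.stdin)' \
        || { echo "release tag annotation must be valid YAML" >&2; exit 1; }
      ;;
  esac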

But I have multiple packages in my repo, with different release cadences and versions!
This one is common enough that I just refer to it as "the monorepo drama". Personally, I'm not a huge fan of monorepos, but you do you, boo. Annotated tags can still handle it just fine. The trick is to include the package name being released in the tag name. So rather than a release tag being named vX.Y.Z, you use foo/vX.Y.Z, bar/vX.Y.Z, and baz/vX.Y.Z. The release automation for each package just triggers on tags that match the pattern for that particular package, and limits itself to those tags when figuring out what the version number is.
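A small illustration of pulling the package name and version back out of such a tag (pure shell parameter expansion; the tag is a placeholder):
  tag=foo/v1.4.0
  package=${tag%/v*}    # -> foo
  version=${tag##*/v}   # -> 1.4.0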

But we don't semver our releases!
Oh, that's easy. The tag pattern that marks a release doesn't have to be vX.Y.Z. It can be anything you want. Relatedly, there is a (rare, but existent) need for packages that don't really have a conception of releases in the traditional sense. The example I've hit most often is automatically generated bindings packages, such as protobuf definitions. The source of truth for these is a bunch of .proto files, but to be useful, they need to be packaged into code for the various language(s) you're using. But those packages need versions, and while someone could manually make releases, the best option is to build new per-language packages automatically every time any of those definitions change. The versions of those packages, then, can be datestamps (I like something like YYYY.MM.DD.N, where N starts at 0 each day and increments if there are multiple releases in a single day). This process allows all the code that needs the definitions to declare the minimum version of the definitions that it relies on, and everything is kept in sync and tracked almost like magic.
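One possible way to mint such a datestamp version (a sketch; the tag naming convention is an assumption):
  today=$(date -u +%Y.%m.%d)
  n=$(git tag -l "v$today.*" | wc -l)   # how many releases already exist today
  echo "v$today.$n"                     # e.g. v2024.11.22.0 for the first release of the day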

Th-th-th-th-that's all, folks!
I hope you've enjoyed this bit of mild debunking. Show your gratitude by buying me a refreshing beverage, or purchase my professional expertise and I'll answer all of your questions and write all your CI jobs.

Next.

Previous.