6 December 2025

Simon Josefsson: Reproducible Guix Container Images

Around a year ago I wrote about Guix Container Images for GitLab CI/CD, and these images have served the community well. Besides continuous use in CI/CD, these Guix container images are used to confirm reproducibility of the source tarball artifacts in the releases of Libtasn1 v4.20, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, SASL v2.2.2, Guile-GnuTLS v5.0.1, and OATH Toolkit v2.6.13. See how all those release announcements mention a Guix commit? That's the essential supply-chain information about the Guix build environment that allows the artifacts to be re-created. To make sure this is repeatable, the release tarball artifacts are re-created from source code every week in the verify-reproducible-artifacts project, which I wrote about earlier. Guix's time-travelling feature makes this sustainable to maintain, and hopefully it will continue to be able to reproduce the exact same tarball artifacts for years to come.

During the last year, unfortunately, Guix was removed from Debian stable. My Guix container images were created from Debian with that Guix package. My setup continued to work since the old stage0 Debian+Guix containers were still available. Such a setup is not sustainable: there will be bit-rot, and we don't want to rely forever on old containers which (after the removal of Guix from Debian) could no longer be re-produced. Let this be a reminder of how user-empowering features such as Guix time-travelling are! I have reworked my Guix container image setup, and this post is an update on the current status of this effort. The first step was to re-engineer Debian container images with Guix; I realized these were useful on their own and warrant a separate project. A more narrowly scoped project will hopefully make it easier to keep them working. Now, instead of apt-get install guix, they use the official Guix guix-install.sh approach. Read more about that effort in the announcement of Debian with Guix.
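As a sketch of what the weekly verification boils down to: pin the Guix commit from the release announcement, rebuild, and compare the result byte-for-byte against the published tarball. The guix time-machine invocation below is shown only as a comment, and all file names are illustrative stand-ins, not the project's actual scripts.

```shell
# The Guix commit from a release announcement pins the build environment;
# the real pipeline would run something along the lines of:
#   guix time-machine --commit=<pinned-commit> -- shell ... -- make dist
# Here only the final comparison step is sketched, with stand-in files:
mkdir -p rebuilt
printf 'tarball bits\n' > libfoo-1.0.tar.gz           # published artifact
printf 'tarball bits\n' > rebuilt/libfoo-1.0.tar.gz   # freshly re-created copy
cmp libfoo-1.0.tar.gz rebuilt/libfoo-1.0.tar.gz && echo REPRODUCIBLE
```

If cmp finds any differing byte it exits non-zero and the REPRODUCIBLE line is never printed, which is exactly the failure mode a CI job wants.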
The second step was to reconsider my approach to generating the Guix images. The earlier design had several stages. First, Debian+Guix containers were created. Then from those containers, a pure Guix container was created. Finally, using the pure Guix container, another pure Guix container was created. The idea behind that GCC-like approach was to get to reproducible images that were created from an image that had no Debian left on it. However, I never managed to finish this, partially because I hadn't realized that every time you build a Guix container image from Guix, you effectively go back in time. When using Guix version X to build a container with Guix on it, it will not put Guix version X into the container but whatever version of Guix is available in its package archive, which will be an earlier version, such as version X-N. I had hoped to overcome this somehow (running a guix pull in newly generated images may work), but never finished this before Guix was removed from Debian. So what could a better design look like? For efficiency, I had already started experimenting with generating the final images directly from the Debian+Guix images, and after reproducibility bugs were fixed I was able to get to reproducible images. However, I was still concerned that the Debian container could taint the process somehow, and also about the implied dependency on non-free software in Debian. I've been using comparative rebuilds on similar distributions to confirm artifact reproducibility for my software projects, comparing builds on Trisquel 11 with Ubuntu 22.04, and AlmaLinux 9 with Rocky Linux 9, for example. This works surprisingly well. Including one freedom-respecting distribution like Trisquel will detect if any non-free software has bearing on the artifacts. Using different architectures, such as amd64 vs arm64, also helps with deeper supply-chain concerns.
My conclusion was that I wanted containers with the same Guix commit for both Trisquel and Ubuntu. Given the similarity with Debian, adapting and launching the Guix on Trisquel/Debian project was straightforward. So we now have Trisquel 11/12 and Ubuntu 22.04/24.04 images with the same Guix on them. Do you see where the debian-with-guix and guix-on-dpkg projects are leading? We are now ready to look at the modernized Guix Container Images project. The tags are the same as before:
registry.gitlab.com/debdistutils/guix/container:latest
registry.gitlab.com/debdistutils/guix/container:slim
registry.gitlab.com/debdistutils/guix/container:extra
registry.gitlab.com/debdistutils/guix/container:gash
The method to create them is different. Now there is a build job that uses the earlier Guix+Trisquel container (for amd64) or Guix+Debian (for arm64, pending Trisquel arm64 containers). The build job creates the final containers directly. Next, an Ubuntu reproduce job is launched that runs the same commands, failing if it cannot generate a bit-by-bit identical container. Then the single-arch images are tested (installing/building GNU hello and building libksba) and pushed to the GitLab registry, adding multi-arch images in the process. Then the final multi-arch containers are tested by building Guile-GnuTLS and, on success, uploaded to Docker Hub. How would you use them? A small way to start the container is like this:
jas@kaka:~$ podman run -it --privileged --entrypoint=/bin/sh registry.gitlab.com/debdistutils/guix/container:latest
sh-5.2# env HOME=/ guix describe # https://issues.guix.gnu.org/74949
  guix 21ce6b3
    repository URL: https://git.guix.gnu.org/guix.git
    branch: master
    commit: 21ce6b392ace4c4d22543abc41bd7c22596cd6d2
sh-5.2# 
The need for --entrypoint=/bin/sh is because Guix's pack command sets up the entry point differently than most other containers do. This could probably be fixed if people want that, and there may be open bug reports about this. The need for --privileged is more problematic, but it is discussed upstream. The above example works fine without it, but running anything more elaborate with guix-daemon installing packages will trigger a fatal error. Speaking of that, here is a snippet of commands that allows you to install Guix packages in the container.
cp -rL /gnu/store/*profile/etc/* /etc/
echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
echo 'root:x:0:' > /etc/group
groupadd --system guixbuild
for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
guix install hello
GUIX_PROFILE="/var/guix/profiles/per-user/root/guix-profile"
. "$GUIX_PROFILE/etc/profile"
hello
This could be simplified, but we chose not to hard-code it in our containers, because some of these are things that probably shouldn't be papered over but fixed properly somehow. In some execution environments, you may need to pass --disable-chroot to guix-daemon. To use the containers to build something in a GitLab pipeline, here is an example snippet:
test-amd64-latest-wget-configure-make-libksba:
  image: registry.gitlab.com/debdistutils/guix/container:latest
  before_script:
  - cp -rL /gnu/store/*profile/etc/* /etc/
  - echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
  - echo 'root:x:0:' > /etc/group
  - groupadd --system guixbuild
  - for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
  - export HOME=/
  - env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
  - guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
  - guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
  - guix describe
  - guix install libgpg-error
  - GUIX_PROFILE="//.guix-profile"
  - . "$GUIX_PROFILE/etc/profile"
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1
More help is available on the project page for the Guix Container Images. That's it for tonight folks, and remember: Happy Hacking!

Taavi Väänänen: How to import a new Wikipedia language edition (in hard mode)

I created the latest Wikipedia language edition, the Toki Pona Wikipedia, last month. Unlike most other wikis, which start their lives in the Wikimedia Incubator before the full wiki is created, in this case the community had been using a completely external MediaWiki site to build the wiki before it was approved as a "proper" Wikipedia wiki,1 and now that external wiki needed to be imported into the newly created Wikimedia-hosted wiki. (As far as I'm aware, the last and previously only time an external wiki has been imported into a Wikimedia project was in 2013, when Wikitravel was forked as Wikivoyage.) Creating a Wikimedia wiki these days is actually pretty straightforward, at least compared to what it used to be like a couple of years ago. Today the process mostly involves using a script to generate two configuration changes, one to add the basic configuration for a wiki to operate and another to add the wiki to the list of all wikis that exist, and then running a script to create the wiki database in between deploying those two configuration changes. And then you wait half an hour while the script that tells all Wikidata client wikis about the new wiki runs, one wiki at a time. The primary technical challenge in importing a third-party wiki is that there is no SUL making sure that a single username maps to the same account on both wikis. This means that the usual strategy of using the functionality I wrote in CentralAuth to manually create local accounts can't be used as is, and so we needed to come up with a new way of matching everyone's contributions to their existing Wikimedia accounts. (Side note: while the user-facing interface tries to present a single "global" user account that can be used on all public Wikimedia wikis, in reality the account management layer in CentralAuth is mostly just a glue layer to link together individual "local" accounts on each wiki that the user has ever visited.
These local accounts have independent user ID numbers (for example, I am user #35938993 on the English Wikipedia but #4 on the new Toki Pona Wikipedia) and are what most of the MediaWiki code interacts with, except for a few features specifically designed with cross-wiki usage in mind. This distinction is also still very much present and visible in the various administrative and anti-abuse workflows.) The approach we ended up choosing was to rewrite the dump file before importing, so that a hypothetical account called $Name would be turned into $Name~wikipesija.org after the import.2 We also created empty user accounts to take ownership of the edits to be imported, so that we could use the standard account management tools on them later on. MediaWiki supports importing contributions without a local account to attribute them to, but it doesn't seem to be possible to convert an imported actor3 to a regular user later on, which we wanted to keep as a possibility, even with the minor downside of creating a few hundred users that'll likely never get touched again. We also made specific decisions to add the username suffix to everyone, not just to those names that conflicted with existing SUL accounts, and to deal with renaming users that wanted their contributions linked to an existing SUL account only after the import. This both reduced the complexity, and thus the risk, of the import phase, which already had many more unknowns than the rest of the process, and was also the better option ethically: suffixing all names meant we would not imply that those people chose to be Wikimedians with those specific usernames (when in reality it was us choosing to import those edits into the Wikimedia universe), and doing renames using the standard MediaWiki account management tooling meant that it produced the normal public log entries that all other MediaWiki administrative actions create.
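The dump rewrite described above can be sketched as a stream edit over the XML. This is a minimal, hypothetical sketch: the real tooling surely handled more cases than a bare <username> element, and the sample record below is invented for illustration.

```shell
# Minimal stand-in for a MediaWiki XML dump contributor record.
printf '<contributor><username>Alice</username></contributor>\n' > dump.xml
# Suffix every contributor username with ~wikipesija.org before importing
# (sketch only; the element layout of a real dump is richer than this).
sed -E 's|<username>([^<]+)</username>|<username>\1~wikipesija.org</username>|g' \
  dump.xml > dump-rewritten.xml
cat dump-rewritten.xml
```

Running this prints `<contributor><username>Alice~wikipesija.org</username></contributor>`.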
With all of the edits imported, the only major thing remaining was doing those merges I mentioned earlier to attribute imported edits to people's existing SUL accounts. Thankfully, the local-account-based system makes this actually pretty simple. Usually CentralAuth prevents renaming individual local accounts that are attached to a global account, but that check can be bypassed with a maintenance script or a sufficiently privileged account. Renaming the user automatically detached it from the previous global account, after which another maintenance script could be used to attach the user to the correct global account.

  1. That external site was a fork of a fork of the original Toki Pona Wikipedia that was closed in 2005. And because cool URIs don't change, we made the URLs that the old Wikipedia was using work again. Try it: https://art-tokipona.wikipedia.org.
  2. wikipesija.org was the domain the old third-party wiki was hosted on, and ~ was used as a separator character in usernames during the SUL finalization in the early 2010s, so using it here felt appropriate as well.
  3. An actor is a MediaWiki term and database table referring to anything that can perform edits or logged actions. Usually an actor is a user account or an IP address, but an imported user name in a specific format can also be represented as an actor.

4 December 2025

Colin Watson: Free software activity in November 2025

My Debian contributions this month were all sponsored by Freexian. I had a bit less time than usual, because Freexian collaborators gathered in Marseille this month for our yearly sprint, doing some planning for next year. You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

I began preparing for the second stage of the GSS-API key exchange package split (some details have changed since that message). It seems that we'll need to wait until Ubuntu 26.04 LTS has been released, but that's close enough that it's worth making sure we're ready. This month I just did some packaging cleanups that would otherwise have been annoying to copy, such as removing support for direct upgrades from pre-bookworm. I'm considering some other package rearrangements to make the split easier to manage, but haven't made any decisions here yet. This also led me to start on a long-overdue bug triage pass, mainly consisting of applying usertags to lots of our open bugs to sort them by which program they apply to, and also closing a few that have been fixed, since some bugs will eventually need to be reassigned to GSS-API packages and it would be helpful to make them easier to find. At the time of writing, about 30% of the bug list remains to be categorized this way.

Python packaging

I upgraded these packages to new upstream versions: I packaged django-pgtransaction and backported it to trixie, since we plan to use it in Debusine; and I adopted python-certifi for the Python team. I fixed or helped to fix several other build/test failures: I fixed a couple of other bugs:

Other bits and pieces

Code reviews

Ben Hutchings: FOSS activity in November 2025

3 December 2025

Reproducible Builds: Reproducible Builds in November 2025

Welcome to the report for November 2025 from the Reproducible Builds project! These monthly reports outline what we've been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As always, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website. In this report:

  1. 10 years of Reproducible Builds at SeaGL
  2. Distribution work
  3. Tool development
  4. Website updates
  5. Miscellaneous news
  6. Software Supply Chain Security of Web3
  7. Upstream patches

10 years of Reproducible Builds at SeaGL 2025

On Friday 8th November, Chris Lamb gave a talk called 10 years of Reproducible Builds at SeaGL in Seattle, WA. Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free/libre open source software, hardware and culture. Chris' talk:
[ ] introduces the concept of reproducible builds, its technical underpinnings and its potentially transformative impact on software security and transparency. It is aimed at developers, security professionals and policy-makers who are concerned with enhancing trust and accountability in our software. It also provides a history of the Reproducible Builds project, which is approximately ten years old. How are we getting on? What have we got left to do? Aren't all the builds reproducible now?

Distribution work

In Debian this month, Jochen Sprickerhof created a merge request to replace the use of reprotest in Debian's Salsa Continuous Integration (CI) pipeline with debrebuild. Jochen cites the advantages as being threefold: only one extra build is needed; it uses the same sbuild and ccache tooling as the normal build; and it works for any Debian release. The merge request was merged by Emmanuel Arias and is now active. kpcyrd posted to our mailing list announcing the initial release of repro-threshold, which implements an APT transport that enforces a threshold of "at least X of my N trusted rebuilders need to confirm they reproduced the binary" before installing Debian packages. Configuration can be done through a config file, or through a curses-like user interface. Holger then merged two commits by Jochen Sprickerhof to address a fakeroot-related reproducibility issue in the debian-installer, and Jörg Jaspert deployed a patch by Ivo De Decker for a bug originally filed by Holger in February 2025, related to some Debian packages not being archived on snapshot.debian.org. Elsewhere, Roland Clobus performed some analysis on the live Debian trixie images, which he determined were not reproducible. However, in a follow-up post, Roland happily reports that the issues have been handled. In addition, 145 reviews of Debian packages were added, 12 were updated and 15 were removed this month, adding to our knowledge about identified issues. Lastly, Jochen Sprickerhof filed a bug announcing their intention to binary-NMU a very large number of R programming language packages after a reproducibility-related toolchain bug was fixed.
Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.
Julien Malka and Arnout Engelen launched the new hash collection server for NixOS. Aside from improved reporting to help focus reproducible builds efforts within NixOS, it collects build hashes as individually-signed attestations from independent builders, laying the groundwork for further tooling.

Tool development

diffoscope version 307 was uploaded to Debian unstable (as well as version 309). These changes included further work on automatically deploying to PyPI, by liaising with the PyPI developers/maintainers (this remains an experimental feature). [ ][ ][ ] In addition, reprotest versions 0.7.31 and 0.7.32 were uploaded to Debian unstable by Holger Levsen, who also made the following changes:
  • Do not vary the architecture personality if the kernel is not varied. (Thanks to Raúl Cumplido.) [ ]
  • Drop the debian/watch file, as Lintian now flags this as error for native Debian packages. [ ][ ]
  • Bump Standards-Version to 4.7.2, with no changes needed. [ ]
  • Drop the Rules-Requires-Root header as it is no longer required. [ ]
In addition, Vagrant Cascadian fixed a build failure by removing some extra whitespace from an older changelog entry. [ ]

Website updates

Once again, there were a number of improvements made to our website this month, including:

Miscellaneous news

Software Supply Chain Security of Web3

Via our mailing list, Martin Monperrus let us know about their recently-published page on the Software Supply Chain Security of Web3. The abstract of their paper is as follows:
Web3 applications, built on blockchain technology, manage billions of dollars in digital assets through decentralized applications (dApps) and smart contracts. These systems rely on complex, software supply chains that introduce significant security vulnerabilities. This paper examines the software supply chain security challenges unique to the Web3 ecosystem, where traditional Web2 software supply chain problems intersect with the immutable and high-stakes nature of blockchain technology. We analyze the threat landscape and propose mitigation strategies to strengthen the security posture of Web3 systems.
Their paper lists reproducible builds as one of the mitigating strategies. A PDF of the full text is available to download.

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Finally, if you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via:

Michael Ablassmeier: libvirt 11.10 VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN

As of libvirt 11.10, a new flag for the backup operation has been introduced: VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN. According to the documentation, it instructs libvirt to avoid terminating the VM if the guest OS shuts down while the backup is still running. In that scenario the VM is reset and paused instead of terminated, allowing the backup to finish. Once the backup finishes, the VM process is terminated. I added support for this in virtnbdbackup 2.40.

2 December 2025

Simon Josefsson: Guix on Trisquel & Ubuntu for Reproducible CI/CD Artifacts

Last week I published Guix on Debian container images that prepared for today's announcement of Guix on Trisquel/Ubuntu container images. I have published images with a reasonably modern Guix for Trisquel 11 aramo, Trisquel 12 ecne, Ubuntu 22.04 and Ubuntu 24.04. The Ubuntu images are available for both amd64 and arm64, but unfortunately Trisquel arm64 containers aren't available yet, so those images are amd64-only. Images for ppc64el and riscv64 are work in progress. The currently supported container names are:
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
Or, if you prefer, the guix-on-dpkg images on Docker Hub:
docker.io/jas4711/guix-on-dpkg:trisquel11-guix
docker.io/jas4711/guix-on-dpkg:trisquel12-guix
docker.io/jas4711/guix-on-dpkg:ubuntu22.04-guix
docker.io/jas4711/guix-on-dpkg:ubuntu24.04-guix
You may use them as follows; see the guix-on-dpkg README for how to start guix-daemon and install packages.
jas@kaka:~$ podman run -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
root@guix:/# head -1 /etc/os-release 
NAME="Trisquel GNU/Linux"
root@guix:/# guix describe
  guix 136fc8b
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: 136fc8bfe91a64d28b6c54cf8f5930ffe787c16e
root@guix:/# 
You may now be asking yourself: why? Fear not, gentle reader, because having two container images with roughly similar software is a great tool for attempting to build software artifacts reproducibly, and for comparing the results to spot differences. Obviously. I have been using this pattern to get reproducible tarball artifacts for several software releases for around a year and a half, since libntlm 1.8. Let's walk through how to set up a CI/CD pipeline that will build a piece of software in four different jobs, for Trisquel 11/12 and Ubuntu 22.04/24.04. I am in the process of learning Codeberg/Forgejo CI/CD, so I am still using GitLab CI/CD here, but the concepts should be the same regardless of platform. Let's start by defining a job skeleton:
.guile-gnutls: &guile-gnutls
  before_script:
  - /root/.config/guix/current/bin/guix-daemon --version
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARGS &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - type guix
  - guix --version
  - guix describe
  - time guix install --verbosity=0 wget gcc-toolchain autoconf automake libtool gnutls guile pkg-config
  - time apt-get update
  - time apt-get install -y make git texinfo
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  script:
  - git clone https://codeberg.org/guile-gnutls/guile-gnutls.git
  - cd guile-gnutls
  - git checkout v5.0.1
  - ./bootstrap
  - ./configure
  - make V=1
  - make V=1 check VERBOSE=t
  - make V=1 dist
  after_script:
  - mkdir -pv out/$CI_JOB_NAME_SLUG/src
  - mv -v guile-gnutls/*-src.tar.* out/$CI_JOB_NAME_SLUG/src/
  - mv -v guile-gnutls/*.tar.* out/$CI_JOB_NAME_SLUG/
  artifacts:
    paths:
    - out/**
This installs some packages, clones guile-gnutls (it could be any project, this is just an example), builds it, and returns tarball artifacts. The artifacts are the git-archive and make dist tarballs. Let's instantiate the skeleton into four jobs, running the Trisquel 11/12 jobs on amd64 and the Ubuntu 22.04/24.04 jobs on arm64, for fun.
guile-gnutls-trisquel11-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
  extends: .guile-gnutls
guile-gnutls-ubuntu22.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
  extends: .guile-gnutls
guile-gnutls-trisquel12-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
  extends: .guile-gnutls
guile-gnutls-ubuntu24.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
  extends: .guile-gnutls
Running this pipeline will result in artifacts whose reproducibility you want to confirm. Let's add a pipeline job to do the comparison:
guile-gnutls-compare:
  image: alpine:latest
  needs: [ guile-gnutls-trisquel11-amd64,
           guile-gnutls-trisquel12-amd64,
           guile-gnutls-ubuntu22.04-arm64,
           guile-gnutls-ubuntu24.04-arm64 ]
  script:
  - cd out
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
  - sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
  - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
# Confirm modern git-archive tarball reproducibility
  - cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
# Confirm old git-archive (export-subst but long git describe) tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
# Confirm 'make dist' generated tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
  - cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
  artifacts:
    when: always
    paths:
    - ./out/**
Look how beautiful, almost like ASCII art! The commands print SHA256 checksums of the artifacts, sorted in a couple of ways, and then proceed to compare the relevant artifacts. What would the output of such a run be, you may wonder? You can look for yourself in the guix-on-dpkg pipeline, but here is the gist of it:
$ cd out
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-ubuntu22-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-ubuntu24-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-ubuntu22-04-arm64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-ubuntu24-04-arm64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
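The uniq -c -w64 idiom in the transcript above groups lines by their first 64 characters, i.e. by the SHA-256 hex digest alone, so a leading count of 2 means two bit-identical artifacts. A self-contained demonstration with stand-in files (names invented for illustration):

```shell
# Two directories with byte-identical stand-in artifacts.
mkdir -p demo/a demo/b
printf 'same contents\n' > demo/a/artifact.tar.gz
printf 'same contents\n' > demo/b/artifact.tar.gz
# Identical bytes hash identically, so the count column shows 2.
(cd demo && sha256sum */artifact.tar.gz | sort | uniq -c -w64)
```

Because uniq only compares the digest prefix, the differing file names after the hash do not prevent the two lines from being grouped together.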
That's it for today, but stay tuned for more updates on using Guix in containers, and remember: Happy Hacking!

Dirk Eddelbuettel: duckdb-mlpack 0.0.5: Added kmeans, version helpers, documentation

A new release of the still-recent duckdb extension for mlpack, the C++ header-only library for machine learning, was merged into the duckdb community extensions repo today, and has been updated at its duckdb mlpack extension page. This release 0.0.5 adds one new method: kmeans clustering. We also added two version accessors, for mlpack and for armadillo. We found during the work on random forests (added in 0.0.4) that the multithreaded random number generation was not quite right in the respective upstream code bases. This has by now been corrected in armadillo 15.2.2 as well as in the trunk version of mlpack, so if you build with those and set a seed, your forests and classifications will be stable across reruns. We added a second state variable, mlpack_silent, that can be used to suppress even the minimal prediction-quality summary some methods show, and expanded the documentation. For more details, see the repo for code, issues and more, and the extension page for more about this duckdb community extension.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Birger Schacht: Status update, November 2025

I started this month with a week of vacation which was followed by a small planned surgery and two weeks of sick leave. Nonetheless, I packaged and uploaded new releases of a couple of packages. Besides that, I reactivated a project I started in summer 2024: debiverse.org. The idea was to have interfaces to Debian bugs and packages that are usable on mobile devices (I know, ludicrous!). Back then I started with Flask and SQLAlchemy, but that soon got out of hand. I have now switched the whole stack to FastAPI and SQLModel, which makes it a lot easier to manage. And the upside is that it comes with an API and OpenAPI docs. For the rendered HTML pages I use Jinja2 with Tailwind as the CSS framework. I am currently using udd-mirror as the database backend, which works pretty well (for this single-user project). It would be nice to have some of the data in a faster index, like Typesense or Meilisearch. This way it would be possible to have faceted search or more performant full-text search. But I haven't found any software packaged in Debian that could provide this.
[Screenshot: the debiverse bug report list] [Screenshot: the debiverse Swagger API]

François Marier: Recovering from a broken update on the Turris Omnia

The recent Turris OS update from 7.2.3 to 9.0.0 took down my WiFi entirely. The wired network still works fine, but wireless is completely broken.

Factory reset It turns out the Omnia has an extensive (and fast) factory reset / recovery mode via the hardware reset button. Unfortunately, the factory image didn't work for me, possibly because I don't use the stock WiFi radios anymore.

Rolling back with schnapps Thanks to the fact that the Omnia uses a btrfs root filesystem, and the liberal use of snapshots around updates, I was able to roll back to the pre-9.0.0 state. First, I connected to the router using ssh:
ssh root@192.168.1.1
Then I listed the available snapshots:
$ schnapps list
#   Type        Size          Date                          Description
------+-----------+-------------+-----------------------------+------------------------------------
  500   post           15.98MiB   2025-08-09 11:27:48 -0700     Automatic post-update snapshot (TurrisOS 7.2.2 - hbs)
  506   pre            17.92MiB   2025-09-12 03:44:32 -0700     Automatic pre-update snapshot (TurrisOS 7.2.2 - hbs)
  507   post           17.88MiB   2025-09-12 03:45:14 -0700     Automatic post-update snapshot (TurrisOS 7.2.3 - hbs)
  515   time           20.03MiB   2025-11-02 01:05:01 -0700     Snapshot created by cron
  516   time           20.05MiB   2025-11-09 01:05:01 -0800     Snapshot created by cron
  517   time           20.29MiB   2025-11-16 01:05:00 -0800     Snapshot created by cron
  518   time           20.64MiB   2025-11-23 01:05:01 -0800     Snapshot created by cron
  519   time           20.83MiB   2025-11-30 01:05:00 -0800     Snapshot created by cron
  520   pre            87.91MiB   2025-11-30 07:41:10 -0800     Automatic pre-update snapshot (TurrisOS 7.2.3 - hbs)
  521   post          196.32MiB   2025-11-30 07:48:11 -0800     Automatic post-update snapshot (TurrisOS 9.0.0 - hbs)
  523   pre             4.44MiB   2025-11-30 20:47:31 -0800     Automatic pre-update snapshot
  524   post          224.00KiB   2025-11-30 20:47:43 -0800     Automatic post-update snapshot
  525   rollback      224.00KiB   2025-12-01 04:56:32 +0000     Rollback to snapshot factory
  526   pre             4.44MiB   2025-11-30 21:04:19 -0800     Automatic pre-update snapshot
  527   post          272.00KiB   2025-11-30 21:04:31 -0800     Automatic post-update snapshot
  528   rollback      272.00KiB   2025-12-01 05:13:38 +0000     Rollback to snapshot factory
  529   pre             4.52MiB   2025-11-30 21:28:44 -0800     Automatic pre-update snapshot
  530   single        208.00KiB                                 
  531   rollback      224.00KiB   2025-12-01 05:29:47 +0000     Rollback to snapshot factory
Finally, I rolled back to the exact state I was on before the 9.0.0 update:
$ schnapps rollback 520
Current state saved as snapshot number 532
Rolled back to snapshot 520
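Choosing the right rollback target can also be scripted. As a rough sketch (a hypothetical helper, assuming the `schnapps list` column layout shown above), one can pick the newest `pre` snapshot whose description mentions the version that was running before the update:

```python
def pre_snapshot_for(listing, version):
    """Find the newest 'pre'-type snapshot in `schnapps list` output
    whose description mentions the given running version, i.e. the
    state captured just before that version was updated away.
    Snapshot numbers increase over time, so the last match wins."""
    best = None
    for line in listing.splitlines():
        fields = line.split()
        # Data rows start with a numeric snapshot id followed by its type.
        if len(fields) >= 2 and fields[0].isdigit() and fields[1] == "pre" \
                and version in line:
            best = int(fields[0])
    return best
```

Feeding in the listing above with version "7.2.3" would select snapshot 520, the same one chosen by hand for the rollback.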

Full wipe As an aside, it turns out that the factory reset functionality is implemented as a btrfs rollback to a special factory snapshot. This is why it is so fast, but it also means that doing a simple factory reset doesn't wipe the data on your router. If you are planning to sell your device or otherwise dispose of it, you also need to delete all btrfs snapshots.

Conclusion While this update was very disappointing, especially since it's never happened before with major updates on Turris OS, it made me discover just how great the recovery tools are. It would be pretty tricky to fully brick one of these devices.

1 December 2025

Guido Günther: Free Software Activities November 2025

Another short status update of what happened on my side last month. Hand-holding the release machinery for Phosh 0.51.0, but there's more. See below for details on the above and more: phosh, phoc, phosh-mobile-settings, stevia, xdg-desktop-portal-phosh, pfs, Phrog, gmobile, feedbackd, feedbackd-device-themes, libcall-ui, wireplumber, Chatty, mobile-broadband-provider-info, Debian, Mobian, wlroots, libqrtr-glib, libadwaita-rs, phosh-site, bengalos-debs, gtk. Reviews This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions! Help Development If you want to support my work see donations. Comments? Join the Fediverse thread

Russ Allbery: Review: Forever and a Day

Review: Forever and a Day, by Haley Cass
Series: Those Who Wait #1.5
Publisher: Haley Cass
Copyright: 2020
ISBN: 979-8-5902-5966-3
Format: Kindle
Pages: 101
Forever and a Day is a coda to Haley Cass's self-published sapphic romance novel Those Who Wait. There is no point in reading it unless you have already read and enjoyed the full book and wanted more of a denouement. Given that Those Who Wait is a romance novel, it is definitionally not a spoiler to reveal that Sutton and Charlotte ended up together. This novella is seven scenes sketching out the next few years of their lives, interspersed with press clippings and social media commentary. These tie up loose ends, give the characters a bit more time together, throw in one more conflict and resolution, add one more sex scene, and stick a few exclamation points after the happily ever after. I am the sort of person who likes long denouements in stories, so I'm the target audience for this sort of sequel that's essentially additional chapters to the book. (The funniest version of this I've read is Jacqueline Carey's Saints Astray.) They are usually not great literature, since there are good reasons for not including these chapters in the book. That is exactly what this is: a few more chapters of the characters being happy, entirely forgettable, and of interest only to people who want that. Cass does try to introduce a bit of a plot via some light family conflict, which was sweet and mostly worked, and some conflict over having children, which was very stereotyped and which I did not enjoy as much. I thought the earlier chapters of this novella were the stronger ones, although I do have to give the characters credit in the later chapters for working through conflict in a mature and fairly reasonable way. It does help, though, when the conflict is entirely resolved by one character being right and the other character being happily wrong. That's character conflict on easy mode. I was happy to see that Sutton got a career, although as in the novel I wish Cass had put some more effort into describing Sutton's efforts in building that career. 
The details are maddeningly vague, which admittedly matches the maddeningly vague description of Charlotte's politics but which left me unsatisfied. Charlotte's political career continues to be pure wish fulfillment in the most utterly superficial and trivialized way, and it bothered me even more in the novella than it did in the novel. We still have absolutely no idea what she stands for, what she wants to accomplish, and why anyone would vote for her, and yet we get endless soft-focus paeans to how wonderful she will be for the country. Her opponents are similarly vague to the point that the stereotypes Cass uses to signal their inferiority to Charlotte are a little suspect. I'm more critical of this in 2025 than I would have been in 2015 because the last ten years have made clear the amount of damage an absolute refusal to stand for anything except hazy bromides causes, and I probably shouldn't be this annoyed that Cass chose to vaguely gesture towards progressive liberalism without muddying her romance denouement with a concrete political debate. But, just, gah. I found the last chapter intensely annoying, in part because the narrative of that chapter was too cliched and trite to sufficiently distract me from the bad taste of the cotton-candy politics. Other than that, this was minor, sweet, and forgettable. If you want another few chapters of an already long novel, this delivers exactly what you would expect. If the novel was plenty, nothing about this novella is going to change your mind and you can safely skip it. I really liked the scene between Charlotte and Sutton's mom, though, and I'm glad I read the novella just for that. Rating: 6 out of 10

30 November 2025

Utkarsh Gupta: FOSS Activities in November 2025

Here's my monthly but brief update about the activities I've done in the FOSS world.

Debian
Whilst I didn t get a chance to do much, here are still a few things that I worked on:
  • Did a few sessions with the new DFSG team to help kickstart things, et al.
  • Assisted a few folks in getting their patches submitted via Salsa.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu
I joined Canonical to work on Ubuntu full-time back in February 2021. Whilst I can't give a full, detailed list of things I did, here's a quick TL;DR of what I did:
  • Successfully released Resolute Snapshot 1!
    • This one was particularly interesting as it was done without the ISO tracker and cdimage access.
    • There are some wrinkles that need ironing out for the next snapshot.
  • Resolute Raccoon is now fully and formally open.
  • Assisted a bunch of folks with my Archive Admin and Release team hats to:
    • review NEW packages for Ubuntu Studio.
    • remove old binaries that are stalling transition and/or migration.
    • LTS requalification of Ubuntu flavours.
    • bootstrapping dotnet-10 packages.
    • removal of openjdk-19 from Jammy, which sparked some interesting discussions.

Debian (E)LTS
This month I have worked 22 hours on Debian Long Term Support (LTS) and on its sister Extended LTS project and did the following things:
  • wordpress: There were multiple vulnerabilities reported in WordPress, leading to Sent Data & Cross-site Scripting.
    • [bookworm]: Roberto rightly pointed out that the upload to bookworm hadn't gone through last month, so I re-uploaded wordpress/6.1.9+dfsg1-0+deb12u1 to bookworm-security.
  • ruby-rack: There were multiple vulnerabilities reported in Rack, leading to DoS (memory exhaustion) and proxy bypass.
    • [ELTS]: Last month I had backported fixes for CVE-2025-46727 & CVE-2025-32441 to buster and stretch, but the other backports were a bit tricky due to really old versions.
    • I spent a bit more time but there's a lot to demystify. Gonna take a bit of a break from this one and come back to it after doing other updates. Might even consider sending an RFH to the list.
  • libwebsockets: Multiple issues were reported in LWS causing denial of service and stack-based buffer overflow.
  • mako: It was found that Mako, a Python template library, was vulnerable to a denial of service attack via crafted regular expressions.
    • [LTS]: For bullseye, these were fixed via 1.1.3+ds1-2+deb11u1. And released as DLA 4393-1.
    • Backporting tests was an interesting exercise as I had to make them compatible with the bullseye version. :)
  • ceph: Affected by CVE-2024-47866, using the argument x-amz-copy-source to put an object and specifying an empty string as its content leads to the RGW daemon crashing, resulting in a DoS attack.
    • [LTS]: Whilst the patch is straightforward, backports are a bit tricky. I've prepared the update but would like to reach out to zigo, the maintainer, to make sure nothing regresses.
    • [ELTS]: Same as LTS, I'd like to get a quick review and upload to LTS first before I start staging uploads for ELTS.
  • [LTS] Attended the monthly LTS meeting on IRC. Summary here.
    • It was also followed by a 50-minute post-meeting technical discussion/question session.
  • [E/LTS] Monitored discussions on mailing lists, IRC, and all the documentation updates. Thanks, Sylvain, for a great documentation summary.

Until next time.
:wq for today.

Russ Allbery: Review: The Last Soul Among Wolves

Review: The Last Soul Among Wolves, by Melissa Caruso
Series: The Echo Archives #2
Publisher: Orbit
Copyright: August 2025
ISBN: 0-316-30404-2
Format: Kindle
Pages: 355
The Last Soul Among Wolves is urban high fantasy with strong mystery vibes. It is a direct sequel to The Last Hour Between Worlds. You need the previous book for some character setup (and this book would spoil it badly), but you don't have to remember the first book in detail. Only the main plot outcomes are directly relevant and the characters will remind you of those. Kembrel Thorne is a Hound, the equivalent of a police detective in the medieval-inspired city setting of this series, but this book does not open with an official assignment. Instead, she has been dragged by her childhood friend Jaycel Morningrey as company for a reading of the will of old lady Lovegrace, reclusive owner of a gothic mansion on an island connected to the city by an intermittent sandbar. A surprise reunion with her gang of childhood friends ensues, followed by the revelation that they are all in serious trouble. Shortly after Kem left the group to become a Hound, the remaining four, plus several other apparently random people, got entangled with a powerful Echo artifact. Now that Lovegrace has died, one of them will inherit the artifact and the ability to make a wish, but only one. The rest will be killed at decreasing intervals until only the winner is left alive. The Last Hour Between Worlds was fae fantasy built around a problem that was more of a puzzle than a mystery. The Last Soul Among Wolves is closer to a classic mystery: A cast of characters are brought together and semi-isolated in a rural house, they start dying, and it's up to the detective to solve the mystery of their death before it's too late. In this case, the initial mechanism of death is supernatural and not in doubt; the challenge instead is how to stop it from happening again. But Kem's problems quickly become more complicated. As mystery plots go, this is more thriller than classical despite the setting. 
There are a few scenes of analyzing clues, but Kem is more likely to use the time-honored protagonist technique of throwing herself into danger and learning what's going on via the villain monologues. As readers of the previous book would expect, Rika Nonesuch is here too, hired by another of Kem's old friends, and the two navigate their personal feelings and the rivalry between their guilds in much the way that they did in The Last Hour Between Worlds. As in the first book, there is a sapphic romance subplot, but it's a very slow burn asexual romance. The best part of this series continues to be the world-building. The previous book introduced the idea of the Echoes and sent the characters exploring into stranger and stranger depths. This book fleshes out the rules in more detail, creating something that feels partly like a fae realm and partly like high fantasy involving gods, but diverges from both into a logic of its own. The ending satisfyingly passes my test of fantasy mysteries: Resolving the mystery requires understanding and applying the rules of the setting, which are sufficiently strange to create interesting outcomes but coherent enough that the reader doesn't feel like the author is cheating. There are some hissable villains here, but my favorite part of this book was the way Caruso added a lot of nuance and poignancy to the Echoes rather than showing them only as an uncanny threat. That choice made the world feel deeper and richer. It's not yet clear whether that element is setup for a longer-term series plot, but I hope Caruso will develop the story in that direction. It felt to me like Caruso is aiming for an ongoing series rather than a multi-volume story with a definite ending. She avoids a full episodic reset (Rika, in particular, gets considerable character development and new complications that bode well for future volumes), but it doesn't feel like the series is building towards an imminent climax. This is not a complaint. 
I enjoy these characters and this world and will happily keep devouring each new series entry. If you liked The Last Hour Between Worlds, I think you will like this. It doesn't have the same delight of initial discovery of the great world-building, but the plot is satisfying and a bit more complex and the supporting characters are even better than those in the first book. Once again, Caruso kept me turning the pages, and I'm now looking forward to a third volume. Recommended. The third book in the series has not yet been announced, but there are indications on social media that it is coming. Rating: 7 out of 10

Otto Kekäläinen: DEP-18: A proposal for Git-based collaboration in Debian

I am a huge fan of Git, as I have witnessed how it has made software development so much more productive compared to the pre-2010s era. I wish all Debian source code were in Git to reap the full benefits. Git is not perfect, as it requires significant effort to learn properly, and the ecosystem is complex, with even more things to learn, ranging from cryptographic signatures and commit hooks to Git-assisted code review best practices, forge websites and CI systems. Sure, there is still room to optimize its use, but Git has certainly proven itself and is now the industry standard. Thus, some readers might be surprised to learn that Debian development in 2025 is not actually based on Git. In Debian, the version control is done by the Debian archive itself. Each commit is a new upload to the archive, and the commit message is the debian/changelog entry. The commit log is available at snapshots.debian.org. In practice, most Debian Developers (people who have the credentials to upload to the Debian archive) do use Git and host their packaging source code on salsa.debian.org, the GitLab instance of Debian. This is, however, based on each DD's personal preferences. The Debian project does not have any policy requiring that packages be hosted on salsa.debian.org or be in version control at all.

Is collaborative software development possible without git and version control software? Debian, however, has some peculiarities that may be surprising to people who have grown accustomed to GitHub, GitLab or various company-internal code review systems. In Debian:
  • The source code of the next upload is not public but resides only on the developer's laptop.
  • Code contributions are plain patch files, based on the latest revision released in the Debian archive (where the unstable area is equivalent to the main development branch).
  • These patches are submitted by email to a bug tracker that does no validation or testing whatsoever.
  • Developers applying these patches typically have elaborate Mutt or Emacs setups to facilitate fetching patches from email.
  • There is no public staging area, no concept of rebasing patches or withdrawing a patch and replacing it with a better version.
  • The submitter won't see any progress information until a notification email arrives after a new version has been uploaded to the Debian archive.
This system has served Debian for three decades. It is not broken, but using the package archive as version control just feels, well, archaic. There is a more efficient way, and indeed the majority of Debian packages have a metadata field, Vcs-Git, that advertises which version control repository the maintainer uses. However, newcomers to Debian are surprised to notice that not all packages are hosted on salsa.debian.org but at various random places with their own account and code submission systems, and there is nothing enforcing or even warning if the code there is out of sync with what was uploaded to Debian. Any Debian Developer can at any time upload a new version with whatever changes, bypassing the Git repository, even when the package advertises a Git repository. All PGP-signed commits, Git tags and other information in the Git repository are currently just extras, as the Debian archive does not enforce or validate anything about them. This also makes contributing to multiple packages in parallel hard. One can't just go on salsa.debian.org, fork a bunch of repositories and submit Merge Requests. Currently, the only reliable way is to download source packages from Debian unstable, develop patches on top of them, and send the final version as a plain patch file by email to the Debian bug tracker. To my knowledge, no system exists to facilitate working with the patches in the bug tracker, such as rebasing patches 6 months later to detect whether they (or equivalent changes) were applied or whether refreshed versions need to be sent. To newcomers in Debian, it is even more surprising that there are packages that are on salsa.debian.org but have the Merge Requests feature disabled. This is often because the maintainer does not want to receive notification emails about new Merge Requests, but rather just emails from bugs.debian.org. 
This may sound arrogant, but keep in mind that these developers put in the effort to set up their Mutt/Emacs workflow for the existing Debian process, and extending it to work with GitLab notifications is not trivial. There are also purists who want to do everything via the command-line (without having to open a browser, run JavaScript and maintain a live Internet connection), and tools like glab are not convenient enough for the full workflow.
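Whether a package advertises Salsa hosting can be checked from its Vcs-Git field. A minimal sketch (hand-parsing a control-file stanza for illustration; in practice one would use the python-debian library):

```python
from urllib.parse import urlparse


def vcs_git_host(control_text):
    """Extract the hostname from a Vcs-Git field in a
    debian/control-style stanza, or None if the field is absent."""
    for line in control_text.splitlines():
        if line.startswith("Vcs-Git:"):
            # The field value is the URL, optionally followed by
            # extras such as "-b <branch>"; take the URL only.
            url = line.split(":", 1)[1].strip().split()[0]
            return urlparse(url).hostname
    return None


def hosted_on_salsa(control_text):
    """True if the package's declared Git repository lives on Salsa."""
    return vcs_git_host(control_text) == "salsa.debian.org"
```

This is roughly the kind of check behind the hosting statistics quoted later in the post: a Vcs-Git field pointing anywhere other than salsa.debian.org (or missing entirely) marks a package outside the common workflow.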

Inefficient ways of working prevent Debian from flourishing I would claim, based on my personal experiences from the past 10+ years as a Debian Developer, that the lack of high-quality and productive tooling is seriously harming Debian. The current methods of collaboration are cumbersome for aspiring contributors to learn, and suboptimal to use both for new and seasoned contributors. There are no exit interviews for contributors who left Debian, no comprehensive data on reasons to contribute or stop contributing, nor are there any metrics tracking how many people tried but failed to contribute to Debian. Some data points to support my concerns do exist:
  • The contributor database shows that the number of contributors is growing slower than Debian's popularity.
  • Most packages are maintained by one person working alone (just pick any package at random and look at the upload history).

Debian should embrace git, but decision-making is slow Debian is all about community and collaboration. One would assume that Debian prioritized above all making collaboration tools and processes simpler, faster and less error-prone, as it would help both current and future package maintainers. Yet, it isn't so, due to some reasons unique to Debian. There is no single company or entity running Debian, and it has managed to operate as a pure meritocracy and do-cracy for over 30 years. This is impressive and admirable. Unfortunately, some of the infrastructure and technical processes are also nearly 30 years old and very difficult to change due to the same reason: the nature of Debian's distributed decision-making process. As a software developer and manager with 25+ years of experience, I strongly feel that developing software collaboratively using Git is a major step forward that Debian needs to take, in one form or another, and I hope to see other DDs voice their support if they agree.

Debian Enhancement Proposal 18 Following how consensus is achieved in Debian, I started drafting DEP-18 in 2024, and it is currently awaiting enough thumbs up at https://salsa.debian.org/dep-team/deps/-/merge_requests/21 to get into CANDIDATE status next. In summary, DEP-18 proposes that everyone keen on collaborating should:
  1. Maintain Debian packaging sources in Git on Salsa.
  2. Use Merge Requests to show your work and to get reviews.
  3. Run Salsa CI before upload.
The principles above are not novel. According to stats at e.g. trends.debian.net and UDD, ~93% of all Debian source packages are already hosted on salsa.debian.org. As of June 1st, 2025, only 1640 source packages remain that are not hosted on Salsa. The purpose of DEP-18 is to state in writing what Debian is currently doing for most packages, and thus express what, among others, new contributors should be learning and doing, so that basic collaboration is smooth and free from structural obstacles. Most packages are also already allowing Merge Requests and using Salsa CI, but there hasn't been any written recommendation anywhere in Debian to do so. The Debian Policy (v4.7.2) does not even mention the word Salsa a single time. The current process documentation on how to do non-maintainer uploads or how to salvage packages is all based on uploading packages to the archive, without any consideration of git-based collaboration such as posting a Merge Request first. Personally, I feel posting a Merge Request would be a better approach, as it would invite collaborators to discuss and provide code reviews. If there are no responses, the submitter can proceed to merge, but compared to direct uploads to the Debian archive, the Merge Request practice at least tries to offer a time and place for discussions and reviews to happen. It could very well be that in the future somebody comes up with a new packaging format that makes upstream source package management easier, or a monorepo with all packages, or some other future structures or processes. Having a DEP to state how to do things now does not prevent people from experimenting and innovating if they intentionally want to do that. The DEP is merely an expression of the minimal common denominators in the packaging workflow that maintainers and contributors should follow, unless they know better.

Transparency and collaboration Among the DEP-18 recommendations is:
The recommended first step in contributing to a package is to use the built-in Fork feature on Salsa. This serves two purposes. Primarily, it allows any contributor to publish their Git branches and submit them as Merge Requests. Additionally, the mere existence of a list of Forks enables contributors to discover each other, and in rare cases when the original package is not accepting improvements, collaboration could arise among the contributors and potentially lead to permanent forks in the general meaning. Forking is a fundamental part of the dynamics in open source that helps drive quality and agreement. The ability to fork ultimately serves as the last line of defense of users rights. Git supports this by making both temporary and permanent forks easy to create and maintain.
Further, it states:
Debian packaging work should be reasonably transparent and public to allow contributors to participate. A maintainer should push their pending changes to Salsa at regular intervals, so that a potential contributor can discover if a particular change has already been made or a bug has been fixed in version control, and thus avoid duplicate work. Debian maintainers should make reasonable efforts to publish planned changes as Merge Requests on Salsa, and solicit feedback and reviews. While pushing changes directly on the main Git branch is the fastest workflow, second only to uploading all changes directly to Debian repositories, it is not an inclusive way to develop software. Even packages that are maintained by a single maintainer should at least occasionally publish Merge Requests to allow new contributors to step up and participate.
I think these are key aspects leading to transparency and true open source collaboration. Even though this talks about Salsa, which is based on GitLab, the concepts are universal and will work also on other forges, like Forgejo or GitHub. The point is that sharing work-in-progress on a real-time platform, with CI and other supporting features, empowers and motivates people to iterate on code collaboratively. As an example of an anti-pattern, Oracle MySQL publishes the source code for all their releases and is license-compliant, but as they don't publish their Git commits in real time, it does not feel like a real open source project. Non-Oracle employees are not motivated to participate as second-class developers who are kept in the dark. Debian should embrace Git and sharing work in real time, embodying a true open source spirit.

Recommend, not force Note that the Debian Enhancement Proposals are not binding. Only the Debian Policy and Technical Committee decisions carry that weight. The nature of collaboration is voluntary anyway, so the DEP does not need to force anything on people who don't want to use salsa.debian.org. DEP-18 is also not a guide for package maintainers. I have my own views and have written detailed guides in blog articles if you want to read more on, for example, how to do code reviews efficiently. Within DEP-18, there is plenty of room to work in many different ways, and it does not try to force one single workflow. The goal here is simply to have agreed-upon minimal common denominators among those who are keen to collaborate using salsa.debian.org, not to dictate a complete code submission workflow. Once we reach this, there will hopefully be less friction in the most basic and recurring collaboration tasks, giving DDs more energy to improve other processes or just invest in having more and newer packages for Debian users to enjoy.

Next steps In addition to lengthy online discussions on mailing lists and DEP reviews, I also presented on this topic at DebConf 2025 in Brest, France. Unfortunately, the recording is not yet up on Peertube. The feedback has been overwhelmingly positive. However, there are a few loud and very negative voices that cannot be ignored. Maintaining a Linux distribution at the scale and complexity of Debian requires extraordinary talent and dedication, and people doing this kind of work often have strong views, and most of the time for good reasons. We do not want to alienate existing key contributors with new processes, so maximum consensus is desirable. We also need more data on what the 1000+ current Debian Developers view as a good process to avoid being skewed by a loud minority. If you are a current or aspiring Debian Developer, please add a thumbs up if you think I should continue with this effort (or a thumbs down if not) on the Merge Request that would make DEP-18 have candidate status. There is also technical work to do. Increased Git use will obviously lead to growing adoption of the new tag2upload feature, which will need to get full git-buildpackage support so it can integrate into salsa.debian.org without turning off Debian packaging security features. The git-buildpackage tool itself also needs various improvements, such as making contributing to multiple different packages with various levels of diligence in debian/gbp.conf maintenance less error-prone. Eventually, if it starts looking like all Debian packages might get hosted on salsa.debian.org, I would also start building a review.debian.org website to facilitate code review aspects that are unique to Debian, such as tracking Merge Requests across GitLab projects in ways GitLab can't, highlighting which submissions need review most urgently, feeding code reviews and approvals into the contributors.debian.org database for better attribution, and so forth. 
Details on this vision will be in a later blog post, so subscribe to updates!

29 November 2025

Freexian Collaborators: Monthly report about Debian Long Term Support, October 2025 (by Roberto C. Sánchez)

The Debian LTS Team, funded by Freexian s Debian LTS offering, is pleased to report its activities for October.

Activity summary During the month of October, 21 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below). The team released 37 DLAs fixing 893 CVEs. The team has continued in its usual rhythm, preparing and uploading security updates targeting LTS and ELTS, as well as helping with updates to oldstable, stable, testing, and unstable. Additionally, the team received several contributions of LTS uploads from Debian Developers outside the standing LTS Team. Notable security updates:
  • https-everywhere, prepared by Markus Koschany, deals with a problem created by ownership of the https-rulesets.org domain passing to a malware operator
  • openjdk-17 and openjdk-11, prepared by Emilio Pozuelo Monfort, fixes XML external entity and certificate validation vulnerabilities
  • intel-microcode, prepared by Tobias Frost, fixes a variety of privilege escalation and denial of service vulnerabilities
Notable non-security updates:
  • distro-info-data, prepared by Stefano Rivera, updates information concerning current and upcoming Debian and Ubuntu releases
Contributions from outside the LTS Team:
  • Lukas Märdian, a Debian Developer, provided an update of log4cxx
  • Andrew Ruthven, one of the request-tracker4 maintainers, provided an update of request-tracker4
  • Christoph Goehre, co-maintainer of thunderbird, provided an update of thunderbird
Beyond the typical LTS updates, the team also helped the Debian community more broadly:
  • Guilhem Moulin prepared oldstable/stable updates of libxml2, and an unstable update of libxml2.9
  • Bastien Roucariès prepared oldstable/stable updates of imagemagick
  • Daniel Leidert prepared an oldstable update of python-authlib, oldstable update of libcommons-lang-java and stable update of libcommons-lang3-java
  • Utkarsh Gupta prepared oldstable/stable/testing/unstable updates of ruby-rack
The LTS Team is grateful for the opportunity to contribute to making LTS a high-quality offering for sponsors and users. We are also particularly grateful for the collaboration from others outside the team; their contributions are important to the success of the LTS effort.

Individual Debian LTS contributor reports

Thanks to our sponsors Sponsors that joined recently are in bold.

28 November 2025

Clint Adams: monkeying around bitrot

One of the servers to which I SSH ratcheted up its public key requirements and thus the Monkeysphere key I've been using for 15 years stopped working. Unfortunately, monkeysphere gen-subkey hardcodes RSA keys and if I'm going to be forced to use a new subkey I want mine to be of the 25519 variety. Therefore, to add a subkey by hand: gpg --expert --edit-key $KEYID Follow roughly what's in /usr/share/monkeysphere/m/gen_subkey, but change the key type to 11 (ECC (set your own capabilities)), don't bother with Encrypt capability, and pick Curve25519. monkeysphere subkey-to-ssh-agent and agent-transfer will be all happy with the "ed25519" subkey without any code modifications, and you won't need to rewrite monkeysphere from scratch to use Sequoia for the next 15 years.
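The interactive `--expert` menu walk described above can also be done non-interactively: GnuPG 2.1 and later ship `--quick-add-key`, which adds a sign-only Ed25519 subkey in one command. The sketch below is not the monkeysphere-documented procedure, just an equivalent shortcut; the demo identity and throwaway keyring are illustrative stand-ins so nothing in a real keyring is touched.

```shell
# Sketch: add an Ed25519 sign-only subkey without the --expert menu.
# Assumes GnuPG >= 2.1. Uses a throwaway keyring and a demo identity.
export GNUPGHOME="$(mktemp -d)"

# Create a demo primary key (stand-in for your existing $KEYID).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key demo@example.invalid ed25519 sign never

# Find its fingerprint, then add the signing subkey
# (no Encrypt capability, matching the advice above).
FPR=$(gpg --list-keys --with-colons demo@example.invalid \
      | awk -F: '/^fpr/ {print $10; exit}')
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-add-key "$FPR" ed25519 sign never

# The new subkey shows up as a "sub" record of curve ed25519.
gpg --list-keys --with-colons "$FPR" | grep '^sub'
```

Against a real key you would run `gpg --quick-add-key` with your own fingerprint instead of the demo one; monkeysphere subkey-to-ssh-agent should then see the new ed25519 subkey just as the menu-driven route does.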
Posted on 2025-11-28

Simon Josefsson: Container Images for Debian with Guix

The debian-with-guix-container project builds and publishes container images of Debian GNU/Linux stable with GNU Guix installed. The images are like normal Debian stable containers but have the guix tool and a reasonably fresh guix pull. Supported architectures include amd64 and arm64. The multi-arch container is called: registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable It may also be accessed via debian-with-guix at Docker Hub as: docker.io/jas4711/debian-with-guix:stable The container images may be used like this:
$ podman run --privileged -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
root@guix:/# hello
bash: hello: command not found
root@guix:/# guix describe
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild &
[1] 21
root@guix:/# GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
root@guix:/# guix describe
Generation 2    Nov 28 2025 10:14:11    (current)
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# guix install --verbosity=0 hello
accepted connection from pid 55, user root
The following package will be installed:
   hello 2.12.2
hint: Consider setting the necessary environment variables by running:
     GUIX_PROFILE="/root/.guix-profile"
     . "$GUIX_PROFILE/etc/profile"
Alternately, see `guix package --search-paths -p "/root/.guix-profile"'.
root@guix:/# GUIX_PROFILE="/root/.guix-profile"
root@guix:/# . "$GUIX_PROFILE/etc/profile"
root@guix:/# hello
Hello, world!
root@guix:/# 
Below is an example GitLab pipeline job that demonstrates how to run guix install to install additional dependencies, and then download and build a package that picks up the installed package from the system.
test-wget-configure-make-libksba-amd64:
  image: registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
  before_script:
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARG &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - guix describe
  - guix install libgpg-error
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  - apt-get install --update -y --no-install-recommends build-essential wget ca-certificates bzip2
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1
The images were initially created for use in GitLab CI/CD pipelines but should work for any use. The images are built in a GitLab CI/CD pipeline, see .gitlab-ci.yml. The containers are derived from official Debian stable images with Guix installed and a successful run of guix pull, built using buildah invoked from build.sh using image/Containerfile that runs image/setup.sh. The pipeline also pushes images to the GitLab container registry, and then to Docker Hub. Guix binaries are downloaded from the Guix binary tarballs project because of upstream download site availability and bandwidth concerns. Enjoy these images! Hopefully they can help you overcome the loss of Guix in Debian, which used to make Guix a mere apt-get install guix away. There are several things that may be improved further. An alternative to using podman --privileged is to use --security-opt seccomp=unconfined --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN, which may be slightly more fine-grained. For ppc64el support I ran into an error message that I wasn't able to resolve:
guix pull: error: while setting up the build environment: cannot set host name: Operation not permitted
For riscv64, I can't even find a Guix riscv64 binary tarball for download; is there one anywhere? For arm64 containers, it seems that you need to start guix-daemon with --disable-chroot to get something to work, at least on GitLab.com's shared runners, otherwise you will get this error message:
guix install: error: clone: Invalid argument
Building the images themselves also requires disabling some security functionality, and I was not able to build images with buildah without providing --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN, otherwise there were errors like this:
guix pull: error: cloning builder process: Operation not permitted
guix pull: error: clone: Operation not permitted
guix pull: error: while setting up the build environment: cannot set loopback interface flags: Operation not permitted
Finally, on amd64, it seems --security-opt seccomp=unconfined is necessary, otherwise there is an error message like this, even if you use --disable-chroot:
guix pull: error: while setting up the child process: in phase setPersonality: cannot set personality: Function not implemented
This particular error is discussed upstream, but I think generally that these errors suggest guix-daemon should make more features optional: if some particular feature is not available, gracefully fall back to another mode of operation, instead of exiting with an error. Of course, it should never fall back to an insecure mode of operation, unless the user requests that. Happy Hacking!

Russell Coker: 10gbit and 40gbit Home Networking

Aliexpress has a 4 port 2.5gbit switch with 2*SFP+ sockets for $34.35 delivered [1]. 4 ports isn't very good for the more common use cases (if daisy chaining them then it's only 2 available for devices) so this is really a device for use with a 10Gbit uplink. Aliexpress has a pair of SFP+ 10Gbit devices with 1M of copper between them for $15.79 delivered [2]. That page also offers a pair of QSFP+ 40Gbit devices with 1M of copper between them for $27.79 delivered. They have a dual port SFP+ card for a server with two of the pairs of SFP+ 10gbit devices with copper between them for $32.51 delivered [3]. So you can get a 2.5gbit switch with two 10gbit uplink cables to nearby servers for $66.86 including postage. I don't need this but it is tempting. I spent $93.78 to get 2.5gbit networking [4] so spending $66.86 to get part of my network to 10gbit isn't much. It is $99.81 including postage for a Mellanox 2*40Gbit QSFP+ card and two QSFP+ adaptors with 3M of copper between them [5]. It is $55.81 including postage for the Mellanox card without the cable. So that's $155.62 for a point to point 40gbit link between systems that are less than 3M apart, which is affordable for a home lab. As an aside, the only NVMe I've tested which can deliver such speeds was in a Thinkpad, and the Thinkpad entered a thermal throttling state after a few seconds of doing that. The best price I could see for a 40Gbit switch is $1280 for a L3 managed switch with 2*40G QSFP+ slot ports, 4*10G SFP+ ports, and 48*2.5G RJ45 ports [6]. That's quite affordable for the SME market but a bit expensive for home users (although I'm sure that someone on r/homelab has one). I'm not going to get 40Gbit, that's well above what I need, and while a point to point link is quite affordable I don't have servers in that range. But I am seriously considering 10Gbit; I get paid to do enough networking work that having some hands-on experience with 10Gbit could be useful. For a laptop, a 5gbit ethernet USB device is $29.48 including delivery, which isn't too expensive [7]. The faster ones seem to be all Thunderbolt and well over $100, which is disappointing as USB 3.2 can do up to 20Gbit.
If I start doing 10gbit over ethernet I'll get one of those USB devices for testing. For a single server it's cheaper and easier to get a 4 port 2.5Gbit ethernet card for $55.61 [8].

27 November 2025

Russ Allbery: Review: A Matter of Execution

Review: A Matter of Execution, by Nicholas & Olivia Atwater
Series: Tales of the Iron Rose #0
Publisher: Starwatch Press
Copyright: 2024
ISBN: 1-998257-08-8
Format: Kindle
Pages: 131
A Matter of Execution is the introductory novella that kicked off the Tales of the Iron Rose series. It is steampunk fantasy with airships. I previously read and reviewed the subsequent novel, Echoes of the Imperium. As noted in that review, I read the novel first. That was a mistake; this is a much better place to start. A Matter of Execution was clearly intended as the introduction of all of these characters. More importantly, I think reading the novella first would have given me enough affinity with the characters to not mind the worst part of Echoes of the Imperium: the extremely slow first half that seemed filled with the protagonist's impostor syndrome. A Matter of Execution opens, fittingly, with Captain William Blair, a goblin, former Imperial soldier, Oathbreaker, and series first-person protagonist being carted to his execution. He is not alone; in the same prison wagon is an arrogant (and racist) man named Strahl, the killer of one of the rulers of Lyonesse. Strahl is rather contemptuous of Blair's claim to be a captain, given that he's both a goblin and an Oathbreaker. Strahl quickly revises that opinion when Blair's crew, somewhat predictably given that he is the series protagonist, creates a daring escape for both of them. The heat of action gives both a chance to gain some respect for the other, which explains why Blair is not only willing to invite Strahl to join his crew, but to go back for Strahl's companion. Breaking out Strahl's companion will be a more difficult, and surprising, problem. Nicholas Atwater is a role-playing game GM, something that you will learn in the "about the author" section at the end of this novella but probably will have guessed by then. Even more than Echoes of the Imperium, this novella feels like a (good) write-up of an RPG adventure. A wildly varied cast of characters come together and form a party with a well-defined objective that has some surrounding mysteries and surprises. 
Each of those characters gets an individual moment to show off their specific skills. Readers with a certain gaming background will know exactly where to insert the Borderlands-style title card with a slightly demented description of each character. This is not a complaint. You may be able to see the bones of the setup adventure for a long-running campaign, but I like this style of character introduction and the story moves right along. There are a ton of varied characters, some interesting villains and maybe-villains, a rather satisfying heist setup, and some good chemistry and a bit of banter. This is not a deep story (it's clearly an introductory episode for both the characters and the world background), but it's a fun way to spend a few hours. I think the best part of this series is the world-building. If you have read my review of Echoes of the Imperium, you have unfortunately been mildly spoiled for the revelation in this novella. I don't think it hurt the story that much; you will be able to predict what obvious gaps in the novel backstory the novella is going to fill in, but it's just as enjoyable to see how that happens. But the Atwaters aren't going to drop any of the big world-building bombs in the introductory novella, of course. Instead, you get a gradual introduction to the nature of magic in this world, some of the political setup of the recent war, and a quick introduction to the capabilities of Strahl's mysterious companion. If you've not yet read this series, I recommend starting here. It's a quick investment to see if you'll be interested. The novel is heavier and slower, and the pacing of the first half isn't great, but the world-building is even better. If you've already read the novel, this is still worth reading as long as you enjoyed it. You'll have a few moments of "oh, that's how that happened," and it's a fun and fast-moving way to spend a bit more time with the characters. Followed by Echoes of the Imperium. 
The back matter of the novella lists The Winds of Fortune as forthcoming. Rating: 7 out of 10
