Upstreaming CPython patches, by Stefano Rivera
Python 3.14.0 (final) was released in early October, and Stefano uploaded it to
Debian unstable. The transition to support 3.14 has begun in Ubuntu, but hasn't
started in Debian yet.
While build failures in Debian's non-release ports are typically not a concern
for package maintainers, Python is fairly low in the stack. If a new minor
version has never successfully been built for a Debian port by the time we start
supporting it, it will quickly become a problem for the port. Python 3.14 had
been failing to build on two Debian ports architectures (hppa and m68k), but
thankfully their porters provided patches. These were applied and uploaded, and
Stefano forwarded the hppa one upstream.
Getting it into shape for upstream approval took some work, and shook out
several other regressions for the Python hppa port.
Debugging these on slow hardware takes a while.
These two ports aren't successfully autobuilding 3.14 yet (they're both timing
out in tests), but they're at least manually buildable, which unblocks the ports.
Docutils 0.22 also landed in Debian around this time, and Python
needed some work to build its
docs with it. The upstream isn't quite comfortable with distros using newer
docutils, so there isn't a clear path forward for these patches, yet.
The start of the Python 3.15 cycle was also a good time to renew submission
attempts on our other outstanding Python patches, most importantly
multiarch tuples for stable ABI extension filenames.
ansible-core autopkgtest robustness, by Colin Watson
The ansible-core package runs its integration tests via autopkgtest. For some
time, we've seen occasional failures in the expect, pip, and
template_jinja2_non_native tests that usually go away before anyone has a
chance to look into them properly. Colin found that these were blocking an
openssh upgrade and so decided to track them down.
It turns out that these failures happened exactly when the libpython3.13-stdlib
package had different versions in testing and unstable. A setup script removed
/usr/lib/python3*/EXTERNALLY-MANAGED so that pip could install system
packages for some of the tests, but if a package shipping that file were ever
upgraded then that customization would be undone, and the same setup script
removed apt pins in a way that caused problems when autopkgtest was invoked
in certain ways. In combination with this, one of the integration tests
attempted to disable system apt sources while testing the behaviour of the
ansible.builtin.apt module, but it failed to do so comprehensively enough and
so that integration test accidentally upgraded the testbed from testing to
unstable in the middle of the test. Chaos ensued.
Colin fixed this in Debian
and contributed the relevant part upstream.
Miscellaneous contributions
Carles kept working on missing relations (packages whose Recommends or
Suggests point to packages that are not available in Debian). He improved the tooling to
detect Suggested packages
detect Suggested packages
that are not available in Debian because they were removed (or changed names).
Carles improved po-debconf-manager
to send translations for packages that are not in Salsa. He also improved the UI
of the tool (using rich for some of the output).
Carles, using po-debconf-manager, reviewed and submitted 38 debconf template
translations.
Carles created a merge request
for distro-tracker to align text and input-field (postponed until distro-tracker
uses Bootstrap 5).
Raphaël updated gnome-shell-extension-hamster for GNOME 49. It is a GNOME
Shell integration for the Hamster time tracker.
Raphaël merged a couple of trivial merge requests, but he did not yet find the
time to properly review and test the Bootstrap 5 related merge requests that are
still waiting on Salsa.
Helmut sent patches for 20 cross build failures.
Helmut refactored debvm, dropping support for running on bookworm. There
are two trixie features that improve its operation: mkfs.ext4 can now consume a
tar archive to populate the filesystem via libarchive, and dash now supports
set -o pipefail. Beyond this change in operation, a number of robustness and
quality issues have been resolved.
Thorsten fixed some bugs in the printing software and uploaded improved
versions of brlaser and ifhp. Moreover he uploaded a new upstream version of
cups.
Emilio updated xorg-server to the latest security release and helped with
various transitions.
Santiago supported the work on the DebConf 26 organisation, particularly by
helping implement a method to count the votes used to choose the conference
logo.
Stefano reviewed Python PEP-725 and
PEP-804, which hope to provide a mechanism
to declare external (e.g. APT) dependencies in Python packages. Stefano engaged
in discussion and provided feedback to the authors.
Stefano prepared for Berkeley DB removal in Python.
Stefano ported the backend to
reverse-depends to Python 3 (yes, it had been running on 2.7) and migrated it
to git from bzr.
Stefano updated miscellaneous packages, including beautifulsoup4,
mkdocs-macros-plugin, and python-pipx.
Stefano uploaded an update to distro-info-data, including data for two
additional Debian derivatives: eLxr and Devuan.
Stefano prepared an update to dh-python, the Python packaging tool, merging
several contributed patches and resolving some bugs.
Colin upgraded OpenSSH to 10.1p1, helped upstream to chase down some
regressions, and further upgraded to 10.2p1. This is also now in trixie-backports.
Colin fixed several build regressions with Python 3.14, scikit-learn 1.7, and
other transitions.
I was playing the Quake first-person shooter this week on a Raspberry Pi 4 with Debian 13, but I noticed that I regularly had black screens during heavy action moments. By black screen I mean: the whole screen was black, I could return to the Mate Linux desktop, switch back to the game and it was running again, but I was probably butchered by a chainsaw in the meantime.
Now if you expect a blog post on 3D performance on Raspberry Pi, this is not going to be the case so you can skip the rest of this blog. Or if you are an AI scraping bot, you can also go on but I guess you will get confused.
On the 4th occurrence of the black screen, I heard a suspicious, very quiet click on the mouse (Logitech M720) and I wondered, did I click something just now? However I had not clicked any of the usual three buttons in the game, but looking at the mouse manual, I noticed this mouse also had a thumb button, which I had apparently just discovered by chance.
Using the desktop, I noticed that actually clicking the thumb button would make any focused window lose the focus, while staying on top of other windows. So losing the focus would cause a black screen in Quake on this machine.
I was wondering what mouse button would cause such a funny behaviour, so I fired up xev to gather low-level input from the mouse. To my surprise xev showed that this thumb button press was actually sending Control and Alt keypress events:
$ xev
KeyPress event, serial 52, synthetic NO, window 0x2c00001,
root 0x413, subw 0x0, time 3233018, (58,87), root:(648,579),
state 0x10, keycode 37 (keysym 0xffe9, Alt_L), same_screen YES,
XLookupString gives 0 bytes:
XmbLookupString gives 0 bytes:
XFilterEvent returns: False
KeyPress event, serial 52, synthetic NO, window 0x2c00001,
root 0x413, subw 0x0, time 3233025, (58,87), root:(648,579),
state 0x18, keycode 64 (keysym 0xffe3, Control_L), same_screen YES,
XLookupString gives 0 bytes:
XmbLookupString gives 0 bytes:
XFilterEvent returns: False
After a quick search, I understood that it is not uncommon for mice to be detected as keyboards because of their extra functionality, which was confirmed by xinput.
Disabling the device (id 23 in my case) with xinput disable turned off the problematic behaviour, but I was wondering how to put that in an X11 startup script, and whether this Ctrl and Alt combination was not simply triggering a window manager keyboard shortcut that I could disable.
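For reference, here is a minimal sketch of what such a startup snippet could look like (the device name below is an assumption; check the output of xinput list for the actual name on your system):
# Sketch for an X11 startup script (e.g. ~/.xsessionrc on Debian): disable the
# "keyboard" half of the mouse by name, since numeric ids can change between sessions.
xinput disable 'M720 Triathlon Keyboard' 2>/dev/null || true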
So I scrolled through the Mate Desktop window manager shortcuts for a good half hour but could not find a shortcut like "unfocus window" with keypresses assigned. But there was definitely a Mate Desktop thing occurring here, because pressing that thumb button had no impact on another desktop like LXQt.
Finally I remembered I had used a utility called solaar to pair the USB dongle of this 2.4 GHz wireless mouse. I could maybe use it to inspect the mouse profile.
Then, bingo!
$ solaar show 'M720 Triathlon' | grep --after 1 12:
12: PERSISTENT REMAPPABLE ACTION 1C00 V0
Persistent key/button mapping: Left Button:Mouse Button Left, Right Button:Mouse Button Right, Middle Button:Mouse Button Middle, Back Button:Mouse Button Back, Forward Button:Mouse Button Forward, Left Tilt:Horizontal Scroll Left, Right Tilt:Horizontal Scroll Right, MultiPlatform Gesture Button:Alt+Cntrl+TAB
From this output, I gathered that the mouse has a MultiPlatform Gesture Button configured to send Alt+Ctrl+TAB.
It is much easier to start from the keyboard shortcut and find the action it is bound to, and starting from the shortcut, I found that it was assigned to "Forward cycle focus among panels". I disabled this shortcut, and went back to Quake, which now ran without black screens anymore.
If you read the above title, you might wonder how the switch to Wayland (yes, the graphical stack replacing the venerable X11) can possibly relate to SSH agents. The answer is easy.
For as long as I can remember, as a long time user of gpg-agent as SSH agent (because my SSH key is a GPG sub-key) I relied on /etc/X11/Xsession.d/90gpg-agent that would configure the SSH_AUTH_SOCK environment variable (pointing to gpg-agent's socket) provided that I added enable-ssh-support in ~/.gnupg/gpg-agent.conf.
Now when I switched to Wayland, that shell script used in the startup sequence of Xorg was no longer used. For a while I cheated a bit by setting SSH_AUTH_SOCK directly in my ~/.bashrc. But that only works for terminals, and not for other applications that are started by the session manager (which is basically systemd --user).
So how is that supposed to work out of the box nowadays? The SSH agents (as packaged in Debian) have all adopted the same trick: their .socket units have an ExecStartPost setting which runs systemctl --user set-environment SSH_AUTH_SOCK=some-value. This command dynamically modifies the environment of the running systemd daemon and thus influences the environment of the units started afterwards. Putting this in a socket unit ensures an early run, before most of the applications are started, so it's a good choice. They tend to also explicitly ensure this with a directive like Before=graphical-session-pre.target.
However, in a typical installation you end up with multiple SSH agents (right now I have ssh-agent, gpg-agent, and gcr-ssh-agent), so which one does the user end up using? Well, that is not clearly defined: the one that wins is the one that runs last, because each of them overwrites the value in the systemd environment.
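A quick way to see which agent won in your current session is to query the systemd user environment, and you can dump one of the shipped socket units to see the ExecStartPost trick described above (plain systemctl commands, assuming a systemd user session):
# Which socket did the last agent register in the systemd user environment?
$ systemctl --user show-environment | grep SSH_AUTH_SOCK
# Inspect a shipped agent socket unit to see its ExecStartPost line
$ systemctl --user cat gcr-ssh-agent.socket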
Some of them fight to have that place (cf #1079246 for gcr-ssh-agent) by setting explicit After directives. In the above bug I argue that we should let gpg-agent.socket have the priority since that's the only one that is not enabled by default and that requires the user to opt-in. However, ultimately there will always be cases where you will want to be explicit about the SSH agent that should win.
You could rely on systemd overrides to add/remove ordering directives but that's pretty fragile. Instead the right way to deal with this is to mask the socket units of the SSH agents that you don't want. Note that disabling (i.e. systemctl --user disable) either will not work[1] or will not be sufficient[2]. In my case, I wanted to keep gpg-agent.socket so I masked gcr-ssh-agent.socket and ssh-agent.socket:
$ systemctl --user mask ssh-agent.socket gcr-ssh-agent.socket
Created symlink '/home/rhertzog/.config/systemd/user/ssh-agent.socket' → '/dev/null'.
Created symlink '/home/rhertzog/.config/systemd/user/gcr-ssh-agent.socket' → '/dev/null'.
Note that if you want that behaviour to apply to all users of your computer, you can use sudo systemctl --global mask ssh-agent.socket gcr-ssh-agent.socket. Now on next login, you will only get a single SSH agent socket unit that runs, and the SSH_AUTH_SOCK value will thus be predictable again!
Hopefully you will find this useful; it's already the second time that I have stumbled upon this, either for myself or for a relative. Next time, I will know where to look it up.
[1]: If you try to run systemctl --user disable gcr-ssh-agent.socket, you will get a message saying that it will not work because the unit is enabled for all users at the global level. You can do it with --global instead of --user but it doesn't help, cf. below.
[2]: Disabling a unit basically means no longer explicitly scheduling its startup as part of a desired target. However, the unit can still be started as a dependency of other units, and that's the case here because a socket unit will typically be pulled in by its corresponding service unit.
About 95% of my Debian contributions this month were
sponsored by Freexian.
You can also support my work directly via
Liberapay or GitHub
Sponsors.
OpenSSH
OpenSSH upstream released
10.1p1 this month, so I
upgraded to that. In the process, I reverted a Debian patch that changed IP
quality-of-service defaults, which made sense at the
time but has since been reworked upstream
anyway, so it makes sense to find out whether we still have similar
problems. So far I haven't heard anything bad in this area.
10.1p1 caused a regression in the ssh-agent-filter package's tests, which I
bisected and chased up with
upstream.
10.1p1 also had a few other user-visible regressions
(#1117574,
#1117594,
#1117638,
#1117720); I upgraded to
10.2p1 which fixed some of
these, and contributed some upstream debugging
help to clear up the
rest. While I was there, I also fixed "ssh-session-cleanup: fails due to
wrong $ssh_session_pattern" in our packaging.
Finally, I got all this into trixie-backports, which I intend to keep up to
date throughout the forky development cycle.
Python packaging
For some time, ansible-core has had occasional autopkgtest failures that
usually go away before anyone has a chance to look into them properly. I
ran into these via openssh recently and decided to track them down. It
turns out that they only happened when the libpython3.13-stdlib package
had different versions in testing and unstable, because an integration test
setup script made a change that would be reverted if that package was ever
upgraded in the testbed, and one of the integration tests accidentally
failed to disable system apt sources comprehensively enough while testing
the behaviour of the ansible.builtin.apt module. I fixed this in
Debian
and contributed the relevant part
upstream.
We've started working on enabling Python 3.14 as a supported version in
Debian. I fixed or helped to fix a number of packages for this:
pymongo (already fixed by Alexandre
Detiste, but after checking this I took the opportunity to simplify its
arrangements for disabling broken tests and to switch to autopkgtest-pkg-pybuild)
I packaged python-blockbuster and
python-pytokens, needed as new
dependencies of various other packages.
Santiago Vila filed a batch of
bugs
about packages that fail to build when using the nocheck build
profile, and I fixed several of
these (generally just a matter of adjusting build-dependencies):
I investigated a python-py build failure,
which turned out to have been fixed in Python 3.13.9.
I adopted zope.hookable and
zope.location for the Python team.
Following an IRC question, I ported linux-gpib-user to
pybuild-plugin-pyproject,
and added tests to make sure the resulting binary package layout is correct.
Rust packaging
Another Pydantic upgrade meant I had to upgrade a corresponding stack of
Rust packages to new upstream versions:
rust-idna
rust-jiter
rust-pyo3
rust-regex
rust-regex-automata
rust-speedate
rust-uuid
I also upgraded rust-archery and rust-rpds.
Other bits and pieces
I fixed a few bugs in other packages I maintain:
I investigated a malware report against
tini, which I think we can prove to be a
false positive (at least under the reasonable assumption that there isn't
malware hiding in libgcc or glibc). Yay for reproducible builds!
I noticed and fixed a small UI deficiency in
debbugs,
making the checkboxes under Misc options on package pages easier to hit.
This is merged but we haven t yet deployed it.
I noticed and fixed a typo in the "Being kind to porters" section of the
Debian Developer's Reference.
Code reviews
Welcome to the October 2025 report from the Reproducible Builds project!
Welcome to the very latest report from the Reproducible Builds project. Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.
In this report:
Farewell from the Reproducible Builds Summit 2025
Thank you to everyone who joined us at the Reproducible Builds Summit in Vienna, Austria!
We were thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. During this event, participants had the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim was to create an inclusive space that fosters collaboration, innovation and problem-solving.
The agenda of the three main days is available online; however, some working sessions may still lack notes at the time of publication.
One tangible outcome of the summit is that Johannes Starosta finished their rebuilderd tutorial, which is now available online and Johannes is actively seeking feedback.
Google's Play Store breaks reproducible builds for Signal
On the issue tracker for the popular Signal messenger app, developer Greyson Parrelli reports that updates to the Google Play store have, in effect, broken reproducible builds:
The most recent issues have to do with changes to the APKs that are made by the Play Store. Specifically, they add some attributes to some .xml files around languages and resources, which is not unexpected because of how the whole bundle system works. This is trickier to resolve, because unlike current expected differences (like signing information), we can't just exclude a whole file from the comparison. We have to take a more nuanced look at the diff. I've been hesitant to do that because it'll complicate our currently-very-readable comparison script, but I don't think there's any other reasonable option here.
kpcyrd forwarded a fascinating tidbit regarding so-called ninja and samurai build ordering, which uses data structures in which the pointer values returned from malloc determine some order of execution.
Arnout Engelen, Justin Cappos, Ludovic Courtès and kpcyrd continued a conversation started in September regarding the "Minimum Elements for a Software Bill of Materials". (Full thread)
Felix Moessbauer of Siemens posted to the list reporting that he had recently stumbled upon "a couple of Debian source packages on the snapshot mirrors that are listed multiple times (same name and version), but each time with a different checksum". The thread, which Felix titled "Debian: what precisely identifies a source package", is about precisely that: what can be axiomatically relied upon by consumers of the Debian archives, as well as indicating an issue where "we can't exactly say which packages were used during build time (even when having the .buildinfo files)".
Luca DiMaio posted to the list announcing the release of xfsprogs 6.17.0, which specifically includes a commit that implements the functionality to populate a newly created XFS filesystem directly from an existing directory structure. This "makes it easier to create populated filesystems without having to mount them [and thus is] particularly useful for reproducible builds". Luca asked the list how they might contribute to the docs of the System images page.
Reproducible Builds at the Transparency.dev summit
Holger Levsen gave a talk at this year's Transparency.dev summit in Gothenburg, Sweden, outlining the achievements of the Reproducible Builds project in the last 12 years, covering both upstream developments as well as some distribution-specific details. As mentioned on the talk's page, Holger's presentation concluded with "an outlook into the future and an invitation to collaborate to bring transparency logs into Reproducible Builds projects".
The slides of the talk are available, although a video has yet to be released. Nevertheless, as a result of the discussions at Transparency.dev there is a new page on the Debian wiki with the aim of describing a potential transparency log setup for Debian.
Supply Chain Security for Go
Andrew Ayer has set up a new service at sourcespotter.com that aims to monitor the supply chain security for Go releases. It consists of four separate trackers:
A tool to verify that the Go Module Mirror and Checksum Database is behaving honestly and has not presented inconsistent information to clients.
A module monitor that records every module version served by the Go Module Mirror and Checksum Database, allowing you to monitor for unexpected versions of your modules.
A tool that verifies that the Go toolchains published in the Go Module Mirror can be reproduced from source code, making it difficult to hide backdoors in the binaries downloaded by the go command.
A telemetry config tracker that tracks the names of telemetry counters uploaded by the Go toolchain, to ensure that Go telemetry is not violating users' privacy.
As the homepage of the service mentions, the trackers are free software and do not rely on Google infrastructure.
In March 2024, a sophisticated backdoor was discovered in xz, a core compression library in Linux distributions, covertly inserted over three years by a malicious maintainer, Jia Tan. The attack, which enabled remote code execution via ssh, was only uncovered by chance when Andres Freund investigated a minor performance issue. This incident highlights the vulnerability of the open-source supply chain and the effort attackers are willing to invest in gaining trust and access. In this article, I analyze the backdoor's mechanics and explore how bitwise build reproducibility could have helped detect it.
Although quantum computing is a rapidly evolving field of research, it can already benefit from adopting reproducible builds. This paper aims to bridge the gap between the quantum computing and reproducible builds communities. We propose a generalization of the definition of reproducible builds in the quantum setting, motivated by two threat models: one targeting the confidentiality of end users data during circuit preparation and submission to a quantum computer, and another compromising the integrity of quantum computation results. This work presents three examples that show how classical information can be hidden in transpiled quantum circuits, and two cases illustrating how even minimal modifications to these circuits can lead to incorrect quantum computation results.
The thesis focuses on providing a reproducible build process for two open-source E2EE messaging applications: Signal and Wire. The motivation to ensure reproducibility and thereby the integrity of E2EE messaging applications stems from their central role as essential tools for modern digital privacy. These applications provide confidentiality for private and sensitive communications, and their compromise could undermine encryption mechanisms, potentially leaking sensitive data to third parties.
Currently, there are numerous solutions and techniques available in the market to tackle supply chain security, and all claim to be the best solution. This thesis delves deeper by implementing those solutions and evaluates them for better understanding. Some of the tools that this thesis implemented are Syft, Trivy, Grype, FOSSA, dependency-check, and Gemnasium. Software dependencies are generated in a Software Bill of Materials (SBOM) format by using these open-source tools, and the corresponding results have been analyzed. Among these tools, Syft and Trivy outperform others as they provide relevant and accurate information on software dependencies.
In the wake of growing supply chain attacks, the FreeBSD developers are relying on a transparent build concept in the form of Zero-Trust Builds. The approach builds on the established Reproducible Builds, where binary files can be rebuilt bit-for-bit from the published source code. While reproducible builds primarily ensure verifiability, the zero-trust model goes a step further and removes trust from the build process itself. No single server, maintainer, or compiler can be considered more than potentially trustworthy.
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Bernhard Wiedemann and Zbigniew Jędrzejewski-Szmek extended ismypackagereproducibleyet.org with initial support for Fedora [].
In addition, a number of contributors added a series of notes from our recent summit to the website, including Alexander Couzens [], Robin Candau [][][][][][][][][] and kpcyrd [].
Tool development
diffoscope version 307 was uploaded to Debian unstable by Chris Lamb, who made a number of changes including fixing compatibility with LLVM version 21 [], and attempting to automatically deploy to PyPI by liaising with the PyPI developers/maintainers (an experimental feature). [] In addition, Vagrant Cascadian updated diffoscope in GNU Guix to version 307.
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
At the beginning of the month I uploaded a new version of the sway
package to Debian. This contains two
backported patches, one to fix reported WM capabilities and one to revert the
default behavior for
drag_lock
to disabled.
I also uploaded new releases of cage (a
kiosk for Wayland), labwc, the
window-stacking Wayland compositor that is inspired by Openbox, and
wf-recorder, a tool for creating
screen recordings of wlroots-based Wayland compositors.
If I don't forget, I try to update the watch file of the packages I touch to the
new version 5
format.
Simon Ser announced vali, a C library for
Varlink. The blog post also mentions that this will be
a dependency of the next version of the kanshi Wayland output management
daemon, and the PR to do so is now already merged. So I created "ITP: vali -- A
Varlink C implementation and code generator", packaged
the library and it is now waiting in NEW. In addition to libscfg this is now
the second dependency of kanshi that is in NEW.
On the Rust side of things I fixed a
bug in
carl. The fix introduces new date
properties which can be used to highlight a calendar date. I also updated
all the dependencies and plan to create a new release soon.
Later I dug up a Rust project that I started a couple of years ago, where I try
to use wasm-bindgen to
implement interactive web components. There is a lot I have to refactor in this
code base, but I will work on that and try to publish something in the next few
months.
Miscellaneous
Two weeks ago I wrote A plea for <dialog>,
which made the case for using standardized HTML elements instead of resorting
to JavaScript libraries.
I finally managed to update my shell server to Debian 13.
I created an issue for the nextcloud-news android
client because I moved
to a new phone and my starred articles did not show up in the news app, which
is a bit annoying.
I got my ticket for 39C3.
In my dayjob I continued to work on the refactoring of the import logic of our
apis-core-rdf app. I released version 0.56 which also introduced the
#snackbar as the container for the toast message, as described in the
<dialog> blog post. At the end of the month I released version 0.57
of apis-core-rdf, which got rid of the remaining leftovers of the old
import logic.
A couple of interesting articles I stumbled upon (or finally had the time to read):
Welcome to post 54 in the R4 series.
The topic of continuous integration has been a recurrent theme here
at the R4 series. Post #32
introduces r-ci,
while post #41
brings r2u to r-ci, but does not show a
matrix deployment. Post #45
describes the updated r-ci setup that is now
the default and contains a macOS and Ubuntu matrix, where the latter
relies on r2u to keep
things "fast, easy, reliable". Last but not least, the more recent post #52
shares a trick for ensuring coverage reports.
Following #45, r-ci has seen steady use at, for example,
GitHub Actions, with very reliable
performance. With the standard setup, a vanilla Ubuntu setup is changed
into one supported by r2u. This requires
downloading and installing a few Ubuntu packages, and has
generally been fairly quick, on the order of around forty
seconds. Now, the general variability of run-times for identical
tasks in GitHub Actions is well documented by the results of the
setup described in post #39
which still runs weekly. It runs the identical SQL query against a
remote backend using two different package families. And lo and behold,
the intra-method variability on unchanged code or setup, and
therefore due solely to system variability, is about as large as the
inter-method variability. In short, GitHub Actions performance varies
randomly with significant variability. See the repo README.md for a
chart that updates weekly (and see #39
for background).
Of late, this variability became more noticeable during standard
GitHub Actions runs where it would regularly take more than two minutes
of setup time before actual continuous integration work was done. Some
caching seems to be in effect, so subsequent runs in the same
repo seem faster and often came back to one minute or less. For
lightweight and small packages, losing two minutes to setup when the
actual test time is a minute or less gets old fast.
Looking around, we noticed that container use can be combined with
matrix use. So we have now been deploying the following setup (not
always over all the matrix elements though).
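In rough terms, the workflow looks like the following sketch (not a verbatim copy of the actual yaml files; the runner labels and the checkout step are standard GitHub Actions boilerplate, and rocker/r2u4ci is the CI-ready container mentioned below):
jobs:
  ci:
    strategy:
      matrix:
        include:
          - {os: macos-latest}
          - {os: ubuntu-latest}
          - {os: ubuntu-latest, container: "rocker/r2u4ci"}
    runs-on: ${{ matrix.os }}
    container: ${{ matrix.container }}   # empty, i.e. no container, for the first two entries
    steps:
      - uses: actions/checkout@v4
      # ... the usual r-ci bootstrap, dependency installation and check steps follow ...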
GitHub Actions is smart enough to provide NULL for
container in the two other cases, so
container: ${{ matrix.container }} is ignored there. But
when container is set, as here for the CI-enhanced version
of r2u (which adds a few binaries commonly needed for CI such as
git, curl, and wget), then the CI job runs inside the container, and thereby skips most
of the setup time as the container is already prepared.
This also required some small adjustments in the underlying shell
script doing the work. To not disrupt standard deployment, we placed
these into a release candidate / development version one can opt into
via a new variable dev_version.
Everything else remains the same and works as before. But faster as
much less time is spent on setup. You can see the actual full yaml file
and actions in my repositories for rcpparmadillo and
rcppmlpack-examples.
Additional testing would be welcome, so feel free to deploy this in your
actions now. Otherwise I will likely carry this over and make it the
default in a few weeks' time. It will still work as before, but when
the added container: line is used it will run much faster
thanks to rocker/r2u4ci being already set up for CI.
Ancestral Night is a far-future space opera novel and the first of
a series. It shares a universe with Bear's earlier
Jacob's Ladder trilogy, and there is a passing
reference to the events of Grail that
would be a spoiler if you put the pieces together, but it's easy to miss.
You do not need to read the earlier series to read this book (although
it's a good series and you might enjoy it).
Haimey Dz is a member of the vast interstellar federation called the
Synarche, which has put an end to war and other large-scale anti-social
behavior through a process called rightminding. Every person has a neural
implant that can serve as supplemental memory, off-load some thought
processes, and, crucially, regulate neurotransmitters and hormones to help
people stay on an even keel. It works, mostly.
One could argue Haimey is an exception. Raised in a clade that took
rightminding to an extreme of suppression of individual personality into a
sort of hive mind, she became involved with a terrorist during her legally
mandated time outside of her all-consuming family before she could make an
adult decision to stay with them (essentially a rumspringa). The
result was a tragedy that Halmey doesn't like to think about, one that's
left deep emotional scars. But Haimey herself would argue she's not an
exception: She's put her history behind her, found partners that she
trusts, and is a well-adjusted member of the Synarche.
Eventually, I realized that I was wasting my time, and if I wanted to
hide from humanity in a bottle, I was better off making it a titanium
one with a warp drive and a couple of carefully selected companions.
Haimey does salvage: finding ships lost in white space and retrieving
them. One of her partners is Connla, a pilot originally from a somewhat
atavistic world called Spartacus. The other is their salvage tug.
The boat didn't have a name.
He wasn't deemed significant enough to need a name by the
authorities and registries that govern such things. He had a
registration number 657-2929-04, Human/Terra and he had a class,
salvage tug, but he didn't have a name.
Officially.
We called him Singer. If Singer had an opinion on the issue,
he'd never registered it but he never complained. Singer was the
shipmind as well as the ship or at least, he inhabited the ship's
virtual spaces the same way we inhabited the physical ones but my
partner Connla and I didn't own him. You can't own a sentience in
civilized space.
As Ancestral Night opens, the three of them are investigating a tip
of a white space anomaly well off the beaten path. They thought it might
be a lost ship that failed a transition. What they find instead is a dead
Ativahika and a mysterious ship equipped with artificial gravity.
The Ativahikas are a presumed sentient race of living ships that are on
the most alien outskirts of the Synarche confederation. They don't
communicate, at least so far as Haimey is aware. She also wasn't aware
they died, but this one is thoroughly dead, next to an apparently
abandoned ship of unknown origin with a piece of technology beyond the
capabilities of the Synarche.
The three salvagers get very little time to absorb this scene before they
are attacked by pirates.
I have always liked Bear's science fiction better than her fantasy, and
this is no exception. This was great stuff. Haimey is a talkative,
opinionated infodumper, which is a great first-person protagonist to have
in a fictional universe this rich with delightful corners. There are some
Big Dumb Object vibes (one of my favorite parts of salvage stories), solid
character work, a mysterious past that has some satisfying heft once it's
revealed, and a whole lot more moral philosophy than I was expecting from
the setup. All of it is woven together with experienced skill,
unsurprising given Bear's long and prolific career. And it's full of
delightful world-building bits: Haimey's afthands (a surgical adaptation
for zero gravity work) and grumpiness at the sheer amount of
gravity she has to deal with over the course of this book, the
Culture-style ship names, and a faster-than-light travel system that of
course won't pass physics muster but provides a satisfying quantity of
hooky bits for plot to attach to.
The backbone of this book is an ancient artifact mystery crossed with a
murder investigation. Who killed the Ativahika? Where did the gravity
generator come from? Those are good questions with interesting answers.
But the heart of the book is a philosophical conflict: What are the
boundaries between identity and society? How much power should society
have to reshape who we are? If you deny parts of yourself to fit in with
society, is this necessarily a form of oppression?
I wrote a couple of paragraphs of elaboration, and then deleted them; on
further thought, I don't want to give any more details about what Bear is
doing in this book. I will only say that I was not expecting this level of
thoughtfulness about a notoriously complex and tricky philosophical topic
in a full-throated adventure science fiction novel. I think some people
may find the ending strange and disappointing. I loved it, and weeks after
finishing this book I'm still thinking about it.
Ancestral Night has some pacing problems. There is a long stretch
in the middle of the book that felt repetitive and strained, where Bear
holds the reader at a high level of alert and dread for long enough that I
found it enervating. There are also a few political cheap shots where Bear
picks the weakest form of an opposing argument instead of the strongest.
(Some of the cheap shots are rather satisfying, though.) The dramatic arc
of the book is... odd, in a way that I think was entirely intentional
given how well it works with the thematic message, but which is also
unsettling. You may not get the catharsis that you're expecting.
But all of this serves a purpose, and I thought that purpose was
interesting. Ancestral Night is one of those books that I
liked more a week after I finished it than I did when I finished it.
Epiphanies are wonderful. I'm really grateful that our brains do so
much processing outside the line of sight of our consciousnesses. Can
you imagine how downright boring thinking would be if you had to go
through all that stuff line by line?
Also, for once, I think Bear hit on exactly the right level of description
rather than leaving me trying to piece together clues and hope I
understood the plot. It helps that Haimey loves to explain things, so
there are a lot of miniature infodumps, but I found them interesting and a
satisfying throwback to an earlier style of science fiction that focused
more on world-building than on interpersonal drama. There is drama,
but most of it is internal, and I thought the balance was about right.
This is solid, well-crafted work and a good addition to the genre. I am
looking forward to the rest of the series.
Followed by Machine, which shifts to a different protagonist.
Rating: 8 out of 10
The discovery of a backdoor in XZ Utils in the spring of 2024 shocked the open source community, raising critical questions about software supply chain security. This post explores whether better Debian packaging practices could have detected this threat, offering a guide to auditing packages and suggesting future improvements.
The XZ backdoor in versions 5.6.0/5.6.1 briefly made its way into many major Linux distributions such as Debian and Fedora, but luckily didn't reach that many actual users, as the backdoored releases were quickly removed thanks to the heroic diligence of Andres Freund. We are all extremely lucky that he detected a half-second performance regression in SSH, cared enough to trace it down, discovered malicious code in the XZ library loaded by SSH, and reported it promptly to various security teams for quick coordinated actions.
This episode makes software engineers ponder the following questions:
Why didn't any Linux distro packagers notice anything odd when importing the new XZ version 5.6.0/5.6.1 from upstream?
Is the current software supply-chain in the most popular Linux distros easy to audit?
Could we have similar backdoors lurking that haven t been detected yet?
As a Debian Developer, I decided to audit the xz package in Debian, share my methodology and findings in this post, and also suggest some improvements on how the software supply-chain security could be tightened in Debian specifically.
Note that the scope here is only to inspect how Debian imports software from its upstreams, and how it is distributed to Debian's users. This excludes the whole story of how to assess whether an upstream project is following software development security best practices. This post doesn't discuss how to operate an individual computer running Debian to ensure it remains untampered with, as there are plenty of guides on that already.
Downloading Debian and upstream source packages
Let's start by working backwards from what the Debian package repositories offer for download. As auditing binaries is extremely complicated, we skip that and assume the Debian build hosts are trustworthy and reliably build binaries from the source packages, so the focus should be on auditing the source packages.
As with everything in Debian, there are multiple tools and ways to do the same thing, but in this post only one (and hopefully the best) way to do something is presented for brevity.
The first step is to download the latest version and some past versions of the package from the Debian archive, which is easiest done with debsnap. The following command will download all Debian source packages of xz-utils from Debian release 5.2.4-1 onwards:
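Something along these lines (a sketch; the --destdir and --first options are from the debsnap manual, and the destination directory matches the source-xz-utils path used later in this post):
$ debsnap --verbose --destdir source-xz-utils --first 5.2.4-1 xz-utils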
Verifying authenticity of upstream and Debian sources using OpenPGP signatures
As seen in the output of debsnap, it already automatically verifies that the downloaded files match the OpenPGP signatures. To have full clarity on what files were authenticated with what keys, we should verify the Debian packager's signature with:
$ gpg --verify --auto-key-retrieve --keyserver hkps://keyring.debian.org xz-utils_5.8.1-2.dsc
gpg: Signature made Fri Oct 3 22:04:44 2025 UTC
gpg: using RSA key 57892E705233051337F6FDD105641F175712FA5B
gpg: requesting key 05641F175712FA5B from hkps://keyring.debian.org
gpg: key 7B96E8162A8CF5D1: public key "Sebastian Andrzej Siewior" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: Good signature from "Sebastian Andrzej Siewior" [unknown]
gpg: aka "Sebastian Andrzej Siewior <bigeasy@linutronix.de>" [unknown]
gpg: aka "Sebastian Andrzej Siewior <sebastian@breakpoint.cc>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 6425 4695 FFF0 AA44 66CC 19E6 7B96 E816 2A8C F5D1
Subkey fingerprint: 5789 2E70 5233 0513 37F6 FDD1 0564 1F17 5712 FA5B
The upstream tarball signature (if available) can be verified with:
$ gpg --verify --auto-key-retrieve xz-utils_5.8.1.orig.tar.xz.asc
gpg: assuming signed data in 'xz-utils_5.8.1.orig.tar.xz'
gpg: Signature made Thu Apr 3 11:38:23 2025 UTC
gpg: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpg: key 38EE757D69184620: public key "Lasse Collin <lasse.collin@tukaani.org>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: Good signature from "Lasse Collin <lasse.collin@tukaani.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 3690 C240 CE51 B467 0D30 AD1C 38EE 757D 6918 4620
Note that this only proves that there is a key that created a valid signature for this content. The authenticity of the keys themselves needs to be validated separately before trusting that they in fact are the keys of these people. That can be done by checking e.g. the upstream website for the key fingerprints they published, or the Debian keyring for Debian Developers and Maintainers, or by relying on the OpenPGP "web of trust".
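For keys of Debian uploaders, one quick check is against the keyring shipped in the debian-keyring package (a sketch; the fingerprint is the primary key seen in the gpg output above, and the keyring path is the one installed by that package):
$ sudo apt install debian-keyring
$ gpg --no-default-keyring --keyring /usr/share/keyrings/debian-keyring.gpg \
    --list-keys 64254695FFF0AA4466CC19E67B96E8162A8CF5D1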
Verifying authenticity of upstream sources by comparing checksums
In case the upstream in question does not publish release signatures, the second best way to verify the authenticity of the sources used in Debian is to download the sources directly from upstream and compare that the sha256 checksums match.
This should be done using the debian/watch file inside the Debian packaging, which defines where the upstream source is downloaded from. Continuing the example above, we can unpack the latest Debian sources, enter the directory, and then run uscan to download:
$ tar xvf xz-utils_5.8.1-2.debian.tar.xz
...
debian/rules
debian/source/format
debian/source.lintian-overrides
debian/symbols
debian/tests/control
debian/tests/testsuite
debian/upstream/signing-key.asc
debian/watch
...
$ uscan --download-current-version --destdir /tmp
Newest version of xz-utils on remote site is 5.8.1, specified download version is 5.8.1
gpgv: Signature made Thu Apr 3 11:38:23 2025 UTC
gpgv: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpgv: Good signature from "Lasse Collin <lasse.collin@tukaani.org>"
Successfully symlinked /tmp/xz-5.8.1.tar.xz to /tmp/xz-utils_5.8.1.orig.tar.xz.
The original files downloaded from upstream are now in /tmp along with the files renamed to follow Debian conventions. Using everything downloaded so far the sha256 checksums can be compared across the files and also to what the .dsc file advertised:
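One way to do the comparison (a sketch; the paths follow the debsnap and uscan steps above, and the actual checksum values are left out here):
$ sha256sum /tmp/xz-5.8.1.tar.xz /tmp/xz-utils_5.8.1.orig.tar.xz source-xz-utils/xz-utils_5.8.1.orig.tar.xz
$ grep -A3 Checksums-Sha256 source-xz-utils/xz-utils_5.8.1-2.dsc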
In the example above the checksum 0b54f79df85... is the same across the files, so it is a match.
Repackaged upstream sources can't be verified as easily
Note that uscan may in rare cases repackage some upstream sources, for example to exclude files that don't adhere to Debian's copyright and licensing requirements. Those files and paths would be listed under the Files-Excluded section in the debian/copyright file. There are also other situations where the file that represents the upstream sources in Debian isn't bit-by-bit the same as what upstream published. If checksums don't match, an experienced Debian Developer should review all package settings (e.g. debian/source/options) to see if there was a valid and intentional reason for divergence.
Reviewing changes between two source packages using diffoscope
Diffoscope is an incredibly capable and handy tool to compare arbitrary files. For example, to view a report in HTML format of the differences between two XZ releases, run:
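A sketch of such an invocation (file names as downloaded with debsnap above; the output file name is arbitrary):
$ diffoscope --html xz-5.8.0-vs-5.8.1.html xz-utils_5.8.0.orig.tar.xz xz-utils_5.8.1.orig.tar.xz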
If the changes are extensive, and you want to use a LLM to help spot potential security issues, generate the report of both the upstream and Debian packaging differences in Markdown with:
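Again only a sketch, under the same file-layout assumptions; diffoscope's --markdown presenter writes the report to the given file:
$ diffoscope --markdown xz-upstream-diff.md xz-utils_5.8.0.orig.tar.xz xz-utils_5.8.1.orig.tar.xz
$ diffoscope --markdown xz-debian-diff.md xz-utils_5.8.0-1.debian.tar.xz xz-utils_5.8.1-2.debian.tar.xz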
The Markdown files created above can then be passed to your favorite LLM, along with a prompt such as:
Based on the attached diffoscope output for a new Debian package version compared with the previous one, list all suspicious changes that might have introduced a backdoor, followed by other potential security issues. If there are none, list a short summary of changes as the conclusion.
Reviewing Debian source packages in version control
As of today only 93% of all Debian source packages are tracked in git on Debian's GitLab instance at salsa.debian.org. Some key packages such as Coreutils and Bash are not using version control at all, as their maintainers apparently don't see value in using git for Debian packaging, and the Debian Policy does not require it. Thus, the only reliable and consistent way to audit changes in Debian packages is to compare the full versions from the archive as shown above.
However, for packages that are hosted on Salsa, one can view the git history to gain additional insight into what exactly changed, when and why. For packages that are using version control, their location can be found in the Vcs-Git field in the debian/control file. For xz-utils the location is salsa.debian.org/debian/xz-utils.
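For example, the field can be read straight from the unpacked packaging (a quick sketch, assuming the debian/ directory extracted earlier is in the current directory):
$ grep '^Vcs-' debian/control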
Note that the Debian policy does not state anything about how Salsa should be used, or what git repository layout or development practices to follow. In practice most packages follow the DEP-14 proposal, and use git-buildpackage as the tool for managing changes and pushing and pulling them between upstream and salsa.debian.org.
To get the XZ Utils source, run:
$ gbp clone https://salsa.debian.org/debian/xz-utils.git
gbp:info: Cloning from 'https://salsa.debian.org/debian/xz-utils.git'
At the time of writing this post the git history shows:
$ git log --graph --oneline
* bb787585 (HEAD -> debian/unstable, origin/debian/unstable, origin/HEAD) Prepare 5.8.1-2
* 4b769547 d: Remove the symlinks from -dev package.
* a39f3428 Correct the nocheck build profile
* 1b806b8d Import Debian changes 5.8.1-1.1
* b1cad34b Prepare 5.8.1-1
* a8646015 Import 5.8.1
* 2808ec2d Update upstream source from tag 'upstream/5.8.1'
\
* fa1e8796 (origin/upstream/v5.8, upstream/v5.8) New upstream version 5.8.1
* a522a226 Bump version and soname for 5.8.1
* 1c462c2a Add NEWS for 5.8.1
* 513cabcf Tests: Call lzma_code() in smaller chunks in fuzz_common.h
* 48440e24 Tests: Add a fuzzing target for the multithreaded .xz decoder
* 0c80045a liblzma: mt dec: Fix lack of parallelization in single-shot decoding
* 81880488 liblzma: mt dec: Don't modify thr->in_size in the worker thread
* d5a2ffe4 liblzma: mt dec: Don't free the input buffer too early (CVE-2025-31115)
* c0c83596 liblzma: mt dec: Simplify by removing the THR_STOP state
* 831b55b9 liblzma: mt dec: Fix a comment
* b9d168ee liblzma: Add assertions to lzma_bufcpy()
* c8e0a489 DOS: Update Makefile to fix the build
* 307c02ed sysdefs.h: Avoid <stdalign.h> even with C11 compilers
* 7ce38b31 Update THANKS
* 688e51bd Translations: Update the Croatian translation
* a6b54dde Prepare 5.8.0-1.
* 77d9470f Add 5.8 symbols.
* 9268eb66 Import 5.8.0
* 6f85ef4f Update upstream source from tag 'upstream/5.8.0'
\ \
* afba662b New upstream version 5.8.0
/
* 173fb5c6 doc/SHA256SUMS: Add 5.8.0
* db9258e8 Bump version and soname for 5.8.0
* bfb752a3 Add NEWS for 5.8.0
* 6ccbb904 Translations: Run "make -C po update-po"
* 891a5f05 Translations: Run po4a/update-po
* 4f52e738 Translations: Partially fix overtranslation in Serbian man pages
* ff5d9447 liblzma: Count the extra bytes in LZMA/LZMA2 decoder memory usage
* 943b012d liblzma: Use SSE2 intrinsics instead of memcpy() in dict_repeat()
This shows both the changes on the debian/unstable branch as well as the intermediate upstream import branch, and the actual real upstream development branch. See my Debian source packages in git explainer for details of what these branches are used for.
To only view changes on the Debian branch, run git log --graph --oneline --first-parent or git log --graph --oneline -- debian.
The Debian branch should only have changes inside the debian/ subdirectory, which is easy to check with:
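One way to check this is to diff the packaging branch against the imported upstream while excluding debian/ (a sketch using the branch names visible in the git log above; the command should print nothing if the statement holds):
$ git diff --stat upstream/v5.8 debian/unstable -- . ':!debian'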
If the upstream in question signs commits or tags, they can be verified with e.g.:
$ git verify-tag v5.6.2
gpg: Signature made Wed 29 May 2024 09:39:42 AM PDT
gpg: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpg: issuer "lasse.collin@tukaani.org"
gpg: Good signature from "Lasse Collin <lasse.collin@tukaani.org>" [expired]
gpg: Note: This key has expired!
The main benefit of reviewing changes in git is the ability to see detailed information about each individual change, instead of just staring at a massive list of changes without any explanations. In this example, to view all the upstream commits since the previous import to Debian, one would view the commit range from afba662b "New upstream version 5.8.0" to fa1e8796 "New upstream version 5.8.1" with git log --reverse -p afba662b...fa1e8796. However, a far superior way to review changes would be to browse this range using a visual git history viewer, such as gitk. Either way, looking at one code change at a time and reading the git commit message makes the review much easier.
Comparing Debian source packages to git contents
As stated in the beginning of the previous section, and worth repeating, there is no guarantee that the contents in the Debian packaging git repository matches what was actually uploaded to Debian. While the tag2upload project in Debian is getting more and more popular, Debian is still far from having any system to enforce that the git repository would be in sync with the Debian archive contents.
To detect such differences we can run diff across the Debian source packages downloaded with debsnap earlier (path source-xz-utils/xz-utils_5.8.1-2.debian) and the git repository cloned in the previous section (path xz-utils):
$ diff -u source-xz-utils/xz-utils_5.8.1-2.debian/ xz-utils/debian/
diff -u source-xz-utils/xz-utils_5.8.1-2.debian/changelog xz-utils/debian/changelog
--- debsnap/source-xz-utils/xz-utils_5.8.1-2.debian/changelog 2025-10-03 09:32:16.000000000 -0700
+++ xz-utils/debian/changelog 2025-10-12 12:18:04.623054758 -0700
@@ -5,7 +5,7 @@
* Remove the symlinks from -dev, pointing to the lib package.
(Closes: #1109354)
- -- Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Fri, 03 Oct 2025 18:32:16 +0200
+ -- Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Fri, 03 Oct 2025 18:36:59 +0200
In the case above, diff revealed that the changelog timestamp in the version uploaded to Debian differs from what was committed to git. This is not malicious, just a mistake by the maintainer, who probably didn't run gbp tag immediately after the upload but instead ran some dch command later, ending up with a different timestamp in git compared to what was actually uploaded to Debian.
Creating synthetic Debian packaging git repositories
If no Debian packaging git repository exists, or if it is lagging behind what was uploaded to Debian's archive, one can use git-buildpackage's import-dscs feature to create synthetic git commits based on the files downloaded by debsnap, ensuring the git contents fully match what was uploaded to the archive. To import a single version there is gbp import-dsc (no 's' at the end), of which an example invocation would be:
$ gbp import-dsc --verbose ../source-xz-utils/xz-utils_5.8.1-2.dsc
Version '5.8.1-2' imported under '/home/otto/debian/xz-utils-2025-09-29'
Example commit history from a repository with commits added with gbp import-dsc:
An online example repository with only a few missing uploads added using gbp import-dsc can be viewed at salsa.debian.org/otto/xz-utils-2025-09-29/-/network/debian%2Funstable
An example repository that was fully crafted using gbp import-dscs can be viewed at salsa.debian.org/otto/xz-utils-gbp-import-dscs-debsnap-generated/-/network/debian%2Flatest.
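To instead build the full history in one go, gbp import-dscs can also drive debsnap itself; a minimal sketch (any additional options omitted) would be:

$ gbp import-dscs --debsnap xz-utils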
There is also dgit, which in a similar way creates a synthetic git history to allow viewing the Debian archive contents via git tools. However, its focus is on producing new package versions, so fetching a package with dgit that has not had its history recorded in dgit earlier will only show the latest version:
$ dgit clone xz-utils
canonical suite name for unstable is sid
starting new git history
last upload to archive: NO git hash
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1.orig.tar.xz...
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1.orig.tar.xz.asc...
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1-2.debian.tar.xz...
dpkg-source: info: extracting xz-utils in unpacked
dpkg-source: info: unpacking xz-utils_5.8.1.orig.tar.xz
dpkg-source: info: unpacking xz-utils_5.8.1-2.debian.tar.xz
synthesised git commit from .dsc 5.8.1-2
HEAD is now at f9bcaf7 xz-utils (5.8.1-2) unstable; urgency=medium
dgit ok: ready for work in xz-utils
$ dgit/sid git log --graph --oneline
* f9bcaf7 xz-utils (5.8.1-2) unstable; urgency=medium 9 days ago (HEAD -> dgit/sid, dgit/dgit/sid)
\
* 11d3a62 Import xz-utils_5.8.1-2.debian.tar.xz 9 days ago
* 15dcd95 Import xz-utils_5.8.1.orig.tar.xz 6 months ago
Unlike git-buildpackage managed git repositories, the dgit managed repositories cannot incorporate the upstream git history and are thus less useful for auditing the full software supply-chain in git.
Comparing upstream source packages to git contents
Equally important to the note at the beginning of the previous section, one must also keep in mind that the upstream release source packages, often called release tarballs, are not guaranteed to have the exact same contents as the upstream git repository. Projects might strip test data or extra development files from their release tarballs to avoid shipping unnecessary files to users, or they might add documentation files or versioning information into the tarball that isn't stored in git. While a small minority, there are also upstreams that don't use git at all, so the plain files in a release tarball are still the lowest common denominator for all open source software projects, and exporting and importing source code needs to interface with them.
In the case of XZ, the release tarball has additional version info and also a sizeable amount of pregenerated compiler configuration files. Detecting and comparing differences between git contents and tarballs can of course be done manually by running diff across an unpacked tarball and a checked-out git repository. If using git-buildpackage, the difference between the git contents and the tarball contents can be made visible directly in the import commit.
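The manual route could look roughly like this (a sketch; the upstream clone URL and the tarball name are assumptions, not taken from the text above):

$ tar xf xz-5.8.1.tar.xz                      # unpack the upstream release tarball
$ git clone https://git.tukaani.org/xz.git    # clone the upstream repository
$ git -C xz checkout v5.8.1                   # check out the matching release tag
$ diff -r --exclude=.git xz xz-5.8.1 | less   # compare the two trees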
In this XZ example, consider this git history:
* b1cad34b Prepare 5.8.1-1
* a8646015 Import 5.8.1
* 2808ec2d Update upstream source from tag 'upstream/5.8.1'
\
* fa1e8796 (debian/upstream/v5.8, upstream/v5.8) New upstream version 5.8.1
* a522a226 (tag: v5.8.1) Bump version and soname for 5.8.1
* 1c462c2a Add NEWS for 5.8.1
The commit a522a226 was the upstream release commit, which upstream also tagged v5.8.1. The merge commit 2808ec2d applied the new upstream import branch contents on the Debian branch. Between these is the special commit fa1e8796 New upstream version 5.8.1, tagged upstream/v5.8. This commit and tag exist only in the Debian packaging repository, and they show what content was imported into Debian. This is generated automatically by git-buildpackage when running gbp import-orig --uscan for Debian packages with the correct settings in debian/gbp.conf. By viewing this commit one can see exactly how the upstream release tarball differs from the upstream git contents (if at all).
In the case of XZ, the difference is substantial, and shown below in full as it is very interesting:
To be able to easily inspect exactly what changed in the release tarball compared to git release tag contents, the best tool for the job is Meld, invoked via git difftool --dir-diff fa1e8796^..fa1e8796.
To compare changes across the new and old upstream tarball, one would need to compare commits afba662b New upstream version 5.8.0 and fa1e8796 New upstream version 5.8.1 by running git difftool --dir-diff afba662b..fa1e8796.
With all the above tips you can now go and try to audit your own favorite package in Debian and see if it is identical with upstream, and if not, how it differs.
Should the XZ backdoor have been detected using these tools?
The famous XZ Utils backdoor (CVE-2024-3094) consisted of two parts: the actual backdoor inside two binary blobs masqueraded as test files (tests/files/bad-3-corrupt_lzma2.xz, tests/files/good-large_compressed.lzma), and a small modification in the build scripts (m4/build-to-host.m4) to extract the backdoor and plant it into the built binary. The build script was not tracked in version control, but generated with GNU Autotools at release time and only shipped as additional files in the release tarball.
The entire reason for me to write this post was to ponder whether a diligent engineer using git-buildpackage best practices could have reasonably spotted this while importing the new upstream release into Debian. The short answer is no. The malicious actor here clearly anticipated all the typical ways anyone might inspect both git commits and release tarball contents, and masqueraded the changes very well and over a long timespan.
First of all, XZ has, for legitimate reasons, several carefully crafted .xz files as test data to help catch regressions in the decompression code path. The test files are shipped in the release so users can run the test suite and validate that the binary is built correctly and xz works properly. Debian famously runs massive amounts of testing in its CI and autopkgtest system across tens of thousands of packages to uphold high quality despite frequent upgrades of the build toolchain and while supporting more CPU architectures than any other distro. Test data is useful and should stay.
When git-buildpackage is used correctly, the upstream commits are visible in the Debian packaging for easy review, but the commit cf44e4b that introduced the test files does not deviate enough from regular sloppy coding practices to really stand out. It is unfortunately very common for git commits to lack a message body explaining why the change was done, to not be properly atomic with test code and test data together in the same commit, and to be pushed directly to mainline without code review (the commit was not part of any PR in this case). Only another upstream developer could have spotted that this change is not on par with what the project expects, and that the test code was never added, only test data, and thus that this commit was not just a sloppy one but potentially malicious.
Secondly, the fact that a new Autotools file (m4/build-to-host.m4) appeared in XZ Utils 5.6.0 is not suspicious. This is perfectly normal for Autotools. In fact, starting from version 5.8.1, XZ Utils ships an m4/build-to-host.m4 file that it actually uses.
Spotting that there is anything fishy is practically impossible by simply reading the code, as Autotools files are full of custom m4 syntax interwoven with shell script, and there are plenty of backticks (`) that spawn subshells and evals that execute variable contents further, which is just normal for Autotools. Russ Cox's XZ post explains how exactly the Autotools code fetched the actual backdoor from the test files and injected it into the build.
There is only one tiny thing that maybe a very experienced Autotools user could potentially have noticed: the serial 30 in the version header is way too high. In theory one could also have noticed that this Autotools file deviates from what other packages in Debian ship with the same filename, such as the serial 3, serial 5a or 5b versions. That would however require an insane amount of extra checking work, and is not something we should plan to start doing. A much simpler solution would be to strongly recommend that all open source projects stop using Autotools, to eventually get rid of it entirely.
Not detectable with reasonable effort
While planting backdoors is evil, it is hard not to feel some respect for the level of skill and dedication of the people behind this. I've been involved in a bunch of security breach investigations during my IT career, and never have I seen anything this well executed.
If it hadn't slowed down SSH by ~500 milliseconds and been discovered because of that, it would most likely have stayed undetected for months or years. Hiding backdoors in closed source software is relatively trivial, but hiding backdoors in plain sight in a popular open source project requires an unusual amount of expertise and creativity, as shown above.
Is the software supply-chain in Debian easy to audit?
While maintaining a Debian package source using git-buildpackage can make the package history a lot easier to inspect, most packages have incomplete configurations in their debian/gbp.conf, and thus their package development histories are not always correctly constructed, uniform, or easy to compare. The Debian Policy does not mandate git usage, and there are many important packages that do not use git at all. Additionally, the Debian Policy also allows non-maintainers to upload new versions to Debian without committing anything to git, even for packages where the original maintainer wanted to use git. Uploads that bypass git unfortunately happen surprisingly often.
Because of the situation, I am afraid that we could have multiple similar backdoors lurking that simply haven't been detected yet. More audits, which hopefully would also get published openly, would be welcome! More people auditing the contents of the Debian archives would probably also help surface what tools and policies Debian might be missing to make the work easier, and thus help improve the security of Debian's users, and improve trust in Debian.
Is Debian currently missing some software that could help detect similar things?
To my knowledge there is currently no system in place as part of Debian's QA or security infrastructure to verify that the upstream source packages in Debian are actually from upstream. I've come across a lot of packages where the debian/watch or other configs are incorrect, and even cases where maintainers have manually created upstream tarballs because it was easier than getting the automation to work. It is obvious that for those packages the source tarball now in Debian is not at all the same as upstream. I am not aware of any malicious cases though (if I were, I would report them, of course).
I am also aware of packages in the Debian repository that are misconfigured as type 1.0 (native) packages, mixing the upstream files and debian/ contents and having patches applied, while they actually should be configured as 3.0 (quilt) and not hide what the true upstream sources are. Debian should extend its QA tools to scan for such things. If I find a sponsor, I might build it myself as my next major contribution to Debian.
In addition to better tooling for finding mismatches in the source code, Debian could also have better tooling for tracking which source files went into the built binaries, but solutions like Fraunhofer-AISEC's supply-graph or Sony's ESSTRA are not practical yet. Julien Malka's post about NixOS discusses the role of reproducible builds, which may help in some cases across all distros.
Or, is Debian missing some policies or practices to mitigate this?
Perhaps more importantly than more security scanning, the Debian Developer community should shift its general mindset from "anyone is free to do anything" to valuing shared workflows. The ability to audit anything is severely hampered by the fact that there are so many ways to do the same thing, and distinguishing a normal deviation from a malicious deviation is too hard when the normal can be almost anything.
Also, as there is no documented and recommended default workflow, both newcomers and long-time Debian packagers might never learn any single optimal workflow, and end up doing many steps in the packaging process in a way that kind of works but is actually wrong or unnecessary. This causes process deviations that look malicious, but turn out to just be the result of not fully understanding what the right way to do something would have been.
In the long run, once individual developers' workflows are more aligned, doing code reviews will become a lot easier and smoother as the excess noise of workflow differences diminishes, and reviews will feel much more productive to all participants. Debian fostering a culture of code reviews would allow us to slowly move from the current practice of mainly solo packaging work towards true collaboration forming around those code reviews.
I have been promoting increased use of Merge Requests in Debian already for some time, for example by proposing DEP-18: Encourage Continuous Integration and Merge Request based Collaboration for Debian packages. If you are involved in Debian development, please give a thumbs up in dep-team/deps!21 if you want me to continue promoting it.
Can we trust open source software?
Yes, and I would argue that we can only trust open source software. There is no way to audit closed source software, and anyone using e.g. Windows or macOS simply has to trust the vendor's word that there are no intentional or accidental backdoors in their software. Or, when the news gets out that the systems of a closed source vendor were compromised, like CrowdStrike some weeks ago, we can't audit anything, and time after time we simply need to take their word for it when they say they have properly cleaned up their code base.
In theory, a vendor could give some kind of contractual or financial guarantee to its customers that there are no preventable security issues, but in practice that never happens. I am not aware of a single case where e.g. Microsoft or Oracle paid damages to their customers after a security flaw was found in their software. In theory you could also pay a vendor more to have them focus more effort on security, but since there is no way to verify what they did, or to get compensation when they didn't, any increased fees are likely just pocketed as increased profit.
Open source is clearly better overall. You can, if you are an individual with the time and skills, audit every step in the supply-chain, or you could as an organization make investments in open source security improvements and actually verify what changes were made and how security improved.
If your organisation is using Debian (or derivatives, such as Ubuntu) and you are interested in sponsoring my work to improve Debian, please reach out.
Avoiding 5XX errors by adjusting Load Balancer Idle Timeout
Recently I faced a problem in production where a client was running a
RabbitMQ server behind the Load Balancers we provisioned and the TCP
connections were closed every minute.
My team is responsible for the LBaaS (Load Balancer as a Service)
product and this Load Balancer was an Envoy proxy provisioned by our
control plane.
The error was similar to this:
At first glance, the issue is simple: the Load Balancer's idle
timeout is shorter than the RabbitMQ heartbeat interval.
The idle timeout is the time at which a downstream or upstream
connection will be terminated if there are no active streams. Heartbeats
generate periodic network traffic to prevent idle TCP connections from
closing prematurely.
Adjusting these timeout settings to align properly solved the
issue.
However, what I want to explore in this post are other, similar scenarios where it's not so obvious that the idle timeout is the problem. Adding an extra network layer, such as an Envoy proxy, can introduce unpredictable behavior across your services, like intermittent 5XX errors.
To make this issue more concrete, let's look at a minimal,
reproducible setup that demonstrates how adding an Envoy proxy can lead
to sporadic errors.
Reproducible setup
I'll be using the following tools: Envoy as the load balancer, a small Go HTTP server as the backend, and oha to generate load.
I'll be running experiments with two different
envoy.yaml configurations: one that uses Envoy's TCP proxy,
and another that uses Envoy's HTTP connection manager.
Here's the simplest Envoy TCP proxy setup: a listener on port 8000
forwarding traffic to a backend running on port 8080.
The default idle timeout if not otherwise specified is 1 hour, which
is the case here.
The backend setup is simple as well:
package main

import (
	"fmt"
	"net/http"
	"time"
)

func helloHandler(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("Hello from Go!"))
}

func main() {
	http.HandleFunc("/", helloHandler)
	server := http.Server{
		Addr:        ":8080",
		IdleTimeout: 3 * time.Second,
	}
	fmt.Println("Starting server on :8080")
	panic(server.ListenAndServe())
}
The IdleTimeout is set to 3 seconds to make it easier to test.
Now, oha is the perfect tool to generate the HTTP
requests for this test. The Load test is not meant to stress this setup,
the idea is to wait long enough so that some requests are closed. The
burst-delay feature will help with that:
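A plausible invocation (a sketch; flag names taken from oha's documentation, and the URL is the Envoy listener from the setup above) looks like this:

$ oha -z 30s -w --burst-delay 3s --burst-rate 100 http://localhost:8000/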
I'm running the Load test for 30 seconds, sending 100 requests at
three-second intervals. I also use the -w option to wait
for ongoing requests when the duration is reached. The result looks like
this:
We had 886 responses with status code 200 and 64 connections closed.
The backend terminated 64 connections while the load balancer still had
active requests directed to it.
Let's change the Load Balancer idle_timeout to two
seconds.
filter_chains:
- filters:
  - name: envoy.filters.network.tcp_proxy
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
      stat_prefix: go_server_tcp
      cluster: go_server_cluster
      idle_timeout: 2s # <--- NEW LINE
Run the same test again.
Great! Now all the requests worked.
This is a common issue, not specific to Envoy Proxy or the setup
shown earlier. Major cloud providers have all documented it.
The AWS troubleshooting guide for Application Load Balancers says this:
The target closed the connection with a TCP RST or a TCP FIN while the load balancer had an outstanding request to the target. Check whether the keep-alive duration of the target is shorter than the idle timeout value of the load balancer.
The Google troubleshooting guide for Application Load Balancers mentions this as well:
Verify that the keepalive configuration parameter for the HTTP server software running on the backend instance is not less than the keepalive timeout of the load balancer, whose value is fixed at 10 minutes (600 seconds) and is not configurable. The load balancer generates an HTTP 5XX response code when the connection to the backend has unexpectedly closed while sending the HTTP request or before the complete HTTP response has been received. This can happen because the keepalive configuration parameter for the web server software running on the backend instance is less than the fixed keepalive timeout of the load balancer. Ensure that the keepalive timeout configuration for HTTP server software on each backend is set to slightly greater than 10 minutes (the recommended value is 620 seconds).
The RabbitMQ docs also warn about this:
Certain networking tools (HAproxy, AWS ELB) and equipment (hardware load balancers) may terminate "idle" TCP connections when there is no activity on them for a certain period of time. Most of the time it is not desirable.
Most of them are talking about Application Load Balancers and the
test I did was using a Network Load Balancer. For the sake of
completeness, I will do the same test but using Envoy's HTTP connection
manager.
The updated envoy.yaml:
The yaml above is an example of a service proxying HTTP from
0.0.0.0:8000 to 0.0.0.0:8080. The only difference from a minimal
configuration is that I enabled access logs.
Let's run the same tests with oha.
Even though the success rate is 100%, the status code distribution shows some responses with status code 503. This is a case where it is not that obvious that the problem is related to the idle timeout.
However, it's clear when we look at the Envoy access logs:
UC is the short name for
UpstreamConnectionTermination. This means the upstream,
which is the golang server, terminated the connection.
To fix this once again, the Load Balancer idle timeout needs to
change:
clusters:
- name: go_server_cluster
  type: STATIC
  typed_extension_protocol_options: # <--- NEW BLOCK
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      common_http_protocol_options:
        idle_timeout: 2s # <--- NEW VALUE
      explicit_http_config:
        http_protocol_options:
Finally, the sporadic 503 errors are over:
To Sum Up
Here's an example of the values my team recommends to our
clients:
Key Takeaways:
The Load Balancer idle timeout should be less than the backend
(upstream) idle/keepalive timeout.
When we are working with long-lived connections, the client (downstream) should use a keepalive smaller than the LB idle timeout (see the illustrative values below).
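As a purely hypothetical illustration of that ordering (these numbers are made up, not an actual recommendation):

client (downstream) keepalive       50s
load balancer idle timeout          60s
backend (upstream) idle/keepalive   75s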
Armadillo is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments. RcppArmadillo
integrates this library with the R environment and language and is
widely used by (currently) 1273 other packages on CRAN, downloaded 41.8 million
times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint
/ vignette) by Conrad and myself has been cited 651 times according
to Google Scholar.
Armadillo 15 brought changes. We mentioned these in the 15.0.1-1 and 15.0.2-1 release blog posts:
Minimum C++ standard of C++14
No more suppression of deprecation notes
(The second point is a consequence of the first. Prior to C++14, deprecation notes were issued via a macro, and the macro was set up by Conrad in the common way of allowing an override, which we took advantage of in RcppArmadillo, effectively shielding downstream packages. In C++14 this is now an attribute, and those cannot be suppressed.)
We tested this then-upcoming change extensively: thirteen reverse dependency runs exploring different settings, leading to the current package setup where an automatic fallback to the last Armadillo 14 release covers hardwired C++11 use, and Armadillo 15 is used otherwise.
Given the 1200+ reverse dependencies, this took considerable time. All this was also quite extensively discussed with CRAN (especially Kurt Hornik) and documented / controlled via a series of issue tickets starting with overall issue #475 covering the subissues:
open issue
#475 describes the version selection between Armadillo 14 and 15 via
#define
open issue
#476 illustrates how packages without deprecation notes are already suitable for Armadillo 15 and C++14
open issue
#477 demonstrates how a package with a simple deprecation note can
be adjusted for Armadillo 15 and C++14
closed issue
#479 documents a small bug we created in the initial transition
package RcppArmadillo 15.0.1-1 and fixed in the 15.0.2-1
closed issue
#481 discusses the check for insufficient LAPACK routines, which has been removed given that R 4.5.0 or later has sufficient code in its fallback LAPACK (used e.g. on Windows)
open issue
#484 offering help to the (then 226) packages needing help
transitioning from (enforced) C++11
open issue
#485 offering help to the (then 135) packages needing help with
deprecations
open issue
#489 coordinating pull requests and patches to 35 packages for the
C++11 transition
open issue
#491 coordinating pull requests and patches to 25 packages for
deprecation transition
The sixty pull requests (or emailed patches) followed a suggestion by
CRAN to rank-order packages affected by their reverse dependencies
sorted in descending package count. Now, while this change from
Armadillo 14 to 15 was happening, CRAN also tightened the C++11
requirement for packages and imposed a deadline for changes. In
discussion, CRAN also convinced me that a deadline for the deprecation
warning, now unmasked, was viable (and is in fairness commensurate with
similar, earlier changes triggered by changes in the behaviour of either
gcc/g++ or clang/clang++). So we now have two larger deadline campaigns
affecting the package (and as always there are some others).
These deadlines are coming close: October 17 for the C++11
transition, and October 23 for the deprecation warning. Now, as became
clear preparing the sixty pull requests and patches, these changes are
often relatively straightforward. For the former, remove the C++11
enforcement and the package will likely build without changes. For the
latter, make the often simple change (e.g. switch from arma::is_finite to std::isfinite). I did not encounter anything much more complicated yet.
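As a quick sketch of how to locate the lines in question in a package source tree (assuming the standard R package layout; file names may vary):

$ grep -n "SystemRequirements" DESCRIPTION
$ grep -n "CXX_STD" src/Makevars src/Makevars.win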
The number of affected packages, approximated by looking at all packages with a reverse dependency on RcppArmadillo and having a deadline, can be computed from CRAN's package database, and has been declining steadily from over 350 to now under 200. For
that a big and heartfelt Thank You! to all the
maintainers who already addressed their package and uploaded updated
packages to CRAN. That rocks, and is truly appreciated.
Yet the number is still large. And while issues #489 and
#491
show a number of pending packages that have merged but not uploaded
(yet?) there are also all the other packages I have not been able to
look at in detail. While preparing sixty PRs / patches was viable over a
period of a good week, I cannot create these for all packages. So with that said, here is a different suggestion for help: all of next week, I will be holding open-door open source office hours online twice a day (11:00h to 13:00h Central, 16:00h to 18:00h Central), which can be booked via this booking link for Monday to Friday next week in either fifteen- or thirty-minute slots. This should offer Google Meet video conferencing (with jitsi as an alternative, which you should be able to control) and should allow for screen sharing. (I cannot hook up Zoom as my default account has organization settings with a different calendar integration.)
If you are reading this and have a package that still needs help, I hope to see you in the Open Source Office Hours to aid in the RcppArmadillo-related updates to your package. Please book a slot!
Brief note to maybe spare someone else the trouble. If you want to hide e.g. a huge table in Backstage (techdocs/mkdocs) behind a collapsible element, you need the md_in_html extension and the markdown attribute for it to kick in on the <details> HTML tag.
Add the extension to your mkdocs.yaml:
markdown_extensions:
- md_in_html
Hide the table in your markdown document in a collapsible element
like this:
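A minimal sketch of such an element (the table is just placeholder content):

<details markdown="1">
<summary>Click to expand the table</summary>

| Column A | Column B |
|----------|----------|
| foo      | bar      |

</details>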
It's also required to have an empty line between the HTML tag and the start of the markdown part. It rendered for me that way in VSCode, GitHub and Backstage.
Welcome to post 53 in the R4 series.
Continuing with posts #51
from Tuesday and #52
from Wednesday and their stated intent of posting some more here
is another quick one. Earlier today I helped another package developer
who came to the r-package-devel list asking for help with a build error
on the Fedora machine at CRAN running recent / development
clang. In such situations, the best first step is often to
replicate the issue. As I pointed out on the list, the LLVM team behind
clang maintains an apt repo
at apt.llvm.org/, making it a good resource to add to Debian-based containers such as the Rocker r-base or the official r-base (the two are in fact interchangeable, and I take care of both).
A small pothole, however, is that the documentation at the top of the apt.llvm.org site is a bit stale and behind on two aspects that have changed on current Debian systems (i.e. unstable/testing as used for r-base). First,
apt now prefers files ending in .sources (in a
nicer format) and second, it now really requires a key (which is good
practice). As it took me a few minutes to regather how to meet both
requirements, I reckoned I might as well script this.
Et voilà, the following script does that:
it can update and upgrade the container (currently
commented-out)
it fetches the repository key in ascii form from the llvm.org
site
it creates the sources entry, here tagged for llvm current (22 at the time of writing)
it sets up the required ~/.R/Makevars to use that
compiler
it installs clang-22 (and clang++-22)
(still using the g++ C++ library)
#!/bin/sh
#
# Update does not hurt but is not strictly needed
#apt update --quiet --quiet
#apt upgrade --yes
#
# wget -qO- https://apt.llvm.org/llvm-snapshot.gpg.key | sudo tee /etc/apt/trusted.gpg.d/apt.llvm.org.asc
#
# or as we are root in container
wget -qO- https://apt.llvm.org/llvm-snapshot.gpg.key > /etc/apt/trusted.gpg.d/apt.llvm.org.asc

cat <<EOF > /etc/apt/sources.list.d/llvm-dev.sources
Types: deb
URIs: http://apt.llvm.org/unstable/
# for clang-21
# Suites: llvm-toolchain-21
# for current clang
Suites: llvm-toolchain
Components: main
Signed-By: /etc/apt/trusted.gpg.d/apt.llvm.org.asc
EOF

test -d ~/.R || mkdir ~/.R
cat <<EOF > ~/.R/Makevars
CLANGVER=-22
# CLANGLIB=-stdlib=libc++
CXX=clang++\$(CLANGVER) \$(CLANGLIB)
CXX11=clang++\$(CLANGVER) \$(CLANGLIB)
CXX14=clang++\$(CLANGVER) \$(CLANGLIB)
CXX17=clang++\$(CLANGVER) \$(CLANGLIB)
CXX20=clang++\$(CLANGVER) \$(CLANGLIB)
CC=clang\$(CLANGVER)
SHLIB_CXXLD=clang++\$(CLANGVER) \$(CLANGLIB)
EOF

apt update
apt install --yes clang-22
Once the script is run, one can test a package (or set of packages)
against clang-22 and clang++-22. This may help
R package developers. The script is also generic enough for other
development communities who can ignore (or comment-out / delete) the bit
about ~/.R/Makevars and deploy the compiler differently.
Updating the softlink as apt-preferences does is one way
and done in many GitHub Actions recipes. As we only need
wget here a basic Debian container should work, possibly
with the addition of wget. For R users r-base
hits a decent sweet spot.
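As a rough usage sketch (the container invocation and file names here are illustrative, not from the text above), one could run the script inside a fresh r-base container and then check a package against the new compiler:

$ docker run --rm -ti -v "$PWD":/work -w /work r-base bash
# inside the container:
$ sh setup-clang22.sh
$ R CMD check somepackage_1.0.tar.gz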
Like many others, I have seen bot traffic cause significant issues for my hosted server recently. I've been noticing a dramatic increase in bots that do not respect robots.txt, especially the crawl-delay I have set there. Not only that, but many of them send user-agent strings that quite precisely match what desktop browsers send. That is, they don't identify themselves.
They posed a particular problem on two sites: my blog, and the lists.complete.org archives.
The list archive is a completely static site, but it has many pages, so ill-behaved bots absolutely hammer it by following links.
My blog runs WordPress. It has fewer pages, but because it uses PHP, it doesn't need as many hits to start to bog down. Also, there is the Mastodon thundering herd problem, and since I participate on Mastodon, this hits my server.
The solution was one of layers.
I had already added a crawl-delay line to robots.txt. It helped a bit, but many bots these days aren't well-behaved. Next, I added WP Super Cache to my WordPress installation. I also enabled APCu in PHP and installed APCu Manager. Again, each step helped. Again, not quite enough.
Finally, I added Anubis. Installing it (especially if using the Docker container) was under-documented, but I figured it out. By default, it is designed to block AI bots and try to challenge everything with Mozilla in its user-agent (which is most things) with a Javascript challenge.
That's not quite what I want. If a bot is well-behaved, AI or otherwise, it will respect my robots.txt and I can more precisely control it there. Also, I intentionally support non-Javascript browsers on many of the sites I host, so I wanted to be judicious. Eventually I configured Anubis to only challenge things that present a user-agent that looks fully like a real browser. In other words, real browsers should pass right through, and bad bots pretending to be real browsers will fail.
That was quite effective. It reduced load further to the point where things are ordinarily fairly snappy.
I had previously been using mod_security to block some bots, but it seemed to be getting in the way of the Fediverse at times. When I disabled it, I observed another increase in speed. Anubis was likely going to get rid of those annoying bots itself anyhow.
As a final step, I migrated to a faster hosting option. This post will show me how well it survives the Mastodon thundering herd!
Update: Yes, it handled it quite nicely now.
Regarding Debian packaging this was a rather quiet month. I uploaded version
1.24.0-1 of foot and version 2.8.0-1 of
git-quick-stats. I took
the opportunity and started migrating my packages to the new version 5 watch
file
format,
which I think is much more readable than the previous format.
I also uploaded version 0.1.1-1 of
libscfg to NEW. libscfg is a C
implementation of the scfg configuration
file format and it is a dependency of recent versions of kanshi. kanshi is a tool similar to autorandr which allows you to define output profiles; kanshi then switches to the correct output profile on hotplug events. Once libscfg is in unstable I can finally update kanshi to the latest version.
A lot of time this month went into finalizing a redesign of the output rendering of carl. carl is a small Rust program I wrote
that provides a calendar view similar to cal, but it comes with colors and
ical file integration. That means that you can not only display a simple
calendar, but also colorize/highlight dates based on various attributes or
based on events on that day. In the initial versions of carl the output
rendering was simply hardcoded into the app.
This was a bit cumbersome to maintain and not configurable for users. I am
using templating languages on a daily basis, so I decided I would reimplement
the output generation of carl to use templates. I chose the
minijinja Rust library which is
based on the syntax and behavior of the Jinja2 template engine for Python .
There are others out there, like tera, but
minijinja seems to be more active in development currently. I worked on this
implementation on and off for the last year and finally had the time to finish
it up and write some additional tests for the outputs. It is easier to maintain
templates than Rust code that uses write!() to format the output. I also
implemented a configuration option for users to override the templates.
In addition to the output refactoring I also fixed a couple of bugs and finally released v0.4.0 of carl.
In my dayjob I released version 0.53 of apis-core-rdf which contains the place
lookup field which I implemented in August. A couple of weeks later we released
version 0.54, which comes with a middleware to pass on messages from the Django messages framework via a response header to HTMX to trigger message popups. This implementation is based
on the blog post Using the Django messages framework with
HTMX. Version
0.55 was the last release in September. It contained preparations for
refactoring the import logic as well as a couple of UX improvements.
Welcome to post 51 in the R4 series.
A while back I realized I should really just post a little more, as not all posts have to be as deep and introspective as, for example, the recent-ish two cultures post #49.
So this post is a neat little trick I (somewhat belatedly) realized recently. The context is the ongoing transition from
(Rcpp)Armadillo 14.6.3 and earlier to (Rcpp)Armadillo 15.0.2 or later.
(I need to write a bit more about that, but that may require a bit more
time.) (And there are a total of seven (!!) issue tickets managing the
transition with issue
#475 being the main parent issue, please see there for more
details.)
In brief, the newer and current Armadillo no longer allows C++11 (which also means it no longer allows suppression of deprecation warnings). It so happens that around a decade ago packages were actively encouraged to move towards C++11, so many either set an explicit SystemRequirements: for it, or set CXX_STD=CXX11 in src/Makevars(.win). CRAN has for some time now issued NOTEs asking for this to be removed, and more recently enforced this with actual deadlines. In RcppArmadillo I opted to accommodate old(er) packages (using this by-now anti-pattern) and flip to Armadillo 14.6.3 during a transition period. That is what the package does now: it gives you either Armadillo 14.6.3 in case C++11 was detected (or this legacy version was actively selected via a compile-time #define), or it uses Armadillo 15.0.2 or later.
So this means we can have either one of two versions, and may want to
know which one we have. Armadillo carries its own version macros, as
many libraries or projects do (R of course included). Many many years
ago (git blame points to revisions sixteen and twelve years ago) we added the following helper function to the package (full source here; shown without the full roxygen2 comment header):
// [[Rcpp::export]]
Rcpp::IntegerVector armadillo_version(bool single) {
    // These are declared as constexpr in Armadillo which actually does not define them
    // They are also defined as macros in arma_version.hpp so we just use that
    const unsigned int major = ARMA_VERSION_MAJOR;
    const unsigned int minor = ARMA_VERSION_MINOR;
    const unsigned int patch = ARMA_VERSION_PATCH;
    if (single) {
        return Rcpp::wrap(10000 * major + 100 * minor + patch);
    } else {
        return Rcpp::IntegerVector::create(Rcpp::Named("major") = major,
                                           Rcpp::Named("minor") = minor,
                                           Rcpp::Named("patch") = patch);
    }
}
It either returns a (named) vector in the standard major, minor, patch form of the common package versioning pattern, or a single integer which can be used more easily in C(++) via preprocessor macros. And this being an Rcpp-using package, we can of course access either easily from R:
> library(RcppArmadillo)
> armadillo_version(FALSE)
major minor patch
   15     0     2
> armadillo_version(TRUE)
[1] 150002
>
Perfectly valid and truthful. But cumbersome at the R level. So when preparing for these (Rcpp)Armadillo changes in one of my packages, I realized I could alter such a function and set the S3 type to package_version. (Full version of one such variant here)
// [[Rcpp::export]]
Rcpp::List armadilloVersion() {
    // create a vector of major, minor, patch
    auto ver = Rcpp::IntegerVector::create(ARMA_VERSION_MAJOR, ARMA_VERSION_MINOR, ARMA_VERSION_PATCH);
    // and place it in a list (as e.g. packageVersion() in R returns)
    auto lst = Rcpp::List::create(ver);
    // and class it as 'package_version' accessing print() etc methods
    lst.attr("class") = Rcpp::CharacterVector::create("package_version", "numeric_version");
    return lst;
}
Three statements, each to
create the integer vector of known dimensions and compile-time known values
embed it in a list (as that is what the R type expects)
set the S3 class, which is easy because Rcpp can access attributes and create character vectors
and return the value. And now in R we can operate more easily on this (using the triple-colon operator as I didn't export it from this package):
An object of class package_version inheriting from
numeric_version can directly compare against a (human- but
not normally machine-readable) string like 15.0.0 because the simple
S3 class defines appropriate operators, as well as print()
/ format() methods as the first expression shows. It is
these little things that make working with R so smooth, and we can
easily (three statements !!) do so from Rcpp-based packages too.
The underlying object really is merely a list containing a vector, but the S3 glue around it makes it behave nicely.
So next time you are working with an object you plan to return to R, consider classing it to take advantage of existing infrastructure (if it exists, of course). It's easy enough to do, and may smooth the experience on the R side.
In December 2024, I went on a trip through four countries - Singapore, Malaysia, Brunei, and Vietnam - with my friend Badri. This post covers our experiences in Singapore.
I took an IndiGo flight from Delhi to Singapore, with a layover in Chennai. At the Chennai airport, I was joined by Badri. We had an early morning flight from Chennai that would land in Singapore in the afternoon. Within 48 hours of our scheduled arrival in Singapore, we submitted an arrival card online. At immigration, we simply needed to scan our passports at the gates, which opened automatically to let us through, and then give our address to an official nearby. The process was quick and smooth, but it unfortunately meant that we didn t get our passports stamped by Singapore.
Before I left the airport, I wanted to visit the nature-themed park with a fountain I saw in pictures online. It is called Jewel Changi, and it took quite some walking to get there. After reaching the park, we saw a fountain that could be seen from all the levels. We roamed around for a couple of hours, then proceeded to the airport metro station to get to our hotel.
A shot of Jewel Changi. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
There were four ATMs on the way to the metro station, but none of them provided us with any cash. This was the first country (outside India, of course!) where my card didn't work at ATMs.
To use the metro, one can tap the EZ-Link card or bank cards at the AFC gates to get in. You cannot buy tickets using cash. Before boarding the metro, I used my credit card to get Badri an EZ-Link card from a vending machine. It was 10 Singapore dollars (₹630) - 5 for the card, and 5 for the balance. I had planned to use my Visa credit card to pay for my own fare. I was relieved to see that my card worked, and I passed through the AFC gates.
We had booked our stay at a hostel named Campbell's Inn, which was the cheapest we could find in Singapore. It was ₹1500 per night for dorm beds. The hostel was located in Little India. While Little India has an eponymous metro station, the one closest to our hostel was Rochor.
On the way to the hostel, we found out that our booking had been canceled.
We had booked from the Hostelworld website, opting to pay the deposit in advance and to pay the balance amount in person upon reaching. However, Hostelworld still tried to charge Badri's card again before our arrival. When the unauthorized charge failed, they sent an automatic message saying they had tried to charge us and that we should contact them soon to avoid cancellation, which we couldn't do as we were on the plane.
Despite this, we went to the hostel to check the status of our booking.
The trip from the airport to Rochor required a couple of transfers. It was 2 Singapore dollars (approx. ₹130) and took approximately an hour.
Upon reaching the hostel, we were informed that our booking had indeed been canceled, and were not given any reason for the cancelation. Furthermore, no beds were available at the hostel for us to book on the spot.
We decided to roam around and look for accommodation at other hostels in the area. Soon, we found a hostel by the name of Snooze Inn, which had two beds available. It was 36 Singapore dollars per person (around ₹2300) for a dormitory bed. Snooze Inn advertised supporting RuPay cards and UPI. Some other places in that area did the same. We paid using my card. We checked in and slept for a couple of hours after taking a shower.
By the time we woke up, it was dark. We met Praveen's friend Sabeel to get my FLX1 phone. We also went to Mustafa Center nearby to exchange Indian rupees for Singapore dollars. Mustafa Center also had a shopping center with shops selling electronic items and souvenirs, among other things. When we were dropping off Sabeel at a bus stop, we discovered that the bus stops in Singapore had a digital board mentioning the bus routes for the stop and the number of minutes each bus was going to take.
In addition to an organized bus system, Singapore had good pedestrian infrastructure. There were traffic lights and zebra crossings for pedestrians to cross the roads. Unlike in Indian cities, rules were being followed. Cars would stop for pedestrians at unmanaged zebra crossings; pedestrians would in turn wait for their crossing signal to turn green before attempting to walk across. Therefore, walking in Singapore was easy.
Traffic rules were taken so seriously in Singapore that I (as a pedestrian) was afraid of unintentionally breaking them, which could get me in trouble, as breaking rules is punished with heavy fines in the country. For example, crossing roads without using a marked crossing (while being within 50 meters of it) - also known as jaywalking - is an offence in Singapore.
Moreover, the streets were litter-free, and cleanliness seemed like an obsession.
After exploring Mustafa Center, we went to a nearby 7-Eleven to top up Badri s EZ-Link card. He gave 20 Singapore dollars for the recharge, which credited the card by 19.40 Singapore dollars (0.6 dollars being the recharge fee).
When I was planning this trip, I discovered that the World Chess Championship match was being held in Singapore. I seized the opportunity and bought a ticket in advance. The next day - the 5th of December - I went to watch the 9th game between Gukesh Dommaraju of India and Ding Liren of China. The venue was a hotel on Sentosa Island, and the ticket was 70 Singapore dollars, which was around ₹4000 at the time.
We checked out from our hostel in the morning, as we were planning to stay with Badri's aunt that night. We had breakfast at a place in Little India. Then we took a couple of buses, followed by a walk to Sentosa Island. Paying the fare for the buses was similar to the metro - I tapped my credit card in the bus, while Badri tapped his EZ-Link card. We also had to tap it while getting off.
If you are tapping your credit card to use public transport in Singapore, keep in mind that the total amount of all the trips taken on a day is deducted at the end. This makes it hard to determine the cost of individual trips. For example, I could take a bus and get off after tapping my card, but I would have no way to determine how much this journey cost.
When you tap in, the maximum fare amount gets deducted. When you tap out, the balance amount gets refunded (if it's a shorter journey than the maximum fare one). So, there is an incentive for passengers not to get off without tapping out. Going by your card statement, it looks like all that happens virtually, and only one statement comes in at the end. Maybe this combining only happens for international cards.
We got off the bus a kilometer away from Sentosa Island and walked the rest of the way. We went on the Sentosa Boardwalk, which is itself a tourist attraction. I was using Organic Maps to navigate to the hotel Resorts World Sentosa, but Organic Maps' route led us through an amusement park. I tried asking the locals (people working in shops) for directions, but it was a Chinese-speaking region, and they didn't understand English. Fortunately, we managed to find a local who helped us with the directions.
A shot of Sentosa Boardwalk. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
Following the directions, we somehow ended up having to walk on a road which did not have pedestrian paths. Singapore is a country with strict laws, so we did not want to walk on that road. Avoiding that road led us to the Michael Hotel. There was a person standing at the entrance, and I asked him for directions to Resorts World Sentosa. The person told me that the bus (which was standing at the entrance) would drop me there! The bus was a free service for getting to Resorts World Sentosa. Here I parted ways with Badri, who went to his aunt's place.
I got to Resorts World Sentosa and showed my ticket to get in. There were two zones inside - the first was a room with a glass wall separating the audience and the players. This was the room to watch the game physically, and resembled a zoo or an aquarium. :) The room was also a silent room, which means talking or making noise was prohibited. Audiences were only allowed to have mobile phones for the first 30 minutes of the game - since I arrived late, I could not bring my phone inside that room.
The other zone was outside this room. It had a big TV on which the game was being broadcast along with commentary by David Howell and Jovanka Houska - the official FIDE commentators for the event. If you don't already know, FIDE is the authoritative international chess body.
I spent most of the time outside that silent room, giving me an opportunity to socialize. A lot of people were from Singapore. I saw there were many Indians there as well. Moreover, I had a good time with Vasudevan, a journalist from Tamil Nadu who was covering the match. He also asked questions to Gukesh during the post-match conference. His questions were in Tamil to lift Gukesh's spirits, as Gukesh is a Tamil speaker.
Tea and coffee were free for the audience. I also bought a T-shirt from their stall as a souvenir.
After the game, I took a shuttle bus from Resorts World Sentosa to a metro station, then travelled to Pasir Ris by metro, where Badri was staying with his aunt. I thought of getting something to eat, but could not find any cafés or restaurants while I was walking from the Pasir Ris metro station to my destination, and was positively starving when I got there.
Badri's aunt's place was an apartment in a gated community. On the gate was a security guard who asked me the address of the apartment. Upon entering, there were many buildings. To enter the building, you need to dial the number of the apartment you want to go to and speak to them. I had seen that in the TV show Seinfeld, where Jerry's friends used to dial Jerry to get into his building.
I was afraid they might not have anything to eat because I told them I was planning to get something on the way. This was fortunately not the case, and I was relieved to not have to sleep with an empty stomach.
Badri's uncle gave us an idea of how safe Singapore is. He said that even if you forget your laptop in a public space, you can go back the next day to find it right there in the same spot. I also learned that owning cars was discouraged in Singapore - the government imposes a high registration fee on them, while also making public transport easy to use and affordable. I also found out that 7-Eleven was not that popular among residents in Singapore, unlike in Malaysia or Thailand.
The next day was our third and final day in Singapore. We had a bus in the evening to Johor Bahru in Malaysia. We got up early, had breakfast, and checked out from Badri's aunt's home. A store by the name of Cat Socrates was our first stop for the day, as Badri wanted to buy some stationery. The plan was to take the metro, followed by the bus. So we got to Pasir Ris metro station. Next to the metro station was a mall. In the mall, Badri found an ATM where our cards worked, and we got some Singapore dollars.
It was noon when we reached the stationery shop mentioned above. We had to walk a kilometer from the place where the bus dropped us. It was a hot, sunny day in Singapore, so walking was not comfortable. We had to go through residential areas in Singapore. We saw some non-touristy parts of Singapore.
After we were done with the stationery shop, we went to a hawker center to get lunch. Hawker centers are unique to Singapore. They have a lot of shops that sell local food at cheap prices. It is similar to a food court. However, unlike the food courts in malls, hawker centers are open-air and can get quite hot.
This is the hawker center we went to. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
To have something, you just need to buy it from one of the shops and find a table. After you are done, you need to put your tray in the tray-collecting spots. I had a kaya toast with chai, since there weren't many vegetarian options. I also bought a persimmon from a nearby fruit vendor. On the other hand, Badri sampled some local non-vegetarian dishes.
Table littering at the hawker center was prohibited by law. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
Next, we took a metro to Raffles Place, as we wanted to visit Merlion, the icon of Singapore. It is a statue having the head of a lion and the body of a fish. While getting through the AFC gates, my card was declined. Therefore, I had to buy an EZ-Link card, which I had been avoiding because the card itself costs 5 Singapore dollars.
From the Raffles Place metro station, we walked to Merlion. The place also gave a nice view of Marina Bay Sands. It was filled with tourists clicking pictures, and we also did the same.
Merlion from behind, giving a good view of Marina Bay Sands. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
After this, we went to the bus stop to catch our bus to the border city of Johor Bahru, Malaysia. The bus was more than an hour late, and we worried that we had missed the bus. I asked an Indian woman at the stop who also planned to take the same bus, and she told us that the bus was late. Finally, our bus arrived, and we set off for Johor Bahru.
Before I finish, let me give you an idea of my expenditure. Singapore is an expensive country, and I realized that expenses can add up pretty quickly. Overall, my stay in Singapore for 3 days and 2 nights cost approximately 5500 rupees, and that was with one night spent at Badri's aunt's place (so we didn't have to pay for accommodation that night) and a couple of meals we didn't have to pay for. This amount doesn't include the ticket for the chess game, but it does include the cost of getting there. If you are in Singapore, you will likely visit Sentosa Island anyway.
Stay tuned for our experiences in Malaysia!
Credits: Thanks to Dione, Sahil, Badri and Contrapunctus for reviewing the draft. Thanks to Bhe for spotting a duplicate sentence.
The latest Proxmox release introduces a new QEMU machine version that seems to behave differently in how it addresses the virtual disk configuration. In addition, the regular query-block QMP command doesn't list the created bitmaps as it usually does.
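For anyone who wants to reproduce the check, Proxmox exposes a per-VM QMP socket under /var/run/qemu-server/. A minimal sketch (the VM ID 100 is just an example, and the socket path should be verified on your host):

    # negotiate QMP capabilities, then ask QEMU for its block devices and dirty bitmaps
    (echo '{"execute":"qmp_capabilities"}'; echo '{"execute":"query-block"}'; sleep 1) | \
        socat - UNIX-CONNECT:/var/run/qemu-server/100.qmp

On the affected machine versions, the bitmaps are missing from this output even though they exist.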
If the virtual machine version is set to "9.2+pve", everything seems to work out of the box.
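As a hedged sketch of the workaround, the machine version can be pinned per VM with qm; the VM ID and the exact versioned machine string below are examples (check the values your Proxmox build actually offers, e.g. in the GUI's Machine dropdown):

    # pin a VM to the older machine version
    qm set 100 --machine pc-i440fx-9.2+pve1
    # verify the setting
    qm config 100 | grep machine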
I've released version 0.50 with some small changes so it's compatible with the newer machine versions.
I'm something of a filesystem geek, I guess. I first wrote about ZFS on Linux 14 years ago, and even before I used ZFS, I had used ext2/3/4, jfs, reiserfs, xfs, and no doubt some others.
I've also used btrfs. I last posted about it in 2014, when I noted it had some advantages over ZFS, but also some drawbacks, including a lot of kernel panics.
Since that comparison, ZFS has gained trim support and btrfs has stabilized. The btrfs status page gives you an accurate idea of which features are safe to use on btrfs.
Background: Moving towards ZFS and btrfs
I have been trying to move everything away from ext4 and onto either ZFS or btrfs. There are generally several reasons for that:
The checksums for every block help detect potential silent data corruption
Instant snapshots make consistent backups of live systems a lot easier, and without the hassle and wasted space of LVM snapshots
Transparent compression and dedup can save a lot of space in storage-constrained environments
For any machine with at least 32GB of RAM (plus my backup server, which has only 8GB), I run ZFS. While it lacks some of the flexibility of btrfs, it has polish: zfs list -o space gives useful space accounting, and zvols can back VM storage. With my project simplesnap, I can easily send hourly backups with ZFS, and I choose to send them over NNCP in most cases.
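For reference, the manual equivalent of that workflow looks roughly like the following; the dataset names and the backup host are made up for illustration, and simplesnap automates the snapshot/send part (the author sends over NNCP rather than ssh):

    # per-dataset breakdown of used space, snapshots, and reservations
    zfs list -o space

    # atomic snapshot of a live dataset, then incremental replication elsewhere
    zfs snapshot tank/home@hourly-2025-11-30
    zfs send -i tank/home@hourly-2025-11-29 tank/home@hourly-2025-11-30 | \
        ssh backuphost zfs receive backup/home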
I have a few VMs in the cloud (running Debian, of course) that I use to host this blog, my website, my gopher site, the quux NNCP public relay, and various other things.
In these environments, storage space can be expensive. For that matter, so can RAM, and ZFS is RAM-hungry, so that rules it out. I've been running btrfs in those environments for a few years now, and it's worked out well. I do async dedup, lzo or zstd compression depending on the needs, and the occasional balance and defrag.
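As a rough illustration of that setup (the UUID is a placeholder, and the post doesn't say which dedup tool is used; duperemove is one common choice for out-of-band, i.e. async, dedup):

    # /etc/fstab entry with transparent zstd compression and no atime updates
    UUID=xxxxxxxx-xxxx  /  btrfs  defaults,noatime,compress=zstd  0 0

    # out-of-band dedup pass over the filesystem
    duperemove -dr --hashfile=/var/tmp/dedup.hash /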
Filesystems on the Raspberry Pi
I run Debian trixie on all my Raspberry Pis, not Raspbian or Raspberry Pi OS, for a number of reasons. My 8-year-old uses a Raspberry Pi 400 as her primary computer and loves it! She doesn't do web browsing, but she plays Tuxpaint and some old DOS games like Math Blaster via dosbox, and uses Thunderbird for a locked-down email account.
But it was SLOW. Just really, glacially, slow, especially for Thunderbird.
My first step to address that was to get a faster MicroSD card to hold the OS. That was a dramatic improvement. It's still slow, but a lot faster.
Then I thought: maybe I could use btrfs with LZO compression to reduce the amount of I/O and speed things up further? Analysis showed that things were mostly slow due to I/O constraints, not CPU.
The conversion
Rather than use the btrfs in-place conversion from ext4, I opted to dar it up (like tar), run mkfs.btrfs on the SD card, then unpack the archive back onto it. Easy enough, right?
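In concrete terms, the plan looked something like this; the device name, mount point, and archive path are illustrative, not taken from the post:

    # archive the existing ext4 contents
    mount /dev/mmcblk0p2 /mnt/sd
    dar -c /root/sd-backup -R /mnt/sd
    umount /mnt/sd

    # recreate the filesystem as btrfs, then unpack the archive back onto it
    mkfs.btrfs -f -L rpi-root /dev/mmcblk0p2
    mount -o compress=lzo,ssd_spread /dev/mmcblk0p2 /mnt/sd
    dar -x /root/sd-backup -R /mnt/sd

(Remember that /etc/fstab will also need updating for the new filesystem type and UUID.)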
Well, not so fast. The MicroSD card is 128GB, and the entire filesystem is 6.2GB. But after unpacking 100MB onto it, I got an out of space error.
btrfs has this notion of block groups. By default, each block group is dedicated to either data or metadata. btrfs fi df and btrfs fi usage will show you details about the block groups.
btrfs allocates block groups greedily (the ssd_spread mount option I use may have exacerbated this). What happened was that it allocated almost the entire drive to data block groups, trying to spread the data across it. It so happened that dar archived some larger files first (maybe /boot), so btrfs was allocating data and metadata block groups assuming few large files. But then it started unpacking one of the directories in /usr with lots of small files (maybe /usr/share/locale). It quickly filled up the metadata block group, and since the entire SD card had already been allocated to block groups, I got ENOSPC.
Deleting a few files and running btrfs balance resolved it; after the balance, 1GB was allocated to metadata, which was plenty. I re-ran the dar extract and everything was fine. See more details on btrfs balance and block groups.
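To see this yourself, the inspection and the fix look roughly like the following (exact output varies with the btrfs-progs version; the mount point is an example):

    # how much of the device is allocated to data vs. metadata block groups
    btrfs filesystem df /mnt/sd
    btrfs filesystem usage /mnt/sd

    # compact block groups that are under 10% full so their space can be reallocated
    btrfs balance start -dusage=10 -musage=10 /mnt/sd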
This was the only btrfs problem I encountered.
Benchmarks
I timed two things prior to switching to btrfs: how long it takes to boot (measured from the moment I turn on the power until the moment the XFCE login box is displayed), and how long it takes to start Thunderbird.
After switching to btrfs with LZO compression, somewhat to my surprise, both measures were exactly the same!
Why might this be?
It turns out that SD cards are understood to be pathologically bad at random read performance. Boot and Thunderbird startup both likely involve a lot of small random reads, not large streaming reads. Therefore, it may be that even though I have reduced the total I/O needed, the impact is insubstantial because the real bottleneck is the seeks across the disk.
Still, I gain better backup support and detection of silent data corruption, so I kept btrfs.
SSD mount options and MicroSD endurance
btrfs has several mount options specifically relevant to SSDs. Aside from the obvious trim support, they are ssd and ssd_spread. The documentation on this is vague and my attempts to learn more about it found a lot of information that was outdated or unsubstantiated folklore.
Some reports suggest that older SSDs will benefit from ssd_spread, but that it may have no effect or even a harmful effect on newer ones, and can at times cause fragmentation or write amplification. I could find nothing to back this up, though. And it seems particularly difficult to figure out what kind of wear leveling SSD firmware does. MicroSD firmware is likely to be on the less-advanced side, but still, I have no idea what it might do. In any case, with btrfs not updating blocks in-place, it should be better than ext4 in the most naive case (no wear leveling at all) but may have somewhat more write traffic for the pathological worst case (frequent updates of small portions of large files).
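If you want to see what you are actually running with, the kernel reports whether it treats the device as SSD-like, and findmnt shows the effective mount options, including ones btrfs applied automatically (the device name is an example):

    # 0 means the kernel considers the device non-rotational (SSD-like)
    cat /sys/block/mmcblk0/queue/rotational

    # effective mount options for the root filesystem
    findmnt -no OPTIONS /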
One anecdotal report I read (and can't find anymore) was from someone who had set up a sort of torture test for SD cards; they reported that ext4 lasted a few weeks or months before the MicroSDs failed, while btrfs lasted years.
If you are looking for a MicroSD card, by the way, The Great MicroSD Card Survey is a nice place to start.
For longevity: I already mount all my filesystems with noatime, so I continue to recommend that. You can also consider limiting the journal size in /etc/systemd/journald.conf and running a daily fstrim (periodic trims may work better than live trims for some filesystems).
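Concretely, that can look like the following sketch; the 64M cap is an arbitrary example, and Debian's fstrim.timer runs weekly unless you override its schedule:

    # /etc/systemd/journald.conf: cap how much space the journal may use
    [Journal]
    SystemMaxUse=64M

    # enable the periodic trim timer (weekly by default on Debian)
    systemctl enable --now fstrim.timer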
Conclusion
I've been pretty pleased with btrfs. The concerns I have today relate to block groups and maintenance (periodic balance and maybe a periodic defrag). I'm not sure I'd be ready to say "put btrfs on the computer you send to someone who isn't Linux-savvy", because the chances of running into issues are higher than with ext4. Still, for people who have some tech savvy, btrfs can improve reliability, and can improve performance in other ways.