Search Results: "p2"

13 October 2025

Freexian Collaborators: Monthly report about Debian Long Term Support, September 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In September, 20 contributors have been paid to work on Debian LTS, their reports are available:
  • Abhijith PA did 10.0h (out of 10.0h assigned and 4.0h from previous period), thus carrying over 4.0h to the next month.
  • Andreas Henriksson did 1.0h (out of 0.0h assigned and 20.0h from previous period), thus carrying over 19.0h to the next month.
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 20.0h (out of 21.0h assigned), thus carrying over 1.0h to the next month.
  • Carlos Henrique Lima Melara did 10.0h (out of 12.0h assigned), thus carrying over 2.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 21.0h (out of 21.0h assigned).
  • Emilio Pozuelo Monfort did 39.75h (out of 40.0h assigned), thus carrying over 0.25h to the next month.
  • Guilhem Moulin did 15.0h (out of 15.0h assigned).
  • Jochen Sprickerhof did 12.0h (out of 9.25h assigned and 11.75h from previous period), thus carrying over 9.0h to the next month.
  • Lee Garrett did 13.5h (out of 21.0h assigned), thus carrying over 7.5h to the next month.
  • Lucas Kanashiro did 8.0h (out of 20.0h assigned), thus carrying over 12.0h to the next month.
  • Markus Koschany did 15.0h (out of 3.25h assigned and 17.75h from previous period), thus carrying over 6.0h to the next month.
  • Paride Legovini did 6.0h (out of 8.0h assigned), thus carrying over 2.0h to the next month.
  • Roberto C. Sánchez did 7.25h (out of 7.75h assigned and 13.25h from previous period), thus carrying over 13.75h to the next month.
  • Santiago Ruano Rincón did 13.25h (out of 13.5h assigned and 1.5h from previous period), thus carrying over 1.75h to the next month.
  • Sylvain Beucler did 17.0h (out of 7.75h assigned and 13.25h from previous period), thus carrying over 4.0h to the next month.
  • Thorsten Alteholz did 21.0h (out of 21.0h assigned).
  • Tobias Frost did 5.0h (out of 0.0h assigned and 8.0h from previous period), thus carrying over 3.0h to the next month.
  • Utkarsh Gupta did 16.5h (out of 14.25h assigned and 6.75h from previous period), thus carrying over 4.5h to the next month.

Evolution of the situation In September, we released 38 DLAs.
  • Notable security updates:
    • modsecurity-apache, prepared by Adrian Bunk, fixes a cross-site scripting vulnerability
    • cups, prepared by Thorsten Alteholz, fixes authentication bypass and denial of service vulnerabilities
    • jetty9, prepared by Adrian Bunk, fixes the MadeYouReset vulnerability (a recent, well-known denial of service vulnerability)
    • python-django, prepared by Chris Lamb, fixes a SQL injection vulnerability
    • firefox-esr and thunderbird, prepared by Emilio Pozuelo Monfort, were updated from the 128.x ESR series to the 140.x ESR series, fixing a number of vulnerabilities as well
  • Notable non-security updates:
    • wireless-regdb, prepared by Ben Hutchings, updates information reflecting changes to radio regulations in many countries
There was one package update contributed by a Debian Developer outside of the LTS Team: an update of node-tar-fs, prepared by Xavier Guimard (a member of the Node packaging team). Finally, LTS Team members also contributed updates of the following packages:
  • libxslt (to stable and oldstable), prepared by Guilhem Moulin, to address a regression introduced in a previous security update
  • libphp-adodb (to stable and oldstable), prepared by Abhijith PA
  • cups (to stable and oldstable), prepared by Thorsten Alteholz
  • u-boot (to oldstable), prepared by Daniel Leidert and Jochen Sprickerhof
  • libcommons-lang3-java (to stable and oldstable), prepared by Daniel Leidert
  • python-internetarchive (to oldstable), prepared by Daniel Leidert
One other notable contribution by a member of the LTS Team is that Sylvain Beucler proposed a fix upstream for CVE-2025-2760 in gimp2. Upstream no longer supports gimp2, but it is still present in Debian LTS, and so proposing this fix upstream is of benefit to other distros which may still be supporting the older gimp2 packages.

Thanks to our sponsors Sponsors that joined recently are in bold.

10 September 2025

John Goerzen: ARM is great, ARM is terrible (and so is RISC-V)

I've long been interested in new and different platforms. I ran Debian on an Alpha back in the late 1990s and was part of the Alpha port team; then I helped bootstrap Debian on amd64. I've got somewhere around 8 Raspberry Pi devices in active use right now, and the free NNCPNET Internet email service I manage runs on an ARM instance at a cloud provider. ARM-based devices are cheap in a lot of ways: they use little power and there are many single-board computers based on them that are inexpensive. My 8-year-old's computer is a Raspberry Pi 400, in fact. So I like ARM.

I've been looking for ARM devices that have accelerated AES (the Raspberry Pi 4 doesn't) so I can use full-disk encryption with them. There are a number of options, since ARM devices are starting to go more mid-range. Radxa's ROCK 5 series of SBCs goes up to 32GB RAM. The Orange Pi 5 Max and Ultra have up to 16GB RAM, as does the Raspberry Pi 5. Pine64's Quartz64 has up to 8GB of RAM. I believe all of these have the ARM cryptographic extensions. They're all small and most are economical.

But I also dislike ARM. There is a terrible lack of standardization in the ARM community. They say their devices run Linux, but the default there is that every vendor has their own custom Debian fork, and quite likely a kernel fork as well. Most don't maintain them very well. Imagine if you were buying x86 hardware. You might have to manage AcerOS, Dellbian, HPian, etc. Most of them have no security support (particularly for the kernel). Some are based on Debian 11 (released in 2021), some on Debian 12 (released in 2023), and none on Debian 13 (released a month ago). That is exactly the situation we have on ARM.

While the Raspberry Pi 4 and below can run Debian trixie directly, Raspberry Pi has not bothered to upstream support for the Pi 5 yet, and Raspberry Pi OS is only based on Debian bookworm (released in 2023) and very explicitly does not support a key Debian feature: you can't upgrade from one Raspberry Pi OS release to the next, so it's a complete reinstall every 2 years instead of just an upgrade. OrangePiOS only supports Debian bookworm and, notably, their kernel is mostly stuck at 5.10 for every image they have (bookworm shipped with 6.1 and bookworm-backports supports 6.12). Radxa has a page on running Debian on one specific board, but they seem not to support Debian directly, rather their fork Radxa OS. There's a different installer for every board; for instance, this one for the Rock 4D. Looking at it, I can see that it uses files from here and here, with a custom kernel, gstreamer, and u-boot, and they put zfs in main for some reason. From Pine64, the Quartz64 seems to be based on an ancient 4.6 or 4.19 kernel. Perhaps, though, one might be able to use Debian's Pine A64+ instructions on it. Trixie doesn't have a u-boot image for the Quartz64, but it does have device tree files for it.

RISC-V seems to be even worse; not only do we have this same issue there, but support in trixie is more limited and so is performance among the supported boards.

The alternative is x86-based mini PCs. There are a bunch based on the N100, N150, or Celeron. Many of them support AES-NI and the prices are roughly in line with the higher-end ARM units. There are some interesting items out there; for instance, the Radxa X4 SBC features both an N100 and a RP2040. Fanless mini PCs are available from a number of vendors. Companies like ZimaBoard have interesting options like the ZimaBlade as well.
The difference in power is becoming less significant; it seems the newer ARM boards need 20W or 30W power supplies, and that may put them in the range of the mini PCs. As for cost, the newer ARM boards need a heat sink and fan, so by the time you add SBC, fan, storage, etc. you're starting to get into the price range of the mini PCs. It is great to see all the options of small SBCs with ARM and RISC-V processors, but at some point you've got to throw up your hands, go "this ecosystem has a lot of problems", and consider just going back to x86. I'm not sure if I'm quite there yet, but I'm getting close. Update 2025-09-11: I found a performant encryption option for the Pi 4, but was stymied by serial console problems; see the update post.
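For anyone doing the same survey, a quick way to check whether a given board has hardware AES and what it is worth is sketched below; this is a generic example (not from the post above) and assumes a Linux system with cryptsetup installed.

# the "aes" CPU feature flag indicates the ARMv8 Cryptography Extensions
grep -m1 -ow aes /proc/cpuinfo || echo "no hardware AES"
# compare throughput for the cipher LUKS typically uses (runs in memory, no disk access)
cryptsetup benchmark --cipher aes-xts-plain64 --key-size 512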

2 September 2025

Debian Outreach Team: Spaarsh GSoC Report

GSoC 2025 Report: Enhancing Debian packages with ROCm GPU acceleration (by Spaarsh Thakkar, 2025-09-01)

GitLab Salsa: @Spaarsh

GitHub: Spaarsh

Introduction I am Spaarsh Thakkar, a final-year Computer Science Engineering undergrad from India. My interests lie in research and systems. My recent work has been in and around Graphics Processing Units and I also hold a keen interest in Computer Networks. At the time of writing, I have been an open-source contributor for almost a year.

Proposal Description (as shown on GSoC Project Profile1) Due to Debian's open-source nature, no Debian package in main can have a proprietary GPU package listed as a dependency. While AI and HPC workloads increasingly rely on GPU acceleration, many Debian packages still focus solely on CUDA, which is proprietary. With the advent of ROCm, an open-source GPU computing platform, we can now integrate full-fledged AMD GPU support into Debian packages. This will improve the experience of developers working in AI/ML and HPC while positioning Debian as a strong OS choice for GPU-driven workloads. The proposal aims to aid in solving the aforementioned problem by packaging several ROCm packages for Debian and adding ROCm support to some existing Debian packages. The deliverables are as follows:
  1. New Debian packages with GPU support
  2. Enhanced GPU support within existing Debian packages
  3. More autopkgtests running on the Debian ROCm CI

Key Objectives Enable ROCm in:
  1. dbcsr
  2. gloo
  3. cp2k
Publish the following packages to the Debian apt archive:
  1. hipblas-common
  2. hipBLASlt

Work Report

1. Publishing hipblas-common to apt This objective was successfully completed, resulting in hipblas-common being published in the apt repository2. The process involved the following steps (a rough sketch of the build and upload commands appears below):
  1. Filing an Intent-To-Package (ITP)3
  2. Pulling the upstream source code repository from GitHub
  3. Adding the debian/ packaging files
  4. Testing the package locally
  5. Creating the corresponding project under rocm-team4
  6. Applying the necessary changes
  7. Building the package
  8. Testing it using sbuild
  9. Signing the package files
  10. Uploading the package to the mentors.debian.net archive (now in the official archive)5
  11. Addressing review feedback and making changes
  12. Requesting sponsorship6
  13. Securing sponsorship, which led to the package being accepted into the experimental branch of apt
Since the beginning of GSoC, the package has also been promoted to the unstable branch2.
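For readers new to the workflow, steps 7 through 10 roughly correspond to commands like the following. This is a generic sketch, not the exact invocations used; the file names and the dput target are illustrative and assume sbuild and a mentors upload target are already configured.

# build the (unsigned) source package from the packaging tree
dpkg-buildpackage -S -us -uc
# build the binaries in a clean unstable chroot
sbuild -d unstable ../hipblas-common_*.dsc
# sign the resulting changes file with your OpenPGP key
debsign ../hipblas-common_*_source.changes
# upload to mentors.debian.net for review and sponsorship
dput mentors ../hipblas-common_*_source.changes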

2. DBCSR ROCm and Multi-Arch Support During my GSoC project, I worked on extending the DBCSR (Distributed Block Compressed Sparse Row)7 package to improve its ROCm/HIP support and to handle multi-architecture GPU kernels in a way that is both practical for upstream maintainers and Debian package developers. The code changes can be found at my dbcsr fork here8.

ROCm/HIP Enablement
  • Enabled ROCm backend support in DBCSR, allowing GPU acceleration beyond CUDA through HIP-based builds.
  • Investigated and resolved build issues specific to HIP kernels within DBCSR.

Multi-Architecture GPU Kernel Handling (The following content was presented in greater detail at DebConf 25 as well. The presentation video can be found here9 and the presentation slide can be found here10).
  • DBCSR contains GPU kernels that are heavily optimized for specific architectures. By default, these are built for a single target architecture, which poses challenges for packaging where binaries need to support multiple possible GPU targets.
  • Explored different strategies for solving the multi-arch GPU kernel distribution problem (a brief build sketch follows this list), including:
    • Option 1: Fat binaries (embedding multiple GPU architectures into a single binary, with runtime dispatch). This is ideal for end-users but requires deeper changes upstream and is not straightforward with HIP/ROCm.
    • Option 2: Arch-specific libraries (e.g., libdbcsr.gfxXXX.a), where the alternatives system or explicit user selection would determine which one is used. This solves the problem but pushes complexity downstream into packaging and user configuration.
    • Option 3: Prefixed functions inside a single file, where kernels are compiled separately per architecture, functions are renamed with an arch prefix, and runtime logic in DBCSR decides which kernel to invoke. This shifts complexity upstream but could give a clean downstream experience.
  • I critically analyzed these options in the context of Debian packaging and upstream maintainability. Arch-specific .a files introduce exponential dependency complexity. The prefixed-function approach seemed like a plausible way forward, though it requires upstream buy-in.
  • After consulting with my mentor, these concerns were raised in the dbcsr repository as a discussion here11
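As a rough illustration of the per-architecture route (and of why it multiplies artifacts), separate objects could be produced along the following lines; the file names and GPU targets here are hypothetical and not DBCSR's actual build rules.

# one compile per GPU target; gfx90a and gfx1030 are example architectures only
hipcc --offload-arch=gfx90a  -c kernels.hip.cpp -o kernels.gfx90a.o
hipcc --offload-arch=gfx1030 -c kernels.hip.cpp -o kernels.gfx1030.o
# each object then lands in its own static library, e.g. libdbcsr.gfxXXX.a
ar rcs libdbcsr.gfx90a.a  kernels.gfx90a.o
ar rcs libdbcsr.gfx1030.a kernels.gfx1030.o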

Summary My work involved:
  • Enabling HIP/ROCm support in DBCSR.
  • Prototyping strategies for handling GPU multi-arch builds.
  • Evaluating the trade-offs between upstream maintainability and downstream packaging complexity.

3. gloo, hipification and source code issues One of the other packages targeted was gloo12. It is a collective communications library that implements different machine-learning communication algorithms. The code changes can be found at my gloo fork here13 (some changes have not been committed at the time of writing).

HIP/ROCm Enablement
  1. Fixing old ROCm CMake functions The upstream Gloo codebase still used old ROCm CMake functions that began with the hip_ prefix (for example, hip_add_executable). These functions have since been deprecated/removed. I updated the build system to use the modern ROCm CMake equivalents so that the package can build properly in a current ROCm environment.
  2. Debian packaging changes I modified debian/control to add a new package, libgloo-rocm, in addition to the existing packages. This allows proper separation and handling of ROCm-enabled builds in Debian (a sketch of such a stanza follows this list).
  3. First successful library build After these changes, I was able to successfully build the library. However, I ran into issues when trying to produce the shared library: there were undefined symbol errors at link time.
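For illustration only, the new stanza in debian/control might look roughly like this; the field values are my assumptions, not the actual packaging.

Package: libgloo-rocm
Architecture: amd64
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: Collective communications library (ROCm/HIP backend)
 Gloo collective communication routines built against the ROCm/HIP
 runtime, for running on AMD GPUs.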

Source Code Issue On investigating the undefined symbol errors, I identified that these came from a lack of explicit template instantiation for some Gloo classes. Since C++ templates only get compiled when explicitly used or instantiated, this resulted in missing symbols in the shared library. To solve this, I explored the source code and noticed that the HIP backend code was not natively written; it was generated from the CUDA backend using a custom hipification script maintained by the repo.
  • I experimented with modifying the HIPification process itself, trying out hipify-perl14 instead of the repository's custom Python script (see the sketch after this list).
  • I also tried tweaking the source code in places where template instantiations were missing, so that the ROCm build would correctly export the needed symbols.
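A hedged example of what the hipify-perl experiment looks like in practice; the paths are placeholders, not the actual Gloo file names.

# hipify-perl rewrites CUDA API calls and kernel launches into their HIP equivalents,
# writing the translated source to stdout
hipify-perl gloo/cuda_example.cu > gloo/hip/hip_example.cc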

Summary The issue is still unresolved. The core problem lies in how the source code is structured: the HIP backend is almost entirely auto-generated from CUDA code, and the process does not handle template instantiations correctly. Because of this, the Debian package for Gloo with ROCm support is not yet ready for release, and further source-level fixes are required to make the ROCm build reliable.

4. cp2k CP2K15 is a quantum chemistry and solid state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems.

HIP/ROCm Enablement cp2k depends on dbcsr and, hence, HIP/ROCm enablement in this package requires the dbcsr16 package to be ready first. Even though dbcsr isn't ready yet, it was worthwhile to plan how cp2k shall be built with HIP/ROCm once we have dbcsr in place. Upon doing this, it was realized that the architecture-wise libraries provided by the dbcsr package will result in a complicated building process for cp2k. No changes have been made to this package yet and more concrete steps shall be taken once the dbcsr package work is completed.

Summary The multi-arch build process for cp2k may be complicated by the one-static-library-per-architecture method used by its dependency, dbcsr.

Auxiliary Work & Activities While working on the aforementioned GSoC goals, a few other things were also done.
  1. libamdhip64-dev bug filed17 While trying to enable HIP/ROCm in dbcsr, CMakeDetermineHIPCompiler.cmake was unable to find the HIP runtime CMake package. After going through some similar issues faced by other developers earlier, it was decided to file a bug report against the libamdhip64-dev package. After discussions with Cory (my mentor) and trying the changes he suggested under the bug, the issue was resolved. It turns out I was using the wrong compiler: gcc was supposed to be used and I was using hipcc. The bug was closed since it was not due to an issue with the package. Cory suggested that I add this information to the ROCm wiki page; it is yet to be done and hopefully I will get to it soon.
  2. DebConf25 Talk After facing the multi-arch build dilemma with dbcsr (and also getting to know about the issues faced by other fellow package developers), I came to realise that this was more than a packaging, build or programming issue. GPU-packaging was facing a policy issue. Hence, I decided to cover this problem in greater detail at my DebConf25 Virtual Presentation under the Outreach Session. Shoutout to Cory for his support and Lucas Kanashiro for encouraging me to present my work!
  3. Bi-Weekly AMD ROCm Meetings Shortly after the Coding period started, Cory began the initiative of Bi-Weekly AMD ROCm Meetings18. Being a part of the meetings (participated in all but one!), seeing the work the other folks are doing and being able to discuss my own problems was a delight.
  4. (Upcoming) IndiaFOSS 2025 Talk After understanding the nuances and beauty of the Debian packaging ecosystem in these months, I decided to spread the word about Debian packaging and packaging software in general. My talk19 for the same got accepted at the upcoming IndiaFOSS 202520 conference! I hope this brings more people towards the packaging ecosystem and to the Debian developer ecosystem.

Conclusion My GSoC time was fantastic! I plan to complete the work that I started during my GSoC, and to continue beyond it. Working with Cory21 and Utkarsh22 (a fellow GSoC 25 contributor under Cory) has been a very positive experience. HIP/ROCm GPU-packaging is in a nascent stage. It is an exciting time to be in this space right now. The problems are new and never encountered before (CPU packaging isn't architecture specific!). The problems we shall face in the coming time, and our solutions to them, will set a precedent for the future.

References
1: https://summerofcode.withgoogle.com/programs/2025/projects/9s4jUjV0
2: https://tracker.debian.org/pkg/hipblas-common
3: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1105114
4: https://salsa.debian.org/rocm-team
5: https://packages.debian.org/source/sid/hipblas-common
6: https://lists.debian.org/debian-ai/2025/05/msg00088.html
7: https://www.cp2k.org/dbcsr
8: https://salsa.debian.org/Spaarsh/dbcsr/
9: https://drive.google.com/file/d/14WQuTMcI-L0lbi3zkUc9pT6RGwwVY0j1/view?usp=sharing
10: https://docs.google.com/presentation/d/1p-nkHPgg5C5jKGy7ySZ8rts5G2vNFQpQJQ8UySOWgVE
11: https://github.com/cp2k/dbcsr/discussions/933
12: https://github.com/pytorch/gloo
13: https://salsa.debian.org/Spaarsh/gloo
14: https://tracker.debian.org/pkg/hipify
15: https://www.cp2k.org/
16: https://tracker.debian.org/pkg/dbcsr
17: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1108159
18: https://lists.debian.org/debian-ai/2025/05/msg00113.html
19: https://fossunited.org/c/indiafoss/2025/cfp/dpq0b26ece
20: https://fossunited.org/indiafoss/2025
21: https://salsa.debian.org/cgmb
22: https://salsa.debian.org/utk4r-sh

29 August 2025

Noah Meyerhans: Determining Network Online Status of Dualstack Cloud VMs

When a Debian cloud VM boots, it typically runs cloud-init at various points in the boot process. Each invocation can perform certain operations based on the host's static configuration passed by the user, typically either through a well known link-local network service or an attached iso9660 drive image. Some of the cloud-init steps execute before the network comes up, and others at a couple of different points after the network is up.

I recently encountered an unexpected issue when configuring a dualstack (uses both IPv6 and legacy IPv4 networking) VM to use a custom apt server accessible only via IPv6. VM provisioning failed because it was unable to access the server in question, yet when I logged in to investigate, it was able to access the server without any problem. The boot had apparently gone smoothly right up until cloud-init's Package Update Upgrade Install module called apt-get update, which failed and broke subsequent provisioning steps. The errors reported by apt-get indicated that there was no route to the service in question, which more accurately probably meant that there was not yet a route to the service. But there was shortly after, when I investigated. This was surprising because the apt-get invocations occur in a cloud-init sequence that's explicitly ordered after the network is configured according to systemd-networkd-wait-online. Investigation eventually led to similar issues encountered in other environments reported in Debian bug #1111791, "systemd: network-online.target reached before IPv6 address is ready". The issue described in that bug is identical to mine, but the bug is tagged wontfix. The behavior is considered correct.

Why the default behavior is the correct one While it's a bit counterintuitive, the systemd-networkd behavior is correct, and it's also not something we'd want to override in the cloud images. Without explicit configuration, systemd can't accurately infer the intended network configuration of a given system. If a system is IPv6-only, systemd-networkd-wait-online will introduce unexpected delays in the boot process if it waits for IPv4, and vice-versa. If it assumes dualstack, things are even worse because it would block for a long time (approximately two minutes) in any single stack network before failing, leaving the host in a degraded state. So the most reasonable default behavior is to block until any protocol is configured. For these same reasons, we can't change the systemd-networkd-wait-online configuration in our cloud images. All of the cloud environments we support offer both single stack and dual stack networking, so we preserve systemd's default behavior. What's causing problems here is that IPv6 takes significantly longer to configure due to its more complex router solicitation + router advertisement + DHCPv6 setup process. So in this particular case, where I've got a dualstack VM that needs to access a v6-only apt server during the provisioning process, I need to find some mechanism to override systemd's default behavior and wait for IPv6 connectivity specifically.

What won't work Cloud-init offers the ability to write out arbitrary files during provisioning, so writing a drop-in for systemd-networkd-wait-online.service is trivial. Unfortunately, this doesn't give us everything we actually need. We still need to invoke systemctl daemon-reload to get systemd to actually apply the changes after we've written them, and of course we need to do that before the service actually runs. Cloud-init provides a bootcmd module that lets us run shell commands "very early in the boot process", but it runs too early: it runs before we've written out our configuration files. Similarly, it provides a runcmd module, but scripts there run towards the end of the boot process, far too late to be useful. Instead of using the bootcmd facility simply to reload systemd's config, it seemed possible that we could use it to both write the config and trigger the reload, similar to the following:
 bootcmd:
- mkdir -p /etc/systemd/system/systemd-networkd-wait-online.service.d
- echo "[Service]" > /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
- echo "ExecStart=" >> /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
- echo "ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --operational-state=routable --any --ipv6" >> /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
- systemctl daemon-reload
But even that runs too late, as we can see in the logs that systemd-networkd-wait-online.service has completed before bootcmd is executed:
root@sid-tmp2:~# journalctl --no-pager -l -u systemd-networkd-wait-online.service
Aug 29 17:02:12 sid-tmp2 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 29 17:02:13 sid-tmp2 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
root@sid-tmp2:~# grep -F 'config-bootcmd ran' /var/log/cloud-init.log
2025-08-29 17:02:14,766 - handlers.py[DEBUG]: finish: init-network/config-bootcmd: SUCCESS: config-bootcmd ran successfully and took 0.467 seconds
At this point, it's looking like there are few options left!

What eventually worked I ended up identifying two solutions to the issue, both of which involve getting some other component of the provisioning process to run systemd-networkd-wait-online.

Solution 1 The first involves getting apt-get itself to wait for IPv6 configuration. The apt.conf configuration interface allows the definition of an APT::Update::Pre-Invoke hook that's executed just before apt's update operation. By writing the following to a file in /etc/apt/apt.conf.d/, we're able to ensure that we have IPv6 connectivity before apt-get tries accessing the network. This cloud-config snippet accomplishes that:
write_files:
- path: /etc/apt/apt.conf.d/99-wait-for-ipv6
  content: |
    APT::Update::Pre-Invoke { "/usr/lib/systemd/systemd-networkd-wait-online --operational-state=routable --any --ipv6"; };
This is safe to leave in place after provisioning, because the delay will be negligible once IPv6 connectivity is established. It's only during address configuration that it'll block for a noticeable amount of time, but that's what we want. This solution isn't entirely correct, though, because it's only apt-get that's actually affected by it. Other services that start after the system is ostensibly online might only see IPv4 connectivity when they start. This seems acceptable at the moment, though.

Solution 2 The second solution is to simply invoke systemd-networkd-wait-online directly from a cloud-init bootcmd. Similar to the first solution, it's not exactly correct because the host has already reached network-online.target, but it does block enough of cloud-init that package installation happens only after it completes. The cloud-config snippet for this is:
bootcmd:
- [/usr/lib/systemd/systemd-networkd-wait-online, --operational-state=routable, --any, --ipv6]
In either case, we still want to write out a snippet to configure systemd-networkd-wait-online to wait for IPv6 connectivity for future reboots. Even though cloud-init won't necessarily run in those cases, and many cloud VMs never reboot at all, it does complete the solution. Additionally, it solves the problem for any derivative images that may be created based on the running VM's state. (At least if we can be certain that instances of those derivative images will never run in an IPv4-only network!)
write_files:
- path: /run/systemd/system/systemd-networkd-wait-online.service.d/99-ipv6-wait.conf
  content: |
    [Service]
    ExecStart=
    ExecStart=/lib/systemd/systemd-networkd-wait-online --any --operational-state=routable --ipv6

How to properly solve it One possible improvement would be for cloud-init to support a configuration key allowing the admin to specify the required protocols. Based on the presence of this key, cloud-init could reconfigure systemd-networkd-wait-online.service accordingly. Alternatively it could set the appropriate RequiredFamilyForOnline= value in the generated .network file. cloud-init supports multiple network configuration backends, so each of those would need to be updated. If using the systemd-networkd configuration renderer, this should be straightforward, but Debian uses the netplan renderer, so that tool might also need to be taught to pass such a configuration along to systemd-networkd.
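For instance, here is a minimal sketch of what such a generated .network file (or a drop-in for it) might contain; this is an illustration of the idea, not something cloud-init or netplan emits today, and the interface match is an assumption.

[Match]
Name=en*

[Link]
RequiredFamilyForOnline=ipv6

[Network]
DHCP=yes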

20 August 2025

Antoine Beaupré: Encrypting a Debian install with UKI

I originally set up a machine without any full disk encryption, then somehow regretted it quickly after. My original reasoning was that this was a "play" machine so I wanted as few restrictions on accessing the machine as possible, which meant removing passwords, mostly. I actually ended up having a user password, but disabled the lock screen. Then I started using the device to manage my photo collection, and suddenly there was a lot of "confidential" information on the device that I didn't want to store in clear text anymore.

Pre-requisites So, how does one convert an existing install from plain text to full disk encryption? One way is to back up to an external drive, re-partition everything and copy things back, but that's slow and boring. Besides, cryptsetup has a cryptsetup-reencrypt command; surely we can do this in place? Having not set aside enough room for /boot, I briefly considered an "encrypted /boot" configuration and conversion (e.g. with this guide) but remembered grub's support for this is flaky, at best, so I figured I would try something else. Here, I'm going to guide you through how I first converted from grub to systemd-boot and then to a UKI kernel, and then re-encrypted my main partition. Note that secureboot is disabled here; see further discussion below.

systemd-boot and Unified Kernel Image conversion systemd folks have been developing UKI ("unified kernel image") to ship kernels. The way this works is that the kernel, initrd, and UEFI boot stub are bundled into a single portable executable that lives in the EFI partition, as opposed to /boot. This neatly solves my problem, because I already have such a clear-text partition and won't need to re-partition my disk to convert. Debian has started some preliminary support for this. It's not the default, but I found this guide from Vasudeva Kamath which was pretty complete. Since the guide assumes some previous configuration, I had to adapt it to my case. Here's how I did the conversion to both systemd-boot and UKI, all at once. I could have perhaps done it one at a time, but doing both at once works fine. Before you start, make sure secureboot is disabled; see the discussion below.
  1. install systemd tools:
    apt install systemd-ukify systemd-boot
    
  2. Configure systemd-ukify, in /etc/kernel/install.conf:
    layout=uki
    initrd_generator=dracut
    uki_generator=ukify
    
    TODO: it doesn't look like this generates an initrd with dracut; do we care?
  3. Configure the kernel boot arguments with the following in /etc/kernel/uki.conf:
    [UKI]
    Cmdline=@/etc/kernel/cmdline
    
    The /etc/kernel/cmdline file doesn't actually exist here, and that's fine. Defaults are okay, as the image gets generated from your current /proc/cmdline. Check your /etc/default/grub and /proc/cmdline if you are unsure. You'll see the generated arguments in bootctl list below.
  4. Build the image:
    dpkg-reconfigure linux-image-$(uname -r)
    
  5. Check the boot options:
    bootctl list
    
    Look for a Type #2 (.efi) entry for the kernel.
  6. Reboot:
    reboot
    
You can tell you have booted with systemd-boot because (a) you won't see grub and (b) the /proc/cmdline will reflect the configuration listed in bootctl list. In my case, a systemd.machine_id variable is set there, and not in grub (compare with /boot/grub/grub.cfg). By default, the systemd-boot loader just boots, without a menu. You can force the menu to show up by un-commenting the timeout line in /boot/efi/loader/loader.conf, by hitting keys during boot (e.g. hitting "space" repeatedly), or by calling:
systemctl reboot --boot-loader-menu=0
See the systemd-boot(7) manual for details on that. I did not go through the secureboot process, presumably because I had already disabled secureboot. This is trickier: because one needs a "special key" to sign the UKI image, one would need the collaboration of debian.org to get this working out of the box with the keys shipped onboard most computers. In other words, if you want to make this work with secureboot enabled on your computer, you'll need to figure out how to sign the generated images before rebooting here, because otherwise you will break your computer. Otherwise, follow the following guides:
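As a rough, untested sketch of what that signing step can look like if you enroll your own keys, using the sbctl tool (the UKI path below is illustrative and depends on where your ESP is mounted):

# generate your own Secure Boot keys and enroll them (keeping Microsoft's keys too)
sbctl create-keys
sbctl enroll-keys --microsoft
# sign the generated UKI; -s saves it to sbctl's database so it gets re-signed on updates
sbctl sign -s /boot/efi/EFI/Linux/debian-*.efi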

Re-encrypting root filesystem Now that we have a way to boot an encrypted filesystem, we can switch to LUKS for our filesystem. Note that you can probably follow this guide if, somehow, you managed to make grub work with your LUKS setup, although as this guide shows, you'd need to downgrade the cryptographic algorithms, which seems like a bad tradeoff. We're using cryptsetup reencrypt for this, which, amazingly, supports re-encrypting devices on the fly. The trick is that it needs free space at the end of the partition for the LUKS header (which, I guess, makes it a footer), so we need to resize the filesystem to leave room for that, which is the trickiest bit. This is a potentially destructive operation: be sure your backups are up to date, or be ready to lose all data on the device. We assume 512-byte sectors here; check your sector size with fdisk -l and adjust accordingly.
  1. Before you perform the procedure, make sure requirements are installed:
    apt install cryptsetup systemd-cryptsetup cryptsetup-initramfs
    
    Note that this requires network access, of course.
  2. Reboot in a live image, I like GRML but any Debian live image will work, possibly including the installer
  3. First, calculate how many sectors to free up for the LUKS header
    qalc> 32Mibyte / ( 512 byte )
      (32 mebibytes) / (512 bytes) = 65536
    
  4. Find the sector sizes of the Linux partitions:
    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    
    For example, here's an example with a /boot and / filesystem:
    $ sudo fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    /dev/nvme0n1p2 999424
    /dev/nvme0n1p3 3904979087
    
  5. Subtract the header sectors (from step 3) from the partition's sector count (from step 4):
    qalc> set precision 100
    qalc> 3904979087 - 65536
    
    Or, last step and this one, in one line:
    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 - 65536 }'
    
  6. Recheck filesystem:
    e2fsck -f /dev/nvme0n1p2
    
  7. Resize filesystem:
    resize2fs /dev/nvme0n1p2 $(fdisk -l /dev/nvme0n1 | awk '/nvme0n1p2/ { print $4 - 65536 }')s
    
    Notice the trailing s here: it makes resize2fs interpret the number as 512-byte sectors, as opposed to the default (4k blocks).
  8. Re-encrypt filesystem:
    cryptsetup reencrypt --encrypt /dev/nvme0n1p2 --reduce-device-size=32M
    
    This is it! This is the most important step! Make sure your laptop is plugged in and try not to interrupt it. This can, apparently, be resumed without problem, but I'd hate to show you how. This will show progress information like:
    Progress:   2.4% ETA 23m45s,      53GiB written, speed   1.3 GiB/s
    
    Wait until the ETA has passed.
  9. Open and mount the encrypted filesystem and mount the EFI system partition (ESP):
    cryptsetup open /dev/nvme0n1p2 crypt
    mount /dev/mapper/crypt /mnt
    mount /dev/nvme0n1p1 /mnt/boot/efi
    
    If this fails, now is the time to consider restoring from backups.
  10. Enter the chroot
    for fs in proc sys dev ; do
      mount --bind /$fs /mnt/$fs
    done
    chroot /mnt
    
    Pro tip: this can be done in one step in GRML with:
    grml-chroot /mnt bash
    
  11. Generate a crypttab:
    echo crypt_dev_nvme0n1p2 UUID=$(blkid -o value -s UUID /dev/nvme0n1p2) none luks,discard >> /etc/crypttab
    
  12. Adjust root filesystem in /etc/fstab, make sure you have a line like this:
    /dev/mapper/crypt_dev_nvme0n1p2 /               ext4    errors=remount-ro 0       1
    
    If you were already using a UUID entry for this, there's nothing to change!
  13. Configure the root filesystem in the initrd:
    echo root=/dev/mapper/crypt_dev_nvme0n1p2 > /etc/kernel/cmdline
    
  14. Regenerate UKI:
    dpkg-reconfigure linux-image-$(uname -r)
    
    Be careful here! systemd-boot inherits the command line from the system where it is generated, so this will possibly feature some unsupported commands from your boot environment. In my case GRML had a couple of those, which broke the boot. It's still possible to work around this issue by tweaking the arguments at boot time, that said.
  15. Exit chroot and reboot
    exit
    reboot
    
Some of the ideas in this section were taken from this guide, but it was mostly rewritten to simplify the work. My guide also avoids the grub hacks and a specific initrd system (as that guide uses initramfs-tools and grub, while I, above, switched to dracut and systemd-boot). RHEL also has a similar guide, perhaps even better. Somehow I have made this system without LVM at all, which simplifies things a bit (as I don't need to also resize the physical volume/volume groups), but if you have LVM, you need to tweak this to also resize the LVM bits. The RHEL guide has some information about this.

30 July 2025

Utkarsh Gupta: FOSS Activites in July 2025

Here s my 70th monthly but brief update about the activities I ve done in the F/L/OSS world.

Debian
This was my 79th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas '19! \o/ Debian was in freeze throughout, so whilst I didn't do many uploads, there's a bunch of other things I did:
  • Attended DebConf25 in Brest, France.
    • Led the bursary BoF and discussions.
    • Participated in other sessions, especially around the FTP masters.
    • I've started to look at things with my trainee hat on.
    • Participated in the Debian Security Tracker sprints during DebCamp. More on that below.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu
This was my 54th month of actively contributing to Ubuntu. I joined Canonical to work on Ubuntu full-time back in February 2021. Whilst I can't give a full, detailed list of things I did (there's so much and some of it might not be public yet!), here's a quick TL;DR of what I did:
  • Released Questing snapshot 3! \o/
  • EOL'd Oracular. o/
  • Participated in the mid-cycle sprints.
  • Got a recognition award for leading 24.04.2 LTS release and leading the Release Management team.
  • Preparing for the 24.04.3 LTS release early next month.

Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success. And Debian Extended LTS (ELTS) is its sister project, extending support to the stretch and jessie releases (+2 years after LTS support). This was my 70th month as a Debian LTS and 57th month as a Debian ELTS paid contributor.
I only worked for 15.00 hours for LTS and 5.00 hours for ELTS and did the following things:
  • [LTS] Released DLA 4263-1 for ruby-graphql.
    • Coordinated with upstream due to lack of clarity on 1.11.4 being affected & not having a clear reproducer.
    • As 1.11.4 was still partially vulnerable and the backport was non-trivial, it was more convenient to bump the upstream version to 1.11.12 instead, fixing:
    • CVE-2025-27407: a remote code execution vulnerability.
    • Salsa repository: https://salsa.debian.org/lts-team/packages/ruby-graphql.
    • Coordinated with the Security team for a p-u fix or a DSA.
  • [E/LTS] Frontdesk duty from 28th July to 04th August.
  • [LTS] Attended the monthly LTS meeting on IRC. Summary here.

Debian Security Tracker sprint 2025 Thanks to the LTS team for also organizing a security tracker sprint during DebCamp25. I attended the sprint and spent 10 hours working on the following tasks: That's all. A quick shoutout to Roberto for organizing the sprints remotely and being awake at odd hours. <3
Until next time.
:wq for today.

26 July 2025

Birger Schacht: My DebConf 25 review

DebConf 25 happened between 14th July and 19th July and I was there. It was my first DebConf (the big one; I was at a Mini DebConf in Hamburg a couple of years ago) and it was interesting. DebConf 25 happened at a university campus on the outskirts of Brest and I was rather reluctant to go at first (EuroPython 25 was happening at the same time in Prague), but I decided to use the chance of DebConf happening in Europe, reachable by train from Vienna. We took the night train to Paris, then found our way through the maze that is the Paris underground system and then got to Brest with the TGV. On our way to the conference site we made a detour to a supermarket, which wasn't that easy because it was a national holiday in France and most of the shops were closed. But we weren't sure about the food situation at DebConf and we also wanted to get some beer.

At the conference we were greeted by very friendly people at the badge station and the front desk and got our badges, swag and, most importantly, the keys to pretty nice rooms on the campus. Our rooms had a small private bathroom with a toilet and a shower, and between the two rooms was a shared kitchen with a refrigerator and a microwave. All in all, the accommodation was simple but provided everything we needed and especially a space to have some privacy. During the next days I watched a lot of talks, met new people, caught up with old friends and also had a nice time with my travel buddies. There was a beach near the campus which I used nearly every day. It was mostly sunny except for the last day of the conference, which apparently was not common for the Brest area, so we got lucky regarding the weather.

[Photo: landscape view of the sea at Dellec beach]

Given that we only arrived in the evening of the first day of DebConf, I missed the talk "When Free Software Communities Unite: Tails, Tor, and the Fight for Privacy" (recording), but I watched it on the way home and it was also covered by LWN. On Tuesday I started the day by visiting a talk about tag2upload (recording). The same day there was also an academic track and I watched the talk titled "Integrating Knowledge Graphs into the Debian Ecosystem" (recording), which presented a property graph showing relationships between various entities like packages, maintainers or bugs (there is a repository with parts of a paper, but not much other information). The speaker also mentioned the graphcast framework and the ontocast framework, which sound interesting - we might have use for something like this at $dayjob. In the afternoon there was a talk about the ArchWiki (recording) which gave a comprehensive insight into how the ArchWiki and the community behind it work. Right after that was a Debian Wiki BoF. There are various technical limitations with the current wiki software and there are not enough helping hands to maintain the service and do content curation. But the BoF had some nice results: there is now a new debian-wiki mailing list, an IRC channel, a MediaWiki installation has been set up during DebConf, there are efforts to migrate the data and, most importantly, a handful of people who want to maintain the service and organize the content of the wiki. I think the input from the ArchWiki folks gave some ideas for how that team could operate.

[Photo: tag on the wall at Dellec beach]

Wednesday was the day of the day trip. I did not sign up for any of the trips and used the time to try out tag2upload, uploaded the latest labwc release to experimental and spent the rest of the day at the beach.
Other noteworthy sessions I attended were the "Don't fear the TPM" talk (recording), which showed me a lot of stuff to try out; the session about lintian-ng (no recording), which is an experimental approach to make lintian faster; the review of the first year of wcurl's existence (no recording yet); and the summary of Rust packaging in Debian (no recording yet). In between the sessions I started working on packaging wlr-sunclock (#1109230).

What did not work Vegan food. I might be spoiled by other conferences. Both at EuroPython last year (definitely bigger, a lot more commercial) and at PyCon CZ 23 (similar in size, a lot more DIY) there was catering with explicitly vegan options. As I've mentioned in the beginning, we went to a supermarket before we went to the conference, and we had to go there one more time during the conference. I think there was a mixture of a total lack of awareness and a LOT of miscommunication.

The breakfasts at the conference consisted of pastries and baguettes - I asked on the first day what the vegan options were and the answer was "I don't know, maybe the baguette?", and we were asked to only take as much baguette as the people who also got pastries. The lunch was prepared by the Restaurant associatif de Kernévent, which is a canteen at the university campus. When we asked if there was vegan food, the people there said that there was only a vegetarian option, so we only ate salad. Only later we heard via word of mouth that one has to explicitly ask for a vegan meal, which was apparently prepared separately, and you had to find the right person who knows about it (I think that's very Debian-like). But even then a person once got a vegetarian option offered as vegan food.

One problem was also the missing / confusing labeling of the food. At the conference dinner there was apparently vegan food, but it was mixed with all the other food. There were some labels, but with hundreds of hungry people around and caterers removing empty plates and dropping off plates with other stuff, everything gets mixed up. In the end we ate bread soaked in olive oil, until the olive oil got taken away by the catering people literally while we were dipping the bread in it. And when these issues were raised, some of the reactions can be summarized as "You're holding it wrong", which was really frustrating. The dinners at the conference hall were similar. At some point I had the impression that vegan and vegetarian were simply seen as the same thing.

[Photo: dinner menu at the conference]

If the menus were written like a debian/copyright file, they would probably have looked like this:
Food: *
Diet: Vegan or Vegetarian
But the thing is that Vegan and Vegetarian cannot be mixed. It's similar to incompatible licenses: once you mix vegan food with vegetarian food, it's not vegan anymore. Don't get me wrong, I know it's hard to organize food for hundreds of people. But if you don't know what it means to provide a vegan option, just communicate that fact so people can look for alternatives in advance. During the week some of the vegan people shared food, which was really nice, and there were also a lot of non-vegan people who tried to help, organized extra food or simply listened to the hangry rants. Thanks for that!

Paris Saturday was the last day of DebConf and it was a rainy day. On Sunday morning we took the TGV back to Paris and then stayed there for one night, because the next night train back to Vienna was on Monday. Luckily the weather was better in Paris. The first thing we did was to look up a vegan burger place. In the evening we strolled along the Seine and had a couple of beers at the Jardins du Trocadéro. On Monday the rain also arrived in Paris and we mostly went from one cafe to the next, but also managed to visit Notre Dame.

Conclusio The next DebConf will be in Argentina and I think it's likely that DebConf 27 will also not happen anywhere within trainvelling distance. But even if it did, I think the Mini DebConfs are more my style of happening (there is one planned in Hamburg next spring, and a couple of days ago I learned that there will be a Back to the Future musical show in Hamburg during that time). Nonetheless I had a nice time and I stumbled over some projects I might get more involved in. Thanks also to my travel buddies who put up with me.

10 July 2025

Tianon Gravi: Yubi Whati? (YubiKeys, ECDSA, and X.509)

Off-and-on over the last several weeks, I've been spending time trying to learn/understand YubiKeys better, especially from the perspective of ECDSA and signing. I had a good mental model for how "slots" work (canonically referenced by their hexadecimal names such as 9C), but found that it had a gap related to "objects"; while closing that, I was annoyed that the main reference table for this gap lives primarily in either a PDF or inside several implementations, so I figured I should create the reference I want to see in the world, but that it would also be useful to write down some of my understanding for my own (and maybe others') future reference.

So, to that end, I'm going to start with a bit of background information, with the heavy caveat that this only applies to "PIV" ("FIPS 201") usage of YubiKeys, and that I only actually care about ECDSA, although I've been reassured that it's the same for at least RSA (anything outside this is firmly Here Be Not Tianon; "gl hf dd"). (Incidentally, learning all this helped me actually appreciate the simplicity of cloud-based KMS solutions, which was an unexpected side effect.)

At a really high level, ECDSA is like many other (asymmetric) cryptographic solutions: you've got a public key and a private key, the private key can be used to "sign" data (tiny amounts of data, in fact, like P-256 can only reasonably sign 256 bits of data, which is where cryptographic hashes like SHA256 come in as secure analogues for larger data in small bit sizes), and the public key can then be used to verify that the data was indeed signed by the private key, and only someone with the private key could've done so. There's some complex math and RNGs involved, but none of that's actually relevant to this post, so find that information elsewhere.

Unfortunately, this is where things go off the rails: PIV is X.509 ("x509") heavy, and there's no X.509 in the naïve view of my use case. In a YubiKey (or any other PIV-signing-supporting smart card? do they actually have competitors in this specific niche?), a given "slot" can hold one single private key. There are ~24 slots which can hold a private key and be used for signing, although "Slot 9c" is officially designated as the "Digital Signature" slot and is encouraged for signing purposes.

One of the biggest gotchas is that with pure-PIV (and older YubiKey firmware) the public key for a given slot is only available at the time the key is generated, and the whole point of the device in the first place is that the private key is never, ever available from it (all cryptographic operations happen inside the device), so if you don't save that public key when you first ask the device to generate a private key in a particular slot, the public key is lost forever (asterisk).
$ # generate a new ECDSA P-256 key in "slot 9c" ("Digital Signature")
$ # WARNING: THIS WILL GLEEFULLY WIPE SLOT 9C WITHOUT PROMPTING
$ yubico-piv-tool --slot 9c --algorithm ECCP256 --action generate
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEtGoWRGyjjUlJFXpu8BL6Rnx8jjKR
5+Mzl2Vepgor+k7N9q7ppOtSMWefjFVR0SEPmXqXINNsCi6LpLtNEigIRg==
-----END PUBLIC KEY-----
Successfully generated a new private key.
$ # this is the only time/place we (officially) get this public key
With that background, now let's get to the second aspect of "slots" and how X.509 fits. For every aforementioned slot, there is a corresponding "object" (read: place to store arbitrary data) which corresponds only by convention. For all these "key" slots the (again, by convention) corresponding "object" is explicitly supposed to be an X.509 certificate (see also the PDF reference linked above). It turns out this is a useful and topical place to store that public key we need to keep handy! It's also an interesting place to shove additional details about what the key in a given slot is being used for, if that's your thing. Converting the raw public key into a (likely self-signed) X.509 certificate is an exercise for the reader (a rough sketch follows the table below), but if you want to follow the conventions, you need some way to convert a given "slot" to the corresponding "object", and that is the lookup table I wish existed in more forms. So, without further ado, here is the anti-climax:
Slot Object Description
0x9A 0x5FC105 X.509 Certificate for PIV Authentication
0x9E 0x5FC101 X.509 Certificate for Card Authentication
0x9C 0x5FC10A X.509 Certificate for Digital Signature
0x9D 0x5FC10B X.509 Certificate for Key Management
0x82 0x5FC10D Retired X.509 Certificate for Key Management 1
0x83 0x5FC10E Retired X.509 Certificate for Key Management 2
0x84 0x5FC10F Retired X.509 Certificate for Key Management 3
0x85 0x5FC110 Retired X.509 Certificate for Key Management 4
0x86 0x5FC111 Retired X.509 Certificate for Key Management 5
0x87 0x5FC112 Retired X.509 Certificate for Key Management 6
0x88 0x5FC113 Retired X.509 Certificate for Key Management 7
0x89 0x5FC114 Retired X.509 Certificate for Key Management 8
0x8A 0x5FC115 Retired X.509 Certificate for Key Management 9
0x8B 0x5FC116 Retired X.509 Certificate for Key Management 10
0x8C 0x5FC117 Retired X.509 Certificate for Key Management 11
0x8D 0x5FC118 Retired X.509 Certificate for Key Management 12
0x8E 0x5FC119 Retired X.509 Certificate for Key Management 13
0x8F 0x5FC11A Retired X.509 Certificate for Key Management 14
0x90 0x5FC11B Retired X.509 Certificate for Key Management 15
0x91 0x5FC11C Retired X.509 Certificate for Key Management 16
0x92 0x5FC11D Retired X.509 Certificate for Key Management 17
0x93 0x5FC11E Retired X.509 Certificate for Key Management 18
0x94 0x5FC11F Retired X.509 Certificate for Key Management 19
0x95 0x5FC120 Retired X.509 Certificate for Key Management 20
See also "piv-objects.json" for a machine-readable copy of this data. (Major thanks to paultag and jon gzip johnson for helping me learn and generally putting up with me, but especially dealing with my live-stream-of-thoughts while I stumble through the dark. )

11 June 2025

Freexian Collaborators: Monthly report about Debian Long Term Support, May 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In May, 22 contributors have been paid to work on Debian LTS, their reports are available:
  • Abhijith PA did 8.0h (out of 0.0h assigned and 8.0h from previous period).
  • Adrian Bunk did 26.0h (out of 26.0h assigned).
  • Andreas Henriksson did 1.0h (out of 15.0h assigned and 3.0h from previous period), thus carrying over 17.0h to the next month.
  • Andrej Shadura did 3.0h (out of 10.0h assigned), thus carrying over 7.0h to the next month.
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 8.0h (out of 20.0h assigned and 4.0h from previous period), thus carrying over 16.0h to the next month.
  • Carlos Henrique Lima Melara did 12.0h (out of 11.0h assigned and 1.0h from previous period).
  • Chris Lamb did 15.5h (out of 0.0h assigned and 15.5h from previous period).
  • Daniel Leidert did 25.0h (out of 26.0h assigned), thus carrying over 1.0h to the next month.
  • Emilio Pozuelo Monfort did 21.0h (out of 16.75h assigned and 11.0h from previous period), thus carrying over 6.75h to the next month.
  • Guilhem Moulin did 11.5h (out of 8.5h assigned and 6.5h from previous period), thus carrying over 3.5h to the next month.
  • Jochen Sprickerhof did 3.5h (out of 8.75h assigned and 17.5h from previous period), thus carrying over 22.75h to the next month.
  • Lee Garrett did 26.0h (out of 12.75h assigned and 13.25h from previous period).
  • Lucas Kanashiro did 20.0h (out of 18.0h assigned and 2.0h from previous period).
  • Markus Koschany did 20.0h (out of 26.25h assigned), thus carrying over 6.25h to the next month.
  • Roberto C. Sánchez did 20.75h (out of 24.0h assigned), thus carrying over 3.25h to the next month.
  • Santiago Ruano Rincón did 15.0h (out of 12.5h assigned and 2.5h from previous period).
  • Sean Whitton did 6.25h (out of 6.0h assigned and 2.0h from previous period), thus carrying over 1.75h to the next month.
  • Sylvain Beucler did 26.25h (out of 26.25h assigned).
  • Thorsten Alteholz did 15.0h (out of 15.0h assigned).
  • Tobias Frost did 12.0h (out of 12.0h assigned).
  • Utkarsh Gupta did 1.0h (out of 15.0h assigned), thus carrying over 14.0h to the next month.

Evolution of the situation In May, we released 54 DLAs. The LTS Team was particularly active in May, publishing a higher than normal number of advisories, as well as helping with a wide range of updates to packages in stable and unstable, plus some other interesting work. We are also pleased to welcome several updates from contributors outside the regular team.
  • Notable security updates:
    • containerd, prepared by Andreas Henriksson, fixes a vulnerability that could cause containers launched as non-root users to be run as root
    • libapache2-mod-auth-openidc, prepared by Moritz Schlarb, fixes a vulnerability which could allow an attacker to crash an Apache web server with libapache2-mod-auth-openidc installed
    • request-tracker4, prepared by Andrew Ruthven, fixes multiple vulnerabilities which could result in information disclosure, cross-site scripting and use of weak encryption for S/MIME emails
    • postgresql-13, prepared by Bastien Roucariès, fixes an application crash vulnerability that could affect the server or applications using libpq
    • dropbear, prepared by Guilhem Moulin, fixes a vulnerability which could potentially result in execution of arbitrary shell commands
    • openjdk-17, openjdk-11, prepared by Thorsten Glaser, fixes several vulnerabilities, which include denial of service, information disclosure or bypass of sandbox restrictions
    • glibc, prepared by Sean Whitton, fixes a privilege escalation vulnerability
  • Notable non-security updates:
    • wireless-regdb, prepared by Ben Hutchings, updates information reflecting changes to radio regulations in many countries
This month's contributions from outside the regular team include the libapache2-mod-auth-openidc update mentioned above, prepared by Moritz Schlarb (the maintainer of the package); the update of request-tracker4, prepared by Andrew Ruthven (the maintainer of the package); and the updates of openjdk-17 and openjdk-11, also noted above, prepared by Thorsten Glaser. Additionally, LTS Team members contributed stable updates of the following packages:
  • rubygems and yelp/yelp-xsl, prepared by Lucas Kanashiro
  • simplesamlphp, prepared by Tobias Frost
  • libbson-xs-perl, prepared by Roberto C. Sánchez
  • fossil, prepared by Sylvain Beucler
  • setuptools and mydumper, prepared by Lee Garrett
  • redis and webpy, prepared by Adrian Bunk
  • xrdp, prepared by Abhijith PA
  • tcpdf, prepared by Santiago Ruano Rincón
  • kmail-account-wizard, prepared by Thorsten Alteholz
Other contributions were also made by LTS Team members to packages in unstable:
  • proftpd-dfsg DEP-8 tests (autopkgtests) were provided to the maintainer, prepared by Lucas Kanashiro
  • a regular upload of libsoup2.4, prepared by Sean Whitton
  • a regular upload of setuptools, prepared by Lee Garrett
Freexian, the entity behind the management of the Debian LTS project, has been working for some time now on the development of an advanced CI platform for Debian-based distributions, called Debusine. Recently, Debusine has reached a level of feature implementation that makes it very usable. Some members of the LTS Team have been using Debusine informally, and during May LTS coordinator Santiago Ruano Rincón has made a call for the team to help with testing of Debusine, and to help evaluate its suitability for the LTS Team to eventually begin using as the primary mechanism for uploading packages into Debian. Team members who have started using Debusine are providing valuable feedback to the Debusine development team, thus helping to improve the platform for all users. Actually, a number of updates, for both bullseye and bookworm, made during the month of May were handled using Debusine, e.g. rubygems's DLA-4163-1. By the way, if you are a Debian Developer, you can easily test Debusine following the instructions found at https://wiki.debian.org/DebusineDebianNet. DebConf, the annual Debian Conference, is coming up in July and, as is customary each year, the week preceding the conference will feature an event called DebCamp. The DebCamp week provides an opportunity for teams and other interested groups/individuals to meet together in person in the same venue as the conference itself, with the purpose of doing focused work, often called "sprints". LTS coordinator Roberto C. Sánchez has announced that the LTS Team is planning to hold a sprint primarily focused on the Debian security tracker and the associated tooling used by the LTS Team and the Debian Security Team.

Thanks to our sponsors Sponsors that joined recently are in bold.

19 May 2025

Melissa Wen: A Look at the Latest Linux KMS Color API Developments on AMD and Intel

This week, I reviewed the last available version of the Linux KMS Color API. Specifically, I explored the proposed API by Harry Wentland and Alex Hung (AMD), their implementation for the AMD display driver and tracked the parallel efforts of Uma Shankar and Chaitanya Kumar Borah (Intel) in bringing this plane color management to life. With this API in place, compositors will be able to provide better HDR support and advanced color management for Linux users. To get a hands-on feel for the API's potential, I developed a fork of drm_info compatible with the new color properties. This allowed me to visualize the display hardware color management capabilities being exposed. If you're curious and want to peek behind the curtain, you can find my exploratory work on the drm_info/kms_color branch. The README there will guide you through the simple compilation and installation process. Note: You will need to update libdrm to match the proposed API. You can find an updated version in my personal repository here. To avoid potential conflicts with your official libdrm installation, you can compile and install it in a local directory. Then, use the following command: export LD_LIBRARY_PATH="/usr/local/lib/" In this post, I invite you to familiarize yourself with the new API that is about to be released. You can start doing as I did below: just deploy a custom kernel with the necessary patches and visualize the interface with the help of drm_info. Or, better yet, if you are a userspace developer, you can start developing user cases by experimenting with it. The more eyes the better.
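As a rough sketch of that local-prefix workflow (the repository URLs below are placeholders for the patched libdrm and the drm_info fork mentioned above, both assumed to build with their usual meson setup; adjust paths as needed):
# Build the patched libdrm into /usr/local without touching the distro packages
git clone <patched-libdrm-repo> libdrm && cd libdrm
meson setup build --prefix=/usr/local --libdir=lib
ninja -C build && sudo ninja -C build install
cd ..
# Build the drm_info fork against it and run it
git clone <drm_info-fork-repo> drm_info && cd drm_info
git checkout kms_color          # branch name as referenced above (assumed)
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig meson setup build
ninja -C build
export LD_LIBRARY_PATH="/usr/local/lib/"
./build/drm_info                # binary path may differ depending on the build setup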

KMS Color API on AMD The great news is that AMD's driver implementation for plane color operations is being developed right alongside their Linux KMS Color API proposal, so it's easy to apply to your kernel branch and check it out. You can find details of their progress in AMD's series. I just needed to compile a custom kernel with this series applied, intentionally leaving out the AMD_PRIVATE_COLOR flag. The AMD_PRIVATE_COLOR flag guards driver-specific color plane properties, which experimentally expose hardware capabilities while we don't have the generic KMS plane color management interface available. If you don't know or don't remember the details of AMD driver-specific color properties, you can learn more about this work in my blog posts [1] [2] [3]. As driver-specific color properties and KMS colorops are redundant, the driver only advertises one of them, as you can see in AMD workaround patch 24. So, with the custom kernel image ready, I installed it on a system powered by AMD DCN3 hardware (i.e. my Steam Deck). Using my custom drm_info, I could clearly see the Plane Color Pipeline with eight color operations as below:
 "COLOR_PIPELINE" (atomic): enum  Bypass, Color Pipeline 258  = Bypass
     Bypass
     Color Pipeline 258
         Color Operation 258
             "TYPE" (immutable): enum  1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT  = 1D Curve
             "BYPASS" (atomic): range [0, 1] = 1
             "CURVE_1D_TYPE" (atomic): enum  sRGB EOTF, PQ 125 EOTF, BT.2020 Inverse OETF  = sRGB EOTF
         Color Operation 263
             "TYPE" (immutable): enum  1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT  = Multiplier
             "BYPASS" (atomic): range [0, 1] = 1
             "MULTIPLIER" (atomic): range [0, UINT64_MAX] = 0
         Color Operation 268
             "TYPE" (immutable): enum  1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT  = 3x4 Matrix
             "BYPASS" (atomic): range [0, 1] = 1
             "DATA" (atomic): blob = 0
         Color Operation 273
             "TYPE" (immutable): enum  1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT  = 1D Curve
             "BYPASS" (atomic): range [0, 1] = 1
             "CURVE_1D_TYPE" (atomic): enum  sRGB Inverse EOTF, PQ 125 Inverse EOTF, BT.2020 OETF  = sRGB Inverse EOTF
         Color Operation 278
             "TYPE" (immutable): enum  1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT  = 1D LUT
             "BYPASS" (atomic): range [0, 1] = 1
             "SIZE" (atomic, immutable): range [0, UINT32_MAX] = 4096
             "LUT1D_INTERPOLATION" (immutable): enum  Linear  = Linear
             "DATA" (atomic): blob = 0
         Color Operation 285
             "TYPE" (immutable): enum  1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT  = 3D LUT
             "BYPASS" (atomic): range [0, 1] = 1
             "SIZE" (atomic, immutable): range [0, UINT32_MAX] = 17
             "LUT3D_INTERPOLATION" (immutable): enum  Tetrahedral  = Tetrahedral
             "DATA" (atomic): blob = 0
         Color Operation 292
             "TYPE" (immutable): enum  1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT  = 1D Curve
             "BYPASS" (atomic): range [0, 1] = 1
             "CURVE_1D_TYPE" (atomic): enum  sRGB EOTF, PQ 125 EOTF, BT.2020 Inverse OETF  = sRGB EOTF
         Color Operation 297
             "TYPE" (immutable): enum  1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT  = 1D LUT
             "BYPASS" (atomic): range [0, 1] = 1
             "SIZE" (atomic, immutable): range [0, UINT32_MAX] = 4096
             "LUT1D_INTERPOLATION" (immutable): enum  Linear  = Linear
             "DATA" (atomic): blob = 0
Note that Gamescope is currently using AMD driver-specific color properties implemented by me, Autumn Ashton and Harry Wentland. It doesn't use this KMS Color API, and therefore COLOR_PIPELINE is set to Bypass. Once the API is accepted upstream, all users of the driver-specific API (including Gamescope) should switch to the KMS generic API, as this will be the official plane color management interface of the Linux kernel.

KMS Color API on Intel On the Intel side, the driver implementation available upstream was built upon an earlier iteration of the API. This meant I had to apply a few tweaks to bring it in line with the latest specifications. You can explore their latest work here. For a more simplified handling, combining the V9 of the Linux Color API, Intel's contributions, and my necessary adjustments, check out my dedicated branch. I then compiled a kernel from this integrated branch and deployed it on a system featuring Intel TigerLake GT2 graphics. Running my custom drm_info revealed a Plane Color Pipeline with three color operations as follows:
 "COLOR_PIPELINE" (atomic): enum  Bypass, Color Pipeline 480  = Bypass
     Bypass
     Color Pipeline 480
         Color Operation 480
             "TYPE" (immutable): enum  1D Curve, 1D LUT, 3x4 Matrix, 1D LUT Mult Seg, 3x3 Matrix, Multiplier, 3D LUT  = 1D LUT Mult Seg
             "BYPASS" (atomic): range [0, 1] = 1
             "HW_CAPS" (atomic, immutable): blob = 484
             "DATA" (atomic): blob = 0
         Color Operation 487
             "TYPE" (immutable): enum  1D Curve, 1D LUT, 3x4 Matrix, 1D LUT Mult Seg, 3x3 Matrix, Multiplier, 3D LUT  = 3x3 Matrix
             "BYPASS" (atomic): range [0, 1] = 1
             "DATA" (atomic): blob = 0
         Color Operation 492
             "TYPE" (immutable): enum  1D Curve, 1D LUT, 3x4 Matrix, 1D LUT Mult Seg, 3x3 Matrix, Multiplier, 3D LUT  = 1D LUT Mult Seg
             "BYPASS" (atomic): range [0, 1] = 1
             "HW_CAPS" (atomic, immutable): blob = 496
             "DATA" (atomic): blob = 0
Observe that Intel's approach introduces additional properties like HW_CAPS at the color operation level, along with two new color operation types: 1D LUT with Multiple Segments and 3x3 Matrix. It's important to remember that this implementation is based on an earlier stage of the KMS Color API and is awaiting review.

A Shout-Out to Those Who Made This Happen I'm impressed by the solid implementation and clear direction of the V9 of the KMS Color API. It aligns with the many insightful discussions we've had over the past years. A huge thank you to Harry Wentland and Alex Hung for their dedication in bringing this to fruition! Beyond their efforts, I deeply appreciate Uma and Chaitanya's commitment to updating Intel's driver implementation to align with the freshest version of the KMS Color API. The collaborative spirit of the AMD and Intel developers in sharing their color pipeline work upstream is invaluable. We're now gaining a much clearer picture of the color capabilities embedded in modern display hardware, all thanks to their hard work, comprehensive documentation, and engaging discussions. Finally, thanks to all the userspace developers, color science experts, and kernel developers from various vendors who actively participate in the upstream discussions, meetings, workshops, each iteration of this API and the crucial code review process. I'm happy to be part of the final stages of this long kernel journey, but I know that when it comes to colors, one step is completed for new challenges to be unlocked. Looking forward to meeting you in this year's Linux Display Next hackfest, organized by AMD in Toronto, to further discuss HDR, advanced color management, and other display trends.

12 May 2025

Freexian Collaborators: Debian Contributions: DebConf 25 preparations, PyPA tools updates, Removing libcrypt-dev from build-essential and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-04 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf 25 Preparations, by Stefano Rivera and Santiago Ruano Rincón DebConf 25 preparations continue. In April, the bursary team reviewed and ranked bursary applications. Santiago Ruano Rincón examined the current state of the conference's finances, to see if we could allocate any more money to bursaries. Stefano Rivera supported the bursary team's work with infrastructure and advice and added some metrics to assist Santiago's budget review. Santiago was also involved in different parts of the organization, including Content team matters (reviewing the first proposals and preparing public information about the new Academic Track), as well as coordinating different aspects of the Day trip activities and the Conference Dinner.

PyPA tools updates, by Stefano Rivera Around the beginning of the freeze (in retrospect, definitely too late) Stefano looked at updating setuptools in the archive to 78.1.0. This brings support for more comprehensive license expressions (PEP-639), that people are expected to adopt soon upstream. While the reverse-autopkgtests all passed, it all came with some unexpected complications, and turned into a mini-transition. The new setuptools broke shebangs for scripts (pypa/setuptools#4952). It also required a bump of wheel to 0.46 and wheel 0.46 now has a dependency outside the standard library (it de-vendored packaging). This meant it was no longer suitable to distribute a standalone wheel.whl file to seed into new virtualenvs, as virtualenv does by default. The good news here is that setuptools doesn't need wheel any more, it included its own implementation of the bdist_wheel command, in 70.1. But the world hadn't adapted to take advantage of this, yet. Stefano scrambled to get all of these issues resolved upstream and in Debian: We're now at the point where python3-wheel-whl is no longer needed in Debian unstable, and it should migrate to trixie.
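For context, the PEP 639 license expressions mentioned above look roughly like this in a project's pyproject.toml (a hedged sketch; the project name and expression are made up):
[project]
name = "example-package"        # hypothetical project
version = "1.0.0"
# PEP 639: the license field becomes an SPDX license expression string
license = "MIT OR Apache-2.0"
license-files = ["LICENSE*"]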

Removing libcrypt-dev from build-essential, by Helmut Grohne The crypt function was originally part of glibc, but it got separated to libxcrypt. As a result, libc6-dev now depends on libcrypt-dev. This poses a cycle during architecture cross bootstrap. As the number of packages actually using crypt is relatively small, Helmut proposed removing the dependency. He analyzed an archive rebuild kindly performed by Santiago Vila (not affiliated with Freexian) and estimated the necessary changes. It looks like we may complete this with modifications to less than 300 source packages in the forky cycle. Half of the bugs have been filed at this time. They are tracked with libcrypt-* usertags.

Miscellaneous contributions
  • Carles uploaded a new version of simplemonitor.
  • Carles improved the documentation of salsa-ci-team/pipeline regarding piuparts arguments.
  • Carles closed an FTBFS on gcc-15 on qnetload.
  • Carles worked on Catalan translations using po-debconf-manager: reviewed 57 translations and created their merge requests in salsa, and created 59 bug reports for packages that didn't merge in more than 30 days. He followed up on merge requests and comments in bug reports, and managed some translations manually for packages that are not in Salsa.
  • Lucas did some work on the DebConf Content and Bursary teams.
  • Lucas fixed multiple CVEs and bugs involving the upgrade from bookworm to trixie in ruby3.3.
  • Lucas fixed a CVE in valkey in unstable.
  • Stefano updated beautifulsoup4, python-authlib, python-html2text, python-packaging, python-pip, python-soupsieve, and unidecode.
  • Stefano packaged python-dependency-groups, a new vendored library in python-pip.
  • During an afternoon Bug Squashing Party in Montevideo, Santiago uploaded a couple of packages fixing RC bugs #1057226 and #1102487. The latter was a sponsored upload.
  • Thorsten uploaded new upstream versions of brlaser, ptouch-driver and sane-airscan to get the latest upstream bug fixes into Trixie.
  • Raphaël filed an upstream bug on zim for a graphical glitch that he has been experiencing.
  • Colin Watson upgraded openssh to 10.0p1 (also known as 10.0p2), and debugged various follow-up bugs. This included adding riscv64 support to vmdb2 in passing, and enabling native wtmpdb support so that wtmpdb last now reports the correct tty for SSH connections.
  • Colin fixed dput-ng's override option, which had never previously worked.
  • Colin fixed a security bug in debmirror.
  • Colin did his usual routine work on the Python team: 21 packages upgraded to new upstream versions, 8 CVEs fixed, and about 25 release-critical bugs fixed.
  • Helmut filed patches for 21 cross build failures.
  • Helmut uploaded a new version of debvm featuring a new tool debefivm-create to generate EFI-bootable disk images compatible with other tools such as libvirt or VirtualBox. Much of the work was prototyped in earlier months. This generalizes mmdebstrap-autopkgtest-build-qemu.
  • Helmut continued reporting undeclared file conflicts and suggested package removals from unstable.
  • Helmut proposed build profiles for libftdi1 and gnupg2 to deal with recently added dependencies in the architecture cross bootstrap package set.
  • Helmut managed the /usr-move transition. He worked on ensuring that systemd would comply with Debian's policy. Dumat continues to locate problems here and there, occasionally yielding discussion. He sent a patch for an upgrade problem in zutils.
  • Anupa worked with the Debian publicity team to publish Micronews and Bits posts.
  • Anupa worked with the DebConf 25 content team to review talk and event proposals for DebConf 25.

4 May 2025

Colin Watson: Free software activity in April 2025

About 90% of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay. Request for OpenSSH debugging help Following the OpenSSH work described below, I have an open report about the sshd server sometimes crashing when clients try to connect to it. I can't reproduce this myself, and arm's-length debugging is very difficult, but three different users have reported it. For the time being I can't pass it upstream, as it's entirely possible it's due to a Debian patch. Is there anyone reading this who can reproduce this bug and is capable of doing some independent debugging work, most likely involving bisecting changes to OpenSSH? I'd suggest first seeing whether a build of the unmodified upstream 10.0p2 release exhibits the same bug. If it does, then bisect between 9.9p2 and 10.0p2; if not, then bisect the list of Debian patches (a rough sketch of this bisection workflow appears at the end of this post). This would be extremely helpful, since at the moment it's a bit like trying to look for a needle in a haystack from the next field over by sending instructions to somebody with a magnifying glass. OpenSSH I upgraded the Debian packaging to OpenSSH 10.0p1 (now designated 10.0p2 by upstream due to a mistake in the release process, but they're the same thing), fixing CVE-2025-32728. This also involved a diffoscope bug report due to the version number change. I enabled the new --with-linux-memlock-onfault configure option to protect sshd against being swapped out, but this turned out to cause test failures on riscv64, so I disabled it again there. Debugging this took some time since I needed to do it under emulation, and in the process of setting up a testbed I added riscv64 support to vmdb2. In coordination with the wtmpdb maintainer, I enabled the new Y2038-safe native wtmpdb support in OpenSSH, so wtmpdb last now reports the correct tty. I fixed a couple of packaging bugs: I reviewed and merged several packaging contributions from others: dput-ng Since we added dput-ng integration to Debusine recently, I wanted to make sure that it was in good condition in trixie, so I fixed dput-ng: will FTBFS during trixie support period. Previously a similar bug had been fixed by just using different Ubuntu release names in tests; this time I made the tests independent of the current supported release data returned by distro_info, so this shouldn't come up again. We also ran into dput-ng: override doesn't override profile parameters, which needed somewhat more extensive changes since it turned out that that option had never worked. I fixed this after some discussion with Paul Tagliamonte to make sure I understood the background properly. man-db I released man-db 2.13.1. This just included various small fixes and a number of translation updates, but I wanted to get it into trixie in order to include a contribution to increase the MAX_NAME constant, since that was now causing problems for some pathological cases of manual pages in the wild that documented a very large number of terms. debmirror I fixed one security bug: debmirror prints credentials with progress. Python team I upgraded these packages to new upstream versions: In bookworm-backports, I updated these packages: I dropped a stale build-dependency from python-aiohttp-security that kept it out of testing (though unfortunately too late for the trixie freeze). I fixed or helped to fix various other build/test failures: I packaged python-typing-inspection, needed for a new upstream version of pydantic.
I documented the architecture field in debian/tests/autopkgtest-pkg-pybuild.conf files. I fixed other odds and ends of bugs: Science team I fixed various build/test failures:
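Returning to the debugging request above, a rough sketch of the suggested upstream bisection (the repository is the official portable mirror; the exact tag names are assumptions and should be checked against the actual releases being tested):
# Hypothetical bisection sketch for the sshd crash report above.
git clone https://github.com/openssh/openssh-portable.git
cd openssh-portable
git bisect start
git bisect bad V_10_0_P2     # assumed tag for the 10.0p2 release
git bisect good V_9_9_P2     # assumed tag for the 9.9p2 release
# At each step: build and install the resulting sshd in a test environment,
# try to reproduce the crash, then report the result:
git bisect good   # or: git bisect bad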

15 April 2025

Russell Coker: Storage Trends 2025

It's been almost 15 months since I blogged about Storage Trends 2024 [1]. There hasn't been much change in this time (in Australia at least; I'm not tracking prices in other countries). The change was so small I had to check how the Australian dollar has performed against other currencies to see if changes to currencies had countered changes to storage prices, but there has been little overall change when compared to the Chinese Yuan, and the Australian dollar is only about 11% worse against the US dollar when compared to a year ago. Generally there's a trend of computer parts decreasing in price by significantly more than 11% per annum. Small Storage The cheapest storage device from MSY now is a Patriot P210 128G SATA SSD for $19, cheaper than the $24 last year and the same price as the year before. So over the last 2 years there has been no change to the cheapest storage device on sale. It would almost never make sense to buy that as a 256G SATA SSD (also Patriot P210) is $25 and has twice the lifetime (120TBW vs 60TBW). There are also 256G NVMe devices for $29 and $30 which would be better options if the system has an NVMe socket built in. The cheapest 500G devices are $42.50 for a 512G SATA SSD and $45 for a 500G NVMe. Last year the prices were $33 for SATA and $36 for NVMe in that size, so there's been a significant increase in price there. The difference is enough that if someone was on a tight budget they might reasonably decide to use smaller storage than they might have used last year! 2TB hard drives are still $89, the same price as last year! Last year a 2TB SATA SSD was $118 and a 2TB NVMe was $145, now a 2TB SATA SSD is $157 and a 2TB NVMe is $127. So NVMe has become cheaper than SATA in that segment but overall prices are higher than last year. Again for business use 2TB seems a sensible minimum for most systems if you are paying MSY rates (or similar rates from Amazon etc). Medium Storage Last year 4TB HDDs were $135, now they are $148. Last year the cheapest 4TB SSD was $299, now the cheapest is a $309 NVMe. While the prices have all gone up, the price difference between hard drives and SSD has decreased in that size range. So for a small server (a lot of home servers and small business servers) 4TB of RAID-1 storage is all that's needed, and for that SSDs are the best option. The price difference between $296 for 4TB of RAID-1 HDDs and $618 for RAID-1 NVMe is small enough to be justified by the benefits of speed and being quiet for most small server uses. In 2023 an 8TB hard drive cost $179 and an 8TB SSD cost $739. Last year an 8TB hard drive cost $239 and an 8TB SATA SSD cost $899. Now an 8TB HDD costs $229 and MSY doesn't sell 8TB SSDs, but for comparison Amazon has a Samsung 8TB SATA SSD for $919. So for storing 8TB+ there are benefits of hard drives, as SSDs are difficult to get in that size range and more expensive than they were before. It seems that 8TB SSDs aren't used by enough people to have a large market in the home and small office space, so those of us who want the larger storage sizes will have to get second hand enterprise gear. It will probably be another few years before 8TB enterprise SSDs start appearing on the second hand market. Serious Storage Last year I wrote about the affordability of U.2 devices. I regret not buying some then, as there are fewer on sale now and prices are higher. For hard drives they still aren't a good choice for most users because most users don't have more than 4TB of data.
For large quantities of data hard drives are still a good option; a 22TB disk costs $899. For companies this is a good option for many situations. For home users there is the additional problem of determining whether a drive uses Shingled Magnetic Recording, which has some serious performance issues for some uses, and it's very difficult to determine which drives use it. Conclusion For corporate purchases the options for serious storage are probably decent. But for small companies and home users things definitely don't seem to have improved as much as we expect from the computer industry; I had expected 8TB SSDs to go for $450 by now and SSDs less than 500G to not even be sold new any more. The prices on 8TB SSDs have gone up more in the last 2 years than the ASX 200 (index of 200 biggest companies in the Australian stock market). I would never recommend using SSDs as an investment, but in retrospect 8TB SSDs could have been a good one. $20 seems to be about the minimum cost that SSDs approach, while hard drives have a higher minimum price of a bit under $100 because they are larger, heavier, and more fragile. It seems that the market is likely to move to most SSDs being close to $20; if they can make 2TB SSDs cheaply enough to sell for about that price then that would cover the majority of the market. I've created a table of the prices; I should have done this before but I initially didn't plan an ongoing series of posts on this topic.
Jun 2020 Apr 2021 Apr 2023 Jan 2024 Apr 2025
128G SSD $49 $19 $24 $19
500G SSD $97 $73 $32 $33 $42.50
2TB HDD $95 $72 $75 $89 $89
2TB SSD $335 $245 $149
4TB HDD $115 $135 $148
4TB SSD $895 $349 $299 $309
8TB HDD $179 $239 $229
8TB SSD $949 $739 $899 $919
10TB HDD $549 $395

13 April 2025

Michael Prokop: OpenSSH penalty behavior in Debian/trixie #newintrixie

This topic came up at a customer of mine in September 2024, when working on Debian/trixie support. Since then I wanted to blog about it to make people aware of this new OpenSSH feature and behavior. I finally found some spare minutes at Debian's BSP in Vienna, so here we are. :) Some of our Q/A jobs failed to run against Debian/trixie; in the debug logs we found:
debug1: kex_exchange_identification: banner line 0: Not allowed at this time
This "Not allowed at this time" pointed to a new OpenSSH feature. OpenSSH introduced options to penalize undesirable behavior with version 9.8p1, see the OpenSSH Release Notes, and also the sshd source code. FTR, on the SSH server side, you'll see messages like this:
Apr 13 08:57:11 grml sshd-session[2135]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55792 ssh2 [preauth]
Apr 13 08:57:11 grml sshd-session[2135]: Disconnecting authenticating user root 10.100.15.42 port 55792: Too many authentication failures [preauth]
Apr 13 08:57:12 grml sshd-session[2137]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55800 ssh2 [preauth]
Apr 13 08:57:12 grml sshd-session[2137]: Disconnecting authenticating user root 10.100.15.42 port 55800: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd-session[2139]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55804 ssh2 [preauth]
Apr 13 08:57:13 grml sshd-session[2139]: Disconnecting authenticating user root 10.100.15.42 port 55804: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd-session[2141]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55810 ssh2 [preauth]
Apr 13 08:57:13 grml sshd-session[2141]: Disconnecting authenticating user root 10.100.15.42 port 55810: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55818 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55824 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55838 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55854 on [10.100.15.230]:22 penalty: failed authentication
This feature certainly is useful and has its use cases. But if you e.g. run automated checks to ensure that specific logins aren't working, be careful: you might hit the penalty feature and lock yourself out, and consecutive checks then don't behave as expected. Your login checks might fail, but only because the penalty behavior kicks in; the login you're verifying might still work underneath, you just aren't actually checking for it any more. Furthermore, legitimate traffic from systems which accept connections from many users, or from behind shared IP addresses like NAT and proxies, could be denied. To disable this new behavior, you can set PerSourcePenalties no in your sshd_config, but there are also further configuration options available, see the PerSourcePenalties and PerSourcePenaltyExemptList settings in sshd_config(5) for further details.
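For example, a minimal drop-in configuration might look like this (the file name and the exempted subnet are illustrative; the directives themselves are documented in sshd_config(5)):
# /etc/ssh/sshd_config.d/penalties.conf  (illustrative file name)
# Either disable the penalty behavior entirely ...
PerSourcePenalties no
# ... or keep it enabled and exempt trusted networks instead, e.g. a CI subnet:
#PerSourcePenalties yes
#PerSourcePenaltyExemptList 10.100.15.0/24
Remember to reload or restart the sshd service afterwards for the change to take effect.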

Ben Hutchings: FOSS activity in November 2024

12 April 2025

Kalyani Kenekar: Nextcloud Installation HowTo: Secure Your Data with a Private Cloud

Nextcloud is an open-source software suite that enables you to set up and manage your own cloud storage and collaboration platform. It offers a range of features similar to popular cloud services like Google Drive or Dropbox, but with the added benefit of complete control over your data and the server where it's hosted. I wanted to have a look at Nextcloud and the steps to set up my own instance with a PostgreSQL-based database together with NGinx as the webserver to serve the WebUI. Before doing a full productive setup I wanted to play around locally with all the needed steps, and worked them out within a KVM machine. While doing this I wrote down some notes, mostly to document for myself what I need to do to get a Nextcloud installation running and usable. So this manual describes how to set up a Nextcloud installation on Debian 12 Bookworm based on NGinx and PostgreSQL.

Nextcloud Installation

Install PHP and PHP extensions for Nextcloud Nextcloud is basically a PHP application, so we need to install PHP packages to get it working in the end. The following steps are based on the upstream documentation about how to install your own Nextcloud instance. Installing the virtual package php on a Debian Bookworm system would pull in the dependent meta package php8.2. This package would in turn pull in the package libapache2-mod-php8.2 as a dependency, which would then also pull in the apache2 webserver as a dependent package. This is something I didn't want, as I want to use the NGinx that is already installed on the system instead. To achieve this we need to explicitly exclude the package libapache2-mod-php8.2 from the list of packages which we want to install. This is done by appending a hyphen - to the end of the package name, so we need to use libapache2-mod-php8.2- within the package list, which tells apt to ignore this package as a dependency. I ended up with this call to get all needed dependencies installed.
$ sudo apt install php php-cli php-fpm php-json php-common php-zip \
  php-gd php-intl php-curl php-xml php-mbstring php-bcmath php-gmp \
  php-pgsql libapache2-mod-php8.2-
  • Check php version (optional step) $ php -v
PHP 8.2.28 (cli) (built: Mar 13 2025 18:21:38) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.2.28, Copyright (c) Zend Technologies
    with Zend OPcache v8.2.28, Copyright (c), by Zend Technologies
  • After installing all the packages, edit the php.ini file: $ sudo vi /etc/php/8.2/fpm/php.ini
  • Change the following settings per your requirements:
max_execution_time = 300
memory_limit = 512M
post_max_size = 128M
upload_max_filesize = 128M
  • To make these settings effective, restart the php-fpm service $ sudo systemctl restart php8.2-fpm

Install PostgreSQL, Create a database and user This manual assumes we will use a PostgreSQL server on localhost; if you have a server instance on some remote site, you can skip the installation step here. $ sudo apt install postgresql postgresql-contrib postgresql-client
  • Check version after installation (optional step): $ sudo -i -u postgres $ psql -version
  • This output will be seen: psql (15.12 (Debian 15.12-0+deb12u2))
  • Exit the PSQL shell by using the command \q. postgres=# \q
  • Exit the CLI of the postgres user: postgres@host:~$ exit

Create a PostgreSQL Database and User:
  1. Create a new PostgreSQL user (Use a strong password!): $ sudo -u postgres psql -c "CREATE USER nextcloud_user PASSWORD '1234';"
  2. Create new database and grant access: $ sudo -u postgres psql -c "CREATE DATABASE nextcloud_db WITH OWNER nextcloud_user ENCODING=UTF8;"
  3. (Optional) Check if we now can connect to the database server and the database in detail (you will get a question about the password for the database user!). If this is not working it makes no sense to proceed further! We need to fix first the access then! $ psql -h localhost -U nextcloud_user -d nextcloud_db or $ psql -h 127.0.0.1 -U nextcloud_user -d nextcloud_db
  • Log out from postgres shell using the command \q.

Download and install Nextcloud
  • Use the following command to download the latest version of Nextcloud: $ wget https://download.nextcloud.com/server/releases/latest.zip
  • Extract file into the folder /var/www/html with the following command: $ sudo unzip latest.zip -d /var/www/html
  • Change ownership of the /var/www/html/nextcloud directory to www-data. $ sudo chown -R www-data:www-data /var/www/html/nextcloud

Configure NGinx for Nextcloud to use a certificate In case you want to use a self-signed certificate, e.g. if you are just playing around with a local Nextcloud setup for testing purposes, you can do the following steps.
  • Generate the private key and certificate: $ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout nextcloud.key -out nextcloud.crt $ sudo cp nextcloud.crt /etc/ssl/certs/ && sudo cp nextcloud.key /etc/ssl/private/
  • If you want or need to use the service of Let's Encrypt (or similar) drop the step above and create your required key data by using this command: $ sudo certbot --nginx -d nextcloud.your-domain.com You will need to adjust the path to the key and certificate in the next step!
  • Change the NGinx configuration: $ sudo vi /etc/nginx/sites-available/nextcloud.conf
  • Add the following snippet into the file and save it.
# /etc/nginx/sites-available/nextcloud.conf
upstream php-handler {
    #server 127.0.0.1:9000;
    server unix:/run/php/php8.2-fpm.sock;
}

# Set the `immutable` cache control options only for assets with a cache
# busting `v` argument

map $arg_v $asset_immutable {
    "" "";
    default ", immutable";
}

server {
    listen 80;
    listen [::]:80;
    # Adjust this to the correct server name!
    server_name nextcloud.local;

    # Prevent NGinx HTTP Server Detection
    server_tokens off;

    # Enforce HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443      ssl http2;
    listen [::]:443 ssl http2;
    # Adjust this to the correct server name!
    server_name nextcloud.local;

    # Path to the root of your installation
    root /var/www/html/nextcloud;

    # Use Mozilla's guidelines for SSL/TLS settings
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/
    # Adjust the usage and paths of the correct key data! E.g. if you want to use Let's Encrypt key material!
    ssl_certificate /etc/ssl/certs/nextcloud.crt;
    ssl_certificate_key /etc/ssl/private/nextcloud.key;
    # ssl_certificate /etc/letsencrypt/live/nextcloud.your-domain.com/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/nextcloud.your-domain.com/privkey.pem;

    # Prevent NGinx HTTP Server Detection
    server_tokens off;

    # HSTS settings
    # WARNING: Only add the preload option once you read about
    # the consequences in https://hstspreload.org/. This option
    # will add the domain to a hardcoded list that is shipped
    # in all major browsers and getting removed from this list
    # could take several months.
    #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;

    # set max upload size and increase upload timeout:
    client_max_body_size 512M;
    client_body_timeout 300s;
    fastcgi_buffers 64 4K;

    # Enable gzip but do not remove ETag headers
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml text/javascript application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/wasm application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    # Pagespeed is not supported by Nextcloud, so if your server is built
    # with the `ngx_pagespeed` module, uncomment this line to disable it.
    #pagespeed off;

    # The settings allows you to optimize the HTTP2 bandwidth.
    # See https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/
    # for tuning hints
    client_body_buffer_size 512k;

    # HTTP response headers borrowed from Nextcloud `.htaccess`
    add_header Referrer-Policy                   "no-referrer"       always;
    add_header X-Content-Type-Options            "nosniff"           always;
    add_header X-Frame-Options                   "SAMEORIGIN"        always;
    add_header X-Permitted-Cross-Domain-Policies "none"              always;
    add_header X-Robots-Tag                      "noindex, nofollow" always;
    add_header X-XSS-Protection                  "1; mode=block"     always;

    # Remove X-Powered-By, which is an information leak
    fastcgi_hide_header X-Powered-By;

    # Set .mjs and .wasm MIME types
    # Either include it in the default mime.types list
    # and include that list explicitly or add the file extension
    # only for Nextcloud like below:
    include mime.types;
    types {
        text/javascript js mjs;
        application/wasm wasm;
    }

    # Specify how to handle directories -- specifying `/index.php$request_uri`
    # here as the fallback means that NGinx always exhibits the desired behaviour
    # when a client requests a path that corresponds to a directory that exists
    # on the server. In particular, if that directory contains an index.php file,
    # that file is correctly served; if it doesn't, then the request is passed to
    # the front-end controller. This consistent behaviour means that we don't need
    # to specify custom rules for certain paths (e.g. images and other assets,
    # `/updater`, `/ocs-provider`), and thus
    # `try_files $uri $uri/ /index.php$request_uri`
    # always provides the desired behaviour.
    index index.php index.html /index.php$request_uri;

    # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
    location = / {
        if ( $http_user_agent ~ ^DavClnt ) {
            return 302 /remote.php/webdav/$is_args$args;
        }
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # Make a regex exception for `/.well-known` so that clients can still
    # access it despite the existence of the regex rule
    # `location ~ /(\.|autotest|...)` which would otherwise handle requests
    # for `/.well-known`.
    location ^~ /.well-known {
        # The rules in this block are an adaptation of the rules
        # in `.htaccess` that concern `/.well-known`.

        location = /.well-known/carddav { return 301 /remote.php/dav/; }
        location = /.well-known/caldav  { return 301 /remote.php/dav/; }

        location /.well-known/acme-challenge    { try_files $uri $uri/ =404; }
        location /.well-known/pki-validation    { try_files $uri $uri/ =404; }

        # Let Nextcloud's API for `/.well-known` URIs handle all other
        # requests by passing them to the front-end controller.
        return 301 /index.php$request_uri;
    }

    # Rules borrowed from `.htaccess` to hide certain paths from clients
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/)  { return 404; }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console)                { return 404; }

    # Ensure this block, which passes PHP files to the PHP process, is above the blocks
    # which handle static assets (as seen below). If this block is not declared first,
    # then NGinx will encounter an infinite rewriting loop when it prepends `/index.php`
    # to the URI, resulting in a HTTP 500 error response.
    location ~ \.php(?:$|/) {
        # Required for legacy support
        rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|ocs-provider\/.+|.+\/richdocumentscode(_arm64)?\/proxy) /index.php$request_uri;

        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        set $path_info $fastcgi_path_info;

        try_files $fastcgi_script_name =404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;

        fastcgi_param modHeadersAvailable true;         # Avoid sending the security headers twice
        fastcgi_param front_controller_active true;     # Enable pretty urls
        fastcgi_pass php-handler;

        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;

        fastcgi_max_temp_file_size 0;
    }

    # Serve static files
    location ~ \.(?:css|js|mjs|svg|gif|png|jpg|ico|wasm|tflite|map|ogg|flac)$ {
        try_files $uri /index.php$request_uri;
        # HTTP response headers borrowed from Nextcloud `.htaccess`
        add_header Cache-Control                     "public, max-age=15778463$asset_immutable";
        add_header Referrer-Policy                   "no-referrer"       always;
        add_header X-Content-Type-Options            "nosniff"           always;
        add_header X-Frame-Options                   "SAMEORIGIN"        always;
        add_header X-Permitted-Cross-Domain-Policies "none"              always;
        add_header X-Robots-Tag                      "noindex, nofollow" always;
        add_header X-XSS-Protection                  "1; mode=block"     always;
        access_log off;     # Optional: Don't log access to assets
    }

    location ~ \.woff2?$ {
        try_files $uri /index.php$request_uri;
        expires 7d;         # Cache-Control policy borrowed from `.htaccess`
        access_log off;     # Optional: Don't log access to assets
    }

    # Rule borrowed from `.htaccess`
    location /remote {
        return 301 /remote.php$request_uri;
    }

    location / {
        try_files $uri $uri/ /index.php$request_uri;
    }
}
  • Symlink configuration site available to site enabled. $ ln -s /etc/nginx/sites-available/nextcloud.conf /etc/nginx/sites-enabled/
  • Restart NGinx and access the URI in the browser.
  • Go through the installation of Nextcloud.
  • The user data on the installation dialog should point to e.g. administrator or similar; that user will be granted administrative access rights in Nextcloud!
  • To adjust the database connection detail you have to edit the file $install_folder/config/config.php. Means here in the example within this post you would need to modify /var/www/html/nextcloud/config/config.php to control or change the database connection.
---%<---
    'dbname' => 'nextcloud_db',
    'dbhost' => 'localhost', #(Or your remote PostgreSQL server address if you have.)
    'dbport' => '',
    'dbtableprefix' => 'oc_',
    'dbuser' => 'nextcloud_user',
    'dbpassword' => '1234', #(The password you set for database user.)
--->%---
After the installation and setup of the Nextcloud PHP application there are more steps to be done. Have a look into the WebUI to see which additional steps you will need to perform, like creating a cronjob or tuning some more PHP configuration settings. If you've done everything correctly, you should see a login page similar to this: Login Page of your Nextcloud instance
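One of those typical follow-up steps is a system cronjob for Nextcloud's background jobs; a minimal sketch using the installation path from this post (verify against what the WebUI suggests under the background jobs settings):
  • Edit the crontab of the web server user: $ sudo crontab -u www-data -e
  • Add a line to run the Nextcloud background jobs every 5 minutes:
*/5 * * * * php -f /var/www/html/nextcloud/cron.php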

Optional other steps for more enhanced configuration modifications

Move the data folder to somewhere else The data folder is the root folder for all user content. By default it is located in $install_folder/data, so in our case here it is in /var/www/html/nextcloud/data.
  • Move the data directory outside the web server document root. $ sudo mv /var/www/html/nextcloud/data /var/nextcloud_data
  • Ensure access permissions, mostly not needed if you move the folder. $ sudo chown -R www-data:www-data /var/nextcloud_data $ sudo chown -R www-data:www-data /var/www/html/nextcloud/
  • Update the Nextcloud configuration:
    1. Open the config/config.php file of your Nextcloud installation. $ sudo vi /var/www/html/nextcloud/config/config.php
    2. Update the datadirectory parameter to point to the new location of your data directory.
  ---%<---
     'datadirectory' => '/var/nextcloud_data'
  --->%---
  • Restart NGinx service: $ sudo systemctl restart nginx

Make the installation available for multiple FQDNs on the same server
  • Adjust the Nextcloud configuration to listen and accept requests for different domain names. Configure and adjust the key trusted_domains accordingly. $ sudo vi /var/www/html/nextcloud/config/config.php
  ---%<---
    'trusted_domains' => 
    array (
      0 => 'domain.your-domain.com',
      1 => 'domain.other-domain.com',
    ),
  --->%---
  • Create and adjust the needed site configurations for the webserver.
  • Restart the NGinx unit.

An error message about .ocdata might occur
  • .ocdata is not found inside the data directory
    • Create file using touch and set necessary permissions. $ sudo touch /var/nextcloud_data/.ocdata $ sudo chown -R www-data:www-data /var/nextcloud_data/

The password for the administrator user is unknown
  1. Log in to your server:
    • SSH into the server where your PostgreSQL database is hosted.
  2. Switch to the PostgreSQL user:
    • $ sudo -i -u postgres
  3. Access the PostgreSQL command line
    • psql
  4. List the databases: (If you're unsure which database is being used by Nextcloud, you can list all the databases by the list command.)
    • \l
  5. Switch to the Nextcloud database:
    • Switch to the specific database that Nextcloud is using.
    • \c nextcloud_db
  6. Reset the password for the Nextcloud database user:
    • ALTER USER nextcloud_user WITH PASSWORD 'new_password';
  7. Exit the PostgreSQL command line:
    • \q
  8. Verify Database Configuration:
    • Check the database connection details in the config.php file to ensure they are correct. sudo vi /var/www/html/nextcloud/config/config.php
    • Replace nextcloud_db, nextcloud_user, and your_password with your actual database name, user, and password.
---%<---
    'dbname' => 'nextcloud_db',
    'dbhost' => 'localhost', #(or your PostgreSQL server address)
    'dbport' => '',
    'dbtableprefix' => 'oc_',
    'dbuser' => 'nextcloud_user',
    'dbpassword' => '1234', #(The password you set for nextcloud_user.)
--->%---
  1. Restart NGinx and access the UI through the browser.

4 April 2025

Guido Günther: Booting an Android custom kernel on a Pixel 3a for QMI debugging

As you might know I'm not much of an Android user (let alone developer) but in order to figure out how something low level works you sometimes need to peek at how vendor kernels handles this. For that it is often useful to add additional debugging. One such case is QMI communication going on in Qualcomm SOCs. Joel Selvaraj wrote some nice tooling for this. To make use of this a rooted device and a small kernel patch is needed and what would be a no-brainer with Linux Mobile took me a moment to get it to work on Android. Here's the steps I took on a Pixel 3a to first root the device via Magisk, then build the patched kernel and put that into a boot.img to boot it. Flashing the factory image If you still have Android on the device you can skip this step. You can get Android 12 from developers.google.com. I've downloaded sargo-sp2a.220505.008-factory-071e368a.zip. Then put the device into Fastboot mode (Power + Vol-Down), connect it to your PC via USB, unzip/unpack the archive and reflash the phone:
unpack sargo-sp2a.220505.008-factory-071e368a.zip
./flash-all.sh
This wipes your device! I had to run it twice since it would time out on the first run. Note that this unpacked zip contains another zip (image-sargo-sp2a.220505.008.zip) which will become useful below. Enabling USB debugging Now boot Android and enable Developer mode by going to Settings About then touching Build Number (at the very bottom) 7 times. Go back one level, then go to System Developer Options and enable "USB Debugging". Obtaining boot.img There are several ways to get boot.img. If you just flashed Android above then you can fetch boot.img from the already mentioned image-sargo-sp2a.220505.008.zip:
unzip image-sargo-sp2a.220505.008.zip boot.img
If you want to fetch the exact boot.img from your device you can use TWRP (see the very end of this post). Becoming root with Magisk Being able to su via adb will later be useful to fetch kernel logs. For that we first download Magisk as APK. At the time of writing v28.1 is current. Once downloaded we upload the APK and the boot.img from the previous step onto the phone (which needs to have Android booted):
adb push Magisk-v28.1.apk /sdcard/Download
adb push boot.img /sdcard/Download
In Android open the Files app, navigate to /sdcard/Download and install the Magisk APK by opening the APK. We now want to patch boot.img to get su via adb to work (so we can run dmesg). This happens by hitting Install in the Magisk app, then "Select a file to patch". You then select the boot.img we just uploaded. The installation process will create a magisk_patched-<random>.img in /sdcard/Download. We can pull that file via adb back to our PC:
adb pull /sdcard/Download/magisk_patched-28100_3ucVs.img
Then reboot the phone into fastboot (adb reboot bootloader) and flash it (this is optional see below):
fastboot flash boot magisk_patched-28100_3ucVs.img
Now boot the phone again, open the Magisk app, go to SuperUser at the bottom and enable Shell. If you now connect to your phone via adb again and now su should work:
adb shell
su
As noted above if you want to keep your Android installation pristine you don't even need to flash this Magisk enabled boot.img. I've flashed it so I have su access for other operations too. If you don't want to flash it you can still test boot it via:
fastboot boot magisk_patched-28100_3ucVs.img
and then perform the same adb shell su check as above. Building the custom kernel For our QMI debugging to work we need to patch the kernel a bit and place that in boot.img too. So let's build the kernel first. For that we install the necessary tools (which are thankfully packaged in Debian) and fetch the Android kernel sources:
sudo apt install repo android-platform-tools-base kmod ccache build-essential mkbootimg
mkdir aosp-kernel && cd aosp-kernel
repo init -u https://android.googlesource.com/kernel/manifest -b android-msm-bonito-4.9-android12L
repo sync
With that we can apply Joel's kernel patches and also compile in the touch controller driver so we don't need to worry if the modules in the initramfs match the kernel. The kernel sources are in private/msm-google. I've just applied the diffs on top with patch and modified the defconfig and committed the changes. The resulting tree is here. We then build the kernel:
PATH=/usr/sbin:$PATH ./build_bonito.sh
The resulting kernel is at ./out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb. In order to boot that kernel I found it to be the simplest to just replace the kernel in the Magisk patched boot.img as we have that already. In case you have already deleted that for any reason we can always fetch the current boot.img from the phone via TWRP (see below). Preparing a new boot.img To replace the kernel in our Magisk enabled magisk_patched-28100_3ucVs.img from above with the just built kernel we can use mkbootimg for that. I basically copied the steps we're using when building the boot.img on the Linux Mobile side:
ARGS=$(unpack_bootimg --format mkbootimg --out tmp --boot_img magisk_patched-28100_3ucVs.img)
CLEAN_PARAMS="$(echo "${ARGS}" | sed -e "s/ --cmdline '.*'//" -e "s/ --board '.*'//")"
cp android-kernel/out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb tmp/kernel
mkbootimg -o "boot.patched.img" ${CLEAN_PARAMS} --cmdline "${ARGS}"
This will give you a boot.patched.img with the just built kernel. Boot the new kernel via fastboot We can now boot the new boot.patched.img. No need to flash that onto the device for that:
fastboot boot boot.patched.img
Fetching the kernel logs With that we can fetch the kernel logs with the debug output via adb:
adb shell su -c 'dmesg -t' > dmesg_dump.xml
or already filtering out the QMI commands:
adb shell su -c 'dmesg -t' | grep "@QMI@" | sed -e "s/@QMI@//g" &> sargo_qmi_dump.xml
That's it. You can apply this method for testing out other kernel patches as well. If you want to apply the above to other devices, you basically need to make sure you patch the right kernel sources; the other steps should be very similar. In case you just need a rooted boot.img for sargo, you can find a patched one here. If this procedure can be improved or streamlined somehow, please let me know. Appendix: Fetching boot.img from the phone If, for some reason, you lost boot.img somewhere along the way, you can always use TWRP to fetch the boot.img currently in use on your phone. First get TWRP for the Pixel 3a. You can boot that directly by putting your device into fastboot mode, then running:
fastboot boot twrp-3.7.1_12-1-sargo.img
Within TWRP select Backup Boot and back up the file. You can then use adb shell to locate the backup in /sdcard/TWRP/BACKUPS/ and pull it:
adb pull /sdcard/TWRP/BACKUPS/97GAY10PWS/2025-04-02--09-24-24_SP2A220505008/boot.emmc.win
You now have the device's boot.img on your PC and can e.g. replace the kernel or make modifications to the initramfs.
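As a hedged aside (not part of the original walkthrough), the unpack_bootimg tool used earlier should also let you look inside such a backup, assuming boot.emmc.win is a raw dump of the boot partition:
# Sketch only: inspect the TWRP backup with the unpack_bootimg tool used above.
unpack_bootimg --boot_img boot.emmc.win --out boot-parts
ls boot-parts/   # kernel, ramdisk, dtb, ...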

28 March 2025

John Goerzen: Why You Should (Still) Use Signal As Much As Possible

As I write this in March 2025, there is a lot of confusion about Signal messenger due to the recent news of people using Signal in government, and subsequent leaks. The short version is: there was no problem with Signal here. People were using it because they understood it to be secure, not the other way around. Both the government and the Electronic Frontier Foundation recommend people use Signal. This is an unusual alliance, and in the case of the government, was prompted because it understood other countries had a persistent attack against American telephone companies and SMS traffic. So let's dive in. I'll cover some basics of what security is, what happened in this situation, and why Signal is a good idea. This post isn't for programmers that work with cryptography every day. Rather, I hope it can make some of these concepts accessible to everyone else.

What makes communications secure? When most people are talking about secure communications, they mean some combination of these properties:
  1. Privacy - nobody except the intended recipient can decode a message.
  2. Authentication - guarantees that the person you are chatting with really is the intended recipient.
  3. Ephemerality - preventing a record of the communication from being stored. That is, making it more like a conversation around the table than a written email.
  4. Anonymity - keeping your set of contacts to yourself and even obfuscating the fact that communications are occurring.
If you think about it, most people care the most about the first two. In fact, authentication is a key part of privacy. There is an attack known as "man in the middle" in which somebody pretends to be the intended recipient. The interceptor reads the messages, and then passes them on to the real intended recipient. So we can't really have privacy without authentication. I'll have more to say about these later. For now, let's discuss attack scenarios.

What compromises security? There are a number of ways that security can be compromised. Let's think through some of them:

Communications infrastructure snooping Let's say you used no encryption at all, and connected to public WiFi in a coffee shop to send your message. Who all could potentially see it?
  • The owner of the coffee shop's WiFi
  • The coffee shop's Internet provider
  • The recipient's Internet provider
  • Any Internet providers along the network between the sender and the recipient
  • Any government or institution that can compel any of the above to hand over copies of the traffic
  • Any hackers that compromise any of the above systems
Back in the early days of the Internet, most traffic had no encryption. People were careful about putting their credit cards into webpages and emails because they knew it was easy to intercept them. We have been on a decades-long evolution towards more pervasive encryption, which is a good thing. Text messages (SMS) follow a similar path to the above scenario, and are unencrypted. We know that all of the above are ways people's texts can be compromised; for instance, governments can issue search warrants to obtain copies of texts, and China is believed to have a persistent hack into western telcos. SMS fails all four of our attributes of secure communication above (privacy, authentication, ephemerality, and anonymity). Also, think about what information is collected from SMS and by whom. Texts you send could be retained in your phone, the recipient's phone, your phone company, their phone company, and so forth. They might also live in cloud backups of your devices. You only have control over your own phone's retention. So defenses against this involve things like:
  • Strong end-to-end encryption, so no intermediate party (even the people that make the app) can snoop on it.
  • Using strong authentication of your peers
  • Taking steps to prevent even app developers from being able to see your contact list or communication history
You may see some other apps saying they use strong encryption or use the Signal protocol. But while they may do that for some or all of your message content, they may still upload your contact list, history, location, etc. to a central location where it is still vulnerable to these kinds of attacks. When you think about anonymity, think about it like this: if you send a letter to a friend every week, every postal carrier that transports it (even if they never open it or attempt to peek inside) will be able to read the envelope and know that you communicate on a certain schedule with that friend. The same can be said of SMS, email, or most encrypted chat operators. Signal's design prevents it from retaining even this information, though nation-states or ISPs might still be able to notice patterns (every time you send something via Signal, your contact receives something from Signal a few milliseconds later). It is very difficult to provide perfect anonymity from well-funded adversaries, even if you can provide very good privacy.

Device compromise Let's say you use an app with strong end-to-end encryption. This takes away some of the easiest ways someone could get to your messages. But it doesn't take away all of them. What if somebody stole your phone? Perhaps the phone has a password, but if an attacker pulled out the storage unit, could they access your messages without a password? Or maybe they somehow trick or compel you into revealing your password. Now what? An even simpler attack doesn't require them to steal your device at all. All they need is a few minutes with it to steal your SIM card. Now they can receive any texts sent to your number - whether from your bank or your friend. Yikes, right? Signal stores your data in an encrypted form on your device. It can protect it in various ways. One of the most important protections is ephemerality - it can automatically delete your old texts. A text that is securely erased can never fall into the wrong hands if the device is compromised later. An actively-compromised phone, though, could still give up secrets. For instance, what if a malicious keyboard app sent every keypress to an adversary? Signal is only as secure as the phone it runs on, but still, it protects against a wide variety of attacks.

Untrustworthy communication partner Perhaps you are sending sensitive information to a contact, but that person doesn't want to keep it in confidence. There is very little you can do about that technologically; with pretty much any tool out there, nothing stops them from taking a picture of your messages and handing the picture off.

Environmental compromise Perhaps your device is secure, but a hidden camera still captures what's on your screen. You can take some steps against things like this, of course.

Human error Sometimes humans make mistakes. For instance, the reason a reporter got copies of messages recently was because a participant in a group chat accidentally added him (presumably that participant meant to add someone else and just selected the wrong name). Phishing attacks can trick people into revealing passwords or other sensitive data. Humans are, quite often, the weakest link in the chain.

Protecting yourself So how can you protect yourself against these attacks? Let's consider:
  • Use a secure app like Signal that uses strong end-to-end encryption where even the provider can't access your messages
  • Keep your software and phone up-to-date
  • Be careful about phishing attacks and who you add to chat rooms
  • Be aware of your surroundings; don't send sensitive messages where people might be looking over your shoulder with their eyes or cameras
There are other methods besides Signal. For instance, you could install GnuPG (GPG) on a laptop that has no WiFi card or any other way to connect it to the Internet. You could always type your messages on that laptop, encrypt them, copy the encrypted text to a floppy disk (or USB device), take that USB drive to your Internet computer, and send the encrypted message by email or something. It would be exceptionally difficult to break the privacy of messages in that case (though anonymity would be mostly lost). Even if someone got the password to your secure laptop, it wouldn't do them any good unless they physically broke into your house or something. In some ways, it is probably safer than Signal. (For more on this, see my article "How gapped is your air?") But, that approach is hard to use. Many people aren't familiar with GnuPG. You don't have the convenience of sending a quick text message from anywhere. Security that is hard to use most often simply isn't used. That is, you and your friends will probably just revert back to using insecure SMS instead of this GnuPG approach because SMS is so much easier. Signal strikes a unique balance of providing very good security while also being practical, easy, and useful. For most people, it is the most secure option available. Signal is also open source; you don't have to trust that it is as secure as it says, because you can inspect it for yourself. Also, while it's not federated, I previously addressed that.

Government use If you are a government, particularly one that is highly consequential to the world, you can imagine that you are a huge target. Other nations are likely spending billions of dollars to compromise your communications. Signal itself might be secure, but if some other government can add spyware to your phones, or conduct a successful phishing attack, you can still have your communications compromised. I have no direct knowledge, but I think it is generally understood that the US government maintains communications networks that are entirely separate from the Internet and can only be accessed from secure physical locations and secure rooms. These can be even more secure than the average person using Signal because they can protect against things like environmental compromise, human error, and so forth. The scandal in March of 2025 happened because government employees were using Signal rather than official government tools for sensitive information, had taken advantage of Signal's ephemerality (laws require records to be kept), and through apparent human error had directly shared this information with a reporter. Presumably a reporter would have lacked access to the restricted communications networks in the first place, so that wouldn't have been possible. This doesn't mean that Signal is bad. It just means that somebody that can spend billions of dollars on security can be more secure than you. Signal is still a great tool for people, and in many cases defeats even those that can spend lots of dollars trying to defeat it. And remember - to use those restricted networks, you have to go to specific rooms in specific buildings. They are still not as convenient as what you carry around in your pocket.

Conclusion Signal is practical security. Do you want phone companies reading your messages? How about Facebook or X? Have those companies demonstrated that they are completely trustworthy throughout their entire history? I say no. So, go install Signal. It's the best, most practical tool we have.
This post is also available on my website, where it may be periodically updated.

Freexian Collaborators: Monthly report about Debian Long Term Support, February 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In February, 18 contributors have been paid to work on Debian LTS, their reports are available:
  • Abhijith PA did 10.0h (out of 8.0h assigned and 6.0h from previous period), thus carrying over 4.0h to the next month.
  • Adrian Bunk did 12.0h (out of 0.0h assigned and 63.5h from previous period), thus carrying over 51.5h to the next month.
  • Andrej Shadura did 10.0h (out of 6.0h assigned and 4.0h from previous period).
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 12.0h (out of 8.0h assigned and 16.0h from previous period), thus carrying over 12.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 23.0h (out of 20.0h assigned and 6.0h from previous period), thus carrying over 3.0h to the next month.
  • Emilio Pozuelo Monfort did 53.0h (out of 53.0h assigned and 0.75h from previous period), thus carrying over 0.75h to the next month.
  • Guilhem Moulin did 11.0h (out of 3.25h assigned and 16.75h from previous period), thus carrying over 9.0h to the next month.
  • Jochen Sprickerhof did 27.0h (out of 30.0h assigned), thus carrying over 3.0h to the next month.
  • Lee Garrett did 11.75h (out of 9.5h assigned and 44.25h from previous period), thus carrying over 42.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Roberto C. Sánchez did 7.0h (out of 14.75h assigned and 9.25h from previous period), thus carrying over 17.0h to the next month.
  • Santiago Ruano Rincón did 19.75h (out of 21.75h assigned and 3.25h from previous period), thus carrying over 5.25h to the next month.
  • Sean Whitton did 6.0h (out of 6.0h assigned).
  • Sylvain Beucler did 52.5h (out of 14.75h assigned and 39.0h from previous period), thus carrying over 1.25h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 17.0h (out of 17.0h assigned).

Evolution of the situation In February, we released 38 DLAs.
  • Notable security updates:
    • pam-u2f, prepared by Patrick Winnertz, fixed an authentication bypass vulnerability
    • openjdk-17, prepared by Emilio Pozuelo Monfort, fixed an authorization bypass/information disclosure vulnerability
    • firefox-esr, prepared by Emilio Pozuelo Monfort, fixed several vulnerabilities
    • thunderbird, prepared by Emilio Pozuelo Monfort, fixed several vulnerabilities
    • postgresql-13, prepared by Christoph Berg, fixed an SQL injection vulnerability
    • freerdp2, prepared by Tobias Frost, fixed several vulnerabilities
    • openssh, prepared by Colin Watson, fixed a machine-in-the-middle vulnerability
LTS contributors Emilio Pozuelo Monfort and Santiago Ruano Rincón coordinated the administrative aspects of LTS updates of postgresql-13 and pam-u2f, which were prepared by the respective maintainers, to whom we are most grateful. As has become the custom of the LTS team, work is under way on a number of package updates targeting Debian 12 (codename "bookworm") with fixes for a variety of vulnerabilities. In February, Guilhem Moulin prepared an upload of sssd, while several other updates are still in progress. Bastien Roucariès prepared an upload of krb5 for unstable as well. Given the importance of the Debian Security Tracker to the work of the LTS Team, we regularly contribute improvements to it. LTS contributor Emilio Pozuelo Monfort reviewed and merged a change to improve performance, and then dealt with unexpected issues that arose as a result. He also made improvements in the processing of CVEs which are not applicable to Debian. Looking to the future (the release of Debian 13, codename "trixie", and beyond), LTS contributor Santiago Ruano Rincón has initiated a conversation among the broader community involved in the development of Debian. The purpose of the discussion is to explore ways to improve the long term supportability of packages in Debian, specifically by focusing effort on ensuring that each Debian release contains the best supported upstream version of packages with a history of security issues.

Thanks to our sponsors Sponsors that joined recently are in bold.

24 March 2025

Simon Josefsson: Reproducible Software Releases

Around a year ago I discussed two concerns with software release archives (tarball artifacts) that could be improved to increase confidence in the supply-chain security of software releases. Repeating the goals for simplicity: being able to build a project from a minimal git-archive style source tarball, and making the make dist release tarball reproducible. While implementing these ideas for a small project was accomplished within weeks (see my announcement of Libntlm version 1.8 addressing this), in complex projects it uncovered concerns with tools that had to be addressed, and things stalled for many months pending that work. I had the notion that these two goals were easy and shouldn't be hard to accomplish. I still believe that, but have had to realize that improving tooling to support these goals takes time. It seems clear that these concepts are not universally agreed on and implemented generally. I'm now happy to recap some of the work that led to releases of libtasn1 v4.20.0, inetutils v2.6, libidn2 v2.3.8, and libidn v1.43. These releases all achieve these goals. I am working on a bunch more projects to support these ideas too. What have the obstacles been so far to making this happen? It may help others who are in the same process of addressing these concerns to have a high-level introduction to the issues I encountered. Source code for the projects above is available, and anyone can look at the solutions to learn how the problems are addressed. First let's look at the problems we need to solve to make git-archive style tarballs usable:

Version Handling To build usable binaries from a minimal tarball, the build needs to know which version number it is. Traditionally this information was stored inside configure.ac in git. However, I use gnulib's git-version-gen to infer the version number from the git tag or git commit instead. The git tag information is not available in a git-archive tarball. My solution to this was to make use of the export-subst feature of the .gitattributes file. I store the file .tarball-version-git in git containing a magic cookie like this:
$Format:%(describe)$
With this, git-archive will replace the cookie with a useful version identifier on export; see the libtasn1 patch to achieve this. To make use of this information, the git-version-gen script was enhanced to read it; see the gnulib patch. The script is invoked by ./configure to figure out which version number the package is for.
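As a minimal sketch of how this wiring can be set up (the commands are illustrative; the file name is the one used above):
# Sketch: mark .tarball-version-git for export-subst so git-archive expands the
# magic cookie into 'git describe' output when the tree is exported.
echo '.tarball-version-git export-subst' >> .gitattributes
echo '$Format:%(describe)$' > .tarball-version-git
git add .gitattributes .tarball-version-git
git commit -m 'Add .tarball-version-git for git-archive version stamping.'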

Translations We want translations to be included in the minimal source tarball for it to be buildable. Traditionally these files are retrieved by the maintainer from the Translation Project when running ./bootstrap, however there are two problems with this. The first one is that there is no strong authentication or versioning information on this data; the tools just download and place whatever wget downloaded into your source tree (printf-style injection attack anyone?). We could improve this (e.g., publish GnuPG-signed translation messages with clear versioning), however I did not work on that further. The reason is that I want to support offline builds of packages. Downloading random things from the Internet during builds does not work when building a Debian package, for example. The Translation Project could solve this by making a monthly tarball with their translations available, for distributors to pick up and provide as a separate package that could be used as a build dependency. However that is not how these tools and projects are designed. Instead I reverted to storing translations in git, something that I did for most projects back when I was using CVS 20 years ago. Hooking this into the ./bootstrap and gettext workflow can be tricky (ideas for improvement most welcome!), but I used a simple approach: store the downloaded po/*.po files as po/*.po.in and make the ./bootstrap tool move them into place; see the libidn2 commit followed by the actual make update-po commit with all the translations, where one essential step is:
# Prime po/*.po from fall-back copy stored in git.
for poin in po/*.po.in; do
    po=$(echo $poin | sed 's/.in//')
    test -f $po || cp -v $poin $po
done
ls po/*.po | sed 's|.*/||; s|\.po$||' > po/LINGUAS

Fetching vendor files like gnulib Most build dependencies are in the shape of "you need a C compiler". However some come in the shape of source-code files intended to be "vendored", and gnulib is a huge repository of such files. The latter is a problem when building from a minimal git archive. It is possible to consider translation files as a class of vendor files, since they need to be copied verbatim into the project build directory for things to work. The same goes for *.m4 macros from the GNU Autoconf Archive. However I'm not confident that the solution for all vendor files must be the same. For translation files and for Autoconf Archive macros, I have decided to put these files into git and merge them manually occasionally. For gnulib files, in some projects like OATH Toolkit I also store all gnulib files in git, which effectively resolves this concern. (Incidentally, the reason for doing so was originally that running ./bootstrap took forever since there are five gnulib instances used, which is no longer the case since gnulib-tool was rewritten in Python.) For most projects, however, I rely on ./bootstrap to fetch a gnulib git clone when building. I like this model, however it doesn't work offline. One way to resolve this is to make the gnulib git repository available for offline use, and I've made some effort to make this happen via a Gnulib Git Bundle and have explained how to implement this approach for Debian packaging. I don't think that is sufficient as a generic solution though; it is mostly applicable to building old releases that use old gnulib files. It won't work when building from CI/CD pipelines, for example, where I have settled on a crude way of fetching and unpacking a particular gnulib snapshot; see this Libntlm patch. This is much faster than working with git submodules and cloning gnulib during ./bootstrap. Essentially this is doing:
GNULIB_REVISION=$(. bootstrap.conf >&2; echo $GNULIB_REVISION)
wget -nv https://gitlab.com/libidn/gnulib-mirror/-/archive/$GNULIB_REVISION/gnulib-mirror-$GNULIB_REVISION.tar.gz
gzip -cd gnulib-mirror-$GNULIB_REVISION.tar.gz | tar xf -
rm -fv gnulib-mirror-$GNULIB_REVISION.tar.gz
export GNULIB_SRCDIR=$PWD/gnulib-mirror-$GNULIB_REVISION
./bootstrap --no-git
./configure
make

Test the git-archive tarball This goes without saying, but if you don't test that building from a git-archive style tarball works, you are likely to regress at some point. Use CI/CD techniques to continuously test that a minimal git-archive tarball leads to a usable build.
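A hedged sketch of such a check (project name and tag are placeholders; an offline gnulib checkout via GNULIB_SRCDIR, as set up above, is assumed):
# Sketch of a CI check: build only from a freshly generated git-archive tarball.
git archive --prefix=libfoo-v1.2.3/ -o libfoo-v1.2.3-src.tar.gz v1.2.3
tar xfz libfoo-v1.2.3-src.tar.gz && cd libfoo-v1.2.3
./bootstrap --no-git        # relies on $GNULIB_SRCDIR instead of cloning gnulib
./configure && make check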

Mission Accomplished So that wasn't hard, was it? You should now be able to publish a minimal git-archive tarball and users should be able to build your project from it. I recommend naming these archives PROJECT-vX.Y.Z-src.tar.gz, replacing PROJECT with your project name and X.Y.Z with your version number. The archive should have only one sub-directory named PROJECT-vX.Y.Z/ containing all the source-code files. This differentiates them from traditional PROJECT-X.Y.Z.tar.gz tarballs in that they embed the git tag (which typically starts with v) and contain a wildcard-friendly -src substring. Alas there is no consistency around this naming pattern, and GitLab, GitHub, Codeberg etc. all seem to use their own slightly incompatible variant. Let's go on to see what is needed to achieve reproducible make dist source tarballs. This is the release artifact that most users use, and they often contain lots of generated files and vendor files. These files are included to make it easy to build for the user. What are the challenges to make these reproducible?

Build dependencies causing different generated content The first part is to realize that if you use tool X with version A to generate a file that goes into the tarball, version B of that tool may produce different outputs. This is a generic concern and it cannot be solved. We want our build tools to evolve and produce better outputs over time. What can be addressed is to avoid needless differences. For example, many tools store timestamps and versioning information in the generated files. This causes needless differences, which makes audits harder. I have worked on some of these, like Autoconf Archive timestamps, but solving all of these examples will take a long time, and some upstreams are reluctant to incorporate these changes. My approach meanwhile is to build things using similar environments and compare the outputs for differences. I've found that the various closely related forks of GNU/Linux distributions are useful for this. Trisquel 11 is based on Ubuntu 22.04, and building my projects using both and comparing the outputs gives me only the relevant differences to improve. This can be extended to compare AlmaLinux with RockyLinux (for both versions 8 and 9), Devuan 5 against Debian 12, PureOS 10 with Debian 11, and so on.
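As a sketch of that comparison (paths and project name are hypothetical, and diffoscope is my suggestion rather than something the post prescribes):
# Sketch: compare 'make dist' outputs built on two closely related distributions.
sha256sum trisquel11/libfoo-1.2.3.tar.gz ubuntu2204/libfoo-1.2.3.tar.gz
# diffoscope pinpoints which generated files inside the tarballs actually differ.
diffoscope trisquel11/libfoo-1.2.3.tar.gz ubuntu2204/libfoo-1.2.3.tar.gz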

Timestamps Sometimes tools store timestamps in files in a way that is harder to fix. Two notable examples of this are *.po translation files and Texinfo manuals. For translation files, I have resolved this by making sure the files use a predictable POT-Creation-Date timestamp, which I set to the modification timestamp of the NEWS file in the repository (which in turn is set elsewhere to the time of the latest git commit) like this:
dist-hook: po-CreationDate-to-mtime-NEWS
.PHONY: po-CreationDate-to-mtime-NEWS
po-CreationDate-to-mtime-NEWS: mtime-NEWS-to-git-HEAD
  $(AM_V_GEN)for p in $(distdir)/po/*.po $(distdir)/po/$(PACKAGE).pot; do \
    if test -f "$$p"; then \
      $(SED) -e 's,POT-Creation-Date: .*\\n",POT-Creation-Date: '"$$(env LC_ALL=C TZ=UTC0 stat --format=%y $(srcdir)/NEWS | cut -c1-16,31-)"'\\n",' < $$p > $$p.tmp && \
      if cmp $$p $$p.tmp > /dev/null; then \
        rm -f $$p.tmp; \
      else \
        mv $$p.tmp $$p; \
      fi \
    fi \
  done
Similarly, I set a predictable modification time of the texinfo source file like this:
dist-hook: mtime-NEWS-to-git-HEAD
.PHONY: mtime-NEWS-to-git-HEAD
mtime-NEWS-to-git-HEAD:
  $(AM_V_GEN)if test -e $(srcdir)/.git \
                && command -v git > /dev/null; then \
    touch -m -t "$$(git log -1 --format=%cd \
      --date=format-local:%Y%m%d%H%M.%S)" $(srcdir)/NEWS; \
  fi
However I've realized that this needs to happen earlier and probably has to be run during ./configure time, because the doc/version.texi file is generated on the first build, before running make dist, and for some reason the file is not rebuilt at release time. The Automake texinfo integration is a bit inflexible about providing hooks to extend the dependency tracking. The method to address these differences isn't really important, and they change over time depending on preferences. What is important is that the differences are eliminated.

ChangeLog Traditionally ChangeLog files were manually prepared, and still are for some projects. I maintain git2cl but recently I've settled on gnulib's gitlog-to-changelog because doing so avoids another build dependency (although the output formatting is different and arguably worse for my git commit style). So the ChangeLog files are generated from git history. This means a shallow clone will not produce the same ChangeLog file, depending on how deep it was cloned. For Libntlm I simply disabled use of a generated ChangeLog because I wanted to support an even more extreme form of reproducibility: I wanted to be able to reproduce the full make dist source archives from a minimal git-archive source archive. However for other projects I've settled on a middle ground. I realized that for git describe to produce reproducible outputs, the shallow clone needs to include the last release tag. So it felt acceptable to assume that the clone is not minimal, but instead has some but not all of the history. I settled on the following recipe to produce ChangeLogs covering all changes since the last release.
dist-hook: gen-ChangeLog
.PHONY: gen-ChangeLog
gen-ChangeLog:
  $(AM_V_GEN)if test -e $(srcdir)/.git; then			\
    LC_ALL=en_US.UTF-8 TZ=UTC0					\
    $(top_srcdir)/build-aux/gitlog-to-changelog			\
       --srcdir=$(srcdir) --					\
       v$(PREV_VERSION)~.. > $(distdir)/cl-t &&			\
         printf '\n\nSee the source repo for older entries\n'	\
         >> $(distdir)/cl-t &&					\
         rm -f $(distdir)/ChangeLog &&				\
         mv $(distdir)/cl-t $(distdir)/ChangeLog;  		\
  fi
I'm undecided about the usefulness of generated ChangeLog files within make dist archives. Before we have stable and secure archival of git repositories widely implemented, I can see some utility of this in case we lose all copies of the upstream git repositories. I can sympathize with the concept of "ChangeLog files died when we started to generate them from git logs": the files no longer serve any purpose, and we can ask people to go look at the git log instead of reading these generated non-source files.

Long-term reproducible trusted build environment Distributions come and go, and old releases of them go out of support and often stop working. Which build environment should I choose to build the official release archives? To my knowledge only Guix offers a reliable way to re-create an older build environment (guix time-machine) that has bootstrappable properties for additional confidence. However I had two difficult problems here. The first one was that I needed Guix container images that were usable in GitLab CI/CD pipelines, and this side-tracked me for a while. The second one delayed my effort for many months, and I was inclined to give up. Libidn distributes a C# implementation. Some of the C# source code files included in the release tarball are generated. By what? You guessed it: by a C# program, with the source code included in the distribution. This means nobody could reproduce the source tarball of Libidn without trusting someone else's C# compiler binaries, which were built from binaries of earlier releases, chaining back into something that nobody ever attempts to build any more and which likely fails to build due to bit-rot. I had two basic choices: either remove the C# implementation from Libidn (which may be a good idea for other reasons, since the C and C# are unrelated implementations) or build the source tarball on some binary-only distribution like Trisquel. Neither felt appealing to me, but a late Christmas gift of a reproducible Mono came to Guix that resolved this.
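A hedged sketch of what pinning such a build environment can look like (the commit hash and package list are placeholders, not the actual Libidn setup):
# Sketch: recreate a pinned Guix build environment with 'guix time-machine'
# and run the release build inside an isolated container.
guix time-machine --commit=0123456789abcdef0123456789abcdef01234567 -- \
  shell --container bash coreutils gcc-toolchain autoconf automake gettext texinfo tar gzip -- \
  sh -c './bootstrap && ./configure && make dist'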

Embedded images in Texinfo manual For Libidn one section of the manual has an image illustrating some concepts. The PNG, PDF and EPS outputs were generated via fig2dev from a *.fig file (hello 1985!) that I had stored in git. Over time, I had also started to store the generated outputs because of build issues. At some point, it was possible to post-process the PDF outputs with grep to remove some timestamps, however with compression this is no longer possible and actually the grep command I used resulted in a 0-byte output file. So my embedded binaries in git were no longer reproducible. I first set out to fix this by post-processing things properly, however I then realized that the *.fig file is not really easy to work with in a modern world. I wanted to create an image from some text-file description of the image. Eventually, via the Guix manual on guix graph, I came to re-discover the graphviz language and its tool called dot (hello 1993!). All well then? Oh no, the PDF output embeds timestamps. Binary editing of PDFs no longer works through simple grep, remember? I was back where I started, and after some (soul- and web-) searching I discovered that Ghostscript (hello 1988!) pdfmarks could be used to modify things here. Cooperating with Automake's texinfo rules related to make dist proved once again a worthy challenge, and eventually I ended up with a Makefile.am snippet to build images that could be condensed into:
info_TEXINFOS = libidn.texi
libidn_TEXINFOS += libidn-components.png
imagesdir = $(infodir)
images_DATA = libidn-components.png
EXTRA_DIST += components.dot
DISTCLEANFILES = \
  libidn-components.eps libidn-components.png libidn-components.pdf
libidn-components.eps: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Teps < $< > $@.tmp
  $(AM_V_at)! grep %%CreationDate $@.tmp
  $(AM_V_at)mv $@.tmp $@
libidn-components.pdf: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpdf < $< > $@.tmp
# A simple sed on CreationDate is no longer possible due to compression.
# 'exiftool -CreateDate' is alternative to 'gs', but adds ~4kb to file.
# Ghostscript adds <1kb.  Why can't 'dot' avoid setting CreationDate?
  $(AM_V_at)printf '[ /ModDate ()\n  /CreationDate ()\n  /DOCINFO pdfmark\n' > pdfmarks
  $(AM_V_at)$(GS) -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=$@.tmp2 $@.tmp pdfmarks
  $(AM_V_at)rm -f $@.tmp pdfmarks
  $(AM_V_at)mv $@.tmp2 $@
libidn-components.png: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpng < $< > $@.tmp
  $(AM_V_at)mv $@.tmp $@
pdf-recursive: libidn-components.pdf
dvi-recursive: libidn-components.eps
ps-recursive: libidn-components.eps
info-recursive: $(top_srcdir)/.version libidn-components.png
Surely this can be improved, but I'm not yet certain which way forward is best. I like having a text representation as the source of the image. I'm sad that the new image size is ~48kb compared to the old image size of ~1kb. I tried using exiftool -CreateDate as an alternative to Ghostscript, but using it to remove the timestamp added ~4kb to the file size, and naturally I was appalled by this ignorance of impending doom.

Test reproducibility of tarball Again, you need to continuously test the properties you desire. This means building your project twice using different environments and comparing the results. I've settled on a small GitLab CI/CD pipeline job that performs a bit-by-bit comparison of generated make dist archives. It also performs a bit-by-bit comparison of generated git-archive artifacts. See the Libidn2 .gitlab-ci.yml 0-compare job, which essentially is:
0-compare:
  image: alpine:latest
  stage: repro
  needs: [ B-AlmaLinux8, B-AlmaLinux9, B-RockyLinux8, B-RockyLinux9, B-Trisquel11, B-Ubuntu2204, B-PureOS10, B-Debian11, B-Devuan5, B-Debian12, B-gcc, B-clang, B-Guix, R-Guix, R-Debian12, R-Ubuntu2404, S-Trisquel10, S-Ubuntu2004 ]
  script:
  - cd out
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
  - sha256sum */*.tar.* */*/*.tar.* | grep -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
  - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
# Confirm modern git-archive tarball reproducibility
  - cmp b-almalinux8/src/*.tar.gz b-almalinux9/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-rockylinux8/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-rockylinux9/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-debian12/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-devuan5/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-guix/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-debian12/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-ubuntu2404/src/*v2.*.tar.gz
# Confirm old git-archive (export-subst but long git describe) tarball reproducibility
  - cmp b-trisquel11/src/*.tar.gz b-ubuntu2204/src/*.tar.gz
# Confirm really old git-archive (no export-subst) tarball reproducibility
  - cmp b-debian11/src/*.tar.gz b-pureos10/src/*.tar.gz
# Confirm 'make dist' generated tarball reproducibility
  - cmp b-almalinux8/*.tar.gz b-rockylinux8/*.tar.gz
  - cmp b-almalinux9/*.tar.gz b-rockylinux9/*.tar.gz
  - cmp b-pureos10/*.tar.gz b-debian11/*.tar.gz
  - cmp b-devuan5/*.tar.gz b-debian12/*.tar.gz
  - cmp b-trisquel11/*.tar.gz b-ubuntu2204/*.tar.gz
  - cmp b-guix/*.tar.gz r-guix/*.tar.gz
# Confirm 'make dist' from git-archive tarball reproducibility
  - cmp s-trisquel10/*.tar.gz s-ubuntu2004/*.tar.gz
Notice that I discovered that git archive outputs differ over time too, which is natural but a bit of a nuisance. The output of the job is illuminating in that all SHA256 checksums of generated tarballs are included; for example, the libidn2 v2.3.8 job log:
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293  b-trisquel11/libidn2-2.3.8.tar.gz
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293  b-ubuntu2204/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded  b-debian11/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded  b-pureos10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8  s-trisquel10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8  s-ubuntu2004/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a  b-almalinux8/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a  b-rockylinux8/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-clang/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-debian12/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-devuan5/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-gcc/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  r-debian12/libidn2-2.3.8.tar.gz
acf5cbb295e0693e4394a56c71600421059f9c9bf45ccf8a7e305c995630b32b  r-ubuntu2404/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1  b-almalinux9/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1  b-rockylinux9/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a  b-guix/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a  r-guix/libidn2-2.3.8.tar.gz
I'm sure I have forgotten or suppressed some challenges (sprinkling LANG=C TZ=UTC0 helps) related to these goals, but my hope is that this discussion of solutions will inspire you to implement these concepts for your software project too. Please share your thoughts and additional insights in a comment below. Enjoy Happy Hacking in the course of practicing this!
