Search Results: "xam"

29 August 2025

Ravi Dwivedi: Installing Debian With Btrfs and Encryption

Motivation On the 8th of August 2025 (a day before the Debian Trixie release), I was upgrading my personal laptop from Debian Bookworm to Trixie. It was a major update. However, the update didn't go smoothly, and I ran into some errors. From the Debian support IRC channel, I got to know that it would be best if I removed the texlive packages. However, it was not so easy to just remove texlive with a simple apt remove command. I had to remove the texlive packages from /usr/bin. Then I ran into other errors. Hours after I started the upgrade, I realized I preferred having my system as it was before, as I had to travel to Noida the next day. Needless to say, I wanted to go to sleep rather than fix my broken system. If only I had a way to go back to my system as it was before I started upgrading, it would have saved me a lot of trouble. I ended up installing Trixie from scratch. It turns out that there was a way to recover the state before the upgrade - using Timeshift to roll back the system to a state in the past (in our example, the state before the upgrade process started). However, it needs the Btrfs filesystem with appropriate subvolumes, which the Debian installer does not provide in its guided partitioning menu. I set it up a few weeks after the above-mentioned incident. Let me demonstrate how it works.
Check the screenshot above. It shows a list of snapshots made by Timeshift. Some of them were made by me manually. Others were made by Timeshift automatically as per the routine I have set up - hourly backups, weekly backups, etc. In the above-mentioned major update, I could have just taken a snapshot using Timeshift before performing the upgrade and rolled back to that snapshot when I found that I could not spend more time fixing my installation errors. Then I could have performed the upgrade later.

Installation In this tutorial, I will cover how I installed Debian with Btrfs and disk encryption, along with creating the subvolumes @ for root and @home for /home so that I can use Timeshift to create snapshots. These snapshots are kept on the same disk where Debian is installed, and the use case is to roll back to a working system in case I mess something up, or to recover an accidentally deleted file. I went through countless tutorials on the Internet, but I didn't find a single tutorial covering both disk encryption and the above-mentioned subvolumes (on Debian). Debian doesn't create the desired subvolumes by default; therefore, the process requires some manual steps, which beginners may not be comfortable performing. Beginners can try distros such as Fedora and Linux Mint, as their installation includes Btrfs with the required subvolumes. Furthermore, it is pertinent to note that I used Debian Trixie's DVD ISO on a real laptop (not a virtual machine) for my installation. Trixie is the codename for the current stable version of Debian. Then I took the screenshots in a virtual machine by repeating the process. Moreover, a couple of screenshots are from the installation I did on the real laptop. Let's start the tutorial by booting up the Debian installer.
The above screenshot shows the first screen we see on the installer. Since we want to choose Expert Install, we select Advanced Options in the screenshot above.
Let's select the Expert Install option in the above screenshot. This is because we want to create subvolumes after the installer is done with partitioning, and only then proceed to installing the base system. Non-expert install modes proceed directly to installing the system right after creating the partitions, without pausing for us to create the subvolumes.
After selecting the Expert Install option, you will get the screen above. I will skip ahead to partitioning from here and leave out the intermediate steps such as choosing the language and region, connecting to Wi-Fi, etc. For your reference, I did create the root user.
Let s jump right to the partitioning step. Select the Partition disks option from the menu as shown above.
Choose Manual.
Select your disk where you would like to install Debian.
Select Yes when asked about creating a new empty partition table.
I chose the msdos option as I am not using UEFI. If you are using UEFI, then you need to choose the gpt option. Also, your steps will (slightly) differ from mine if you are using UEFI. In that case, you can watch this video by the YouTube channel EF Linux in which he creates an EFI partition. As he doesn't cover disk encryption, you can continue reading this post after following the steps corresponding to EFI.
Select the free space option as shown above.
Choose Create a new partition.
I chose the partition size to be 1 GB.
Choose Primary.
Choose Beginning.
Now, I got to this screen.
I changed mount point to /boot and turned on the bootable flag and then selected Done setting up the partition.
Now select free space.
Choose the Create a new partition option.
I made the partition size equal to the remaining space on my disk. I do not intend to create a swap partition, so I do not need more space.
Select Primary.
Select the Use as option to change its value.
Select physical volume for encryption.
Select Done setting up the partition.
Now select Configure encrypted volumes.
Select Yes.
Select Finish.
Selecting Yes erases the partition by overwriting it with random data, which can take a lot of time. If you have hours to spare for this step (in case your SSD is something like 1 TB), then I would recommend selecting Yes. Otherwise, you could select No and compromise on the quality of the encryption. After this, you will be asked to enter a passphrase for disk encryption and confirm it. Please do so. I forgot to take the screenshot for that step.
Now select that encrypted volume as shown in the screenshot above.
Here we will change a couple of options which will be shown in the next screenshot.
In the Use as menu, select btrfs journaling file system.
Now, click on the mount point option.
Change it to / - the root file system.
Select Done setting up the partition.
This is a preview of the partitioning after performing the above-mentioned steps.
If everything is okay, proceed with the Finish partitioning and write changes to disk option.
The installer is reminding us to create a swap partition. I proceeded without it as I planned to add swap after the installation.
If everything looks fine, choose Yes to write the changes to disk.
Now we are done with partitioning and we are shown the screen in the screenshot above. If we had not selected the Expert Install option, the installer would have proceeded to install the base system without asking us. However, we want to create subvolumes before proceeding to install the base system. This is the reason we chose Expert Install. Now press Ctrl + F2.
You will see the screen as in the above screenshot. It says Please press Enter to activate this console. So, let's press Enter.
After pressing Enter, we see the above screen.
The screenshot above shows the steps I performed in the console. I followed the already mentioned video by EF Linux for this part and adapted it to my situation (he doesn't encrypt the disk in his tutorial). First we run df -h to have a look at how our disk is partitioned. In my case, the output was:
# df -h
Filesystem              Size  Used  Avail   Use% Mounted on
tmpfs                   1.6G  344.0K  1.6G    0% /run
devtmpfs                7.7G       0  7.7G   0% /dev
/dev/sdb1               3.7G    3.7G    0   100% /cdrom
/dev/mapper/sda2_crypt  952.9G  5.8G  950.9G  0% /target
/dev/sda1               919.7M  260.0K  855.8M  0% /target/boot
df -h shows us that /dev/mapper/sda2_crypt and /dev/sda1 are mounted on /target and /target/boot respectively. Let's unmount them, starting with the nested /target/boot mount:
# umount /target/boot
# umount /target
Next, let's mount our root filesystem to /mnt.
# mount /dev/mapper/sda2_crypt /mnt
Let's go into the /mnt directory.
# cd /mnt
Upon listing the contents of this directory, we get:
/mnt # ls
@rootfs
The Debian installer has created a subvolume @rootfs automatically. However, we need the subvolumes to be named @ and @home. Therefore, let's rename the @rootfs subvolume to @.
/mnt # mv @rootfs @
Listing the contents of the directory again, we get:
/mnt # ls
@
We have only one subvolume right now. Therefore, let us go ahead and create another subvolume, @home.
/mnt # btrfs subvolume create @home
Create subvolume './@home'
If we perform ls now, we will see there are two subvolumes:
/mnt # ls
@ @home
Let us mount /dev/mapper/sda2_crypt (with the subvol=@ option) to /target:
/mnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@ /dev/mapper/sda2_crypt /target/
Now we need to create a directory for /home.
/mnt # mkdir /target/home/
Now we mount the /home directory with the subvol=@home option.
/mnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@home /dev/mapper/sda2_crypt /target/home/
Now mount /dev/sda1 to /target/boot.
/mnt # mount /dev/sda1 /target/boot/
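Before moving on to fstab, it can be reassuring to check that the subvolumes are mounted where we expect. A quick check (the btrfs tool is available in this console, since we already used it to create @home):
/mnt # btrfs subvolume list /target
/mnt # mount | grep target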
Now we need to add these options to the fstab file, which is located at /target/etc/fstab. Unfortunately, vim is not installed in this console; the only editor available is nano.
nano /target/etc/fstab
Edit your fstab file to look similar to the one in the screenshot above. I am pasting the fstab file contents below for easy reference.
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# systemd generates mount units based on this file, see systemd.mount(5).
# Please run 'systemctl daemon-reload' after making changes here.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/sda2_crypt /        btrfs   noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@ 0       0
/dev/mapper/sda2_crypt /home    btrfs   noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@home 0       0
# /boot was on /dev/sda1 during installation
UUID=12842b16-d3b3-44b4-878a-beb1e6362fbc /boot           ext4    defaults        0       2
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
Please double-check the fstab file before saving it. In nano, you can press Ctrl+O followed by Enter to save the file. Then press Ctrl+X to quit nano. Now, preview the fstab file by running
cat /target/etc/fstab
and verify that the entries are correct, otherwise you will be booted into an unusable, broken system after the installation is complete. Next, press Ctrl + Alt + F1 to go back to the installer.
Proceed to Install the base system.
Screenshot of the Debian installer installing the base system.
I chose the default option here - linux-image-amd64. After this, the installer will ask you a few more questions. For desktop environment, I chose KDE Plasma. You can choose the desktop environment as per your liking. I will not cover the rest of the installation process and assume that you were able to install from here.

Post installation Let's jump to our freshly installed Debian system. Since I created a root user, I added the user ravi to the sudoers file (/etc/sudoers) so that ravi can run commands with sudo. Follow this if you would like to do the same. Now we set up zram as swap. First, install zram-tools.
sudo apt install zram-tools
Now edit the file /etc/default/zramswap and make sure the following lines are uncommented:
ALGO=lz4
PERCENT=50
Now, run
sudo systemctl restart zramswap
If you run lsblk now, you should see the below-mentioned entry in the output:
zram0          253:0    0   7.8G  0 disk  [SWAP]
This shows us that zram has been activated as swap. Now we install timeshift, which can be done by running
sudo apt install timeshift
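Besides the GUI, Timeshift also has a command-line interface, which is handy for taking a quick snapshot right before a risky operation such as a major upgrade. A minimal sketch (flag names as listed by timeshift --help; on our Btrfs layout it will use Btrfs snapshots):
sudo timeshift --create --comments "before upgrade"
sudo timeshift --list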
Once Timeshift is installed, run it and schedule snapshots as you please. We are done now. Hope the tutorial was helpful. See you in the next post, and let me know if you have any suggestions or questions on this tutorial.

28 August 2025

Samuel Henrique: Debian 13: My list of exciting new features

A bunch of screenshots overlaid on top of each other showing different tools: lazygit, gnome settings, gnome system monitor, powerline-go, and the wcurl logo; the text at the top says 'Debian 13: My list of exciting new features', and there's a Debian logo in the middle of the image

Beyond Debian: Useful for other distros too Every two years Debian releases a new major version of its Stable series, meaning the differences between consecutive Debian Stable releases represent two years of new developments, both in Debian as an organization and its native packages, and in all the other packages that are also shipped by other distributions (and which are getting into this new Stable release). If you're not paying close attention to everything that's going on all the time in the Linux world, you miss a lot of the nice new features and tools. It's common for people to only realize there's a cool new trick available years after it was first introduced. Given these considerations, the tips that I'm describing will eventually be available in whatever other distribution you use, be it because it's a Debian derivative or because it just got the same feature from the upstream project. I'm not going to list "passive" features (as good as they can be); the focus here is on new features that might change how you configure and use your machine, with a mix between productivity and performance.

Debian 13 - Trixie I have been a Debian Testing user for longer than 10 years now (and I recommend it for non-server users), so I'm not usually keeping track of all the cool features arriving in the new Stable releases because I'm continuously receiving them through the Debian Testing rolling release. Nonetheless, as a Debian Developer I'm in a good position to point out the ones I can remember. I would also like other Debian Developers to do the same, as I'm sure I would learn something new. The Debian 13 release notes contain a "What's new" section, which lists the first two items here and a few other things; in other words, take my list as an addition to the release notes. Debian 13 was released on 2025-08-09, and these are nice things you shouldn't miss in the new release, with a bonus one not tied to the Debian 13 release.

1) wcurl wcurl logo Have you ever had to download a file from your terminal using curl and not remembered the parameters needed? I did. Nowadays you can use wcurl: "a command line tool which lets you download URLs without having to remember any parameters." Simply call wcurl with one or more URLs as parameters and it will download all of them in parallel, performing retries, choosing the correct output file name, following redirects, and more. Try it out:
wcurl example.com
wcurl comes installed as part of the curl package on Debian 13 and in any other distribution you can imagine, starting with curl 8.14.0. I've written more about wcurl in its release announcement, and I've done a lightning talk presentation at DebConf24, which is linked in the release announcement.

2) HTTP/3 support in curl Debian has become the first stable Linux distribution to ship curl with support for HTTP/3. I've written about this in July 2024, when we first enabled it. Note that we first switched the curl CLI to GnuTLS, but then ended up releasing the curl CLI linked with OpenSSL (as support arrived later). Debian was the first stable Linux distro to enable it; among rolling-release-based distros, Gentoo enabled it first in their non-default flavor of the package, and Arch Linux did it three months before we pushed it to Debian Unstable/Testing/Stable-backports, kudos to them! HTTP/3 is not used by default by the curl CLI, you have to enable it with --http3 or --http3-only. Try it out:
curl --http3 https://www.example.org
curl --http3-only https://www.example.org
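To confirm which protocol version was actually negotiated, you can ask curl to print it using its standard --write-out variable; a value of 3 means HTTP/3 was used:
curl --http3 -sS -o /dev/null -w '%{http_version}\n' https://www.example.org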

3) systemd soft-reboot Starting with systemd v254, there's a new soft-reboot option: a userspace-only reboot, much faster than a full reboot if you don't need to reboot the kernel. You can read the announcement in the systemd v254 GitHub release. Try it out:
# This will reboot your machine!
systemctl soft-reboot

4) apt --update Are you tired of being required to run sudo apt update just before sudo apt upgrade or sudo apt install $PACKAGE? So am I! The new --update option lets you do both things in a single command:
sudo apt --update upgrade
sudo apt --update install $PACKAGE
I love this, but it's still not where it should be; fingers crossed for a simple apt upgrade to behave like other package managers by updating its cache as part of the task, maybe in Debian 14? Try it out:
sudo apt upgrade --update
# The order doesn't matter
sudo apt --update upgrade
This is especially handy for container usage, where you have to update the apt cache before installing anything, for example:
podman run debian:stable /bin/bash -c 'apt install --update -y curl'

5) powerline-go powerline-go is a powerline-style prompt written in Golang, so it's much more performant than its Python alternative powerline. powerline-style prompts are quite useful to show things like the current status of the git repo in your working directory, exit code of the previous command, presence of jobs in the background, whether or not you're in an ssh session, and more. A screenshot of a terminal with powerline-go enabled, showing how the PS1 changes inside a git repository and when the last command fails Try it out:
sudo apt install powerline-go
Then add this to your .bashrc:
function _update_ps1() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs $(jobs -p | wc -l))"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.

    #set "?"
}

if [ "$TERM" != "linux" ] && [ -f "/usr/bin/powerline-go" ]; then
    PROMPT_COMMAND="_update_ps1; $PROMPT_COMMAND"
fi
Or this to .zshrc:
function powerline_precmd() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs ${${(%):%j}:-0})"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.

    #set "?"
}
If you'd like to have your prompt start on a new line, like I have in the screenshot above, you just need to add -newline to the powerline-go invocation in your .bashrc/.zshrc.
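For example, the .bashrc invocation above would change only by gaining the -newline flag:
PS1="$(/usr/bin/powerline-go -newline -error $? -jobs $(jobs -p | wc -l))"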

6) Gnome System Monitor Extension Tips 6 and 7 are for Gnome users. Gnome is now shipping a system monitor extension which lets you see the current load of your machine at a glance from the top bar. Screenshot of the top bar of Gnome with the system monitor extension enabled, showing the load of: CPU, memory, network and disk I've found this quite useful for machines where I'm required to install third-party monitoring software that tends to randomly consume more resources than it should. If I feel like my machine is struggling, I can quickly glance at its load to verify whether it's getting overloaded by some process. The extension is not as complete as system-monitor-next, not showing temperatures or histograms, but at least it's officially part of Gnome, easy to install and supported by them. Try it out:
sudo apt install gnome-system-monitor gnome-shell-extension-manager
And then enable the extension from the "Extension Manager" application.
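If you prefer the terminal, the gnome-extensions CLI can also list and enable extensions. A sketch (the exact UUID below is an assumption; check the output of the list command for the real one on your system):
gnome-extensions list | grep -i monitor
gnome-extensions enable system-monitor@gnome-shell-extensions.gcampax.github.com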

7) Gnome setting for battery charging profile After having to learn more about batteries in order to get into FPV drones, I've come to have a bigger appreciation for solutions that minimize the inevitable loss of capacity that accrues over time. There's now a "Battery Charging" setting (under the "Power" section) which lets you choose between two different profiles: "Maximize Charge" and "Preserve Battery Health". A screenshot of the Gnome settings for Power showing the options for Battery Charging On supported laptops, this setting is an easy way to set thresholds for when charging should start and stop, just like you could do with the tlp package, but now from the Gnome settings. To increase the longevity of my laptop battery, I always keep it at "Preserve Battery Health" unless I'm traveling. What I would like to see next is support for choosing different "Power Modes" based on whether the laptop is plugged in, and based on the battery charge percentage. There's a GNOME issue tracking this feature, but there's some pushback on whether this is the right thing to expose to users. In the meantime, there are some workarounds mentioned in that issue which people who really want this feature can follow. If you would like to learn more about batteries, Battery University is a great starting point, besides getting into FPV drones and being forced to handle batteries without a Battery Management System (BMS). And if by any chance this sparks your interest in FPV drones, Joshua Bardwell's YouTube channel is a great resource: @JoshuaBardwell.

8) Lazygit Emacs users are already familiar with the legendary magit, a terminal-based UI for git. Lazygit is an alternative for non-emacs users; you can integrate it with neovim or just use it directly. I'm still playing with lazygit and haven't integrated it into my workflows, but so far it has been a pleasant experience. Screenshot of lazygit from the Debian curl repository, showing a selected commit and its diff, besides the other things from the lazygit UI You should check out the demos from the lazygit GitHub page. Try it out:
sudo apt install lazygit
And then call lazygit from within a git repository.

9) neovim neovim has been shipped in Debian since 2016, but upstream has been doing a lot of work to improve the out-of-the-box experience in the last couple of years. If you're a neovim poweruser, you're likely not installing it from the official repositories, but for those who are, Debian 13 comes with version 0.10.4, which brings the following improvements compared to the version in Debian 12:
  • Treesitter support for C, Lua, Markdown, with the possibility of adding any other languages as needed;
  • Better spellchecking due to treesitter integration (spellsitter);
  • Mouse support enabled by default;
  • Commenting support out-of-the-box; Check :h commenting for details, but the tl;dr is that you can use gcc to comment the current line and gc to comment the current selection.
  • OSC52 support. Especially handy for those using neovim over an ssh connection, this protocol lets you copy something from within the neovim process into the clipboard of the machine you're using to connect through ssh. In other words, you can copy from neovim running in a host over ssh and paste it in the "outside" machine.

10) [Bonus] Running old Debian releases The bonus tip is not specific to the Debian 13 release, but something I've recently learned in the #debian-devel IRC channel. Did you know there are usable container images for all past Debian releases? I'm not talking "past" as in "some of the older releases", I'm talking past as in "literally every Debian release, including the very first one". Tianon Gravi "tianon" is the Debian Developer responsible for making this happen, kudos to him! There's a small gotcha that the releases Buzz (1.1) and Rex (1.2) require a 32-bit host, otherwise you will get the error Out of virtual memory!, but starting with Bo (1.3) all should work on amd64/arm64. Try it out:
sudo apt install podman

podman run -it docker.io/debian/eol:bo
Don't be surprised to notice that apt/apt-get is not available inside the container; that's because apt first appeared in Debian Slink (2.1).
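If you're curious which releases are available as tags for that image, you can list them with skopeo (assuming the registry exposes the tag list, which Docker Hub does):
sudo apt install skopeo
skopeo list-tags docker://docker.io/debian/eol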

Changes since publication

2025-08-30
  • Mention that Arch also enabled HTTP/3.

Valhalla's Things: 1840s Underwear

Posted on August 28, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear
A woman wearing a knee-length shift with very short pleated sleeves and drawers that are a bit longer than needed to be ankle-length. The shift is too wide at the top and had to have a pleat taken in the center front, but the sleeves are still falling down. She is also wearing a black long sleeved t-shirt and leggings under said underwear, for decency. A bit more than a year ago, I had been thinking about making myself a cartridge pleated skirt. For a number of reasons, one of which is the historybounding potential, I've been thinking pre-crinoline, so somewhere around the 1840s, and that's a completely new era for me, which means: new underwear. Also, the 1840s are pre-sewing machine, and I was already in a position where I had more chances to handsew than to machine sew, so I decided to embrace the slowness and sew 100% by hand, not even using the machine for straight seams. A woman turning fast enough that her petticoat extends a considerable distance from the body. The petticoat is white with a pattern of cording from the hem to just below hip level, with a decreasing number of rows of cording going up. If I remember correctly, I started with the corded petticoat, looking around the internet for instructions, and then designing my own based on the practicality of using modern wide fabric from my stash (and specifically some DITTE from costumers' favourite source of dirt-cheap cotton, IKEA). Around the same time I had also acquired a sashiko kit, and I used the Japanese technique for sewing running stitches, pushing the needle with a thimble that covers the base of the middle finger, and I can confirm that for this kind of thing it's great! I've since worn the petticoat a few times for casual / historyBounding / folkwearBounding reasons, during the summer, and I can confirm it's comfortable to use; I guess that during the winter it could be nice to add a flannel layer below it. The technical drawing and pattern for drawers from the book: each leg is cut out of a rectangle of fabric folded along the length, the leg is tapered equally, while the front is tapered more than the back, and comes to a point below the top of the original rectangle. Then I proceeded with the base layers: I had been browsing through The workwoman's guide and that provided plenty of examples, and I selected the basic ankle-length drawers from page 53 and the alternative shift on page 47. As for fabric, I had (and still have) a significant lack of underwear linen in my stash, but I had plenty of cotton voile that I had not used in a while: not very historically accurate for plain underwear, but quite suitable for a wearable mockup. Working with an 1830s source had an interesting aspect: other than the usual, mildly annoying, imperial units, it also made heavy use of a few obsolete units, especially nails, which qalc, my usual calculator and converter, doesn't support. Not a big deal, because GNU units came to the rescue, and that one knows a lot of obscure and niche units, and it's quite easy to add those that are missing1. Working on this project also made me freshly aware of something I had already noticed: converting instructions for machine sewing garments into instructions for hand sewing them is usually straightforward, but the reverse is not always true. Starting from machine stitching, you can usually convert straight stitches into backstitches (or running backstitches), zigzag and overlocking into overcasting, and get good results.
In some cases you may want to use specialist hand stitches that don't really have a machine equivalent, such as buttonhole stitches instead of simply overcasting the buttonhole, but that's it. Starting from hand stitching, instead, there are a number of techniques that could be converted to machine stitching, but they involve a lot of visible topstitching that wasn't there in the original instructions, or at times are almost impossible to do by machine, if they involve whipstitching together finished panels on seams that are subject to strong tension. Anyway, halfway through working with the petticoat I cut both the petticoat and the drawers at the same time, for efficiency in fabric use, and then started sewing the drawers. The top third or so of the drawers, showing a deep waistband that is closed with just one button at the top, and the front opening with finished edges that continue through the whole crotch, with just the overlap of fabric to provide coverage. The book only provided measurements for one size (moderate), and my fabric was a bit too narrow to make them that size (not that I have any idea what hip circumference a person of moderate size was supposed to have), so the result is just wide enough to be comfortably worn, but I think that when I make another pair I'll try to make them a bit wider. On the other hand they are a bit too long, but I think that I'll fix it by adding a tuck or two. Not a big deal, anyway. The same woman as in the opening image, from the back: the shift droops significantly in the center back, and the shoulder straps have fallen down on the top of the arms. The shift gave me a few more issues: I used the recommended gusset size, and ended up with a shift that was way too wide at the top, so I had to take a box pleat in the center front and back, which changed the look and wear of the garment. I have adjusted the instructions to make the gussets wider, and in the future I'll make another shift following those. Even with the pleat, the narrow shoulder straps are set quite far to the sides, and they tend to droop, and I suspect that this is to be expected from the way this garment is made. The fact that there are buttonholes on the shoulder straps to attach to the corset straps and prevent the issue is probably a hint that this behaviour was to be expected. The technical drawing of the shift from the book, showing the top of the body, two trapezoidal shoulder straps, the pleated sleeves and a ruffle on the front edge. I've also updated the instructions so that the shoulder straps are a bit wider, to look more like the ones in the drawing from the book. Making a corset suitable for the time period is something that I will probably do, but not in the immediate future; even just wearing the shift under a later midbust corset with no shoulder straps helps. I'm also not sure what the point of the bosom gores is, as they don't really give more room to the bust where it's needed, but to the high bust where it's counterproductive. I also couldn't find images of original examples made from this pattern to see if they were actually used, so in my next make I may just skip them. Sleeve detail, showing box pleats that are about 2 cm wide and a few mm distant from each other all along the circumference, neatly sewn into the shoulder strap on one side and the band at the other side.
On the other hand, I'm really happy with how cute the short sleeves look, and if2 I ever make the other cut of shift from the same book, with the front flaps, I'll definitely use these pleated sleeves rather than the straight ones that were also used at the time. As usual, all of the patterns have been published on my website under a Free license.

  1. My ~/.units file currently contains definitions for beardseconds, bananas and the more conventional Nm and NeL (linear mass density of fibres).
  2. yeah, right. when.

27 August 2025

Russell Coker: ZRAM and VMs

I've just started using zram for swap on VMs. The use of compression for swap in Linux apparently isn't new, it's been in the Linux kernel since version 3.2 (since 2012). But until recent years I hadn't used it. When I started using Mobian (the Debian distribution for phones) zram was in the default setup, it basically works and I never needed to bother with it, which is exactly what you want from such a technology. After seeing its benefits in Mobian I started using it on my laptops, where it worked well. Benefits of ZRAM ZRAM means that instead of paging data to storage it is compressed to another part of RAM. That means no access to storage, which is a significant benefit if storage is slow (typical for phones) or if storage wearing out is a problem. For servers you typically have SSDs that are fast and last for significant write volumes, for example the 120G SSDs referenced in my blog post about swap (not) breaking SSD [1] are running well in my parents' PC because they outlasted all the other hardware connected to them, and 120G isn't usable for anything more demanding than my parents' use nowadays. Those are Intel 120G 2.5" DC grade SATA SSDs. For most servers ZRAM isn't a good choice as you can just keep doing IO on the SSDs for years. A server that runs multiple VMs is a special case because you want to isolate the VMs from each other. Support for quotas for storage IO in Linux isn't easy to configure, while limiting the number of CPU cores is very easy. If a system or VM using ZRAM for swap starts paging excessively the bottleneck will be CPU; this probably isn't going to be great on a phone with a slow CPU, but on a server class CPU it will be less of a limit. Whether compression is slower or faster than SSD is a complex issue, but it will definitely be just a limit for that VM. When I set up a VM server I want to have some confidence that a DoS attack or configuration error on one VM isn't going to destroy the performance of other VMs. If the VM server has 4 cores (the smallest VM server I run) and no VM has more than 2 cores then I know that the system can still run adequately if half the CPU performance is being wasted. Some servers I run have storage limits that make saving the disk space for swap useful. For servers I run in Hetzner (currently only one server but I have run up to 6 at various times in the past) the storage is often limited; Hetzner seems to typically have storage that is 8× the size of RAM, so if you have many VMs configured with the swap that they might need, in the expectation that usually at most one of them will actually be swapping, then it can make a real difference to usable storage. 5% of storage used for swap files isn't uncommon or unreasonable. Big Servers I am still considering the implications of zram on larger systems. If I have a ML server with 512G of RAM would it make sense to use it? It seems plausible that a system might need 550G of RAM and zram could make the difference between jobs being killed with OOM and the jobs just completing. The CPU overhead of compression shouldn't be an issue, as when you have dozens of cores in the system having one or two used for compression is no big deal. If a system is doing strictly ML work there will be a lot of data that can't be compressed, so the question is how much of the memory is raw input data and the weights used for calculations and how much is arrays with zeros and other things that are easy to compress.
With a big server nothing less than 32G of swap will make much difference to the way things work, and if you have 32G of data being actively paged then the fastest NVMe devices probably won't be enough to give usable performance. As zram uses one stream per CPU core, if you have 44 cores that means 44 compression streams, which should handle greater throughput. I'll write another blog post if I get a chance to test this.
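For reference, here is a minimal manual setup sketch using util-linux's zramctl (Debian's zram-tools package, or Mobian's default setup, automates the equivalent; the size and algorithm below are only illustrative):
# load the zram module, which creates /dev/zram0
modprobe zram
# configure the device: zstd compression, 8G of uncompressed capacity
zramctl /dev/zram0 --algorithm zstd --size 8G
# format and enable it as swap, with higher priority than any disk-backed swap
mkswap /dev/zram0
swapon --priority 100 /dev/zram0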

25 August 2025

Gunnar Wolf: The comedy of computation, or, how I learned to stop worrying and love obsolescence

This post is a review for Computing Reviews of The comedy of computation, or, how I learned to stop worrying and love obsolescence, a book published by Stanford University Press
The Comedy of Computation is not an easy book to review. It is a very enjoyable book that analyzes several examples of how being computational has been approached across literary genres in the last century: how authors of stories, novels, theatrical plays and movies, focusing on comedic genres, have understood the role of the computer in defining human relations, reactions and even self-image. Mangrum structures his work in six thematic chapters, where he presents different angles on human society: how racial stereotypes have advanced in human imagination and perception of a future where we interact with mechanical or computational partners (from mechanical tools performing jobs that were identified with racial profiles to intelligent robots that threaten to control society); the genericity of computers, and how people can be seen as generic, interchangeable characters, often fueled by the tendency people exhibit to confer anthropomorphic qualities to inanimate objects; people's desire to be seen as truly authentic, regardless of what that ultimately means; romantic involvement and romance-led stories (with the computer seen as a facilitator for human-to-human romances, a distractor away from them, or being itself a part of the couple); and the absurdity in anthropomorphization, in comparing fundamentally different aspects such as intelligence and speed at solving mathematical operations, as well as the absurdity presented blatantly as such by several techno-utopian visions. But presenting this as a linear set of concepts does not do justice to the book. Throughout the sections of each chapter, a different work serves as the axis: novels and stories, Hollywood movies, Broadway plays, some covers for Time magazine, a couple of works presenting the would-be future, even a romantic comedy entirely written by bots. And for each of them, Benjamin Mangrum presents a very thorough analysis, drawing relations and comparing with contemporary works, but also with Shakespeare, classical Greek myths, and a very long etcetera. This book is hard to review because of the depth of work the author did: reading it repeatedly made me look for other works, or at least longer references for them. Still, despite being a work of such erudition, Mangrum's text is easy and pleasant to read, without feeling heavy or written in an overly academic style. I very much enjoyed reading this book. It is certainly not a technical book about computers and society in any way; it is an exploration of human creativity and our understanding of the aspects the author has found central to understanding the impact of computing on humankind. However, there is one point I must mention before closing: I believe the editorial decision to present the work as a running text, with all the material conceptualized as footnotes presented as a separate final chapter of over 50 pages, detracts from the final result. Personally, I enjoy reading footnotes because they reveal the author's thought processes, even if they stray from the central line of thought. Even more, given my review copy was a PDF, I could not even keep said chapter open with one finger, bouncing back and forth. For all purposes, I missed out on the notes; now that I have finished reading and stumbled upon that chapter, I know I missed an important part of the enjoyment.

22 August 2025

Reproducible Builds (diffoscope): diffoscope 304 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 304. This version includes the following changes:
[ Chris Lamb ]
* Do not run jsondiff on files over 100KiB as the algorithm runs in O(n^2)
  time. (Closes: reproducible-builds/diffoscope#414)
* Fix test after the upload of systemd-ukify 258~rc3 (vs. 258~rc2).
* Move from a mono-utils dependency to versioned "mono-devel | mono-utils"
  dependency, taking care to maintain the [!riscv64] architecture
  restriction. (Closes: #1111742)
* Use sed -ne over awk -F= to avoid mangling dependency lines containing
  equals signs (=), for example version restrictions.
* Use sed backreferences when generating debian/tests/control to avoid DRY
  violations.
* Update copyright years.
[ Martin Joerg ]
* Avoid a crash in the HTML presenter when page limit is None.
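If you haven't used diffoscope before, a typical invocation compares two artifacts (packages, archives, directories, images, ...) and can also write an HTML report; the file names below are just placeholders:
diffoscope --html report.html package_1.0-1_amd64.deb package_1.0-2_amd64.deb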
You can find out more by visiting the project homepage.

20 August 2025

Antoine Beaupr : Encrypting a Debian install with UKI

I originally setup a machine without any full disk encryption, then somehow regretted it quickly after. My original reasoning was that this was a "play" machine so I wanted as few restrictions on accessing the machine as possible, which meant removing passwords, mostly. I actually ended up having a user password, but disabled the lock screen. Then I started using the device to manage my photo collection, and suddenly there was a lot of "confidential" information on the device that I didn't want to store in clear text anymore.

Pre-requisites So, how does one convert an existing install from plain text to full disk encryption? One way is to backup to an external drive, re-partition everything and copy things back, but that's slow and boring. Besides, cryptsetup has a cryptsetup-reencrypt command, surely we can do this in place? Having not set aside enough room for /boot, I briefly considered a "encrypted /boot" configuration and conversion (e.g. with this guide) but remembered grub's support for this is flaky, at best, so I figured I would try something else. Here, I'm going to guide you through how I first converted from grub to systemd-boot then to UKI kernel, then re-encrypt my main partition. Note that secureboot is disabled here, see further discussion below.

systemd-boot and Unified Kernel Image conversion systemd folks have been developing UKI ("unified kernel image") to ship kernels. The way this works is that the kernel, initrd, and UEFI boot stub are bundled into a single portable executable that lives in the EFI partition, as opposed to /boot. This neatly solves my problem, because I already have such a clear-text partition and won't need to re-partition my disk to convert. Debian has started some preliminary support for this. It's not the default, but I found this guide from Vasudeva Kamath which was pretty complete. Since the guide assumes some previous configuration, I had to adapt it to my case. Here's how I did the conversion to both systemd-boot and UKI, all at once. I could have perhaps done it one at a time, but doing both at once works fine. Before you start, make sure secureboot is disabled, see the discussion below.
  1. install systemd tools:
    apt install systemd-ukify systemd-boot
    
  2. Configure systemd-ukify, in /etc/kernel/install.conf:
    layout=uki
    initrd_generator=dracut
    uki_generator=ukify
    
    TODO: it doesn't look like this generates an initrd with dracut, do we care?
  3. Configure the kernel boot arguments with the following in /etc/kernel/uki.conf:
    [UKI]
    Cmdline=@/etc/kernel/cmdline
    
    The /etc/kernel/cmdline file doesn't actually exist here, and that's fine. Defaults are okay, as the image gets generated from your current /proc/cmdline. Check your /etc/default/grub and /proc/cmdline if you are unsure. You'll see the generated arguments in bootctl list below.
  4. Build the image:
    dpkg-reconfigure linux-image-$(uname -r)
    
  5. Check the boot options:
    bootctl list
    
    Look for a Type #2 (.efi) entry for the kernel.
  6. Reboot:
    reboot
    
You can tell you have booted with systemd-boot because (a) you won't see grub and (b) the /proc/cmdline will reflect the configuration listed in bootctl list. In my case, a systemd.machine_id variable is set there, and not in grub (compare with /boot/grub/grub.cfg). By default, the systemd-boot loader just boots, without a menu. You can force the menu to show up by un-commenting the timeout line in /boot/efi/loader/loader.conf, by hitting keys during boot (e.g. hitting "space" repeatedly), or by calling:
systemctl reboot --boot-loader-menu=0
See the systemd-boot(7) manual for details on that. I did not go through the secureboot process; presumably I had already disabled secureboot. This is trickier: because one needs a "special key" to sign the UKI image, one would need the collaboration of debian.org to get this working out of the box with the keys shipped on board most computers. In other words, if you want to make this work with secureboot enabled on your computer, you'll need to figure out how to sign the generated images before rebooting here, because otherwise you will break your computer. Otherwise, follow one of the existing guides on the topic.

Re-encrypting root filesystem Now that we have a way to boot an encrypted filesystem, we can switch to LUKS for our filesystem. Note that you can probably follow this guide if, somehow, you managed to make grub work with your LUKS setup, although as this guide shows, you'd need to downgrade the cryptographic algorithms, which seems like a bad tradeoff. We're using cryptsetup-reencrypt for this which, amazingly, supports re-encrypting devices on the fly. The trick is that it needs free space at the end of the partition for the LUKS header (which, I guess, makes it a footer), so we need to resize the filesystem to leave room for that, which is the trickiest bit. This is a possibly destructive operation. Be sure your backups are up to date, or be ready to lose all data on the device. We assume 512 byte sectors here. Check your sector size with fdisk -l and adjust accordingly.
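For example, a quick way to check the sector size (the exact output wording can vary between util-linux versions):
fdisk -l /dev/nvme0n1 | grep 'Sector size'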
  1. Before you perform the procedure, make sure requirements are installed:
    apt install cryptsetup systemd-cryptsetup cryptsetup-initramfs
    
    Note that this requires network access, of course.
  2. Reboot into a live image. I like GRML, but any Debian live image will work, possibly including the installer
  3. First, calculate how many sectors to free up for the LUKS header
    qalc> 32Mibyte / ( 512 byte )
      (32 mebibytes) / (512 bytes) = 65536
    
  4. Find the sector sizes of the Linux partitions:
    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    
    For example, here's the output with a /boot and / filesystem:
    $ sudo fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    /dev/nvme0n1p2 999424
    /dev/nvme0n1p3 3904979087
    
  5. Subtract the header size computed in step 3 from the root partition's sector count found in step 4:
    qalc> set precision 100
    qalc> 3904979087 - 65536
    
    Or, to do the last step and this one in one line (a consolidated sketch of this arithmetic also appears at the end of this section):
    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 - 65536 }'
    
  6. Recheck filesystem:
    e2fsck -f /dev/nvme0n1p2
    
  7. Resize filesystem:
    resize2fs /dev/nvme0n1p2 $(fdisk -l /dev/nvme0n1 | awk '/nvme0n1p2/ { print $4 - 65536 }')s
    
    Notice the trailing s here: it makes resize2fs interpret the number as a count of 512-byte sectors, as opposed to the default unit (4k blocks).
  8. Re-encrypt filesystem:
    cryptsetup reencrypt --encrypt /dev/nvme0n1p2 --reduce-device-size 32M
    
    This is it! This is the most important step! Make sure your laptop is plugged in and try not to interrupt it. This can, apparently, be resumed without problem, but I'd hate to show you how. This will show progress information like:
    Progress:   2.4% ETA 23m45s,      53GiB written, speed   1.3 GiB/s
    
    Wait until the ETA has passed.
  9. Open and mount the encrypted filesystem and mount the EFI system partition (ESP):
    cryptsetup open /dev/nvme0n1p2 crypt
    mount /dev/mapper/crypt /mnt
    mount /dev/nvme0n1p1 /mnt/boot/efi
    
    If this fails, now is the time to consider restoring from backups.
  10. Enter the chroot
    for fs in proc sys dev ; do
      mount --bind /$fs /mnt/$fs
    done
    chroot /mnt
    
    Pro tip: this can be done in one step in GRML with:
    grml-chroot /mnt bash
    
  11. Generate a crypttab:
    echo crypt_dev_nvme0n1p2 UUID=$(blkid -o value -s UUID /dev/nvme0n1p2) none luks,discard >> /etc/crypttab
    
  12. Adjust root filesystem in /etc/fstab, make sure you have a line like this:
    /dev/mapper/crypt_dev_nvme0n1p2 /               ext4    errors=remount-ro 0       1
    
    If you were already using a UUID entry for this, there's nothing to change!
  13. Configure the root filesystem in the initrd:
    echo root=/dev/mapper/crypt_dev_nvme0n1p2 > /etc/kernel/cmdline
    
  14. Regenerate UKI:
    dpkg-reconfigure linux-image-$(uname -r)
    
    Be careful here! systemd-boot inherits the kernel command line from the system where the image is generated, so it may pick up arguments from your live boot environment that don't apply to your installed system. In my case GRML had a couple of those, which broke the boot. That said, it's still possible to work around this issue by tweaking the arguments at boot time.
  15. Exit chroot and reboot
    exit
    reboot
    
Some of the ideas in this section were taken from this guide, but it was mostly rewritten to simplify the work. My guide also avoids the grub hacks and a specific initrd system (the other guide uses initramfs-tools and grub, while I, above, switched to dracut and systemd-boot). RHEL also has a similar guide, perhaps even better. I happen to have set up this system without LVM at all, which simplifies things a bit (as I don't need to also resize the physical volume/volume groups), but if you have LVM, you need to tweak this to also resize the LVM bits. The RHEL guide has some information about this.
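For reference, here is the consolidated sketch of the sector arithmetic from steps 3 to 7 mentioned above. It is only a sanity check, not a replacement for the step-by-step procedure, and it assumes 512-byte sectors, a 32 MiB LUKS header and /dev/nvme0n1p2 as the root partition, with the fourth column of fdisk -l being the partition's sector count:
DEV=/dev/nvme0n1p2
HEADER_SECTORS=$((32 * 1024 * 1024 / 512))   # 65536 sectors reserved for the 32 MiB LUKS header
PART_SECTORS=$(fdisk -l /dev/nvme0n1 | awk -v dev="$DEV" '$1 == dev { print $4 }')
TARGET=$((PART_SECTORS - HEADER_SECTORS))    # new filesystem size, in sectors
echo "shrinking $DEV to $TARGET sectors"
e2fsck -f "$DEV"
resize2fs "$DEV" "${TARGET}s"                # trailing s = 512-byte sectors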

18 August 2025

Otto Kek l inen: Best Practices for Submitting and Reviewing Merge Requests in Debian

Historically, the primary way to contribute to Debian has been to email the Debian bug tracker with a code patch. Now that 92% of all Debian source packages are hosted at salsa.debian.org, the GitLab instance of Debian, more and more developers are using Merge Requests, but not necessarily in the optimal way. In this post I share what I've found the best practice to be, presented in the natural workflow from forking to merging.

Why use Merge Requests? Compared to sending patches back and forth in email, using a git forge to review code contributions brings several benefits:
  • Contributors can see the latest version of the code immediately when the maintainer pushes it to git, without having to wait for an upload to Debian archives.
  • Contributors can fork the development version and easily base their patches on the correct version and help test that the software continues to function correctly at that specific version.
  • Both maintainer and other contributors can easily see what was already submitted and avoid doing duplicate work.
  • It is easy for anyone to comment on a Merge Request and participate in the review.
  • Integrating CI testing is easy in Merge Requests by activating Salsa CI.
  • Tracking the state of a Merge Request is much easier than browsing Debian bug reports tagged 'patch', and the cycle of submit, review, re-submit and re-review is much easier to manage in the dedicated Merge Request view compared to participants setting up their own email plugins for code reviews.
  • Merge Requests can have extra metadata, such as 'Approved', and the metadata often updates automatically, such as a Merge Request being closed automatically when the Git commit ID from it is pushed to the target branch.
Keeping these benefits in mind will help ensure that the best practices make sense and are aligned with maximizing these benefits.

Finding the Debian packaging source repository and preparing to make a contribution Before sinking any effort into a package, start by checking its overall status at the excellent Debian Package Tracker. This provides a clear overview of the package's general health in Debian, when it was last uploaded and by whom, and whether there is anything special affecting the package right now. This page also has quick links to the Debian bug tracker of the package, the build status overview and more. Most importantly, in the General section, the VCS row links to the version control repository the package advertises. Before opening that page, note the version most recently uploaded to Debian. This is relevant because nothing in Debian currently enforces that the package in version control is actually the same as the latest upload to Debian.
(Screenshot: Packaging source code repository links at tracker.debian.org)
Following the Browse link opens the Debian package source repository, which is usually a project page on Salsa. To contribute, start by clicking the Fork button, select your own personal namespace and, under Branches to include, pick Only the default branch to avoid including unnecessary temporary development branches.
(Screenshot: View after pressing Fork)
Once forking is complete, clone it with git-buildpackage. For this example repository, the exact command would be gbp clone --verbose git@salsa.debian.org:otto/glow.git. Next, add the original repository as a new remote and pull from it to make sure you have all relevant branches. Using the same fork as an example, the commands would be:
git remote add go-team https://salsa.debian.org/go-team/packages/glow.git
gbp pull --verbose --track-missing go-team
The gbp pull command can be repeated whenever you want to make sure the main branches are in sync with the original repository. Finally, run gitk --all & to visually browse the Git history and note the various branches and their states in the two remotes. Note the style in comments and repository structure the project has and make sure your contributions follow the same conventions to maximize the chances of the maintainer accepting your contribution. It may also be good to build the source package to establish a baseline of the current state and what kind of binaries and .deb packages it produces. If using Debcraft, one can simply run debcraft build in the Git repository.

Submitting a Merge Request for a Debian packaging improvement Always start by making a development branch by running git checkout -b <branch name> to clearly separate your work from the main branch. When making changes, remember to follow the conventions you already see in the package. It is also important to be aware of general guidelines on how to make good Git commits. If you are not able to immediately finish coding, it may be useful to publish the Merge Request as a draft so that the maintainer and others can see that you started working on something and what general direction your change is heading in. If you don't finish the Merge Request in one sitting and return to it another day, you should remember to pull the Debian branch from the original Debian repository in case it has received new commits. This can be done easily with these commands (assuming the same remote and branch names as in the example above):
git fetch go-team
git rebase -i go-team/debian/latest
Frequent rebasing is a great habit to help keep the Git history linear, and restructuring and rewording your commits will make the Git history easier to follow and understand why the changes were made. When pushing improved versions of your branch, use git push --force. While GitLab does allow squashing, I recommend against it. It is better that the submitter makes sure the final version is a neat and clean set of commits that the receiver can easily merge without having to do any rebasing or squashing themselves. When ready, remove the draft status of the Merge Request and wait patiently for review. If the maintainer does not respond in several days, try sending an email to <source package name>@packages.debian.org, which is the official way to contact maintainers. You could also post a comment on the MR and tag the last few committers in the same repository so that a notification email is triggered. As a last resort, submit a bug report to the Debian bug tracker to announce that a Merge Request is pending review. This leaves a permanent record for posterity (or the Debian QA team) of your contribution. However, most of the time simply posting the Merge Request in Salsa is enough; excessive communication might be perceived as spammy, and someone needs to remember to check that the bug report is closed.
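Put together, one iteration of that submit-and-update cycle could look like this (the branch name here is hypothetical; go-team is the remote added earlier and origin is your fork):
git checkout -b fix/typo-in-description
# ... edit files, git add, git commit ...
git fetch go-team
git rebase -i go-team/debian/latest
git push --force origin fix/typo-in-description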

Respect the review feedback, respond quickly and avoid Merge Requests getting stale Once you get feedback, try to respond as quickly as possible. When people participating have everything fresh in their minds, it is much easier for the submitter to rework it and for the reviewer to re-review. If the Merge Request becomes stale, it can be challenging to revive it. Also, if it looks like the MR is only waiting for re-review but nothing happens, re-read the previous feedback and make sure you actually address everything. After that, post a friendly comment where you explicitly say you have addressed all feedback and are only waiting for re-review.

Reviewing Merge Requests This section about reviewing is not exclusive to Debian package maintainers: anyone can contribute to Debian by reviewing open Merge Requests. Typically, the larger an open source project gets, the more help is needed in reviewing and testing changes to avoid regressions, and all diligently done work is welcome. As the famous Linus quote goes, "given enough eyeballs, all bugs are shallow". On salsa.debian.org, you can browse open Merge Requests per project or for a whole group, just like on any GitLab instance. Reviewing Merge Requests is, however, most fun when they are fresh and the submitter is active. Thus, the best strategy is to ensure you have subscribed to email notifications in the repositories you care about so you get an email for any new Merge Request (or Issue) immediately when posted.
(Screenshot: Change notification settings from Global to Watch to get an email on new Merge Requests)
When you see a new Merge Request, try to review it within a couple of days. If you cannot review in a reasonable time, posting a small note that you intend to review it later will feel better to the submitter compared to not getting any response. Personally, I have a habit of assigning myself as a reviewer so that I can keep track of my whole review queue at https://salsa.debian.org/dashboard/merge_requests?reviewer_username=otto, and I recommend the same to others. Seeing the review assignment happen is also a good way to signal to the submitter that their submission was noted.

Reviewing commit-by-commit in the web interface Reviewing using the web interface works well in general, but I find that the way GitLab designed it is not ideal. In my ideal review workflow, I first read the Git commit message to understand what the submitter tried to do and why; only then do I look at the code changes in the commit. In GitLab, to do this one must first open the Commits tab and then click on the last commit in the list, as it is sorted in reverse chronological order with the first commit at the bottom. Only after that do I see the commit message and contents. Getting to the next commit is easy by simply clicking Next.
(Screenshot: Example review to demonstrate location of buttons and functionality)
When adding the first comment, I choose "Start review" and for the following remarks "Add to review". Finally, I click "Finish review" and "Submit review", which will trigger one single email to the submitter with all my feedback. I try to avoid using the "Add comment now" option, as each such comment triggers a separate notification email to the submitter.

Reviewing and testing on your own computer locally For the most thorough review, I pull the code to my laptop for local review with git pull <remote url> <branch name>. There is no need to run git remote add, as pulling using a URL directly works too and saves having to clean up old remotes later. Pulling the Merge Request contents locally allows me to build, run and inspect the code deeply and review the commits with full metadata in gitk or equivalent.
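For example, with a hypothetical branch name, pulling a Merge Request branch from the otto/glow fork used earlier would be:
git pull https://salsa.debian.org/otto/glow.git fix/manpage-typo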

Investing enough time in writing feedback, but not too much See my other post for more in-depth advice on how to structure your code review feedback. In Debian, I would emphasize patience, to allow the submitter time to rework their submission. Debian packaging is notoriously complex, and even experienced developers often need more feedback and time to get everything right. Avoid the temptation to rush the fix in yourself. In open source, Git credits are often the only salary the submitter gets. If you take the idea from the submission and implement it yourself, you rob the submitter of the opportunity to get feedback, try to improve and finally feel accomplished. Sure, it takes extra effort to give feedback, but the contributor is likely to feel ownership of their work and later return to further improve it. If a submission looks hopelessly low quality and you feel that giving feedback is a waste of time, you can simply respond with something along the lines of: "Thanks for your contribution and interest in helping Debian. Unfortunately, looking at the commits, I see several shortcomings, and it is unlikely a normal review process is enough to help you finalize this. Please reach out to Debian Mentors to get a mentor who can give you more personalized feedback." There might also be contributors who just "dump the code", ignore your feedback and never return to finalize their submission. If a contributor does not return to finalize their submission in 3-6 months, I will in my own projects simply finalize it myself and thank the contributor in the commit message (but not mark them as the author). Despite best practices, you will occasionally still end up doing some things in vain, but that is how volunteer collaboration works. We all just need to accept that some communication will inevitably feel like wasted effort, but it should be viewed as a necessary investment in order to get the benefits from the times when the communication led to real and valuable collaboration. Please just do not treat all contributors as if they are unlikely to ever contribute again; otherwise, your behavior will cause them not to contribute again. If you want to grow a tree, you need to plant several seeds.

Approving and merging Assuming review goes well and you are ready to approve, and if you are the only maintainer, you can proceed to merge right away. If there are multiple maintainers, or if you otherwise think that someone else might want to chime in before it is merged, use the Approve button to show that you approve the change but leave it unmerged. The person who approved does not necessarily have to be the person who merges. The point of the Merge Request review is not separation of duties in committing and merging; the main purpose of a code review is to have a different set of eyeballs looking at the change before it is committed into the main development branch for all eternity. In some packages, the submitter might actually merge themselves once they see another developer has approved. In some rare Debian projects, there might even be separate people taking the roles of submitting, approving and merging, but most of the time these three roles are filled by two people, either as submitter and approver+merger or submitter+merger and approver. If you are not a maintainer at all and do not have permissions to click Approve, simply post a comment summarizing your review and stating that you approve it and support merging it. This can help the maintainers review and merge faster.

Making a Merge Request for a new upstream version import Unlike many other Linux distributions, in Debian each source package has its own version control repository. The Debian sources consist of the upstream sources with an additional debian/ subdirectory that contains the actual Debian packaging. For the same reason, a typical Debian packaging Git repository has a debian/latest branch that has changes only in the debian/ subdirectory, while the surrounding upstream files are the actual upstream files and have the actual upstream Git history. For details, see my post explaining Debian source packages in Git. Because of this Git branch structure, importing a new upstream version will typically modify three branches: debian/latest, upstream/latest and pristine-tar. When doing a Merge Request for a new upstream import, only submit one Merge Request for one branch: the one merging your new changes into the debian/latest branch. There is no need to submit the upstream/latest branch or the pristine-tar branch. Their contents are fixed and mechanically imported into Debian. There are no changes that the reviewer in Debian can request the submitter to do on these branches, so asking for feedback and comments on them is useless. All review, comments and re-reviews concern the content of the debian/latest branch only. It is not even necessary to use the debian/latest branch for a new upstream version. Personally, I always execute the new version import (with gbp import-orig --verbose --uscan) and prepare and test everything on debian/latest, but when it is time to submit it for review, I run git checkout -b import/$(dpkg-parsechangelog -SVersion) to get a branch named e.g. import/1.0.1 and then push that for review.
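A minimal sketch of that import-then-branch flow, assuming a gbp-managed repository with the usual debian/latest, upstream/latest and pristine-tar branches:
git checkout debian/latest
gbp import-orig --verbose --uscan
# ... update debian/changelog, build and test ...
git checkout -b import/$(dpkg-parsechangelog -SVersion)
git push --set-upstream origin import/$(dpkg-parsechangelog -SVersion)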

Reviewing a Merge Request for a new upstream version import Reviewing and testing a new upstream version import is a bit tricky currently, but possible. The key is to use gbp pull to automate fetching all branches from the submitter's fork. Assume you are reviewing a submission targeting the Glow package repository and there is a Merge Request from user otto's fork. As the maintainer, you would run the commands:
git remote add otto https://salsa.debian.org/otto/glow.git
gbp pull --verbose otto
If there was feedback in the first round and you later need to pull a new version for re-review, running gbp pull --force will not suffice, and this trick of manually fetching each branch and resetting them to the submitter's version is needed:
for BRANCH in pristine-tar upstream debian/latest
do
git checkout $BRANCH
git reset --hard origin/$BRANCH
git pull --force https://salsa.debian.org/otto/glow.git $BRANCH
done
Once review is done, either click Approve and let the submitter push everything, or alternatively, push all the branches you pulled locally yourself. In GitLab and other forges, the Merge Request will automatically be marked as Merged once the commit ID that was the head of the Merge Request is pushed to the target branch.

Please allow enough time for everyone to participate When working on Debian, keep in mind that it is a community of volunteers. It is common for people to do Debian stuff only on weekends, so you should patiently wait for at least a week so that enough workdays and weekend days have passed for the people you interact with to have had time to respond on their own Debian time. Having to wait may feel annoying and disruptive, but try to look at the upside: you do not need to do extra work simply while waiting for others. In some cases, that waiting can be useful thanks to the "sleep on it" phenomenon: when you yourself look at your own submission some days later with fresh eyes, you might notice something you overlooked earlier and improve your code change even without other people's feedback!

Contribute reviews! The last but not least suggestion is to make a habit of contributing reviews to packages you do not maintain. As we already see in large open source projects, such as the Linux kernel, they have far more code submissions than they can handle. The bottleneck for progress and maintaining quality becomes the reviews themselves. For Debian, as an organization and as a community, to be able to renew and grow new contributors, we need more of the senior contributors to shift focus from merely maintaining their packages and writing code to also intentionally interact with new contributors and guide them through the process of creating great open source software. Reviewing code is an effective way to both get tangible progress on individual development items and to transfer culture to a new generation of developers.

Why aren't 100% of all Debian source packages hosted on Salsa? As seen at trends.debian.net, more and more packages are using Salsa. Debian does not, however, have any policy about it. In fact, the Debian Policy Manual does not even mention the word Salsa anywhere. Adoption of Salsa has so far been purely organic, as in Debian each package maintainer has full freedom to choose whatever preferences they have regarding version control. I hope the trend to use Salsa will continue and more shared workflows emerge so that collaboration gets easier. To drive the culture of using Merge Requests and more, I drafted the Debian proposal DEP-18: Encourage Continuous Integration and Merge Request based Collaboration for Debian packages. If you are active in Debian and you think DEP-18 is beneficial for Debian, please give a thumbs up at dep-team/deps!21.

12 August 2025

Sergio Cipriano: Running Docker (OCI) Images in Incus

Running Docker (OCI) Images in Incus Incus 6.15 was released with a lot of cool features; my favorite so far is the authentication support for OCI registries. Here's an example:
$ incus remote add docker https://docker.io --protocol=oci
$ incus launch docker:debian:sid sid
$ incus shell sid
root@sid:~# apt update && apt upgrade -y
root@sid:~# cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux forky/sid"
NAME="Debian GNU/Linux"
VERSION_CODENAME=forky
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
This has been really useful for creating containers to test packages, much better than launching the official Debian stable Incus images and then manually changing the sources list.

Freexian Collaborators: Debian Contributions: DebConf 25, OpenSSH upgrades, Cross compilation collaboration and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-07 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf 25, by Stefano Rivera and Santiago Ruano Rincón In July, DebConf 25 was held in Brest, France. Freexian was a gold sponsor and most of the Freexian team attended the event. Many fruitful discussions were had amongst our team and within the Debian community. DebConf itself was organized by a local team in Brest, that included Santiago (who now lives in Uruguay). Stefano was also deeply involved in the organization, as a DebConf committee member, core video team, and the lead developer for the conference website. Running the conference took an enormous amount of work, consuming all of Stefano and Santiago's time for most of July. Lucas Kanashiro was active in the DebConf content team, reviewing talks and scheduling them. There were many last-minute changes to make during the event. Anupa Ann Joseph was part of the Debian publicity team doing live coverage of DebConf 25 and was part of the DebConf 25 content team reviewing the talks. She also assisted the local team to procure the lanyards. Recorded sessions presented by Freexian collaborators, often alongside other friends in Debian, included:

OpenSSH upgrades, by Colin Watson Towards the end of a release cycle, people tend to do more upgrade testing, and this sometimes results in interesting problems. Manfred Stock reported "No new SSH connections possible during large part of upgrade to Debian Trixie", which would have affected many people upgrading from Debian 12 (bookworm), with potentially severe consequences for people upgrading remote systems. In fact, there were two independent problems that each led to much the same symptom:
  • As part of hardening the OpenSSH server, OpenSSH 9.8 split the monolithic sshd listener process into two pieces: a minimal network listener (still called sshd), and an sshd-session process dealing with each individual session. Before this change, when sshd received an incoming connection, it forked and re-executed itself with some special parameters to deal with it; after this change, it forks and executes sshd-session instead, and sshd no longer accepts the parameters it used to accept for this.

    Debian package upgrades happen (roughly) in two phases: first we unpack the new files onto disk, and then we run some configuration steps which usually include things like restarting services. Normally this is fine, because the old service keeps on working until it's restarted. In this case, unpacking the new files onto disk immediately stopped new SSH connections from working: the old sshd received the connection and tried to hand it off to a freshly-executed copy of the new sshd binary on disk, which no longer supports this. This wasn't much of a problem when upgrading OpenSSH on its own or with a small number of other packages, but in release upgrades it left a large gap during which you can't SSH to the system any more, and if anything fails in that interval then you could be in trouble.

    After trying a couple of other approaches, Colin landed on the idea of having the openssh-server package divert /usr/sbin/sshd to /usr/sbin/sshd.session-split before the unpack step of an upgrade from before 9.8, then removing the diversion and moving the new file into place once it's ready to restart the service. This reduces the period when new connections fail to a minimum.
  • Most OpenSSH processes, including sshd, check for a compatible version of the OpenSSL library when they start up. This check used to be very picky, among other things requiring both the major and minor part of the version number to match. OpenSSL 3 has a better versioning policy, and so OpenSSH 9.4p1 relaxed this check.

    Unfortunately, bookworm shipped with OpenSSH 9.2p1, so as soon as you unpacked the new OpenSSL library during an upgrade, sshd stopped working. This couldn't be fixed by a change in trixie; we needed to change bookworm in advance of the upgrade so that it would tolerate newer versions of OpenSSL, and time was tight if we wanted this to be available before the release of Debian 13.

    Fortunately, there's a stable-updates mechanism for exactly this sort of thing, and the stable release managers kindly accepted Colin's proposal to fix this there.
The net result is that if you apply updates to bookworm (including stable-updates / bookworm-updates, which is enabled by default) before starting the upgrade to trixie, everything should be fine.

Cross compilation collaboration, by Helmut Grohne Supporting cross building in Debian packages touches lots of areas of the archive, and quite a few of these matters are a shared responsibility between different teams. Hence, DebConf was an ideal opportunity to settle long-standing issues. The cross-building BoF sparked lively discussions, as a significant fraction of developers employ cross builds to get their work done. In the trixie release, about two thirds of the packages can satisfy their cross Build-Depends and about half of the packages actually can be cross built.

Miscellaneous contributions
  • Raphaël Hertzog updated tracker.debian.org to remove references to Debian 10 which was moved to archive.debian.org, and had many fruitful discussions related to Debusine during DebConf 25.
  • Carles Pina prepared some data, questions and information for the DebConf 25 l10n and i18n BoF.
  • Carles Pina demoed and discussed possible next steps for po-debconf-manager with different teams in DebConf 25. He also reviewed Catalan translations and sent them to the packages.
  • Carles Pina started investigating a django-compressor bug: reproduced the bug consistently and prepared a PR for django-compressor upstream (likely more details next month). Looked at packaging frictionless-py.
  • Stefano Rivera triaged Python CVEs against pypy3.
  • Stefano prepared an upload of a new upstream release of pypy3 to Debian experimental (due to the freeze).
  • Stefano uploaded python3.14 RC1 to Debian experimental.
  • Thorsten Alteholz uploaded a new upstream version of sane-airscan to experimental. He also started to work on a new upstream version of hplip.
  • Colin backported fixes for CVE-2025-50181 and CVE-2025-50182 in python-urllib3, and fixed several other release-critical or important bugs in Python team packages.
  • Lucas uploaded ruby3.4 to experimental as a starting point for the ruby-defaults transition that will happen after Trixie release.
  • Lucas coordinated with the Release team the fix of the remaining RC bugs involving ruby packages, and got them all fixed.
  • Lucas, as part of the Debian Ruby team, kicked off discussions to improve internal process/tooling.
  • Lucas, as part of the Debian Outreach team, engaged in multiple discussions around internship programs we run and also what else we could do to improve outreach in the Debian project.
  • Lucas joined the Local groups BoF during DebConf 25 and shared all the good experiences from the Brazilian community and committed to help to document everything to try to support other groups.
  • Helmut spent significant time with Samuel Thibault on improving architecture cross bootstrap for hurd-any mostly reviewing Samuel s patches. He proposed a patch for improving bash s detection of its pipesize and a change to dpkg-shlibdeps to improve behavior for building cross toolchains.
  • Helmut reiterated the multiarch policy proposal with a lot of help from Nattie Mayer-Hutchings, Rhonda D'Vine and Stuart Prescott.
  • Helmut finished his work on the process based unschroot prototype that was the main feature of his talk (see above).
  • Helmut analyzed a multiarch-related glibc upgrade failure induced by a /usr-move mitigation of systemd and sent a patch and regression fix both of which reached trixie in time. Thanks to Aurelien Jarno and the release team for their timely cooperation.
  • Helmut resurrected an earlier discussion about changing the semantics of Architecture: all packages in a multiarch context in order to improve the long-standing interpreter problem. With help from Tollef Fog Heen better semantics were discovered and agreement was reached with Guillem Jover and Julian Andres Klode to consider this change. The idea is to record a concrete architecture for every Architecture: all package in the dpkg database and enable choosing it as non-native.
  • Helmut implemented type hints for piuparts.
  • Helmut reviewed and improved a patch set of Jochen Sprickerhof for debvm.
  • Anupa was involved in discussions with the Debian Women team during DebConf 25.
  • Anupa started working for the trixie release coverage and started coordinating release parties.
  • Emilio helped coordinate the release of Debian 13 trixie.

6 August 2025

Reproducible Builds: Reproducible Builds in July 2025

Welcome to the seventh report from the Reproducible Builds project in 2025. Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website. In this report:
  1. Reproducible Builds Summit 2025
  2. Reproducible Builds an official goal for SUSE Enterprise Linux
  3. Reproducible Builds at FOSSY 2025
  4. New OSS Rebuild project from Google
  5. New extension of Python setuptools to support reproducible builds
  6. diffoscope
  7. New library to patch system functions for reproducibility
  8. Independently Reproducible Git Bundles
  9. Website updates
  10. Distribution work
  11. Reproducibility testing framework
  12. Upstream patches

Reproducible Builds Summit 2025 We are extremely pleased to announce the upcoming Reproducible Builds Summit, set to take place from October 28th to 30th 2025 in Vienna, Austria! We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving. If you're interested in joining us this year, please make sure to read the event page which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!

Reproducible Builds an official goal for SUSE Enterprise Linux On our mailing list this month, Bernhard M. Wiedemann revealed the big news that reproducibility is now an official goal for SUSE Linux Enterprise Server (SLES) 16:
[Everything] changed earlier this year when reproducible-builds for SLES-16 became an official goal for the product. More people are talking about digital sovereignty and supply-chain security now. [ ] Today, only 9 of 3319 (source) packages have significant problems left (plus 7 with pending fixes), so 99.5% of packages have reproducible builds.

Reproducible Builds at FOSSY 2025 On Saturday 2nd August, Vagrant Cascadian and Chris Lamb presented at this year's FOSSY 2025. Their talk, titled "Never Mind the Checkboxes, Here's Reproducible Builds!", was introduced as follows:
There are numerous policy compliance and regulatory processes being developed that target software development, but do they solve actual problems? Does it improve the quality of software? Do Software Bill of Materials (SBOMs) actually give you the information necessary to verify how a given software artifact was built? What is the goal of all these compliance checklists anyways, or more importantly, what should the goals be? If a software object is signed, who should be trusted to sign it, and can they be trusted forever?
Hosted by the Software Freedom Conservancy and taking place in Portland, Oregon, USA, FOSSY aims to be a community-focused event: "Whether you are a long time contributing member of a free software project, a recent graduate of a coding bootcamp or university, or just have an interest in the possibilities that free and open source software bring, FOSSY will have something for you". More information on the event is available on the FOSSY 2025 website, including the full programme schedule. Vagrant and Chris also staffed a table, where they were available to answer any questions about Reproducible Builds and discuss collaborations with other projects.

New OSS Rebuild project from Google The Google Open Source Security Team (GOSST) published an article this month announcing OSS Rebuild, a new project to strengthen trust in open source package ecosystems by reproducing upstream artifacts. As the post itself documents, the new project comprises four facets:
  • Automation to derive declarative build definitions for existing PyPI (Python), npm (JS/TS), and Crates.io (Rust) packages.
  • SLSA Provenance for thousands of packages across our supported ecosystems, meeting SLSA Build Level 3 requirements with no publisher intervention.
  • Build observability and verification tools that security teams can integrate into their existing vulnerability management workflows.
  • Infrastructure definitions to allow organizations to easily run their own instances of OSS Rebuild to rebuild, generate, sign, and distribute provenance.
Unlike most projects that aim for bit-for-bit reproducibility, OSS Rebuild aims for a kind of semantic reproducibility:
Through automation and heuristics, we determine a prospective build definition for a target package and rebuild it. We semantically compare the result with the existing upstream artifact, normalizing each one to remove instabilities that cause bit-for-bit comparisons to fail (e.g. archive compression).
The extensive post includes examples about how to access OSS Rebuild attestations using the Go-based command-line interface.

New extension of Python setuptools to support reproducible builds Wim Jeantine-Glenn has written a PEP 517 build backend in order to enable reproducible builds when building Python projects that use setuptools. Called setuptools-reproducible, the project's README file contains the following:
Setuptools can create reproducible wheel archives (.whl) by setting SOURCE_DATE_EPOCH at build time, but setting the env var is insufficient for creating reproducible sdists (.tar.gz). setuptools-reproducible [therefore] wraps the hooks build_sdist build_wheel with some modifications to make reproducible builds by default.
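For context, SOURCE_DATE_EPOCH is usually set from the last commit or changelog date before building; a generic sketch (not specific to setuptools-reproducible, and assuming the python3 "build" frontend is installed) looks like:
export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
python3 -m build --sdist --wheel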

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 301, 302 and 303 to Debian:
  • Improvements:
    • Use Difference.from_operation in an attempt to pipeline the output of the extract-vmlinux script, potentially avoiding holding it all in memory. [ ]
    • Memoize a number of calls to --version, saving a very large number of external subprocess calls.
  • Bug fixes:
    • Don t check for PyPDF version 3 specifically, check for versions greater than 3. [ ]
    • Ensure that Java class files are named .class on the filesystem before passing them to javap(1). [ ]
    • Mask stderr from extract-vmlinux script. [ ][ ]
    • Avoid spurious differences in h5dump output caused by exposure of absolute internal extraction paths. (#1108690)
  • Misc:
    • Use our_check_output in the ODT comparator. [ ]
    • Update copyright years. [ ]
In addition, Chris Lamb added a tmpfs to try.diffoscope.org so that diffoscope has a non-trivial temporary area to unpack archives, etc. [ ] Elsewhere in our tooling, reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, reprotest version 0.7.30 was uploaded to Debian unstable by Holger Levsen, chiefly including a change by Rebecca N. Palmer to not call sudo with the -h flag in order to fix Debian bug #1108550. [ ]
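For readers who have not used diffoscope before, a typical invocation compares two build artifacts and writes an HTML report (the file names here are purely illustrative):
diffoscope --html diffoscope-report.html package_1.0-1_amd64.deb package_1.0-1.rebuild_amd64.deb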

New library to patch system functions for reproducibility Nicolas Graves has written and published libfate, a simple collection of tiny libraries to patch system functions deterministically using LD_PRELOAD. According to the project s README:
libfate provides deterministic replacements for common non-deterministic system functions that can break reproducible builds. Instead of relying on complex build systems or apps or extensive patching, libfate uses the LD_PRELOAD trick to intercept system calls and return fixed, predictable values.
Describing why he wrote it, Nicolas writes:
I originally used the OpenSUSE dettrace approach to make Emacs reproducible in Guix. But when Guix switched to GCC@14, dettrace stopped working as expected. dettrace is a complex piece of software, and my need was much less heavy: I don't need to systematically patch all sources of nondeterminism, just the ones that make a process/binary unreproducible in a container/chroot.

Independently Reproducible Git Bundles Simon Josefsson has published another interesting article this month. Titled "Independently Reproducible Git Bundles", the blog post describes why you might want a reproducible bundle, and the pitfalls that can arise when trying to create them:
One desirable property is that someone else should be able to reproduce the same git bundle, and not only that a single individual is able to reproduce things on one machine. It surprised me to see that when I ran the same set of commands on a different machine (started from a fresh git clone), I got a different checksum. The different checksums occurred even when nothing had been committed on the server side between the two runs.

Website updates Once again, there were a number of improvements made to our website this month including:

Distribution work In Debian this month:
Debian contributors have made significant progress toward ensuring package builds produce byte-for-byte reproducible results. You can check the status for packages installed on your system using the new package debian-repro-status, or visit reproduce.debian.net for Debian s overall statistics for trixie and later. You can contribute to these efforts by joining #debian-reproducible on IRC to discuss fixes, or verify the statistics by installing the new rebuilderd package and setting up your own instance.

The IzzyOnDroid Android APK repository made further progress in July, crossing the 50% reproducibility threshold; congratulations! Furthermore, a new version of the Neo Store was released, which exposes the reproducible status directly next to the version of each app.
In GNU Guix, a series of patches intended to fix the reproducibility for the Mono programming language was merged, fixing reproducibility in Mono versions 1.9 [ ], 2.4 [ ] and 2.6 [ ].
Lastly, in addition to the news that SUSE Linux Enterprise now has an official goal of reproducibility (https://lists.reproducible-builds.org/pipermail/rb-general/2025-July/003846.html), Bernhard M. Wiedemann posted another monthly update for their work there.

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In July, however, a number of changes were made by Holger Levsen, including:
  • Switch the URL for the Tails package set. [ ]
  • Make the dsa-check-packages output more useful. [ ]
  • Set up the ppc64el architecture again, as it has returned, this time with a 2.7 GiB database instead of 72 GiB. [ ]
In addition, Jochen Sprickerhof improved the reproducibility statistics generation:
  • Enable caching of statistics. [ ][ ][ ]
  • Add some common non-reproducible patterns. [ ]
  • Change output to directory. [ ]
  • Add a page sorted by diffoscope size. [ ][ ]
  • Switch to Python's argparse module and separate output(). [ ]
Holger also submitted a number of Debian bugs against rebuilderd and rebuilderd-worker:
  • Config files and scripts for a simple one machine setup. [ ][ ]
  • Create a rebuilderd user. [ ]
  • Create rebuilderd-worker user with sbuild. [ ]
Lastly, Mattia Rizzolo added a scheduled job to renew some SSL certificates [ ] and Vagrant Cascadian performed some node maintenance [ ][ ].

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
There were a number of other patches from openSUSE developers:

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

5 August 2025

Michael Ablassmeier: PVE 9.0 - Snapshots for LVM

The new Proxmox release advertises a new feature for easier snapshot handling of virtual machines whose disks are stored on LVM volumes. I wondered: what's the deal? To be able to use the new feature, you need to enable a special flag for the LVM volume group. This example shows the general workflow for a fresh setup. 1) Create the volume group with the snapshot-as-volume-chain feature turned on:
 pvesm add lvm lvmthick --content images --vgname lvm --snapshot-as-volume-chain 1
2) From this point on, you can create virtual machines right away, BUT those virtual machines' disks must use the QCOW image format for their disk volumes. If you use the RAW format, you still won't be able to create snapshots.
 VMID=401
 qm create $VMID --name vm-lvmthick
 qm set $VMID -scsi1 lvmthick:2,format=qcow2
So, why would it make sense to format the LVM volume as QCOW? Snapshots on LVM thick-provisioned devices are, as everybody knows, a very I/O-intensive task. For each snapshot, a special -cow device is created that tracks the changed block regions and the original block data for every change to the active volume. This wastes quite some space within your volume group for each snapshot. Formatting the LVM volume as a QCOW image makes it possible to use the QCOW backing-image option for these devices, and this is the way PVE 9 handles these kinds of snapshots. Creating a snapshot looks like this:
 qm snapshot $VMID id
 snapshotting 'drive-scsi1' (lvmthick3:vm-401-disk-0.qcow2)
 Renamed "vm-401-disk-0.qcow2" to "snap_vm-401-disk-0_id.qcow2" in volume group "lvm"
 Rounding up size to full physical extent 1.00 GiB
 Logical volume "vm-401-disk-0.qcow2" created.
 Formatting '/dev/lvm/vm-401-disk-0.qcow2', fmt=qcow2 cluster_size=131072 extended_l2=on preallocation=metadata compression_type=zlib size=1073741824 backing_file=snap_vm-401-disk-0_id.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
So it renames the currently active disk and creates another QCOW-formatted LVM volume, pointing it to the snapshot image using the backing_file option. Neat.
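To inspect the resulting backing chain, qemu-img can read the volume directly (a quick check, assuming qemu-utils is installed; add -U if the VM is currently running):
 qemu-img info --backing-chain /dev/lvm/vm-401-disk-0.qcow2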

4 August 2025

Aigars Mahinovs: Snapshot mirroring in Debian (and Ubuntu)

Snapshot mirroring in Debian (and Ubuntu) The use of snapshots has been routine in both Debian and Ubuntu for several years now (or more than 15 years for Debian, to be precise). Snapshots have become not only very reliable, but also an increasingly important part of the Debian package archive. This week, I encountered a problem at work that could be perfectly solved by correctly using the Snapshot service. However, while trying to figure it out, I ran into some shortcomings in the documentation. Until the docs are updated, I am publishing this blog post to make this information easier to find.
Problem 1: Ensure fully reproducible creation of Docker containers with the exact same packages installed, even years after the original images were generated.
Solution 1: Pin everything! Use a pinned source image in the FROM statement, such as debian:trixie-20250721-slim, and also pin the APT package sources to the "same" date - "20250722".
Hint: The APT packages need to be newer than the Docker image base. If the APT packages are a bit newer, that's not a problem, as APT can upgrade packages without issues. However, if your Docker image has a newer package than your APT package sources, you will have a big problem. For example, if you have "libbearssl0" version 0.6-2 installed in the Docker image, but your package sources only have the older version 0.6-1, you will fail when trying to install the "libbearssl-dev" package. This is because you only have version 0.6-1 of the "-dev" package available, which hard-depends on exactly version 0.6-1 of "libbearssl0", and APT will refuse to downgrade an already installed package to satisfy that dependency.
Problem 2: You are using a lot of images in a lot of executions and building tens of thousands of images per day. It would be a bad idea to put all this load on public Debian servers. Using local sources is also faster and adds extra security.
Solution 2: Use local (transparently caching) mirrors for both the Docker Hub repository and the APT package source.
At this point, I ran into another issue: I could not easily figure out how to specify a local mirror for the snapshot part of the archive service. First of all, snapshot support in both Ubuntu and Debian accepts both syntaxes described in the Debian and Ubuntu documentation above. The documentation on both sites presents different approaches and syntax examples, but both work. The best approach nowadays is to use the "deb822" sources syntax. Remove /etc/apt/sources.list (if it still exists), delete all contents of the /etc/apt/sources.list.d directory, and instead create this file at /etc/apt/sources.list.d/debian.sources:
Types: deb
URIs: https://common.mirror-proxy.local/ftp.debian.org/debian/
Suites: trixie
Components: main non-free-firmware non-free contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
Snapshot: 20250722
Hint: This assumes you have a mirror service running at common.mirror-proxy.local that proxies requests (with caching) to whitelisted domains, based on the name of the first folder in the path. If you now run sudo apt update --print-uris, you will see that your configuration accesses your mirror, but does not actually use the snapshot. Next, add the following to /etc/apt/apt.conf.d/80snapshots:
APT::Snapshot "20250722";
That should work, right? Let's try sudo apt update --print-uris again. I've got good news and bad news! The good news is that we are now actually using the snapshot we specified (twice). The bad news is that we are completely ignoring the mirror and going directly to snapshots.debian.org instead. Finding the right information was a bit of a challenge, but after a few tries, this worked: to specify a custom local mirror of the Debian (or Ubuntu) snapshot service, simply add the following line to the same file, /etc/apt/apt.conf.d/80snapshots:
Acquire::Snapshots::URI::Override::Origin::debian "https://common.mirror-proxy.local/snapshot.debian.org/archive/debian/@SNAPSHOTID@/";
Now, if you check again with sudo apt update --print-uris, you will see that the requests go to your mirror and include the specified snapshot identifier. Success! Now you can install any packages you want, and everything will be completely local and fully reproducible, even years later!
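As a final sanity check that both the mirror override and the snapshot ID are being used (with the hypothetical mirror host from this post), you can grep the printed URIs:
sudo apt update --print-uris | grep -F 'common.mirror-proxy.local/snapshot.debian.org'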

2 August 2025

Raju Devidas: Use phone/tablets/other laptops as external monitor with your laptop

This method is for Wayland-based systems. There are better ways to do this on GNOME or KDE desktops, but the method we are going to use is independent of the DE/WM that you are using. I am doing this on the sway window manager, but you can try it on any other Wayland-based WM or DE. I have not tried this on Xorg-based systems; there are several other guides for Xorg-based systems online. When we connect a physical monitor to our laptop, it creates a second display output in our display settings that we can then re-arrange in layout, set the resolution of, set the scale of, etc. Since we are not connecting via a physical interface like HDMI, DP or VGA, we need to create a virtual display within our system and set the display properties manually.
Get a list of current display outputs. You can also just check it in the display settings of your DE/WM with wdisplays.
rajudev@sanganak ~> swaymsg -t get_outputs
Output LVDS-1 'Seiko Epson Corporation 0x3047 Unknown' (focused)
  Current mode: 1366x768 @ 60.002 Hz
  Power: on
  Position: 0,0
  Scale factor: 1.000000
  Scale filter: nearest
  Subpixel hinting: rgb
  Transform: normal
  Workspace: 2
  Max render time: off
  Adaptive sync: disabled
  Allow tearing: no
  Available modes:
    1366x768 @ 60.002 Hz
(Screenshot: Single physical display of the laptop)
Currently we are seeing only one display output. Our goal is to create a second, virtual display that we will then share to the tablet/phone. To do this, there are various tools available. We are using sway-vdctl. It is currently not available as a Debian package, so we need to install it manually.
$ git clone https://github.com/odincat/sway-vdctl.git
$ cd sway-vdctl
$ cargo build --release
This will generate a binary named main under target/release. We can then copy this binary to our bin directory.
$ sudo cp target/release/main /usr/local/bin/vdctl
Now we have the vdctl command available.
$ vdctl --help
Usage: vdctl [OPTIONS] <ACTION> [VALUE]
Arguments:
  <ACTION>
          Possible values:
          - create:      Create new output based on a preset
          - kill:        Terminate / unplug an active preset
          - list:        List out active presets
          - next-number: Manually set the next output number, in case something breaks
          - sync-number: Sync the next output number using 'swaymsg -t get_outputs'
  [VALUE]
          Preset name to apply, alternatively a value
          
          [default: ]
Options:
      --novnc
          do not launch a vnc server, just create the output
  -h, --help
          Print help (see a summary with '-h')
Before creating the virtual display, we need to set its properties in .config/vdctl/config.json. I am using a Xiaomi Pad 6 tablet as my external display. You can adjust the properties according to the device you want to use as a second display. $ (text-editor) .config/vdctl/config.json
{
    "host": "0.0.0.0",
    "presets": [
        {
            "name": "pad6",
            "scale_factor": 2,
            "port": 9901,
            "resolution": {
                "width": 2800,
                "height": 1800
            }
        }
    ]
}
In the JSON, you can set the display resolution according to your external device, along with other configuration. If you want to configure multiple displays, you can add another entry to the presets list in the JSON file. You can refer to the example JSON file in the Git repository. Now we need to actually create the virtual monitor.
$ vdctl create pad6
Created output, presumably 'HEADLESS-1'
Set resolution of 'HEADLESS-1' to 2800x1800
Set scale factor of 'HEADLESS-1' to 2
Preset 'pad6' ('HEADLESS-1': 2800x1800) is now active on port 9901
Now if you check the display outputs in your display settings or from the command line, you will see two different displays.
$ swaymsg -t get_outputs
Output LVDS-1 'Seiko Epson Corporation 0x3047 Unknown'
  Current mode: 1366x768 @ 60.002 Hz
  Power: on
  Position: 0,0
  Scale factor: 1.000000
  Scale filter: nearest
  Subpixel hinting: rgb
  Transform: normal
  Workspace: 2
  Max render time: off
  Adaptive sync: disabled
  Allow tearing: no
  Available modes:
    1366x768 @ 60.002 Hz
Output HEADLESS-1 'Unknown Unknown Unknown' (focused)
  Current mode: 2800x1800 @ 0.000 Hz
  Power: on
  Position: 1366,0
  Scale factor: 2.000000
  Scale filter: nearest
  Subpixel hinting: unknown
  Transform: normal
  Workspace: 3
  Max render time: off
  Adaptive sync: disabled
  Allow tearing: no
Also in the display settings.
(Screenshot: Display settings on Wayland with physical and virtual monitor output)
Now we need to make this virtual display available over VNC, which we will access with a VNC client on the tablet. To accomplish this I am using wayvnc, but you can use any VNC server package. Install wayvnc:
$ sudo apt install wayvnc
Now we will serve our virtual display HEADLESS-1 with wayvnc.
$ wayvnc -o HEADLESS-1 0.0.0.0 5900
You can adjust the port number as per your need. The process on the laptop side is done. Now install any VNC software on your tablet. I am using AVNC, which is available on F-Droid. In the VNC software interface, add a new connection with the IP address of your laptop and the port wayvnc was started with. Remember, both your laptop and tablet/phone need to be on the same Wi-Fi network.
AVNC interface with the connection details to connect to the virtual monitor.
Save and connect. Now you will be able to see an extended display on your tablet. Enjoy working with multiple screens in a portable setup. Till next time.. Have a great time.

1 August 2025

puer-robustus: My Google Summer of Code '25 at Debian

I've participated in this year's Google Summer of Code (GSoC) program and have been working on the small (90h) autopkgtests for the rsync package project at Debian.

Writing my proposal Before you can start writing a proposal, you need to select an organization you want to work with. Since many organizations participate in GSoC, I've used the following criteria to narrow things down for me:
  • Programming language familiarity: For me only Python (preferably) as well as shell and Go projects would have made sense. While learning another programming language is cool, I wouldn't be as effective and helpful to the project as someone who is proficient in the language already.
  • Standing of the organization: Some of the organizations participating in GSoC are well-known for the outstanding quality of the software they produce. Debian is one of them, but so is e.g. the Django Foundation or PostgreSQL. And my thinking was that the higher the quality of the organization, the more there is to learn for me as a GSoC student.
  • Mentor interactions: Apart from the advantage you get from mentor feedback when writing your proposal (more on that further below), it is also helpful to gauge how responsive/helpful your potential mentor is during the application phase. This is important since you will be working together for a period of at least 2 months; if the mentor-student communication doesn't work, the GSoC project is going to be difficult.
  • Free and Open-Source Software (FOSS) communication platforms: I generally believe that FOSS projects should be built on FOSS infrastructure. I personally won't run proprietary software when I want to contribute to FOSS in my spare time.
  • Be a user of the project: As Eric S. Raymond has pointed out in his seminal The Cathedral and the Bazaar 25 years ago:
    Every good work of software starts by scratching a developer's personal itch.
Once I had some organizations in mind whose projects I'd be interested in working on, I started writing proposals for them. Turns out, I started writing my proposals way too late: in the end I only managed to hand in a single one, which is risky. Competition for the GSoC projects is fierce, and the more quality (!) proposals you send out, the better your chances are at getting one. However, don't write proposals for the sake of it: reviewers get way too many AI slop proposals already and you will not do yourself a favor with a low-quality proposal. Take the time to read the instructions/ideas/problem descriptions the project mentors have provided and follow their guidelines. Don't hesitate to reach out to project mentors: in my case, I asked Samuel Henrique a few clarification questions, and the following (email) discussion helped me greatly in improving my proposal. Once I had finalized my proposal draft, I sent it to Samuel for a review, which again led to some improvements to the final proposal which I uploaded to the GSoC program webpage.

Community bonding period Once you get the information that you've been accepted into the GSoC program (don't take it personally if you don't make it; this was my second attempt after not making the cut in 2024), get in touch with your prospective mentor ASAP. Agree upon a communication channel and some response times. Put yourself in the loop for project news and discussions, whatever that means in the context of your organization: in Debian's case this boiled down to subscribing to a bunch of mailing lists and IRC channels. Also make sure to set up a functioning development environment if you haven't done so already for writing the proposal.

Payoneer setup The by far most annoying part of GSoC for me. But since you don't have a choice if you want to get the stipend, you will need to sign up for an account at Payoneer. In this iteration of GSoC all participants got a personalized link to open a Payoneer account. When I tried to open an account by following this link, I got an email after the registration and email verification that my account was being blocked because Payoneer deemed the email address I gave a temporary one. Well, the email in question is most certainly anything but temporary, so I tried to get in touch with the Payoneer support - and ended up in an LLM-infused kafkaesque support hell. Emails are answered by an LLM, which for me meant utterly off-topic replies and no help whatsoever. The Payoneer website offers a real-time chat, but it is yet another instance of a bullshit-spewing LLM bot. When I at last tried to call them (the support lines are not listed on the Payoneer website but were provided by the GSoC program), I kid you not, I was told that their platform was currently suffering from technical problems and was hung up on. Only thanks to the swift and helpful support of the GSoC administrators (who get priority support from Payoneer) was I able to set up a Payoneer account in the end. Apart from showing no respect to customers, Payoneer is also ripping them off big time with fees (unless you get paid in USD). They charge you 2% for currency conversions to EUR on top of the FX spread they take. What worked for me to avoid all of those fees was to open a USD account at Wise and have Payoneer transfer my GSoC stipend in USD to that account. Then I exchanged the USD to my local currency at Wise for significantly less than Payoneer would have charged me. Also make sure to close your Payoneer account after the end of GSoC to avoid their annual fee.

Project work With all this prelude out of the way, I can finally get to the actual work I've been doing over the course of my GSoC project.

Background The upstream rsync project generally sees little development. Nonetheless, they released version 3.4.0 including some CVE fixes earlier this year. Unfortunately, their changes broke the -H flag. Now, Debian package maintainers need to apply those security fixes to the package versions in the Debian repositories; and those are typically a bit older. Which usually means that the patches cannot be applied as is but will need some amendments by the Debian maintainers. For these cases it is helpful to have autopkgtests defined, which check the package's functionality in an automated way upon every build. The question then is, why should the tests not be written upstream such that regressions are caught in the development rather than the distribution process? There's a lot to say on this question and it probably depends a lot on the package at hand, but for rsync the main benefits are twofold:
  1. The upstream project mocks the ssh connection over which rsync is most typically used. Mocking is better than nothing, but not the real thing. In addition to being a more realistic test scenario for the typical rsync use case, involving an ssh server in the test automatically extends the overall resilience of Debian packages, as new versions of the openssh-server package in Debian now benefit from the test cases in the rsync reverse dependency.
  2. The upstream rsync test framework is somewhat idiosyncratic and difficult to port to reimplementations of rsync. Given that the original rsync upstream sees little development, an extensive test suite further downstream can serve as a threshold for drop-in replacements for rsync.

Goal(s) At the start of the project, the Debian rsync package was just running (a part of) the upstream tests as autopkgtests. The relevant snippet from the build log for the rsync_3.4.1+ds1-3 package reads:
114s ------------------------------------------------------------
114s ----- overall results:
114s 36 passed
114s 7 skipped
Samuel and I agreed that it would be a good first milestone to make the skipped tests run. Afterwards, I should write some rsync test cases for local calls, i.e. without an ssh connection, effectively using rsync as a more powerful cp. And once that was done, I should extend the tests such that they run over an active ssh connection. With these milestones, I went to work.

Upstream tests Running the seven skipped upstream tests turned out to be fairly straightforward:
  • Two upstream tests concern access control lists and extended filesystem attributes. For these tests to run they rely on functionality provided by the acl and xattr Debian packages. Adding those to the Build-Depends list in the debian/control file of the rsync Debian package repo made them run.
  • Four upstream tests required root privileges to run. The autopkgtest tool knows the needs-root restriction for that reason. However, Samuel and I agreed that the tests should not exclusively run with root privileges. So, instead of just adding the restriction to the existing autopkgtest test, we created a new one which has the needs-root restriction and runs the upstream-tests-as-root script - which is nothing else than a symlink to the existing upstream-tests script (see the debian/tests/control sketch below).
The commits to implement these changes can be found in this merge request. The careful reader will have noticed that I only made 2 + 4 = 6 upstream test cases run out of 7: The leftover upstream test is checking the functionality of the --ctimes rsync option. In the context of Debian, the problem is that the Linux kernel doesn't have a syscall to set the creation time of a file. As long as that is the case, this test will always be skipped for the Debian package.
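For readers unfamiliar with the autopkgtest metadata, the split described above roughly corresponds to two stanzas in debian/tests/control. This is only an illustrative sketch with generic dependency fields, not the actual entries of the rsync package:
# debian/tests/control (illustrative sketch)
Tests: upstream-tests
Depends: @, @builddeps@

Tests: upstream-tests-as-root
Depends: @, @builddeps@
Restrictions: needs-root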

Local tests When it came to writing Debian-specific test cases, I started off with a completely clean slate. Which is a blessing and a curse at the same time: you have full flexibility but also full responsibility. There were a few things to consider at this point in time:
  • Which language to write the tests in? The programming language I am most proficient in is Python. But testing a CLI tool in Python would have been weird: it would have meant that I'd have to make repeated subprocess calls to run rsync and then read from the filesystem to get the file statistics I want to check. Samuel suggested I stick with shell scripts and make use of diffoscope - one of the main tools used and maintained by the Reproducible Builds project - to check whether the file contents and file metadata are as expected after rsync calls. Since I did not have good reasons to use bash, I decided to write the scripts to be POSIX compliant (see the sketch after this list for what such a test can look like).
  • How to avoid boilerplate? If one makes use of a testing framework, which one? Writing the tests would involve quite a bit of boilerplate, mostly related to giving informative output on and during the test run, preparing the file structure we want to run rsync on, and cleaning the files up after the test has run. It would be very repetitive and in violation of DRY to have the code for this appear in every test. Good testing frameworks should provide convenience functions for these tasks. shunit2 comes with those functions, is packaged for Debian, and given that it is already being used in the curl project, I decided to go with it.
  • Do we use the same directory structure and files for every test or should every test have an individual setup? The tradeoff in this question being test isolation vs. idiosyncratic code. If every test has its own setup, it takes a) more work to write the test and b) more work to understand the differences between tests. However, one can be sure that changes to the setup in one test will have no side effects on other tests. In my opinion, this guarantee was worth the additional effort in writing/reading the tests.
Having made these decisions, I simply started writing tests and ran into issues very quickly.
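To make this more concrete, here is a minimal sketch of what a test in this style could look like. The file layout, function names and assertions are made up for illustration and are not the actual Debian rsync tests:
#!/bin/sh
# Illustrative sketch of a shunit2-style rsync test; not the actual Debian tests.

setUp() {
    src=$(mktemp -d)
    dst=$(mktemp -d)
    echo hello > "$src/file"
}

tearDown() {
    rm -rf "$src" "$dst"
}

test_times_preserves_mtime() {
    rsync --times "$src/file" "$dst/file"
    # modification times should match after rsync --times
    assertEquals "mtime not synchronized" \
        "$(stat -c %Y "$src/file")" "$(stat -c %Y "$dst/file")"
    # diffoscope exits with a non-zero status when it reports a difference
    diffoscope "$src/file" "$dst/file" > /dev/null
    assertTrue "diffoscope reported a difference" $?
}

# shunit2 discovers and runs the test_* functions when sourced
. shunit2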

rsync and subsecond mtime diffs When testing the rsync --times option, I observed a weird phenomenon: If the source and destination file have modification times which differ only in the nanoseconds, an rsync --times call will not synchronize the modification times. More details about this behavior and examples can be found in the upstream issue I raised. In the Debian tests we had to occasionally work around this by setting the timestamps explicitly with touch -d.
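For illustration, the commands below sketch both the situation described in the issue and the workaround; the file names are arbitrary, and whether the first rsync call actually leaves the mtimes unsynchronized depends on the rsync version in use:
# Two files whose mtimes differ only in the subsecond part.
echo hello > a
echo hello > b
touch -d '2025-01-01 12:00:00.100000000' a
touch -d '2025-01-01 12:00:00.900000000' b
rsync --times a b
stat -c %y a b    # per the upstream issue, the mtimes may still differ here
# Workaround used in the Debian tests: set whole-second timestamps explicitly.
touch -d '2025-01-01 12:00:00' a b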
diffoscope regression In one test case, I was expecting a difference in the modification times but diffoscope would not report a diff. After a good amount of time spent on debugging the problem (my default, and usually correct, assumption is that something about my code is seriously broken if I run into issues like that), I was able to show that diffoscope only displayed this behavior in the version in the unstable suite, not on Debian stable (which I am running on my development machine). Since everything pointed to a regression in the diffoscope project and with diffoscope being written in Python, a language I am familiar with, I wanted to spend some time investigating (and hopefully fixing) the problem. Running git bisect on the diffoscope repo helped me in identifying the commit which introduced the regression: The commit contained an optimization via an early return for bit-by-bit identical files. Unfortunately, the early return also caused an explicitly requested metadata comparison (which could be different between the files) to be skipped. With a nicely diagnosed issue like that, I was able to go to a local hackerspace event, where people work on FOSS together for an evening every month. In a group, we were able to first, write a test which showcases the broken behavior in the latest diffoscope version, and second, make a fix to the code such that the same test passes going forward. All details can be found in this merge request.
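For anyone who has not driven a bisection before, the general shape of such a session is sketched below; the repository URL, the good/bad revisions and the reproducer script are placeholders rather than the ones actually used:
git clone https://salsa.debian.org/reproducible-builds/diffoscope.git
cd diffoscope
git bisect start
git bisect bad HEAD              # a revision known to show the regression
git bisect good <last-good-tag>  # a revision known to behave correctly
# reproduce.sh must exit 0 on good revisions and non-zero on bad ones;
# git bisect run then narrows down the offending commit automatically.
git bisect run ./reproduce.sh
git bisect reset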
shunit2 failures At some point I had a few autopkgtests set up and passing, but adding a new one would throw totally inexplicable errors at me. After trying to isolate the problem as much as possible, it turned out that shunit2 doesn't play well with the -e shell option. The project mentions this in the release notes for the 2.1.8 version1, but in my opinion a constraint this severe should be featured much more prominently, e.g. in the README.

Tests over an ssh connection The centrepiece of this project; everything else has in a way only been preparation for this. Obviously, the goal was to reuse the previously written local tests in some way. Not only because lazy me would have less work to do this way, but also because of the reduced long-term maintenance burden of one rather than two test sets. As it turns out, it is actually possible to accomplish that: the remote-tests script doesn't do much apart from starting an ssh server on localhost and running the local-tests script with the REMOTE environment variable set. The REMOTE environment variable changes the behavior of the local-tests script in such a way that it prepends "$REMOTE": to the destination of the rsync invocations. And given that we set REMOTE=rsync@localhost in the remote-tests script, local-tests copies the files to the exact same locations as before, just over ssh. The implementation details for this can be found in this merge request.
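A minimal sketch of the destination handling this implies; the variable names are illustrative and do not mirror the actual scripts:
# Inside a test: prepend "$REMOTE": to the destination when it is set,
# so the very same test body runs locally and over ssh.
dst="$dstdir/file"
if [ -n "${REMOTE:-}" ]; then
    dst="$REMOTE:$dst"
fi
rsync --times "$srcdir/file" "$dst"

# remote-tests then essentially boils down to:
REMOTE=rsync@localhost ./local-tests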

proposed-updates Most of my development work on the Debian rsync package took place during the Debian freeze, as the release of Debian Trixie is just around the corner. This means that uploading by Debian Developers (DD) and Debian Maintainers (DM) to the unstable suite is discouraged, as it makes migrating the packages to testing more difficult for the Debian release team. If DDs/DMs want to have the package version in unstable migrated to testing during the freeze, they have to file an unblock request. Samuel has done this twice (1, 2) for my work for Trixie, but has asked me to file the proposed-updates request for current stable (i.e. Debian Bookworm) myself after I've backported my tests to bookworm.

Unfinished business To run the upstream tests which check access control list and extended file system attributes functionality, I've added the acl and xattr packages to Build-Depends in debian/control. This, however, will only make the packages available at build time: if Debian users install the rsync package, the acl and xattr packages will not be installed alongside it. For that, the dependencies would have to be added to Depends or Suggests in debian/control. Depends is probably too strong of a relation since rsync clearly works well in practice without them, but adding them to Suggests might be worthwhile. A decision on this would involve checking what happens if rsync is called with the relevant options on a host machine which has those packages installed, but where the destination machine lacks them. Apart from the issue described above, the 15 tests I managed to write are a drop in the water in light of the infinitude of rsync options and their combinations. Most glaringly, the options implied by --archive are not all covered separately (which would help indicate which code path of rsync broke in a regression). To increase the likelihood of catching regressions with the autopkgtests, the test coverage should be extended in the future.
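If the Suggests route were taken, the change itself would be small; a sketch of the relevant debian/control fragment, using the package names mentioned above (generic placeholder fields, not an applied change):
Package: rsync
Architecture: any
Depends: ${misc:Depends}, ${shlibs:Depends}
Suggests: acl, xattr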

Conclusion Generally, I am happy with my contributions to Debian over the course of my small GSoC project: I've created an extensible, easy to understand, and working autopkgtest setup for the Debian rsync package. There are two things which bother me, however:
  1. In hindsight, I probably shouldn't have gone with shunit2 as a testing framework. The fact that it behaves erratically with the -e flag is a serious drawback for a shell testing framework: you really don't want a shell command to fail silently and the test to continue running.
  2. As alluded to in the previous section, I'm not particularly proud of the number of tests I managed to write.
On the other hand, finding and fixing the regression in diffoscope - while derailing me from the GSoC project itself - might have a redeeming quality.

DebConf25 By sheer luck I happened to work on a GSoC project at Debian over a time period during which the annual Debian conference would take place close enough to my place of residence. Samuel pointed out the opportunity to attend DebConf to me during the community bonding period, and since I could make time for the event in my schedule, I signed up. DebConf was a great experience which - aside from gaining more knowledge about Debian development - allowed me to meet the actual people usually hidden behind email addresses and IRC nicks. I can wholeheartedly recommend attending a DebConf to every interested Debian user! For those who have missed this year's iteration of the conference, I can recommend the following recorded talks: While not featuring as a keynote speaker (understandably so, as the newcomer to the Debian community that I am), I could still contribute a bit to the conference program.

GSoC project presentation The Debian Outreach team has scheduled a session in which all GSoC and Outreachy students over the past year had the chance to present their work in a lightning talk. The session has been recorded and is available online, just like my slides and the source for them.

Debian install workshop Additionally, with so many Debian experts gathering in one place while KDE's End of 10 campaign is ongoing, I felt it natural to organize a Debian install workshop. In hindsight I can say that I underestimated how much work it would be, especially for me, who does not speak a word of French. But although the turnout of people who wanted us to install Linux on their machines was disappointingly low, it was still worth it: not only because the material in the repo can be helpful to others planning install workshops, but also because it was nice to meet a) the person behind the Debian installer images and b) the local Brest/Finistère Linux user group as well as the motivated and helpful people at Infini.

Credits I want to thank the Open Source team at Google for organizing GSoC: the highly structured program with one-to-one mentorship is a great avenue to start contributing to well-established and at times intimidating FOSS projects. And as much as I disagree with Google's surveillance capitalist business model, I have to give it to them that the company at least takes its responsibility for FOSS (somewhat) seriously - unlike many other businesses which rely on FOSS and choose to free-ride off of it. Big thanks to the Debian community! I've experienced nothing but friendliness in my interactions with the community. And lastly, the biggest thanks to my GSoC mentor Samuel Henrique. He has dealt patiently and competently with all my stupid newbie questions. His support enabled me to make - albeit small - contributions to Debian. It has been a pleasure to work with him during GSoC and I'm looking forward to working together with him in the future.

  1. Obviously, I've only read them after experiencing the problem.

31 July 2025

Simon Josefsson: Independently Reproducible Git Bundles

The gnulib project publishes a git bundle as a stable archival copy of the gnulib git repository once in a while. Why? We don't know exactly what this may be useful for, but I'm promoting this to see if we can establish some good use-case. A git bundle may help to establish provenance in case of an attack on the Savannah hosting platform that compromises the gnulib git repository. Another use is in the Debian gnulib package: that gnulib bundle is git cloned when building some Debian packages, to get to exactly the gnulib commit used by each upstream project (see my talk on gnulib at DebConf24), and this approach reduces the amount of vendored code that is part of Debian's source code, which is relevant to mitigate XZ-style attacks. The first time we published the bundle, I wanted it to be possible to re-create it bit-by-bit identically by others. At the time I discovered a well-written blog post by Paul Beacher on reproducible git bundles and thought he had solved the problem for me. Essentially it boils down to disabling threading during compression when producing the bundle, and his final example shows this results in a predictable bit-by-bit identical output:
$ for i in $(seq 1 100); do \
> git -c 'pack.threads=1' bundle create -q /tmp/bundle-$i --all; \
> done
$ md5sum /tmp/bundle-* | cut -f 1 -d ' ' | uniq -c
    100 4898971d4d3b8ddd59022d28c467ffca
So what remains to be said about this? It seems reproducibility goes deeper than that. One desirable property is that someone else should be able to reproduce the same git bundle, and not only that a single individual is able to reproduce things on one machine. It surprised me to see that when I ran the same set of commands on a different machine (started from a fresh git clone), I got a different checksum. The different checksums occurred even when nothing had been committed on the server side between the two runs. I thought the reason had to do with other sources of unpredictable data, and I explored several ways to work around this but eventually gave up. I settled for the following sequence of commands:
REV=ac9dd0041307b1d3a68d26bf73567aa61222df54 # master branch commit to package
git clone https://git.savannah.gnu.org/git/gnulib.git
cd gnulib
git fsck # attempt to validate input
# inspect that the new tree matches a trusted copy
git checkout -B master $REV # put $REV at master
for b in $(git branch -r | grep origin/stable- | sort --version-sort); do git checkout ${b#origin/}; done
git remote remove origin # drop some unrelated branches
git gc --prune=now # drop any commits after $REV
git -c 'pack.threads=1' bundle create gnulib.bundle --all
V=$(env TZ=UTC0 git show -s --date=format:%Y%m%d --pretty=%cd master)
mv gnulib.bundle gnulib-$V.bundle
build-aux/gnupload --to ftp.gnu.org:gnulib gnulib-$V.bundle
At the time it felt more important to publish something than to reach for perfection, so we did so using the above snippet. Afterwards I reached out to the git community on this and there was good discussion about my challenge. At the end of that thread you see that I was finally able to reproduce bit-by-bit identical bundles from two different clones, by using an intermediate git -c pack.threads=1 repack -adF step. I now assume that the unpredictable data I got earlier was introduced during the git clone steps, compressing the pack differently each time due to threaded compression. The outcome could also depend on what content the server provided, so if someone ran git gc or git repack on the server side, things would change for the user, even if the user forced threading to 1 during cloning - more experiments on what kind of server-side alterations result in client-side differences would be good research. A couple of months passed and it is now time to publish another gnulib bundle - somewhat paired to the bi-yearly stable gnulib branches - so let's walk through the commands and explain what they do. First clone the repository:
REV=225973a89f50c2b494ad947399425182dd42618c   # master branch commit to package
S1REV=475dd38289d33270d0080085084bf687ad77c74d # stable-202501 branch commit
S2REV=e8cc0791e6bb0814cf4e88395c06d5e06655d8b5 # stable-202507 branch commit
git clone https://git.savannah.gnu.org/git/gnulib.git
cd gnulib
git fsck # attempt to validate input
I believe the git fsck will validate that the chain of SHA1 commits is linked together, preventing someone from smuggling in unrelated commits earlier in the history without having to do a SHA1 collision. SHA1 collisions are economically feasible today, so this isn't much of a guarantee of anything though.
git checkout -B master $REV # put $REV at master
# Add all stable-* branches locally:
for b in $(git branch -r | grep origin/stable- | sort --version-sort); do git checkout ${b#origin/}; done
git checkout -B stable-202501 $S1REV
git checkout -B stable-202507 $S2REV
git remote remove origin # drop some unrelated branches
git gc --prune=now # drop any unrelated commits, not clear this helps
This establishes a set of branches pinned to particular commits. The older stable-* branches are no longer updated, so they shouldn't be moving targets. In case they are modified in the future, the particular commit we used will be found in the official git bundle.
time git -c pack.threads=1 repack -adF
That's the new magic command to repack and recompress things in a hopefully more predictable way. This leads to a 72MB git pack under .git/objects/pack/ and a 62MB git bundle. The runtime on my laptop is around 5 minutes. I experimented with -c pack.compression=1 and -c pack.compression=9 but the size was roughly the same; 76MB and 66MB for level 1 and 72MB and 62MB for level 9. Runtime still around 5 minutes. Git uses zlib by default, which isn't the most optimal compression around. I tried -c pack.compression=0 and got a 163MB git pack and a 153MB git bundle. The runtime is still around 5 minutes, indicating that compression is not the bottleneck for the git repack command. That 153MB uncompressed git bundle compresses to 48MB with gzip default settings and 46MB with gzip -9; to 39MB with zst defaults and 34MB with zst -9; and to 28MB using xz defaults with a small 26MB using xz -9. Still, the inconvenience of having to uncompress a 30-40MB archive into the much larger 153MB is probably not worth the savings compared to shipping and using the (still relatively modest) 62MB git bundle. Now finally prepare the bundle and ship it:
git -c 'pack.threads=1' bundle create gnulib.bundle --all
V=$(env TZ=UTC0 git show -s --date=format:%Y%m%d --pretty=%cd master)
mv gnulib.bundle gnulib-$V.bundle
build-aux/gnupload --to ftp.gnu.org:gnulib gnulib-$V.bundle
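Once the upload has landed, anyone can check that an independently reproduced bundle matches the published one and that the bundle itself is sound; a minimal sketch:
# Verify the bundle is self-consistent and compare against the published copy.
git bundle verify gnulib-$V.bundle
wget -O gnulib-$V.bundle.published https://ftp.gnu.org/gnu/gnulib/gnulib-$V.bundle
sha256sum gnulib-$V.bundle gnulib-$V.bundle.published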
Yay! Another gnulib git bundle snapshot is available from https://ftp.gnu.org/gnu/gnulib/. The essential part of the git repack command is the -F parameter. In the thread -f was suggested, which translates into the git pack-objects --no-reuse-delta parameter:
--no-reuse-delta
When creating a packed archive in a repository that has existing packs, the command reuses existing deltas. This sometimes results in a slightly suboptimal pack. This flag tells the command not to reuse existing deltas but compute them from scratch.
When reading the man page, I thought that using -F, which translates into --no-reuse-object, would be slightly stronger:
--no-reuse-object
This flag tells the command not to reuse existing object data at all, including non deltified object, forcing recompression of everything. This implies --no-reuse-delta. Useful only in the obscure case where wholesale enforcement of a different compression level on the packed data is desired.
On the surface, without --no-reuse-object, some amount of earlier compression could taint the final result. Still, I was able to get bit-by-bit identical bundles by using -f, so possibly reaching for -F is not necessary. All the commands were done using git 2.51.0 as packaged by Guix. I fear the result may be different with other git versions and/or zlib libraries. I was able to reproduce the same bundle on a Trisquel 12 aramo (derived from Ubuntu 22.04) machine, which uses git 2.34.1. This suggests there is some chance of this being possible to reproduce in 20 years' time. Time will tell. I also fear these commands may be insufficient if something is moving on the server side of the git repository of gnulib (even just something as simple as a new commit); I tried to make some experiments with this, but let's aim for incremental progress here. At least I have now been able to reproduce the same bundle on different machines, which wasn't the case last time. Happy Reproducible Git Bundle Hacking!

29 July 2025

Christoph Berg: The Debian Conference 2025 in Brest

It's Sunday and I'm now sitting in the train from Brest to Paris where I will be changing to Germany, on the way back from the annual Debian conference. A full week of presentations, discussions, talks and socializing is laying behind me and my head is still spinning from the intensity.
Pollito and the gang of DebConf mascots wearing their conference badges (photo: Christoph Berg)
Sunday, July 13th It started last Sunday with traveling to the conference. I got on the Eurostar in Duisburg and we left on time, but even before reaching Cologne, the train was already one hour delayed for external reasons, collecting yet another hour between Aachen and Liege for its own technical problems. "The train driver is working on trying to fix the problem." My original schedule had well over two hours for changing train stations in Paris, but being that late, I missed the connection to Brest in Montparnasse. At least in the end, the total delay was only one hour when finally arriving at the destination. Due to the French quatorze juillet fireworks approaching, buses in Brest were rerouted, but I managed to catch the right bus to the conference venue, already meeting a few Debian people on the way. The conference was hosted at the IMT Atlantique Brest campus, giving the event a nice university touch. I arrived shortly after 10 in the evening and after settling down a bit, got on one of the "magic" buses for transportation to the camping site where half of the attendees were stationed. I shared a mobile home with three other Debianites, where I got a small room for myself. Monday, July 14th Next morning, we took the bus back to the venue for a small breakfast and the opening session, where Enrico Zini invited me to come to his and Nicolas Dandrimont's session about Debian community governance and curation, which I gladly did. Many ideas about conflict moderation and community steering were floated around. I hope some of that can be put into effect to make flamewars on the mailing lists less heated and more directed. After that, I attended Olly Betts' "Stemming with Snowball" session; Snowball is also the stemmer used in PostgreSQL. Text search is one of the areas in PostgreSQL that I never really looked closely at, including the integration into the postgresql-common package, so it was nice to get more information about that. In preparation for the conference, a few of us Ham radio operators in Debian had decided to bring some radio gear to DebConf this year in order to perhaps spark more interest for our hobby among the fellow geeks. In the afternoon after the talks, I found a quieter spot just outside of the main hall and set up a shortwave antenna by attaching a 10m mast to one of the park benches there. The 40m band was still pretty much closed, but I could work a few stations from England, just across the channel from Bretagne, answering questions from interested passing-by Debian people between the contacts. Over time, the band opened and more European stations got into the log.
F/DF7CB in Brest (photo: Evangelos Ribeiro Tzaras)
Tuesday, July 15th Tuesday started with Helmut Grohne's session about "Reviving (un)schroot". The schroot program has been Debian's standard way of managing build chroots for a long time, but it is more and more being regarded as obsolete with all kinds of newer containerization and virtualization technologies taking over. Since many bits of Debian infrastructure depend on schroot, and its user interface is still very useful, Helmut reimplemented it using Linux namespaces and the "unshare" system call. I had already worked with him at the Hamburg Minidebconf to replace the apt.postgresql.org buildd machinery with the new system, but we were not quite there yet (network isolation is nice, but we still sometimes need proper networking), so it was nice to see the effort is still progressing, and I will give his new scripts a try when I'm back home. Next, Stefano Rivera and Colin Watson presented Debusine, a new package repository and workflow management system. It looks very promising for anyone running their own repository, so perhaps yet another bit of apt.postgresql.org infrastructure to replace in the future. After that, I went to the Debian LTS BoF session by Santiago Ruano Rincón and Bastien Roucariès - Debian releases plus LTS is what we are covering with apt.postgresql.org. Then there were bits from the DPL (Debian Project Leader), and a session moderated by Stefano Rivera - interesting to me as a member of the Debian Technical Committee - on the future structure of the packages required for cross-building in Debian, a topic which had been brought to the TC a while ago. I am happy that we could resolve the issue without having to issue a formal TC ruling, as the involved parties (kernel, glibc, gcc and the cross-build people) found a promising way forward themselves. DebConf is really a good way to get such issues unstuck. Ten years ago at the 2015 Heidelberg DebConf, Enrico had given a seminal "Semi-serious stand-up comedy" talk, drawing parallels between the Debian Open Source community and the BDSM community - "People doing things consensually together". (Back then, the talk was announced as "probably unsuitable for people of all ages".) With his unique presentation style and witty insights, the session made a lasting impression on everyone attending. Now, ten years later (and he and many in the audience being ten years older), he gave an updated version of it. We are now looking forward to the sequel in 2035. The evening closed with the famous DebConf tradition of the Cheese & Wine party in an old fort next to the coast, just below the conference venue. Even though he's a fellow Debian Developer, Ham, and also TC member, I had never met Paul Tagliamonte in person before, but we spent most of the evening together geeking out on all things Debian and Ham radio.
The northern coast of Ushant (photo: Christoph Berg)
Wednesday, July 16th Wednesday already marked the end of the first half of the week, the day of the day trips. I had chosen to go to Ouessant island (Ushant in English), which marks the western end of the French mainland and hosts one of the lighthouses guiding the way into the English Channel. The ferry trip included surprisingly big waves which left some participants seasick, but everyone recovered fast. After around one and a half hours we arrived, picked up the bicycles, and spent the rest of the day roaming the island. The weather forecast was originally very cloudy and 18°C, but over noon this turned into sunny and warm, so many got an unplanned sunburn. I enjoyed the trip very much - it made up for not having time to visit the city during the week. After returning, we spent the rest of the evening playing DebConf's standard game, Mao (spoiler alert: don't follow the link if you ever intend to play).
Having a nice day (photo: Christoph Berg)
Thursday, July 17th The next day started with the traditional "Meet the Technical Committee" session. This year, we trimmed the usual slide deck down to remove the boring boilerplate parts, so after a very short introduction to the work of the committee by our chairman Matthew Vernon, we opened up the discussion with the audience, with seven (out of 8) TC members on stage. I think the format worked very well, with good input from attendees. Next up was "Don't fear the TPM" by Jonathan McDowell. A common misconception in the Free Software community is that the TPM is evil DRM hardware working against the user, but while it could be used in theory that way, the necessary TPM attestations seem to be impossible to attain in practice, so that wouldn't happen anyway. Instead, it is a crypto coprocessor present in almost all modern computers that can be used to hold keys, for example to be used for SSH. It will also be interesting to research if we can make use of it for holding the Transparent Data Encryption keys for CYBERTEC's PostgreSQL Enterprise Edition. Aigars Mahinovs then directed everyone in place for the DebConf group picture, and Lucas Nussbaum started a discussion about archive-wide QA tasks in Debian, an area where I did a lot of work in the past and that still interests me. Antonio Terceiro and Paul Gevers followed up with techniques to track archive-wide rebuilding and testing of packages and in turn filing a lot of bugs to track the problems. The evening ended with the conference dinner, again in the fort close by the coast. DebConf is good for meeting new people, and I incidentally ran into another Chris, who happened to be one of the original maintainers of pgaccess, the pre-predecessor of today's pgadmin. I admit still missing this PostgreSQL frontend for its simplicity and ability to easily edit table data, but it disappeared around 2004. Friday, July 18th On Friday, I participated in discussion sessions around contributors.debian.org (PostgreSQL is planning to set up something similar) and the New Member process, which I had helped to run and reform a decade or two ago. Agathe Porte (also a Ham radio operator, like so many others at the conference I had no idea of) then shared her work on rust-rewriting the slower parts of Lintian, the Debian package linter. Craig Small talked about "Free as in Bytes", the evolution of the Linux procps free command. Over the time and many kernel versions, the summary numbers printed became better and better, but there will probably never be a version that suits all use cases alike. Later over dinner, Craig (who is also a TC member) and I shared our experiences with these numbers and customers (not) understanding them. He pointed out that for PostgreSQL and looking at used memory in the presence of large shared memory buffers, USS (unique set size) and PSS (proportional set size) should be more realistic numbers than the standard RSS (resident set size) that the top utility is showing by default. Antonio Terceiro and Paul Gevers again joined to lead a session, now on ci.debian.net and autopkgtest, the test driver used for running tests on packages after they have been installed on a system. The PostgreSQL packages are heavily using this to make sure no regressions creep in even after builds have successfully completed and test re-runs are rescheduled periodically.
The day ended with Bdale Garbee's electronics team BoF and Paul Tagliamonte and me setting up the radio station in the courtyard, again answering countless questions about ionospheric conditions and operating practice. Saturday, July 19th Saturday was the last conference day. In the first session, Nikos Tsipinakis and Federico Vaga from CERN announced that the LHC will be moving to Debian for the accelerator's frontend computers in their next "long shutdown" maintenance period in the next year. CentOS broke compatibility too often, and Debian trixie together with the extended LTS support will cover the time until the next long shutdown window in 2035, until when the computers should have all been replaced with newer processors covering higher x86_64 baseline versions. The audience was very delighted to hear that Debian is now also being used in this prestige project. Ben Hutchings then presented new Linux kernel features. Particularly interesting for me was the support for atomic writes spanning more than one filesystem block. When configured correctly, this would mean PostgreSQL didn't have to record full-page images in the WAL anymore, increasing throughput and performance. After that, the Debian ftp team discussed ways to improve review of new packages in the archive, and which of their processes could be relaxed with new US laws around Open Source and cryptography algorithms export. Emmanuel Arias led a session on Salsa CI, Debian's Gitlab instance and standard CI pipeline. (I think it's too slow, but the runners are not under their control.) Julian Klode then presented new features in APT, Debian's package manager. I like the new display format (and a tiny bit of that is also from me sending in wishlist bugs). In the last round of sessions this week, I then led the Ham radio BoF with an introduction into the hobby and how Debian can be used. Bdale mentioned that the sBitx family of SDR radios is natively running Debian, so stock packages can be used from the radio's touch display. We also briefly discussed his involvement in ARDC and the possibility to get grants from them for Ham radio projects. Finally, DebConf wrapped up with everyone gathering in the main auditorium and cheering the organizers for making the conference possible and passing Pollito, the DebConf mascot, to the next organizer team.
Pollito on stage (photo: Christoph Berg)
Sunday, July 20th Zoom back to the train: I made it through the Paris metro and I'm now on the Eurostar back to Germany. It has been an intense week with all the conference sessions and meeting all the people I had not seen so long. There are a lot of new ideas to follow up on both for my Debian and PostgreSQL work. Next year's DebConf will take place in Santa Fe, Argentina. I haven't yet decided if I will be going, but I can recommend the experience to everyone! The post The Debian Conference 2025 in Brest appeared first on CYBERTEC PostgreSQL Services & Support.

28 July 2025

Dimitri John Ledkov: Achieving actually full disk encryption of UEFI ESP at rest with TCG OPAL, FIPS, LUKS

Achieving full disk encryption using FIPS, TCG OPAL and LUKS to encrypt UEFI ESP on bare-metal and in VMs
Many security standards such as CIS and STIG require protecting information at rest. For example, NIST SP 800-53r5 SC-28 advocates using cryptographic protection, offline storage and TPMs to enhance protection of information confidentiality and/or integrity. Traditionally, to satisfy such controls on portable devices such as laptops, one would utilize software-based Full Disk Encryption - Mac OS X FileVault, Windows BitLocker, Linux cryptsetup LUKS2. In cases when FIPS cryptography is required, an additional burden would be placed onto these systems to operate their kernels in FIPS mode.
The Trusted Computing Group works on establishing many industry standards and specifications, which are widely adopted to improve safety and security of computing whilst keeping it easy to use. One of their most famous specifications is TCG TPM 2.0 (Trusted Platform Module). TPMs are now widely available on most devices and help to protect secret keys and attest systems. For example, most software full disk encryption solutions can utilise a TCG TPM to store full disk encryption keys, providing passwordless, biometric or PIN-based ways to unlock the drives as well as attesting that systems have not been modified or compromised whilst offline.
TCG Storage Security Subsystem Class: Opal Specification is a set of specifications for features of data storage devices. The authors and contributors to OPAL are leading and well trusted storage manufacturers such as Samsung, Western Digital, Seagate Technologies, Dell, Google, Lenovo, IBM, Kioxia, among others. One of the features that the Opal Specification enables is self-encrypting drives, which become very powerful when combined with pre-boot authentication. Out of the box, such drives always and transparently encrypt all disk data using hardware acceleration. To protect the data, one can enter the UEFI firmware setup (BIOS) to set an NVMe single user password (or user + administrator/recovery passwords) to encrypt the disk encryption key. If one's firmware didn't come with such features, one can also use SEDutil to inspect and configure all of this. The latest releases of major Linux distributions have SEDutil already packaged.
Once a password is set, on startup pre-boot authentication will request one to enter the password - prior to booting any operating systems. It means that the full disk is actually encrypted, including the UEFI ESP and all operating systems that are installed in case of dual or multi-boot installations. This also prevents tampering with the ESP, UEFI bootloaders and kernels, which with traditional software-based encryption often remain unencrypted and accessible. It also means one doesn't have to do special OS-level repartitioning or installation steps to ensure all data is encrypted at rest.
What about FIPS compliance? Well, the good news is that the majority of the OPAL compliant hard drives and/or security sub-chips do have FIPS 140-3 certification, meaning they have been tested by independent laboratories to ensure they do in fact encrypt data. On the CMVP website one can search for module name terms "OPAL" or "NVMe" or the name of a hardware vendor to locate FIPS certificates.
Are such drives widely available? Yes. For example, a common Thinkpad X1 gen 11 has OPAL NVMe drives as standard, and they have FIPS certification too. Thus, it is likely these are already widely available in your hardware fleet. Use sedutil to check if the MediaEncrypt and LockingSupported features are available.
Well, this is great for laptops and physical servers, but you may ask - what about public or private cloud?
Actually, more or less the same is already in place in both. On the CMVP website all major clouds have their disk encryption hardware certified, and all of them always encrypt all virtual machines with FIPS certified cryptography without an ability to opt out. One is however in full control of how the encryption keys are managed: cloud-provider or self-managed (either with a cloud HSM or KMS, or bring your own / external). See these relevant encryption options and key management docs for GCP, Azure, AWS. But the key takeaway is that, without doing anything, VMs in public cloud are always encrypted at rest and satisfy NIST SP 800-53 controls.
What about private cloud? Most Linux based private clouds ultimately use qemu, typically with qcow2 virtual disk images. Qemu supports user-space encryption of qcow2 disks, see this manpage. Such encryption encrypts the full virtual machine disk, including the bootloader and ESP. And it is handled entirely outside of the VM on the host - meaning the VM never has access to the disk encryption keys. Qemu implements this encryption entirely in userspace using gnutls, nettle or libgcrypt, depending on how it was compiled. This also means one can satisfy FIPS requirements entirely in userspace without a Linux kernel in FIPS mode. Higher level APIs built on top of qemu also support qcow2 disk encryption, as in projects such as libvirt and OpenStack Cinder.
If you carefully read the docs, you may notice that agent support is sometimes explicitly called out as not supported, or not mentioned at all. Quite often agents running inside the OS may not have enough observability to assess if there is external encryption. It does mean that monitoring the above encryption options requires different approaches - for example, monitor your cloud configuration using tools such as Wiz and Orca, rather than using agents inside individual VMs. For laptop / endpoint security agents, I do wish they would start gaining the capability to report OPAL SED availability and whether it is active or not.
What about using software encryption nonetheless on top of the above solutions? It is commonly referred to as double or multiple encryption. There will be an additional performance impact, but it can be worthwhile. It really depends on what you define as data at rest for yourself and which controls you need. If one has a dual-boot laptop and wants to keep one OS encrypted whilst booted into the other, it can be perfectly reasonable to encrypt the two using separate software encryption keys, in addition to the OPAL encryption of the ESP. For more targeted per-file / per-folder encryption, one can look into using gocryptfs, which is the best successor to the once popular but now deprecated eCryptfs (amazing tool, but it has fallen behind in development and can lead to data loss).
All of the above mostly talks about cryptographic encryption, which only provides confidentiality but not data integrity. To protect integrity, one needs to choose how to maintain that. dm-verity is a good choice for read-only and rigid installations. For read-write workloads, it may be easier to deploy ZFS or Btrfs instead. If one is using filesystems without built-in integrity support such as XFS or Ext4, one can retrofit an integrity layer to them by using dm-integrity (either standalone, or via the LUKS/cryptsetup --integrity option).
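Coming back to the qemu case for a moment, here is a hedged sketch of what host-side qcow2 encryption can look like; the image name, size and inline passphrase are illustrative only (in practice the secret would come from a file or keyring, and libvirt or Cinder wrap this up for you):
# Create a LUKS-encrypted qcow2 image entirely on the host; the guest never
# sees the key, qemu en-/decrypts transparently at run time.
qemu-img create -f qcow2 \
    --object secret,id=sec0,data=example-passphrase \
    -o encrypt.format=luks,encrypt.key-secret=sec0 \
    encrypted.qcow2 20G

# Inspect the image; the output should list the luks encryption format.
qemu-img info encrypted.qcow2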

If one has a lot of estate and a lot of encryption keys to keep track of, a key management solution is likely needed. The most popular solution is likely the one from Thales Group marketed under CipherTrust Data Security Platform (previously Vormetric), but there are many others, including OEM / Vendor / Hardware / Cloud specific or agnostic solutions.

I hope this crash course guide piques your interest to learn and discover modern confidentiality and integrity solutions, and to re-affirm or change your existing controls w.r.t. data protection at rest.

Full disk encryption, including the UEFI ESP /boot/efi, is now widely achievable by default on both baremetal machines and in VMs, including with FIPS certification. To discuss more, let's connect on LinkedIn.

Russ Allbery: Review: Cyteen

Review: Cyteen, by C.J. Cherryh
Series: Cyteen #1
Publisher: Warner Aspect
Copyright: 1988
Printing: September 1995
ISBN: 0-446-67127-4
Format: Trade paperback
Pages: 680
The main text below is an edited version of my original review of Cyteen written on 2012-01-03. Additional comments from my re-read are after the original review. I've reviewed several other C.J. Cherryh books somewhat negatively, which might give the impression I'm not a fan. That is an artifact of when I started reviewing. I first discovered Cherryh with Cyteen some 20 years ago, and it remains one of my favorite SF novels of all time. After finishing my reading for 2011, I was casting about for what to start next, saw Cyteen on my parents' shelves, and decided it was past time for my third reading, particularly given the recent release of a sequel, Regenesis. Cyteen is set in Cherryh's Alliance-Union universe following the Company Wars. It references several other books in that universe, most notably Forty Thousand in Gehenna but also Downbelow Station and others. It also has mentions of the Compact Space series (The Pride of Chanur and sequels). More generally, almost all of Cherryh's writing is loosely tied together by an overarching future history. One does not need to read any of those other books before reading Cyteen; this book will fill you in on all of the politics and history you need to know. I read Cyteen first and have never felt the lack. Cyteen was at one time split into three books for publishing reasons: The Betrayal, The Rebirth, and The Vindication. This is an awful way to think of the book. There are no internal pauses or reasonable volume breaks; Cyteen is a single coherent novel, and Cherryh has requested that it never be broken up that way again. If you happen to find all three portions as your reading copy, they contain all the same words and are serviceable if you remember it's a single novel under three covers, but I recommend against reading the portions in isolation. Human colonization of the galaxy started with slower-than-light travel sponsored by the private Sol Corporation. The inhabitants of the far-flung stations and the crews of the merchant ships that supplied them have formed their own separate cultures, but initially remained attached to Earth. That changed with the discovery of FTL travel and a botched attempt by Earth to reassert its authority. At the time of Cyteen, there are three human powers: distant Earth (which plays little role in this book), the merchanter Alliance, and Union. The planet Cyteen is one of only a few Earth-like worlds discovered by human expansion, and is the seat of government and the most powerful force in Union. This is primarily because of Reseune: the Cyteen lab that produces the azi. If Cyteen is about any one thing, it's about azi: genetically engineered human clones who are programmed via intensive psychological conditioning starting before birth. The conditioning uses a combination of drugs to make them receptive and "tape," specific patterns of instruction and sensory stimulation. They are designed for specific jobs or roles, they're conditioned to be obedient to regular humans, and they're not citizens. They are, in short, slaves. In a lot of books, that's as deep as the analysis would go. Azi are slaves, and slavery is certainly bad, so there would probably be a plot around azi overthrowing their conditioning, or around the protagonists trying to free them from servitude. But Cyteen is not any SF novel, and azi are considerably more complex and difficult than that analysis. 
We learn over the course of the book that the immensely powerful head of Reseune Labs, Ariane Emory, has a specific broader purpose in mind for the azi. One of the reasons why Reseune fought for and gained the role of legal protector of all azi in Union, regardless of where they were birthed, is so that Reseune could act to break any permanent dependence on azi as labor. And yet, they are slaves; one of the protagonists of Cyteen is an experimental azi, which makes him the permanent property of Reseune and puts him in constant jeopardy of being used as a political prisoner and lever of manipulation against those who care about him. Cyteen is a book about manipulation, about programming people, about what it means to have power over someone else's thoughts, and what one can do with that power. But it's also a book about connection and identity, about what makes up a personality, about what constitutes identity and how people construct the moral codes and values that they hold at their core. It's also a book about certainty. Azi are absolutely certain, and are capable of absolute trust, because that's part of their conditioning. Naturally-raised humans are not. This means humans can do things that azi can't, but the reverse is also true. The azi are not mindless slaves, nor are they mindlessly programmed, and several of the characters, both human and azi, find a lot of appeal in the core of certainty and deep self-knowledge of their own psychological rules that azis can have. Cyteen is a book about emotions, and logic, and where they come from and how to balance them. About whether emotional pain and uncertainty is beneficial or damaging, and about how one's experiences make up and alter one's identity. This is also a book about politics, both institutional and personal. It opens with Ariane Emory, Councilor for Science for five decades and the head of the ruling Union Expansionist party. She's powerful, brilliant, dangerously good at reading people, and dangerously willing to manipulate and control people for her own ends. What she wants, at the start of the book, is to completely clone a Special (the legal status given to the most brilliant minds of Union). This was attempted before and failed, but Ariane believes it's now possible, with a combination of tape, genetic engineering, and environmental control, to reproduce the brilliance of the original mind. To give Union another lifespan of work by their most brilliant thinkers. Jordan Warrick, another scientist at Reseune, has had a long-standing professional and personal feud with Ariane Emory. As the book opens, he is fighting to be transferred out from under her to the new research station that would be part of the Special cloning project, and he wants to bring his son Justin and Justin's companion azi Grant with them. Justin is a PR, a parental replicate, meaning he shares Jordan's genetic makeup but was not an attempt to reproduce the conditions of Jordan's rearing. Grant was raised as his brother. And both have, for reasons that are initially unclear, attracted the attention of Ariane, who may be using them as pawns. This is just the initial setup, and along with this should come a warning: the first 150 pages set up a very complex and dangerous political situation and build the tension that will carry the rest of the book, and they do this by, largely, torturing Justin and Grant. The viewpoint jumps around, but Justin and Grant are the primary protagonists for this first section of the book. 
While one feels sympathy for both of them, I have never, in my multiple readings of the book, particularly liked them. They're hard to like, as opposed to pity, during this setup; they have very little agency, are in way over their heads, are constantly making mistakes, and are essentially having their lives destroyed.

Don't let this turn you off on the rest of the book. Cyteen takes a dramatic shift about 150 pages in. A new set of protagonists are introduced who are some of the most interesting, complex, and delightful protagonists in any SF novel I have read, and who are very much worth waiting for. While Justin has his moments later on (his life is so hard that his courage can be profoundly moving), it's not necessary to like him to love this book. That's one of the reasons why I so strongly dislike breaking it into three sections; that first section, which is mostly Justin and Grant, is not representative of the book.

I can't talk too much more about the plot without risking spoiling it, but it's a beautiful, taut, and complex story that is full of my favorite things in both settings and protagonists. Cyteen is a book about brilliant people who think on their feet. Cherryh succeeds at showing this through what they do, which is rarely done as well as it is here. It's a book about remembering one's friends and remembering one's enemies, and waiting for the most effective moment to act, but it also achieves some remarkable transformations. About 150 pages in, you are likely to loathe almost everyone in Reseune; by the end of the book, you find yourself liking, or at least understanding, nearly everyone. This is extremely hard, and Cherryh pulls it off in most cases without even giving the people she's redeeming their own viewpoint sections. Other than perhaps George R.R. Martin, I've not seen another author do this as well.

And, more than anything else, Cyteen is a book with the most wonderful feeling of catharsis. I think this is one of the reasons why I adore this book and have difficulties with some of Cherryh's other works. She's always good at ramping up the tension and putting her characters in awful, untenable positions. Less frequently does she provide the emotional payoff of turning the tables, where you get to watch a protagonist do everything you've been wanting them to do for hundreds of pages, except even better and more delightfully than you would have come up with. Cyteen is one of the most emotionally satisfying books I've ever read.

I could go on and on; there is just so much here that I love. Deep questions of ethics and self-control, presented in a way that one can see the consequences of both bad decisions and good ones and contrast them. Some of the best political negotiations in fiction. A wonderful look at friendship and loyalty from several directions. Two of the best semi-human protagonists I've seen, who one can see simultaneously as both wonderful friends and utterly non-human and who put nearly all of the androids in fiction to shame by being something trickier and more complex. A wonderful unfolding sense of power. A computer that can somewhat anticipate problems and somewhat can't, and that encapsulates much of what I love about semi-intelligent bases in science fiction. Cyteen has that rarest of properties of SF novels: both the characters and the technology meld in a wonderful combination where neither could exist without the other, where the character issues are illuminated by the technology and the technology supports the characters.
I have, for this book, two warnings. The first, as previously mentioned, is that the first 150 pages of setup are necessary but painful to read, and I never fully warmed to Justin and Grant throughout. I would not be surprised to hear that someone started this book but gave up on it after 50 or 100 pages. I do think it's worth sticking out the rocky beginning, though. Justin and Grant continue to be a little annoying, but there's so much other good stuff going on that it doesn't matter.

The other warning is that part of the setup of the story involves the rape of an underage character. This is mostly off-camera, but the emotional consequences are significant (as they should be) and are frequently discussed throughout the book. There is also rather frank discussion of adolescent sexuality later in the book. I think both of these are relevant to the story and handled in a way that isn't gratuitous, but they made me uncomfortable, and I don't have any past history with those topics.

Those warnings notwithstanding, this is simply one of the best SF novels ever written. It uses technology to pose deep questions about human emotions, identity, and interactions, and it uses complex and interesting characters to take a close look at the impact of technology on lives. And it does this with a wonderfully taut, complicated plot that sustains its tension through all 680 pages, and with characters whom I absolutely love. I have no doubt that I'll be reading it for a fourth and fifth time some years down the road.

Followed by Regenesis, although Cyteen stands well entirely on its own and there's no pressing need to read the sequel.

Rating: 10 out of 10

Some additional thoughts after re-reading Cyteen in 2025:

I touched on this briefly in my original review, but I was really struck during this re-read by how much the azi are a commentary on and a complication of the role of androids in earlier science fiction. Asimov's Three Laws of Robotics were an attempt to control the risks of robots, but can also be read as turning robots into slaves. Azi make the slavery more explicit and disturbing by running the programming on a human biological platform, but they're more explicitly programmed and artificial than a lot of science fiction androids. Artificial beings and their relationship to humans have been a recurring theme of SF since Frankenstein, but I can't remember a novel that makes the comparison to humans this ambiguous and conflicted. The azi not only like being azi, they can describe why they prefer it. It's clear that Union made azi for many of the same reasons that humans enslave other humans, and that Ariane Emory is using them as machinery in a larger (and highly ethically questionable) plan, but Cherryh gets deeper into the emergent social complications and societal impact than most SF novels manage. Azi are apparently closer to humans than the famous SF examples such as Commander Data, but the deep differences are both more subtle and more profound.

I've seen some reviewers who are disturbed by the lack of a clear moral stance by the protagonists against the creation of azi. I'm not sure what to think about that. It's clear the characters mostly like the society they've created, and the groups attempting to "free" azi from their "captivity" are portrayed as idiots who have no understanding of azi psychology. Emory says she doesn't want azi to be a permanent aspect of society but clearly has no intention of ending production any time soon. The book does seem oddly unaware of the argument that the production of azi is unethical per se and that, unlike with androids, there is an obvious exit ramp: continue cloning gene lines as needed to maintain a sufficient population for a growing industrial civilization, but raise the children as children rather than using azi programming. If Cherryh included some reason why that was infeasible, I didn't see it, and I don't think the characters directly confronted it.

I don't think societies in books need to be ethical, or that Cherryh intended to defend this one. There are a lot of nasty moral traps that civilizations can fall into that make for interesting stories. But the lack of acknowledgment of the problem within the novel did seem odd this time around.

The other part of this novel that was harder to read past in this re-read is the sexual ethics. There's a lot of adolescent sexuality in this book, and even apart from the rape scene, which was more on-the-page than I had remembered and which is quite (intentionally) disturbing, there is a whole lot of somewhat dubious consent. Maybe I've gotten older or just more discriminating, but it felt weirdly voyeuristic to know this much about the sex lives of characters who are, at several critical points in the story, just a bunch of kids.

All that being said, and with the repeated warning that the first 150 pages of this novel are just not very good, there is still something magic about the last two-thirds of this book.
It has competence porn featuring a precociously brilliant teenager who I really like, it has one of the more interesting non-AI programmed computer systems that I've read in SF, it has satisfying politics that feel like modern politics (media strategy and relationships and negotiated alliances, rather than brute force and ideology), and it has a truly excellent feeling of catharsis. The plot resolution is a bit too abrupt and a bit insufficiently explained (there's more in Regenesis), but even though this was my fourth time through this book, the pacing grabbed me again and I could barely put down the last part of the story.

Ethics aside (and I realize that's quite the way to start a sentence), I find the azi stuff fascinating. I know the psychology in this book is not real and is hopelessly simplified compared to real psychology, but there's something in the discussions of value sets and flux and self-knowledge that grabs my interest and makes me want to ponder. I think it's the illusion of simplicity and control, the what-if premise of a kind of thought where core motivations and moral rules could be knowable instead of endlessly fluid, the way they are in us humans. Cherryh's azi are some of the most intriguing androids in science fiction to me precisely because they don't start with computers and add the humanity in, but instead start with humanity and overlay a computer-like certainty of purpose that's fully self-aware. The result is more subtle and interesting than anything Star Trek managed.

I was not quite as enamored with this book this time around, but it's still excellent once the story gets properly started. I still would recommend it, but I might add more warnings about the disturbing parts.

Re-read rating: 9 out of 10
