Search Results: "ah"

9 September 2025

Dirk Eddelbuettel: RcppSMC 0.2.9 on CRAN: Maintenance

Release 0.2.9 of our RcppSMC package arrived at CRAN today. RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen, described in his JSS article. Sequential Monte Carlo is also referred to as particle filtering in some contexts. The package now also features the Google Summer of Code work by Leah South in 2017 and by Ilya Zarubin in 2021. This release is again entirely internal: it updates the code for the just-released RcppArmadillo 15.0.2-1, in particular opting into Armadillo 15.0.2, and makes one small tweak to the continuous integration setup by switching to the r-ci action. The release is summarized below.

Changes in RcppSMC version 0.2.9 (2025-09-09)
  • Adjust to RcppArmadillo 15.0.* by setting ARMA_USE_CURRENT and updating two expressions from deprecated code
  • Rely on r-ci GitHub Action which includes the bootstrap step

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More information is on the RcppSMC page and the repo. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

John Goerzen: btrfs on a Raspberry Pi

I'm something of a filesystem geek, I guess. I first wrote about ZFS on Linux 14 years ago, and even before I used ZFS, I had used ext2/3/4, jfs, reiserfs, xfs, and no doubt some others. I've also used btrfs. I last posted about it in 2014, when I noted it had some advantages over ZFS, but also some drawbacks, including a lot of kernel panics. Since that comparison, ZFS has gained trim support and btrfs has stabilized. The btrfs status page gives you an accurate idea of what is good to use on btrfs.

Background: Moving towards ZFS and btrfs

I have been trying to move everything away from ext4 and onto either ZFS or btrfs. There are generally several reasons for that:
  1. The checksums for every block help detect potential silent data corruption
  2. Instant snapshots make consistent backups of live systems a lot easier, and without the hassle and wasted space of LVM snapshots
  3. Transparent compression and dedup can save a lot of space in storage-constrained environments
For any machine with at least 32GB of RAM (plus my backup server, which has only 8GB), I run ZFS. While it lacks some of the flexibility of btrfs, it has polish. zfs list -o space shows useful space accounting, zvols can sit behind VMs, and with my project simplesnap I can easily send hourly backups with ZFS, which I choose to send over NNCP in most cases. I have a few VMs in the cloud (running Debian, of course) that I use to host things like this blog, my website, my gopher site, the quux NNCP public relay, and various other things. In these environments, storage space can be expensive; for that matter, so can RAM. ZFS is RAM-hungry, so that rules it out. I've been running btrfs in those environments for a few years now, and it's worked out well. I do async dedup, lzo or zstd compression depending on the needs, and the occasional balance and defrag.

Filesystems on the Raspberry Pi

I run Debian trixie on all my Raspberry Pis; not Raspbian or Raspberry Pi OS, for a number of reasons. My 8-year-old uses a Raspberry Pi 400 as her primary computer and loves it! She doesn't do web browsing, but plays Tuxpaint, some old DOS games like Math Blaster via dosbox, and uses Thunderbird for a locked-down email account. But it was SLOW. Just really, glacially, slow, especially for Thunderbird. My first step to address that was to get a faster MicroSD card to hold the OS. That was a dramatic improvement: still slow, but a lot faster. Then I thought: maybe I could use btrfs with LZO compression to reduce the amount of I/O and speed things up further? Analysis showed things were mostly slow due to I/O constraints, not CPU.

The conversion

Rather than use the btrfs in-place conversion from ext4, I opted to dar it up (like tar), run mkfs.btrfs on the SD card, then unpack the archive back onto it. Easy enough, right? Well, not so fast. The MicroSD card is 128GB, and the entire filesystem is 6.2GB. But after unpacking 100MB onto it, I got an out-of-space error. btrfs has this notion of block groups; by default, each block group is dedicated to either data or metadata. btrfs fi df and btrfs fi usage will show you details about the block groups. btrfs allocates block groups greedily (the ssd_spread mount option I use may have exacerbated this). What happened was that it allocated almost the entire drive to data block groups, trying to spread the data across it. It so happened that dar archived some larger files first (maybe /boot), so btrfs was allocating data and metadata block groups assuming few large files. But then it started unpacking one of the directories in /usr with lots of small files (maybe /usr/share/locale). It quickly filled up the metadata block group, and since the entire SD card had been allocated to different block groups, I got ENOSPC. Deleting a few files and running btrfs balance resolved it; now it allocated 1GB to metadata, which was plenty. I re-ran the dar extract and everything was fine. See more details on btrfs balance and block groups. This was the only btrfs problem I encountered.
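To make the recovery concrete, here is a minimal sketch of the kind of commands involved; the mount point and the usage threshold are illustrative rather than taken from the actual session:

# inspect how btrfs has split the device into data and metadata block groups
btrfs filesystem df /
btrfs filesystem usage /

# compact mostly-empty data block groups so their space can be reallocated to
# metadata; the usage filter keeps the balance quick on a nearly full device
btrfs balance start -dusage=10 /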
Benchmarks

I timed two things prior to switching to btrfs: how long it takes to boot (measured from the moment I turn on the power until the moment the XFCE login box is displayed), and how long it takes to start Thunderbird. After switching to btrfs with LZO compression, somewhat to my surprise, both measures were exactly the same! Why might this be? It turns out that SD cards are understood to be pathologically bad at random reads. Boot and Thunderbird are both likely doing a lot of small random reads, not large streaming reads. Therefore, it may be that even though I have reduced the total I/O needed, the impact is insubstantial because the real bottleneck is the seeks across the disk. Still, I gain the better backup support and silent data corruption detection, so I kept btrfs.

SSD mount options and MicroSD endurance

btrfs has several mount options specifically relevant to SSDs. Aside from the obvious trim support, they are ssd and ssd_spread. The documentation on this is vague, and my attempts to learn more turned up a lot of information that was outdated or unsubstantiated folklore. Some reports suggest that older SSDs will benefit from ssd_spread, but that it may have no effect, or even a harmful effect, on newer ones, and can at times cause fragmentation or write amplification. I could find nothing to back this up, though. And it seems particularly difficult to figure out what kind of wear leveling SSD firmware does. MicroSD firmware is likely to be on the less-advanced side, but still, I have no idea what it might do. In any case, with btrfs not updating blocks in place, it should be better than ext4 in the most naive case (no wear leveling at all), but may have somewhat more write traffic in the pathological worst case (frequent updates of small portions of large files). One anecdotal report I read, and can't find anymore, was from a person who had set up a sort of torture test for SD cards, with reports that ext4 lasted a few weeks or months before the MicroSDs failed, while btrfs lasted years. If you are looking for a MicroSD card, by the way, The Great MicroSD Card Survey is a nice place to start. For longevity: I mount all my filesystems with noatime already, so I continue to recommend that. You can also consider limiting the log size in /etc/systemd/journald.conf and running a daily fstrim (which may be more successful than live trims in all filesystems).

Conclusion

I've been pretty pleased with btrfs. The concerns I have today relate to block groups and maintenance (a periodic balance and maybe a periodic defrag). I'm not sure I'd be ready to say "put btrfs on the computer you send to someone that isn't Linux-savvy", because the chances of running into issues are higher than with ext4. Still, for people that have some tech savvy, btrfs can improve reliability and performance in other ways.
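For reference, a minimal sketch of those longevity tweaks; the fstab line, UUID and journal cap are placeholders rather than an exact configuration:

# /etc/fstab: noatime plus LZO compression (the UUID is a placeholder)
# UUID=xxxx-xxxx  /  btrfs  noatime,compress=lzo,ssd  0  0

# /etc/systemd/journald.conf: cap the journal so it does not wear the card
# SystemMaxUse=64M

# prefer a periodic trim over continuous discard
systemctl enable --now fstrim.timer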

8 September 2025

Thorsten Alteholz: My Debian Activities in August 2025

Debian LTS

This was my hundred-thirty-fourth month doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on: I also continued my work on suricata and could backport all patches. Now I have to do some tests with the package. I also started to work on an openafs regression and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eighty-fifth ELTS month. During my allocated time I uploaded or worked on: I could also mark the CVEs of libcoap as not-affected. I also attended the monthly LTS/ELTS meeting. Of course, as for LTS, suricata has now been requested for Stretch as well, so I did not finish my work here either.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of: This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

This month I uploaded a new upstream version or a bugfix version of: In my fight against outdated RFPs, I closed 31 of them in August.

FTP master

Yeah, Trixie has been released, so the tired bones need to be awakened again :-). This month I accepted 203 and rejected 18 packages. The overall number of packages that got accepted was 243.

6 September 2025

Reproducible Builds: Reproducible Builds in August 2025

Welcome to the August 2025 report from the Reproducible Builds project! These monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website. In this report:

  1. Reproducible Builds Summit 2025
  2. Reproducible Builds and live-bootstrap at WHY2025
  3. DALEQ Explainable Equivalence for Java Bytecode
  4. Reproducibility regression identifies issue with AppArmor security policies
  5. Rust toolchain fixes
  6. Distribution work
  7. diffoscope
  8. Website updates
  9. Reproducibility testing framework
  10. Upstream patches

Reproducible Builds Summit 2025

Please join us at the upcoming Reproducible Builds Summit, set to take place from October 28th to 30th 2025 in Vienna, Austria! We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving. If you're interested in joining us this year, please make sure to read the event page, which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!

Reproducible Builds and live-bootstrap at WHY2025

WHY2025 (What Hackers Yearn) is a nonprofit outdoor hacker camp that takes place in Geestmerambacht in the Netherlands (approximately 40km north of Amsterdam). The event is organised for and by volunteers from the worldwide hacker community, and "knowledge sharing, technological advancement, experimentation, connecting with your hacker peers, forging friendships and hacking are at the core of this event". At this year's event, Frans Faase gave a talk on live-bootstrap, an attempt to provide "a reproducible, automatic, complete end-to-end bootstrap from a minimal number of binary seeds to a supported fully functioning operating system". Frans' talk is available to watch on video, and his slides are available as well.

DALEQ Explainable Equivalence for Java Bytecode

Jens Dietrich of the Victoria University of Wellington, New Zealand, and Behnaz Hassanshahi of Oracle Labs, Australia, published an article this month entitled DALEQ Explainable Equivalence for Java Bytecode, which explores the options and difficulties when Java binaries are not identical despite being built from the same sources, and what avenues are available for proving equivalence despite the lack of bitwise correlation:
[Java] binaries are often not bitwise identical; however, in most cases, the differences can be attributed to variations in the build environment, and the binaries can still be considered equivalent. Establishing such equivalence, however, is a labor-intensive and error-prone process.
Jens and Behnaz therefore propose a tool called DALEQ, which:
disassembles Java byte code into a relational database, and can normalise this database by applying Datalog rules. Those databases can then be used to infer equivalence between two classes. Notably, equivalence statements are accompanied with Datalog proofs recording the normalisation process. We demonstrate the impact of DALEQ in an industrial context through a large-scale evaluation involving 2,714 pairs of jars, comprising 265,690 class pairs. In this evaluation, DALEQ is compared to two existing bytecode transformation tools. Our findings reveal a significant reduction in the manual effort required to assess non-bitwise equivalent artifacts, which would otherwise demand intensive human inspection. Furthermore, the results show that DALEQ outperforms existing tools by identifying more artifacts rebuilt from the same code as equivalent, even when no behavioral differences are present.
Jens also posted this news to our mailing list.

Reproducibility regression identifies issue with AppArmor security policies

Tails developer intrigeri has tracked and followed a reproducibility regression in the generation of AppArmor policy caches, and has identified an issue with the 4.1.0 version of AppArmor. Although initially tracked on the Tails issue tracker, intrigeri filed an issue on the upstream bug tracker. AppArmor developer John Johansen replied, confirming that they can reproduce the issue, and went to work on a draft patch. Through this, John revealed that it was caused by an actual underlying security bug in AppArmor (that is to say, it resulted in permissions not always matching what the policy intends) and, crucially, not merely a cache reproducibility issue. Work on the fix is ongoing at the time of writing.

Rust toolchain fixes

Rust Clippy is a linting tool for the Rust programming language. It provides a collection of lints (rules) designed to identify common mistakes, stylistic issues, potential performance problems and unidiomatic code patterns in Rust projects. This month, however, Sosthène Guédon filed a new issue on GitHub requesting a new check that would lint against non-deterministic operations in proc-macros, such as iterating over a HashMap.

Distribution work

In Debian this month: Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 303, 304 and 305 to Debian:
  • Improvements:
    • Use sed(1) backreferences when generating debian/tests/control to avoid duplicating ourselves. [ ]
    • Move from a mono-utils dependency to versioned mono-devel mono-utils dependency, taking care to maintain the [!riscv64] architecture restriction. [ ]
    • Use sed over awk to avoid mangling dependency lines containing = (equals) symbols such as version restrictions. [ ]
  • Bug fixes:
    • Fix a test after the upload of systemd-ukify version 258~rc3. [ ]
    • Ensure that Java class files are named .class on the filesystem before passing them to javap(1). [ ]
    • Do not run jsondiff on files over 100KiB as the algorithm runs in O(n^2) time. [ ]
    • Don't check for PyPDF version 3 specifically; check for >= 3. [ ]
  • Misc:
    • Update copyright years. [ ][ ]
In addition, Martin Joerg fixed an issue with the HTML presenter to avoid a crash when the page limit is None [ ], and Zbigniew Jędrzejewski-Szmek fixed compatibility with RPM 6 [ ]. Lastly, John Sirois fixed a missing requests dependency in the trydiffoscope tool. [ ]

Website updates

Once again, there were a number of improvements made to our website this month, including:

Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In August, however, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Run 4 workers on the o4 node again in order to speed up testing. [ ][ ][ ][ ]
    • Also test trixie-proposed-updates and trixie-updates etc. [ ][ ]
    • Gather separate statistics for each tested release. [ ]
    • Support sources from all Debian suites. [ ]
    • Run new code from the prototype database rework branch for the amd64-pull184 pseudo-architecture. [ ][ ]
    • Add a number of helpful links. [ ][ ][ ][ ][ ][ ][ ][ ][ ]
    • Temporarily call debrebuild without the --cache argument to experiment with a new version of devscripts. [ ][ ][ ]
    • Update public TODO. [ ]
  • Installation tests:
    • Add comments to explain structure. [ ]
    • Mark more old jobs as "old" or "dead". [ ][ ][ ]
    • Turn the maintenance job into a no-op. [ ]
  • Jenkins node maintenance:
    • Increase penalties if the osuosl5 or ionos7 nodes are down. [ ]
    • Stop trying to fix network automatically. [ ]
    • Correctly mark ppc64el architecture nodes when down. [ ]
    • Upgrade the remaining arm64 nodes to Debian trixie in anticipation of the release. [ ][ ]
    • Allow higher SSD temperatures on the riscv64 architecture. [ ]
  • Debian-related:
    • Drop the armhf architecture; many thanks to Vagrant for physically hosting the nodes for ten years. [ ][ ]
    • Add Debian forky, and archive bullseye. [ ][ ][ ][ ][ ][ ][ ]
    • Document the filesystem space savings from dropping the armhf architecture. [ ]
    • Exclude i386 and armhf from JSON results. [ ]
    • Update TODOs for when Debian trixie and forky have been released. [ ][ ]
  • tests.reproducible-builds.org-related:
  • Misc:
    • Detect errors with openQA erroring out. [ ]
    • Drop the long-disabled openwrt_rebuilder jobs. [ ]
    • Use qa-jenkins-dev@alioth-lists.debian.net as the contact for jenkins.debian.net. [ ]
    • Redirect reproducible-builds.org/vienna25 to reproducible-builds.org/vienna2025. [ ]
    • Disable all OpenWrt reproducible CI jobs, in coordination with the OpenWrt community. [ ][ ]
    • Make reproduce.debian.net accessible via IPv6. [ ]
    • Ignore that the megacli RAID controller requires packages from Debian bookworm. [ ]
In addition,
  • James Addison migrated away from the deprecated toplevel deb822 Python module in favour of debian.deb822 in the bin/reproducible_scheduler.py script [ ] and removed a note on reproduce.debian.net after the release of Debian trixie [ ].
  • Jochen Sprickerhof made a huge number of improvements to the reproduce.debian.net statistics calculation [ ][ ][ ][ ][ ][ ] as well as to the reproduce.debian.net service more generally [ ][ ][ ][ ][ ][ ][ ][ ].
  • Mattia Rizzolo performed a lot of work migrating scripts to SQLAlchemy version 2.0 [ ][ ][ ][ ][ ][ ] in addition to making some changes to the way openSUSE reproducibility tests are handled internally. [ ]
  • Lastly, Roland Clobus updated the Debian Live packages after the release of Debian trixie. [ ][ ]

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

4 September 2025

Noah Meyerhans: False Positives

There are times when an email-based workflow gets really difficult. One of those times is when discussing projects related to spam and malware detection.
 noahm@debian.org
host stravinsky.debian.org [2001:41b8:202:deb::311:108]
SMTP error from remote mail server after end of data:
550-malware detected: Sanesecurity.Phishing.Fake.30934.1.UNOFFICIAL:
550 message rejected
submit@bugs.debian.org
host stravinsky.debian.org [2001:41b8:202:deb::311:108]
SMTP error from remote mail server after end of data:
550-malware detected: Sanesecurity.Phishing.Fake.30934.1.UNOFFICIAL:
550 message rejected
This was, in fact, a false positive. And now, because reportbug doesn't record outgoing messages locally, I need to retype the whole thing. (NB: this is not a complaint about the policies deployed on the Debian mail servers; they'd be negligent if they didn't implement such policies on today's internet.)

2 September 2025

Jonathan Dowland: Luminal and Lateral

For my birthday I was gifted copies of Eno's last two albums, Luminal and Lateral, both of which are collaborations with Beatie Wolfe.
Luminal and Lateral records in the sunshine
Let's start with the art. I love this semi-minimalist, bold style, and how the LP itself (in their coloured, bio-vinyl variants) feels like it's part of the artwork. I like the way the artist credits mirror each other: Wolfe, Eno for Luminal; Eno, Wolfe for Lateral. My first "bio vinyl" LP was the Cure's last one, last year. Ahead of it arriving I planned to blog about it, but when it arrived it turned out I had nothing interesting to say. In terms of how it feels, or sounds, it's basically the same as the traditional vinyl formulation. The attraction of bio-vinyl to well-known environmentalists like Eno (and, I guess, the Cure) is the reduced environmental impact due to swapping out the petroleum and other ingredients for recycled used cooking oil. You can read more about bio-vinyl if you wish. I try not to be too cynical about things like this; my immediate response is to assume some kind of green-washing PR campaign (I'm currently reading Consumed by Saabira Chaudhuri, an excellent book that is, sadly, only fuelling my cynicism), but I know Eno in particular takes this stuff seriously and has likely done more than a surface-level evaluation. So perhaps every little helps. On to the music. The first few cuts I heard from the albums earlier in the year didn't inspire me much. Possibly I heard something from Luminal, the vocal album; and I'm generally more drawn to Eno's ambient work. (Lateral is ambient instrumental.) I was not otherwise familiar with Beatie Wolfe. On returning to the albums months later, I found them more compelling. Luminal reminds me a little of Apollo: Atmospheres and Soundtracks. Lateral worked well as space music for PhD-correction sessions. The pair recently announced a third album, Liminal, to arrive in October, totally throwing off the symmetry of the first two. Two of its tracks are available to stream now in the usual places.

29 August 2025

Ravi Dwivedi: Installing Debian With Btrfs and Encryption

Motivation

On the 8th of August 2025 (a day before the Debian Trixie release), I was upgrading my personal laptop from Debian Bookworm to Trixie. It was a major update. However, the update didn't go smoothly, and I ran into some errors. From the Debian support IRC channel, I got to know that it would be best if I removed the texlive packages. However, it was not so easy to just remove texlive with a simple apt remove command; I had to remove the texlive packages from /usr/bin. Then I ran into other errors. Hours after I started the upgrade, I realized I preferred having my system as it was before, as I had to travel to Noida the next day. Needless to say, I wanted to go to sleep rather than fix my broken system. If only I had a way to go back to my system as it was before I started upgrading, it would have saved me a lot of trouble. I ended up installing Trixie from scratch. It turns out that there was a way to recover the state before the upgrade: using Timeshift to roll back the system to a state in the past (in our example, the state before the upgrade process started). However, it needs the Btrfs filesystem with appropriate subvolumes, which the Debian installer does not provide in its guided partitioning menu. I set this up a few weeks after the above-mentioned incident. Let me demonstrate how it works.
Check the screenshot above. It shows a list of snapshots made by Timeshift. Some of them were made by me manually; others were made by Timeshift automatically as per the schedule - I have set up hourly backups, weekly backups, etc. In the above-mentioned major update, I could have just taken a snapshot using Timeshift before performing the upgrade and rolled back to it when I found that I could not spend more time on fixing my installation errors. Then I could have just performed the upgrade later.

Installation

In this tutorial, I will cover how I installed Debian with Btrfs and disk encryption, along with creating the subvolumes @ for root and @home for /home, so that I can use Timeshift to create snapshots. These snapshots are kept on the same disk where Debian is installed, and the use case is to roll back to a working system in case I mess something up, or to recover an accidentally deleted file. I went through countless tutorials on the Internet, but I didn't find a single tutorial covering both disk encryption and the above-mentioned subvolumes (on Debian). Debian doesn't create the desired subvolumes by default, therefore the process requires some manual steps, which beginners may not be comfortable performing. Beginners can try distros such as Fedora and Linux Mint, as their installers include Btrfs with the required subvolumes. Furthermore, it is pertinent to note that I used Debian Trixie's DVD ISO on a real laptop (not a virtual machine) for my installation. Debian Trixie is the codename for the current stable version of Debian. I then took the screenshots in a virtual machine by repeating the process; moreover, a couple of screenshots are from the installation I did on the real laptop. Let's start the tutorial by booting up the Debian installer.
The above screenshot shows the first screen we see on the installer. Since we want to choose Expert Install, we select Advanced Options in the screenshot above.
Let's select the Expert Install option in the above screenshot. This is because we want to create subvolumes after the installer is done with the partitioning, and only then proceed to installing the base system. Non-expert install modes proceed directly to installing the system right after creating partitions, without pausing for us to create the subvolumes.
After selecting the Expert Install option, you will get the screen above. I will skip to partitioning from here and leave the intermediate steps such as choosing language, region, connecting to Wi-Fi, etc. For your reference, I did create the root user.
Let's jump right to the partitioning step. Select the Partition disks option from the menu as shown above.
Choose Manual.
Select your disk where you would like to install Debian.
Select Yes when asked about creating a new empty partition table.
I chose the msdos option as I am not using UEFI. If you are using UEFI, then you need to choose the gpt option. Also, your steps will (slightly) differ from mine if you are using UEFI. In that case, you can watch this video by the YouTube channel EF Linux, in which he creates an EFI partition. As he doesn't cover disk encryption, you can continue reading this post after following the steps corresponding to EFI.
Select the free space option as shown above.
Choose Create a new partition.
I chose the partition size to be 1 GB.
Choose Primary.
Choose Beginning.
Now, I got to this screen.
I changed mount point to /boot and turned on the bootable flag and then selected Done setting up the partition.
Now select free space.
Choose the Create a new partition option.
I made the partition size equal to the remaining space on my disk. I do not intend to create a swap partition, so I do not need more space.
Select Primary.
Select the Use as option to change its value.
Select physical volume for encryption.
Select Done setting up the partition.
Now select Configure encrypted volumes.
Select Yes.
Select Finish.
Selecting Yes here will take a lot of time, as it erases the existing data. Therefore, if you have hours to spare for this step (for example, if your SSD is around 1 TB), then I would recommend selecting Yes. Otherwise, you could select No and compromise on the quality of the encryption. After this, you will be asked to enter a passphrase for disk encryption and to confirm it; please do so. I forgot to take a screenshot of that step.
Now select that encrypted volume as shown in the screenshot above.
Here we will change a couple of options which will be shown in the next screenshot.
In the Use as menu, select btrfs journaling file system.
Now, click on the mount point option.
Change it to / - the root file system.
Select Done setting up the partition.
This is a preview of the paritioning after performing the above-mentioned steps.
If everything is okay, proceed with the Finish partitioning and write changes to disk option.
The installer is reminding us to create a swap partition. I proceeded without it as I planned to add swap after the installation.
If everything looks fine, choose yes for writing the changes to disks.
Now we are done with partitioning and we are shown the screen in the screenshot above. If we had not selected the Expert Install option, the installer would have proceeded to install the base system without asking us. However, we want to create subvolumes before proceeding to install the base system. This is the reason we chose Expert Install. Now press Ctrl + F2.
You will see the screen as in the above screenshot. It says "Please press Enter to activate this console". So, let's press Enter.
After pressing Enter, we see the above screen.
The screenshot above shows the steps I performed in the console. I followed the already-mentioned video by EF Linux for this part and adapted it to my situation (he doesn't encrypt the disk in his tutorial). First we run df -h to have a look at how our disk is partitioned. In my case, the output was:
# df -h
Filesystem              Size  Used  Avail   Use% Mounted on
tmpfs                   1.6G  344.0K  1.6G    0% /run
devtmpfs                7.7G       0  7.7G   0% /dev
/dev/sdb1               3.7G    3.7G    0   100% /cdrom
/dev/mapper/sda2_crypt  952.9G  5.8G  950.9G  0% /target
/dev/sda1               919.7M  260.0K  855.8M  0% /target/boot
df -h shows us that /dev/mapper/sda2_crypt and /dev/sda1 are mounted on /target and /target/boot respectively. Let's unmount them, starting with the nested /target/boot. For that, we run:
# umount /target/boot
# umount /target
Next, let's mount our root filesystem to /mnt.
# mount /dev/mapper/sda2_crypt /mnt
Let's go into the /mnt directory.
# cd /mnt
Upon listing the contents of this directory, we get:
/mnt # ls
@rootfs
The Debian installer has created a subvolume @rootfs automatically. However, we need the subvolumes to be @ and @home. Therefore, let's rename the @rootfs subvolume to @.
/mnt # mv @rootfs @
Listing the contents of the directory again, we get:
/mnt # ls
@
We have only one subvolume right now. Therefore, let us go ahead and create another subvolume, @home.
/mnt # btrfs subvolume create @home
Create subvolume './@home'
If we perform ls now, we will see there are two subvolumes:
/mnt # ls
@ @home
Let us mount /dev/mapper/sda2_crypt to /target
/mnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@ /dev/mapper/sda2_crypt /target/
Now we need to create a directory for /home.
/mnt # mkdir /target/home/
Now we mount the /home directory with subvol=@home option.
/mnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@home /dev/mapper/sda2_crypt /target/home/
Now mount /dev/sda1 to /target/boot.
/mnt # mount /dev/sda1 /target/boot/
Now we need to add these options to the fstab file, which is located at /target/etc/fstab. Unfortunately, vim is not installed in this console; the only editor available is nano.
nano /target/etc/fstab
Edit your fstab file to look similar to the one in the screenshot above. I am pasting the fstab file contents below for easy reference.
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# systemd generates mount units based on this file, see systemd.mount(5).
# Please run 'systemctl daemon-reload' after making changes here.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/sda2_crypt /        btrfs   noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@ 0       0
/dev/mapper/sda2_crypt /home    btrfs   noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@home 0       0
# /boot was on /dev/sda1 during installation
UUID=12842b16-d3b3-44b4-878a-beb1e6362fbc /boot           ext4    defaults        0       2
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
Please double check the fstab file before saving it. In Nano, you can press Ctrl+O followed by pressing Enter to save the file. Then press Ctrl+X to quit Nano. Now, preview the fstab file by running
cat /target/etc/fstab
and verify that the entries are correct; otherwise you will be booted into an unusable, broken system after the installation is complete. Next, press Ctrl + Alt + F1 to go back to the installer.
Proceed to Install the base system.
Screenshot of Debian installer installing the base system.
I chose the default option here - linux-image-amd64. After this, the installer will ask you a few more questions. For desktop environment, I chose KDE Plasma. You can choose the desktop environment as per your liking. I will not cover the rest of the installation process and assume that you were able to install from here.

Post installation

Let's jump to our freshly installed Debian system. Since I created a root user, I added the user ravi to the sudoers file (/etc/sudoers) so that ravi can run commands with sudo. Follow this if you would like to do the same. Now we set up zram as swap. First, install zram-tools.
sudo apt install zram-tools
Now edit the file /etc/default/zramswap and make sure the following lines are uncommented:
ALGO=lz4
PERCENT=50
Now, run
sudo systemctl restart zramswap
If you run lsblk now, you should see the below-mentioned entry in the output:
zram0          253:0    0   7.8G  0 disk  [SWAP]
This shows us that zram has been activated as swap. Now we install timeshift, which can be done by running
sudo apt install timeshift
After the installation is complete, run Timeshift and schedule snapshots as you please. We are done now. I hope the tutorial was helpful. See you in the next post, and let me know if you have any suggestions or questions about this tutorial.
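As a quick reference for later: once Timeshift is configured to use Btrfs snapshots of @ and @home, taking and rolling back a manual snapshot before a risky change looks roughly like this (the comment text and the tag are only illustrations):

sudo timeshift --create --comments "before major upgrade" --tags O
sudo timeshift --list
sudo timeshift --restore    # interactive: pick the snapshot to roll back to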

Noah Meyerhans: Determining Network Online Status of Dualstack Cloud VMs

When a Debian cloud VM boots, it typically runs cloud-init at various points in the boot process. Each invocation can perform certain operations based on the host's static configuration passed by the user, typically either through a well-known link-local network service or an attached iso9660 drive image. Some of the cloud-init steps execute before the network comes up, and others at a couple of different points after the network is up. I recently encountered an unexpected issue when configuring a dualstack (uses both IPv6 and legacy IPv4 networking) VM to use a custom apt server accessible only via IPv6. VM provisioning failed because it was unable to access the server in question, yet when I logged in to investigate, it was able to access the server without any problem. The boot had apparently gone smoothly right up until cloud-init's Package Update Upgrade Install module called apt-get update, which failed and broke subsequent provisioning steps. The errors reported by apt-get indicated that there was no route to the service in question, which more accurately probably meant that there was not yet a route to the service. But there was shortly after, when I investigated. This was surprising because the apt-get invocations occur in a cloud-init sequence that's explicitly ordered after the network is configured according to systemd-networkd-wait-online. Investigation eventually led to similar issues encountered in other environments reported in Debian bug #1111791, "systemd: network-online.target reached before IPv6 address is ready". The issue described in that bug is identical to mine, but the bug is tagged wontfix: the behavior is considered correct.

Why the default behavior is the correct one

While it's a bit counterintuitive, the systemd-networkd behavior is correct, and it's also not something we'd want to override in the cloud images. Without explicit configuration, systemd can't accurately infer the intended network configuration of a given system. If a system is IPv6-only, systemd-networkd-wait-online will introduce unexpected delays in the boot process if it waits for IPv4, and vice-versa. If it assumes dualstack, things are even worse, because it would block for a long time (approximately two minutes) in any single-stack network before failing, leaving the host in a degraded state. So the most reasonable default behavior is to block until any protocol is configured. For these same reasons, we can't change the systemd-networkd-wait-online configuration in our cloud images. All of the cloud environments we support offer both single-stack and dual-stack networking, so we preserve systemd's default behavior. What's causing problems here is that IPv6 takes significantly longer to configure due to its more complex router solicitation + router advertisement + DHCPv6 setup process. So in this particular case, where I've got a dualstack VM that needs to access a v6-only apt server during the provisioning process, I need to find some mechanism to override systemd's default behavior and wait for IPv6 connectivity specifically.

What won't work

Cloud-init offers the ability to write out arbitrary files during provisioning, so writing a drop-in for systemd-networkd-wait-online.service is trivial. Unfortunately, this doesn't give us everything we actually need. We still need to invoke systemctl daemon-reload to get systemd to actually apply the changes after we've written them, and of course we need to do that before the service actually runs. Cloud-init provides a bootcmd module that lets us run shell commands "very early in the boot process", but it runs too early: it runs before we've written out our configuration files. Similarly, it provides a runcmd module, but scripts there run towards the end of the boot process, far too late to be useful. Instead of using the bootcmd facility to simply reload systemd's config, it seemed possible that we could both write the config and trigger the reload, similar to the following:
bootcmd:
  - mkdir -p /etc/systemd/system/systemd-networkd-wait-online.service.d
  - echo "[Service]" > /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
  - echo "ExecStart=" >> /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
  - echo "ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --operational-state=routable --any --ipv6" >> /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
  - systemctl daemon-reload
But even that runs too late, as we can see in the logs that systemd-networkd-wait-online.service has completed before bootcmd is executed:
root@sid-tmp2:~# journalctl --no-pager -l -u systemd-networkd-wait-online.service
Aug 29 17:02:12 sid-tmp2 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 29 17:02:13 sid-tmp2 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
root@sid-tmp2:~# grep -F 'config-bootcmd ran' /var/log/cloud-init.log
2025-08-29 17:02:14,766 - handlers.py[DEBUG]: finish: init-network/config-bootcmd: SUCCESS: config-bootcmd ran successfully and took 0.467 seconds
At this point, it's looking like there are few options left!

What eventually worked

I ended up identifying two solutions to the issue, both of which involve getting some other component of the provisioning process to run systemd-networkd-wait-online.

Solution 1

The first involves getting apt-get itself to wait for IPv6 configuration. The apt.conf configuration interface allows the definition of an APT::Update::Pre-Invoke hook that's executed just before apt's update operation. By writing the following to a file in /etc/apt/apt.conf.d/, we're able to ensure that we have IPv6 connectivity before apt-get tries accessing the network. This cloud-config snippet accomplishes that:
write_files:
  - path: /etc/apt/apt.conf.d/99-wait-for-ipv6
    content: |
      APT::Update::Pre-Invoke { "/usr/lib/systemd/systemd-networkd-wait-online --operational-state=routable --any --ipv6"; };
This is safe to leave in place after provisioning, because the delay will be negligible once IPv6 connectivity is established. It's only during address configuration that it'll block for a noticeable amount of time, but that's what we want. This solution isn't entirely correct, though, because it's only apt-get that's actually affected by it. Other services that start after the system is ostensibly online might only see IPv4 connectivity when they start. This seems acceptable at the moment, though.

Solution 2

The second solution is to simply invoke systemd-networkd-wait-online directly from a cloud-init bootcmd. Similar to the first solution, it's not exactly correct, because the host has already reached network-online.target, but it does block enough of cloud-init that package installation happens only after it completes. The cloud-config snippet for this is:
bootcmd:
- [/usr/lib/systemd/systemd-networkd-wait-online, --operational-state=routable, --any, --ipv6]
In either case, we still want to write out a snippet to configure systemd-networkd-wait-online to wait for IPv6 connectivity on future reboots. Even though cloud-init won't necessarily run in those cases, and many cloud VMs never reboot at all, it does complete the solution. Additionally, it solves the problem for any derivative images that may be created based on the running VM's state. (At least if we can be certain that instances of those derivative images will never run in an IPv4-only network!)
write_files:
  - path: /run/systemd/system/systemd-networkd-wait-online.service.d/99-ipv6-wait.conf
    content: |
      [Service]
      ExecStart=
      ExecStart=/lib/systemd/systemd-networkd-wait-online --any --operational-state=routable --ipv6

How to properly solve it

One possible improvement would be for cloud-init to support a configuration key allowing the admin to specify the required protocols. Based on the presence of this key, cloud-init could reconfigure systemd-networkd-wait-online.service accordingly. Alternatively, it could set the appropriate RequiredFamilyForOnline= value in the generated .network file. cloud-init supports multiple network configuration backends, so each of those would need to be updated. If using the systemd-networkd configuration renderer, this should be straightforward, but Debian uses the netplan renderer, so that tool might also need to be taught to pass such a configuration along to systemd-networkd.
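For illustration, here is a hedged sketch of what such a generated .network file could contain so that systemd-networkd-wait-online waits specifically for IPv6; the file name and match pattern are assumptions rather than anything a renderer currently emits:

# /etc/systemd/network/50-cloud.network (hypothetical path)
[Match]
Name=en*

[Network]
DHCP=yes

[Link]
RequiredFamilyForOnline=ipv6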

28 August 2025

Valhalla's Things: 1840s Underwear

Posted on August 28, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear
A woman wearing a knee-length shift with very short pleated sleeves and drawers that are a bit longer than needed to be ankle-length. The shift is too wide at the top, had to have a pleat taken in the center front, but the sleeves are still falling down. She is also wearing a black long sleeved t-shirt and leggings under said underwear, for decency.

A bit more than a year ago, I had been thinking about making myself a cartridge pleated skirt. For a number of reasons, one of which is the historybounding potential, I've been thinking pre-crinoline, so somewhere around the 1840s, and that's a completely new era for me, which means: new underwear. Also, the 1840s are pre-sewing machine, and I was already in a position where I had more chances to handsew than to machine sew, so I decided to embrace the slowness and sew 100% by hand, not even using the machine for straight seams.

A woman turning fast enough that her petticoat extends a considerable distance from the body. The petticoat is white with a pattern of cording from the hem to just below hip level, with a decreasing number of rows of cording going up.

If I remember correctly, I started with the corded petticoat, looking around the internet for instructions, and then designing my own based on the practicality of using modern wide fabric from my stash (and specifically some DITTE from costumers' favourite source of dirt-cheap cotton, IKEA). Around the same time I had also acquired a sashiko kit, and I used the Japanese technique for sewing running stitches, pushing the needle with a thimble that covers the base of the middle finger, and I can confirm that for this kind of thing it's great! I've since worn the petticoat a few times for casual / historyBounding / folkwearBounding reasons during the summer, and I can confirm it's comfortable to use; I guess that during the winter it could be nice to add a flannel layer below it.

The technical drawing and pattern for drawers from the book: each leg is cut out of a rectangle of fabric folded along the length, the leg is tapered equally, while the front is tapered more than the back, and comes to a point below the top of the original rectangle.

Then I proceeded with the base layers: I had been browsing through The Workwoman's Guide, and that provided plenty of examples; I selected the basic ankle-length drawers from page 53 and the alternative shift on page 47. As for fabric, I had (and still have) a significant lack of underwear linen in my stash, but I had plenty of cotton voile that I had not used in a while: not very historically accurate for plain underwear, but quite suitable for a wearable mockup. Working with an 1830s source had an interesting aspect: on top of the usual, mildly annoying, imperial units, it also made heavy use of a few obsolete units, especially nails, which qalc, my usual calculator and converter, doesn't support. Not a big deal, because GNU units came to the rescue: that one knows a lot of obscure and niche units, and it's quite easy to add those that are missing [1]. Working on this project also made me freshly aware of something I had already noticed: converting instructions for machine sewing garments into instructions for hand sewing them is usually straightforward, but the reverse is not always true. Starting from machine stitching, you can usually convert straight stitches into backstitches (or running backstitches), zigzag and overlocking into overcasting, and get good results.
In some cases you may want to use specialist hand stitches that don't really have a machine equivalent, such as buttonhole stitches instead of simply overcasting the buttonhole, but that's it. Starting from hand stitching, instead, there are a number of techniques that could be converted to machine stitching, but they involve a lot of visible topstitching that wasn't there in the original instructions, or at times are almost impossible to do by machine, if they involve whipstitching together finished panels on seams that are subject to strong tension. Anyway, halfway through working on the petticoat I cut both the petticoat and the drawers at the same time, for efficiency in fabric use, and then started sewing the drawers.

The top third or so of the drawers, showing a deep waistband that is closed with just one button at the top, and the front opening with finished edges that continue through the whole crotch, with just the overlap of fabric to provide coverage.

The book only provided measurements for one size (moderate), and my fabric was a bit too narrow to make them that size (not that I have any idea what hip circumference a person of moderate size was supposed to have), so the result is just wide enough to be comfortably worn, but I think that when I make another pair I'll try to make them a bit wider. On the other hand they are a bit too long, but I think I'll fix that by adding a tuck or two. Not a big deal, anyway.

The same woman as in the opening image, from the back: the shift droops significantly in the center back, and the shoulder straps have fallen down on the top of the arms.

The shift gave me a bit more issues: I used the recommended gusset size, and ended up with a shift that was way too wide at the top, so I had to take a box pleat in the center front and back, which changed the look and wear of the garment. I have adjusted the instructions to make the gussets wider, and in the future I'll make another shift following those. Even with the pleat, the narrow shoulder straps are set quite far to the sides, and they tend to droop, and I suspect that this is to be expected from the way this garment is made. The fact that there are buttonholes on the shoulder straps to attach to the corset straps and prevent the issue is probably a hint that this behaviour was to be expected.

The technical drawing of the shift from the book, showing the top of the body, two trapezoidal shoulder straps, the pleated sleeves and a ruffle on the front edge.

I've also updated the instructions so that the shoulder straps are a bit wider, to look more like the ones in the drawing from the book. Making a corset suitable for the time period is something that I will probably do, but not in the immediate future, and even just wearing the shift under a later midbust corset with no shoulder straps helps. I'm also not sure what the point of the bosom gores is, as they don't really give more room to the bust where it's needed, but to the high bust where it's counterproductive. I also couldn't find images of original examples made from this pattern to see if they were actually used, so in my next make I may just skip them.

Sleeve detail, showing box pleats that are about 2 cm wide and a few mm distance from each other all along the circumference, neatly sewn into the shoulder strap on one side and the band at the other side.
On the other hand, I'm really happy with how cute the short sleeves look, and if [2] I ever make the other cut of shift from the same book, with the front flaps, I'll definitely use these pleated sleeves rather than the straight ones that were also used at the time. As usual, all of the patterns have been published on my website under a Free license:

  1. My ~/.units file currently contains definitions for beardseconds, bananas and the more conventional Nm and NeL (linear mass density of fibres).
  2. yeah, right. when.
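As an illustration of footnote 1, a personal definitions file for GNU units is just names followed by their definitions; the nail entry below is only there to show the format (recent versions of units may already know it), and the definitions actually in the file above are not reproduced:

# ~/.units -- read by units(1) before its standard definitions file
nail        1|16 yard    # 2.25 in, the cloth-measuring unit used in the book

$ units '4 nails' cm
        * 22.86
        / 0.043744532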

22 August 2025

Matthias Geiger: Enforcing darkmode for QT programs under a non-QT based environment

I use sway as the window manager on my main machine. As I prefer dark mode, I looked for a way to enable dark mode everywhere. For GTK-based applications this is fairly straightforward: just install whatever theme you prefer, and apply it. However, QT-based applications on a non-QT based desktop will look

Russell Coker: Dell T320 H310 RAID and IT Mode

The Problem

Just over 2 years ago my Dell T320 server had a motherboard failure [1]. I recently bought another T320 that had been gutted (no drives, PSUs, or RAM) and put the bits from my one in it. I installed Debian and the resulting installation wouldn't boot; I tried installing with both UEFI and BIOS modes with the same result. Then I realised that the disks I had installed were available even though I hadn't gone through the RAID configuration (I usually make a separate RAID-0 for each disk to work best with BTRFS or ZFS). I tried changing the BIOS setting for SATA disks between RAID and AHCI modes, which didn't change things, and realised that the BIOS setting in question probably applies to the SATA connector on the motherboard and that the RAID card was in IT mode, which means that each disk is seen separately. If you are using ZFS or BTRFS you don't want to use a RAID-1, RAID-5, or RAID-6 on the hardware RAID controller; if there are different versions of the data on disks in the stripe then you want the filesystem to be able to work out which one is correct. To use IT mode you have to flash a different, unsupported firmware on the RAID controller, and then you either have to go to some extra effort to make it bootable or have a different device to boot from.

The Root Causes

Dell has no reason to support unusual firmware on their RAID controllers. Installing different firmware on a device that is designed for high availability is going to have some probability of data loss and, perhaps more importantly for Dell, some probability of customers returning hardware during the support period and acting innocent about why it doesn't work. Dell has a great financial incentive to make it difficult to install Dell firmware on LSI cards from other vendors which have equivalent hardware, as they don't want customers to get all the benefits of iDRAC integration etc without paying the Dell price premium. All the other vendors have similar financial incentives, so there is no official documentation or support on converting between different firmware images. Dell's support for upgrading the Dell version is pretty good, but it aborts if it sees something different.

The Attempts

I tried following the instructions in this document to flash back to Dell firmware [2]. This document is about the H310 RAID card in my Dell T320, AKA an "LSI SAS 9211-8i". The sas2flash.efi program didn't seem to do anything; it returned immediately and didn't give an error message. This page gives a start on how to get inside the Dell firmware package but doesn't work [3]. It didn't cover the case where sasdupie aborts with an error because it detects the current version as 00.00.00.00, not something that the upgrade program is prepared to upgrade from. But it's a place to start looking for someone who wants to try harder at this. This forum post has some interesting information; I gave up before trying it, but it may be useful for someone else [4].

The Solution

Dell tower servers have as a standard feature an internal USB port for a boot device. So I created a boot image on a spare USB stick and installed it there, and it then loads the kernel and mounts the filesystem from a SATA hard drive. Once I got that working everything was fine. The Debian/Trixie installer would probably have allowed me to install an EFI device on the internal USB stick as part of the install if I had known what was going to happen. The system is now fully working and ready to sell.
Now I just need to find someone who wants IT mode on the RAID controller and hopefully is willing to pay extra for it. Whatever I sell the system for, it seems unlikely to cover the hours I spent working on this. But I learned some interesting things about RAID firmware, and hopefully this blog post will be useful to other people, even if only to discourage them from trying to change firmware.

15 August 2025

Freexian Collaborators: Monthly report about Debian Long Term Support, July 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors

In July, 17 contributors have been paid to work on Debian LTS; their reports are available:
  • Adrian Bunk did 19.0h (out of 19.0h assigned).
  • Andrej Shadura did 5.0h (out of 0.0h assigned and 8.0h from previous period), thus carrying over 3.0h to the next month.
  • Bastien Roucariès did 18.5h (out of 18.75h assigned), thus carrying over 0.25h to the next month.
  • Ben Hutchings did 12.5h (out of 3.25h assigned and 15.5h from previous period), thus carrying over 6.25h to the next month.
  • Carlos Henrique Lima Melara did 10.0h (out of 10.0h assigned).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 18.75h (out of 17.25h assigned and 1.5h from previous period).
  • Emilio Pozuelo Monfort did 18.75h (out of 18.75h assigned).
  • Guilhem Moulin did 15.0h (out of 14.0h assigned and 1.0h from previous period).
  • Jochen Sprickerhof did 2.0h (out of 16.5h assigned and 2.25h from previous period), thus carrying over 16.75h to the next month.
  • Lee Garrett did 7.0h (out of 0.0h assigned and 23.25h from previous period), thus carrying over 16.25h to the next month.
  • Markus Koschany did 9.0h (out of 18.75h assigned), thus carrying over 9.75h to the next month.
  • Roberto C. Sánchez did 10.25h (out of 18.5h assigned and 2.75h from previous period), thus carrying over 11.0h to the next month.
  • Santiago Ruano Rincón did 7.25h (out of 12.75h assigned and 2.25h from previous period), thus carrying over 7.75h to the next month.
  • Sylvain Beucler did 18.75h (out of 18.75h assigned).
  • Thorsten Alteholz did 15.0h (out of 15.0h assigned).
  • Utkarsh Gupta did 15.0h (out of 1.0h assigned and 14.0h from previous period).

Evolution of the situation In July, we released 24 DLAs.
  • Notable security updates:
    • angular.js, prepared by Bastien Roucariès, fixes multiple vulnerabilities including input sanitization and potential regular expression denial of service (ReDoS)
    • tomcat9, prepared by Markus Koschany, fixes an assortment of vulnerabilities
    • mediawiki, prepared by Guilhem Moulin, fixes several information disclosure and privilege escalation vulnerabilities
    • php7.4, prepared by Guilhem Moulin, fixes several server side request forgery and denial of service vulnerabilities
This month's contributions from outside the regular team include an update to thunderbird, prepared by Christoph Goehre (the package maintainer). LTS Team members also contributed updates of the following packages:
  • commons-beanutils (to stable and unstable), prepared by Adrian Bunk
  • djvulibre (to oldstable, stable, and unstable), prepared by Adrian Bunk
  • git (to stable), prepared by Adrian Bunk
  • redis (to oldstable), prepared by Chris Lamb
  • libxml2 (to oldstable), prepared by Guilhem Moulin
  • commons-vfs (to oldstable), prepared by Daniel Leidert
Additionally, LTS Team member Santiago Ruano Rincón proposed and implemented an improvement to the debian-security-support package. This package is available so that interested users can quickly determine if any installed packages are subject to limited security support or are excluded entirely from security support. However, there was not previously a way to identify explicitly supported packages, which has become necessary to note exceptions to broad exclusion policies (e.g., those which apply to substantial package groups, like modules belonging to the Go and Rust language ecosystems). Santiago's work has enabled the notation of exceptions to these exclusions, thus ensuring that users of debian-security-support have accurate status information concerning installed packages.
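For readers who have not used it, the package ships a check-support-status tool that can be run directly; a minimal example (the behaviour described in the comment is illustrative, not a transcript):

$ sudo apt install debian-security-support
$ check-support-status
# reports any installed packages whose security support is limited,
# has ended early, or is explicitly excluded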

DebCamp 25 Security Tracker Sprint The previously announced security tracker sprint took place at DebCamp from 7-13 July. Participants included 8 members of the standing LTS Team, 2 active Debian Developers with an interest in LTS, 3 community members, and 1 member of the Debian Security Team (who provided guidance and reviews on proposed changes to the security tracker); participation was a mix of in-person at the venue in Brest, France, and remote. During the days of the sprint, the team tackled a wide range of bugs and improvements, mostly targeting the security tracker. The sprint participants worked on the following items: As can be seen from the above list, only a small number of changes were brought to completion during the sprint week itself. Given the very compressed timeframe involved, the broad scope of tasks which were under consideration, and the highly sensitive data managed by the security tracker, this is not entirely unexpected and in no way diminishes the great work done by the sprint participants. The LTS Team would especially like to thank Salvatore Bonaccorso of the Debian Security Team for making himself available throughout the sprint to answer questions, for providing guidance on the work, and for helping the work by reviewing and merging the MRs which were able to be merged during the sprint itself. In the weeks that follow the sprint, the team will continue working towards completing the in-progress items.

Thanks to our sponsors Sponsors that joined recently are in bold.

6 August 2025

Reproducible Builds: Reproducible Builds in July 2025

Welcome to the seventh report from the Reproducible Builds project in 2025. Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website. In this report:
  1. Reproducible Builds Summit 2025
  2. Reproducible Builds an official goal for SUSE Enterprise Linux
  3. Reproducible Builds at FOSSY 2025
  4. New OSS Rebuild project from Google
  5. New extension of Python setuptools to support reproducible builds
  6. diffoscope
  7. New library to patch system functions for reproducibility
  8. Independently Reproducible Git Bundles
  9. Website updates
  10. Distribution work
  11. Reproducibility testing framework
  12. Upstream patches

Reproducible Builds Summit 2025 We are extremely pleased to announce the upcoming Reproducible Builds Summit, set to take place from October 28th to 30th 2025 in Vienna, Austria! We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving. If you're interested in joining us this year, please make sure to read the event page which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!

Reproducible Builds an official goal for SUSE Enterprise Linux On our mailing list this month, Bernhard M. Wiedemann revealed the big news that reproducibility is now an official goal for SUSE Linux Enterprise Server (SLES) 16:
[Everything] changed earlier this year when reproducible-builds for SLES-16 became an official goal for the product. More people are talking about digital sovereignty and supply-chain security now. [ ] Today, only 9 of 3319 (source) packages have significant problems left (plus 7 with pending fixes), so 99.5% of packages have reproducible builds.

Reproducible Builds at FOSSY 2025 On Saturday 2nd August, Vagrant Cascadian and Chris Lamb presented at this year's FOSSY 2025. Their talk, titled Never Mind the Checkboxes, Here's Reproducible Builds!, was introduced as follows:
There are numerous policy compliance and regulatory processes being developed that target software development, but do they solve actual problems? Does it improve the quality of software? Do Software Bill of Materials (SBOMs) actually give you the information necessary to verify how a given software artifact was built? What is the goal of all these compliance checklists anyways, or more importantly, what should the goals be? If a software object is signed, who should be trusted to sign it, and can they be trusted forever?
Hosted by the Software Freedom Conservancy and taking place in Portland, Oregon, USA, FOSSY aims to be a community-focused event: Whether you are a long time contributing member of a free software project, a recent graduate of a coding bootcamp or university, or just have an interest in the possibilities that free and open source software bring, FOSSY will have something for you. More information on the event is available on the FOSSY 2025 website, including the full programme schedule. Vagrant and Chris also staffed a table, where they were available to answer any questions about Reproducible Builds and discuss collaborations with other projects.

New OSS Rebuild project from Google The Google Open Source Security Team (GOSST) published an article this month announcing OSS Rebuild, a new project to strengthen trust in open source package ecosystems by reproducing upstream artifacts. As the post itself documents, the new project comprises four facets:
  • Automation to derive declarative build definitions for existing PyPI (Python), npm (JS/TS), and Crates.io (Rust) packages.
  • SLSA Provenance for thousands of packages across our supported ecosystems, meeting SLSA Build Level 3 requirements with no publisher intervention.
  • Build observability and verification tools that security teams can integrate into their existing vulnerability management workflows.
  • Infrastructure definitions to allow organizations to easily run their own instances of OSS Rebuild to rebuild, generate, sign, and distribute provenance.
Unlike most projects that aim for bit-for-bit reproducibility, OSS Rebuild aims for a kind of semantic reproducibility:
Through automation and heuristics, we determine a prospective build definition for a target package and rebuild it. We semantically compare the result with the existing upstream artifact, normalizing each one to remove instabilities that cause bit-for-bit comparisons to fail (e.g. archive compression).
The extensive post includes examples about how to access OSS Rebuild attestations using the Go-based command-line interface.
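As a rough illustration of what semantic comparison means in practice (this is not the OSS Rebuild tooling itself, and the file names are placeholders), one can strip away the compression layer before comparing two artifacts:

# Compare a rebuilt source tarball against the upstream one while ignoring
# differences that exist only at the gzip layer (timestamps, compression level)
$ mkdir -p upstream rebuilt
$ tar -xzf upstream.tar.gz -C upstream
$ tar -xzf rebuilt.tar.gz -C rebuilt
$ diff -r upstream rebuilt && echo "contents are semantically identical"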

New extension of Python setuptools to support reproducible builds Wim Jeantine-Glenn has written a PEP 517 build backend in order to enable reproducible builds when building Python projects that use setuptools. Called setuptools-reproducible, the project's README file contains the following:
Setuptools can create reproducible wheel archives (.whl) by setting SOURCE_DATE_EPOCH at build time, but setting the env var is insufficient for creating reproducible sdists (.tar.gz). setuptools-reproducible [therefore] wraps the hooks build_sdist build_wheel with some modifications to make reproducible builds by default.
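For context, the SOURCE_DATE_EPOCH convention it builds on can be exercised by hand; a minimal sketch, assuming the project is a git checkout and the build package is installed:

# Clamp embedded timestamps to the last commit date so that repeated builds
# of the same commit produce identical archive metadata
$ export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
$ python -m build --wheel
$ sha256sum dist/*.whl   # should not change across rebuilds of the same commit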

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 301, 302 and 303 to Debian:
  • Improvements:
    • Use Difference.from_operation in an attempt to pipeline the output of the extract-vmlinux script, potentially avoiding holding it all in memory. [ ]
    • Memoize a number of calls to --version, saving a very large number of external subprocess calls.
  • Bug fixes:
    • Don t check for PyPDF version 3 specifically, check for versions greater than 3. [ ]
    • Ensure that Java class files are named .class on the filesystem before passing them to javap(1). [ ]
    • Mask stderr from extract-vmlinux script. [ ][ ]
    • Avoid spurious differences in h5dump output caused by exposure of absolute internal extraction paths. (#1108690)
  • Misc:
    • Use our_check_output in the ODT comparator. [ ]
    • Update copyright years. [ ]
In addition, Chris Lamb added a tmpfs to try.diffoscope.org so that diffoscope has a non-trivial temporary area to unpack archives, etc. [ ] Elsewhere in our tooling, reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, reprotest version 0.7.30 was uploaded to Debian unstable by Holger Levsen, chiefly including a change by Rebecca N. Palmer to not call sudo with the -h flag in order to fix Debian bug #1108550. [ ]
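For readers who have not used diffoscope, a typical invocation looks something like the following (the file names are placeholders; --html writes a browsable report):

$ diffoscope build1/foo_1.0_amd64.deb build2/foo_1.0_amd64.deb
$ diffoscope --html report.html build1/foo_1.0_amd64.deb build2/foo_1.0_amd64.deb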

New library to patch system functions for reproducibility Nicolas Graves has written and published libfate, a simple collection of tiny libraries to patch system functions deterministically using LD_PRELOAD. According to the project s README:
libfate provides deterministic replacements for common non-deterministic system functions that can break reproducible builds. Instead of relying on complex build systems or apps or extensive patching, libfate uses the LD_PRELOAD trick to intercept system calls and return fixed, predictable values.
Describing why he wrote it, Nicolas writes:
I originally used the OpenSUSE dettrace approach to make Emacs reproducible in Guix. But when Guix switched to GCC@14, dettrace stopped working as expected. dettrace is a complex piece of software, and my need was much lighter: I don't need to systematically patch all sources of nondeterminism, just the ones that make a process/binary unreproducible in a container/chroot.
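To illustrate the LD_PRELOAD technique that libfate relies on, here is a toy shim that is not part of libfate itself; the file names and the fixed timestamp are invented for the example:

$ cat > fixed_time.c <<'EOF'
#include <time.h>
/* Always report the same wall-clock time, regardless of the real clock. */
time_t time(time_t *t) {
    time_t fixed = 946684800; /* 2000-01-01 00:00:00 UTC */
    if (t) *t = fixed;
    return fixed;
}
EOF
$ cat > show_time.c <<'EOF'
#include <stdio.h>
#include <time.h>
int main(void) { printf("%ld\n", (long) time(NULL)); return 0; }
EOF
$ gcc -shared -fPIC -o libfixedtime.so fixed_time.c
$ gcc -o show_time show_time.c
$ ./show_time                                   # prints the real time
$ LD_PRELOAD=$PWD/libfixedtime.so ./show_time   # always prints 946684800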

Independently Reproducible Git Bundles Simon Josefsson has published another interesting article this month. Titled Independently Reproducible Git Bundles, the blog post describes why you might want a reproducible bundle, and the pitfalls that can arise when trying to create one:
One desirable property is that someone else should be able to reproduce the same git bundle, and not only that a single individual is able to reproduce things on one machine. It surprised me to see that when I ran the same set of commands on a different machine (started from a fresh git clone), I got a different checksum. The different checksums occurred even when nothing had been committed on the server side between the two runs.

Website updates Once again, there were a number of improvements made to our website this month including:

Distribution work In Debian this month:
Debian contributors have made significant progress toward ensuring package builds produce byte-for-byte reproducible results. You can check the status for packages installed on your system using the new package debian-repro-status, or visit reproduce.debian.net for Debian s overall statistics for trixie and later. You can contribute to these efforts by joining #debian-reproducible on IRC to discuss fixes, or verify the statistics by installing the new rebuilderd package and setting up your own instance.
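A quick way to try this (assuming the package installs a command of the same name, and the output described in the comment is only illustrative):

$ sudo apt install debian-repro-status
$ debian-repro-status
# reports, for each installed package, whether it has been independently
# rebuilt bit-for-bit according to reproduce.debian.net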

The IzzyOnDroid Android APK repository made further progress in July, crossing the 50% reproducibility threshold; congratulations! Furthermore, a new release of the Neo Store exposes the reproducible status directly next to the version of each app.
In GNU Guix, a series of patches intended to fix the reproducibility for the Mono programming language was merged, fixing reproducibility in Mono versions 1.9 [ ], 2.4 [ ] and 2.6 [ ].
Lastly, in addition to the news that SUSE Linux Enterprise now has an official goal of reproducibility (https://lists.reproducible-builds.org/pipermail/rb-general/2025-July/003846.html), Bernhard M. Wiedemann posted another monthly update for their work there.

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In July, however, a number of changes were made by Holger Levsen, including:
  • Switch the URL for the Tails package set. [ ]
  • Make the dsa-check-packages output more useful. [ ]
  • Set up the ppc64el architecture again, as it has returned this time with a 2.7 GiB database instead of 72 GiB. [ ]
In addition, Jochen Sprickerhof improved the reproducibility statistics generation:
  • Enable caching of statistics. [ ][ ][ ]
  • Add some common non-reproducible patterns. [ ]
  • Change output to directory. [ ]
  • Add a page sorted by diffoscope size. [ ][ ]
  • Switch to Python s argparse module and separate output(). [ ]
Holger also submitted a number of Debian bugs against rebuilderd and rebuilderd-worker:
  • Config files and scripts for a simple one machine setup. [ ][ ]
  • Create a rebuilderd user. [ ]
  • Create rebuilderd-worker user with sbuild. [ ]
Lastly, Mattia Rizzolo added a scheduled job to renew some SSL certificates [ ] and Vagrant Cascadian performed some node maintenance [ ][ ].

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including: There were a number of other patches from openSUSE developers:

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

David Bremner: Using git-annex for email and notmuch metadata

Introducing git-remote-notmuch Based on an idea and ruby implementation by Felipe Contreras, I have been developing a git remote helper for notmuch. I will soon post an updated version of the patchset to the notmuch mailing list (I wanted to refer to this post in my email). In this blog post I'll outline my experiments with using that tool, along with git-annex to store (and sync) a moderate sized email store along with its notmuch metadata.

WARNING The rest of this post describes some relatively complex operations using (at best) alpha level software (namely git-remote-notmuch). git-annex is good at not losing your files, but git-remote-notmuch can (and did several times during debugging) wipe out your notmuch database. If you have a backup (e.g. made with notmuch-dump), this is much less annoying, and in particular you can decide to walk away from this whole experiment and restore your database.

Why git-annex? I currently have about 31GiB of email, spread across more than 830,000 files. I want to maintain the ability to search and read my email offline, so I need to maintain a copy on several workstations and at least one server (which is backed up explicitly). I am somewhat committed to maintaining synchronization of tags to git since that is how the notmuch bug tracker works. Committing the email files to git seems a bit wasteful: by design notmuch does not modify email files, and even with compression, the extra copy adds a fair amount of overhead (in my case, 17G of git objects, about 57% overhead). It is also notoriously difficult to completely delete files from a git repository. git-annex offers potential mitigation for these two issues, at the cost of a somewhat more complex mental model. The main idea is that instead of committing every version of a file to the git repository, git-annex tracks the filename and metadata, with the file content being stored in a key-value store outside git. Conceptually this is similar to git-lfs. For our current purposes, the important point is that instead of a second (compressed) copy of the file, we store one copy, along with a symlink and a couple of directory entries.
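To make that model concrete, an annexed message ends up looking roughly like this on disk (the file name, key and sizes below are invented for illustration):

$ ls -l cur/1700000000.M123P456.host:2,S
lrwxrwxrwx 1 user user 198 Jan  1 12:00 cur/1700000000.M123P456.host:2,S -> ../../.git/annex/objects/Wx/Yz/SHA256E-s45678--0123abcd.../SHA256E-s45678--0123abcd...
# the symlink points at the single content-addressed copy under .git/annex/objects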

What to annex For sufficiently small files, the overhead of a symlink and a couple of directory entries is greater than the cost of a compressed second copy. Where that crossover happens depends on several variables, and will probably depend on the file content in a particular collection of email. I did a few trials of different settings for annex.largefiles to come to a threshold of largerthan=32k [1]. For the curious, my experimental results are below. One potentially surprising aspect is that annexing even a small fraction of the (largest) files yields a big drop in storage overhead.
Threshold   Fraction annexed   Overhead
0           100%               30%
8k          29%                13%
16k         12%                9.4%
32k         7%                 8.9%
48k         6%                 8.9%
100k        3%                 9.1%
256k        2%                 11%
(git)       0%                 57%
In the end I chose to err on the side of annexing more files (for the flexibility of deletion) rather than potentially faster operations with fewer annexed files at the same level of overhead. Summarizing the configuration settings for git-annex (some of these are actually defaults, but not in my environment).
$ git config annex.largefiles largerthan=32k
$ git config annex.dotfiles true
$ git config annex.synccontent true

Delivering mail To get new mail, I do something like
# compute a date based folder under $HOME/Maildir
$ dest=$(folder)
# deliver mail to ${dest} (somehow).
$ notmuch new
$ git -C $HOME/Maildir add ${folder}
$ git -C $HOME/Maildir diff-index --quiet HEAD ${folder} || git -C $HOME/Maildir commit -m 'mail delivery'
The call to diff-index is just an optimization for the case when nothing was delivered. The default configuration of git-annex will automagically annex any files larger than my threshold. At this point the git-annex repo knows nothing about tags. There is some git configuration that can speed up the "git add" above, namely
$ git config core.untrackedCache true
$ git config core.fsmonitor true
See git-status(1) under "UNTRACKED FILES AND PERFORMANCE".
Defining notmuch as a git remote Assuming git-remote-notmuch is somewhere in your path, you can define a remote to connect to the default notmuch database.
$ git remote add database notmuch::
$ git fetch database
$ git merge --allow-unrelated database
The --allow-unrelated should be needed only the first time. In my case the many small files used to represent the tags (one per message) use a noticeable amount of disk space (in my case about the same amount of space as the xapian database). Once you start merging from the database to the git repo, you will likely have some conflicts, and most conflict resolution tools leave junk lying around. I added the following .gitignore file to the top level of the repo
*.orig
*~
This prevents our cavalier use of git add from adding these files to our git history (and prevents pushing random junk to the notmuch database). To push the tags from git to notmuch, you can run
$ git push database master
You might need to run notmuch new first, so that the database knows about all of the messages (currently git-remote-notmuch can't index files, only update metadata). git annex sync should work with the new remote, but pushing back will be very slow [2]. I disable automatic pushing as follows
$ git config remote.database.annex-push false
Unsticking the database remote If you are debugging git-remote-notmuch, or just unlucky, you may end up in a situation where git thinks the database is ahead of your git remote. You can delete the database remote (and associated stuff) and re-create it. Although I cannot promise this will never cause problems (because, computers), it will not modify your local copy of the tags in the git repo, nor modify your notmuch database.
$ git remote rm database
$ git update-ref -d notmuch/master
$ rm -r .git/notmuch
Fine tuning notmuch config
  • In order to avoid dealing with file renames, I have
      notmuch config set maildir.synchronize_flags false
    
  • I have added the following to new.ignore:
       .git;_notmuch_metadata;.gitignore
    

  1. I also had to set annex.dotfiles to true, as many of my maildirs follow the qmail style convention of starting with a '.'.
  2. I'm not totally clear on why it is so slow, but certainly git-annex tries to push several more branches, and these are ignored by git-remote-annex.

4 August 2025

Aigars Mahinovs: Snapshot mirroring in Debian (and Ubuntu)

Snapshot mirroring in Debian (and Ubuntu) The use of snapshots has been routine in both Debian and Ubuntu for several years now (more than 15 years for Debian, to be precise). Snapshots have become not only very reliable, but also an increasingly important part of the Debian package archive. This week, I encountered a problem at work that could be perfectly solved by correctly using the Snapshot service. However, while trying to figure it out, I ran into some shortcomings in the documentation. Until the docs are updated, I am publishing this blog post to make this information easier to find.

Problem 1: Ensure fully reproducible creation of Docker containers with the exact same packages installed, even years after the original images were generated.

Solution 1: Pin everything! Use a pinned source image in the FROM statement, such as debian:trixie-20250721-slim, and also pin the APT package sources to the "same" date: "20250722".

Hint: The APT packages need to be newer than the Docker image base. If the APT packages are a bit newer, that's not a problem, as APT can upgrade packages without issues. However, if your Docker image has a newer package than your APT package sources, you will have a big problem. For example, if you have "libbearssl0" version 0.6-2 installed in the Docker image, but your package sources only have the older version 0.6-1, you will fail when trying to install the "libbearssl-dev" package. This is because you only have version 0.6-1 of the "-dev" package available, which hard-depends on exactly version 0.6-1 of "libbearssl0", and APT will refuse to downgrade an already installed package to satisfy that dependency.

Problem 2: You are using a lot of images in a lot of executions and building tens of thousands of images per day. It would be a bad idea to put all this load on public Debian servers. Using local sources is also faster and adds extra security.

Solution 2: Use local (transparently caching) mirrors for both the Docker Hub repository and the APT package source.

At this point, I ran into another issue: I could not easily figure out how to specify a local mirror for the snapshot part of the archive service. First of all, snapshot support in both Ubuntu and Debian accepts both syntaxes described in the Debian and Ubuntu documentation above. The documentation on both sites presents different approaches and syntax examples, but both work. The best approach nowadays is to use the "deb822" sources syntax. Remove /etc/apt/sources.list (if it still exists), delete all contents of the /etc/apt/sources.list.d directory, and instead create this file at /etc/apt/sources.list.d/debian.sources:
Types: deb
URIs: https://common.mirror-proxy.local/ftp.debian.org/debian/
Suites: trixie
Components: main non-free-firmware non-free contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
Snapshot: 20250722
Hint: This assumes you have a mirror service running at common.mirror-proxy.local that proxies requests (with caching) to whitelisted domains, based on the name of the first folder in the path. If you now run sudo apt update --print-uris, you will see that your configuration accesses your mirror, but does not actually use the snapshot. Next, add the following to /etc/apt/apt.conf.d/80snapshots:
APT::Snapshot "20250722";
That should work, right? Let's try sudo apt update --print-uris again. I've got good news and bad news! The good news is that we are now actually using the snapshot we specified (twice). The bad news is that we are completely ignoring the mirror and going directly to snapshots.debian.org instead. Finding the right information was a bit of a challenge, but after a few tries, this worked: to specify a custom local mirror of the Debian (or Ubuntu) snapshot service, simply add the following line to the same file, /etc/apt/apt.conf.d/80snapshots:
Acquire::Snapshots::URI::Override::Origin::debian "https://common.mirror-proxy.local/snapshot.debian.org/archive/debian/@SNAPSHOTID@/";
Now, if you check again with sudo apt update --print-uris, you will see that the requests go to your mirror and include the specified snapshot identifier. Success! Now you can install any packages you want, and everything will be completely local and fully reproducible, even years later!
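For completeness, the check described above can be scripted; the hostname and snapshot ID below match the examples in this post:

$ sudo apt update --print-uris | grep -F common.mirror-proxy.local | grep -F 20250722
# every index URI should go through the local mirror and embed the snapshot ID;
# anything still pointing at snapshot.debian.org means the override is not applied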

3 August 2025

Aigars Mahinovs: Debconf 25 photos

Debconf 25 photos Debconf 25 came to an end in Brest, France a couple of weeks ago. This has been a very different and unusually interesting Debconf, for me for two related reasons: for one, the conference was close enough, in Western Europe, that I could simply drive there with a car (which reminds me that I should make a blog post about the BMW i5, before I am done with it at the end of this year), and for the other - the conference was close enough to Western Europe that many other Debian developers who have not been seen at the event for many years could join this year. Being able to arrive early, decompress and spend extra time looking around the place made the event itself even more enjoyable than usual. The French cuisine, especially in its Breton expression, has been a very welcome treat. Even if there were some rough patches with the food selection, amount, or waiting, it was still a great experience. I specifically want to say a big thank you to the organisers for everything, but very explicitly for planning all the talk/BOF rooms in the same building and almost on the same floor. It saved me a lot of footwork, and for other participants the short walks between the talks made it possible to always have a few minutes to talk to people or grab a croissant before running to the next talk.

IMHO we should come back to a tradition of organising Debconf in Europe every 2-3 years. This maximises one of the main goals of Debconf - bringing as many Debian Developers as possible together in one physical location. This works best when the location is actually close to large concentrations of existing developers. In other years, the other goal of Debconf can then take priority - recruiting new developers in new locations. However, these goals could both be achieved at the same time - there are plenty of locations in Europe and even in Western Europe that still have good potential for attracting new developers, especially if we focus on organising the event on the campuses of some larger technically-oriented universities.

This year was also very productive for me: a lot of conversations with various people about all kinds of topics, especially technical packaging questions. It has been a long time since the very basic foundations of Debian packaging work have been so fundamentally refactored and modernized as in the past year. Tag2upload has become a catalyst for git-based packaging and for automated workflows via Salsa, and all of that feeds back into focusing on a few best-supported packaging workflows. There is still a bit of a documentation gap for a new contributor getting to these modern packaging workflows from the point where the New Maintainers' Guide stops.

In any case, next year Debconf will be happening in Santa Fe, Argentina. And the year after that it is all still open and in a close competition between Japan, Spain, Portugal, Brazil and... El Salvador? Personally, I would love to travel to Japan (again), but Spain or Portugal would also be great locations to meet more European developers again. As for Santa Fe... it is quite likely that I will not be able to make it there next year, for (planned) health reasons. I guess I should also write a new blog post about what it means to be a Debconf Photographer, so that someone else could do this as well, and also reduce the "bus factor" along the way.
But before that - here is the main group photo from this year: DebConf 25 Group photo You can also see it on: You can also enjoy the rest of the photos: Additionally, check out photos from other people on GIT LFS and consider adding your own photos there as well. Other places I have updated with up-to-date information are these wiki pages: If you took part in the playing cards event, then check your photo in this folder and link to your favourite from your line in the playing card wiki

Ben Hutchings: FOSS activity in July 2025

In July I attended DebCamp and DebConf in Brest, France. I very much enjoyed the opportunity to reconnect with other Debian contributors in person. I had a number of interesting and fruitful conversations there, besides the formally organised BoFs and talks. I also gave my own talk on What's new in the Linux kernel (and what's missing in Debian). Here's the usual categorisation of activity:

1 August 2025

Iustin Pop: Our Grand Japan 2025 vacation is over

As I'm writing this, we're one hour away from landing, and thus our Grand (with a capital G for sure) Japan 2025 vacation is over. Planning started about nine months ago, plane tickets were bought six months in advance, most hotels booked about four months ahead, and then came a wonderful, even if a bit packed, almost 3 weeks in Japan. And now we're left with lots of good memories, some mishaps that we're going to laugh about in a few months' time, and quite a few thousand pictures to process and filter, so that they can be viewed in a single session. Oh, and I'm also left with a nice bottle of plum wine, thanks to inflight shopping. I was planning to buy one in the airport but didn't manage to, as Haneda International departures, after the security check, is a bit small. But in 15 hours of flying, there was enough time to implement 2 tiny Corydalis features, and browse the shopping catalog. I only learned on the flight that some items need to be preordered; a lesson for next time. Thanks to the wonders of inflight internet, I can write and publish this, but it not being StarLink, Visual Studio Code managed to download an update for the UI, but now the remote server package is too big? slow? and can't be downloaded. Well, it started downloading 5 times, and aborted at about 80% each time. Thankfully my blog is lightweight and I can write it in vi and push it. And pushing the above-mentioned features to GitHub was also possible. A proper blog post will follow, once I can select some pictures and manage to condense three weeks into an overall summary. And in the meantime, back to the real world!

31 July 2025

Simon Josefsson: Independently Reproducible Git Bundles

The gnulib project publishes a git bundle as a stable archival copy of the gnulib git repository once in a while. Why? We don't know exactly what this may be useful for, but I'm promoting this to see if we can establish some good use-cases. A git bundle may help to establish provenance in case of an attack on the Savannah hosting platform that compromises the gnulib git repository. Another use is in the Debian gnulib package: that gnulib bundle is git cloned when building some Debian packages, to get to exactly the gnulib commit used by each upstream project (see my talk on gnulib at Debconf24), and this approach reduces the amount of vendored code that is part of Debian's source code, which is relevant to mitigate XZ-style attacks. The first time we published the bundle, I wanted it to be possible for others to re-create it bit-by-bit identically. At the time I discovered a well-written blog post by Paul Beacher on reproducible git bundles and thought he had solved the problem for me. Essentially it boils down to disabling threading during compression when producing the bundle, and his final example shows this results in a predictable, bit-by-bit identical output:
$ for i in $(seq 1 100); do \
> git -c 'pack.threads=1' bundle create -q /tmp/bundle-$i --all; \
> done
$ md5sum /tmp/bundle-* | cut -f 1 -d ' ' | uniq -c
    100 4898971d4d3b8ddd59022d28c467ffca
So what remains to be said about this? It seems reproducibility goes deeper than that. One desirable property is that someone else should be able to reproduce the same git bundle, and not only that a single individual is able to reproduce things on one machine. It surprised me to see that when I ran the same set of commands on a different machine (started from a fresh git clone), I got a different checksum. The different checksums occurred even when nothing had been committed on the server side between the two runs. I thought the reason had to do with other sources of unpredictable data, and I explored several ways to work around this but eventually gave up. I settled for the following sequence of commands:
REV=ac9dd0041307b1d3a68d26bf73567aa61222df54 # master branch commit to package
git clone https://git.savannah.gnu.org/git/gnulib.git
cd gnulib
git fsck # attempt to validate input
# inspect that the new tree matches a trusted copy
git checkout -B master $REV # put $REV at master
for b in $(git branch -r | grep origin/stable- | sort --version-sort); do git checkout ${b#origin/}; done
git remote remove origin # drop some unrelated branches
git gc --prune=now # drop any commits after $REV
git -c 'pack.threads=1' bundle create gnulib.bundle --all
V=$(env TZ=UTC0 git show -s --date=format:%Y%m%d --pretty=%cd master)
mv gnulib.bundle gnulib-$V.bundle
build-aux/gnupload --to ftp.gnu.org:gnulib gnulib-$V.bundle
At the time it felt more important to publish something than to reach for perfection, so we did so using the above snippet. Afterwards I reached out to the git community on this and there were good discussion about my challenge. At the end of that thread you see that I was finally able to reproduce a bit-by-bit identical bundles from two different clones, by using an intermediate git -c pack.threads=1 repack -adF step. I now assume that the unpredictable data I got earlier was introduced during the git clone steps, compressing the pack differently each time due to threaded compression. The outcome could also depend on what content the server provided, so if someone ran git gc, git repack on the server side things would change for the user, even if the user forced threading to 1 during cloning more experiments on what kind of server-side alterations results in client-side differences would be good research. A couple of months passed and it is now time to publish another gnulib bundle somewhat paired to the bi-yearly stable gnulib branches so let s walk through the commands and explain what they do. First clone the repository:
REV=225973a89f50c2b494ad947399425182dd42618c   # master branch commit to package
S1REV=475dd38289d33270d0080085084bf687ad77c74d # stable-202501 branch commit
S2REV=e8cc0791e6bb0814cf4e88395c06d5e06655d8b5 # stable-202507 branch commit
git clone https://git.savannah.gnu.org/git/gnulib.git
cd gnulib
git fsck # attempt to validate input
I believe the git fsck will validate that the chain of SHA1 commits are linked together, preventing someone from smuggling in unrelated commits earlier in the history without having to do a SHA1 collision. SHA1 collisions are economically feasible today, so this isn't much of a guarantee of anything though.
git checkout -B master $REV # put $REV at master
# Add all stable-* branches locally:
for b in $(git branch -r | grep origin/stable- | sort --version-sort); do git checkout ${b#origin/}; done
git checkout -B stable-202501 $S1REV
git checkout -B stable-202507 $S2REV
git remote remove origin # drop some unrelated branches
git gc --prune=now # drop any unrelated commits, not clear this helps
This establishes a set of branches pinned to particular commits. The older stable-* branches are no longer updated, so they shouldn't be moving targets. In case they are modified in the future, the particular commit we used will be found in the official git bundle.
time git -c pack.threads=1 repack -adF
That's the new magic command to repack and recompress things in a hopefully more predictable way. This leads to a 72MB git pack under .git/objects/pack/ and a 62MB git bundle. The runtime on my laptop is around 5 minutes. I experimented with -c pack.compression=1 and -c pack.compression=9 but the size was roughly the same; 76MB and 66MB for level 1 and 72MB and 62MB for level 9. Runtime still around 5 minutes. Git uses zlib by default, which isn't the most optimal compression around. I tried -c pack.compression=0 and got a 163MB git pack and a 153MB git bundle. The runtime is still around 5 minutes, indicating that compression is not the bottleneck for the git repack command. That 153MB uncompressed git bundle compresses to 48MB with gzip default settings and 46MB with gzip -9; to 39MB with zst defaults and 34MB with zst -9; and to 28MB using xz defaults with a small 26MB using xz -9. Still, the inconvenience of having to uncompress a 30-40MB archive into the much larger 153MB is probably not worth the savings compared to shipping and using the (still relatively modest) 62MB git bundle. Now finally prepare the bundle and ship it:
git -c 'pack.threads=1' bundle create gnulib.bundle --all
V=$(env TZ=UTC0 git show -s --date=format:%Y%m%d --pretty=%cd master)
mv gnulib.bundle gnulib-$V.bundle
build-aux/gnupload --to ftp.gnu.org:gnulib gnulib-$V.bundle
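On the receiving end, the bundle can be checked and used with stock git; the file name below matches the one generated above, and the checksum comparison against an independently built bundle is the point of the whole exercise:

$ git bundle verify gnulib-$V.bundle      # confirms the bundle is complete and self-contained
$ git clone gnulib-$V.bundle gnulib       # clone directly from the bundle
$ sha256sum gnulib-$V.bundle              # compare against an independently reproduced bundle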
Yay! Another gnulib git bundle snapshot is available from https://ftp.gnu.org/gnu/gnulib/. The essential part of the git repack command is the -F parameter. In the thread -f was suggested, which translates into the git pack-objects --no-reuse-delta parameter:
--no-reuse-delta
When creating a packed archive in a repository that has existing packs, the command reuses existing deltas. This sometimes results in a slightly suboptimal pack. This flag tells the command not to reuse existing deltas but compute them from scratch.
When reading the man page, I thought that using -F, which translates into --no-reuse-object, would be slightly stronger:
--no-reuse-object
This flag tells the command not to reuse existing object data at all, including non deltified object, forcing recompression of everything. This implies --no-reuse-delta. Useful only in the obscure case where wholesale enforcement of a different compression level on the packed data is desired.
On the surface, without --no-reuse-object, some amount of earlier compression could taint the final result. Still, I was able to get bit-by-bit identical bundles by using -f, so possibly reaching for -F is not necessary. All the commands were done using git 2.51.0 as packaged by Guix. I fear the result may be different with other git versions and/or zlib libraries. I was able to reproduce the same bundle on a Trisquel 12 aramo (derived from Ubuntu 22.04) machine, which uses git 2.34.1. This suggests there is some chance of this being possible to reproduce in 20 years' time. Time will tell. I also fear these commands may be insufficient if something is moving on the server side of the git repository of gnulib (even something as simple as a new commit); I tried to make some experiments with this but let's aim for incremental progress here. At least I have now been able to reproduce the same bundle on different machines, which wasn't the case last time. Happy Reproducible Git Bundle Hacking!

Russell Coker: Links July 2025

Louis Rossman made an informative YouTube video about right to repair and the US military [1]. This is really important as it helps promote free software and open standards. The ACM has an insightful article about hidden controls [2]. We need EU regulations about hidden controls in safety critical systems like cars. This Daily WTF article has some interesting security implications for Windows [3]. Earth.com has an interesting article about the rubber hand illusion and how it works on Octopus [4]. For a long time I have been opposed to eating Octopus because I think they are too intelligent. The Washington Post has an insightful article about the future of spies when everything is tracked by technology [5]. Micah Lee wrote an informative guide to using Signal groups for activism [6]. David Brin wrote an insightful blog post about the phases of the ongoing US civil war [7]. Christian Kastner wrote an interesting blog post about using Glibc hardware capabilities to use different builds of a shared library for a range of CPU features [8]. David Brin wrote an insightful and interesting blog post comparing President Carter with the criminals in the Republican party [9].
