I'm something of a filesystem geek, I guess. I first wrote about ZFS on Linux 14 years ago, and even before I used ZFS, I had used ext2/3/4, jfs, reiserfs, xfs, and no doubt some others.
I've also used btrfs. I last posted about it in 2014, when I noted it had some advantages over ZFS, but also some drawbacks, including a lot of kernel panics.
Since that comparison, ZFS has gained trim support and btrfs has stabilized. The btrfs status page gives you an accurate idea of what is safe to use on btrfs.
Background: Moving towards ZFS and btrfs
I have been trying to move everything away from ext4 and onto either ZFS or btrfs, for several reasons:
The checksums for every block help detect potential silent data corruption
Instant snapshots make consistent backups of live systems a lot easier, and without the hassle and wasted space of LVM snapshots
Transparent compression and dedup can save a lot of space in storage-constrained environments
For any machine with at least 32GB of RAM (plus my backup server, which has only 8GB), I run ZFS. While it lacks some of the flexibility of btrfs, it has polish. zfs list -o space shows useful space accounting. zvols can back VMs. With my project simplesnap, I can easily send hourly ZFS backups, and in most cases I choose to send them over NNCP.
I have a few VMs in the cloud (running Debian, of course) that I use to host things like this blog, my website, my gopher site, the quux NNCP public relay, and various other things.
In these environments, storage space can be expensive. For that matter, so can RAM. ZFS is RAM-hungry, which rules it out. I've been running btrfs in those environments for a few years now, and it's worked out well. I do async dedup, lzo or zstd compression depending on the needs, and the occasional balance and defrag.
Filesystems on the Raspberry Pi
I run Debian trixie on all my Raspberry Pis; not Raspbian or Raspberry Pi OS, for a number of reasons. My 8-year-old uses a Raspberry Pi 400 as her primary computer and loves it! She doesn't do web browsing, but plays Tuxpaint, some old DOS games like Math Blaster via dosbox, and uses Thunderbird with a locked-down email account.
But it was SLOW. Just really, glacially, slow, especially for Thunderbird.
My first step to address that was to get a faster MicroSD card to hold the OS. That was a dramatic improvement. It's still slow, but a lot faster.
Then I thought: maybe I could use btrfs with LZO compression to reduce the amount of I/O and speed things up further? Analysis had shown things were mostly slow due to I/O constraints, not CPU.
The conversion
Rather than use the btrfs in-place conversion from ext4, I opted to dar it up (dar is similar to tar), run mkfs.btrfs on the SD card, then unpack the archive back onto it. Easy enough, right?
Well, not so fast. The MicroSD card is 128GB, and the entire filesystem is 6.2GB. But after unpacking 100MB onto it, I got an "out of space" error.
btrfs has this notion of block groups. By default, each block group is dedicated to either data or metadata. btrfs fi df and btrfs fi usage will show you details about the block groups.
btrfs allocates block groups greedily (the ssd_spread mount option I use may have exacerbated this). What happened was that it allocated almost the entire drive to data block groups, trying to spread the data across it. It so happened that dar archived some larger files first (maybe /boot), so btrfs was allocating data and metadata block groups assuming few large files. But then it started unpacking one of the directories in /usr with lots of small files (maybe /usr/share/locale). That quickly filled up the metadata block group, and since the entire SD card had already been allocated to block groups of one type or the other, I got ENOSPC.
Deleting a few files and running btrfs balance resolved it; now it allocated 1GB to metadata, which was plenty. I re-ran the dar extract and now everything was fine. See more details on btrfs balance and block groups.
This was the only btrfs problem I encountered.
Benchmarks
I timed two things prior to switching to btrfs: how long it takes to boot (measured from the moment I turn on the power until the moment the XFCE login box is displayed), and how long it takes to start Thunderbird.
After switching to btrfs with LZO compression, somewhat to my surprise, both measures were exactly the same!
Why might this be?
It turns out that SD cards are known to be pathologically bad at random reads. Boot and Thunderbird startup both likely involve a lot of small random reads, not large streaming reads. Therefore, even though I reduced the total I/O needed, the impact was insubstantial because the real bottleneck is the seeks across the card.
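This is easy to sanity-check. Below is a rough microbenchmark (my own sketch, not from any btrfs tooling) comparing sequential and random 4KiB reads of a file; on flash media with a cold cache, the random case is typically far slower:

```python
import os
import random
import time

def read_benchmark(path, block=4096, count=512):
    """Return (sequential_seconds, random_seconds) for `count` block reads."""
    size = os.path.getsize(path)

    # Sequential: read consecutive blocks, wrapping around at EOF.
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(count):
            if not f.read(block):
                f.seek(0)
    seq = time.perf_counter() - t0

    # Random: seek to a random offset before each read.
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(count):
            f.seek(random.randrange(0, max(1, size - block)))
            f.read(block)
    rnd = time.perf_counter() - t0

    return seq, rnd
```

To get meaningful numbers, point it at a large file on the SD card after dropping the page cache (echo 3 > /proc/sys/vm/drop_caches as root); otherwise both figures mostly measure RAM.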
Still, I gain the better backup support and silent data corruption detection, so I kept btrfs.
SSD mount options and MicroSD endurance
btrfs has several mount options specifically relevant to SSDs. Aside from the obvious trim support, they are ssd and ssd_spread. The documentation on these is vague, and my attempts to learn more turned up a lot of information that was outdated or unsubstantiated folklore.
Some reports suggest that older SSDs benefit from ssd_spread, but that it may have no effect, or even a harmful one, on newer drives, at times causing fragmentation or write amplification. I could find nothing to back this up, though. It also seems particularly difficult to figure out what kind of wear leveling SSD firmware does. MicroSD firmware is likely on the less-advanced side, but still, I have no idea what it might do. In any case, since btrfs does not update blocks in place, it should beat ext4 in the most naive case (no wear leveling at all), but may generate somewhat more write traffic in the pathological worst case (frequent updates of small portions of large files).
One anecdotal report, which I read but can no longer find, was from someone who had set up a sort of torture test for SD cards; they reported that ext4 lasted a few weeks or months before the MicroSDs failed, while btrfs lasted years.
If you are looking for a MicroSD card, by the way, The Great MicroSD Card Survey is a nice place to start.
For longevity: I already mount all my filesystems with noatime, and I continue to recommend that. You can also consider limiting the journal size in /etc/systemd/journald.conf and running a daily fstrim (batched trims may work better than live discards on some filesystems).
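For instance, hedged sketches of both (the size and schedule values here are illustrative, not recommendations; the stock fstrim.timer from util-linux defaults to weekly, so a drop-in is needed for daily runs):

```ini
# /etc/systemd/journald.conf.d/limit.conf -- cap journal disk usage
[Journal]
SystemMaxUse=64M

# /etc/systemd/system/fstrim.timer.d/daily.conf -- run fstrim daily
[Timer]
OnCalendar=
OnCalendar=daily
```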
Conclusion
I've been pretty pleased with btrfs. The concerns I have today relate to block groups and maintenance (a periodic balance and maybe a periodic defrag). I'm not sure I'd be ready to say "put btrfs on the computer you send to someone who isn't Linux-savvy", because the chances of running into issues are higher than with ext4. Still, for people with some tech savvy, btrfs can improve reliability and performance in other ways.
Welcome to the August 2025 report from the Reproducible Builds project!
These monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.
In this report:
Reproducible Builds Summit 2025
Please join us at the upcoming Reproducible Builds Summit, set to take place from October 28th to 30th 2025 in Vienna, Austria!
We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summit is a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort.
During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.
If you're interested in joining us this year, please make sure to read the event page, which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!
Reproducible Builds and live-bootstrap at WHY2025
WHY2025 (What Hackers Yearn) is a nonprofit outdoors hacker camp that takes place in Geestmerambacht in the Netherlands (approximately 40km north of Amsterdam). The event is organised for and by volunteers from the worldwide hacker community, and "knowledge sharing, technological advancement, experimentation, connecting with your hacker peers, forging friendships and hacking are at the core of this event".
At this year's event, Frans Faase gave a talk on live-bootstrap, "an attempt to provide a reproducible, automatic, complete end-to-end bootstrap from a minimal number of binary seeds to a supported fully functioning operating system".
Frans' talk is available to watch on video, and his slides are available as well.
DALEQ: Explainable Equivalence for Java Bytecode
Jens Dietrich of the Victoria University of Wellington, New Zealand and Behnaz Hassanshahi of Oracle Labs, Australia published an article this month entitled "DALEQ: Explainable Equivalence for Java Bytecode", which explores the options and difficulties when Java binaries are not identical despite being built from the same sources, and what avenues are available for proving equivalence despite the lack of bitwise correlation:
[Java] binaries are often not bitwise identical; however, in most cases, the differences can be attributed to variations in the build environment, and the binaries can still be considered equivalent. Establishing such equivalence, however, is a labor-intensive and error-prone process.
Jens and Behnaz therefore propose a tool called DALEQ, which:
disassembles Java byte code into a relational database, and can normalise this database by applying Datalog rules. Those databases can then be used to infer equivalence between two classes. Notably, equivalence statements are accompanied with Datalog proofs recording the normalisation process. We demonstrate the impact of DALEQ in an industrial context through a large-scale evaluation involving 2,714 pairs of jars, comprising 265,690 class pairs. In this evaluation, DALEQ is compared to two existing bytecode transformation tools. Our findings reveal a significant reduction in the manual effort required to assess non-bitwise equivalent artifacts, which would otherwise demand intensive human inspection. Furthermore, the results show that DALEQ outperforms existing tools by identifying more artifacts rebuilt from the same code as equivalent, even when no behavioral differences are present.
Reproducibility regression identifies issue with AppArmor security policies
Tails developer intrigeri has tracked down a reproducibility regression in the generation of AppArmor policy caches, and has identified an issue with the 4.1.0 version of AppArmor.
Although initially tracked on the Tails issue tracker, intrigeri filed an issue on the upstream bug tracker. AppArmor developer John Johansen replied, confirming that they could reproduce the issue, and went to work on a draft patch. Through this, John revealed that it was caused by an actual underlying security bug in AppArmor (that is to say, it resulted in permissions not always matching what the policy intends) and, crucially, was not merely a cache reproducibility issue.
Work on the fix is ongoing at time of writing.
Rust toolchain fixes
Rust Clippy is a linting tool for the Rust programming language. It provides a collection of lints (rules) designed to identify common mistakes, stylistic issues, potential performance problems and unidiomatic code patterns in Rust projects. This month, Sosthène Guédon filed a new issue on GitHub requesting a new check that would "lint against non-deterministic operations in proc-macros, such as iterating over a HashMap".
Dropping support for the armhf architecture: since July 2015, Vagrant Cascadian has been hosting a zoo of approximately 35 armhf systems which were used for building Debian packages for that architecture.
Holger Levsen also uploaded strip-nondeterminism, our program that improves reproducibility by stripping out non-deterministic information such as timestamps or other elements introduced during packaging. The new version, 1.14.2-1, adds some metadata to aid the deputy tool. (#1111947)
Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.
diffoscope
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 303, 304 and 305 to Debian:
Improvements:
Use sed(1) backreferences when generating debian/tests/control to avoid duplicating ourselves. []
Move from a mono-utils dependency to versioned mono-devel mono-utils dependency, taking care to maintain the [!riscv64] architecture restriction. []
Use sed over awk to avoid mangling dependency lines containing = (equals) symbols such as version restrictions. []
Bug fixes:
Fix a test after the upload of systemd-ukify version 258~rc3. []
Ensure that Java class files are named .class on the filesystem before passing them to javap(1). []
Do not run jsondiff on files over 100KiB as the algorithm runs in O(n^2) time. []
Don t check for PyPDF version 3 specifically; check for >= 3. []
Misc:
Update copyright years. [][]
In addition, Martin Joerg fixed an issue with the HTML presenter to avoid a crash when the page limit is None [] and Zbigniew Jędrzejewski-Szmek fixed compatibility with RPM 6 []. Lastly, John Sirois fixed a missing requests dependency in the trydiffoscope tool. []
Website updates
Once again, there were a number of improvements made to our website this month including:
Chris Lamb:
Write and publish a news entry for the upcoming summit. []
Add some assets used at FOSSY, such as the badges and the paper handouts. []
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In August, a number of changes were made by Holger Levsen, including:
Ignore that the megacli RAID controller requires packages from Debian bookworm. []
In addition,
James Addison migrated away from the deprecated toplevel deb822 Python module in favour of debian.deb822 in the bin/reproducible_scheduler.py script [] and removed a note on reproduce.debian.net after the release of Debian trixie [].
Jochen Sprickerhof made a huge number of improvements to the reproduce.debian.net statistics calculation [][][][][][] as well as to the reproduce.debian.net service more generally [][][][][][][][].
Mattia Rizzolo performed a lot of work migrating scripts to SQLAlchemy version 2.0 [][][][][][] in addition to making some changes to the way openSUSE reproducibility tests are handled internally. []
Lastly, Roland Clobus updated the Debian Live packages after the release of Debian trixie. [][]
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:
Back in March, with version 4.17.0, Lean introduced partial_fixpoint, a new way to define recursive functions. I had drafted a blog post for the official Lean FRO blog back then, but forgot about it, and with the Lean FRO blog discontinued, I'll just publish it here; better late than never.
With the partial_fixpoint mechanism we can model possibly partial functions (so those returning an Option) without an explicit termination proof, and still prove facts about them. See the corresponding section in the reference manual for more details.
On the Lean Zulip, I was asked if we can use this feature to define the McCarthy 91 function and prove it to be total. This function is a well-known tricky case for termination proofs.
First let us have a brief look at why this function is tricky to define in a system like Lean. A naive definition like
def f91 (n : Nat) : Nat :=
  if n > 100
    then n - 10
    else f91 (f91 (n + 11))
does not work; Lean is not able to prove termination of this function by itself.
Even using well-founded recursion with an explicit measure (e.g. termination_by 101 - n) is doomed, because we would have to prove facts about the function's behaviour (namely that f91 n = f91 101 = 91 for 90 ≤ n ≤ 100) and at the same time use that fact in the termination proof that we have to provide while defining the function. (The Wikipedia page spells out the proof.)
We can make well-founded recursion work if we change the signature and use a subtype on the result to prove the necessary properties while we are defining the function. Lean by Example shows how to do it, but for larger examples this approach can be hard or tedious.
With partial_fixpoint, we can define the function as a partial function without worrying about termination. This requires a change to the function's signature, returning an Option Nat:
def f91 (n : Nat) : Option Nat :=
  if n > 100
    then pure (n - 10)
    else f91 (n + 11) >>= f91
partial_fixpoint
From the point of view of the logic, Option.none is then used for those inputs for which the function does not terminate.
This function definition is accepted and the function runs fine as compiled code:
#eval f91 42
prints some 91.
The crucial question is now: can we prove anything about f91? In particular, can we prove that this function is actually total?
Since we now have the f91 function defined, we can start proving auxiliary theorems, using whatever induction schemes we need. In particular, we can prove that f91 is total and always returns 91 for n ≤ 100:
theorem f91_spec_high (n : Nat) (h : 100 < n) : f91 n = some (n - 10) := by
  unfold f91; simp [*]
theorem f91_spec_low (n : Nat) (h : n ≤ 100) : f91 n = some 91 := by
  unfold f91
  rw [if_neg (by omega)]
  by_cases n < 90
  · rw [f91_spec_low (n + 11) (by omega)]
    simp only [Option.bind_eq_bind, Option.some_bind]
    rw [f91_spec_low 91 (by omega)]
  · rw [f91_spec_high (n + 11) (by omega)]
    simp only [Nat.reduceSubDiff, Option.some_bind]
    by_cases h : n = 100
    · simp [f91, *]
    · exact f91_spec_low (n + 1) (by omega)
theorem f91_spec (n : Nat) : f91 n = some (if n ≤ 100 then 91 else n - 10) := by
  by_cases h100 : n ≤ 100
  · simp [f91_spec_low, *]
  · simp [f91_spec_high, Nat.lt_of_not_le ‹_›, *]
-- Generic totality theorem
theorem f91_total (n : Nat) : (f91 n).isSome := by simp [f91_spec]
(Note that theorem f91_spec_low is itself recursive in a somewhat non-trivial way, but Lean can figure that out all by itself. Use termination_by? if you are curious.)
This is already a solid start! But what if we want a function of type f91! (n : Nat) : Nat, without the Option? We can then derive that from the partial variant, as we have just proved it to be actually total:
def f91! (n : Nat) : Nat := (f91 n).get (f91_total n)

theorem f91!_spec (n : Nat) : f91! n = if n ≤ 100 then 91 else n - 10 := by
  simp [f91!, f91_spec]
Using partial_fixpoint one can decouple the definition of a function from a termination proof, or even model functions that are not terminating on all inputs. This can be very useful in particular when using Lean for program verification, such as with the aeneas package, where such partial definitions are used to model Rust programs.
When a Debian cloud VM boots, it typically runs cloud-init at various points in the boot process. Each invocation can perform certain operations based on the host's static configuration passed by the user, typically either through a well-known link-local network service or an attached iso9660 drive image. Some of the cloud-init steps execute before the network comes up, and others at a couple of different points after the network is up.
I recently encountered an unexpected issue when configuring a dualstack (using both IPv6 and legacy IPv4 networking) VM to use a custom apt server accessible only via IPv6. VM provisioning failed because it was unable to access the server in question, yet when I logged in to investigate, it was able to access the server without any problem. The boot had apparently gone smoothly right up until cloud-init's Package Update Upgrade Install module called apt-get update, which failed and broke subsequent provisioning steps. The errors reported by apt-get indicated that there was no route to the service in question, which more accurately probably meant that there was not yet a route to the service. But there was shortly after, when I investigated.
This was surprising because the apt-get invocations occur in a cloud-init sequence that's explicitly ordered after the network is configured according to systemd-networkd-wait-online. Investigation eventually led to similar issues encountered in other environments, reported in Debian bug #1111791, "systemd: network-online.target reached before IPv6 address is ready". The issue described in that bug is identical to mine, but the bug is tagged wontfix: the behavior is considered correct.
Why the default behavior is the correct one
While it's a bit counterintuitive, the systemd-networkd behavior is correct, and it's also not something we'd want to override in the cloud images. Without explicit configuration, systemd can't accurately infer the intended network configuration of a given system. If a system is IPv6-only, systemd-networkd-wait-online will introduce unexpected delays in the boot process if it waits for IPv4, and vice-versa. If it assumes dualstack, things are even worse, because it would block for a long time (approximately two minutes) in any single-stack network before failing, leaving the host in a degraded state. So the most reasonable default behavior is to block until any protocol is configured.
For these same reasons, we can't change the systemd-networkd-wait-online configuration in our cloud images. All of the cloud environments we support offer both single-stack and dual-stack networking, so we preserve systemd's default behavior.
What's causing problems here is that IPv6 takes significantly longer to configure due to its more complex router solicitation + router advertisement + DHCPv6 setup process. So in this particular case, where I've got a dualstack VM that needs to access a v6-only apt server during the provisioning process, I need to find some mechanism to override systemd's default behavior and wait specifically for IPv6 connectivity.
What won t work
Cloud-init offers the ability to write out arbitrary files during provisioning, so writing a drop-in for systemd-networkd-wait-online.service is trivial. Unfortunately, this doesn't give us everything we actually need. We still need to invoke systemctl daemon-reload to get systemd to actually apply the changes after we've written them, and of course we need to do that before the service actually runs. Cloud-init provides a bootcmd module that lets us run shell commands "very early in the boot process", but it runs too early: it runs before we've written out our configuration files. Similarly, it provides a runcmd module, but scripts there run towards the end of the boot process, far too late to be useful.
Instead of using the bootcmd facility to simply reload systemd's config, it seemed possible that we could both write the config and trigger the reload, similar to the following:
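Such an attempt might look roughly like this (a reconstructed sketch; the drop-in path and wait-online arguments are my assumptions):

```yaml
#cloud-config
write_files:
  - path: /etc/systemd/system/systemd-networkd-wait-online.service.d/ipv6.conf
    content: |
      [Service]
      ExecStart=
      ExecStart=/lib/systemd/systemd-networkd-wait-online --ipv6
bootcmd:
  - [ systemctl, daemon-reload ]
```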
But even that runs too late, as we can see in the logs that systemd-networkd-wait-online.service has completed before bootcmd is executed:
root@sid-tmp2:~# journalctl --no-pager -l -u systemd-networkd-wait-online.service
Aug 29 17:02:12 sid-tmp2 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 29 17:02:13 sid-tmp2 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
root@sid-tmp2:~# grep -F 'config-bootcmd ran' /var/log/cloud-init.log
2025-08-29 17:02:14,766 - handlers.py[DEBUG]: finish: init-network/config-bootcmd: SUCCESS: config-bootcmd ran successfully and took 0.467 seconds
At this point, it's looking like there are few options left!
What eventually worked
I ended up identifying two solutions to the issue, both of which involve getting some other component of the provisioning process to run systemd-networkd-wait-online.
Solution 1
The first involves getting apt-get itself to wait for IPv6 configuration. The apt.conf configuration interface allows the definition of an APT::Update::Pre-Invoke hook that's executed just before apt's update operation. By writing the following to a file in /etc/apt/apt.conf.d/, we're able to ensure that we have IPv6 connectivity before apt-get tries accessing the network. This cloud-config snippet accomplishes that:
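A sketch of what such a snippet could look like (the file name and the wait-online binary path are illustrative assumptions):

```yaml
#cloud-config
write_files:
  - path: /etc/apt/apt.conf.d/99wait-ipv6
    content: |
      APT::Update::Pre-Invoke { "/lib/systemd/systemd-networkd-wait-online --ipv6"; };
```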
This is safe to leave in place after provisioning, because the delay will be negligible once IPv6 connectivity is established. It's only during address configuration that it'll block for a noticeable amount of time, but that's what we want.
This solution isn't entirely correct, though, because it only affects apt-get. Other services that start after the system is ostensibly online might see only IPv4 connectivity when they start. This seems acceptable at the moment, though.
Solution 2
The second solution is to simply invoke systemd-networkd-wait-online directly from a cloud-init bootcmd. Similar to the first solution, it's not exactly correct, because the host has already reached network-online.target, but it blocks enough of cloud-init that package installation happens only after it completes. The cloud-config snippet for this is:
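Roughly the following (again a sketch, with the binary path assumed):

```yaml
#cloud-config
bootcmd:
  - [ /lib/systemd/systemd-networkd-wait-online, --ipv6 ]
```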
In either case, we still want to write out a snippet configuring systemd-networkd-wait-online to wait for IPv6 connectivity on future reboots. Even though cloud-init won't necessarily run in those cases, and many cloud VMs never reboot at all, it does complete the solution. Additionally, it solves the problem for any derivative images that may be created based on the running VM's state. (At least if we can be certain that instances of those derivative images will never run in an IPv4-only network!)
How to properly solve it
One possible improvement would be for cloud-init to support a configuration key allowing the admin to specify the required protocols. Based on the presence of this key, cloud-init could reconfigure systemd-networkd-wait-online.service accordingly. Alternatively it could set the appropriate RequiredFamilyForOnline= value in the generated .network file. cloud-init supports multiple network configuration backends, so each of those would need to be updated. If using the systemd-networkd configuration renderer, this should be straightforward, but Debian uses the netplan renderer, so that tool might also need to be taught to pass such a configuration along to systemd-networkd.
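For illustration, a hypothetical .network fragment using that setting (the interface and file names are made up; RequiredFamilyForOnline= lives in the [Link] section):

```ini
# /etc/systemd/network/10-eth0.network
[Match]
Name=eth0

[Network]
DHCP=yes

[Link]
RequiredFamilyForOnline=ipv6
```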
Beyond Debian: Useful for other distros too
Every two years Debian releases a new major version of its Stable series. The differences
between consecutive Debian Stable releases therefore represent two years of new
developments: in Debian as an organization and in its native packages, but also
in all the other packages that make it into the new Stable release and are also
shipped by other distributions.
If you're not paying close attention to everything that's going on all the time
in the Linux world, you miss a lot of the nice new features and tools. It's
common for people to only realize there's a cool new trick available only years
after it was first introduced.
Given these considerations, the tips that I'm describing will eventually be
available in whatever other distribution you use, be it because it's a Debian
derivative or because it just got the same feature from the upstream project.
I'm not going to list "passive" features (as good as they can be), the focus
here is on new features that might change how you configure and use your
machine, with a mix between productivity and performance.
Debian 13 - Trixie
I have been a Debian Testing user for longer than 10 years now (and I recommend
it for non-server users), so I'm not usually keeping track of all the cool
features arriving in the new Stable releases because I'm continuously receiving
them through the Debian Testing rolling release.
Nonetheless, as a Debian Developer I'm in a good position to point out the ones
I can remember. I would also like other Debian Developers to do the same as I'm
sure I would learn something new.
The Debian 13 release notes contain a "What's new" section, which lists the
first two items here and a few other things. In other words, take my list as an
addition to the release notes.
Debian 13 was released on 2025-08-09, and these are nice things you shouldn't
miss in the new release, with a bonus one not tied to the Debian 13 release.
1) wcurl
Have you ever had to download a file from your terminal using curl and didn't
remember the parameters needed? I did.
Nowadays you can use wcurl; "a command line tool which lets you download URLs
without having to remember any parameters."
Simply call wcurl with one or more URLs as parameters and it will download
all of them in parallel, performing retries, choosing the correct output file
name, following redirects, and more.
Try it out:
wcurl example.com
wcurl comes installed as part of the curl package on Debian 13 and in any other
distribution you can imagine, starting with curl 8.14.0.
I've written more about wcurl in its release announcement, and I gave a
lightning talk about it at DebConf24, which is linked in the release
announcement.
2) HTTP/3 support in curl
Debian has become the first stable Linux distribution to ship curl with support
for HTTP/3. I've written about this in July
2024, when we
first enabled it. Note that we first switched the curl CLI to GnuTLS, but then
ended up releasing the curl CLI linked with OpenSSL (as support arrived later).
Debian was the first stable Linux distro to enable it, and within
rolling-release-based distros; Gentoo enabled it first in their non-default
flavor of the package and Arch Linux did it three months before we pushed it to
Debian Unstable/Testing/Stable-backports, kudos to them!
HTTP/3 is not used by default by the curl CLI; you have to enable it with
--http3 or --http3-only.
Try it out:
curl --http3 https://example.com
3) systemd soft-reboot
Starting with systemd v254, there's a new soft-reboot option: a userspace-only
reboot, much faster than a full reboot if you don't need to reboot the kernel.
You can read the announcement in the systemd v254 GitHub release notes.
Try it out:
# This will reboot your machine!
systemctl soft-reboot
4) apt --update
Are you tired of being required to run sudo apt update just before sudo apt upgrade or sudo apt install $PACKAGE? So am I!
The new --update option lets you do both things in a single command:
I love this, but it's not yet where it should be; fingers crossed for a simple
apt upgrade to behave like other package managers by updating its cache as
part of the task. Maybe in Debian 14?
Try it out:
sudo apt upgrade --update
# The order doesn't matter
sudo apt --update upgrade
This is especially handy for container usage, where you have to update the apt
cache before installing anything, for example:
podman run debian:stable /bin/bash -c 'apt install --update -y curl'
5) powerline-go
powerline-go is a powerline-style prompt written in Go, so it's much more
performant than its Python alternative, powerline.
powerline-style prompts are quite useful to show things like the current status
of the git repo in your working directory, exit code of the previous command,
presence of jobs in the background, whether or not you're in an ssh session,
and more.
Try it out:
sudo apt install powerline-go
Then add this to your .bashrc:
function_update_ps1()PS1="$(/usr/bin/powerline-go -error$? -jobs$(jobs -pwc -l))"# Uncomment the following line to automatically clear errors after showing# them once. This not only clears the error for powerline-go, but also for# everything else you run in that shell. Don't enable this if you're not# sure this is what you want.#set "?"if["$TERM"!="linux"]&&[-f"/usr/bin/powerline-go"];thenPROMPT_COMMAND="_update_ps1; $PROMPT_COMMAND"fi
Or this to .zshrc:
functionpowerline_precmd()PS1="$(/usr/bin/powerline-go -error$? -jobs$$(%):%j:-0)"# Uncomment the following line to automatically clear errors after showing# them once. This not only clears the error for powerline-go, but also for# everything else you run in that shell. Don't enable this if you're not# sure this is what you want.#set "?"
If you'd like to have your prompt start in a newline, like I have in the
screenshot above, you just need to set -newline in the powerline-go
invocation in your .bashrc/.zshrc.
6) Gnome System Monitor Extension
Tips number 6 and 7 are for Gnome users.
Gnome is now shipping a system monitor extension which lets you get a glance of
the current load of your machine from the top bar.
I've found this quite useful for machines where I'm required to install
third-party monitoring software that tends to randomly consume more resources
than it should. If I feel like my machine is struggling, I can quickly glance
at its load to verify if it's getting overloaded by some process.
The extension is not as complete as
system-monitor-next,
not showing temperatures or histograms, but at least it's officially part of
Gnome, easy to install and supported by them.
Try it out:
And then enable the extension from the "Extension Manager" application.
7) Gnome setting for battery charging profile
After having to learn more about batteries in order to get into FPV drones,
I've come to have a bigger appreciation for solutions that minimize the
inevitable loss of capacity that accrues over time.
There's now a "Battery Charging" setting (under the "Power") section which lets
you choose between two different profiles: "Maximize Charge" and "Preserve
Battery Health".
On supported laptops, this setting is an easy way to set thresholds for when
charging should start and stop, just like you could do it with the tlp package,
but now from the Gnome settings.
To increase the longevity of my laptop battery, I always keep it at "Preserve
Battery Health" unless I'm traveling.
What I would like to see next is support for choosing different "Power Modes"
based on whether the laptop is plugged-in, and based on the battery
charge percentage.
There's a GNOME
issue
tracking this feature, but there's some pushback on whether this is the right
thing to expose to users.
In the meantime, there are some workarounds mentioned in that issue which
people who really want this feature can follow.
If you would like to learn more about batteries; Battery
University is a great starting point, besides
getting into FPV drones and being forced to handle batteries without a Battery
Management System (BMS).
And if by any chance this sparks your interest in FPV drones, Joshua Bardwell's
YouTube channel is a great resource:
@JoshuaBardwell.
8) Lazygit
Emacs users are already familiar with the legendary magit; a terminal-based
UI for git.
Lazygit is an alternative for non-emacs users, you can integrate it with neovim
or just use it directly.
I'm still playing with lazygit and haven't integrated it into my workflows,
but so far it has been a pleasant experience.
You should check out the demos from the lazygit GitHub
page.
Try it out:
sudo apt install lazygit
And then call lazygit from within a git repository.
9) neovim
neovim has been shipped in Debian since 2016, but upstream has been doing a lot of
work to improve the experience out-of-the-box in the last couple of years.
If you're a neovim poweruser, you're likely not installing it from the official
repositories, but for those that are, Debian 13 comes with version 0.10.4,
which brings the following improvements compared to the version in Debian 12:
Treesitter support for C, Lua, Markdown, with the possibility of adding any
other languages as needed;
Better spellchecking due to treesitter integration (spellsitter);
Mouse support enabled by default;
Commenting support out-of-the-box;
Check :h commenting for details, but the
tl;dr is that you can use gcc to comment the current line and gc to comment
the current selection.
OSC52 support.
Especially handy for those using neovim over an ssh
connection, this protocol lets you copy something from within the neovim
process into the clipboard of the machine you're using to connect through ssh.
In other words, you can copy from neovim running in a host over ssh and paste
it in the "outside" machine.
10) [Bonus] Running old Debian releases
The bonus tip is not specific to the Debian 13 release, but something I've
recently learned in the #debian-devel IRC channel.
Did you know there are usable container images for all past Debian releases?
I'm not talking "past" as in "some of the older releases", I'm talking past as
in "literally every Debian release, including the very first one".
Tianon Gravi "tianon" is the Debian Developer responsible for making this
happen, kudos to him!
There's a small gotcha that the releases Buzz (1.1) and Rex (1.2) require a
32-bit host, otherwise you will get the error Out of virtual memory!, but
starting with Bo (1.3) all should work in amd64/arm64.
Try it out:
sudo apt install podmanpodman run -it docker.io/debian/eol:bo
Don't be surprised when noticing that apt/apt-get is not available inside the
container, that's because apt first appeared in Debian Slink (2.1).
Beyond Debian: Useful for other distros too
Every two years Debian releases a new major version of its Stable series,
meaning the differences between consecutive Stable releases represent two
years of new developments: both in Debian as an organization and its native
packages, and in all the other packages that are also shipped by other
distributions (and which land in this new Stable release).
If you're not paying close attention to everything that's going on all the time
in the Linux world, you miss a lot of the nice new features and tools. It's
common for people to only realize there's a cool new trick available only years
after it was first introduced.
Given these considerations, the tips that I'm describing will eventually be
available in whatever other distribution you use, be it because it's a Debian
derivative or because it just got the same feature from the upstream project.
I'm not going to list "passive" features (as good as they can be), the focus
here is on new features that might change how you configure and use your
machine, with a mix between productivity and performance.
Debian 13 - Trixie
I have been a Debian Testing user for more than 10 years now (and I recommend
it for non-server users), so I'm not usually keeping track of all the cool
features arriving in the new Stable releases because I'm continuously receiving
them through the Debian Testing rolling release.
Nonetheless, as a Debian Developer I'm in a good position to point out the ones
I can remember. I would also like other Debian Developers to do the same as I'm
sure I would learn something new.
The Debian 13 release notes contain a "What's new" section, which lists the
first two items here and a few other things; in other words, take my list as
an addition to the release notes.
Debian 13 was released on 2025-08-09, and these are nice things you shouldn't
miss in the new release, with a bonus one not tied to the Debian 13 release.
1) wcurl
Have you ever had to download a file from your terminal using curl and didn't
remember the parameters needed? I did.
Nowadays you can use wcurl; "a command line tool which lets you download URLs
without having to remember any parameters."
Simply call wcurl with one or more URLs as parameters and it will download
all of them in parallel, performing retries, choosing the correct output file
name, following redirects, and more.
Try it out:
wcurl example.com
wcurl comes installed as part of the curl package on Debian 13 and in any other
distribution you can imagine, starting with curl 8.14.0.
I've written more about wcurl in its release
announcement
and I've done a lightning talk presentation at DebConf24, which is linked in
the release announcement.
2) HTTP/3 support in curl
Debian has become the first stable Linux distribution to ship curl with support
for HTTP/3. I've written about this in July
2024, when we
first enabled it. Note that we first switched the curl CLI to GnuTLS, but then
ended up releasing the curl CLI linked with OpenSSL (as support arrived later).
Debian was the first stable Linux distro to enable it. Among rolling-release
distros, Gentoo enabled it first in their non-default flavor of the package,
and Arch Linux did it three months before we pushed it to Debian
Unstable/Testing/Stable-backports, kudos to them!
HTTP/3 is not used by default by the curl CLI; you have to enable it with
--http3 or --http3-only.
Try it out:
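A minimal sketch, using the two flags named above (example.com is a stand-in URL; whether HTTP/3 is actually negotiated depends on the server and your network path):

```shell
# Negotiate HTTP/3, falling back to HTTP/2 or 1.1 if unavailable:
curl --http3 -sI https://example.com

# Require HTTP/3, failing instead of falling back:
curl --http3-only -sI https://example.com
```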
3) systemd soft-reboot
Starting with systemd v254, there's a new soft-reboot option: a
userspace-only reboot, much faster than a full reboot if you don't need to
reboot the kernel.
You can read the announcement in the systemd v254 GitHub release.
Try it out:
# This will reboot your machine!
systemctl soft-reboot
4) apt --update
Are you tired of being required to run sudo apt update just before sudo apt upgrade or sudo apt install $PACKAGE? So am I!
The new --update option lets you do both things in a single command:
I love this, but it's not yet where it should be; fingers crossed for a
simple apt upgrade to behave like other package managers by updating its
cache as part of the task, maybe in Debian 14?
Try it out:
sudo apt upgrade --update
# The order doesn't matter
sudo apt --update upgrade
This is especially handy for container usage, where you have to update the apt
cache before installing anything, for example:
podman run debian:stable /bin/bash -c 'apt install --update -y curl'
5) powerline-go
powerline-go is a powerline-style prompt written in Golang, so it's much more
performant than its Python alternative powerline.
powerline-style prompts are quite useful to show things like the current status
of the git repo in your working directory, exit code of the previous command,
presence of jobs in the background, whether or not you're in an ssh session,
and more.
Try it out:
sudo apt install powerline-go
Then add this to your .bashrc:
function _update_ps1() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs $(jobs -p | wc -l))"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.
    #set "?"
}

if [ "$TERM" != "linux" ] && [ -f "/usr/bin/powerline-go" ]; then
    PROMPT_COMMAND="_update_ps1; $PROMPT_COMMAND"
fi
Or this to .zshrc:
function powerline_precmd() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs ${${(%):-%j}:-0})"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.
    #set "?"
}
If you'd like to have your prompt start on a new line, like I have in the
screenshot above, you just need to add -newline to the powerline-go
invocation in your .bashrc/.zshrc.
6) Gnome System Monitor Extension
Tips number 6 and 7 are for Gnome users.
Gnome is now shipping a system monitor extension which lets you see the
current load of your machine at a glance from the top bar.
I've found this quite useful for machines where I'm required to install
third-party monitoring software that tends to randomly consume more resources
than it should. If I feel like my machine is struggling, I can quickly glance
at its load to verify if it's getting overloaded by some process.
The extension is not as complete as
system-monitor-next,
not showing temperatures or histograms, but at least it's officially part of
Gnome, easy to install and supported by them.
Try it out:
And then enable the extension from the "Extension Manager" application.
7) Gnome setting for battery charging profile
After having to learn more about batteries in order to get into FPV drones,
I've come to have a bigger appreciation for solutions that minimize the
inevitable loss of capacity that accrues over time.
There's now a "Battery Charging" setting (under the "Power" section) which
lets you choose between two different profiles: "Maximize Charge" and
"Preserve Battery Health".
On supported laptops, this setting is an easy way to set thresholds for when
charging should start and stop, just like you could with the tlp package,
but now from the Gnome settings.
To increase the longevity of my laptop battery, I always keep it at "Preserve
Battery Health" unless I'm traveling.
What I would like to see next is support for choosing different "Power Modes"
based on whether the laptop is plugged-in, and based on the battery
charge percentage.
There's a GNOME
issue
tracking this feature, but there's some pushback on whether this is the right
thing to expose to users.
In the meantime, there are some workarounds mentioned in that issue which
people who really want this feature can follow.
If you would like to learn more about batteries, Battery
University is a great starting point, besides
getting into FPV drones and being forced to handle batteries without a
Battery Management System (BMS).
And if by any chance this sparks your interest in FPV drones, Joshua Bardwell's
YouTube channel is a great resource:
@JoshuaBardwell.
8) Lazygit
Emacs users are already familiar with the legendary magit, a terminal-based
UI for git.
Lazygit is an alternative for non-emacs users: you can integrate it with
neovim or just use it directly.
I'm still playing with lazygit and haven't integrated it into my workflows,
but so far it has been a pleasant experience.
You should check out the demos from the lazygit GitHub
page.
Try it out:
sudo apt install lazygit
And then call lazygit from within a git repository.
9) neovim
neovim has been shipped in Debian since 2016, but upstream has been doing a lot of
work to improve the experience out-of-the-box in the last couple of years.
If you're a neovim poweruser, you're likely not installing it from the official
repositories, but for those that are, Debian 13 comes with version 0.10.4,
which brings the following improvements compared to the version in Debian 12:
- Treesitter support for C, Lua and Markdown, with the possibility of adding
any other languages as needed;
- Better spellchecking due to treesitter integration (spellsitter);
- Mouse support enabled by default;
- Commenting support out-of-the-box: check :h commenting for details, but the
tl;dr is that you can use gcc to comment the current line and gc to comment
the current selection;
- OSC52 support: especially handy for those using neovim over an ssh
connection, this protocol lets you copy something from within the neovim
process into the clipboard of the machine you're using to connect through
ssh. In other words, you can copy from neovim running in a host over ssh and
paste it in the "outside" machine.
10) [Bonus] Running old Debian releases
The bonus tip is not specific to the Debian 13 release, but something I've
recently learned in the #debian-devel IRC channel.
Did you know there are usable container images for all past Debian releases?
I'm not talking "past" as in "some of the older releases", I'm talking past as
in "literally every Debian release, including the very first one".
Tianon Gravi "tianon" is the Debian Developer responsible for making this
happen, kudos to him!
There's a small gotcha: the releases Buzz (1.1) and Rex (1.2) require a
32-bit host, otherwise you will get the error Out of virtual memory!;
starting with Bo (1.3), everything should work on amd64/arm64.
Try it out:
sudo apt install podman
podman run -it docker.io/debian/eol:bo
Don't be surprised to find that apt/apt-get is not available inside the
container; that's because apt first appeared in Debian Slink (2.1).
Posted on August 28, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear
A bit more than a year ago, I had been thinking about making myself a
cartridge pleated skirt. For a number of reasons, one of which is the
historybounding potential, I've been thinking pre-crinoline, so
somewhere around the 1840s, and that's a completely new era for me,
which means: new underwear.
Also, the 1840s are pre-sewing machine, and I was already in a position
where I had more chances to handsew than to machine sew, so I decided to
embrace the slowness and sew 100% by hand, not even using the machine
for straight seams.
If I remember correctly, I started with the corded petticoat, looking
around the internet for instructions, and then designing my own based on
the practicality of using modern wide fabric from my stash (and
specifically some DITTE from costumers' favourite source of dirt-cheap
cotton, IKEA).
Around the same time I had also acquired a sashiko kit, and I used the
Japanese technique for sewing running stitches, pushing the needle with a
thimble that covers the base of the middle finger, and I can confirm
that for this kind of thing it's great!
I've since worn the petticoat a few times for casual / historyBounding /
folkwearBounding reasons, during the summer, and I can confirm it's
comfortable to use; I guess that during the winter it could be nice to
add a flannel layer below it.
Then I proceeded with the base layers: I had been browsing through
The workwoman's guide and that provided plenty of examples, and I
selected the basic ankle-length drawers from page 53 and the alternative
shift on page 47.
As for fabric, I had (and still have) a significant lack of underwear
linen in my stash, but I had plenty of cotton voile that I had not used
in a while: not very historically accurate for plain underwear, but
quite suitable for a wearable mockup.
Working with an 1830s source had an interesting aspect: other than the
usual, mildly annoying, imperial units, it also used a few obsolete
units, especially nails, that qalc, my usual calculator and converter,
doesn't support.
Not a big deal, because GNU units came to the rescue, and that one
knows a lot of obscure and niche units, and it's quite easy to add those
that are missing¹
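For reference, a quick sketch assuming the cloth-measure definition of a nail (1/16 of a yard, i.e. 2.25 inches); the GNU units invocation is shown in a comment, and the same arithmetic is done in plain awk:

```shell
# 1 nail = 1/16 yard = 2.25 inches
# With GNU units:  units '3 nail' 'inch'   (should answer "* 6.75")
# The same conversion as plain arithmetic:
awk 'BEGIN { print 3 * 2.25 }'
```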
Working on this project also made me freshly aware of something I had
already noticed: converting instructions for machine sewing garments
into instructions for hand sewing them is usually straightforward, but
the reverse is not always true.
Starting from machine stitching, you can usually convert straight
stitches into backstitches (or running backstitches), zigzag and
overlocking into overcasting and get good results. In some cases you may
want to use specialist hand stitches that don't really have a machine
equivalent, such as buttonhole stitches instead of simply overcasting
the buttonhole, but that's it.
Starting from hand stitching, instead, there are a number of techniques
that could be converted to machine stitching, but involve a lot of
visible topstitching that wasn t there in the original instructions, or
at times are almost impossible to do by machine, if they involve
whipstitching together finished panels on seams that are subject to
strong tension.
Anyway, halfway through working with the petticoat I cut both the
petticoat and the drawers at the same time, for efficiency in fabric
use, and then started sewing the drawers.
The book only provided measurements for one size (moderate), and my
fabric was a bit too narrow to make them that size (not that I have any
idea what hip circumference a person of moderate size was supposed to
have), so the result is just wide enough to be comfortably worn, but I
think that when I make another pair I'll try to make them a bit
wider. On the other hand they are a bit too long, but I think that I'll
fix it by adding a tuck or two. Not a big deal, anyway.
The shift gave me a bit more issues: I used the recommended gusset size,
and ended up with a shift that was way too wide at the top, so I had to
take a box pleat in the center front and back, which changed the look
and wear of the garment. I have adjusted the instructions to make
gussets wider, and in the future I'll make another shift following
those.
Even with the pleat, the narrow shoulder straps are set quite far to the
sides, and they tend to droop, and I suspect that this is to be expected
from the way this garment is made. The fact that there are buttonholes
on the shoulder straps to attach to the corset straps and prevent the
issue is probably a hint that this behaviour was to be expected.
I've also updated the instructions so that the shoulder straps are a
bit wider, to look more like the ones in the drawing from the book.
Making a corset suitable for the time period is something that I will
probably do, though not in the immediate future; meanwhile, even just
wearing the shift under a later midbust corset with no shoulder strap helps.
I'm also not sure what the point of the bosom gores is, as they don't
really give more room to the bust where it's needed, but to the high
bust where it's counterproductive. I also couldn't find images of
original examples made from this pattern to see if they were actually
used, so in my next make I may just skip them.
On the other hand, I'm really happy with how cute the short sleeves
look, and if² I ever make the other cut of shift from the same
book, with the front flaps, I'll definitely use these pleated sleeves
rather than the straight ones that were also used at the time.
As usual, all of the patterns have been published on my website under a
Free license:
Today marks both a milestone and a turning point in my journey with open source software. I'm proud to announce the release of KDE Gear 25.08.0 as my final snap package release. You can find all the details about this exciting update at the official KDE announcement.
After much reflection and with a heavy heart, I've made the difficult decision to retire from most of my open source software work, including snap packaging. This wasn't a choice I made lightly; it comes after months of rejections and silence in an industry I've loved and called home for over 20 years.
Passing the Torch
While I'm stepping back, I'm thrilled to share that the future of KDE snaps is in excellent hands. Carlos from the Neon team has been working tirelessly to set up snaps on the new infrastructure that KDE has made available. This means building snaps in KDE CI is now possible, a significant leap forward for the ecosystem. I'll be helping Carlos get the pipelines properly configured to ensure a smooth transition.
Staying Connected (But Differently)
Though I'm stepping away from most development work, I won't be disappearing entirely from the communities that have meant so much to me:
Kubuntu: I'll remain available as a backup, though Rik is doing an absolutely fabulous job getting the latest and greatest KDE packages uploaded. The distribution is in capable hands.
Ubuntu Community Council: I'm continuing my involvement here because I've found myself genuinely enjoying the community side of things. There's something deeply fulfilling about focusing on the human connections that make these projects possible.
Debian: I'll likely be submitting for emeritus status, as I haven't had the time to contribute meaningfully and want to be honest about my current capacity.
The Reality Behind the Decision
This transition isn't just about career fatigue; it's about financial reality. I've spent too many years working for free while struggling to pay my bills. The recent changes in the industry, particularly with AI transforming the web development landscape, have made things even more challenging. Getting traffic to websites now requires extensive social media work and marketing, all expected to be done without compensation.
My stint at webwork was good while it lasted, but the changing landscape has made it unsustainable. I've reached a point where I can't continue doing free work when my family and I are struggling financially. It shouldn't take breaking a limb to receive the donations needed to survive.
A Career That Meant Everything
These 20+ years in open source have been the defining chapter of my professional life. I've watched communities grow, technologies evolve, and witnessed firsthand the incredible things that happen when passionate people work together. The relationships I've built, the problems we've solved together, and the software we've created have been deeply meaningful.
But I also have to be honest about where I stand today: I cannot compete in the current job market. The industry has changed, and despite my experience and passion, the opportunities just aren't there for someone in my situation.
Looking Forward
Making a career change after two decades is terrifying, but it's also necessary. I need to find a path that can provide financial stability for my family while still allowing me to contribute meaningfully to the world.
If you've benefited from my work over the years and are in a position to help during this transition, I would be forever grateful for any support. Every contribution, no matter the size, helps ease this difficult period: https://gofund.me/a9c55d8f
Thank You
To everyone who has collaborated with me, tested my packages, filed bug reports, offered encouragement, or simply used the software I've helped maintain: thank you. You've made these 20+ years worthwhile, and you've been part of something bigger than any individual contribution.
The open source world will continue to thrive because it's built on the collective passion of thousands of people like Carlos, Rik, and countless others who are carrying the torch forward. While my active development days are ending, the impact of this community will continue long into the future.
With sincere gratitude and fond farewells,
Scarlett Moore
I originally set up a machine without any full disk encryption, then
somehow regretted it quickly after. My original reasoning was that
this was a "play" machine so I wanted as few restrictions on accessing
the machine as possible, which meant removing passwords, mostly.
I actually ended up having a user password, but disabled the lock
screen. Then I started using the device to manage my photo collection,
and suddenly there was a lot of "confidential" information on the
device that I didn't want to store in clear text anymore.
Pre-requisites
So, how does one convert an existing install from plain text to full
disk encryption? One way is to backup to an external drive,
re-partition everything and copy things back, but that's slow and
boring. Besides, cryptsetup has a cryptsetup-reencrypt command,
surely we can do this in place?
Having not set aside enough room for /boot, I briefly
considered an "encrypted /boot" configuration and conversion (e.g. with
this guide) but remembered grub's support for this is flaky, at
best, so I figured I would try something else.
Here, I'm going to guide you through how I first converted from grub
to systemd-boot then to UKI kernel, then re-encrypt my main
partition.
Note that secureboot is disabled here, see further discussion below.
systemd-boot and Unified Kernel Image conversion
systemd folks have been developing UKI ("unified kernel image")
to ship kernels. The way this works is that the kernel and initrd (and UEFI
boot stub) are bundled into a single portable executable that lives in the
EFI partition, as opposed to /boot. This neatly solves my problem,
because I already have such a clear-text partition and won't need to
re-partition my disk to convert.
Debian has started some preliminary support for this. It's not the
default, but I found this guide from Vasudeva Kamath which was
pretty complete. Since the guide assumes some previous configuration,
I had to adapt it to my case.
Here's how I did the conversion to both systemd-boot and UKI, all at
once. I could have perhaps done it one at a time, but doing both at
once works fine.
Before you start, make sure secureboot is disabled; see the
discussion below.
Install systemd tools:
apt install systemd-ukify systemd-boot
Configure systemd-ukify, in /etc/kernel/install.conf:
TODO: it doesn't look like this generates a initrd with dracut, do
we care?
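The file contents are not shown here; as a sketch (assuming the dracut and ukify combination used later in this guide — the keys are the ones documented in kernel-install(8), so verify against your version), a typical configuration looks like:

```ini
# /etc/kernel/install.conf (sketch, not necessarily the author's exact file)
layout=uki
initrd_generator=dracut
uki_generator=ukify
```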
Configure the kernel boot arguments with the following in /etc/kernel/uki.conf:
[UKI]
Cmdline=@/etc/kernel/cmdline
The /etc/kernel/cmdline file doesn't actually exist here, and
that's fine. Defaults are okay, as the image gets generated from
your current /proc/cmdline. Check your /etc/default/grub and
/proc/cmdline if you are unsure. You'll see the generated
arguments in bootctl list below.
Build the image:
dpkg-reconfigure linux-image-$(uname -r)
Check the boot options:
bootctl list
Look for a Type #2 (.efi) entry for the kernel.
Reboot:
reboot
You can tell you have booted with systemd-boot because (a) you won't
see grub and (b) the /proc/cmdline will reflect the configuration
listed in bootctl list. In my case, a systemd.machine_id variable
is set there, and not in grub (compare with /boot/grub/grub.cfg).
By default, the systemd-boot loader just boots, without a menu. You
can force the menu to show up by un-commenting the timeout line in
/boot/efi/loader/loader.conf, by hitting keys during boot
(e.g. hitting "space" repeatedly), or by calling:
systemctl reboot --boot-loader-menu=0
See the systemd-boot(7) manual for details on that.
I did not go through the secureboot process, as I had
already disabled secureboot. This is trickier: because one needs a
"special key" to sign the UKI image, one would need the collaboration
of debian.org to get this working out of the box with the
keys shipped onboard most computers.
In other words, if you want to make this work with secureboot enabled
on your computer, you'll need to figure out how to sign the generated
images before rebooting here, because otherwise you will break your
computer. To do so, follow these guides:
Re-encrypting root filesystem
Now that we have a way to boot an encrypted filesystem, we can switch
to LUKS for our filesystem. Note that you can probably follow this
guide if, somehow, you managed to make grub work with your LUKS setup,
although as this guide shows, you'd need to downgrade the
cryptographic algorithms, which seems like a bad tradeoff.
We're using cryptsetup-reencrypt for this which, amazingly, supports
re-encrypting devices on the fly. The trick is it needs free space at
the end of the partition for the LUKS header (which, I guess, makes it
a footer), so we need to resize the filesystem to leave room for that,
which is the trickiest bit.
This is a possibly destructive behavior. Be sure your backups are up
to date, or be ready to lose all data on the device.
We assume 512 byte sectors here. Check your sector size with fdisk
-l and adjust accordingly.
Before you perform the procedure, make sure requirements are
installed:
This is it! This is the most important step! Make sure your laptop
is plugged in and try not to interrupt it. This can, apparently,
be resumed without problem, but I'd hate to show you how.
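The exact commands are elided above; as a hedged sketch (the device name and the 32MiB margin are assumptions to adapt, not the author's exact invocation), the filesystem must first be shrunk by enough sectors to leave room for the LUKS header:

```shell
# Assumed device; yours will differ.
DEV=/dev/nvme0n1p2
SECTOR=512                       # check with: fdisk -l
MARGIN=$((32 * 1024 * 1024))     # 32MiB kept free for the LUKS header
echo $((MARGIN / SECTOR))        # prints 65536: sectors to shrink by

# Destructive steps, sketched only (do NOT run without backups):
#   resize2fs $DEV <current size minus 32MiB>
#   cryptsetup reencrypt --encrypt --reduce-device-size 32M $DEV
```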
This will show progress information like:
Progress: 2.4% ETA 23m45s, 53GiB written, speed 1.3 GiB/s
Wait until the ETA has passed.
Open and mount the encrypted filesystem and mount the EFI system
partition (ESP):
cryptsetup open /dev/nvme0n1p2 crypt
mount /dev/mapper/crypt /mnt
mount /dev/nvme0n1p1 /mnt/boot/efi
If this fails, now is the time to consider restoring from backups.
Enter the chroot
for fs in proc sys dev ; do
mount --bind /$fs /mnt/$fs
done
chroot /mnt
Pro tip: this can be done in one step in GRML with:
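The exact command is not spelled out here; my assumption is that it refers to GRML's grml-chroot wrapper, which performs the bind mounts and the chroot in one step:

```shell
# Assumption: GRML's grml-chroot helper (from grml-scripts) handles the
# /proc, /sys and /dev bind mounts before chrooting.
grml-chroot /mnt /bin/bash
```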
Be careful here! systemd-boot inherits the command line from the
system where it is generated, so this will possibly feature some
unsupported commands from your boot environment. In my
case GRML had a couple of those, which broke the boot. That said, it's
still possible to work around this issue by tweaking the arguments at
boot time.
Exit chroot and reboot
exit
reboot
Some of the ideas in this section were taken from this guide, but the
procedure was mostly rewritten to simplify the work. My guide also avoids
the grub hacks and doesn't assume a specific initrd system (that guide
uses initramfs-tools and grub, while I, above, switched to dracut and
systemd-boot). RHEL also has a similar guide, perhaps even
better.
Somehow I have made this system without LVM at all,
which simplifies things a bit (as I don't need to also resize the
physical volume/volume groups), but if you have LVM, you need to tweak
this to also resize the LVM bits. The RHEL guide has some information
about this.
Matthew blogged about his Amiga CDTV
project, a truly
unique Amiga hack which also manages to be a
novel Doom project (no mean feat: it's a crowded space).
This re-awakened my dormant wish to muck around with my
childhood Amiga some more. When I last wrote about
it (four years ago), I'd upgraded the disk drive emulator
with an OLED display and rotary encoder.
I'd forgotten to mention I'd also sourced a modern trapdoor RAM expansion which
adds 2MiB of RAM. The Amiga can only see 1.5MiB¹ of it at the moment; I
need to perform a mainboard modification to access the final 512kiB², which
means some soldering.
What I had planned to do back then: replace the switch in the left button of the
original mouse, which was misbehaving; perform the aforementioned mainboard mod;
upgrade the floppy emulator wiring to a ribbon cable with plug-and-socket, for
easier removal;
fit an RTC chip to the RAM expansion board to get clock support in the OS.
However, much of that might be moot, because of two
other mods I am considering.
PiStorm
I've re-considered the PiStorm accelerator mentioned in Matt's blog.
Four years ago, I'd passed over it, because it required you to run Linux on a
Raspberry Pi, and then an m68k emulator as a user-space process under Linux. I
didn't want to administer another Linux system, and I'm generally uncomfortable
about using a regular Linux distribution on SD storage over the long term.
However, in the intervening years Emu68,
a bare-metal m68k emulator, has risen to prominence. You boot the Pi straight
into Emu68 without Linux in the middle. For some reason that's a lot more
compelling to me.
The PiStorm enormously expands the RAM visible to the Amiga. There would be
no point in doing the mainboard mod to add 512k (and I don't know how that
would interact with the PiStorm). It also can provide virtual
hard disk devices to the Amiga (backed by files on the SD card), meaning the
floppy emulator would be superfluous.
Denise Mainboard
I've just learned about a truly incredible project: the Denise Mini-ITX Amiga
mainboard. It fits into a Mini-ITX
case (I have a suitable one spare already). Some assembly required. You move
the chips from the original Amiga over to the Denise mainboard. It's compatible
with the PiStorm (or vice-versa). It supports PC-style PS/2 keyboards (I have a
Model M in the loft, thanks again Simon) and has
a bunch of other modern conveniences: onboard RTC; mini-ITX power (I'll need
something like a picoPSU too).
It wouldn't support my trapdoor RAM card, but it takes a 72-pin DIMM which can
supply 2MiB of Chip RAM, and the PiStorm can do the rest (they're compatible³).
No stock at the moment, but if I could get my hands on this, I could build
something that could permanently live on my desk.
¹ the Boobip board's 1.5MiB is "chip" RAM: accessible to the other chips
on the mainboard, with access mediated by the AGNUS chip.
² the final 512kiB is "Fast" RAM: only accessible to the CPU,
not mediated via Agnus.
Historically, the primary way to contribute to Debian has been to email the Debian bug tracker with a code patch. Now that 92% of all Debian source packages are hosted at salsa.debian.org, the GitLab instance of Debian, more and more developers are using Merge Requests, but not necessarily in the optimal way. In this post I share what I've found the best practice to be, presented in the natural workflow from forking to merging.
Why use Merge Requests?
Compared to sending patches back and forth in email, using a git forge to review code contributions brings several benefits:
Contributors can see the latest version of the code immediately when the maintainer pushes it to git, without having to wait for an upload to Debian archives.
Contributors can fork the development version and easily base their patches on the correct version and help test that the software continues to function correctly at that specific version.
Both maintainer and other contributors can easily see what was already submitted and avoid doing duplicate work.
It is easy for anyone to comment on a Merge Request and participate in the review.
Integrating CI testing is easy in Merge Requests by activating Salsa CI.
Tracking the state of a Merge Request is much easier than browsing Debian bug reports tagged "patch", and the cycle of submit, review, re-submit and re-review is much easier to manage in the dedicated Merge Request view compared to participants setting up their own email plugins for code reviews.
Merge Requests can have extra metadata, such as "Approved", and the metadata often updates automatically, such as a Merge Request being closed automatically when the Git commit ID from it is pushed to the target branch.
Keeping these benefits in mind will help ensure that the best practices make sense and are aligned with maximizing these benefits.
Finding the Debian packaging source repository and preparing to make a contribution
Before sinking any effort into a package, start by checking its overall status at the excellent Debian Package Tracker. This provides a clear overview of the package's general health in Debian, when it was last uploaded and by whom, and if there is anything special affecting the package right now. This page also has quick links to the Debian bug tracker of the package, the build status overview and more. Most importantly, in the General section, the VCS row links to the version control repository the package advertises. Before opening that page, note the version most recently uploaded to Debian. This is relevant because nothing in Debian currently enforces that the package in version control is actually the same as the latest uploaded to Debian.
Following the Browse link opens the Debian package source repository, which is usually a project page on Salsa. To contribute, start by clicking the "Fork" button, select your own personal namespace and, under "Branches to include", pick "Only the default branch" to avoid including unnecessary temporary development branches.
Once forking is complete, clone it with git-buildpackage. For this example repository, the exact command would be gbp clone --verbose git@salsa.debian.org:otto/glow.git.
Next, add the original repository as a new remote and pull from it to make sure you have all relevant branches. Using the same fork as an example, the commands would be:
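Those commands would look something like the following; the remote name matches the go-team remote used later in this post, and the URL of the original repository is an assumed example (copy the real one from the Salsa project page):

```shell
# Add the original packaging repository as a second remote and sync all
# relevant branches from it (URL is an assumed example).
git remote add go-team https://salsa.debian.org/go-team/packages/glow.git
gbp pull --verbose go-team
```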
The gbp pull command can be repeated whenever you want to make sure the main branches are in sync with the original repository. Finally, run gitk --all & to visually browse the Git history and note the various branches and their states in the two remotes. Note the style in comments and repository structure the project has and make sure your contributions follow the same conventions to maximize the chances of the maintainer accepting your contribution.
It may also be good to build the source package to establish a baseline of the current state and what kind of binaries and .deb packages it produces. If using Debcraft, one can simply run debcraft build in the Git repository.
Submitting a Merge Request for a Debian packaging improvement
Always start by making a development branch by running git checkout -b <branch name> to clearly separate your work from the main branch.
When making changes, remember to follow the conventions you already see in the package. It is also important to be aware of general guidelines on how to make good Git commits.
If you are not able to immediately finish coding, it may be useful to publish the Merge Request as a draft so that the maintainer and others can see that you started working on something and what general direction your change is heading in.
If you don't finish the Merge Request in one sitting and return to it another day, you should remember to pull the Debian branch from the original Debian repository in case it has received new commits. This can be done easily with these commands (assuming the same remote and branch names as in the example above):
git fetch go-team
git rebase -i go-team/debian/latest
Frequent rebasing is a great habit to help keep the Git history linear, and restructuring and rewording your commits will make the Git history easier to follow and understand why the changes were made.
When pushing improved versions of your branch, use git push --force. While GitLab does allow squashing, I recommend against it. It is better that the submitter makes sure the final version is a neat and clean set of commits that the receiver can easily merge without having to do any rebasing or squashing themselves.
When ready, remove the draft status of the Merge Request and wait patiently for review. If the maintainer does not respond in several days, try sending an email to <source package name>@packages.debian.org, which is the official way to contact maintainers. You could also post a comment on the MR and tag the last few committers in the same repository so that a notification email is triggered. As a last resort, submit a bug report to the Debian bug tracker to announce that a Merge Request is pending review. This leaves a permanent record for posterity (or the Debian QA team) of your contribution. However, most of the time simply posting the Merge Request in Salsa is enough; excessive communication might be perceived as spammy, and someone needs to remember to check that the bug report is closed.
Respect the review feedback, respond quickly and avoid Merge Requests getting stale
Once you get feedback, try to respond as quickly as possible. When people participating have everything fresh in their minds, it is much easier for the submitter to rework it and for the reviewer to re-review. If the Merge Request becomes stale, it can be challenging to revive it. Also, if it looks like the MR is only waiting for re-review but nothing happens, re-read the previous feedback and make sure you actually address everything. After that, post a friendly comment where you explicitly say you have addressed all feedback and are only waiting for re-review.
Reviewing Merge Requests
This section about reviewing is not exclusive to Debian package maintainers: anyone can contribute to Debian by reviewing open Merge Requests. Typically, the larger an open source project gets, the more help is needed in reviewing and testing changes to avoid regressions, and all diligently done work is welcome. As the famous Linus quote goes, "given enough eyeballs, all bugs are shallow".
On salsa.debian.org, you can browse open Merge Requests per project or for a whole group, just like on any GitLab instance.
Reviewing Merge Requests is, however, most fun when they are fresh and the submitter is active. Thus, the best strategy is to ensure you have subscribed to email notifications in the repositories you care about so you get an email for any new Merge Request (or Issue) immediately when posted.
When you see a new Merge Request, try to review it within a couple of days. If you cannot review in a reasonable time, posting a small note that you intend to review it later will feel better to the submitter compared to not getting any response.
Personally, I have a habit of assigning myself as a reviewer so that I can keep track of my whole review queue at https://salsa.debian.org/dashboard/merge_requests?reviewer_username=otto, and I recommend the same to others. Seeing the review assignment happen is also a good way to signal to the submitter that their submission was noted.
Reviewing commit-by-commit in the web interface
Reviewing using the web interface works well in general, but I find that the way GitLab designed it is not ideal. In my ideal review workflow, I first read the Git commit message to understand what the submitter tried to do and why; only then do I look at the code changes in the commit. In GitLab, to do this one must first open the Commits tab and then click on the last commit in the list, as it is sorted in reverse chronological order with the first commit at the bottom. Only after that do I see the commit message and contents. Getting to the next commit is easy by simply clicking Next.
When adding the first comment, I choose "Start review" and for the following remarks "Add to review". Finally, I click "Finish review" and "Submit review", which will trigger one single email to the submitter with all my feedback. I try to avoid using the "Add comment now" option, as each such comment triggers a separate notification email to the submitter.
Reviewing and testing on your own computer locally
For the most thorough review, I pull the code to my laptop for local review with git pull <remote url> <branch name>. There is no need to run git remote add as pulling using a URL directly works too and saves from needing to clean up old remotes later.
Pulling the Merge Request contents locally allows me to build, run and inspect the code deeply and review the commits with full metadata in gitk or equivalent.
Investing enough time in writing feedback, but not too much
See my other post for more in-depth advice on how to structure your code review feedback.
In Debian, I would emphasize patience, to allow the submitter time to rework their submission. Debian packaging is notoriously complex, and even experienced developers often need more feedback and time to get everything right. Avoid the temptation to rush the fix in yourself. In open source, Git credits are often the only salary the submitter gets. If you take the idea from the submission and implement it yourself, you rob the submitter of the opportunity to get feedback, try to improve and finally feel accomplished. Sure, it takes extra effort to give feedback, but the contributor is likely to feel ownership of their work and later return to further improve it.
If a submission looks hopelessly low quality and you feel that giving feedback is a waste of time, you can simply respond with something along the lines of: Thanks for your contribution and interest in helping Debian. Unfortunately, looking at the commits, I see several shortcomings, and it is unlikely a normal review process is enough to help you finalize this. Please reach out to Debian Mentors to get a mentor who can give you more personalized feedback.
There might also be contributors who just "dump the code", ignore your feedback and never return to finalize their submission. If a contributor does not return to finalize their submission in 3-6 months, I will in my own projects simply finalize it myself and thank the contributor in the commit message (but not mark them as the author).
Despite best practices, you will occasionally still end up doing some things in vain, but that is how volunteer collaboration works. We all just need to accept that some communication will inevitably feel like wasted effort, but it should be viewed as a necessary investment in order to get the benefits from the times when the communication led to real and valuable collaboration. Please just do not treat all contributors as if they are unlikely to ever contribute again; otherwise, your behavior will cause them not to contribute again. If you want to grow a tree, you need to plant several seeds.
Approving and merging
Assuming review goes well and you are ready to approve, and if you are the only maintainer, you can proceed to merge right away. If there are multiple maintainers, or if you otherwise think that someone else might want to chime in before it is merged, use the Approve button to show that you approve the change but leave it unmerged.
The person who approved does not necessarily have to be the person who merges. The point of the Merge Request review is not separation of duties in committing and merging; the main purpose of a code review is to have a different set of eyeballs looking at the change before it is committed into the main development branch for all eternity. In some packages, the submitter might actually merge themselves once they see another developer has approved. In some rare Debian projects, there might even be separate people taking the roles of submitting, approving and merging, but most of the time these three roles are filled by two people, either as submitter and approver+merger, or submitter+merger and approver.
If you are not a maintainer at all and do not have permissions to click Approve, simply post a comment summarizing your review and that you approve it and support merging it. This can help the maintainers review and merge faster.
Making a Merge Request for a new upstream version import
Unlike many other Linux distributions, in Debian each source package has its own version control repository. The Debian sources consist of the upstream sources with an additional debian/ subdirectory that contains the actual Debian packaging. For the same reason, a typical Debian packaging Git repository has a debian/latest branch that has changes only in the debian/ subdirectory while the surrounding upstream files are the actual upstream files and have the actual upstream Git history. For details, see my post explaining Debian source packages in Git.
Because of this Git branch structure, importing a new upstream version will typically modify three branches: debian/latest, upstream/latest and pristine-tar. When doing a Merge Request for a new upstream import, submit only one Merge Request, for one branch: merge your new changes to the debian/latest branch.
There is no need to submit the upstream/latest branch or the pristine-tar branch. Their contents are fixed and mechanically imported into Debian. There are no changes that the reviewer in Debian can request the submitter to do on these branches, so asking for feedback and comments on them is useless. All review, comments and re-reviews concern the content of the debian/latest branch only.
It is not even necessary to use the debian/latest branch for a new upstream version. Personally, I always execute the new version import (with gbp import-orig --verbose --uscan) and prepare and test everything on debian/latest, but when it is time to submit it for review, I run git checkout -b import/$(dpkg-parsechangelog -SVersion) to get a branch named e.g. import/1.0.1 and then push that for review.
Reviewing a Merge Request for a new upstream version import
Reviewing and testing a new upstream version import is currently a bit tricky, but possible. The key is to use gbp pull to automate fetching all branches from the submitter's fork. Assume you are reviewing a submission targeting the Glow package repository and there is a Merge Request from user otto's fork. As the maintainer, you would run the commands:
git remote add otto https://salsa.debian.org/otto/glow.git
gbp pull --verbose otto
If there was feedback in the first round and you later need to pull a new version for re-review, running gbp pull --force will not suffice, and this trick of manually fetching each branch and resetting it to the submitter's version is needed:
for BRANCH in pristine-tar upstream debian/latest
do
git checkout $BRANCH
git reset --hard origin/$BRANCH
git pull --force https://salsa.debian.org/otto/glow.git $BRANCH
done
Once review is done, either click Approve and let the submitter push everything, or alternatively, push all the branches you pulled locally yourself. In GitLab and other forges, the Merge Request will automatically be marked as Merged once the commit ID that was the head of the Merge Request is pushed to the target branch.
Please allow enough time for everyone to participate
When working on Debian, keep in mind that it is a community of volunteers. It is common for people to do Debian stuff only on weekends, so you should patiently wait for at least a week so that enough workdays and weekend days have passed for the people you interact with to have had time to respond on their own Debian time.
Having to wait may feel annoying and disruptive, but try to look at the upside: you do not need to do extra work while waiting for others. In some cases, that waiting can be useful thanks to the "sleep on it" phenomenon: when you look at your own submission some days later with fresh eyes, you might notice something you overlooked earlier and improve your code change even without other people's feedback!
Contribute reviews!
The last but not least suggestion is to make a habit of contributing reviews to packages you do not maintain. As we already see in large open source projects, such as the Linux kernel, they have far more code submissions than they can handle. The bottleneck for progress and maintaining quality becomes the reviews themselves.
For Debian, as an organization and as a community, to be able to renew and grow new contributors, we need more of the senior contributors to shift focus from merely maintaining their packages and writing code to also intentionally interact with new contributors and guide them through the process of creating great open source software. Reviewing code is an effective way to both get tangible progress on individual development items and to transfer culture to a new generation of developers.
Why aren't 100% of all Debian source packages hosted on Salsa?
As seen at trends.debian.net, more and more packages are using Salsa. Debian does not, however, have any policy about it. In fact, the Debian Policy Manual does not even mention the word Salsa anywhere. Adoption of Salsa has so far been purely organic, as in Debian each package maintainer has full freedom to choose whatever preferences they have regarding version control.
I hope the trend to use Salsa will continue and more shared workflows emerge so that collaboration gets easier. To drive the culture of using Merge Requests and more, I drafted the Debian proposal DEP-18: Encourage Continuous Integration and Merge Request based Collaboration for Debian packages. If you are active in Debian and you think DEP-18 is beneficial for Debian, please give a thumbs up at dep-team/deps!21.
DebConf 25, by Stefano Rivera and Santiago Ruano Rincón
In July, DebConf 25 was held in Brest, France.
Freexian was a gold sponsor and most of the Freexian team attended the event.
Many fruitful discussions were had amongst our team and within the Debian
community.
DebConf itself was organized by a local team in Brest that included Santiago
(who now lives in Uruguay). Stefano was also deeply involved in the
organization, as a DebConf committee member, a core video team member, and the
lead developer for the conference website. Running the conference took an
enormous amount of work, consuming all of Stefano and Santiago's time for most
of July.
Lucas Kanashiro was active in the DebConf content team, reviewing talks and
scheduling them. There were many last-minute changes to make during the event.
Anupa Ann Joseph was part of the Debian publicity team doing live coverage of
DebConf 25 and was part of the DebConf 25 content team reviewing the talks.
She also assisted the local team to procure the lanyards.
Recorded sessions presented by Freexian collaborators, often alongside other
friends in Debian, included:
OpenSSH upgrades, by Colin Watson
Towards the end of a release cycle, people tend to do more upgrade testing, and
this sometimes results in interesting problems. Manfred Stock reported
"No new SSH connections possible during large part of upgrade to Debian Trixie",
which would have affected many people upgrading from Debian 12 (bookworm), with
potentially severe consequences for people upgrading remote systems. In fact,
there were two independent problems that each led to much the same symptom:
As part of hardening the OpenSSH server, OpenSSH 9.8 split the monolithic
sshd listener process into two pieces: a minimal network listener (still
called sshd), and an sshd-session process dealing with each individual
session. Before this change, when sshd received an incoming connection, it
forked and re-executed itself with some special parameters to deal with it;
after this change, it forks and executes sshd-session instead, and sshd no
longer accepts the parameters it used to accept for this.
Debian package upgrades happen (roughly) in two phases: first we unpack the new
files onto disk, and then we run some configuration steps which usually include
things like restarting services. Normally this is fine, because the old service
keeps on working until it's restarted. In this case, unpacking the new files
onto disk immediately stopped new SSH connections from working: the old sshd
received the connection and tried to hand it off to a freshly-executed copy of
the new sshd binary on disk, which no longer supports this. This wasn't much
of a problem when upgrading OpenSSH on its own or with a small number of other
packages, but in release upgrades it left a large gap during which you can't
SSH to the system any more, and if anything fails in that interval then you could be in
trouble.
After trying a couple of other approaches, Colin landed on the idea of having
the openssh-server package divert /usr/sbin/sshd to
/usr/sbin/sshd.session-split before the unpack step of an upgrade from before
9.8, then removing the diversion and moving the new file into place once it's
ready to restart the service. This reduces the period when new connections fail
to a minimum.
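The idea can be sketched with dpkg-divert; this is my approximation of the maintainer-script logic, not code copied from the openssh-server package:

```shell
# preinst (approximate): register a diversion but leave the old binary in
# place (--no-rename), so the running listener can still re-exec it; the
# new package's sshd then unpacks to /usr/sbin/sshd.session-split instead.
dpkg-divert --package openssh-server --add --no-rename \
    --divert /usr/sbin/sshd.session-split /usr/sbin/sshd

# postinst (approximate): drop the diversion and move the new binary into
# place just before the service is restarted.
dpkg-divert --package openssh-server --remove --no-rename /usr/sbin/sshd
mv /usr/sbin/sshd.session-split /usr/sbin/sshd
```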
Most OpenSSH processes, including sshd, check for a compatible version of
the OpenSSL library when they start up. This check used to be very picky, among
other things requiring both the major and minor part of the version number to
match. OpenSSL 3 has a better versioning policy,
and so OpenSSH 9.4p1 relaxed this check.
Unfortunately, bookworm shipped with OpenSSH 9.2p1, so as soon as you unpacked
the new OpenSSL library during an upgrade, sshd stopped working. This
couldn't be fixed by a change in trixie; we needed to change bookworm in advance
of the upgrade so that it would tolerate newer versions of OpenSSL, and time was
tight if we wanted this to be available before the release of Debian 13.
Fortunately, there's a
stable-updates
mechanism for exactly this sort of thing, and the stable release managers kindly
accepted Colin's proposal to fix this there.
The net result is that if you apply updates to bookworm (including
stable-updates / bookworm-updates, which is enabled by default) before
starting the upgrade to trixie, everything should be fine.
Cross compilation collaboration, by Helmut Grohne
Supporting cross building in Debian packages touches lots of areas of the
archive, and quite a few of these matters are a shared responsibility between
different teams. Hence, DebConf was an ideal opportunity to settle long-standing
issues.
The cross building BoF
sparked lively discussions, as a significant
fraction of developers employ cross builds to get their work done. In the
trixie release, about two thirds of the packages can satisfy their cross
Build-Depends and about half of the packages can actually be cross built.
Miscellaneous contributions
Raphaël Hertzog updated tracker.debian.org to remove
references to Debian 10 which was moved to
archive.debian.org, and had many fruitful discussions
related to Debusine during DebConf 25.
Carles Pina prepared some data, questions and information for the DebConf 25
l10n and i18n BoF.
Carles Pina demoed and discussed possible next steps for
po-debconf-manager
with different teams in DebConf 25. He also reviewed Catalan translations and
sent them to the packages.
Carles Pina started investigating a django-compressor bug: he
reproduced the bug consistently and prepared a PR for django-compressor upstream
(likely more details next month). He also looked at packaging
frictionless-py.
Stefano Rivera triaged Python CVEs against pypy3.
Stefano prepared an upload of a new upstream release of pypy3 to Debian
experimental (due to the freeze).
Stefano uploaded python3.14 RC1 to Debian experimental.
Thorsten Alteholz uploaded a new upstream version of sane-airscan to
experimental. He also started to work on a new upstream version of hplip.
Colin backported fixes for CVE-2025-50181
and CVE-2025-50182 in python-urllib3, and
fixed several other release-critical or important bugs in Python team packages.
Lucas uploaded ruby3.4 to experimental as a starting point for the
ruby-defaults transition that will happen after Trixie release.
Lucas coordinated with the Release team the fix of the remaining RC bugs
involving ruby packages, and got them all fixed.
Lucas, as part of the Debian Ruby team, kicked off discussions to improve
internal process/tooling.
Lucas, as part of the Debian Outreach team, engaged in multiple discussions
around internship programs we run and also what else we could do to improve
outreach in the Debian project.
Lucas joined the Local groups BoF during DebConf 25, shared the good
experiences of the Brazilian community, and committed to helping document
everything to support other groups.
Helmut reiterated the multiarch policy proposal
with a lot of help from Nattie Mayer-Hutchings, Rhonda D'Vine and Stuart Prescott.
Helmut finished his work on the process based unschroot prototype
that was the main feature of his talk (see above).
Helmut analyzed a multiarch-related glibc upgrade failure
induced by a /usr-move mitigation of systemd and sent a patch and regression
fix, both of which reached trixie in time. Thanks to Aurelien Jarno and the
release team for their timely cooperation.
Helmut resurrected an earlier discussion about changing the semantics of
Architecture: all packages in a multiarch context in order to improve the
long-standing interpreter problem. With help from Tollef Fog Heen, better
semantics were discovered, and agreement was reached with Guillem Jover and
Julian Andres Klode to consider this change. The idea is to record a concrete
architecture for every Architecture: all package in the dpkg database and
enable choosing it as non-native.
About 90% of my Debian contributions this month were
sponsored by Freexian.
You can also support my work directly via
Liberapay or GitHub
Sponsors.
DebConf
I attended DebConf for the first time in 11 years (my last one was DebConf
14 in Portland). It was great! For once I had a conference where I had a
fairly light load of things I absolutely had to do, so I was able to spend
time catching up with old friends, making some new friends, and doing some
volunteering - a bit of Front Desk, and quite a lot of video team work where
I got to play with sound desks and such. Apparently one of the BoFs ("birds
of a feather", i.e. relatively open discussion sessions) where I was
talkmeister managed to break the automatic video cutting system by starting
and ending precisely on time, to the second, which I'm told has never
happened before. I'll take that.
I gave a talk about
Debusine,
along with helping Enrico run a Debusine
BoF.
We still need to process some of the feedback from this, but are generally
pretty thrilled about the reception. My personal highlight was getting a
shout-out in a talk from
CERN
(in the slide starting at 32:55).
Other highlights for me included a Python team
BoF,
Ian's tag2upload
talk
and some very useful follow-up discussions, a session on archive-wide
testing,
a somewhat brain-melting whiteboard session about the multiarch interpreter
problem ,
several useful discussions
about salsa.debian.org, Matthew's talk on how Wikimedia automates their Debian package
builds,
and many others. I hope I can start attending regularly again!
OpenSSH
Towards the end of a release cycle, people tend to do more upgrade testing,
and this sometimes results in interesting problems. Manfred Stock reported
"No new SSH connections possible during large part of upgrade to Debian
Trixie", and after a little testing in a
container I confirmed that this was a reproducible problem that would have
affected many people upgrading from Debian 12 (bookworm), with potentially
severe consequences for people upgrading remote systems. In fact, there
were two independent problems that each led to much the same symptom:
OpenSSH 9.8 split the monolithic sshd listener process into two
pieces: a minimal network listener (still called sshd), and an
sshd-session process dealing with each individual session. (OpenSSH
10.0 further split sshd-session, adding an sshd-auth process that
deals with the user authentication phase of the protocol.) This hardens
the OpenSSH server by using different address spaces for privileged and
unprivileged code.
Before this change, when sshd received an incoming connection, it
forked and re-executed itself with some special parameters to deal with
it. After this change, it forks and executes sshd-session instead,
and sshd no longer accepts the parameters it used to accept for this.
Debian package upgrades happen in two phases: first we unpack the new
files onto disk, and then we run some package-specific configuration
steps which usually include things like restarting services. (I'm
simplifying, but this is good enough for this post.) Normally this is
fine, and in fact desirable: the old service keeps on working, and this
approach often allows breaking what would otherwise be difficult cycles
by ensuring that the system is in a more coherent state before trying to
restart services. However, in this case, unpacking the new files onto
disk immediately means that new SSH connections no longer work: the old
sshd receives the connection and tries to hand it off to a
freshly-executed copy of the new sshd binary on disk, which no longer
supports this.
If you're just upgrading OpenSSH on its own or with a small number of
other packages, this isn't much of a problem as the listener will be
restarted quite soon; but if you're upgrading from bookworm to trixie,
there may be a long gap when you can't SSH to the system any more, and
if something fails in the middle of the upgrade then you could be in trouble.
So, what to do? I considered keeping a copy of the old sshd around
temporarily and patching the new sshd to re-execute it if it's being
run to handle an incoming connection, but that turned out to fail in my
first test: dependencies are normally only checked when configuring a
package, so it's possible to unpack openssh-server before unpacking a
newer libc6 that it depends on, at which point you can't execute the
new sshd at all. (That also means that the approach of restarting the
service at unpack time instead of configure time is a non-starter.) We
needed a different idea.
dpkg, the core Debian package manager, has a specialized facility
called "diversions": you can tell it that when it's unpacking a
particular file it should put it somewhere else instead. This is
normally used by administrators when they want to install a
locally-modified version of a particular file at their own risk, or by
packages that knowingly override a file normally provided by some other
package. However, in this case it turns out to be useful for
openssh-server to temporarily divert one of its own files! When
upgrading from before 9.8, it now diverts /usr/sbin/sshd to
/usr/sbin/sshd.session-split before the new version is unpacked, then
removes the diversion and moves the new file into place once it's ready
to restart the service; this reduces the period when incoming
connections fail to a minimum. (We actually have to pretend that the
diversion is being performed on behalf of a slightly different package
since we're using dpkg-divert in a strange way here, but it all works.)
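The mechanics can be sketched with dpkg-divert itself. This is a schematic illustration, not the actual openssh maintainer scripts: it uses a throwaway --admindir so it touches a private dpkg database instead of the live one, and so it can run without root.

```shell
#!/bin/sh
# Sketch of the diversion dance (NOT the actual openssh maintainer scripts).
# A throwaway --admindir keeps this away from the live dpkg database.
set -eu
admin=$(mktemp -d)

# On upgrade from before 9.8: future unpacks of /usr/sbin/sshd should land
# at /usr/sbin/sshd.session-split, leaving the running listener's binary alone.
dpkg-divert --admindir "$admin" --no-rename \
    --divert /usr/sbin/sshd.session-split --add /usr/sbin/sshd

listing=$(dpkg-divert --admindir "$admin" --list /usr/sbin/sshd)
echo "$listing"

# Once the service can safely be restarted: drop the diversion again.
dpkg-divert --admindir "$admin" --no-rename --remove /usr/sbin/sshd
```

The real maintainer scripts additionally pass a different package name and only divert when upgrading across the 9.8 boundary.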
Most OpenSSH processes, including sshd, check for a compatible version
of the OpenSSL library when they start up. This check used to be very
picky, among other things requiring both the major and minor number to
match. OpenSSL 3 has a better versioning
policy,
and so OpenSSH 9.4p1 relaxed this
check.
Unfortunately, bookworm shipped with OpenSSH 9.2p1, which means that as
soon as you unpack the new libssl3 during an upgrade (actually
libssl3t64 due to the 64-bit time_t
transition), sshd
stops working. This couldn't be fixed by a change in trixie; we needed
to change bookworm in advance of the upgrade so that it would tolerate
newer versions of OpenSSL. And time was tight if we wanted to maximize
the chance that people would apply that stable update before upgrading
to trixie; there isn't going to be another point release of Debian 12
before the release of Debian 13.
Fortunately, there's a
stable-updates
mechanism for exactly this sort of thing, and the stable release
managers kindly accepted my proposal to fix this there.
The net result is that if you apply updates to bookworm (including
stable-updates / bookworm-updates, which is enabled by default) before
starting the upgrade to trixie, everything should be fine. Many thanks to
Manfred for reporting this with just enough time to spare that we were able
to fix it before Debian 13 is released in a few days!
debmirror
I did my twice-yearly refresh of debmirror's mirror_size
documentation,
and applied a patch from Christoph Goehre
to improve mirroring of installer files.
madison-lite
I proposed renaming this project along
with the rmadison tool in devscripts, although I'm not yet sure what a
good replacement name would be.
Python team
I upgraded python-expandvars, python-typing-extensions (in experimental),
and webtest to new upstream versions.
I backported fixes for some security vulnerabilities to unstable:
I reinstated python3-mastodon's build-dependency on and recommendation of
python3-blurhash, now that the latter has been fixed to use the correct
upstream source.
I write this in the wake of a personal attack against my work and a project that is near and dear to me. Instead of spreading vile rumors and hearsay, talk to me. I am not known to be hard to talk to and am wide open to productive communication. I am disheartened and would like to share some thoughts on the importance of communication. Thanks for listening.
Open source development thrives on collaboration, shared knowledge, and mutual respect. Yet sometimes, the very passion that drives us to contribute can lead to misunderstandings and conflicts that harm both individuals and the projects we care about. As contributors, maintainers, and community members, we have a responsibility to foster environments where constructive dialogue flourishes.
The Foundation of Healthy Open Source Communities
At its core, open source is about people coming together to build something greater than what any individual could create alone. This collaborative spirit requires more than just technical skills; it demands emotional intelligence, empathy, and a commitment to treating one another with dignity and respect.
When disagreements arise, and they inevitably will, the manner in which we handle them defines the character of our community. Technical debates should focus on the merits of ideas, implementations, and approaches, not on personal attacks or character assassinations conducted behind closed doors.
The Importance of Direct Communication
One of the most damaging patterns in any community is when criticism travels through indirect channels while bypassing the person who could actually address the concerns. When we have legitimate technical disagreements or concerns about someone's work, the constructive path forward is always direct, respectful communication.
Consider these approaches:
Address concerns directly: If you have technical objections to someone's work, engage with them directly through appropriate channels
Focus on specifics: Critique implementations, documentation, or processes, not the person behind them
Assume good intentions: Most contributors are doing their best with the time and resources available to them
Offer solutions: Instead of just pointing out problems, suggest constructive alternatives
Supporting Contributors Through Challenges
Open source contributors often juggle their community involvement with work, family, and personal challenges. Many are volunteers giving their time freely, while others may be going through difficult periods in their lives: job searching, dealing with health issues, or facing other personal struggles.
During these times, our response as a community matters enormously. A word of encouragement can sustain someone through tough periods, while harsh criticism delivered thoughtlessly can drive away valuable contributors permanently.
Building Resilient Communities
Strong open source communities are built on several key principles:
Transparency in Communication: Discussions about technical decisions should happen in public forums where all stakeholders can participate and learn from the discourse.
Constructive Feedback Culture: Criticism should be specific, actionable, and delivered with the intent to improve rather than to tear down.
Recognition of Contribution: Every contribution, whether it's code, documentation, bug reports, or community support, has value and deserves acknowledgment.
Conflict Resolution Processes: Clear, fair procedures for handling disputes help prevent minor disagreements from escalating into community-damaging conflicts.
The Long View
Many successful open source projects span decades, with contributors coming and going as their life circumstances change. The relationships we build and the culture we create today will determine whether these projects continue to attract and retain the diverse talent they need to thrive.
When we invest in treating each other well, even during disagreements, we're investing in the long-term health of our projects and communities. We're creating spaces where innovation can flourish because people feel safe to experiment, learn from mistakes, and grow together.
Moving Forward Constructively
If you find yourself in conflict with another community member, consider these steps:
Take a breath: Strong emotions rarely lead to productive outcomes
Seek to understand: What are the underlying concerns or motivations?
Communicate directly: Reach out privately first, then publicly if necessary
Focus on solutions: How can the situation be improved for everyone involved?
Know when to step back: Sometimes the healthiest choice is to disengage from unproductive conflicts
A Call for Better
Open source has given us incredible tools, technologies, and opportunities. The least we can do in return is treat each other with the respect and kindness that makes these collaborative achievements possible.
Every contributor, whether they're packaging software, writing documentation, fixing bugs, or supporting users, is helping to build something remarkable. Let's make sure our communities are places where that work can continue to flourish, supported by constructive communication and mutual respect.
The next time you encounter work you disagree with, ask yourself: How can I make this better? How can I help this contributor grow? How can I model the kind of community interaction I want to see?
Our projects are only as strong as the communities that support them. Let's build communities worthy of the amazing software we create together.
https://gofund.me/506c910c
I've participated in this year's Google Summer of Code (GSoC)
program and have been working on the small (90h) "autopkgtests for the rsync
package" project at Debian.
Writing my proposal
Before you can start writing a proposal, you need to select an organization
you want to work with. Since many organizations participate in GSoC, I've
used the following criteria to narrow things down for me:
Programming language familiarity: For me only Python (preferably) as well
as shell and Go projects would have made sense. While learning another
programming language is cool, I wouldn't be as effective and helpful to
the project as someone who is proficient in the language already.
Standing of the organization: Some of the organizations participating in
GSoC are well-known for the outstanding quality of the software they
produce. Debian is one of them, but so is e.g. the Django Foundation or
PostgreSQL. And my thinking was that the higher the quality of the
organization, the more there is to learn for me as a GSoC student.
Mentor interactions: Apart from the advantage you get from mentor
feedback when writing your proposal (more on that further below), it is
also helpful to gauge how responsive/helpful your potential mentor is
during the application phase. This is important since you will be working
together for a period of at least 2 months; if the mentor-student
communication doesn't work, the GSoC project is going to be difficult.
Free and Open-Source Software (FOSS) communication platforms: I
generally believe that FOSS projects should be built on FOSS
infrastructure. I personally won't run proprietary software
when I want to contribute to FOSS in my spare time.
Be a user of the project: As Eric S. Raymond pointed out
in his seminal "The Cathedral and the Bazaar" 25 years ago:
"Every good work of software starts by scratching a developer's personal
itch."
Once I had some organizations in mind whose projects I'd be interested in
working on, I started writing proposals for them. Turns out, I started
writing my proposals way too late: in the end I only managed to hand in a
single one, which is risky. Competition for the GSoC projects is fierce,
and the more quality (!) proposals you send out, the better your chances are
of getting one. However, don't write proposals for the sake of it: reviewers
get way too many AI-slop proposals already, and you will not do yourself a
favor with a low-quality proposal. Take the time to read the
instructions/ideas/problem descriptions the project mentors have provided
and follow their guidelines. Don't hesitate to reach out to project mentors:
in my case, I asked Samuel Henrique a few clarifying questions, and the
ensuing (email) discussion helped me greatly in improving my
proposal. Once I had finalized my proposal draft, I sent it to Samuel for
a review, which again led to some improvements to the final proposal
that I uploaded to the GSoC program webpage.
Community bonding period
Once you get the news that you've been accepted into the GSoC program
(don't take it personally if you don't make it; this was my second attempt
after not making the cut in 2024), get in touch with your prospective mentor
ASAP. Agree upon a communication channel and some response times. Put
yourself in the loop for project news and discussions, whatever that means in
the context of your organization: in Debian's case this boiled down to
subscribing to a bunch of mailing lists and IRC channels. Also make sure to
set up a functioning development environment if you haven't already done so
while writing the proposal.
Payoneer setup
By far the most annoying part of GSoC for me. But since you don't have a
choice if you want to get the stipend, you will need to sign up for an
account at Payoneer.
In this iteration of GSoC all participants got a personalized link to open a
Payoneer account. When I tried to open an account by following this link, I
got an email after the registration and email verification that my
account is being blocked because Payoneer deems the email adress I gave a
temporary one. Well, the email in question is most certainly anything but
temporary, so I tried to get in touch with the Payoneer support - and ended
up in an LLM-infused kafkaesque support hell. Emails are answered by an LLM
which for me meant utterly off-topic replies and no help whatsoever. The
Payoneer website offers a real-time chat, but it is yet another instance of
a bullshit-spewing LLM bot. When I at last tried to call them (the
support lines are not listed on the Payoneer website but were provided by
the GSoC program), I kid you not, I was being told that their platform is
currently suffering from technical problems and was hung up on. Only thanks
to the swift and helpful support of the GSoC administrators (who get
priority support from Payoneer) I was able to setup a Payoneer account in
the end.
Apart from showing no respect to its customers, Payoneer is also ripping them
off big time with fees (unless you get paid in USD): they charge you 2% for
currency conversions to EUR on top of the FX spread they take. What
worked for me to avoid all of those fees, was to open a USD account at Wise
and have Payoneer transfer my GSoC stipend in USD to that account. Then I
exchanged the USD to my local currency at Wise for significantly less than
Payoneer would have charged me. Also make sure to close your Payoneer
account after the end of GSoC to avoid their annual fee.
Project work
With all this prelude out of the way, I can finally get to the actual work
I've been doing over the course of my GSoC project.
Background
The upstream rsync project generally sees little development. Nonetheless,
they released version 3.4.0 including some CVE fixes earlier
this year. Unfortunately, their changes broke the -H
flag. Now, Debian package maintainers need to apply those security fixes to
the package versions in the Debian repositories; and those are typically a
bit older, which usually means that the patches cannot be applied as-is but
will need some amendments by the Debian maintainers. For these cases it is
helpful to have autopkgtests defined, which check the package's
functionality in an automated way upon every build.
The question then is, why should the tests not be written upstream such that
regressions are caught in the development rather than the distribution
process? There s a lot to say on this question and it probably depends a lot
on the package at hand, but for rsync the main benefits are twofold:
The upstream project mocks the ssh connection over which rsync is most
typically used. Mocking is better than nothing, but not the real thing. In
addition to being a more realistic test scenario for the typical rsync
use case, involving an ssh server in the test would automatically extend
the overall resilience of Debian packages as now new versions of the
openssh-server package in Debian benefit from the test cases in the
rsync reverse dependency.
The upstream rsync test framework is somewhat idiosyncratic and
difficult to port to reimplementations of rsync. Given that the
original rsync upstream sees little development, an extensive test suite
further downstream can serve as a threshold for drop-in replacements for
rsync.
Goal(s)
At the start of the project, the Debian rsync package was just running (a
part of) the upstream tests as autopkgtests. The relevant snippet from the
build log for the rsync_3.4.1+ds1-3 package reads:
Samuel and I agreed that it would be a good first milestone to make the
skipped tests run. Afterwards, I should write some rsync test cases for
local calls, i.e. without an ssh connection, effectively using rsync as
a more powerful cp. And once that was done, I should extend the tests such
that they run over an active ssh connection.
With these milestones, I went to work.
Upstream tests
Running the seven skipped upstream tests turned out to be fairly
straightforward:
Two upstream tests concern access control lists and extended
filesystem attributes. To run, these tests rely on
functionality provided by the acl and xattr Debian packages. Adding
those to the Build-Depends list in the debian/control file of the
rsync Debian package repo made them run.
Four upstream tests required root privileges to run. The autopkgtest
tool knows the needs-root restriction for that reason. However, Samuel
and I agreed that the tests should not exclusively run with root
privileges. So, instead of just adding the restriction to the existing
autopkgtest test, we created a new one which has the needs-root
restriction and runs the upstream-tests-as-root script - which is
nothing else than a symlink to the existing upstream-tests script.
The commits to implement these changes can be found in this merge
request.
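In debian/tests/control terms, the result looks roughly like this. This is a hand-written fragment based on the description above, not copied from the actual merge request; the second stanza reuses the same logic via the symlinked script, gated by the needs-root restriction:

```
# debian/tests/control (sketch, not the actual file from the merge request)
Tests: upstream-tests

Tests: upstream-tests-as-root
Restrictions: needs-root
```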
The careful reader will have noticed that I only made 2 + 4 = 6 upstream
test cases run out of 7: the leftover upstream test checks the
functionality of the --crtimes rsync option. In the context of Debian, the
problem is that the Linux kernel doesn't have a syscall to set the creation
time of a file. As long as that is the case, this test will always be
skipped for the Debian package.
Local tests
When it came to writing Debian-specific test cases I started off with a
completely clean slate, which is a blessing and a curse at the same time:
you have full flexibility but also full responsibility.
There were a few things to consider at this point in time:
Which language to write the tests in?
The programming language I am most proficient in is Python. But testing a
CLI tool in Python would have been weird: it would have meant that I'd
have to make repeated subprocess calls to run rsync and then read from
the filesystem to get the file statistics I want to check.
Samuel suggested I stick with shell scripts and make use of diffoscope -
one of the main tools used and maintained by the Reproducible Builds
project - to check whether the file contents and
file metadata are as expected after rsync calls. Since I did not have
good reasons to use bash, I decided to write the scripts to be POSIX
compliant.
How to avoid boilerplate? If one makes use of a testing framework, which
one?
Writing the tests would involve quite a bit of boilerplate, mostly related
to giving informative output on and during the test run, preparing the
file structure we want to run rsync on, and cleaning the files up after
the test has run. It would be very repetitive and in violation of DRY
to have the code for this appear in every test. Good testing frameworks
should provide convenience functions for these tasks. shunit2 comes with
those functions, is packaged for Debian, and given that it is already
being used in the curl project, I decided to go with it.
Do we use the same directory structure and files for every test or should
every test have an individual setup?
The tradeoff in this question is test isolation vs. idiosyncratic
code. If every test has its own setup, it takes a) more work to write the
test and b) more work to understand the differences between
tests. However, one can be sure that changes to the setup in one test will
have no side effects on other tests. In my opinion, this guarantee was
worth the additional effort in writing/reading the tests.
Having made these decisions, I simply started writing
tests and ran into issues very quickly.
rsync and subsecond mtime diffs
When testing the rsync --times option, I observed a weird phenomenon: If
the source and destination file have modification times which differ only in
the nanoseconds, an rsync --times call will not synchronize the
modification times. More details about this behavior and examples can be
found in the upstream issue I raised. In the Debian tests we
had to occasionally work around this by setting the timestamps explicitly
with touch -d.
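The workaround can be illustrated with GNU touch and stat. This is a standalone sketch, not the actual Debian test code; the paths and timestamps are made up:

```shell
#!/bin/sh
# Standalone illustration of the workaround (not the actual Debian tests):
# pin both files to one explicit timestamp so no subsecond drift remains.
set -eu
dir=$(mktemp -d)
echo hello > "$dir/src"
echo hello > "$dir/dst"

# after independent writes, the two mtimes may differ only in the nanoseconds
touch -d "2024-06-01 10:00:00.123456789" "$dir/src"
touch -d "2024-06-01 10:00:00.987654321" "$dir/dst"

# workaround used in the tests: set the timestamps explicitly with touch -d
touch -d "2024-06-01 10:00:00" "$dir/src" "$dir/dst"
src_mtime=$(stat -c %y "$dir/src")
dst_mtime=$(stat -c %y "$dir/dst")
echo "src: $src_mtime"
echo "dst: $dst_mtime"
```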
diffoscope regression
In one test case, I was expecting a difference in the modification times but
diffoscope would not report a diff. After a good amount of time spent
on debugging the problem (my default, and usually correct, assumption is
that something about my code is seriously broken if I run into issues like
that), I was able to show that diffoscope only displayed this behavior in
the version in the unstable suite, not on Debian stable (which I am running
on my development machine).
Since everything pointed to a regression in the diffoscope project and
with diffoscope being written in Python, a language I am familiar with, I
wanted to spend some time investigating (and hopefully fixing) the problem.
Running git bisect on the diffoscope repo helped me in identifying the
commit which introduced the regression: The commit contained an optimization
via an early return for bit-by-bit identical files. Unfortunately, the early
return also caused an explicitly requested metadata comparison (which could
be different between the files) to be skipped.
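The shape of the bug can be sketched in a few lines of shell. This is a schematic analogy of the control flow, not diffoscope's actual Python code: two bit-for-bit identical files where only the requested metadata (the mtime) differs.

```shell
#!/bin/sh
# Schematic analogy of the regression (diffoscope itself is Python):
# an early return on identical content skips the requested metadata check.
set -eu
dir=$(mktemp -d)
printf 'same content' > "$dir/a"
printf 'same content' > "$dir/b"
touch -d "2024-01-01 00:00:00" "$dir/a"
touch -d "2024-01-02 00:00:00" "$dir/b"

buggy_compare() {
    cmp -s "$1" "$2" && return 0   # early return: reports no differences
    echo "content differs"
}

fixed_compare() {
    # even for identical content, still compare the requested metadata
    [ "$(stat -c %Y "$1")" = "$(stat -c %Y "$2")" ] || echo "mtime differs"
    cmp -s "$1" "$2" || echo "content differs"
}

buggy=$(buggy_compare "$dir/a" "$dir/b")
fixed=$(fixed_compare "$dir/a" "$dir/b")
echo "buggy: [$buggy]"
echo "fixed: [$fixed]"
```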
With a nicely diagnosed issue like that, I was able to
go to a local hackerspace event, where people work on FOSS together for an
evening every month. In a group, we were able to first, write a test which
showcases the broken behavior in the latest diffoscope version, and
second, make a fix to the code such that the same test passes going
forward. All details can be found in this merge request.
shunit2 failures
At some point I had a few autopkgtests set up and passing, but adding a new
one would throw totally inexplicable errors at me. After trying to isolate the
problem as much as possible, it turned out that shunit2 doesn't play well
with the -e shell option. The project mentions this in the release
notes for version 2.1.8 [1], but in my opinion a
constraint this severe should be featured much more prominently, e.g. in the
README.
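The underlying clash is easy to reproduce without shunit2 at all. This is a minimal demonstration of the -e semantics, not shunit2 code: a framework that expects failing assertion commands to return control never gets it back under set -e.

```shell
#!/bin/sh
# Minimal demonstration of why `set -e` and assertion-style test frameworks
# clash (no shunit2 involved): under -e, a failing command aborts the whole
# script instead of returning control to the framework.
without_e=$(sh -c 'false; echo continued')
with_e=$(sh -c 'set -e; false; echo continued' || true)
echo "without -e: [$without_e]"
echo "with -e:    [$with_e]"
```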
Tests over an ssh connection
The centrepiece of this project; everything else has in a way only been
preparation for this.
Obviously, the goal was to reuse the previously written local tests in some
way. Not only because lazy me would have less work to do this way, but also
because of a reduced long-term maintenance burden of one rather than two
test sets.
As it turns out, it is actually possible to accomplish that: The
remote-tests script doesn't do much apart from starting an ssh server on
localhost and running the local-tests script with the REMOTE environment
variable set.
The REMOTE environment variable changes the behavior of the local-tests
script in such a way that it prepends "$REMOTE": to the destination of the
rsync invocations. And given that we set REMOTE=rsync@localhost in the
remote-tests script, local-tests copies the files to the exact same
locations as before, just over ssh.
The implementational details for this can be found in this merge
request.
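The mechanism can be sketched like this. It is a simplified stand-in for the actual scripts in the merge request; the real scripts run rsync, while here the command is only echoed so the sketch runs anywhere:

```shell
#!/bin/sh
# Simplified stand-in for the local-tests/remote-tests interplay (the real
# scripts run rsync; here the command is only echoed so it runs anywhere).
set -u
sync_cmd() {
    src=$1; dst=$2
    # local-tests: with REMOTE set, prepend "$REMOTE": to the destination,
    # turning a local copy into one over ssh to the exact same paths
    if [ -n "${REMOTE:-}" ]; then
        dst="$REMOTE:$dst"
    fi
    echo "rsync --archive $src $dst"
}

local_run=$(sync_cmd /tmp/src/ /tmp/dst)
remote_run=$(REMOTE=rsync@localhost; export REMOTE; sync_cmd /tmp/src/ /tmp/dst)
echo "$local_run"
echo "$remote_run"
```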
proposed-updates
Most of my development work on the Debian rsync package took place during
the Debian freeze as the release of Debian Trixie is just
around the corner. This means that uploading by Debian Developers (DD) and
Debian Maintainers (DM) to the unstable suite is discouraged as it makes
migrating the packages to testing more difficult for the Debian release
team. If DDs/DMs want to have the package version in unstable migrated to
testing during the freeze they have to file an unblock request.
Samuel has done this twice (1, 2) for my work for Trixie, but has
asked me to file the proposed-updates request for the current
stable (i.e. Debian Bookworm) myself, after I had backported my
tests to bookworm.
Unfinished business
To run the upstream tests which check access control list and extended
filesystem attribute functionality, I added the acl and xattr packages
to Build-Depends in debian/control. This, however, will only make the
packages available at build time: if Debian users install the rsync
package, the acl and xattr packages will not be installed alongside
it. For that, the dependencies would have to be added to Depends or
Suggests in debian/control. Depends is probably too strong a relation,
since rsync clearly works well in practice without them, but adding them to
Suggests might be worthwhile. A decision on this would involve checking
what happens if rsync is called with the relevant options on a host
machine which has those packages installed, but where the destination
machine lacks them.
Apart from the issue described above, the 15 tests I managed to write
are a drop in the ocean in light of the multitude of rsync
options and their combinations. Most glaringly, not all of the
options implied by --archive are covered separately (which would help
indicate which code path of rsync broke in a regression). To increase the
likelihood of catching regressions with the autopkgtests, the test
coverage should be extended in the future.
Conclusion
Generally, I am happy with my contributions to Debian over the course of my
small GSoC project: I've created an extensible, easy-to-understand, and
working autopkgtest setup for the Debian rsync package. There are two
things which bother me, however:
In hindsight, I probably shouldn't have gone with shunit2 as a testing
framework. The fact that it behaves erratically with the -e flag is a
serious drawback for a shell testing framework: you really don't want a
shell command to fail silently and the test to continue running.
As alluded to in the previous section, I'm not particularly proud of the
number of tests I managed to write.
On the other hand, finding and fixing the regression in diffoscope - while
derailing me from the GSoC project itself - might have a redeeming quality.
DebConf25
By sheer luck I happened to work on a GSoC project at Debian over a time
period during which the annual Debian conference took place
close enough to my place of residence. Samuel pointed out the
opportunity to attend DebConf to me during the community bonding period,
and since I could make time for the event in my schedule, I signed up.
DebConf was a great experience which - aside from gaining more knowledge
about Debian development - allowed me to meet the actual people usually
hidden behind email addresses and IRC nicks. I can wholeheartedly recommend
attending a DebConf to every interested Debian user!
For those who have missed this year's iteration of the conference, I can
recommend the following recorded talks:
While not featuring as a keynote speaker (understandably so, newcomer
to the Debian community that I am), I could still contribute a bit to the
conference program.
Debian install workshop
Additionally, with so many Debian experts gathering in one place while KDE's
End of 10 campaign is ongoing, I felt it natural to
organize a Debian install workshop. In hindsight I can say
that I underestimated how much work it would be, especially for someone like
me who does not speak a word of French. But although the turnout of people
who wanted us to install Linux on their machines was disappointingly low, it
was still worth it: not only because the material in the repo can be
helpful to others planning install workshops, but also because it was nice to
meet a) the person behind the Debian installer images and b) the
local Brest/Finistère Linux user group as well as the motivated and helpful
people at Infini.
Credits
I want to thank the Open Source team at Google for organizing GSoC: The
highly structured program with a one-to-one mentorship is a great avenue to
start contributing to well-established and at times intimidating FOSS
projects. And as much as I disagree with Google's surveillance
capitalist business model, I have to give it to them that the company at
least takes its responsibility for FOSS (somewhat) seriously - unlike many
other businesses which rely on FOSS and choose to freeride on it.
Big thanks to the Debian community! I've experienced nothing but
friendliness in my interactions with the community.
And lastly, the biggest thanks to my GSoC mentor Samuel Henrique. He has
dealt patiently and competently with all my stupid newbie questions. His
support enabled me to make - albeit small - contributions to Debian. It has
been a pleasure to work with him during GSoC and I'm looking forward to
working together with him in the future.
Obviously, I've only read them after experiencing the problem.
If your bank is like mine, its website doesn't let you paste your password with a simple Ctrl+V. I tried the Don't Fuck With Paste extension in Firefox, which could paste my bank account's profile password but not the login password.
Therefore, I asked on Mastodon a couple of days ago and got some responses. The solution that worked for me was to use Shift+Insert to paste the password. It worked for me in LibreWolf and Firefox, and that's all I needed.
Furthermore, this behavior by bank websites leads to users choosing insecure and memorable passwords. Using this trick will help you choose strong passwords for your bank account.
I prefer to use random and strong passwords generated using the password manager pass. It is freedom-respecting software, unlike the popular proprietary password managers promoted by YouTubers. Feel free to check out their webpage here. The reason I use pass is that it stores all the passwords locally (and optionally in a remote Git repository) in encrypted form, which can only be decrypted using your private GPG key.
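For anyone new to pass, the day-to-day workflow is short. A minimal sketch (the entry name bank/login and the remote URL below are just examples, not real entries):

```shell
# Generate a 24-character random password for a hypothetical bank/login entry
pass generate bank/login 24

# Copy it to the clipboard; pass clears the clipboard again after 45 seconds
pass -c bank/login

# Optionally keep the encrypted store in a remote Git repository
pass git init
pass git remote add origin git@example.com:me/password-store.git
```

Combined with the Shift+Insert trick above, this works even on paste-hostile bank login forms.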
It's Sunday and I'm now sitting in the train from Brest to Paris, where I will change to a train to Germany, on the way back from the annual Debian conference. A full week of presentations, discussions, talks and socializing lies behind me and my head is still spinning from the intensity.
Pollito and the gang of DebConf mascots wearing their conference badges (photo: Christoph Berg)
Table of Contents
Sunday, July 13th
It started last Sunday with traveling to the conference. I got on the Eurostar in Duisburg and we left on time, but even before reaching Cologne, the train was already one hour delayed for external reasons, collecting yet another hour between Aachen and Liège for its own technical problems. "The train driver is working on trying to fix the problem." My original schedule had well over two hours for changing train stations in Paris, but being that late, I missed the connection to Brest in Montparnasse. At least in the end, the total delay was only one hour when finally arriving at the destination. With the French quatorze juillet fireworks approaching, buses in Brest were rerouted, but I managed to catch the right bus to the conference venue, already meeting a few Debian people on the way.
The conference was hosted at the IMT Atlantique Brest campus, giving the event a nice university touch. I arrived shortly after 10 in the evening and after settling down a bit, got on one of the "magic" buses for transportation to the camping site where half of the attendees were stationed. I shared a mobile home with three other Debianites, where I got a small room for myself.
Monday, July 14th
Next morning, we took the bus back to the venue for a small breakfast and the opening session, where Enrico Zini invited me to come to his and Nicolas Dandrimont's session about Debian community governance and curation, which I gladly did. Many ideas about conflict moderation and community steering were floated around. I hope some of that can be put into effect to make flamewars on the mailing lists less heated and more directed. After that, I attended Olly Betts' "Stemming with Snowball" session, Snowball being the stemmer also used in PostgreSQL. Text search is one of the areas in PostgreSQL that I never really looked closely at, including the integration into the postgresql-common package, so it was nice to get more information about that.
In preparation for the conference, a few of us Ham radio operators in Debian had decided to bring some radio gear to DebConf this year in order to perhaps spark more interest in our hobby among the fellow geeks. In the afternoon after the talks, I found a quieter spot just outside of the main hall and set up a shortwave antenna by attaching a 10m mast to one of the park benches there. The 40m band was still pretty much closed, but I could work a few stations from England, just across the channel from Bretagne, answering questions from interested Debian passers-by between the contacts. Over time, the band opened and more European stations got into the log.
F/DF7CB in Brest (photo: Evangelos Ribeiro Tzaras)
Tuesday, July 15th
Tuesday started with Helmut Grohne's session about "Reviving (un)schroot". The schroot program has been Debian's standard way of managing build chroots for a long time, but it is more and more being regarded as obsolete with all kinds of newer containerization and virtualization technologies taking over. Since many bits of Debian infrastructure depend on schroot, and its user interface is still very useful, Helmut reimplemented it using Linux namespaces and the "unshare" system call. I had already worked with him at the Hamburg Minidebconf to replace the apt.postgresql.org buildd machinery with the new system, but we were not quite there yet (network isolation is nice, but we still sometimes need proper networking), so it was nice to see the effort is still progressing and I will give his new scripts a try when I'm back home.
Next, Stefano Rivera and Colin Watson presented Debusine, a new package repository and workflow management system. It looks very promising for anyone running their own repository, so perhaps yet another bit of apt.postgresql.org infrastructure to replace in the future. After that, I went to the Debian LTS BoF session by Santiago Ruano Rincón and Bastien Roucariès - Debian releases plus LTS is what we are covering with apt.postgresql.org. Then there were bits from the DPL (Debian Project Leader), and a session moderated by Stefano Rivera on the future structure of the packages required for cross-building in Debian, a topic which had been brought to the TC a while ago and which was interesting to me as a member of the Debian Technical Committee. I am happy that we could resolve the issue without having to issue a formal TC ruling, as the involved parties (kernel, glibc, gcc and the cross-build people) found a promising way forward themselves. DebConf is really a good way to get such issues unstuck.
Ten years ago at the 2015 Heidelberg DebConf, Enrico had given a seminal "Semi-serious stand-up comedy" talk, drawing parallels between the Debian Open Source community and the BDSM community - "People doing things consensually together". (Back then, the talk was announced as "probably unsuitable for people of all ages".) With his unique presentation style and witty insights, the session made a lasting impression on everyone attending. Now, ten years later (with him and many in the audience being ten years older), he gave an updated version of it. We are now looking forward to the sequel in 2035. The evening closed with the famous DebConf tradition of the Cheese & Wine party in an old fort next to the coast, just below the conference venue. Even though he's a fellow Debian Developer, Ham and also TC member, I had never met Paul Tagliamonte in person before, but we spent most of the evening together geeking out on all things Debian and Ham radio.
The northern coast of Ushant (photo: Christoph Berg)
Wednesday, July 16th
Wednesday already marked the end of the first half of the week, the day of the day trips. I had chosen to go to Ouessant island (Ushant in English), which marks the western end of the French mainland and hosts one of the lighthouses guiding the way into the English Channel. The ferry trip included surprisingly big waves which left some participants seasick, but everyone recovered fast. After around one and a half hours we arrived, picked up the bicycles, and spent the rest of the day roaming the island. The weather forecast had originally said very cloudy and 18°C, but over noon this turned into sunny and warm, so many got an unplanned sunburn. I enjoyed the trip very much - it made up for not having time to visit the city during the week. After returning, we spent the rest of the evening playing DebConf's standard game, Mao (spoiler alert: don't follow the link if you ever intend to play).
Having a nice day (photo: Christoph Berg)
Thursday, July 17th
The next day started with the traditional "Meet the Technical Committee" session. This year, we trimmed the usual slide deck down to remove the boring boilerplate parts, so after a very short introduction to the work of the committee by our chairman Matthew Vernon, we opened up the discussion with the audience, with seven (out of 8) TC members on stage. I think the format worked very well, with good input from attendees. Next up was "Don't fear the TPM" by Jonathan McDowell. A common misconception in the Free Software community is that the TPM is evil DRM hardware working against the user, but while it could in theory be used that way, the necessary TPM attestations seem to be impossible to attain in practice, so that wouldn't happen anyway. Instead, it is a crypto coprocessor present in almost all modern computers that can be used to hold keys, for example to be used for SSH. It will also be interesting to research if we can make use of it for holding the Transparent Data Encryption keys for CYBERTEC's PostgreSQL Enterprise Edition.
Aigars Mahinovs then directed everyone in place for the DebConf group picture, and Lucas Nussbaum started a discussion about archive-wide QA tasks in Debian, an area where I did a lot of work in the past and that still interests me. Antonio Terceiro and Paul Gevers followed up with techniques to track archive-wide rebuilding and testing of packages and in turn filing a lot of bugs to track the problems. The evening ended with the conference dinner, again in the fort close by the coast. DebConf is good for meeting new people, and I incidentally ran into another Chris, who happened to be one of the original maintainers of pgaccess, the pre-predecessor of today's pgadmin. I admit still missing this PostgreSQL frontend for its simplicity and ability to easily edit table data, but it disappeared around 2004.
Friday, July 18th
On Friday, I participated in discussion sessions around contributors.debian.org (PostgreSQL is planning to set up something similar) and the New Member process, which I had helped to run and reform a decade or two ago. Agathe Porte (also a Ham radio operator, like so many others at the conference I had no idea about) then shared her work on rewriting the slower parts of Lintian, the Debian package linter, in Rust. Craig Small talked about "Free as in Bytes", the evolution of the Linux procps free command. Over time and many kernel versions, the summary numbers printed became better and better, but there will probably never be a version that suits all use cases alike. Later over dinner, Craig (who is also a TC member) and I shared our experiences with these numbers and customers (not) understanding them. He pointed out that for PostgreSQL, when looking at used memory in the presence of large shared memory buffers, USS (unique set size) and PSS (proportional set size) should be more realistic numbers than the standard RSS (resident set size) that the top utility shows by default.
Antonio Terceiro and Paul Gevers again joined to lead a session, now on ci.debian.net and autopkgtest, the test driver used for running tests on packages after they have been installed on a system. The PostgreSQL packages use this heavily to make sure no regressions creep in even after builds have successfully completed, and test re-runs are rescheduled periodically. The day ended with Bdale Garbee's electronics team BoF and Paul Tagliamonte and me setting up the radio station in the courtyard, again answering countless questions about ionospheric conditions and operating practice.
Saturday, July 19th
Saturday was the last conference day. In the first session, Nikos Tsipinakis and Federico Vaga from CERN announced that the LHC will be moving to Debian for the accelerator's frontend computers in their next "long shutdown" maintenance period next year. CentOS broke compatibility too often, and Debian trixie together with the extended LTS support will cover the time until the next long shutdown window in 2035, by when the computers should all have been replaced with newer processors supporting higher x86_64 baseline versions. The audience was delighted to hear that Debian is now also being used in this prestigious project.
Ben Hutchings then presented new Linux kernel features. Particularly interesting for me was the support for atomic writes spanning more than one filesystem block. When configured correctly, this would mean PostgreSQL didn't have to record full-page images in the WAL anymore, increasing throughput and performance. After that, the Debian ftp team discussed ways to improve review of new packages in the archive, and which of their processes could be relaxed with new US laws around Open Source and cryptography algorithms export. Emmanuel Arias led a session on Salsa CI, Debian's Gitlab instance and standard CI pipeline. (I think it's too slow, but the runners are not under their control.) Julian Klode then presented new features in APT, Debian's package manager. I like the new display format (and a tiny bit of that is also from me sending in wishlist bugs).
In the last round of sessions this week, I then led the Ham radio BoF with an introduction into the hobby and how Debian can be used. Bdale mentioned that the sBitx family of SDR radios is natively running Debian, so stock packages can be used from the radio's touch display. We also briefly discussed his involvement in ARDC and the possibility to get grants from them for Ham radio projects. Finally, DebConf wrapped up with everyone gathering in the main auditorium and cheering the organizers for making the conference possible and passing Pollito, the DebConf mascot, to the next organizer team.
Pollito on stage (photo: Christoph Berg)
Sunday, July 20th
Zoom back to the train: I made it through the Paris metro and I'm now on the Eurostar back to Germany. It has been an intense week with all the conference sessions and meeting all the people I had not seen in so long. There are a lot of new ideas to follow up on, both for my Debian and PostgreSQL work. Next year's DebConf will take place in Santa Fe, Argentina. I haven't yet decided if I will be going, but I can recommend the experience to everyone!
The post The Debian Conference 2025 in Brest appeared first on CYBERTEC PostgreSQL Services & Support.
Dear friends, family, and community,
I'm reaching out during a challenging time in my life to ask for your support. This year has been particularly difficult as I've been out of work for most of it due to a broken arm and a serious MRSA infection that required extensive treatment and recovery time.
Current Situation
While I've been recovering, I've been actively working to maintain and improve my professional skills by contributing to open source software projects. These contributions help me stay current with industry trends and demonstrate my ongoing commitment to my field, but unfortunately, they don't provide the income I need to cover my basic living expenses.
Despite my efforts, I'm still struggling to secure employment, and I'm falling behind on essential bills including:
Rent/mortgage payments
Utilities
Medical expenses
Basic living costs
How You Can Help
Any financial assistance, no matter the amount, would make a meaningful difference in helping me stay afloat during this job search. Your support would allow me to:
Keep my housing stable
Maintain essential services
Focus fully on finding employment without the constant stress of unpaid bills
Continue contributing to the open source community
Moving Forward
I'm actively job searching and interviewing, and I'm confident that I'll be back on my feet soon. Your temporary support during this difficult period would mean the world to me and help bridge the gap until I can secure stable employment.
If you're able to contribute, GoFundMe. If you're unable to donate, I completely understand, and sharing this request with others who might be able to help would be greatly appreciated.
Thank you for taking the time to read this and for considering helping me during this challenging time.
With gratitude, Scarlett
Achieving full disk encryption using FIPS, TCG OPAL and LUKS to encrypt UEFI ESP on bare-metal and in VMs
Many security standards such as CIS and STIG require protecting information at rest. For example, NIST SP 800-53r5 SC-28 advocates using cryptographic protection, offline storage and TPMs to enhance protection of information confidentiality and/or integrity. Traditionally, to satisfy such controls on portable devices such as laptops, one would utilize software-based Full Disk Encryption - Mac OS X FileVault, Windows BitLocker, Linux cryptsetup LUKS2. In cases when FIPS cryptography is required, an additional burden would be placed onto these systems to operate their kernels in FIPS mode.
Trusted Computing Group works on establishing many industry standards and specifications, which are widely adopted to improve safety and security of computing whilst keeping it easy to use. One of their most famous specifications is TCG TPM 2.0 (Trusted Platform Module). TPMs are now widely available on most devices and help to protect secret keys and attest systems. For example, most software full disk encryption solutions can utilise a TCG TPM to store full disk encryption keys, providing passwordless, biometric or PIN-based ways to unlock the drives, as well as attesting that the system has not been modified or compromised whilst offline.
TCG Storage Security Subsystem Class: Opal Specification is a set of specifications for features of data storage devices. The authors and contributors to Opal are leading and well-trusted storage manufacturers such as Samsung, Western Digital, Seagate Technologies, Dell, Google, Lenovo, IBM and Kioxia, among others. One of the features that the Opal Specification enables is self-encrypting drives, which become very powerful when combined with pre-boot authentication. Out of the box, such drives always and transparently encrypt all disk data using hardware acceleration. To protect the data, one can enter the UEFI firmware setup (BIOS) and set an NVMe single user password (or user + administrator/recovery passwords) to encrypt the disk encryption key.
If one's firmware didn't come with such features, one can also use SEDutil to inspect and configure all of this. The latest releases of major Linux distributions already have SEDutil packaged. Once a password is set, pre-boot authentication will ask for it on startup - prior to booting any operating systems. It means that the full disk is actually encrypted, including the UEFI ESP and all operating systems that are installed in case of dual or multi-boot installations. This also prevents tampering with the ESP, UEFI bootloaders and kernels, which with traditional software-based encryption often remain unencrypted and accessible. It also means one doesn't have to do special OS-level repartitioning or installation steps to ensure all data is encrypted at rest.
What about FIPS compliance? Well, the good news is that the majority of the Opal-compliant hard drives and/or security sub-chips do have FIPS 140-3 certification, meaning they have been tested by independent laboratories to ensure they do in fact encrypt data. On the CMVP website one can search for the module name terms "OPAL" or "NVMe", or the name of a hardware vendor, to locate FIPS certificates.
Are such drives widely available? Yes. For example, a common Thinkpad X1 gen 11 has Opal NVMe drives as standard, and they have FIPS certification too. Thus, it is likely these are already widely available in your hardware fleet. Use sedutil to check if the MediaEncrypt and LockingSupported features are available.
Well, this is great for laptops and physical servers, but you may ask - what about public or private cloud? Actually, more or less the same is already in place in both. On the CMVP website all major clouds have their disk encryption hardware certified, and all of them always encrypt all Virtual Machines with FIPS-certified cryptography without an ability to opt out. One is however in full control of how the encryption keys are managed: cloud-provider or self-managed (either with a cloud HSM or KMS, or bring your own / external).
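Checking a drive with sedutil looks roughly like this (the device path is an example; the commands need root, and sedutil-cli is the binary name in most distributions):

```shell
# List drives and whether they support Opal (a "2" in the output means Opal 2.0)
sedutil-cli --scan

# Detailed feature dump for one drive; look for the Locking feature descriptor
# with LockingSupported / LockingEnabled, and for MediaEncrypt
sedutil-cli --query /dev/nvme0
```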
See these relevant encryption options and key management docs for GCP, Azure, AWS. The key takeaway: without doing anything, VMs in the public cloud are always encrypted at rest and satisfy NIST SP 800-53 controls.
What about private cloud? Most Linux-based private clouds ultimately use qemu, typically with qcow2 virtual disk images. Qemu supports user-space encryption of qcow2 disks, see this manpage. Such encryption covers the full virtual machine disk, including the bootloader and ESP. And it is handled entirely outside of the VM on the host - meaning the VM never has access to the disk encryption keys. Qemu implements this encryption entirely in userspace using gnutls, nettle or libgcrypt, depending on how it was compiled. This also means one can satisfy FIPS requirements entirely in userspace without a Linux kernel in FIPS mode. Higher-level APIs built on top of qemu also support qcow2 disk encryption, as in projects such as libvirt and OpenStack Cinder.
If you carefully read the docs, you may notice that agent support is sometimes explicitly called out as not supported, or not mentioned at all. Quite often agents running inside the OS do not have enough observability to assess if there is external encryption. It does mean that monitoring the above encryption options requires different approaches - for example, monitor your cloud configuration using tools such as Wiz and Orca, rather than using agents inside individual VMs. For laptop / endpoint security agents, I do wish they would start gaining the capability to report Opal SED availability and whether it is active or not.
What about using software encryption nonetheless on top of the above solutions? It is commonly referred to as double or multiple encryption. There will be an additional performance impact, but it can be worthwhile. It really depends on what you define as data at rest for yourself and which controls you need.
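As a sketch of the qemu side, creating a LUKS-encrypted qcow2 image looks like this (file names and size are examples; the passphrase is passed as a qemu "secret" object, so the key stays on the host):

```shell
# Store the passphrase in a file readable only by the hypervisor user
printf 'correct horse battery staple' > passphrase.txt
chmod 600 passphrase.txt

# Create a qcow2 image whose payload qemu encrypts with LUKS in userspace
qemu-img create -f qcow2 \
  --object secret,id=sec0,file=passphrase.txt \
  -o encrypt.format=luks,encrypt.key-secret=sec0 \
  encrypted.qcow2 20G
```

Starting a VM from the image requires passing the same secret object to qemu; libvirt and Cinder wrap exactly this mechanism.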
If one has a dual-boot laptop and wants to keep one OS encrypted whilst booted into the other, it can be perfectly reasonable to encrypt the two using separate software encryption keys, in addition to the Opal encryption of the ESP. For more targeted per-file / per-folder encryption, one can look into using gocryptfs, which is the best successor to the once popular but now deprecated eCryptfs (an amazing tool, but it has fallen behind in development and can lead to data loss).
All of the above mostly talks about cryptographic encryption, which only provides confidentiality but not data integrity. To protect integrity, one needs to choose how to maintain that. dm-verity is a good choice for read-only and rigid installations. For read-write workloads, it may be easier to deploy ZFS or Btrfs instead. If one is using filesystems without built-in integrity support such as XFS or Ext4, one can retrofit an integrity layer to them by using dm-integrity (either standalone, or via cryptsetup's LUKS2 --integrity option).
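As a sketch of the two dm-integrity variants mentioned above (the device path is a placeholder, and both commands destroy existing data on the device):

```shell
# LUKS2 authenticated encryption: cryptsetup stacks dm-integrity automatically
cryptsetup luksFormat --type luks2 --integrity hmac-sha256 /dev/sdX

# Standalone dm-integrity, e.g. to put ext4 or XFS on top afterwards
integritysetup format /dev/sdX
integritysetup open /dev/sdX integr0
mkfs.ext4 /dev/mapper/integr0
```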
If one has a lot of estate and a lot of encryption keys to keep track of, a key management solution is likely needed. The most popular solution is likely the one from Thales Group marketed under CipherTrust Data Security Platform (previously Vormetric), but there are many others, including OEM / vendor / hardware / cloud specific or agnostic solutions.
I hope this crash course guide piques your interest to learn and discover modern confidentiality and integrity solutions, and to re-affirm or change your existing controls w.r.t. data protection at rest.
Full disk encryption, including the UEFI ESP /boot/efi, is now widely achievable by default on both bare-metal machines and in VMs, including with FIPS certification. To discuss more, let's connect on LinkedIn.
DebConf 25 happened between 14th July and
19th July and I was there. It was my first DebConf (the big one, I was at a
Mini DebConf in Hamburg a
couple of years ago) and it was interesting. DebConf 25 happened at a
university campus on the outskirts of Brest and I was rather reluctant to go at first
(EuroPython 25 was happening at the same time
in Prague), but I decided to use the chance of DebConf happening in Europe,
reachable by train from Vienna. We took the night train to Paris, then found our
way through the maze that is the Paris underground system and then got to Brest
with the TGV. On our way to the Conference site we made a detour to a
supermarket, which wasn't that easy because it was a national holiday in France
and most of the shops were closed. But we weren't sure about the food situation
at DebConf and we also wanted to get some beer.
At the conference we were greeted by very friendly people at the badge station
and the front desk and got our badges, swag and most importantly the keys to
pretty nice rooms on the campus. Our rooms had a small private bathroom with a
toilet and a shower and between the two rooms was a shared kitchen with a
refrigerator and a microwave. All in all, the accommodation was simple but
provided everything we needed and especially a space to have some privacy.
During the next days I watched a lot of talks, met new people, caught up with
old friends and also had a nice time with my travel buddies. There was a beach
near the campus which I used nearly every day. It was mostly sunny except for
the last day of the conference, which apparently was not common for the Brest
area, so we got lucky regarding the weather.
Given that we only arrived in the evening of the first day of DebConf, I missed
the talk When Free Software Communities Unite: Tails, Tor, and the Fight for
Privacy
(recording),
but I watched it on the way home and it was also covered by
LWN.
On Tuesday I started the day by visiting a talk about
tag2upload
(recording).
The same day there was also an academic track and I watched the talk titled
Integrating Knowledge Graphs into the Debian
Ecosystem
(recording) which presented
a property graph showing relationships between various entities like packages,
maintainers or bugs (there is a
repository with parts of a paper,
but not much other information). The speaker also mentioned the graphcast
framework and the ontocast
framework which sound interesting - we
might have use for something like this at $dayjob.
In the afternoon there was a talk about the
ArchWiki
(recording)
which gave a comprehensive insight into how the
ArchWiki and the community behind it works. Right
after that was a Debian Wiki BoF. There are various technical limitations with
the current wiki software and there are not enough helping hands to maintain
the service and do content curation. But the BoF had some nice results: there
is now a new debian-wiki mailing list,
an IRC channel, a MediaWiki installation has been set up during DebConf, there
are efforts to migrate the data and, most importantly, a handful of people who
want to maintain the service and organize the content of the wiki. I think the
input from the ArchWiki folks gave some ideas on how that team could operate.
Wednesday was the day of the daytrip. I did not sign up for any of the trips
and used the time to try out
tag2upload, uploaded
the latest labwc release to
experimental
and spent the rest of the day at the beach.
Other noteworthy sessions I attended were the Don't fear the
TPM talk
(recording),
which showed me a lot of stuff to try out, the session about
lintian-ng (no
recording), which is an experimental approach to make lintian faster, the
review of the first year of wcurl's
existence (no
recording yet) and the summary of Rust packaging in
Debian (no
recording yet). In between the sessions I started working on packaging
wlr-sunclock
(#1109230).
What did not work
Vegan food.
I might be spoiled by other conferences. Both at EuroPython last year (definitely
bigger, a lot more commercial) and at PyCon CZ
23 (similar
in size, a lot more DIY) there was catering with explicitly vegan options.
As I mentioned at the beginning, we went to a supermarket before we went to
the conference and we had to go there one more time during the conference. I
think there was a mixture between a total lack of awareness and a LOT of
miscommunication. The breakfasts at the conference consisted of pastries and
baguettes - I asked on the first day what the vegan options were and the answer
was "I don't know, maybe the baguette?", and we were asked to only take as
much baguette as the people who also got pastries.
The lunch was prepared by the Restaurant associatif de Kernévent, which is
a canteen at the university campus. When we asked if there was vegan food, the
people there said that there was only a vegetarian option, so we only ate salad.
Only later we heard via word of mouth that one has to explicitly ask for a
vegan meal, which was apparently prepared separately, and you had to find the
right person who knew about it (I think that's very Debian-like). But even
then a person once got a vegetarian option offered as vegan food.
One problem was also the missing / confusing labeling of the food. At the
conference dinner there was apparently vegan food, but it was mixed with all
the other food. There were some labels but with hundreds of hungry people around
and caterers removing empty plates and dropping off plates with other stuff,
everything gets mixed up. In the end we ate bread soaked in olive oil, until the
olive oil got taken away by the catering people literally while we were dipping
the bread in it.
And when these issues were raised, some of the reactions can be summarized as
"You're holding it wrong", which was really frustrating.
The dinners at the conference hall were similar. At some point I had the
impression that vegan and vegetarian were simply seen as the same thing.
If the menus would be written like a debian/copyright file it would probably
have looked like this:
Food: *
Diet: Vegan or Vegetarian
But the thing is that vegan and vegetarian cannot be mixed. It's similar to
incompatible licenses: once you mix vegan food with vegetarian
food, it's not vegan anymore.
Don't get me wrong, I know it's hard to organize food for hundreds of people.
But if you don't know what it means to provide a vegan option, just communicate
the fact so people can look for alternatives in advance. During the week some of
the vegan people shared food, which was really nice and there were also a lot
of non-vegan people who tried to help, organized extra food or simply listened
to the hangry rants. Thanks for that!
Paris
Saturday was the last day of DebConf and it was a rainy day. On Sunday morning
we took the TGV back to Paris and then stayed there for one night because the
next night train back to Vienna was on Monday. Luckily the weather was
better in Paris. The first thing we did was to look up a vegan burger place. In
the evening we strolled along the Seine and had a couple of beers at the
Jardins du Trocadéro. On Monday the rain also arrived in Paris and we mostly went
from one cafe to the next, but also managed to visit Notre Dame.
Conclusio
The next DebConf will be in Argentina and I think it's likely that DebConf 27
will also not happen anywhere within trainvelling distance. But even if it
does, I think the Mini DebConfs are more my style of happening (there is one planned in
Hamburg next spring, and a couple of days ago I learned that there will be a
Back to the Future musical show in Hamburg during that time). Nonetheless I had
a nice time and I stumbled over some projects I might get more involved in.
Thanks also to my travel buddies who put up with me.