Search Results: "p2"

22 January 2025

Jonathan McDowell: Christmas Movies

I watch a lot of films. Since completing the IMDB Top 250 back in 2016 I've kept an eye on it, and while I don't go out of my way to watch the films that newly appear in it I generally sit at over 240 watched. I should note I don't consider myself a film buff/critic, however. I watch things for enjoyment, and a lot of the time that's kicking back and relaxing and disengaging my brain. So I don't get into writing reviews, just high level lists of things I've watched, sometimes with a few comments. With that in mind, let's talk about Christmas movies. Yes, I appreciate it's the end of January, but generally during December we watch things that have some sort of Christmas theme. New ones if we can find them, but also some of what we consider "classics".

This almost always starts with Scrooged after we've put up the tree. I don't always like Bill Murray (I couldn't watch The Life Aquatic with Steve Zissou and I think Lost in Translation is overrated), but he's in a bunch of things I really like, and Scrooged is one of those. I don't care where you sit on whether Die Hard is a Christmas movie or not; it's a good movie and therefore regularly gets a December watch. Die Hard 2 also fits into that category of "sequel at least as good as the original", though Helen doesn't agree. We watched it anyway, and I finally made the connection between the William Sadler hotel scene and Michael Rooker's in Mallrats.

It turns out I'm a Richard Curtis fan. Love Actually has not aged well; most times I watch it I find something new to question about it, and I always end up hating Alan Rickman for cheating on Emma Thompson, but I do like watching it. He had a new one, That Christmas, out this year, so we watched it as well. Another new-to-us film this year was Spirited. I generally like Ryan Reynolds, and Will Ferrell is good as long as he's not too overboard, so I had high hopes. I enjoyed it, but for some reason not as much as I'd expected, and I doubt it's getting added to the regular watch list.

Larry doesn't generally like watching full length films, but he (and we) enjoyed The Grinch, which I actually hadn't seen before. He's not as fussed on The Muppet Christmas Carol, but we watch it every year, generally on Christmas or Boxing Day. Favourite thing I saw on the Fediverse in December was "Do you know there's a book of The Muppet Christmas Carol, and they don't mention that there's muppets in it once?"

There are various other light hearted Christmas films we regularly watch. This year included The Holiday (I have so many issues with even just the practicalities of a short notice house swap) and Last Christmas (lots of George Michael music, what's not to love? Also it was only on this watch through that we realised the lead character is the Mother of Dragons). We started, but could not finish, Carry On. I saw it described somewhere as copaganda, and that feels accurate. It does not accurately reflect any of my interactions with TSA at airports, especially during busy periods. Things we didn't watch this year, but are regularly in the mix, include Fatman, Violent Night (looking forward to the sequel, hopefully this year), and Lethal Weapon. Klaus is kinda at the other end of the spectrum, but very touching, and we've watched it a couple of years now.

Given what we seem to like, any suggestions for other films to add? It's nice to have enough in the mix that we get some variety every year.

29 December 2024

Emmanuel Kasper: Accessing Atari ST disk images on Linux

This post leverages support for Atari Hard Disk Interface Partition (AHDI) partition tables in the Linux kernel, activated by default in Debian, and in the parted partition editor.

Accessing the content of a partition using a user mounted loop device

This is the easiest procedure and should be tried first. Depending on whether your Linux kernel has support for AHDI partition tables, and on the size of the FAT filesystem on the partition, this procedure might not work. In that case, try the procedure using mtools further below. Attach a disk image called hd80mb.image to a loop device:
$ udisksctl loop-setup --file hd80mb.image
Mapped file hd80mb.image as /dev/loop0
Notice how the kernel detected the partition table:
$ dmesg | grep loop0
[160892.151941] loop0: detected capacity change from 0 to 164138
[160892.171061]  loop0: AHDI p1 p2 p3 p4
Inspect the block devices created for each partition:
$ lsblk | grep loop0
If the partitions are not already mounted by udisks2 under /media/, mount them manually:
$ sudo mount /dev/loop0p1 /mnt/
$ ls /mnt/
SHDRIVER.SYS
When you are finished copying data, unmount the partition, and detach the loop device.
$ sudo umount /mnt
$ udisksctl loop-delete --block-device /dev/loop0
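If no loop0p* partitions appear at all, it is worth checking that your kernel was built with AHDI partition table support, which Debian enables via CONFIG_ATARI_PARTITION (the config file path shown is typical but varies by distribution):
$ grep ATARI_PARTITION /boot/config-$(uname -r)
CONFIG_ATARI_PARTITION=y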
Accessing the content of a partition using mtools and parted

This procedure uses the mtools package and the support for the AHDI partition scheme in the parted partition editor. Display the partition table, with partition offsets in bytes:
$ parted st_mint-1.5.img -- unit B print
...
Partition Table: atari
Disk Flags: 
Number  Start       End         Size        Type     File system  Flags
 1      1024B       133170175B  133169152B  primary               boot
 2      133170176B  266339327B  133169152B  primary
 3      266339328B  399508479B  133169152B  primary
 4      399508480B  532676607B  133168128B  primary
Set some Atari-friendly mtools options:
$ export MTOOLS_SKIP_CHECK=1
$ export MTOOLS_NO_VFAT=1
List the content of the partition, passing as a parameter the byte offset of the partition. For instance, here we are interested in the second partition, and the parted output above indicates that it starts at byte offset 133170176 in the disk image.
$ mdir -s -i st_mint-1.5.img@@133170176
 Volume in drive : has no label
Directory for ::/
demodata     <DIR>     2024-08-27  11:43 
        1 file                    0 bytes
Directory for ::/demodata
We can also use the mcopy command with a similar syntax to copy data to and from the disk image. For instance, we copy a file named file.zip to the root directory of the second partition:
$ mcopy -s -i st_mint-1.5.img@@133170176 file.zip ::
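When scripting such transfers, the offset can be extracted from parted's machine-readable output instead of being copied by hand. A sketch for partition 2 of the same image (parted -m prints colon-separated fields, byte values carrying a trailing B):
$ offset=$(parted -sm st_mint-1.5.img unit B print | awk -F: '$1 == "2" { sub(/B$/, "", $2); print $2 }')
$ mdir -s -i st_mint-1.5.img@@"$offset"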
Recompiling mtools to access large partitions

With disk images having large AHDI partitions (well, considered large in 1992), you might encounter the error:
$ mdir -s -i cecile-falcon-singlepart-1GB.img@@1024
init: sector size too big
Cannot initialize '::'
This error is caused by the non-standard large logical sectors that TOS uses for large FAT partitions (see the Atari Hard Disk Filesystem reference on page 41, TOS partitions size). We can inspect the logical sector size using fsck tools:
$ udisksctl loop-setup --file cecile-falcon-singlepart-1GB.img
$ sudo fsck.fat -Anv /dev/loop0p1
fsck.fat 4.2 (2021-01-31)
...
Media byte 0xf8 (hard disk)
16384 bytes per logical sector
To access the partition, you need to patch mtools, so that it supports a logical sector size of 16384 bytes. For this you need to change the MAX_SECTOR macro from 8192 to 16384 in msdos.h in the mtools distribution and recompile. A rebuilt mtools is then able to access the partition:
$ /usr/local/bin/mdir -s -i cecile-falcon-singlepart-1GB.img@@1024
 Volume in drive : has no label
Directory for ::/
CECILE   SYS      8462 1998-03-27  22:42 
NEWDESK  INF       804 2024-09-09   9:23 
        2 files               9 266 bytes
                      1 072 463 872 bytes free
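For reference, the patch-and-rebuild itself can be scripted roughly as follows from an unpacked mtools source tree (a sketch; the exact whitespace of the MAX_SECTOR definition may differ between mtools releases):
$ sed -i 's/define MAX_SECTOR[[:space:]]*8192/define MAX_SECTOR 16384/' msdos.h
$ ./configure && make
$ sudo make install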

12 December 2024

Dirk Eddelbuettel: #44: r2u For ML and MLops Talk

Welcome to the 44th post in the $R^4 series. A few weeks ago, and following an informal call for talks by James Lamb, I had an opportunity to talk about r2u to the Chicago ML and MLops meetup groups. You can find the slides here. Over the last 2 1/2 years, r2u has become a widely-deployed mechanism in a number of settings, including (but not limited to) software testing via continuous integration and deployment on cloud servers, besides of course more standard use on local laptops or workstations. 30 million downloads illustrate this. My thesis for the talk was that this extends equally to ML(ops), where "no surprises, no hiccups" automated deployments are key for large-scale model training, evaluation and of course production deployments. In this context, I introduce r2u while giving credit both to what came before it, the existing alternatives (or competitors for mindshare, if one prefers that phrasing), and of course what lies underneath it. The central takeaway, I argue, is that r2u can and does take advantage of a unique situation: we can join the package manager task for the underlying (operating) system and the application domain, here R and its unique CRAN repository network. Other approaches can, and of course do, provide binaries, but by doing this outside the realm of the system package manager they can only arrive at a lesser integration (and I show a common error arising in that case). So where r2u is feasible, it dominates the alternatives (while the alternatives may well provide deployment on more platforms which, even when less integrated, may be of greater importance for some). As always, it all depends. But the talk, and its slides, motivate and illustrate why we keep calling r2u by its slogan of "r2u: Fast. Easy. Reliable. Pick All Three."
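To ground the central point: with the r2u apt repository configured (or inside the rocker/r2u container image), any CRAN package installs as a regular binary .deb whose system-level dependencies apt resolves alongside it. A minimal sketch, with an illustrative package:
$ apt-get install -y r-cran-xgboost      # one prebuilt .deb, system libraries pulled in by apt
$ Rscript -e 'library(xgboost)'          # immediately usable, no compilation step
With the bspm bridge that r2u sets up, a plain install.packages() inside R is routed to those same apt-managed binaries.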

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. Please report excessive re-aggregation in third-party for-profit settings.

18 November 2024

Philipp Kern: debian.org now supports Security Key-backed SSH keys

debian.org's infrastructure now supports using Security Key-backed SSH keys. DDs (and guests) can use the mail gateway to add SSH keys of the types sk-ecdsa-sha2-nistp256@openssh.com and sk-ssh-ed25519@openssh.com to their LDAP accounts. This was done in support of hardening our infrastructure: hopefully we can require these hardware-backed keys for sensitive machines in the future, to have some assertion that it is a human that is connecting to them.
As some of us shell to machines a little too often, I also wrote a small SSH CA that issues short-lived certificates (documentation). It requires the user to login via SSH using an SK-backed key and then issues a certificate that is valid for less than a day. For cases where you need to frequently shell to a machine, or to a lot of machines at once, that should be a nice compromise of usability vs. security. The capabilities of various keys differ a lot and it is not always easy to determine what feature set they support. Generally SK-backed keys work with FIDO U2F keys, if you use the ecdsa key type. Resident keys (i.e. keys stored on the token, to be used from multiple devices) require FIDO2-compatible keys. no-touch-required is its own maze, e.g. the flag is not properly restored today when pulling the public key from a resident key. The latter is also one reason for writing my own CA. Someone™ should write up a matrix on what is supported where and how. In the meantime it is probably easiest to generate an ed25519 key - or if that does not work an ecdsa key - and make a backup copy of the resulting on-disk key file. And copy that around to other devices (or OSes) that require access to the key.
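For reference, generating such keys with OpenSSH looks like the following (the -O application label is purely illustrative; resident keys additionally require a FIDO2-capable token and often a PIN):
$ ssh-keygen -t ed25519-sk                                         # hardware-backed, non-resident
$ ssh-keygen -t ecdsa-sk                                           # fallback for U2F-only tokens
$ ssh-keygen -t ed25519-sk -O resident -O application=ssh:debian   # stored on the token itself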

12 November 2024

Freexian Collaborators: Monthly report about Debian Long Term Support, October 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors

In October, 20 contributors have been paid to work on Debian LTS. Their reports are available:
  • Abhijith PA did 6.0h (out of 7.0h assigned and 7.0h from previous period), thus carrying over 8.0h to the next month.
  • Adrian Bunk did 15.0h (out of 87.0h assigned and 13.0h from previous period), thus carrying over 85.0h to the next month.
  • Arturo Borrero Gonzalez did 10.0h (out of 10.0h assigned).
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 4.0h (out of 0.0h assigned and 4.0h from previous period).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 29.0h (out of 26.0h assigned and 3.0h from previous period).
  • Emilio Pozuelo Monfort did 60.0h (out of 23.5h assigned and 36.5h from previous period).
  • Guilhem Moulin did 7.5h (out of 19.75h assigned and 0.25h from previous period), thus carrying over 12.5h to the next month.
  • Lee Garrett did 15.25h (out of 0.0h assigned and 60.0h from previous period), thus carrying over 44.75h to the next month.
  • Lucas Kanashiro did 10.0h (out of 10.0h assigned and 10.0h from previous period), thus carrying over 10.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 14.5h (out of 6.5h assigned and 17.5h from previous period), thus carrying over 9.5h to the next month.
  • Roberto C. Sánchez did 9.75h (out of 24.0h assigned), thus carrying over 14.25h to the next month.
  • Santiago Ruano Rincón did 23.5h (out of 25.0h assigned), thus carrying over 1.5h to the next month.
  • Sean Whitton did 6.25h (out of 1.0h assigned and 5.25h from previous period).
  • Stefano Rivera did 1.0h (out of 0.0h assigned and 10.0h from previous period), thus carrying over 9.0h to the next month.
  • Sylvain Beucler did 9.5h (out of 16.0h assigned and 44.0h from previous period), thus carrying over 50.5h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 10.5h (out of 12.0h assigned), thus carrying over 1.5h to the next month.

Evolution of the situation

In October, we released 35 DLAs. Some notable updates prepared in October include denial of service vulnerability fixes in nss, regression fixes in apache2, multiple fixes in php7.4, and new upstream releases of firefox-esr, openjdk-17, and openjdk-11. Additional contributions were made for the stable Debian 12 bookworm release by several LTS contributors. Arturo Borrero Gonzalez prepared a parallel update of nss, Bastien Roucariès prepared a parallel update of apache2, and Santiago Ruano Rincón prepared updates of activemq for both LTS and Debian stable. LTS contributor Bastien Roucariès undertook a code audit of the cacti package and in the process discovered three new issues in node-dompurify, which were reported upstream and resulted in the assignment of three new CVEs. As always, the LTS team continues to work towards improving the overall sustainability of the free software base upon which Debian LTS is built. We thank our many committed sponsors for their ongoing support.

Thanks to our sponsors

Sponsors that joined recently are in bold.

11 October 2024

Freexian Collaborators: Monthly report about Debian Long Term Support, September 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors

In September, 18 contributors have been paid to work on Debian LTS. Their reports are available:
  • Abhijith PA did 7.0h (out of 0.0h assigned and 14.0h from previous period), thus carrying over 7.0h to the next month.
  • Adrian Bunk did 51.75h (out of 9.25h assigned and 55.5h from previous period), thus carrying over 13.0h to the next month.
  • Arturo Borrero Gonzalez did 10.0h (out of 0.0h assigned and 10.0h from previous period).
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 20.0h (out of 12.0h assigned and 12.0h from previous period), thus carrying over 4.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 23.0h (out of 26.0h assigned), thus carrying over 3.0h to the next month.
  • Emilio Pozuelo Monfort did 23.5h (out of 22.25h assigned and 37.75h from previous period), thus carrying over 36.5h to the next month.
  • Guilhem Moulin did 22.25h (out of 20.0h assigned and 2.5h from previous period), thus carrying over 0.25h to the next month.
  • Lucas Kanashiro did 10.0h (out of 5.0h assigned and 15.0h from previous period), thus carrying over 10.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 6.5h (out of 14.5h assigned and 9.5h from previous period), thus carrying over 17.5h to the next month.
  • Roberto C. Sánchez did 24.75h (out of 21.0h assigned and 3.75h from previous period).
  • Santiago Ruano Rincón did 19.0h (out of 19.0h assigned).
  • Sean Whitton did 0.75h (out of 4.0h assigned and 2.0h from previous period), thus carrying over 5.25h to the next month.
  • Sylvain Beucler did 16.0h (out of 42.0h assigned and 18.0h from previous period), thus carrying over 44.0h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 17.0h (out of 7.5h assigned and 9.5h from previous period).

Evolution of the situation

In September, we released 52 DLAs. September marked the first full month of Debian 11 bullseye under the responsibility of the LTS Team, and the team immediately got to work, publishing more than 4 dozen updates. Some notable updates include ruby2.7 (denial-of-service, information leak, and remote code execution), git (various arbitrary code execution vulnerabilities), firefox-esr (multiple issues), gnutls28 (information disclosure), thunderbird (multiple issues), cacti (cross site scripting and SQL injection), redis (unauthorized access, denial of service, and remote code execution), mariadb-10.5 (arbitrary code execution), and cups (arbitrary code execution). Several LTS contributors have also contributed package updates which either resulted in a DSA (a Debian Security Announcement, which applies to Debian 12 bookworm) or in an upload that will be published at the next stable point release of Debian 12 bookworm. This list of packages includes cups, cups-filters, booth, nghttp2, puredata, python3.11, sqlite3, and wireshark. This sort of work, contributing fixes to newer Debian releases (and sometimes even to unstable), helps to ensure that upgrades from a release in the LTS phase of its lifecycle to a newer release do not expose users to vulnerabilities which have been closed in the older release. Looking beyond Debian, LTS contributor Bastien Roucariès has worked with the upstream developers of apache2 to address regressions introduced upstream by some recent vulnerability fixes, and he has also reached out to the community regarding a newly discovered security issue in the dompurify package. LTS contributor Santiago Ruano Rincón has undertaken the work of triaging and reproducing nearly 4 dozen CVEs potentially affecting the freeimage package. The upstream development of freeimage appears to be dormant and some of the issues have languished for more than 5 years. It is unclear how much can be done without the aid of upstream, but we will do our best to provide as much help to the community as we can feasibly manage. Finally, it is sometimes necessary to limit or discontinue support for certain packages. The transition of a release from being under the responsibility of the Debian Security Team to that of the LTS Team is an occasion where we assess any pending decisions in this area and formalize them. Please see the announcement for a complete list of packages which have been designated as unsupported.

Thanks to our sponsors

Sponsors that joined recently are in bold.

11 September 2024

Freexian Collaborators: Monthly report about Debian Long Term Support, August 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors

In August, 16 contributors have been paid to work on Debian LTS. Their reports are available:
  • Adrian Bunk did 44.5h (out of 46.5h assigned and 53.5h from previous period), thus carrying over 55.5h to the next month.
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 9.0h (out of 0.0h assigned and 21.0h from previous period), thus carrying over 12.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 12.0h (out of 7.0h assigned and 5.0h from previous period).
  • Emilio Pozuelo Monfort did 22.25h (out of 6.5h assigned and 53.5h from previous period), thus carrying over 37.75h to the next month.
  • Guilhem Moulin did 17.5h (out of 8.75h assigned and 11.25h from previous period), thus carrying over 2.5h to the next month.
  • Lee Garrett did 11.5h (out of 58.0h assigned and 2.0h from previous period), thus carrying over 48.5h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 14.5h (out of 4.0h assigned and 20.0h from previous period), thus carrying over 9.5h to the next month.
  • Roberto C. Sánchez did 8.25h (out of 5.0h assigned and 7.0h from previous period), thus carrying over 3.75h to the next month.
  • Santiago Ruano Rincón did 21.5h (out of 11.5h assigned and 10.0h from previous period).
  • Sean Whitton did 4.0h (out of 2.25h assigned and 3.75h from previous period), thus carrying over 2.0h to the next month.
  • Sylvain Beucler did 42.0h (out of 46.0h assigned and 14.0h from previous period), thus carrying over 18.0h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 2.5h (out of 7.75h assigned and 4.25h from previous period), thus carrying over 9.5h to the next month.

Evolution of the situation

In August, we released 1 DLA. During the month of August, Debian 11 "bullseye" officially transitioned to the responsibility of the LTS team (on 2024-08-15). However, because the final point release (11.11) was not made until 2024-08-31, LTS contributors were prevented from uploading packages to bullseye until after the point release had been made. That said, the team was not at all idle, and was busy at work on a variety of tasks which impacted both LTS and the broader Debian community, as well as preparing uploads which will be released during the month of September. Of particular note, LTS contributor Bastien Roucariès prepared updates of the putty and cacti packages for bookworm (1, 2) and bullseye (1, 2), which were accepted by the old-stable release managers for the August point releases. He also analysed several security regressions in the apache2 package. LTS contributor Emilio Pozuelo Monfort worked on the Rust toolchain in bookworm and bullseye, which will be needed to support the upcoming Firefox ESR and Thunderbird ESR releases from the Mozilla project. Additionally, LTS contributor Thorsten Alteholz prepared bookworm and bullseye updates of the cups package (1, 2), which were accepted by the old-stable release managers for the August point releases. LTS contributor Markus Koschany collaborated with Emmanuel Bourg, co-maintainer of the tomcat packages in Debian. Regressions in a proposed security fix necessitated updating the tomcat10 package in Debian to the latest upstream release. LTS contributors Bastien and Santiago Ruano Rincón collaborated with the upstream developers and the Debian maintainer (Bernhard Schmidt) of the FreeRADIUS project towards addressing the BlastRADIUS vulnerability in the bookworm and bullseye versions of the freeradius package. If you use FreeRADIUS in Debian bookworm or bullseye, we encourage you to test the packages following the instructions found in the call for testers, to help identify any possible regressions that could be introduced with these updates. Testing is an important part of the work the LTS Team does, and in that vein LTS contributor Sean Whitton worked on improving the documentation and tooling around creating test filesystems which can be used for testing a variety of package update scenarios.

Thanks to our sponsors

Sponsors that joined recently are in bold.

10 September 2024

Freexian Collaborators: Debian Contributions: Python 3 patches, OpenSSH GSS-API split, rebootstrap, salsa CI, etc. (by Anupa Ann Joseph)

Debian Contributions: 2024-08

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Debian Python 3 patch review, by Stefano Rivera

Last month, at DebConf, Stefano reviewed the current patch set of Debian's cPython packages with Matthias Klose, the primary maintainer until now. As a result of that review, Stefano re-reviewed the patchset, updating descriptions, etc. A few patches could be dropped, and a few others were forwarded upstream. One finds all sorts of skeletons doing reviews like this. One of the patches had been inactive (fortunately, because it was buggy) since the day it was applied, 13 years ago. One is a cleanup that probably only fixes a bug on HPUX, and is a result of copying code from xfree86 into Python 25 years ago. It was fixed in xfree86 a year later. Others support just Debian-specific functionality and probably never seemed worth forwarding. Or good cleanups that only really apply to Debian. A trivial new patch would allow Debian to multiarch co-install Python stable ABI dynamic extensions (like we can with regular dynamic extensions). Performance concerns are stalling it in review, at the moment.

DebConf 24 Organization, by Stefano Rivera

Stefano helped organize DebConf 24, which concluded in early August. The event is run by a large, entirely volunteer team. The work involved in making this happen is far too varied to describe here. While Freexian provides funding for 20% of collaborator time to spend on Debian-related work, it only covers a small fraction of contributions to time-intensive tasks like this. Since the end of the event, Stefano has been doing some work on the conference finances, and initiated the reimbursement process for travel bursaries.

Archive rebuilds on Debusine, by Stefano Rivera

The recent setuptools 73 upload to Debian unstable removed the test subcommand, breaking many packages that were using python3 setup.py test in their Debian packaging. Stefano did a partial archive-rebuild using debusine.debian.net to find the regressions and file bugs. Debusine will be a powerful tool to do QA work like this for Debian in the future, but it doesn't have all the features needed to coordinate rebuild-testing yet. They are planned to be fleshed out in the next year. In the meantime, Debusine has the building blocks to work through a queue of package building tasks and store the results; it just needs to be driven from outside the system. So, Stefano started working on a set of tools using the Debusine client API to perform archive rebuilds, found and tagged existing bugs, and filed many more.

OpenSSH GSS-API split, by Colin Watson

Colin landed the first stage of the planned split of GSS-API authentication and key exchange support in Debian's OpenSSH packaging. In order to allow for smooth upgrades, the second stage will have to wait until after the Debian 13 (trixie) release; but once that's done, as upstream puts it, "this substantially reduces the amount of pre-authentication attack surface exposed on your users' sshd by default".

OpenSSL vs. cryptography, by Colin Watson

Colin facilitated a discussion between Debian's OpenSSL team and the upstream maintainers of Python cryptography about a new incompatibility between Debian's OpenSSL packaging and cryptography's handling of OpenSSL's legacy provider, which was causing a number of build and test failures. While the issue remains open, the Debian OpenSSL maintainers have effectively reverted the change now, so it's no longer a pressing problem.

/usr-move, by Helmut Grohne

There are fewer than 40 source packages left to move files to /usr, so what we're left with is the long tail of the transition. Rather than fix all of them, Helmut started a discussion on removing packages from unstable and filed a first batch. As libvirt is being restructured in experimental, we're handling the fallout in collaboration with its maintainer Andrea Bolognani. Since base-files validates the aliasing symlinks before upgrading, it was discovered that systemd has its own ideas, with no solution as of yet. Helmut also proposed that dash check for ineffective diversions of /bin/sh and that lintian warn about aliased files.

rebootstrap, by Helmut Grohne

Bootstrapping Debian for a new or existing CPU architecture is still quite a manual process. The rebootstrap project attempts to automate part of the early stage, but it is still very sensitive to changes in unstable. We had a number of fairly intrusive changes this year already. August included a little more fallout from the earlier gcc-for-host work, where the C++ include search path would end up being wrong in the generated cross toolchain. A number of packages such as util-linux (twice), libxml2, libcap-ng or systemd had their stage profiles broken. e2fsprogs gained a cycle with libarchive-dev due to having gained support for creating an ext4 filesystem from a tar archive. The restructuring of glib2.0 remains an unsolved problem for now, but libxt and cdebconf should be buildable without glib2.0.

Salsa CI, by Santiago Ruano Rincón

Santiago completed the initial RISC-V support (!523) in the Salsa CI pipeline. The main work started in July, but it was necessary to take into account some comments in the review (thanks to Ahmed!) and some final details in !534. riscv64 is the most recently supported port in Debian, and will be part of trixie. As its name suggests, the new build-riscv64 job makes it possible to test that a package successfully builds on the riscv64 architecture. The RISC-V runner (salsaci riscv64 runner 01) runs on a couple of machines generously provided by lab.rvperf.org. Debian Developers interested in running this job in their projects should enable the runner (salsaci riscv64 runner 01) in Settings / CI / Runners, and follow the instructions available at https://salsa.debian.org/salsa-ci-team/pipeline/#build-job-on-risc-v. Santiago also took part in discussions about how to optimize the build jobs, and reviewed !537, by Andrea Pappacoda, to make the build-source job only satisfy the Build-Depends and Build-Conflicts fields. Thanks a lot to him!

Miscellaneous contributions
  • Stefano submitted patches for BeautifulSoup to support the latest soupsieve and lxml.
  • Stefano uploaded pypy3 7.3.17, upgrading the cPython compatibility from 3.9 to 3.10. Then ran into a GCC-14-related regression, which had to be ignored for now as it s proving hard to fix.
  • Colin released libpipeline 1.5.8 and man-db 2.13.0; the latter included foundations allowing adding an autopkgtest for man-db.
  • Colin upgraded 19 Python packages to new upstream versions (fixing 5 CVEs), fixed several other build failures, fixed a Python 3.12 compatibility issue in zope.security, and made python-nacl build reproducibly.
  • Colin tracked down test failures in python-asyncssh and Ruby resulting from certain odd /etc/hosts configurations.
  • Carles upgraded the packages python-ring-doorbell and simplemonitor to new upstream versions.
  • Carles started discussions and implementation of a tool (still in early days) named "po-debconf-manager": a way for translators and reviewers to collaborate using git as a backend instead of a mailing list, and to submit the translations using salsa MRs. More information next month.
  • Carles (dog-fooding "po-debconf-manager") reviewed debconf templates translated by a collaborator.
  • Carles reviewed and submitted the translation of "apt".
  • Helmut sent 19 patches for improving cross building.
  • Helmut implemented the cross-exe-wrapper proposed by Simon McVittie for use with glib2.0.
  • Helmut detailed what it takes to make Perl s ExtUtils::PkgConfig suitable for cross building.
  • Helmut made the deletion of the root password work in debvm in all situations and implemented a test case using expect.
  • Anupa attended the Debian Publicity team meeting and is moderating and posting on the Debian Administrators LinkedIn group.
  • Thorsten uploaded package gutenprint to fix a FTBFS with gcc14 and package ipp-usb to fix a /usr-merge issue.
  • Santiago updated bzip2 to fix a long-standing bug that requested to include a pkg-config file. An important impact of this change is that it makes it possible to use Rust bindings for libbz2 by Sequoia, an implementation of OpenPGP.

8 September 2024

Thorsten Alteholz: My Debian Activities in August 2024

FTP master

This month I accepted 441 and rejected 15 packages. The overall number of packages that got accepted was 442. I am ashamed of some occurrences that happened this month and I apologize for this. Unfortunately I have no idea how to prevent this in the future without becoming a solo entertainer.

Debian LTS

This was my hundred-twenty-second month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. Unfortunately Bullseye was not handed over to LTS in August. So I only prepared new packages of asterisk, libvirt and tinyproxy and will upload them next month. Last but not least I did a week of FD this month.

Debian ELTS

This month was the seventy-third ELTS month. During my allocated time I uploaded or worked on: I also did a week of FD.

Debian Printing

This month I uploaded This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream or bugfix version of:

Debian Mobcom

The following packages have been prepared by the GSoC student Nathan: It was so much fun working with Nathan. Unfortunately GSoC is over now, but Nathan will continue working in Debian and become a Debian Maintainer.

misc

This month I uploaded new upstream or bugfix versions of: I also filed an RM bug against meep-openmpi. As Adrian made me aware, this package is no longer needed.

Jacob Adams: Linux's Bedtime Routine

How does Linux move from an awake machine to a hibernating one? How does it then manage to restore all state? These questions led me to read way too much C in trying to figure out how this particular hardware/software boundary is navigated. This investigation will be split into a few parts, with the first one going from invocation of hibernation to synchronizing all filesystems to disk. This article has been written using Linux version 6.9.9, the source of which can be found in many places, but can be navigated easily through the Bootlin Elixir Cross-Referencer: https://elixir.bootlin.com/linux/v6.9.9/source. Each code snippet will begin with a link to the above giving the file path and the line number of the beginning of the snippet.

A Starting Point for Investigation: /sys/power/state and /sys/power/disk

These two system files exist to allow debugging of hibernation, and thus control the exact state used directly. Writing specific values to the state file controls the exact sleep mode used and disk controls the specific hibernation mode1. This is extremely handy as an entry point to understand how these systems work, since we can just follow what happens when they are written to.
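Concretely, both files can be driven from a shell, which is exactly the debugging path this article follows (root required; the echo really will hibernate the machine, and the example outputs are typical values that vary by hardware and kernel config):
$ cat /sys/power/state
freeze mem disk
$ cat /sys/power/disk
[platform] shutdown reboot suspend test_resume
$ echo disk | sudo tee /sys/power/state   # the write that lands in hibernate() below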

Show and Store Functions

These two files are defined using the power_attr macro: kernel/power/power.h:80
#define power_attr(_name) \
static struct kobj_attribute _name##_attr = {	\
	.attr	= {				\
		.name = __stringify(_name),	\
		.mode = 0644,			\
	},					\
	.show	= _name##_show,			\
	.store	= _name##_store,		\
}
show is called on reads and store on writes. state_show is a little boring for our purposes, as it just prints all the available sleep states. kernel/power/main.c:657
/*
 * state - control system sleep states.
 *
 * show() returns available sleep state labels, which may be "mem", "standby",
 * "freeze" and "disk" (hibernation).
 * See Documentation/admin-guide/pm/sleep-states.rst for a description of
 * what they mean.
 *
 * store() accepts one of those strings, translates it into the proper
 * enumerated value, and initiates a suspend transition.
 */
static ssize_t state_show(struct kobject *kobj, struct kobj_attribute *attr,
			  char *buf)
{
	char *s = buf;
#ifdef CONFIG_SUSPEND
	suspend_state_t i;

	for (i = PM_SUSPEND_MIN; i < PM_SUSPEND_MAX; i++)
		if (pm_states[i])
			s += sprintf(s,"%s ", pm_states[i]);

#endif
	if (hibernation_available())
		s += sprintf(s, "disk ");
	if (s != buf)
		/* convert the last space to a newline */
		*(s-1) = '\n';
	return (s - buf);
}
state_store, however, provides our entry point: if the string disk is written to the state file, it calls hibernate(). kernel/power/main.c:715
static ssize_t state_store(struct kobject *kobj, struct kobj_attribute *attr,
			   const char *buf, size_t n)
{
	suspend_state_t state;
	int error;

	error = pm_autosleep_lock();
	if (error)
		return error;

	if (pm_autosleep_state() > PM_SUSPEND_ON) {
		error = -EBUSY;
		goto out;
	}

	state = decode_state(buf, n);
	if (state < PM_SUSPEND_MAX) {
		if (state == PM_SUSPEND_MEM)
			state = mem_sleep_current;

		error = pm_suspend(state);
	} else if (state == PM_SUSPEND_MAX) {
		error = hibernate();
	} else {
		error = -EINVAL;
	}

 out:
	pm_autosleep_unlock();
	return error ? error : n;
}
kernel/power/main.c:688
static suspend_state_t decode_state(const char *buf, size_t n)
{
#ifdef CONFIG_SUSPEND
	suspend_state_t state;
#endif
	char *p;
	int len;

	p = memchr(buf, '\n', n);
	len = p ? p - buf : n;

	/* Check hibernation first. */
	if (len == 4 && str_has_prefix(buf, "disk"))
		return PM_SUSPEND_MAX;

#ifdef CONFIG_SUSPEND
	for (state = PM_SUSPEND_MIN; state < PM_SUSPEND_MAX; state++) {
		const char *label = pm_states[state];

		if (label && len == strlen(label) && !strncmp(buf, label, len))
			return state;
	}
#endif

	return PM_SUSPEND_ON;
}
Could we have figured this out just via function names? Sure, but this way we know for sure that nothing else is happening before this function is called.

Autosleep

Our first detour is into the autosleep system. When checking the state above, you may notice that the kernel grabs the pm_autosleep_lock before checking the current state. autosleep is a mechanism originally from Android that sends the entire system to either suspend or hibernate whenever it is not actively working on anything. This is not enabled for most desktop configurations, since it's primarily for mobile systems and inverts the standard suspend and hibernate interactions. This system is implemented as a workqueue2 that checks the current number of wakeup events, processes and drivers that need to run3, and if there aren't any, then the system is put into the autosleep state, typically suspend. However, it could be hibernate if configured that way via /sys/power/autosleep in a similar manner to using /sys/power/state to manually enable hibernation. kernel/power/main.c:841
static ssize_t autosleep_store(struct kobject *kobj,
			       struct kobj_attribute *attr,
			       const char *buf, size_t n)
{
	suspend_state_t state = decode_state(buf, n);
	int error;

	if (state == PM_SUSPEND_ON
	    && strcmp(buf, "off") && strcmp(buf, "off\n"))
		return -EINVAL;

	if (state == PM_SUSPEND_MEM)
		state = mem_sleep_current;

	error = pm_autosleep_set_state(state);
	return error ? error : n;
}

power_attr(autosleep);
#endif /* CONFIG_PM_AUTOSLEEP */
kernel/power/autosleep.c:24
static DEFINE_MUTEX(autosleep_lock);
static struct wakeup_source *autosleep_ws;

static void try_to_suspend(struct work_struct *work)
{
	unsigned int initial_count, final_count;

	if (!pm_get_wakeup_count(&initial_count, true))
		goto out;

	mutex_lock(&autosleep_lock);

	if (!pm_save_wakeup_count(initial_count) ||
		system_state != SYSTEM_RUNNING) {
		mutex_unlock(&autosleep_lock);
		goto out;
	}

	if (autosleep_state == PM_SUSPEND_ON) {
		mutex_unlock(&autosleep_lock);
		return;
	}
	if (autosleep_state >= PM_SUSPEND_MAX)
		hibernate();
	else
		pm_suspend(autosleep_state);

	mutex_unlock(&autosleep_lock);

	if (!pm_get_wakeup_count(&final_count, false))
		goto out;

	/*
	 * If the wakeup occurred for an unknown reason, wait to prevent the
	 * system from trying to suspend and waking up in a tight loop.
	 */
	if (final_count == initial_count)
		schedule_timeout_uninterruptible(HZ / 2);

 out:
	queue_up_suspend_work();
}

static DECLARE_WORK(suspend_work, try_to_suspend);

void queue_up_suspend_work(void)
{
	if (autosleep_state > PM_SUSPEND_ON)
		queue_work(autosleep_wq, &suspend_work);
}
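In shell terms, pointing autosleep at hibernation as described above looks like this (requires a kernel built with CONFIG_PM_AUTOSLEEP; the strings are the same ones decode_state parses):
$ echo disk | sudo tee /sys/power/autosleep   # opportunistically hibernate when idle
$ echo off | sudo tee /sys/power/autosleep    # turn autosleep back off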

The Steps of Hibernation

Hibernation Kernel Config

It's important to note that most of the hibernate-specific functions below do nothing unless you've defined CONFIG_HIBERNATION in your Kconfig4. As an example, hibernate itself is defined as the following if CONFIG_HIBERNATION is not set. include/linux/suspend.h:407
static inline int hibernate(void) { return -ENOSYS; }
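Whether the real implementation or this stub gets compiled in can be checked against your running kernel's configuration (the config file path is distribution-dependent; /proc/config.gz is an alternative where enabled):
$ grep CONFIG_HIBERNATION= /boot/config-$(uname -r)
CONFIG_HIBERNATION=y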

Check if Hibernation is Available

We begin by confirming that we actually can perform hibernation, via the hibernation_available function. kernel/power/hibernate.c:742
if (!hibernation_available()) {
	pm_pr_dbg("Hibernation not available.\n");
	return -EPERM;
}
kernel/power/hibernate.c:92
bool hibernation_available(void)
{
	return nohibernate == 0 &&
		!security_locked_down(LOCKDOWN_HIBERNATION) &&
		!secretmem_active() && !cxl_mem_active();
}
nohibernate is controlled by the kernel command line; it's set via either nohibernate or hibernate=no. security_locked_down is a hook for Linux Security Modules to prevent hibernation. This is used to prevent hibernating to an unencrypted storage device, as specified in the manual page kernel_lockdown(7). Interestingly, either level of lockdown, integrity or confidentiality, locks down hibernation because with the ability to hibernate you can extract basically anything from memory and even reboot into a modified kernel image. secretmem_active checks whether there is any active use of memfd_secret, and if so it prevents hibernation. memfd_secret returns a file descriptor that can be mapped into a process but is specifically unmapped from the kernel's memory space. Hibernating with memory that not even the kernel is supposed to access would expose that memory to whoever could access the hibernation image. This particular feature of secret memory was apparently controversial, though not as controversial as performance concerns around fragmentation when unmapping kernel memory (which did not end up being a real problem). cxl_mem_active just checks whether any CXL memory is active. A full explanation is provided in the commit introducing this check, but there's also a shortened explanation from cxl_mem_probe that sets the relevant flag when initializing a CXL memory device. drivers/cxl/mem.c:186
* The kernel may be operating out of CXL memory on this device,
* there is no spec defined way to determine whether this device
* preserves contents over suspend, and there is no simple way
* to arrange for the suspend image to avoid CXL memory which
* would setup a circular dependency between PCI resume and save
* state restoration.

Check Compression

The next check is for whether compression support is enabled, and if so whether the requested algorithm is enabled. kernel/power/hibernate.c:747
/*
 * Query for the compression algorithm support if compression is enabled.
 */
if (!nocompress) {
	strscpy(hib_comp_algo, hibernate_compressor, sizeof(hib_comp_algo));
	if (crypto_has_comp(hib_comp_algo, 0, 0) != 1) {
		pr_err("%s compression is not available\n", hib_comp_algo);
		return -EOPNOTSUPP;
	}
}
The nocompress flag is set via the hibernate command line parameter, setting hibernate=nocompress. If compression is enabled, then hibernate_compressor is copied to hib_comp_algo. This synchronizes the current requested compression setting (hibernate_compressor) with the current compression setting (hib_comp_algo). Both values are character arrays of size CRYPTO_MAX_ALG_NAME (128 in this kernel). kernel/power/hibernate.c:50
static char hibernate_compressor[CRYPTO_MAX_ALG_NAME] = CONFIG_HIBERNATION_DEF_COMP;
/*
 * Compression/decompression algorithm to be used while saving/loading
 * image to/from disk. This would later be used in 'kernel/power/swap.c'
 * to allocate comp streams.
 */
char hib_comp_algo[CRYPTO_MAX_ALG_NAME];
hibernate_compressor defaults to lzo if that algorithm is enabled, otherwise to lz4 if enabled5. It can be overwritten using the hibernate.compressor setting to either lzo or lz4. kernel/power/Kconfig:95
choice
	prompt "Default compressor"
	default HIBERNATION_COMP_LZO
	depends on HIBERNATION
config HIBERNATION_COMP_LZO
	bool "lzo"
	depends on CRYPTO_LZO
config HIBERNATION_COMP_LZ4
	bool "lz4"
	depends on CRYPTO_LZ4
endchoice
config HIBERNATION_DEF_COMP
	string
	default "lzo" if HIBERNATION_COMP_LZO
	default "lz4" if HIBERNATION_COMP_LZ4
	help
	  Default compressor to be used for hibernation.
kernel/power/hibernate.c:1425
static const char * const comp_alg_enabled[] = {
#if IS_ENABLED(CONFIG_CRYPTO_LZO)
	COMPRESSION_ALGO_LZO,
#endif
#if IS_ENABLED(CONFIG_CRYPTO_LZ4)
	COMPRESSION_ALGO_LZ4,
#endif
};
static int hibernate_compressor_param_set(const char *compressor,
		const struct kernel_param *kp)
{
	unsigned int sleep_flags;
	int index, ret;

	sleep_flags = lock_system_sleep();

	index = sysfs_match_string(comp_alg_enabled, compressor);
	if (index >= 0) {
		ret = param_set_copystring(comp_alg_enabled[index], kp);
		if (!ret)
			strscpy(hib_comp_algo, comp_alg_enabled[index],
				sizeof(hib_comp_algo));
	} else {
		ret = index;
	}

	unlock_system_sleep(sleep_flags);

	if (ret)
		pr_debug("Cannot set specified compressor %s\n",
			 compressor);

	return ret;
}

static const struct kernel_param_ops hibernate_compressor_param_ops = {
	.set    = hibernate_compressor_param_set,
	.get    = param_get_string,
};

static struct kparam_string hibernate_compressor_param_string = {
	.maxlen = sizeof(hibernate_compressor),
	.string = hibernate_compressor,
};
We then check whether the requested algorithm is supported via crypto_has_comp. If not, we bail out of the whole operation with EOPNOTSUPP. As part of crypto_has_comp we perform any needed initialization of the algorithm, loading kernel modules and running initialization code as needed6.
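Tying the two settings together: the module parameter shown above is what makes the hibernate.compressor=lzo or hibernate.compressor=lz4 kernel command line option work, and the same parameter should be visible at runtime under the standard module-parameter sysfs path (illustrative output):
$ cat /sys/module/hibernate/parameters/compressor
lzo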

Grab Locks

The next step is to grab the sleep and hibernation locks via lock_system_sleep and hibernate_acquire. kernel/power/hibernate.c:758
sleep_flags = lock_system_sleep();
/* The snapshot device should not be opened while we're running */
if (!hibernate_acquire()) {
	error = -EBUSY;
	goto Unlock;
}
First, lock_system_sleep marks the current thread as not freezable, which will be important later7. It then grabs the system_transition_mutex, which locks taking snapshots or modifying how they are taken, resuming from a hibernation image, entering any suspend state, or rebooting.

The GFP Mask

The kernel also issues a warning if the gfp mask is changed via either pm_restore_gfp_mask or pm_restrict_gfp_mask without holding the system_transition_mutex. GFP flags tell the kernel how it is permitted to handle a request for memory. include/linux/gfp_types.h:12
 * GFP flags are commonly used throughout Linux to indicate how memory
 * should be allocated.  The GFP acronym stands for get_free_pages(),
 * the underlying memory allocation function.  Not every GFP flag is
 * supported by every function which may allocate memory.
In the case of hibernation specifically we care about the IO and FS flags, which are reclaim operators, ways the system is permitted to attempt to free up memory in order to satisfy a specific request for memory. include/linux/gfp_types.h:176
 * Reclaim modifiers
 * -----------------
 * Please note that all the following flags are only applicable to sleepable
 * allocations (e.g. %GFP_NOWAIT and %GFP_ATOMIC will ignore them).
 *
 * %__GFP_IO can start physical IO.
 *
 * %__GFP_FS can call down to the low-level FS. Clearing the flag avoids the
 * allocator recursing into the filesystem which might already be holding
 * locks.
gfp_allowed_mask sets which flags are permitted to be set at the current time. As the comment below outlines, preventing these flags from being set avoids situations where the kernel needs to do I/O to allocate memory (e.g. read/writing swap8) but the devices it needs to read/write to/from are not currently available. kernel/power/main.c:24
/*
 * The following functions are used by the suspend/hibernate code to temporarily
 * change gfp_allowed_mask in order to avoid using I/O during memory allocations
 * while devices are suspended.  To avoid races with the suspend/hibernate code,
 * they should always be called with system_transition_mutex held
 * (gfp_allowed_mask also should only be modified with system_transition_mutex
 * held, unless the suspend/hibernate code is guaranteed not to run in parallel
 * with that modification).
 */
static gfp_t saved_gfp_mask;
void pm_restore_gfp_mask(void)
{
	WARN_ON(!mutex_is_locked(&system_transition_mutex));
	if (saved_gfp_mask) {
		gfp_allowed_mask = saved_gfp_mask;
		saved_gfp_mask = 0;
	}
}
void pm_restrict_gfp_mask(void)
{
	WARN_ON(!mutex_is_locked(&system_transition_mutex));
	WARN_ON(saved_gfp_mask);
	saved_gfp_mask = gfp_allowed_mask;
	gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS);
}
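To make the reclaim modifiers concrete, compare two standard allocation flavors; masking out __GFP_IO and __GFP_FS as above effectively demotes every allocation to the second, I/O-free kind (kernel-internal code, shown purely as illustration):

	/* may sleep, start physical I/O, and call into filesystems to reclaim */
	void *a = kmalloc(128, GFP_KERNEL);
	/* may sleep and reclaim, but must not start I/O or enter filesystems */
	void *b = kmalloc(128, GFP_NOIO);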

Sleep Flags

After grabbing the system_transition_mutex the kernel then returns and captures the previous state of the thread's flags in sleep_flags. This is used later to remove PF_NOFREEZE if it wasn't previously set on the current thread. kernel/power/main.c:52
unsigned int lock_system_sleep(void)
{
	unsigned int flags = current->flags;
	current->flags |= PF_NOFREEZE;
	mutex_lock(&system_transition_mutex);
	return flags;
}
EXPORT_SYMBOL_GPL(lock_system_sleep);
include/linux/sched.h:1633
#define PF_NOFREEZE		0x00008000	/* This thread should not be frozen */
Then we grab the hibernate-specific semaphore to ensure no one can open a snapshot or resume from it while we perform hibernation. Additionally, this lock is used to prevent hibernate_quiet_exec, which is used by the nvdimm driver to activate its firmware with all processes and devices frozen, ensuring it is the only thing running at that time9. kernel/power/hibernate.c:82
bool hibernate_acquire(void)
{
	return atomic_add_unless(&hibernate_atomic, -1, 0);
}

Prepare Console

The kernel next calls pm_prepare_console. This function only does anything if CONFIG_VT_CONSOLE_SLEEP has been set. This prepares the virtual terminal for a suspend state, switching away to a console used only for the suspend state if needed. kernel/power/console.c:130
void pm_prepare_console(void)
{
	if (!pm_vt_switch())
		return;

	orig_fgconsole = vt_move_to_console(SUSPEND_CONSOLE, 1);
	if (orig_fgconsole < 0)
		return;

	orig_kmsg = vt_kmsg_redirect(SUSPEND_CONSOLE);
	return;
}
The first thing is to check whether we actually need to switch the VT. kernel/power/console.c:94
/*
 * There are three cases when a VT switch on suspend/resume are required:
 *   1) no driver has indicated a requirement one way or another, so preserve
 *      the old behavior
 *   2) console suspend is disabled, we want to see debug messages across
 *      suspend/resume
 *   3) any registered driver indicates it needs a VT switch
 *
 * If none of these conditions is present, meaning we have at least one driver
 * that doesn't need the switch, and none that do, we can avoid it to make
 * resume look a little prettier (and suspend too, but that's usually hidden,
 * e.g. when closing the lid on a laptop).
 */
static bool pm_vt_switch(void)
{
	struct pm_vt_switch *entry;
	bool ret = true;

	mutex_lock(&vt_switch_mutex);
	if (list_empty(&pm_vt_switch_list))
		goto out;

	if (!console_suspend_enabled)
		goto out;

	list_for_each_entry(entry, &pm_vt_switch_list, head) {
		if (entry->required)
			goto out;
	}

	ret = false;
out:
	mutex_unlock(&vt_switch_mutex);
	return ret;
}
There is an explanation of the conditions under which a switch is performed in the comment above the function, but we'll also walk through the steps here. Firstly we grab the vt_switch_mutex to ensure nothing will modify the list while we're looking at it. We then examine the pm_vt_switch_list. This list is used to indicate the drivers that require a switch during suspend. They register this requirement, or the lack thereof, via pm_vt_switch_required. kernel/power/console.c:31
/**
 * pm_vt_switch_required - indicate VT switch at suspend requirements
 * @dev: device
 * @required: if true, caller needs VT switch at suspend/resume time
 *
 * The different console drivers may or may not require VT switches across
 * suspend/resume, depending on how they handle restoring video state and
 * what may be running.
 *
 * Drivers can indicate support for switchless suspend/resume, which can
 * save time and flicker, by using this routine and passing 'false' as
 * the argument.  If any loaded driver needs VT switching, or the
 * no_console_suspend argument has been passed on the command line, VT
 * switches will occur.
 */
void pm_vt_switch_required(struct device *dev, bool required)
Next, we check console_suspend_enabled. This is set to false by the kernel parameter no_console_suspend, but defaults to true. Finally, if there are any entries in the pm_vt_switch_list, then we check to see if any of them require a VT switch. Only if none of these conditions apply do we return false. If a VT switch is in fact required, then we move first the currently active virtual terminal/console10 (vt_move_to_console) and then the current location of kernel messages (vt_kmsg_redirect) to the SUSPEND_CONSOLE. The SUSPEND_CONSOLE is the last entry in the list of possible consoles, and appears to just be a black hole to throw away messages. kernel/power/console.c:16
#define SUSPEND_CONSOLE	(MAX_NR_CONSOLES-1)
Interestingly, these are separate functions because you can use TIOCL_SETKMSGREDIRECT (an ioctl11) to send kernel messages to a specific virtual terminal, but by default it's the same as the currently active console. The locations of the previously active console and the previous kernel messages location are stored in orig_fgconsole and orig_kmsg, to restore the state of the console and kernel messages after the machine wakes up again. Interestingly, this means orig_fgconsole also ends up storing any errors, so it has to be checked to ensure it's not less than zero before we try to do anything with the kernel messages on both suspend and resume. drivers/tty/vt/vt_ioctl.c:1268
/* Perform a kernel triggered VT switch for suspend/resume */
static int disable_vt_switch;
int vt_move_to_console(unsigned int vt, int alloc)
{
	int prev;

	console_lock();
	/* Graphics mode - up to X */
	if (disable_vt_switch) {
		console_unlock();
		return 0;
	}
	prev = fg_console;

	if (alloc && vc_allocate(vt)) {
		/* we can't have a free VC for now. Too bad,
		 * we don't want to mess the screen for now. */
		console_unlock();
		return -ENOSPC;
	}

	if (set_console(vt)) {
		/*
		 * We're unable to switch to the SUSPEND_CONSOLE.
		 * Let the calling function know so it can decide
		 * what to do.
		 */
		console_unlock();
		return -EIO;
	}
	console_unlock();

	if (vt_waitactive(vt + 1)) {
		pr_debug("Suspend: Can't switch VCs.");
		return -EINTR;
	}
	return prev;
}
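As an aside, the TIOCL_SETKMSGREDIRECT mechanism mentioned earlier is reachable from userspace through the TIOCLINUX ioctl, whose argument starts with a subcode byte followed by the target VT number (see console_ioctl(4)). A minimal sketch, with error handling elided and requiring CAP_SYS_ADMIN:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/tiocl.h>

int main(void)
{
	int fd = open("/dev/tty0", O_RDWR);
	/* first byte: subcode; second byte: console to redirect kernel messages to */
	char arg[2] = { TIOCL_SETKMSGREDIRECT, 5 };
	ioctl(fd, TIOCLINUX, arg);
	return 0;
}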
Unlike most other locking functions we've seen so far, console_lock needs to be careful to ensure nothing else is panicking and needs to dump to the console before grabbing the semaphore for the console and setting a couple flags.

Panics

Panics are tracked via an atomic integer set to the id of the processor currently panicking. kernel/printk/printk.c:2649
/**
 * console_lock - block the console subsystem from printing
 *
 * Acquires a lock which guarantees that no consoles will
 * be in or enter their write() callback.
 *
 * Can sleep, returns nothing.
 */
void console_lock(void)
{
	might_sleep();

	/* On panic, the console_lock must be left to the panic cpu. */
	while (other_cpu_in_panic())
		msleep(1000);

	down_console_sem();
	console_locked = 1;
	console_may_schedule = 1;
}
EXPORT_SYMBOL(console_lock);
kernel/printk/printk.c:362
/*
 * Return true if a panic is in progress on a remote CPU.
 *
 * On true, the local CPU should immediately release any printing resources
 * that may be needed by the panic CPU.
 */
bool other_cpu_in_panic(void)
{
	return (panic_in_progress() && !this_cpu_in_panic());
}
kernel/printk/printk.c:345
static bool panic_in_progress(void)
{
	return unlikely(atomic_read(&panic_cpu) != PANIC_CPU_INVALID);
}
kernel/printk/printk.c:350
/* Return true if a panic is in progress on the current CPU. */
bool this_cpu_in_panic(void)
{
	/*
	 * We can use raw_smp_processor_id() here because it is impossible for
	 * the task to be migrated to the panic_cpu, or away from it. If
	 * panic_cpu has already been set, and we're not currently executing on
	 * that CPU, then we never will be.
	 */
	return unlikely(atomic_read(&panic_cpu) == raw_smp_processor_id());
}
console_locked is a debug value, used to indicate that the lock should be held, and our first indication that this whole virtual terminal system is more complex than might initially be expected. kernel/printk/printk.c:373
/*
 * This is used for debugging the mess that is the VT code by
 * keeping track if we have the console semaphore held. It's
 * definitely not the perfect debug tool (we don't know if _WE_
 * hold it and are racing, but it helps tracking those weird code
 * paths in the console code where we end up in places I want
 * locked without the console semaphore held).
 */
static int console_locked;
console_may_schedule is used to see if we are permitted to sleep and schedule other work while we hold this lock. As we'll see later, the virtual terminal subsystem is not re-entrant, so there are all sorts of hacks in here to ensure we don't leave important code sections that can't be safely resumed.

Disable VT Switch As the comment below lays out, when another program is handling the graphical display anyway, there's no need to do any of this, so the kernel provides a switch to turn the whole thing off. Interestingly, this appears to only be used by three drivers, so the specific hardware support required must not be particularly common.
drivers/gpu/drm/omapdrm/dss
drivers/video/fbdev/geode
drivers/video/fbdev/omap2
drivers/tty/vt/vt_ioctl.c:1308
/*
 * Normally during a suspend, we allocate a new console and switch to it.
 * When we resume, we switch back to the original console.  This switch
 * can be slow, so on systems where the framebuffer can handle restoration
 * of video registers anyways, there's little point in doing the console
 * switch.  This function allows you to disable it by passing it '0'.
 */
void pm_set_vt_switch(int do_switch)
{
	console_lock();
	disable_vt_switch = !do_switch;
	console_unlock();
}
EXPORT_SYMBOL(pm_set_vt_switch);
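For illustration, a driver whose hardware can restore its own video state might disable the switch once during initialization. This is a hypothetical sketch, not code from any of the three drivers listed above:
/* Hypothetical framebuffer probe: the hardware reprograms its own
 * video registers on resume, so the suspend-console switch is pure
 * overhead and can be skipped. */
static int example_fb_probe(struct platform_device *pdev)
{
	pm_set_vt_switch(0);
	/* ... normal framebuffer setup ... */
	return 0;
}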
The rest of the vt_move_to_console function is pretty normal, however: it simply allocates space if needed to create the requested virtual terminal, and then sets the current virtual terminal via set_console.

Virtual Terminal Set Console With set_console, we begin (as if we hadn't already) to enter the madness that is the virtual terminal subsystem. As mentioned previously, modifications to its state must be made very carefully, as other work happening at the same time could create a complete mess. All this to say: calling set_console does not actually perform any work to change the state of the current console. Instead it indicates what changes it wants and then schedules that work. drivers/tty/vt/vt.c:3153
int set_console(int nr)
{
	struct vc_data *vc = vc_cons[fg_console].d;

	if (!vc_cons_allocated(nr) || vt_dont_switch ||
		(vc->vt_mode.mode == VT_AUTO && vc->vc_mode == KD_GRAPHICS)) {
		/*
		 * Console switch will fail in console_callback() or
		 * change_console() so there is no point scheduling
		 * the callback
		 *
		 * Existing set_console() users don't check the return
		 * value so this shouldn't break anything
		 */
		return -EINVAL;
	}

	want_console = nr;
	schedule_console_callback();
	return 0;
}
The check for vc->vc_mode == KD_GRAPHICS is where most end-user graphical desktops will bail out of this change, as they're in graphics mode and don't need to switch away to the suspend console. vt_dont_switch is a flag used by the ioctls[11] VT_LOCKSWITCH and VT_UNLOCKSWITCH to prevent the system from switching virtual terminal devices when the user has explicitly locked it. VT_AUTO is a flag indicating that automatic virtual terminal switching is enabled[12], and thus deliberate switching to a suspend terminal is not required. However, if you do run your machine from a virtual terminal, then we indicate to the system that we want to change to the requested virtual terminal via the want_console variable and schedule a callback via schedule_console_callback. drivers/tty/vt/vt.c:315
void schedule_console_callback(void)
{
	schedule_work(&console_work);
}
console_work is a work item executed asynchronously via a workqueue[2].
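The two are tied together by a declaration in drivers/tty/vt/vt.c of roughly this form:
static void console_callback(struct work_struct *ignored);
/* Run console_callback whenever console_work is scheduled. */
static DECLARE_WORK(console_work, console_callback);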

Console Callback drivers/tty/vt/vt.c:3109
/*
 * This is the console switching callback.
 *
 * Doing console switching in a process context allows
 * us to do the switches asynchronously (needed when we want
 * to switch due to a keyboard interrupt).  Synchronization
 * with other console code and prevention of re-entrancy is
 * ensured with console_lock.
 */
static void console_callback(struct work_struct *ignored)
{
	console_lock();

	if (want_console >= 0) {
		if (want_console != fg_console &&
		    vc_cons_allocated(want_console)) {
			hide_cursor(vc_cons[fg_console].d);
			change_console(vc_cons[want_console].d);
			/* we only changed when the console had already
			   been allocated - a new console is not created
			   in an interrupt routine */
		}
		want_console = -1;
	}
...
console_callback first looks to see whether a console change is wanted via want_console, and then changes to it if it's not the current console and has already been allocated. We first remove any cursor state with hide_cursor. drivers/tty/vt/vt.c:841
static void hide_cursor(struct vc_data *vc)
{
	if (vc_is_sel(vc))
		clear_selection();

	vc->vc_sw->con_cursor(vc, false);
	hide_softcursor(vc);
}
A full dive into the tty driver is a task for another time, but this should give a general sense of how this system interacts with hibernation.

Notify Power Management Call Chain kernel/power/hibernate.c:767
pm_notifier_call_chain_robust(PM_HIBERNATION_PREPARE, PM_POST_HIBERNATION)
This will call a chain of power management callbacks, passing PM_HIBERNATION_PREPARE first; PM_POST_HIBERNATION is passed later on startup, or immediately if one of the callbacks returns an error (rolling back the ones already called). kernel/power/main.c:98
int pm_notifier_call_chain_robust(unsigned long val_up, unsigned long val_down)
{
	int ret;

	ret = blocking_notifier_call_chain_robust(&pm_chain_head, val_up, val_down, NULL);

	return notifier_to_errno(ret);
}
The power management notifier is a blocking notifier chain, which means it has the following properties. include/linux/notifier.h:23
 *	Blocking notifier chains: Chain callbacks run in process context.
 *		Callouts are allowed to block.
The callback chain is a linked list with each entry containing a priority and a function to call. The function technically takes in a data value, but it is always NULL for the power management chain. include/linux/notifier.h:49
struct notifier_block;

typedef	int (*notifier_fn_t)(struct notifier_block *nb,
			unsigned long action, void *data);

struct notifier_block {
	notifier_fn_t notifier_call;
	struct notifier_block __rcu *next;
	int priority;
};
The head of the linked list is protected by a read-write semaphore. include/linux/notifier.h:65
struct blocking_notifier_head {
	struct rw_semaphore rwsem;
	struct notifier_block __rcu *head;
};
Because the list is prioritized, appending to it requires walking it until an item with lower[13] priority is found, and inserting the new item before that one. kernel/notifier.c:252
/*
 *	Blocking notifier chain routines.  All access to the chain is
 *	synchronized by an rwsem.
 */
static int __blocking_notifier_chain_register(struct blocking_notifier_head *nh,
					      struct notifier_block *n,
					      bool unique_priority)
{
	int ret;

	/*
	 * This code gets used during boot-up, when task switching is
	 * not yet working and interrupts must remain disabled.  At
	 * such times we must not call down_write().
	 */
	if (unlikely(system_state == SYSTEM_BOOTING))
		return notifier_chain_register(&nh->head, n, unique_priority);

	down_write(&nh->rwsem);
	ret = notifier_chain_register(&nh->head, n, unique_priority);
	up_write(&nh->rwsem);
	return ret;
}
kernel/notifier.c:20
/*
 *	Notifier chain core routines.  The exported routines below
 *	are layered on top of these, with appropriate locking added.
 */
static int notifier_chain_register(struct notifier_block **nl,
				   struct notifier_block *n,
				   bool unique_priority)
{
	while ((*nl) != NULL) {
		if (unlikely((*nl) == n)) {
			WARN(1, "notifier callback %ps already registered",
			     n->notifier_call);
			return -EEXIST;
		}
		if (n->priority > (*nl)->priority)
			break;
		if (n->priority == (*nl)->priority && unique_priority)
			return -EBUSY;
		nl = &((*nl)->next);
	}
	n->next = *nl;
	rcu_assign_pointer(*nl, n);
	trace_notifier_register((void *)n->notifier_call);
	return 0;
}
Each callback can return one of a series of options. include/linux/notifier.h:18
#define NOTIFY_DONE		0x0000		/* Don't care */
#define NOTIFY_OK		0x0001		/* Suits me */
#define NOTIFY_STOP_MASK	0x8000		/* Don't call further */
#define NOTIFY_BAD		(NOTIFY_STOP_MASK | 0x0002)
						/* Bad/Veto action */
When notifying the chain, if a function returns STOP or BAD, then the parts of the chain already called are invoked again with PM_POST_HIBERNATION[14] and an error is returned. kernel/notifier.c:107
/**
 * notifier_call_chain_robust - Inform the registered notifiers about an event
 *                              and rollback on error.
 * @nl:		Pointer to head of the blocking notifier chain
 * @val_up:	Value passed unmodified to the notifier function
 * @val_down:	Value passed unmodified to the notifier function when recovering
 *              from an error on @val_up
 * @v:		Pointer passed unmodified to the notifier function
 *
 * NOTE:	It is important the @nl chain doesn't change between the two
 *		invocations of notifier_call_chain() such that we visit the
 *		exact same notifier callbacks; this rules out any RCU usage.
 *
 * Return:	the return value of the @val_up call.
 */
static int notifier_call_chain_robust(struct notifier_block **nl,
				     unsigned long val_up, unsigned long val_down,
				     void *v)
{
	int ret, nr = 0;

	ret = notifier_call_chain(nl, val_up, v, -1, &nr);
	if (ret & NOTIFY_STOP_MASK)
		notifier_call_chain(nl, val_down, v, nr-1, NULL);

	return ret;
}
Each of these callbacks tends to be quite driver-specific, so we'll cease discussion of this here.
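Still, to make the mechanism concrete: a driver hooks into this chain with register_pm_notifier. Below is a minimal sketch of a hypothetical module (the names are made up; register_pm_notifier and the PM_* actions are the real interface):
#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/suspend.h>

/* Quiesce hypothetical hardware before the hibernation image is
 * created; resume normal operation afterwards or on rollback. */
static int example_pm_callback(struct notifier_block *nb,
			       unsigned long action, void *data)
{
	switch (action) {
	case PM_HIBERNATION_PREPARE:
		/* stop DMA, flush device caches, etc. */
		return NOTIFY_OK;
	case PM_POST_HIBERNATION:
		/* restart normal operation */
		return NOTIFY_OK;
	}
	return NOTIFY_DONE;	/* not interested in other events */
}

static struct notifier_block example_pm_nb = {
	.notifier_call = example_pm_callback,
	.priority = 0,	/* higher numbers are called earlier */
};

static int __init example_init(void)
{
	return register_pm_notifier(&example_pm_nb);
}

static void __exit example_exit(void)
{
	unregister_pm_notifier(&example_pm_nb);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");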

Sync Filesystems The next step is to ensure all filesystems have been synchronized to disk. This is performed via a simple helper function that times how long the full synchronization operation, ksys_sync, takes. kernel/power/main.c:69
void ksys_sync_helper(void)
{
	ktime_t start;
	long elapsed_msecs;

	start = ktime_get();
	ksys_sync();
	elapsed_msecs = ktime_to_ms(ktime_sub(ktime_get(), start));
	pr_info("Filesystems sync: %ld.%03ld seconds\n",
		elapsed_msecs / MSEC_PER_SEC, elapsed_msecs % MSEC_PER_SEC);
}
EXPORT_SYMBOL_GPL(ksys_sync_helper);
ksys_sync wakes and instructs a set of flusher threads to write out every filesystem: first their inodes[15], then the full filesystem, and finally all block devices, to ensure all pages are written out to disk. fs/sync.c:87
/*
 * Sync everything. We start by waking flusher threads so that most of
 * writeback runs on all devices in parallel. Then we sync all inodes reliably
 * which effectively also waits for all flusher threads to finish doing
 * writeback. At this point all data is on disk so metadata should be stable
 * and we tell filesystems to sync their metadata via ->sync_fs() calls.
 * Finally, we writeout all block devices because some filesystems (e.g. ext2)
 * just write metadata (such as inodes or bitmaps) to block device page cache
 * and do not sync it on their own in ->sync_fs().
 */
void ksys_sync(void)
{
	int nowait = 0, wait = 1;

	wakeup_flusher_threads(WB_REASON_SYNC);
	iterate_supers(sync_inodes_one_sb, NULL);
	iterate_supers(sync_fs_one_sb, &nowait);
	iterate_supers(sync_fs_one_sb, &wait);
	sync_bdevs(false);
	sync_bdevs(true);
	if (unlikely(laptop_mode))
		laptop_sync_completion();
}
It follows an interesting pattern of using iterate_supers to run both sync_inodes_one_sb and then sync_fs_one_sb on each known filesystem[16]. It also calls both sync_fs_one_sb and sync_bdevs twice: first without waiting for any operations to complete, and then again waiting for completion[17]. When laptop_mode is enabled, the system triggers an additional full synchronization after a specified period of time elapses without any disk activity. mm/page-writeback.c:111
/*
 * Flag that puts the machine in "laptop mode". Doubles as a timeout in jiffies:
 * a full sync is triggered after this time elapses without any disk activity.
 */
int laptop_mode;
EXPORT_SYMBOL(laptop_mode);
However, when running a filesystem synchronization operation, the system will add an additional timer to schedule more writes after the laptop_mode delay. We don't want the state of the system to change at all while performing hibernation, so we cancel those timers. mm/page-writeback.c:2198
/*
 * We're in laptop mode and we've just synced. The sync's writes will have
 * caused another writeback to be scheduled by laptop_io_completion.
 * Nothing needs to be written back anymore, so we unschedule the writeback.
 */
void laptop_sync_completion(void)
{
	struct backing_dev_info *bdi;

	rcu_read_lock();

	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list)
		del_timer(&bdi->laptop_mode_wb_timer);

	rcu_read_unlock();
}
As a side note, the ksys_sync function is simply called when the system call sync is used. fs/sync.c:111
SYSCALL_DEFINE0(sync)
{
	ksys_sync();
	return 0;
}
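So the smallest userspace program that exercises everything in this section is just a call to sync(2):
#include <unistd.h>

int main(void)
{
	sync();	/* enters the kernel via SYSCALL_DEFINE0(sync) above */
	return 0;
}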

The End of Preparation With that, the system has finished preparations for hibernation. This is a somewhat arbitrary cutoff, but next the system will begin a full freeze of userspace, then dump memory out to an image, and finally perform hibernation. All this will be covered in future articles!
  1. Hibernation modes are outside of scope for this article, see the previous article for a high-level description of the different types of hibernation.
  2. Workqueues are a mechanism for running asynchronous tasks. A full description of them is a task for another time, but the kernel documentation on them is available here: https://www.kernel.org/doc/html/v6.9/core-api/workqueue.html
  3. This is a bit of an oversimplification, but since this isn't the main focus of this article, this description has been kept at a higher level.
  4. Kconfig is Linux s build configuration system that sets many different macros to enable/disable various features.
  5. Kconfig defaults to the first default found.
  6. Including checking whether the algorithm is larval, which appears to indicate that it requires additional setup; an interesting choice of name for such a state.
  7. Specifically process freezing, which we'll get to in the next article in this series.
  8. Swap space is outside the scope of this article, but in short it is a buffer on disk that the kernel uses to store memory not currently in use, to free up space for other things. See Swap Management for more details.
  9. The code for this is lengthy and tangential, thus it has not been included here. If you're curious about the details of this, see kernel/power/hibernate.c:858 for the details of hibernate_quiet_exec, and drivers/nvdimm/core.c:451 for how it is used in nvdimm.
  10. Annoyingly this code appears to use the terms console and virtual terminal interchangeably.
  11. ioctls are special device-specific I/O operations that permit performing actions outside of the standard file interactions of read/write/seek/etc.
  12. I'm not entirely clear on how this flag works; this subsystem is particularly complex.
  13. In this case a higher number is higher priority.
  14. Or whatever the caller passes as val_down, but in this case we're specifically looking at how this is used in hibernation.
  15. An inode refers to a particular file or directory within the filesystem. See Wikipedia for more details.
  16. Each active filesystem is registered with the kernel through a structure known as a superblock, which contains references to all the inodes contained within the filesystem, as well as function pointers to perform the various required operations, like sync.
  17. I'm including minimal code in this section, as I'm not looking to deep dive into the filesystem code at this time.

8 August 2024

Jonathan Carter: DebConf24 Busan, South Korea

I'm finishing typing up this blog entry hours before my last 13 hour leg back home, after I spent 2 weeks in Busan, South Korea for DebCamp24 and DebConf24. I had a rough year and decided to take it easy this DebConf. So this is the first DebConf in a long time where I didn't give any talks. I mostly caught up on a bit of packaging, worked on DebConf video stuff, attended a few BoFs and talked to people. Overall it was a very good DebConf, which also turned out to be more productive than I expected it would be. In the welcome session on the first day of DebConf, Nicolas Dandrimont mentioned that a benefit of DebConf is that it provides a sort of caffeine for your Debian motivation. I could certainly feel that effect swell as the days went past, and it's nice to be excited about some ideas again that would otherwise be fading.

Recovering DPL It's a bit of a gear shift having been DPL for 4 years, and DebConf Committee for nearly 5 years before that, and then being at DebConf while some issues arise (as they always do during a conference). At first I jump into high-alert mode, but then I have to remind myself it's not your problem anymore and let others deal with it. It was nice spending a little in-person time with Andreas Tille, our new DPL; we did some more handover and discussed some current issues. I still have a few dozen emails in my DPL inbox that I need to collate and forward to Andreas; I hope to finish all that up by the end of August. During the Bits from the DPL talk, the usual question came up of whether Andreas will consider running for DPL again, to which he just responded in a slide: Maybe. I think it's a good idea for a DPL to do at least two terms if it all works out for everyone, since it takes a while to get up to speed on everything. Also, having been DPL for four years, I have a lot to say about it, and I think there's a lot we can fix in the role, or at least discuss. If I had the bandwidth for it I would have scheduled a BoF for it, but I'll very likely do that for the next DebConf instead!

Video team I set up the standby loop for the video streaming setup. We call it loopy; it's a bunch of OBS scenes that provide announcements, show sponsors, the schedule and some social content. I wrote about it back in 2020, but it's evolved quite a bit since then, so I'm probably due to write another blog post with a bunch of updates on it. I hope to organise a video team sprint in Cape Town in the first half of next year, so I'll summarize everything before then.

It would've been great if we could have had some displays in social areas that could show talks, the loop and other content, but we were just too pressed for time for that. This year's DebConf had a very compressed timeline, and there was just too much that had to be done and figured out at the last minute. This put quite a lot of strain on the organisers, but I was glad to see how, for the most part, most attendees were very sympathetic to some rough edges (but I digress). I added more of the OBS machine setup to the videoteam's ansible repository, so as of now it just needs an ansible setup and the OBS data and it's good to go. The loopy data is already in the videoteam git repository, so I could probably just add a git pull and create some symlinks in ansible, and then that machine can be installed from 0% to 100% by just installing via debian-installer with our ansible hooks. This DebConf I volunteered quite a bit for actual video roles during the conference, something I didn't have much time for in recent DebConfs, and it's been fun, especially in a session or two where nearly none of the other volunteers showed up. Sometimes chaos is just fun :-)
Baekyongee is the university mascot, who's visible throughout the university. So of course we included this four-legged whale creature on the loop too!

Packaging I was hoping to do more packaging during DebCamp, but at least it was a non-zero amount:
  • Uploaded gdisk 1.0.10-2 to unstable (previously tested effects of adding dh-sequence-movetousr) (Closes: #1073679).
  • Worked a bit on bcachefs-tools (updating to 1.9.4 from git), but it has a build failure that I need to look into (we might need a newer bindgen). Update: I'm probably going to ROM this package soon, it doesn't seem suitable for packaging in Debian.
  • Calamares: Tested a fix for encrypted installs, and uploaded it.
  • Calamares: Uploaded (3.3.8-1) to backports (at the time of writing it s still in backports-NEW).
  • Backported obs-gradient-source for bookworm.
  • Did some initial packaging on Cambalache; I'll upload to unstable once wlroots (0.18) hits unstable.
  • Pixelorama 1.0 I did some initial packaging for Pixelorama back when we did the MiniDebConf Gaming Edition, but it had a few stoppers back then. Version 1.0 seems to fix all of that, but it depends on Godot 4.2 and we're still on the 3 series in Debian, so I'll upload this once Godot 4.2 hits at least experimental. Godot software/games is otherwise quite easy to run; it's basically just source code / data that is installed and then run via godot-runner (the godot3-runner package in Debian).

BoFs Python Team BoF Link to the etherpad / pad archive link and video can be found on the talk page: https://debconf24.debconf.org/talks/31-python-bof/ The session ended up being extended to a second part, since all the issues didn't fit into the first session. I was distracted by too many things during the Python 3.12 transition (to the point where I thought that 3.11 was still new in Debian), so it was very useful listening to the retrospective of that transition. There was a discussion whether Python 3.13 could still make it to testing in time for freeze, and it seems that there is consensus that it can, although likely with new experimental features like disabling the global interpreter lock and the just-in-time compiler disabled. I learned for the first time about the dead batteries project, PEP-0594, which removes ancient modules that have mostly been superseded from the Python standard library. There was some talk about the process for changing team policy, and a policy discussion on whether we should require autopkgtests as a SHOULD or a MUST for migration to testing. As with many things, the devil is in the details, and in my opinion you could go either way and achieve a similar result (the original MUST proposal allowed exceptions, which imho made it the same as the SHOULD proposal). There's an idea to do some ongoing remote sprints, like having co-ordinated days for bug squashing / working on stuff together. This is a nice idea and probably a good way to energise the team and also to gain some interest from potential newcomers. Louis-Philippe Véronneau was added as a new team admin and there was some discussion on various Sphinx issues and which Lintian tags might be needed for Python 3.13. If you want to know more, you probably have to watch the videos / read the notes :)
    Debian.net BoF Link to the etherpad / pad archive link can be found on the talk page: https://debconf24.debconf.org/talks/37-debiannet-team-bof Debian Developers can set up services on subdomains on debian.net, but a big problem we've had before was that developers were on their own for hosting those services. This meant that they either hosted it on their DSL/fiber connection at home, paid for the hosting themselves, or hosted it at different services, which became an accounting nightmare to claim back the used funds. So, a few of us started the debian.net hosting project (sometimes we just call it debian.net, this is probably a bit of a bug) so that Debian has accounts with cloud providers, and as admins we can create instances there that get billed directly to Debian. We had an initial rush of services, but requests have slowed down since (not really a bad thing, we don't want lots of spurious requests). Last year we did a census, to check which of the instances were still used, whether they received system updates and to ask whether they are performing backups. It went well and some issues were found along the way, so we'll be doing that again. We also gained two potential volunteers to help run things, which is great. Debian Social BoF Link to the etherpad / pad archive link can be found on the talk page: https://debconf24.debconf.org/talks/34-debiansocial-bof We discussed the services we run; you can view the current state of things at: https://wiki.debian.org/Teams/DebianSocial Pleroma has shown some cracks over the last year or so, and there are some forks that seem promising. At the same time, it might be worthwhile considering Mastodon too. So we'll do some comparison of features and maintenance and find a way forward. At the time when Pleroma was installed, it was way ahead in terms of moderation features. Pixelfed is doing well and chugging along nicely; we should probably promote it more. Peertube is working well, although we learned that we still don't have all the recent DebConf videos on there. A bunch of other issues should be fixed once we move it to a new machine that we plan to set up. We're removing writefreely and plume. Nice concepts, but they didn't get much traction yet, and no one who signed up for these actually used them, which is fine; some experimentation with services is good, and sometimes they prove to be very popular and other times not. The WordPress multisite instance has some mild use; otherwise we haven't had any issues. Matrix ended up being much, much bigger than we thought, both in usage and in its requirements. It's very stateful and remembers discussions for as long as you let it, so its Postgres database is continuously expanding; this will also be a lot easier to manage once we have this on the new host. Jitsi is also quite popular, but it could probably be on jitsi.debian.net instead (we created this on debian.social during the initial height of COVID-19, when we didn't have the debian.net hosting yet), although in practice it doesn't really matter where it lives. Most of our current challenges will be solved by moving everything to a new big machine that has a few public IPs available for some VMs, so we'll be doing that shortly. Debian Foundation Discussion BoF This was some brainstorming about the future structure of Debian, and what steps might be needed to get there. It's way too big a problem to take on in a BoF, but we made some progress in figuring out some smaller pieces of the larger puzzle.
The DPL is going to get in touch with some legal advisors and our trusted organisations so that we can aim to formalise our relationships a bit more by the time it's DebConf again. I also introduced my intention to join the Debian Partners delegation. When I was DPL, I enjoyed talking with external organisations who wanted to help Debian, but helping external organisations help Debian turned out to be too much additional load on top of the usual DPL roles, so I'm pursuing this with the Debian Partners team; more on that some other time. This session wasn't recorded, but if you feel like you missed something, don't worry, all intentions will be communicated and discussed with project members before anything moves forward. There was a strong agreement in the room, though, that we should push forward on this, and not reach another DebConf where we didn't make progress on formalising Debian's structure more.

    Social Conference Dinner
    Conference Dinner Photo from Santiago
    The conference dinner took place in the university gymnasium. I hope not many people do sports there in the summer, because it got HOT. There were also some interesting observations on the thermodynamics of the attempted cooling solutions, which was amusing. On the plus side, the food was great, the company was good, and the speeches were kept to a minimum, so it was a great conference dinner, even though it was probably cut a bit short due to the heat. Cheese and Wine Cheese and Wine happened on 1 August, which happens to be the date I became a DD at DebConf17 in Montréal seven years before, so this was a nice accidental celebration of my Debiversary :) Since I'm running out of time, I'll add some more photos to this post some time after publishing it :P Group Photo As per DebConf tradition, Aigars took the group photo. You can find the high resolution version on Debian's GitLab instance.
    Debian annual conference Debconf 24, Busan, South Korea
    Photography: Aigars Mahinovs aigarius@debian.org
    License: CC-BYv3+ or GPLv2+
    Talking Ah yes, talking to people is a big part of DebConf, but I didn't keep track of it very well.
    • I mostly listened to Alper talking a bit about his ideas for his talk about the debian installer.
    • I talked to Rhonda a bit about ActivityPub and MQTT and whether they could be useful for publicising Debian activity.
    • Listened to Gunnar and Julian have a discussion about GPG and APT which was interesting.
    • I learned that you can learn Hangul, the Korean alphabet, in about an hour or so (I wish I knew that in all my years of playing StarCraft II).
    • We had the usual continuous keysigning party. Besides its intended function, this is always a good ice breaker and a way for shy people to meet other shy people.
    • and many other fly-by discussions.

    Stuff that didn't happen this DebConf
    • loo.py A simple Python script that could eventually replace the obs-advanced-scene-switcher sequencer in OBS. It would also be extremely useful if we'd ever replace OBS for loopy. I was hoping to have some time to hack on this, and try to recreate the current loopy in loo.py, but didn't have the time.
    • toetally This year the videoteam had to scramble to get a bunch of resistors to assemble some tally lights. Even when assembled, they were a bit troublesome. It would've been nice to hack on toetally and get something ready for testing, but it mostly relies on having something like a Raspberry Pi Zero with an attached screen in order to work on it further. I'll try to have something ready for the next mini conf though.
    • extrepo on debian live I think we should have extrepo installed by default on desktop systems; I meant to start a discussion on this, but perhaps it's just time I go ahead and do it and announce it.
    • Live stream to peertube server It would've been nice to live stream DebConf to PeerTube, but the dependency tree to get this going got a bit too huge. Following our plans discussed in the Debian Social BoF, we should have this safely ready before the next MiniDebConf and should be able to test it there.
    • Desktop Egg there was this idea to get a stand-in theme for Debian testing/unstable until the artwork for the next release is finalized (Debian bug: #1038660). I have an idea that I meant to implement months ago, but too many things got in the way. It's based on Juliette Taka's Homeworld theme, and basically transforms the homeworld into an egg. Get it? Something that hasn't hatched yet? I also only recently noticed that we never used the actual homeworld graphics (featuring the world image) in the final bullseye release. lol.
    So, another DebConf and another new plush animal. Last but not least, thanks to PKNU for being such a generous and fantastic host to us! See you again at DebConf25 in Brest, France next year!

      8 July 2024

      Thorsten Alteholz: My Debian Activities in June 2024

      FTP master This month I accepted 270 and rejected 23 packages. The overall number of packages that got accepted was 279.

      Debian LTS This was my hundred-twentieth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on: This month the handling of the cups CVE was a bit messy. After lifting the embargo of the CVE, a published patch did not work with all possible combinations of the configuration. In other words, in the case of having only one local domain socket configured, cupsd did not start and failed with a strange error. Anyway, upstream published a new set of patches, which made cups work again. Unfortunately this happened just before the latest point release for Bullseye and Bookworm, so the new packages did not make it into the release, but stopped in the corresponding p-u-queues: stable-p-u and old-p-u. I also continued to work on tiff and last but not least did a week of FD and attended the monthly LTS/ELTS meeting. Debian ELTS This month was the seventy-first ELTS month. During my allocated time I tried to upload a new version of cups for Jessie and Stretch. Unfortunately this was stopped due to an autopkgtest error, which I could not reproduce yet. I also wanted to finally upload a fixed version of exim4. Unfortunately this was stopped due to lots of CI jobs for Buster. Updates for Buster are now also available from ELTS, so some stuff had to be prepared before the actual switch at the end of June. Additionally everything was delayed due to a crash of the CI worker. All in all this month was rather ill-fated. At least the exim4 upload will happen/already happened in July. I also continued to work on an update for libvirt, did a week of FD and attended the LTS/ELTS meeting. Debian Printing This month I uploaded new upstream or bugfix versions of: This work is generously funded by Freexian! Debian Astro This month I uploaded a new upstream or bugfix version of: All of those uploads are somehow related to /usr-move. Debian IoT This month I uploaded new upstream or bugfix versions of: Debian Mobcom The following packages have been prepared by the GSoC student Nathan: misc This month I uploaded new upstream or bugfix versions of: Here as well, all uploads are somehow related to /usr-move.

      4 July 2024

      Samuel Henrique: Debian's curl now supports HTTP3

      tl;dr Starting with curl 8.8.0-2, you can now use HTTP3.
      curl --http3-only https://example.com
      
      Or, if you would like to try it out in a container:
      podman run debian:unstable apt install --update -y curl && curl --http3-only https://example.com
      
      (in case you haven't noticed, apt now has the --update option for the upgrade and install commands, although not available on stable yet)

      Availability
      • Debian unstable - Since 2024-07-02
      • Debian testing - Since 2024-07-18
      • Debian 12/bookworm backports - Expected by the end of August 2024.
      • Debian 12/bookworm - Due to the mechanisms we have in place to make sure Debian stable is in fact stable, we will never be able to ship this in the regular repository. Users can make use of the backports repositories instead.
      • Debian derivatives - Rolling releases will get it by the time it's on Debian testing (e.g.: Kali Linux). Stable derivatives only in their next major release.

      The challenge HTTP3 is brand new, well... not really, but at least fresh enough that I'm not aware of any other Linux distribution supporting it on curl. The reason is likely two-fold:
      1. OpenSSL is not there yet OpenSSL still doesn't have proper HTTP3 support, and given that OpenSSL is so widely used, almost every curl distributor/packager will build curl with it, and thus changing the TLS backend to something else is risky. Unfortunately, proper support for the OpenSSL libcurl is unlikely to come anytime before the end of this year; the OpenSSL performance is not good enough yet as of version 3.3. Daniel Stenberg has written about the state of this multiple times, most recently at HTTP/3 in curl mid 2024; if you're interested, I suggest reading through his other posts as well. Some might have noticed that nginx does support HTTP3 through OpenSSL, although when you look closely, it's not exactly perfect:
        An SSL library that provides QUIC support is recommended to build nginx, such as BoringSSL, LibreSSL, or QuicTLS. Otherwise, the OpenSSL compatibility layer will be used that does not support early data.
        As you can see, they don't recommend using OpenSSL, and when doing so, you don't get complete support.

      2. HTTP3 support for GnuTLS/nghttp3/ngtcp2 is recent The non-experimental support arrived back in October 2023, and so that's when I started seriously planning for this. curl has been working on HTTP3 support for years, and so it did support other TLS backends before that, but out of them, the one most feasible for a distribution to ship would be GnuTLS, which gets HTTP3 support through ngtcp2 and nghttp3.

      How it was done The Debian curl package has historically shipped at least two variants of libcurl: an OpenSSL one and a GnuTLS one. The OpenSSL libcurl can't support HTTP3 for the reasons explained above, but the GnuTLS libcurl can (with ngtcp2 and nghttp3). Debian packages can choose which version of libcurl to link against (without having to modify any upstream source code), Debian's "git" package being a famous example of a package that links against the GnuTLS libcurl. Enabling HTTP3 on curl was done in three steps:
      1. Make sure all required dependencies fulfill the minimum requirements.
      2. Enable HTTP3 for GnuTLS libcurl.
      3. Change the libcurl used by the curl CLI, from OpenSSL to GnuTLS.
      curl's HTTP3 support requires a somewhat recent version of nghttp3 and updating that required a transition (due to the SONAME bump), while we've also had months of freeze for transitions due to the time_t transition. After the dependencies were in place, enabling HTTP3 for the GnuTLS libcurl was straightforward. Then, for the last part, we had to switch the TLS backend used by the curl CLI. Doing the swap is also quite easy on the packaging level, but we have to consider the chances of this change breaking our users' environments.
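      If you want to confirm which TLS backend a curl binary was built against, and which HTTP version a request actually negotiated, something along these lines should work (exact output varies between versions):
      curl --version | head -n1
      curl --http3-only -sI https://example.com -o /dev/null -w '%{http_version}\n'
      The first command names the TLS library in its output; the second should print 3 when the request really went over HTTP3.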

      Ensuring there are no breakages The first thing to consider regarding breakages is that this change is not going to be pushed directly to the current Debian stable releases, it will be present in the next stable release (13/trixie) but the current one will stick to the version that's already shipped. Secondly, we have to consider the risk of losing the ability to use certain parameters from the curl CLI which could be limited to the OpenSSL backend. During curl-up 2024, the curl developers pointed out the existence of a page that lists the TLS related options and the backends they work with. Analysing that page, ignoring all of the options that are suffixed with "BLOB" (only pertinent to the library, not the CLI), the only one left which is attention worthy is CURLOPT_ECH.
      This experimental feature requires a special build of OpenSSL, as ECH is not yet supported in OpenSSL releases. In contrast ECH is supported by the latest BoringSSL and wolfSSL releases.
      As it turns out, Encrypted Client Hello is experimental and it's not supported by the vanilla OpenSSL. This was enough of an investigation for me to go ahead with the change. Noting that even in the worst case scenario (we find a horrible regression), we can roll back without having affected a single stable release. Now that the package is in Debian unstable, the CI tests (autopkgtest) of every package that depends on curl are currently running; the results are compared against the migration-reference (in this case, the curl CLI with OpenSSL, before the change). If everything goes right, curl with HTTP3 support will migrate to Debian testing in around 5 days. If we spot any issues, we'll have to solve them first, and it's going to be hard to predict how long that takes, although it's fair to expect less than a month.

      Feedback Feel free to join the Matrix room for the Debian curl maintainers:
      https://matrix.to/#/#debian-curl-maintainers:matrix.org

      Acknowledgements It took us a bit longer than expected to be able to enable HTTP3, nonetheless it's still early enough to be excited about. A lot of people were crucial to make this happen. I should recognize in the first place, obviously, the curl developers and the developers of the supporting libraries: GnuTLS, nghttp3, ngtcp2. Participating in the curl-up 2024 conference helped me get motivated to push this through, besides becoming aware of the right documentation to research for impact. On the Debian side, Sakirnth Nagarasa <sakirnth> was responsible for updating and taking care of the transition for nghttp3 and ngtcp2. Also on the Debian side, I've got loads of help and support from the co-maintainers of the curl package: Sergio Durigan Junior <sergiodj> and Carlos Henrique Lima Melara <charles>.

      Changes since publication

      2024-07-18
      • Update date of availability for Debian testing and expected date for bookworm backports.
      • We have historically spoken Portuguese in the room but we'll switch to English in case anyone joins.

      2 July 2024

      Ben Hutchings: FOSS activity in June 2024

      8 June 2024

      Thorsten Alteholz: My Debian Activities in May 2024

      FTP master This month I accepted 347 and rejected 49 packages. The overall number of packages that got accepted was 348.

      Debian LTS This was my hundred-nineteenth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on: I also continued to work on tiff and last but not least did a week of FD and attended the monthly LTS/ELTS meeting. Unfortunately I used lots of time to debug an issue with nghttp2. Please see my odyssey below. Debian ELTS This month was the seventieth ELTS month. During my allocated time I uploaded: For some tests I installed the new nghttp2 package on my Stretch VM and started the daemon. Unfortunately I got an unexpected error from getaddrinfo() about ai_socktype not being supported. The daemon was configured to listen on lo, the device was available, but the error remained. I was pretty sure that my patch was not the reason for this, and indeed the unpatched version showed this error as well. I didn't want to release an untested package, so nghttp2 had to start at least! Therefore I built a minimal example to reproduce the issue (a sketch of it is at the end of this post). getaddrinfo() failed for hints.ai_socktype=SOCK_STREAM and a numerical IP address. Having no hints at all, or localhost instead of 127.0.0.1, made the error disappear (as a remark: localhost resolves to 127.0.0.1; the IPv6 variant is ip6-localhost). I could see that in nghttp2 as well. Configuring it with localhost let the error vanish, but the daemon still exited due to other reasons. After some time of debugging, I added another network interface to my VM and configured it with a dummy IPv4 address. Voila, everything worked as expected. According to Wikipedia, IPv6 was ratified as a standard in 2017 and Stretch was also released in 2017. No wonder that an IPv6-only VM had problems back then, and that these problems survived to the present. I also continued to work on an update for tiff in Jessie and Stretch, did a week of FD and attended the LTS/ELTS meeting. Debian Printing This month I uploaded new upstream or bugfix versions of: This work is generously funded by Freexian! Debian Astro This month I uploaded a new upstream or bugfix version of: Debian IoT This month I uploaded new upstream or bugfix versions of: Debian Mobcom Due to more and more problems with time_t, I removed osmo-iuh and all dependencies from armel, armhf and i386, sorry. If there is really anybody using this software on 32-bit architectures, don't hesitate to get in touch. It is official now, the GSoC student working on the Mobcom packages is Nathan Doris. He already finished the hardest part of the job and I could upload the latest version of libosmocore. I really enjoy working with him and look forward to a pleasant SoC :-). misc This month I uploaded new upstream or bugfix versions of: Did I already mention that I love lists with topics I can work on? I print out such lists and enjoy checking off one after the other. At the end of May, Helmut told me that I am a bit lazy and gave me such a list with all my packages that have one or the other issue with /usr-move. Most of the uploads above are packages on that list and I could check off a lot :-).
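      As promised above, a minimal reproduction of the getaddrinfo() failure might look like this (the port number is arbitrary):
      #include <netdb.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/socket.h>

      int main(void)
      {
      	struct addrinfo hints, *res;
      	int err;

      	memset(&hints, 0, sizeof(hints));
      	hints.ai_socktype = SOCK_STREAM;

      	/* Fails on the IPv6-only VM described above; succeeds with
      	 * "localhost" or with no hints at all. */
      	err = getaddrinfo("127.0.0.1", "8080", &hints, &res);
      	if (err) {
      		fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
      		return 1;
      	}
      	freeaddrinfo(res);
      	return 0;
      }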

      27 May 2024

      Thomas Koch: Rebuild search with trust

      Posted on January 20, 2024
      Finally there is a thing people can agree on: Apparently, Google Search is not good anymore. And I'm not the only one thinking about decentralization to fix it: Honey I federated the search engine - finding stuff online post-big tech - a lightning talk at the recent Chaos Communication Congress. The speaker however did not mention that there have already been many attempts at building distributed search engines. So why do I think that such an attempt could finally succeed? My definition of success is:
      A mildly technical computer user (able to install software) has access to a search engine that provides them with superior search results compared to Google for at least a few predefined areas of interest.
      The exact algorithm used by Google Search to rank websites is a secret even to most Googlers. Still it is clear that it relies heavily on big data: billions of queries, a comprehensive web index and user behaviour data. All this is not available to us. A distributed search engine however can instead rely on user input. Every admin of one node seeds the node ranking with their personal selection of trusted sites. They connect their node with nodes of people they trust. This results in a web of (transitive) trust, much like PGP. For comparison, imagine you are searching for something in a world without computers: you ask the people around you. They probably forward your question to their peers. I already had a look at YaCy. It is active, somewhat usable and has a friendly maintainer. Unfortunately I consider the codebase to show its age. It takes a lot of time for a newcomer to find their way around, and it contains a lot of cruft. Nevertheless, YaCy is a good example that decentralized search software can be done, even by a small team or just one person. I myself started working on a software in Haskell and keep my notes here: Populus:DezInV. Since I'm learning Haskell along the way, there is nothing there to see yet. Additionally I took a yak-shaving break to learn nix. By the way: DuckDuckGo is not the alternative. And while I would encourage you to also try Yandex for a second opinion, I don't consider this a solution.

      21 May 2024

      Michael Ablassmeier: lvm thin send/recv

      A few days ago I found this mail on the LKML that introduces support for userspace access to LVM thin provisioned metadata snapshots. I didn't know this was possible. Using the thin provisioning tools you can then export the metadata information for your LVM snapshots to track changed regions between them. The workflow is pretty straightforward, yet not really documented:
      # lvcreate -ay -Ky --snapshot -n full_backup thingroup/vol1
      # dmsetup message /dev/mapper/thingroup-thinpool-tpool 0 reserve_metadata_snap
      # lvcreate -ay -Ky --snapshot -n inc_backup thingroup/vol1
      # thin_delta -m --snap1 $(lvs --noheadings -o thin_id thingroup/full_backup) --snap2 $(lvs --noheadings -o thin_id thingroup/inc_backup) > delta_dump
      # dmsetup message /dev/mapper/thingroup-thinpool-tpool 0 release_metadata_snap
      This has all already been implemented by a nice utility called thin-send-recv, which, based on this functionality, allows one to (incrementally) send LVM snapshots to remote systems, just like zfs send / zfs recv.

      16 May 2024

      John Goerzen: Review of Reputable, Functional, and Secure Email Service

      I last reviewed email services in 2019. That review focused a lot of attention on privacy. At the time, I selected mailbox.org as my provider, and have been using them for these 5 years since. However, both their service and their support have gone significantly downhill since, so it is time for me to look at other options. Here I am focusing strongly on email. Some of the providers mentioned here provide other services (IM, video calls, groupware, etc.), and to the extent they do, I am ignoring them.

      What Matters in 2024
      I want to start off by acknowledging that what you need in email probably depends on your circumstances and the country in which you live. For me, I begin by naming that the largest threat most of us face isn't from state actors but from criminals: hackers, ransomware gangs, etc. It is important to take as many steps as possible to secure one's account against that. Privacy and security are both part of the mix. I still value privacy but I am acknowledging, as Migadu does, that Email as we know it and encryption are incompatible. Although some of these services strongly protect parts of the conversation, the reality is that most people will be emailing people using plain old email services which don't. For stronger security, something like Signal would be needed. (I wrote about Signal in 2021 also.) Interestingly, OpenPGP support seems to be something of a standard feature in the providers I reviewed by this point. All or almost all of them provide integration with browser-based encryption as well as server-side encryption if you prefer that. Although mailbox.org can automatically PGP-encrypt every message that arrives in plaintext, for general use this is unwieldy; there isn't good tooling for searching mailboxes where every message is encrypted, etc. So I never enabled that feature at Mailbox. I still value security and privacy, but a pragmatic approach addresses the most pressing threats first.

      My criteria
      The basic requirements for an email service include:
      1. Ability to use my own domains
      2. Strong privacy policy
      3. Ability for me to use my own IMAP and SMTP clients on both desktop and mobile
      4. It must be extremely reliable
      5. It must not be free
      6. It must have excellent support for those rare occasions when it is needed
      7. Support for basic aliases
      Why do I say it must not be free? Because if someone is providing a service with the quality I'm talking about here, and not charging for it, it implies something is fishy: either they are unscrupulous, are financially unstable, or the product is something else like ads. I am not aware of any provider that matches the other criteria with a free account anyhow. These providers range from about $30 to $90 per year, so cheaper than a Netflix subscription. Immediately, this rules out several options:
      • Proton doesn't let me use my own clients on mobile (their bridge is desktop-only)
      • Tuta also doesn't let me use my own clients
      • Posteo doesn't let me use my own domain
      • mxroute.com lacks a strong privacy policy, and its policy has numerous causes for concern (for instance, "If you repeatedly send email to invalid/unroutable recipients, they may be published on our GitHub")
      I will have a bit more to say about a couple of these providers below. There are some additional criteria that are strongly desired but not absolutely required:
      1. Ability to set individual access passwords for every device/app
      2. Support for two-factor authentication (2FA/TFA/TOTP) for web-based access
      3. Support for basics in filtering: ability to filter on envelope recipient (so if I get BCC'd, I can still filter), and ability to execute more than one action on filter match (eg, deliver to two folders, or deliver to a folder and forward to someone else)
      IMAP and SMTP don't really support 2FA, so by setting individual passwords for every device, you can at least limit the blast radius and cut off a specific device if something is (or might be) compromised.
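      For illustration, a rule with the properties described in item 3 above might look like this in SIEVE (the addresses and folder name are made up; it needs the envelope and copy extensions):
      require ["envelope", "fileinto", "copy"];

      # Match the envelope recipient, so mail where I was BCC'd is still
      # caught, and run two actions on a single match.
      if envelope :is "to" "me@example.org" {
          fileinto :copy "Lists";
          redirect "archive@example.org";
      }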

      The candidates
      I considered these providers: Startmail, Mailfence, Runbox, Fastmail, Kolab, Mailbox.org, and Migadu. I'll review each, and highlight the pricing of the plan I would most likely use. Each provider offers multiple plans; some may be more expensive and some may be cheaper than the one I reviewed. I included a link to each provider's full pricing information so you can compare for your needs. I set up trials with each of these (except Mailbox.org, with which I already had a paid account). It so happened that I had actual questions for support for each one, which gave me an opportunity to see how support responded. I did not fabricate questions, and would not have contacted support if I didn't have real ones. (This means that I asked different questions of each provider, because they were the REAL questions I had.) I'll jump to the spoiler right now: I eventually chose Migadu, with Fastmail and Mailfence as close seconds. I looked for providers myself, and also solicited recommendations in a Mastodon thread.

      Mailbox.org
      I begin with Mailbox, as it was my top choice in 2019 and the incumbent. Until this year, I had been quite happy with it. I had cause to reach their support less than once a year on average, and each time they replied the same day or next day. Now, however, they are failing on reliability and on support. Their spam filter has become overly aggressive. It has blocked quite a bit of legitimate mail. When contacting their support about a prior issue earlier this year, they initially took 4 days to reply, and then 6 days to reply after that. Ouch. They had me disable some spam settings. It didn't really help. I continue to lose mail. I don't know how much, because they block a lot of it before it even hits the spam folder. One of my friends texted to say mail was dropping. I raised a new ticket with mailbox, which took them 5 days to reply to. Their reply was unhelpful. As the Internet is not a static system, unforeseen events can always occur. Well yes, that's true, and I get it, false positives exist with email. But this was from an ISP's mail system with an address that had been established for years, and it was part of a larger pattern of rejecting quite a bit of legit mail. And every interaction with them recently hasn't resulted in them actually doing anything to resolve anything. It's just a paragraph or two of reply that does nothing and helps nothing. When I complained that it took 5 days to reply, they said We have not been able to reply sooner as we are currently experiencing a high volume of customer enquiries. Even though their SLA for my account is a not-great 48 business hour turnaround, they still missed it and their reason is we're busy. I finally asked what RBL had caught the blocked email, since when I checked, the sender wasn't on any RBL. Mailbox's reply: they only keep their logs for 7 days, so next time I should contact them within 7 days. Which, of course, I DID; it was them that kept delaying. Ugh! It's like they've become a cable company. Even worse is how they have been blocking mail from GrapheneOS's discussion forum. See their thread about it. In short, Graphene's mail server has a clean reputation and Mailbox has no problem with it. But because one of Graphene's IPv6 webservers has an IPv6 allocation of a size Mailbox doesn't like, they drop mail. It's ridiculous, and Mailbox was dismissive of this well-known and well-regarded Open Source project. So if the likes of GrapheneOS can't get a good faith effort to deliver their mail, what chance does an individual like me have? I'm sorry, but I'm literally paying you to deliver email for me and provide good support. If you can't do either of those, you don't get to push that problem down onto me. Hire appropriate staff. On the technical side, they support aliases, my own clients, and have a reasonable privacy policy. Their 2FA support exists for the web interface (though weirdly not the support site), though it is somewhat odd. They do not support app passwords. A somewhat unique feature is the @secure.mailbox.org domain. If you try to receive mail at that address, mailbox.org will block it unless it uses TLS. Same for sending. This isn't E2EE, but it does at least require things not be in plaintext for the last hop to Mailbox. Verdict: not recommended due to poor reliability and support. Mailbox.Org summary:
      • Website: https://mailbox.org/en/
      • Reliability: iffy due to over-aggressive spam filtering
      • Support: Poor; takes 4-6 days for a reply and replies are unhelpful
      • Individual access passwords: No
      • 2FA: Yes, but with a PIN instead of a password as the other factor
      • Filtering: Full SIEVE feature set and GUI editor
      • Spam settings: greylisting on/off, reject some/all spam, etc. But they're insufficient to address Mailbox's overzealousness, which support says I cannot work around within the interface.
      • Server storage location: Germany
      • Plan as reviewed: standard [pricing link]
        • Cost per year: EUR 30 (about $33)
        • Mail storage included: 10GB
        • Limits on send/receive volume: none
        • Aliases: 50 on your domain name, 25 on mailbox.org
        • Additional mailboxes: Available; each one at the same fee as the primary mailbox

      Startmail
      I really wanted to like Startmail. Its vault is an interesting idea and should contribute to the security and privacy of an account. They clearly care about privacy. It falls down in filtering. They have no way to filter on envelope recipient (BCC or similar). Their support confirmed this to me and that's a showstopper. Startmail support was also as slow as Mailbox, taking 5 days to respond to me. Two showstoppers right there. Verdict: Not recommended due to slow support responsiveness and weak filtering. Startmail summary:
      • Website: https://www.startmail.com/
      • Reliability: Seems to be fine
• Support: Mediocre; took 5 days for a reply, but the reply was helpful
      • Individual app access passwords: Yes
      • 2FA: Yes
• Filtering: Poor; cannot filter on envelope recipient, and can't build filters with multiple actions
      • Spam settings: None
      • Server storage location: The Netherlands
      • Plan as reviewed: Custom domain (trial was Personal), [pricing link]
        • Cost per year: $70
        • Mail storage included: 20GB
        • Limits on send/receive volume: none
• Aliases: unlimited, with lots of features (can set expiration, etc.)
        • Additional mailboxes: not available

      Kolab
Kolab Now is mainly positioned as a full groupware service, but they do have an email-only option, which I investigated. There isn't much documentation about it compared to other providers, and also not much in the way of settings: you can turn greylisting on or off, and... that's it. It does, however, have a full suite of filtering options. They set an X-Envelope-To header, which you can use with the arbitrary-header match to do the right thing even for BCC situations (a minimal sketch follows the summary list below). Filters can have multiple conditions and multiple actions. Filtering is SIEVE-based and you can download your SIEVE definitions. If you enable 2FA, they disable IMAP and SMTP; not great. Verdict: Not an impressive enough email feature set to justify going with it. Kolab Now summary:
      • Website: https://kolabnow.com/
      • Reliability: Seems to be fine
      • Support: Fine responsiveness (next day)
• Individual app passwords: No
      • 2FA: Yes, but if you enable it, they disable IMAP and SMTP
      • Filtering: Excellent
      • Spam settings: Only greylisting on/off
      • Server storage location: Switzerland; they have lots of details on their setup
      • Plan as reviewed: Just email [pricing link]
        • Cost per year: CHF 60, about $66
        • Mail storage included: 5GB
        • Limitations on send/receive volume: None
        • Aliases: Yes. Not sure if there are limits.
• Additional mailboxes: Yes, if you set up a group account. Pricing flexes with user count, but it is not documented anywhere I could find.
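To make the X-Envelope-To trick concrete, here is a minimal SIEVE sketch of the kind of rule Kolab's filtering can support; the address and folder name are hypothetical, and X-Envelope-To is the header described above:

    require ["fileinto"];
    # File mail delivered to a hypothetical alias into "Lists",
    # even when it arrived via BCC, by matching the envelope
    # recipient that the provider records in X-Envelope-To.
    if header :is "X-Envelope-To" "lists@example.com" {
        fileinto "Lists";
    }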

      Mailfence
Mailfence is another option, somewhat similar to Startmail but without the unique vault. I had some questions about filters, and support was quite responsive, replying within a couple of hours. Some of the copy on their website is a bit misleading, but support clarified it when I asked. They do not offer encryption at rest (like most of the entries here). Mailfence's filtering system is the kind I'd like to see. It allows multiple conditions and multiple actions for each rule, and has some unique actions as well (notify by SMS or XMPP). Support says that "Recipients" matches envelope recipients. However, one omission is that I can't match on arbitrary headers, only the canned list of headers they provide. They have only two spam settings:
      • spam filter on/off
      • whitelist
Given some recent complaints about their spam filter being overly aggressive, I find this lack of control somewhat concerning. (However, I discount complaints from people begging for more features in free accounts; free won't provide the kind of service I'm looking for with any provider.) There are generally very few settings for email overall. Verdict: Responsive and helpful support; the filtering has the right structure but lacks arbitrary-header matching. Could be a good option. Mailfence summary:
      • Website: https://mailfence.com/
      • Reliability: Seems to be fine
• Support: Excellent responsiveness and helpful replies (after some initial confusion about my greylisting question)
• Individual app access passwords: No. You can set a per-service password (e.g., an IMAP password), but it will be shared by all devices speaking that protocol.
      • 2FA: Yes
      • Filtering: Good; only misses the ability to filter on arbitrary headers
      • Spam settings: Very few
      • Server storage location: Belgium
      • Plan as reviewed: Entry [pricing link]
        • Cost per year: $42
        • Mail storage included: 10GB, with a maximum of 50,000 messages
        • Limits on send/receive volume: none
• Aliases: 50. Aliases can't be deleted once created (there may be an exception to this for aliases on your own domain rather than mailfence.com)
        • Additional mailboxes: Their page on this is a bit confusing, and the pricing page lacks the information promised. It looks like you can pay the same $42/year for additional mailboxes, with a limit of up to 2 additional paid mailboxes and 2 additional free mailboxes tied to the account.

      Runbox
This one came recommended in a Mastodon thread. I had some questions about it, and the support response was fantastic: I heard back from two people who were co-founders of the company! Even within hours, on a weekend. Incredible! This kind of response was only surpassed by Migadu. I initially wrote to Runbox with questions about the incoming and outgoing message limits, which I hadn't seen elsewhere, as well as the bandwidth limit. They said the bandwidth limit is no longer enforced on paid accounts. The incoming and outgoing limits are enforced, and all email (even spam) counts towards them. Notably, the outgoing limit is counted per recipient, so if you send 10 messages to your 50-recipient family group, that's 500 recipients and you've hit the daily limit. However, they also indicated a willingness to reset the limit if something happens. Unfortunately, hitting the limit results in a hard bounce (SMTP 5xx) rather than a temporary failure (SMTP 4xx), so it can result in lost mail. This means I'd be worried about some attack or other weirdness causing me to lose mail. Their filter is a pain point. Here are the challenges:
• You can't directly match on a BCC recipient. Support advised using a "headers" match, which searches for a string anywhere in the headers. This works and is probably good enough, since the data is in the Received: headers, but it is less precise.
• They only have a "contains" operator, not an "equals" operator. So, for instance, a pattern searching for test@example.com would also match newtest@example.com. Support advised putting the email address in angle brackets to avoid this. That mostly works, though angle brackets aren't always present in headers. (The SIEVE sketch after the summary list below shows the kind of exact match this rules out.)
      • There is no way to have multiple actions on the filter (there is just no way to file an incoming message into two folders). This was the ultimate showstopper for me.
Support advised that they are planning to upgrade the filter system in the future, but these are the limitations today. Verdict: A good option if you don't need much from the filtering system. Lots of privacy emphasis. Runbox summary:
      • Website: https://runbox.com/
      • Reliability: Seems to be fine, except returning 5xx codes if per-day limits are exceeded
      • Support: Excellent responsiveness and replies from founders
      • Individual app passwords: Yes
      • 2FA: Yes
      • Filtering: Poor
      • Spam settings: Very few
      • Server storage location: Norway
      • Plan as reviewed: Mini [pricing link]
        • Cost per year: $35
        • Mail storage included: 10GB
• Limits on send/receive volume: Receive 5000 messages/day, send 500 recipients/day
        • Aliases: 100 on runbox.com; unlimited on your own domain
        • Additional mailboxes: $15/yr each, also with 10GB non-shared storage per mailbox
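For contrast with Runbox's "contains"-only matching, here is a short sketch in standard SIEVE (RFC 5228), which Runbox does not use; the addresses and folder name are hypothetical. An exact (:is) address test avoids the substring problem described above:

    require ["fileinto"];
    # Exact match: fires for test@example.com, but NOT for
    # newtest@example.com.
    if address :is "to" "test@example.com" {
        fileinto "Tests";
    }
    # Substring match, roughly what Runbox's "contains" does:
    # this would also fire for newtest@example.com.
    elsif address :contains "to" "test@example.com" {
        fileinto "Tests";
    }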

      Fastmail
Fastmail came recommended to me by a friend I've known for decades. Here's the thing about Fastmail, compared to all the services listed above: it all just works. Everything. Filtering, spam prevention: it is all there, all feature-complete, and it all just does the right thing as you'd hope. The filtering system has a canned dropdown for "To/Cc/Bcc", it supports multiple conditions and multiple actions, and it just does the right thing. (Delivering to multiple folders is a little cumbersome, but possible.) It has a particularly strong feature set for administering multiple accounts, including things like whether users can prevent admins from reading their mail.

The not-so-great part of the picture is privacy. Fastmail is based in Australia, where the government has extensive powers to spy on data, even to the point of forcing companies to add wiretap capabilities. Fastmail's privacy policy states user data may be held in Australia, the USA, India, and the Netherlands. By default, they share data with unidentified "spam companies", though you can disable this in settings. On the other hand, they do make a good effort towards privacy. I contacted support with some questions and got back a helpful response in three hours. However, one of the questions was about which countries my particular data would be stored in, and the support response said they would have to get back to me on that. It's been several days with no word back.

Verdict: A featureful option that "just works", with a lot of features for managing family accounts and the like, but lacking in the privacy area. Fastmail summary:
      • Website: https://www.fastmail.com/
      • Reliability: Seems to be fine
• Support: Good response time on most questions; dropped the ball on one that required research
      • Individual app access passwords: Yes
      • 2FA: Yes
      • Filtering: Excellent
• Spam settings: Can set filter aggressiveness, decide whether to share spam data with "spam-fighting companies", configure how to handle backscatter spam, and evaluate the personal learning filter.
      • Server storage locations: Australia, USA, India, and The Netherlands. Legal jurisdiction is Australia.
      • Plan as reviewed: Individual [pricing link]
        • Cost per year: $60
        • Mail storage included: 50GB
        • Limits on send/receive volume: 300/hour
        • Aliases: Unlimited from what I can see
        • Additional mailboxes: No; requires a different plan for that

      Migadu
Migadu was a service I'd never heard of, but it came recommended to me on Mastodon. I list Migadu last because it is in a class of its own compared to all the other options. Every other service is basically a webmail interface with a few extra settings tacked on; Migadu has a full-featured email admin console in addition. By that I mean you can:
      • View usage graphs (incoming, outgoing, storage) over time
      • Manage DNS (if you want Migadu to run your nameservers)
      • Manage multiple domains, and cross-domain relationships with mailboxes
      • View a limited set of logs
      • Configure accounts, reset their passwords if needed/authorized, etc.
      • Configure email address rewrite rules with wildcards and so forth
Basically, if you were the sort of person who ran your own mail server back in the day, Migadu gives you most of that functionality: a web interface to do all the useful stuff, while they handle the boring and annoying bits. This is a really attractive model.

Migadu support has been fantastic. They are quick to respond, and they went above and beyond. I pointed out that their X-Envelope-To header, which is needed for filtering by BCC, wasn't being added to emails I sent to myself. They replied 5 hours later indicating they had added the feature to set X-Envelope-To even for internal mail! Wow! I am impressed.

With Migadu, you buy a pool of resources: storage space and incoming/outgoing traffic. What you do within that pool is up to you. You can set up users ("mailboxes"), aliases, domains, whatever you like; it all shares the pool. You can also restrict users so that an individual user has access to only a subset of the pool's resources. I was initially concerned about Migadu's daily send/receive message count limits, but from talking with support and reading the documentation, what really comes out is that Migadu is a service with a personal touch. Hitting the incoming traffic limit causes an SMTP temporary failure (4xx) response, so you won't lose legit mail, and support will work with you if it's a problem for legitimate use. In other words, the restrictions are soft and they are interpreted reasonably.

One interesting thing about Migadu is that they do not offer accounts under their own domain: you MUST bring your own domain. That's pretty easy and cheap, of course. It also puts you in a position of power, because it is easy to migrate email from one provider to another when you own the domain.

Filtering is done via SIEVE. There is a GUI editor which lets you accomplish most things, though it has an odd blind spot where you can't file a message into multiple folders. However, you can edit a SIEVE ruleset directly and get the full SIEVE feature set, which is extensive (and does support filing a message into multiple folders; see the sketch after the summary list below). I note that the SIEVE :envelope match doesn't work, but Migadu adds an X-Envelope-To header which is just as good.

I particularly love a company that tells you all the reasons you might not want to use them: Migadu's pro/con list is an honest drawbacks list (of course, their homepage highlights all the features!). Verdict: Fantastically powerful, excellent support, and good privacy. I chose this one. Migadu summary:
      • Website: https://migadu.com/
      • Reliability: Excellent
      • Support: Fantastic. Good response times and they added a feature (or fixed a bug?) a few hours after I requested it.
      • Individual access passwords: Yes. Create identities to support them.
      • 2FA: Yes, on both the admin interface and the webmail interface
      • Filtering: Excellent, based on SIEVE. GUI editor doesn t support multiple actions when filing into a folder, but full SIEVE functionality is exposed.
      • Spam settings:
• On the domain level: filter aggressiveness, greylisting on/off, blacklists and whitelists
• On the mailbox level: filter aggressiveness, blacklists and whitelists, and the action to take with spam; compatible with filters.
      • Server storage location: France; legal jurisdiction Switzerland
      • Plan as reviewed: mini [pricing link]
        • Cost per year: $90
• Mail storage included: 30GB (soft quota)
• Limits on send/receive volume: 1000 messages in/day, 100 messages out/day (soft quotas)
        • Aliases: Unlimited on an unlimited number of domains
        • Additional mailboxes: Unlimited and free; uses pooled quotas, but individual quotas can be set
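Since full SIEVE is exposed, the multiple-folder case the GUI editor can't express is easy to write by hand. A minimal sketch, with a hypothetical alias and folder names; X-Envelope-To is the header Migadu adds, as described above:

    require ["fileinto"];
    if header :is "X-Envelope-To" "family@example.com" {
        # Plain SIEVE permits several fileinto actions, which the
        # GUI editor cannot express: the message is filed into
        # both folders.
        fileinto "Family";
        fileinto "Archive/Family";
    }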

      Others
      Here are a few others that I didn t think worthy of getting a trial:
      • mxroute was recommended by several. Lots of concerning things in their policy, such as:
• if you repeatedly send mail to unroutable recipients, they may publish the addresses on GitHub
        • they will terminate your account if they think you are rude or want to contest a charge
        • they reserve the right to cancel your service at any time for any (or no) reason.
      • Proton keeps coming up, and I will not consider it so long as I am locked into their client on mobile.
      • Skiff comes up sometimes, but they were acquired by Notion.
• Disroot comes up; this discussion highlights a number of reasons why I avoid them. Their Terms of Service are inconsistent with a general-purpose email account (I guess for targeting nonprofits and activists, that could make sense). Particularly laughable is that they claim to be friends of Open Source, but would take down your account if you upload copyrighted material. News flash: for an Open Source license to be meaningful, the underlying work must be copyrighted. It is perfectly legal to upload copyrighted material when you wrote it or have a license to do so!

      Conclusions
      There are a lot of good options for email hosting today, and in particular I appreciate the excellent personal support from companies like Migadu and Runbox. Support small businesses!

      6 May 2024

      Thomas Lange: Removing tens of thousands of web pages

In January I removed tens of thousands of web pages from www.debian.org. Have you noticed?

In the past

From 1997 onwards, we had web pages for security announcements. We had to manually prepare a .data and a .wml file, which then generated a web page for each security announcement (DSA or DLA). We listed the 6 most recent announcements in a short list that was created from these files. Most of the work that went into the Debian web pages was creating these files. Our search engine often listed the pages with security announcements instead of a more relevant web page for a particular topic.

Preparation

At DebConf Kosovo (2022) I started with a proof of concept and wrote a script that generates this list without using the .data/.wml files in the Git repository, instead reading the primary sources of security information[1]. This new list now includes links to the security tracker and to the email of the announcement. Several other web pages and scripts were also using these .data and .wml files, so before I could remove all the security web pages, I had to adjust the scripts that create that information. When I looked at the OVAL files and the Apache logs of our web server, I saw that more than 99% of the web traffic was generated by these XML files (134TB of 135TB total in two weeks). They were not compressed and were around 50MB in size. With the help of Carsten Schönert we managed to modify the Python scripts that generate this OVAL file without using the .data/.wml files, and now we only provide bzip2-compressed XML files[2]. The RSS feeds are created by a new Perl script which reads the DSA/DLA list from the security tracker and determines the URL of the email of each entry. This script also generates the list of the most recent DSA/DLA entries. Currently we show the last 350 entries, which covers more than the last year and includes links to the announcement email and the security tracker. The huge list of cross-references is not needed any more, since the mapping of CVE to DSA is already included in the DSA list[3] of the security tracker.

The number of DSA/DLA translations varied a lot by language. French translations were almost all done, but all other languages did translations for only a couple of months or years. E.g. in 2022, Italian had 2 translations, Russian 15, Danish 212, French and English each 279. And from 2023 on, only French translations were made. By generating the list of DSA/DLA we lost the ability to translate these web pages, but since these announcements are made of simple, identical sentences, it is easy to use an automatic translation service if needed. Now the translation statistics of all web pages are more accurate. Instead of 12200 pages that need to be translated (including all these old DSA/DLA), there are now only 2500 pages to translate[4]. Languages that had a lot of old DSA/DLA translations lost some percentage points, but languages that translate newer web pages gained in the statistics of how many pages are translated. Examples:

Before
Language      Translated  Percent
German (de)   3501        28.5%
Italian (it)  1005         8.2%
Danish (da)   6336        51.7%
      
      After
Language      Translated  Percent
German (de)   1486        59.0%
Italian (it)   909        36.1%
Danish (da)    982        39.0%
      
Cleanup of all the security web pages

Finally, in January I could remove all web pages of the security announcements in one Git commit[5]. Using several git rm -rf commands, this commit removed 54335 files, including around 9650 DSA/DLA data files, 44189 wml files, and nearly 500 Makefiles.

Outcome

No more manual work is needed from the security team, and we now have direct links from a DSA-NNN/DLA-NNN to the email in our mailing list archive. This was not possible before. The search results became more accurate. But we still host a lot of other old content on the Debian web pages, which may be removed in the future.

[1] https://www.debian.org/security/#infos
[2] https://www.debian.org/security/oval/
[3] https://salsa.debian.org/security-tracker-team/security-tracker/-/raw/master/data/DSA/list
[4] https://www.debian.org/devel/website/stats
[5] https://salsa.debian.org/webmaster-team/webwml/-/commit/2aa73ff15bfc4eb2afd85c

      1 May 2024

Guido Günther: Free Software Activities April 2024

A short status update of what happened on my side last month. Maintenance and code review continue to be the top time sinks (in a positive way). If you want to support my work, see donations.
