Search Results: "julian"

16 January 2026

Freexian Collaborators: Monthly report about Debian Long Term Support, December 2025 (by Santiago Ruano Rincón)

The Debian LTS Team, funded by [Freexian's Debian LTS offering](https://www.freexian.com/lts/debian/), is pleased to report its activities for December.

Activity summary

During the month of December, 18 contributors were paid to work on Debian LTS (links to individual contributor reports are located below). The team released 41 DLAs fixing 252 CVEs. The team currently focuses on preparing security updates for Debian 11 "bullseye", but also contributes updates for Debian 12 "bookworm", Debian 13 "trixie", and even Debian unstable.

Notable security updates:
  • libsoup2.4 (DLA-4398-1), prepared by Andreas Henrikson, fixing several vulnerabilities.
  • glib2.0 (DLA-4412-1), published by Emilio Pozuelo Monfort, addressing multiple issues.
  • lasso (DLA-4397-1), prepared by Sylvain Beucler, addressing multiple issues, including a critical remote code execution (RCE) vulnerability (CVE-2025-47151)
  • roundcube (DLA 4415-1), prepared by Guilhem Moulin, fixing a cross-site scripting (XSS) vulnerability (CVE-2025-68461) and an information disclosure vulnerability (CVE-2025-68460).
  • mediawiki (DLA 4428-1), published by Guilhem, fixing multiple vulnerabilities that could lead to information disclosure, denial of service, or privilege escalation.
  • While the DLA has not been published yet, Charles Henrique Melara proposed upstream fixes for seven CVEs in ffmpeg: https://code.ffmpeg.org/FFmpeg/FFmpeg/pulls/21275.
  • python-apt (DLA 4408-1), prepared by Utkarsh Gupta in coordination with the Debian Security Team and Julian Andres Klode, apt's maintainer.
  • libpng1.6 (DLA-4396-1), published by Tobias Frost, completing the work started the previous month.
Notable non-security updates:
  • tzdata (DLA-4403-1), prepared by Emilio, including the latest changes to the leap second list and its expiry date, which was set for the end of December.
Contributions from outside the LTS Team:
  • Christoph Berg, co-maintainer of PostgreSQL in Debian, prepared a postgresql-13 update, released as DLA-4420-1
The LTS Team has also contributed updates to the latest Debian releases:

Individual Debian LTS contributor reports

Thanks to our sponsors

Sponsors that joined recently are in bold.

30 December 2025

Utkarsh Gupta: FOSS Activities in December 2025

Here's my monthly, but brief, update about the activities I've done in the FOSS world.

Debian
Whilst I didn't get a chance to do much, here are still a few things that I worked on:
  • Prepared security update for wordpress for trixie and bookworm.
  • A few discussions with the new DFSG team, et al.
  • Assisted a few folks in getting their patches submitted via Salsa.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu
I joined Canonical to work on Ubuntu full-time back in February 2021. Whilst I can't give a full, detailed list of things I did, here's a quick TL;DR of what I did:
  • Successfully released Resolute Snapshot 2!
    • This one was also done without the ISO tracker and cdimage access.
    • I think this one went rather smoothly. Let's see what we're able to do for snapshot 3.
  • Worked on removing GPG keys from the cdimage instance. That took a while, whew!
  • Assisted a bunch of folks with my Archive Admin and Release team hats to:
    • review NEW packages for Ubuntu Studio.
    • remove old binaries that are stalling transition and/or migration.
    • LTS requalification of Ubuntu flavours.
    • bootstrapping dotnet-10 packages for Stable Release Updates.
  • With that, we've entered the EOY break. :)
    • I was anyway on vacation for the majority of this month. ;)

Debian (E)LTS
This month I have worked 72 hours on Debian Long Term Support (LTS) and on its sister Extended LTS project and did the following things:

Released Security Updates
  • ruby-git: Multiple vulnerabilities leading to command line injection and improper path escaping.
  • ruby-sidekiq: Multiple vulnerabilities leading to Cross-site Scripting (XSS) and Denial of Service in Web UI.
  • python-apt: Vulnerability leading to crash via invalid nullptr dereference in TagSection.keys().
    • [LTS]: Fixed CVE-2025-6966 via 2.2.1.1 for bullseye. This has been released as DLA 4408-1.
    • [ELTS]: Fixed CVE-2025-6966 via 1.8.4.4 for buster and 1.4.4 for stretch. This has been released as ELA 1596-1.
    • All of this was coordinated b/w the Security team and Julian Andres Klode. Julian will take care of the stable uploads.
  • node-url-parse: Vulnerability allowing authorization bypass through specially crafted URL with empty userinfo and no host.
  • wordpress: Multiple vulnerabilities in WordPress core, leading to Sent Data & Cross-site Scripting.
  • usbmuxd: Privilege escalation vulnerability via path traversal in SavePairRecord command.
    • [LTS]: Fixed CVE-2025-66004 via 1.1.1-2+deb11u1 for bullseye. This has been released as DLA 4417-1.
    • [ELTS]: Fixed CVE-2025-66004 via 1.1.1~git20181007.f838cf6-1+deb10u1 for buster and 1.1.0-2+deb9u1 for stretch. This has been released as ELA 1599-1.
    • All of this was coordinated b/w the Security team and Yves-Alexis Perez. Yves will take care of the stable uploads.
  • gst-plugins-good1.0: Multiple vulnerabilities in isomp4 plugin leading to potential out-of-bounds reads and information disclosure.
  • postgresql-13: Multiple vulnerabilities including unauthorized schema statistics creation and integer overflow in libpq allocation calculations.
  • gst-plugins-base1.0: Multiple vulnerabilities in SubRip subtitle parsing leading to potential crashes and buffer issues.

Work in Progress
  • ceph: Affected by CVE-2024-47866, using the argument x-amz-copy-source to put an object and specifying an empty string as its content leads to the RGW daemon crashing, resulting in a DoS attack.
  • knot-resolver: Affected by CVE-2023-26249, CVE-2023-46317, and CVE-2022-40188, leading to Denial of Service.
  • adminer: Affected by CVE-2023-45195 and CVE-2023-45196, leading to SSRF and DoS, respectively.
  • u-boot: Affected by CVE-2025-24857, a boot code access control flaw in U-Boot allowing arbitrary code execution via physical access.
    • [ELTS]: As it only affects the version in stretch, I've started the work to find the fixing commits and prepare a backport. Not much progress there; I'll roll it over to January.
  • ruby-rack: There were multiple vulnerabilities reported in Rack, leading to DoS (memory exhaustion) and proxy bypass.
    • [ELTS]: After completing the work for LTS myself, Bastien picked it up for ELTS and reached out about an upstream regression, and we've been doing some exchanges. Bastien has done most of the work backporting the patches but needs a review and help backporting CVE-2025-61771.

Other Activities
  • Frontdesk from 01-12-2025 to 07-12-2025.
    • Auto-EOL'd a bunch of packages.
    • Marked CVE-2025-12084/python2.7 as end-of-life for bullseye, buster, and stretch.
    • Marked CVE-2025-12084/jython as end-of-life for bullseye.
    • Marked CVE-2025-13992/chromium as end-of-life for bullseye.
    • Marked apache2 CVEs as postponed for bullseye, buster, and stretch.
    • Marked CVE-2025-13654/duc as postponed for bullseye and buster.
    • Marked CVE-2025-32900/kdeconnect as ignored for bullseye.
    • Marked CVE-2025-12084/pypy3 as postponed for bullseye.
    • Marked CVE-2025-14104/util-linux as postponed for bullseye, buster, and stretch.
    • Marked several CVEs for fastdds as postponed for bullseye.
    • Marked several CVEs for pytorch as postponed for bullseye.
    • Marked CVE-2025-2486/edk2 as postponed for bullseye.
    • Marked CVE-2025-6172{7,9}/golang-1.15 as postponed for bullseye.
    • Marked CVE-2025-65637/golang-logrus as postponed for bullseye.
    • Marked CVE-2025-12385/qtdeclarative-opensource-src{,gles} as postponed for bullseye, buster, and stretch.
    • Marked TEMP-0000000-D08402/rust-maxminddb as postponed for bullseye.
    • Added the following packages to {d,e}la-needed.txt:
      • liblivemedia, sogo.
    • During my triage, I had to make the bin/elts-eol script robust in determining the lts_admin repository; did a back and forth with Emilio about this on the list.
    • I sent a gentle reminder to the LTS team about the issues fixed in bullseye but not in bookworm via mailing list: https://lists.debian.org/debian-lts/2025/12/msg00013.html.
  • I claimed php-horde-css-parser to work on CVE-2020-13756 for buster and did almost all the work only to realize that the patch already existed in buster and the changelog confirmed that it was intentionally fixed.
    • After speaking with Andreas Henriksson, we figured that the CVE ID was missed when the ELA was generated and so I fixed that via 87afaaf19ce56123bc9508d9c6cd5360b18114ef and 5621431e84818b4e650ffdce4c456daec0ee4d51 in the ELTS security tracker to reflect the situation.
  • Participated in a thread I started last month around using Salsa CI for E/LTS packages and whether we plan to sunset it in favor of Debusine. The plan for now is to keep it around as it's still beneficial and Debusine is still in its early phase.
  • Did a lot of back and forth with Helmut about debusine uploads on #debian-elts.
    • While debugging a failure in dcut uploads, I ran into an SSH compatibility issue on deb-master.freexian.com that could be fixed on the server side. I shared all my findings with Freexian's sysadmin team.
    • A minimal fix on the server side would be one of:
      PubkeyAcceptedAlgorithms -ssh-dss
      
      or explicitly restricting to modern algorithms, e.g.:
      PubkeyAcceptedAlgorithms ssh-ed25519,ecdsa-sha2-nistp256,rsa-sha2-512,rsa-sha2-256
      
  • Jelly on #debian-lts reported that all my DLA mails had broken Gmail's DKIM signature. So I set up sending replies from @debian.org and that seems to have fixed it! \o/
  • [LTS] Attended a rather short monthly LTS meeting on Jitsi. Summary here.
  • [E/LTS] Monitored discussions on mailing lists, IRC, and all the documentation updates.

Until next time.
:wq for today.

18 October 2025

Julian Andres Klode: Sound Removals

Problem statement

Currently if you have an automatically installed package A (= 1) where
  • A (= 1) Depends B (= 1)
  • A (= 2) Depends B (= 2)
and you upgrade B from 1 to 2, then the solver can either:
  1. Remove A (= 1)
  2. Upgrade A to version 2
If A was installed by a chain initiated by a Recommends (say X Recommends Y, Y Depends A), the solver sometimes preferred removing A (and anything depending on it along the chain). I have a fix pending to introduce eager Recommends which fixes the practical case, but this is still not sound. In fact we can show that the solver produces the wrong result for small minimal test cases, as well as the right result for some others without the fix (hooray?). Ensuring sound removals is more complex, and first of all it raises the question: when is a removal sound? This, of course, is on us to define. An easy case can be found in the Debian policy, 7.6.2 "Replacing whole packages, forcing their removal": if B (= 2) declares Conflicts: A (= 1) and Replaces: A (= 1), then the removal is valid. However this is incomplete as well; consider instead that it declares Conflicts: A (< 1) and Replaces: A (< 1): the solution to remove A rather than upgrade it would still be wrong. This indicates that we should only allow removing A if the conflict could not be solved by upgrading it. The other case to explore is package removals. If B is removed, A should be removed as well; however, if there is another package X that Provides: B (= 1) and it is marked for install, A should not be removed. That said, the solver is not allowed to install X to satisfy the Depends: B (= 1), only to satisfy other dependencies [we do not want to get into endless loops where we switch between alternatives to keep reverse dependencies installed].
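Viewed purely as a satisfiability problem, the upgrade scenario above genuinely has two models, which is why a naive solver can legitimately pick the removal. A minimal brute-force sketch (hypothetical names, not apt's actual solver) makes this visible:

```python
# Model: A may be removed (None) or kept at version 1 or 2; B is fixed
# at 2 because the user upgrades it. We keep every assignment that
# satisfies the two dependency clauses from the problem statement.
def satisfies(a, b):
    if a == 1 and b != 1:   # A (= 1) Depends B (= 1)
        return False
    if a == 2 and b != 2:   # A (= 2) Depends B (= 2)
        return False
    return True

solutions = [(a, 2) for a in (None, 1, 2) if satisfies(a, 2)]
print(solutions)  # [(None, 2), (2, 2)]
```

Both (None, 2) (remove A) and (2, 2) (upgrade A) are valid models; nothing in the clauses alone prefers the upgrade, which is exactly the gap the soundness definition is meant to close.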

Proposed solution

To solve this, I propose the following definition:

Definition (sound removal): A removal of package P is sound if either:
  1. A version v is installed that package-conflicts with P.
  2. A package Q is removed and the installable versions of P package-depend on Q.
where the other definitions are:

Definition (installable version): A version v is installable if either it is installed, or it is newer than an installed version of the same package (you may wish to change this to accommodate downgrades, or require strict pinning, but here be dragons).

Definition (package-depends): A version v package-depends on a package B if either:
  1. there exists a dependency in v that can be solved by any version of B, or
  2. there exists a package C where v package-depends on C and any version c of C package-depends on B (transitivity)
Definition (package-conflicts): A version v package-conflicts with an installed package B if either:
  1. it declares a Conflicts against an installable version of B; or
  2. there exists a package C where v package-conflicts with C, and b package-depends on C for installable versions b of B.
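The package-depends relation is essentially a reachability computation, and can be sketched as a small recursive check. This is only an illustrative reading of the definition (the data layout and names are made up), taking "any (c in C)" universally, i.e. every installable version of the intermediate package must itself package-depend on the target:

```python
def package_depends(v, target, direct, versions, seen=frozenset()):
    """True if version v package-depends on package `target`.

    direct[v]   -- packages that some dependency of v can be solved by
    versions[p] -- the installable versions of package p
    """
    if v in seen:              # break dependency cycles on this path
        return False
    seen = seen | {v}
    for c in direct.get(v, ()):
        if c == target:        # clause 1: a dependency solvable by target
            return True
        # clause 2 (transitivity): v package-depends on C, and every
        # installable version of C package-depends on target
        cvs = versions.get(c, ())
        if cvs and all(package_depends(cv, target, direct, versions, seen)
                       for cv in cvs):
            return True
    return False

# A depends on B; every version of B depends on X => A package-depends on X
direct = {"a1": {"B"}, "b1": {"X"}, "b2": {"X"}}
versions = {"B": ["b1", "b2"], "X": ["x1"]}
print(package_depends("a1", "X", direct, versions))  # True
print(package_depends("a1", "Y", direct, versions))  # False
```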

Translating this into a (modified) SAT solver

One approach may be to implement the logic in the conflict analysis that drives backtracking, i.e. we assume a package A and when we reach not A, we analyse whether the implication graph for not A constitutes a sound removal, and then replace the assumption A with the assumption "A or learned reason". However, while this seems a plausible mechanism for a DPLL solver, for a modern CDCL solver it is not immediately evident how to analyse whether not A is sound if the reason for it is a learned clause rather than a problem clause. Instead we propose a static encoding of the rules into a slightly modified SAT solver: given c1, ..., cn that transitively conflict with A and D1, ..., Dn that A package-depends on, introduce the rule:

A unless c1 or c2 or ... or cn or not D1 or not D2 or ... or not Dn

Rules of the form "A... unless B..." (where A... and B... are CNF) are intuitively the same as "A... or B...", however the semantics here are different: we are not allowed to select B... to satisfy this clause. This requires a SAT solver that tracks a reason for each literal being assigned, such as solver3, rather than a SAT solver like MiniSAT that only tracks reasons across propagation (solver3 may track "A depends B or C" as the reason for B without evaluating C, whereas MiniSAT would only track it as the reason given not C).
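The restriction that exception literals may not be chosen merely to satisfy an "unless" clause can be sketched by tracking, per assigned literal, which clause was its reason. This is a toy illustration of the semantics only (solver3's real data structures are not shown here, and the literal names are hypothetical):

```python
class UnlessClause:
    """Sketch of "A unless c1 or ... or not Dn": the head must hold
    unless some exception literal is already true for an independent
    reason, i.e. a reason other than this clause itself."""

    def __init__(self, head, exceptions):
        self.head = head              # e.g. "A"
        self.exceptions = exceptions  # e.g. ["c1", "not D1"]

    def satisfied(self, assignment):
        # assignment maps each true literal to the clause that set it
        for lit in self.exceptions:
            reason = assignment.get(lit)
            if reason is not None and reason is not self:
                return True  # exception holds independently: removal ok
        return self.head in assignment  # otherwise A itself must hold

clause = UnlessClause("A", ["conflict", "not D1"])
print(clause.satisfied({"A": "user-request"}))   # True: A stays installed
print(clause.satisfied({"conflict": "other"}))   # True: independent reason
print(clause.satisfied({"conflict": clause}))    # False: self-justified
```

The key design point is that satisfaction is judged against reasons, not just truth values, which is why a MiniSAT-style solver that only records reasons for propagated literals would not be enough.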

Is it actually sound?

The proposed definition of a sound removal may still prove unsound, as I may have missed something in the conclusions of the proposed definition that violates the goal I set out to achieve, or missed some of the goals. I challenge you to find cases that cause removals that look wrong :D

27 September 2025

Julian Andres Klode: Dependency Tries

As I was shopping for groceries I had a shocking realization: the active dependencies of packages in a solver actually form a trie (a dependency A|B, meaning "A or B", of a package X is considered active if we marked X for install). Consider the dependencies A|B|C, A|B, and B|X. In most package managers these just express alternatives, that is, the "or" relationship, but in Debian packages a dependency also expresses a preference relationship between its operands, so in A|B|C, A is preferred over B and B over C (and A transitively over C). This means that we can convert the three dependencies into a trie as follows:

Dependency trie of the three dependencies

Solving the dependency here becomes a matter of trying to install the package referenced by the first edge of the root, and seeing if that sticks. In this case, that would be a. Let's assume that a failed to install; the next step is to remove the now-empty node of a and merge its children into the root.

Reduced dependency trie with "not a", containing b, b|c, b|x

For ease of visualisation, we remove a from the dependency nodes as well, leading us to a trie of the dependencies b, b|c, and b|x. Presenting the Debian dependency problem, or at least the positive part of it, as a trie makes for a great visualization of the problem, but it may not prove to be an effective implementation choice. In the real world we may actually store this as a priority queue that we can delete from. Since we don't actually want to delete from the queue for real, our queue items are pairs of a pointer to a dependency and an activity level, say A|B@1. Whenever a variable is assigned false, we look at its reverse dependencies, bump their activity, and reinsert them (the priority of an item being determined by the leftmost solution still possible, which has now changed). When we iterate the queue, we remove items with a lower activity level:
  1. Our queue is A|B@1, A|B|C@1, B|X@1.
  2. Rejecting A, we bump the activity for its reverse dependencies and reinsert them. Our queue is A|B@1, A|B|C@1, (A|)B@2, (A|)B|C@2, B|X@1.
  3. We visit A|B@1 but see that the activity of the underlying dependency is now 2, and remove it. Our queue is A|B|C@1, (A|)B@2, (A|)B|C@2, B|X@1.
  4. We visit A|B|C@1 but see that the activity of the underlying dependency is now 2, and remove it. Our queue is (A|)B@2, (A|)B|C@2, B|X@1.
  5. We visit (A|)B@2, see that the activity matches, and find that B is the solution.
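The lazy-deletion scheme in the steps above can be sketched with a standard binary heap, where each entry records the activity level it was inserted at and stale entries are simply skipped on pop. The priority values and names here are illustrative, not the real solver's:

```python
import heapq

class DepQueue:
    def __init__(self):
        self.heap = []
        self.activity = {}  # dependency -> current activity level

    def push(self, dep, priority):
        level = self.activity.setdefault(dep, 1)
        heapq.heappush(self.heap, (priority, dep, level))

    def bump(self, dep, new_priority):
        # a candidate was rejected: bump the activity and reinsert with
        # the priority of the new leftmost still-possible alternative
        self.activity[dep] += 1
        heapq.heappush(self.heap, (new_priority, dep, self.activity[dep]))

    def pop(self):
        while self.heap:
            priority, dep, level = heapq.heappop(self.heap)
            if level == self.activity[dep]:
                return dep
            # stale: the activity was bumped after this entry was queued
        return None

q = DepQueue()
q.push("A|B", 1)
q.push("B|X", 1)
q.bump("A|B", 2)   # reject A: A|B is reinserted at activity level 2
print(q.pop())     # "B|X" (the stale A|B@1 entry is skipped)
print(q.pop())     # "A|B" (the fresh @2 entry)
```

Reinserting instead of deleting keeps every queue operation O(log n), which is the point of tolerating stale entries.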

12 August 2025

Freexian Collaborators: Debian Contributions: DebConf 25, OpenSSH upgrades, Cross compilation collaboration and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-07

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf 25, by Stefano Rivera and Santiago Ruano Rincón

In July, DebConf 25 was held in Brest, France. Freexian was a gold sponsor and most of the Freexian team attended the event. Many fruitful discussions were had amongst our team and within the Debian community. DebConf itself was organized by a local team in Brest that included Santiago (who now lives in Uruguay). Stefano was also deeply involved in the organization, as a DebConf committee member, core video team member, and the lead developer for the conference website. Running the conference took an enormous amount of work, consuming all of Stefano and Santiago's time for most of July. Lucas Kanashiro was active in the DebConf content team, reviewing talks and scheduling them. There were many last-minute changes to make during the event. Anupa Ann Joseph was part of the Debian publicity team doing live coverage of DebConf 25 and was part of the DebConf 25 content team reviewing the talks. She also assisted the local team in procuring the lanyards. Recorded sessions presented by Freexian collaborators, often alongside other friends in Debian, included:

OpenSSH upgrades, by Colin Watson

Towards the end of a release cycle, people tend to do more upgrade testing, and this sometimes results in interesting problems. Manfred Stock reported "No new SSH connections possible during large part of upgrade to Debian Trixie", which would have affected many people upgrading from Debian 12 (bookworm), with potentially severe consequences for people upgrading remote systems. In fact, there were two independent problems that each led to much the same symptom:
  • As part of hardening the OpenSSH server, OpenSSH 9.8 split the monolithic sshd listener process into two pieces: a minimal network listener (still called sshd), and an sshd-session process dealing with each individual session. Before this change, when sshd received an incoming connection, it forked and re-executed itself with some special parameters to deal with it; after this change, it forks and executes sshd-session instead, and sshd no longer accepts the parameters it used to accept for this.

    Debian package upgrades happen (roughly) in two phases: first we unpack the new files onto disk, and then we run some configuration steps which usually include things like restarting services. Normally this is fine, because the old service keeps on working until it's restarted. In this case, unpacking the new files onto disk immediately stopped new SSH connections from working: the old sshd received the connection and tried to hand it off to a freshly-executed copy of the new sshd binary on disk, which no longer supports this. This wasn't much of a problem when upgrading OpenSSH on its own or with a small number of other packages, but in release upgrades it left a large gap during which you can't SSH to the system any more, and if anything fails in that interval then you could be in trouble.

    After trying a couple of other approaches, Colin landed on the idea of having the openssh-server package divert /usr/sbin/sshd to /usr/sbin/sshd.session-split before the unpack step of an upgrade from before 9.8, then removing the diversion and moving the new file into place once it's ready to restart the service. This reduces the period when new connections fail to a minimum.
  • Most OpenSSH processes, including sshd, check for a compatible version of the OpenSSL library when they start up. This check used to be very picky, among other things requiring both the major and minor part of the version number to match. OpenSSL 3 has a better versioning policy, and so OpenSSH 9.4p1 relaxed this check.

    Unfortunately, bookworm shipped with OpenSSH 9.2p1, so as soon as you unpacked the new OpenSSL library during an upgrade, sshd stopped working. This couldn t be fixed by a change in trixie; we needed to change bookworm in advance of the upgrade so that it would tolerate newer versions of OpenSSL, and time was tight if we wanted this to be available before the release of Debian 13.

    Fortunately, there s a stable-updates mechanism for exactly this sort of thing, and the stable release managers kindly accepted Colin s proposal to fix this there.
The net result is that if you apply updates to bookworm (including stable-updates / bookworm-updates, which is enabled by default) before starting the upgrade to trixie, everything should be fine.

Cross compilation collaboration, by Helmut Grohne

Supporting cross building in Debian packages touches lots of areas of the archive, and quite a few of these matters are a shared responsibility between different teams. Hence, DebConf was an ideal opportunity to settle long-standing issues. The cross building BoF sparked lively discussions, as a significant fraction of developers employ cross builds to get their work done. In the trixie release, about two thirds of the packages can satisfy their cross Build-Depends and about half of the packages can actually be cross built.

Miscellaneous contributions
  • Raphaël Hertzog updated tracker.debian.org to remove references to Debian 10, which was moved to archive.debian.org, and had many fruitful discussions related to Debusine during DebConf 25.
  • Carles Pina prepared some data, questions and information for the DebConf 25 l10n and i18n BoF.
  • Carles Pina demoed and discussed possible next steps for po-debconf-manager with different teams in DebConf 25. He also reviewed Catalan translations and sent them to the packages.
  • Carles Pina started investigating a django-compressor bug: reproduced the bug consistently and prepared a PR for django-compressor upstream (likely more details next month). Looked at packaging frictionless-py.
  • Stefano Rivera triaged Python CVEs against pypy3.
  • Stefano prepared an upload of a new upstream release of pypy3 to Debian experimental (due to the freeze).
  • Stefano uploaded python3.14 RC1 to Debian experimental.
  • Thorsten Alteholz uploaded a new upstream version of sane-airscan to experimental. He also started to work on a new upstream version of hplip.
  • Colin backported fixes for CVE-2025-50181 and CVE-2025-50182 in python-urllib3, and fixed several other release-critical or important bugs in Python team packages.
  • Lucas uploaded ruby3.4 to experimental as a starting point for the ruby-defaults transition that will happen after Trixie release.
  • Lucas coordinated with the Release team the fix of the remaining RC bugs involving ruby packages, and got them all fixed.
  • Lucas, as part of the Debian Ruby team, kicked off discussions to improve internal process/tooling.
  • Lucas, as part of the Debian Outreach team, engaged in multiple discussions around internship programs we run and also what else we could do to improve outreach in the Debian project.
  • Lucas joined the Local groups BoF during DebConf 25 and shared all the good experiences from the Brazilian community and committed to help to document everything to try to support other groups.
  • Helmut spent significant time with Samuel Thibault on improving architecture cross bootstrap for hurd-any mostly reviewing Samuel s patches. He proposed a patch for improving bash s detection of its pipesize and a change to dpkg-shlibdeps to improve behavior for building cross toolchains.
  • Helmut reiterated the multiarch policy proposal with a lot of help from Nattie Mayer-Hutchings, Rhonda D'Vine and Stuart Prescott.
  • Helmut finished his work on the process based unschroot prototype that was the main feature of his talk (see above).
  • Helmut analyzed a multiarch-related glibc upgrade failure induced by a /usr-move mitigation of systemd and sent a patch and regression fix both of which reached trixie in time. Thanks to Aurelien Jarno and the release team for their timely cooperation.
  • Helmut resurrected an earlier discussion about changing the semantics of Architecture: all packages in a multiarch context in order to improve the long-standing interpreter problem. With help from Tollef Fog Heen better semantics were discovered and agreement was reached with Guillem Jover and Julian Andres Klode to consider this change. The idea is to record a concrete architecture for every Architecture: all package in the dpkg database and enable choosing it as non-native.
  • Helmut implemented type hints for piuparts.
  • Helmut reviewed and improved a patch set of Jochen Sprickerhof for debvm.
  • Anupa was involved in discussions with the Debian Women team during DebConf 25.
  • Anupa started working for the trixie release coverage and started coordinating release parties.
  • Emilio helped coordinate the release of Debian 13 trixie.

29 July 2025

Christoph Berg: The Debian Conference 2025 in Brest

It's Sunday and I'm now sitting in the train from Brest to Paris, where I will change to a train to Germany, on the way back from the annual Debian conference. A full week of presentations, discussions, talks and socializing lies behind me, and my head is still spinning from the intensity.
Pollito and the gang of DebConf mascots wearing their conference badges (photo: Christoph Berg)
Sunday, July 13th

It started last Sunday with traveling to the conference. I got on the Eurostar in Duisburg and we left on time, but even before reaching Cologne, the train was already one hour delayed for external reasons, collecting yet another hour between Aachen and Liège for its own technical problems. "The train driver is working on trying to fix the problem." My original schedule had well over two hours for changing train stations in Paris, but being that late, I missed the connection to Brest in Montparnasse. At least in the end, the total delay was only one hour when finally arriving at the destination. Due to the approaching French quatorze juillet fireworks, buses in Brest were rerouted, but I managed to catch the right bus to the conference venue, already meeting a few Debian people on the way. The conference was hosted at the IMT Atlantique Brest campus, giving the event a nice university touch. I arrived shortly after 10 in the evening and, after settling down a bit, got on one of the "magic" buses for transportation to the camping site where half of the attendees were stationed. I shared a mobile home with three other Debianites, where I got a small room for myself.

Monday, July 14th

Next morning, we took the bus back to the venue for a small breakfast and the opening session, where Enrico Zini invited me to come to his and Nicolas Dandrimont's session about Debian community governance and curation, which I gladly did. Many ideas about conflict moderation and community steering were floated around. I hope some of that can be put into effect to make flamewars on the mailing lists less heated and more directed. After that, I attended Olly Betts' "Stemming with Snowball" session, about the stemmer that is also used in PostgreSQL. Text search is one of the areas in PostgreSQL that I never really looked closely at, including the integration into the postgresql-common package, so it was nice to get more information about that.
In preparation for the conference, a few of us Ham radio operators in Debian had decided to bring some radio gear to DebConf this year in order to perhaps spark more interest in our hobby among the fellow geeks. In the afternoon after the talks, I found a quieter spot just outside of the main hall and set up a shortwave antenna by attaching a 10m mast to one of the park benches there. The 40m band was still pretty much closed, but I could work a few stations from England, just across the Channel from Bretagne, answering questions from interested Debian people passing by between the contacts. Over time, the band opened and more European stations got into the log.
F/DF7CB in Brest (photo: Evangelos Ribeiro Tzaras)
Tuesday, July 15th

Tuesday started with Helmut Grohne's session about "Reviving (un)schroot". The schroot program has been Debian's standard way of managing build chroots for a long time, but it is more and more being regarded as obsolete, with all kinds of newer containerization and virtualization technologies taking over. Since many bits of Debian infrastructure depend on schroot, and its user interface is still very useful, Helmut reimplemented it using Linux namespaces and the "unshare" system call. I had already worked with him at the Hamburg MiniDebConf to replace the apt.postgresql.org buildd machinery with the new system, but we were not quite there yet (network isolation is nice, but we still sometimes need proper networking), so it was nice to see the effort is still progressing, and I will give his new scripts a try when I'm back home. Next, Stefano Rivera and Colin Watson presented Debusine, a new package repository and workflow management system. It looks very promising for anyone running their own repository, so perhaps yet another bit of apt.postgresql.org infrastructure to replace in the future. After that, I went to the Debian LTS BoF session by Santiago Ruano Rincón and Bastien Roucariès; Debian releases plus LTS is what we are covering with apt.postgresql.org. Then there were bits from the DPL (Debian Project Leader), and a session moderated by Stefano Rivera, interesting to me as a member of the Debian Technical Committee, on the future structure of the packages required for cross-building in Debian, a topic which had been brought to the TC a while ago. I am happy that we could resolve the issue without having to issue a formal TC ruling, as the involved parties (kernel, glibc, gcc and the cross-build people) found a promising way forward themselves. DebConf is really a good way to get such issues unstuck.
Ten years ago at the 2015 Heidelberg DebConf, Enrico had given a seminal "Semi-serious stand-up comedy" talk, drawing parallels between the Debian Open Source community and the BDSM community: "People doing things consensually together". (Back then, the talk was announced as "probably unsuitable for people of all ages".) With his unique presentation style and witty insights, the session made a lasting impression on everyone attending. Now, ten years later (with him and many in the audience being ten years older), he gave an updated version of it. We are now looking forward to the sequel in 2035. The evening closed with the famous DebConf tradition of the Cheese & Wine party in an old fort next to the coast, just below the conference venue. Even though he's a fellow Debian Developer, Ham, and also a TC member, I had never met Paul Tagliamonte in person before, but we spent most of the evening together geeking out on all things Debian and Ham radio.
The northern coast of Ushant (photo: Christoph Berg)
Wednesday, July 16th Wednesday already marked the end of the first half of the week, the day of the day trips. I had chosen to go to Ouessant island (Ushant in English), which marks the western end of France and hosts one of the lighthouses guiding the way into the English Channel. The ferry trip included surprisingly big waves which left some participants seasick, but everyone recovered fast. After around one and a half hours we arrived, picked up the bicycles, and spent the rest of the day roaming the island. The weather forecast had originally been very cloudy and 18°C, but around noon this turned into sunny and warm, so many got an unplanned sunburn. I enjoyed the trip very much - it made up for not having time to visit the city during the week. After returning, we spent the rest of the evening playing DebConf's standard game, Mao (spoiler alert: don't follow the link if you ever intend to play).
Having a nice day (photo: Christoph Berg)
Thursday, July 17th The next day started with the traditional "Meet the Technical Committee" session. This year, we trimmed the usual slide deck down to remove the boring boilerplate parts, so after a very short introduction to the work of the committee by our chairman Matthew Vernon, we opened up the discussion with the audience, with seven (out of eight) TC members on stage. I think the format worked very well, with good input from attendees. Next up was "Don't fear the TPM" by Jonathan McDowell. A common misconception in the Free Software community is that the TPM is evil DRM hardware working against the user, but while it could in theory be used that way, the necessary TPM attestations seem to be impossible to attain in practice, so that wouldn't happen anyway. Instead, it is a crypto coprocessor present in almost all modern computers that can be used to hold keys, for example keys used for SSH. It will also be interesting to research whether we can make use of it for holding the Transparent Data Encryption keys for CYBERTEC's PostgreSQL Enterprise Edition. Aigars Mahinovs then directed everyone into place for the DebConf group picture, and Lucas Nussbaum started a discussion about archive-wide QA tasks in Debian, an area where I did a lot of work in the past and that still interests me. Antonio Terceiro and Paul Gevers followed up with techniques to track archive-wide rebuilding and testing of packages, in turn filing a lot of bugs to track the problems. The evening ended with the conference dinner, again in the fort close by the coast. DebConf is good for meeting new people, and I incidentally ran into another Chris, who happened to be one of the original maintainers of pgaccess, the pre-predecessor of today's pgadmin. I admit I still miss this PostgreSQL frontend for its simplicity and ability to easily edit table data, but it disappeared around 2004.
Friday, July 18th On Friday, I participated in discussion sessions around contributors.debian.org (PostgreSQL is planning to set up something similar) and the New Member process, which I had helped to run and reform a decade or two ago. Agathe Porte (also a Ham radio operator, like so many others at the conference I had not known about) then shared her work on rewriting the slower parts of Lintian, the Debian package linter, in Rust. Craig Small talked about "Free as in Bytes", the evolution of the Linux procps free command. Over time and many kernel versions, the summary numbers printed became better and better, but there will probably never be a version that suits all use cases alike. Later over dinner, Craig (who is also a TC member) and I shared our experiences with these numbers and customers (not) understanding them. He pointed out that for PostgreSQL, when looking at used memory in the presence of large shared memory buffers, USS (unique set size) and PSS (proportional set size) are more realistic numbers than the standard RSS (resident set size) that the top utility shows by default. Antonio Terceiro and Paul Gevers again joined to lead a session, now on ci.debian.net and autopkgtest, the test driver used for running tests on packages after they have been installed on a system. The PostgreSQL packages use this heavily to make sure no regressions creep in even after builds have successfully completed, and test re-runs are rescheduled periodically. The day ended with Bdale Garbee's electronics team BoF and Paul Tagliamonte and me setting up the radio station in the courtyard, again answering countless questions about ionospheric conditions and operating practice.

Saturday, July 19th Saturday was the last conference day. In the first session, Nikos Tsipinakis and Federico Vaga from CERN announced that the LHC will be moving to Debian for the accelerator's frontend computers in their next "long shutdown" maintenance period next year.
CentOS broke compatibility too often, and Debian trixie together with the extended LTS support will cover the time until the next long shutdown window in 2035, by when the computers should all have been replaced with newer processors supporting higher x86_64 baseline versions. The audience was delighted to hear that Debian is now also being used in this prestigious project. Ben Hutchings then presented new Linux kernel features. Particularly interesting for me was the support for atomic writes spanning more than one filesystem block. When configured correctly, this would mean PostgreSQL didn't have to record full-page images in the WAL anymore, increasing throughput and performance. After that, the Debian ftp team discussed ways to improve the review of new packages in the archive, and which of their processes could be relaxed given new US laws around Open Source and the export of cryptographic algorithms. Emmanuel Arias led a session on Salsa CI, Debian's GitLab instance and standard CI pipeline. (I think it's too slow, but the runners are not under their control.) Julian Klode then presented new features in APT, Debian's package manager. I like the new display format (and a tiny bit of that is also from me sending in wishlist bugs). In the last round of sessions this week, I then led the Ham radio BoF with an introduction to the hobby and how Debian can be used for it. Bdale mentioned that the sBitx family of SDR radios natively runs Debian, so stock packages can be used from the radio's touch display. We also briefly discussed his involvement in ARDC and the possibility of getting grants from them for Ham radio projects. Finally, DebConf wrapped up with everyone gathering in the main auditorium, cheering the organizers for making the conference possible, and passing Pollito, the DebConf mascot, to the next organizer team.
Pollito on stage (photo: Christoph Berg)
Sunday, July 20th Zoom back to the train: I made it through the Paris metro and I'm now on the Eurostar back to Germany. It has been an intense week with all the conference sessions and meeting all the people I had not seen in so long. There are a lot of new ideas to follow up on, both for my Debian and my PostgreSQL work. Next year's DebConf will take place in Santa Fe, Argentina. I haven't yet decided if I will be going, but I can recommend the experience to everyone! The post The Debian Conference 2025 in Brest appeared first on CYBERTEC PostgreSQL Services & Support.

12 July 2025

Bits from Debian: DebConf25 welcomes its sponsors

DebConf25 logo DebConf25, the 26th edition of the Debian conference, is taking place at the Brest campus of IMT Atlantique Bretagne-Pays de la Loire, France. We appreciate the organizers for their hard work, and hope this event will be highly beneficial for those who attend in person as well as online. This event would not be possible without the help from our generous sponsors. We would like to warmly welcome the sponsors of DebConf 25, and introduce them to you. We have five Platinum sponsors. Our Gold sponsors are: Our Silver sponsors are: Bronze sponsors: And finally, our Supporter level sponsors: A special thanks to the IMT Atlantique Bretagne-Pays de la Loire, our Venue Partner, and our Network Partner ResEl! Thanks to all our sponsors for their support! Their contributions enable a diverse global community of Debian developers and maintainers to collaborate, support one another, and share knowledge at DebConf25.

11 June 2025

Freexian Collaborators: Debian Contributions: Updated Austin, DebConf 25 preparations continue and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-05 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Updated Austin, by Colin Watson and Helmut Grohne Austin is a frame stack sampling profiler for Python. It allows profiling Python applications without instrumenting them, while losing some accuracy in the process, and is the only one of its kind presently packaged for Debian. Unfortunately, it hadn't been uploaded in a while and hence the last Python version it worked with was 3.8. We updated it to a current version and also dealt with a number of architecture-specific problems (such as unintended sign promotion, 64-bit time_t fallout and strictness due to -Wformat-security) in cooperation with upstream. With luck, it will migrate in time for trixie.

Preparing for DebConf 25, by Stefano Rivera and Santiago Ruano Rincón DebConf 25 is quickly approaching, and the organization work doesn't stop. In May, Stefano continued supporting the different teams. To give a couple of examples, Stefano made changes to the DebConf 25 website to make BoF and sprint submissions public, so interested people can already see whether a BoF or sprint for a given subject is planned, allowing coordination with the proposer, and to enhance how statistics are made public to help the work of the local team. Santiago has participated in different tasks, including the logistics of the conference, like preparing more information about the public transportation that will be available. Santiago has also taken part in activities related to fundraising and reviewing more event proposals.

Miscellaneous contributions
  • Lucas fixed security issues in Valkey in unstable.
  • Lucas tried to help with the update of Redis to version 8 in unstable. The package hadn't been updated for a while due to licensing issues, but upstream maintainers have now fixed them.
  • Lucas uploaded around 20 ruby-* packages to unstable that hadn't been updated for some years, to make them build reproducibly. Thanks to the Reproducible Builds folks for pointing out those issues. Some unblock requests (and follow-ups) were also needed to make them reach trixie in time for the release.
  • Lucas is organizing a Debian Outreach session for DebConf 25, reaching out to all interns of the Google Summer of Code and Outreachy programs from the last year. The session will be presented by in-person interns, with video recordings from interns who were interested in participating but could not attend the conference.
  • Lucas continuously works on DebConf Content team tasks: replying to speakers and sponsors, and communicating internally with the team.
  • Carles improved po-debconf-manager: fixed bugs reported by a Catalan translator, added the possibility to import packages from outside salsa, added support for using non-default project branches on salsa, and polished it to get ready for DebCamp.
  • Carles tested the new apt in trixie and reported bugs against apt, installation-report and libqt6widget6.
  • Carles used po-debconf-manager to import the remaining 80 packages, reviewed 20 translations, and submitted 54 translations (as MRs or bugs).
  • Carles prepared some topics for the translation BoF at DebConf (gathered feedback, did a first pass on topics).
  • Helmut gave an introductory talk about the mechanics of Linux namespaces at MiniDebConf Hamburg.
  • Helmut sent 25 patches for cross compilation failures.
  • Helmut reviewed, refined and applied a patch from Jochen Sprickerhof to make the Multi-Arch hinter emit more hints for pure Python modules.
  • Helmut sat down with Christoph Berg (not affiliated with Freexian) and extended unschroot to support directory-based chroots with overlayfs. This is a feature that was lost in transitioning from sbuild's schroot backend to its unshare backend. unschroot implements the schroot API just enough to be usable with sbuild and otherwise works a lot like the unshare backend. As a result, apt.postgresql.org now performs its builds contained in a user namespace.
  • Helmut looked into a fair number of rebootstrap failures most of which related to musl or gcc-15 and imported patches or workarounds to make those builds proceed.
  • Helmut updated dumat to use sqop, fixing earlier PGP verification problems, thanks to Justus Winter and Neal Walfield explaining a lot of Sequoia at MiniDebConf Hamburg.
  • Helmut got the previous zutils update for /usr-move wrong again and had to send another update.
  • Helmut looked into why debvm's autopkgtests were flaky and, with lots of help from Paul Gevers and Michael Tokarev, tracked it down to a race condition in qemu. He updated debvm to trigger the problem less often and also fixed a wrong dependency using Luca Boccassi's patch.
  • Santiago continued the switch to sbuild for Salsa CI (which had been stalled for some months), and has mainly been testing linux, since it's a complex project that heavily customizes the pipeline. Santiago is preparing the changes for linux to submit an MR soon.
  • In openssh, Colin tracked down some intermittent sshd crashes to a root cause, and issued bookworm and bullseye updates for CVE-2025-32728.
  • Colin spent some time fixing up fail2ban, mainly reverting a patch that caused its tests to fail and would have banned legitimate users in some common cases.
  • Colin backported upstream fixes for CVE-2025-48383 (django-select2) and CVE-2025-47287 (python-tornado) to unstable.
  • Stefano supported video streaming and recording for 2 MiniDebConfs in May: Maceió and Hamburg. These had overlapping streams for one day, which is a first for us.
  • Stefano packaged the new version of python-virtualenv that includes our patches for not including the wheel for wheel.
  • Stefano got all involved parties to agree (in principle) to meet at DebConf for a mediated discussion on a dispute that was brought to the technical committee.
  • Anupa coordinated the swag purchase for DebConf 25 with Juliana and Nattie.
  • Anupa joined the publicity team meeting for discussing the upcoming events and BoF at DebConf 25.
  • Anupa worked with the publicity team to publish a Bits post to welcome the GSoC 2025 interns.

24 May 2025

Julian Andres Klode: A SomewhatMaxSAT Solver

As you may recall from previous posts and elsewhere, I have been busy writing a new solver for APT. Today I want to share some of the latest changes in how to approach solving. The idea for the solver was that manually installed packages are always protected from removals: in terms of SAT solving, they are facts. Automatically installed packages become optional unit clauses. Optional clauses are solved after manual ones; they don't partake in normal unit propagation. This worked fine; say you had
A                                   # install request for A
B                                   # manually installed, keep it
A depends on: conflicts-B | C
Installing A on a system with B present installed C, as the solver was not allowed to install the conflicts-B package while B was installed. However, I also introduced a mode to allow removing manually installed packages, and that's where it broke down; now, instead of B being a fact, our clauses looked like:
A                               # install request for A
A depends on: conflicts-B | C
Optional: B                     # try to keep B installed
As a result, we installed conflicts-B and removed B; the steps the solver takes are:
  1. A is a fact, mark it
  2. A depends on: conflicts-B | C is the strongest clause, try to install conflicts-B
  3. We unit propagate that conflicts-B conflicts with B, so we mark not B
  4. Optional: B is reached, but not satisfiable; ignore it because it's optional.
This isn't correct: Just because we allow removing manually installed packages doesn't mean that we should remove manually installed packages if we don't need to. Fixing this turns out to be surprisingly easy. In addition to adding our optional (soft) clauses, let's first assume all of them! But to explain how this works, we first need to explain some terminology:
  1. The solver operates on a stack of decisions
  2. enqueue means a fact is being added at the current decision level, and enqueued for propagation
  3. assume bumps the decision level, and then enqueues the assumed variable
  4. propagate looks at all the facts and sees if any clause becomes unit, and then enqueues it
  5. unit is when a clause has a single literal left to assign
To illustrate this in pseudo Python code:
  1. We introduce all our facts, and if they conflict, we are unsat:
    for fact in facts:
        enqueue(fact)
    if not propagate():
        return False
    
  2. For each optional literal, we register a soft clause and assume it. If the assumption fails, we ignore it. If it succeeds, but propagation fails, we undo the assumption.
    for optionalLiteral in optionalLiterals:
        registerClause(SoftClause([optionalLiteral]))
        if assume(optionalLiteral) and not propagate():
            undo()
    
  3. Finally we enter the main solver loop:
    while True:
        if not propagate():
            if not backtrack():
                return False
        elif <all clauses are satisfied>:
            return True
        elif it := find("best unassigned literal satisfying a hard clause"):
            assume(it)
        elif it := find("best literal satisfying a soft clause"):
            assume(it)
    
The key point to note is that the main loop will undo the assumptions in order; so if you assume A,B,C and B is not possible, we will have also undone C. But since C is also enqueued as a soft clause, we will then later find it again:
  1. Assume A: State=[Assume(A)], Clauses=[SoftClause([A])]
  2. Assume B: State=[Assume(A),Assume(B)], Clauses=[SoftClause([A]),SoftClause([B])]
  3. Assume C: State=[Assume(A),Assume(B),Assume(C)], Clauses=[SoftClause([A]),SoftClause([B]),SoftClause([C])]
  4. Solve finds a conflict, backtracks, and sets not C: State=[Assume(A),Assume(B),not(C)]
  5. Solve finds a conflict, backtracks, and sets not B: State=[Assume(A),not(B)] (C is no longer assumed either)
  6. Solve, assume C as it satisfies SoftClause([C]) as next best literal: State=[Assume(A),not(B),Assume(C)]
  7. All clauses are satisfied, solution is A, not B, and C.
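This trace can be reproduced with a toy model. The following is my own sketch, not the APT code, and the hard clauses in it are invented purely to trigger the two conflicts from the trace; the point is that C, once undone by backtracking, is re-proposed later because its soft clause is still registered:

```python
# Toy chronological-backtracking solver (a sketch, not APT's implementation).
# Hard clauses are lists of (variable, wanted-polarity) literals; the soft
# literals are assumed greedily in order and re-proposed after being undone.

def violated(assign, hard):
    # a clause is falsified once every one of its literals is assigned false
    return any(all(assign.get(var) == (not want) for var, want in clause)
               for clause in hard)

def solve(hard, soft):
    trail = []  # decision stack: (var, value, is_decision)
    while True:
        assign = {var: val for var, val, _ in trail}
        if violated(assign, hard):
            # conflict: undo forced assignments, then flip the last decision
            while trail and not trail[-1][2]:
                trail.pop()
            if not trail:
                return None  # unsatisfiable
            var, _, _ = trail.pop()
            trail.append((var, False, False))  # forced: not var
            continue
        pending = [v for v in soft if v not in assign]
        if not pending:
            return assign  # every soft literal decided one way or the other
        trail.append((pending[0], True, True))  # assume next soft literal

hard = [
    [("B", False), ("C", False)],               # not B or not C
    [("A", False), ("B", False), ("C", True)],  # not A or not B or C
]
print(solve(hard, ["A", "B", "C"]))  # {'A': True, 'B': False, 'C': True}
```

Running it replays the seven steps: C is flipped first, then B, and then C comes back as the next best soft literal, yielding A, not B, C.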
This is not (correct) MaxSAT, because we actually do not guarantee that we satisfy as many soft clauses as possible. Consider you have the following clauses:
Optional: A
Optional: B
Optional: C
B Conflicts with A
C Conflicts with A
There are two possible results here:
  1. A: If we assume A first, we are unable to satisfy B or C.
  2. B, C: If we assume either B or C first, A is unsat.
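To see this concretely, here is a small brute-force illustration of my own (not the APT code): it assumes the soft literals in a fixed order and keeps each one only if the hard clauses stay satisfiable, so the assumption order alone decides which maximum we land in.

```python
# Brute-force sketch of the A / B,C example above (not APT's implementation):
# hard clauses say B conflicts with A and C conflicts with A.
from itertools import product

def hard_ok(a, b, c):
    # "B conflicts with A" and "C conflicts with A"
    return not (a and b) and not (a and c)

def greedy(order):
    kept = {}
    for lit in order:
        trial = dict(kept)
        trial[lit] = True
        # the assumption survives if some completion of the still-free
        # variables satisfies the hard clauses
        ok = any(
            hard_ok(*(trial.get(v, free) for v, free in zip("ABC", bits)))
            for bits in product([False, True], repeat=3)
        )
        kept[lit] = ok  # keep the literal, or falsify it as the solver would
    return {v for v, val in kept.items() if val}

print(sorted(greedy("ABC")))  # ['A']       local maximum: one soft clause
print(sorted(greedy("BCA")))  # ['B', 'C']  global maximum: two soft clauses
```

One pass over n soft literals, as in the heuristic described above, rather than the exponential search a true MaxSAT solver would need.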
The question to ponder, though, is whether we actually need a global maximum or whether a local maximum is satisfactory in practice for a dependency solver. If you look at it, a naive MaxSAT solver needs to run the SAT solver 2**n times for n soft clauses, whereas our heuristic only needs n runs. For dependency solving, we do not seem to have a strong need for a global maximum: there are various other preferences between our literals, say priorities; and empirically, from evaluating hundreds of regressions without the initial assumptions, I can say that the assumptions do fix those cases and the result is correct. Further improvements exist, though, and we can look into them if they are needed.

25 April 2025

Bits from Debian: Debian Project Leader election 2025 is over, Andreas Tille re-elected!

The voting period and tally of votes for the Debian Project Leader election has just concluded and the winner is Andreas Tille, who has been elected for the second time. Congratulations! Out of a total of 1,030 developers, 362 voted. As usual in Debian, the voting method used was the Condorcet method. More information about the result is available in the Debian Project Leader Elections 2025 page. Many thanks to Andreas Tille, Gianfranco Costamagna, Julian Andres Klode, and Sruthi Chandran for their campaigns, and to our Developers for voting. The new term for the project leader started on April 21st and will expire on April 20th 2026.

13 February 2025

Bits from Debian: DebConf25 Logo Contest Results

Last November, the DebConf25 Team asked the community to help design the logo for the 25th Debian Developers' Conference and the results are in! The logo contest received 23 submissions and we thank all the 295 people who took the time to participate in the survey. There were several amazing proposals, so choosing was not easy. We are pleased to announce that the winner of the logo survey is 'Tower with red Debian Swirl originating from blue water' (option L), by Juliana Camargo and licensed CC BY-SA 4.0. [DebConf25 Logo Contest Winner] Juliana also shared with us a bit of her motivation, creative process and inspiration when designing her logo:
The idea for this logo came from the city's landscape, the place where the medieval tower looks over the river that meets the sea, almost like guarding it. The Debian red swirl comes out of the blue water splash as a continuous stroke, and they are also the French flag colours. I tried to combine elements from the city when I was sketching in the notebook, which is an important step for me as I feel that ideas flow much more easily, but the swirl + water with the tower was the most refreshing combination, so I jumped to the computer to design it properly. The water bit was the most difficult element, and I used the Debian swirl as a base for it, so both would look consistent. The city name font is a modern calligraphy style and the overall composition is not symmetric but balanced with the different elements. I am glad that the Debian community felt represented with this logo idea!
Congratulations, Juliana, and thank you very much for your contribution to Debian! The DebConf25 Team would like to take this opportunity to remind you that DebConf, the annual international Debian Developers Conference, needs your help. If you want to help with the DebConf 25 organization, don't hesitate to reach out to us via the #debconf-team channel on OFTC. Furthermore, we are always looking for sponsors. DebConf is run on a non-profit basis, and all financial contributions allow us to bring together a large number of contributors from all over the globe to work collectively on Debian. Detailed information about the sponsorship opportunities is available on the DebConf 25 website. See you in Brest!

2 February 2025

Joachim Breitner: Coding on my eInk Tablet

For many years I wished I had a setup that would allow me to work (that is, code) productively outside in the bright sun. It's winter right now, but when it's summer again it's always a bit frustrating to be stuck indoors. This weekend I got closer to that goal. TL;DR: Using code-server on a beefy machine seems to be quite neat.
Passively lit coding

Personal history Looking back at my own old blog entries, I find one from 10 years ago describing how I bought a Kobo eBook reader with the intent of using it as an external monitor for my laptop. It seems that I got a proof-of-concept setup working using VNC, but it was tedious to set up, and I never actually used it. I subsequently noticed that the eBook reader is rather useful for reading eBooks, and it has been in heavy use for that ever since. Four years ago I gave this old idea another shot and bought an Onyx BOOX Max Lumi. This is an A4-sized tablet running Android with the very promising feature of an HDMI input, so hopefully I'd attach it to my laptop and it would "just work". Turns out that this never worked as well as I hoped: even if I set the resolution to exactly the tablet's screen resolution I got blurry output, and it also drained the battery a lot, so I gave up on this. I subsequently noticed that the tablet is rather useful for taking notes, and it has been in sporadic use for that. Going off on this tangent: I later learned that the HDMI input of this device appears to the system like a camera input, and I don't have to use Boox's monitor app but could use other apps like FreeDCam as well. This somehow managed to fix the resolution issues, but the setup still wasn't convenient enough to be used regularly. I also played around with pure terminal approaches, e.g. SSHing into a system, but since my usual workflow was never purely text-based (I was at least used to using a window manager instead of a terminal multiplexer like screen or tmux), that never led anywhere either.

VSCode, working remotely Since these attempts I have started a new job working on the Lean theorem prover, and working on or with Lean basically means using VSCode. (There is a very good neovim plugin as well, but I'm using VSCode nevertheless, if only to make sure I am dogfooding our default user experience.) My colleagues have said good things about using VSCode with the Remote SSH extension to work on a beefy machine, so I gave this a try now as well, and while it's not a complete game changer for me, it does make certain tasks (rebuilding everything after switching branches, running the test suite) very convenient. And it's a bit spooky to run these workloads without the laptop's fan spinning up. In this setup, the workspace is remote, but VSCode still runs locally. But it made me wonder about my old goal of being able to work reasonably efficiently on my eInk tablet. Can I replicate this setup there? VSCode itself doesn't run on Android directly. There are projects that run a Linux chroot or termux on the Android system, and then you can use VNC to connect to it (e.g. on Andronix), but that did not seem promising. It seemed fiddly, and I probably should take it easy on the tablet's system.

code-server, running remotely A more promising option is code-server. This is a fork of VSCode (actually of VSCodium) that runs completely on the remote machine, and the client machine just needs a browser. I set that up this weekend and found that I was able to do a little bit of work reasonably well.

Access With code-server one has to decide how to expose it safely enough. I decided against the tunnel-over-SSH option, as I expected that to be somewhat tedious to set up (both initially and for each session) on the Android system, and I liked the idea of being able to use any device to work in my environment. I also decided against the more involved setup of a reverse proxy behind a proper hostname with SSL, because it involves a few extra steps, and some of them I cannot do, as I do not have root access on the shared beefy machine I wanted to use. That left me with the option of using code-server's built-in support for self-signed certificates and a password:
$ cat .config/code-server/config.yaml
bind-addr: 1.2.3.4:8080
auth: password
password: xxxxxxxxxxxxxxxxxxxxxxxx
cert: true
With trust-on-first-use this seems reasonably secure. Update: I noticed that the browsers would forget that I trust this self-signed cert after restarting the browser, and also that I cannot install the page (as a Progressive Web App) unless it has a valid certificate. But since I don't have superuser access to that machine, I can't just follow the official recommendation of using a reverse proxy on port 80 or 443 with automatic certificates. Instead, I pointed a hostname that I control to that machine, obtained a certificate manually on my laptop (using acme.sh) and copied the files over, so the configuration now reads as follows:
bind-addr: 1.2.3.4:3933
auth: password
password: xxxxxxxxxxxxxxxxxxxxxxxx
cert: .acme.sh/foobar.nomeata.de_ecc/foobar.nomeata.de.cer
cert-key: .acme.sh/foobar.nomeata.de_ecc/foobar.nomeata.de.key
(This is getting very specific to my particular needs and constraints, so I'll spare you the details.)

Service To keep code-server running I created a systemd service that's managed by my user's systemd instance:
~ $ cat ~/.config/systemd/user/code-server.service
[Unit]
Description=code-server
After=network-online.target
[Service]
Environment=PATH=/home/joachim/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
ExecStart=/nix/var/nix/profiles/default/bin/nix run nixpkgs#code-server
[Install]
WantedBy=default.target
(I am using nix as a package manager on a Debian system there, hence the additional PATH and complex ExecStart. If you have a more conventional setup then you do not have to worry about Environment and can likely use ExecStart=code-server.) For this to survive me logging out I had to ask the system administrator to run loginctl enable-linger joachim, so that systemd allows my jobs to linger.
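For reference, a conventional setup (with code-server installed on the regular PATH, no nix involved) should be able to use a much simpler unit along these lines; this is a sketch I have not needed myself:

```ini
; ~/.config/systemd/user/code-server.service
; sketch for a conventional setup where code-server is on the default PATH
[Unit]
Description=code-server
After=network-online.target

[Service]
ExecStart=code-server

[Install]
WantedBy=default.target
```

After a systemctl --user daemon-reload and systemctl --user enable --now code-server, the same loginctl enable-linger step applies if the service should survive logging out.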

Git credentials The next issue to be solved was how to access the git repositories. The work is all on public repositories, but I still need a way to push my work. With the classic VSCode-SSH-remote setup from my laptop, this is no problem: my local SSH key is forwarded using the SSH agent, so I can seamlessly use it on the other side. But with code-server there is no SSH key involved. I could create a new SSH key and store it on the server. That did not seem appealing, though, because SSH keys on GitHub always have full access. It wouldn't be horrible, but I still wondered if I could do better. I thought of creating fine-grained personal access tokens that only allow me to push code to specific repositories, and nothing else, and just storing them permanently on the remote server. Still a neat and convenient option, but creating PATs for our org requires approval and I didn't want to bother anyone on the weekend. So I am experimenting with GitHub's git-credential-manager now. I have configured it to use git's credential cache with an elevated timeout, so that once I log in, I don't have to log in again for one workday.
$ nix-env -iA nixpkgs.git-credential-manager
$ git-credential-manager configure
$ git config --global credential.credentialStore cache
$ git config --global credential.cacheOptions "--timeout 36000"
To log in, I have to visit https://github.com/login/device on an authenticated device (e.g. my phone) and enter an 8-character code. Not too shabby in terms of security. I only wish that webpage would not require me to press Tab after each character. This still grants rather broad permissions to the code-server, but at least only temporarily.

Android setup On the client side I could now open https://host.example.com:8080 in Firefox on my eInk Android tablet, click through the warning about self-signed certificates, log in with the fixed password mentioned above, and start working! I switched to a theme that supposedly is eInk-optimized (eInk by Mufanza). It's not perfect (e.g. git diffs are unhelpful because it is not possible to distinguish deleted from added lines), but it's a start. There are more eInk themes on the official Visual Studio Marketplace, but because code-server is a fork it cannot use that marketplace, and for example this theme isn't on Open VSX. For some reason the F11 key doesn't work, but going fullscreen is crucial, because screen real estate is scarce in this setup. I can go fullscreen using VSCode's command palette (Ctrl-P) and invoking the command there, but Firefox often jumps out of fullscreen mode, which is annoying. I still have to pay attention to when that's happening; maybe it's the Esc key, which I am of course using a lot due to my vim bindings. A more annoying problem was that on my Boox tablet, sometimes the on-screen keyboard would pop up, which is seriously annoying! It took me a while to track this down: the Boox has two virtual keyboards installed, the usual Google AOSP keyboard and the Onyx Keyboard. The former is clever enough to stay hidden when there is a physical keyboard attached, but the latter isn't. Moreover, pressing Shift-Ctrl on the physical keyboard rotates through the virtual keyboards. Now, VSCode has many keyboard shortcuts that require Shift-Ctrl (especially on an eInk device, where you really want to avoid using the mouse). And the limited settings exposed by the Boox Android system do not allow you to configure that or disable the Onyx keyboard! To solve this, I had to install the KISS Launcher, which allowed me to see more Android settings and in particular to disable the Onyx keyboard. So this is fixed.
I was hoping to improve the experience even more by opening the web page as a Progressive Web App (PWA), as described in the code-server FAQ. Unfortunately, that did not work at first. Firefox on Android did not recognize the site as a PWA (even though it recognizes a PWA test page), and I couldn't use Chrome either because (unlike Firefox) it does not consider a site with a self-signed certificate a secure context, and then code-server does not work fully. Maybe this is just some bug that gets fixed in later versions. Now that I use a proper certificate, I can use it as a Progressive Web App, and with Firefox on Android this starts the app in full-screen mode (no system bars, no location bar). The F11 key still doesn't work, and using the command palette to enter fullscreen does nothing visible, but then Esc leaves that fullscreen mode and I suddenly have the system bars again. But maybe if I just don't do that I get the full-screen experience. We'll see. I have not worked with this enough yet to assess how much the smaller screen, the lack of colors and the slower refresh rate will bother me. I probably need to hide Lean's InfoView more often, and maybe use the Error Lens extension, to avoid having to split my screen vertically. I also cannot easily work on a park bench this way, with a tablet and a separate external keyboard. I'd need at least a table, or some additional piece of hardware that turns tablet + keyboard into some laptop-like structure that I can put on my, well, lap. There are cases for Onyx products that include a keyboard, and maybe they work on the lap, but they don't have the TrackPoint that I have on my ThinkPad TrackPoint Keyboard II, and how can you live without that?

Conclusion After this initial setup, chances are good that entering and using this environment is convenient enough for me to actually use it; we will see when it gets warmer. A few bits could be better. In particular, logging in and authenticating GitHub access could be both more convenient and safer. I could imagine that when I open the page I confirm that on my phone (maybe with a fingerprint), and that temporarily grants access to the code-server and to specific GitHub repositories only. Is that easily possible?

8 August 2024

Jonathan Carter: DebConf24 Busan, South Korea

I'm finishing typing up this blog entry hours before my last 13-hour leg back home, after I spent 2 weeks in Busan, South Korea for DebCamp24 and DebConf24. I had a rough year and decided to take it easy this DebConf, so this is the first DebConf in a long time where I didn't give any talks. I mostly caught up on a bit of packaging, worked on DebConf video stuff, attended a few BoFs and talked to people. Overall it was a very good DebConf, which also turned out to be more productive than I expected. In the welcome session on the first day of DebConf, Nicolas Dandrimont mentioned that a benefit of DebConf is that it provides a sort of caffeine for your Debian motivation. I could certainly feel that effect swell as the days went past, and it's nice to be excited about some ideas again that would otherwise be fading.

Recovering DPL It's a bit of a gear shift having been DPL for 4 years, and on the DebConf Committee for nearly 5 years before that, and then being at DebConf while some issue arises (as one always does during a conference). At first I jump into high alert mode, but then I have to remind myself "it's not your problem anymore" and let others deal with it. It was nice spending a little in-person time with Andreas Tille, our new DPL; we did some more handover and discussed some current issues. I still have a few dozen emails in my DPL inbox that I need to collate and forward to Andreas; I hope to finish all that up by the end of August. During the Bits from the DPL talk, the usual question came up of whether Andreas will consider running for DPL again, to which he just responded in a slide: "Maybe". I think it's a good idea for a DPL to do at least two terms if it all works out for everyone, since it takes a while to get up to speed on everything. Also, having been DPL for four years, I have a lot to say about it, and I think there's a lot we can fix in the role, or at least discuss. If I had the bandwidth for it I would have scheduled a BoF for it, but I'll very likely do that for the next DebConf instead!

Video team I set up the standby loop for the video streaming setup. We call it loopy; it's a bunch of OBS scenes that provide announcements, show sponsors, the schedule and some social content. I wrote about it back in 2020, but it's evolved quite a bit since then, so I'm probably due to write another blog post with a bunch of updates on it. I hope to organise a video team sprint in Cape Town in the first half of next year, so I'll summarize everything before then.

It would've been great if we could have had some displays in social areas to show talks, the loop and other content, but we were just too pressed for time for that. This year's DebConf had a very compressed timeline, and there was just too much that had to be done and figured out at the last minute. This put quite a lot of strain on the organisers, but I was glad to see how, for the most part, most attendees were very sympathetic to some rough edges (but I digress). I added more of the OBS machine setup to the videoteam's ansible repository, so as of now it just needs an ansible setup and the OBS data and it's good to go. The loopy data is already in the videoteam git repository, so I could probably just add a git pull and create some symlinks in ansible, and then that machine can be installed from 0% to 100% by just installing via debian-installer with our ansible hooks. This DebConf I volunteered quite a bit for actual video roles during the conference, something I didn't have much time for in recent DebConfs, and it's been fun, especially in a session or two where nearly none of the other volunteers showed up. Sometimes chaos is just fun :-)
Baekyongee is the university mascot, who is visible throughout the university. So of course we included this four-legged whale creature on the loop too!

Packaging I was hoping to do more packaging during DebCamp, but at least it was a non-zero amount:
  • Uploaded gdisk 1.0.10-2 to unstable (previously tested effects of adding dh-sequence-movetousr) (Closes: #1073679).
  • Worked a bit on bcachefs-tools (updating git to 1.9.4), but it has a build failure that I need to look into (we might need a newer bindgen). Update: I'm probably going to ROM this package soon; it doesn't seem suitable for packaging in Debian.
  • Calamares: Tested a fix for encrypted installs, and uploaded it.
  • Calamares: Uploaded (3.3.8-1) to backports (at the time of writing it s still in backports-NEW).
  • Backported obs-gradient-source for bookworm.
  • Did some initial packaging on Cambalache; I'll upload to unstable once wlroots (0.18) hits unstable.
  • Pixelorama 1.0 I did some initial packaging for Pixelorama back when we did the MiniDebConf Gaming Edition, but it had a few stoppers back then. Version 1.0 seems to fix all of that, but it depends on Godot 4.2 and we're still on the 3 series in Debian, so I'll upload this once Godot 4.2 hits at least experimental. Godot software/games is otherwise quite easy to run; it's basically just source code / data that is installed and then run via godot-runner (the godot3-runner package in Debian).

BoFs Python Team BoF Link to the etherpad / pad archive link and video can be found on the talk page: https://debconf24.debconf.org/talks/31-python-bof/ The session ended up being extended to a second part, since all the issues didn't fit into the first session. I was distracted by too many things during the Python 3.12 transition (to the point where I thought that 3.11 was still new in Debian), so it was very useful listening to the retrospective of that transition. There was a discussion whether Python 3.13 could still make it to testing in time for the freeze, and it seems that there is consensus that it can, although likely with new experimental features like disabling the global interpreter lock and the just-in-time compiler disabled. I learned for the first time about the dead batteries project, PEP-0594, which removes ancient modules that have mostly been superseded from the Python standard library. There was some talk about the process for changing team policy, and a policy discussion on whether we should require autopkgtests as a SHOULD or a MUST for migration to testing. As with many things, the devil is in the details, and in my opinion you could go either way and achieve a similar result (the original MUST proposal allowed exceptions, which imho made it the same as the SHOULD proposal). There's an idea to do some ongoing remote sprints, like having co-ordinated days for bug squashing / working on stuff together. This is a nice idea and probably a good way to energise the team and also to gain some interest from potential newcomers. Louis-Philippe Véronneau was added as a new team admin and there was some discussion on various Sphinx issues and which Lintian tags might be needed for Python 3.13. If you want to know more, you probably have to watch the videos / read the notes :)
    Debian.net BoF Link to the etherpad / pad archive link can be found on the talk page: https://debconf24.debconf.org/talks/37-debiannet-team-bof Debian Developers can set up services on subdomains of debian.net, but a big problem we've had before was that developers were on their own for hosting those services. This meant that they either hosted them on their DSL/fiber connection at home, paid for the hosting themselves, or hosted them at different services, which became an accounting nightmare when claiming back the funds used. So, a few of us started the debian.net hosting project (sometimes we just call it debian.net, which is probably a bit of a bug) so that Debian has accounts with cloud providers, and as admins we can create instances there that get billed directly to Debian. We had an initial rush of services, but requests have slowed down since (not really a bad thing, we don't want lots of spurious requests). Last year we did a census to check which of the instances were still used, whether they received system updates, and to ask whether they are performing backups. It went well and some issues were found along the way, so we'll be doing that again. We also gained two potential volunteers to help run things, which is great. Debian Social BoF Link to the etherpad / pad archive link can be found on the talk page: https://debconf24.debconf.org/talks/34-debiansocial-bof We discussed the services we run; you can view the current state of things at: https://wiki.debian.org/Teams/DebianSocial Pleroma has shown some cracks over the last year or so, and there are some forks that seem promising. At the same time, it might be worthwhile considering Mastodon too, so we'll do some comparison of features and maintenance and find a way forward. At the time when Pleroma was installed, it was way ahead in terms of moderation features. Pixelfed is doing well and chugging along nicely; we should probably promote it more.
Peertube is working well, although we learned that we still don't have all the recent DebConf videos on there. A bunch of other issues should be fixed once we move it to a new machine that we plan to set up. We're removing writefreely and plume. Nice concepts, but they didn't get much traction yet, and no one who signed up for these actually used them, which is fine; some experimentation with services is good, and sometimes they prove to be very popular and other times not. The WordPress multisite instance has some mild use; otherwise we haven't had any issues. Matrix ended up being much, much bigger than we thought, both in usage and in its requirements. It's very stateful and remembers discussions for as long as you let it, so its Postgres database is continuously expanding; this will also be a lot easier to manage once we have this on the new host. Jitsi is also quite popular, but it could probably be on jitsi.debian.net instead (we created it on debian.social during the initial height of COVID-19, when we didn't have the debian.net hosting yet), although in practice it doesn't really matter where it lives. Most of our current challenges will be solved by moving everything to a new big machine that has a few public IPs available for some VMs, so we'll be doing that shortly. Debian Foundation Discussion BoF This was some brainstorming about the future structure of Debian, and what steps might be needed to get there. It's way too big a problem to take on in a BoF, but we made some progress in figuring out some smaller pieces of the larger puzzle. The DPL is going to get in touch with some legal advisors and our trusted organisations so that we can aim to formalise our relationships a bit more by the time it's DebConf again. I also introduced my intention to join the Debian Partners delegation.
When I was DPL, I enjoyed talking with external organisations who wanted to help Debian, but helping external organisations help Debian turned out to be too much additional load on top of the usual DPL roles, so I'm pursuing this with the Debian Partners team; more on that some other time. This session wasn't recorded, but if you feel like you missed something, don't worry: all intentions will be communicated and discussed with project members before anything moves forward. There was a strong agreement in the room though that we should push forward on this, and not reach another DebConf without having made progress on formalising Debian's structure more.

    Social Conference Dinner
    Conference Dinner Photo from Santiago
    The conference dinner took place in the university gymnasium. I hope not many people do sports there in the summer, because it got HOT. There were also some interesting observations on the thermodynamics of the attempted cooling solutions, which was amusing. On the plus side, the food was great, the company was good, and the speeches were kept to a minimum, so it was a great conference dinner, even though it was probably cut a bit short due to the heat. Cheese and Wine Cheese and Wine happened on 1 August, which happens to be the date I became a DD at DebConf17 in Montréal seven years before, so this was a nice accidental celebration of my Debiversary :) Since I'm running out of time, I'll add some more photos to this post some time after publishing it :P Group Photo As per DebConf tradition, Aigars took the group photo. You can find the high resolution version on Debian's GitLab instance.
    Debian annual conference Debconf 24, Busan, South Korea
    Photography: Aigars Mahinovs aigarius@debian.org
    License: CC-BYv3+ or GPLv2+
    Talking Ah yes, talking to people is a big part of DebConf, but I didn t keep track of it very well.
    • I mostly listened to Alper a bit about his ideas for his talk about debian installer.
    • I talked to Rhonda a bit about ActivityPub and MQTT and whether they could be useful for publicising Debian activity.
    • Listened to Gunnar and Julian have a discussion about GPG and APT which was interesting.
    • I learned that you can learn Hangul, the Korean alphabet, in about an hour or so (I wish I knew that in all my years of playing StarCraft II).
    • We had the usual continuous keysigning party. Besides its intended function, this is always a good ice breaker and a way for shy people to meet other shy people.
    • and many other fly-by discussions.

    Stuff that didn t happen this DebConf
    • loo.py A simple Python script that could eventually replace the obs-advanced-scene-switcher sequencer in OBS. It would also be extremely useful if we ever replace OBS for loopy. I was hoping to have some time to hack on this and try to recreate the current loopy in loo.py, but didn't have the time.
    • toetally This year the videoteam had to scramble to get a bunch of resistors to assemble some tally lights. Even when assembled, they were a bit troublesome. It would've been nice to hack on toetally and get something ready for testing, but it mostly relies on having something like a Raspberry Pi Zero with an attached screen in order to work on it further. I'll try to have something ready for the next mini conf though.
    • extrepo on debian live I think we should have extrepo installed by default on desktop systems. I meant to start a discussion on this, but perhaps it's just time I go ahead and do it and announce it.
    • Live stream to peertube server It would've been nice to live stream DebConf to PeerTube, but the dependency tree to get this going got a bit too huge. Following our plans discussed in the Debian Social BoF, we should have this safely ready before the next MiniDebConf and should be able to test it there.
    • Desktop Egg There was this idea to get a stand-in theme for Debian testing/unstable until the artwork for the next release is finalized (Debian bug: #1038660). I have an idea that I meant to implement months ago, but too many things got in the way. It's based on Juliette Taka's Homeworld theme, and basically transforms the homeworld into an egg. Get it? Something that hasn't hatched yet? I also only recently noticed that we never used the actual homeworld graphics (featuring the world image) in the final bullseye release. lol.
    So, another DebConf and another new plush animal. Last but not least, thanks to PKNU for being such a generous and fantastic host to us! See you again at DebConf25 in Brest, France next year!

      24 May 2024

      Julian Andres Klode: Observations in Debian dependency solving

       In my previous blog post, I explored The New APT 3.0 solver. Since then I have been at work in the test suite making tests pass and fixing some bugs. You see, for all intents and purposes, the new solver is a very stupid, naive DPLL SAT solver (it just so happens we don't actually have any pure literals in there). We can control it in a bunch of ways:
      1. We can mark packages as install or reject
      2. We can order actions/clauses. When backtracking the action that came later will be the first we try to backtrack on
      3. We can order the choices of a dependency - we try them left to right.
       This is about all that we really want to do: we can't, if we reach a conflict, say "oh, but this conflict was introduced by that upgrade, and it seems more important, so let's not backtrack on the upgrade request but on this dependency instead". This forces us to think about lowering the dependency problem into this form, such that not only do we get formally correct solutions, but also semantically correct ones. This is nice because we can apply a systematic approach to the issue rather than introducing ad-hoc rules as in the old solver, which had a "which of these packages should I flip the opposite way to break the conflict" kind of thinking. Now our test suite has a whole bunch of these semantics encoded in it, and I'm going to share some problems and ideas for how to solve them. I can't wait to fix these and the error reporting and then turn it on in Ubuntu and later Debian (the defaults change is a post-trixie change, let's be honest).
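The three control knobs above can be illustrated with a toy chronological-backtracking solver. This is a hypothetical Python model for illustration only, not APT's actual C++ implementation: user install/reject requests pre-assign packages, clauses are processed in a fixed order, and the alternatives within a clause are tried left to right, undoing the most recent choice on conflict.

```python
# Toy chronological-backtracking solver (illustrative model, not APT code).
# clauses: list of alternatives [(pkg, wanted), ...] tried left to right.
# forced: dict pkg -> bool encoding user install (True) / reject (False).

def solve(clauses, forced=None):
    assign = dict(forced or {})

    def consistent(pkg, val):
        # A choice is consistent if the package is unassigned or agrees.
        return assign.get(pkg, val) == val

    def backtrack(i):
        if i == len(clauses):
            return True
        for pkg, val in clauses[i]:          # choices tried left to right
            if consistent(pkg, val):
                had = pkg in assign
                assign[pkg] = val
                if backtrack(i + 1):
                    return True
                if not had:
                    del assign[pkg]          # undo the choice on conflict
        return False

    return assign if backtrack(0) else None

# A requires B or C; the user rejected B, so the solver falls back to C.
result = solve([[("A", True)], [("B", True), ("C", True)]],
               forced={"B": False})
```

Because backtracking undoes the latest action first, the clause order directly controls which decision gets sacrificed when a conflict appears, which is exactly the lever discussed in the rest of this post.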

       apt upgrade is hard The apt upgrade command implements a safe version of dist-upgrade that essentially calculates the dist-upgrade and then undoes anything that would cause a package to be removed, but it (unlike its apt-get counterpart) allows the solver to install new packages. Now, consider that the following package is installed:
       X Depends: A (= 1) | B
      
       An upgrade from A=1 to A=2 is available. What should happen? The classic solver would choose to remove X in a dist-upgrade, and then upgrade A, so its answer is quite clear: keep back the upgrade of A. The new solver however sees two possible solutions:
       1. Install B to satisfy X Depends: A (= 1) | B.
      2. Keep back the upgrade of A
       Which one does it pick? This depends on the order in which it sees the upgrade action for A and the dependency, as it will backjump chronologically. So:
       1. If it gets to the dependency first, it marks A=1 for install to satisfy A (= 1). Then it gets to the upgrade request, which is just A (= 2) | A (= 1), sees it is satisfied already, and is content.
       2. If it gets to the upgrade request first, it marks A=2 for install to satisfy A (= 2). Then later it gets to X Depends: A (= 1) | B, sees that A (= 1) is not satisfiable, and picks B.
      We have two ways to approach this issue:
      1. We always order upgrade requests last, so they will be kept back in case of conflicting dependencies
       2. We require that, for apt upgrade, a currently satisfied dependency must be satisfied by currently installed packages, hence eliminating B as a choice.
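The ordering sensitivity can be sketched with a tiny first-fit model (a hypothetical illustration with no backtracking, not APT code): the same two clauses give different outcomes depending purely on which is processed first.

```python
# Sketch of the ordering effect (illustrative model, not APT code).
# Each clause lists acceptable (package, version) alternatives; the
# solver commits to the first consistent one, so whichever clause is
# processed first "wins" the choice for A.

def first_fit(clauses):
    chosen = {}                                  # pkg -> committed version
    for alts in clauses:
        for name, ver in alts:
            if chosen.get(name, ver) == ver:     # unassigned or agreeing
                chosen[name] = ver
                break
    return chosen

dep     = [("A", 1), ("B", "installed")]   # X Depends: A (= 1) | B
upgrade = [("A", 2), ("A", 1)]             # upgrade request: A (= 2) | A (= 1)

# Dependency first: A stays at 1, the upgrade is kept back, B not needed.
kept_back = first_fit([dep, upgrade])
# Upgrade first: A moves to 2, so the dependency must fall back to B.
upgraded  = first_fit([upgrade, dep])
```

Approach 1 above corresponds to always feeding the solver `dep` before `upgrade`; approach 2 would instead strike `("B", "installed")` from `dep` entirely because B is not currently installed.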

       Recommends are hard too See, if you have an X Recommends: A (= 1) and a new version of A, A (= 2), the solver currently will silently break the Recommends in some cases. But let's explore what the behavior of an X Recommends: A (= 1) in combination with an available upgrade to A (= 2) should be. We could say the rule should be:
      • An upgrade should keep back A instead of breaking the Recommends
      • A dist-upgrade should either keep back A or remove X (if it is obsolete)
       This essentially leaves us the same choices as for the previous problem, but with an interesting twist. We can change the ordering (and we already did), but we could also introduce a new rule, "promotions":
      A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently satisfied, must continue to be satisfied, that is, it effectively is promoted to a Depends.
      This neatly solves the problem for us. We will never break Recommends that are satisfied. Likewise, we already have a Recommends demotion rule:
      A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently unsatisfied, will not be further evaluated (it is treated like a Suggests is in the default configuration).
       Whether we should be allowed to break Suggests with our decisions or not (the old autoremover did not, for instance) is a different decision. Should we promote currently satisfied Suggests to Depends as well? Should we follow currently satisfied Suggests so the solver sees them and doesn't autoremove them, but treat them as optional?
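The promotion and demotion rules quoted above condense into a small decision table. A hypothetical sketch (the function name and return markers are invented for illustration, this is not APT's API):

```python
# Sketch of the Recommends promotion/demotion rules (hypothetical helper,
# not APT code). A Recommends that already existed in the installed
# version and is currently satisfied is promoted to a hard dependency;
# one that is already broken is demoted and not evaluated further.

def classify_recommends(in_installed_version, currently_satisfied):
    if not in_installed_version:
        return "recommends"   # new Recommends: normal soft handling
    if currently_satisfied:
        return "depends"      # promotion: must continue to be satisfied
    return "ignored"          # demotion: treated like a Suggests

# A satisfied Recommends carried over from the installed version
# must not be broken by an upgrade:
promoted = classify_recommends(True, True)
```

The open question in the text is whether satisfied Suggests should get the same `"depends"` promotion, or merely be followed so the autoremover keeps them.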

      tightening of versioned dependencies Another case of versioned dependencies with alternatives that has complex behavior is something like
       X Depends: A (>= 2) | B
       X Recommends: A (>= 2) | B
      
       In both cases, installing X should upgrade an A < 2 instead of installing B. But a naive SAT solver might not: if your request to keep A installed is encoded as A (= 1) | A (= 2), then it first picks A (= 1). When it sees the Depends/Recommends, it will switch to B. We can solve this, as in the previous example, by ordering the "keep A installed" requests after any dependencies. Notably, we will enqueue the common dependencies of all A versions first, before selecting a version of A, so something may select a version for us.

      version narrowing instead of version choosing A different approach to dealing with the issue of version selection is to not select a version until the very last moment. So instead of selecting a version to satisfy A (>= 2) we instead translate
      Depends: A (>= 2)
      
      into two rules:
      1. The package selection rule:
         Depends: A
        
         This ensures that any version of A is installed (i.e. it adds a version choice clause, A (= 1) | A (= 2) in an example with two versions of A).
      2. The version narrowing rule:
         Conflicts: A (<< 2)
        
        This outright would reject a choice of A (= 1).
      So now we have 3 kinds of clauses:
      1. package selection
      2. version narrowing
      3. version selection
       If we process them in that order, we should surely be able to find the solution that best matches the semantics of our Debian dependency model, i.e. selecting earlier choices in a dependency before later choices in the face of version restrictions. This still leaves one issue: what if our maintainer did not use Depends: A (>= 2) | B but e.g. Depends: A (= 3) | B | A (= 2)? He'd expect us to fall back to B if A (= 3) is not installable, and not to A (= 2). But we'd like to enqueue A and reject all choices other than 3 and 2. I think it's fair to say: "Don't do that, then" here.
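The lowering of a versioned dependency into a package selection clause plus a version narrowing rule might look like this (a hypothetical representation for illustration, not APT's data structures):

```python
# Sketch of the version-narrowing translation (hypothetical model,
# not APT code). A dependency "A (>= 2)" becomes:
#   1. a selection clause over all versions of A  (Depends: A)
#   2. a narrowing rule rejecting versions below the bound (Conflicts: A (<< 2))

def lower(dep_pkg, min_version, available):
    """available: dict pkg -> sorted list of version numbers."""
    versions = available[dep_pkg]
    selection = [(dep_pkg, v) for v in versions]                      # any A
    narrowing = [(dep_pkg, v) for v in versions if v < min_version]   # reject A << 2
    return selection, narrowing

select, reject = lower("A", 2, {"A": [1, 2]})
# select: any version of A may be installed
# reject: version 1 is narrowed away, leaving A (= 2)
```

Processing all narrowing rules before any version selection is what lets the solver honor the left-to-right preference of the dependency before committing to a particular version.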

       Implementing strict pinning correctly APT knows a single candidate version per package; this makes the solver relatively deterministic: it will only ever pick the candidate or an installed version. This also happens to significantly reduce the search space, which is good - less backtracking. An up-to-date system will only ever have one version per package that can be installed, so we never actually have to choose versions. But of course, APT allows you to specify a non-candidate version of a package to install, for example:
      apt install foo/oracular-proposed
      
       The way this works is that the core component of the previous solver, the pkgDepCache, maintains what essentially amounts to an overlay of the policy that you can see with apt-cache policy. The current solver however validates allowed version choices against the policy directly, and hence finds these versions are not allowed and craps out. This is an interesting problem, because the solver should not be dependent on the pkgDepCache: the pkgDepCache initialization (Building dependency tree...) accounts for about half of the runtime of APT (until the Y/n prompt) and I'd really like to get rid of it. But currently the frontend does go via the pkgDepCache. It marks the packages in there, building up what you could call a transaction, and then we translate it to the new solver, and once it is done, it translates the result back into the pkgDepCache. The current implementation of allowed versions works by reducing the search space, i.e. in every dependency, we outright ignore any non-allowed versions. So if you have a version 3 of A that is ignored, a Depends: A would be translated into A (= 2) | A (= 1). However, this has two disadvantages: (1) it means that if we show you why A could not be installed, you don't even see A (= 3) in the list of choices, and (2) you would need to keep the pkgDepCache around for the temporary overrides. So instead of actually enforcing the allowed-version rule by filtering, a more reasonable model is to apply the allowed-version rule by just marking every other version as not allowed when discovering the package in the from-depcache translation layer. This doesn't really increase the search space either, but it solves both our problem of making overrides work and gives you a reasonable error message that lists all versions of A.
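The contrast between the two models can be sketched as follows (hypothetical structures invented for illustration, not APT code): filtering removes disallowed versions from every clause up front, while marking keeps them visible but rejected, so error messages can still list all versions.

```python
# Filtering vs. marking of allowed versions (illustrative sketch).

def filter_versions(clause, allowed):
    # Filtering model: disallowed versions vanish from the clause entirely.
    return [(p, v) for p, v in clause if v in allowed[p]]

def mark_versions(universe, allowed):
    # Marking model: every version stays visible, rejected ones are
    # marked once when the package is discovered in translation.
    return {(p, v): v in allowed[p]
            for p, vs in universe.items() for v in vs}

universe = {"A": [1, 2, 3]}
allowed = {"A": {1, 2}}                  # version 3 not allowed by policy
clause = [("A", 3), ("A", 2), ("A", 1)]

filtered = filter_versions(clause, allowed)   # A (= 3) disappears entirely
marks = mark_versions(universe, allowed)      # A (= 3) visible but rejected
```

With marking, an error report can walk the full `marks` table and explain that A (= 3) exists but was rejected by policy, which the filtering model cannot do.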

      pulling up common dependencies to minimize backtracking cost One of the common issues we have is that when we have a dependency group
        A | B | C | D
      
       we try them in order, and if one fails, we undo everything it did and move on to the next one. However, this isn't perhaps the best choice of operation. I explained before that one thing we do is queue the common dependencies of a package (i.e. dependencies shared by all its versions) when marking a package for install, but we don't do this here: we have already lowered the representation of the dependency group into a list of versions, so we'd need to extract the package back out of it. This can of course be done, but there may be a more interesting solution to the problem, in that we simply enqueue all the common dependencies. That is, we add n backtracking levels for n possible solutions:
      1. We enqueue the common dependencies of all possible solutions deps(A)&deps(B)&deps(C)&deps(D)
      2. We decide (adding a decision level) not to install D right now and enqueue deps(A)&deps(B)&deps(C)
      3. We decide (adding a decision level) not to install C right now and enqueue deps(A)&deps(B)
      4. We decide (adding a decision level) not to install B right now and enqueue A
       Now if we need to backtrack from our choice of A, we hopefully still have a lot of common dependencies queued that we do not need to redo. While we have more backtracking levels, each backtracking level would be significantly cheaper, especially if you have cheap backtracking (which admittedly we do not have, yet anyway). The caveat though is: it may be pretty expensive to find the common dependencies. We need to iterate over all dependency groups of A and see if they are in B, C, and D, so we have a complexity of roughly #A * (#B+#C+#D). Each dependency group we need to check, i.e. "is X | Y in B", meanwhile has linear cost: we need to compare the memory content of two pointer arrays containing the list of possible versions that solve the dependency group. This means that X | Y and Y | X are different dependencies of course, but that is to be expected; any dependency of the same order will have the same memory layout. So really the cost is roughly N^4. This isn't nice. You can apply various heuristics here to improve that, or you can even apply binary logic:
       1. Enqueue the common dependencies of A | B | C | D
       2. Move into the left half, enqueue the common dependencies of A | B
      3. Again divide and conquer and select A.
      This has a significant advantage in long lists of choices, and also in the common case, where the first solution should be the right one. Or again, if you enqueue the package and a version restriction instead, you already get the common dependencies enqueued for the chosen package at least.
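Extracting the shared work of an or-group could look like this (a hypothetical set-based sketch; the real implementation compares pointer arrays of version lists as described above, this just shows the idea):

```python
# Pulling up common dependencies of an or-group A | B | C (illustrative
# sketch, not APT code). The intersection can be enqueued once, before
# committing to any one candidate, so backtracking does not redo it.

def common_dependencies(deps_by_choice):
    """deps_by_choice: one set of dependency groups per candidate."""
    common = set.intersection(*deps_by_choice)
    remainders = [d - common for d in deps_by_choice]  # per-candidate extras
    return common, remainders

deps = [{"libc6", "zlib1g", "libfoo"},   # deps(A)
        {"libc6", "zlib1g"},             # deps(B)
        {"libc6", "zlib1g", "libbar"}]   # deps(C)

common, rest = common_dependencies(deps)
# common holds libc6 and zlib1g: enqueue these once, before choosing
# between A, B and C; only the remainders need undoing on backtrack.
```

The divide-and-conquer variant from the numbered list would apply this recursively: once on the whole group, then again on the left half, halving the redone work at each backtracking level.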

      14 May 2024

      Julian Andres Klode: The new APT 3.0 solver

       APT 2.9.3 introduces the first iteration of the new solver, codenamed solver3, now available with the --solver 3.0 option. The new solver works fundamentally differently from the old one.

       How does it work? Solver3 is a fully backtracking dependency solving algorithm that defers choices to as late as possible. It starts with an empty set of packages, then adds the manually installed packages, and then installs packages automatically as necessary to satisfy the dependencies. Deferring the choices is implemented in multiple ways: First, all install requests recursively mark dependencies with a single solution for install, and any packages that are being rejected due to conflicts or user requests will cause their reverse dependencies to be transitively marked as rejected, provided their or group cannot be solved by a different package. Second, any dependency with more than one choice is pushed to a priority queue that is ordered by the number of possible solutions, such that we resolve a|b before a|b|c - though not just by the number of solutions. One important point to note is that optional dependencies, that is, Recommends, always sort after mandatory dependencies. Do note on that: Recommended packages do not nest in backtracking - the dependencies of a Recommended package are themselves not optional, so they will have to be resolved before the next Recommended package is seen in the queue. Another important step in deferring choices is extracting the common dependencies of a package across its versions and then installing them before we even decide which of its versions we want to install - one of the dependencies might cycle back to a specific version after all. Decisions are recorded at a certain decision level; if we reach a conflict we backtrack to the previous decision level, mark the decision we made (install X) in the inverse (DO NOT INSTALL X), reset the state of all decisions made at the higher level, and restore any dependencies that are no longer resolved to the work queue.
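The deferral queue described here (mandatory before optional, fewer choices first) can be sketched with a simple composite sort key (a hypothetical model, not the actual solver3 code):

```python
# Sketch of solver3's deferral queue ordering (illustrative, not APT
# code): optional (Recommends) entries sort after mandatory ones, and
# within each class, dependencies with fewer choices come out first,
# so a|b is resolved before a|b|c.

import heapq

def make_queue(deps):
    """deps: list of (is_optional, [choices...]) tuples."""
    heap = [((is_opt, len(choices)), tuple(choices))
            for is_opt, choices in deps]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

order = make_queue([
    (False, ["a", "b", "c"]),    # Depends: a | b | c
    (True,  ["x"]),              # Recommends: x
    (False, ["a", "b"]),         # Depends: a | b
])
# order: a|b first, then a|b|c, and the Recommends last of all,
# even though it has only a single choice.
```

Single-solution dependencies never reach this queue in the model above; as the text explains, they are unit-propagated immediately when the install request is marked.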

       Comparison to SAT solver design If you have studied SAT solver design, you'll find that essentially this is a DPLL solver without pure literal elimination. A pure literal elimination phase would not work for a package manager: first, negative pure literals (packages that everything conflicts with) do not exist, and positive pure literals (packages nothing conflicts with) we do not want to mark for install - we want to install as little as possible (well, subject to policy). As part of the solving phase, we also construct an implication graph, albeit a partial one: the first package installing another package is marked as the reason (A -> B), and the same for conflicts (not A -> not B). Once we have added the ability to have multiple parents in the implication graph, it stands to reason that we can also implement the much more advanced method of conflict-driven clause learning, where we do not jump back to the previous decision level but exactly to the decision level that caused the conflict. This would massively speed up backtracking.
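A partial, single-parent implication graph as described might be sketched like this (hypothetical structure and function names for illustration, not APT code): only the first cause of each decision is recorded, giving a reason chain that `apt why`-style output could walk.

```python
# Partial implication graph with a single parent per decision
# (illustrative sketch, not APT code). Only the FIRST recorded cause
# of a decision is kept, i.e. the strongest dependency chain.

def add_reason(graph, cause, effect):
    graph.setdefault(effect, cause)      # keep only the first parent

def why(graph, decision):
    """Walk the reason chain from a decision back to its root cause."""
    chain = [decision]
    while chain[-1] in graph:
        chain.append(graph[chain[-1]])
    return list(reversed(chain))

g = {}
add_reason(g, ("install", "A"), ("install", "B"))   # A -> B
add_reason(g, ("install", "B"), ("install", "C"))   # B -> C
add_reason(g, ("install", "X"), ("install", "B"))   # ignored: B has a parent
chain = why(g, ("install", "C"))
# chain walks from the root request for A, through B, down to C
```

Allowing multiple parents per node (a list instead of a single cause) is the prerequisite for the conflict-driven clause learning mentioned above, since the learned clause is derived from all antecedents of a conflict, not just the first one.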

What changes can you expect in behavior? The most striking difference to the classic APT solver is that solver3 always keeps manually installed packages around; it never offers to remove them. We will relax that in a future iteration so that it can replace packages with new ones: if your package is no longer available in the repository (obsolete), but there is one that Conflicts+Replaces+Provides it, solver3 will be allowed to install that one and remove the other. Implementing that policy is rather trivial: we just need to queue the obsolete package's replacement as a dependency to solve, rather than mark the obsolete package for install. Another critical difference is the change in autoremove behavior: the new solver currently only knows the strongest dependency chain to each package, and hence will not keep around any packages that are only reachable via weaker chains. A common example is gcc-<version> packages accumulating on your system over the years. They all have Provides: c-compiler, and libtool's Depends: gcc | c-compiler is enough to keep them around.
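The strongest-chain autoremove behavior can be illustrated with a small reachability sketch (hypothetical Python; `depends` here records only the strong Depends edge the solver kept, so a weaker `gcc | c-compiler` alternative no longer protects old compilers):

```python
def autoremovable(manual, depends, installed):
    """Return the installed packages not reachable from the manually
    installed set via strong dependency chains.

    manual:    set of manually installed package names
    depends:   package -> list of packages it strongly depends on
               (only the chosen solution of each or-group)
    installed: set of all installed package names
    """
    keep = set(manual)
    stack = list(manual)
    while stack:
        pkg = stack.pop()
        for dep in depends.get(pkg, []):
            if dep in installed and dep not in keep:
                keep.add(dep)
                stack.append(dep)
    return installed - keep
```

In the gcc example from the text: if libtool's or-group was solved with gcc-12, only gcc-12 is on a strong chain, and older gcc-<version> packages become autoremovable.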

New features The new option --no-strict-pinning instructs the solver to consider all versions of a package and not just the candidate version. For example, you could use apt install foo=2.0 --no-strict-pinning to install version 2.0 of foo and upgrade - or downgrade - packages as needed to satisfy foo=2.0's dependencies. This mostly comes in handy in use cases involving Debian experimental or the Ubuntu proposed pockets, where you want to install a package from there but satisfy as much as possible from the normal release. Building the implication graph also allows us to implement an apt why command which, while not as nicely detailed as aptitude's, at least tells you the exact reason why a package is installed. It will only show the strongest dependency chain at first, of course, since that is what we record.

What is left to do? At the moment, error information is not stored across backtracking in any way, but we generally want to show you the first conflict we reach, as it is the most natural one - or all conflicts. Currently you get the last conflict, which may not be particularly useful. Likewise, errors are currently just rendered as implication graphs of the form [not] A -> [not] B -> ..., and we need to put in some work to present those nicely. The test suite is not passing yet; I haven't really started working on it. A challenge is that most packages in the test suite are manually installed, as they are mocked, and the solver now doesn't remove those. We plan to implement the replacement logic such that foo can be replaced by a foo2 that Conflicts/Replaces/Provides foo, without foo2 needing to be automatically installed. Improving the backtracking to non-chronological conflict-driven clause learning would vastly enhance backtracking performance - not that it seems to be an issue right now in my limited testing (mostly noble 64-bit-time_t upgrades). Much of the complexity you would normally have is absent, because the manually installed packages and the resulting unit propagation (single-solution Depends, and reverse dependencies of Conflicts) already constrain the changes we can actually make. Once all of this has landed, we need to start rolling it out and gathering feedback. On Ubuntu I'd like automated feedback on regressions (running solver3 in parallel, checking whether its result is worse, and submitting an error to the error tracker if so); on Debian this could just be a role email address to send solver dumps to. At the same time, we can roll this out incrementally: like phased updates in Ubuntu, we can make the new solver the default for 10%, 20%, 50% of users before going to the full 100%. This will allow us to capture regressions early and fix them.
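The phased-rollout idea mentioned above might look roughly like this (a speculative sketch, not Ubuntu's actual phasing implementation; the identifiers are invented):

```python
import hashlib

def in_phase(machine_id: str, percentage: int) -> bool:
    """Hash a stable machine identifier into a 0-99 bucket and enable
    the feature when the bucket falls below the current phase
    percentage. Buckets are stable, so raising the percentage only
    ever adds machines - a machine enabled at 10% stays enabled at
    20%, 50%, and 100%."""
    digest = hashlib.sha256(machine_id.encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % 100
    return bucket < percentage
```

This is the same scheme that makes phased rollouts resumable: regressions found at 10% can be fixed before the remaining 90% ever see the change.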

      16 November 2023

      Dimitri John Ledkov: Ubuntu 23.10 significantly reduces the installed kernel footprint


      Photo by Pixabay
Ubuntu systems typically have up to 3 kernels installed before they are auto-removed by apt on classic installs. Historically the installation was optimized for metered download size only. However, kernel size growth and usage no longer warrant such optimizations. During the 23.10 Mantic Minotaur cycle, I led a coordinated effort across multiple teams to implement lots of optimizations that together achieved unprecedented install footprint improvements.

      Given a typical install of 3 generic kernel ABIs in the default configuration on a regular-sized VM (2 CPU cores 8GB of RAM) the following metrics are achieved in Ubuntu 23.10 versus Ubuntu 22.04 LTS:

      • 2x less disk space used (1,417MB vs 2,940MB, including initrd)

      • 3x less peak RAM usage for the initrd boot (68MB vs 204MB)

      • 0.5x increase in download size (949MB vs 600MB)

      • 2.5x faster initrd generation (4.5s vs 11.3s)

      • approximately the same total time (103s vs 98s, hardware dependent)


      For minimal cloud images that do not install either linux-firmware or modules extra the numbers are:

      • 1.3x less disk space used (548MB vs 742MB)

      • 2.2x less peak RAM usage for initrd boot (27MB vs 62MB)

      • 0.4x increase in download size (207MB vs 146MB)


Hopefully, the compromise of a larger download size, relative to the disk space and initrd savings, is a win for the majority of platforms and use cases. For users on extremely expensive and metered connections, the best saving is likely to receive air-gapped updates or to skip updates.

This was achieved by pre-compressing kernel modules & firmware files with the maximum level of Zstd compression at package build time; making the actual .deb files uncompressed; assembling the initrd using split cpio archives - uncompressed for the pre-compressed files, whilst compressing only the userspace portions of the initrd; enabling in-kernel module decompression support with a matching kmod; fixing bugs in all of the above; and landing all of these things in time for the feature freeze - whilst leveraging the experience and some of the design choices and implementations we have already been shipping on Ubuntu Core. Some of these changes are backported to Jammy, but only enough to support smooth upgrades to Mantic and later. The complete gains can only be experienced on Mantic and later.

      The discovered bugs in kernel module loading code likely affect systems that use LoadPin LSM with kernel space module uncompression as used on ChromeOS systems. Hopefully, Kees Cook or other ChromeOS developers pick up the kernel fixes from the stable trees. Or you know, just use Ubuntu kernels as they do get fixes and features like these first.

      The team that designed and delivered these changes is large: Benjamin Drung, Andrea Righi, Juerg Haefliger, Julian Andres Klode, Steve Langasek, Michael Hudson-Doyle, Robert Kratky, Adrien Nader, Tim Gardner, Roxana Nicolescu - and myself Dimitri John Ledkov ensuring the most optimal solution is implemented, everything lands on time, and even implementing portions of the final solution.

      Hi, It's me, I am a Staff Engineer at Canonical and we are hiring https://canonical.com/careers.

      Lots of additional technical details and benchmarks on a huge range of diverse hardware and architectures, and bikeshedding all the things below:

      For questions and comments please post to Kernel section on Ubuntu Discourse.



      10 October 2023

      Julian Andres Klode: Divergence - A case for different upgrade approaches

APT currently knows about three types of upgrades. All of these upgrade types are necessary to deal with upgrades within a distribution release; yes, sometimes even removals may be needed, because bug fixes require adding a Conflicts somewhere. In Ubuntu we have a further type of upgrade, handled by a separate tool: release upgrades. ubuntu-release-upgrader changes your sources.list and applies various quirks to the upgrade. In this post I want to look not at the quirk aspects but discuss how dependency solving should differ between intra-release and inter-release upgrades. Previous solver projects (such as Mancoosi) operated under the assumption that minimizing the number of changes performed should ultimately be the main goal of a solver. This makes sense, as every change carries risk. However, it ignores a different risk, one which especially applies when upgrading from one distribution release to a newer one: increasing divergence from the norm. Consider a person who installs foo in Debian 12. foo depends on a | b, so a will be automatically installed to satisfy the dependency. A release later, a has some known issues and b is preferred; the dependency now reads b | a. A classic solver would keep a installed because it was installed before, leading upgraded installs to have foo, a installed whereas new systems have foo, b installed. As systems get upgraded over and over, they diverge further and further from new installs, to the point that it adds substantial support effort. My proposal for the new APT solver is that when we perform release upgrades, we forget which packages were previously automatically installed. We effectively perform a normalization: all systems with the same set of manually installed packages will end up with the same set of automatically installed packages.
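The proposed normalization could be sketched like so (illustrative Python under simplified assumptions - dependencies modelled as ordered or-groups, first-listed alternative preferred):

```python
def normalize(manual, depends):
    """Recompute the automatically installed set from scratch, starting
    from only the manually installed packages.

    manual:  set of manually installed package names
    depends: package -> list of or-groups; each or-group is an ordered
             list of alternatives, first-listed preferred
    """
    auto, stack = set(), list(manual)
    while stack:
        for group in depends.get(stack.pop(), []):
            choice = group[0]  # prefer the first alternative
            if choice not in auto and choice not in manual:
                auto.add(choice)
                stack.append(choice)
    return auto
```

In the foo example: with Debian 12's dependency a | b this yields {a}; after the release upgrade changes it to b | a, every system with foo manually installed converges on {b}.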
Consider the solver starting with an empty set and then installing the latest version of each previously manually installed package: it will now see that foo depends on b | a and install b (and a will be removed later on, as it's not part of the solution). Another case of divergence is Suggests handling. Consider that foo also Suggests s. You now install another package bar that depends on s, hence s gets installed. Upon removing bar, s is not removed automatically, because foo still suggests it (and you may have grown used to foo's integration of s). This is because apt considers Suggests to be important - suggested packages won't be automatically installed, but they will not be automatically removed either. In Ubuntu, we unset that policy on release upgrades to normalize the systems. The reasoning is simple: while you may have grown to use s as part of foo during the release, an upgrade to the next release is already big enough that removing s is going to have less of an impact - breakage of workflows is expected between release upgrades. I believe that apt release-upgrade will benefit from both of these design choices, and in the end it boils down to a simple mantra:

      21 September 2023

      Jonathan Carter: DebConf23

I very, very nearly didn't make it to DebConf this year. I had a bad cold/flu for a few days before I left, and after a negative covid-19 test just minutes before my flight, I decided to take the plunge and travel. This is just everything in chronological order, more or less; it's the only way I could write it.

DebCamp I planned to spend DebCamp working on various issues. Very few of them actually got done. I spent the first few days in bed further recovering; I took a covid-19 test when I arrived and another after I felt better, and both were negative, so I'm not sure what exactly was wrong with me. Between that and catching up with other Debian duties, I couldn't make any progress on the packaging work I wanted to do. I'll still post what I intended here, and I'll try to take a few days to focus on these some time next month: Calamares / Debian Live stuff:
      • #980209 installation fails at the install boot loader phase
      • #1021156 calamares-settings-debian: Confusing/generic program names
      • #1037299 Install Debian -> Untrusted application launcher
      • #1037123 Minimal HD space required too small for some live images
• #971003 Console auto-login doesn't work with sysvinit
At least Calamares has been trixiefied in testing, so there's that! Desktop stuff:
      • #1038660 please set a placeholder theme during development, different from any release
      • #1021816 breeze: Background image not shown any more
      • #956102 desktop-base: unwanted metadata within images
• #605915 please make it a non-native package
      • #681025 Put old themes in a new package named desktop-base-extra
      • #941642 desktop-base: split theme data files and desktop integrations in separate packages
The Egg theme that I want to develop for testing/unstable is based on Juliette Taka's Homeworld theme that was used for Bullseye. Egg, as in, something that hasn't quite hatched yet. Get it? (for #1038660) Debian Social:
      • Set up Lemmy instance
        • I started setting up a Lemmy instance before DebCamp, and meant to finish it.
      • Migrate PeerTube to new server
        • We got a new physical server for our PeerTube instance, we should have more space for growth and it would help us fix the streaming feature on our platform.
Loopy: I intended to get the loop for DebConf in good shape before I left, so that we could spend some time during DebCamp making some really nice content. Unfortunately this went very tumbly, but at least we ended up with a loopy that kind of worked and wasn't too horrible. There's always another DebConf to try again, right?
      So DebCamp as a usual DebCamp was pretty much a wash (fitting with all the rain we had?) for me, at least it gave me enough time to recover a bit for DebConf proper, and I had enough time left to catch up on some critical DPL duties and put together a few slides for the Bits from the DPL talk.

DebConf Bits From the DPL I had very, very little available time to prepare something for Bits from the DPL, but I managed to put some slides together (available on my wiki page). I mostly covered:
• A very quick introduction of myself (I've done this so many times, it feels redundant giving my history every time), and some introduction on what it is that the DPL does. I declared my intent not to run for DPL again, the reasoning behind it, and a few bits of information for people who may intend to stand for DPL next year.
• The sentiment out there for the Debian 12 release (which has been very positive), how we include firmware by default now, and that we're saying goodbye to two architectures: GNU/KFreeBSD and mipsel.
      • Debian Day and the 30th birthday party celebrations from local groups all over the world (and a reminder about the Local Groups BoF later in the week).
• I looked forward to Debian 13 (trixie!), how we're gaining riscv64 as a release architecture, as well as loongarch64, and that plans seem to be forming to fix 2k38 in Debian, hopefully largely by the time the trixie release comes by.
• I made some comments about Enterprise Linux, as people refer to the RHEL eco-system these days, how bizarre some aspects of it are (like the kernel maintenance), and that some big vendors are choosing to support systems outside of that eco-system now (like cPanel now supporting Ubuntu too). I closed with the quote below from Ian Murdock, and assured the audience that if they want to go out and make money with Debian, they are more than welcome to.
Job Fair I walked through the hallway where the Job Fair was hosted, and enjoyed all the buzz. It's not always easy to get this right, but this year it was very active and energetic; I hope lots of people made some connections! Cheese & Wine Due to state laws and alcohol licenses, we couldn't consume alcohol from outside the state of Kerala in the common areas of the hotel (only in private rooms), so this wasn't quite as big or as fun as our usual C&W parties, since we couldn't share as much from our individual countries and cultures. But we always knew this was going to be the case for this DebConf, and it still ended up being alright. Day Trip I opted for the forest / waterfalls day trip. It was really, really long, with lots of time on the bus. I think our trip's organiser underestimated how long it would take between the points on the route (all in all it wasn't that far, but on a bus on a winding mountain road, it takes long). We left at 8:00 and only found our way back to the hotel around 23:30. Even though we arrived tired and hungry, we saw some beautiful scenery and animals, and also met indigenous river people who talked about their struggles against being driven out of their place of living multiple times as the government invests in new developments like dams and hydro power. Photos are available in the DebConf23 public git repository. Losing a beloved Debian Developer during DebConf To our collective devastation, not everyone made it back from their day trips. Abraham Raji was out on the kayak day trip, and while swimming, got caught by a whirlpool from a drainage system. Even though all of us were properly exhausted and shocked in disbelief at this point, we had to stay up and make some tough decisions. Some initially felt that we had to cancel the rest of DebConf.
We also had to figure out how to announce what happened ASAP, both to the larger project and at DebConf in an official manner, while ensuring that due diligence took place and that the family was informed by the police before anything was made public. We ended up cancelling all the talks for the following day, with an address from the DPL in the morning to explain what had happened. Of all the things I've ever had to do as DPL, this was by far the hardest. The day after that, talks were also cancelled for the morning so that we could attend his funeral. Dozens of DebConf attendees headed out by bus to pay their final respects, many wearing the t-shirts that Abraham had designed for DebConf. A book of condolences was set up so that everyone who wished to could write a message on how they remembered him. The book will be kept by his family.
Today marks a week since his funeral, and I still feel very raw about it. And even though there was uncertainty whether DebConf should even continue after his death, in hindsight I'm glad that everyone pushed forward. While we were all heartbroken, it was also heartwarming to see people care for each other through all of this. If anything, I think I needed more time at DebConf just to be in that warm aura of emotional support for a bit longer. There are many people I wanted to talk to whom I barely even had a chance to see. Abraham, or Abru as he was called by some people (which I like, because bru in Afrikaans is like bro in English; not sure if that's what it implied locally too), enjoyed artistic pursuits, but he was also passionate about knowledge transfer. He ran classes at DebConf both last year and this year (and I think at other local events too) where he taught people packaging via a quick course that he put together. His enthusiasm for Debian was contagious; a few of the people he was mentoring came up to me and told me that they were going to see it through and become a DD in his honor. I can't even remember how I reacted to that; my brain was already so worn out, and stitching that together with the tragedy of what happened at DebConf was just too much for me. I first met him in person last year in Kosovo; I already knew who he was, so I think we had interacted during the online events the year before. He was just one of those people who showed so much promise, and I was curious to see what he'd achieve in the future. Unfortunately, he was taken away from us too soon. Poetry Evening Later in the week we had the poetry evening. This was the first time I had the courage to recite something. I read Ithaka by C.P. Cavafy (translated by Edmund Keely).
The first time I heard about this poem was in an interview with Julian Assange's wife, where she mentioned that he really loves this poem. It caught my attention because I really like the Weezer song Return to Ithaka and always wondered what it was about, so needless to say, that was another rabbit hole at some point. Group Photo Our DebConf photographer organised another group photo for this event; links to high-res versions are available on Aigars' website.
BoFs I didn't attend nearly as many talks this DebConf as I would've liked (fortunately I can catch up on video, which should be released soon), but I did make it to a few BoFs. In the Local Groups BoF, representatives from various local teams were present who introduced themselves and explained what they were doing. From memory (sorry if I left someone out), we had people from Belgium, Brazil, Taiwan and South Africa. We talked about the types of events a local group could run (BSPs, Mini DCs, sprints, Debian Day, etc.), how to help local groups get started, booth kits for conferences, and setting up some form of calendar that lists important Debian events in a way that makes it easier for people to plan and co-ordinate. There's a mailing list for co-ordination of local groups, and the IRC channel is -localgroups on OFTC.
If you got one of these Cheese & Wine bags from DebConf, that's from the South African local group!
In the Debian.net BoF, we discussed the Debian.net hosting service, where Debian pays for VMs hosted for projects by individual DDs on Debian.net. The idea is that we start some form of census that monitors the services: whether they're still in use, whether the systems are up to date, whether someone still cares for them, etc. We had some discussion about where the lines of responsibility are drawn, and we can probably make things a little clearer in the documentation. We also want to offer more in terms of backups and monitoring (currently DDs do get 500GB from rsync.net that could be used for backups of their services, though). The intention is also to deploy some form of configuration management for some essentials across the hosts. We should also look at getting some sponsored hosting for this. In the Debian Social BoF, we discussed some services that need work / expansion. In particular, Matrix keeps growing at an increased rate as more users use it and more channels are bridged, so it will likely move to its own host with big disks soon. We might replace Pleroma with a fork called Akkoma; this will need some more homework and checking whether it's even feasible. Some services haven't really been used (like Writefreely and Plume), and it might be time to retire them. We might just have to help one or two users migrate some of their posts away if we do retire them. Mjolnir seems to do a fine job at spam blocking; we haven't had any notable incidents yet. WordPress now has improved fediverse support; it's unclear whether it works on a multi-site instance yet, so I'll test it at some point soon and report back. For upcoming services, we are implementing Lemmy and probably also Mobilizon. A request was made that we also look into Loomio. More Information Overload There's so much that happens at DebConf, it's tough to take it all in, and also to find time to write about all of it, but I'll mention a few more things that are certainly worthy of note.
During DebConf, we had some people from the Kite Linux team over. KITE supplies the ICT needs for the primary and secondary schools in the province of Kerala, where they all use Linux, and they decided to switch all of these to Debian. There was an ad-hoc BoF where locals were listening and fielding questions that the Kite Linux team had. It was great seeing all the energy and enthusiasm behind this effort; I hope someone will properly blog about it! I learned about the VGLUG Foundation, who are doing a tremendous job at promoting GNU/Linux in the country. They are also training up 50 people a year to be able to provide tech support for Debian. I came across the booth for Mostly Harmless, who liberate old hardware by installing free firmware on it. It was nice seeing all the devices out there that could be liberated, and how this can breathe new life into old hardware.
      Some hopefully harmless soldering.
Overall, the community and their activities in India are very impressive, and I wish I had more time to get to know everyone better. Food Oh yes, one more thing. The food was great. I tasted more different kinds of curry than I ever did in my whole life up to this point. The lunch on banana leaves was interesting, as was learning how to eat this food properly by hand (thanks to the locals who insisted on teaching me!) - it was a fruitful experience? This might catch on at home too: fewer dishes to take care of! Special thanks to the DebConf23 Team I think this may have been one of the toughest DebConfs to organise yet, and I don't think many people outside of the DebConf team know about all the challenges and adversity this team has faced in organising it. Even just getting to the previous DebConf in Kosovo was a long, tedious and somewhat risky process. Through it all, they were absolute pros. Not once did I see them get angry or yell at each other; whenever a problem came up, they just dealt with it. They did a really stellar job, and I made a point of telling them on the last day that everyone appreciated all the work they did. Back to my nest I brought Dax a ball back from India; he seems to have forgiven me for not taking him along.
I'll probably take a few days soon to focus a bit on my bugs and catch up on my original DebCamp goals. If you made it this far, thanks for reading! And thanks to everyone for being such fantastic people.

      1 February 2023

      Julian Andres Klode: Ubuntu 2022v1 secure boot key rotation and friends

      This is the story of the currently progressing changes to secure boot on Ubuntu and the history of how we got to where we are.

      taking a step back: how does secure boot on Ubuntu work? Booting on Ubuntu involves three components after the firmware:
      1. shim
      2. grub
      3. linux
Each of these is a PE binary signed with a key. The shim is signed by Microsoft's 3rd party key and embeds a self-signed Canonical CA certificate, and optionally a vendor dbx (a list of revoked certificates or binaries). grub and linux (and fwupd) are then signed by a certificate issued by that CA. In Ubuntu's case, the CA certificate is sharded: multiple people each hold a part of the key, and they need to meet to be able to combine it and sign things, such as new code signing certificates.

BootHole When BootHole happened in 2020, travel was suspended, and we hence could not rotate to a new signing certificate. So when it came to updating our shim for the CVEs, we had to revoke all previously signed kernels, grubs, shims and fwupds by their hashes. This generated a very large vendor dbx, which caused lots of issues, as shim exported it to a UEFI variable and not everyone had enough space for such large variables. Sigh. We decided we wanted to rotate our signing key next time. This was also when upstream added SBAT metadata to shim and grub. This gives a simple versioning scheme for security updates and easy revocation using a simple EFI variable that shim writes to and reads from.
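A rough model of SBAT-style revocation (illustrative only; real SBAT entries are CSV records embedded in the binaries, and shim's actual checks are more involved):

```python
def sbat_allows(binary_entries, revocation_levels):
    """Sketch of generation-based revocation: a binary advertises
    (component, generation) pairs; the revocation variable stores the
    minimum allowed generation per component. The binary is allowed
    only if every component it carries meets its minimum - revoking
    a whole class of old binaries means bumping one number, not
    listing hashes.
    """
    for component, generation in binary_entries:
        minimum = revocation_levels.get(component)
        if minimum is not None and generation < minimum:
            return False  # revoked: generation too old for this component
    return True
```

Compare this to the BootHole-era approach: instead of a dbx listing every revoked hash, one small EFI variable of per-component minimum generations suffices.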

Spring 2022 CVEs We still were not ready for travel in 2021, but during BootHole we had developed the SBAT mechanism, so one could revoke a grub or shim by setting a single EFI variable. We actually missed rotating the shim this cycle, as a new vulnerability was reported immediately after it, and we decided to hold the update back.

2022 key rotation and the fall CVEs This caused some problems when the 2nd CVE round came: we did not have a shim with the latest SBAT level, and neither did a lot of others, so we ended up deciding upstream not to bump the shim SBAT requirements just yet. Sigh. Anyway, in October we were meeting again for the first time at a Canonical sprint, and the shardholders got together and created three new signing keys: 2022v1, 2022v2, and 2022v3. It took us until January before they were installed into the signing service and PPAs were set up to sign with them. We also submitted a shim 15.7 with the old keys revoked, which came back at around the same time. Now we were in a hurry: the 22.04.2 point release was scheduled for around the middle of February, and we had nothing signed with the new keys yet, but our new shim, which we needed for the point release (so the point release media remains bootable after the next round of CVEs), required the new keys. So how do we ensure that users have kernels, grubs, and fwupd signed with the new key before we install the new shim?

upgrade ordering grub and fwupd are simple cases: for grub, we depend on the new version. We decided to backport grub 2.06 to all releases (which moved focal and bionic up from 2.04) and kept the versioning of the -signed packages the same across all releases, so we were able to simply bump the Depends for grub to specify the new minimum version. For fwupd-efi, we added Breaks. (Actually, we also had a backport of the CVEs for 2.04-based grub, and we published that for 20.04 signed with the old keys before backporting 2.06 to it.) Kernels are a different story: there are about 60 kernels out there. My initial idea was that we could just add Breaks for all of them: to our meta package linux-image-generic, which depends on linux-image-$(uname -r)-generic, we'd simply add Breaks: linux-image-generic (<< 5.19.0-31) and then adjust those Breaks for each series. This would have been super annoying, but ultimately I figured it would be the safest option. This however caused concern, because it could be that apt decides to remove the kernel metapackage. I explored checking the kernels at runtime and aborting in preinst if we don't have a trusted kernel. This ensures that if you try to upgrade shim without having a kernel, it would fail to install. But this ultimately has a couple of issues:
      1. It aborts the entire transaction at that point, so users will be unable to run apt upgrade until they have a recent kernel.
      2. We cannot even guarantee that a kernel would be unpacked first. So even if you got a new kernel, apt/dpkg might attempt to unpack it first and then the preinst would fail because no kernel is present yet.
Ultimately we believed the danger to be too large, given that no kernels had yet been released to users. If we had had kernels pushed out for 1-2 months already, this would have been a viable choice. So in the end, I modified the shim packaging to install both the latest shim and the previous one, with an update-alternatives alternative to select between the two: in its post-installation maintainer script, shim-signed checks whether all kernels with a version greater than or equal to the running one are free of revoked signatures; if so, it sets up the latest alternative with priority 100 and the previous one with priority 50. If one or more of those kernels was signed with a revoked key, it swaps the priorities around, so that the previous version is preferred. Now, this is fairly static, and we do want you to switch to the latest shim eventually, so I also added hooks to the kernel install to trigger the shim-signed postinst script when a new kernel is installed. It then updates the alternatives based on the current set of kernels, and if they now point to the latest shim, reinstalls shim and grub to the ESP. Ultimately this means that once you install your second non-revoked kernel, or you install a non-revoked kernel and then reconfigure shim or the kernel, you will get the latest shim. When you install your first non-revoked kernel, your currently booted kernel is still revoked, so shim is not upgraded immediately. This has a benefit: you will most likely end up with two kernels you can boot without disabling secure boot.
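The priority-swapping logic described here can be condensed into a sketch (hypothetical Python mirroring the description; the actual shim-signed maintainer script is shell, and these names are invented):

```python
def shim_priorities(kernels, running, revoked_keys):
    """Decide update-alternatives priorities for the two shims.

    kernels:      {version_tuple: signing_key} for installed kernels
    running:      version tuple of the currently booted kernel
    revoked_keys: set of revoked signing-key names

    If every kernel at or above the running version is signed with a
    non-revoked key, prefer the latest shim; otherwise prefer the
    previous one (higher priority wins in update-alternatives).
    """
    relevant = [key for version, key in kernels.items() if version >= running]
    all_ok = all(key not in revoked_keys for key in relevant)
    latest, previous = (100, 50) if all_ok else (50, 100)
    return {"shim-latest": latest, "shim-previous": previous}
```

So while the running kernel's key is still revoked, the previous shim stays preferred, and installing (or reconfiguring after) a second non-revoked kernel flips the preference to the latest shim.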

regressions Of course, the first version I uploaded still had some hardcoded shimx64 references in the scripts, and so failed to install on arm64, where shimaa64 is used. And if that were not enough, I also forgot to include support for gzip-compressed kernels there. Sigh, I need better testing infrastructure to be able to easily run arm64 tests as well (I only tested the actual booting there, not the scripts). shim-signed migrated to the release pocket in lunar fairly quickly, but this caused images to stop working, because the new shim was installed into images while no kernel was available yet, so we had to demote it to proposed and block migration. Despite all the work done for end users, we need to be careful rolling this out for image building.

      another grub update for OOM issues We had two grubs to release: first there was the security update for the recent set of CVEs, and then there was also an OOM issue with large initrds which was blocking critical OEM work. We fixed the OOM issue by cherry-picking all 2.12 memory management patches, as well as the Red Hat patches to the loader that we take from there. This ended up being a fairly large patch set, and I was hesitant to tie the security update to it, so I pushed the security update everywhere first, and then pushed the OOM fixes this week. With the OOM patches, you should be able to boot initrds of between 400M and 1GB; it also depends on the memory layout of your machine, your screen resolution, and background images. The OEM team successfully tested 400MB on real hardware, and I tested up to (I think it was) 1.2GB in qemu; I ran out of FAT space then and stopped going higher :D

      other features in this round
      • Intel TDX support in grub and shim
      • Kernels are allocated as CODE now, not DATA, as per the upstream mm changes; this might fix boot on the X13s

      am I using this yet? The new signing keys are used in:
      • shim-signed 1.54 on 22.10+, 1.51.3 on 22.04, 1.40.9 on 20.04, 1.37~18.04.13 on 18.04
      • grub2-signed 1.187.2~ or newer (binary packages grub-efi-amd64-signed or grub-efi-arm64-signed), 1.192 on 23.04.
      • fwupd-signed 1.51~ or newer
      • various linux updates. Check apt changelog linux-image-unsigned-$(uname -r) and look for Revoke & rotate to new signing key (LP: #2002812) to see whether it is signed with the new key.
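That changelog check can be wrapped in a tiny helper (the function name is mine, purely illustrative): it reads a changelog on stdin and succeeds if the key-rotation bug is mentioned, so you could pipe apt changelog "linux-image-unsigned-$(uname -r)" into it.

```shell
# Illustrative helper (name is mine, not from any package): reads an
# apt changelog on stdin, exits 0 if the key-rotation bug is mentioned.
mentions_rotation() {
    grep -qF 'LP: #2002812'
}

# Usage sketch (requires apt and the package installed):
#   apt changelog "linux-image-unsigned-$(uname -r)" | mentions_rotation \
#       && echo "signed with the new key"
```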
      If you were able to install shim-signed, your grub and fwupd-efi will have the correct version, as that is ensured by packaging. However, your shim may still point to the old one. To check which shim will be used by grub-install, inspect the status of the shimx64.efi.signed or (on arm64) shimaa64.efi.signed alternative. The best link needs to point to the file ending in .latest:
      $ update-alternatives --display shimx64.efi.signed
      shimx64.efi.signed - auto mode
        link best version is /usr/lib/shim/shimx64.efi.signed.latest
        link currently points to /usr/lib/shim/shimx64.efi.signed.latest
        link shimx64.efi.signed is /usr/lib/shim/shimx64.efi.signed
      /usr/lib/shim/shimx64.efi.signed.latest - priority 100
      /usr/lib/shim/shimx64.efi.signed.previous - priority 50
      
      If it does not, but you have installed a new kernel compatible with the new shim, you can switch to the new shim immediately after rebooting into that kernel by running dpkg-reconfigure shim-signed. You'll see in the output whether the shim was updated, or you can check the output of update-alternatives as above after the reconfiguration has finished. For the out-of-memory issues in grub, you need grub2-signed 1.187.3~ (same binaries as above).

      how do I test this (while it's in proposed)?
      1. upgrade your kernel to proposed and reboot into that
      2. upgrade your grub-efi-amd64-signed, shim-signed, fwupd-signed to proposed.
      If you already upgraded your shim before your kernel, don't worry:
      1. upgrade your kernel and reboot
      2. run dpkg-reconfigure shim-signed
      And you'll be all good to go.
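As a command-line sketch, the two paths above might look like this (the release name lunar is assumed here; adjust to your series, and note the -proposed pocket must be enabled first):

```shell
# Illustrative command fragment (release name "lunar" assumed;
# requires the -proposed pocket to be enabled). Kernel first:
sudo apt install -t lunar-proposed linux-generic
sudo reboot
# ...after rebooting into the new kernel:
sudo apt install -t lunar-proposed grub-efi-amd64-signed shim-signed fwupd-signed
# If shim was upgraded before the kernel, re-run the shim selection:
sudo dpkg-reconfigure shim-signed
```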

      deep dive: uploading signed boot assets to Ubuntu For each signed boot asset, we build one version in the latest stable release and one in the development release. We then binary-copy the built binaries from the latest stable release to the older stable releases. This process ensures two things: we know the next stable release is able to build the assets, and we also minimize the number of signed assets. OK, I lied. For shim, we actually do not build in the development release but copy the binaries upward from the latest stable, as each shim needs to go through external signing. The entire workflow looks something like this:
      1. Upload the unsigned package to one of the following build PPAs:
      2. Upload the signed package to the same PPA
      3. For stable release uploads:
        • Copy the unsigned package back across all stable releases in the PPA
        • Upload the signed package for stable releases to the same PPA with ~<release>.1 appended to the version
      4. Submit a request to canonical-signing-jobs to sign the uploads. The signing-job helper copies the binary -unsigned packages to the primary-2022v1 PPA, where they are signed, creating a signing tarball. It then copies the source package for the -signed package to the same PPA, which downloads the signing tarball during build and places the signed assets into the -signed deb. The resulting binaries are placed into the proposed PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed
      5. Review the binaries themselves
      6. Unembargo and binary copy the binaries from the proposed PPA to the proposed-public PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed-public. This step is not strictly necessary, but it enables tools like sru-review to work, as they cannot access the packages from the normal private proposed PPA.
      7. Binary copy from proposed-public to the proposed queue(s) in the primary archive
      Lots of steps!
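The version mangling in step 3 (append ~&lt;release&gt;.1 to signed stable-release uploads) can be sketched as a small helper; the function name is mine and the version numbers are purely illustrative:

```shell
# Sketch of the version mangling from step 3 above (helper name is
# mine): signed packages for stable releases get ~<release>.1
# appended to the version.
stable_signed_version() {
    version="$1"
    release="$2"
    printf '%s~%s.1\n' "$version" "$release"
}

stable_signed_version 1.51.3 22.04   # → 1.51.3~22.04.1
```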

      WIP As of writing, only the grub updates have been released; the other updates are still being verified in proposed. An update for fwupd in bionic will be issued at a later point, removing the EFI bits from the fwupd 1.2 packaging and using the separate fwupd-efi project instead, as later release series do.

      19 January 2023

      Antoine Beaupr : Mastodon comments in ikiwiki

      Today I noticed bounces in my mailbox. They were from ikiwiki trying to send registration confirmation email to users who probably never asked for it. I'm getting truly fed up with spam in my wiki. At this point, all comments are manually approved and I still get trouble: now it's scammers spamming the registration form with dummy accounts, which bounce back to me when I make new posts, or just generate backscatter spam for the confirmation email. It's really bad. I have hundreds of users registered on my blog, and I don't know which are spammy and which aren't. So. I'm considering ditching ikiwiki comments altogether. I am testing Mastodon as a commenting platform. Others (e.g. JAK) have implemented this as a server, but a simpler approach is to load them dynamically from Mastodon, which is what Carl Schwan has done. They are using Hugo, however, so they can easily embed page metadata in the template to load the right server with the right comment ID. I wasn't sure how to do this in ikiwiki: it's typically hard to access page-specific metadata in templates. Even the page name is not there, for example. I have tried using templates, and that (obviously?) fails because the <script> stuff gets sanitized away. It seems I would need to split the JavaScript out of the template into a base template and then make the page template refer to a function in there. It's kind of horrible and messy. I wish there was a way to just access page metadata from the page template itself... I found out the meta plugin passes along its metadata, but that's not (easily) extensible, so I'd need to patch that module, and my history of getting patches merged is not great so far. So: another plugin. I have something that kind of works: a combination of a page.tmpl patch and a plugin. The plugin adds a mastodon directive that feeds page.tmpl with the right stuff. On clicking a button, it injects comments from the Mastodon API, with a JavaScript callback.
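At heart, the comment-loading callback is just a query against Mastodon's public context endpoint for the toot. A rough shell equivalent (the instance name, status id, and sample JSON below are all made up for illustration):

```shell
# Rough shell equivalent of what the comment-loading JavaScript does
# (instance and status id are made up). The context endpoint returns
# the thread around a status; replies live in the "descendants" array:
#   curl -s https://example.social/api/v1/statuses/12345/context
# Extracting the reply authors from such a response, without jq:
json='{"ancestors":[],"descendants":[{"account":{"acct":"alice"},"content":"<p>test reply</p>"}]}'
printf '%s\n' "$json" | sed -n 's/.*"acct":"\([^"]*\)".*/\1/p'   # → alice
```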
It's not pretty (it's not themed at all!), but it works. If you want to do this at home, you need this page.tmpl (or at least this patch and that one) and the mastodon.pm plugin from my mastodon-plugin branch. I'm not sure this is a good idea. The first test I did was a "test comment" which led to half a dozen "test reply". I then realized I couldn't redact individual posts from there. I don't even know if, when I mute a user, it actually gets hidden from everyone else too... So I'll test this for a while, I guess. I have also turned off all CGI on this site. It will keep users from registering while I cleanup this mess and think about next steps. I have other options as well if push comes to shove, but I'm unlikely to go back to ikiwiki comments. Mastodon comments are nice because they don't require me to run any extra software: either I have my own federated service I reuse, or I use someone else's, but I don't need to run something extra. And, of course, comments are published in a standard way that's interoperable with everything... On the other hand, now I won't have comments enabled until the blog is posted on Mastodon... Right now this happens only when feed2exec runs and the HTTP cache expires, which can take up to a day. I should probably do this some other way, like flush the cache when a new post arrives, or run post-commit hooks, but for now, this will have to do. Update: I figured out a way to make this work in a timely manner:
      1. there's a post-merge hook in my ikiwiki git repository which calls feed2exec, in /home/w-anarcat/source/.git/hooks/ (took me a while to find it! I tried post-update and post-receive first, but ikiwiki actually pulls from the bare repository into the source directory, so only post-merge fires, even though it's not a merge)
      2. feed2exec then finds new blog posts (if any!) and fires up the new ikiwikitoot plugin which then...
      3. posts the toot using the toot command (it just works, why reinvent the wheel), keeping the toot URL
      4. finds the Markdown source file associated with the post, and adds the magic mastodon directive
      5. commits and pushes the result
      This will make the interaction with Mastodon much smoother: as soon as a blog post is out of "draft" (i.e. when it hits the RSS feeds), this will immediately trigger and post the blog entry to Mastodon, enabling comments. It's kind of a tangled mess of stuff, but it works! I have briefly considered not using feed2exec for this, but it turns out it does an important job of parsing the result of ikiwiki's rendering. Otherwise I would have to guess which post is really a blog post, is this just an update or is it new, is it a draft, and so on... all sorts of questions where the business logic already resides in ikiwiki, and that I would need to reimplement myself. Plus it goes alongside moving more stuff (like my feed reader) to dedicated UNIX accounts (in this case, the blog sandbox) for security reasons. Whee!
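The hook that kicks off the chain above could be as small as this (the hook path is from the post; the exact feed2exec invocation is my assumption):

```shell
#!/bin/sh
# Minimal sketch of /home/w-anarcat/source/.git/hooks/post-merge
# (path from the post; exact feed2exec arguments are an assumption).
# Fired when ikiwiki pulls new commits into the source tree; feed2exec
# then parses the rendered feed, and the ikiwikitoot plugin posts the
# toot and writes the mastodon directive back into the source file.
exec feed2exec fetch
```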
