
8 October 2025

Colin Watson: Free software activity in September 2025

About 90% of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay or GitHub Sponsors. Some months I feel like I'm pedalling furiously just to keep everything in a roughly working state. This was one of those months.

Python team

I upgraded these packages to new upstream versions: I had to spend a fair bit of time this month chasing down build/test regressions in various packages due to some other upgrades, particularly to pydantic, python-pytest-asyncio, and rust-pyo3: After some upstream discussion I requested removal of pydantic-compat, since it was more trouble than it was worth to keep it working with the latest pydantic version. I filed dh-python: pybuild-plugin-pyproject doesn't know about headers and added it to Python/PybuildPluginPyproject, and converted some packages to pybuild-plugin-pyproject: I updated dh-python to suppress generated dependencies that would be satisfied by python3 >= 3.11. pkg_resources is deprecated. In most cases replacing it is a relatively simple matter of porting to importlib.resources, but packages that used its old namespace package support need more complicated work to port them to implicit namespace packages. We had quite a few bugs about this on zope.* packages, but fortunately upstream did the hard part of this recently. I went round and cleaned up most of the remaining loose ends, with some help from Alexandre Detiste. Some of these aren't completely done yet as they're awaiting new upstream releases: This work also caused a couple of build regressions, which I fixed: I fixed jupyter-client so that its autopkgtests would work in Debusine. I fixed waitress to build with the nocheck profile. I fixed several other build/test failures: I fixed some other bugs:

Code reviews

Other bits and pieces

I fixed several CMake 4 build failures: I got CI for debbugs passing (!22, !23). I fixed a build failure with GCC 15 in trn4.
I filed a release-notes bug about the tzdata reorganization in the trixie cycle. I filed and fixed a git-dpm regression with bash 5.3. I upgraded libfilter-perl to a new upstream version. I optimized some code in ubuntu-dev-tools that made O(n) HTTP requests when it could instead make O(1).

1 October 2025

Birger Schacht: Status update, September 2025

Regarding Debian packaging, this was a rather quiet month. I uploaded version 1.24.0-1 of foot and version 2.8.0-1 of git-quick-stats. I took the opportunity to start migrating my packages to the new version 5 watch file format, which I think is much more readable than the previous format. I also uploaded version 0.1.1-1 of libscfg to NEW. libscfg is a C implementation of the scfg configuration file format and a dependency of recent versions of kanshi. kanshi is a tool similar to autorandr which allows you to define output profiles; kanshi switches to the correct output profile on hotplug events. Once libscfg is in unstable I can finally update kanshi to the latest version. A lot of time this month went into finalizing a redesign of the output rendering of carl. carl is a small Rust program I wrote that provides a calendar view similar to cal, but it comes with colors and ical file integration. That means that you can not only display a simple calendar, but also colorize/highlight dates based on various attributes or based on events on that day. In the initial versions of carl the output rendering was simply hardcoded into the app.
Screenshot of carl
This was a bit cumbersome to maintain and not configurable for users. I am using templating languages on a daily basis, so I decided I would reimplement the output generation of carl to use templates. I chose the minijinja Rust library, which is based on the syntax and behavior of the Jinja2 template engine for Python. There are others out there, like tera, but minijinja seems to be more actively developed at the moment. I worked on this implementation on and off for the last year and finally had the time to finish it up and write some additional tests for the outputs. It is easier to maintain templates than Rust code that uses write!() to format the output. I also implemented a configuration option for users to override the templates. In addition to the output refactoring I also fixed a couple of bugs and finally released v0.4.0 of carl. In my dayjob I released version 0.53 of apis-core-rdf, which contains the place lookup field I implemented in August. A couple of weeks later we released version 0.54, which comes with a middleware that passes on messages from the Django messages framework via a response header to HTMX to trigger message popups. This implementation is based on the blog post Using the Django messages framework with HTMX. Version 0.55 was the last release in September. It contained preparations for refactoring the import logic as well as a couple of UX improvements.

12 September 2025

Freexian Collaborators: Using JavaScript in Debusine without depending on JavaScript (by Enrico Zini)

Debusine is a tool designed for Debian developers and Operating System developers in general. This post describes our approach to the use of JavaScript, and some practical designs we came up with to integrate it with Django with minimal effort.

Debusine web UI and JavaScript

Debusine currently has 3 user interfaces: a client on the command line, a RESTful API, and a Django-based Web UI. Debusine's web UI is a tool to interact with the system, and we want to spend most of our efforts in creating a system that works and works well, rather than chasing the latest and hippest of the frontend frameworks for the web. Also, Debian as a community has an aversion to having parts of the JavaScript ecosystem in the critical path of its core infrastructure, and in our professional experience this aversion is not at all unreasonable. This leads to some interesting requirements for the web UI, that (rather surprisingly, one would think) one doesn't usually find advertised in many projects:
  • Straightforward to create and maintain.
  • Well integrated with Django.
  • Easy to package in Debian, with as little vendoring as possible, which helps mitigate supply chain attacks.
  • Usable without JavaScript whenever possible, for progressive enhancement rather than core functionality.
The idea is to avoid growing the technical complexity and requirements of the web UI, both server-side and client-side, for functionality that is not needed for this kind of project, with tools that do not fit well in our ecosystem. Also, to limit the complexity of the JavaScript portions that we do develop, we choose to limit our JavaScript browser support to the main browser versions packaged in Debian Stable, plus recent oldstable. This makes JavaScript easier to write and maintain, and it also makes it less needed, as modern HTML plus modern CSS interfaces can go a long way with fewer scripting interventions. We recently encoded our JavaScript practices and tradeoffs in a JavaScript Practices chapter of Debusine's documentation.

How we use JavaScript

From the start we built the UI using Bootstrap, which helps in having responsive layouts that also work on mobile devices. When we started having large select fields, we introduced Select2 to make interaction more efficient; it degrades gracefully to working HTML. Both Bootstrap and Select2 are packaged in Debian. Form validation is done server-side by Django, and we do not reimplement it client-side in JavaScript, as we prefer the extra round trip through a form submission to the risk of mismatches between the two validations. In those cases where a UI task is not at all possible without JavaScript, we can make its support mandatory as long as the same goal can otherwise be achieved using the debusine client command.

Django messages as Bootstrap toasts

Django has a Messages framework that allows different parts of a view to push messages to the user; it is useful to signal things like a successful form submission, or warnings on unexpected conditions. Django messages integrate well with Bootstrap toasts, which use a recognisable notification language, are nicely dismissible, and do not invade the rest of the page layout. Since toasts require JavaScript to work, we added graceful degradation to Bootstrap alerts. Doing so was surprisingly simple: we handle the toasts as usual, and also render the plain alerts inside a <noscript> tag. This is precisely the intended usage of the <noscript> tag, and it works perfectly: toasts are displayed by JavaScript when it's available, or rendered as alerts when not. The resulting Django template is something like this:
<div aria-live="polite" aria-atomic="true" class="position-relative">
    <div class="toast-container position-absolute top-0 end-0 p-3">
    {% for message in messages %}
        <div class="toast" role="alert" aria-live="assertive" aria-atomic="true">
            <div class="toast-header">
                <strong class="me-auto">{{ message.level_tag|capfirst }}</strong>
                <button type="button"
                        class="btn-close"
                        data-bs-dismiss="toast"
                        aria-label="Close"></button>
            </div>
            <div class="toast-body">{{ message }}</div>
        </div>
    {% endfor %}
    </div>
</div>
{% if messages %}
<noscript>
    {% for message in messages %}
        <div class="alert alert-primary" role="alert">
            {{ message }}
        </div>
    {% endfor %}
</noscript>
{% endif %}
We have a webpage to test the result.

JavaScript incremental improvement of formsets

Debusine is built around workspaces, which are, among other things, containers for resources. Workspaces can inherit from other workspaces, which act as fallback lookups for resources. This makes it possible, for example, to maintain an experimental package to be built on Debian Unstable, without the need to copy the whole Debian Unstable workspace. A workspace can inherit from multiple others, which are looked up in order. When adding UI to configure workspace inheritance, we faced the issue that plain HTML forms do not have a convenient way to perform data entry of an ordered list. We initially built the data entry around Django formsets, which support ordering using an extra integer input field to enter the ordering position. This works, and it's good as a fallback, but we wanted something more appropriate, like dragging and dropping items to reorder them, as the main method of interaction. Our final approach renders the plain formset inside a <noscript> tag, and the JavaScript widget inside a display: none element, which is later shown by JavaScript code. As the workspace inheritance is edited, JavaScript serializes its state into <input type="hidden"> fields that match the structure used by the formset, so that when the form is submitted, the view performs validation and updates the server state as usual without any extra maintenance burden. Serializing state as hidden form fields looks a bit vintage, but it is an effective way of preserving the established data entry protocol between the server and the browser, allowing us to do incremental improvement of the UI while minimizing the maintenance effort.
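As a minimal sketch of this serialization step (not Debusine's actual code): the "inheritance" prefix and the "workspace" field name below are hypothetical placeholders, while the TOTAL_FORMS/INITIAL_FORMS management fields and the ORDER field are standard Django formset conventions.

```javascript
// Hypothetical sketch: turn an ordered list of workspace names into the
// name/value pairs that a Django formset with can_order=True expects.
// "inheritance" and "workspace" are made-up names for this example.
function formsetFields(prefix, workspaces) {
  const fields = [
    // Django's management form fields, validated server-side.
    { name: `${prefix}-TOTAL_FORMS`, value: String(workspaces.length) },
    { name: `${prefix}-INITIAL_FORMS`, value: String(workspaces.length) },
  ];
  workspaces.forEach((workspace, index) => {
    fields.push({ name: `${prefix}-${index}-workspace`, value: workspace });
    // Formsets with can_order=True read the position from the ORDER field.
    fields.push({ name: `${prefix}-${index}-ORDER`, value: String(index + 1) });
  });
  return fields;
}

// Browser-side half: mirror the pairs into <input type="hidden"> elements,
// replacing previously generated ones, so a plain form submit carries them.
function syncHiddenInputs(form, fields) {
  form.querySelectorAll('input[type="hidden"][data-generated]').forEach((el) => el.remove());
  for (const { name, value } of fields) {
    const input = document.createElement("input");
    input.type = "hidden";
    input.name = name;
    input.value = value;
    input.dataset.generated = "1";
    form.appendChild(input);
  }
}
```

A drag-and-drop handler would recompute formsetFields from the current order after each reorder and re-run syncHiddenInputs, so the server-side formset validation keeps working unchanged.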

More to come

Debusine is now gaining significant adoption and is still under active development, with new features like personal archives coming soon. This will likely mean more user stories for the UI, so this is a design space that we are going to explore again and again in the future. Meanwhile, you can try out Debusine on debusine.debian.net, and follow its development on salsa.debian.org!

11 September 2025

Freexian Collaborators: Debian Contributions: Preparing for setup.py install deprecation, Salsa CI, Debian 13 "trixie" release and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-08 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Preparing for setup.py install deprecation, by Colin Watson

setuptools upstream will be removing the setup.py install command on 31 October. While this may not trickle down immediately into Debian, it does mean that in the near future nearly all Python packages will have to use pybuild-plugin-pyproject (though they don't necessarily have to use pyproject.toml; this is just a question of how the packaging runs the build system). Some of the Python team talked about this a bit at DebConf, and Colin volunteered to write up some notes on cases where this isn't straightforward. This page will likely grow as the team works on this problem.

Salsa CI, by Santiago Ruano Rincón

Santiago fixed some pending issues in the MR that moves the pipeline to sbuild+unshare, and after several months was able to mark the MR as ready. The recent fixes include handling external repositories, honoring the RELEASE autodetection from d/changelog (thanks to Ahmed Siam for spotting the main cause of the issue), and fixing a regression in the apt resolver for *-backports releases. Santiago is currently waiting for a final review and approval from other members of the Salsa CI team before merging it. Thanks to all the folks who have helped test the changes or provided feedback so far. If you want to test the current MR, you need to include the following pipeline definition in your project's CI config file:
---
include:
  - https://salsa.debian.org/santiago/pipeline/raw/sbuild-unshare-02-salsa-ci/salsa-ci.yml
  - https://salsa.debian.org/santiago/pipeline/raw/sbuild-unshare-02-salsa-ci/pipeline-jobs.yml
As a reminder, this MR will make the Salsa CI pipeline build packages in a way more similar to how they are built by the official Debian builders. This will also save some resources, since the default pipeline will have one stage fewer (the provisioning stage), and will make it possible for more projects to be built on salsa.debian.org (including large projects and those from the OCaml ecosystem). See the different issues being fixed in the MR description.

Debian 13 trixie release, by Emilio Pozuelo Monfort

On August 9th, Debian 13 trixie was released, building on two years' worth of updates and bug fixes from hundreds of developers. Emilio helped coordinate the release, communicating with several teams involved in the process.

DebConf 26 Site Visit, by Stefano Rivera

Stefano visited Santa Fe, Argentina, the site for DebConf 26 next year. The aim of the visit was to help build a local team and see the conference venue first-hand. Stefano and Nattie represented the DebConf Committee. The local team organized Debian meetups in Buenos Aires and Santa Fe, where Stefano presented a talk on Debian and DebConf. Venues were scouted and the team met with the university management and local authorities.

Miscellaneous contributions
  • Raphaël updated tracker.debian.org after the trixie release to add the new forky release to the set of monitored distributions. He also reviewed and deployed the work of Scott Talbert showing open merge requests from salsa in the action needed panel.
  • Raphaël reviewed some DEP-3 changes to modernize the embedded examples in light of the broad git adoption.
  • Raphaël configured new workflows on debusine.debian.net to upload to trixie and trixie-security, and officially announced the service on debian-devel-announce, inviting Debian developers to try the service for their next upload to unstable.
  • Carles created a merge request for django-compressor upstream to fix an error when concurrent node processing happened. This will allow removing a workaround added in openstack-dashboard and avoid the same bug in other projects that use django-compressor.
  • Carles prepared a system to detect packages whose Recommends refer to packages that don't exist in unstable. He processed (either reported or ignored, due to mis-detected or temporary problems) 16% of the reports, and will continue next month.
  • Carles got familiar with the freedict-wikdict package and gave feedback on it, and planned contributions with the maintainer to improve the package.
  • Helmut responded to queries related to /usr-move.
  • Helmut adapted crossqa.d.n to the release of trixie.
  • Helmut diagnosed sufficient failures in rebootstrap to make it work with gcc-15.
  • Helmut fixed the CI pipeline of debvm.
  • Helmut sent patches for 19 cross build problems.
  • Faidon discovered that the Multi-Arch hinter would emit confusing hints about :any annotations. Helmut identified the root cause to be the handling of virtual packages and fixed it.
  • Enrico dusted off python-debiancontributors and prototyped a receiving end for salsa webpings, to start follow-up work on the contributors.debian.org discussions at DebConf25.
  • Colin upgraded about 70 Python packages to new upstream versions, which is around 10% of the backlog; this included a complicated Pydantic upgrade in collaboration with the Rust team.
  • Colin fixed a bug in debbugs that caused incoming emails to bugs.debian.org with certain header contents to go missing.
  • Thorsten uploaded sane-airscan, which was already in experimental, to unstable.
  • Thorsten created a script to automate the upload of new upstream versions of foomatic-db. The database contains information about printers and regularly gets an update. Now it is possible to keep the package more up to date in Debian.
  • Stefano prepared updates to almost all of his packages that had new versions waiting to upload to unstable. (beautifulsoup4, hatch-vcs, mkdocs-macros-plugin, pypy3, python-authlib, python-cffi, python-mitogen, python-pip, python-pipx, python-progress, python-truststore, python-virtualenv, re2, snowball, soupsieve).
  • Stefano uploaded two new python3.13 point releases to unstable.
  • Stefano updated distro-info-data in stable releases, to document the trixie release and expected EoL dates.
  • Stefano did some debian.social sysadmin work (keeping up quotas with growing databases and filesystems).
  • Stefano supported the Debian treasurers in processing some of the DebConf 25 reimbursements.
  • Lucas uploaded ruby3.4 to experimental. It was already approved by FTP masters.
  • Lucas uploaded ruby-defaults to experimental to add support for ruby3.4. It will allow us to start triggering test rebuilds and catch any FTBFS with ruby3.4.
  • Lucas did some administrative work for Google Summer of Code (GSoC) and replied to some queries from mentors and students.
  • Anupa helped to organize release parties for Debian 13 and Debian Day events.
  • Anupa did the live coverage for the Debian 13 release and prepared the Bits post for the release announcement and 32nd Debian Day as part of the Debian Publicity team.
  • Anupa attended a Debian Day event organized by FOSS club SSET as a speaker.

3 September 2025

Colin Watson: Free software activity in August 2025

About 95% of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay or GitHub Sponsors.

Python team

forky is open! As a result I'm starting to think about the upcoming Python 3.14. At some point we'll doubtless do a full test rebuild, but in advance of that I concluded that one of the most useful things I could do would be to work on our very long list of packages with new upstream versions. Of course there's no real chance of this ever becoming empty, since upstream maintainers aren't going to stop work for that long, but there are a lot of packages there where we're quite a long way out of date, and many of those include fixes that we'll need for 3.14, either directly or by fixing interactions with new versions of other packages that in turn will need to be fixed. We can backport changes when we need to, but more often than not the most efficient way to do things is just to keep up to date. So, I upgraded these packages to new upstream versions (deep breath): That's only about 10% of the backlog, but of course others are working on this too. If we can keep this up for a while then it should help. I packaged pytest-run-parallel, pytest-unmagic (still in NEW), and python-forbiddenfruit (still in NEW), all needed as new dependencies of various other packages. setuptools upstream will be removing the setup.py install command on 31 October. While this may not trickle down immediately into Debian, it does mean that in the near future nearly all Python packages will have to use pybuild-plugin-pyproject (note that this does not mean that they necessarily have to use pyproject.toml; this is just a question of how the packaging runs the build system). We talked about this a bit at DebConf, and I said that I'd noticed a number of packages where this isn't straightforward and promised to write up some notes.
I wrote the Python/PybuildPluginPyproject wiki page for this; I expect to add more bits and pieces to it as I find them. On that note, I converted several packages to pybuild-plugin-pyproject: I fixed several build/test failures: I fixed some other bugs: I reviewed Debian defaults: nftables as banaction and systemd as backend, but it looked as though nothing actually needed to be changed, so we closed this with no action.

Rust team

Upgrading Pydantic was complicated, and required a rust-pyo3 transition (which Jelmer Vernooij started and Peter Michael Green has mostly been driving, thankfully), packaging rust-malloc-size-of (including an upstream portability fix), and upgrading several packages to new upstream versions:

bugs.debian.org

I fixed bugs.debian.org: misspelled checkbox id "uselessmesages", as well as a bug that caused incoming emails with certain header contents to go missing.

OpenSSH

I fixed openssh-server: refuses further connections after having handled PerSourceMaxStartups connections with a cherry-pick from upstream.

Other bits and pieces

I upgraded libfido2 to a new upstream version. I fixed mimalloc: FTBFS on armhf: cc1: error: '-mfloat-abi=hard': selected architecture lacks an FPU, which was blocking changes to pendulum in the Python team. I also spent some time helping to investigate libmimalloc3: Illegal instruction Running mtxrun generate, though that bug is still open. I fixed various autopkgtest bugs in gssproxy, prompted by #1007 in Debusine. Since my old team is decommissioning Bazaar/Breezy code hosting in Launchpad (the end of an era, which I have distinctly mixed feelings about), I converted Storm to git.

1 September 2025

Birger Schacht: Status update, August 2025

Due to the freeze I did not do that many uploads in the last few months, so there were various new releases I packaged once Trixie was released. Regarding the release of Debian 13, Trixie, I wrote a small summary of the changes in my packages. I uploaded an unreleased version of cage to experimental, to prepare for the transition to wlroots-0.19. Both sway and labwc already had packages in experimental that depended on the new wlroots version. When the transition happened, I uploaded the cage version to unstable, as well as labwc 0.9.1 and sway 1.11. I updated some other packages as well. Most of the packages I uploaded using git-debpush; some of them could not be uploaded this way due to upstream using git submodules (this is #1107219). I also created #1112040 (git-debpush: should also say which tag it created) and #1111504 (git-debpush: pristine-tar check warns about pristine-tar data that's not present; this one is already fixed). I uploaded wayback 0.2 to NEW, where it is waiting for review (ITP). In my dayjob I extended the place lookup form of apis-core-rdf to allow searching for places and selecting them on a map using leaflet and the nominatim API. Another issue I worked on was about highlighting those inputs of our generic list filter that are used to filter the results. I released a couple of bugfix releases for the v0.50 release, then v0.51 and two bugfix releases, and then v0.52 and another couple of bugfix releases. v0.53 will land in a couple of days. I also released v0.6.2 of apis-highlighter-ng, which is sort of a plugin for apis-core-rdf that allows highlighting parts of a text and linking them to any Django object (in our case, relations).

Russ Allbery: Review: Regenesis

Review: Regenesis, by C.J. Cherryh
Series: Cyteen #2
Publisher: DAW
Copyright: January 2009
ISBN: 0-7564-0592-0
Format: Mass market
Pages: 682
The main text below is an edited version of my original review of Regenesis written on 2012-12-21. Additional comments from my re-read are after the original review. Regenesis is a direct sequel to Cyteen, picking up very shortly after the end of that book and featuring all of the same characters. It would be absolutely pointless to read this book without first reading Cyteen; all of the emotional resonance and world-building that make Regenesis work are done there, and you will almost certainly know whether you want to read it after reading the first book. Besides, Cyteen is one of the best SF novels ever written and not the novel to skip. Because this is such a direct sequel, it's impossible to provide a good description of Regenesis without spoiling at least characters and general plot developments from Cyteen. So stop reading here if you've not yet read the previous book. I've had this book for a while, and re-read Cyteen in anticipation of reading it, but I've been nervous about it. One of the best parts of Cyteen is that Cherryh didn't belabor the ending, and I wasn't sure what part of the plot could be reasonably extended. Making me more nervous was the back-cover text that framed the novel as an investigation of who actually killed the first Ari, a question that was fairly firmly in the past by the end of Cyteen and that neither I nor the characters had much interest in answering. Cyteen was also a magical blend of sympathetic characters, taut tension, complex plotting, and wonderful catharsis, the sort of lightning in a bottle that can rarely be caught twice. I need not have worried. If someone had told me that Regenesis was another 700 pages of my favorite section of Cyteen, I would have been dubious. But that's exactly what it is. And the characters only care about Ari's murderer because it comes up, fairly late in the novel, as a clue in another problem.
Ari and Justin are back in the safe laboratory environment of Reseune, safe now that politics are not trying to kill or control them. Yanni has taken over administration. There is a general truce, and even some deeper agreement. Everyone can take a breath and relax, albeit with the presence of Justin's father Jordan as an ongoing irritant. But broader Union politics are not stable: there is an election in progress for the Defense councilor that may break the tenuous majority in favor of Reseune and the Science Directorate, and Yanni is working out a compromise to gain more support by turning a terraforming project loose on a remote world. As the election and the politics heat up, interpersonal relationships abruptly deteriorate, tensions with Jordan sharply worsen, and there may be moles in Reseune's iron-clad security. Navigating the crisis while keeping her chosen family safe will once again tax all of Ari's abilities. The third section of Cyteen, where Ari finally has the tools to take fate into her own hands and starts playing everyone off against each other, is one of my favorite sections of any book. If it was yours as well, Regenesis is another 700 pages of exactly that. As an extension and revisiting, it does lose a bit of immediacy and surprise from the original. Regenesis is also less concerned with the larger questions of azi society, the nature of thought and personality, loyalty and authority, and the best model for the development of human civilization. It's more of a political thriller. But it's a political thriller that recaptures much of the drama and tension of Cyteen and is full of exceptionally smart and paranoid people thinking through all angles of a problem, working fast on their feet, and successfully navigating tricky and treacherous political landscapes. 
And, like Cyteen but unlike others of Cherryh's novels I've read, it's a novel about empowerment, about seizing control of one's surroundings and effectively using all of the capability and leverage at one's fingertips. That gives it a catharsis that's almost as good as Cyteen. It's also, like its predecessor, a surprisingly authoritarian novel. I think it's in that, more than anything else in these books, that one sees the impact of the azi. Regenesis makes it clear that the story is set, not in a typical society, but inside a sort of corporation, with an essentially hierarchical governance structure. There are other SF novels set within corporations (Solitaire comes to mind), but normally they follow peons or at best mid-level personnel or field agents, or otherwise take the viewpoint of the employees or the exploited. When they follow the corporate leaders, the focus usually isn't down inside the organization, but out into the world, with the corporation as silent resources on which the protagonist can draw. Regenesis is instead about the leadership. It's about decisions about the future of humanity that characters feel they can make undemocratically (in part because they or their predecessors have effectively engineered the opinions of the democratic population), but it's also about how one manages and secures a top-down organization. Reseune is, as in the previous novel, a paranoid's suspicions come true; everyone is out to get everyone else, or at least might be, and the level of omnipresent security and threat forces a close parsing of alliances and motivations that elevates loyalty to the greatest virtue. In Cyteen, we had long enough with Ari to see the basic shape of her personality and her slight divergences from her predecessor, but her actions are mostly driven by necessity. Regenesis gives us more of a picture of what she's like when her actions aren't forced, and here I think Cherryh manages a masterpiece of subtle characterization. 
Ari has diverged substantially from her predecessor without always realizing, and those divergences are firmly grounded in the differences she found or created between her life and the first Ari's. She has friends, confidants, and a community, which combined with past trauma has made her fiercely, powerfully protective. It's that protective instinct that weaves the plot together. So many of the events of Cyteen and Regenesis are driven by people's varying reactions to trauma. If you, like me, loved the last third of Cyteen, read this, because Regenesis is more of exactly that. Cherryh finds new politics, new challenges, and a new and original plot within the same world and with the same characters, but it has the same feel of maneuvering, analysis, and decisive action. You will, as with Cyteen, have to be comfortable with pages of internal monologue from people thinking through all sides of a problem. If you didn't like that in the previous book, avoid this one; if you loved it, here's the sequel you didn't know you were waiting for. Original rating: 9 out of 10

Some additional thoughts after re-reading Regenesis in 2025: Cyteen mostly held up to a re-reading and I had fond memories of Regenesis and hoped that it would as well. Unfortunately, it did not. I think I can see the shape of what I enjoyed the first time I read it, but I apparently was in precisely the right mood for this specific type of political power fantasy. I did at least say that you have to be comfortable with pages of internal monologue, but on re-reading, there was considerably more of that than I remembered and it was quite repetitive. Ari spends most of the book chasing her tail, going over and around and beside the same theories that she'd already considered and worrying over the nuances of every position. The last time around, I clearly enjoyed that; this time, I found it exhausting and not very well-written. The political maneuvering is not that deep; Ari just shows every minutia of her analysis. Regenesis also has more about the big questions of how to design a society and the role of the azi than I had remembered, but I'm not sure those discussions reach any satisfying conclusions. The book puts a great deal of effort into trying to convince the reader that Ari is capable of designing sociological structures that will shape Union society for generations to come through, mostly, manipulation of azi programming (deep sets is the term used in the book). I didn't find this entirely convincing the first time around, and I was even less convinced in this re-read. Human societies are a wicked problem, and I don't find Cherryh's computer projections any more convincing than Asimov's psychohistory. Related, I am surprised, in retrospect, that the authoritarian underpinnings of this book didn't bother me more on my first read. They were blatantly obvious on the second read. 
This felt like something Cherryh put into these books intentionally, and I think it's left intentionally ambiguous whether the reader is supposed to agree with Ari's goals and decisions, but I was much less in the mood on this re-read to read about Ari making blatantly authoritarian decisions about the future of society simply because she's smart and thinks she, unlike others, is acting ethically. I say this even though I like Ari and mostly enjoyed spending time in her head. But there is a deep fantasy of being able to reprogram society at play here that looks a lot nastier from the perspective of 2025 than apparently it did to me in 2012. Florian and Catlin are still my favorite characters in the series, though. I find it oddly satisfying to read about truly competent bodyguards, although like all of the azi they sit in an (I think intentionally) disturbing space of ambiguity between androids and human slaves. The somewhat too frank sexuality from Cyteen is still present in Regenesis, but I found it a bit less off-putting, mostly because everyone is older. The authoritarian bent is stronger, since Regenesis is the story of Ari consolidating power rather than the underdog power struggle of Cyteen, and I had less tolerance for it on this re-read. The main problem with this book on re-read was that I bogged down about halfway through and found excuses to do other things rather than finish it. On the first read, I was apparently in precisely the right mood to read about Ari building a fortified home for all of her friends; this time, it felt like endless logistics and musings on interior decorating that didn't advance the plot. Similarly, Justin and Grant's slow absorption into Ari's orbit felt like a satisfying slow burn friendship in my previous reading and this time felt touchy and repetitive. I was one of the few avid defenders of Regenesis the first time I read it, and sadly I've joined the general reaction on a re-read: This is not a very good book. 
It's too long, chases its own tail a bit too much, introduces a lot more authoritarianism and doesn't question it as directly as I wanted, and gets even deeper into Cherryh's invented pseudo-psychology than Cyteen. I have a high tolerance for the endless discussions of azi deep sets and human flux thinking, and even I got bored this time through. On re-read, this book was nowhere near as good as I thought it was originally, and I would only recommend it to people who loved Cyteen and who really wanted a continuation of Ari's story, even if it is flabby and not as well-written. I have normally been keeping the rating of my first read of books, but I went back and lowered this one by two points to ensure it didn't show as high on my list of recommendations.

Re-read rating: 6 out of 10

12 August 2025

Freexian Collaborators: Debian Contributions: DebConf 25, OpenSSH upgrades, Cross compilation collaboration and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-07 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf 25, by Stefano Rivera and Santiago Ruano Rincón In July, DebConf 25 was held in Brest, France. Freexian was a gold sponsor and most of the Freexian team attended the event. Many fruitful discussions were had amongst our team and within the Debian community. DebConf itself was organized by a local team in Brest that included Santiago (who now lives in Uruguay). Stefano was also deeply involved in the organization, as a DebConf committee member, a core video team member, and the lead developer for the conference website. Running the conference took an enormous amount of work, consuming all of Stefano and Santiago's time for most of July. Lucas Kanashiro was active in the DebConf content team, reviewing talks and scheduling them. There were many last-minute changes to make during the event. Anupa Ann Joseph was part of the Debian publicity team doing live coverage of DebConf 25 and was part of the DebConf 25 content team reviewing the talks. She also assisted the local team to procure the lanyards. Recorded sessions presented by Freexian collaborators, often alongside other friends in Debian, included:

OpenSSH upgrades, by Colin Watson Towards the end of a release cycle, people tend to do more upgrade testing, and this sometimes results in interesting problems. Manfred Stock reported "No new SSH connections possible during large part of upgrade to Debian Trixie", which would have affected many people upgrading from Debian 12 (bookworm), with potentially severe consequences for people upgrading remote systems. In fact, there were two independent problems that each led to much the same symptom:
  • As part of hardening the OpenSSH server, OpenSSH 9.8 split the monolithic sshd listener process into two pieces: a minimal network listener (still called sshd), and an sshd-session process dealing with each individual session. Before this change, when sshd received an incoming connection, it forked and re-executed itself with some special parameters to deal with it; after this change, it forks and executes sshd-session instead, and sshd no longer accepts the parameters it used to accept for this.

    Debian package upgrades happen (roughly) in two phases: first we unpack the new files onto disk, and then we run some configuration steps which usually include things like restarting services. Normally this is fine, because the old service keeps on working until it's restarted. In this case, unpacking the new files onto disk immediately stopped new SSH connections from working: the old sshd received the connection and tried to hand it off to a freshly-executed copy of the new sshd binary on disk, which no longer supports this. This wasn't much of a problem when upgrading OpenSSH on its own or with a small number of other packages, but in release upgrades it left a large gap during which you can't SSH to the system any more, and if anything fails in that interval then you could be in trouble.

    After trying a couple of other approaches, Colin landed on the idea of having the openssh-server package divert /usr/sbin/sshd to /usr/sbin/sshd.session-split before the unpack step of an upgrade from before 9.8, then removing the diversion and moving the new file into place once it's ready to restart the service. This reduces the period when new connections fail to a minimum.
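    The diversion dance can be sketched with dpkg-divert. This is an illustrative fragment only, not the actual openssh-server maintainer scripts; the version comparison and the --no-rename handling are assumptions:

    ```shell
    # preinst sketch: when upgrading from a pre-9.8 version, register a
    # diversion WITHOUT renaming anything, so the old sshd keeps working
    # at /usr/sbin/sshd while the new package's sshd is unpacked at the
    # diverted path instead.
    if [ "$1" = upgrade ] && dpkg --compare-versions "$2" lt-nl 1:9.8p1; then
        dpkg-divert --package openssh-server --no-rename \
            --divert /usr/sbin/sshd.session-split --add /usr/sbin/sshd
    fi

    # postinst sketch: once everything is on disk and the service is about
    # to be restarted, drop the diversion and move the new binary in.
    dpkg-divert --package openssh-server --no-rename --remove /usr/sbin/sshd
    mv /usr/sbin/sshd.session-split /usr/sbin/sshd
    ```

    The key point is that the old binary is never touched during unpack, so the running listener can keep re-executing it until the very last moment.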
  • Most OpenSSH processes, including sshd, check for a compatible version of the OpenSSL library when they start up. This check used to be very picky, among other things requiring both the major and minor part of the version number to match. OpenSSL 3 has a better versioning policy, and so OpenSSH 9.4p1 relaxed this check.

    Unfortunately, bookworm shipped with OpenSSH 9.2p1, so as soon as you unpacked the new OpenSSL library during an upgrade, sshd stopped working. This couldn't be fixed by a change in trixie; we needed to change bookworm in advance of the upgrade so that it would tolerate newer versions of OpenSSL, and time was tight if we wanted this to be available before the release of Debian 13.

    Fortunately, there's a stable-updates mechanism for exactly this sort of thing, and the stable release managers kindly accepted Colin's proposal to fix this there.
The net result is that if you apply updates to bookworm (including stable-updates / bookworm-updates, which is enabled by default) before starting the upgrade to trixie, everything should be fine.
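In practice that means fully updating the running bookworm system before touching the apt sources. An illustrative sequence (adjust the sed to however your sources are laid out):

```shell
# Still on bookworm: pick up all pending updates, including the
# stable-updates fix that makes sshd tolerate newer OpenSSL.
apt update
apt full-upgrade

# Only then switch the apt sources from bookworm to trixie and upgrade.
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
apt update
apt full-upgrade
```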

Cross compilation collaboration, by Helmut Grohne Supporting cross building in Debian packages touches many areas of the archive, and quite a few of these matters are a shared responsibility between different teams. Hence, DebConf was an ideal opportunity to settle long-standing issues. The cross building BoF sparked lively discussions, as a significant fraction of developers employ cross builds to get their work done. In the trixie release, about two thirds of the packages can satisfy their cross Build-Depends and about half of the packages can actually be cross built.

Miscellaneous contributions
  • Raphaël Hertzog updated tracker.debian.org to remove references to Debian 10, which was moved to archive.debian.org, and had many fruitful discussions related to Debusine during DebConf 25.
  • Carles Pina prepared some data, questions and information for the DebConf 25 l10n and i18n BoF.
  • Carles Pina demoed and discussed possible next steps for po-debconf-manager with different teams in DebConf 25. He also reviewed Catalan translations and sent them to the packages.
  • Carles Pina started investigating a django-compressor bug: reproduced the bug consistently and prepared a PR for django-compressor upstream (likely more details next month). Looked at packaging frictionless-py.
  • Stefano Rivera triaged Python CVEs against pypy3.
  • Stefano prepared an upload of a new upstream release of pypy3 to Debian experimental (due to the freeze).
  • Stefano uploaded python3.14 RC1 to Debian experimental.
  • Thorsten Alteholz uploaded a new upstream version of sane-airscan to experimental. He also started to work on a new upstream version of hplip.
  • Colin backported fixes for CVE-2025-50181 and CVE-2025-50182 in python-urllib3, and fixed several other release-critical or important bugs in Python team packages.
  • Lucas uploaded ruby3.4 to experimental as a starting point for the ruby-defaults transition that will happen after Trixie release.
  • Lucas coordinated with the Release team the fix of the remaining RC bugs involving ruby packages, and got them all fixed.
  • Lucas, as part of the Debian Ruby team, kicked off discussions to improve internal process/tooling.
  • Lucas, as part of the Debian Outreach team, engaged in multiple discussions around internship programs we run and also what else we could do to improve outreach in the Debian project.
  • Lucas joined the Local groups BoF during DebConf 25 and shared all the good experiences from the Brazilian community and committed to help to document everything to try to support other groups.
  • Helmut spent significant time with Samuel Thibault on improving architecture cross bootstrap for hurd-any, mostly reviewing Samuel's patches. He proposed a patch for improving bash's detection of its pipesize and a change to dpkg-shlibdeps to improve behavior for building cross toolchains.
  • Helmut reiterated the multiarch policy proposal with a lot of help from Nattie Mayer-Hutchings, Rhonda D'Vine and Stuart Prescott.
  • Helmut finished his work on the process based unschroot prototype that was the main feature of his talk (see above).
  • Helmut analyzed a multiarch-related glibc upgrade failure induced by a /usr-move mitigation of systemd and sent a patch and regression fix both of which reached trixie in time. Thanks to Aurelien Jarno and the release team for their timely cooperation.
  • Helmut resurrected an earlier discussion about changing the semantics of Architecture: all packages in a multiarch context in order to improve the long-standing interpreter problem. With help from Tollef Fog Heen better semantics were discovered and agreement was reached with Guillem Jover and Julian Andres Klode to consider this change. The idea is to record a concrete architecture for every Architecture: all package in the dpkg database and enable choosing it as non-native.
  • Helmut implemented type hints for piuparts.
  • Helmut reviewed and improved a patch set of Jochen Sprickerhof for debvm.
  • Anupa was involved in discussions with the Debian Women team during DebConf 25.
  • Anupa started working on the trixie release coverage and started coordinating release parties.
  • Emilio helped coordinate the release of Debian 13 trixie.

6 August 2025

Colin Watson: Free software activity in July 2025

About 90% of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay or GitHub Sponsors. DebConf I attended DebConf for the first time in 11 years (my last one was DebConf 14 in Portland). It was great! For once I had a conference where I had a fairly light load of things I absolutely had to do, so I was able to spend time catching up with old friends, making some new friends, and doing some volunteering - a bit of Front Desk, and quite a lot of video team work where I got to play with sound desks and such. Apparently one of the BoFs ("birds of a feather", i.e. relatively open discussion sessions) where I was talkmeister managed to break the automatic video cutting system by starting and ending precisely on time, to the second, which I'm told has never happened before. I'll take that. I gave a talk about Debusine, along with helping Enrico run a Debusine BoF. We still need to process some of the feedback from this, but are generally pretty thrilled about the reception. My personal highlight was getting a shout-out in a talk from CERN (in the slide starting at 32:55). Other highlights for me included a Python team BoF, Ian's tag2upload talk and some very useful follow-up discussions, a session on archive-wide testing, a somewhat brain-melting whiteboard session about the "multiarch interpreter problem", several useful discussions about salsa.debian.org, Matthew's talk on how Wikimedia automates their Debian package builds, and many others. I hope I can start attending regularly again! OpenSSH Towards the end of a release cycle, people tend to do more upgrade testing, and this sometimes results in interesting problems.
Manfred Stock reported "No new SSH connections possible during large part of upgrade to Debian Trixie", and after a little testing in a container I confirmed that this was a reproducible problem that would have affected many people upgrading from Debian 12 (bookworm), with potentially severe consequences for people upgrading remote systems. In fact, there were two independent problems that each led to much the same symptom: The net result is that if you apply updates to bookworm (including stable-updates / bookworm-updates, which is enabled by default) before starting the upgrade to trixie, everything should be fine. Many thanks to Manfred for reporting this with just enough time to spare that we were able to fix it before Debian 13 is released in a few days! debmirror I did my twice-yearly refresh of debmirror's mirror_size documentation, and applied a patch from Christoph Goehre to improve mirroring of installer files. madison-lite I proposed renaming this project along with the rmadison tool in devscripts, although I'm not yet sure what a good replacement name would be. Python team I upgraded python-expandvars, python-typing-extensions (in experimental), and webtest to new upstream versions. I backported fixes for some security vulnerabilities to unstable: I fixed or helped to fix a number of release-critical bugs: I fixed some other bugs, mostly Severity: important: I reinstated python3-mastodon's build-dependency on and recommendation of python3-blurhash, now that the latter has been fixed to use the correct upstream source.

5 August 2025

Ravi Dwivedi: Tricked by a website while applying for Vietnam visa

In December 2024, Badri and I went to Vietnam. In this post, I'll document our experiences with the visa process of Vietnam. Vietnam requires an e-visa to enter the country. The official online portal for the e-visa application is evisa.xuatnhapcanh.gov.vn/. However, I submitted my visa application on the website vietnamvisa.govt.vn. It was only after submitting my application and making the payment that I realized that it's not the official e-visa website. The realization came from the tagline mentioned in the top left corner of the website - "the best way to obtain a Vietnam visa". I was a bit upset that I got tricked by that website. I should have checked the top-level domains of Vietnam's government websites. Anyways, it is pretty easy to confuse govt.vn with gov.vn. I also paid double the amount of the official visa fee. However, I wasn't asked to provide a flight reservation or hotel bookings - documents which are usually required for most visas. But they did ask me for a photo. I was not even sure whether the website was legit or not. Badri learnt from my experience and applied through the official Vietnam government website. During the process, he had to provide a hotel booking as well as enter the hotel address into the submission form. Additionally, the official website asked him to provide the exact points of entry to and exit from the country, which the non-official website did not ask for. On the other hand, he had to pay only 25 USD versus my 54 USD. It turned out that the website I registered on was also legit, as they informed me a week later that my visa had been approved, along with a copy of my visa. Further, I was not barred from entering or found to be holding a fake visa. It appears that the main scam is not about the visa being fake, but rather that you will be charged more than if you apply through the official website.
I would still recommend that you (the readers) submit your visa application only through the official website and not on any of the other such websites. Our visas were valid for a month (mine was valid from the 4th of December 2024 to the 4th of January 2025). We also had a nice time in Vietnam. Stay tuned for my Vietnam travel posts! Credits to Badri for proofreading and writing his part of the experience.

4 August 2025

Freexian Collaborators: Secure boot signing with Debusine (by Colin Watson)

Debusine aims to be an integrated solution to build, distribute and maintain a Debian-based distribution. At DebConf 25, we talked about using it to pre-test uploads to Debian unstable, and also touched on how Freexian is using it to help maintain the Debian LTS and ELTS projects. When Debian 10 (buster) moved to ELTS status in 2024, this came with a new difficulty that hadn't existed for earlier releases. Debian 10 added UEFI Secure Boot support, meaning that there are now signed variants of the boot loader and Linux kernel packages. Debian has a system where certain packages are configured as needing to be signed, and those packages include a template for a source package along with the unsigned objects themselves. The signing service generates detached signatures for all those objects, and then uses the template to build a source package that it uploads back to the archive for building in the usual way. Once buster moved to ELTS, it could no longer rely on Debian's signing service for all this. Freexian operates parallel infrastructure for the archive, and now needed to operate a parallel signing service as well. By early 2024 we were already planning to move ELTS infrastructure towards Debusine, and so it made sense to build a signing service there as well. Separately, we were able to obtain a Microsoft signature for Freexian's shim build, allowing us to chain this into the trust path for most deployed x86 machines. Freexian can help other organizations running Debian derivatives through the same process, and can provide secure signing infrastructure to the standards required for UEFI Secure Boot.

Prior art We considered both code-signing (Debian's current implementation) and lp-signing (Ubuntu's current implementation) as prior art. Neither was quite suitable for various reasons.
  • code-signing relies on polling a configured URL for each archive to fetch a GPG-signed list of signing requests, which would have been awkward for us to set up, and it assumes that unsigned packages are sufficiently trusted for it to be able to run dpkg -x and dpkg-source -b on them outside any containment. dpkg -x has had the occasional security vulnerability, so this seemed unwise for a service that might need to deal with signing packages for multiple customers.
  • lp-signing is a microservice accepting authenticated requests, and is careful to avoid needing to manipulate packages itself. However, this relies on a different and incompatible mechanism for indicating that packages should be signed, which wasn't something we wanted to introduce in ELTS.

Workers Debusine already had an established system of external workers that run tasks under various kinds of containment. This seems like a good fit: after all, what's a request to sign a package but a particular kind of task? But there are some problems here: workers can run essentially arbitrary code (such as build scripts in source packages), and even though that's under containment, we don't want to give such machines access to highly-sensitive data such as private keys. Fortunately, we'd already introduced the idea of different kinds of workers a few months beforehand, in order to be able to run privileged server tasks that have direct access to the Debusine database. We built on that and added "signing workers", which are much like external workers except that they only run signing tasks, no other types of tasks run on them, and they have access to a private database with information about the keys managed by their Debusine instance. (Django's support for multiple databases made this quite easy to arrange: we were able to keep everything in the same codebase.)
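Django's multiple-database support amounts to a second entry in DATABASES plus a database router that steers the relevant app's models at it. A minimal sketch of that pattern follows; the database alias, app label, and field values here are illustrative inventions, not Debusine's actual configuration:

```python
# Sketch of Django-style database routing: models in a hypothetical
# "signing" app are read from and written to a separate private database,
# while everything else uses the default one.

DATABASES = {
    "default": {"ENGINE": "django.db.backends.postgresql", "NAME": "debusine"},
    "signing": {"ENGINE": "django.db.backends.postgresql", "NAME": "debusine-signing"},
}

class SigningRouter:
    """Route models from the 'signing' app to the private database."""

    route_app_labels = {"signing"}

    def db_for_read(self, model, **hints):
        if model._meta.app_label in self.route_app_labels:
            return "signing"
        return None  # fall through to the "default" database

    db_for_write = db_for_read

    def allow_migrate(self, db, app_label, **hints):
        # Keep signing tables out of the main database and vice versa.
        if app_label in self.route_app_labels:
            return db == "signing"
        return db == "default"
```

With the router listed in DATABASE_ROUTERS, the signing models live in one codebase yet never touch the main database, which matches the article's point about keeping everything together while isolating key data.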

Key management It's obviously bad practice to store private key material in the clear, but at the same time the signing workers are essentially oracles that will return signatures on request while ensuring that the rest of Debusine has no access to private key material, so they need to be able to get hold of it themselves. Hardware security modules (HSMs) are designed for this kind of thing, but they can be inconvenient to manage when large numbers of keys are involved. Some keys are more valuable than others. If the signing key used for an experimental archive leaks, the harm is unlikely to be particularly serious; but if the ELTS signing key leaks, many customers will be affected. To match this, we implemented two key protection arrangements for the time being: one suitable for low-value keys encrypts the key in software with a configured key and stores the public key and ciphertext in the database, while one suitable for high-value keys stores keys as PKCS #11 URIs that can be set up manually by an instance administrator. We packaged some YubiHSM tools to make this easier for our sysadmins. The signing worker calls back to the Debusine server to check whether a given work request is authorized to use a given signing key. All operations related to private keys also produce an audit log entry in the private signing database, so we can track down any misuse.
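The two protection modes can be pictured as two record shapes plus an audit trail. The following is a toy model of the data flow only; every name is invented for illustration, and the "signing" step is a placeholder rather than real cryptography:

```python
# Toy model of the two key-protection arrangements described above.
# All field names are invented; the placeholder below performs no real
# cryptography and must not be used for actual key material.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SoftwareEncryptedKey:
    """Low-value keys: public key plus ciphertext stored in the database."""
    public_key: bytes
    ciphertext: bytes  # private key encrypted with a configured key

@dataclass
class Pkcs11Key:
    """High-value keys: only a PKCS #11 URI, set up manually on an HSM."""
    public_key: bytes
    pkcs11_uri: str  # e.g. "pkcs11:token=...;object=..."

@dataclass
class AuditLogEntry:
    key_fingerprint: str
    work_request_id: int
    operation: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def sign(key, data: bytes, work_request_id: int, audit_log: list) -> bytes:
    """Sign data, recording an audit entry for every private-key use."""
    audit_log.append(AuditLogEntry("fp:" + key.public_key.hex()[:16],
                                   work_request_id, "sign"))
    if isinstance(key, Pkcs11Key):
        raise NotImplementedError("delegate to the HSM via PKCS #11")
    # Placeholder: a real implementation would decrypt the private key
    # with the configured key and produce an actual signature here.
    return b"signature-over-" + data[:8]
```

The point of the shape is that callers only ever hold a key handle, never plaintext key material, and that every signing operation leaves an audit record regardless of which protection mode backs the key.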

Tasks Getting Debusine to do anything new usually requires figuring out how to model the operation as a task. In this case, that was complicated by wanting to run as little code as possible on the signing workers: in particular, we didn't want to do all the complicated package manipulations there. The approach we landed on was a chain of three tasks:
  • ExtractForSigning runs on a normal external worker. It takes the result of a package build and picks out the individual files from it that need to be signed, storing them as separate artifacts.
  • Sign runs on a signing worker, and (of course) makes the actual signatures, storing them as artifacts.
  • AssembleSignedSource runs on a normal external worker. It takes the signed artifacts and produces a source package containing them, based on the template found in the unsigned binary package.
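The chain above can be pictured as a tiny dependency pipeline. A toy sketch (the task names come from the article; the data structures around them are invented for illustration):

```python
# Toy model of the three-task signing chain: each task depends on its
# predecessor's output, and only Sign runs on a signing worker.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    worker_type: str  # "external" or "signing"
    depends_on: Optional["Task"] = None

def build_signing_chain() -> list:
    extract = Task("ExtractForSigning", "external")
    sign = Task("Sign", "signing", depends_on=extract)
    assemble = Task("AssembleSignedSource", "external", depends_on=sign)
    return [extract, sign, assemble]

def runnable(task: Task, completed: set) -> bool:
    """A task may start once its dependency (if any) has completed."""
    return task.depends_on is None or task.depends_on.name in completed
```

This mirrors the design goal: the sensitive step in the middle is the only one scheduled on a signing worker, and the package manipulation on either side stays on ordinary external workers.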

Workflows Of course, we don't want people to have to create all those tasks directly and figure out how to connect everything together for themselves, and that's what workflows are good at. The make_signed_source workflow does all the heavy lifting of creating the right tasks with the right input data and making them depend on each other in the right ways, including fanning out multiple copies of all this if there are multiple architectures or multiple template packages involved. Since you probably don't want to stop at just having the signed source packages, it also kicks off builds to produce signed binary packages. Even this is too low-level for most people to use directly, so we wrapped it all up in our debian_pipeline workflow, which just needs to be given a few options to enable signing support (and those options can be locked down by workspace owners).

What's next? In most cases this work has been enough to allow ELTS to carry on issuing kernel security updates without too much disruption, which was the main goal; but there are other uses for a signing system. We included OpenPGP support from early on, which allows Debusine to sign its own builds, and we'll soon be extending that to sign APT repositories hosted by Debusine. The current key protection arrangements could use some work. Supporting automatically-generated software-encrypted keys and manually-generated keys in an HSM is fine as far as it goes, but it would be good to be able to have the best of both worlds by being able to automatically generate keys protected by an HSM. This needs some care, as HSMs often have quite small limits on the number of objects they can store at any one time, and the usual workaround is to export keys from the HSM under wrap (encrypted by a key known only to the HSM) so that they can be imported only when needed. We have a general idea of how to do this, but doing it efficiently will need care. We'd be very interested in hearing from organizations that need this sort of thing, especially for Debian derivatives. Debusine provides lots of other features that can help you. Please get in touch with us at sales@freexian.com if any of this sounds useful to you.

1 August 2025

Birger Schacht: Status update, July 2025

At the beginning of July I got my 12" Framework laptop and installed Debian on it. During that setup I made some updates to the base setup scripts that I use to install Debian machines. Due to the freeze I did not do much package-related work. But I was at DebConf and I uploaded a new release of labwc to experimental, mostly to test the tag2upload workflow. I started working on packaging wlr-sunclock, which is a small Wayland widget that displays the sun's shadows on the earth. I also created an ITP for wayback. Wayback is an X11 compatibility layer that allows running X11 desktop environments on Wayland. In my dayjob I did my usual work on apis-core-rdf, which is our Django application for managing prosopographic data. I implemented a password change interface and did some restructuring of the templates. We released a new version which was followed by a bugfix release a couple of days later. I also implemented a rather big refactoring in pfp-api. PFP-API is a FastAPI-based REST API that uses rdfproxy to fetch data from a triplestore, converts the data to Pydantic models and then ships the models as JSON. Most of the work is done by rdfproxy in the background, but I adapted the existing pfp-api code to make it easier to add new entity types.

puer-robustus: My Google Summer of Code '25 at Debian

I've participated in this year's Google Summer of Code (GSoC) program and have been working on the small (90h) "autopkgtests for the rsync package" project at Debian.

Writing my proposal Before you can start writing a proposal, you need to select an organization you want to work with. Since many organizations participate in GSoC, I've used the following criteria to narrow things down for me:
  • Programming language familiarity: For me only Python (preferably) as well as shell and Go projects would have made sense. While learning another programming language is cool, I wouldn't be as effective and helpful to the project as someone who is proficient in the language already.
  • Standing of the organization: Some of the organizations participating in GSoC are well-known for the outstanding quality of the software they produce. Debian is one of them, but so is e.g. the Django Foundation or PostgreSQL. And my thinking was that the higher the quality of the organization, the more there is to learn for me as a GSoC student.
  • Mentor interactions: Apart from the advantage you get from mentor feedback when writing your proposal (more on that further below), it is also helpful to gauge how responsive/helpful your potential mentor is during the application phase. This is important since you will be working together for a period of at least 2 months; if the mentor-student communication doesn't work, the GSoC project is going to be difficult.
  • Free and Open-Source Software (FOSS) communication platforms: I generally believe that FOSS projects should be built on FOSS infrastructure. I personally won't run proprietary software when I want to contribute to FOSS in my spare time.
  • Be a user of the project: As Eric S. Raymond pointed out in his seminal The Cathedral and the Bazaar 25 years ago:
    Every good work of software starts by scratching a developer's personal itch.
Once I had some organizations in mind whose projects I'd be interested in working on, I started writing proposals for them. It turns out I started writing my proposals way too late: in the end I only managed to hand in a single one, which is risky. Competition for the GSoC projects is fierce and the more quality (!) proposals you send out, the better your chances are at getting one. However, don't write proposals for the sake of it: reviewers get way too many AI slop proposals already and you will not do yourself a favor with a low-quality proposal. Take the time to read the instructions/ideas/problem descriptions the project mentors have provided and follow their guidelines. Don't hesitate to reach out to project mentors: in my case, I asked Samuel Henrique a few clarification questions, and the ensuing (email) discussion helped me greatly in improving my proposal. Once I had finalized my proposal draft, I sent it to Samuel for a review, which again led to some improvements to the final proposal which I uploaded to the GSoC program webpage.

Community bonding period Once you get the information that you've been accepted into the GSoC program (don't take it personally if you don't make it; this was my second attempt after not making the cut in 2024), get in touch with your prospective mentor ASAP. Agree upon a communication channel and some response times. Put yourself in the loop for project news and discussions, whatever that means in the context of your organization: in Debian's case this boiled down to subscribing to a bunch of mailing lists and IRC channels. Also make sure to set up a functioning development environment if you haven't done so for writing the proposal already.

Payoneer setup The by far most annoying part of GSoC for me. But since you don't have a choice if you want to get the stipend, you will need to sign up for an account at Payoneer. In this iteration of GSoC all participants got a personalized link to open a Payoneer account. When I tried to open an account by following this link, I got an email after the registration and email verification that my account was being blocked because Payoneer deemed the email address I gave a temporary one. Well, the email in question is most certainly anything but temporary, so I tried to get in touch with the Payoneer support - and ended up in an LLM-infused kafkaesque support hell. Emails are answered by an LLM, which for me meant utterly off-topic replies and no help whatsoever. The Payoneer website offers a real-time chat, but it is yet another instance of a bullshit-spewing LLM bot. When I at last tried to call them (the support lines are not listed on the Payoneer website but were provided by the GSoC program), I kid you not, I was told that their platform was currently suffering from technical problems and was hung up on. Only thanks to the swift and helpful support of the GSoC administrators (who get priority support from Payoneer) was I able to set up a Payoneer account in the end. Apart from showing no respect to customers, Payoneer also rips them off big time with fees (unless you get paid in USD). They charge you 2% for currency conversions to EUR on top of the FX spread they take. What worked for me to avoid all of those fees was to open a USD account at Wise and have Payoneer transfer my GSoC stipend in USD to that account. Then I exchanged the USD to my local currency at Wise for significantly less than Payoneer would have charged me. Also make sure to close your Payoneer account after the end of GSoC to avoid their annual fee.

Project work With all this prelude out of the way, I can finally get to the actual work I've been doing over the course of my GSoC project.

Background The upstream rsync project generally sees little development. Nonetheless, they released version 3.4.0, including some CVE fixes, earlier this year. Unfortunately, their changes broke the -H flag. Now, Debian package maintainers need to apply those security fixes to the package versions in the Debian repositories; and those are typically a bit older, which usually means that the patches cannot be applied as-is but need some amendments by the Debian maintainers. For these cases it is helpful to have autopkgtests defined, which check the package's functionality in an automated way upon every build. The question then is: why should the tests not be written upstream, such that regressions are caught during development rather than distribution? There's a lot to say on this question and it probably depends a lot on the package at hand, but for rsync the main benefits are twofold:
  1. The upstream project mocks the ssh connection over which rsync is most typically used. Mocking is better than nothing, but it is not the real thing. In addition to being a more realistic test scenario for the typical rsync use case, involving an ssh server in the tests would automatically extend the overall resilience of Debian packages, as new versions of the openssh-server package in Debian would then benefit from the test cases in the rsync reverse dependency.
  2. The upstream rsync test framework is somewhat idiosyncratic and difficult to port to reimplementations of rsync. Given that the original rsync upstream sees little development, an extensive test suite further downstream can serve as a threshold for drop-in replacements for rsync.

Goal(s) At the start of the project, the Debian rsync package was just running (a part of) the upstream tests as autopkgtests. The relevant snippet from the build log for the rsync_3.4.1+ds1-3 package reads:
114s ------------------------------------------------------------
114s ----- overall results:
114s 36 passed
114s 7 skipped
Samuel and I agreed that it would be a good first milestone to make the skipped tests run. Afterwards, I should write some rsync test cases for local calls, i.e. without an ssh connection, effectively using rsync as a more powerful cp. And once that was done, I should extend the tests such that they run over an active ssh connection. With these milestones, I went to work.

Upstream tests Running the seven skipped upstream tests turned out to be fairly straightforward:
  • Two upstream tests concern access control lists and extended filesystem attributes. These tests rely on functionality provided by the acl and xattr Debian packages; adding those to the Build-Depends list in the debian/control file of the rsync Debian package repo made them run.
  • Four upstream tests required root privileges to run. The autopkgtest tool knows the needs-root restriction for that reason. However, Samuel and I agreed that the tests should not exclusively run with root privileges. So, instead of just adding the restriction to the existing autopkgtest test, we created a new one which has the needs-root restriction and runs the upstream-tests-as-root script - which is simply a symlink to the existing upstream-tests script.
The commits to implement these changes can be found in this merge request. The careful reader will have noticed that I only made 2 + 4 = 6 upstream test cases run out of 7: the leftover upstream test checks the functionality of the --ctimes rsync option. In the context of Debian, the problem is that the Linux kernel doesn't have a syscall to set the creation time of a file. As long as that is the case, this test will always be skipped for the Debian package.
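As a rough sketch of what declaring two such test variants can look like (this is an illustrative debian/tests/control fragment, not the actual rsync packaging, and the dependency fields are omitted):

```
Tests: upstream-tests

Tests: upstream-tests-as-root
Restrictions: needs-root
```

With this layout, autopkgtest runs the first test unprivileged as usual, and only the second one under root, so regressions specific to either privilege level can be caught.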

Local tests When it came to writing Debian-specific test cases, I started off with a completely clean slate. This is a blessing and a curse at the same time: you have full flexibility but also full responsibility. There were a few things to consider at this point:
  • Which language to write the tests in? The programming language I am most proficient in is Python. But testing a CLI tool in Python would have been awkward: it would have meant making repeated subprocess calls to run rsync and then reading from the filesystem to get the file statistics I wanted to check. Samuel suggested I stick with shell scripts and make use of diffoscope - one of the main tools used and maintained by the Reproducible Builds project - to check whether the file contents and file metadata are as expected after rsync calls. Since I did not have good reasons to use bash, I decided to write the scripts to be POSIX compliant.
  • How to avoid boilerplate? If one makes use of a testing framework, which one? Writing the tests would involve quite a bit of boilerplate, mostly related to giving informative output on and during the test run, preparing the file structure we want to run rsync on, and cleaning the files up after the test has run. It would be very repetitive and in violation of DRY to have the code for this appear in every test. Good testing frameworks should provide convenience functions for these tasks. shunit2 comes with those functions, is packaged for Debian, and given that it is already being used in the curl project, I decided to go with it.
  • Do we use the same directory structure and files for every test, or should every test have an individual setup? The tradeoff here is test isolation vs. idiosyncratic code. If every test has its own setup, it takes a) more work to write the test and b) more work to understand the differences between tests. However, one can be sure that changes to the setup in one test will have no side effects on other tests. In my opinion, this guarantee was worth the additional effort in writing/reading the tests.
Having made these decisions, I simply started writing tests and ran into issues very quickly.

rsync and subsecond mtime diffs When testing the rsync --times option, I observed a weird phenomenon: If the source and destination file have modification times which differ only in the nanoseconds, an rsync --times call will not synchronize the modification times. More details about this behavior and examples can be found in the upstream issue I raised. In the Debian tests we had to occasionally work around this by setting the timestamps explicitly with touch -d.
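The workaround itself only needs coreutils; a minimal POSIX-shell illustration of pinning timestamps with touch -d (file names are arbitrary, and GNU stat's %y format is assumed for full-resolution mtimes):

```shell
# Create two files whose mtimes differ only in the subsecond part.
printf 'data' > src.txt
printf 'data' > dst.txt
touch -d '2025-01-01 12:00:00.000000001' src.txt
touch -d '2025-01-01 12:00:00.999999999' dst.txt
# GNU stat prints the mtime including nanoseconds with %y.
[ "$(stat -c %y src.txt)" = "$(stat -c %y dst.txt)" ] || echo "mtimes differ"
# Workaround: pin both files to the same explicit timestamp.
touch -d '2025-01-01 12:00:00' src.txt dst.txt
[ "$(stat -c %y src.txt)" = "$(stat -c %y dst.txt)" ] && echo "mtimes equal"
```

Setting timestamps explicitly like this sidesteps the subsecond-only difference that rsync --times would otherwise leave unsynchronized.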
diffoscope regression In one test case, I was expecting a difference in the modification times, but diffoscope would not report a diff. After a good amount of time spent debugging the problem (my default, and usually correct, assumption is that something about my code is seriously broken if I run into issues like that), I was able to show that diffoscope only displayed this behavior in the version in the unstable suite, not on Debian stable (which I am running on my development machine). Since everything pointed to a regression in the diffoscope project, and with diffoscope being written in Python, a language I am familiar with, I wanted to spend some time investigating (and hopefully fixing) the problem. Running git bisect on the diffoscope repo helped me identify the commit which introduced the regression: it contained an optimization via an early return for bit-by-bit identical files. Unfortunately, the early return also caused an explicitly requested metadata comparison (where the files could still differ) to be skipped. With a nicely diagnosed issue like that, I was able to go to a local hackerspace event, where people work on FOSS together for an evening every month. In a group, we were able to first write a test which showcases the broken behavior in the latest diffoscope version, and second fix the code such that the same test passes going forward. All details can be found in this merge request.
shunit2 failures At some point I had a few autopkgtests set up and passing, but adding a new one would throw totally inexplicable errors at me. After trying to isolate the problem as much as possible, it turned out that shunit2 doesn't play well with the -e shell option. The project mentions this in the release notes for the 2.1.8 version1, but in my opinion a constraint this severe should be featured much more prominently, e.g. in the README.
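The underlying failure mode is easy to reproduce with plain POSIX sh, independent of shunit2: under set -e, the first failing command terminates the script, so any assertion or reporting code after it simply never runs:

```shell
# A subshell with -e exits at the first failing command; the echo after
# `false` is never reached, which is why test output appears to vanish.
sh -c 'set -e; false; echo "assertion ran"' || echo "script aborted early"
```

A test framework that relies on running cleanup and reporting code after a failed command cannot work reliably under -e, which is the constraint shunit2 documents.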

Tests over an ssh connection The centrepiece of this project; everything else has in a way only been preparation for this. Obviously, the goal was to reuse the previously written local tests in some way - not only because lazy me would have less work to do that way, but also because of the reduced long-term maintenance burden of one rather than two test sets. As it turns out, it is actually possible to accomplish that: the remote-tests script doesn't do much apart from starting an ssh server on localhost and running the local-tests script with the REMOTE environment variable set. The REMOTE environment variable changes the behavior of the local-tests script in such a way that it prepends "$REMOTE": to the destination of the rsync invocations. And given that we set REMOTE=rsync@localhost in the remote-tests script, local-tests copies the files to the exact same locations as before, just over ssh. The implementation details can be found in this merge request.
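The destination rewriting can be sketched in POSIX sh (the variable name REMOTE matches the post; the path and the echoed rsync invocation are illustrative):

```shell
# With REMOTE unset, the destination stays local; with REMOTE set, the same
# destination gets "$REMOTE": prepended, so rsync goes over ssh instead.
DEST=/tmp/testdir
if [ -n "${REMOTE:-}" ]; then
    DEST="${REMOTE}:${DEST}"
fi
echo "rsync --times src/ $DEST"

REMOTE=rsync@localhost
DEST=/tmp/testdir
if [ -n "${REMOTE:-}" ]; then
    DEST="${REMOTE}:${DEST}"
fi
echo "rsync --times src/ $DEST"
```

Since the rewritten destination points at the same paths on localhost, every existing local assertion keeps working unchanged.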

proposed-updates Most of my development work on the Debian rsync package took place during the Debian freeze, as the release of Debian Trixie is just around the corner. This means that uploading by Debian Developers (DD) and Debian Maintainers (DM) to the unstable suite is discouraged, as it makes migrating the packages to testing more difficult for the Debian release team. If DDs/DMs want to have the package version in unstable migrated to testing during the freeze, they have to file an unblock request. Samuel has done this twice (1, 2) for my work for Trixie, but has asked me to file the proposed-updates request for current stable (i.e. Debian Bookworm) myself after I've backported my tests to bookworm.

Unfinished business To run the upstream tests which check access control list and extended filesystem attribute functionality, I've added the acl and xattr packages to Build-Depends in debian/control. This, however, will only make the packages available at build time: if Debian users install the rsync package, the acl and xattr packages will not be installed alongside it. For that, the dependencies would have to be added to Depends or Suggests in debian/control. Depends is probably too strong a relation, since rsync clearly works well in practice without them, but adding them to Suggests might be worthwhile. A decision on this would involve checking what happens if rsync is called with the relevant options on a host machine which has those packages installed, but where the destination machine lacks them. Apart from the issue described above, the 15 tests I managed to write are a drop in the ocean in light of the infinitude of rsync options and their combinations. Most glaringly, not all of the options implied by --archive are covered separately (which would help indicate which code path of rsync broke in a regression). To increase the likelihood of catching regressions with the autopkgtests, the test coverage should be extended in the future.
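If the Suggests route were taken, the change would amount to a one-line addition to the binary package stanza in debian/control (a hypothetical fragment, not the actual rsync packaging; unrelated fields elided):

```
Package: rsync
...
Suggests: acl, xattr
```

Suggests expresses that the packages enhance rsync without being pulled in by default, which matches the observation that rsync works fine without them in practice.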

Conclusion Generally, I am happy with my contributions to Debian over the course of my small GSoC project: I've created an extensible, easy to understand, and working autopkgtest setup for the Debian rsync package. There are two things which bother me, however:
  1. In hindsight, I probably shouldn't have gone with shunit2 as a testing framework. The fact that it behaves erratically with the -e flag is a serious drawback for a shell testing framework: you really don't want a shell command to fail silently and the test to continue running.
  2. As alluded to in the previous section, I'm not particularly proud of the number of tests I managed to write.
On the other hand, finding and fixing the regression in diffoscope - while derailing me from the GSoC project itself - might have a redeeming quality.

DebConf25 By sheer luck I happened to work on a GSoC project at Debian over a time period during which the annual Debian conference would take place close enough to my place of residence. Samuel pointed out the opportunity to attend DebConf to me during the community bonding period, and since I could make time for the event in my schedule, I signed up. DebConf was a great experience which - aside from gaining more knowledge about Debian development - allowed me to meet the actual people usually hidden behind email addresses and IRC nicks. I can wholeheartedly recommend attending a DebConf to every interested Debian user! For those who missed this year's iteration of the conference, I can recommend the following recorded talks: While not featuring as a keynote speaker (understandably so, as the newcomer to the Debian community that I am), I could still contribute a bit to the conference program.

GSoC project presentation The Debian Outreach team scheduled a session in which all GSoC and Outreachy students from the past year had the chance to present their work in a lightning talk. The session has been recorded and is available online, just like my slides and the source for them.

Debian install workshop Additionally, with so many Debian experts gathering in one place while KDE's End of 10 campaign is ongoing, I felt it natural to organize a Debian install workshop. In hindsight I can say that I underestimated how much work it would be, especially for me, who does not speak a word of French. But although the turnout of people who wanted us to install Linux on their machines was disappointingly low, it was still worth it: not only because the material in the repo can be helpful to others planning install workshops, but also because it was nice to meet a) the person behind the Debian installer images and b) the local Brest/Finistère Linux user group as well as the motivated and helpful people at Infini.

Credits I want to thank the Open Source team at Google for organizing GSoC: the highly structured program with one-to-one mentorship is a great avenue to start contributing to well-established and at times intimidating FOSS projects. And as much as I disagree with Google's surveillance-capitalist business model, I have to give it to them that the company at least takes its responsibility for FOSS (somewhat) seriously - unlike many other businesses which rely on FOSS and choose to freeride off it. Big thanks to the Debian community! I've experienced nothing but friendliness in my interactions with the community. And lastly, the biggest thanks to my GSoC mentor Samuel Henrique. He has dealt patiently and competently with all my stupid newbie questions. His support enabled me to make - albeit small - contributions to Debian. It has been a pleasure to work with him during GSoC and I'm looking forward to working together with him in the future.

  1. Obviously, I've only read them after experiencing the problem.

22 July 2025

Iustin Pop: Watching website scanning bots

Ever since I put up http://demo.corydalis.io and set up logcheck, I'm inadvertently keeping up with recent exploits in common CMS frameworks, or maybe even normal web framework issues, by seeing what 404s I get from the logs. Now, I didn't intend to do this per se, I just wanted to make sure I don't have any 500s, and at one point I did actually catch a bug by seeing seemingly valid URLs, with my own pages as referrer, leading to 404s. But besides that, it's mainly a couple of times per week: a bot finds the site, and then it tries in fast succession something like this (real log entries, with the source IP address removed):
[21/Jul/2025:09:27:09 +0200] "GET /pms?module=logging&file_name=../../../../../../~/.aws/credentials&number_of_lines=10000 HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:11 +0200] "GET /admin/config?cmd=cat+/root/.aws/credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:11 +0200] "GET /.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:13 +0200] "GET /.env.local HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:13 +0200] "GET /.env.production HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:16 +0200] "GET /.env.dev HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:17 +0200] "GET /.env.development HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:19 +0200] "GET /.env.prod HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:19 +0200] "GET /.env.stage HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:22 +0200] "GET /.env.test HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:23 +0200] "GET /.env.example HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:25 +0200] "GET /.env.bak HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:26 +0200] "GET /.env.old HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:28 +0200] "GET /.envs/.production/.django HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:28 +0200] "GET /blog.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:31 +0200] "GET /wp-content/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:32 +0200] "GET /application/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:34 +0200] "GET /app/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:35 +0200] "GET /apps/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:37 +0200] "GET /config/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:38 +0200] "GET /config/config.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:40 +0200] "GET /config/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:41 +0200] "GET /api/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:43 +0200] "GET /vendor/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:44 +0200] "GET /backend/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:46 +0200] "GET /server/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:46 +0200] "GET /home/user/.aws/credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:49 +0200] "GET /aws/credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:50 +0200] "GET /.aws/credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:52 +0200] "GET /.aws/config HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:52 +0200] "GET /config/aws.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:55 +0200] "GET /config/aws.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:55 +0200] "GET /.env.production HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:58 +0200] "GET /config.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:59 +0200] "GET /config/config.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:01 +0200] "GET /config/settings.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:02 +0200] "GET /config/secrets.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:04 +0200] "GET /config.yaml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:04 +0200] "GET /config.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:07 +0200] "GET /config.py HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:08 +0200] "GET /secrets.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:10 +0200] "GET /secrets.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:11 +0200] "GET /credentials.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:13 +0200] "GET /.git-credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:14 +0200] "GET /.git/config HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:16 +0200] "GET /.gitignore HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:18 +0200] "GET /.gitlab-ci.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:19 +0200] "GET /.github/workflows HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:22 +0200] "GET /.idea/workspace.xml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:22 +0200] "GET /.vscode/settings.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:25 +0200] "GET /docker-compose.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:25 +0200] "GET /docker-compose.override.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:28 +0200] "GET /docker-compose.prod.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:28 +0200] "GET /docker-compose.dev.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:32 +0200] "GET /phpinfo HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:32 +0200] "GET /_profiler/phpinfo HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:34 +0200] "GET /phpinfo.php HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:34 +0200] "GET /info.php HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:37 +0200] "GET /storage/logs/laravel.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:37 +0200] "GET /storage/logs/error.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:40 +0200] "GET /logs/debug.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:40 +0200] "GET /logs/app.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:49 +0200] "GET /debug.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:51 +0200] "GET /error.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:53 +0200] "GET /.DS_Store HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:55 +0200] "GET /backup.zip HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:58 +0200] "GET /.backup HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:29:00 +0200] "GET /db.sql HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:29:03 +0200] "GET /dump.sql HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:29:06 +0200] "GET /database.sql HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:29:09 +0200] "GET /backup.tar.gz HTTP/1.1" 404 - "" "Mozilla/5.0"
Now, this example is actually trying to catch a bit more things, but many times it's focused on one specific thing, or two. Here we have Docker, macOS .DS_Store (I'm not sure how that's useful - to find more filenames?), VSCode settings, various secrets, GitHub workflows, log output, database dumps, AWS credentials, and - judging from the wp-content filename - WordPress settings. The first few years were full of WordPress scanners; now that seems to have quieted down, and I haven't seen a bot scanning 200 potential WP filenames in ages. And this bot even bothers to put in Mozilla/5.0 as browser identification. Side note: I don't think the path in the first log entry, i.e. ../../../../../../~/, ever properly resolves to the home directory of any user. So I'm not sure that particular scanner ever works, but who knows? Maybe some framework does bad tilde expansion, but at least bash will not expand ~ inside a path; it seems the path is passed as-is to an invoked command (strace confirms it). What's surprising here is that these are usually plain dumb scanners, from the same IP address, no concern for throttling, no attempt to hide, just 2 minutes of brute-forcing a random list of known "treasures", then moving on. For this to be worthwhile, there must still be victims found using this method, sadly. Well, sometimes I get a single, one-off "GET /wp-login.php HTTP/1.1", which is strange enough that it might not even be a bot, who knows. But in general, periods of activity of this type come and go, probably aligned with new CVEs. And another surprising thing is that for this type of scanning to work (and I've seen many over the years), the website framework/configuration must allow arbitrary file download. Corydalis itself is written in Haskell, using Yesod, and it has a hardcoded (built at compile time) list of static resources it will serve. I haven't made the switch to fully embedding them in the binary, but at that point it won't need to read from the filesystem at all.
Right now it will serve a few CSS and JS files, plus fonts, but that's it - no arbitrary filesystem traversal. Strange that some frameworks allow it. This is not productively spent time, but it is fun, especially seeing how this changes over time. And it's probably the most use anyone gets out of http://demo.corydalis.io.
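Skimming logs like the ones above for scanner activity is a one-liner; a hedged sketch (the sample entries stand in for a real log file, and the awk field positions match the log format shown above, where the request path is field 4 and the status code field 6):

```shell
# Build a small sample log in the same format as the entries above.
cat > access.log <<'EOF'
[21/Jul/2025:09:27:11 +0200] "GET /.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:13 +0200] "GET /.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:16 +0200] "GET / HTTP/1.1" 200 - "" "Mozilla/5.0"
EOF
# Tally the paths behind 404 responses, most frequent first.
awk '$6 == 404 { print $4 }' access.log | sort | uniq -c | sort -rn
```

Feeding a real access log through the same pipeline quickly surfaces which "treasure" paths the bots are currently probing for.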

12 July 2025

Christian Kastner: Easy dynamic dispatch using GLIBC Hardware Capabilities

TL;DR With GLIBC 2.33+, you can build a shared library multiple times targeting various optimization levels, and the dynamic linker/loader will pick the highest version supported by the current CPU. For example, with the layout below, on a Ryzen 9 5900X, x86-64-v3/libfoo0.so would be loaded:
/usr/lib/glibc-hwcaps/x86-64-v4/libfoo0.so
/usr/lib/glibc-hwcaps/x86-64-v3/libfoo0.so
/usr/lib/glibc-hwcaps/x86-64-v2/libfoo0.so
/usr/lib/libfoo0.so
Longer Version GLIBC Hardware Capabilities or "hwcaps" are an easy, almost trivial way to add a simple form of dynamic dispatch to any amd64 or POWER build, provided that either the build target or the compiler's optimizations can make use of certain CPU extensions. Mo Zhou pointed me towards this when I was faced with the challenge of creating a performant Debian package for ggml, the tensor library behind llama.cpp and whisper.cpp.
The Challenge A performant yet universally loadable library needs to make use of some form of dynamic dispatch to leverage the most effective SIMD extensions available on any given CPU it may run on. Last January, when I first started with the packaging of ggml for Debian, ggml did have support for this through its GGML_CPU_ALL_VARIANTS=ON option, but this was limited to amd64. This meant that on all the other architectures that Debian supports, I would need to target some ancient baseline, thus effectively crippling the package there.
Dynamic Dispatch using hwcaps hwcaps were introduced in GLIBC 2.33 and replace the (now) Legacy Hardware Capabilities, which were removed in 2.37. The way hwcaps work is delightfully simple: the dynamic linker/loader will look for a shared library not just in the standard library paths, but also in subdirectories thereof of the form glibc-hwcaps/<level>, starting with the highest <level> that the current CPU supports. The levels are predefined; I'm using the amd64 levels below. For ggml, this meant that I could simply build the library in multiple passes, each time targeting a different <level>, and install the result in the corresponding subdirectory, which resulted in the following layout (reduced to libggml.so for brevity):
/usr/lib/x86_64-linux-gnu/ggml/glibc-hwcaps/x86-64-v4/libggml.so
/usr/lib/x86_64-linux-gnu/ggml/glibc-hwcaps/x86-64-v3/libggml.so
/usr/lib/x86_64-linux-gnu/ggml/glibc-hwcaps/x86-64-v2/libggml.so
/usr/lib/x86_64-linux-gnu/ggml/libggml.so
In practice, this means that on a CPU supporting AVX512, the linker/loader would load x86-64-v4/libggml.so if it existed, and otherwise continue to look for the other levels, all the way down to the lowest one. On a CPU which supported only SSE4.2, the lookup process would be the same, ending with picking x86-64-v2/libggml.so. With QEMU, all of this was quickly verified. Note that the lowest-level library, targeting x86-64-v1, is not installed to a subdirectory, but to the path where the library would normally have been installed. This has the nice property that on systems not using GLIBC, and thus not having hwcaps available, package installation will still result in a loadable library, albeit the version with the worst performance. And a careful observer might have noticed that in the example above, the library is installed to a private ggml/ directory, so this mechanism also works when using RUNPATH or LD_LIBRARY_PATH. As mentioned above, Debian's ggml package will soon switch to GGML_CPU_ALL_VARIANTS=ON, but this was still quite the useful feature to discover.
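The multi-pass build can be sketched as a loop (libfoo0 and foo.c are placeholders; the gcc invocations are commented out so only the layout logic runs, and GCC's -march=x86-64-v2/v3/v4 targets are assumed):

```shell
set -e
# One build pass per microarchitecture level, installed into the matching
# glibc-hwcaps subdirectory; the baseline build goes to the plain path.
for level in x86-64-v2 x86-64-v3 x86-64-v4; do
    mkdir -p "dest/usr/lib/glibc-hwcaps/${level}"
    # gcc -shared -fPIC -march="${level}" -O2 \
    #     -o "dest/usr/lib/glibc-hwcaps/${level}/libfoo0.so" foo.c
done
# The x86-64-v1 baseline is installed to the normal library path, so even
# non-GLIBC systems without hwcaps still get a loadable library.
mkdir -p dest/usr/lib
# gcc -shared -fPIC -O2 -o dest/usr/lib/libfoo0.so foo.c
ls dest/usr/lib/glibc-hwcaps
```

At run time no extra configuration is needed: the dynamic loader walks these subdirectories from the highest supported level downwards and loads the first match.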

Reproducible Builds: Reproducible Builds in June 2025

Welcome to the 6th report from the Reproducible Builds project in 2025. Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website. In this report:
  1. Reproducible Builds at FOSSY 2025
  2. Distribution work
  3. diffoscope
  4. OSS Rebuild updates
  5. Website updates
  6. Upstream patches
  7. Reproducibility testing framework

Reproducible Builds at FOSSY 2025 On Saturday 2nd August, Vagrant Cascadian and Chris Lamb will be presenting at this year's FOSSY 2025. Their talk, titled "Never Mind the Checkboxes, Here's Reproducible Builds!", is being introduced as follows:
There are numerous policy compliance and regulatory processes being developed that target software development - but do they solve actual problems? Do they improve the quality of software? Do Software Bills of Materials (SBOMs) actually give you the information necessary to verify how a given software artifact was built? What is the goal of all these compliance checklists anyway - or, more importantly, what should the goals be? If a software object is signed, who should be trusted to sign it, and can they be trusted forever?
The talk will introduce the audience to Reproducible Builds as a set of best practices which allow users and developers to verify that software artifacts were built from the source code, but which also allow auditing for license compliance, provide security benefits, and remove the need to trust arbitrary software vendors. Hosted by the Software Freedom Conservancy and taking place in Portland, Oregon, USA, FOSSY aims to be a community-focused event: "Whether you are a long time contributing member of a free software project, a recent graduate of a coding bootcamp or university, or just have an interest in the possibilities that free and open source software bring, FOSSY will have something for you". More information on the event is available on the FOSSY 2025 website, including the full programme schedule. Vagrant and Chris will also be staffing a table this year, where they will be available to answer any questions about Reproducible Builds and discuss collaborations with other projects.

Distribution work In Debian this month:
  • Holger Levsen has discovered that it is now possible to bootstrap a minimal Debian trixie using 100% reproducible packages. This result can itself be reproduced, using the debian-repro-status tool and mmdebstrap's support for hooks:
      $ mmdebstrap --variant=apt --include=debian-repro-status \
           --chrooted-customize-hook=debian-repro-status \
           trixie /dev/null 2>&1 | grep "Your system has"
       INFO  debian-repro-status > Your system has 100.00% been reproduced.
    
  • On our mailing list this month, Helmut Grohne wrote an extensive message raising an issue related to Uploads with conflicting buildinfo filenames:
    Having several .buildinfo files for the same architecture is something that we plausibly want to have eventually. Imagine running two sets of buildds and assembling a single upload containing buildinfo files from both buildds in the same upload. In a similar vein, as a developer I may want to supply several .buildinfo files with my source upload (e.g. for multiple architectures). Doing any of this is incompatible with current incoming processing and with reprepro.
  • 5 reviews of Debian packages were added, 4 were updated and 8 were removed this month adding to our ever-growing knowledge about identified issues.

In GNU Guix, Timothee Mathieu reported that a long-standing issue with reproducibility of shell containers across different host operating systems has been solved. In their message, Timothee mentions:
I discovered that pytorch (and maybe other dependencies) has a reproducibility problem of order 1e-5 when on AVX512 compared to AVX2. I first tried to solve the problem by disabling AVX512 at the level of pytorch, but it did not work. The dev of pytorch said that it may be because some components dispatch computation to MKL-DNN, I tried to disable AVX512 on MKL, and still the results were not reproducible, I also tried to deactivate in openmpi without success. I finally concluded that there was a problem with AVX512 somewhere in the dependencies graph but I gave up identifying where, as this seems very complicated.

The IzzyOnDroid Android APK repository made more progress in June. Not only have they just passed 48% reproducibility coverage, Ben started making their reproducible builds more visible by offering rbtlog shields, a kind of badge that has been quickly picked up by many developers who are proud to present their applications' reproducibility status.
Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for their work there.

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 298, 299 and 300 to Debian:
  • Add python3-defusedxml to the Build-Depends in order to include it in the Docker image. [ ]
  • Handle the RPM format s HEADERSIGNATURES and HEADERIMMUTABLE as a special-case to avoid unnecessarily large diffs. Thanks to Daniel Duan for the report and suggestion. [ ][ ]
  • Update copyright years. [ ]
In addition, @puer-robustus fixed a regression introduced in an earlier commit which resulted in some differences being lost. [ ][ ] Lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 299 [ ][ ] and 300 [ ][ ].

OSS Rebuild updates OSS Rebuild has added a new network analyzer that provides transparent HTTP(S) interception during builds, capturing all network traffic to monitor external dependencies and identify suspicious behavior, even in unmodified maintainer-controlled build processes. The text-based user interface now features automated failure clustering that can group similar rebuild failures and provides natural language failure summaries, making it easier to identify and understand patterns across large numbers of build failures. OSS Rebuild has also improved the local development experience with a unified interface for build execution strategies, allowing for more extensible environment setup for build execution. The team also designed a new website and logo.
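The failure-clustering idea can be illustrated with a toy sketch (my own illustration, not OSS Rebuild's actual code): assign each failure message to the first cluster whose representative message is sufficiently similar.

```python
# Toy sketch of grouping similar build-failure messages, using only
# the standard library.  Thresholds and messages are invented.
from difflib import SequenceMatcher

def cluster_failures(messages, threshold=0.6):
    clusters = []  # each cluster is a list; its first element is the representative
    for msg in messages:
        for cluster in clusters:
            if SequenceMatcher(None, cluster[0], msg).ratio() >= threshold:
                cluster.append(msg)
                break
        else:
            clusters.append([msg])
    return clusters

failures = [
    "error: undefined reference to `foo'",
    "error: undefined reference to `bar'",
    "E: Unable to locate package libexample-dev",
]
groups = cluster_failures(failures)
print(len(groups))  # 2: the two linker errors end up in one cluster
```

A real implementation would cluster on normalized log excerpts rather than raw strings, but the greedy similarity grouping shown here conveys the gist.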

Website updates Once again, there were a number of improvements made to our website this month including:
  • Arnaud Brousseau added Stage, a new Linux distribution, to our Tools page.
  • Chris Lamb improved the docker instructions on the diffoscope website. [ ]


Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In June, however, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Installed and deployed rebuilderd version 0.24 from Debian unstable in order to make use of the new compression feature added by Jarl Gullberg for the database. This resulted in a massive decrease of the SQLite databases:
      • 79G → 2.8G (all)
      • 84G → 3.2G (amd64)
      • 75G → 2.9G (arm64)
      • 45G → 2.1G (armel)
      • 48G → 2.2G (armhf)
      • 73G → 2.8G (i386)
      • 72G → 2.7G (ppc64el)
      • 45G → 2.1G (riscv64)
      for a combined saving from 521G down to 20.8G. This naturally reduces the requirements to run an independent rebuilderd instance and will permit us to add more Debian suites as well.
    • During migration to the latest version of rebuilderd, make sure several services are not started. [ ]
    • Actually run rebuilderd from /usr/bin. [ ]
    • Raise the temperature limits for NVMe devices on some riscv64 nodes that should be ignored. [ ][ ]
    • Use a 64KB kernel page size on the ppc64el architecture (see #1106757). [ ]
    • Improve ordering of some failed to reproduce statistics. [ ]
    • Detect a number of potential causes of build failures within the statistics. [ ][ ]
    • Add support for manually scheduling for the any architecture. [ ]
  • Misc:
    • Update the Codethink nodes as there are now many kernels installed. [ ][ ]
    • Install linux-sysctl-defaults on Debian trixie systems as we need ping functionality. [ ]
    • Limit the fs.nr_open kernel tunable. [ ]
    • Stop submitting results to deprecated buildinfo.debian.net service. [ ][ ]
In addition, Jochen Sprickerhof greatly improved the statistics and the logging functionality, including adapting to the new database format of rebuilderd version 0.24.0 [ ] and temporarily increasing the maximum log size in order to debug a nettlesome build [ ]. Jochen also dropped the CPUSchedulingPolicy=idle systemd flag on the workers. [ ]
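As a quick sanity check, the per-architecture figures above do add up to the quoted totals; a throwaway sketch using the sizes listed in the report:

```python
# Verify the combined rebuilderd SQLite database savings quoted above
# (per-architecture sizes in gigabytes, taken from the report).
before = {"all": 79, "amd64": 84, "arm64": 75, "armel": 45,
          "armhf": 48, "i386": 73, "ppc64el": 72, "riscv64": 45}
after = {"all": 2.8, "amd64": 3.2, "arm64": 2.9, "armel": 2.1,
         "armhf": 2.2, "i386": 2.8, "ppc64el": 2.7, "riscv64": 2.1}

total_before = sum(before.values())          # 521 GB
total_after = round(sum(after.values()), 1)  # 20.8 GB
saving = round(100 * (1 - total_after / total_before), 1)
print(total_before, total_after, saving)     # 521 20.8 96.0
```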

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Freexian Collaborators: Monthly report about Debian Long Term Support, June 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In June, 20 contributors were paid to work on Debian LTS. Their reports are available:
  • Abhijith PA did 14.0h (out of 14.0h assigned).
  • Adrian Bunk did 23.5h (out of 23.5h assigned).
  • Andreas Henriksson did 3.0h (out of 3.0h assigned and 17.0h from previous period), thus carrying over 17.0h to the next month.
  • Andrej Shadura did 2.0h (out of 3.0h assigned and 7.0h from previous period), thus carrying over 8.0h to the next month.
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 8.0h (out of 7.5h assigned and 16.0h from previous period), thus carrying over 15.5h to the next month.
  • Carlos Henrique Lima Melara did 12.0h (out of 12.0h assigned).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 22.0h (out of 22.5h assigned and 1.0h from previous period), thus carrying over 1.5h to the next month.
  • Emilio Pozuelo Monfort did 23.5h (out of 16.75h assigned and 6.75h from previous period).
  • Guilhem Moulin did 14.0h (out of 11.5h assigned and 3.5h from previous period), thus carrying over 1.0h to the next month.
  • Jochen Sprickerhof did 21.0h (out of 0.5h assigned and 22.75h from previous period), thus carrying over 2.25h to the next month.
  • Lucas Kanashiro did 20.0h (out of 20.0h assigned).
  • Markus Koschany did 23.25h (out of 17.0h assigned and 6.25h from previous period).
  • Roberto C. Sánchez did 21.25h (out of 20.75h assigned and 3.25h from previous period), thus carrying over 2.75h to the next month.
  • Santiago Ruano Rincón did 12.75h (out of 15.0h assigned), thus carrying over 2.25h to the next month.
  • Sean Whitton did 1.0h (out of 4.25h assigned and 1.75h from previous period), thus carrying over 5.0h to the next month.
  • Sylvain Beucler did 23.5h (out of 23.5h assigned).
  • Thorsten Alteholz did 15.0h (out of 15.0h assigned).
  • Tobias Frost did 2.5h (out of 12.0h assigned), thus carrying over 9.5h to the next month.

Evolution of the situation In June, we released 35 DLAs.
  • Notable security updates:
    • mariadb-10.5, prepared by Otto Kekäläinen, fixes vulnerabilities which could result in denial of service, information disclosure, or unauthorized data modification
    • python-django, prepared by Chris Lamb, fixes vulnerabilities which would result in log injection or denial of service
    • webkit2gtk, prepared by Emilio Pozuelo Monfort, fixes many vulnerabilities which could result in a wide range of issues
    • xorg-server, prepared by Emilio Pozuelo Monfort, fixes multiple vulnerabilities which may result in privilege escalation
    • sudo, prepared by Thorsten Alteholz, fixes a vulnerability which could result in privilege escalation
  • Notable non-security updates:
    • debian-security-support, prepared by Santiago Ruano Rincón, updates the status of packages which receive limited security support or which have reached the end of security support
    • dns-root-data, prepared by Sylvain Beucler, updates the DNSSEC trust anchors
This month's contributions from outside the regular team include the mariadb-10.5 update mentioned above, prepared by Otto Kekäläinen (the package maintainer); an update to libfile-find-rule-perl, prepared by Salvatore Bonaccorso (a member of the Debian Security Team); and an update to activemq, prepared by Emmanuel Arias (a maintainer of the package). Additionally, LTS Team members contributed stable updates of the following packages:
  • curl, prepared by Carlos Henrique Lima Melara
  • python-tornado, prepared by Daniel Leidert
  • python-flask-cors, prepared by Daniel Leidert
  • common-vfs, prepared by Daniel Leidert
  • cjson, prepared by Adrian Bunk
  • icu, prepared by Adrian Bunk
  • node-tar-fs, prepared by Adrian Bunk
  • rar, prepared by Adrian Bunk
Of particular note is that LTS contributor Carlos Henrique Lima Melara discovered a regression in the upstream fix for CVE-2023-2753 in curl. The corrective action he took included providing a patch upstream, uploading a stable update of curl, and further updating the version of curl in LTS. DebConf, the annual Debian conference, is coming up in July and, as is customary each year, the week preceding the conference will feature an event called DebCamp. The DebCamp week provides an opportunity for teams and other interested groups and individuals to meet in person in the same venue as the conference itself, with the purpose of doing focused work, often called "sprints". LTS coordinator Roberto C. Sánchez has announced that the LTS Team is planning to hold a sprint primarily focused on the Debian security tracker and the associated tooling used by the LTS Team and the Debian Security Team.

Thanks to our sponsors Sponsors that joined recently are in bold.

2 July 2025

Dirk Eddelbuettel: Rcpp 1.1.0 on CRAN: C++11 now Minimum, Regular Semi-Annual Update

rcpp logo With a friendly Canadian hand wave from vacation in Beautiful British Columbia, and speaking on behalf of the Rcpp Core Team, I am excited to share that the (regularly scheduled bi-annual) update to Rcpp just brought version 1.1.0 to CRAN. Debian builds have been prepared and uploaded, Windows and macOS builds should appear at CRAN in the next few days, as will builds for different Linux distributions, and of course r2u should catch up tomorrow as well. The key highlight of this release is the switch to C++11 as the minimum standard. R itself did so in release 4.0.0 more than half a decade ago; if someone is really tied to an older version of R and an equally old compiler, then using an older Rcpp with it has to be acceptable. Our own tests (using continuous integration at GitHub) still go back all the way to R 3.5.* and work fine (with a new-enough compiler). In the previous release post, we commented that only one reverse dependency (falsely) came up in the tests by CRAN; this time there were none among the well over 3000 packages using Rcpp at CRAN. Which really is quite amazing, and possibly also a testament to our rigorous continued testing of our development and snapshot releases on the key branch. This release continues the six-month January-July cycle started with release 1.0.5 in July 2020. As just mentioned, we do of course make interim snapshot dev or rc releases available. While we no longer regularly update the Rcpp drat repo, the r-universe page and repo now really fill this role admirably (and with many more builds besides just source). We continue to strongly encourage their use and testing; I run my systems with these versions, which tend to work just as well, and are of course also fully tested against all reverse dependencies. Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 3038 packages on CRAN depend on Rcpp for making analytical code go faster and further.
On CRAN, 13.6% of all packages depend (directly) on Rcpp, and 61.3% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 100.8 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 2023 (JSS, 2011) and 380 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 695. As mentioned, this release switches to C++11 as the minimum standard. The diffstat display in the CRANberries comparison to the previous release shows how several (generated) source files with C++98 boilerplate have now been removed; we also flattened a number of if/else sections as we no longer need to cater to older compilers (see below for details). We also managed more accommodation for the demands of tighter use of the C API of R by removing DATAPTR and CLOENV use. A number of other changes are detailed below. The full list details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!
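As a rough cross-check of those percentages (the implied totals below are my own back-of-the-envelope estimates, not figures from the post):

```python
# Quoted statistics: 3038 CRAN packages depend on Rcpp, said to be
# 13.6% of all CRAN packages and 61.3% of all compiled packages.
rcpp_users = 3038
total_cran = round(rcpp_users / 0.136)  # implied total CRAN packages
compiled = round(rcpp_users / 0.613)    # implied total compiled packages
print(total_cran, compiled)             # roughly 22338 and 4956
```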

Changes in Rcpp release version 1.1.0 (2025-07-01)
  • Changes in Rcpp API:
    • C++11 is now the required minimal C++ standard
    • The std::string_view type is now covered by wrap() (Lev Kandel in #1356 as discussed in #1357)
    • A last remaining DATAPTR use has been converted to DATAPTR_RO (Dirk in #1359)
    • Under R 4.5.0 or later, R_ClosureEnv is used instead of CLOENV (Dirk in #1361 fixing #1360)
    • Use of lsInternal switched to lsInternal3 (Dirk in #1362)
    • Removed compiler detection macro in a header cleanup setting C++11 as the minimum (Dirk in #1364 closing #1363)
    • Variadic templates are now used unconditionally given C++11 (Dirk in #1367 closing #1366)
    • Remove RCPP_USING_CXX11 as a #define as C++11 is now a given (Dirk in #1369)
    • Additional cleanup for __cplusplus checks (Iñaki in #1371 fixing #1370)
    • Unordered set construction no longer needs a macro for the pre-C++11 case (Iñaki in #1372)
    • Lambdas are supported in Rcpp Sugar functions (Iñaki in #1373)
    • The Date(time)Vector classes now have a default ctor (Dirk in #1385 closing #1384)
    • Fixed an issue where Rcpp::Language would duplicate its arguments (Kevin in #1388, fixing #1386)
  • Changes in Rcpp Attributes:
    • The C++26 standard now has plugin support (Dirk in #1381 closing #1380)
  • Changes in Rcpp Documentation:
    • Several typos were corrected in the NEWS file (Ben Bolker in #1354)
    • The Rcpp Libraries vignette mentions PACKAGE_types.h to declare types used in RcppExports.cpp (Dirk in #1355)
    • The vignettes bibliography file was updated to current package versions, and now uses doi references (Dirk in #1389)
  • Changes in Rcpp Deployment:
    • Rcpp.package.skeleton() creates URL and BugReports if given a GitHub username (Dirk in #1358)
    • R 4.4.* has been added to the CI matrix (Dirk in #1376)
    • Tests involving NA propagation are skipped under linux-arm64 as they are under macos-arm (Dirk in #1379 closing #1378)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

11 June 2025

Gunnar Wolf: Understanding Misunderstandings - Evaluating LLMs on Networking Questions

This post is a review for Computing Reviews of "Understanding Misunderstandings: Evaluating LLMs on Networking Questions", an article published in the Association for Computing Machinery (ACM) SIGCOMM Computer Communication Review.
Large language models (LLMs) have awed the world, emerging as the fastest-growing application of all time: ChatGPT reached 100 million active users in January 2023, just two months after its launch. After an initial cycle, they have gradually been mostly accepted and incorporated into various workflows, and their basic mechanics are no longer beyond the understanding of people with moderate computer literacy. Now, given that the technology is better understood, we face the question of how convenient LLM chatbots are for different occupations. This paper embarks on the question of whether LLMs can be useful for networking applications. It systematizes querying three popular LLMs (GPT-3.5, GPT-4, and Claude 3) with questions taken from several network management online courses and certifications, and presents a taxonomy of six axes along which the incorrect responses were classified. The authors also measure four strategies toward improving answers, observing that, while some of those strategies were marginally useful, they sometimes resulted in degraded performance. The authors queried the commercially available instances of Gemini and GPT, which achieved scores over 90 percent for basic subjects but fared notably worse in topics that require understanding and converting between different numeric notations, such as working with Internet protocol (IP) addresses, even when they are trivial (that is, presenting the subnet mask for a given network address expressed in the typical IPv4 dotted-quad representation). As a last item in the paper, the authors compare performance with three popular open-source models, Llama3.1, Gemma2, and Mistral, with their default settings. Although those models are almost 20 times smaller than the GPT-3.5 commercial model used, they reached comparable performance levels. Sadly, the paper does not delve deeper into these models, which can be deployed locally and adapted to specific scenarios.
The paper is easy to read and does not require deep mathematical or AI-related knowledge. It presents a clear comparison along the described axes for the 503 multiple-choice questions presented. This paper can be used as a guide for structuring similar studies over different fields.
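To make the subnet-mask example concrete: the conversions the paper calls trivial are one-liners with Python's standard ipaddress module (the addresses below are invented for illustration).

```python
# Deriving the subnet mask for a network given in the usual
# IPv4 dotted-quad/CIDR representation, and the reverse direction.
import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")
print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256

# Recover the prefix length from a dotted-quad mask.
print(ipaddress.ip_network("0.0.0.0/255.255.255.0").prefixlen)  # 24
```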

Freexian Collaborators: Debian Contributions: Updated Austin, DebConf 25 preparations continue and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-05 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Updated Austin, by Colin Watson and Helmut Grohne Austin is a frame stack sampling profiler for Python. It allows profiling Python applications without instrumenting them, at the cost of some accuracy, and is the only one of its kind presently packaged for Debian. Unfortunately, it hadn't been uploaded in a while, and hence the last Python version it worked with was 3.8. We updated it to a current version and also dealt with a number of architecture-specific problems (such as unintended sign promotion, 64-bit time_t fallout, and strictness due to -Wformat-security) in cooperation with upstream. With luck, it will migrate in time for trixie.

Preparing for DebConf 25, by Stefano Rivera and Santiago Ruano Rincón DebConf 25 is quickly approaching, and the organization work doesn't stop. In May, Stefano continued supporting the different teams. Just to give a couple of examples, Stefano made changes to the DebConf 25 website to make BoF and sprint submissions public, so interested people can already know if a BoF or sprint for a given subject is planned, allowing coordination with the proposer, and to enhance how statistics are made public to help the work of the local team. Santiago has participated in different tasks, including the logistics of the conference, like preparing more information about the public transportation that will be available. Santiago has also taken part in activities related to fundraising and reviewing more event proposals.

Miscellaneous contributions
  • Lucas fixed security issues in Valkey in unstable.
  • Lucas tried to help with the update of Redis to version 8 in unstable. The package hadn t been updated for a while due to licensing issues, but now upstream maintainers fixed them.
  • Lucas uploaded around 20 ruby-* packages to unstable that hadn't been updated for some years, to make them build reproducibly. Thanks to the reproducible builds folks for pointing out those issues. Also some unblock requests (and follow-ups) were needed to make them reach trixie in time for the release.
  • Lucas is organizing a Debian Outreach session for DebConf 25, reaching out to all interns of the Google Summer of Code and Outreachy programs from the last year. The session will be presented by in-person interns, along with video recordings from interns who were interested in participating but could not attend the conference.
  • Lucas continuously works on DebConf Content team tasks. Replying to speakers, sponsors, and communicating internally with the team.
  • Carles improved po-debconf-manager: fixed bugs reported by a Catalan translator, added the possibility to import packages from outside salsa, added support for using non-default project branches on salsa, and polished it to get ready for DebCamp.
  • Carles tested the new apt in trixie and reported bugs against apt, installation-report and libqt6widget6.
  • Carles used po-debconf-manager to import the remaining 80 packages, reviewed 20 translations, and submitted 54 translations (as MRs or bugs).
  • Carles prepared some topics for the translation BoF at DebConf (gathered feedback, did a first pass on topics).
  • Helmut gave an introductory talk about the mechanics of Linux namespaces at MiniDebConf Hamburg.
  • Helmut sent 25 patches for cross compilation failures.
  • Helmut reviewed, refined and applied a patch from Jochen Sprickerhof to make the Multi-Arch hinter emit more hints for pure Python modules.
  • Helmut sat down with Christoph Berg (not affiliated with Freexian) and extended unschroot to support directory-based chroots with overlayfs. This is a feature that was lost in transitioning from sbuild's schroot backend to its unshare backend. unschroot implements the schroot API just enough to be usable with sbuild and otherwise works a lot like the unshare backend. As a result, apt.postgresql.org now performs its builds contained in a user namespace.
  • Helmut looked into a fair number of rebootstrap failures most of which related to musl or gcc-15 and imported patches or workarounds to make those builds proceed.
  • Helmut updated dumat to use sqop fixing earlier PGP verification problems thanks to Justus Winter and Neal Walfield explaining a lot of sequoia at MiniDebConf Hamburg.
  • Helmut got the previous zutils update for /usr-move wrong again and had to send another update.
  • Helmut looked into why debvm's autopkgtests were flaky and, with lots of help from Paul Gevers and Michael Tokarev, tracked it down to a race condition in qemu. He updated debvm to trigger the problem less often and also fixed a wrong dependency using Luca Boccassi's patch.
  • Santiago continued the switch to sbuild for Salsa CI (which had been stopped for some months), and has been mainly testing linux, since it's a complex project that heavily customizes the pipeline. Santiago is preparing the changes for linux to submit an MR soon.
  • In openssh, Colin tracked down some intermittent sshd crashes to a root cause, and issued bookworm and bullseye updates for CVE-2025-32728.
  • Colin spent some time fixing up fail2ban, mainly reverting a patch that caused its tests to fail and would have banned legitimate users in some common cases.
  • Colin backported upstream fixes for CVE-2025-48383 (django-select2) and CVE-2025-47287 (python-tornado) to unstable.
  • Stefano supported video streaming and recording for 2 miniDebConfs in May: Maceió and Hamburg. These had overlapping streams for one day, which is a first for us.
  • Stefano packaged the new version of python-virtualenv that includes our patches for not including the wheel for wheel.
  • Stefano got all involved parties to agree (in principle) to meet at DebConf for a mediated discussion on a dispute that was brought to the technical committee.
  • Anupa coordinated the swag purchase for DebConf 25 with Juliana and Nattie.
  • Anupa joined the publicity team meeting for discussing the upcoming events and BoF at DebConf 25.
  • Anupa worked with the publicity team to publish a Bits post welcoming the GSoC 2025 interns.
