
16 January 2026

Kentaro Hayashi: Budgie Desktop 10.10 is out, but not for me yet :(

Introduction I've been a Budgie Desktop user since 2020 (Budgie Desktop 10.5 or so). Recently Budgie Desktop 10.10 became available from Debian experimental. I tried it and realized that it is not for me yet.

What are my requirements for a desktop environment?
  • Screen sharing of a specific window/application for web meetings
  • Sharing keyboard input smoothly with deskflow
  • Switching focus with mouse movement
  • and ...

Is Budgie Desktop 10.10 suitable? Short answer: no, not yet. Budgie Desktop 10.10 (Wayland) seems to have lost the ability to share a specific window/application in a web meeting. Of course, you can still share the whole screen. It also can't share keyboard input smoothly with deskflow. Both of these seem to be supported in GNOME and KDE. In contrast to Budgie Desktop 10.9 (X11), Budgie Desktop 10.10 seems to be missing effective Wayland + xdg-desktop-portal support for them. It might be supported in a future release, but that might be 11.x.

Alternatives? Budgie Desktop 10.x will go into maintenance mode, so these issues will (I guess) not be fixed in 10.x releases. buddiesofbudgie.org Switching DEs might be an option - GNOME or KDE - but I don't have the energy to make that transition for now. I decided to take the conservative option and go back to 10.9. Note that if you upgrade to Budgie Desktop 10.10, it is hard to downgrade to Budgie Desktop 10.9 because the python3-gi and python3-gi-cairo dependencies block budgie-desktop. See https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1120138 for details. #1120138 - budgie-extras: Several applets are incompatible with pygobject 3.54/libpeas 1.36.0-8 - Debian Bug report logs As a dirty hack, you can modify the python3-gi dependency and install it at your own risk to downgrade.

Conclusion It was still too early for me to adopt Budgie Desktop (Wayland). I'll stay with Budgie Desktop 10.9 for a while. That said, it's unclear how long Budgie Desktop 10.9 will remain in unstable and usable; sticking to Budgie Desktop 10.9 might become problematic as other packages are updated. When that happens, I'd like to reconsider this issue again.

13 January 2026

Freexian Collaborators: Debian Contributions: dh-python development, Python 3.14 and Ruby 3.4 transitions, Surviving scraper traffic in Debian CI and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-12 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

dh-python development, by Stefano Rivera In Debian we build our Python packages with the help of a debhelper-compatible tool, dh-python. Before starting the 3.14 transition (which would rebuild many packages) we landed some updates to dh-python to fix bugs and add features. This started a month of attention on dh-python, iterating through several bug fixes and a couple of unfortunate regressions. dh-python is used by almost all packages containing Python (over 5000). Most of these are very simple, but some are complex and use dh-python in unexpected ways. It's hard to prevent almost any change (including obvious bug fixes) from causing some unexpected knock-on behaviour. There is a fair amount of complexity in dh-python, and some rather clever code, which can make it tricky to work on. All of this means that good QA is important. Stefano spent some time adding type annotations and specialized types to make it easier to see what the code is doing and catch mistakes. This has already made work on dh-python easier. Now that Debusine has built-in repositories and debdiff support, Stefano could quickly test the effects of changes on many other packages. After each big change, he could upload dh-python to a repository, rebuild e.g. 50 Python packages with it, and see what differences appeared in the output. Reviewing the diffs is still a manual process, but can be improved. Stefano did a small test of what it would take to replace direct setuptools setup.py calls with PEP 517 (pyproject-style) builds. There is more work to do here.
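As a rough illustration of the difference, here is a minimal Python sketch of a PEP 517 (pyproject-style) build driven through the build library, rather than invoking setup.py directly. It is not dh-python's implementation; the source and output directories are placeholders.

    # Minimal PEP 517 build sketch (not dh-python's actual code); paths are
    # placeholders.  Instead of running "python setup.py bdist_wheel", we
    # drive the configured build backend (setuptools, flit, hatchling, ...)
    # through the standard PEP 517 hooks.
    from build import ProjectBuilder  # packaged as python3-build in Debian

    builder = ProjectBuilder(".")  # directory containing pyproject.toml
    # Ask the backend which additional packages it needs to build a wheel.
    print(builder.get_requires_for_build("wheel"))
    # Build the wheel via the backend's PEP 517 hooks.
    wheel_path = builder.build("wheel", output_directory="dist")
    print("built", wheel_path)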

Python 3.14 transition, by Stefano Rivera (et al.) In December the transition to add Python 3.14 as a supported version started in Debian unstable. To do this, we update the list of supported versions in python3-defaults, and then start rebuilding modules with C extensions from the leaves inwards. This had already been tested in a PPA and Ubuntu, so many of the biggest blocking compatibility issues with 3.14 had already been found and fixed. But there are always new issues to discover. Thanks to a number of people in the Debian Python team, we got through the first bit of the transition fairly quickly. There are still a number of open bugs that need attention and many failed tests blocking migration to testing. Python 3.14.1 was released just after we started the transition, and very soon after, a follow-up 3.14.2 release came out to address a regression. We ran into another regression in Python 3.14.2.

Ruby 3.4 transition, by Lucas Kanashiro (et al.) The Debian Ruby team just started the preparation to move the default Ruby interpreter version to 3.4. At the moment, the ruby3.4 source package is already available in experimental, and ruby-defaults has added support for Ruby 3.4. Lucas rebuilt all reverse dependencies against this new version of the interpreter and published the results here. Lucas also reached out to some stakeholders to coordinate the work. Next steps are: 1) announcing the results to the whole team and asking for help to fix packages failing to build against the new interpreter; 2) filing bugs against packages FTBFSing against Ruby 3.4 which are not fixed yet; 3) once we have a low number of build failures against Ruby 3.4, asking the Debian Release team to start the transition in unstable.

Surviving scraper traffic in Debian CI, by Antonio Terceiro Like most of the open web, Debian Continuous Integration has been struggling for a while to keep up with the insatiable hunger from data scrapers everywhere. Solving this involved a lot of trial and error; the final result seems to be stable, and consists of two parts. First, all Debian CI data pages, except the direct links to test log files (such as those provided by the Release Team's testing migration excuses), now require users to be authenticated before being accessed. This means that the Debian CI data is no longer publicly browseable, which is a bit sad. However, this is where we are now. Additionally, there is now a fail2ban-powered firewall-level access limitation for clients that display an abusive access pattern. This went through several iterations, with some of them unfortunately blocking legitimate Debian contributors, but the current state seems to strike a good balance between blocking scrapers and not blocking real users. Please get in touch with the team on the #debci OFTC channel if you are affected by this.

A hybrid dependency solver for crossqa.debian.net, by Helmut Grohne crossqa.debian.net continuously cross builds packages from the Debian archive. Like Debian's native build infrastructure, it uses dose-builddebcheck to determine whether a package's dependencies can be satisfied before attempting a build. About one third of Debian's packages fail this check, so understanding the reasons is key to improving cross building. Unfortunately, dose-builddebcheck stops after reporting the first problem and does not display additional ones. To address this, a greedy solver implemented in Python now examines each build-dependency individually and can report multiple causes. dose-builddebcheck is still used as a fall-back when the greedy solver does not identify any problems. The report for bazel-bootstrap is a lengthy example.
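To make the greedy per-dependency idea concrete, here is a hypothetical Python sketch (not the actual crossqa code): each build-dependency is checked on its own so that every unsatisfiable one gets reported, and a complete solver such as dose-builddebcheck is only consulted when the greedy pass finds nothing. The is_satisfiable() and full_solver_check() helpers are placeholders for real solver calls.

    # Hypothetical greedy build-dependency checker; the real crossqa
    # implementation differs.  The two callables stand in for an actual
    # dependency solver.
    from typing import Callable, Iterable

    def greedy_builddep_check(
        build_deps: Iterable[str],
        is_satisfiable: Callable[[str], bool],
        full_solver_check: Callable[[Iterable[str]], list[str]],
    ) -> list[str]:
        """Return a list of human-readable dependency problems."""
        problems = []
        for dep in build_deps:
            # Check each build-dependency individually so that *all*
            # unsatisfiable dependencies are reported, not just the first.
            if not is_satisfiable(dep):
                problems.append(f"cannot satisfy cross build-dependency: {dep}")
        if not problems:
            # The greedy pass found nothing; fall back to a complete solver
            # (dose-builddebcheck) to catch interactions between dependencies.
            problems = list(full_solver_check(build_deps))
        return problems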

rebootstrap, by Helmut Grohne Due to the changes suggested by Loongson earlier, rebootstrap now adds debhelper to its final installability test and builds a few more packages required for installing it. It also now uses a variant of build-essential that has been marked Multi-Arch: same (see foundational work from last year). This in turn made the use of a non-default GCC version more difficult and required more work to make it work for gcc-16 from experimental. Ongoing archive changes temporarily regressed building fribidi and dash. libselinux and groff have received patches for architecture specific changes and libverto has been NMUed to remove the glib2.0 dependency.

Miscellaneous contributions
  • Stefano did some administrative work on debian.social and debian.net instances and Debian reimbursements.
  • Stefano did routine updates of python-authlib, python-mitogen, xdot.
  • Stefano spent several hours discussing Debian's Python package layout with the PyPA upstream community. Debian has ended up with a very different on-disk installed Python layout than other distributions, and this continues to cause some frustration in many communities that have to carry special workarounds to handle it. This ended up impacting cross builds, as Helmut discovered.
  • Raphaël set up Debusine workflows for the various backports repositories on debusine.debian.net.
  • Zulip is not yet in Debian (RFP in #800052), but Raphaël helped with the French translation as he is experimenting with that discussion platform.
  • Antonio performed several routine Salsa maintenance tasks, including fixing salsa-nm-sync, the service that synchronizes project members' data from LDAP to Salsa, which had been broken since salsa.debian.org was upgraded to trixie.
  • Antonio deployed a new amd64 worker host for Debian CI.
  • Antonio did several DebConf technical and administrative bits, including adding support for custom check-in/check-out dates in the MiniDebConf registration module and publishing a call for bids for DebConf27.
  • Carles reviewed and submitted 14 Catalan translations using po-debconf-manager.
  • Carles improved po-debconf-manager: added a delete-package command, show-information now uses properly formatted output (YAML), and it now also attaches the translation to bug reports for which a merge request has been open for too long.
  • Carles investigated why some packages appeared in po-debconf-manager but not in the Debian l10n list. It turns out that some packages had debian/po/templates.pot (appearing in po-debconf-manager) but not the expected POTFILES.in file. He created a script to find packages in this or a similar situation and reported bugs.
  • Carles tested and documented how to set up voices (mbrola and festival) when using the Orca screen reader. He commented on a few issues and possible improvements on the debian-accessibility list.
  • Helmut sent patches for 48 cross build failures and initiated discussions on how to deal with two non-trivial matters. Besides the Python layout issue mentioned above, CMake introduced a cmake_pkg_config builtin which is not aware of the host architecture. He also forwarded a Meson patch upstream.
  • Thorsten uploaded a new upstream version of cups to fix a nasty bug that was introduced by the latest security update.
  • Along with many other Python 3.14 fixes, Colin fixed a tricky segfault in python-confluent-kafka after a helpful debugging hint from upstream.
  • Colin upstreamed an improved version of an OpenSSH patch we've been carrying since 2008 to fix misleading verbose output from scp.
  • Colin used Debusine to coordinate transitions for astroid and pygments, and wrote up the astroid case on his blog.
  • Emilio helped with various transitions, and provided a build fix for opencv for the ffmpeg 8 transition.
  • Emilio tested the GNOME updates for trixie proposed updates (gnome-shell, mutter, glib2.0).
  • Santiago helped to review the status of how to test different build profiles in parallel on the same pipeline, using the test-build-profiles job. This makes it possible, for example, to simultaneously test build profiles such as nocheck and nodoc for the same git tree. Finally, Santiago provided MR !685 to fix the documentation.
  • Anupa prepared a bits post for the Outreachy interns announcement along with Tássia Camões Araújo and worked on publicity team tasks.

11 January 2026

Otto Kekäläinen: Stop using MySQL in 2026, it is not true open source

If you care about supporting open source software and still use MySQL in 2026, you should switch to MariaDB like so many others have already done. The number of git commits on github.com/mysql/mysql-server has been declining significantly in 2025. The screenshot below shows the state of git commits as of writing this in January 2026, and the picture should be alarming to anyone who cares about software being open source. MySQL GitHub commit activity decreasing drastically

This is not surprising: Oracle should not be trusted as the steward for open source projects When Oracle acquired Sun Microsystems, and MySQL along with it, back in 2009, the European Commission almost blocked the deal due to concerns that Oracle's goal was just to stifle competition. The deal went through as Oracle made a commitment to keep MySQL going and not kill it, but (to nobody's surprise) Oracle has not been a good steward of MySQL as an open source project, and the community around it has been withering away for years now. All development is done behind closed doors. The publicly visible bug tracker is not the real one Oracle staff actually use for MySQL development, and the few people who try to contribute to MySQL just see their pull requests and patch submissions marked as received with mostly no feedback; those changes may or may not be in the next MySQL release, often rewritten, and with only Oracle staff in the git author/committer fields. The real author only gets a small mention in a blog post. When I was the engineering manager for the core team working on RDS MySQL and RDS MariaDB at Amazon Web Services, I oversaw my engineers' contributions to both MySQL and MariaDB (the latter being a fork of MySQL by the original MySQL author, Michael Widenius). All the software developers in my org disliked submitting code to MySQL due to how bad the reception by Oracle was to their contributions. MariaDB is the stark opposite, with all development taking place in real time on github.com/mariadb/server, anyone being able to submit a pull request and get a review, all bugs being openly discussed at jira.mariadb.org and so forth, just like one would expect from a true open source project. MySQL is open source only by license (GPL v2), but not as a project.

MySQL's technical decline in recent years Despite not being a good open source steward, Oracle should be given credit that it did keep the MySQL organization alive and allowed it to exist fairly independently and continue developing and releasing new MySQL versions well over a decade after the acquisition. I have no insight into how many customers they had, but I assume the MySQL business was fairly profitable and financially useful to Oracle, at least as long as it didn't gain too many features to threaten Oracle's own main database business. I don't know why, perhaps because too many talented people had left the organization, but it seems that from a technical point of view MySQL clearly started to deteriorate from 2022 onward. When MySQL 8.0.29 was released with the default ALTER TABLE method switched to run in-place, it had a lot of corner cases that didn't work, causing the database to crash and data to corrupt for many users. The issue wasn't fully fixed until a year later in MySQL 8.0.32. To many users' annoyance, Oracle announced the 8.0 series as evergreen and introduced features and changes in the minor releases, instead of just doing bugfixes and security fixes like users historically had learnt to expect from these x.y.Z maintenance releases. There was no new major MySQL version for six years. After MySQL 8.0 in 2018 it wasn't until 2023 that MySQL 8.1 was released, and it was just a short-term preview release. The first actual new major release, MySQL 8.4 LTS, came out in 2024. Even though it was a new major release, many users were disappointed as it had barely any new features. Many also reported degraded performance with newer MySQL versions; for example, the benchmark by famous MySQL performance expert Mark Callaghan below shows that on write-heavy workloads MySQL 9.5 throughput is typically 15% less than in 8.0. Benchmark showing new MySQL versions being slower than the old Due to newer MySQL versions deprecating many features, a lot of users also complained about significant struggles with both the MySQL 5.7->8.0 and 8.0->8.4 upgrades. With few new features and a heavy focus on code base cleanup and feature deprecation, it became obvious to many that Oracle had decided to just keep MySQL barely alive, and put all new relevant features (e.g. vector search) into Heatwave, Oracle's closed-source and cloud-only service for MySQL customers. As it was evident that Oracle isn't investing in MySQL, Percona's Peter Zaitsev wrote Is Oracle Finally Killing MySQL? in June 2024. At this time MySQL's popularity as ranked by DB-Engines had also started to tank hard, a trend that will likely accelerate in 2026. MySQL dropping significantly in DB-Engines ranking In September 2025 news reported that Oracle was reducing its workforce and that the MySQL staff was getting heavily reduced. Obviously this does not bode well for MySQL's future, and Peter Zaitsev posted stats already in November showing that the latest MySQL maintenance release contained fewer bug fixes than before.

Open source is more than ideology: it has very real effects on software security and sovereignty Some say they don't care if MySQL is truly open source or not, or that they don't care if it has a future in coming years, as long as it still works now. I am afraid people thinking so are taking a huge risk. The database is often the most critical part of a software application stack, and any flaw or problem in operations, let alone a security issue, will have immediate consequences, and not caring will eventually get people fired or sued. In open source, problems are discussed openly, and the bigger the problem, the more people and companies will contribute to fixing it. Open source as a development methodology is similar to the scientific method, with a free flow of ideas that are constantly contested and only the ones with the most compelling evidence win. Not being open means more obscurity, more risk and more of a "just trust us bro" attitude. This open vs. closed difference is very visible, for example, in how Oracle handles security issues. We can see that in 2025 alone MySQL published 123 CVEs about security issues, while MariaDB had 8. There were 117 CVEs that only affected MySQL and not MariaDB in 2025. I haven't read them all, but typically the CVEs hardly contain any real details. As an example, the most recent one, CVE-2025-53067, states "Easily exploitable vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Server." There is no information a security researcher or auditor could use to verify if any original issue actually existed, or if it was fixed, or whether the fix was sufficient and fully mitigated the issue or not. MySQL users just have to take Oracle's word that it is all good now. Handling security issues like this is in stark contrast to other open source projects, where all security issues and their code fixes are open for full scrutiny after the initial embargo is over and the CVE made public. There are also various forms of enshittification going on that one would not see in a true open source project, and everything about MySQL as software, documentation and website is pushing users to stop using the open source version and move to the closed MySQL versions, and in particular to Heatwave, which is not only closed-source but also results in Oracle fully controlling customers' database contents. Of course, some could say this is how Oracle makes money and is able to provide a better product. But stories on Reddit and elsewhere suggest that what is going on is more like Oracle milking hard the last remaining MySQL customers, who are forced to pay more and more for getting less and less.

There are options and migrating is easy, just do it A large part of MySQL users switched to MariaDB already in the mid-2010s, in particular everyone who cared deeply about their database software staying truly open source. That included large installations such as Wikipedia, and Linux distributions such as Fedora and Debian. Because it's open source and there is no centralized machine collecting statistics, nobody knows what the exact market shares look like. There are however some application-specific stats, such as that 57% of WordPress sites around the world run MariaDB, while the share for MySQL is 42%. For anyone running a classic LAMP stack application such as WordPress, Drupal, Mediawiki, Nextcloud, or Magento, switching the old MySQL database to MariaDB is straightforward. As MariaDB is a fork of MySQL and mostly backwards compatible with it, swapping out MySQL for MariaDB can be done without changing any of the existing connectors or database clients, as they will continue to work with MariaDB as if it was MySQL. For those running custom applications and who have the freedom to make changes to how and what database is used, there are tens of mature and well-functioning open source databases to choose from, with PostgreSQL being the most popular general database. If your application was built from the start for MySQL, switching to PostgreSQL may however require a lot of work, and the MySQL/MariaDB architecture and storage engine InnoDB may still offer an edge in e.g. online services where high performance, scalability and solid replication features are of the highest priority. For a quick and easy migration MariaDB is probably the best option. Switching from MySQL to Percona Server is also very easy, as it closely tracks all changes in MySQL and deviates from it only by a small number of improvements done by Percona. However, precisely because it is basically just a customized version of the MySQL Server, it's not a viable long-term solution for those trying to fully ditch the dependency on Oracle. There are also several open source databases that have no common ancestry with MySQL, but strive to be MySQL-compatible. Thus most apps built for MySQL can simply switch to using them without needing SQL statements to be rewritten. One such database is TiDB, which has been designed from scratch specifically for highly scalable and large systems, and is so good that even Amazon's latest database solution DSQL was built borrowing many ideas from TiDB. However, TiDB only really shines with larger distributed setups, so for the vast majority of regular small- and mid-scale applications currently using MySQL, the most practical solution is probably to just switch to MariaDB, which on most Linux distributions can simply be installed by running apt/dnf/brew install mariadb-server. Whatever you end up choosing, as long as it is not Oracle, you will be better off.
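As a small, hypothetical illustration of the drop-in compatibility mentioned above: a Python client using PyMySQL needs no code changes when the server behind it is switched from MySQL to MariaDB, since MariaDB speaks the MySQL wire protocol. The host name, credentials and database name below are placeholders.

    # Hypothetical client code; host and credentials are placeholders.
    # The same code works whether the server is MySQL or MariaDB.
    import pymysql

    conn = pymysql.connect(
        host="db.example.internal",
        user="app",
        password="secret",
        database="appdb",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")
        # Prints e.g. "8.0.43" on MySQL or "11.4.5-MariaDB" on MariaDB;
        # the application logic does not need to care which one it is.
        print(cur.fetchone()[0])
    conn.close()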

10 January 2026

Dirk Eddelbuettel: Rcpp 1.1.1 on CRAN: Many Improvements in Semi-Annual Update

rcpp logo Team Rcpp is thrilled to share that an exciting new version 1.1.1 of Rcpp is now on CRAN (and also uploaded to Debian and already built for r2u). Having switched to C++11 as the minimum standard in the previous 1.1.0 release, this version takes full advantage of it and removes a lot of conditional code catering to older standards that no longer need to be supported. Consequently, the source tarball shrinks by 39% from 3.11 mb to 1.88 mb. That is a big deal. (Size peaked with Rcpp 1.0.12 two years ago at 3.43 mb; relative to its size we are down 45% !!) Removing unused code also makes maintenance easier, and quickens both compilation and installation in general. This release continues as usual with the six-months January-July cycle started with release 1.0.5 in July 2020. Interim snapshots are always available via the r-universe page and repo. We continue to strongly encourage the use of these development releases and their testing; we tend to run our systems with them too. Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 3020 packages on CRAN depend on Rcpp for making analytical code go faster and further. On CRAN, 13.1% of all packages depend (directly) on Rcpp, and 60.9% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 109.8 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 2151 (JSS, 2011) and 405 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 715. This time, I am not attempting to summarize the different changes. The full list follows below and details all these changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.1.1 (2026-01-08)
  • Changes in Rcpp API:
    • An unused old R function for a compiler version check has been removed after checking no known package uses it (Dirk in #1395)
    • A narrowing warning is avoided via a cast (Dirk in #1398)
    • Demangling checks have been simplified (Iñaki in #1401 addressing #1400)
    • The treatment of signed zeros is now improved in the Sugar code (Iñaki in #1404)
    • Preparations for phasing out use of Rf_error have been made (Iñaki in #1407)
    • The long-deprecated function loadRcppModules() has been removed (Dirk in #1416 closing #1415)
    • Some non-API includes from R were refactored to accommodate R-devel changes (Iñaki in #1418 addressing #1417)
    • An accessor to Rf_rnbeta has been removed (Dirk in #1419 also addressing #1420)
    • Code accessing non-API Rf_findVarInFrame now uses R_getVarEx (Dirk in #1423 fixing #1421)
    • Code conditional on the R version now expects at least R 3.5.0; older code has been removed (Dirk in #1426 fixing #1425)
    • The non-API ATTRIB entry point to the R API is no longer used (Dirk in #1430 addressing #1429)
    • The unwind-protect mechanism is now used unconditionally (Dirk in #1437 closing #1436)
  • Changes in Rcpp Attributes:
    • The OpenMP plugin has been generalized for different macOS compiler installations (Kevin in #1414)
  • Changes in Rcpp Documentation:
    • Vignettes are now processed via a new "asis" processor adopted from R.rsp (Dirk in #1394 fixing #1393)
    • R is now cited via its DOI (Dirk)
    • A (very) stale help page has been removed (Dirk in #1428 fixing #1427)
    • The main README.md was updated emphasizing r-universe in favor of the local drat repos (Dirk in #1431)
  • Changes in Rcpp Deployment:
    • A temporary change in R-devel concerning NA part in complex variables was accommodated, and then reverted (Dirk in #1399 fixing #1397)
    • The macOS CI runners now use macos-14 (Dirk in #1405)
    • A message is shown if R.h is included before Rcpp headers as this can lead to errors (Dirk in #1411 closing #1410)
    • Old helper functions use message() to signal they are not used, deprecation and removal to follow (Dirk in #1413 closing #1412)
    • Three tests were being silenced following #1413 (Dirk in #1422)
    • The heuristic whether to run all available tests was refined (Dirk in #1434 addressing #1433)
    • Coverage has been tweaked via additional #nocov tags (Dirk in #1435)
  • Non-release Changes:
    • Two interim non-releases 1.1.0.8.1 and .2 were made in order to unblock CRAN due to changes in R-devel rather than Rcpp

Thanks to my CRANberries, you can also look at a diff to the previous interim release along with pre-releases 1.1.0.8, 1.1.0.8.1 and 1.1.0.8.2 that were needed because R-devel all of a sudden decided to move fast and break things. Not our doing. Questions, comments etc. should go to the GitHub discussion section or the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well. Both can be searched as well.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

8 January 2026

Dirk Eddelbuettel: RcppCCTZ 0.2.14 on CRAN: New Upstream, Small Edits

A new release 0.2.14 of RcppCCTZ is now on CRAN, in Debian and built for r2u. RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries. One for dealing with civil time: human-readable dates and times, and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now several other packages (four the last time we counted) include its sources too. Not ideal, but beyond our control. This version updates to a new upstream release, and brings some small local edits. CRAN and R-devel stumbled over us still mentioning C++11 in SystemRequirements (yes, this package is old enough for that to have mattered once). As that is a false positive (the package compiles well under any recent standard), we removed the mention. The key changes since the last CRAN release are summarised below.

Changes in version 0.2.14 (2026-01-08)
  • Synchronized with upstream CCTZ (Dirk in #46).
  • Explicitly enumerate files to be compiled in src/Makevars* (Dirk in #47)

Courtesy of my CRANberries, there is a diffstat report relative to the previous version. More details are at the RcppCCTZ page; code, issue tickets etc. at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

7 January 2026

Gunnar Wolf: Artificial Intelligence: Play or break the deck

This post is an unpublished review for Artificial Intelligence: Play or break the deck
As a little disclaimer, I usually review books or articles written in English, and although I will offer this review to Computing Reviews as usual, it is likely it will not be published. The title of this book in Spanish is Inteligencia artificial: jugar o romper la baraja. I was pointed at this book, published last October by Margarita Padilla García, a well-known Free Software activist from Spain who has long worked on analyzing (and shaping) aspects of socio-technological change. As with other books published by Traficantes de sueños, this book is published as Open Access, under a CC BY-NC license, and can be downloaded in full. I started casually looking at this book, with too long a backlog of material to read, but soon realized I could just not put it down: it completely captured me. This book presents several aspects of Artificial Intelligence (AI), written for a general, non-technical audience. Many books with a similar target have been published, but this one is quite unique; first of all, it is written in a personal, non-formal tone. Contrary to what's usual in my reading, the author made the explicit decision not to fill the book with references to her sources ("because searching on the Internet, it's very easy to find things"), making the book easier to read linearly, a decision I somewhat regret, but recognize helps develop the author's style. The book has seven sections, dealing with different aspects of AI. They are "Visions" (historical framing of the development of AI); "Spectacular" (why do we feel AI to be so disrupting, digging particularly into game engines and search space); "Strategies", explaining how multilayer neural networks work and linking the various branches of historic AI together, arriving at Natural Language Processing; "On the inside", tackling technical details such as algorithms, the importance of training data, bias, discrimination; "On the outside", presenting several example AI implementations with socio-ethical implications; "Philosophy", presenting the works of Marx, Heidegger and Simondon in their relation with AI, work, justice, ownership; and "Doing", presenting aspects of social activism in relation to AI. Each part ends with yet another personal note: Margarita Padilla includes a letter to one of her friends related to said part. Totalling 272 pages (A5, or roughly half-letter, format), this is a rather small book. I read it probably over a week. So, while this book does not provide lots of new information to me, the way it was written made it a very pleasing experience, and it will surely influence the way I understand or explain several concepts in this domain.

5 January 2026

Colin Watson: Free software activity in December 2025

About 95% of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay or GitHub Sponsors. Python packaging I upgraded these packages to new upstream versions: Python 3.14 is now a supported version in unstable, and we're working to get that into testing. As usual this is a pretty arduous effort because it requires going round and fixing lots of odds and ends across the whole ecosystem. We can deal with a fair number of problems by keeping up with upstream (see above), but there tends to be a long tail of packages whose upstreams are less active and where we need to chase them, or where problems only show up in Debian for one reason or another. I spent a lot of time working on this: Fixes for pytest 9: I filed lintian: Report Python egg-info files/directories to help us track the migration to pybuild-plugin-pyproject. I did some work on dh-python: Normalize names in pydist lookups and pyproject plugin: Support headers (the latter of which allowed converting python-persistent and zope.proxy to pybuild-plugin-pyproject, although it needed a follow-up fix). I fixed or helped to fix several other build/test failures: Other bugs: Other bits and pieces Code reviews

4 January 2026

Matthew Garrett: What is a PC compatible?

Wikipedia says "An IBM PC compatible is any personal computer that is hardware- and software-compatible with the IBM Personal Computer (IBM PC) and its subsequent models". But what does this actually mean? The obvious literal interpretation is that for a device to be PC compatible, all software originally written for the IBM 5150 must run on it. Is this a reasonable definition? Is it one that any modern hardware can meet? Before we dig into that, let's go back to the early days of the x86 industry. IBM had launched the PC built almost entirely around off-the-shelf Intel components, and shipped full schematics in the IBM PC Technical Reference Manual. Anyone could buy the same parts from Intel and build a compatible board. They'd still need an operating system, but Microsoft was happy to sell MS-DOS to anyone who'd turn up with money. The only thing stopping people from cloning the entire board was the BIOS, the component that sat between the raw hardware and much of the software running on it. The concept of a BIOS originated in CP/M, an operating system originally written in the 70s for systems based on the Intel 8080. At that point in time there was no meaningful standardisation - systems might use the same CPU but otherwise have entirely different hardware, and any software that made assumptions about the underlying hardware wouldn't run elsewhere. CP/M's BIOS was effectively an abstraction layer, a set of code that could be modified to suit the specific underlying hardware without needing to modify the rest of the OS. As long as applications only called BIOS functions, they didn't need to care about the underlying hardware and would run on all systems that had a working CP/M port. By 1979, boards based on the 8086, Intel's successor to the 8080, were hitting the market. The 8086 wasn't machine code compatible with the 8080, but 8080 assembly code could be assembled to 8086 instructions to simplify porting old code. Despite this, the 8086 version of CP/M was taking some time to appear, and a company called Seattle Computer Products started producing a new OS closely modelled on CP/M and using the same BIOS abstraction layer concept. When IBM started looking for an OS for their upcoming 8088 (an 8086 with an 8-bit data bus rather than a 16-bit one) based PC, a complicated chain of events resulted in Microsoft paying a one-off fee to Seattle Computer Products, porting their OS to IBM's hardware, and the rest is history. But one key part of this was that despite what was now MS-DOS existing only to support IBM's hardware, the BIOS abstraction remained, and the BIOS was owned by the hardware vendor - in this case, IBM. One key difference, though, was that while CP/M systems typically included the BIOS on boot media, IBM integrated it into ROM. This meant that MS-DOS floppies didn't include all the code needed to run on a PC - you needed IBM's BIOS. To begin with this wasn't obviously a problem in the US market since, in a way that seems extremely odd from where we are now in history, it wasn't clear that machine code was actually copyrightable. In 1982 Williams v. Artic determined that it could be, even if fixed in ROM - this ended up having broader industry impact in Apple v. Franklin, and it became clear that clone machines making use of the original vendor's ROM code wasn't going to fly. Anyone wanting to make hardware compatible with the PC was going to have to find another way. And here's where things diverge somewhat.
Compaq famously performed clean-room reverse engineering of the IBM BIOS to produce a functionally equivalent implementation without violating copyright. Other vendors, well, were less fastidious - they came up with BIOS implementations that either implemented a subset of IBM's functionality, or didn't implement all the same behavioural quirks, and compatibility was restricted. In this era several vendors shipped customised versions of MS-DOS that supported different hardware (which you'd think wouldn't be necessary given that's what the BIOS was for, but still), and the set of PC software that would run on their hardware varied wildly. This was the era where vendors even shipped systems based on the Intel 80186, an improved 8086 that was both faster than the 8086 at the same clock speed and was also available at higher clock speeds. Clone vendors saw an opportunity to ship hardware that outperformed the PC, and some of them went for it. You'd think that IBM would have immediately jumped on this as well, but no - the 80186 integrated many components that were separate chips on 8086 (and 8088) based platforms, but crucially didn't maintain compatibility. As long as everything went via the BIOS this shouldn't have mattered, but there were many cases where going via the BIOS introduced performance overhead or simply didn't offer the functionality that people wanted, and since this was the era of single-user operating systems with no memory protection, there was nothing stopping developers from just hitting the hardware directly to get what they wanted. Changing the underlying hardware would break them. And that's what happened. IBM was the biggest player, so people targeted IBM's platform. When BIOS interfaces weren't sufficient they hit the hardware directly - and even if they weren't doing that, they'd end up depending on behavioural quirks of IBM's BIOS implementation. The market for DOS-compatible but not PC-compatible mostly vanished, although there were notable exceptions - in Japan the PC-98 platform achieved significant success, largely as a result of the Japanese market being pretty distinct from the rest of the world at that point in time, but also because it actually handled Japanese at a point where the PC platform was basically restricted to ASCII or minor variants thereof. So, things remained fairly stable for some time. Underlying hardware changed - the 80286 introduced the ability to access more than a megabyte of address space and would promptly have broken a bunch of things except IBM came up with an utterly terrifying hack that bit me back in 2009, and which ended up sufficiently codified into Intel design that it was one mechanism for breaking the original XBox security. The first 286 PC even introduced a new keyboard controller that supported better keyboards but which remained backwards compatible with the original PC to avoid breaking software. Even when IBM launched the PS/2, the first significant rearchitecture of the PC platform with a brand new expansion bus and associated patents to prevent people cloning it without paying off IBM, they made sure that all the hardware was backwards compatible. For decades, PC compatibility meant not only supporting the officially supported interfaces, it meant supporting the underlying hardware. This is what made it possible to ship install media that was expected to work on any PC, even if you'd need some additional media for hardware-specific drivers.
It's something that still distinguishes the PC market from the ARM desktop market. But it's not as true as it used to be, and it's interesting to think about whether it ever was as true as people thought. Let's take an extreme case. If I buy a modern laptop, can I run 1981-era DOS on it? The answer is clearly no. First, modern systems largely don't implement the legacy BIOS. The entire abstraction layer that DOS relies on isn't there, having been replaced with UEFI. When UEFI first appeared it generally shipped with a Compatibility Support Module, a layer that would translate BIOS interrupts into UEFI calls, allowing vendors to ship hardware with more modern firmware and drivers without having to duplicate them to support older operating systems1. Is this system PC compatible? By the strictest of definitions, no. Ok. But the hardware is broadly the same, right? There's projects like CSMWrap that allow a CSM to be implemented on top of stock UEFI, so everything that hits BIOS should work just fine. And well yes, assuming they implement the BIOS interfaces fully, anything using the BIOS interfaces will be happy. But what about stuff that doesn't? Old software is going to expect that my Sound Blaster is going to be on a limited set of IRQs and is going to assume that it's going to be able to install its own interrupt handler and ACK those on the interrupt controller itself, and that's really not going to work when you have a PCI card that's been mapped onto some APIC vector, and also if your keyboard is attached via USB or SPI then reading it via the CSM will work (because it's calling into UEFI to get the actual data) but trying to read the keyboard controller directly won't2, so you're still actually relying on the firmware to do the right thing but it's not, because the average person who wants to run DOS on a modern computer owns three fursuits and some knee length socks and while you are important and vital and I love you all you're not enough to actually convince a transglobal megacorp to flip the bit in the chipset that makes all this old stuff work. But imagine you are, or imagine you're the sort of person who (like me) thinks writing their own firmware for their weird Chinese Thinkpad knockoff motherboard is a good and sensible use of their time - can you make this work fully? Haha no of course not. Yes, you can probably make sure that the PCI Sound Blaster that's plugged into a Thunderbolt dock has interrupt routing to something that is absolutely no longer an 8259 but is pretending to be so you can just handle IRQ 5 yourself, and you can probably still even write some SMM code that will make your keyboard work, but what about the corner cases? What if you're trying to run something built with IBM Pascal 1.0? There's a risk that it'll assume that trying to access an address just over 1MB will give it the data stored just above 0, and now it'll break. It'd work fine on an actual PC, and it won't work here, so are we PC compatible? That's a very interesting abstract question and I'm going to entirely ignore it. Let's talk about PC graphics3. The original PC shipped with two different optional graphics cards - the Monochrome Display Adapter and the Color Graphics Adapter. If you wanted to run games you were doing it on CGA, because MDA had no mechanism to address individual pixels so you could only render full characters. So, even on the original PC, there was software that would run on some hardware but not on other hardware. Things got worse from there.
CGA was, to put it mildly, shit. Even IBM knew this - in 1984 they launched the PCjr, intended to make the PC platform more attractive to home users. As well as maybe the worst keyboard ever to be associated with the IBM brand, IBM added some new video modes that allowed displaying more than 4 colours on screen at once4, and software that depended on that wouldn't display correctly on an original PC. Of course, because the PCjr was a complete commercial failure, it wouldn't display correctly on any future PCs either. This is going to become a theme. There's never been a properly specified PC graphics platform. BIOS support for advanced graphics modes5 ended up specified by VESA rather than IBM, and even then getting good performance involved hitting hardware directly. It wasn't until Microsoft specced DirectX that anything was broadly usable even if you limited yourself to Microsoft platforms, and this was an OS-level API rather than a hardware one. If you stick to BIOS interfaces then CGA-era code will work fine on graphics hardware produced up until the 20-teens, but if you were trying to hit CGA hardware registers directly then you're going to have a bad time. This isn't even a new thing - even if we restrict ourselves to the authentic IBM PC range (and ignore the PCjr), by the time we get to the Enhanced Graphics Adapter we're not entirely CGA compatible. Is an IBM PC/AT with EGA PC compatible? You'd likely say "yes", but there's software written for the original PC that won't work there. And, well, let's go even more basic. The original PC had a well defined CPU frequency and a well defined CPU that would take a well defined number of cycles to execute any given instruction. People could write software that depended on that. When CPUs got faster, some software broke. This resulted in systems with a Turbo Button - a button that would drop the clock rate to something approximating the original PC so stuff would stop breaking. It's fine, we'd later end up with Windows crashing on fast machines because hardware details will absolutely bleed through. So, what's a PC compatible? No modern PC will run the DOS that the original PC ran. If you try hard enough you can get it into a state where it'll run most old software, as long as it doesn't have assumptions about memory segmentation or your CPU or want to talk to your GPU directly. And even then it'll potentially be unusable or crash because time is hard. The truth is that there's no way we can technically describe a PC Compatible now - or, honestly, ever. If you sent a modern PC back to 1981 the media would be amazed and also point out that it didn't run Flight Simulator. PC Compatible is a socially defined construct, just like "Woman". We can get hung up on the details or we can just chill.

  1. Windows 7 is entirely happy to boot on UEFI systems except that it relies on being able to use a BIOS call to set the video mode during boot, which has resulted in things like UEFISeven to make that work on modern systems that don't provide BIOS compatibility
  2. Back in the 90s and early 2000s operating systems didn't necessarily have native drivers for USB input devices, so there was hardware support for trapping OS accesses to the keyboard controller and redirecting that into System Management Mode, where some software that was invisible to the OS would speak to the USB controller and then fake a response. Anyway, that's how I made a laptop that could boot unmodified MacOS X
  3. (my name will not be Wolfwings Shadowflight)
  4. Yes yes ok 8088 MPH demonstrates that if you really want to you can do better than that on CGA
  5. and by advanced we're still talking about the 90s, don't get excited

Valhalla's Things: And now for something completely different

Posted on January 4, 2026
Tags: topic:walking
Warning
mention of bodies being bodies and minds being minds, and not in the perfectly working sense.
One side of Porto Ceresio along the lake: there is a small strip of houses, and the hills behind them are covered in woods. A boat is parked around the middle of the picture. A lot of the youtube channels I follow tend to involve somebody making things, so of course one of the videos my SO and I watched a few days ago was about walking around San Francisco Bay, and that recalled my desire to go to places by foot. Now, for health-related reasons doing it properly would be problematic, and thus I've never trained for that, but during this Christmas holiday-heavy time I suggested to my very patient SO the next best thing: instead of our usual 1.5-hour uphill walk in the woods, a mostly flat walk of 2 hours and a bit on paved streets, plus some train, to a nearby town: Porto Ceresio, on the Italian side of Lake Lugano. I started to prepare for it on the day before, by deciding it was a good time to upgrade my PinePhone, and wait, I'm still on Trixie? I could try Forky, what could possibly go wrong? And well, the phone was no longer able to boot, and reinstalling from the latest weekly gave a system where the on-screen keyboard didn't appear, and I didn't want to bother finding out why, so I reinstalled another time from the 13.0 image, and between that, and distracting myself with widelands while waiting for the downloads and uploads and reboots etc., well, all of the afternoon and the best part of the evening disappeared. So, in a hurry, between the evening and the next morning I prepared a nice healthy lunch, full of all the important nutrients such as sugar, salt, mercury and arsenic. Tuna (mercury) soboro (sugar and salt) on rice, and since I was in a hurry I didn't prepare any vegetables, but used pickles (more salt) and shio kombu (arsenic and various heavy metals, sugar and salt). Plus a green tea mochi for dessert, in case we felt low on sugar. :D Then on the day of the walk we woke up a bit later than usual, and then my body decided it was a good day for my belly to not exactly hurt, but not not-hurt either, and there I took an executive decision to wear a corset, because if something feels like it wants to burst open, wrapping it in a steel-reinforced cage will make it stop. (I'm not joking. It does. At least in those specific circumstances.) This was followed by hurrying through the things I had to do before leaving the house, having a brief anxiety attack and feeling feverish (it wasn't fever), and finally being able to leave the house just half an hour late. A stream running on rocks with the woods to both sides. And then, 10 minutes after we had left, I realized that I had written down the password for the train website, since it was no longer saved on the phone, but had forgotten the bit of paper at home. We could have gone back to take it, but decided not to bother, as we could also hopefully buy paper-ish tickets at the train station (we could). Later on, I also realized I had forgotten my GPS tracker, so I have no record of where we went exactly (but it's not hard to recognize it on a map) nor of what the temperature was. It's a shame, but by that point it was way too late to go back. Anyway, that probably was when Murphy felt we had paid our respects, and from then on everything went lovingly well! Routing had been done on the OpenStreetMap website, with OSRM, and it looked pretty easy to follow, but we also had access to an Android phone, so we used OSMAnd to check that we were still on track. It tried to lead us to the Statale (i.e.
the most important and most trafficked road) a few times, but we ignored it, and after a few turns and a few changes of the precise destination point we managed to get it to cooperate. At one point a helpful person asked us if we needed help, having seen us looking at the phone, and gave us directions for the next fork (that way to Cuasso al Piano, that way to Porto Ceresio), but it was pretty easy, since the way was clearly marked also for cars. Then we started to notice red and white markings on poles and other places, and at the next fork there was a signpost for hiking routes with our destination, and we decided to follow it instead of the sign for cars. I knew that from our starting point to our destination there was also a hiking route, uphill both ways :) , through the hills, about 5 or 6 hours instead of two, but the sign was pointing downhill and we were past the point where we would expect too long of a detour. A wide and flat unpaved track passing through a flat grassy area with trees to the sides and rolling hills in the background. And indeed, after a short while the paved road ended, but the path continued on a wide and flat track, and was a welcome detour through what looked like water works to prevent flood damage from a stream. In a warmer season, with longer grass and ticks, the fact that I was wearing a long skirt might have been an issue, but in winter it was just fine. And soon afterwards, we were in Porto Ceresio. I think I have been there as a child, but I had no memory of it. On the other hand, it was about as I expected: a tiny town with a lakeside street full of houses built in the early 1900s, when the area was an important tourism destination, with older buildings a bit higher up on the hills (because streams in this area will flood). And of course, getting there by foot rather than by train we also saw the parts where real people live (but not work: that's cross-border commuter country). Dried winter grass with two strips of frost, exactly under the shade of a fence. Soon after arriving in Porto Ceresio we stopped to eat our lunch on a bench at the lakeside; up to then we had been pretty comfortable in the clothing we had decided to wear: there was plenty of frost on the ground, in the shade, but the sun was warm and the temperatures were cleanly above freezing. Removing the gloves to eat, however, resulted in quite cold hands, and we didn't want to stay still for longer than strictly necessary. So we spent another hour and a bit walking around Porto Ceresio like proper tourists and taking pictures. There was an exhibition of nativity scenes all around the streets, but to get a map one had to go to either facebook or instagram, or wait for the opening hours of an office that were later than the train we planned to get to go back home, so we only saw maybe half of them as we walked around: some were quite nice, some were nativity scenes, and some showed that the school children must have had some fun making them. Three gnome-adjacent creatures made of branches of evergreen trees, with a pointy hat made of moss, big white moustaches and red Christmas tree balls at the end of the hat and as a nose. They are wrapped in LED strings, and the lake can be seen in the background.
Another Christmas decoration were groups of creatures made of evergreen branches that dotted the sidewalks around the lake: I took pictures of the first couple of groups, and then after seeing a few more something clicked in my brain, and I noticed that they were wrapped in green LED strings, like chains, and they had a red ball that was supposed to be the nose, but could just be around the mouth area, and suddenly I felt the need to play a certain chord to release them, but sadly I didn't have a weaponized guitar on me :D A bench in the shape of an open book, half of the pages folded in a reversed U to make the seat and half of the pages standing straight to form the backrest. It has the title page and beginning of the Constitution of the Italian Republic. Another thing that we noticed were some benches in the shape of books, with book quotations on them; most were on reading-related topics, but the one with the Constitution felt worth taking a picture of, especially these days. And then, our train was waiting at the station, and we had to go back home for the afternoon; it was a nice outing, if a bit brief, and we agreed to do it again, possibly with a bit of a detour to make the walk a bit longer. And then maybe one day we'll train to do the whole 5-6 hour thing through the hills.

3 January 2026

Benjamin Mako Hill: Effects of Algorithmic Flagging on Fairness: Quasi-experimental Evidence from Wikipedia

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some older published projects. This particular post is closely based on a previously published post by Nate TeBlunthuis from the Community Data Science Blog. Many online platforms are adopting AI and machine learning as a tool to maintain order and high-quality information in the face of massive influxes of user-generated content. Of course, AI algorithms can be inaccurate, biased, or unfair. How do signals from AI predictions shape the fairness of online content moderation? How can we measure an algorithmic flagging system's effects? In our paper published at CSCW, Nate TeBlunthuis, together with myself and Aaron Halfaker, analyzed the RCFilters system: an add-on to Wikipedia that highlights and filters edits that a machine learning algorithm called ORES identifies as likely to be damaging to Wikipedia. This system has been deployed on large Wikipedia language editions and is similar to other algorithmic flagging systems that are becoming increasingly widespread. Our work measures the causal effect of being flagged in the RCFilters user interface.

Screenshot of Wikipedia edit metadata on Special:RecentChanges with RCFilters enabled. Highlighted edits with a colored circle to the left side of other metadata are flagged by ORES. Different circle and highlight colors (white, yellow, orange, and red in the figure) correspond to different levels of confidence that the edit is damaging. RCFilters does not specifically flag edits by new accounts or unregistered editors, but does support filtering changes by editor types.
Our work takes advantage of the fact that RCFilters, like many algorithmic flagging systems, creates discontinuities in the relationship between the probability that a moderator should take action and whether a moderator actually does. This happens because the output of machine learning systems like ORES is typically a continuous score (in RCFilters, an estimated probability that a Wikipedia edit is damaging), while the flags (in RCFilters, the yellow, orange, or red highlights) are either on or off and are triggered when the score crosses some arbitrary threshold. As a result, edits slightly above the threshold are both more visible to moderators and appear more likely to be damaging than edits slightly below. Even though edits on either side of the threshold have virtually the same likelihood of truly being damaging, the flagged edits are substantially more likely to be reverted. This fact lets us use a method called regression discontinuity to make causal estimates of the effect of being flagged in RCFilters.
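To make the idea concrete, here is a toy illustration (not the paper's estimator, model, or data) of how a regression discontinuity estimate can be read off: fit a regression with separate trends on either side of the threshold plus a flag indicator, and the indicator's coefficient is the estimated jump at the cutoff.

import numpy as np
import statsmodels.api as sm

# Toy regression discontinuity sketch with hypothetical data: edits at or
# above the threshold get a flag, which adds a jump in revert probability on
# top of an otherwise smooth trend in the underlying score.
rng = np.random.default_rng(0)
threshold = 0.5
score = rng.uniform(0, 1, 5000)                    # ORES-like damage score
flagged = (score >= threshold).astype(float)
p_revert = 0.1 + 0.4 * score + 0.15 * flagged      # smooth trend + 0.15 jump
reverted = rng.binomial(1, p_revert)

# Linear fit with a flag dummy and separate slopes on each side of the
# cutoff; the coefficient on the dummy estimates the jump at the threshold.
centered = score - threshold
X = sm.add_constant(np.column_stack([flagged, centered, centered * flagged]))
fit = sm.OLS(reverted, X).fit()
print(f"estimated jump at the threshold: {fit.params[1]:.3f}")  # approx. 0.15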
Charts showing the probability that an edit will be reverted as a function of ORES scores in the neighborhood of the discontinuous threshold that triggers the RCFilters flag. The jump in reversion chances is larger for registered editors than for unregistered editors at both thresholds.
To understand how this system may affect the fairness of Wikipedia moderation, we estimate the effects of flagging on edits by different groups of editors. Comparing the magnitude of these estimates lets us measure how flagging is associated with several different definitions of fairness. Surprisingly, we found evidence that these flags improved fairness for categories of editors that have been widely perceived as troublesome, particularly unregistered (anonymous) editors. This occurred because flagging has a much stronger effect on edits by the registered than on edits by the unregistered. We believe that our results are driven by the fact that algorithmic flags are especially helpful for finding damage that can't be easily detected otherwise. Wikipedia moderators can see the editor's registration status in the recent changes, watchlists, and edit history. Because unregistered editors are often troublesome, Wikipedia moderators' attention is often focused on their contributions, with or without algorithmic flags. Algorithmic flags make damage by registered editors (in addition to unregistered editors) much more detectable to moderators and so help moderators focus on damage overall, not just damage by suspicious editors. As a result, the algorithmic flagging system decreases the bias that moderators have against unregistered editors. This finding is particularly surprising because the ORES algorithm we analyzed was itself demonstrably biased against unregistered editors (i.e., the algorithm tended to greatly overestimate the probability that edits by these editors were damaging). Despite the fact that the algorithms were biased, their introduction could still lead to less biased outcomes overall. Our work shows that although it is important to design predictive algorithms to avoid such biases, it is equally important to study fairness at the level of the broader sociotechnical system. Since we first published a preprint of our paper, a follow-up piece by Leijie Wang and Haiyi Zhu replicated much of our work and showed that differences between different Wikipedia communities may be another important factor driving the effect of the system. Overall, this work suggests that social signals and social context can interact with algorithmic signals, and together these can influence behavior in important and unexpected ways.

The full citation for the paper is: TeBlunthuis, Nathan, Benjamin Mako Hill, and Aaron Halfaker. 2021. Effects of Algorithmic Flagging on Fairness: Quasi-Experimental Evidence from Wikipedia. Proceedings of the ACM on Human-Computer Interaction 5 (CSCW): 56:1-56:27. https://doi.org/10.1145/3449130.

We have also released replication materials for the paper, including all the data and code used to conduct the analysis and compile the paper itself.

Russ Allbery: Review: Challenges of the Deeps

Review: Challenges of the Deeps, by Ryk E. Spoor
Series: Arenaverse #3
Publisher: Baen
Copyright: March 2017
ISBN: 1-62579-564-5
Format: Kindle
Pages: 438
Challenges of the Deeps is the third book in the throwback space opera Arenaverse series. It is a direct sequel to Spheres of Influence, but Spoor provides a substantial recap of the previous volumes for those who did not read the series in close succession (thank you!). Ariane has stabilized humanity's position in the Arena with yet another improbable victory. (If this is a spoiler for previous volumes, so was telling you the genre of the book.) Now is a good opportunity to fulfill the promise humanity made to their ally Orphan: accompaniment on a journey into the uncharted deeps of the Arena for reasons that Orphan refuses to explain in advance. Her experienced crew provide multiple options to serve as acting Leader of Humanity until she gets back. What can go wrong? The conceit of this series is that as soon as a species achieves warp drive technology, their ships are instead transported into the vast extradimensional structure of the Arena where a godlike entity controls the laws of nature and enforces a formal conflict resolution process that looks alternatingly like a sporting event, a dueling code, and technology-capped total war. Each inhabitable system in the real universe seems to correspond to an Arena sphere, but the space between them is breathable atmosphere filled with often-massive storms. In other words, this is an airship adventure as written by E.E. "Doc" Smith. Sort of. There is an adventure, and there are a lot of airships (although they fight mostly like spaceships), but much of the action involves tense mental and physical sparring with a previously unknown Arena power with unclear motives. My general experience with this series is that I find the Arena concept fascinating and want to read more about it, Spoor finds his much-less-original Hyperion Project in the backstory of the characters more fascinating and wants to write about that, and we reach a sort of indirect, grumbling (on my part) truce where I eagerly wait for more revelations about the Arena and roll my eyes at the Hyperion stuff. Talking about Hyperion in detail is probably a spoiler for at least the first book, but I will say that it's an excuse to embed versions of literary characters into the story and works about as well as most such excuses (not very). The characters in question are an E.E. "Doc" Smith mash-up, a Monkey King mash-up, and a number of other characters that are obviously references to something but for whom I lack enough hints to place (which is frustrating). Thankfully we get far less human politics and a decent amount of Arena world-building in this installment. Hyperion plays a role, but mostly as foreshadowing for the next volume and the cause of a surprising interaction with Arena rules. One of the interesting wrinkles of this series is that humanity have an odd edge against the other civilizations in part because we're borderline insane sociopaths from the perspective of the established powers. That's an old science fiction trope, but I prefer it to the Campbell-style belief in inherent human superiority. Old science fiction tropes are what you need to be in the mood for to enjoy this series. This is an unapologetic and intentional throwback to early pulp: individuals who can be trusted with the entire future of humanity because they're just that moral, super-science, psychic warfare, and even coruscating beams that would make E.E. "Doc" Smith proud. 
It's an occasionally glorious but mostly silly pile of technobabble, but Spoor takes advantage of the weird, constructed nature of the Arena to provide more complex rules than competitive superlatives. The trick is that while this is certainly science fiction pulp, it's also a sort of isekai novel. There's a lot of anime and manga influence just beneath the surface. I'm not sure why it never occurred to me before reading this series that melodramatic anime and old SF pulps have substantial aesthetic overlap, but of course they do. I loved the Star Blazers translated anime that I watched as a kid precisely because it had the sort of dramatic set pieces that make the Lensman novels so much fun. There is a bit too much Wu Kong in this book for me (although the character is growing on me a little), and some of the maneuvering around the mysterious new Arena actor drags on longer than was ideal, but the climax is great stuff if you're in the mood for dramatic pulp adventure. The politics do not bear close examination and the writing is serviceable at best, but something about this series is just fun. I liked this book much better than Spheres of Influence, although I wish Spoor would stop being so coy about the nature of the Arena and give us more substantial revelations. I'm also now tempted to re-read Lensman, which is probably a horrible idea. (Spoor leaves the sexism out of his modern pulp.) If you got through Spheres of Influence with your curiosity about the Arena intact, consider this one when you're in the mood for modern pulp, although don't expect any huge revelations. It's not the best-written book, but it sits squarely in the center of a genre and mood that's otherwise a bit hard to find. Followed by the Kickstarter-funded Shadows of Hyperion, which sadly looks like it's going to concentrate on the Hyperion Project again. I will probably pick that up... eventually. Rating: 6 out of 10

1 January 2026

Russ Allbery: Review: This Brutal Moon

Review: This Brutal Moon, by Bethany Jacobs
Series: Kindom Trilogy #3
Publisher: Orbit
Copyright: December 2025
ISBN: 0-316-46373-6
Format: Kindle
Pages: 497
This Brutal Moon is a science fiction thriller with bits of cyberpunk and space opera. It concludes the trilogy begun with These Burning Stars. The three books tell one story in three volumes, and ideally you would read all three in close succession. There is a massive twist in the first book that I am still not trying to spoil, so please forgive some vague description. At the conclusion of These Burning Stars, Jacobs had moved a lot of pieces into position, but it was not yet clear to me where the plot was going, or even if it would come to a solid ending in three volumes as promised by the series title. It does. This Brutal Moon opens with some of the political maneuvering that characterized These Burning Stars, but once things start happening, the reader gets all of the action they could wish for and then some. I am pleased to report that, at least as far as I'm concerned, Jacobs nails the ending. Not only is it deeply satisfying, the characterization in this book is so good, and adds so smoothly to the characterization of the previous books, that I saw the whole series in a new light. I thought this was one of the best science fiction series finales I've ever read. Take that with a grain of salt, since some of those reasons are specific to me and the mood I was in when I read it, but this is fantastic stuff. There is a lot of action at the climax of this book, split across at least four vantage points and linked in a grand strategy with chaotic surprises. I kept all of the pieces straight and understood how they were linked thanks to Jacobs's clear narration, which is impressive given the number of pieces in motion. That's not the heart of this book, though. The action climax is payoff for the readers who want to see some ass-kicking, and it does contain some moving and memorable moments, but it relies on some questionable villain behavior and a convenient plot device introduced only in this volume. The action-thriller payoff is competent but not, I think, outstanding. What put this book into a category of its own were the characters, and specifically how Jacobs assembles sweeping political consequences from characters who, each alone, would never have brought about such a thing, and in some cases had little desire for it. Looking back on the trilogy, I think Jacobs has captured, among all of the violence and action-movie combat and space-opera politics, the understanding that political upheaval is a relay race. The people who have the personalities to start it don't have the personality required to nurture it or supply it, and those who can end it are yet again different. This series is a fascinating catalog of political actors (the instigator, the idealist, the pragmatist, the soldier, the one who supports her friends, and several varieties and intensities of leaders) and it respects all of them without anointing any of them as the One True Revolutionary. The characters are larger than life, yes, and this series isn't going to win awards for gritty realism, but it's saying something satisfyingly complex about where we find courage and how a cause is pushed forward by different people with different skills and emotions at different points in time. Sometimes accidentally, and often in entirely unexpected ways. As before, the main story is interwoven with flashbacks. This time, we finally see the full story of the destruction of the moon of Jeve. 
The reader has known about this since the first volume, but Jacobs has a few more secrets to show (including, I will admit, setting up a plot device) and some pointed commentary on resource extraction economies. I think this part of the book was a bit obviously constructed, although the characterization was great and the visible junction points of the plot didn't stop me from enjoying the thrill when the pieces came together. But the best part of this book was the fact there was 10% of it left after the climax. Jacobs wrote an actual denouement, and it was everything I wanted and then some. We get proper story conclusions for each of the characters, several powerful emotional gut punches, some remarkably subtle and thoughtful discussion of political construction for a series that tended more towards space-opera action, and a conclusion for the primary series relationship that may not be to every reader's taste but was utterly, perfectly, beautifully correct for mine. I spent a whole lot of the last fifty pages of this book trying not to cry, in the best way. The character evolution over the course of this series is simply superb. Each character ages like fine wine, developing more depth, more nuance, but without merging. They become more themselves, which is an impressive feat across at least four very different major characters. You can see the vulnerabilities and know what put them there, you can see the strengths they developed to compensate, and you can see why they need the support the other characters provide. And each of them is so delightfully different. This was so good. This was so precisely the type of story that I was in the mood for, with just the type of tenderness for its characters that I wanted, that I am certain I am not objective about it. It will be one of those books where other people will complain about flaws that I didn't see or didn't care about because it was doing the things I wanted from it so perfectly. It's so good that it elevated the entire trilogy; the journey was so worth the ending. I'm afraid this review will be less than helpful because it's mostly nonspecific raving. This series is such a spoiler minefield that I'd need a full spoiler review to be specific, but my reaction is so driven by emotion that I'm not sure that would help if the characters didn't strike you the way that they struck me. I think the best advice I can offer is to say that if you liked the emotional tone of the end of These Burning Stars (not the big plot twist, the character reaction to the political goal that you learn drove the plot), stick with the series, because that's a sign of the questions Jacobs is asking. If you didn't like the characters at the end (not the middle) of the first novel, bail out, because you're going to get a lot more of that. Highly, highly recommended, and the best thing I've read all year, with the caveats that you should read the content notes, and that some people are going to bounce off this series because it's too intense and melodramatic. That intensity will not let up, so if that's not what you're in the mood for, wait on this trilogy until you are. Content notes: Graphic violence, torture, mentions of off-screen child sexual assault, a graphic corpse, and a whole lot of trauma. One somewhat grumbly postscript: This is the sort of book where I need to not read other people's reviews because I'll get too defensive of it (it's just a book I liked!). 
But there is one bit of review commentary I've seen about the trilogy that annoys me enough I have to mention it. Other reviewers seem to be latching on to the Jeveni (an ethnic group in the trilogy) as Space Jews and then having various feelings about that. I can see some parallels, I'm not going to say that it's completely wrong, but I also beg people to read about a fictional oppressed ethnic and religious minority and not immediately think "oh, they must be stand-ins for Jews." That's kind of weird? And people from the US, in particular, perhaps should not read a story about an ethnic group enslaved due to their productive skill and economic value and think "they must be analogous to Jews, there are no other possible parallels here." There are a lot of other comparisons that can be made, including to the commonalities between the methods many different oppressed minorities have used to survive and preserve their culture. Rating: 10 out of 10

Daniel Baumann: Pǔ'ěr chá storage (in Western Europe)

The majority of Pǔ'ěr chá (普洱茶) is stored in Asia (predominantly China, some in Malaysia and Taiwan) because of its climatic conditions:
  • warm (~30°C)
  • humid (~75% RH)
  • stable (comparatively small day to day, day to night, and seasonal differences)
These are ideal for ageing and allow efficient storage in simple storehouses, usually without the need for any air conditioning. The climate in Western European countries is significantly colder, drier, and more variable. However, residential houses offer a warm (~20°C), humid (~50% RH), and stable baseline throughout the year to start with. High quality, long term storage and ageing is possible here too, with some of the Asian procedures slightly adjusted for the local conditions. Nevertheless, fast accelerated ageing still doesn't work here (even with massive climate control). Personally I prefer balanced, natural storage over the classical 'wet' or 'dry', or the mixed 'traditional' storage (of course all storage types are not that meaningful, as they are all relative terms anyway). Also, I don't like strong fermentation either (Pǔ'ěr is not my favourite tea, I only drink it in the 3 months of winter). Therefore, my intention is primarily to preserve the tea while it continues to age normally, to keep it in optimal drinking condition, and not to rush its fermentation. Correct storage is of great importance and has a big effect on Pǔ'ěr, yet it is often overlooked and underrated. Here's a short summary on how to store Pǔ'ěr chá (in Western Europe).
Pǔ'ěr chá storage (in Western Europe) Image: some Dàyì Pǔ'ěr chá stored at home
1. Location
1.1 No light Choose a location with neither direct nor indirect sunlight (= ultraviolet radiation) exposure:
  • direct sunlight damages organic material ('bleaching')
  • indirect sunlight heats up the containers and unbalances the microclimate ('suppression')
1.2 No odors Choose a location with neither direct nor indirect odor exposure:
  • direct odor immissions (like incense, food, air pollution, etc.) make the tea undrinkable
  • indirect odor immissions (especially other teas stored next to it) taint the taste and dilute the flavours
Always use individual containers for each tea, and store only identical tea of the same year and batch in the same container. Ideally never store one cake, brick or tuo on its own; buy multiples and store them together for better ageing.
2. Climate
2.1 Consistent temperature Use a location with no interference from any devices or structures next to it:
  • use a regular indoor location for mild temperature curves (i.e. no attics with large day/night temperature differences)
  • aim for >= 20°C average temperature for natural storage (i.e. no cold basements)
  • don't use a place next to any heat-dissipating devices (like radiators, computers, etc.)
  • don't use a place next to an outside-facing wall
  • always leave 5 to 10cm distance to a wall for air to circulate (generally prevents mold on the wall but also isolates the containers further from possible immissions)
A temperature as consistent as possible allows even and steady fermentation. However, neither air conditioning nor air filtering is needed. Regular day-to-day, day-to-night, and season-to-season fluctuations are balanced out fine with otherwise correct indoor storage. Also, humidity control is much more important for storage and ageing, and much less forgiving, than temperature control.
2.2 Consistent humidity Use humidity control packs to ensure a stable humidity level:
  • aim for ~65% RH
Below 60% RH the tea will (completely) dry out; above 70% RH the chances of mold increase.
3. Equipment
3.1 Proper containers Use containers that are available long term and that are easily stackable, both in form and dimensions as well as load-bearing capacity. They should be inexpensive; otherwise it's most likely a scam (there are people selling snake-oil containers specifically for tea storage). For long term storage and ageing:
  • never use plastic: it leaches chemicals over time (no 'tupperdors', mylar or zip-lock bags, etc.)
  • never use cardboard or bamboo: they let too much air in, absorb too much humidity, and dry too slowly
  • never use wood: it emits odors (no humidors)
  • never use clay: it absorbs all the humidity and dries out the tea (glazed or porcelain urns are acceptable for short term storage)
Always use sealed but not air-tight, cleaned steel cans (i.e. with no oil residues from manufacturing; such as these).
3.2 Proper humidity control Use two-way humidity control packs to either absorb or add moisture depending on the needs:
  • use humidity control pack(s) in every container
  • use one 60g pack (such as these) per 500g of tea
3.3 Proper labels Put labels on the outside of the containers so you do not need to open them to know what is inside and disturb the ageing:
  • write at least manufacturer and name, pressing year, and batch number on the label (such as these)
  • if desired and not already tracked elsewhere, add additional information such as the amount of items, weights, previous storage location/type, date of purchase, vendor, or price, etc.
3.4 Proper monitoring Measuring and checking temperature and humidity regularly prevents storage parameters from turning bad unnoticed:
  • put temperature and humidity measurement sensors (such as these) inside some of the containers (e.g. at least one in every different size/type of container)
  • keep at least one temperature and humidity measurement sensor next to the containers on the outside to monitor the storage location
4. Storage
4.1 Continued maintenance Before initially storing a new tea, let it acclimatize for 2h after unpacking from transport (or longer if the temperature difference between indoors and outdoors is high, i.e. in winter). Then, continuously:
  • once a month briefly air all containers for a minute or two
  • once a month check for mold, just to be safe
  • every 3 to 6 months check all humidity control packs for need of replacement
  • monitor battery life of temperature and humidity measurement sensors
Humidity control packs can be recharged (usually voiding any warranty) by submerging them for 2 days in distilled water and drying them on a paper towel for 4 hours afterwards. Check with a weight scale after recharging; they should regain their original weight (e.g. 60g plus ~2 to 5g for the packaging).
Finally With correct Pǔ'ěr chá storage:
  • begin to drink Shēng Pǔ'ěr (生普洱) after about 10-15 years, or later
  • begin to drink Shóu Pǔ'ěr (熟普洱) after about 3-5 years, or later
Prepare the tea by breaking it into, or breaking off from it, ~5 to 10g pieces about 1 to 3 months ahead of drinking, and consider increasing the humidity to 70% RH during that time.

31 December 2025

Sergio Cipriano: Zero-Code Instrumentation of an Envoy TCP Proxy using eBPF

Zero-Code Instrumentation of an Envoy TCP Proxy using eBPF I recently had to debug an Envoy Network Load Balancer, and the options Envoy provides just weren't enough. We were seeing a small number of HTTP 499 errors caused by latency somewhere in our cloud, but it wasn't clear what the bottleneck was. As a result, each team had to set up additional instrumentation to catch latency spikes and figure out what was going on. My team is responsible for the LBaaS product (Load Balancer as a Service) and, of course, we are the first suspects when this kind of problem appears. Before going for the current solution, I read a lot of Envoy's documentation. It is possible to enable access logs for Envoy, but they don't provide the information required for this kind of debugging. This is an example of the output:
[2025-12-08T20:44:49.918Z] "- - -" 0 - 78 223 1 - "-" "-" "-" "-" "172.18.0.2:8080"
I won't go into detail about the line above, since it's not possible to trace the request using access logs alone. Envoy also has OpenTelemetry tracing, which is perfect for understanding sources of latency. Unfortunately, it is only available for Application Load Balancers. Most of the HTTP 499s were happening every 10 minutes, so we managed to get some of the requests with tcpdump and Wireshark, using HTTP headers to filter the requests. This approach helped us reproduce and track down the problem, but it wasn't a great solution. We clearly needed better tools to catch this kind of issue the next time it happened. Therefore, I decided to try out OpenTelemetry eBPF Instrumentation, also referred to as OBI. I saw the announcement of Grafana Beyla before it was renamed to OBI, but I didn't have the time or a strong reason to try it out until now. Even then, I really liked the idea, and the possibility of using eBPF to solve this instrumentation problem had been in the back of my mind. OBI promises zero-code automatic instrumentation for Linux services using eBPF, so I put together a minimal setup to see how well it works.

Reproducible setup I used the following tools: Setting up a TCP Proxy with Envoy was straightforward:
static_resources:
  listeners:
  - name: go_server_listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8000
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: go_server_tcp
          cluster: go_server_cluster
  clusters:
  - name: go_server_cluster
    connect_timeout: 1s
    type: LOGICAL_DNS
    load_assignment:
      cluster_name: go_server_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: target-backend
                port_value: 8080
This is the simplest Envoy TCP proxy configuration: a listener on port 8000 forwarding traffic to a backend running on port 8080. For the backend, I used a basic Go HTTP server:
package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.Handle("/", http.FileServer(http.Dir(".")))

    server := http.Server{Addr: ":8080"}

    fmt.Println("Starting server on :8080")
    panic(server.ListenAndServe())
}
Finally, I wrapped everything together with Docker Compose:
services:
  autoinstrumenter:
    image: otel/ebpf-instrument:main
    pid: "service:envoy"
    privileged: true
    environment:
      OTEL_EBPF_TRACE_PRINTER: text
      OTEL_EBPF_OPEN_PORT: 8000

  envoy:
    image: envoyproxy/envoy:v1.33-latest
    ports:
      - 8000:8000
    volumes:
      - ./envoy.yaml:/etc/envoy/envoy.yaml 
    depends_on:
      - target-backend
  
  target-backend:
    image: golang:1.22-alpine
    command: go run /app/backend.go
    volumes:
      - ./backend.go:/app/backend.go:ro
    expose:
      - 8080
OBI should output traces similar to this to standard output when an HTTP request is made to Envoy:
2025-12-08 20:44:49.12884449 (305.572µs[305.572µs]) HTTPClient 200 GET /(/) [172.18.0.3 as envoy:36832]->[172.18.0.2 as localhost:8080] contentLen:78B responseLen:0B svc=[envoy generic] traceparent=[00-529458a2be271956134872668dc5ee47-6dba451ec8935e3e[06c7f817e6a5dae2]-01]
2025-12-08 20:44:49.12884449 (1.260901ms[366.65µs]) HTTP 200 GET /(/) [172.18.0.1 as 172.18.0.1:36282]->[172.18.0.3 as envoy:8000] contentLen:78B responseLen:223B svc=[envoy generic] traceparent=[00-529458a2be271956134872668dc5ee47-06c7f817e6a5dae2[0000000000000000]-01]
This is exactly what we needed, with zero code. The above trace shows:
  • 2025-12-08 20:44:49.12884449: time of the trace.
  • (1.260901ms[366.65µs]): total response time for the request, with the actual internal execution time of the request (not counting the request enqueuing time).
  • HTTP 200 GET /: protocol, response code, HTTP method, and URL path.
  • [172.18.0.1 as 172.18.0.1:36282]->[172.18.0.3 as envoy:8000]: source and destination host:port. The initial request originates from my machine through the gateway (172.18.0.1), hits the Envoy (172.18.0.3), and the proxy then forwards it to the backend application (172.18.0.2).
  • contentLen:78B: HTTP Content-Length. I used curl and the default request size for it is 78B.
  • responseLen:223B: Size of the response body.
  • svc=[envoy generic]: traced service.
  • traceparent: ids to trace the parent request. We can see that the Envoy makes a request to the target and this request has the other one as parent.
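As a side note, OBI's text printer shows the parent span id in square brackets inside the traceparent field, which makes it easy to stitch the spans together by hand. Here is a minimal parsing sketch, assuming the version-traceid-spanid[parentid]-flags layout stays exactly as printed above (the field names are my own, not OBI's):

import re

# Assumed layout of OBI's printed traceparent field (not the raw W3C header):
# version-traceid-spanid[parent_spanid]-flags
TRACEPARENT_RE = re.compile(
    r"(?P<version>[0-9a-f]{2})-"
    r"(?P<trace_id>[0-9a-f]{32})-"
    r"(?P<span_id>[0-9a-f]{16})"
    r"\[(?P<parent_id>[0-9a-f]{16})\]-"
    r"(?P<flags>[0-9a-f]{2})"
)

def parse(field: str) -> dict:
    """Split one printed traceparent field into its components."""
    match = TRACEPARENT_RE.fullmatch(field)
    if match is None:
        raise ValueError(f"unexpected traceparent format: {field}")
    return match.groupdict()

# The two spans from the example output above: the client-side span points at
# the server-side span as its parent; an all-zero parent id marks a root span.
server = parse("00-529458a2be271956134872668dc5ee47-06c7f817e6a5dae2[0000000000000000]-01")
client = parse("00-529458a2be271956134872668dc5ee47-6dba451ec8935e3e[06c7f817e6a5dae2]-01")
assert client["parent_id"] == server["span_id"]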
Let's add one more Envoy to show that it's also possible to track multiple services.
  envoy1:
    image: envoyproxy/envoy:v1.33-latest
    ports:
      - 9000:9000
    volumes:
      - ./envoy1.yaml:/etc/envoy/envoy.yaml
    depends_on:
      - envoy
The new Envoy will listen on port 9000 and forward the request to the other Envoy listening on port 8000. Now we just need to change OBI open port variable to look at a range:
OTEL_EBPF_OPEN_PORT: 8000-9000
And change the pid field of the autoinstrumenter service to use the host's PID namespace inside the container:
pid: host
This is the output I got after one curl:
2025-12-09 12:28:05.12912285 (2.202041ms[1.524713ms]) HTTP 200 GET /(/) [172.19.0.1 as 172.19.0.1:59030]->[172.19.0.5 as envoy:9000] contentLen:78B responseLen:223B svc=[envoy generic] traceparent=[00-69977bee0c2964b8fe53cdd16f8a9d19-856c9f700e73bf0d[0000000000000000]-01]
2025-12-09 12:28:05.12912285 (1.389336ms[1.389336ms]) HTTPClient 200 GET /(/) [172.19.0.5 as envoy:59806]->[172.19.0.4 as localhost:8000] contentLen:78B responseLen:0B svc=[envoy generic] traceparent=[00-69977bee0c2964b8fe53cdd16f8a9d19-caa7f1ad1c68fa77[856c9f700e73bf0d]-01]
2025-12-09 12:28:05.12912285 (1.5431ms[848.574µs]) HTTP 200 GET /(/) [172.19.0.5 as 172.19.0.5:59806]->[172.19.0.4 as envoy:8000] contentLen:78B responseLen:223B svc=[envoy generic] traceparent=[00-69977bee0c2964b8fe53cdd16f8a9d19-cbca9d64d3d26b40[caa7f1ad1c68fa77]-01]
2025-12-09 12:28:05.12912285 (690.217µs[690.217µs]) HTTPClient 200 GET /(/) [172.19.0.4 as envoy:34256]->[172.19.0.3 as localhost:8080] contentLen:78B responseLen:0B svc=[envoy generic] traceparent=[00-69977bee0c2964b8fe53cdd16f8a9d19-5502f7760ed77b5b[cbca9d64d3d26b40]-01]
2025-12-09 12:28:05.12912285 (267.9µs[238.737µs]) HTTP 200 GET /(/) [172.19.0.4 as 172.19.0.4:34256]->[172.19.0.3 as backend:8080] contentLen:0B responseLen:0B svc=[backend go] traceparent=[00-69977bee0c2964b8fe53cdd16f8a9d19-ac05c7ebe26f2530[5502f7760ed77b5b]-01]
Each log line represents a span belonging to the same trace (69977bee0c2964b8fe53cdd16f8a9d19). For readability, I ordered the spans by their traceparent relationship, showing the request's path as it moves through the system: from the client-facing Envoy, through the internal Envoy hop, and finally to the Go backend. You can see both server-side (HTTP) and client-side (HTTPClient) spans at each hop, along with per-span latency, source and destination addresses, and response sizes, making it easy to pinpoint where time is spent along the request chain. The log lines are helpful, but we need better ways to visualize the traces and the metrics generated by OBI. I'll share another setup that more closely reflects what we actually use.

Production setup I'll be using the following tools this time: The goal of this setup is to mirror an environment similar to what I used in production. This time, I've omitted the load balancer and shifted the emphasis to observability instead. setup diagram I will run three HTTP servers on port 8080: two inside Incus containers and one on the host machine. The OBI process will export metrics and traces to an OpenTelemetry Collector, which will forward traces to Jaeger and expose a metrics endpoint for Prometheus to scrape. Grafana will also be added to visualize the collected metrics using dashboards. The aim of this approach is to instrument only one of the HTTP servers while ignoring the others. This simulates an environment with hundreds of Incus containers, where the objective is to debug a single container without being overwhelmed by excessive and irrelevant telemetry data from the rest of the system. OBI can filter metrics and traces based on attribute values, but I was not able to filter by process PID. This is where the OpenTelemetry Collector comes into play: it allows me to use a processor to filter telemetry data by the PID of the process being instrumented. These are the steps to reproduce this setup:
  1. Create the incus containers.
$ incus launch images:debian/trixie server01
Launching server01
$ incus launch images:debian/trixie server02
Launching server02
  2. Start the HTTP server on each container.
$ apt install python3 --update -y
$ tee /etc/systemd/system/server.service > /dev/null <<'EOF'
[Unit]
Description=Python HTTP server
After=network.target
[Service]
User=root
Group=root
Type=simple
ExecStart=/usr/bin/python3 -m http.server 8080
Restart=always
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
$ systemctl start server.service
  3. Start the HTTP server on the host.
$ python3 -m http.server 8080
  4. Start the Docker compose.
services:
  autoinstrumenter:
    image: otel/ebpf-instrument:main
    pid: host
    privileged: true
    environment:
      OTEL_EBPF_CONFIG_PATH: /etc/obi/obi.yml 
    volumes:
      - ./obi.yml:/etc/obi/obi.yml

  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.98.0
    command: ["--config=/etc/otel-collector-config.yml"]
    volumes:
      - ./otel-collector-config.yml:/etc/otel-collector-config.yml
    ports:
      - "4318:4318" # Otel Receiver
      - "8889:8889" # Prometheus Scrape
    depends_on:
      - autoinstrumenter
      - jaeger
      - prometheus

  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090" # Prometheus UI

  grafana:
    image: grafana/grafana
    restart: always
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=RandomString123!
    volumes:
      - ./grafana-ds.yml:/etc/grafana/provisioning/datasources/datasource.yml
    ports:
      - "3000:3000" # Grafana UI

  jaeger:
    image: jaegertracing/all-in-one
    container_name: jaeger
    ports:
      - "16686:16686" # Jaeger UI
      - "4317:4317"  # Jaeger OTLP/gRPC Collector
Here's what the configuration files look like:
  • obi.yml:
log_level: INFO
trace_printer: text

discovery:
  instrument:
    - open_ports: 8080

otel_metrics_export:
  endpoint: http://otel-collector:4318
otel_traces_export:
  endpoint: http://otel-collector:4318
  • prometheus.yml:
global:
  scrape_interval: 5s

scrape_configs:
  - job_name: 'otel-collector'
    static_configs:
      - targets: ['otel-collector:8889']
  • grafana-ds.yml:
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
  • otel-collector-config.yml:
receivers:
  otlp:
    protocols:
      http:
        endpoint: otel-collector:4318

exporters:
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true

  prometheus:
    endpoint: 0.0.0.0:8889
    namespace: default

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/jaeger]

    metrics:
      receivers: [otlp] 
      exporters: [prometheus]
We're almost there; the OpenTelemetry Collector is just missing a processor. To create the processor filter, we can look at the OBI logs to find the PID of the HTTP server being instrumented:
autoinstrumenter-1    time=2025-12-30T19:57:17.593Z level=INFO msg="instrumenting process" component=discover.traceAttacher cmd=/usr/bin/python3.13 pid=297514 ino=460310 type=python service=""
autoinstrumenter-1    time=2025-12-30T19:57:18.320Z level=INFO msg="instrumenting process" component=discover.traceAttacher cmd=/usr/bin/python3.13 pid=310288 ino=722998 type=python service=""
autoinstrumenter-1    time=2025-12-30T19:57:18.512Z level=INFO msg="instrumenting process" component=discover.traceAttacher cmd=/usr/bin/python3.13 pid=315183 ino=2888480 type=python service=""
Which can also be obtained using standard GNU/Linux utilities:
$ cat /sys/fs/cgroup/lxc.payload.server01/system.slice/server.service/cgroup.procs 
297514
$ cat /sys/fs/cgroup/lxc.payload.server02/system.slice/server.service/cgroup.procs 
310288
$ ps -aux | grep http.server
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
1000000   297514  0.0  0.1  32120 14856 ?        Ss   16:03   0:00 /usr/bin/python3 -m http.server 8080
1000000   310288  0.0  0.1  32120 10616 ?        Ss   16:09   0:00 /usr/bin/python3 -m http.server 8080
cipriano  315183  0.0  0.1 103476 11480 pts/3    S+   16:17   0:00 python -m http.server 8080
If we search for the PID in the OpenTelemetry Collector endpoint where Prometheus metrics are exposed, we can find the attribute values to filter on.
$ curl http://localhost:8889/metrics | rg 297514
default_target_info{host_id="148f400ad3ea",host_name="148f400ad3ea",instance="148f400ad3ea:297514",job="python3.13",os_type="linux",service_instance_id="148f400ad3ea:297514",service_name="python3.13",telemetry_sdk_language="python",telemetry_sdk_name="opentelemetry-ebpf-instrumentation",telemetry_sdk_version="main"} 1
Now we just need to add the processor to the collector configuration:
processors: # <--- NEW BLOCK
  filter/host_id:
    traces:
      span:
        - 'resource.attributes["service.instance.id"] == "148f400ad3ea:297514"'

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [filter/host_id] # <--- NEW LINE
      exporters: [otlp/jaeger]

    metrics:
      receivers: [otlp] 
      processors:  # <--- NEW BLOCK
        - filter/host_id
      exporters: [prometheus]
That's it! The processor will handle the filtering for us, and we'll only see traces and metrics from the HTTP server running in the server01 container. Below are some screenshots from Jaeger and Grafana: jaeger search with all traces, one jaeger trace, grafana request duration panel

Closing Notes I am still amazed at how powerful OBI can be. For those curious about the debugging, we found out that a service responsible for the network orchestration of the Envoy containers was running netplan apply every 10 minutes because of a bug. Netplan apply causes interfaces to go down temporarily, and this made the latency go above 500ms, which caused the 499s.

Ravi Dwivedi: Transit through Kuala Lumpur

In my last post, Badri and I reached Kuala Lumpur - the capital of Malaysia - on the 7th of December 2024. We stayed in Bukit Bintang, the entertainment district of the city. Our accommodation was pre-booked at 'Manor by Mingle', a hostel where I had stayed for a couple of nights in a dormitory room earlier in February 2024. We paid 4937 rupees (the payment was online, so we paid in Indian rupees) for 3 nights for a private room. From the Terminal Bersepadu Selatan (TBS) bus station, we took the metro to the Plaza Rakyat LRT station, which was around 500 meters from the hostel. Upon arriving at the hostel, we presented our passports at their request, followed by a 20 ringgit (400 rupee) deposit which would be refunded once we returned the room keys at checkout.
Outside view of the hostel Manor by Mingle Manor by Mingle - the hostel where we stayed at during our KL transit. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
Our room was upstairs and it had a bunk bed. I had seen bunk beds in dormitories before, but this was my first time seeing a bunk bed in a private room. The room did not have any toilets, so we had to use shared toilets. Unusually, the hostel was equipped with a pool. It also had a washing machine with dryers - this was one of the reasons we chose this hostel, because we were traveling light and hadn't packed too many clothes. The machine and dryer cost 10 ringgits (200 rupees) per use, and we only used it once. The hostel provided complimentary breakfast, which included coffee. Outside of breakfast hours, there was also a paid coffee machine. During our stay, we visited a gurdwara - a place of worship for Sikhs - which was within walking distance from our hostel. The name of the gurdwara was Gurdwara Sahib Mainduab. However, it wasn't as lively as I had thought. The gurdwara was locked from the inside, and we had to knock on the gate and call for someone to open it. A man opened the gate and invited us in. The gurdwara was small, and there was only one other visitor - a man worshipping upstairs. We went upstairs briefly, then settled down on the first floor. We had some conversations with the person downstairs who kindly made chai for us. They mentioned that the langar (community meal) is organized every Friday, which was unlike the gurdwaras I have been to where the langar is served every day. We were there for an hour before we left. We also went to Adyar Ananda Bhavan (a restaurant chain) near our hostel to try the chain in Malaysia. The chain is famous in Southern India and also known by its short name A2B. We ordered
A dosa Dosa served at Adyar Ananda Bhavan. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
All this came down to around 33 ringgits (including taxes), i.e. around 660 rupees. We also purchased some snacks such as murukku from there for our trip. We had planned a day trip to Malacca, but had to cancel it due to rain. We didn't do a lot in Kuala Lumpur, and it ended up acting as a transit point for us to other destinations: flights from Kuala Lumpur were cheaper than from Singapore, and in one case a flight via Kuala Lumpur was even cheaper than a direct flight! We paid 15,000 rupees in total for the following three flights:
  1. Kuala Lumpur to Brunei,
  2. Brunei to Kuala Lumpur, and
  3. Kuala Lumpur to Ho Chi Minh City (Vietnam).
These were all AirAsia flights. The cheap tickets, however, did not include any checked-in luggage, and the cabin luggage weight limit was 7 kg. We also bought quite some stuff in Kuala Lumpur and Singapore, leading to an increase in the weight of our luggage. We estimated that it would be cheaper for us to take only essential items such as clothes, cameras, and laptops, and to leave behind souvenirs and other non-essentials in lockers at the TBS bus stand in Kuala Lumpur, than to pay more for check-in luggage. It would take 140 ringgits for us to add a checked-in bag from Kuala Lumpur to Bandar Seri Begawan and back, while the cost for lockers was 55 ringgits at the rate of 5 ringgits every six hours. We had seen these lockers when we alighted at the bus stand while coming from Johor Bahru. There might have been lockers in the airport itself as well, which would have been more convenient as we were planning to fly back in soon, but we weren't sure about finding lockers at the airport and we didn't want to waste time looking. We had an early morning flight for Brunei on the 10th of December. We checked out from our hostel on the night of the 9th of December, and left for TBS to take a bus to the airport. We took a metro from the nearest metro station to TBS. Upon reaching there, we put our luggage in the lockers. The lockers were automated and there was no staff there to guide us.
Lockers at TBS bus station Lockers at TBS bus station. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
We bought a ticket for the airport bus from a counter at TBS for 26 ringgits for both of us. In order to give us tickets, the person at the counter asked for our passports, and we handed it over to them promptly. Since paying in cash did not provide any extra anonymity, I would advise others to book these buses online. In Malaysia, you also need a boarding pass for buses. The bus terminal had kiosks for getting these printed, but they were broken and we had to go to a counter to obtain them. The boarding pass mentioned our gate number and other details such as our names and departure time of the bus. The company was Jet Bus.
My boarding pass for the bus to the airport in Kuala Lumpur My boarding pass for the bus to the airport in Kuala Lumpur. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
To go to our boarding gate, we had to scan our boarding pass to let the AFC gates open. Then we went downstairs, leading into the waiting area. It had departure boards listing the bus timings and their respective gates. We boarded our bus around 10 minutes before the departure time - 00:00 hours. It departed at its scheduled time and took 45 minutes to reach KL Airport Terminal 2, where we alighted. We reached 6 hours before our flight's departure time of 06:30. We stopped at a convenience store at the airport to have some snacks. Then we weighed our bags at a weighing machine to check whether we were within the weight limit. It turned out that we were. We went to an AirAsia counter to get our boarding passes. The lady at our counter checked our Brunei visas carefully and looked for any Brunei stamps on the passports to verify whether we had used that visa in the past. However, she didn't weigh our bags to check whether they were within the limit, and gave us our boarding passes. We had more than 4 hours to go before our flight. This was the downside of booking an early morning flight - we weren't able to get a full night's sleep. A couple of hours before our flight time, we were hanging around our boarding gate. The place was crowded, so there were no seats available. There were no charging points. There was a Burger King outlet there which had some seating space and charging points. As we were hungry, we ordered two cups of cappuccino coffee (15.9 ringgits) and one large french fries (8.9 ringgits) from Burger King. The total amount was 24 ringgits. When it was time to board the flight, we went to the waiting area for our boarding gates. Soon, we boarded the plane. It took 2.5 hours to reach the Brunei International Airport in the capital city of Bandar Seri Begawan.
View of Kuala Lumpur from the aeroplane View of Kuala Lumpur from the aeroplane. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
Stay tuned for our experiences in Brunei! Credits: Thanks to Badri, Benson and Contrapunctus for reviewing the draft.

Freexian Collaborators: How files are stored by Debusine (by Stefano Rivera)

Debusine is a tool designed for Debian developers and Operating System developers in general. This post describes how Debusine stores and manages files. Debusine has been designed to run a network of workers that can perform various tasks that consume and produce 'artifacts'. An artifact is a collection of files structured into an ontology of artifact types. This generic architecture should be suited to many sorts of build & CI problems. We have implemented artifacts to support building a Debian-like distribution, but the foundations of Debusine aim to be more general than that. For example, a package build task takes a debian:source-package as input and produces some debian:binary-packages and a debian:package-build-log as output. This generalized approach is quite different to traditional Debian APT archive implementations, which typically required having the archive contents on the filesystem. Traditionally, most Debian distribution management tasks happen within bespoke applications that cannot share much common infrastructure.

File Stores Debusine's files themselves are stored by the File Store layer. There can be multiple file stores configured, with different policies. Local storage is useful as the initial destination for uploads to Debusine, but it has to be backed up manually and might not scale to sufficiently large volumes of data. Remote storage such as S3 is also available. It is possible to serve a file from any store, with policies for which one to prefer for downloads and uploads. Administrators can set policies for which file stores to use at the scope level, as well as policies for populating and draining stores of files.

Artifacts As mentioned above, files are collected into Artifacts. They combine:
  • a set of files with names (including potentially parent directories)
  • a category, e.g. debian:source-package
  • key-value data in a schema specified by the category and stored as a JSON-encoded dictionary.
Within the stores, files are content-addressed: a file with a given SHA-256 digest is only stored once in any given store, and may be retrieved by that digest. When a new artifact is created, its files are uploaded to Debusine as needed. Some of the files may already be present in the Debusine instance. In that case, if the file is already part of the artifact's workspace, then the client will not need to re-upload the file. But if not, it must be re-uploaded to avoid users obtaining unauthorized access to existing file contents in another private workspace or multi-tenant scope. Because the content-addressing makes storing duplicates cheap, it's common to have artifacts that overlap files. For example, a debian:upload will contain some of the same files as the related debian:source-package as well as the .changes file. Looking at the debusine.debian.net instance that we run, we can see a content-addressing savings of 629 GiB across our (currently) 2 TiB file store. This is somewhat inflated by the Debian Archive import, which did not need to bother sharing artifacts between suites. But it still shows reasonable real-world savings.
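To illustrate the content-addressing idea, here is a minimal sketch of a digest-keyed store. This is not Debusine's actual storage code; the directory layout, names and paths are made up for the example.

import hashlib
from pathlib import Path

# Minimal content-addressed store sketch: files are keyed by their SHA-256
# digest, so identical content is stored exactly once.
STORE = Path("/tmp/file-store")

def add_file(path: Path) -> str:
    """Add a file to the store and return its digest."""
    data = path.read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    target = STORE / digest[:2] / digest
    if not target.exists():  # storing a duplicate costs nothing extra
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(data)
    return digest

def get_file(digest: str) -> bytes:
    """Retrieve file contents by digest."""
    return (STORE / digest[:2] / digest).read_bytes()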

APT Repository Representation Unlike a traditional Debian APT repository management tool, the source package and binary packages are not stored directly in the pool of an APT repository on disk on the debusine server. Instead we abstract the repository into a debian:suite collection within the Debusine database. The collection contains the artifacts that make up the APT repository. To ensure that it can be safely represented as a valid URL structure (or files on disk), the suite collection maintains an index of the pool filenames of its artifacts. Suite collections can combine into a debian:archive collection that shares a common file pool. Debusine collections can keep an historical record of when things were added and removed. This, combined with the database-backed, collection-driven repository representation, makes it very easy to provide APT-consumable snapshot views to every point in a repository's history.
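For readers unfamiliar with the pool layout, the index kept by a suite collection maps each artifact file to a conventional pool filename. A rough sketch of how such a path is customarily derived in Debian archives (illustrative only, not Debusine's actual implementation):

def pool_path(source: str, filename: str, component: str = "main") -> str:
    """Conventional Debian pool path: pool/<component>/<prefix>/<source>/<file>.

    The prefix is the first letter of the source package name, or its first
    four characters for sources starting with "lib".
    """
    prefix = source[:4] if source.startswith("lib") else source[:1]
    return f"pool/{component}/{prefix}/{source}/{filename}"

print(pool_path("hello", "hello_2.10-3.dsc"))
# pool/main/h/hello/hello_2.10-3.dsc
print(pool_path("libxml2", "libxml2_2.9.14+dfsg-1.3.dsc"))
# pool/main/libx/libxml2/libxml2_2.9.14+dfsg-1.3.dsc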

Expiry While a published distribution probably wants to keep the full history of all its package builds, we don't need to retain all of the output of all QA tasks that were run. Artifacts can have an expiration delay or inherit one from their workspace. Once this delay has expired, artifacts which are not being held in any collection are eligible to be automatically cleaned up. QA work that is done in a workspace that has automatic artifact expiry, and isn't publishing the results to an APT suite, will safely automatically expire.

Daily Vacuum A daily vacuum task handles all of the file periodic maintenance for file stores. It does some cleanup of working areas, a scan for unreferenced & missing files, and enforces file store policies. The policy work could be copying files for backup or moving files between stores to keep them within size limits (e.g. from a local upload store into a general cloud store).

In Conclusion Debusine provides abstractions for low-level file storage and object collections. This allows storage to be scalable beyond a single filesystem and highly available. Using content-addressed storage minimizes data duplication within a Debusine instance. For Debian distributions, storing the archive metadata entirely in a database made providing built-in snapshot support easy in Debusine.

kpcyrd: 2025 wrapped

Same as last year, this is a summary of what I've been up to throughout the year. See also the recap/retrospection published by my friends (antiz, jvoisin, orhun). Thanks to everybody who has been part of my human experience, past or present. Especially those who've been closest.

30 December 2025

Utkarsh Gupta: FOSS Activites in December 2025

Here's my monthly but brief update about the activities I've done in the FOSS world.

Debian
Whilst I didn't get a chance to do much, here are still a few things that I worked on:
  • Prepared security update for wordpress for trixie and bookworm.
  • A few discussions with the new DFSG team, et al.
  • Assisted a few folks in getting their patches submitted via Salsa.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu
I joined Canonical to work on Ubuntu full-time back in February 2021. Whilst I can't give a full, detailed list of things I did, here's a quick TL;DR of what I did:
  • Successfully released Resolute Snapshot 2!
    • This one was also done without the ISO tracker and cdimage access.
    • I think this one went rather smoothly. Let's see what we're able to do for snapshot 3.
  • Worked on removing GPG keys from the cdimage instance. That took a while, whew!
  • Assisted a bunch of folks with my Archive Admin and Release team hats to:
    • review NEW packages for Ubuntu Studio.
    • remove old binaries that are stalling transition and/or migration.
    • LTS requalification of Ubuntu flavours.
    • bootstrapping dotnet-10 packages for Stable Release Updates.
  • With that, we've entered the EOY break. :)
    • I was on vacation for the majority of this month anyway. ;)

Debian (E)LTS
This month I have worked 72 hours on Debian Long Term Support (LTS) and on its sister Extended LTS project and did the following things:

Released Security Updates
  • ruby-git: Multiple vulnerabilities leading to command line injection and improper path escaping.
  • ruby-sidekiq: Multiple vulnerabilities leading to Cross-site Scripting (XSS) and Denial of Service in Web UI.
  • python-apt: Vulnerability leading to crash via invalid nullptr dereference in TagSection.keys().
    • [LTS]: Fixed CVE-2025-6966 via 2.2.1.1 for bullseye. This has been released as DLA 4408-1.
    • [ELTS]: Fixed CVE-2025-6966 via 1.8.4.4 for buster and 1.4.4 for stretch. This has been released as ELA 1596-1.
    • All of this was coordinated b/w the Security team and Julian Andres Klode. Julian will take care of the stable uploads.
  • node-url-parse: Vulnerability allowing authorization bypass through specially crafted URL with empty userinfo and no host.
  • wordpress: Multiple vulnerabilities in WordPress core, leading to Sent Data & Cross-site Scripting.
  • usbmuxd: Privilege escalation vulnerability via path traversal in SavePairRecord command.
    • [LTS]: Fixed CVE-2025-66004 via 1.1.1-2+deb11u1 for bullseye. This has been released as DLA 4417-1.
    • [ELTS]: Fixed CVE-2025-66004 via 1.1.1~git20181007.f838cf6-1+deb10u1 for buster and 1.1.0-2+deb9u1 for stretch. This has been released as ELA 1599-1.
    • All of this was coordinated b/w the Security team and Yves-Alexis Perez. Yves will take care of the stable uploads.
  • gst-plugins-good1.0: Multiple vulnerabilities in isomp4 plugin leading to potential out-of-bounds reads and information disclosure.
  • postgresql-13: Multiple vulnerabilities including unauthorized schema statistics creation and integer overflow in libpq allocation calculations.
  • gst-plugins-base1.0: Multiple vulnerabilities in SubRip subtitle parsing leading to potential crashes and buffer issues.

Work in Progress
  • ceph: Affected by CVE-2024-47866, using the argument x-amz-copy-source to put an object and specifying an empty string as its content leads to the RGW daemon crashing, resulting in a DoS attack.
  • knot-resolver: Affected by CVE-2023-26249, CVE-2023-46317, and CVE-2022-40188, leading to Denial of Service.
  • adminer: Affected by CVE-2023-45195 and CVE-2023-45196, leading to SSRF and DoS, respectively.
  • u-boot: Affected by CVE-2025-24857, a boot code access control flaw in U-Boot allowing arbitrary code execution via physical access.
    • [ELTS]: As it only affects the version in stretch, I've started the work to find the fixing commits and prepare a backport. Not much progress there; I'll roll it over to January.
  • ruby-rack: There were multiple vulnerabilities reported in Rack, leading to DoS (memory exhaustion) and proxy bypass.
    • [ELTS]: After completing the work for LTS myself, Bastien picked it up for ELTS and reached out about an upstream regression, and we've been doing some exchanges. Bastien has done most of the work backporting the patches but needs a review and help backporting CVE-2025-61771.

Other Activities
  • Frontdesk from 01-12-2025 to 07-12-2025.
    • Auto-EOL'd a bunch of packages.
    • Marked CVE-2025-12084/python2.7 as end-of-life for bullseye, buster, and stretch.
    • Marked CVE-2025-12084/jython as end-of-life for bullseye.
    • Marked CVE-2025-13992/chromium as end-of-life for bullseye.
    • Marked apache2 CVEs as postponed for bullseye, buster, and stretch.
    • Marked CVE-2025-13654/duc as postponed for bullseye and buster.
    • Marked CVE-2025-32900/kdeconnect as ignored for bullseye.
    • Marked CVE-2025-12084/pypy3 as postponed for bullseye.
    • Marked CVE-2025-14104/util-linux as postponed for bullseye, buster, and stretch.
    • Marked several CVEs for fastdds as postponed for bullseye.
    • Marked several CVEs for pytorch as postponed for bullseye.
    • Marked CVE-2025-2486/edk2 as postponed for bullseye.
    • Marked CVE-2025-6172{7,9}/golang-1.15 as postponed for bullseye.
    • Marked CVE-2025-65637/golang-logrus as postponed for bullseye.
    • Marked CVE-2025-12385/qtdeclarative-opensource-src{,gles} as postponed for bullseye, buster, and stretch.
    • Marked TEMP-0000000-D08402/rust-maxminddb as postponed for bullseye.
    • Added the following packages to {d,e}la-needed.txt:
      • liblivemedia, sogo.
    • During my triage, I had to make the bin/elts-eol script robust in determining the lts_admin repository; did a back and forth with Emilio about this on the list.
    • I sent a gentle reminder to the LTS team about the issues fixed in bullseye but not in bookworm via mailing list: https://lists.debian.org/debian-lts/2025/12/msg00013.html.
  • I claimed php-horde-css-parser to work on CVE-2020-13756 for buster and did almost all the work only to realize that the patch already existed in buster and the changelog confirmed that it was intentionally fixed.
    • After speaking with Andreas Henriksson, we figured that the CVE ID was missed when the ELA was generated and so I fixed that via 87afaaf19ce56123bc9508d9c6cd5360b18114ef and 5621431e84818b4e650ffdce4c456daec0ee4d51 in the ELTS security tracker to reflect the situation.
  • Participated in a thread which I started last month around using Salsa CI for E/LTS packages and whether we plan to sunset it in favor of Debusine. The plan for now is to keep it around, as it's still beneficial and Debusine is still in its early phase.
  • Did a lot of back and forth with Helmut about debusine uploads on #debian-elts.
    • While debugging a failure in dcut uploads, I ran into an SSH compatibility issue on deb-master.freexian.com that could be fixed on the server-side. I shared all my findings to Freexian s sysadmin team.
    • A minimal fix on the server side would be one of:
      PubkeyAcceptedAlgorithms -ssh-dss
      
      or explicitly restricting to modern algorithms, e.g.:
      PubkeyAcceptedAlgorithms ssh-ed25519,ecdsa-sha2-nistp256,rsa-sha2-512,rsa-sha2-256
      
  • Jelly on #debian-lts reported that all my DLA mails had broken GMail's DKIM signature. So I set up sending replies from @debian.org and that seems to have fixed that! \o/
  • [LTS] Attended a rather short monthly LTS meeting on Jitsi. Summary here.
  • [E/LTS] Monitored discussions on mailing lists, IRC, and all the documentation updates.

Until next time.
:wq for today.

23 December 2025

Daniel Kahn Gillmor: AI and Secure Messaging Don't Mix

AI and Secure Messaging Don't Mix Over on the ACLU's Free Future blog, I just published an article titled AI and Secure Messaging Don't Mix. The blogpost assumes for the sake of the argument that people might actually want to have an AI involved in their personal conversations, and explores why Meta's Private Processing doesn't offer the level of assurance that they want it to offer. In short, the promises of "confidential cloud computing" are built on shaky foundations, especially against adversaries as powerful as Meta themselves. If you really want AI in your chat, the baseline step for privacy preservation is to include it in your local compute base, not to use a network service! But these operators clearly don't value private communication as much as they value binding you to their services. But let's imagine some secure messenger that actually does put message confidentiality first -- and imagine they had integrated some sort of AI capability into the messenger. That at least bypasses the privacy questions around AI use. Would you really want to talk with your friends, as augmented by their local AI, though? Would you want an AI, even one running locally with perfect privacy, intervening in your social connections? What if it summarized your friend's messages to you in a way that led you to misunderstand (or ignore) an important point your friend had made? What if it encouraged you to make an edgy joke that comes across wrong? Or to say something that seriously upsets a friend? How would you respond? How would you even know that it had happened? My handle is dkg. More times than i can count, i've had someone address me in a chat as "dog" and then cringe and apologize and blame their spellchecker/autocorrect. I can laugh these off because the failure mode is so obvious and transparent -- and repeatable. (also, dogs are awesome, so i don't really mind!) But when our attention (and our responses!) are being shaped and molded by these plausibility engines, how will we even know that mistakes are being made? What if the plausibility engine you've hooked into your messenger embeds subtle (or unsubtle!) bias? Don't we owe it to each other to engage with actual human attention?

22 December 2025

Isoken Ibizugbe: Everybody Struggles

That's right: everyone struggles. You could be working on a project only to find a mountain of new things to learn, or your code might keep failing until you start to doubt yourself. I feel like that sometimes, wondering if I'm good enough. But in those moments, I whisper to myself: You don't know it yet; once you do, it will get easy. While contributing to the Debian openQA project, there was so much to learn, from understanding what Debian actually is to learning the various installation methods and image types. I then had to tackle the installation and usage of openQA itself. I am incredibly grateful for the installation guide provided by Roland Clobus and the documentation on writing code for openQA.

Overcoming Technical Hurdles Even with amazing guides, I hit major roadblocks. Initially, I was using Windows with VirtualBox, but openQA couldn't seem to run the tests properly. Despite my mentors (Roland and Phil) suggesting alternatives, the issues persisted. I actually installed openQA twice on VirtualBox and realized that if you miss even one small step in the installation, it becomes very difficult to move forward. Eventually, I took the big step and dual-booted my machine to Linux. Even then, the challenges didn't stop. My openQA Virtual Machine (VM) ran out of allocated space and froze, halting my testing. I reached out on the IRC chat and received the help I needed to get back on track.

My Research Line-up When I'm struggling for information, I follow my go-to first step for research, followed by other alternatives:
  1. Google: This is my first stop. It helped me navigate the switch to a Linux OS and troubleshoot KVM connection issues for the VM. Whether it's an Ubuntu forum or a technical blog, I skim through until I find what can help.
  2. The Upstream Documentation: If Google doesn't have the answer, I go straight to the official openQA documentation. This is a goldmine. It explains functions, how to use them, and lists usable test variables.
  3. The Debian openQA UI: While working on the apps_startstop tests, I looked at previous similar tests on openqa.debian.net/tests. I checked the Settings tab to see exactly what variables were used and how the test was configured.
  4. Salsa (Debian's GitLab): I sometimes reference the Salsa Debian openQA README and the developer guides: Getting started, and the Developer docs on how to write tests.
I also had to learn the basics of the Perl programming language during the four-week contribution stage. While we don't need to be Perl experts, I found it essential to understand the logic so I can explain my work to others. I've spent a lot of time studying the codebase, which is time-consuming but incredibly valuable. For example, my apps_startstop test command originally used a long list of applications via ENTRYPOINT. I began to wonder if there was a more efficient way. With Roland's guidance, I explored the main.pm file. This helped me understand how the apps_startstop function works and how it calls variables. I also noticed there are utility functions that are called in tests. I check those too and try to understand what they do, so I know whether I need them or not. I know I still have a lot to learn, and yes, the doubt still creeps in sometimes. But I am encouraged by the support of my mentors and the fact that they believe in my ability to contribute to this project.
If you're struggling too, just remember: you don't know it yet; once you do, it will get easy.
