Search Results: "pm"

10 October 2025

Reproducible Builds: Reproducible Builds in September 2025

Welcome to the September 2025 report from the Reproducible Builds project! Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website. In this report:

  1. Reproducible Builds Summit 2025
  2. Can't we have nice things?
  3. Distribution work
  4. Tool development
  5. Reproducibility testing framework
  6. Upstream patches

Reproducible Builds Summit 2025 Please join us at the upcoming Reproducible Builds Summit, set to take place from October 28th to 30th 2025 in Vienna, Austria! We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving. If you're interested in joining us this year, please make sure to read the event page, which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!

Can't we have nice things? Debian Developer Gunnar Wolf blogged about George V. Neville-Neil's Kode Vicious column in Communications of the ACM, in which reproducible builds is mentioned without needing to introduce it (assuming familiarity across the computing industry and academia). Titled Can't we have nice things?, the article mentions:
Once the proper measurement points are known, we want to constrain the system such that what it does is simple enough to understand and easy to repeat. It is quite telling that the push for software that enables reproducible builds only really took off after an embarrassing widespread security issue ended up affecting the entire Internet. That there had already been 50 years of software development before anyone thought that introducing a few constraints might be a good idea is, well, let's just say it generates many emotions, none of them happy, fuzzy ones. [ ]

Distribution work In Debian this month, Johannes Starosta filed a bug against the debian-repro-status package, reporting that it does not work on Debian trixie. (An upstream bug report was also filed.) Furthermore, 17 reviews of Debian packages were added, 10 were updated and 14 were removed this month, adding to our knowledge about identified issues. In March's report, we included the news that Fedora would aim for 99% package reproducibility. This change has now been deferred to Fedora 44, according to Phoronix. Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.

Tool development diffoscope version 306 was uploaded to Debian unstable by Chris Lamb. It included contributions already covered in previous months as well as some changes by Zbigniew Jędrzejewski-Szmek to address issues with the fdtdump support [ ] and to move away from the deprecated codes.open method. [ ][ ] strip-nondeterminism version 1.15.0-1 was uploaded to Debian unstable by Chris Lamb. It included a contribution by Matwey Kornilov to add support for inline archive files for Erlang's escript [ ]. kpcyrd has released a new version of rebuilderd. As a quick recap, rebuilderd is an automatic build scheduler that tracks binary packages available in a Linux distribution and attempts to compile the official binary packages from their (purported) source code and dependencies. The code for in-toto attestations has been reworked, and instances now feature a new endpoint that can be queried to fetch the list of public keys an instance currently identifies itself by. [ ] Lastly, Holger Levsen bumped the Standards-Version field of disorderfs, with no changes needed. [ ][ ]

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In August, however, a number of changes were made by Holger Levsen, including:
  • Setting up six new rebuilderd workers with 16 cores and 16 GB RAM each.
  • reproduce.debian.net-related:
    • Do not expose pending jobs; they are confusing without explanation. [ ]
    • Add a link to v1 API specification. [ ]
    • Drop rebuilderd-worker.conf on a node. [ ]
    • Allow manual scheduling for any architectures. [ ]
    • Update path to trixie graphs. [ ]
    • Use the same rebuilder-debian.sh script for all hosts. [ ]
    • Add all other suites to all other archs. [ ][ ][ ][ ]
    • Update SSH host keys for new hosts. [ ]
    • Move to the pull184 branch. [ ][ ][ ][ ][ ]
    • Only allow 20 GB cache for workers. [ ]
  • OpenWrt-related:
    • Grant developer aparcar full sudo control on the ionos30 node. [ ][ ]
  • Jenkins nodes:
    • Add a number of new nodes. [ ][ ][ ][ ][ ]
    • Don't expect /srv/workspace to exist on OSUOSL nodes. [ ]
    • Stop hardcoding IP addresses in munin.conf. [ ]
    • Add maintenance and health check jobs for new nodes. [ ]
    • Document slight changes in IONOS resources usage. [ ]
  • Misc:
    • Drop disabled Alpine Linux tests for good. [ ]
    • Move Debian live builds and some other Debian builds to the ionos10 node. [ ]
    • Cleanup some legacy support from releases before Debian trixie. [ ]
In addition, Jochen Sprickerhof made the following changes relating to reproduce.debian.net:
  • Do not expose pending jobs on the main site. [ ]
  • Switch the frontpage to reference Debian forky [ ], but do not attempt to build Debian forky on the armel architecture [ ].
  • Use consistent and up to date rebuilder-debian.sh script. [ ]
  • Fix supported worker architectures. [ ]
  • Add a basic excuses page. [ ]
  • Move to the pull184 branch. [ ][ ][ ][ ]
  • Fix a typo in the JavaScript. [ ]
  • Update front page for the new v1 API. [ ][ ]
Lastly, Roland Clobus did some maintenance relating to the reproducibility testing of the Debian Live images. [ ][ ][ ][ ]

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Sergio Cipriano: Avoiding 5XX errors by adjusting Load Balancer Idle Timeout

Avoiding 5XX errors by adjusting Load Balancer Idle Timeout Recently I faced a problem in production where a client was running a RabbitMQ server behind the Load Balancers we provisioned, and the TCP connections were closed every minute. My team is responsible for the LBaaS (Load Balancer as a Service) product, and this Load Balancer was an Envoy proxy provisioned by our control plane. The error was similar to this:
[2025-10-03 12:37:17,525 - pika.adapters.utils.connection_workflow - ERROR] AMQPConnector - reporting failure: AMQPConnectorSocketConnectError: timeout("TCP connection attempt timed out: ''/(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('<IP>', 5672))")
[2025-10-03 12:37:17,526 - pika.adapters.utils.connection_workflow - ERROR] AMQP connection workflow failed: AMQPConnectionWorkflowFailed: 1 exceptions in all; last exception - AMQPConnectorSocketConnectError: timeout("TCP connection attempt timed out: ''/(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('<IP>', 5672))"); first exception - None.
[2025-10-03 12:37:17,526 - pika.adapters.utils.connection_workflow - ERROR] AMQPConnectionWorkflow - reporting failure: AMQPConnectionWorkflowFailed: 1 exceptions in all; last exception - AMQPConnectorSocketConnectError: timeout("TCP connection attempt timed out: ''/(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('<IP>', 5672))"); first exception - None
At first glance, the issue is simple: the Load Balancer's idle timeout is shorter than the RabbitMQ heartbeat interval. The idle timeout is the time after which a downstream or upstream connection is terminated if there are no active streams. Heartbeats generate periodic network traffic to prevent idle TCP connections from closing prematurely. Adjusting these timeout settings to align properly solved the issue. However, what I want to explore in this post are other similar scenarios where it's not so obvious that the idle timeout is the problem. Introducing an extra network layer, such as an Envoy proxy, can introduce unpredictable behavior across your services, like intermittent 5XX errors. To make this issue more concrete, let's first sketch the client-side fix for the RabbitMQ case, and then look at a minimal, reproducible setup that demonstrates how adding an Envoy proxy can lead to sporadic errors.
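For the RabbitMQ case, the client-side version of that alignment is to pin the heartbeat interval well below the LB idle timeout. Here is a hedged Go sketch using the rabbitmq/amqp091-go client (the incident's actual client was pika; the URL and the 60-second idle timeout are assumptions for illustration):
package main

import (
    "log"
    "time"

    amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
    // Assumption for this sketch: the LB in front of the broker closes
    // connections that stay idle for 60 seconds.
    const lbIdleTimeout = 60 * time.Second

    // Heartbeat at a third of the LB idle timeout, so several heartbeats
    // fit into every idle window and the connection never looks idle.
    conn, err := amqp.DialConfig("amqp://guest:guest@lb.example.com:5672/", amqp.Config{
        Heartbeat: lbIdleTimeout / 3,
    })
    if err != nil {
        log.Fatalf("dial: %v", err)
    }
    defer conn.Close()
    log.Println("connected; heartbeats keep the LB from closing the connection")
}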

Reproducible setup I'll be using the following tools: This setup is based on what Kai Burjack presented in his article. Setting up Envoy with Docker is straightforward:
$ docker run \
    --name envoy --rm \
    --network host \
    -v $(pwd)/envoy.yaml:/etc/envoy/envoy.yaml \
    envoyproxy/envoy:v1.33-latest
I'll be running experiments with two different envoy.yaml configurations: one that uses Envoy's TCP proxy, and another that uses Envoy's HTTP connection manager. Here's the simplest Envoy TCP proxy setup: a listener on port 8000 forwarding traffic to a backend running on port 8080.
static_resources:
  listeners:
  - name: go_server_listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8000
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: go_server_tcp
          cluster: go_server_cluster
  clusters:
  - name: go_server_cluster
    connect_timeout: 1s
    type: static
    load_assignment:
      cluster_name: go_server_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 8080
The default idle timeout if not otherwise specified is 1 hour, which is the case here. The backend setup is simple as well:
package main

import (
    "fmt"
    "net/http"
    "time"
)

func helloHandler(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("Hello from Go!"))
}

func main() {
    http.HandleFunc("/", helloHandler)

    server := http.Server{
        Addr:        ":8080",
        IdleTimeout: 3 * time.Second,
    }

    fmt.Println("Starting server on :8080")
    panic(server.ListenAndServe())
}
The IdleTimeout is set to 3 seconds to make it easier to test. Now, oha is the perfect tool to generate the HTTP requests for this test. The load test is not meant to stress this setup; the idea is to wait long enough that some connections are closed. The burst-delay feature will help with that:
$ oha -z 30s -w --burst-delay 3s --burst-rate 100 http://localhost:8000
I'm running the load test for 30 seconds, sending 100 requests at three-second intervals. I also use the -w option to wait for ongoing requests when the duration is reached. The result looks like this: [oha test report: TCP proxy, with connection errors] We had 886 responses with status code 200 and 64 connections closed. The backend terminated 64 connections while the load balancer still had active requests directed to it. Let's change the Load Balancer idle_timeout to two seconds.
filter_chains:
- filters:
  - name: envoy.filters.network.tcp_proxy
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
      stat_prefix: go_server_tcp
      cluster: go_server_cluster
      idle_timeout: 2s # <--- NEW LINE
Run the same test again. [oha test report: TCP proxy, all requests succeed] Great! Now all the requests worked.

This is a common issue, not specific to Envoy Proxy or the setup shown earlier. Major cloud providers have all documented it. The AWS troubleshooting guide for Application Load Balancers says this:

The target closed the connection with a TCP RST or a TCP FIN while the load balancer had an outstanding request to the target. Check whether the keep-alive duration of the target is shorter than the idle timeout value of the load balancer.

Google's troubleshooting guide for Application Load Balancers mentions this as well:

Verify that the keepalive configuration parameter for the HTTP server software running on the backend instance is not less than the keepalive timeout of the load balancer, whose value is fixed at 10 minutes (600 seconds) and is not configurable. The load balancer generates an HTTP 5XX response code when the connection to the backend has unexpectedly closed while sending the HTTP request or before the complete HTTP response has been received. This can happen because the keepalive configuration parameter for the web server software running on the backend instance is less than the fixed keepalive timeout of the load balancer. Ensure that the keepalive timeout configuration for HTTP server software on each backend is set to slightly greater than 10 minutes (the recommended value is 620 seconds).

The RabbitMQ docs also warn about this:

Certain networking tools (HAproxy, AWS ELB) and equipment (hardware load balancers) may terminate "idle" TCP connections when there is no activity on them for a certain period of time. Most of the time it is not desirable.

Most of these guides are talking about Application Load Balancers, while the test I did was using a Network Load Balancer. For the sake of completeness, I will do the same test but using Envoy's HTTP connection manager. The updated envoy.yaml:
static_resources:
  listeners:
  - name: listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8000
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: go_server_http
          access_log:
          - name: envoy.access_loggers.stdout
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          route_config:
            name: http_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: go_server_cluster
  clusters:
  - name: go_server_cluster
    type: STATIC
    load_assignment:
      cluster_name: go_server_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 0.0.0.0
                port_value: 8080
The YAML above is an example of a service proxying HTTP from 0.0.0.0:8000 to 0.0.0.0:8080. The only difference from a minimal configuration is that I enabled access logs. Let's run the same tests with oha. [oha test report: HTTP connection manager, sporadic 503s] Even though the success rate is 100%, the status code distribution shows some responses with status code 503. This is the case where it is not so obvious that the problem is related to the idle timeout. However, it becomes clear when we look at the Envoy access logs:
[2025-10-10T13:32:26.617Z] "GET / HTTP/1.1" 503 UC 0 95 0 - "-" "oha/1.10.0" "9b1cb963-449b-41d7-b614-f851ced92c3b" "localhost:8000" "0.0.0.0:8080"
UC is the short name for UpstreamConnectionTermination. This means the upstream, which is the Go server, terminated the connection. To fix this, once again the Load Balancer idle timeout needs to change:
  clusters:
  - name: go_server_cluster
    type: STATIC
    typed_extension_protocol_options: # <--- NEW BLOCK
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        common_http_protocol_options:
          idle_timeout: 2s # <--- NEW VALUE
        explicit_http_config:
          http_protocol_options: {}
Finally, the sporadic 503 errors are gone: [oha test report: HTTP connection manager, no 503s]

To Sum Up Here's an example of the values my team recommends to our clients: [recap drawing] Key takeaways:
  1. The Load Balancer idle timeout should be less than the backend (upstream) idle/keepalive timeout.
  2. When we are working with long-lived connections, the client (downstream) should use a keepalive smaller than the LB idle timeout.
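To make the two takeaways concrete, here is a small Go sketch of the client side. This is a hedged example, not my team's exact recommendation: the 10-second LB idle timeout and the URL are assumptions.
package main

import (
    "log"
    "net/http"
    "time"
)

func main() {
    // Assumption for this sketch: the LB idle timeout is 10s. Per takeaway 1,
    // the backend's own idle/keepalive timeout should then be set above 10s;
    // per takeaway 2, the client drops idle connections before the LB does.
    const lbIdleTimeout = 10 * time.Second

    client := &http.Client{
        Transport: &http.Transport{
            // Stay below the LB idle timeout so the client never reuses a
            // connection the LB is about to close.
            IdleConnTimeout: lbIdleTimeout - 2*time.Second,
        },
    }

    resp, err := client.Get("http://localhost:8000")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    log.Println("status:", resp.Status)
}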

Dirk Eddelbuettel: RcppArmadillo 15 CRAN Transition: Offering Office Hours

[armadillo image] Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language and is widely used by (currently) 1273 other packages on CRAN, downloaded 41.8 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 651 times according to Google Scholar. Armadillo 15 brought changes. We mentioned these in the 15.0.2-1 release blog post: (The second point is a consequence of the first. Prior to C++14, deprecation notes were issued via a macro, and the macro was set up by Conrad in the common way of allowing an override, which we took advantage of in RcppArmadillo, effectively shielding downstream packages. In C++14 this is now an attribute, and those cannot be suppressed.) We tested this then-upcoming change extensively: thirteen reverse-dependency runs exploring different settings, leading to the current package setup where an automatic fallback to the last Armadillo 14 release covers hardwired C++11 use, with Armadillo 15 used otherwise. Given the 1200+ reverse dependencies, this took considerable time. All this was also quite extensively discussed with CRAN (especially Kurt Hornik) and documented / controlled via a series of issue tickets starting with overall issue #475 covering the subissues: The sixty pull requests (or emailed patches) followed a suggestion by CRAN to rank-order affected packages by their reverse-dependency count, in descending order. Now, while this change from Armadillo 14 to 15 was happening, CRAN also tightened the C++11 requirement for packages and imposed a deadline for changes. In discussion, CRAN also convinced me that a deadline for the deprecation warning, now unmasked, was viable (and is in fairness commensurate with similar, earlier changes triggered by changes in the behaviour of either gcc/g++ or clang/clang++). So we now have two larger deadline campaigns affecting the package (and as always there are some others). These deadlines are coming close: October 17 for the C++11 transition, and October 23 for the deprecation warning. Now, as became clear preparing the sixty pull requests and patches, these changes are often relatively straightforward. For the former, remove the C++11 enforcement and the package will likely build without changes. For the latter, make the often simple change (e.g. switch from arma::is_finite to std::isfinite). I did not encounter anything much more complicated yet. The number of affected packages, approximated by looking at all packages with a reverse dependency on RcppArmadillo that also have a deadline, can be computed as
suppressMessages(library(data.table))
D <- setDT(tools::CRAN_package_db())    # all current CRAN packages
P <- data.table(Package=tools::package_dependencies("RcppArmadillo", reverse=TRUE, db=D)[[1]])
W <- merge(P, D, all.x=TRUE)[is.na(Deadline)==FALSE,c(1:2,38,68)]    # keep only packages with a Deadline set
W

W[, nrevdep := length(tools::package_dependencies(Package, reverse=TRUE, recursive=TRUE, db=D)[[1]]), by=Package]
W[order(-nrevdep)]    # sort by recursive reverse-dependency count, descending
and has been declining steadily from over 350 to now under 200. For that, a big and heartfelt Thank You! to all the maintainers who already addressed their package and uploaded updated packages to CRAN. That rocks, and is truly appreciated. Yet the number is still large. And while issues #489 and #491 show a number of pending packages that have merged but not uploaded (yet?), there are also all the other packages I have not been able to look at in detail. While preparing sixty PRs / patches was viable over a period of a good week, I cannot create these for all packages. So with that said, here is a different suggestion for help: All of next week, I will be holding open-door open-source office hours online twice each day (11:00h to 13:00h Central, 16:00h to 18:00h Central), which can be booked via this booking link for Monday to Friday next week in fifteen- or thirty-minute slots. This should offer Google Meet video conferencing (with Jitsi as an alternative; you should be able to control that), which should allow for screen sharing. (I cannot hook up Zoom as my default account has organization settings with a different calendar integration.) If you are reading this and have a package that still needs help, I hope to see you in the Open Source Office Hours to work on the RcppArmadillo updates for your package. Please book a slot!

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

John Goerzen: I'm Not Very Popular, Thankfully. That Makes The Internet Fun Again

Like and subscribe! Help us get our next thousand (or million) followers! I was using Linux before it was popular. Back in the day when you had to write Modelines for your XF86Config file and do it properly, or else you might ruin your monitor. Back when there wasn't a word processor (thankfully; that forced me to learn LaTeX, which I used to write my papers in college). I then ran Linux on an Alpha, a difficult proposition in an era when web browsers were either closed-source or too old to be useful; all sorts of workarounds, including emulating Digital UNIX. Recently I wrote a deep dive into the DOS VGA text mode and how to achieve it on a modern UEFI Linux system. Nobody can monetize things like this. I am one of maybe a dozen or two people globally that care about that sort of thing. That's fine. Today, I'm interested in things like asynchronous communication, NNCP, and Gopher. Heck, I'm posting these words on a blog. Social media displaced those, right? Some of the things I write about here have maybe a few dozen people on the planet interested in them. That's fine. I have no idea how many people read my blog. I have no idea where people hear about my posts from. I guess I can check my Mastodon profile to see how many followers I have, but it's not something I tend to do. I don't know if the number is going up or down, or if it is all that much in Mastodon terms (probably not). Thank goodness. Since I don't have to care about what's popular, or spend hours editing video, or thousands of dollars on video equipment, I can just sit down and write about what interests me. If that also interests you, then great. If not, you can find what interests you, and that's also fine. I once had a colleague that was one of these plugged into Silicon Valley types. He would periodically tell me, with a mixture of excitement and awe, that one of my posts had made Hacker News. This was always news to me, because I never paid a lot of attention over there. Occasionally that would bring in some excellent discussion, but more often than not, it was comments from people that hadn't read or understood the article, trying to appear smart by arguing with what it said, or rather, what they imagined it said, I guess. The thing I value isn't subscriber count. It's discussion. A little discussion in the comments or on Mastodon, that's perfect, even if only 10 people read the article. I have the most fun in a community. And I'll go on writing about NNCP and Gopher and non-square DOS pixels, with audiences of dozens globally. I have no advertisers to keep happy, and I enjoy it, so why not?

8 October 2025

Colin Watson: Free software activity in September 2025

About 90% of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay or GitHub Sponsors. Some months I feel like I'm pedalling furiously just to keep everything in a roughly working state. This was one of those months. Python team I upgraded these packages to new upstream versions: I had to spend a fair bit of time this month chasing down build/test regressions in various packages due to some other upgrades, particularly to pydantic, python-pytest-asyncio, and rust-pyo3: After some upstream discussion I requested removal of pydantic-compat, since it was more trouble than it was worth to keep it working with the latest pydantic version. I filed "dh-python: pybuild-plugin-pyproject doesn't know about headers" and added it to Python/PybuildPluginPyproject, and converted some packages to pybuild-plugin-pyproject: I updated dh-python to suppress generated dependencies that would be satisfied by python3 >= 3.11. pkg_resources is deprecated. In most cases replacing it is a relatively simple matter of porting to importlib.resources, but packages that used its old namespace package support need more complicated work to port them to implicit namespace packages. We had quite a few bugs about this on zope.* packages, but fortunately upstream did the hard part of this recently. I went round and cleaned up most of the remaining loose ends, with some help from Alexandre Detiste. Some of these aren't completely done yet as they're awaiting new upstream releases: This work also caused a couple of build regressions, which I fixed: I fixed jupyter-client so that its autopkgtests would work in Debusine. I fixed waitress to build with the nocheck profile. I fixed several other build/test failures: I fixed some other bugs: Code reviews Other bits and pieces I fixed several CMake 4 build failures: I got CI for debbugs passing (!22, !23). I fixed a build failure with GCC 15 in trn4. I filed a release-notes bug about the tzdata reorganization in the trixie cycle. I filed and fixed a git-dpm regression with bash 5.3. I upgraded libfilter-perl to a new upstream version. I optimized some code in ubuntu-dev-tools that made O(n) HTTP requests when it could instead make O(1).

3 October 2025

Dirk Eddelbuettel: #053: Adding llvm Snapshots for R Package Testing

Welcome to post 53 in the R4 series. Continuing with posts #51 from Tuesday and #52 from Wednesday and their stated intent of posting some more, here is another quick one. Earlier today I helped another package developer who came to the r-package-devel list asking for help with a build error on the Fedora machine at CRAN running recent / development clang. In such situations, the best first step is often to replicate the issue. As I pointed out on the list, the LLVM team behind clang maintains an apt repo at apt.llvm.org, making it a good resource to add to Debian-based containers such as Rocker r-base or the official r-base (the two are in fact interchangeable, and I take care of both). A small pothole, however, is that the documentation at the top of the apt.llvm.org site is a bit stale and behind on two aspects that changed on current Debian systems (i.e. unstable/testing as used for r-base). First, apt now prefers files ending in .sources (in a nicer format) and second, it now really requires a key (which is good practice). As it took me a few minutes to regather how to meet both requirements, I reckoned I might as well script this. Et voilà, the following script does that:
#!/bin/sh

## Update does not hurt but is not strictly needed
#apt update --quiet --quiet
#apt upgrade --yes

## wget -qO- https://apt.llvm.org/llvm-snapshot.gpg.key | sudo tee /etc/apt/trusted.gpg.d/apt.llvm.org.asc
## or as we are root in container
wget -qO- https://apt.llvm.org/llvm-snapshot.gpg.key > /etc/apt/trusted.gpg.d/apt.llvm.org.asc

cat <<EOF >/etc/apt/sources.list.d/llvm-dev.sources
Types: deb
URIs: http://apt.llvm.org/unstable/
# for clang-21 
#   Suites: llvm-toolchain-21
# for current clang
Suites: llvm-toolchain
Components: main
Signed-By: /etc/apt/trusted.gpg.d/apt.llvm.org.asc
EOF

test -d ~/.R || mkdir ~/.R
cat <<EOF >~/.R/Makevars
CLANGVER=-22
# CLANGLIB=-stdlib=libc++
CXX=clang++\$(CLANGVER) \$(CLANGLIB)
CXX11=clang++\$(CLANGVER) \$(CLANGLIB)
CXX14=clang++\$(CLANGVER) \$(CLANGLIB)
CXX17=clang++\$(CLANGVER) \$(CLANGLIB)
CXX20=clang++\$(CLANGVER) \$(CLANGLIB)
CC=clang\$(CLANGVER)
SHLIB_CXXLD=clang++\$(CLANGVER) \$(CLANGLIB)
EOF

apt update
apt install --yes clang-22
Once the script is run, one can test a package (or set of packages) against clang-22 and clang++-22. This may help R package developers. The script is also generic enough for other development communities, who can ignore (or comment out / delete) the bit about ~/.R/Makevars and deploy the compiler differently. Updating the softlink as apt-preferences does is one way, and is done in many GitHub Actions recipes. As we only need wget here, a basic Debian container should work, possibly with the addition of wget. For R users, r-base hits a decent sweet spot.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Guido Günther: Free Software Activities September 2025

Another short status update of what happened on my side last month. Nothing stands out too much; I enjoyed doing the OSK changes the most, as that helped to improve the typing experience further. Also doing a small bit of kernel work again was fun (still need to figure out the 6mq's touch controller responsiveness though). See below for details on the above and more: phosh phoc phosh-mobile-settings stevia (formerly phosh-osk-stub) xdg-desktop-portal-phosh pfs libphosh-rs Phrog feedbackd feedbackd-device-themes Debian gnome-settings-daemon git-buildpackage govarnam Sessions twenty-twenty-hugo tuwunnel Linux mutter Phosh debs phosh-site Reviews This is not code by me but reviews I did on other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions! Help Development If you want to support my work see donations. Comments? Join the Fediverse thread

1 October 2025

Birger Schacht: Status update, September 2025

Regarding Debian packaging, this was a rather quiet month. I uploaded version 1.24.0-1 of foot and version 2.8.0-1 of git-quick-stats. I took the opportunity and started migrating my packages to the new version 5 watch file format, which I think is much more readable than the previous format. I also uploaded version 0.1.1-1 of libscfg to NEW. libscfg is a C implementation of the scfg configuration file format and it is a dependency of recent versions of kanshi. kanshi is a tool similar to autorandr which allows you to define output profiles; kanshi switches to the correct output profile on hotplug events. Once libscfg is in unstable I can finally update kanshi to the latest version. I spent a lot of time this month finalizing a redesign of the output rendering of carl. carl is a small Rust program I wrote that provides a calendar view similar to cal, but it comes with colors and ical file integration. That means that you can not only display a simple calendar, but also colorize/highlight dates based on various attributes or based on events on that day. In the initial versions of carl the output rendering was simply hardcoded into the app.
[Screenshot of carl]
This was a bit cumbersome to maintain and not configurable for users. I am using templating languages on a daily basis, so I decided I would reimplement the output generation of carl to use templates. I chose the minijinja Rust library, which is based on the syntax and behavior of the Jinja2 template engine for Python. There are others out there, like tera, but minijinja seems to be more active in development currently. I worked on this implementation on and off for the last year and finally had the time to finish it up and write some additional tests for the outputs. It is easier to maintain templates than Rust code that uses write!() to format the output. I also implemented a configuration option for users to override the templates. In addition to the output refactoring I also fixed a couple of bugs and finally released v0.4.0 of carl. In my dayjob I released version 0.53 of apis-core-rdf, which contains the place lookup field I implemented in August. A couple of weeks later we released version 0.54, which comes with a middleware to pass on messages from the Django messages framework via a response header to HTMX, to trigger message popups. This implementation is based on the blog post Using the Django messages framework with HTMX. Version 0.55 was the last release in September. It contained preparations for refactoring the import logic as well as a couple of UX improvements.

22 September 2025

David Bremner: Hibernate on the pocket reform 12/n

Context

Update to latest rockchip-devel For some reason I decided to try re-applying the PCI series. Good news: the pci series finally applies cleanly.
$ git fetch collabora && git switch -c tmp collabora  # [1]
$ b4 am 20250715-pci-port-reset-v6-0-6f9cce94e7bb@oss.qualcomm.com
$ git switch reform-patches  # [2]
$ git rebase -i tmp
  1. https://gitlab.collabora.com/hardware-enablement/rockchip-3588/linux.git#rockchip-devel
  2. https://salsa.debian.org/bremner/collabora-rockchip-3588#reform-patches

Rebuild the kernel
$ cp /boot/config-6.17.0-rc7+ .config
$ make olddefconfig
$ yes '' | make localmodconfig
$ make KBUILD_IMAGE=arch/arm64/boot/Image bindeb-pkg -j$(nproc)

try the hibernation test, again Running the following test script
set -x
echo platform >  /sys/power/pm_test
echo reboot > /sys/power/disk
sleep 2
rmmod mt76x2u
sleep 2
echo disk >  /sys/power/state
sleep 2
modprobe mt76x2u
Initially there is some output like this
[  151.752683] rockchip-dw-pcie a40c00000.pcie: Failed to receive PME_TO_Ack
[  151.754035] PM: hibernation: hibernation debug: Waiting for 5 second(s).
[  157.821584] rockchip-dw-pcie a40c00000.pcie: Phy link never came up
[  157.822139] rockchip-dw-pcie a40c00000.pcie: fail to resume
[  157.822636] rockchip-dw-pcie a40c00000.pcie: PM: dpm_run_callback(): genpd_restore_noirq returns -110
[  157.823442] rockchip-dw-pcie a40c00000.pcie: PM: failed to restore noirq: error -110
A small amount of detective work suggests that a40c00000.pcie corresponds to the first PCI bridge on the rk3588 SOC.
$ ls -l /sys/bus/pci/devices
total 0
lrwxrwxrwx 1 root root 0 Sep 23 10:32 0003:30:00.0 -> ../../../devices/platform/a40c00000.pcie/pci0003:30/0003:30:00.0
lrwxrwxrwx 1 root root 0 Sep 23 10:32 0004:40:00.0 -> ../../../devices/platform/a41000000.pcie/pci0004:40/0004:40:00.0
lrwxrwxrwx 1 root root 0 Sep 23 10:32 0004:41:00.0 -> ../../../devices/platform/a41000000.pcie/pci0004:40/0004:40:00.0/0004:41:00.0
Then after a pause,
[ 1032.039237] watchdog: CPU5: Watchdog detected hard LOCKUP on cpu 6
[ 1032.039778] Modules linked in: xt_CHECKSUM xt_tcpudp nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat x_tables bridge stp llc nf_tables aes_neon_bs aes_neon_blk ccm dwmac_rk binfmt_misc mt76x2_common mt76x02_usb mt76_usb mt76x02_lib mt76 rk805_pwrkey snd_soc_tlv320aic31xx snd_soc_simple_card mac80211 rockchip_saradc reform2_lpc(OE) industrialio_triggered_buffer libarc4 kfifo_buf cfg80211 industrialio rockchip_thermal rockchip_rng cdc_acm rfkill snd_soc_rockchip_i2s_tdm hantro_vpu rockchip_rga panthor v4l2_vp9 v4l2_jpeg snd_soc_audio_graph_card videobuf2_dma_sg v4l2_h264 drm_gpuvm snd_soc_simple_card_utils drm_exec evdev joydev dm_mod nvme_fabrics efi_pstore configfs nfnetlink autofs4 ext4 crc16 mbcache jbd2 btrfs blake2b_generic xor xor_neon raid6_pq mali_dp snd_soc_meson_axg_toddr snd_soc_meson_axg_fifo snd_soc_meson_codec_glue panfrost drm_shmem_helper gpu_sched ao_cec_g12a meson_vdec(C) videobuf2_dma_contig videobuf2_memops v4l2_mem2mem videobuf2_v4l2 videodev
[ 1032.039834]  videobuf2_common mc dw_hdmi_i2s_audio meson_drm meson_canvas meson_dw_mipi_dsi meson_dw_hdmi mxsfb mux_mmio panel_edp imx_dcss ti_sn65dsi86 nwl_dsi mux_core pwm_imx27 hid_generic usbhid hid onboard_usb_dev nvme nvme_core nvme_keyring nvme_auth snd_soc_hdmi_codec snd_soc_core xhci_plat_hcd xhci_hcd snd_pcm_dmaengine snd_pcm snd_timer snd soundcore rtc_pcf8523 fan53555 micrel phy_package stmmac_platform stmmac pcs_xpcs phylink mdio_devres rk808_regulator of_mdio sdhci_of_dwcmshc fixed_phy sdhci_pltfm fwnode_mdio libphy sdhci phy_rockchip_usbdp dw_mmc_rockchip dw_mmc_pltfm typec phy_rockchip_naneng_combphy pwm_rockchip dw_wdt phy_rockchip_samsung_hdptx dwc3 cqhci dw_mmc mdio_bus rockchip_dfi ehci_platform rockchipdrm ulpi ehci_hcd dw_hdmi_qp ohci_platform udc_core ohci_hcd analogix_dp dw_mipi_dsi i2c_rk3x cpufreq_dt usbcore phy_rockchip_inno_usb2 dw_mipi_dsi2 drm_dp_aux_bus usb_common [last unloaded: mt76x2u]
[ 1032.039886] Sending NMI from CPU 5 to CPUs 6:
previous episode

Evgeni Golov: Booting Vagrant boxes with UEFI on Fedora: Permission denied

If you're still using Vagrant (I am) and try to boot a box that uses UEFI (like boxen/debian-13), a simple vagrant init boxen/debian-13 and vagrant up will entertain you with a nice traceback:
% vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'boxen/debian-13' version '2025.08.20.12' is up to date...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              tmp.JV8X48n30U_default
==> default:  -- Description:       Source: /tmp/tmp.JV8X48n30U/Vagrantfile
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              1
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Clock offset:      utc
==> default:  -- Memory:            2048M
==> default:  -- Loader:            /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd
==> default:  -- Nvram:             /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd
==> default:  -- Base box:          boxen/debian-13
==> default:  -- Storage pool:      default
==> default:  -- Image(vda):        /home/evgeni/.local/share/libvirt/images/tmp.JV8X48n30U_default.img, virtio, 20G
==> default:  -- Disk driver opts:  cache='default'
==> default:  -- Graphics Type:     vnc
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        16384
==> default:  -- Video 3D accel:    false
==> default:  -- Keymap:            en-us
==> default:  -- TPM Backend:       passthrough
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default:  -- CHANNEL:             type=unix, mode=
==> default:  -- CHANNEL:             target_type=virtio, target_name=org.qemu.guest_agent.0
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Removing domain...
==> default: Deleting the machine folder
/usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/requests/compute/vm_action.rb:7:in 'Libvirt::Domain#create': Call to virDomainCreate failed: internal error: process exited while connecting to monitor: 2025-09-22T10:07:55.081081Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}: Could not open '/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd': Permission denied (Libvirt::Error)
    from /usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/requests/compute/vm_action.rb:7:in 'Fog::Libvirt::Compute::Shared#vm_action'
    from /usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/models/compute/server.rb:81:in 'Fog::Libvirt::Compute::Server#start'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/start_domain.rb:546:in 'VagrantPlugins::ProviderLibvirt::Action::StartDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/set_boot_order.rb:22:in 'VagrantPlugins::ProviderLibvirt::Action::SetBootOrder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/share_folders.rb:22:in 'VagrantPlugins::ProviderLibvirt::Action::ShareFolders#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/prepare_nfs_settings.rb:21:in 'VagrantPlugins::ProviderLibvirt::Action::PrepareNFSSettings#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/synced_folders.rb:87:in 'Vagrant::Action::Builtin::SyncedFolders#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/delayed.rb:19:in 'Vagrant::Action::Builtin::Delayed#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/synced_folder_cleanup.rb:28:in 'Vagrant::Action::Builtin::SyncedFolderCleanup#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/plugins/synced_folders/nfs/action_cleanup.rb:25:in 'VagrantPlugins::SyncedFolderNFS::ActionCleanup#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/prepare_nfs_valid_ids.rb:14:in 'VagrantPlugins::ProviderLibvirt::Action::PrepareNFSValidIds#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_network_interfaces.rb:197:in 'VagrantPlugins::ProviderLibvirt::Action::CreateNetworkInterfaces#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_networks.rb:40:in 'VagrantPlugins::ProviderLibvirt::Action::CreateNetworks#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_domain.rb:452:in 'VagrantPlugins::ProviderLibvirt::Action::CreateDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/resolve_disk_settings.rb:143:in 'VagrantPlugins::ProviderLibvirt::Action::ResolveDiskSettings#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_domain_volume.rb:97:in 'VagrantPlugins::ProviderLibvirt::Action::CreateDomainVolume#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/handle_box_image.rb:127:in 'VagrantPlugins::ProviderLibvirt::Action::HandleBoxImage#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/handle_box.rb:56:in 'Vagrant::Action::Builtin::HandleBox#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/handle_storage_pool.rb:63:in 'VagrantPlugins::ProviderLibvirt::Action::HandleStoragePool#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/set_name_of_domain.rb:34:in 'VagrantPlugins::ProviderLibvirt::Action::SetNameOfDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/provision.rb:80:in 'Vagrant::Action::Builtin::Provision#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/cleanup_on_failure.rb:21:in 'VagrantPlugins::ProviderLibvirt::Action::CleanupOnFailure#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/box_check_outdated.rb:93:in 'Vagrant::Action::Builtin::BoxCheckOutdated#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/config_validate.rb:25:in 'Vagrant::Action::Builtin::ConfigValidate#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:248:in 'Vagrant::Machine#action_raw'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:217:in 'block in Vagrant::Machine#action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/environment.rb:631:in 'Vagrant::Environment#lock'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:203:in 'Method#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:203:in 'Vagrant::Machine#action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/batch_action.rb:86:in 'block (2 levels) in Vagrant::BatchAction#run'
The important part here is
Call to virDomainCreate failed: internal error: process exited while connecting to monitor:
2025-09-22T10:07:55.081081Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}:
Could not open '/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd': Permission denied (Libvirt::Error)
Of course we checked that the file permissions on this file are correct (I'll save you the ls output), so what's next? Yes, of course, SELinux!
# ausearch -m AVC
time->Mon Sep 22 12:07:55 2025
type=AVC msg=audit(1758535675.080:1613): avc:  denied  { read } for  pid=257204 comm="qemu-system-x86" name="OVMF_CODE.fd" dev="dm-2" ino=1883946 scontext=unconfined_u:unconfined_r:svirt_t:s0:c352,c717 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0
A process in the svirt_t domain tries to access something in the user_home_t domain and is denied by the kernel. So far, SELinux is both working as designed and preventing us from doing our work, nice. For "normal" (non-UEFI) boxes, Vagrant uploads the image to libvirt, which stores it in ~/.local/share/libvirt/images/ and boots fine from there. For UEFI boxen, one also needs loader and nvram files, which Vagrant keeps in ~/.vagrant.d/boxes/<box_name>, and that's what explodes in our face here. As ~/.local/share/libvirt/images/ works well and is labeled svirt_home_t, let's see what other folders use that label:
# semanage fcontext -l | grep svirt_home_t
/home/[^/]+/\.cache/libvirt/qemu(/.*)?             all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.config/libvirt/qemu(/.*)?            all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.libvirt/qemu(/.*)?                   all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/gnome-boxes/images(/.*)? all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/libvirt/boot(/.*)?       all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/libvirt/images(/.*)?     all files          unconfined_u:object_r:svirt_home_t:s0
Okay, that all makes sense, and it's just missing the Vagrant-specific folders!
# semanage fcontext -a -t svirt_home_t '/home/[^/]+/\.vagrant.d/boxes(/.*)?'
Now relabel the Vagrant boxes:
% restorecon -rv ~/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13 from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/metadata_url from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12 from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/box_0.img from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/metadata.json from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/Vagrantfile from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_VARS.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/box_update_check from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
And it works!
% vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'boxen/debian-13' version '2025.08.20.12' is up to date...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              tmp.JV8X48n30U_default
==> default:  -- Description:       Source: /tmp/tmp.JV8X48n30U/Vagrantfile
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              1
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Clock offset:      utc
==> default:  -- Memory:            2048M
==> default:  -- Loader:            /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd
==> default:  -- Nvram:             /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd
==> default:  -- Base box:          boxen/debian-13
==> default:  -- Storage pool:      default
==> default:  -- Image(vda):        /home/evgeni/.local/share/libvirt/images/tmp.JV8X48n30U_default.img, virtio, 20G
==> default:  -- Disk driver opts:  cache='default'
==> default:  -- Graphics Type:     vnc
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        16384
==> default:  -- Video 3D accel:    false
==> default:  -- Keymap:            en-us
==> default:  -- TPM Backend:       passthrough
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default:  -- CHANNEL:             type=unix, mode=
==> default:  -- CHANNEL:             target_type=virtio, target_name=org.qemu.guest_agent.0
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Domain launching with graphics connection settings...
==> default:  -- Graphics Port:      5900
==> default:  -- Graphics IP:        127.0.0.1
==> default:  -- Graphics Password:  Not defined
==> default:  -- Graphics Websocket: 5700
==> default: Waiting for domain to get an IP address...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 192.168.124.157:22
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!

Vincent Bernat: Akvorado release 2.0

Akvorado 2.0 was released today! Akvorado collects network flows with IPFIX and sFlow. It enriches flows and stores them in a ClickHouse database. Users can browse the data through a web console. This release introduces an important architectural change and other smaller improvements. Let's dive in!
$ git diff --shortstat v1.11.5
 493 files changed, 25015 insertions(+), 21135 deletions(-)

New outlet service The major change in Akvorado 2.0 is splitting the inlet service into two parts: the inlet and the outlet. Previously, the inlet handled all flow processing: receiving, decoding, and enrichment. Flows were then sent to Kafka for storage in ClickHouse:
Akvorado flow processing before the change: flows are received and processed by the inlet, sent to Kafka and stored in ClickHouse
Akvorado flow processing before the introduction of the outlet service
Network flows reach the inlet service using UDP, an unreliable protocol. The inlet must process them fast enough to avoid losing packets. To handle a high number of flows, the inlet spawns several sets of workers to receive flows, fetch metadata, and assemble enriched flows for Kafka. Many configuration options existed for scaling, which increased complexity for users. The code needed to avoid blocking at any cost, making the processing pipeline complex and sometimes unreliable, particularly the BMP receiver.1 Adding new features became difficult without making the problem worse.2 In Akvorado 2.0, the inlet receives flows and pushes them to Kafka without decoding them. The new outlet service handles the remaining tasks:
Akvorado flow processing after the change: flows are received by the inlet, sent to Kafka, processed by the outlet and inserted in ClickHouse
Akvorado flow processing after the introduction of the outlet service
This change goes beyond a simple split:3 the outlet now reads flows from Kafka and pushes them to ClickHouse, two tasks that Akvorado did not handle before. Flows are heavily batched to increase efficiency and reduce the load on ClickHouse using ch-go, a low-level Go client for ClickHouse. When batches are too small, asynchronous inserts are used (e20645). The number of outlet workers scales dynamically (e5a625) based on the target batch size and latency (50,000 flows and 5 seconds by default). This new architecture also allows us to simplify and optimize the code. The outlet fetches metadata synchronously (e20645). The BMP component becomes simpler by removing cooperative multitasking (3b9486). Reusing the same RawFlow object to decode protobuf-encoded flows from Kafka reduces pressure on the garbage collector (8b580f). The effect on Akvorado's overall performance was somewhat uncertain, but a user reported 35% lower CPU usage after migrating from the previous version, plus resolution of the long-standing BMP component issue.
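To make the size-or-latency tradeoff concrete, here is a minimal Go sketch of such a batching loop; the names, the Flow type and the flush callback are hypothetical, not Akvorado's actual code:
package main

import "time"

type Flow struct{} // stand-in for a decoded flow

// batchLoop flushes either when the batch reaches maxBatch or when maxWait
// has elapsed, whichever comes first; a sketch of the pattern, not Akvorado code.
func batchLoop(in <-chan Flow, flush func([]Flow), maxBatch int, maxWait time.Duration) {
	batch := make([]Flow, 0, maxBatch)
	ticker := time.NewTicker(maxWait)
	defer ticker.Stop()
	for {
		select {
		case f, ok := <-in:
			if !ok {
				if len(batch) > 0 {
					flush(batch)
				}
				return
			}
			batch = append(batch, f)
			if len(batch) >= maxBatch {
				flush(batch)
				// Allocate a fresh slice so the flushed batch is not aliased.
				batch = make([]Flow, 0, maxBatch)
			}
		case <-ticker.C:
			if len(batch) > 0 {
				flush(batch)
				batch = make([]Flow, 0, maxBatch)
			}
		}
	}
}

func main() {
	in := make(chan Flow, 2)
	in <- Flow{}
	in <- Flow{}
	close(in)
	batchLoop(in, func(b []Flow) { println("flushed", len(b)) }, 50000, 5*time.Second)
}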

Other changes This new version includes many miscellaneous changes, such as completion for source and destination ports (f92d2e), and automatic restart of the orchestrator service (0f72ff) when its configuration changes, to avoid a common pitfall for newcomers. Let's focus on some key areas for this release: observability, documentation, CI, Docker, Go, and JavaScript.

Observability Akvorado exposes metrics to provide visibility into the processing pipeline and help troubleshoot issues. These are available through Prometheus HTTP metrics endpoints, such as /api/v0/inlet/metrics. With the introduction of the outlet, many metrics moved. Some were also renamed (4c0b15) to match Prometheus best practices. Kafka consumer lag was added as a new metric (e3a778). If you do not have your own observability stack, the Docker Compose setup shipped with Akvorado provides one. You can enable it by activating the profiles introduced for this purpose (529a8f). The prometheus profile ships Prometheus to store metrics and Alloy to collect them (2b3c46, f81299, and 8eb7cd). Redis and Kafka metrics are collected through the exporter bundled with Alloy (560113). Other metrics are exposed using Prometheus metrics endpoints and are automatically fetched by Alloy with the help of some Docker labels, similar to what is done to configure Traefik. cAdvisor was also added (83d855) to provide some container-related metrics. The loki profile ships Loki to store logs (45c684). While Alloy can collect and ship logs to Loki, its parsing abilities are limited: I could not find a way to preserve all metadata associated with the structured logs produced by many applications, including Akvorado. Vector replaces Alloy (95e201) and features a domain-specific language, VRL, to transform logs. Annoyingly, Vector currently cannot retrieve Docker logs from before it was started. Finally, the grafana profile ships Grafana, but the shipped dashboards are currently broken; fixing them is planned for a future version.

Documentation The Docker Compose setup provided by Akvorado makes it easy to get the web interface up and running quickly. However, Akvorado requires a few mandatory steps to be functional. It ships with comprehensive documentation, including a chapter about troubleshooting problems. I hoped this documentation would reduce the support burden. It is difficult to know if it works. Happy users rarely report their success, while some users open discussions asking for help without reading much of the documentation. In this release, the documentation was significantly improved.
$ git diff --shortstat v1.11.5 -- console/data/docs
 10 files changed, 1873 insertions(+), 1203 deletions(-)
The documentation was updated (fc1028) to match Akvorado's new architecture. The troubleshooting section was rewritten (17a272). Instructions on how to improve ClickHouse performance when upgrading from versions earlier than 1.10.0 were added (5f1e9a). An LLM proofread the entire content (06e3f3). Developer-focused documentation was also improved (548bbb, e41bae, and 871fc5). From a usability perspective, table-of-contents sections are now collapsible (c142e5). Admonitions help draw user attention to important points (8ac894).
Admonition in Akvorado documentation to ask a user not to open an issue or start a discussion before reading the documentation
Example of use of admonitions in Akvorado's documentation

Continuous integration This release includes efforts to speed up continuous integration on GitHub. Coverage and race tests run in parallel (6af216 and fa9e48). The Docker image builds during the tests but gets tagged only after they succeed (8b0dce).
GitHub workflow for CI with many jobs, some of them running in parallel, some not
GitHub workflow to test and build Akvorado
End-to-end tests (883e19) ensure the shipped Docker Compose setup works as expected. Hurl runs tests on various HTTP endpoints, particularly to verify metrics (42679b and 169fa9). For example:
## Test inlet has received NetFlow flows
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_flow_input_udp_packets_total{job="akvorado-inlet",listener=":2055"})
HTTP 200
[Captures]
inlet_receivedflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_receivedflows" > 10
## Test inlet has sent them to Kafka
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_kafka_sent_messages_total{job="akvorado-inlet"})
HTTP 200
[Captures]
inlet_sentflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_sentflows" >=   inlet_receivedflows  

Docker Akvorado ships with a comprehensive Docker Compose setup to help users get started quickly. It ensures a consistent deployment, eliminating many configuration-related issues. It also serves as living documentation of the complete architecture. This release brings some small enhancements around Docker. Previously, many Docker images were pulled from the Bitnami Containers library. However, VMware acquired Bitnami in 2019 and Broadcom acquired VMware in 2023. As a result, Bitnami images were deprecated with less than a month of notice. This was not really a surprise.4 Previous versions of Akvorado had already started moving away from them. In this release, the Apache project's Kafka image replaces the Bitnami one (1eb382). Thanks to the switch to KRaft mode, Zookeeper is no longer needed (0a2ea1, 8a49ca, and f65d20). Akvorado's Docker images were previously compiled with Nix. However, building AArch64 images on x86-64 is slow because it relies on QEMU userland emulation. The updated Dockerfile uses multi-stage and multi-platform builds: one stage builds the JavaScript part on the host platform, one stage builds the Go part cross-compiled on the host platform, and the final stage assembles the image on top of a slim distroless image (268e95 and d526ca).
# This is a simplified version
FROM --platform=$BUILDPLATFORM node:20-alpine AS build-js
RUN apk add --no-cache make
WORKDIR /build
COPY console/frontend console/frontend
COPY Makefile .
RUN make console/data/frontend
FROM --platform=$BUILDPLATFORM golang:alpine AS build-go
RUN apk add --no-cache make curl zip
WORKDIR /build
COPY . .
COPY --from=build-js /build/console/data/frontend console/data/frontend
RUN go mod download
RUN make all-indep
ARG TARGETOS TARGETARCH TARGETVARIANT VERSION
RUN make
FROM gcr.io/distroless/static:latest
COPY --from=build-go /build/bin/akvorado /usr/local/bin/akvorado
ENTRYPOINT [ "/usr/local/bin/akvorado" ]
When building for multiple platforms with --platform linux/amd64,linux/arm64,linux/arm/v7, the build steps before the ARG TARGETOS TARGETARCH TARGETVARIANT VERSION line execute only once for all platforms. This significantly speeds up the build. Akvorado now ships Docker images for these platforms: linux/amd64, linux/amd64/v3, linux/arm64, and linux/arm/v7. When requesting ghcr.io/akvorado/akvorado, Docker selects the best image for the current CPU. On x86-64, there are two choices. If your CPU is recent enough, Docker downloads linux/amd64/v3. This version contains additional optimizations and should run faster than the linux/amd64 version. It would be interesting to ship an image for linux/arm64/v8.2, but Docker does not support the same mechanism for AArch64 yet (792808).

Go This release includes many changes related to Go but not visible to the users.

Toolchain In the past, Akvorado supported the two latest Go versions, preventing immediate use of the latest enhancements. The goal was to allow users of stable distributions to use the Go versions shipped with their distribution to compile Akvorado. However, this became frustrating when interesting features, like go tool, were released. Akvorado 2.0 requires Go 1.25 (77306d) but can be compiled with older toolchains by automatically downloading a newer one (94fb1c).5 Users can still override GOTOOLCHAIN to revert this decision. The recommended toolchain is updated weekly through CI to ensure we get the latest minor release (5b11ec). This change also simplifies updates to newer versions: only go.mod needs updating. Thanks to this change, Akvorado now uses wg.Go() (77306d) and I have started converting some unit tests to the new testing/synctest package (bd787e, 7016d8, and 159085).
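For illustration, a go.mod along these lines is all that is needed; the module path is hypothetical, and the comments describe the standard Go toolchain behaviour rather than anything Akvorado-specific:
// go.mod (illustrative; the module path is hypothetical)
module example.com/akvorado-like

// An older go binary reading this requirement will, with the default
// GOTOOLCHAIN=auto, download and run a matching toolchain; setting
// GOTOOLCHAIN=local keeps the locally installed one instead.
go 1.25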

Testing When testing equality, I use a helper function Diff() to display the differences when it fails:
got := input.Keys()
expected := []int{1, 2, 3}
if diff := helpers.Diff(got, expected); diff != "" {
    t.Fatalf("Keys() (-got, +want):\n%s", diff)
}
This function uses kylelemons/godebug. This package is no longer maintained and has some shortcomings: for example, by default, it does not compare struct private fields, which may cause unexpectedly successful tests. I replaced it with google/go-cmp, which is stricter and has better output (e2f1df).
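As a minimal sketch of what the replacement looks like (a hypothetical test, not Akvorado's code), cmp.Diff returns an empty string on equality and a readable diff otherwise:
package mypkg

import (
	"testing"

	"github.com/google/go-cmp/cmp"
)

func TestKeys(t *testing.T) {
	got := []int{1, 2, 3}
	want := []int{1, 2, 3}
	// cmp.Diff panics on unexported struct fields unless an option allows
	// them, which is the stricter behavior mentioned above.
	if diff := cmp.Diff(want, got); diff != "" {
		t.Fatalf("Keys() mismatch (-want +got):\n%s", diff)
	}
}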

Another package for Kafka Another change is the switch from Sarama to franz-go to interact with Kafka (756e4a and 2d26c5). The main motivation for this change is to get a better concurrency model. Sarama heavily relies on channels and it is difficult to understand the lifecycle of an object handed to this package. franz-go uses a more modern approach with callbacks6 that is both more performant and easier to understand. It also ships with a package to spawn fake Kafka broker clusters, which is more convenient than the mocking functions provided by Sarama.
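A minimal consumption sketch with franz-go might look as follows; the broker and topic names are hypothetical, and this only illustrates the poll-plus-callback style, not Akvorado's actual code:
package main

import (
	"context"
	"fmt"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	client, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // hypothetical broker
		kgo.ConsumeTopics("flows"),        // hypothetical topic
	)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	for {
		fetches := client.PollFetches(context.Background())
		// Each record is handed to a callback, which keeps the lifecycle
		// of consumed messages easy to follow.
		fetches.EachRecord(func(r *kgo.Record) {
			fmt.Printf("partition %d: %d bytes\n", r.Partition, len(r.Value))
		})
	}
}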

Improved routing table for BMP To store its routing table, the BMP component used kentik/patricia, an implementation of a patricia tree focused on reducing garbage collection pressure. gaissmai/bart is a more recent alternative using an adaptation of Donald Knuth's ART algorithm that promises better performance and delivers it: 90% faster lookups and 27% faster insertions (92ee2e and fdb65c). Unlike kentik/patricia, gaissmai/bart does not help with efficiently storing values attached to each prefix. I adapted the same approach as kentik/patricia to store route lists for each prefix: store a 32-bit index for each prefix, and use it to build a 64-bit index for looking up routes in a map. This leverages Go's efficient map structure, as sketched below. gaissmai/bart also supports a lockless version of the routing table, but adopting it is not simple because we would need to extend the lockless approach to the map storing the routes and to the interning mechanism. I also attempted to use Go's new unique package to replace the intern package included in Akvorado, but performance was worse.7
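Here is a minimal sketch of that indexing idea; the names are hypothetical and this is not Akvorado's actual code:
package main

import "fmt"

// makeKey combines the 32-bit per-prefix index stored in the tree with a
// 32-bit route index into a single 64-bit key for a flat Go map.
func makeKey(prefixID, routeID uint32) uint64 {
	return uint64(prefixID)<<32 | uint64(routeID)
}

func main() {
	routes := make(map[uint64][]string)
	k := makeKey(42, 1)
	routes[k] = append(routes[k], "route learned from peer A")
	fmt.Println(routes[k])
}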

Miscellaneous Previous versions of Akvorado were using a custom Protobuf encoder for performance and flexibility. With the introduction of the outlet service, Akvorado only needs a simple static schema, so this code was removed. However, it is still possible to enhance performance with planetscale/vtprotobuf (e49a74 and 8b580f). Moreover, the dependency on protoc, a C++ program, was somewhat annoying. Therefore, Akvorado now uses buf, written in Go, to convert a Protobuf schema into Go code (f4c879). Another small optimization reduces the size of the Akvorado binary by 10 MB: the static assets embedded in Akvorado, including the ASN database and the SVG images for the documentation, are now compressed in a ZIP file. A small layer of code makes this change transparent (b1d638 and e69b91).
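A minimal sketch of the idea, assuming a hypothetical data.zip asset file: since *zip.Reader implements fs.FS, a thin wrapper can expose compressed embedded assets without changing callers that expect an fs.FS:
package assets

import (
	"archive/zip"
	"bytes"
	_ "embed"
	"io/fs"
)

// data.zip is a hypothetical file name; the real asset layout differs.
//go:embed data.zip
var zipped []byte

// FS exposes the compressed assets as a read-only fs.FS, so callers can
// keep using fs.ReadFile and friends unchanged while the binary ships
// the data compressed.
func FS() (fs.FS, error) {
	return zip.NewReader(bytes.NewReader(zipped), int64(len(zipped)))
}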

JavaScript Recently, two large supply-chain attacks hit the JavaScript ecosystem: one affecting the popular packages chalk and debug and another impacting the popular package @ctrl/tinycolor. These attacks also exist in other ecosystems, but JavaScript is a prime target due to heavy use of small third-party dependencies. The previous version of Akvorado relied on 653 dependencies. npm-run-all was removed (3424e8, 132 dependencies). patch-package was removed (625805 and e85ff0, 69 dependencies) by moving missing TypeScript definitions to env.d.ts. eslint was replaced with oxlint, a linter written in Rust (97fd8c, 125 dependencies, including the plugins). I switched from npm to Pnpm, an alternative package manager (fce383). Pnpm does not run install scripts by default8 and prevents installing packages that are too recent. It is also significantly faster.9 Node.js does not ship Pnpm but it ships Corepack, which allows us to use Pnpm without installing it. Pnpm can also list licenses used by each dependency, removing the need for license-compliance (a35ca8, 42 dependencies). For additional speed improvements, beyond switching to Pnpm and Oxlint, Vite was replaced with its faster Rolldown version (463827). After these changes, Akvorado only pulls 225 dependencies.

Next steps I would like to land three features in the next version of Akvorado:
  • Add Grafana dashboards to complete the observability stack. See issue #1906 for details.
  • Integrate OVH's Grafana plugin by providing a stable API for such integrations. Akvorado's web console would still be useful for browsing results, but if you want to build and share dashboards, you should switch to Grafana. See issue #1895.
  • Move some work currently done in ClickHouse (custom dictionaries, GeoIP and IP enrichment) back into the outlet service. This should give more flexibility for adding features like the one requested in issue #1030. See issue #2006.

I started working on splitting the inlet into two parts more than one year ago. I found more motivation in recent months, partly thanks to Claude Code, which I used as a rubber duck. Almost none of the produced code was kept:10 it is like an intern who does not learn.

  1. Many attempts were made to make the BMP component both performant and not blocking. See for example PR #254, PR #255, and PR #278. Despite these efforts, this component remained problematic for most users. See issue #1461 as an example.
  2. Some features have been pushed to ClickHouse to avoid the processing cost in the inlet. See for example PR #1059.
  3. This is the biggest commit:
    $ git show --shortstat ac68c5970e2c | tail -1
    231 files changed, 6474 insertions(+), 3877 deletions(-)
    
  4. Broadcom is known for its user-hostile moves. Look at what happened with VMware.
  5. As a Debian developer, I dislike these mechanisms that circumvent the distribution package manager. The final straw came when Go 1.25 spent one month in the Debian NEW queue, an arbitrary mechanism I don't like at all.
  6. In the early years of Go, channels were heavily promoted. Sarama was designed during this period. A few years later, a more nuanced approach emerged. See notably Go channels are bad and you should feel bad.
  7. This should be investigated further, but my theory is that the intern package uses 32-bit integers, while unique uses 64-bit pointers. See commit 74e5ac.
  8. This is also possible with npm. See commit dab2f7.
  9. An even faster alternative is Bun, but it is less available.
  10. The exceptions are part of the code for the admonition blocks, the code for collapsing the table of contents, and part of the documentation.

21 September 2025

Gunnar Wolf: We, Programmers: A Chronicle of Coders from Ada to AI

This post is a review for Computing Reviews of "We, Programmers: A Chronicle of Coders from Ada to AI", a book published by Addison-Wesley.
When this book was presented as available for review, I jumped on it. After all, who doesn't love reading a nice bit of computing history, as told by a well-known author (affectionately known as "Uncle Bob"), one who has been immersed in computing since forever? What's not to like there? Reading on, the book does not disappoint. Much to the contrary, it digs into details absent in most computer history books that, being an operating systems and computer architecture geek, I absolutely enjoyed. But let me first address the book's organization. The book is split into four parts. Part 1, Setting the Stage, is a short introduction, answering the question "Who are we?" ("we" being the programmers, of course). It describes the fascination many of us felt when we realized that the computer was there to obey us, to do our bidding, and we could absolutely control it. Part 2 talks about the giants of the computing world, on whose shoulders we stand. It digs in with a level of detail I have never seen before, discussing their personal lives and technical contributions (as well as the hoops they had to jump through to get their work done). Nine chapters cover these giants, ranging chronologically from Charles Babbage and Ada Lovelace to Ken Thompson, Dennis Ritchie, and Brian Kernighan (understandably, giants who worked together are grouped in the same chapter). This is the part with the most historically overlooked technical details. For example, what was the word size in the first computers, before even the concept of a byte had been brought into regular use? What was the register structure of early central processing units (CPUs), and why did it lead to requiring self-modifying code to be able to execute loops? Then, just as Unix and C get invented, Part 3 skips to computer history as seen through the eyes of Uncle Bob. I must admit, while the change of rhythm initially startled me, it ends up working quite well. The focus is no longer on the giants of the field, but on one particular person (who casts a very long shadow). The narrative follows the author's life: a boy with access to electronics due to his father's line of work; a computing industry leader, in the early 2000s, with extreme programming; one of the first producers of training materials in video format, a role that today might be recognized as an influencer. This first-person narrative reaches the year 2023. But the book is not just a historical overview of the computing world, of course. Uncle Bob includes a final section with his thoughts on the future of computing. As this is a book for programmers, it is fitting to start with the changes in programming languages that we should expect to see and where such changes are likely to take place. The unavoidable topic of artificial intelligence is presented next: what is it, and what does it spell for computing, and in particular for programming? Interesting (and sometimes surprising) questions follow: What does the future of hardware development look like? What is prone to be the evolution of the World Wide Web? What is the future of programming and programmers? At just under 500 pages, the book is a volume to be taken seriously. But the space is very well used in this text. The material is easy to read, often funny, and always informative. If you enjoy computer history and understanding the little details in the implementations, it might very well be the book you want.

Dirk Eddelbuettel: rcppmlpackexamples 0.0.1 on CRAN: New Package

mlpack is a fabulous project providing countless machine learning algorithms in clean and performant C++ code as a header-only library. This gives both high performance and the ability to run the code in resource-constrained environments such as embedded systems. Bindings to a number of other languages are available, and an examples repo provides examples. The project also has a mature R package on CRAN which offers the various algorithms directly in R. Sometimes, however, one might want to use the header-only C++ code in another R package. How to do that was not well documented. A user alerted me by email to this fact a few weeks ago, and this led to both an updated mlpack release at CRAN and this package. In short, we show via three complete examples how to access the mlpack code in C++ code in this package, offering a re-usable stanza to start a different package from. The only other (header-only) dependencies are provided by the CRAN packages RcppArmadillo and RcppEnsmallen wrapping, respectively, the linear algebra and optimization libraries used by mlpack. Courtesy of my CRANberries, there is also a new package note (no diffstat report yet). More detailed information is on the rcppmlpackexamples page, or the github repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Joey Hess: cheap DIY solar fence design

A year ago I installed a 4 kilowatt solar fence. I'm revisiting it this Sun Day, to share the design, now that I have proved it out.
The solar fence and some other ground and pole mount solar panels, seen through leaves.
Solar fencing manufacturers have some good simple designs, but it's hard to buy for a small installation; they mostly sell to utility-scale solar. And those are installed by driving metal beams into the ground, which requires heavy machinery. Since I have experience with Ironridge rails for roof mount solar, I decided to adapt that system for a vertical mount, which is something it was not designed for. I combined the Ironridge hardware with regular parts from the hardware store. The cost of mounting solar panels nowadays is often higher than the cost of the panels. I hoped to match the cost, and I nearly did: the solar panels cost $100 each, and the fence cost $110 per solar panel. This fence was significantly cheaper than conventional ground mount arrays that I considered as alternatives, and made better use of a difficult hillside location. I used 7 foot long Ironridge XR-10 rails, which fit 2 solar panels per rail. (Longer rails would need a center post anyway, and the 7 foot long rails have cheaper shipping, since they do not need to be shipped freight.) For the fence posts, I used regular 4x4" treated posts, 12 foot long, set in 3 foot deep post holes, with 3x 50 lb bags of concrete per hole and 6 inches of gravel on the bottom.
detail of how the rails are mounted to the posts, and the panels to the rails
To connect the Ironridge rails to the fence posts, I used the Ironridge LFT-03-M1 slotted L-foot bracket. Screwed into the post with a 5/8 x 3 inch hot-dipped galvanized lag screw. Since a treated post can react badly with an aluminum bracket, there needs to be some flashing between the post and bracket. I used Shurtape PW-100 tape for that. I see no sign of corrosion after 1 year. The rest of the Ironridge system is a T-bolt that connects the rail to the L-foot (part BHW-SQ-02-A1), and Ironridge solar panel fasteners (UFO-CL-01-A1 and UFO-STP-40MM-M1). Also XR-10 end caps and wire clips. Since the Ironridge hardware is not designed to hold a solar panel at a 90 degree angle, I was concerned that the panels might slide downward over time. To help prevent that, I added some additional support brackets under the bottom of the panels. So far, that does not seem to have been a problem though. I installed Aptos 370 watt solar panels on the fence. They are bifacial, and while the posts block the back partially, there is still bifacial gain on cloudy days. I left enough space under the solar panels to be able to run a push mower under them.
Me standing in front of the solar fence at end of construction
I put pairs of posts next to one another, so each 7 foot segment of fence had its own 2 posts. This is the least elegant part of this design, but fitting 2 brackets next to one another on a single post isn't feasible. I bolted the pairs of posts together with some spacers. A side benefit of doing it this way is that treated lumber can warp as it dries, and this prevented much twisting of the posts. Using separate posts for each segment also means that the fence can traverse a hill easily. And it does not need to be perfectly straight. In fact, my fence has a 30 degree bend in the middle. This means it has both south facing and south-west facing panels, so it can catch the light for longer during the day. After building the fence, I noticed there was a slight bit of sway at the top, since 9 feet of wooden post is not entirely rigid. My worry was that a gusty wind could rattle the solar panels. While I did not actually observe that happening, I added some diagonal back bracing for peace of mind.
view of rear upper corner of solar fence, showing back bracing connection
Inspecting the fence today, I find no problems after the first year. I hope it will last 30 years, with the lifespan of the treated lumber being the likely determining factor. As part of my larger (and still ongoing) ground mount solar install, the solar fence has consistently provided great power. The vertical orientation works well at latitude 36. It also turned out that the back of the fence was useful to hang conduit and wiring and solar equipment, and so it turned into the electrical backbone of my whole solar field. But that's another story...
Solar fence parts list
quantity cost per unit description
10 $27.89 7 foot Ironridge XR-10 rail
12 $20.18 12 foot treated 4x4
30 $4.86 Ironridge UFO-CL-01-A1
20 $0.87 Ironridge UFO-STP-40MM-M1
1 $12.62 Ironridge XR-10 end caps (20 pack)
20 $2.63 Ironridge LFT-03-M1
20 $1.69 Ironridge BHW-SQ-02-A1
22 $2.65 5/8 x 3 inch hot-dipped galvanized lag screw
10 $0.50 6" gravel per post
30 $6.91 50 lb bags of quickcrete
1 $15.00 Shurtape PW-100 Corrosion Protection Pipe Wrap Tape
N/A $30 other bolts and hardware (approximate)
$1100 total (Does not include cost of panels, wiring, or electrical hardware.)

19 September 2025

Dirk Eddelbuettel: RcppArmadillo 15.0.2-2 on CRAN: Transition to Armadillo 15

armadillo image Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language and is widely used by (currently) 1261 other packages on CRAN, downloaded 41.4 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 647 times according to Google Scholar. This version updates the 15.0.2-1 release from last week. Following fairly extensive email discussions with CRAN, we are now accelerating the transition to Armadillo 15. When C++14 or newer is used (which after all is the default since R 4.1.0, released May 2021; see WRE Section 1.2.4), or when opted into, the newer Armadillo is selected. If on the other hand either C++11 is still forced, or the legacy version is explicitly selected (which currently one package at CRAN does), then Armadillo 14.6.3 is selected. Most packages will not see a difference and automatically switch to the newer Armadillo. However, some packages will see one or two types of warning. First, if C++11 is still actively selected, for example via CXX_STD, then CRAN will nudge a change to a newer compilation standard (as they have been doing for some time already). Preferably the change should be to simply remove the constraint and let R pick the standard based on its version and compiler availability. These days that gives us C++17 in most cases; see WRE Section 1.2.4 for details. (Some packages may need C++14 or C++17 or C++20 explicitly and can also do so.) Second, some packages may see a deprecation warning. Up until Armadillo 14.6.3, the package suppressed these, and you can still get that effect by opting into that version by setting -DARMA_USE_LEGACY. (However, this route will be sunset eventually too.) But one really should update the code to the non-deprecated version. In a large number of cases this simply means switching from using arma::is_finite() (typically called on a scalar double) to calling std::isfinite(). But there are some other cases, and we will help as needed. If you maintain a package showing deprecation warnings, and are lost here and cannot work out the conversion to current coding styles, please open an issue at the RcppArmadillo repository (i.e. here) or in your own repository and tag me. I will also reach out to the maintainers of a smaller set of packages with more than one reverse dependency. A few small changes have been made to internal packaging and documentation, along with a small synchronization with upstream for two commits since the 15.0.2 release, as well as a link to the ldlasb2 repository and its demonstration regarding some ill-stated benchmarks done elsewhere. The detailed changes since the last CRAN release follow.

Changes in RcppArmadillo version 15.0.2-2 (2025-09-18)
  • Minor update to skeleton Makevars{,.win}
  • Update README.md to mention ldlasb2 repository
  • Minor documentation update (#487)
  • Synchronized with Armadillo upstream (#488)
  • Refine Armadillo version selection in coordination with CRAN maintainers to support transition towards Armadillo 15.0.*

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

14 September 2025

Ian Jackson: tag2upload in the first month of forky

tl;dr: tag2upload (beta) is going well so far, and is already handling around one in 13 uploads to Debian. Introduction and some stats We announced tag2upload's open beta in mid-July. That was in the middle of the freeze for trixie, so usage was fairly light until the forky floodgates opened. Since then the service has successfully performed 637 uploads, of which 420 were in the last 32 days. That's an average of about 13 per day. For comparison, during the first half of September up to today there have been 2475 uploads to unstable. That's about 176/day. So, tag2upload is already handling around 7.5% of uploads. This is very gratifying for a service which is advertised as still being in beta! Sean and I are very pleased both with the uptake, and with the way the system has been performing. Recent UI/UX improvements During this open beta period we have been hard at work. We have made many improvements to the user experience. Current git-debpush in forky, or trixie-backports, is much better at detecting various problems ahead of time. When uploads do fail on the service, the emailed error reports are now more informative. For example, anomalies involving orig tarballs, which by definition can't be detected locally (since one point of tag2upload is not to have tarballs locally), now generally result in failure reports containing a diffstat, and instructions for a local repro. Why we are still in beta There are a few outstanding work items that we currently want to complete before we declare the end of the beta. Retrying on Salsa-side failures The biggest of these is that the service should be able to retry when Salsa fails. Sadly, Salsa isn't wholly reliable, and right now if it breaks when the service is trying to handle your tag, your upload can fail. We think most of these failures could be avoided. Implementing retries is a fairly substantial task, but doesn't pose any fundamental difficulties. We're working on this right now. Other notable ongoing work We want to support pristine-tar, so that pristine-tar users can do a new upstream release. Andrea Pappacoda is working on that with us. See #1106071. (Note that we would generally recommend against use of pristine-tar within Debian. But we want to support it.) We have been having conversations with Debusine folks about what integration between tag2upload and Debusine would look like. We're making some progress there, but a lot is still up in the air. We are considering how best to provide tag2upload pre-checks as part of Salsa CI. There are several problems detected by the tag2upload service that could be detected by Salsa CI too, but which can't be detected by git-debpush. Common problems We've been monitoring the service and until very recently we have investigated every service-side failure, to understand the root causes. This has given us insight into the kinds of things our users want, and the kinds of packaging and git practices that are common. We've been able to improve the system's handling of various anomalies and also improved the documentation. Right now our failure rate is still rather high, at around 7%. Partly this is because people are trying out the system on packages that haven't ever seen git tooling with such a level of rigour.
There are two classes of problem that are responsible for the vast majority of the failures that we're still seeing: reuse of version numbers, and attempts to re-tag. tag2upload, like git (and like dgit), hates it when you reuse a version number, or try to pretend that a (perhaps busted) release never happened. git tags aren't namespaced, and tend to spread about promiscuously. So replacing a signed git tag, with a different tag of the same name, is a bad idea. More generally, reusing the same version number for a different (signed!) package is poor practice. Likewise, it's usually a bad idea to remove changelog entries for versions which were actually released, just because they were later deemed improper. We understand that many Debian contributors have gotten used to this kind of thing. Indeed, tools like dcut encourage it. It does allow you to make things neat-looking, even if you've made mistakes - but really it does so by covering up those mistakes! The bottom line is that tag2upload can't support such history-rewriting. If you discover a mistake after you've signed the tag, please just burn the version number and add a new changelog stanza. One bonus of tag2upload's approach is that it will discover if you are accidentally overwriting an NMU, and report that as an error. Discrepancies between git and orig tarballs tag2upload promises that the source package that it generates corresponds precisely to the git tree you tag and sign. Orig tarballs make this complicated. They aren't present on your laptop when you git-debpush. When you're not uploading a new upstream version, the tag2upload service reuses existing orig tarballs from the archive. If your git and the archive's orig don't agree, the tag2upload service will report an error, rather than upload a package with contents that differ from your git tag. With the most common Debian workflows, everything is fine: If you base everything on upstream git, and make your orig tarballs with git archive (or git deborig), your orig tarballs are the same as the git, by construction. We recommend usually ignoring upstream tarballs: most upstreams work in git, and their tarballs can contain weirdness that we don't want. (At worst, the tarball can contain an attack that isn't visible in git, as with xz!) Alternatively, if you use gbp import-orig, the differences (including an attack like Jia Tan's) are imported into git for you. Then, once again, your git and the orig tarball will correspond. But there are other workflows where this correspondence may not hold. Those workflows are hazardous, because the thing you're probably working with locally for your routine development is the git view. Then, when you upload, your work is transplanted onto the orig tarball, which might be quite different - so what you upload isn't what you've been working on! This situation is detected by tag2upload, precisely because tag2upload checks that it's keeping its promise: the source package is identical to the git view. (dgit push makes the same promise.) Get involved Of course the easiest way to get involved is to start using tag2upload. We would love to have more contributors. There are some easy tasks to get started with, in bugs we've tagged newcomer, mostly UX improvements such as detecting certain problems earlier, in git-debpush. More substantially, we are looking for help with sbuild: we'd like it to be able to work directly from git, rather than needing to build source packages: #868527.


12 September 2025

Freexian Collaborators: Using JavaScript in Debusine without depending on JavaScript (by Enrico Zini)

Debusine is a tool designed for Debian developers and Operating System developers in general. This post describes our approach to the use of JavaScript, and some practical designs we came up with to integrate it with Django with minimal effort.

Debusine web UI and JavaScript Debusine currently has 3 user interfaces: a client on the command line, a RESTful API, and a Django-based Web UI. Debusine's web UI is a tool to interact with the system, and we want to spend most of our efforts in creating a system that works and works well, rather than chasing the latest and hippest of the frontend frameworks for the web. Also, Debian as a community has an aversion to having parts of the JavaScript ecosystem in the critical path of its core infrastructure, and in our professional experience this aversion is not at all unreasonable. This leads to having some interesting requirements for the web UI, that (rather surprisingly, one would think) one doesn't usually find advertised in many projects:
  • Straightforward to create and maintain.
  • Well integrated with Django.
  • Easy to package in Debian, with as little vendoring as possible, which helps mitigate supply chain attacks
  • Usable without JavaScript whenever possible, for progressive enhancement rather than core functionality.
The idea is to avoid growing the technical complexity and requirements of the web UI, both server-side and client-side, for functionality that is not needed for this kind of project, with tools that do not fit well in our ecosystem. Also, to limit the complexity of the JavaScript portions that we do develop, we choose to limit our JavaScript browser support to the main browser versions packaged in Debian Stable, plus recent oldstable. This makes JavaScript easier to write and maintain, and it also makes it less needed, as modern HTML plus modern CSS interfaces can go a long way with fewer scripting interventions. We recently encoded JavaScript practices and tradeoffs in a JavaScript Practices chapter of Debusine's documentation.

How we use JavaScript From the start we built the UI using Bootstrap, which helps in having responsive layouts that can also work on mobile devices. When we started having large select fields, we introduced Select2 to make interaction more efficient; it degrades gracefully to working HTML. Both Bootstrap and Select2 are packaged in Debian. Form validation is done server-side by Django, and we do not reimplement it client-side in JavaScript, as we prefer the extra round trip through a form submission to the risk of mismatches between the two validations. In those cases where a UI task is not at all possible without JavaScript, we can make its support mandatory as long as the same goal can be otherwise achieved using the debusine client command.

Django messages as Bootstrap toasts Django has a Messages framework that allows different parts of a view to push messages to the user, and it is useful to signal things like a successful form submission, or warnings on unexpected conditions. Django messages integrate well with Bootstrap toasts, which use a recognisable notification language, are nicely dismissible and do not invade the rest of the page layout. Since toasts require JavaScript to work, we added graceful degradation to Bootstrap alerts. Doing so was surprisingly simple: we handle the toasts as usual, and also render the plain alerts inside a <noscript> tag. This is precisely the intended usage of the <noscript> tag, and it works perfectly: toasts are displayed by JavaScript when it's available, or rendered as alerts when not. The resulting Django template is something like this:
<div aria-live="polite" aria-atomic="true" class="position-relative">
    <div class="toast-container position-absolute top-0 end-0 p-3">
     % for message in messages % 
        <div class="toast" role="alert" aria-live="assertive" aria-atomic="true">
            <div class="toast-header">
                <strong class="me-auto">  message.level_tag capfirst  </strong>
                <button type="button"
                        class="btn-close"
                        data-bs-dismiss="toast"
                        aria-label="Close"></button>
            </div>
            <div class="toast-body">  message  </div>
        </div>
     % endfor % 
    </div>
</div>
<!--   -->
 % if messages % 
<noscript>
     % for message in messages % 
        <div class="alert alert-primary" role="alert">
              message  
        </div>
     % endfor % 
</noscript>
 % endif % 
We have a webpage to test the result.

JavaScript incremental improvement of formsets Debusine is built around workspaces, which are, among other things, containers for resources. Workspaces can inherit from other workspaces, which act as fallback lookups for resources. This allows, for example, maintaining an experimental package to be built on Debian Unstable, without the need to copy the whole Debian Unstable workspace. A workspace can inherit from multiple others, which are looked up in order. When adding UI to configure workspace inheritance, we faced the issue that plain HTML forms do not have a convenient way to perform data entry of an ordered list. We initially built the data entry around Django formsets, which support ordering using an extra integer input field to enter the ordering position. This works, and it's good as a fallback, but we wanted something more appropriate, like dragging and dropping items to reorder them, as the main method of interaction. Our final approach renders the plain formset inside a <noscript> tag, and the JavaScript widget inside a display: none element, which is later shown by JavaScript code. As the workspace inheritance is edited, JavaScript serializes its state into <input type="hidden"> fields that match the structure used by the formset, so that when the form is submitted, the view performs validation and updates the server state as usual without any extra maintenance burden. Serializing state as hidden form fields looks a bit vintage, but it is an effective way of preserving the established data entry protocol between the server and the browser, allowing us to do incremental improvement of the UI while minimizing the maintenance effort.

More to come Debusine is now gaining significant adoption and is still under active development, with new features like personal archives coming soon. This will likely mean more user stories for the UI, so this is a design space that we are going to explore again and again. Meanwhile, you can try out Debusine on debusine.debian.net, and follow its development on salsa.debian.org!

11 September 2025

Jonathan Wiltshire: Debian stable updates explained: security, updates, and point releases

Please consider supporting my work in Debian and elsewhere through Liberapay. Debian stable updates work through three main channels: point releases, security repositories, and the updates repository. Understanding these ensures your system stays secure and current.

A note about suite names Every Debian release, or suite, has a codename: the most recent major release was trixie, or Debian 13. The codename uniquely identifies that suite. We also use changeable aliases to add meaning to the suite's lifecycle. For example, trixie currently has the alias stable, but when forky becomes stable instead, trixie will become known as oldstable. This post uses either codenames or aliases depending on context. In source lists, codenames are generally preferred since that avoids surprise major upgrades right after a release is made.

The stable suites (point releases) stable and oldstable (currently trixie and bookworm) are only updated during a point release. This is a minor update released to a major version. For example, 13.1 is the first minor update to trixie. It's not possible to install older minor versions of a suite except via the snapshots mechanism (not covered here). It's possible to view past versions via snapshot.debian.org, which preserves historical Debian archives. There are also the testing and unstable aliases for the development suites. However, these are not relevant for users who want to run officially released versions. Almost every stable installation of Debian will be opted into a stable or oldstable base suite. An example APT source might look like:
Type: deb
URIs: http://deb.debian.org/debian
Suites: trixie
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp
Or, in legacy sources.list style:
deb https://deb.debian.org/debian trixie main

The security suites (DSAs explained) For urgent security-related updates, the Security Team maintains a counterpart suite for each stable suite. These are called stable-security and oldstable-security when maintained by Debian's security team, and oldstable-security, oldoldstable-security, etc. when maintained by the LTS team. Example APT source:
Type: deb
URIs: https://deb.debian.org/debian-security
Suites: trixie-security
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp
Or, in legacy sources.list style:
deb https://deb.debian.org/debian-security trixie-security main
The Debian installer enables the security suites by default. Debian Security Announcements (DSAs) are published to debian-security-announce@lists.debian.org.

The updates suites (SUAs and maintenance) For urgent non-security updates, the final recommended suites are stable-updates and oldstable-updates. This is where updates staged for a point release, but needed sooner, are published. Examples include virus database updates, timezone changes, urgent bug fixes for specific problems and corrections to errors in the release process itself. Example APT source:
Type: deb
URIs: https://deb.debian.org/debian
Suites: trixie-updates
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp
Or, in legacy sources.list style:
deb https://deb.debian.org/debian trixie-updates main
Debian enables the updates suites by default. Stable Update Announcements (SUAs) are published to debian-stable-announce@lists.debian.org. This is also where announcements of forthcoming point releases are published.

Summary These are the recommended suites for all production Debian systems:
Suite | Example codename | Purpose | Announcements
stable | trixie | Base suite containing all the available software for a release. Point releases every 2-4 months including lower-severity security fixes that do not require immediate release. | Debian Release Announcements on debian-announce
stable-security | trixie-security | Urgent security updates. | Debian Security Announcements on debian-security-announce
stable-updates | trixie-updates | Urgent non-security updates, data updates and release maintenance. | Stable Update Announcements on debian-stable-announce
After a release moves from oldstable to unsupported status, Long Term Support (LTS) takes over for several more years. LTS provides urgent security updates for selected architectures. For details, see wiki.debian.org/LTS. If you'd like to stay informed, the official Debian announcement lists and release.debian.org share the latest schedules and updates.
Photo by Brian Wangenheim on Unsplash

Gunnar Wolf: Saying _hi_ to my good Reproducible Builds friends while reading a magazine article

Just wanted to share I enjoy reading George V. Neville-Neil's Kode Vicious column, which regularly appears in some of ACM's publications I follow, such as ACM Queue or Communications. Today I was very pleasantly surprised: while reading the column titled "Can't we have nice things", Kode Vicious answers a question on why computing has nothing comparable to the beauty of ancient physics laboratories turned into museums (i.e. Faraday's laboratory) by giving a great hat tip to a project that stemmed off Debian, and where many of my good Debian friends spend a lot of their energies: Reproducible Builds. KV says:
Once the proper measurement points are known, we want to constrain the system such that what it does is simple enough to understand and easy to repeat. It is quite telling that the push for software that enables reproducible builds only really took off after an embarrassing widespread security issue ended up affecting the entire Internet. That there had already been 50 years of software development before anyone thought that introducing a few constraints might be a good idea is, well, let's just say it generates many emotions, none of them happy, fuzzy ones.
Yes, KV is a seasoned free software author. But I found it heartwarming that the Reproducible Builds project is mentioned without needing to introduce it (assuming familiarity across the computing industry and academia), recognized as game-changing, as we understood it would be over ten years ago when it was first announced, and enabling of beauty in computing. Congratulations to all of you who have made this possible!

Freexian Collaborators: Monthly report about Debian Long Term Support, August 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In August, 21 contributors were paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 10.0h (out of 0.0h assigned and 14.0h from previous period), thus carrying over 4.0h to the next month.
  • Andrej Shadura did 12.0h (out of 9.0h assigned and 3.0h from previous period).
  • Bastien Roucariès did 20.0h (out of 19.75h assigned and 0.25h from previous period).
  • Ben Hutchings did 22.75h (out of 16.5h assigned and 6.25h from previous period).
  • Carlos Henrique Lima Melara did 10.0h (out of 10.0h assigned).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 23.25h (out of 23.25h assigned).
  • Emilio Pozuelo Monfort did 23.25h (out of 23.25h assigned).
  • Guilhem Moulin did 15.0h (out of 15.0h assigned).
  • Jochen Sprickerhof did 11.0h (out of 6.0h assigned and 16.75h from previous period), thus carrying over 11.75h to the next month.
  • Lee Garrett did 16.25h (out of 0.0h assigned and 16.25h from previous period).
  • Lucas Kanashiro did 20.0h (out of 1.25h assigned and 18.75h from previous period).
  • Markus Koschany did 5.0h (out of 13.0h assigned and 9.75h from previous period), thus carrying over 17.75h to the next month.
  • Paride Legovini did 8.0h (out of 0.0h assigned and 8.0h from previous period).
  • Roberto C. Sánchez did 7.5h (out of 11.75h assigned and 11.0h from previous period), thus carrying over 15.25h to the next month.
  • Santiago Ruano Rincón did 13.5h (out of 7.25h assigned and 7.75h from previous period), thus carrying over 1.5h to the next month.
  • Stefano Rivera did 0.5h (out of 0.0h assigned and 3.0h from previous period), thus carrying over 2.5h to the next month.
  • Sylvain Beucler did 10.0h (out of 23.25h assigned), thus carrying over 13.25h to the next month.
  • Thorsten Alteholz did 22.75h (out of 22.75h assigned).
  • Tobias Frost did 4.0h (out of 0.0h assigned and 12.0h from previous period), thus carrying over 8.0h to the next month.
  • Utkarsh Gupta did 16.0h (out of 22.75h assigned), thus carrying over 6.75h to the next month.

Evolution of the situation In August, we released 27 DLAs. The month of August marked the release of Debian 13 (codename "trixie"). This is worth noting because it brought with it the return of the customary fast development pace of Debian unstable, which included several contributions from LTS Team members. More on that below. Of the many security updates which were published (and a few non-security updates as well), some notable ones are highlighted here.
  • Notable security updates:
    • gnutls28, prepared by Adrian Bunk, fixes several potential denial of service vulnerabilities
    • apache2, prepared by Bastien Roucariès, fixes several vulnerabilities including a potential denial of service and SSL/TLS-related access control
    • mbedtls (original update, regression update) prepared by Andrej Shadura, fixes several potential denial of service and information disclosure vulnerabilities
    • openjdk-17, prepared by Emilio Pozuelo Monfort, fixes several vulnerabilities which could result in denial of service, information disclosure or weakened TLS connections
  • Notable non-security updates:
    • distro-info-data, prepared by Stefano Rivera, adds information concerning future Debian and Ubuntu releases
    • ca-certificates-java, prepared by Bastien Roucariès, fixes some bugs which could disrupt future updates
The LTS Team continues to welcome the collaboration of maintainers from across the Debian community. The contributions of maintainers from outside the LTS Team include: postgresql-13 (Christoph Berg), sope (Jordi Mallach), thunderbird (Carsten Schoenert), and iperf3 (Roberto Lumbreras). Finally, LTS Team members also contributed updates of the following packages:
  • redis (to stable), prepared by Chris Lamb
  • firebird3.0 (to oldstable and stable), prepared by Adrian Bunk
  • node-tmp (to oldstable, stable, and unstable), prepared by Adrian Bunk
  • openjpeg2 (to oldstable, stable, and unstable), prepared by Adrian Bunk
  • apache2 (to oldstable), prepared by Bastien Roucariès
  • unbound (to oldstable), prepared by Guilhem Moulin
  • luajit (to oldstable), prepared by Guilhem Moulin
  • golang-github-gin-contrib-cors (to oldstable and stable), prepared by Thorsten Alteholz
  • libcoap3 (to stable), prepared by Thorsten Alteholz
  • libcommons-lang-java and libcommons-lang3-java (both to unstable), prepared by Daniel Leidert
  • python-flask-cors (to oldstable), prepared by Daniel Leidert
The LTS Team would especially like to thank our many longtime friends and sponsors for their support and collaboration.

Thanks to our sponsors Sponsors that joined recently are in bold.
