
17 January 2026

Simon Josefsson: Backup of S3 Objects Using rsnapshot

I've been using rsnapshot to take backups of around 10 servers and laptops for well over 15 years, and it is a remarkably reliable tool that has proven itself many times. Rsnapshot uses rsync over SSH and maintains a temporal hard-link file pool. Once rsnapshot is configured and running on the backup server, you get a hardlink farm with directories like this for the remote server:
/backup/serverA.domain/.sync/foo
/backup/serverA.domain/daily.0/foo
/backup/serverA.domain/daily.1/foo
/backup/serverA.domain/daily.2/foo
...
/backup/serverA.domain/daily.6/foo
/backup/serverA.domain/weekly.0/foo
/backup/serverA.domain/weekly.1/foo
...
/backup/serverA.domain/monthly.0/foo
/backup/serverA.domain/monthly.1/foo
...
/backup/serverA.domain/yearly.0/foo
I can browse and rescue files easily, going back in time when needed. The rsnapshot project README explains more; there is a long rsnapshot HOWTO, although I usually find the rsnapshot man page the easiest to digest.

I have stored multi-TB Git-LFS data on GitLab.com for some time. The yearly renewal is coming up, and the price for Git-LFS storage on GitLab.com is now excessive (~$10,000/year). I have reworked my workflow and finally migrated debdistget to only store Git-LFS stubs on GitLab.com and push the real files to S3 object storage. The cost for this is barely measurable; I have yet to run into the 25/month warning threshold. But how do you back up stuff stored in S3?

For some time, my S3 backup solution has been to run the minio-client mirror command to download all S3 objects to my laptop, and rely on rsnapshot to keep backups of this. While 4TB NVMe drives are relatively cheap, I've felt for quite some time that this disk and network churn on my laptop is unsatisfactory. What is a better approach?

I find S3 hosting sites fairly unreliable by design. Only a couple of clicks in your web browser and you have dropped 100TB of data. Or someone else who steals your plaintext-equivalent cookie has. Thus, I haven't really felt comfortable using any S3-based backup option. I prefer to self-host, although continuously running a mirror job is not sufficient: if I accidentally drop the entire S3 object store, my mirror run will remove all files locally too. The rsnapshot approach, which allows going back in time and keeps data on self-managed servers, feels superior to me.

What if we could use rsnapshot with an S3 client instead of rsync? Someone else asked about this several years ago, and the suggestion was to use the FUSE-based s3fs, which sounded unreliable to me. After some experimentation, working around some hard-coded assumptions in the rsnapshot implementation, I came up with a small configuration pattern and a wrapper tool to implement what I desired.
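The hard-link rotation behind such a farm is easy to demonstrate. The following sketch illustrates the cp -al mechanism that rsnapshot relies on (this is an illustration, not rsnapshot itself):

```shell
#!/bin/sh
# Illustration of the hard-link mechanism rsnapshot relies on
# (not rsnapshot itself): snapshot directories created with cp -al
# share inodes for unchanged files, so they cost almost no space.
set -eu
ROOT=$(mktemp -d)

# .sync holds the current mirror of the remote data.
mkdir -p "$ROOT/.sync"
echo "hello" > "$ROOT/.sync/foo"

# Rotation step: daily.0 becomes a hard-linked copy of .sync.
cp -al "$ROOT/.sync" "$ROOT/daily.0"

# Same inode for both paths, until a later sync replaces .sync/foo.
stat -c %i "$ROOT/.sync/foo" "$ROOT/daily.0/foo"
```

Only files that change between syncs consume additional space; unchanged files remain a single inode referenced from every snapshot.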
Here is my configuration snippet:
cmd_rsync    /backup/s3/s3rsync
rsync_short_args    -Q
rsync_long_args    --json --remove
lockfile    /backup/s3/rsnapshot.pid
snapshot_root    /backup/s3
backup    s3:://hetzner/debdistget-gnuinos    ./debdistget-gnuinos
backup    s3:://hetzner/debdistget-tacos  ./debdistget-tacos
backup    s3:://hetzner/debdistget-diffos ./debdistget-diffos
backup    s3:://hetzner/debdistget-pureos ./debdistget-pureos
backup    s3:://hetzner/debdistget-kali   ./debdistget-kali
backup    s3:://hetzner/debdistget-devuan ./debdistget-devuan
backup    s3:://hetzner/debdistget-trisquel   ./debdistget-trisquel
backup    s3:://hetzner/debdistget-debian ./debdistget-debian
The idea is to save backups of a couple of S3 buckets under /backup/s3/. I have some scripts that take a complete rsnapshot.conf file and append my per-directory configuration, so that this becomes a complete configuration. If you are curious how I roll this, backup-all invokes backup-one, appending the snippet above to my rsnapshot.conf template. The s3rsync wrapper script is the essential hack that converts rsnapshot's rsync parameters into something that talks S3, and the script is as follows:
#!/bin/sh
set -eu
S3ARG=
for ARG in "$@"; do
    case $ARG in
    s3:://*) S3ARG="$S3ARG $(echo "$ARG" | sed -e 's,s3:://,,')";;
    -Q*) ;;
    *) S3ARG="$S3ARG $ARG";;
    esac
done
echo /backup/s3/mc mirror $S3ARG
exec /backup/s3/mc mirror $S3ARG
It uses the minio-client tool. I first tried s3cmd, but its sync command reads all files to compute MD5 checksums every time you invoke it, which is very slow. The mc mirror command is blazingly fast since it only compares mtimes, just like rsync or git. If I invoke a sync job for a fully synced-up directory, the output looks like this:
root@hamster /backup# /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf -V sync
Setting locale to POSIX "C"
echo 1443 > /backup/s3/rsnapshot.pid 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-gnuinos \
    /backup/s3/.sync//debdistget-gnuinos 
/backup/s3/mc mirror --json --remove hetzner/debdistget-gnuinos /backup/s3/.sync//debdistget-gnuinos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-tacos \
    /backup/s3/.sync//debdistget-tacos 
/backup/s3/mc mirror --json --remove hetzner/debdistget-tacos /backup/s3/.sync//debdistget-tacos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-diffos \
    /backup/s3/.sync//debdistget-diffos 
/backup/s3/mc mirror --json --remove hetzner/debdistget-diffos /backup/s3/.sync//debdistget-diffos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-pureos \
    /backup/s3/.sync//debdistget-pureos 
/backup/s3/mc mirror --json --remove hetzner/debdistget-pureos /backup/s3/.sync//debdistget-pureos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-kali \
    /backup/s3/.sync//debdistget-kali 
/backup/s3/mc mirror --json --remove hetzner/debdistget-kali /backup/s3/.sync//debdistget-kali
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-devuan \
    /backup/s3/.sync//debdistget-devuan 
/backup/s3/mc mirror --json --remove hetzner/debdistget-devuan /backup/s3/.sync//debdistget-devuan
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-trisquel \
    /backup/s3/.sync//debdistget-trisquel 
/backup/s3/mc mirror --json --remove hetzner/debdistget-trisquel /backup/s3/.sync//debdistget-trisquel
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-debian \
    /backup/s3/.sync//debdistget-debian 
/backup/s3/mc mirror --json --remove hetzner/debdistget-debian /backup/s3/.sync//debdistget-debian
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
touch /backup/s3/.sync/ 
rm -f /backup/s3/rsnapshot.pid 
/run/current-system/profile/bin/logger -p user.info -t rsnapshot[1443] \
    /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf \
    -V sync: completed successfully 
root@hamster /backup# 
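If you want to check the wrapper's argument translation without mc installed, the same loop can be run standalone, echoing the command it would build (the bucket name below is just an example):

```shell
#!/bin/sh
# Standalone demo of the s3rsync argument translation: drop rsnapshot's
# rsync short options (-Q...), strip the s3:// prefix, keep the rest.
set -eu
S3ARG=
for ARG in -Qv --json --remove \
        s3:://hetzner/debdistget-debian /backup/s3/.sync/debdistget-debian; do
    case $ARG in
    s3:://*) S3ARG="$S3ARG $(echo "$ARG" | sed -e 's,s3:://,,')";;
    -Q*) ;;
    *) S3ARG="$S3ARG $ARG";;
    esac
done
# Would run: mc mirror --json --remove hetzner/... /backup/s3/.sync/...
echo mc mirror$S3ARG
```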
You can tell from the paths that this machine runs Guix. This was my first production use of the Guix System, and the machine has been running since 2015 (with the occasional new hard drive). Before, I used rsnapshot on Debian, but some stable release of Debian dropped the rsnapshot package, paving the way for me to test Guix in production on a non-Internet-exposed machine. Unfortunately, mc is not packaged in Guix, so you will have to install it manually from the MinIO Client GitHub page. Running the daily rotation looks like this:
root@hamster /backup# /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf -V daily
Setting locale to POSIX "C"
echo 1549 > /backup/s3/rsnapshot.pid 
mv /backup/s3/daily.5/ /backup/s3/daily.6/ 
mv /backup/s3/daily.4/ /backup/s3/daily.5/ 
mv /backup/s3/daily.3/ /backup/s3/daily.4/ 
mv /backup/s3/daily.2/ /backup/s3/daily.3/ 
mv /backup/s3/daily.1/ /backup/s3/daily.2/ 
mv /backup/s3/daily.0/ /backup/s3/daily.1/ 
/run/current-system/profile/bin/cp -al /backup/s3/.sync /backup/s3/daily.0 
rm -f /backup/s3/rsnapshot.pid 
/run/current-system/profile/bin/logger -p user.info -t rsnapshot[1549] \
    /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf \
    -V daily: completed successfully 
root@hamster /backup# 
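To run this automatically, the sync and rotation commands can be driven from cron. Here is a hypothetical crontab sketch: the schedule is my own assumption, the paths follow the configuration above, and your rsnapshot.conf template needs matching sync_first/retain settings.

```
# Hypothetical crontab: pull fresh data first, then rotate snapshots.
30 2 * * *  /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf sync
0  4 * * *  /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf daily
0  5 * * 1  /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf weekly
0  6 1 * *  /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf monthly
```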
Hopefully you will feel inspired to take backups of your S3 buckets now!

Jonathan Dowland: Honest Jon's lightly-used Starships

No Man's Sky (or as it's known in our house, "spaceship game") is a space exploration/sandbox game that was originally released 10 years ago. Back then I tried it on my brother's PS4, but I couldn't get into it. In 2022 it launched for the Nintendo Switch1 and the game finally clicked for me. I play it very casually. Mostly I don't play at all, except sometimes when there are time-limited expeditions running, which I find refreshing and which usually have some exclusives as a reward for playing. One of the many things you can do in the game is collect starships. I started keeping a list of notable ones I've found, and I've decided to occasionally blog about them.
The Horizon Vector NX spaceship
The Horizon Vector NX is a small sporty ship that players on the Nintendo Switch could claim within the first month or so after the game launched there. The colour scheme resembles the original neon Switch controllers. Although the ship type occurs naturally in the game in other configurations, I think the differently-painted wings are unique to this ship. For most of the last four years, my copy of this ship was confined to the Switch, until November 2024, when cross-save capability was added to the game. I was then able to access the ship when playing on Linux (or Mac).

  1. The game runs very well natively on Mac, flawlessly on Steam for Linux, but struggles on the original Switch. It's a marvel it runs there at all.

16 January 2026

Dirk Eddelbuettel: RcppSpdlog 0.0.26 on CRAN: Another Microfix

Version 0.0.26 of RcppSpdlog arrived on CRAN moments ago, and will be uploaded to Debian and built for r2u shortly. The (nice) documentation site has been refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site. Brian Ripley noticed an infelicity when building under C++20, which he is testing hard and fast. Sadly, this came a day late for yesterday's upload of release 0.0.25 with another trivial fix, so another incremental release was called for. We already accommodated C++20 and its use of std::format (in lieu of the included fmt::format) but had not turned it on unconditionally. We do so now, but offer an opt-out for those who prefer the previous build type. The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.26 (2026-01-16)
  • Under C++20 or later, switch to using std::format to avoid a compiler nag that CRAN now complains about

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Kentaro Hayashi: Budgie Desktop 10.10 is out, but not for me yet :(

Introduction I have been a Budgie Desktop user since 2020 (Budgie Desktop 10.5 or so). Recently Budgie Desktop 10.10 became available from Debian experimental. I tried it and realized that it is not for me yet.

What are my requirements for a desktop environment?
  • Screen sharing with the ability to pick a specific window/application for web meetings
  • Sharing keyboard input smoothly with deskflow
  • Switching focus with mouse movement
  • and ...

Is Budgie Desktop 10.10 suitable? Short answer: no, not yet. Budgie Desktop 10.10 (Wayland) seems to have lost the ability to share a specific window/application for web meetings; of course, you can still share the whole screen. It also can't share keyboard input smoothly with deskflow. Both seem to be supported in GNOME and KDE. In contrast to Budgie Desktop 10.9 (X11), Budgie Desktop 10.10 seems to be missing effective Wayland + xdg-desktop-portal support for these use cases. It might be supported in a future release, but that might be 11.x.

Alternatives? Budgie Desktop 10.x will come into maintenance mode, so these issues will (I guess) not be fixed in 10.x releases. buddiesofbudgie.org Switching DE might be an option (GNOME or KDE), but I don't have the energy to make that transition for now. I decided to take the conservative option and go back to 10.9. Note that if you upgrade to Budgie Desktop 10.10, it is hard to downgrade to Budgie Desktop 10.9 because the python3-gi and python3-gi-cairo dependencies block budgie-desktop. See https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1120138 (budgie-extras: Several applets are incompatible with pygobject 3.54/libpeas 1.36.0-8) for details. As a dirty hack, you can modify the python3-gi dependency and install at your own risk when downgrading.

Conclusion It was still too early for me to adopt Budgie Desktop (Wayland). I'll stay with Budgie Desktop 10.9 for a while. That said, it's unclear how long Budgie Desktop 10.9 will remain in unstable and usable; sticking with it might become problematic when other packages are updated. When that happens, I'd like to reconsider this issue again.

Jonathan Dowland: Ye Gods

Via (I think) @mcc on the Fediverse, I learned of GetMusic: a sort of "clearing house" for free Bandcamp codes. I think the way it works is that some artists release a limited set of download codes for their albums in order to promote them, and GetMusic helps them keep track of that and helps listeners discover them. GetMusic mails me occasionally, and once they highlighted an album, The Arcane & Paranormal Earth, which they described as "Post-Industrial in the vein of Coil and Nurse With Wound with shades of Aphex Twin, Autechre and assorted film music." Well, that description hooked me immediately, but I missed out on the code. However, I sampled the album on Bandcamp directly a few times, as well as a few of his others (Ye Gods is a side-project of Antoni Maiovvi, which itself is a pen-name), and liked them very much. I picked up the full collection of Ye Gods albums in one go for 30% off. Here's a stand-out track: On Earth by Ye Gods. So I guess this service works! Although I didn't actually get a free code in this instance, it promoted the artist, introduced me to something I really liked, and drove a sale.

15 January 2026

Dirk Eddelbuettel: RcppSpdlog 0.0.25 on CRAN: Microfix

Version 0.0.25 of RcppSpdlog arrived on CRAN just now, and will be uploaded to Debian and built for r2u shortly, along with a minimal refresh of the documentation site. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site. This release fixes a minuscule cosmetic issue from the previous release a week ago. We rely on two #defines that R sets to signal to spdlog that we are building in the R context (which matters for the R-specific logging sink, and picks up something Gabi added upon my suggestion at the very start of this package). But I now use the same #defines in Rcpp to check that we are building with R, and in this case Rcpp wrongly concludes that R headers have already been included and (incorrectly) nags about that. The solution is to add two #undef directives and proceed as normal (with Rcpp controlling and taking care of R header inclusion too), and that is what we do here. All good now, no nags from a false positive. The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.25 (2026-01-15)
  • Ensure the #define signaling an R build (needed with spdlog) is unset before including R headers, to avoid falsely triggering a message from Rcpp

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

14 January 2026

Dirk Eddelbuettel: RcppSimdJson 0.1.15 on CRAN: New Upstream, Some Maintenance

A brand new release 0.1.15 of the RcppSimdJson package is now on CRAN. RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is faster than CPU speed, as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon. This version updates to the current 4.2.4 upstream release. It also updates the RcppExports.cpp file with glue between C++ and R. We want to move away from using Rf_error() (as Rcpp::stop() is generally preferable). Packages (such as this one) that declare an interface have an actual Rf_error() call generated in RcppExports.cpp, which current Rcpp code generation can now protect. Long story short, a minor internal reason. The short NEWS entry for this release follows.

Changes in version 0.1.15 (2026-01-14)
  • simdjson was upgraded to version 4.2.4 (Dirk in #97)
  • RcppExports.cpp was regenerated to aid an Rcpp transition
  • Standard maintenance updates for continuous integration and URLs

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Dirk Eddelbuettel: gunsales 0.1.3 on CRAN: Maintenance

An update to the gunsales package is now on CRAN. As in the last update nine years ago (!!), the changes are mostly internal. An upcoming dplyr change requires a switch away from the old and soon-to-be-removed underscored verb forms; that was kindly addressed in an incoming pull request. We also updated the CI scripts a few times during this period as needed, switched to using Authors@R, and refreshed and updated a number of URL references. Courtesy of my CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Gunnar Wolf: The Innovation Engine: Government-funded Academic Research

This post is an unpublished review of The Innovation Engine: Government-funded Academic Research.
David Patterson does not need an introduction. He is the brain behind many of the inventions that have (repeatedly) shaped the computing industry over the past 40 years, so when he put forward an opinion article in Communications of the ACM targeting the current political waves in the USA, I could not avoid choosing it for this review. Patterson worked at a public university (University of California at Berkeley) between 1976 and 2016, and in this article he argues that government-funded academic research (GoFAR) allows for faster, more effective and freer development than private-sector-funded research would, putting his own career milestones forward as an example of how the public money that went to his research has easily been amplified by a factor of 10,000:1 for the country's economy, and 1,000:1 particularly for the government. Patterson illustrates this by citing five of the home-run research projects he started and pursued with government funding, eventually spinning them off as successful startups. He identifies principles for the projects he has led that are especially compatible with the way research works in university systems: multidisciplinary teams, demonstrative usable artifacts, seven- to ten-year impact horizons, five-year sunset clauses (to create urgency and to lower opportunity costs), physical proximity of collaborators, and leadership focused on team success rather than individual recognition. While it could be argued that it is easy to point at Patterson's work as a success story while he is by far not the average academic, the points he makes on how GoFAR has been fundamental for the advance of science and technology, but also of biology, medicine, and several other fields, are very clear.

13 January 2026

Simon Josefsson: Debian Libre Live 13.3.0 is released!

Following up on my initial announcement about Debian Libre Live, I am happy to report on continued progress and the release of Debian Libre Live version 13.3.0. Since both this and the previous 13.2.0 release are based on the stable Debian trixie release, there really isn't a lot of major change, but instead incremental minor progress on the installation process. Repeated installations have a tendency to reveal bugs, and we have resolved the apt sources list confusion for Calamares-based installations and a couple of other nits. This release is more polished and we are not aware of any known remaining issues (unlike earlier versions, which were released with known problems), although we conservatively regard the project as still in beta. A Debian Libre Live logo is needed before marking this as stable; any graphically talented takers? (Please base it on the Debian SVG upstream logo image.) We provide GNOME, KDE, and XFCE desktop images, as well as a text-only standard image, which match the regular Debian Live images with non-free software on them, but we also provide a slim variant which is merely 750MB compared to the 1.9GB standard image. The slim image can still start the Debian installer, and can still boot into a minimal live text-based system. The GNOME, KDE and XFCE desktop images feature the Calamares installer, and we have performed testing on a variety of machines. The standard and slim images do not have an installer in the running live system, but all images support a boot menu entry to start the installer. With this release we also extend our arm64 support to two tested platforms. The current list of successfully installed and supported systems is a very limited set of machines, but the diversity in CPUs and architectures should hopefully reflect well on a wide variety of commonly available machines.
Several of these machines are crippled (usually GPU or WiFi) without adding non-free software; complain to your hardware vendor and adapt your use-cases and future purchases. The images, with SHA256SUM checksums and a GnuPG signature, are on the 13.3.0 release page. Curious how the images were made? Fear not: the Debian Libre Live project README has documentation, the run.sh script is short, and the .gitlab-ci.yml CI/CD pipeline definition file is brief. Happy Libre OS hacking!

Aigars Mahinovs: Sedan experience (BMW i5)

Two years of midsize electric sedan experience This February (2026) marks a full 10 years since I started working for BMW, and a key employment bonus is the ability to drive a company car on special two-year leasing terms. Just before the new year 2026 started, I said goodbye to my latest company car. After driving the BMW i4, I switched to the BMW i5. In terms of power, it was a downgrade as I switched from the maximum power i4 M50 xDrive (all-wheel drive, 600 hp) to the i5 eDrive40 (rear-wheel drive, 340 hp). Did I regret that? Not for a single second! After driving 60,000 km in the last two years with the BMW i5, I was really sad to let it go -- it was the best car I have ever driven. Simple as that. Read more (16 min remaining to read)

Freexian Collaborators: Debian Contributions: dh-python development, Python 3.14 and Ruby 3.4 transitions, Surviving scraper traffic in Debian CI and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-12. Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

dh-python development, by Stefano Rivera In Debian we build our Python packages with the help of a debhelper-compatible tool, dh-python. Before starting the 3.14 transition (which would rebuild many packages) we landed some updates to dh-python to fix bugs and add features. This started a month of attention on dh-python, iterating through several bug fixes and a couple of unfortunate regressions. dh-python is used by almost all packages containing Python (over 5000). Most of these are very simple, but some are complex and use dh-python in unexpected ways. It's hard to keep almost any change (including obvious bug fixes) from causing some unexpected knock-on behaviour. There is a fair amount of complexity in dh-python, and some rather clever code, which can make it tricky to work on. All of this means that good QA is important. Stefano spent some time adding type annotations and specialized types to make it easier to see what the code is doing and to catch mistakes. This has already made work on dh-python easier. Now that Debusine has built-in repositories and debdiff support, Stefano could quickly test the effects of changes on many other packages. After each big change, he could upload dh-python to a repository, rebuild e.g. 50 Python packages with it, and see what differences appeared in the output. Reviewing the diffs is still a manual process, but can be improved. Stefano also did a small test of what it would take to replace direct setuptools setup.py calls with PEP-517 (pyproject-style) builds. There is more work to do here.

Python 3.14 transition, by Stefano Rivera (et al.) In December the transition to add Python 3.14 as a supported version started in Debian unstable. To do this, we update the list of supported versions in python3-defaults, and then start rebuilding modules with C extensions from the leaves inwards. This had already been tested in a PPA and in Ubuntu, so many of the biggest blocking compatibility issues with 3.14 had already been found and fixed. But there are always new issues to discover. Thanks to a number of people in the Debian Python team, we got through the first bit of the transition fairly quickly. There are still a number of open bugs that need attention and many failed tests blocking migration to testing. Python 3.14.1 was released just after we started the transition, and very soon after, a follow-up 3.14.2 release came out to address a regression. We then ran into another regression in Python 3.14.2.

Ruby 3.4 transition, by Lucas Kanashiro (et al.) The Debian Ruby team has just started preparing to move the default Ruby interpreter version to 3.4. The ruby3.4 source package is already available in experimental, and ruby-defaults has added support for Ruby 3.4. Lucas rebuilt all reverse dependencies against the new interpreter version and published the results. Lucas also reached out to some stakeholders to coordinate the work. The next steps are: 1) announcing the results to the whole team and asking for help to fix packages that fail to build against the new interpreter; 2) filing bugs against packages that FTBFS against Ruby 3.4 and are not fixed yet; 3) once the number of build failures against Ruby 3.4 is low, asking the Debian Release team to start the transition in unstable.

Surviving scraper traffic in Debian CI, by Antonio Terceiro Like most of the open web, Debian Continuous Integration has been struggling for a while to keep up with the insatiable hunger of data scrapers everywhere. Solving this involved a lot of trial and error; the final result seems to be stable, and consists of two parts. First, all Debian CI data pages, except the direct links to test log files (such as those provided by the Release Team's testing migration excuses), now require users to be authenticated before being accessed. This means that the Debian CI data is no longer publicly browseable, which is a bit sad. However, this is where we are now. Additionally, there is now a fail2ban-powered firewall-level access limitation for clients that display an abusive access pattern. This went through several iterations, some of which unfortunately blocked legitimate Debian contributors, but the current state seems to strike a good balance between blocking scrapers and not blocking real users. Please get in touch with the team on the #debci OFTC channel if you are affected by this.

A hybrid dependency solver for crossqa.debian.net, by Helmut Grohne crossqa.debian.net continuously cross builds packages from the Debian archive. Like Debian's native build infrastructure, it uses dose-builddebcheck to determine whether a package's dependencies can be satisfied before attempting a build. About one third of Debian's packages fail this check, so understanding the reasons is key to improving cross building. Unfortunately, dose-builddebcheck stops after reporting the first problem and does not display additional ones. To address this, a greedy solver implemented in Python now examines each build-dependency individually and can report multiple causes. dose-builddebcheck is still used as a fall-back when the greedy solver does not identify any problems. The report for bazel-bootstrap is a lengthy example.

rebootstrap, by Helmut Grohne Due to the changes suggested by Loongson earlier, rebootstrap now adds debhelper to its final installability test and builds a few more packages required for installing it. It also now uses a variant of build-essential that has been marked Multi-Arch: same (see foundational work from last year). This in turn made the use of a non-default GCC version more difficult and required more work to make it work for gcc-16 from experimental. Ongoing archive changes temporarily regressed building fribidi and dash. libselinux and groff have received patches for architecture specific changes and libverto has been NMUed to remove the glib2.0 dependency.

Miscellaneous contributions
  • Stefano did some administrative work on debian.social and debian.net instances and Debian reimbursements.
  • Stefano did routine updates of python-authlib, python-mitogen, xdot.
  • Stefano spent several hours discussing Debian's Python package layout with the PyPA upstream community. Debian has ended up with a very different on-disk installed Python layout than other distributions, and this continues to cause frustration in many communities that have to carry special workarounds to handle it. This ended up impacting cross builds, as Helmut discovered.
  • Raphaël set up Debusine workflows for the various backports repositories on debusine.debian.net.
  • Zulip is not yet in Debian (RFP in #800052), but Raphaël helped with the French translation as he is experimenting with that discussion platform.
  • Antonio performed several routine Salsa maintenance tasks, including fixing salsa-nm-sync, the service that synchronizes project members data from LDAP to Salsa, which had been broken since salsa.debian.org was upgraded to trixie.
  • Antonio deployed a new amd64 worker host for Debian CI.
  • Antonio did several DebConf technical and administrative bits, including adding support for custom check-in/check-out dates in the MiniDebConf registration module and publishing a call for bids for DebConf27.
  • Carles reviewed and submitted 14 Catalan translations using po-debconf-manager.
  • Carles improved po-debconf-manager: added a delete-package command, made show-information use properly formatted output (YAML), and it now also attaches the translation to bug reports whose merge request has been open for too long.
  • Carles investigated why some packages appeared in po-debconf-manager but not in the Debian l10n list. It turns out that some packages had debian/po/templates.pot (appearing in po-debconf-manager) but not the expected POTFILES.in file. He created a script to find which packages were in this or a similar situation and reported bugs.
  • Carles tested and documented how to set up voices (mbrola and festival) when using the Orca speech synthesizer. He commented on a few issues and possible improvements on the debian-accessibility list.
  • Helmut sent patches for 48 cross build failures and initiated discussions on how to deal with two non-trivial matters. Besides Python mentioned above, CMake introduced a cmake_pkg_config builtin which is not aware of the host architecture. He also forwarded a Meson patch upstream.
  • Thorsten uploaded a new upstream version of cups to fix a nasty bug that was introduced by the latest security update.
  • Along with many other Python 3.14 fixes, Colin fixed a tricky segfault in python-confluent-kafka after a helpful debugging hint from upstream.
  • Colin upstreamed an improved version of an OpenSSH patch we've been carrying since 2008 to fix misleading verbose output from scp.
  • Colin used Debusine to coordinate transitions for astroid and pygments, and wrote up the astroid case on his blog.
  • Emilio helped with various transitions, and provided a build fix for opencv for the ffmpeg 8 transition.
  • Emilio tested the GNOME updates for trixie proposed updates (gnome-shell, mutter, glib2.0).
  • Santiago helped to review the status of testing different build profiles in parallel on the same pipeline, using the test-build-profiles job. This makes it possible, for example, to simultaneously test build profiles such as nocheck and nodoc for the same git tree. Finally, Santiago provided MR !685 to fix the documentation.
  • Anupa prepared a bits post for the Outreachy interns announcement along with Tássia Camões Araújo and worked on publicity team tasks.

12 January 2026

Gunnar Wolf: Python Workout 2nd edition

This post is an unpublished review for Python Workout 2nd edition
Note: While I often post the reviews I write for Computing Reviews, this is a shorter review requested from me by Manning. They kindly invited me several months ago to be a reviewer for Python Workout, 2nd edition; after giving them my opinions, I am happy to widely recommend this book to interested readers. Python is a relatively easy programming language to learn, allowing you to start coding pretty quickly. However, there's a significant gap between being able to throw code together in Python and truly mastering the language. To write efficient, maintainable code that's easy for others to understand, practice is essential. And that's often where many of us get stuck. This book begins by stating that it is not designed to teach you Python (…) but rather to improve your understanding of Python and how to use it to solve problems. The author's structure and writing style are very didactic. Each chapter addresses a different aspect of the language: from the simplest (numbers, strings, lists) to the most challenging for beginners (iterators and generators), Lerner presents several problems for us to solve as examples, emphasizing the less obvious details of each aspect. I was invited as a reviewer of the preprint version of the book. I am now very pleased to recommend it to all interested readers. The author presents a pleasant and easy-to-read text, with a wealth of content that I am sure will improve the Python skills of all its readers.

Louis-Philippe Véronneau: Reducing the size of initramfs kernel images

In the past few years, the size of kernel images in Debian has been steadily growing. I don't see this as a problem per se, but it has been causing me trouble, as my /boot partition has become too small to accommodate two kernel images at the same time. Since I'm running Debian Unstable on my personal systems and keep them updated with unattended-upgrades, this meant each (frequent) kernel upgrade triggered an error like this one:
update-initramfs: failed for /boot/initrd.img-6.17.11+deb14-amd64 with 1.
dpkg: error processing package initramfs-tools (--configure):
 installed initramfs-tools package post-installation script subprocess returned
 error exit status 1
Errors were encountered while processing:
 initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)
This would in turn break the automated upgrade process and require me to manually delete the currently running kernel (which works, but isn't great) to complete the upgrade. The "obvious" solution would have been to increase the size of my /boot partition to something larger than the default 456M. Since my systems use full-disk encryption and LVM, this isn't trivial and would have required me to play Tetris and swap files back and forth using another drive. Another solution proposed by anarcat was to migrate to systemd-boot (I'm still using grub), use Unified Kernel Images (UKI) and merge the /boot and /boot/efi partitions. Since I already have a bunch of configurations using grub and I am not too keen on systemd taking over all the things on my computer, I was somewhat reluctant. As my computers are all configured by Puppet, I could of course have done a complete system reinstallation, but again, this was somewhat more involved than what I wanted it to be. After looking online for a while, I finally stumbled on this blog post by Neil Brown detailing how to shrink the size of the initramfs images. With MODULES=dep my images shrunk from 188M to 41M, fixing my issue. Thanks Neil! I was somewhat worried removing kernel modules would break something on my systems, but so far, I only had to manually load the i2c_dev module, as I need it to manage my home monitor's brightness using ddcutil.
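For reference, the change described in the blog post boils down to a one-line edit in the initramfs-tools configuration followed by regenerating the images. A minimal sketch, assuming the standard Debian paths:

```
# /etc/initramfs-tools/initramfs.conf
#
# MODULES=most (the default) includes every module that might be needed
# on any hardware; MODULES=dep includes only the modules the currently
# running system actually depends on, producing a much smaller image.
MODULES=dep
```

After saving the change, run update-initramfs -u -k all (as root) to regenerate the initramfs for all installed kernels. Note that MODULES=dep images are tied to the hardware they were generated on, which is why modules for hot-plugged devices (like the i2c_dev case above) may need to be loaded or listed manually.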

Dirk Eddelbuettel: RcppAnnoy 0.0.23 on CRAN: Several Updates

A new release, now at version 0.0.23, of RcppAnnoy has arrived on CRAN, just a little short of two years since the previous release. RcppAnnoy is the Rcpp-based R integration of the nifty Annoy library by Erik Bernhardsson. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours, originally developed to drive the Spotify music discovery algorithm. It had all the buzzwords already a decade ago: it is one of the algorithms behind (drum roll) vector search, as it finds approximate matches very quickly and also allows persisting the data. This release contains three contributed pull requests (covering a new distance metric, a new demo, and quieter compilation), some changes to documentation, and, last but not least, general polish including letting the vignette now use the Rcpp::asis builder. Details of the release follow based on the NEWS file.

Changes in version 0.0.23 (2026-01-12)
  • Add dot product distance metrics (Benjamin James in #78)
  • Apply small polish to the documentation (Dirk closing #79)
  • A new demo() has been added (Samuel Granjeaud in #79)
  • Switch to Authors@R in DESCRIPTION
  • Several updates to continuous integration and README.md
  • Small enhancements to package help files
  • Updates to vignettes and references
  • Vignette now uses Rcpp::asis builder (Dirk in #80)
  • Switch one macro to a function to avoid a compiler nag (Amos Elberg in #81)

Courtesy of my CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Daniel Lange: Resizing Xterm fonts with Ctrl+<plus> and Ctrl+<minus>

Xterm misses the convenient option to resize the fonts with Ctrl+<plus> and Ctrl+<minus> like xfce4-terminal or gnome-terminal do out of the box. This feature can be added on Debian systems by dropping a configuration snippet into /etc/X11/Xresources/x11-xterm-fontsize:
! Allow XTerm font size to be changed with Ctrl+<plus> and <minus>
XTerm.vt100.translations: #override \n\
Ctrl <Key> minus: smaller-vt-font() \n\
Ctrl <Key> plus: larger-vt-font()
Any new X session will inherit this configuration, and Ctrl+<plus> and Ctrl+<minus> will adjust the font size (taking the window size along). The font sizes that Xterm iterates through can be viewed in the Ctrl+<right click> context menu: Screenshot of an Xterm window with the context menu displayed NB: The context menu allows switching fonts on systems where the above snippet has not (yet) been installed, so it is good enough for a one-off. Credits: Stack Overflow/Ask Ubuntu, Matthew Hoener.

Daniel Kahn Gillmor: AI as a Compression Problem

A recent article in The Atlantic makes the case that very large language models effectively contain much of the works they're trained on. This article is an attempt to popularize the insights in the recent academic paper Extracting books from production language models from Ahmed et al. The authors of the paper demonstrate convincingly that well-known copyrighted textual material can be extracted from the chatbot interfaces of popular commercial LLM services. The Atlantic article cites a podcast quote about the Stable Diffusion AI image-generator model, saying "We took 100,000 gigabytes of images and compressed it to a two-gigabyte file that can re-create any of those and iterations of those". By analogy, this suggests we might think of LLMs (which work on text, not the images handled by Stable Diffusion) as a form of lossy textual compression. The entire text of Moby Dick, the canonical Big American Novel, is merely 1.2MiB uncompressed (and less than 0.4MiB losslessly compressed with bzip2 -9). It's not surprising to imagine that a model with hundreds of billions of parameters might contain copies of these works. Warning: The next paragraph contains fuzzy math with no real concrete engineering practice behind it! Consider a hypothetical model with 100 billion parameters, where each parameter is stored as a 16-bit floating point value. The model weights would take 200 GB of storage. If you were to fill the parameter space only with losslessly compressed copies of books like Moby Dick, you could still fit half a million books, more than anyone can read in a lifetime. And lossy compression is typically orders of magnitude less in size than lossless compression, so we're talking about millions of works effectively encoded, with the acceptance of some artifacts being injected in the output. I first encountered this "compression" view of AI nearly three years ago, in Ted Chiang's insightful ChatGPT is a Blurry JPEG of the Web.
I was surprised that The Atlantic article didn't cite Chiang's piece. If you haven't read Ted Chiang, i strongly recommend his work, and this piece is a great place to start. Chiang aside, the more recent writing that focuses on the idea of compressed works being "contained" in the model weights seems to be used by people interested in wielding some sort of copyright claims against the AI companies that maintain or provide access to these models. There are many, many problems with AI today, but attacking AI companies based on copyright concerns seems similar to going after Al Capone for tax evasion. We should be much more concerned with the effect these projects have on cultural homogeneity, mental health, labor rights, privacy, and social control than whether they're violating copyright in some specific instance.
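The back-of-envelope arithmetic from the "fuzzy math" paragraph above can be written out in a few lines. The parameter count, bytes per parameter, and the bzip2-compressed size of Moby Dick are the figures quoted in the post, not measurements of any actual model:

```python
# Hypothetical model: 100 billion parameters, 16 bits (2 bytes) each.
params = 100_000_000_000
weight_bytes = params * 2  # 200 GB of weights

# Moby Dick losslessly compressed with bzip2 -9: just under 0.4 MiB.
book_bytes = int(0.4 * 1024 * 1024)

# How many such compressed books would fit in the parameter space?
books = weight_bytes // book_bytes
print(f"{weight_bytes:,} bytes of weights hold ~{books:,} compressed books")
```

This lands at roughly half a million books, matching the estimate in the text; the "millions of works" figure then follows from lossy compression being considerably more compact than lossless.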

11 January 2026

Patryk Cisek: Choosing Secrets Manager for Homelab

For a few years, I've been managing the configuration of a bunch of self-hosted services using Ansible Playbooks. Each playbook needed at least one secret: the sudo password. Many of them needed to manage more (e.g. SMTP credentials for email notifications). Because I've always been paranoid about security, I stored most of those secrets in Ansible Vault, the password for which is stored in only one location: my memory. Therefore, each time I ran any of those playbooks, I'd have to enter two passwords interactively: the sudo password and the Ansible Vault password.

Dirk Eddelbuettel: RApiDatetime 0.0.10 on CRAN: Maintenance

A new maintenance release of our RApiDatetime package is now on CRAN, coming just about two years after the previous maintenance release. RApiDatetime provides a number of entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representation, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. Lastly, asDatePOSIXct converts to a date type. All these functions are rather useful, but were not previously exported by R for C-level use by other packages. Which this package aims to change. This release avoids the use of two constructs now outlawed under R-devel, and makes a number of other smaller maintenance updates. Just like the previous release, we are at OS_type: unix, meaning there will not be any Windows builds at CRAN. If you would like that to change, and ideally can work on the Windows portion, do not hesitate to get in touch. Details of the release follow based on the NEWS file.

Changes in RApiDatetime version 0.0.10 (2026-01-11)
  • Minor maintenance for continuous integration files, README.md
  • Switch to Authors@R in DESCRIPTION
  • Use Rf_setAttrib with R 4.5.0 or later

Courtesy of my CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Dirk Eddelbuettel: RProtoBuf 0.4.25 on CRAN: Mostly Maintenance

A new maintenance release 0.4.25 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers ("ProtoBuf") data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language- and operating-system-agnostic protocol. This release brings an update to a header use forced by R-devel, the usual set of continuous integration updates, and a large overhaul of URLs as CRAN is now running more powerful checks. As a benefit, the three vignettes have all been refreshed. They are now also delivered via the new Rcpp::asis() vignette builder that permits pre-made PDF files to be used easily. The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.25 (2026-01-11)
  • Several routine updates to continuous integration script
  • Include ObjectTable.h instead of Callback.h to accommodate R 4.6.0
  • Switch vignettes to Rcpp::asis driver, update references

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the quick overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Next.