Search Results: "cas"

20 January 2026

Sahil Dhiman: Conferences, why?

Back in December, I was working to help organize multiple different conferences. One has already happened; the rest are still works in progress. That's when the thought struck me: why so many conferences, and why do I work for them? I have been fairly active in the scene since 2020. For most conferences, I usually arrive in the city late on the previous day and usually leave on the day the conference closes. Conferences for me are the place to meet friends and new folks and hear about them, their work, new developments, and what's happening in their interest zones. I feel naturally happy talking to folks. In this case, folks inspire me to work. Nothing can replace a passionate technical and social discussion, which stretches way into dinner parties and later. For most conferences now, I just show up without a set role (DebConf is probably an exception to it). It usually involves talking to folks, suggesting what needs to be done, doing a bit of it myself, and finishing some last-minute stuff during the actual thing. Having more of these conferences and helping make them happen naturally gives everyone more places to come together, meet distant friends, talk, and work on something. No doubt, one reason for all these conferences is evangelism for, let's say, Free Software, OpenStreetMap, Debian etc., which is good and needed for the pipeline. But for me, the primary reason would always be meeting folks.

19 January 2026

Francesco Paolo Lovergine: A Terramaster NAS with Debian, take two.

After experimenting at home, the very first professional-grade NAS from Terramaster arrived at work too, with 12 HDD bays and possibly a pair of M.2 NVMe cards. In this case, I again installed a plain Debian distribution, but HDD monitoring required some configuration adjustments to run smartd properly. A decent approach to data safety is to run regularly scheduled short and long SMART tests on all disks to detect potential damage. Running such tests on all disks at once isn't ideal, so I set up a script to create a staggered configuration and test multiple groups of disks at different times. Note that it is mandatory to re-read the devices at each reboot because their names and order can change. Of course, the same principle (short/long tests at regular intervals along the week) should be applied for a simpler configuration, as in the case of my home NAS with a pair of RAID1 devices. What follows is a simple script to create a staggered smartd.conf at boot time:
#!/bin/bash
#
# Save this as /usr/local/bin/create-smartd-conf.sh
#
# Dynamically generate smartd.conf with staggered SMART test scheduling
# at boot time based on discovered ATA devices
# HERE IS A LIST OF DIRECTIVES FOR THIS CONFIGURATION FILE.
# PLEASE SEE THE smartd.conf MAN PAGE FOR DETAILS
#
#   -d TYPE Set the device type: ata, scsi[+TYPE], nvme[,NSID],
#           sat[,auto][,N][+TYPE], usbcypress[,X], usbjmicron[,p][,x][,N],
#           usbprolific, usbsunplus, sntasmedia, sntjmicron[,NSID], sntrealtek,
#           ... (platform specific)
#   -T TYPE Set the tolerance to one of: normal, permissive
#   -o VAL  Enable/disable automatic offline tests (on/off)
#   -S VAL  Enable/disable attribute autosave (on/off)
#   -n MODE No check if: never, sleep[,N][,q], standby[,N][,q], idle[,N][,q]
#   -H      Monitor SMART Health Status, report if failed
#   -s REG  Do Self-Test at time(s) given by regular expression REG
#   -l TYPE Monitor SMART log or self-test status:
#           error, selftest, xerror, offlinests[,ns], selfteststs[,ns]
#   -l scterc,R,W  Set SCT Error Recovery Control
#   -e      Change device setting: aam,[N off], apm,[N off], dsn,[on off],
#           lookahead,[on off], security-freeze, standby,[N off], wcache,[on off]
#   -f      Monitor 'Usage' Attributes, report failures
#   -m ADD  Send email warning to address ADD
#   -M TYPE Modify email warning behavior (see man page)
#   -p      Report changes in 'Prefailure' Attributes
#   -u      Report changes in 'Usage' Attributes
#   -t      Equivalent to -p and -u Directives
#   -r ID   Also report Raw values of Attribute ID with -p, -u or -t
#   -R ID   Track changes in Attribute ID Raw value with -p, -u or -t
#   -i ID   Ignore Attribute ID for -f Directive
#   -I ID   Ignore Attribute ID for -p, -u or -t Directive
#   -C ID[+] Monitor [increases of] Current Pending Sectors in Attribute ID
#   -U ID[+] Monitor [increases of] Offline Uncorrectable Sectors in Attribute ID
#   -W D,I,C Monitor Temperature D)ifference, I)nformal limit, C)ritical limit
#   -v N,ST Modifies labeling of Attribute N (see man page)
#   -P TYPE Drive-specific presets: use, ignore, show, showall
#   -a      Default: -H -f -t -l error -l selftest -l selfteststs -C 197 -U 198
#   -F TYPE Use firmware bug workaround:
#           none, nologdir, samsung, samsung2, samsung3, xerrorlba
#   -c i=N  Set interval between disk checks to N seconds
#    #      Comment: text after a hash sign is ignored
#    \      Line continuation character
# Attribute ID is a decimal integer 1 <= ID <= 255
# except for -C and -U, where ID = 0 turns them off.
set -euo pipefail
# Test schedule configuration
BASE_SCHEDULE="L/../../6"  # Long test on Saturdays
TEST_HOURS=(01 03 05 07)   # 4 time slots: 1am, 3am, 5am, 7am
DEVICES_PER_GROUP=3
main() {
    # Get array of device names (e.g., sda, sdb, sdc)
    mapfile -t devices < <(ls -l /dev/disk/by-id/ | grep ata | awk '{print $11}' | grep sd | cut -d/ -f3 | sort -u)
    if [[ ${#devices[@]} -eq 0 ]]; then
        exit 1
    fi
    # Start building config file
    cat << EOF
# smartd.conf - Auto-generated at boot
# Generated: $(date '+%Y-%m-%d %H:%M:%S')
#
# Staggered SMART test scheduling to avoid concurrent disk load
# Long tests run on Saturdays at different times per group
#
EOF
    # Process devices into groups
    local group=0
    local count_in_group=0
    for i in "$ !devices[@] "; do
        local dev="$ devices[$i] "
        local hour="$ TEST_HOURS[$group] "
        # Add group header at start of each group
        if [[ $count_in_group -eq 0 ]]; then
            echo ""
            echo "# Group $((group + 1)) - Tests at $ hour :00 on Saturdays"
        fi
        # Add device entry
        #echo "/dev/$ dev  -a -o on -S on -s ($ BASE_SCHEDULE /$ hour ) -m root"
        echo "/dev/$ dev  -a -o on -S on -s (L/../../6/$ hour ) -s (S/../.././$(((hour + 12) % 24))) -m root"
        # Move to next group when current group is full
        count_in_group=$((count_in_group + 1))
        if [[ $count_in_group -ge $DEVICES_PER_GROUP ]]; then
            count_in_group=0
            group=$(((group + 1) % $ #TEST_HOURS[@] ))
        fi
    done
 
main "$@"
To run such a script at boot, add a unit file to the systemd configuration.
sudo systemctl edit --force --full regenerate-smartd-conf.service
sudo systemctl enable regenerate-smartd-conf.service
Where the unit service is the following:
[Unit]
Description=Generate smartd.conf with staggered SMART test scheduling
# Wait for all local filesystems and udev device detection
After=local-fs.target systemd-udev-settle.service
Before=smartd.service
Wants=systemd-udev-settle.service
DefaultDependencies=no
[Service]
Type=oneshot
# Only generate the config file, don't touch smartd here
ExecStart=/bin/bash -c '/usr/local/bin/create-smartd-conf.sh > /etc/smartd.conf'
StandardOutput=journal
StandardError=journal
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
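After enabling the unit and rebooting, the chain can be sanity-checked with standard systemd and smartmontools commands (the device name here is just an example):
systemctl status regenerate-smartd-conf.service smartd.service
grep '^/dev/' /etc/smartd.conf
smartctl -l selftest /dev/sda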

Russell Coker: Furilabs FLX1s

The Aim I have just got a Furilabs FLX1s [1], which is a phone running a modified version of Debian. I want a phone where I control, and can observe and debug, all the apps. Android is very good at what it does and there are security-focused forks of Android with a lot of potential, but for my use a Debian phone is what I want. The FLX1s is not going to be my ideal phone; I am evaluating it for use as a daily driver until a phone that meets my ideal criteria is built. In this post I aim to provide information to potential users about what it can do, how it does it, and how to get the basic functions working. I also evaluate how well it meets my usage criteria. I am not anywhere near an average user. I don't think an average user would ever even see one unless a more technical relative showed one to them. So while this phone could be used by an average user, I am not evaluating it on that basis. But of course the features of the GUI that make a phone usable for an average user will allow a developer to rapidly get past the beginning stages and into more complex stuff. Features The Furilabs FLX1s [1] is a phone designed to run FuriOS, a slightly modified version of Debian. The purpose of this is to run Debian instead of Android on a phone. It has switches to disable the camera, phone communication, and microphone (similar to the Librem 5), but the one to disable phone communication doesn't turn off Wifi; the only other phone I know of with such switches is the Purism Librem 5. It has a 720*1600 display which is only slightly better than the 720*1440 display in the Librem 5 and PinePhone Pro. This doesn't compare well to the OnePlus 6 from early 2018 with 2280*1080 or the Note9 from late 2018 with 2960*1440, which are both phones that I've run Debian on. The current price is $US499, which isn't that good when compared to the latest Google Pixel series: a Pixel 10 costs $US649, has a 2424*1080 display, and also has 12G of RAM while the FLX1s only has 8G. Another annoying thing is how rounded the corners are; it seems that round corners that cut off the content are a standard practice nowadays. In my collection of phones the latest one I found with hard right angles on the display was a Huawei Mate 10 Pro, which was released in 2017. The corners are rounder than the Note 9's, and this annoys me because the screen is not high resolution by today's standards, so losing the corners matters. The default installation is Phosh (the GNOME shell for phones) and it is very well configured. Based on my experience with older phone users I think I could give a phone with this configuration to a relative in the 70+ age range who has minimal computer knowledge and they would be happy with it. Additionally I could set it up to allow ssh login, and instead of going through the phone-support routine of trying to describe every GUI setting to click on, based on a web page describing menus for the version of Android they are running, I could just ssh in and run diff on the .config directory to find out what they changed. Furilabs have done a very good job of setting up the default configuration. While Debian developers deserve a lot of credit for packaging the apps, the Furilabs people have chosen a good set of default apps to install to get it going and appear to have made some noteworthy changes to some of them.
Droidian The OS is based on Android drivers (using the same techniques as Droidian [2]) and the storage device has the huge number of partitions you expect from Android, as well as a 110G Ext4 filesystem for the main OS. The first issue with the Droidian approach of using an Android kernel and containers for user space code to deal with drivers is that it doesn't work that well. There are 3 D state processes (uninterruptible sleep, which usually means a kernel bug if the process remains in that state) after booting and doing nothing special. My tests running Droidian on the Note 9 also had D state processes; in this case they are D state kernel threads (I can't remember if the Note 9 had regular processes or kernel threads stuck in D state). It is possible for a system to have full functionality in spite of some kernel threads in D state, but generally it's a symptom of things not working as well as you would hope. The design of Droidian is inherently fragile. You use a kernel and user space code from Android and then use Debian for the rest. You can't do everything the Android way (with the full OS updates etc.) and you also can't do everything the Debian way. The TOW Boot functionality in the PinePhone Pro is really handy for recovery [3]; it allows the internal storage to be accessed as a USB mass storage device. The full Android setup with ADB has some OK options for recovery, but part Android and part Debian has fewer options. While it probably is technically possible to do the same things in regard to OS repair and reinstall, the fact that it's different from most other devices means that fixes can't be done in the same way. Applications GUI The system uses Phosh and Phoc, the GNOME system for handheld devices. It's a very different UI from Android; I prefer Android, but it is usable with Phosh. IM Chatty works well for Jabber (XMPP) in my tests. It supports Matrix, which I didn't test because I don't desire the same program doing Matrix and Jabber, and because Matrix is a heavy protocol which establishes new security keys for each login, so I don't want to keep logging in on new applications. Chatty also does SMS but I couldn't test that without the SIM caddy. I use Nheko for Matrix, which has worked very well for me on desktops and laptops running Debian. Email I am currently using Geary for email. It works reasonably well but is lacking proper management of folders, so I can't just subscribe to the important email on my phone so that bandwidth isn't wasted on less important email (there is a GNOME gitlab issue about this; see the Debian Wiki page about Mobile apps [4]). Music Music playing isn't a noteworthy thing for a desktop or laptop, but a good music player is important for phone use. The Lollypop music player generally does everything you expect, along with support for all the encoding formats including FLAC. A major limitation of most Android music players seems to be lack of support for some of the common encoding formats. Lollypop has its controls for pause/play and going forward and backward one track on the lock screen. Maps The installed map program is gnome-maps, which works reasonably well. It gets directions via the Graphhopper API [5]. One thing we really need is a FOSS replacement for Graphhopper in GNOME Maps. Delivery and Unboxing I received my FLX1s on the 13th of Jan [1]. I had paid for it on the 16th of Oct but hadn't received the email with the confirmation link, so the order had been put on hold.
But after I contacted support about that on the 5th of Jan they rapidly got it to me, which was good. They also gave me a free case and screen protector to apologise. I don't usually use screen protectors, but in this case it might be useful as the edges of the case don't even extend 0.5mm above the screen, so if it falls face down the case won't help much. When I got it there was an open space at the bottom where the caddy for SIMs is supposed to be, so I couldn't immediately test VoLTE functionality. The contact form on their web site wasn't working when I tried to report that, and the email for support was bouncing. Bluetooth As a test of Bluetooth I connected it to my Nissan LEAF, which worked well for playing music, and I connected it to several Bluetooth headphones. My Thinkpad running Debian/Trixie doesn't connect to the LEAF or to headphones which have worked on previous laptops running Debian and Ubuntu. A friend's laptop running Debian/Trixie also wouldn't connect to the LEAF, so I suspect a bug in Trixie; I need to spend more time investigating this. Wifi Currently 5GHz wifi doesn't work; this is a software bug that the Furilabs people are working on. 2.4GHz wifi works fine. I haven't tested running a hotspot due to being unable to get 4G working, as they haven't yet shipped me the SIM caddy. Docking This phone doesn't support DP Alt-mode or Thunderbolt docking, so it can't drive an external monitor. This is disappointing; Samsung phones and tablets have supported such things since long before USB-C was invented. Samsung DeX is quite handy for Android devices and that type of feature is much more useful on a device running Debian than on an Android device. Camera The camera works reasonably well on the FLX1s. Until recently the camera on the Librem 5 didn't work, and the camera on my PinePhone Pro currently doesn't work. Here are samples of the regular camera and the selfie camera on the FLX1s and the Note 9. I think this shows that the camera is pretty decent. The selfie looks better, and the front camera is worse for the relatively close photo of a laptop screen; taking photos of computer screens is an important part of my work, but I can probably work around that. I wasn't assessing this camera to find out if it's great, just to find out if I have the sorts of problems I had before, and it just worked. The Samsung Galaxy Note series of phones has always had decent specs including good cameras. Even though the Note 9 is old, comparing to it is a respectable performance. The lighting was poor for all photos. FLX1s
Note 9
Power Use In 93 minutes of having the PinePhone Pro, Librem 5, and FLX1s online with open ssh sessions from my workstation, the PinePhone Pro went from 100% battery to 26%, the Librem 5 went from 95% to 69%, and the FLX1s went from 100% to 99%. The battery discharge rate of them was reported as 3.0W, 2.6W, and 0.39W respectively. Based on having a 16.7Wh battery, 93 minutes of use should have been close to 4% battery use, but in any case all measurements make it clear that the FLX1s will have a much longer battery life. That matches the informal measurement of just putting my fingers on the phones and feeling the temperature (the FLX1s felt cool and the others felt hot). The PinePhone Pro and the Librem 5 have an optional Caffeine mode which I enabled for this test; without that enabled the phone goes into a sleep state and disconnects from Wifi. So those phones would use much less power with caffeine mode disabled, but they also couldn't get fast response to notifications etc. I found the option to enable a Caffeine mode switch on the FLX1s, but the power use was reported as being the same both with and without it. Charging One problem I found with my phone is that in every case it takes 22 seconds to negotiate power. Even when using straight USB charging (no BC or PD) it doesn't draw any current for 22 seconds. When I connect it, it will stay at 5V, varying between 0W and 0.1W (current rounded off to zero), for 22 seconds or so and then start charging. After the 22 second delay the phone will make the tick sound indicating that it's charging and the power meter will measure that it's drawing some current. I added the table from my previous post about phone charging speed [6] with an extra row for the FLX1s. For charging from my PC USB ports the results were the worst ever: the port that does BC did not work at all, it was looping trying to negotiate; after a 22 second negotiation delay the port would turn off. The non-BC port gave only 2.4W, which matches the 2.5W given by the spec for a High-power device, which is what that port is designed to give. In a discussion on the Purism forum about the Librem 5 charging speed, one of their engineers told me that the reason why their phone would draw 2A from that port was that the cable was identifying itself as a USB-C port, not a High-power device port. But for some reason, out of the 7 phones I tested, the FLX1s and the One Plus 6 are the only ones to limit themselves to what the port is apparently supposed to do. Also the One Plus 6 charges slowly on every power supply, so I don't know if it is obeying the spec or just sucking. On a cheap AliExpress charger the FLX1s gets 5.9V and on a USB battery it gets 5.8V. Out of all 42 combinations of device and charger I tested, these were the only ones to involve more than 5.1V but less than 9V. I welcome comments suggesting an explanation. The case that I received has a hole for the USB-C connector that isn't wide enough for the plastic surrounds on most of my USB-C cables (including the Dell dock). Also, making a connection requires a fairly deep insertion (deeper than the One Plus 6 or the Note 9). So without adjustment I have to take the case off to charge it. It's no big deal to adjust the hole (I have done it with other cases) but it's an annoyance.
Phone       | Top z640        | Bottom Z640     | Monitor         | Ali Charger     | Dell Dock       | Battery         | Best            | Worst
FLX1s       | FAIL            | 5.0V 0.49A 2.4W | 4.8V 1.9A 9.0W  | 5.9V 1.8A 11W   | 4.8V 2.1A 10W   | 5.8V 2.1A 12W   | 5.8V 2.1A 12W   | 5.0V 0.49A 2.4W
Note9       | 4.8V 1.0A 5.2W  | 4.8V 1.6A 7.5W  | 4.9V 2.0A 9.5W  | 5.1V 1.9A 9.7W  | 4.8V 2.1A 10W   | 5.1V 2.1A 10W   | 5.1V 2.1A 10W   | 4.8V 1.0A 5.2W
Pixel 7 pro | 4.9V 0.80A 4.2W | 4.8V 1.2A 5.9W  | 9.1V 1.3A 12W   | 9.1V 1.2A 11W   | 4.9V 1.8A 8.7W  | 9.0V 1.3A 12W   | 9.1V 1.3A 12W   | 4.9V 0.80A 4.2W
Pixel 8     | 4.7V 1.2A 5.4W  | 4.7V 1.5A 7.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W   | 9.1V 2.7A 24W   | 4.7V 1.2A 5.4W
PPP         | 4.7V 1.2A 6.0W  | 4.8V 1.3A 6.8W  | 4.9V 1.4A 6.6W  | 5.0V 1.2A 5.8W  | 4.9V 1.4A 5.9W  | 5.1V 1.2A 6.3W  | 4.8V 1.3A 6.8W  | 5.0V 1.2A 5.8W
Librem 5    | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 4.8V 2.4A 11.2W | 12V 0.48A 5.8W  | 5.0V 0.56A 2.7W | 5.1V 2.0A 10W   | 4.8V 2.4A 11.2W | 5.0V 0.56A 2.7W
OnePlus6    | 5.0V 0.51A 2.5W | 5.0V 0.50A 2.5W | 5.0V 0.81A 4.0W | 5.0V 0.75A 3.7W | 5.0V 0.77A 3.7W | 5.0V 0.77A 3.9W | 5.0V 0.81A 4.0W | 5.0V 0.50A 2.5W
Best        | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W   |                 |
Conclusion The Furilabs support people are friendly and enthusiastic, but my customer experience wasn't ideal. It was good that they could quickly respond to my missing order status and the missing SIM caddy (which I still haven't received but believe is in the mail), but it would be better if such things just didn't happen. The phone is quite user friendly and could be used by a novice. I paid $US577 for the FLX1s, which is $AU863 by today's exchange rates. For comparison I could get a refurbished Pixel 9 Pro Fold for $891 from Kogan (the major Australian mail-order company for technology) or a refurbished Pixel 9 Pro XL for $842. The Pixel 9 series has security support until 2031, which is probably longer than you can expect a phone to be used without being broken. So a phone with a much higher resolution screen that's only one generation behind the latest high end phones, and is refurbished, will cost less. For a brand new phone, a Pixel 8 Pro which has security updates until 2030 costs $874 and a Pixel 9A which has security updates until 2032 costs $861. Doing what the Furilabs people have done is not a small project. It's a significant amount of work and the prices of their products need to cover that. I'm not saying that the prices are bad, just that economies of scale and the large quantity of older stock make the older Google products quite good value for money. The new Pixel phones of the latest models are unreasonably expensive. The Pixel 10 is selling new from Google for $AU1,149, which I consider a ridiculous price that I would not pay given the market for used phones etc. If I had a choice of $1,149 or a feature phone, I'd pay $1,149. But the FLX1s for $863 is a much better option for me. If all I had to choose from was a new Pixel 10 or a FLX1s for my parents, I'd get them the FLX1s. For a FOSS developer a FLX1s could be a mobile test and development system which could be lent to a relative when their main phone breaks and the replacement is on order. It seems to be fit for use as a commodity phone. Note that I give this review on the assumption that SMS and VoLTE will just work; I haven't tested them yet. The UI on the FLX1s is functional and easy enough for a new user while allowing an advanced user to do the things they desire. I prefer the Android style, and the Plasma Mobile style is closer to Android than Phosh is, but changing it is something I can do later. Generally I think that the differences between UIs matter more on a desktop environment that could be used for more complex tasks than on a phone, which limits what can be done by the size of the screen. I am comparing the FLX1s to Android phones on the basis of what technology is available. But most people who would consider buying this phone will compare it to the PinePhone Pro and the Librem 5, as they have similar uses. The FLX1s beats both those phones handily in terms of battery life and of having everything just work. But it has the most non-free software of the three, and the people who want the $2000 Librem 5 that's entirely made in the US won't want the FLX1s. This isn't the destination for Debian-based phones, but it's a good step on the way to it and I don't think I'll regret this purchase.

Dima Kogan: mrcal 2.5 released!

mrcal 2.5 is out: the release notes. Once again, this is mostly a bug-fix release en route to the big new features coming in 3.0. One cool thing is that these tools have now matured enough to no longer be considered experimental. They have been used with great success in lots of contexts across many different projects and organizations. Some highlights: Some of the above is new, and not yet fully polished and documented and tested, but it works. In mrcal 2.5, most of the implementation of some new big features is written and committed, but it's still incomplete. The new stuff is there, but is lightly tested and documented. This will be completed eventually in mrcal 3.0: mrcal is quite good already, and will be even better in the future. Try it today!

17 January 2026

Simon Josefsson: Backup of S3 Objects Using rsnapshot

I've been using rsnapshot to take backups of around 10 servers and laptops for well over 15 years, and it is a remarkably reliable tool that has proven itself many times. Rsnapshot uses rsync over SSH and maintains a temporal hard-link file pool. Once rsnapshot is configured and running on the backup server, you get a hardlink farm with directories like this for the remote server:
/backup/serverA.domain/.sync/foo
/backup/serverA.domain/daily.0/foo
/backup/serverA.domain/daily.1/foo
/backup/serverA.domain/daily.2/foo
...
/backup/serverA.domain/daily.6/foo
/backup/serverA.domain/weekly.0/foo
/backup/serverA.domain/weekly.1/foo
...
/backup/serverA.domain/monthly.0/foo
/backup/serverA.domain/monthly.1/foo
...
/backup/serverA.domain/yearly.0/foo
I can browse and rescue files easily, going back in time when needed. The rsnapshot project README explains more; there is a long rsnapshot HOWTO, although I usually find the rsnapshot man page the easiest to digest. I have stored multi-TB Git-LFS data on GitLab.com for some time. The yearly renewal is coming up, and the price for Git-LFS storage on GitLab.com is now excessive (~$10,000/year). I have reworked my work-flow and finally migrated debdistget to only store Git-LFS stubs on GitLab.com and push the real files to S3 object storage. The cost for this is barely measurable; I have yet to run into the 25/month warning threshold. But how do you backup stuff stored in S3? For some time, my S3 backup solution has been to run the minio-client mirror command to download all S3 objects to my laptop, and rely on rsnapshot to keep backups of this. While 4TB NVMEs are relatively cheap, I've felt for quite some time that this disk and network churn on my laptop is unsatisfactory. What is a better approach? I find S3 hosting sites fairly unreliable by design. Only a couple of clicks in your web browser and you have dropped 100TB of data. Or someone else has, after stealing your plaintext-equivalent cookie. Thus, I haven't really felt comfortable using any S3-based backup option. I prefer to self-host, although continuously running a mirror job is not sufficient: if I accidentally drop the entire S3 object store, my mirror run will remove all files locally too. The rsnapshot approach that allows going back in time and having data on self-managed servers feels superior to me. What if we could use rsnapshot with an S3 client instead of rsync? Someone else asked about this several years ago, and the suggestion was to use the fuse-based s3fs, which sounded unreliable to me. After some experimentation, working around some hard-coded assumptions in the rsnapshot implementation, I came up with a small configuration pattern and a wrapper tool to implement what I desired. Here is my configuration snippet:
cmd_rsync    /backup/s3/s3rsync
rsync_short_args    -Q
rsync_long_args    --json --remove
lockfile    /backup/s3/rsnapshot.pid
snapshot_root    /backup/s3
backup    s3:://hetzner/debdistget-gnuinos    ./debdistget-gnuinos
backup    s3:://hetzner/debdistget-tacos  ./debdistget-tacos
backup    s3:://hetzner/debdistget-diffos ./debdistget-diffos
backup    s3:://hetzner/debdistget-pureos ./debdistget-pureos
backup    s3:://hetzner/debdistget-kali   ./debdistget-kali
backup    s3:://hetzner/debdistget-devuan ./debdistget-devuan
backup    s3:://hetzner/debdistget-trisquel   ./debdistget-trisquel
backup    s3:://hetzner/debdistget-debian ./debdistget-debian
The idea is to save a backup of a couple of S3 buckets under /backup/s3/. I have some scripts that take a complete rsnapshot.conf file and append my per-directory configuration so that this becomes a complete configuration. If you are curious how I roll this, backup-all invokes backup-one appending my rsnapshot.conf template with the snippet above. The s3rsync wrapper script is the essential hack to convert rsnapshot s rsync parameters into something that talks S3 and the script is as follows:
#!/bin/sh
set -eu
S3ARG=
for ARG in "$@"; do
    case $ARG in
    s3:://*) S3ARG="$S3ARG $(echo $ARG | sed -e 's,s3:://,,')";;
    -Q*) ;;
    *) S3ARG="$S3ARG $ARG";;
    esac
done
echo /backup/s3/mc mirror $S3ARG
exec /backup/s3/mc mirror $S3ARG
It uses the minio-client tool. I first tried s3cmd, but its sync command reads all files to compute MD5 checksums every time you invoke it, which is very slow. The mc mirror command is blazingly fast since it only compares mtimes, just like rsync or git. First you need to store credentials for your S3 bucket. These are stored in plaintext in ~/.mc/config.json, which I find to be sloppy security practice, but I don't know of any better way to do this. Replace URL with your provider's S3 endpoint, and AKEY and SKEY with your access token and secret token:
/backup/s3/mc alias set hetzner URL AKEY SKEY
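To check that the alias and credentials work, listing the buckets is a quick test (assuming the alias name above):
/backup/s3/mc ls hetzner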
If I invoke a sync job for a fully synced up directory the output looks like this:
root@hamster /backup# /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf -V sync
Setting locale to POSIX "C"
echo 1443 > /backup/s3/rsnapshot.pid 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-gnuinos \
    /backup/s3/.sync//debdistget-gnuinos 
/backup/s3/mc mirror --json --remove hetzner/debdistget-gnuinos /backup/s3/.sync//debdistget-gnuinos
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-tacos \
    /backup/s3/.sync//debdistget-tacos 
/backup/s3/mc mirror --json --remove hetzner/debdistget-tacos /backup/s3/.sync//debdistget-tacos
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-diffos \
    /backup/s3/.sync//debdistget-diffos 
/backup/s3/mc mirror --json --remove hetzner/debdistget-diffos /backup/s3/.sync//debdistget-diffos
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-pureos \
    /backup/s3/.sync//debdistget-pureos 
/backup/s3/mc mirror --json --remove hetzner/debdistget-pureos /backup/s3/.sync//debdistget-pureos
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-kali \
    /backup/s3/.sync//debdistget-kali 
/backup/s3/mc mirror --json --remove hetzner/debdistget-kali /backup/s3/.sync//debdistget-kali
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-devuan \
    /backup/s3/.sync//debdistget-devuan 
/backup/s3/mc mirror --json --remove hetzner/debdistget-devuan /backup/s3/.sync//debdistget-devuan
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-trisquel \
    /backup/s3/.sync//debdistget-trisquel 
/backup/s3/mc mirror --json --remove hetzner/debdistget-trisquel /backup/s3/.sync//debdistget-trisquel
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-debian \
    /backup/s3/.sync//debdistget-debian 
/backup/s3/mc mirror --json --remove hetzner/debdistget-debian /backup/s3/.sync//debdistget-debian
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
touch /backup/s3/.sync/ 
rm -f /backup/s3/rsnapshot.pid 
/run/current-system/profile/bin/logger -p user.info -t rsnapshot[1443] \
    /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf \
    -V sync: completed successfully 
root@hamster /backup# 
You can tell from the paths that this machine runs Guix. This was the first production use of the Guix System for me, and the machine has been running since 2015 (with the occasional new hard drive). Before, I used rsnapshot on Debian, but some stable release of Debian dropped the rsnapshot package, paving the way for me to test Guix in production on a non-Internet exposed machine. Unfortunately, mc is not packaged in Guix, so you will have to install it from the MinIO Client GitHub page manually. Running the daily rotation looks like this:
root@hamster /backup# /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf -V daily
Setting locale to POSIX "C"
echo 1549 > /backup/s3/rsnapshot.pid 
mv /backup/s3/daily.5/ /backup/s3/daily.6/ 
mv /backup/s3/daily.4/ /backup/s3/daily.5/ 
mv /backup/s3/daily.3/ /backup/s3/daily.4/ 
mv /backup/s3/daily.2/ /backup/s3/daily.3/ 
mv /backup/s3/daily.1/ /backup/s3/daily.2/ 
mv /backup/s3/daily.0/ /backup/s3/daily.1/ 
/run/current-system/profile/bin/cp -al /backup/s3/.sync /backup/s3/daily.0 
rm -f /backup/s3/rsnapshot.pid 
/run/current-system/profile/bin/logger -p user.info -t rsnapshot[1549] \
    /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf \
    -V daily: completed successfully 
root@hamster /backup# 
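These commands are meant to run unattended. On a cron-based system the scheduling could look like the sketch below (times are arbitrary; on this Guix machine the equivalent would be an mcron job, but the commands are the same):
# nightly: sync from S3, then rotate the daily snapshots
30 2 * * * root rsnapshot -c /backup/s3/rsnapshot.conf sync && rsnapshot -c /backup/s3/rsnapshot.conf daily
# Sundays: rotate the weekly snapshots (if a weekly retain level is configured)
30 4 * * 0 root rsnapshot -c /backup/s3/rsnapshot.conf weekly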
Hopefully you will feel inspired to take backups of your S3 buckets now!

Jonathan Dowland: Honest Jon's lightly-used Starships

No Man's Sky (or as it's known in our house, "spaceship game") is a space exploration/sandbox game that was originally released 10 years ago. Back then I tried it on my brother's PS4 but I couldn't get into it. In 2022 it launched for the Nintendo Switch1 and the game finally clicked for me. I play it very casually. Mostly I don't play at all, except sometimes when there are time-limited expeditions running, which I find refreshing, and which usually have some exclusives as a reward for play. One of the many things you can do in the game is collect star ships. I started keeping a list of notable ones I've found, and I've decided to occasionally blog about them.
The Horizon Vector NX spaceship
The Horizon Vector NX is a small sporty ship that players on Nintendo Switch could claim within the first month or so after it launched. The colour scheme resembles the original neon Switch controllers. Although the ship type occurs naturally in the game in other configurations, I think the differently-painted wings are unique to this ship. For most of the last 4 years, my copy of this ship was confined to the Switch, until November 2024, when they added cross-save capability to the game. I was then able to access the ship when playing on Linux (or Mac).

  1. The game runs very well natively on Mac, flawlessly on Steam for Linux, but struggles on the original Switch. It's a marvel it runs there at all.

Ravi Dwivedi: My experiences in Brunei

This post covers my friend Badri's and my experiences in Brunei. Brunei, officially Brunei Darussalam, is a country in Southeast Asia, located on the island of Borneo. It is one of the few remaining absolute monarchies on Earth. On the morning of the 10th of December 2024, Badri and I reached Brunei International Airport by taking a flight from Kuala Lumpur. Upon arrival at the airport, we had to go through immigration, of course. However, I had forgotten to fill in my arrival card, which I did while I was in the queue for immigration. The immigration officer asked me how much cash I was carrying in each currency. After completing the formalities, the immigration officer stamped my passport and let me in. Take a look at Brunei's entry stamp in my passport.
Brunei entry stamp Brunei entry stamp on my passport. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.
We exchanged Singapore dollars to get some Brunei dollars at the airport. The Brunei dollar is pegged 1:1 with the Singapore dollar, meaning 1 Singapore dollar equals 1 Brunei dollar, and the exchange rate we received at the airport was the same. Our (pre-booked) accommodation was located near Gadong mall. So, we went to the information center at the airport to ask how to get there by public transport. However, the person at the information center told us that they didn't know the public transport routes and suggested we take a taxi instead. We came out of the airport and came across an Indian man with a minibus. He offered to drop us at our accommodation for 10 Brunei dollars (630 Indian rupees). As we were tired after a sleepless night, we didn't negotiate and took the offer. It felt a bit weird using the minibus as our private taxi. In around half an hour, we reached our accommodation. The place was more like a guest house than a hotel. In addition to the rooms, it had a common space consisting of a hall, a kitchen and a balcony.
Our room in Brunei Our room in Brunei. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0
Upon reaching the place, we paid for our room in cash, which was 66.70 Singapore dollars (4200 Indian rupees) for two nights. We arrived before the check-in time, so we had to wait for our room to be ready before we entered. The room had a double bed and also a place to hang clothes. We slept for a few hours before going out at night. We went into Gadong mall and had coffee at a café named The Coffee Bean & Tea Leaf. The regular caffè latte I had there cost 5.20 Brunei dollars. On another note, the snacks we had brought from Kuala Lumpur covered us for dinner. The next day, the 11th of December 2024, we went to a nearby restaurant named Nadj for lunch. The owner was from Kerala. Our lunch here cost a total of 12.80 Brunei dollars (825 rupees). The naan was unusually thick, and I didn't like the taste. After lunch, we planned to visit Brunei's famous Omar Ali Saifuddien Mosque. However, a minibus driver outside Gadong Mall told us that the mosque would close in half an hour and suggested we visit the nearby Jame Asr Hassanil Bolkiah Mosque instead.
Jame' Asr Hassanil Bolkiah Mosque Jame Asr Hassanil Bolkiah Mosque. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0
He dropped us there for 1 Brunei dollar per person. He hailed from Uttar Pradesh and told us about bus routes in Hindi. Bus routes in Brunei were confusing, so the information he gave us was valuable. It was evening, and we had the impression that the mosque and its premises were closed. However, soon enough, we stumbled across an open gate into the mosque complex. We walked inside for some time, took pictures and exited. Walking in Bandar Seri Begawan wasn't pleasant, though; the pedestrian infrastructure wasn't good. Then we walked back to our place and bought some souvenirs. For dinner and breakfast, we bought bread, fruits and eggs from local shops, as we had a kitchen to cook for ourselves. The guest house also had a washing machine (free of charge) which we wanted to use. However, they didn't have detergent. Therefore, we went outside to get some. It was 8 o'clock, and most of the shops were already closed. Others only had detergent in large sizes, the ones you would buy if you lived there. We ended up getting a small packet at a supermarket. The next day, the 12th of December, we had a flight to Ho Chi Minh City in Vietnam with a long layover in Kuala Lumpur. We had breakfast in the morning and took a bus to Omar Ali Saifuddien Mosque. The mosque was in a prayer session, so it was closed to non-Muslims. Therefore, we just took pictures from the outside and took a bus to the airport.
Omar Ali Saifuddien Mosque Omar Ali Saifuddien Mosque. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0
When the bus got near the airport, it went straight rather than taking the left turn for the airport. Initially, I thought the bus would just take a turn and come back. However, the bus kept going away from the airport. Confused by this, I asked other passengers if the bus was going to the airport. The driver stopped the bus at Muara Town terminal, 20 km from the airport. At this point, everyone alighted except for us, and the driver went to a nearby restaurant to have lunch. I felt very uncomfortable stranded in a town 20 km from the airport. We had a lot of time, but I was still worried about missing our flight, as I didn't want to get stuck in Brunei. After waiting for 15 minutes, I went inside the restaurant and reminded the driver that we had a flight in a couple of hours and needed to go to the airport. He said he would leave soon. When he was done with his lunch, he drove us to the airport. It was incredibly frustrating. On a positive note, we saw countryside of Brunei that we wouldn't have seen otherwise. The bus ride cost us 1 Brunei dollar each.
A couple of houses with trees in the background. A shot of Brunei s countryside. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.
That's it for this one. Meet you in the next one. Stay tuned for the Vietnam post!

16 January 2026

Kentaro Hayashi: Budgie Desktop 10.10 is out, but not for me yet :(

Introduction I've been a Budgie Desktop user since 2020 (Budgie Desktop 10.5 or so). Recently Budgie Desktop 10.10 became available in Debian experimental. I've tried it and realized that it's not for me yet.

What I require from a desktop environment
  • Screen sharing with the ability to pick a specific window/application for web meetings
  • Sharing keyboard input smoothly with deskflow
  • Switching focus with mouse movement
  • and ...

Is Budgie Desktop 10.10 suitable? Short answer: no, not yet. It seems that Budgie Desktop 10.10 (Wayland) lost the ability to share a specific window/application in a web meeting; of course, you can still share the whole screen. It also can't share keyboard input smoothly with deskflow. Both seem to be supported in GNOME and KDE. In contrast to Budgie Desktop 10.9 (X11), Budgie Desktop 10.10 seems to be missing effective Wayland + xdg-desktop-portal support for these yet. It might be supported in a future release, but that might be 11.x.

Alternatives? Budgie Desktop 10.x will come into maintenance mode, so these issues will probably not be fixed in 10.x releases (my guess). buddiesofbudgie.org Switching DEs might be an option (GNOME or KDE), but I don't have the energy to make that transition for now. I decided to take the conservative option and go back to 10.9. Note that if you upgrade to Budgie Desktop 10.10, it is hard to downgrade to Budgie Desktop 10.9 because the python3-gi and python3-gi-cairo dependencies block budgie-desktop. See https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1120138 for details. #1120138 - budgie-extras: Several applets are incompatible with pygobject 3.54/libpeas 1.36.0-8 - Debian Bug report logs As a dirty hack, you can modify the python3-gi dependency and install the result at your own risk to downgrade.

Conclusion It was still too early for me to adopt Budgie Desktop (Wayland). I'll stay with Budgie Desktop 10.9 for a while. That said, it's unclear how long Budgie Desktop 10.9 will remain installable and usable; sticking with it might become problematic as other packages are updated. When that happens, I'd like to reconsider this issue again.

Jonathan Dowland: Ye Gods

Via (I think) @mcc on the Fediverse, I learned of GetMusic: a sort-of "clearing house" for Free Bandcamp codes. I think the way it works is, some artists release a limited set of download codes for their albums in order to promote them, and GetMusic help them to keep track of that, and helps listeners to discover them. GetMusic mail me occasionally, and once they highlighted an album The Arcane & Paranormal Earth which they described as "Post-Industrial in the vein of Coil and Nurse With Wound with shades of Aphex Twin, Autechre and assorted film music." Well that description hooked me immediately but I missed out on the code. However, I sampled the album on Bandcamp directly a few times as well as a few of his others (Ye Gods is a side-project of Antoni Maiovvi, which itself is a pen-name) and liked them very much. I picked up the full collection of Ye Gods albums in one go for 30% off. Here's a stand-out track: On Earth by Ye Gods So I guess this service works! Although I didn't actually get a free code in this instance, it promoted the artist, introduced me to something I really liked and drove a sale.

15 January 2026

Dirk Eddelbuettel: RcppSpdlog 0.0.25 on CRAN: Microfix

Version 0.0.25 of RcppSpdlog arrived on CRAN just now, and will be uploaded to Debian and built for r2u shortly, along with a minimal refresh of the documentation site. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site. This release fixes a minuscule cosmetic issue from the previous release a week ago. We rely on two #defines that R sets to signal to spdlog that we are building in the R context (which matters for the R-specific logging sink, and picks up something Gabi added upon my suggestion at the very start of this package). But I use the same #defines to now check in Rcpp that we are building with R; in this case, Rcpp wrongly concludes that R headers have already been included and (incorrectly) nags about that. The solution is to add two #undefs and proceed as normal (with Rcpp controlling and taking care of R header inclusion too), and that is what we do here. All good now, no nags from a false positive. The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.25 (2026-01-15)
  • Ensure #define signaling R build (needed with spdlog) is unset before including R headers to avoid falsely triggering a message from Rcpp

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

14 January 2026

Dirk Eddelbuettel: RcppSimdJson 0.1.15 on CRAN: New Upstream, Some Maintenance

A brand new release 0.1.15 of the RcppSimdJson package is now on CRAN. RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mindboggling. The best-case performance is faster than CPU speed as use of parallel SIMD instructions and careful branch avoidance can lead to less than one cpu cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon. This version updates to the current 4.2.4 upstream release. It also updates the RcppExports.cpp file with the glue between C++ and R. We want to move away from using Rf_error() (as Rcpp::stop() is generally preferable). Packages (such as this one) that declare an interface have an actual Rf_error() call generated in RcppExports.cpp, which current Rcpp code generation can protect. Long story short, a minor internal reason. The short NEWS entry for this release follows.

Changes in version 0.1.15 (2026-01-14)
  • simdjson was upgraded to version 4.2.4 (Dirk in #97)
  • RcppExports.cpp was regenerated to aid a Rcpp transition
  • Standard maintenance updates for continuous integration and URLs

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

13 January 2026

Simon Josefsson: Debian Libre Live 13.3.0 is released!

Following up on my initial announcement about Debian Libre Live, I am happy to report on continued progress and the release of Debian Libre Live version 13.3.0. Since both this and the previous 13.2.0 release are based on the stable Debian trixie release, there really isn't a lot of major change, but instead incremental minor progress for the installation process. Repeated installations have a tendency to reveal bugs, and we have resolved the apt sources list confusion for Calamares-based installations and a couple of other nits. This release is more polished and we are not aware of any known remaining issues (unlike earlier versions, which were released with known problems), although we conservatively regard the project as still in beta. A Debian Libre Live logo is needed before marking this as stable; any graphically talented takers? (Please base it on the Debian SVG upstream logo image.) We provide GNOME, KDE, and XFCE desktop images, as well as a text-only standard image, which match the regular Debian Live images with non-free software on them, but we also provide a slim variant which is merely 750MB compared to the 1.9GB standard image. The slim image can still start a Debian installer, and can still boot into a minimal live text-based system. The GNOME, KDE and XFCE desktop images feature the Calamares installer, and we have performed testing on a variety of machines. The standard and slim images do not have an installer in the running live system, but all images support a boot menu entry to start the installer. With this release we also extend our arm64 support to two tested platforms. The current list of successfully installed and supported systems now includes the following hardware: This is a very limited set of machines, but the diversity in CPUs and architectures should hopefully reflect well on a wide variety of commonly available machines. Several of these machines are crippled (usually GPU or WiFi) without adding non-free software; complain to your hardware vendor and adapt your use-cases and future purchases. The images are as follows, with SHA256SUM checksums and GnuPG signature on the 13.3.0 release page. Curious how the images were made? Fear not, for the Debian Libre Live project README has documentation, the run.sh script is short and the .gitlab-ci.yml CI/CD Pipeline definition file brief. Happy Libre OS hacking!
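Whatever the project, verifying an image before writing it to a USB stick follows the usual pattern; the file names below are illustrative, so use the exact names from the release page:
gpg --verify SHA256SUMS.sign SHA256SUMS
sha256sum -c SHA256SUMS --ignore-missing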

Freexian Collaborators: Debian Contributions: dh-python development, Python 3.14 and Ruby 3.4 transitions, Surviving scraper traffic in Debian CI and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-12 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

dh-python development, by Stefano Rivera In Debian we build our Python packages with the help of a debhelper-compatible tool, dh-python. Before starting the 3.14 transition (that would rebuild many packages) we landed some updates to dh-python to fix bugs and add features. This started a month of attention on dh-python, iterating through several bug fixes and a couple of unfortunate regressions. dh-python is used by almost all packages containing Python (over 5000). Most of these are very simple, but some are complex and use dh-python in unexpected ways. It's hard to prevent almost any change (including obvious bug fixes) from causing some unexpected knock-on behaviour. There is a fair amount of complexity in dh-python, and some rather clever code, which can make it tricky to work on. All of this means that good QA is important. Stefano spent some time adding type annotations and specialized types to make it easier to see what the code is doing and catch mistakes. This has already made work on dh-python easier. Now that Debusine has built-in repositories and debdiff support, Stefano could quickly test the effects of changes on many other packages. After each big change, he could upload dh-python to a repository, rebuild e.g. 50 Python packages with it, and see what differences appeared in the output. Reviewing the diffs is still a manual process, but can be improved. Stefano did a small test on what it would take to replace direct setuptools setup.py calls with PEP-517 (pyproject-style) builds, as sketched below. There is more work to do here.
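For readers unfamiliar with the term, a PEP-517 build asks the build backend declared in pyproject.toml to produce the package, for example via the pypa build frontend; this is a generic illustration, not dh-python's actual code path:
# build a wheel using the backend declared in pyproject.toml,
# without an isolated environment (dependencies come from the system)
python3 -m build --no-isolation --wheel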

Python 3.14 transition, by Stefano Rivera (et al.) In December the transition to add Python 3.14 as a supported version started in Debian unstable. To do this, we update the list of supported versions in python3-defaults, and then start rebuilding modules with C extensions from the leaves inwards. This had already been tested in a PPA and Ubuntu, so many of the biggest blocking compatibility issues with 3.14 had already been found and fixed. But there are always new issues to discover. Thanks to a number of people in the Debian Python team, we got through the first bit of the transition fairly quickly. There are still a number of open bugs that need attention and many failed tests blocking migration to testing. Python 3.14.1 was released just after we started the transition, and very soon after, a follow-up 3.14.2 release came out to address a regression. We then ran into another regression in Python 3.14.2.

Ruby 3.4 transition, by Lucas Kanashiro (et al.) The Debian Ruby team has just started the preparation to move the default Ruby interpreter version to 3.4. At the moment, the ruby3.4 source package is already available in experimental, and ruby-defaults has added support for Ruby 3.4. Lucas rebuilt all reverse dependencies against this new version of the interpreter and published the results here. Lucas also reached out to some stakeholders to coordinate the work. Next steps are: 1) announcing the results to the whole team and asking for help to fix packages failing to build against the new interpreter; 2) filing bugs against packages FTBFSing against Ruby 3.4 which are not fixed yet; 3) once we have a low number of build failures against Ruby 3.4, asking the Debian Release team to start the transition in unstable.

Surviving scraper traffic in Debian CI, by Antonio Terceiro Like most of the open web, Debian Continuous Integration has been struggling for a while to keep up with the insatiable hunger of data scrapers everywhere. Solving this involved a lot of trial and error; the final result seems to be stable, and consists of two parts. First, all Debian CI data pages, except the direct links to test log files (such as those provided by the Release Team's testing migration excuses), now require users to be authenticated before being accessed. This means that the Debian CI data is no longer publicly browseable, which is a bit sad. However, this is where we are now. Additionally, there is now a fail2ban-powered firewall-level access limitation (sketched below) for clients that display an abusive access pattern. This went through several iterations, with some of them unfortunately blocking legitimate Debian contributors, but the current state seems to strike a good balance between blocking scrapers and not blocking real users. Please get in touch with the team on the #debci OFTC channel if you are affected by this.
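For readers unfamiliar with the mechanism, a fail2ban jail for this kind of abuse is configured roughly along these lines; the filter name, log path and thresholds below are invented for illustration and are not the Debian CI team's actual settings:
[scraper-abuse]
enabled  = true
# a custom filter matching the abusive request pattern in the web server log
filter   = scraper-abuse
logpath  = /var/log/nginx/access.log
# ban a client at the firewall after 300 matching hits within 60 seconds
maxretry = 300
findtime = 60
bantime  = 3600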

A hybrid dependency solver for crossqa.debian.net, by Helmut Grohne crossqa.debian.net continuously cross builds packages from the Debian archive. Like Debian's native build infrastructure, it uses dose-builddebcheck to determine whether a package's dependencies can be satisfied before attempting a build. About one third of Debian's packages fail this check, so understanding the reasons is key to improving cross building. Unfortunately, dose-builddebcheck stops after reporting the first problem and does not display additional ones. To address this, a greedy solver implemented in Python now examines each build-dependency individually and can report multiple causes. dose-builddebcheck is still used as a fall-back when the greedy solver does not identify any problems. The report for bazel-bootstrap is a lengthy example.

rebootstrap, by Helmut Grohne Due to the changes suggested by Loongson earlier, rebootstrap now adds debhelper to its final installability test and builds a few more packages required for installing it. It also now uses a variant of build-essential that has been marked Multi-Arch: same (see the foundational work from last year). This in turn made using a non-default GCC version more difficult and required extra work to support gcc-16 from experimental. Ongoing archive changes temporarily regressed the building of fribidi and dash. libselinux and groff have received patches for architecture-specific changes, and libverto has been NMUed to remove the glib2.0 dependency.

Miscellaneous contributions
  • Stefano did some administrative work on debian.social and debian.net instances and Debian reimbursements.
  • Stefano did routine updates of python-authlib, python-mitogen, xdot.
  • Stefano spent several hours discussing Debian's Python package layout with the PyPA upstream community. Debian has ended up with a very different on-disk installed Python layout than other distributions, and this continues to cause frustration in the many communities that need special workarounds to handle it. As Helmut discovered, this also ended up impacting cross builds.
  • Raphaël set up Debusine workflows for the various backports repositories on debusine.debian.net.
  • Zulip is not yet in Debian (RFP in #800052), but Raphaël helped with the French translation as he is experimenting with that discussion platform.
  • Antonio performed several routine Salsa maintenance tasks, including fixing salsa-nm-sync, the service that synchronizes project members' data from LDAP to Salsa, which had been broken since salsa.debian.org was upgraded to trixie.
  • Antonio deployed a new amd64 worker host for Debian CI.
  • Antonio did several DebConf technical and administrative bits, including adding support for custom check-in/check-out dates in the MiniDebConf registration module and publishing a call for bids for DebConf27.
  • Carles reviewed and submitted 14 Catalan translations using po-debconf-manager.
  • Carles improved po-debconf-manager: he added a delete-package command, show-information now uses properly formatted (YAML) output, and the translation is now also attached to bug reports whose merge request has been open for too long.
  • Carles investigated why some packages appeared in po-debconf-manager but not in the Debian l10n list. It turned out that some packages had debian/po/templates.pot (and so appeared in po-debconf-manager) but lacked the expected POTFILES.in file. He created a script to find packages in this or a similar situation and reported bugs.
  • Carles tested and documented how to set up voices (mbrola and festival) for speech synthesis with Orca, and commented on a few issues and possible improvements on the debian-accessibility list.
  • Helmut sent patches for 48 cross build failures and initiated discussions on how to deal with two non-trivial matters. Besides the Python layout mentioned above, CMake introduced a cmake_pkg_config builtin that is not aware of the host architecture. He also forwarded a Meson patch upstream.
  • Thorsten uploaded a new upstream version of cups to fix a nasty bug that was introduced by the latest security update.
  • Along with many other Python 3.14 fixes, Colin fixed a tricky segfault in python-confluent-kafka after a helpful debugging hint from upstream.
  • Colin upstreamed an improved version of an OpenSSH patch we've been carrying since 2008 to fix misleading verbose output from scp.
  • Colin used Debusine to coordinate transitions for astroid and pygments, and wrote up the astroid case on his blog.
  • Emilio helped with various transitions, and provided a build fix for opencv for the ffmpeg 8 transition.
  • Emilio tested the GNOME updates for trixie proposed updates (gnome-shell, mutter, glib2.0).
  • Santiago helped review how to test different build profiles in parallel on the same pipeline, using the test-build-profiles job. This makes it possible, for example, to simultaneously test build profiles such as nocheck and nodoc for the same git tree. Finally, Santiago provided MR !685 to fix the documentation.
  • Anupa prepared a bits post for the Outreachy interns announcement along with Tássia Camões Araújo and worked on publicity team tasks.

12 January 2026

Daniel Kahn Gillmor: AI as a Compression Problem

A recent article in The Atlantic makes the case that very large language models effectively contain much of the works they're trained on. The article is an attempt to popularize the insights of the recent academic paper Extracting books from production language models by Ahmed et al. The authors of the paper demonstrate convincingly that well-known copyrighted textual material can be extracted from the chatbot interfaces of popular commercial LLM services.

The Atlantic article cites a podcast quote about the Stable Diffusion AI image-generator model, saying "We took 100,000 gigabytes of images and compressed it to a two-gigabyte file that can re-create any of those and iterations of those". By analogy, this suggests we might think of LLMs (which work on text, not the images handled by Stable Diffusion) as a form of lossy textual compression. The entire text of Moby Dick, the canonical Big American Novel, is merely 1.2MiB uncompressed (and less than 0.4MiB losslessly compressed with bzip2 -9). It's not surprising to imagine that a model with hundreds of billions of parameters might contain copies of these works.

Warning: The next paragraph contains fuzzy math with no real concrete engineering practice behind it!

Consider a hypothetical model with 100 billion parameters, where each parameter is stored as a 16-bit floating point value. The model weights would take 200 GB of storage. If you were to fill the parameter space only with losslessly compressed copies of books like Moby Dick, you could still fit half a million books, more than anyone can read in a lifetime. (The short sketch at the end of this post reproduces this arithmetic.) And lossy compression is typically orders of magnitude smaller than lossless compression, so we're talking about millions of works effectively encoded, with the acceptance of some artifacts being injected into the output.

I first encountered this "compression" view of AI nearly three years ago, in Ted Chiang's insightful ChatGPT is a Blurry JPEG of the Web. I was surprised that The Atlantic article didn't cite Chiang's piece. If you haven't read Ted Chiang, I strongly recommend his work, and this piece is a great place to start.

Chiang aside, the more recent writing that focuses on the idea of compressed works being "contained" in the model weights seems to be used by people interested in wielding some sort of copyright claim against the AI companies that maintain or provide access to these models. There are many, many problems with AI today, but attacking AI companies based on copyright concerns seems similar to going after Al Capone for tax evasion. We should be much more concerned with the effect these projects have on cultural homogeneity, mental health, labor rights, privacy, and social control than with whether they're violating copyright in some specific instance.
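For what it's worth, the fuzzy math above checks out. Here is the same back-of-envelope arithmetic in a few lines of Python, using the post's own hypothetical numbers:

params = 100_000_000_000        # hypothetical model: 100 billion parameters
bytes_per_param = 2             # 16-bit floats
weights = params * bytes_per_param
print(weights / 10**9)          # 200.0 -> 200 GB of weights

book = 0.4 * 1024**2            # Moby Dick, losslessly compressed: ~0.4 MiB
print(int(weights / book))      # 476837 -> roughly half a million books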

11 January 2026

Russell Coker: Terminal Emulator Security

I just read this informative article on ANSI terminal security [1]. The author has written a tool named vt-houdini for testing for these issues [2]. They used to host an instance on their server but appear to have stopped it. When you run that tool you can ssh to the system in question, and without needing a password you are connected and the server probes your terminal emulator for vulnerabilities. The versions of Kitty and Konsole in Debian/Trixie have just passed those tests on my system. This will always be a potential security problem due to the purpose of a terminal emulator. A terminal emulator will often display untrusted data, and often data which is known to come from hostile sources (e.g. logs of attempted attacks). So what could be done in this regard?

Memory Protection

Due to the complexity of terminal emulation there is the possibility of buffer overflows and other memory management issues that could be used to compromise the emulator. The Fil-C compiler is an interesting project [3]: it compiles existing C/C++ code with memory checks. It is reported to have no noticeable impact on the performance of the bash shell, which sounds like a useful option to address some of these issues, as shell security issues are connected to terminal security issues. The performance impact on a terminal emulator would likely be more noticeable. Also note that Fil-C compilation apparently requires compiling all libraries with it; this isn't a problem for bash, as the only libraries it uses nowadays are libtinfo and libc. The kitty terminal emulator doesn't have many libraries, but libpython is one of them; it's an essential part of Kitty and a complex library to compile in a different way. Konsole has about 160 libraries, and it isn't plausible to recompile so many libraries at this time. Choosing a terminal emulator that has a simpler design might help in this regard. Emulators that call libraries for 3D effects etc. and have native support for displaying in-line graphics have a much greater attack surface.

Access Control

A terminal emulator could be run in a container to prevent it from doing any damage if it is compromised. But the terminal emulator will have full control over the shell it runs, and if the shell has the access needed to allow commands like scp/rsync to do what is expected of them, then no useful level of containment is possible. It would be possible to run a terminal emulator in a container for the purpose of connecting to an insecure or hostile system, and not allow scp/rsync to/from any directory other than /tmp (or other directories to use for sharing files). You could run exec ssh $SERVER so the terminal emulator session ends when the ssh connection ends.

Conclusion

There aren't good solutions to the problems of terminal emulation security. But testing every terminal emulator with vt-houdini and fuzzing the popular ones would be a good start. Qubes-level isolation will help in some situations, but if you need to connect to a server with privileged access to read log files containing potentially hostile data (which is a common sysadmin use case) then there aren't good options.

Otto Kekäläinen: Stop using MySQL in 2026, it is not true open source

If you care about supporting open source software and still use MySQL in 2026, you should switch to MariaDB, like so many others have already done. The number of git commits on github.com/mysql/mysql-server has been significantly declining in 2025. The screenshot below shows the state of git commits as of writing this in January 2026, and the picture should be alarming to anyone who cares about software being open source. (Screenshot: MySQL GitHub commit activity decreasing drastically.)

This is not surprising: Oracle should not be trusted as the steward of open source projects

When Oracle acquired Sun Microsystems, and MySQL along with it, back in 2009, the European Commission almost blocked the deal due to concerns that Oracle's goal was just to stifle competition. The deal went through as Oracle made a commitment to keep MySQL going and not kill it, but (to nobody's surprise) Oracle has not been a good steward of MySQL as an open source project, and the community around it has been withering away for years now. All development is done behind closed doors. The publicly visible bug tracker is not the real one Oracle staff actually use for MySQL development, and the few people who try to contribute to MySQL just see their Pull Requests and patch submissions marked as received with mostly no feedback; those changes may or may not appear in the next MySQL release, often rewritten, and with only Oracle staff in the git author/committer fields. The real author only gets a small mention in a blog post. When I was the engineering manager for the core team working on RDS MySQL and RDS MariaDB at Amazon Web Services, I oversaw my engineers' contributions to both MySQL and MariaDB (the latter being a fork of MySQL by the original MySQL author, Michael Widenius). All the software developers in my org disliked submitting code to MySQL due to how badly Oracle received their contributions. MariaDB is the stark opposite, with all development taking place in real time on github.com/mariadb/server, anyone being able to submit a Pull Request and get a review, all bugs being openly discussed at jira.mariadb.org, and so forth, just like one would expect from a true open source project. MySQL is open source only by license (GPL v2), but not as a project.

MySQL's technical decline in recent years

Despite not being a good open source steward, Oracle should be given credit that it did keep the MySQL organization alive and allowed it to exist fairly independently and continue developing and releasing new MySQL versions well over a decade after the acquisition. I have no insight into how many customers they had, but I assume the MySQL business was fairly profitable and financially useful to Oracle, at least as long as it didn't gain too many features that could threaten Oracle's own main database business. I don't know why, perhaps because too many talented people had left the organization, but from a technical point of view MySQL clearly started to deteriorate from 2022 onward. When MySQL 8.0.29 was released with the default ALTER TABLE method switched to run in-place, it had a lot of corner cases that didn't work, causing database crashes and data corruption for many users. The issue wasn't fully fixed until a year later, in MySQL 8.0.32. To many users' annoyance, Oracle announced the 8.0 series as evergreen and introduced features and changes in the minor releases, instead of just doing bug fixes and security fixes like users had historically learnt to expect from these x.y.Z maintenance releases. There was no new major MySQL version for six years: after MySQL 8.0 in 2018, it wasn't until 2023 that MySQL 8.1 was released, and it was just a short-term preview release. The first actual new major release, MySQL 8.4 LTS, came in 2024. Even though it was a new major release, many users were disappointed as it had barely any new features. Many also reported degraded performance with newer MySQL versions; for example, the benchmark by famous MySQL performance expert Mark Callaghan shows that on write-heavy workloads MySQL 9.5 throughput is typically 15% less than in 8.0. (Chart: benchmark showing new MySQL versions being slower than the old.) Due to newer MySQL versions deprecating many features, a lot of users also complained about significant struggles with both the MySQL 5.7->8.0 and 8.0->8.4 upgrades. With few new features and a heavy focus on code base cleanup and feature deprecation, it became obvious to many that Oracle had decided to keep MySQL just barely alive, and to put all relevant new features (e.g. vector search) into Heatwave, Oracle's closed-source and cloud-only service for MySQL customers. As it was evident that Oracle isn't investing in MySQL, Percona's Peter Zaitsev wrote Is Oracle Finally Killing MySQL in June 2024. At this time MySQL's popularity as ranked by DB-Engines had also started to tank hard, a trend that will likely accelerate in 2026. (Chart: MySQL dropping significantly in the DB-Engines ranking.) In September 2025 it was reported that Oracle was reducing its workforce and that the MySQL staff was being heavily cut. Obviously this does not bode well for MySQL's future, and Peter Zaitsev posted stats in November showing that the latest MySQL maintenance release contained fewer bug fixes than before.

Open source is more than ideology: it has very real effects on software security and sovereignty

Some say they don't care whether MySQL is truly open source, or whether it has a future in the coming years, as long as it still works now. I am afraid people thinking this way are taking a huge risk. The database is often the most critical part of a software application stack, and any flaw or problem in operations, let alone a security issue, will have immediate consequences; not caring will eventually get people fired or sued. In open source, problems are discussed openly, and the bigger the problem, the more people and companies will contribute to fixing it. Open source as a development methodology is similar to the scientific method, with a free flow of ideas that are constantly contested and where only the ones with the most compelling evidence win. Not being open means more obscurity, more risk, and more of a "just trust us, bro" attitude. This open-vs-closed difference is very visible, for example, in how Oracle handles security issues. In 2025 alone MySQL published 123 CVEs about security issues, while MariaDB had 8; there were 117 CVEs that affected only MySQL and not MariaDB in 2025. I haven't read them all, but typically the CVEs hardly contain any real details. As an example, the most recent one, CVE-2025-53067, states "Easily exploitable vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Server." There is no information a security researcher or auditor could use to verify whether the original issue actually existed, whether it was fixed, or whether the fix was sufficient and fully mitigated the issue. MySQL users just have to take Oracle's word that it is all good now. Handling security issues like this is in stark contrast to other open source projects, where all security issues and their code fixes are open for full scrutiny after the initial embargo is over and the CVE is made public. There are also various forms of enshittification going on that one would not see in a true open source project: everything about MySQL as software, documentation and website pushes users to stop using the open source version and move to the closed MySQL versions, in particular to Heatwave, which is not only closed-source but also results in Oracle fully controlling the customer's database contents. Of course, some could say this is how Oracle makes money and is able to provide a better product. But stories on Reddit and elsewhere suggest that what is going on is more like Oracle milking the last remaining MySQL customers hard, forcing them to pay more and more while getting less and less.

There are options and migrating is easy, just do it

A large part of MySQL users switched to MariaDB already in the mid-2010s, in particular everyone who cared deeply about their database software staying truly open source. That included large installations such as Wikipedia, and Linux distributions such as Fedora and Debian. Because it's open source and there is no centralized machine collecting statistics, nobody knows what the exact market shares look like. There are however some application-specific stats, such as that 57% of WordPress sites around the world run MariaDB, while the share for MySQL is 42%. For anyone running a classic LAMP stack application such as WordPress, Drupal, MediaWiki, Nextcloud, or Magento, switching the old MySQL database to MariaDB is straightforward. As MariaDB is a fork of MySQL and mostly backwards compatible with it, swapping out MySQL for MariaDB can be done without changing any of the existing connectors or database clients, as they will continue to work with MariaDB as if it were MySQL (see the short illustration below). For those running custom applications who have the freedom to change how and what database is used, there are dozens of mature and well-functioning open source databases to choose from, with PostgreSQL being the most popular general-purpose database. If your application was built from the start for MySQL, switching to PostgreSQL may however require a lot of work, and the MySQL/MariaDB architecture and the InnoDB storage engine may still offer an edge in e.g. online services where high performance, scalability and solid replication features are the highest priority. For a quick and easy migration, MariaDB is probably the best option. Switching from MySQL to Percona Server is also very easy, as it closely tracks all changes in MySQL and deviates from it only by a small number of improvements done by Percona. However, precisely because it is basically just a customized version of the MySQL Server, it's not a viable long-term solution for those trying to fully ditch the dependency on Oracle. There are also several open source databases that have no common ancestry with MySQL but strive to be MySQL-compatible, so most apps built for MySQL can simply switch to them without needing SQL statements to be rewritten. One such database is TiDB, which has been designed from scratch specifically for highly scalable and large systems, and is so good that even Amazon's latest database solution, DSQL, was built borrowing many ideas from TiDB. However, TiDB only really shines in larger distributed setups, so for the vast majority of regular small- and mid-scale applications currently using MySQL, the most practical solution is probably to just switch to MariaDB, which on most Linux distributions can be installed by simply running apt/dnf/brew install mariadb-server. Whatever you end up choosing, as long as it is not Oracle, you will be better off.
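To illustrate the drop-in compatibility claim: a Python application using a common MySQL client library (pymysql here; other MySQL connectors should behave the same way) needs no code changes after the server is swapped for MariaDB. The hostname and credentials below are placeholders:

import pymysql  # a widely used MySQL client library

# placeholders: substitute real credentials; nothing here is MariaDB-specific
conn = pymysql.connect(host="localhost", user="app",
                       password="secret", database="wordpress")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")
        print(cur.fetchone())  # e.g. a '...-MariaDB' string after the switch
finally:
    conn.close()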

10 January 2026

Dirk Eddelbuettel: Rcpp 1.1.1 on CRAN: Many Improvements in Semi-Annual Update

Team Rcpp is thrilled to share that an exciting new version 1.1.1 of Rcpp is now on CRAN (and also uploaded to Debian and already built for r2u). Having switched to C++11 as the minimum standard in the previous 1.1.0 release, this version takes full advantage of it and removes a lot of conditional code catering to older standards that no longer need to be supported. Consequently, the source tarball shrinks by 39% from 3.11 mb to 1.88 mb. That is a big deal. (Size peaked with Rcpp 1.0.12 two years ago at 3.43 mb; relative to that size we are down 45%!) Removing unused code also makes maintenance easier, and quickens both compilation and installation in general. This release continues as usual with the six-month January/July cycle started with release 1.0.5 in July 2020. Interim snapshots are always available via the r-universe page and repo. We continue to strongly encourage the use and testing of these development releases; we tend to run our systems with them too. Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 3020 packages on CRAN depend on Rcpp for making analytical code go faster and further. On CRAN, 13.1% of all packages depend (directly) on Rcpp, and 60.9% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 109.8 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 2151 (JSS, 2011) and 405 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 715. This time, I am not attempting to summarize the different changes. The full list follows below and details all these changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.1.1 (2026-01-08)
  • Changes in Rcpp API:
    • An unused old R function for a compiler version check has been removed after checking no known package uses it (Dirk in #1395)
    • A narrowing warning is avoided via a cast (Dirk in #1398)
    • Demangling checks have been simplified (Iñaki in #1401 addressing #1400)
    • The treatment of signed zeros is now improved in the Sugar code (Iñaki in #1404)
    • Preparations for phasing out use of Rf_error have been made (Iñaki in #1407)
    • The long-deprecated function loadRcppModules() has been removed (Dirk in #1416 closing #1415)
    • Some non-API includes from R were refactored to accommodate R-devel changes (Iñaki in #1418 addressing #1417)
    • An accessor to Rf_rnbeta has been removed (Dirk in #1419 also addressing #1420)
    • Code accessing non-API Rf_findVarInFrame now uses R_getVarEx (Dirk in #1423 fixing #1421)
    • Code conditional on the R version now expects at least R 3.5.0; older code has been removed (Dirk in #1426 fixing #1425)
    • The non-API ATTRIB entry point to the R API is no longer used (Dirk in #1430 addressing #1429)
    • The unwind-protect mechanism is now used unconditionally (Dirk in #1437 closing #1436)
  • Changes in Rcpp Attributes:
    • The OpenMP plugin has been generalized for different macOS compiler installations (Kevin in #1414)
  • Changes in Rcpp Documentation:
    • Vignettes are now processed via a new "asis" processor adopted from R.rsp (Dirk in #1394 fixing #1393)
    • R is now cited via its DOI (Dirk)
    • A (very) stale help page has been removed (Dirk in #1428 fixing #1427)
    • The main README.md was updated emphasizing r-universe in favor of the local drat repos (Dirk in #1431)
  • Changes in Rcpp Deployment:
    • A temporary change in R-devel concerning NA part in complex variables was accommodated, and then reverted (Dirk in #1399 fixing #1397)
    • The macOS CI runners now use macos-14 (Dirk in #1405)
    • A message is shown if R.h is included before Rcpp headers as this can lead to errors (Dirk in #1411 closing #1410)
    • Old helper functions use message() to signal they are not used, deprecation and removal to follow (Dirk in #1413 closing #1412)
    • Three tests were being silenced following #1413 (Dirk in #1422)
    • The heuristic whether to run all available tests was refined (Dirk in #1434 addressing #1433)
    • Coverage has been tweaked via additional #nocov tags (Dirk in #1435)
  • Non-release Changes:
    • Two interim non-releases 1.1.0.8.1 and .2 were made in order to unblock CRAN due to changes in R-devel rather than Rcpp

Thanks to my CRANberries, you can also look at a diff to the previous interim release, along with pre-releases 1.1.0.8, 1.1.0.8.1 and 1.1.0.8.2 that were needed because R-devel all of a sudden decided to move fast and break things. Not our doing. Questions, comments etc. should go to the GitHub discussions section or the rcpp-devel list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well. Both can be searched as well.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

8 January 2026

Reproducible Builds: Reproducible Builds in December 2025

Welcome to the December 2025 report from the Reproducible Builds project! Our monthly reports outline what we've been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

  1. New orig-check service to validate Debian upstream tarballs
  2. Distribution work
  3. disorderfs updated to FUSE 3
  4. Mailing list updates
  5. Three new academic papers published
  6. Website updates
  7. Upstream patches

New orig-check service to validate Debian upstream tarballs This month, Debian Developer Lucas Nussbaum announced the orig-check service, which attempts to automatically reproduce the generation of upstream tarballs (i.e. the original source component of a Debian source package), comparing the result to the upstream tarball actually shipped in Debian. As of the time of writing, it is possible for a Debian developer to upload a source archive that does not actually correspond to upstream's version. Whilst this is not inherently malicious (it typically indicates some tooling/process issue), the very possibility that a maintainer's version may differ potentially permits a maintainer to make (malicious) changes that would be misattributed to upstream. This service therefore nicely complements the whatsrc.org service, which was reported in our reports for both April and August. The orig-check service is dedicated to Lunar, who sadly passed away a year ago.
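Conceptually, the core of such a check is simple: regenerate the upstream tarball and compare it byte for byte with what was shipped. A minimal sketch (not the actual orig-check code; the file names are examples only):

import hashlib

def sha256(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

shipped = sha256("foo_1.0.orig.tar.gz")         # tarball in the Debian archive
rebuilt = sha256("foo-1.0-regenerated.tar.gz")  # regenerated from upstream sources
print("reproduced" if shipped == rebuilt else "mismatch: investigate")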

Distribution work In Arch Linux this month, Robin Candau and Mark Hegreberg worked on making the Arch Linux WSL image bit-for-bit reproducible. Robin also shared some implementation details and future related work on our mailing list. Continuing a series reported in these reports for March, April and July 2025 (etc.), Simon Josefsson has published another interesting article this month, itself a followup to a post Simon published in December 2024 regarding GNU Guix Container Images hosted on GitLab. In Debian this month, Micha Lenk posted to the debian-backports-announce mailing list with the news that the Backports archive will now discard binaries generated and uploaded by maintainers: "The benefit is that all binary packages [will] get built by the Debian buildds before we distribute them within the archive." Felix Moessbauer of Siemens then filed a bug in the Debian bug tracker to signal their intention to package debsbom, a software bill of materials (SBOM) generator for distributions based on Debian. This generated a discussion on the bug inquiring about the output format as well as a question about how these SBOMs might be distributed. Holger Levsen merged a number of significant changes written by Alper Nebi Yasak to the Debian Installer in order to improve its reproducibility. As noted in Alper's merge request: "These are the reproducibility fixes I looked into before bookworm release, but was a bit afraid to send as it's just before the release, because the things like the xorriso conversion changes the content of the files to try to make them reproducible." In addition, 76 reviews of Debian packages were added, 8 were updated and 27 were removed this month, adding to our knowledge about identified issues. A new different_package_content_when_built_with_nocheck issue type was added by Holger Levsen. […] Arnout Engelen posted to our mailing list reporting that they successfully reproduced the NixOS minimal installation ISO for the 25.11 release without relying on a pre-compiled package archive, with more details on their blog. Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for his work there.

disorderfs updated to FUSE 3 disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into system calls to reliably flush out reproducibility issues. This month, Roland Clobus upgraded disorderfs from FUSE 2 to FUSE 3 after its package automatically got removed from Debian testing. Some tests in Debian currently require disorderfs to make the Debian live images reproducible, although disorderfs is not a Debian-specific tool.
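To see the class of bug disorderfs is designed to expose: any build step that bakes raw directory-listing order into its output is non-deterministic, since filesystems guarantee no particular ordering. A small illustrative Python example (my own, not from the disorderfs documentation):

import hashlib
import os

def tree_digest(path):
    digest = hashlib.sha256()
    # sorted() makes the result independent of directory order; dropping it
    # reintroduces exactly the non-determinism that disorderfs shuffles
    # directory entries to expose
    for name in sorted(os.listdir(path)):
        digest.update(name.encode())
    return digest.hexdigest()

print(tree_digest("."))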

Mailing list updates On our mailing list this month:
  • Luca Di Maio announced stampdalf, a filesystem timestamp preservation tool that "wraps arbitrary commands and ensures filesystem timestamp reproducibility":
    stampdalf allows you to run any command that modifies files in a directory tree, then automatically resets all timestamps back to their original values. Any new files created during command execution are set to [the UNIX epoch] or a custom timestamp via SOURCE_DATE_EPOCH.
    The project's GitHub page helpfully reveals that the project is pronounced stamp-dalf ("stamp" like time-stamp, "dalf" like Gandalf the wizard), as it's "a wizard of time and stamps". (A minimal sketch of this timestamp-preservation technique follows this list.)
  • Lastly, Reproducible Builds developer cen1 posted to our list announcing that early/experimental/alpha support for FreeBSD was added to rebuilderd. In their post, cen1 reports that the initial builds are in progress and "look quite decent". cen1 also interestingly notes that, since the upstream is currently not technically reproducible, "I had to relax the bit-for-bit identical requirement of rebuilderd […] I consider the pkg to be reproducible if the tar is content-identical (via diffoscope), ignoring timestamps and some of the manifest files."
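As a rough sketch of the technique stampdalf describes (not its actual implementation), the snapshot-and-restore approach could look like this in Python:

import os
import subprocess

def run_preserving_timestamps(tree, command, epoch):
    # snapshot every file's mtime before running the command
    before = {}
    for root, _, files in os.walk(tree):
        for name in files:
            path = os.path.join(root, name)
            before[path] = os.stat(path).st_mtime
    subprocess.run(command, check=True)
    # restore original mtimes; pin files the command created to the epoch
    for root, _, files in os.walk(tree):
        for name in files:
            path = os.path.join(root, name)
            stamp = before.get(path, epoch)
            os.utime(path, (stamp, stamp))

epoch = int(os.environ.get("SOURCE_DATE_EPOCH", "0"))
# e.g.: run_preserving_timestamps("build/", ["make", "-C", "build"], epoch)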

Three new academic papers published Yogya Gamage and Benoit Baudry of Université de Montréal, Canada, together with Deepika Tiwari and Martin Monperrus of KTH Royal Institute of Technology, Sweden, published a paper on The Design Space of Lockfiles Across Package Managers:
Most package managers also generate a lockfile, which records the exact set of resolved dependency versions. Lockfiles are used to reduce build times; to verify the integrity of resolved packages; and to support build reproducibility across environments and time. Despite these beneficial features, developers often struggle with their maintenance, usage, and interpretation. In this study, we unveil the major challenges related to lockfiles, such that future researchers and engineers can address them. […]
A PDF of their paper is available online. Benoit Baudry also posted an announcement to our mailing list, which generated a number of replies.
Betul Gokkaya, Leonardo Aniello and Basel Halak of the University of Southampton then published a paper on A taxonomy of attacks, mitigations and risk assessment strategies within the software supply chain:
While existing studies primarily focus on software supply chain attacks prevention and detection methods, there is a need for a broad overview of attacks and comprehensive risk assessment for software supply chain security. This study conducts a systematic literature review to fill this gap. By analyzing 96 papers published between 2015-2023, we identified 19 distinct SSC attacks, including 6 novel attacks highlighted in recent studies. Additionally, we developed 25 specific security controls and established a precisely mapped taxonomy that transparently links each control to one or more specific attacks. […]
A PDF of the paper is available online via the article s canonical page.
Aman Sharma and Martin Monperrus of the KTH Royal Institute of Technology, Sweden, along with Benoit Baudry of Université de Montréal, Canada, published a paper this month on Causes and Canonicalization of Unreproducible Builds in Java. The abstract of the paper is as follows:
[Achieving] reproducibility at scale remains difficult, especially in Java, due to a range of non-deterministic factors and caveats in the build process. In this work, we focus on reproducibility in Java-based software, archetypal of enterprise applications. We introduce a conceptual framework for reproducible builds, we analyze a large dataset from Reproducible Central, and we develop a novel taxonomy of six root causes of unreproducibility. […]
A PDF of the paper is available online.

Website updates Once again, there were a number of improvements made to our website this month including:

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

7 January 2026

Gunnar Wolf: Artificial Intelligence: Play or break the deck

This post is an unpublished review for Artificial Intelligence: Play or break the deck
As a little disclaimer, I usually review books or articles written in English, and although I will offer this review to Computing Reviews as usual, it is likely that it will not be published. The title of this book in Spanish is Inteligencia artificial: jugar o romper la baraja. I was pointed at this book, published last October by Margarita Padilla García, a well-known Free Software activist from Spain who has long worked on analyzing (and shaping) aspects of socio-technological change. As with other books published by Traficantes de sueños, this book is published as Open Access, under a CC BY-NC license, and can be downloaded in full. I started casually looking at this book, with too long a backlog of material to read, but soon realized I could just not put it down: it completely captured me. This book presents several aspects of Artificial Intelligence (AI), written for a general, non-technical audience. Many books with a similar target have been published, but this one is quite unique; first of all, it is written in a personal, non-formal tone. Contrary to what's usual in my reading, the author made the explicit decision not to fill the book with references to her sources ("because searching on the Internet, it's very easy to find things"), making the book easier to read linearly, a decision I somewhat regret but recognize helps develop the author's style. The book has seven sections, dealing with different aspects of AI. They are "Visions" (a historical framing of the development of AI); "Spectacular" (why we feel AI to be so disruptive, digging particularly into game engines and search space); "Strategies", explaining how multilayer neural networks work and linking the various branches of historic AI together, arriving at Natural Language Processing; "On the inside", tackling technical details such as algorithms, the importance of training data, bias, and discrimination; "On the outside", presenting several example AI implementations with socio-ethical implications; "Philosophy", presenting the works of Marx, Heidegger and Simondon in their relation to AI, work, justice, and ownership; and "Doing", presenting aspects of social activism in relation to AI. Each part ends with yet another personal note: Margarita Padilla includes a letter to one of her friends related to said part. Totalling 272 pages (A5, or roughly half-letter, format), this is a rather small book; I read it probably over a week. So, while this book did not provide lots of new information to me, the way it was written made it a very pleasing experience, and it will surely influence the way I understand or explain several concepts in this domain.

Ravi Dwivedi: Why my Riseup account got suspended (and reinstated)

Disclaimer: The goal of this post is not to attack Riseup. In fact, I love Riseup and support their work.

Story Riseup is an email provider, known for its privacy-friendly email service. The service requires an invite from an existing Riseup email user to get an account. I created my account on Riseup in the year 2020, of course with the help of a friend who invited me. Since then, I have used the email address only occasionally, although it is logged into my Thunderbird all the time. Fast-forward to the 4th of January 2026, when Thunderbird suddenly told me that it could not log in to my Riseup account. When I tried logging in using their webmail, it said "invalid password". Finally, I tried logging in to my account on their website, and was told that
Log in for that account is temporary suspended while we perform maintenance. Please try again later.
At this point, I suspected that the Riseup service itself was facing some issues. I asked a friend who had an account there if the service was up, and they said that it was. The issue seemed to be specific to my account. I contacted Riseup support and informed them of the issue. They responded the next day (the 5th of January), saying:
The my-username-redacted account was found inviting another account that violated our terms of use. As a security measure we suspend all related accounts to ToS violations.
(Before we continue, I would like to take a moment and reflect upon how nice it was to receive a response from a human rather than from an AI bot, a trend that is unfortunately becoming the norm nowadays.) I didn't know who violated their ToS, so I asked which account violated their terms. Riseup told me:
username-redacted@riseup.net attempted to create aliases that could be abused to impersonate riseup itself.
I asked a friend whom I invited a month before the incident, and they confirmed that the username belonged to them. When I asked what they did, they told me they had tried creating aliases such as floatup and risedown. I also asked Riseup which aliases violated their terms, but their support didn't answer this. I explained to the Riseup support that the impersonation wasn't intentional, that the user hadn't sent any emails, and that I had been a user for more than 5 years and had donated to them in the past. Furthermore, I suggested that they should block the creation of such aliases if they think the aliases violate their terms, much like email providers typically don't allow users to create admin@ or abuse@ email addresses. After I explained myself, Riseup reinstated my account. Update on the 10th of January 2026: My friend told me that the alias that violated Riseup's terms was cloudadmin, and their account was reinstated on the 7th of January.

Issues with suspension I have the following issues regarding the way the suspension took place:
  • There was no way of challenging the suspension before the action was taken
  • The action taken against me was disproportionate. Remember that I didn't violate any terms; the violation was allegedly committed by a user I invited. They could have just blocked the aliases while continuing the discussion in parallel.
  • I was locked out of my account with no way of saving my emails and without any chance to migrate. What if that email address were being used for important things such as bank access or train tickets? I know people who use Riseup email for such purposes.
  • The violation wasn't even proven. I wasn't told which alias violated the terms, nor how it could be used to impersonate Riseup itself.
When I brought up the issue of being locked out of my account without a way of downloading my emails or migrating my account, Riseup support responded by saying:
You must understand that we react [by] protecting our service, and therefore we cannot provide notice messages on the affected accounts. We need to act preventing any potential damage to the service that might affect the rest of the users, and that measure is not excessive (think on how abusers/spammers/scammers/etc could trick us and attempt any action before their account is suspended).
This didn't address my concerns, so let's move on to the next section.

Room for improvement Here's how I think Riseup's ban policy could be changed while still protecting against spammers and other bad actors:
  • Even if Riseup can't provide notice to blocked accounts, perhaps they can scale back the limitations on the inviting account, which wasn't even involved: for example, by temporarily disabling invites from that account until the issue is resolved.
  • In this case, the person didn't impersonate Riseup, so Riseup could have just blocked the aliases and let the user know about it, rather than banning the account outright.
  • Riseup should give blocked users access to their existing emails so they have a chance to migrate them to a different provider. (Riseup could disable SMTP and maybe incoming emails but keep IMAP access open.) I know people who use Riseup for important things such as bank or train tickets, and a sudden block like this is not a good idea.
  • Riseup should factor in the account profile when making these decisions. I had had an account on their service for 5 years and had only created around 5 invites. (I don't remember the exact number and there's no way to retrieve this information.) This is not exactly an attacker profile. I feel long-term users like this deserve an explanation for a ban.
I understand that Riseup is a community-run service and does not have unlimited resources like big corporations or commercial email providers do. Their actions felt disproportionate to me, perhaps because I don't know what issues they face behind the scenes. I hope someone can help improve the policies, or at least shed light on why they are the way they are. Signing off now. Meet you in the next one! Thanks to Badri and Contrapunctus for reviewing this blog post.
