Search Results: "irl"

1 February 2026

Russ Allbery: Review: Paladin's Faith

Review: Paladin's Faith, by T. Kingfisher
Series: The Saint of Steel #4
Publisher: Red Wombat Studio
Copyright: 2023
ISBN: 1-61450-614-0
Format: Kindle
Pages: 515
Paladin's Faith is the fourth book in T. Kingfisher's loosely connected series of fantasy novels about the berserker former paladins of the Saint of Steel. You could read this as a standalone, but there are numerous (spoilery) references to the previous books in the series.

Marguerite, who was central to the plot of the first book in the series, Paladin's Grace, is a spy with a problem. An internal power struggle in the Red Sail, the organization that she's been working for, has left her a target. She has a plan for how to break their power sufficiently that they will hopefully leave her alone, but to pull it off she's going to need help. As the story opens, she is working to acquire that help in a very Marguerite sort of way: breaking into the office of Bishop Beartongue of the Temple of the White Rat.

The Red Sail, the powerful merchant organization Marguerite worked for, makes their money in the salt trade. Marguerite has learned that someone invented a cheap and reproducible way to extract salt from sea water, thus making the salt trade irrelevant. The Red Sail wants to ensure that invention never sees the light of day, and has forced the artificer into hiding. Marguerite doesn't know where they are, but she knows where she can find out: the Court of Smoke, where the artificer has a patron.
Having grown up in Anuket City, Marguerite was familiar with many clockwork creations, not to mention all the ways that they could go horribly wrong. (Ninety-nine times out of a hundred, it was an explosion. The hundredth time, it ran amok and stabbed innocent bystanders, and the artificer would be left standing there saying, "But I had to put blades on it, or how would it rake the leaves?" while the gutters filled up with blood.)
All Marguerite needs to put her plan into motion is some bodyguards so that she's not constantly distracted and anxious about being assassinated. Readers of this series will be unsurprised to learn that the bodyguards she asks Beartongue for are paladins, including a large broody male one with serious self-esteem problems.

This is, like the other books in this series, a slow-burn romance with infuriating communication problems and a male protagonist who would do well to seek out a sack of hammers as a mentor. However, it has two things going for it that most books in this series do not: a long and complex plot to which the romance takes a back seat, and Marguerite, who is not particularly interested in playing along with the expected romance developments. There are also two main paladins in this story, not just one, and the other is one of the two female paladins of the Saint of Steel and rather more entertaining than Shane.

I generally like court intrigue stories, which is what fills most of this book. Marguerite is an experienced operative, so the reader gets some solid competence porn, and the paladins are fish out of water but are also unexpectedly dangerous, which adds both comedy and satisfying table-turning. I thoroughly enjoyed the maneuvering and the culture clashes. Marguerite is very good at what she does, knows it, and is entirely uninterested in other people's opinions about that, which short-circuits a lot of Shane's most annoying behavior and keeps the story from devolving into mopey angst like some of the books in this series have done.

The end of this book takes the plot in a different direction that adds significantly to the world-building, but also has a (thankfully short) depths-of-despair segment that I endured rather than enjoyed. I am not really in the mood for bleak hopelessness in my fiction at the moment, even if the reader is fairly sure it will be temporary. But apart from that, I thoroughly enjoyed this book from beginning to end. When we finally meet the artificer, they are an absolute delight in that way that Kingfisher is so good at. The whole story is infused with the sense of determined and competent people refusing to stop trying to fix problems. As usual, the romance was not for me and I think the book would have been better without it, but it's less central to the plot and therefore annoyed me less than in any of the books in this series so far. My one major complaint is the lack of gnoles, but we get some new and intriguing world-building to make up for it, along with a setup for a fifth book that I am now extremely curious about.

By this point in the series, you probably know if you like the general formula. Compared to the previous book, Paladin's Hope, I thought Paladin's Faith was much stronger and more interesting, but it's clearly of the same type. If, like me, you like the plots but not the romance, the plot here is more substantial. You will have to decide if that makes up for a romance in the typical T. Kingfisher configuration. Personally, I enjoyed this quite a bit, except for the short bleak part, and I'm back to eagerly awaiting the next book in the series.

Rating: 8 out of 10

31 January 2026

Russ Allbery: Review: Dragon Pearl

Review: Dragon Pearl, by Yoon Ha Lee
Series: Thousand Worlds #1
Publisher: Rick Riordan Presents
Copyright: 2019
ISBN: 1-368-01519-0
Format: Kindle
Pages: 315
Dragon Pearl is a middle-grade space fantasy based on Korean mythology and the first book of a series.

Min is a fourteen-year-old girl living on the barely-terraformed world of Jinju with her extended family. Her older brother Jun passed the entrance exams for the Academy and left to join the Thousand Worlds Space Forces, and Min is counting the years until she can do the same. Those plans are thrown into turmoil when an official investigator appears at their door claiming that Jun deserted to search for the Dragon Pearl. A series of impulsive fourteen-year-old decisions lead to Min heading for a spaceport alone, determined to find her brother and prove his innocence.

This would be a rather improbable quest for a young girl, but Min is a gumiho, one of the supernaturals who live in the Thousand Worlds alongside non-magical humans. Unlike the more respectable dragons, tigers, goblins, and shamans, gumiho are viewed with suspicion and distrust because their powers are useful for deception. They are natural shapeshifters who can copy the shapes of others, and their Charm ability lets them influence people's thoughts and create temporary illusions of objects such as ID cards. It will take all of Min's powers, and some rather lucky coincidences, to infiltrate the Space Forces and determine what happened to her brother.

It's common for reviews of this book to open with a caution that this is a middle-grade adventure novel and you should not expect a story like Ninefox Gambit. I will be boring and repeat that caution. Dragon Pearl has a single first-person viewpoint and a very linear and straightforward plot. Adult readers are unlikely to be surprised by plot twists; the fun is the world-building and seeing how Min manages to work around plot obstacles.

The world-building is enjoyable but not very rigorous. Min uses and abuses Charm with the creative intensity of a Dungeons & Dragons min-maxer. Each individual event makes sense given the implication that Min is unusually powerful, but I'm dubious about the surrounding society and the lack of protections against Charm given what Min is able to do. Min does say that gumiho are rare and many people think they're extinct, which is a bit of a fig leaf, but you'll need to bring your urban fantasy suspension of disbelief skills to this one.

I did like that the world-building conceit went more than skin deep and influenced every part of the world. There are ghosts who are critical to the plot. Terraforming is done through magic, hence the quest for the Dragon Pearl and the miserable state of Min's home planet due to its loss. Medical treatment involves the body's meridians, as does engineering: the starships have meridians similar to those of humans, and engineers partly merge with those meridians to adjust them. This is not the sort of book that tries to build rigorous scientific theories or explain them to the reader, and I'm not sure everything would hang together if you poked at it too hard, but Min isn't interested in doing that poking and the story doesn't try to justify itself. It's mostly a vibe, but it's a vibe that I enjoyed and that is rather different than other space fantasy I've read.

The characters were okay but never quite clicked for me, in part because proper character exploration would have required Min to take a detour from her quest to find her brother, and that was not going to happen.
The reader gets occasional glimpses of a military SF cadet story and a friendship-on-false-premises story, but neither has time to breathe because Min drops any entanglement that gets in the way of her quest. She's almost amoral in a way that I found believable but not quite aligned with my reading mood. I also felt a bit wrong-footed by how her friendships developed; saying too much more would be a spoiler, but I was expecting more human connection than I got.

I think my primary disappointment with this book was something I knew going in, not in any way its fault, and part of the reason why I'd put off reading it: this is pitched at young teenagers and didn't have quite enough plot and characterization complexity to satisfy me. It's a linear, somewhat episodic adventure story with some neat world-building, and it therefore glides over the spots where an adult novel would have added political and factional complexity. That is exactly as advertised, so it's up to you whether that's the book you're in the mood for.

One warning: the text of this book opens with an introduction by Rick Riordan that is just fluff marketing and that spoils the first few chapters of the book. It is unmarked as such at the beginning and tricked me into thinking it was the start of the book proper, and then deeply annoyed me. If you do read this book, I recommend skipping the utterly pointless introduction and going straight to chapter one.

Followed by Tiger Honor.

Rating: 6 out of 10

20 January 2026

Sahil Dhiman: Conferences, why?

Back in December, I was working to help organize multiple different conferences. One has already happened; the rest are still works in progress. That's when the thought struck me: why so many conferences, and why do I work for them?

I have been fairly active in the scene since 2020. For most conferences, I usually arrive in the city late on the previous day and leave on the day the conference closes. Conferences for me are the place to meet friends and new folks and hear about them, their work, new developments, and what's happening in their interest zones. I feel naturally happy talking to folks. In this case, people inspire me to work. Nothing can replace a passionate technical and social discussion, which stretches way into dinner parties and later.

For most conferences now, I just show up without a set role (DebConf is probably an exception to this). It usually involves talking to folks, suggesting what needs to be done, doing a bit of it myself, and finishing some last-minute stuff during the actual event. Having more of these conferences and helping make them happen naturally gives everyone more places to come together, meet, talk, and work on something. No doubt, one reason for all these conferences is evangelism for, let's say, Free Software, OpenStreetMap, Debian etc., which is good and needed for the pipeline. But for me, the primary reason would always be meeting folks.

19 January 2026

Russell Coker: Furilabs FLX1s

The Aim

I have just got a Furilabs FLX1s [1], which is a phone running a modified version of Debian. I want a phone where all the apps are ones that I control and can observe and debug. Android is very good for what it does, and there are security focused forks of Android which have a lot of potential, but for my use a Debian phone is what I want. The FLX1s is not going to be my ideal phone; I am evaluating it for use as a daily-driver until a phone that meets my ideal criteria is built. In this post I aim to provide information to potential users about what it can do, how it does it, and how to get the basic functions working. I also evaluate how well it meets my usage criteria.

I am not anywhere near an average user. I don't think an average user would ever even see one of these unless a more technical relative showed one to them. So while this phone could be used by an average user, I am not evaluating it on that basis. But of course the features of the GUI that make a phone usable for an average user will allow a developer to rapidly get past the beginning stages and into more complex stuff.

Features

The Furilabs FLX1s [1] is a phone designed to run FuriOS, a slightly modified version of Debian. The purpose of this is to run Debian instead of Android on a phone. It has switches to disable the camera, phone communication, and microphone (similar to the Librem 5), although the one to disable phone communication doesn't turn off Wifi. The only other phone I know of with such switches is the Purism Librem 5.

It has a 720*1600 display, which is only slightly better than the 720*1440 display in the Librem 5 and PinePhone Pro. This doesn't compare well to the OnePlus 6 from early 2018 with 2280*1080 or the Note9 from late 2018 with 2960*1440, both phones that I've run Debian on. The current price is $US499, which isn't that good when compared to the latest Google Pixel series: a Pixel 10 costs $US649, has a 2424*1080 display, and has 12G of RAM while the FLX1s only has 8G. Another annoying thing is how rounded the corners are. It seems that round corners that cut off the content are standard practice nowadays; in my collection of phones the latest one I found with hard right angles on the display was a Huawei Mate 10 Pro, which was released in 2017. The corners are rounder than on the Note 9, and this annoys me because the screen is not high resolution by today's standards, so losing the corners matters.

The default installation is Phosh (the GNOME shell for phones) and it is very well configured. Based on my experience with older phone users, I think I could give a phone with this configuration to a relative in the 70+ age range who has minimal computer knowledge and they would be happy with it. Additionally, I could set it up to allow ssh login, and instead of going through the phone-support routine of trying to describe every GUI setting to click on, based on a web page describing menus for the version of Android they are running, I could just ssh in and run diff on the .config directory to find out what they changed (see the sketch below). Furilabs have done a very good job of setting up the default configuration. While Debian developers deserve a lot of credit for packaging the apps, the Furilabs people have chosen a good set of default apps to install to get it going and appear to have made some noteworthy changes to some of them.
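Here is a minimal sketch of that remote-support idea, assuming the phone allows ssh logins and that a known-good copy of the configuration was saved earlier (the host and path names are hypothetical):

# "relative-phone" is a hypothetical ssh host entry for the FLX1s.
# Fetch the current config and compare it against a saved reference copy.
scp -r relative-phone:.config /tmp/phone-config
diff -ru ~/saved/phone-config /tmp/phone-config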
Droidian

The OS is based on Android drivers (using the same techniques as Droidian [2]) and the storage device has the huge number of partitions you expect from Android, as well as a 110G Ext4 filesystem for the main OS. The first issue with the Droidian approach of using an Android kernel and containers for user space code to deal with drivers is that it doesn't work that well. There are 3 D state processes (uninterruptible sleep, which usually indicates a kernel bug if the process remains in that state) after booting and doing nothing special. My tests running Droidian on the Note 9 also had D state processes; in this case they are D state kernel threads (I can't remember whether the Note 9 had regular processes or kernel threads stuck in D state). It is possible for a system to have full functionality in spite of some kernel threads in D state, but generally it's a symptom of things not working as well as you would hope.

The design of Droidian is inherently fragile. You use a kernel and user space code from Android and then use Debian for the rest. You can't do everything the Android way (with the full OS updates etc) and you also can't do everything the Debian way. The TOW Boot functionality in the PinePhone Pro is really handy for recovery [3]; it allows the internal storage to be accessed as a USB mass storage device. The full Android setup with ADB has some OK options for recovery, but part Android and part Debian has fewer options. While it probably is technically possible to do the same things in regard to OS repair and reinstall, the fact that it's different from most other devices means that fixes can't be done in the same way.

Applications

GUI

The system uses Phosh and Phoc, the GNOME system for handheld devices. It's a very different UI from Android; I prefer Android, but it is usable with Phosh.

IM

Chatty works well for Jabber (XMPP) in my tests. It supports Matrix, which I didn't test because I don't want the same program doing Matrix and Jabber, and because Matrix is a heavy protocol which establishes new security keys for each login, so I don't want to keep logging in on new applications. Chatty also does SMS but I couldn't test that without the SIM caddy. I use Nheko for Matrix, which has worked very well for me on desktops and laptops running Debian.

Email

I am currently using Geary for email. It works reasonably well but is lacking proper management of folders, so I can't just subscribe to the important email on my phone so that bandwidth isn't wasted on less important email (there is a GNOME GitLab issue about this; see the Debian Wiki page about Mobile apps [4]).

Music

Music playing isn't a noteworthy thing for a desktop or laptop, but a good music player is important for phone use. The Lollypop music player generally does everything you expect, along with support for all the encoding formats including FLAC. A major limitation of most Android music players seems to be lack of support for some of the common encoding formats. Lollypop has its controls for pause/play and going forward and backward one track on the lock screen.

Maps

The installed map program is gnome-maps, which works reasonably well. It gets directions via the Graphhopper API [5]. One thing we really need is a FOSS replacement for Graphhopper in GNOME Maps.

Delivery and Unboxing

I received my FLX1s on the 13th of Jan [1]. I had paid for it on the 16th of Oct but hadn't received the email with the confirmation link, so the order had been put on hold.
But after I contacted support about that on the 5th of Jan they rapidly got it to me, which was good. They also gave me a free case and screen protector to apologise. I don't usually use screen protectors, but in this case it might be useful as the edges of the case don't even extend 0.5mm above the screen, so if it falls face down the case won't help much. When I got it there was an open space at the bottom where the caddy for SIMs is supposed to be, so I couldn't immediately test VoLTE functionality. The contact form on their web site wasn't working when I tried to report that, and the email for support was bouncing.

Bluetooth

As a test of Bluetooth I connected it to my Nissan LEAF, which worked well for playing music, and I connected it to several Bluetooth headphones. My Thinkpad running Debian/Trixie doesn't connect to the LEAF or to headphones which have worked on previous laptops running Debian and Ubuntu. A friend's laptop running Debian/Trixie also wouldn't connect to the LEAF, so I suspect a bug in Trixie; I need to spend more time investigating this.

Wifi

Currently 5GHz wifi doesn't work; this is a software bug that the Furilabs people are working on. 2.4GHz wifi works fine. I haven't tested running a hotspot due to being unable to get 4G working, as they haven't yet shipped me the SIM caddy.

Docking

This phone doesn't support DP Alt-mode or Thunderbolt docking, so it can't drive an external monitor. This is disappointing; Samsung phones and tablets have supported such things since long before USB-C was invented. Samsung DeX is quite handy for Android devices, and that type of feature is much more useful on a device running Debian than on an Android device.

Camera

The camera works reasonably well on the FLX1s. Until recently the camera on the Librem 5 didn't work, and the camera on my PinePhone Pro currently doesn't work. Here are samples of the regular camera and the selfie camera on the FLX1s and the Note 9; I think this shows that the camera is pretty decent. The selfie looks better, and the front camera is worse for the relatively close photo of a laptop screen (taking photos of computer screens is an important part of my work), but I can probably work around that. I wasn't assessing this camera to find out if it's great, just to find out whether I would have the sorts of problems I had before, and it just worked. The Samsung Galaxy Note series of phones has always had decent specs, including good cameras. Even though the Note 9 is old, comparing to it is a respectable performance. The lighting was poor for all photos.

FLX1s
Note 9
Power Use

In 93 minutes of having the PinePhone Pro, Librem 5, and FLX1s online with open ssh sessions from my workstation, the PinePhone Pro went from 100% battery to 26%, the Librem 5 went from 95% to 69%, and the FLX1s went from 100% to 99%. The battery discharge rates were reported as 3.0W, 2.6W, and 0.39W respectively. Based on having a 16.7Wh battery, 93 minutes of use should have been close to 4% battery use, but in any case all measurements make it clear that the FLX1s will have a much longer battery life. That includes the measurement of just putting my fingers on the phones and feeling the temperature (the FLX1s felt cool and the others felt hot).

The PinePhone Pro and the Librem 5 have an optional Caffeine mode, which I enabled for this test; without it enabled the phone goes into a sleep state and disconnects from Wifi. So those phones would use much less power with Caffeine mode disabled, but they also couldn't get fast responses to notifications etc. I found the option to enable a Caffeine mode switch on the FLX1s, but the power use was reported as being the same both with and without it.

Charging

One problem I found with my phone is that in every case it takes 22 seconds to negotiate power. Even when using straight USB charging (no BC or PD) it doesn't draw any current for 22 seconds. When I connect it, it will stay at 5V, varying between 0W and 0.1W (current rounded off to zero), for 22 seconds or so and then start charging. After the 22 second delay the phone will make the tick sound indicating that it's charging, and the power meter will show that it's drawing some current.

I added the table from my previous post about phone charging speed [6] with an extra row for the FLX1s. For charging from my PC USB ports the results were the worst ever: the port that does BC did not work at all, it was looping trying to negotiate; after a 22 second negotiation delay the port would turn off. The non-BC port gave only 2.4W, which matches the 2.5W given by the spec for a High-power device, which is what that port is designed to give. In a discussion on the Purism forum about the Librem 5 charging speed, one of their engineers told me that the reason why their phone would draw 2A from that port was that the cable was identifying itself as a USB-C port, not a High-power device port. But for some reason, out of the 7 phones I tested, the FLX1s and the One Plus 6 are the only ones to limit themselves to what the port is apparently supposed to provide. Also the One Plus 6 charges slowly on every power supply, so I don't know if it is obeying the spec or just sucking. On a cheap AliExpress charger the FLX1s gets 5.9V and on a USB battery it gets 5.8V. Out of all 42 combinations of device and charger I tested, these were the only ones to involve more than 5.1V but less than 9V. I welcome comments suggesting an explanation.

The case that I received has a hole for the USB-C connector that isn't wide enough for the plastic surrounds on most of my USB-C cables (including the Dell dock). Also, making a connection requires a fairly deep insertion (deeper than on the One Plus 6 or the Note 9). So without adjustment I have to take the case off to charge it. It's no big deal to adjust the hole (I have done it with other cases) but it's an annoyance.
Phone | Top z640 | Bottom z640 | Monitor | Ali Charger | Dell Dock | Battery | Best | Worst
FLX1s | FAIL | 5.0V 0.49A 2.4W | 4.8V 1.9A 9.0W | 5.9V 1.8A 11W | 4.8V 2.1A 10W | 5.8V 2.1A 12W | 5.8V 2.1A 12W | 5.0V 0.49A 2.4W
Note9 | 4.8V 1.0A 5.2W | 4.8V 1.6A 7.5W | 4.9V 2.0A 9.5W | 5.1V 1.9A 9.7W | 4.8V 2.1A 10W | 5.1V 2.1A 10W | 5.1V 2.1A 10W | 4.8V 1.0A 5.2W
Pixel 7 Pro | 4.9V 0.80A 4.2W | 4.8V 1.2A 5.9W | 9.1V 1.3A 12W | 9.1V 1.2A 11W | 4.9V 1.8A 8.7W | 9.0V 1.3A 12W | 9.1V 1.3A 12W | 4.9V 0.80A 4.2W
Pixel 8 | 4.7V 1.2A 5.4W | 4.7V 1.5A 7.2W | 8.9V 2.1A 19W | 9.1V 2.7A 24W | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W | 9.1V 2.7A 24W | 4.7V 1.2A 5.4W
PPP | 4.7V 1.2A 6.0W | 4.8V 1.3A 6.8W | 4.9V 1.4A 6.6W | 5.0V 1.2A 5.8W | 4.9V 1.4A 5.9W | 5.1V 1.2A 6.3W | 4.8V 1.3A 6.8W | 5.0V 1.2A 5.8W
Librem 5 | 4.4V 1.5A 6.7W | 4.6V 2.0A 9.2W | 4.8V 2.4A 11.2W | 12V 0.48A 5.8W | 5.0V 0.56A 2.7W | 5.1V 2.0A 10W | 4.8V 2.4A 11.2W | 5.0V 0.56A 2.7W
OnePlus6 | 5.0V 0.51A 2.5W | 5.0V 0.50A 2.5W | 5.0V 0.81A 4.0W | 5.0V 0.75A 3.7W | 5.0V 0.77A 3.7W | 5.0V 0.77A 3.9W | 5.0V 0.81A 4.0W | 5.0V 0.50A 2.5W
Best | 4.4V 1.5A 6.7W | 4.6V 2.0A 9.2W | 8.9V 2.1A 19W | 9.1V 2.7A 24W | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W | |
Conclusion

The Furilabs support people are friendly and enthusiastic, but my customer experience wasn't ideal. It was good that they could quickly respond to my missing order status and the missing SIM caddy (which I still haven't received but believe is in the mail), but it would be better if such things just didn't happen. The phone is quite user friendly and could be used by a novice.

I paid $US577 for the FLX1s, which is $AU863 at today's exchange rates. For comparison, I could get a refurbished Pixel 9 Pro Fold for $891 from Kogan (the major Australian mail-order company for technology) or a refurbished Pixel 9 Pro XL for $842. The Pixel 9 series has security support until 2031, which is probably longer than you can expect a phone to be used without being broken. So a phone with a much higher resolution screen that's only one generation behind the latest high end phones, albeit refurbished, will cost less. For a brand new phone, a Pixel 8 Pro which has security updates until 2030 costs $874, and a Pixel 9A which has security updates until 2032 costs $861.

Doing what the Furilabs people have done is not a small project. It's a significant amount of work, and the prices of their products need to cover that. I'm not saying that the prices are bad, just that economies of scale and the large quantity of older stock make the older Google products quite good value for money. The new Pixel phones of the latest models are unreasonably expensive. The Pixel 10 is selling new from Google for $AU1,149, which I consider a ridiculous price that I would not pay given the market for used phones etc. If I had a choice of $1,149 or a feature phone I'd pay $1,149, but the FLX1s for $863 is a much better option for me. If all I had to choose from was a new Pixel 10 or a FLX1s for my parents, I'd get them the FLX1s. For a FOSS developer, a FLX1s could be a mobile test and development system which could be lent to a relative when their main phone breaks and the replacement is on order. It seems to be fit for use as a commodity phone. Note that I give this review on the assumption that SMS and VoLTE will just work; I haven't tested them yet.

The UI on the FLX1s is functional and easy enough for a new user while allowing an advanced user to do the things they desire. I prefer the Android style, and the Plasma Mobile style is closer to Android than Phosh is, but changing it is something I can do later. Generally I think that differences between UIs matter more on a desktop environment, which can be used for more complex tasks, than on a phone, where the size of the screen limits what can be done.

I am comparing the FLX1s to Android phones on the basis of what technology is available. But most people who would consider buying this phone will compare it to the PinePhone Pro and the Librem 5, as they have similar uses. The FLX1s beats both those phones handily in terms of battery life and of having everything just work. But it has the most non-free software of the three, and the people who want the $2000 Librem 5 that's entirely made in the US won't want the FLX1s. This isn't the destination for Debian based phones, but it's a good step on the way to it, and I don't think I'll regret this purchase.

17 January 2026

Simon Josefsson: Backup of S3 Objects Using rsnapshot

I've been using rsnapshot to take backups of around 10 servers and laptops for well over 15 years, and it is a remarkably reliable tool that has proven itself many times. Rsnapshot uses rsync over SSH and maintains a temporal hard-link file pool. Once rsnapshot is configured and running on the backup server, you get a hardlink farm with directories like this for each remote server:
/backup/serverA.domain/.sync/foo
/backup/serverA.domain/daily.0/foo
/backup/serverA.domain/daily.1/foo
/backup/serverA.domain/daily.2/foo
...
/backup/serverA.domain/daily.6/foo
/backup/serverA.domain/weekly.0/foo
/backup/serverA.domain/weekly.1/foo
...
/backup/serverA.domain/monthly.0/foo
/backup/serverA.domain/monthly.1/foo
...
/backup/serverA.domain/yearly.0/foo
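Because the snapshots are hard links into a shared pool, unchanged files consume no extra space. A quick way to see this, sketched with the example paths above: du counts each inode only once within a single invocation, so the whole farm is barely larger than one snapshot:

du -sh /backup/serverA.domain/daily.0   # size of one complete snapshot
du -sh /backup/serverA.domain           # entire farm; only changed files add space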
I can browse and rescue files easily, going back in time when needed. The rsnapshot project README explains more, and there is a long rsnapshot HOWTO, although I usually find the rsnapshot man page the easiest to digest.

I have stored multi-TB Git-LFS data on GitLab.com for some time. The yearly renewal is coming up, and the price for Git-LFS storage on GitLab.com is now excessive (~$10,000/year). I have reworked my workflow and finally migrated debdistget to only store Git-LFS stubs on GitLab.com and push the real files to S3 object storage. The cost for this is barely measurable; I have yet to run into the 25/month warning threshold. But how do you back up stuff stored in S3?

For some time, my S3 backup solution has been to run the minio-client mirror command to download all S3 objects to my laptop, and rely on rsnapshot to keep backups of this. While 4TB NVMe drives are relatively cheap, I've felt for quite some time that this disk and network churn on my laptop is unsatisfactory. What is a better approach?

I find S3 hosting sites fairly unreliable by design. Only a couple of clicks in your web browser and you have dropped 100TB of data. Or it is done by someone else who steals your plaintext-equivalent cookie. Thus, I haven't really felt comfortable using any S3-based backup option. I prefer to self-host, although continuously running a mirror job is not sufficient: if I accidentally drop the entire S3 object store, my mirror run will remove all files locally too. The rsnapshot approach, which allows going back in time and keeps data on self-managed servers, feels superior to me.

What if we could use rsnapshot with an S3 client instead of rsync? Someone else asked about this several years ago, and the suggestion was to use the FUSE-based s3fs, which sounded unreliable to me. After some experimentation, working around some hard-coded assumptions in the rsnapshot implementation, I came up with a small configuration pattern and a wrapper tool to implement what I desired. Here is my configuration snippet:
cmd_rsync    /backup/s3/s3rsync
rsync_short_args    -Q
rsync_long_args    --json --remove
lockfile    /backup/s3/rsnapshot.pid
snapshot_root    /backup/s3
backup    s3:://hetzner/debdistget-gnuinos    ./debdistget-gnuinos
backup    s3:://hetzner/debdistget-tacos  ./debdistget-tacos
backup    s3:://hetzner/debdistget-diffos ./debdistget-diffos
backup    s3:://hetzner/debdistget-pureos ./debdistget-pureos
backup    s3:://hetzner/debdistget-kali   ./debdistget-kali
backup    s3:://hetzner/debdistget-devuan ./debdistget-devuan
backup    s3:://hetzner/debdistget-trisquel   ./debdistget-trisquel
backup    s3:://hetzner/debdistget-debian ./debdistget-debian
The idea is to save a backup of a couple of S3 buckets under /backup/s3/. I have some scripts that take a complete rsnapshot.conf file and append my per-directory configuration so that this becomes a complete configuration. If you are curious how I roll this, backup-all invokes backup-one, appending my rsnapshot.conf template with the snippet above. The s3rsync wrapper script is the essential hack that converts rsnapshot's rsync parameters into something that talks S3, and the script is as follows:
#!/bin/sh
set -eu
S3ARG=
for ARG in "$@"; do
    case $ARG in
    s3:://*) S3ARG="$S3ARG "$(echo $ARG | sed -e 's,s3:://,,');;
    -Q*) ;;
    *) S3ARG="$S3ARG $ARG";;
    esac
done
echo /backup/s3/mc mirror $S3ARG
exec /backup/s3/mc mirror $S3ARG
It uses the minio-client tool. I first tried s3cmd, but its sync command reads all files to compute MD5 checksums every time you invoke it, which is very slow. The mc mirror command is blazingly fast since it only compares mtimes, just like rsync or git. First you need to store credentials for your S3 bucket. These are stored in plaintext in ~/.mc/config.json, which I find to be sloppy security practice, but I don't know of any better way to do this. Replace the endpoint URL placeholder, AKEY, and SKEY with the endpoint, access token, and secret token from your S3 provider:
/backup/s3/mc alias set hetzner https://S3-ENDPOINT-URL AKEY SKEY
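As a small mitigation for the plaintext credentials (my own suggestion, not something from the mc documentation), you can at least restrict the file to the backup user and then verify that the alias works:

chmod 600 ~/.mc/config.json
/backup/s3/mc ls hetzner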
If I invoke a sync job for a fully synced up directory the output looks like this:
root@hamster /backup# /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf -V sync
Setting locale to POSIX "C"
echo 1443 > /backup/s3/rsnapshot.pid 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-gnuinos \
    /backup/s3/.sync//debdistget-gnuinos 
/backup/s3/mc mirror --json --remove hetzner/debdistget-gnuinos /backup/s3/.sync//debdistget-gnuinos
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-tacos \
    /backup/s3/.sync//debdistget-tacos 
/backup/s3/mc mirror --json --remove hetzner/debdistget-tacos /backup/s3/.sync//debdistget-tacos
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-diffos \
    /backup/s3/.sync//debdistget-diffos 
/backup/s3/mc mirror --json --remove hetzner/debdistget-diffos /backup/s3/.sync//debdistget-diffos
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-pureos \
    /backup/s3/.sync//debdistget-pureos 
/backup/s3/mc mirror --json --remove hetzner/debdistget-pureos /backup/s3/.sync//debdistget-pureos
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-kali \
    /backup/s3/.sync//debdistget-kali 
/backup/s3/mc mirror --json --remove hetzner/debdistget-kali /backup/s3/.sync//debdistget-kali
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-devuan \
    /backup/s3/.sync//debdistget-devuan 
/backup/s3/mc mirror --json --remove hetzner/debdistget-devuan /backup/s3/.sync//debdistget-devuan
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-trisquel \
    /backup/s3/.sync//debdistget-trisquel 
/backup/s3/mc mirror --json --remove hetzner/debdistget-trisquel /backup/s3/.sync//debdistget-trisquel
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-debian \
    /backup/s3/.sync//debdistget-debian 
/backup/s3/mc mirror --json --remove hetzner/debdistget-debian /backup/s3/.sync//debdistget-debian
 "status":"success","total":0,"transferred":0,"duration":0,"speed":0 
touch /backup/s3/.sync/ 
rm -f /backup/s3/rsnapshot.pid 
/run/current-system/profile/bin/logger -p user.info -t rsnapshot[1443] \
    /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf \
    -V sync: completed successfully 
root@hamster /backup# 
You can tell from the paths that this machine runs Guix. This was the first production use of the Guix System for me, and the machine has been running since 2015 (with the occasional new hard drive). Before that, I used rsnapshot on Debian, but some stable release of Debian dropped the rsnapshot package, paving the way for me to test Guix in production on a non-Internet-exposed machine. Unfortunately, mc is not packaged in Guix, so you will have to install it manually from the MinIO Client GitHub page. Running the daily rotation looks like this:
root@hamster /backup# /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf -V daily
Setting locale to POSIX "C"
echo 1549 > /backup/s3/rsnapshot.pid 
mv /backup/s3/daily.5/ /backup/s3/daily.6/ 
mv /backup/s3/daily.4/ /backup/s3/daily.5/ 
mv /backup/s3/daily.3/ /backup/s3/daily.4/ 
mv /backup/s3/daily.2/ /backup/s3/daily.3/ 
mv /backup/s3/daily.1/ /backup/s3/daily.2/ 
mv /backup/s3/daily.0/ /backup/s3/daily.1/ 
/run/current-system/profile/bin/cp -al /backup/s3/.sync /backup/s3/daily.0 
rm -f /backup/s3/rsnapshot.pid 
/run/current-system/profile/bin/logger -p user.info -t rsnapshot[1549] \
    /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf \
    -V daily: completed successfully 
root@hamster /backup# 
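To run all of this on a schedule, the usual rsnapshot pattern applies: run sync followed by daily from cron, with weekly and monthly rotations less often. A minimal crontab sketch, assuming the paths above and that the rsnapshot.conf template defines matching retention levels:

# Sync from S3 and rotate the daily snapshots every night.
30 3 * * * /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf sync && /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf daily
# Weekly and monthly rotations.
0 5 * * 1 /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf weekly
30 5 1 * * /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf monthly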
Hopefully you will feel inspired to take backups of your S3 buckets now!

13 January 2026

Freexian Collaborators: Debian Contributions: dh-python development, Python 3.14 and Ruby 3.4 transitions, Surviving scraper traffic in Debian CI and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-12

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

dh-python development, by Stefano Rivera

In Debian we build our Python packages with the help of a debhelper-compatible tool, dh-python. Before starting the 3.14 transition (which would rebuild many packages) we landed some updates to dh-python to fix bugs and add features. This started a month of attention on dh-python, iterating through several bug fixes and a couple of unfortunate regressions. dh-python is used by almost all packages containing Python (over 5000). Most of these are very simple, but some are complex and use dh-python in unexpected ways. It's hard to keep almost any change (including obvious bug fixes) from causing some unexpected knock-on behaviour. There is a fair amount of complexity in dh-python, and some rather clever code, which can make it tricky to work on. All of this means that good QA is important. Stefano spent some time adding type annotations and specialized types to make it easier to see what the code is doing and to catch mistakes. This has already made work on dh-python easier.

Now that Debusine has built-in repositories and debdiff support, Stefano could quickly test the effects of changes on many other packages. After each big change, he could upload dh-python to a repository, rebuild e.g. 50 Python packages with it, and see what differences appeared in the output (the comparison step is sketched below). Reviewing the diffs is still a manual process, but can be improved. Stefano did a small test of what it would take to replace direct setuptools setup.py calls with PEP-517 (pyproject-style) builds. There is more work to do here.
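For a rough idea of what that comparison looks like, this is plain debdiff as shipped in Debian's devscripts, not the Debusine workflow itself, and the package names here are hypothetical:

# Compare the binary packages produced before and after a dh-python change.
debdiff foo_1.0-1_all.deb foo_1.0-1.rebuild1_all.deb
# Or compare entire uploads via their .changes files.
debdiff foo_1.0-1_amd64.changes foo_1.0-1.rebuild1_amd64.changes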

Python 3.14 transition, by Stefano Rivera (et al.)

In December the transition to add Python 3.14 as a supported version started in Debian unstable. To do this, we update the list of supported versions in python3-defaults and then start rebuilding modules with C extensions from the leaves inwards. This had already been tested in a PPA and in Ubuntu, so many of the biggest blocking compatibility issues with 3.14 had already been found and fixed. But there are always new issues to discover. Thanks to a number of people in the Debian Python team, we got through the first part of the transition fairly quickly. There are still a number of open bugs that need attention and many failed tests blocking migration to testing. Python 3.14.1 was released just after we started the transition, and very soon after, a follow-up 3.14.2 release came out to address a regression. We then ran into another regression in Python 3.14.2.

Ruby 3.4 transition, by Lucas Kanashiro (et al.)

The Debian Ruby team just started the preparation to move the default Ruby interpreter version to 3.4. At the moment, the ruby3.4 source package is already available in experimental, and ruby-defaults has added support for Ruby 3.4. Lucas rebuilt all reverse dependencies against this new version of the interpreter and published the results here. Lucas also reached out to some stakeholders to coordinate the work. The next steps are: 1) announcing the results to the whole team and asking for help fixing packages that fail to build against the new interpreter; 2) filing bugs against packages FTBFSing against Ruby 3.4 which are not fixed yet; 3) once we have a low number of build failures against Ruby 3.4, asking the Debian Release team to start the transition in unstable.

Surviving scraper traffic in Debian CI, by Antonio Terceiro

Like most of the open web, Debian Continuous Integration has been struggling for a while to keep up with the insatiable hunger of data scrapers everywhere. Solving this involved a lot of trial and error; the final result seems to be stable, and consists of two parts. First, all Debian CI data pages, except the direct links to test log files (such as those provided by the Release Team's testing migration excuses), now require users to be authenticated before being accessed. This means that the Debian CI data is no longer publicly browseable, which is a bit sad. However, this is where we are now. Additionally, there is now a fail2ban-powered firewall-level access limitation for clients that display an abusive access pattern. This went through several iterations, with some of them unfortunately blocking legitimate Debian contributors, but the current state seems to strike a good balance between blocking scrapers and not blocking real users. Please get in touch with the team on the #debci OFTC channel if you are affected by this.
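As an illustration of the general fail2ban technique (this is not the debci team's actual configuration; the jail and filter names are made up), a jail that bans clients with an abusive request rate could look something like this:

[http-scrapers]
# Hypothetical jail: ban clients making more than 300 requests
# within 60 seconds, for one hour, based on the web server access log.
enabled   = true
filter    = http-scrapers
logpath   = /var/log/nginx/access.log
findtime  = 60
maxretry  = 300
bantime   = 3600
banaction = iptables-multiport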

A hybrid dependency solver for crossqa.debian.net, by Helmut Grohne

crossqa.debian.net continuously cross builds packages from the Debian archive. Like Debian's native build infrastructure, it uses dose-builddebcheck to determine whether a package's dependencies can be satisfied before attempting a build. About one third of Debian's packages fail this check, so understanding the reasons is key to improving cross building. Unfortunately, dose-builddebcheck stops after reporting the first problem and does not display additional ones. To address this, a greedy solver implemented in Python now examines each build-dependency individually and can report multiple causes. dose-builddebcheck is still used as a fall-back when the greedy solver does not identify any problems. The report for bazel-bootstrap is a lengthy example.

rebootstrap, by Helmut Grohne

Due to the changes suggested by Loongson earlier, rebootstrap now adds debhelper to its final installability test and builds a few more packages required for installing it. It also now uses a variant of build-essential that has been marked Multi-Arch: same (see the foundational work from last year). This in turn made the use of a non-default GCC version more difficult and required more work to support gcc-16 from experimental. Ongoing archive changes temporarily regressed building fribidi and dash. libselinux and groff have received patches for architecture-specific changes, and libverto has been NMUed to remove the glib2.0 dependency.

Miscellaneous contributions
  • Stefano did some administrative work on debian.social and debian.net instances and Debian reimbursements.
  • Stefano did routine updates of python-authlib, python-mitogen, xdot.
  • Stefano spent several hours discussing Debian's Python package layout with the PyPA upstream community. Debian has ended up with a very different on-disk installed Python layout than other distributions, and this continues to cause some frustration in many communities that need special workarounds to handle it. This ended up impacting cross builds, as Helmut discovered.
  • Raphaël set up Debusine workflows for the various backports repositories on debusine.debian.net.
  • Zulip is not yet in Debian (RFP in #800052), but Raphaël helped with the French translation as he is experimenting with that discussion platform.
  • Antonio performed several routine Salsa maintenance tasks, including fixing salsa-nm-sync, the service that synchronizes project members' data from LDAP to Salsa, which had been broken since salsa.debian.org was upgraded to trixie.
  • Antonio deployed a new amd64 worker host for Debian CI.
  • Antonio did several DebConf technical and administrative bits, including adding support for custom check-in/check-out dates in the MiniDebConf registration module and publishing a call for bids for DebConf27.
  • Carles reviewed and submitted 14 Catalan translations using po-debconf-manager.
  • Carles improved po-debconf-manager: added a delete-package command, made show-information use properly formatted output (YAML), and made it attach the translation to bug reports whose merge request has been open for too long.
  • Carles investigated why some packages appeared in po-debconf-manager but not in the Debian l10n list. It turned out that some packages had debian/po/templates.pot (appearing in po-debconf-manager) but not the expected POTFILES.in file. He created a script to find packages in this or a similar situation and reported bugs.
  • Carles tested and documented how to set up voices (mbrola and festival) when using the Orca speech synthesizer. He commented on a few issues and possible improvements on the debian-accessibility list.
  • Helmut sent patches for 48 cross build failures and initiated discussions on how to deal with two non-trivial matters. Besides Python mentioned above, CMake introduced a cmake_pkg_config builtin which is not aware of the host architecture. He also forwarded a Meson patch upstream.
  • Thorsten uploaded a new upstream version of cups to fix a nasty bug that was introduced by the latest security update.
  • Along with many other Python 3.14 fixes, Colin fixed a tricky segfault in python-confluent-kafka after a helpful debugging hint from upstream.
  • Colin upstreamed an improved version of an OpenSSH patch we've been carrying since 2008 to fix misleading verbose output from scp.
  • Colin used Debusine to coordinate transitions for astroid and pygments, and wrote up the astroid case on his blog.
  • Emilio helped with various transitions, and provided a build fix for opencv for the ffmpeg 8 transition.
  • Emilio tested the GNOME updates for trixie proposed updates (gnome-shell, mutter, glib2.0).
  • Santiago helped review the status of testing different build profiles in parallel on the same pipeline, using the test-build-profiles job. This means, for example, simultaneously testing build profiles such as nocheck and nodoc for the same git tree. Finally, Santiago provided MR !685 to fix the documentation.
  • Anupa prepared a bits post for the Outreachy interns announcement along with Tássia Camões Araújo and worked on publicity team tasks.

11 January 2026

Otto Kekäläinen: Stop using MySQL in 2026, it is not true open source

If you care about supporting open source software and still use MySQL in 2026, you should switch to MariaDB like so many others have already done. The number of git commits on github.com/mysql/mysql-server has been declining significantly in 2025. The screenshot below shows the state of git commits as of this writing in January 2026, and the picture should be alarming to anyone who cares about software being open source.

[Chart: MySQL GitHub commit activity decreasing drastically]

This is not surprising: Oracle should not be trusted as the steward of open source projects

When Oracle acquired Sun Microsystems, and MySQL along with it, back in 2009, the European Commission almost blocked the deal due to concerns that Oracle's goal was just to stifle competition. The deal went through as Oracle made a commitment to keep MySQL going and not kill it, but (to nobody's surprise) Oracle has not been a good steward of MySQL as an open source project, and the community around it has been withering away for years now. All development is done behind closed doors. The publicly visible bug tracker is not the real one Oracle staff actually use for MySQL development, and the few people who try to contribute to MySQL just see their Pull Requests and patch submissions marked as received with mostly no feedback; those changes may or may not appear in the next MySQL release, often rewritten, and with only Oracle staff in the git author/committer fields. The real author only gets a small mention in a blog post.

When I was the engineering manager for the core team working on RDS MySQL and RDS MariaDB at Amazon Web Services, I oversaw my engineers' contributions to both MySQL and MariaDB (the latter being a fork of MySQL by the original MySQL author, Michael Widenius). All the software developers in my org disliked submitting code to MySQL due to how badly Oracle received their contributions. MariaDB is the stark opposite, with all development taking place in real time on github.com/mariadb/server, anyone being able to submit a Pull Request and get a review, all bugs being openly discussed at jira.mariadb.org, and so forth, just like one would expect from a true open source project. MySQL is open source only by license (GPL v2), but not as a project.

MySQL's technical decline in recent years

Despite not being a good open source steward, Oracle should be given credit that it did keep the MySQL organization alive, allowed it to exist fairly independently, and continued developing and releasing new MySQL versions well over a decade after the acquisition. I have no insight into how many customers they had, but I assume the MySQL business was fairly profitable and financially useful to Oracle, at least as long as it didn't gain too many features that might threaten Oracle's own main database business.

I don't know why, perhaps because too many talented people had left the organization, but from a technical point of view MySQL clearly started to deteriorate from 2022 onward. When MySQL 8.0.29 was released with the default ALTER TABLE method switched to run in-place, it had a lot of corner cases that didn't work, causing the database to crash and data to corrupt for many users. The issue wasn't fully fixed until a year later in MySQL 8.0.32. To many users' annoyance, Oracle announced the 8.0 series as "evergreen" and introduced features and changes in the minor releases, instead of just doing bug fixes and security fixes as users had historically learnt to expect from these x.y.Z maintenance releases.

There was no new major MySQL version for six years. After MySQL 8.0 in 2018, it wasn't until 2023 that MySQL 8.1 was released, and it was just a short-term preview release. The first actual new major release, MySQL 8.4 LTS, came out in 2024. Even though it was a new major release, many users were disappointed as it had barely any new features. Many also reported degraded performance with newer MySQL versions; for example, the benchmark by famous MySQL performance expert Mark Callaghan below shows that on write-heavy workloads MySQL 9.5 throughput is typically 15% less than in 8.0.

[Chart: Benchmark showing new MySQL versions being slower than the old]

Due to newer MySQL versions deprecating many features, a lot of users also complained about significant struggles with both the MySQL 5.7->8.0 and 8.0->8.4 upgrades. With few new features and a heavy focus on code base cleanup and feature deprecation, it became obvious to many that Oracle had decided to keep MySQL just barely alive, and to put all relevant new features (e.g. vector search) into Heatwave, Oracle's closed-source and cloud-only service for MySQL customers. As it was evident that Oracle isn't investing in MySQL, Percona's Peter Zaitsev wrote "Is Oracle Finally Killing MySQL?" in June 2024. At this time MySQL's popularity as ranked by DB-Engines had also started to tank hard, a trend that will likely accelerate in 2026.

[Chart: MySQL dropping significantly in DB-Engines ranking]

In September 2025 news reports said that Oracle was reducing its workforce and that MySQL staffing was being heavily cut. Obviously this does not bode well for MySQL's future, and Peter Zaitsev already posted stats in November showing that the latest MySQL maintenance release contained fewer bug fixes than before.

Open source is more than ideology: it has very real effects on software security and sovereignty

Some say they don't care if MySQL is truly open source or not, or that they don't care if it has a future in the coming years, as long as it still works now. I am afraid people thinking this way are taking a huge risk. The database is often the most critical part of a software application stack, and any flaw or problem in operations, let alone a security issue, will have immediate consequences; not caring will eventually get people fired or sued. In open source, problems are discussed openly, and the bigger the problem, the more people and companies will contribute to fixing it. Open source as a development methodology is similar to the scientific method, with a free flow of ideas that are constantly contested and where only the ones with the most compelling evidence win. Not being open means more obscurity, more risk, and more of a "just trust us, bro" attitude.

This open vs. closed contrast is very visible, for example, in how Oracle handles security issues. We can see that in 2025 alone MySQL published 123 CVEs about security issues, while MariaDB had 8. There were 117 CVEs that affected only MySQL and not MariaDB in 2025. I haven't read them all, but typically the CVEs hardly contain any real details. As an example, the most recent one, CVE-2025-53067, states: "Easily exploitable vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Server." There is no information a security researcher or auditor could use to verify whether the original issue actually existed, whether it was fixed, or whether the fix was sufficient and fully mitigated the issue. MySQL users just have to take Oracle's word that it is all good now. Handling security issues like this is in stark contrast to other open source projects, where all security issues and their code fixes are open for full scrutiny after the initial embargo is over and the CVE is made public.

There are also various forms of enshittification going on that one would not see in a true open source project: everything about MySQL as software, documentation, and website pushes users to stop using the open source version and move to the closed MySQL versions, in particular to Heatwave, which is not only closed-source but also results in Oracle fully controlling the customer's database contents. Of course, some could say this is how Oracle makes money and is able to provide a better product. But stories on Reddit and elsewhere suggest that what is going on is more like Oracle milking the last remaining MySQL customers hard, forcing them to pay more and more for less and less.

There are options and migrating is easy, just do it

A large part of MySQL users switched to MariaDB already in the mid-2010s, in particular everyone who cared deeply about their database software staying truly open source. That included large installations such as Wikipedia, and Linux distributions such as Fedora and Debian. Because it's open source and there is no centralized machine collecting statistics, nobody knows what the exact market shares look like. There are however some application-specific stats, such as that 57% of WordPress sites around the world run MariaDB, while the share for MySQL is 42%.

For anyone running a classic LAMP stack application such as WordPress, Drupal, Mediawiki, Nextcloud, or Magento, switching the old MySQL database to MariaDB is straightforward (a sketch of such a switch follows at the end of this section). As MariaDB is a fork of MySQL and mostly backwards compatible with it, swapping out MySQL for MariaDB can be done without changing any of the existing connectors or database clients, as they will continue to work with MariaDB as if it were MySQL.

For those running custom applications and who have the freedom to change how and what database is used, there are dozens of mature and well-functioning open source databases to choose from, with PostgreSQL being the most popular general-purpose database. If your application was built from the start for MySQL, switching to PostgreSQL may however require a lot of work, and the MySQL/MariaDB architecture and the InnoDB storage engine may still offer an edge in e.g. online services where high performance, scalability, and solid replication features are the highest priorities. For a quick and easy migration MariaDB is probably the best option.

Switching from MySQL to Percona Server is also very easy, as it closely tracks all changes in MySQL and deviates from it only by a small number of improvements done by Percona. However, precisely because it is basically just a customized version of the MySQL Server, it's not a viable long-term solution for those trying to fully ditch the dependency on Oracle. There are also several open source databases that have no common ancestry with MySQL but strive to be MySQL-compatible, so most apps built for MySQL can simply switch to using them without needing SQL statements to be rewritten. One such database is TiDB, which has been designed from scratch specifically for highly scalable and large systems, and is so good that even Amazon's latest database solution, DSQL, was built borrowing many ideas from TiDB. However, TiDB only really shines in larger distributed setups, so for the vast majority of regular small- and mid-scale applications currently using MySQL, the most practical solution is probably to just switch to MariaDB, which on most Linux distributions can be installed simply by running apt/dnf/brew install mariadb-server. Whatever you end up choosing, as long as it is not Oracle, you will be better off.
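As promised above, here is a minimal sketch of what an in-place switch can look like on a Debian-style system; package and service names vary between distributions, and you should always take a full dump and test on a staging host first:

# Dump everything first, in case a rollback is needed.
mysqldump --all-databases --routines --events > all-databases.sql
# On Debian, installing mariadb-server replaces mysql-server and
# keeps the existing data directory in place.
sudo apt install mariadb-server
# Refresh the system tables for MariaDB.
sudo mariadb-upgrade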

10 January 2026

Matthias Geiger: Building a propagation box for oyster mushrooms

Inspiration In November I watched a short documentary about a guy who grew pearl oyster mushrooms in his backyard. They used pallet boxes (half of a europallet, 60x80x20 cm) as boxes to hold the substrate the mycelium feeds on. Since I really enjoy (foraged) mushrooms and had the raw materials lying around, I opted to build one myself. This also had the benefit of using what was available instead of just consuming, i.e. buying a pallet box. Preparing the raw materials I had 4.5 m x ~25 cm wooden spruce planks at home. My plan was to cut those into 2 m segments, then trim the edges down to 20 cm, and then cut them into handy pieces, following the dimensions of half a pallet box. This is what they looked like after cutting them with an electric chainsaw to around 2 m: raw_planks You can see that the edges are still not straight, because that's how they came out of the sawmill. Once that was done I visited a family member who had a crosscut saw, a table saw and a band saw; all that I would need. First we trimmed the edges of the 2 m planks with the table saw so they were somewhat straight; then they were flipped and the other edge was cut straight, and their width cut down to 20 cm. After moving them over to the crosscut saw, dividing them into two 60 cm pieces and one 80 cm piece was fairly easy. When cutting the 2 m planks from the 4.5 m ones I allowed for extra offcuts, so I had little waste overall and could use the whole length to get my desired boards. This is what the cut pieces looked like: cut_planks Assembly I packed up my planks, now nicely cut to size, and went to a hardware shop to buy hinges and screws. Assembly was fairly easy and fast: screw a hinge to a corner, hold the other plank onto the hinge so that the corners of both boards touch, and affix the hinge. plank_with_hinge corner_with_hinge When this was done, the frame looked like this: finished_frame As a last step I drilled 10 mm holes more or less randomly in the middle of the box. This is where the mushrooms will grow out of later and can be harvested. box_with_holes Closing thoughts This was a fun project I finished in a day. The hinges have the benefit that they allow the box to be folded up length-wise: folded This allows for convenient storage. Since it's too cold outside right now, cultivation will have to wait until spring. All that's still needed is mycelium, which one can just buy, and some material for the fungus to digest. Oyster mushrooms can also be fed coffee grounds, and the fruiting bodies can be harvested roughly every two weeks.

5 January 2026

Isoken Ibizugbe: Thinking About My Audience

Thinking about who I am addressing is a challenge, but it is an important one. As I write, I realize I'm speaking to three distinct groups: my friends and family who are new to the world of tech, newcomers eager to join programs like Outreachy, and the technical experts who maintain and sustain the projects I work on.

What is FOSS anyway? To my friends and family: Free and Open Source Software (FOSS) refers to software that anyone can freely use, modify, and share. Think of it as a community garden: instead of one company owning the food, people from all over the world contribute, improve, and maintain it so everyone can benefit from it for free.

To the Aspiring Contributors Contributing to an open source project isn't just about writing code. It could involve going over a ton of documentation and understanding a specific coding style. You have to set up your environment and learn to treat documentation as a source of truth, even if it's something you help modify and improve later. Where I come from, this world is fairly unknown, and it seemed quite scary at first. However, I've learned that asking questions and communicating are your best tools. Don't be afraid to do your part by investigating and reading, but remember that the community is there to help you grow.

Why Quality Matters For the past few weeks, I've seen the importance of checking software quality before a release. Imagine you download a new desktop environment, try to open the calculator or the clock, and it crashes or refuses to start. How annoying is that? Or worse, you download software and can't even install it successfully. My work on creating tests for Debian using openQA is aimed at preventing these experiences. We simulate real user actions to make sure that when someone clicks "Open", the application actually works.

Closing Thoughts In general, FOSS has empowered people to access and build technology freely. Whether you are here to use the software or you have the expertise to modify and explore it, there is a place for you in this community. I'm writing this for you, whichever audience you belong to, to show that complex systems become less intimidating when you begin by asking questions.

Vincent Bernat: Using eBPF to load-balance traffic across UDP sockets with Go

Akvorado collects sFlow and IPFIX flows over UDP. Because UDP does not retransmit lost packets, it needs to process them quickly. Akvorado runs several workers listening to the same port. The kernel should load-balance received packets fairly between these workers. However, this does not work as expected. A couple of workers exhibit high packet loss:
$ curl -s 127.0.0.1:8080/api/v0/inlet/metrics \
>   | sed -n 's/akvorado_inlet_flow_input_udp_in_dropped_//p'
packets_total{listener="0.0.0.0:2055",worker="0"} 0
packets_total{listener="0.0.0.0:2055",worker="1"} 0
packets_total{listener="0.0.0.0:2055",worker="2"} 0
packets_total{listener="0.0.0.0:2055",worker="3"} 1.614933572278264e+15
packets_total{listener="0.0.0.0:2055",worker="4"} 0
packets_total{listener="0.0.0.0:2055",worker="5"} 0
packets_total{listener="0.0.0.0:2055",worker="6"} 9.59964121598348e+14
packets_total{listener="0.0.0.0:2055",worker="7"} 0
eBPF can help by implementing an alternate balancing algorithm.

Options for load-balancing There are three methods to load-balance UDP packets across workers:
  1. One worker receives the packets and dispatches them to the other workers.
  2. All workers share the same socket.
  3. Each worker has its own socket, listening to the same port, with the SO_REUSEPORT socket option.

SO_REUSEPORT option Tom Herbert added the SO_REUSEPORT socket option in Linux 3.9. The cover letter for his patch series explains why this new option is better than the two existing ones from a performance point of view:
SO_REUSEPORT allows multiple listener sockets to be bound to the same port. […] Received packets are distributed to multiple sockets bound to the same port using a 4-tuple hash. The motivating case for SO_REUSEPORT in TCP would be something like a web server binding to port 80 running with multiple threads, where each thread might have its own listener socket. This could be done as an alternative to other models:
  1. have one listener thread which dispatches completed connections to workers, or
  2. accept on a single listener socket from multiple threads.
In case #1, the listener thread can easily become the bottleneck with high connection turn-over rate. In case #2, the proportion of connections accepted per thread tends to be uneven under high connection load. […] We have seen the disproportion to be as high as a 3:1 ratio between the thread accepting the most connections and the one accepting the fewest. With SO_REUSEPORT the distribution is uniform. The motivating case for SO_REUSEPORT in UDP would be something like a DNS server. An alternative would be to receive on the same socket from multiple threads. As in the case of TCP, the load across these threads tends to be disproportionate and we also see a lot of contention on the socket lock.
Akvorado uses the SO_REUSEPORT option to dispatch the packets across the workers. However, because the distribution uses a 4-tuple hash, a single socket handles all the flows from one exporter.

SO_ATTACH_REUSEPORT_EBPF option In Linux 4.5, Craig Gallek added the SO_ATTACH_REUSEPORT_EBPF option to attach an eBPF program to select the target UDP socket. In Linux 4.6, he extended it to support TCP. The socket(7) manual page documents this mechanism:1
The BPF program must return an index between 0 and N-1 representing the socket which should receive the packet (where N is the number of sockets in the group). If the BPF program returns an invalid index, socket selection will fall back to the plain SO_REUSEPORT mechanism.
In Linux 4.19, Martin KaFai Lau added the BPF_PROG_TYPE_SK_REUSEPORT program type. Such an eBPF program selects the socket from a BPF_MAP_TYPE_REUSEPORT_SOCKARRAY map instead. This new approach is more reliable when switching target sockets from one instance to another: for example, when upgrading, a new instance can add its sockets and remove the old ones.

Load-balancing with eBPF and Go Altering the load-balancing algorithm for a group of sockets requires two steps:
  1. write and compile an eBPF program in C,2 and
  2. load it and attach it in Go.

eBPF program in C A simple load-balancing algorithm is to randomly choose the destination socket. The kernel provides the bpf_get_prandom_u32() helper function to get a pseudo-random number.
volatile const __u32 num_sockets; // ❶
struct {
    __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
    __type(key, __u32);
    __type(value, __u64);
    __uint(max_entries, 256);
} socket_map SEC(".maps"); // ❷
SEC("sk_reuseport")
int reuseport_balance_prog(struct sk_reuseport_md *reuse_md)
{
    __u32 index = bpf_get_prandom_u32() % num_sockets; // ❸
    bpf_sk_select_reuseport(reuse_md, &socket_map, &index, 0); // ❹
    return SK_PASS; // ❺
}
char _license[] SEC("license") = "GPL";
In ❶, we declare a volatile constant for the number of sockets in the group. We will initialize this constant before loading the eBPF program into the kernel. In ❷, we define the socket map. We will populate it with the socket file descriptors. In ❸, we randomly select the index of the target socket.3 In ❹, we invoke the bpf_sk_select_reuseport() helper to record our decision. Finally, in ❺, we accept the packet.

Header files If you compile the C source with clang, you get errors due to missing headers. The recommended way to solve this is to generate a vmlinux.h file with bpftool:
$ bpftool btf dump file /sys/kernel/btf/vmlinux format c > vmlinux.h
Then, include the following headers:4
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
For my 6.17 kernel, the generated vmlinux.h is quite big: 2.7 MiB. Moreover, bpf/bpf_helpers.h is shipped with libbpf. This adds another dependency for users. As the eBPF program is quite small, I prefer to put the strict minimum in vmlinux.h by cherry-picking the definitions I need.
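The post does not reproduce the trimmed file, but as an illustration, a hand-pruned vmlinux.h for this particular program might contain little more than the following; the map-type value mirrors the kernel UAPI enum and should be verified against your own tree:
/* Hypothetical hand-trimmed vmlinux.h: only what this eBPF program needs. */
typedef unsigned int __u32;
typedef unsigned long long __u64;
enum bpf_map_type {
    BPF_MAP_TYPE_REUSEPORT_SOCKARRAY = 20,
};
enum sk_action {
    SK_DROP = 0,
    SK_PASS = 1,
};
struct sk_reuseport_md; /* opaque here: the program only passes the pointer to the helper */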

Compilation The eBPF Library for Go ships bpf2go, a tool to compile eBPF programs and to generate some scaffolding code. We create a gen.go file with the following content:
package main
//go:generate go tool bpf2go -tags linux reuseport reuseport_kern.c
After running go generate ./..., we can inspect the resulting objects with readelf and llvm-objdump:
$ readelf -S reuseport_bpfeb.o
There are 14 section headers, starting at offset 0x840:
  [Nr] Name              Type             Address           Offset
[…]
  [ 3] sk_reuseport      PROGBITS         0000000000000000  00000040
  [ 6] .maps             PROGBITS         0000000000000000  000000c8
  [ 7] license           PROGBITS         0000000000000000  000000e8
[…]
$ llvm-objdump -S reuseport_bpfeb.o
reuseport_bpfeb.o:  file format elf64-bpf
Disassembly of section sk_reuseport:
0000000000000000 <reuseport_balance_prog>:
;  
       0:   bf 61 00 00 00 00 00 00     r6 = r1
;     __u32 index = bpf_get_prandom_u32() % num_sockets;
       1:   85 00 00 00 00 00 00 07     call 0x7
[…]

Usage from Go Let's set up 10 workers listening to the same port.5 Each socket enables the SO_REUSEPORT option before binding:6
var (
    err   error
    fds   []uintptr
    conns []*net.UDPConn
)
workers := 10
listenAddr := "127.0.0.1:0"
listenConfig := net.ListenConfig{
    Control: func(_, _ string, c syscall.RawConn) error {
        c.Control(func(fd uintptr) {
            err = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
            fds = append(fds, fd)
        })
        return err
    },
}
for range workers {
    pconn, err := listenConfig.ListenPacket(t.Context(), "udp", listenAddr)
    if err != nil {
        t.Fatalf("ListenPacket() error:\n%+v", err)
    }
    udpConn := pconn.(*net.UDPConn)
    listenAddr = udpConn.LocalAddr().String()
    conns = append(conns, udpConn)
}
The second step is to load the eBPF program, initialize the num_sockets variable, populate the socket map, and attach the program to the first socket.7
// Load the eBPF collection.
spec, err := loadReuseport()
if err != nil {
    t.Fatalf("loadVariables() error:\n%+v", err)
}
// Set "num_sockets" global variable to the number of file descriptors we will register.
if err := spec.Variables["num_sockets"].Set(uint32(len(fds))); err != nil {
    t.Fatalf("NumSockets.Set() error:\n%+v", err)
}
// Load the map and the program into the kernel.
var objs reuseportObjects
if err := spec.LoadAndAssign(&objs, nil); err != nil {
    t.Fatalf("loadReuseportObjects() error:\n%+v", err)
}
t.Cleanup(func() { objs.Close() })
// Assign the file descriptors to the socket map.
for worker, fd := range fds {
    if err := objs.reuseportMaps.SocketMap.Put(uint32(worker), uint64(fd)); err != nil {
        t.Fatalf("SocketMap.Put() error:\n%+v", err)
    }
}
// Attach the eBPF program to the first socket.
socketFD := int(fds[0])
progFD := objs.reuseportPrograms.ReuseportBalanceProg.FD()
if err := unix.SetsockoptInt(socketFD, unix.SOL_SOCKET, unix.SO_ATTACH_REUSEPORT_EBPF, progFD); err != nil {
    t.Fatalf("SetsockoptInt() error:\n%+v", err)
}
We are now ready to process incoming packets. Each worker is a Go routine incrementing a counter for each received packet:8
var wg sync.WaitGroup
receivedPackets := make([]int, workers)
for worker := range workers {
    conn := conns[worker]
    packets := &receivedPackets[worker]
    wg.Go(func() {
        payload := make([]byte, 9000)
        for {
            if _, err := conn.Read(payload); err != nil {
                if errors.Is(err, net.ErrClosed) {
                    return
                }
                t.Logf("Read() error:\n%+v", err)
            }
            *packets++
        }
    })
}
Let's send 1000 packets:
sentPackets := 1000
conn, err := net.Dial("udp", conns[0].LocalAddr().String())
if err != nil {
    t.Fatalf("Dial() error:\n%+v", err)
}
defer conn.Close()
for range sentPackets {
    if _, err := conn.Write([]byte("hello world!")); err != nil {
        t.Fatalf("Write() error:\n%+v", err)
    }
}
If we print the content of the receivedPackets array, we can check the balancing works as expected, with each worker getting about 100 packets:
=== RUN   TestUDPWorkerBalancing
    balancing_test.go:84: receivedPackets[0] = 107
    balancing_test.go:84: receivedPackets[1] = 92
    balancing_test.go:84: receivedPackets[2] = 99
    balancing_test.go:84: receivedPackets[3] = 105
    balancing_test.go:84: receivedPackets[4] = 107
    balancing_test.go:84: receivedPackets[5] = 96
    balancing_test.go:84: receivedPackets[6] = 102
    balancing_test.go:84: receivedPackets[7] = 105
    balancing_test.go:84: receivedPackets[8] = 99
    balancing_test.go:84: receivedPackets[9] = 88
    balancing_test.go:91: receivedPackets = 1000
    balancing_test.go:92: sentPackets     = 1000

Graceful restart You can also use SO_ATTACH_REUSEPORT_EBPF to gracefully restart an application. A new instance of the application binds to the same address and prepares its own version of the socket map. Once it attaches the eBPF program to the first socket, the kernel steers incoming packets to this new instance. The old instance needs to drain the already received packets before shutting down. To check that we are not losing any packets, we spawn a Go routine to send as many packets as possible:
sentPackets := 0
notSentPackets := 0
done := make(chan bool)
conn, err := net.Dial("udp", conns1[0].LocalAddr().String())
if err != nil {
    t.Fatalf("Dial() error:\n%+v", err)
}
defer conn.Close()
go func() {
    for {
        if _, err := conn.Write([]byte("hello world!")); err != nil {
            notSentPackets++
        } else {
            sentPackets++
        }
        select {
        case <-done:
            return
        default:
        }
    }
}()
Then, while the Go routine runs, we start the second set of workers. Once they are running, they start receiving packets. If we gracefully stop the initial set of workers, not a single packet is lost!9
=== RUN   TestGracefulRestart
    graceful_test.go:135: receivedPackets1[0] = 165
    graceful_test.go:135: receivedPackets1[1] = 195
    graceful_test.go:135: receivedPackets1[2] = 194
    graceful_test.go:135: receivedPackets1[3] = 190
    graceful_test.go:135: receivedPackets1[4] = 213
    graceful_test.go:135: receivedPackets1[5] = 187
    graceful_test.go:135: receivedPackets1[6] = 170
    graceful_test.go:135: receivedPackets1[7] = 190
    graceful_test.go:135: receivedPackets1[8] = 194
    graceful_test.go:135: receivedPackets1[9] = 155
    graceful_test.go:139: receivedPackets2[0] = 1631
    graceful_test.go:139: receivedPackets2[1] = 1582
    graceful_test.go:139: receivedPackets2[2] = 1594
    graceful_test.go:139: receivedPackets2[3] = 1611
    graceful_test.go:139: receivedPackets2[4] = 1571
    graceful_test.go:139: receivedPackets2[5] = 1660
    graceful_test.go:139: receivedPackets2[6] = 1587
    graceful_test.go:139: receivedPackets2[7] = 1605
    graceful_test.go:139: receivedPackets2[8] = 1631
    graceful_test.go:139: receivedPackets2[9] = 1689
    graceful_test.go:147: receivedPackets = 18014
    graceful_test.go:148: sentPackets     = 18014
Unfortunately, gracefully shutting down a UDP socket is not trivial in Go.10 Previously, we were terminating workers by closing their sockets. However, if we close them too soon, the application loses packets that were assigned to them but not yet processed. Before stopping, a worker needs to call conn.Read() until there are no more packets. A solution is to set a deadline for conn.Read() and check if we should stop the Go routine when the deadline is exceeded:
payload := make([]byte, 9000)
for {
    conn.SetReadDeadline(time.Now().Add(50 * time.Millisecond))
    if _, err := conn.Read(payload); err != nil {
        if errors.Is(err, os.ErrDeadlineExceeded) {
            select {
            case <-done:
                return
            default:
                continue
            }
        }
        t.Logf("Read() error:\n%+v", err)
    }
    *packets++
}
With TCP, this aspect is simpler: after enabling the net.ipv4.tcp_migrate_req sysctl, the kernel automatically migrates waiting connections to a random socket in the same group. Alternatively, eBPF can also control this migration. Both features are available since Linux 5.14.
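For reference, enabling that TCP migration behaviour is a single sysctl; this assumes a 5.14+ kernel:
$ sudo sysctl -w net.ipv4.tcp_migrate_req=1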

Addendum After implementing this strategy in Akvorado, all workers now drop packets!
$ curl -s 127.0.0.1:8080/api/v0/inlet/metrics \
>   | sed -n 's/akvorado_inlet_flow_input_udp_in_dropped_//p'
packets_total{listener="0.0.0.0:2055",worker="0"} 838673
packets_total{listener="0.0.0.0:2055",worker="1"} 843675
packets_total{listener="0.0.0.0:2055",worker="2"} 837922
packets_total{listener="0.0.0.0:2055",worker="3"} 841443
packets_total{listener="0.0.0.0:2055",worker="4"} 840668
packets_total{listener="0.0.0.0:2055",worker="5"} 850274
packets_total{listener="0.0.0.0:2055",worker="6"} 835488
packets_total{listener="0.0.0.0:2055",worker="7"} 834479
The root cause is the default limit of 32 records for Kafka batch sizes. This limit is too low because the brokers incur a large overhead for each batch: they need to ensure it is persisted correctly before acknowledging it. Increasing the limit to 4096 records fixes the issue. While load-balancing incoming flows with eBPF remains useful, it did not solve the main problem. At least the even distribution of dropped packets helped identify the real bottleneck.

  1. The current version of the manual page is incomplete and does not cover the evolution introduced in Linux 4.19. There is a pending patch about this.
  2. Rust is another option. However, the program we use is so trivial that it does not make sense to use Rust.
  3. As bpf_get_prandom_u32() returns a pseudo-random 32-bit unsigned value, this method exhibits a very slight bias towards the first indexes. This is unlikely to be worth fixing.
  4. Some examples include <linux/bpf.h> instead of "vmlinux.h". This makes your eBPF program dependent on the installed kernel headers.
  5. listenAddr is initially set to 127.0.0.1:0 to allocate a random port. After the first iteration, it is updated with the allocated port.
  6. This is the setupSockets() function in fixtures_test.go.
  7. This is the setupEBPF() function in fixtures_test.go.
8. The complete code is in balancing_test.go.
9. The complete code is in graceful_test.go.
  10. In C, we would poll() both the socket and a pipe used to signal for shutdown. When the second condition is triggered, we drain the socket by executing a series of non-blocking read() until we get EWOULDBLOCK. A sketch of this pattern follows the list.
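A minimal sketch of that C pattern, assuming a non-blocking socket and eliding error handling (an illustration, not code from the post):
#include <poll.h>
#include <unistd.h>
static void worker_loop(int sock, int shutdown_fd) {
    char buf[9000];
    struct pollfd fds[2] = {
        { .fd = sock,        .events = POLLIN },
        { .fd = shutdown_fd, .events = POLLIN }, /* read end of the shutdown pipe */
    };
    for (;;) {
        poll(fds, 2, -1);
        if (fds[0].revents & POLLIN)
            read(sock, buf, sizeof(buf));   /* process one datagram */
        if (fds[1].revents & POLLIN) {
            /* Shutdown signalled: drain whatever is already queued. */
            while (read(sock, buf, sizeof(buf)) >= 0)
                ;                           /* keep draining datagrams */
            return;                         /* read() failed with EWOULDBLOCK: queue empty */
        }
    }
}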

4 January 2026

Matthew Garrett: What is a PC compatible?

Wikipedia says "An IBM PC compatible is any personal computer that is hardware- and software-compatible with the IBM Personal Computer (IBM PC) and its subsequent models". But what does this actually mean? The obvious literal interpretation is that, for a device to be PC compatible, all software originally written for the IBM 5150 must run on it. Is this a reasonable definition? Is it one that any modern hardware can meet? Before we dig into that, let's go back to the early days of the x86 industry. IBM had launched the PC built almost entirely around off-the-shelf Intel components, and shipped full schematics in the IBM PC Technical Reference Manual. Anyone could buy the same parts from Intel and build a compatible board. They'd still need an operating system, but Microsoft was happy to sell MS-DOS to anyone who'd turn up with money. The only thing stopping people from cloning the entire board was the BIOS, the component that sat between the raw hardware and much of the software running on it. The concept of a BIOS originated in CP/M, an operating system originally written in the 70s for systems based on the Intel 8080. At that point in time there was no meaningful standardisation - systems might use the same CPU but otherwise have entirely different hardware, and any software that made assumptions about the underlying hardware wouldn't run elsewhere. CP/M's BIOS was effectively an abstraction layer, a set of code that could be modified to suit the specific underlying hardware without needing to modify the rest of the OS. As long as applications only called BIOS functions, they didn't need to care about the underlying hardware and would run on all systems that had a working CP/M port. By 1979, boards based on the 8086, Intel's successor to the 8080, were hitting the market. The 8086 wasn't machine code compatible with the 8080, but 8080 assembly code could be assembled to 8086 instructions to simplify porting old code. Despite this, the 8086 version of CP/M was taking some time to appear, and a company called Seattle Computer Products started producing a new OS closely modelled on CP/M and using the same BIOS abstraction layer concept. When IBM started looking for an OS for their upcoming 8088 (an 8086 with an 8-bit data bus rather than a 16-bit one) based PC, a complicated chain of events resulted in Microsoft paying a one-off fee to Seattle Computer Products, porting their OS to IBM's hardware, and the rest is history. But one key part of this was that despite what was now MS-DOS existing only to support IBM's hardware, the BIOS abstraction remained, and the BIOS was owned by the hardware vendor - in this case, IBM. One key difference, though, was that while CP/M systems typically included the BIOS on boot media, IBM integrated it into ROM. This meant that MS-DOS floppies didn't include all the code needed to run on a PC - you needed IBM's BIOS. To begin with this wasn't obviously a problem in the US market since, in a way that seems extremely odd from where we are now in history, it wasn't clear that machine code was actually copyrightable. In 1982 Williams v. Artic determined that it could be, even if fixed in ROM - this ended up having broader industry impact in Apple v. Franklin, and it became clear that clone machines making use of the original vendor's ROM code wasn't going to fly. Anyone wanting to make hardware compatible with the PC was going to have to find another way. And here's where things diverge somewhat.
Compaq famously performed clean-room reverse engineering of the IBM BIOS to produce a functionally equivalent implementation without violating copyright. Other vendors, well, were less fastidious - they came up with BIOS implementations that either implemented a subset of IBM's functionality, or didn't implement all the same behavioural quirks, and compatibility was restricted. In this era several vendors shipped customised versions of MS-DOS that supported different hardware (which you'd think wouldn't be necessary given that's what the BIOS was for, but still), and the set of PC software that would run on their hardware varied wildly. This was the era where vendors even shipped systems based on the Intel 80186, an improved 8086 that was both faster than the 8086 at the same clock speed and was also available at higher clock speeds. Clone vendors saw an opportunity to ship hardware that outperformed the PC, and some of them went for it. You'd think that IBM would have immediately jumped on this as well, but no - the 80186 integrated many components that were separate chips on 8086 (and 8088) based platforms, but crucially didn't maintain compatibility. As long as everything went via the BIOS this shouldn't have mattered, but there were many cases where going via the BIOS introduced performance overhead or simply didn't offer the functionality that people wanted, and since this was the era of single-user operating systems with no memory protection, there was nothing stopping developers from just hitting the hardware directly to get what they wanted. Changing the underlying hardware would break them. And that's what happened. IBM was the biggest player, so people targeted IBM's platform. When BIOS interfaces weren't sufficient they hit the hardware directly - and even if they weren't doing that, they'd end up depending on behavioural quirks of IBM's BIOS implementation. The market for DOS-compatible but not PC-compatible machines mostly vanished, although there were notable exceptions - in Japan the PC-98 platform achieved significant success, largely as a result of the Japanese market being pretty distinct from the rest of the world at that point in time, but also because it actually handled Japanese at a point where the PC platform was basically restricted to ASCII or minor variants thereof. So, things remained fairly stable for some time. Underlying hardware changed - the 80286 introduced the ability to access more than a megabyte of address space and would promptly have broken a bunch of things, except IBM came up with an utterly terrifying hack that bit me back in 2009, and which ended up sufficiently codified into Intel design that it was one mechanism for breaking the original XBox security. The first 286 PC even introduced a new keyboard controller that supported better keyboards but which remained backwards compatible with the original PC to avoid breaking software. Even when IBM launched the PS/2, the first significant rearchitecture of the PC platform with a brand new expansion bus and associated patents to prevent people cloning it without paying off IBM, they made sure that all the hardware was backwards compatible. For decades, PC compatibility meant not only supporting the officially supported interfaces, it meant supporting the underlying hardware. This is what made it possible to ship install media that was expected to work on any PC, even if you'd need some additional media for hardware-specific drivers.
It's something that still distinguishes the PC market from the ARM desktop market. But it's not as true as it used to be, and it's interesting to think about whether it ever was as true as people thought. Let's take an extreme case. If I buy a modern laptop, can I run 1981-era DOS on it? The answer is clearly no. First, modern systems largely don't implement the legacy BIOS. The entire abstraction layer that DOS relies on isn't there, having been replaced with UEFI. When UEFI first appeared it generally shipped with a Compatibility Services Module, a layer that would translate BIOS interrupts into UEFI calls, allowing vendors to ship hardware with more modern firmware and drivers without having to duplicate them to support older operating systems1. Is this system PC compatible? By the strictest of definitions, no. Ok. But the hardware is broadly the same, right? There are projects like CSMWrap that allow a CSM to be implemented on top of stock UEFI, so everything that hits BIOS should work just fine. And, well, yes, assuming they implement the BIOS interfaces fully, anything using the BIOS interfaces will be happy. But what about stuff that doesn't? Old software is going to expect that my Sound Blaster is going to be on a limited set of IRQs and is going to assume that it's going to be able to install its own interrupt handler and ACK those on the interrupt controller itself, and that's really not going to work when you have a PCI card that's been mapped onto some APIC vector. Also, if your keyboard is attached via USB or SPI then reading it via the CSM will work (because it's calling into UEFI to get the actual data) but trying to read the keyboard controller directly won't2, so you're still actually relying on the firmware to do the right thing, but it's not, because the average person who wants to run DOS on a modern computer owns three fursuits and some knee length socks, and while you are important and vital and I love you all, you're not enough to actually convince a transglobal megacorp to flip the bit in the chipset that makes all this old stuff work. But imagine you are, or imagine you're the sort of person who (like me) thinks writing their own firmware for their weird Chinese Thinkpad knockoff motherboard is a good and sensible use of their time - can you make this work fully? Haha, no, of course not. Yes, you can probably make sure that the PCI Sound Blaster that's plugged into a Thunderbolt dock has interrupt routing to something that is absolutely no longer an 8259 but is pretending to be, so you can just handle IRQ 5 yourself, and you can probably still even write some SMM code that will make your keyboard work, but what about the corner cases? What if you're trying to run something built with IBM Pascal 1.0? There's a risk that it'll assume that trying to access an address just over 1MB will give it the data stored just above 0, and now it'll break. It'd work fine on an actual PC, and it won't work here, so are we PC compatible? That's a very interesting abstract question and I'm going to entirely ignore it. Let's talk about PC graphics3. The original PC shipped with two different optional graphics cards - the Monochrome Display Adapter and the Color Graphics Adapter. If you wanted to run games you were doing it on CGA, because MDA had no mechanism to address individual pixels so you could only render full characters. So, even on the original PC, there was software that would run on some hardware but not on other hardware. Things got worse from there.
CGA was, to put it mildly, shit. Even IBM knew this - in 1984 they launched the PCjr, intended to make the PC platform more attractive to home users. As well as maybe the worst keyboard ever to be associated with the IBM brand, IBM added some new video modes that allowed displaying more than 4 colours on screen at once4, and software that depended on that wouldn't display correctly on an original PC. Of course, because the PCjr was a complete commercial failure, it wouldn't display correctly on any future PCs either. This is going to become a theme. There's never been a properly specified PC graphics platform. BIOS support for advanced graphics modes5 ended up specified by VESA rather than IBM, and even then getting good performance involved hitting hardware directly. It wasn't until Microsoft specced DirectX that anything was broadly usable, even if you limited yourself to Microsoft platforms, and this was an OS-level API rather than a hardware one. If you stick to BIOS interfaces then CGA-era code will work fine on graphics hardware produced up until the 20-teens, but if you were trying to hit CGA hardware registers directly then you're going to have a bad time. This isn't even a new thing - even if we restrict ourselves to the authentic IBM PC range (and ignore the PCjr), by the time we get to the Enhanced Graphics Adapter we're not entirely CGA compatible. Is an IBM PC/AT with EGA PC compatible? You'd likely say "yes", but there's software written for the original PC that won't work there. And, well, let's go even more basic. The original PC had a well defined CPU frequency and a well defined CPU that would take a well defined number of cycles to execute any given instruction. People could write software that depended on that. When CPUs got faster, some software broke. This resulted in systems with a Turbo Button - a button that would drop the clock rate to something approximating the original PC so stuff would stop breaking. It's fine, we'd later end up with Windows crashing on fast machines, because hardware details will absolutely bleed through. So, what's a PC compatible? No modern PC will run the DOS that the original PC ran. If you try hard enough you can get it into a state where it'll run most old software, as long as it doesn't have assumptions about memory segmentation or your CPU or want to talk to your GPU directly. And even then it'll potentially be unusable or crash, because time is hard. The truth is that there's no way we can technically describe a PC Compatible now - or, honestly, ever. If you sent a modern PC back to 1981, the media would be amazed and also point out that it didn't run Flight Simulator. "PC Compatible" is a socially defined construct, just like "Woman". We can get hung up on the details or we can just chill.

  1. Windows 7 is entirely happy to boot on UEFI systems, except that it relies on being able to use a BIOS call to set the video mode during boot, which has resulted in things like UEFISeven to make that work on modern systems that don't provide BIOS compatibility
  2. Back in the 90s and early 2000s operating systems didn't necessarily have native drivers for USB input devices, so there was hardware support for trapping OS accesses to the keyboard controller and redirecting that into System Management Mode, where some software that was invisible to the OS would speak to the USB controller and then fake a response. Anyway, that's how I made a laptop that could boot unmodified MacOS X
  3. (my name will not be Wolfwings Shadowflight)
  4. Yes, yes, ok, 8088 MPH demonstrates that if you really want to you can do better than that on CGA
  5. and by "advanced" we're still talking about the 90s, don't get excited

3 January 2026

Louis-Philippe Véronneau: 2025 - A Musical Retrospective

2026 already! The winter weather here has really been beautiful and I always enjoy this time of year. Writing this yearly musical retrospective has now become a beloved tradition of mine1 and I enjoy retracing the year's various events through albums I listened to and concerts I went to. Albums In 2025, I added 141 new albums to my collection, around 60% more than last year's haul. I think this might have been too much? I feel like I didn't have time to properly enjoy all of them and as such, I decided to slow down my acquisition spree sometime in early December, around the time I normally do the complete opposite. This year again, I bought the vast majority of my music on Bandcamp. Most of the other albums I bought as CDs and ripped them. Concerts In 2025, I went to the following 25 (!!) concerts: Although I haven't touched metalfinder's code in a good while, my instance still works very well and I get the occasional match when a big-name artist in my collection comes to town. Most of the venues that advertise on Bandsintown are tied to Ticketmaster though, which means most underground artists (i.e. most of the music I listen to) end up playing elsewhere. As such, shout out again to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there. See you all in 2026!

  1. see the 2022, 2023 and 2024 entries

31 December 2025

kpcyrd: 2025 wrapped

Same as last year, this is a summary of what I've been up to throughout the year. See also the recap/retrospection posts published by my friends (antiz, jvoisin, orhun). Thanks to everybody who has been part of my human experience, past or present. Especially those who've been closest.

17 December 2025

Matthew Garrett: How did IRC ping timeouts end up in a lawsuit?

I recently won a lawsuit against Roy and Rianne Schestowitz, the authors and publishers of the Techrights and Tuxmachines websites. The short version of events is that they were subject to an online harassment campaign, which they incorrectly blamed me for. They responded with a large number of defamatory online posts about me, which the judge described as "unsubstantiated character assassination" and consequently awarded me significant damages. That's not what this post is about, as such. It's about the sole meaningful claim made that tied me to the abuse.

In the defendants' defence and counterclaim[1], paragraph 15.27 asserts in part: "The facts linking the Claimant to the sock puppet accounts include, on the IRC network: simultaneous dropped connections to the mjg59_ and elusive_woman accounts. This is so unlikely to be coincidental that the natural inference is that the same person posted under both names." "elusive_woman" here is an account linked to the harassment, and "mjg59_" is me. This is actually a surprisingly interesting claim to make, and it's worth going into in some more detail.

The event in question occurred on the 28th of April, 2023. You can see a line reading *elusive_woman has quit (Ping timeout: 2m30s), followed by one reading *mjg59_ has quit (Ping timeout: 2m30s). The timestamp listed for the first is 09:52, and for the second 09:53. Is that actually simultaneous? We can actually gain some more information - if you hover over the timestamp links on the right hand side you can see that the link is actually accurate to the second even if that's not displayed. The first event took place at 09:52:52, and the second at 09:53:03. That's 11 seconds apart, which is clearly not simultaneous, but maybe it's close enough. Figuring out more requires knowing what a "ping timeout" actually means here.

The IRC server in question is running Ergo (link to source code), and the relevant function is handleIdleTimeout(). The logic here is fairly simple - track the time since activity was last seen from the client. If that time is longer than DefaultIdleTimeout (which defaults to 90 seconds) and a ping hasn't been sent yet, send a ping to the client. If a ping has been sent and the timeout is greater than DefaultTotalTimeout (which defaults to 150 seconds), disconnect the client with a "Ping timeout" message. There's no special logic for handling the ping reply - a pong simply counts as any other client activity and resets the "last activity" value and timeout.

What does this mean? Well, for a start, two clients running on the same system will only have simultaneous ping timeouts if their last activity was simultaneous. Let's imagine a machine with two clients, A and B. A sends a message at 02:22:59. B sends a message 2 seconds later, at 02:23:01. The idle timeout for A will fire at 02:24:29, and for B at 02:24:31. A ping is sent for A at 02:24:29 and is responded to immediately - the idle timeout for A is now reset to 02:25:59, 90 seconds later. The machine hosting A and B has its network cable pulled out at 02:24:30. The ping to B is sent at 02:24:31, but receives no reply. A minute later, at 02:25:31, B quits with a "Ping timeout" message. A ping is sent to A at 02:25:59, but receives no reply. A minute later, at 02:26:59, A quits with a "Ping timeout" message. Despite both clients having their network interrupted simultaneously, the ping timeouts occur 88 seconds apart.
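The arithmetic above is mechanical enough to check with a short simulation. The following is a minimal model of the described timeout logic, not Ergo's actual code, and it reproduces the 88-second gap from the example:
package main

import (
    "fmt"
    "time"
)

const (
    idleTimeout  = 90 * time.Second  // silence before a PING is sent
    totalTimeout = 150 * time.Second // silence before a "Ping timeout" quit
)

// quitTime returns when a client quits with "Ping timeout", given its last
// activity and the moment its network went away. An answered PING counts as
// activity and resets the timer.
func quitTime(lastActivity, networkLost time.Time) time.Time {
    for {
        ping := lastActivity.Add(idleTimeout)
        if ping.Before(networkLost) {
            lastActivity = ping // client still reachable: the PONG resets the timer
            continue
        }
        return lastActivity.Add(totalTimeout) // no reply possible
    }
}

func main() {
    base := time.Date(2023, 4, 28, 2, 22, 59, 0, time.UTC)
    lost := base.Add(91 * time.Second)           // cable pulled at 02:24:30
    a := quitTime(base, lost)                    // A last active 02:22:59
    b := quitTime(base.Add(2*time.Second), lost) // B last active 02:23:01
    fmt.Println("A quits at", a.Format("15:04:05")) // 02:26:59
    fmt.Println("B quits at", b.Format("15:04:05")) // 02:25:31
}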

So, two clients disconnecting with ping timeouts 11 seconds apart is not incompatible with the network connection being interrupted simultaneously - depending on activity, simultaneous network interruption may result in disconnections up to 90 seconds apart. But another way of looking at this is that network interruptions may occur up to 90 seconds apart and generate simultaneous disconnections[2]. Without additional information it's impossible to determine which is the case.

This already casts doubt over the assertion that the disconnection was simultaneous, but if this is unusual enough it's still potentially significant. Unfortunately for the Schestowitzes, even looking just at the elusive_woman account, there were several cases where elusive_woman and another user had a ping timeout within 90 seconds of each other - including one case where elusive_woman and schestowitz[TR] disconnect 40 seconds apart. By the Schestowitzes' argument, it's also a natural inference that elusive_woman and schestowitz[TR] (one of Roy Schestowitz's accounts) are the same person.

We didn't actually need to make this argument, though. In England it's necessary to file a witness statement describing the evidence that you're going to present in advance of the actual court hearing. Despite being warned of the consequences on multiple occasions the Schestowitzes never provided any witness statements, and as a result weren't allowed to provide any evidence in court, which made for a fairly foregone conclusion.

[1] As well as defending themselves against my claim, the Schestowitzes made a counterclaim on the basis that I had engaged in a campaign of harassment against them. This counterclaim failed.

[2] Client A and client B both send messages at 02:22:59. A falls off the network at 02:23:00, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. B falls off the network at 02:24:28, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. Simultaneous disconnects despite over a minute of difference in the network interruption.


14 December 2025

Dirk Eddelbuettel: BH 1.90.0-1 on CRAN: New Upstream

Boost Boost is a very large and comprehensive set of (peer-reviewed) libraries for the C++ programming language, containing well over one hundred individual libraries. The BH package provides a sizeable subset of header-only libraries for (easier, no linking required) use by R. It is fairly widely used: the (partial) CRAN mirror logs (aggregated from the cloud mirrors) show over 41.5 million package downloads. Version 1.90.0 of Boost was released a few days ago following the regular Boost release schedule of April, August and December releases. As before, we packaged it almost immediately and started testing following our annual update cycle, which strives to balance being close enough to upstream with not stressing CRAN and the user base too much. The reverse-depends check revealed only one really minor issue among the over three hundred direct reverse dependencies. And that issue was addressed yesterday within hours by a truly responsive maintainer (and it helped that a related issue had been addressed months earlier with version 1.89.0). So big thanks to Jean-Romain Roussel for the prompt fix, and to Andrew Johnson for the earlier test with 1.89.0. As last year with 1.87.0, no new Boost libraries were added to BH, so the (considerable) size is more or less unchanged. It led to CRAN doing a manual inspection, but as there were no other issues it sailed through and is now in the CRAN repository. The short NEWS entry follows.

Changes in version 1.90.0-1 (2025-12-13)
  • Upgrade to Boost 1.90.0, patched as usual to comment-out diagnostic suppression messages per the request of CRAN
  • Minor upgrades to continuous integration

Via my CRANberries, there is a diffstat report relative to the previous release. Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

1 December 2025

Russ Allbery: Review: Forever and a Day

Review: Forever and a Day, by Haley Cass
Series: Those Who Wait #1.5
Publisher: Haley Cass
Copyright: 2020
ISBN: 979-8-5902-5966-3
Format: Kindle
Pages: 101
Forever and a Day is a coda to Haley Cass's self-published sapphic romance novel Those Who Wait. There is no point in reading it unless you have already read and enjoyed the full book and wanted more of a denouement. Given that Those Who Wait is a romance novel, it is definitionally not a spoiler to reveal that Sutton and Charlotte ended up together. This novella is seven scenes sketching out the next few years of their lives, interspersed with press clippings and social media commentary. These tie up loose ends, give the characters a bit more time together, throw in one more conflict and resolution, add one more sex scene, and stick a few exclamation points after the happily ever after. I am the sort of person who likes long denouements in stories, so I'm the target audience for this sort of sequel that's essentially additional chapters to the book. (The funniest version of this I've read is Jacqueline Carey's Saints Astray.) They are usually not great literature, since there are good reasons for not including these chapters in the book. That is exactly what this is: a few more chapters of the characters being happy, entirely forgettable, and of interest only to people who want that. Cass does try to introduce a bit of a plot via some light family conflict, which was sweet and mostly worked, and some conflict over having children, which was very stereotyped and which I did not enjoy as much. I thought the earlier chapters of this novella were the stronger ones, although I do have to give the characters credit in the later chapters for working through conflict in a mature and fairly reasonable way. It does help, though, when the conflict is entirely resolved by one character being right and the other character being happily wrong. That's character conflict on easy mode. I was happy to see that Sutton got a career, although as in the novel I wish Cass had put some more effort into describing Sutton's efforts in building that career. The details are maddeningly vague, which admittedly matches the maddeningly vague description of Charlotte's politics but which left me unsatisfied. Charlotte's political career continues to be pure wish fulfillment in the most utterly superficial and trivialized way, and it bothered me even more in the novella than it did in the novel. We still have absolutely no idea what she stands for, what she wants to accomplish, and why anyone would vote for her, and yet we get endless soft-focus paeans to how wonderful she will be for the country. Her opponents are similarly vague to the point that the stereotypes Cass uses to signal their inferiority to Charlotte are a little suspect. I'm more critical of this in 2025 than I would have been in 2015 because the last ten years have made clear the amount of damage an absolute refusal to stand for anything except hazy bromides causes, and I probably shouldn't be this annoyed that Cass chose to vaguely gesture towards progressive liberalism without muddying her romance denouement with a concrete political debate. But, just, gah. I found the last chapter intensely annoying, in part because the narrative of that chapter was too cliched and trite to sufficiently distract me from the bad taste of the cotton-candy politics. Other than that, this was minor, sweet, and forgettable. If you want another few chapters of an already long novel, this delivers exactly what you would expect. If the novel was plenty, nothing about this novella is going to change your mind and you can safely skip it. 
I really liked the scene between Charlotte and Sutton's mom, though, and I'm glad I read the novella just for that. Rating: 6 out of 10

17 November 2025

Valhalla's Things: Historically Inaccurate Hemd

Posted on November 17, 2025
Tags: madeof:atoms, craft:sewing
A woman wearing a white shirt with a tall, thick collar with lines of blue embroidery, closed in the front with small buttons; the sleeves are wide and billowing, gathered at the cuffs with more blue embroidery. She's keeping her hands at the waist so that the shirt, which reaches to mid thigh, doesn't look like a shapeless tent from the neck down. After cartridge pleating and honeycombing, I was still somewhat in the mood for that kind of fabric manipulation, and directing my internet searches in that vague direction, and I stumbled on this: https://katafalk.wordpress.com/2012/06/26/patternmaking-for-the-kampfrau-hemd-chemise/ Now, do I want to ever make myself a 16th century German costume, especially a kampfrau one? No! I'm from Lake Como! Those are the enemies who come down the Alps pillaging and bringing the Black Death with them! Although I have to admit that at times during my day job I have found the idea of leaving everything to go march with the Jägermonsters attractive. You know, the exciting prospect of long days of march spent knitting sturdy socks, punctuated by the excitement of settling down in camp and having a chance of doing lots of laundry. Or something. Sometimes being a programmer will make you think odd things. Anyway, going back to the topic, no, I didn't need an historically accurate hemd. But I did need a couple more shirts for daily wear, I did want to try my hand at smocking, and this looked nice, and I was intrigued by the way the shaping of the neck and shoulder worked, and wondered how comfortable it would be. And so, it had to be done. I didn't have any suitable linen, but I did have quite a bit of cotton voile, and since I wasn't aiming at historical accuracy it looked like a good option for something where a lot of fabric had to go in a small space. At first I considered making it with a bit less fabric than the one in the blog, but the voile was quite thin, so I kept the original measurements as is, only adapting the sleeve / sides seams to my size. The same woman, from the back. This time the arms are out, so that the big sleeves show better, but the body does look like a tent. With the pieces being rectangles the width of the fabric, I was able to have at least one side of selvedge on all seams, and took advantage of it by finishing the seams by simply folding the allowances to one side so that the selvedge was on top, and hemstitching them down as I would have done with a folded edge when felling. Also, at first I wanted to make the smocking in white on white, but then I thought about a few hanks of electric blue floss I had in my stash, and decided to just go with it. The initial seams were quickly made, then I started the smocking at the neck, and at that time the project went on hold while I got ready to go to DebConf. Then I came back and took some time to get back into a sewing mood, but finally the smocking on the neck was finished, and I could go on with the main sewing, which, as I expected, went decently fast for a handsewing project. Detail of the smocking in progress on the collar, showing the lines of basting thread I used as a reference, and the two in-progress zig-zag lines being worked from each side. While doing the diagonal smocking on the collar I counted the stitches to make each side the same length, which didn't completely work because the gathers weren't that regular to start with, and started each line from the two front openings going towards the center back, leaving a triangle with a different size right in the middle.
I think overall it worked well enough. Then there were a few more interruptions, but at last it was ready! Just as the weather turned cold-ish and puffy shirts were no longer in season, but it will be there for me next spring. I did manage to wear it a few times and I have to say that the neck shaping is quite comfortable indeed: it doesn't pull in odd ways like the classical historically accurate pirate shirt sometimes does, and the heavy gathering at the neck makes it feel padded and soft. The same shirt belted (which looks nicer); one hand is held out to show that the cuff is a bit too wide and falls down over the hand. I'm not as happy with the cuffs: the way I did them with just honeycombing means that they don't need a closure, and after washing and a bit of steaming they lie nicely, but then they tend to relax into a wider shape. The next time I think I'll leave a slit in the sleeves, possibly make a different type of smocking (depending on whether I have enough fabric) and then line them like the neck so that they are stable. Because, yes, I think that there will be another time: I have a few more projects before that, and I want to spend maybe another year working from my stash, but then I think I'll buy some soft linen and make at least another one, maybe with white-on-white smocking so that it will be easier to match with different garments.

13 November 2025

Freexian Collaborators: Debian Contributions: Upstreaming cPython patches, ansible-core autopkgtest robustness and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-10 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Upstreaming cPython patches, by Stefano Rivera Python 3.14.0 (final) released in early October, and Stefano uploaded it to Debian unstable. The transition to support 3.14 has begun in Ubuntu, but hasn't started in Debian, yet. While build failures in Debian's non-release ports are typically not a concern for package maintainers, Python is fairly low in the stack. If a new minor version has never successfully been built for a Debian port by the time we start supporting it, it will quickly become a problem for the port. Python 3.14 had been failing to build on two Debian ports architectures (hppa and m68k), but thankfully their porters provided patches. These were applied and uploaded, and Stefano forwarded the hppa one upstream. Getting it into shape for upstream approval took some work, and shook out several other regressions for the Python hppa port. Debugging these on slow hardware takes a while. These two ports aren't successfully autobuilding 3.14 yet (they're both timing out in tests), but they're at least manually buildable, which unblocks the ports. Docutils 0.22 also landed in Debian around this time, and Python needed some work to build its docs with it. The upstream isn't quite comfortable with distros using newer docutils, so there isn't a clear path forward for these patches, yet. The start of the Python 3.15 cycle was also a good time to renew submission attempts on our other outstanding Python patches, most importantly multiarch tuples for stable ABI extension filenames.

ansible-core autopkgtest robustness, by Colin Watson The ansible-core package runs its integration tests via autopkgtest. For some time, we've seen occasional failures in the expect, pip, and template_jinja2_non_native tests that usually go away before anyone has a chance to look into them properly. Colin found that these were blocking an openssh upgrade and so decided to track them down. It turns out that these failures happened exactly when the libpython3.13-stdlib package had different versions in testing and unstable. A setup script removed /usr/lib/python3*/EXTERNALLY-MANAGED in order that pip can install system packages for some of the tests, but if a package shipping that file were ever upgraded then that customization would be undone, and the same setup script removed apt pins in a way that caused problems when autopkgtest was invoked in certain ways. In combination with this, one of the integration tests attempted to disable system apt sources while testing the behaviour of the ansible.builtin.apt module, but it failed to do so comprehensively enough, and so that integration test accidentally upgraded the testbed from testing to unstable in the middle of the test. Chaos ensued. Colin fixed this in Debian and contributed the relevant part upstream.

Miscellaneous contributions
  • Carles kept working on missing-relations (packages that Recommend or Suggest packages that are not available in Debian). He improved the tooling to detect Suggested packages that are not available in Debian because they were removed (or changed names).
  • Carles improved po-debconf-manager to send translations for packages that are not in Salsa. He also improved the UI of the tool (using rich for some of the output).
  • Carles, using po-debconf-manager, reviewed and submitted 38 debconf template translations.
  • Carles created a merge request for distro-tracker to align text and input-field (postponed until distro-tracker uses Bootstrap 5).
  • Raphaël updated gnome-shell-extension-hamster for GNOME 49. It is a GNOME Shell integration for the Hamster time tracker.
  • Raphaël merged a couple of trivial merge requests, but he did not yet find the time to properly review and test the Bootstrap 5 related merge requests that are still waiting on Salsa.
  • Helmut sent patches for 20 cross build failures.
  • Helmut refactored debvm, dropping support for running on bookworm. Two trixie features improve its operation: mkfs.ext4 can now consume a tar archive to populate the filesystem via libarchive, and dash now supports set -o pipefail (see the sketch after this list). Beyond this change in operation, a number of robustness and quality issues have been resolved.
  • Thorsten fixed some bugs in the printing software and uploaded improved versions of brlaser and ifhp. Moreover, he uploaded a new upstream version of cups.
  • Emilio updated xorg-server to the latest security release and helped with various transitions.
  • Santiago worked on and reviewed different Salsa CI MRs to address some regressions introduced by the move to sbuild+unshare. Those MRs included: stop adding the salsa-ci user in the build image to the sbuild group, fix the suffix path used by mmdebstrap to create the chroot, and update the documentation about how to use aptly repos in another project.
  • Santiago supported the work on the DebConf 26 organisation, particularly helping to implement a method for counting the votes to choose the conference logo.
  • Stefano reviewed Python PEP-725 and PEP-804, which hope to provide a mechanism to declare external (e.g. APT) dependencies in Python packages. Stefano engaged in discussion and provided feedback to the authors.
  • Stefano prepared for Berkeley DB removal in Python.
  • Stefano ported the backend of reverse-depends to Python 3 (yes, it had been running on 2.7) and migrated it from bzr to git.
  • Stefano updated miscellaneous packages, including beautifulsoup4, mkdocs-macros-plugin, python-pipx.
  • Stefano applied an upstream patch to pypy3, fixing an AST Compiler Assertion error.
  • Stefano uploaded an update to distro-info-data, including data for two additional Debian derivatives: eLxr and Devuan.
  • Stefano prepared an update to dh-python, the Python packaging tool, merging several contributed patches and resolving some bugs.
  • Colin upgraded OpenSSH to 10.1p1, helped upstream to chase down some regressions, and further upgraded to 10.2p1. This is also now in trixie-backports.
  • Colin fixed several build regressions with Python 3.14, scikit-learn 1.7, and other transitions.
  • Colin investigated a malware report against tini, making use of reproducible builds to help demonstrate that this is highly likely to be a false positive.
  • Anupa prepared questions and collected interview responses from women contributors in Debian, to publish as a post for Ada Lovelace Day 2025.
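As referenced in the debvm item above, here is a minimal sketch of the two trixie features it now relies on. File names and the image size are illustrative, and the tar support in mkfs.ext4 assumes an e2fsprogs built with libarchive, as in trixie:

    #!/bin/sh
    # dash in trixie supports set -o pipefail, which debvm can now
    # rely on: a failure anywhere in a pipeline is no longer
    # silently swallowed.
    set -o pipefail

    # Build a rootfs tarball, then populate an ext4 image from it
    # directly, without mounting anything:
    mmdebstrap unstable rootfs.tar
    mkfs.ext4 -d rootfs.tar disk.img 1g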

9 November 2025

Stefano Rivera: Debian Video Team Sprint: November 2025

This week, some of the DebConf Video Team met in Herefordshire (UK) for a sprint. We didn't have a sprint in 2024, and it was sorely needed now. At the sprint we made good progress towards using Voctomix 2 more reliably, and made plans for our future hardware needs.

Voctomix 2

DebConf 25 was the first event at which the team used Voctomix version 2. Testing it during DebCamp 25 (the week before DebConf), it seemed to work reliably. But during the event, we hit repeated audio dropout issues that affected about 18 of our recordings (and live streams). We had attempted to use Voctomix 2 at DebConf 24, and quickly rolled back to version 1 on day 1 of the conference, when we hit similar issues. We thought these issues would be resolved for DebConf 25 by using more powerful (newer) mixing machines.

Trying to get to the bottom of these issues was the main focus of the sprint. Nicolas brought two of Debian's cameras and the Framework laptop that we'd used at the conference, so we could reproduce the problem. It didn't take long to reproduce; in fact, we spent most of the week trying any configuration changes we could think of to avoid it. The issue we've been seeing feels like a GStreamer bug, rather than something Voctomix is doing incorrectly. If anything, configuration changes are avoiding hitting it. Finally, on the last night of the sprint, we managed to run Voctomix all night without the problem appearing. But that isn't enough to feel confident that the issue is avoided. More testing will be required.

Detecting audio breakage

Kyle worked on a way to report the audio quality in our Prometheus exporter, so we can automatically detect this kind of audio breakage. This was implemented in helios, our audio level monitor, and led to some related code refactoring. (A sketch of the underlying measurement appears below.)

Framework Laptops

Historically, the video team has relied on borrowed and rented computer hardware at conferences for our (software) video mixing, streaming, storage and encoding. Many years ago, we'd even typically have a local Debian mirror and upload queue on site. Our video mixing machines had to be desktop-sized computers with two Blackmagic DeckLink Mini Recorder PCIe cards installed in them, to capture video from our cameras.

Now that we reliably have more Internet bandwidth than we really need at our conference venues, we can rely on offsite cloud servers. We only need the video capture and mixing machines on site. Blackmagic also makes UltraStudio Recorder Thunderbolt capture boxes that we can use with a laptop. The project bought a couple of these and a Framework 13 AMD laptop to test at DebConf 25. We used it in production at DebConf, in the "Petit Amphi" room, where it seemed to work fairly well. It was very picky about Thunderbolt cable and port combinations, refusing to even boot when they were connected. Since then, Framework firmware has fixed these issues, and in our testing at the sprint it worked almost perfectly. (One of the capture boxes got into a broken state, and had to be unplugged and re-connected to fix it.) We think these are the best option for the future, and plan to ask the project to buy some more of them.

HDCP

Apple Silicon devices seem to like to HDCP-encrypt their HDMI output whenever possible. This causes our HDMI capture hardware to display an "Encrypted" error, rather than any useful image. Chris experimented with a few different devices to strip HDCP from HDMI video; at least two of them worked.
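An aside on the audio monitoring mentioned above: helios is the team's own tool, but the underlying measurement can be sketched with GStreamer's stock level element, which posts periodic RMS/peak messages on the pipeline bus. This is only an illustration of the technique; the URI is a placeholder, and helios itself may work differently.

    # Print RMS/peak level messages once per second for a stream.
    # -m dumps bus messages; the level element posts one message per
    # interval (here 1 s, expressed in nanoseconds).
    gst-launch-1.0 -m uridecodebin uri=https://example.org/stream \
        ! audioconvert ! level interval=1000000000 ! fakesink

A monitor built on this can alert when the RMS value sits at silence (or the messages stop entirely) while a talk is supposed to be live.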
Spring Cleaning

Kyle dug through the open issues in our Salsa repositories and cleaned some of them up.

DebConf 25 Video Encoding

The core video team at DebConf 25 was a little under-staffed, significantly overlapping with core conference organization, which took priority. That, combined with the Voctomix 2 audio dropout issues we'd hit, meant that there was quite a bit of work left to be done to get the conference videos properly encoded and released. We found that the encodings had been done at the wrong resolution, which forced a re-encode of all videos. In the process, we reviewed videos for audio issues and made a list of the ones that need more work. We ran out of time, and this work isn't done yet.

DebConf 26 Preparation

Kyle reviewed floorplans and photographs of the proposed DebConf 26 talk venues, and built up a list of A/V kit that we'll need to hire.

Carl's Video Box

Carl uses much of the same stack as the video team for many other events in the US. He has been experimenting with a Dell 7212 tablet in an all-in-one laser-cut box. Carl demonstrated this box, which could be perfect for small MiniDebConfs, at the sprint. Using Voctomix 2 on the box requires some work, because it doesn't use Blackmagic cards for video capture.

(Photos: the box, front and back.)

gst-fallbacksrc

Carl's box's needs led us to look at gst-fallbacksrc. This should let Voctomix 2 survive cameras (or network sources) going away for a moment (a pipeline sketch appears at the end of this post). Matthias Geiger packaged it for us, and it's now in Debian NEW. Thanks!

voctomix-outcasts

Carl cut a release of voctomix-outcasts and Stefano uploaded it to unstable.

Ansible Configuration

The video team's stack is deployed with Ansible, and almost everything we do involves work on this stack. Carl upstreamed some of his features to us, and we updated our voctomix2 configuration to take advantage of our experiments at the sprint.

Miscellaneous Voctomix contributions

We fixed a couple of minor bugs in Voctomix.

More Nageru experimentation

In 2023, we tried to configure Nageru (another live video mixer) for the video team's needs. Like Voctomix, it needs some configuration and scaffolding to adapt it to your needs; practically, this means writing a "theme" in Lua that controls the mixer. The team still has a preference for Voctomix (as we're all very familiar with it), but would like to have Nageru available as an option when we need it. We fixed some minor issues in our theme, enough to get it running again on the Framework laptop. Much more work is needed to really make it a usable option.

Thank you

Thanks to the Debian project for funding the costs of the sprint, and to Chris Boot's extended family for providing us with a no-cost sprint venue. Thanks to c3voc for developing and maintaining Voctomix, and helping us to debug issues in it. Thank you to everyone in the video team who attended or helped out remotely, and to the employers who let us work on Debian on company time. We'll likely need to keep working on our stack remotely in the lead-up to DebConf 26, and/or have another sprint before then.

(Photos: breakfast, coffee, hacklab, trains!)
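Finally, the gst-fallbacksrc idea mentioned earlier, sketched as a gst-launch pipeline. fallbacksrc comes from gst-plugins-rs; the property names below follow its documentation, but the URIs are placeholders and exact behaviour may differ by version, so treat this as an untested illustration rather than our production configuration.

    # Wrap a flaky network camera in fallbacksrc: if the source stalls
    # for longer than the timeout (in nanoseconds), switch to a standby
    # image instead of tearing down the pipeline.
    gst-launch-1.0 fallbacksrc uri=rtsp://camera.example/stream \
        fallback-uri=file:///srv/video/standby.png \
        timeout=3000000000 ! videoconvert ! autovideosink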

7 November 2025

Ravi Dwivedi: A Bad Day in Malaysia

Continuing from where Badri and I left off in the last post: on the 7th of December 2024, we boarded a bus from Singapore to the border town of Johor Bahru in Malaysia. The bus stopped at Singapore emigration for us to get off for the formalities. The process was similar to the immigration at the Singapore airport: it was automatic, and we just had to scan our passports for the gates to open. Here also, we didn't get Singapore stamps on our passports. After we were done with emigration, we had to find our bus. We remembered the name of the bus company and the number plate, which helped us recognize our bus. It wasn't there yet when we came out of emigration, but it arrived soon enough, and we boarded it promptly.

From Singapore emigration, the bus travelled a few kilometers and dropped us at the Johor Bahru Sentral (JB Sentral) bus station, where we had to go through Malaysian immigration. The process was manual, unlike Singapore's: there was an immigration officer at the counter who stamped our passports (which I like) and recorded our fingerprints. At the bus terminal, we exchanged rupees at an exchange shop to get Malaysian ringgits. We could not find any free drinking water sources at the bus terminal, so we had to buy water. Badri later told me that Johor Bahru has a lot of data centers, which need a lot of water for cooling. When he read about it later, he immediately connected it with the fact that there was no free drinking water and we had to buy water. Such data centers can lead to scarcity of water for others in the area.

From JB Sentral, we took a bus to Larkin Terminal, as our hotel was nearby. It was 1.5 ringgits per person (30 rupees). To pay the fare, we had to put cash in a box near the driver's seat. Around half an hour later, we reached our hotel. The time was 23:30 hours. The hotel room was hot, as it didn't have air-conditioning; the weather in Malaysia is on the hotter side throughout the year. It was a budget hotel, and we paid 70 ringgits for our room. Badri slept soon after we checked in.

I went out at around 00:30. I was hungry, so I entered a small restaurant nearby, which was quite lively for the midnight hours. I ordered a coffee and an omelet, and also asked for drinking water. The unique thing was that they put ice in hot water to bring its temperature down to normal. My bill from the restaurant looked like the table below, as the item names were in the local language, Malay:
Item            Price (Malaysian ringgits)   Indian rupees   Comments
Nescafe Tarik   2.50                         50              Coffee
Ais Kosong      0.50                         10              Water
Telur Dadar     2.00                         40              Omelet
SST Tax (6%)    0.30                         6               -
Total           5.30                         106             -
After leaving the restaurant, I explored nearby shops and bought some water before going back to the hotel room.

The next day, we had a (pre-booked) bus to Kuala Lumpur. We checked out from the hotel 10 minutes after the check-out time (which was 14:00 hours). However, within those 10 minutes, the hotel staff had already come up three times asking us to clear out (which we were doing as fast as possible), and on the third visit they said our deposit was forfeit, even though it was supposed to cover only keys and towels.

The bus to Kuala Lumpur left from the nearby Larkin Bus Terminal, right next to our hotel, so we walked there. Upon arriving, we found that the process of boarding a bus in Malaysia resembled taking a flight. We needed to go to a counter to get our boarding passes, then report at our gate half an hour before the scheduled time. Furthermore, there was a separate waiting room and boarding gates, and displays listing buses with their arrivals and departures. Finally, to top it off, the buses had seatbelts. We got our boarding pass for 2 ringgits (40 rupees).

After that, we went to get something to eat, as we were hungry. We tried a McDonald's, but couldn't order anything because of the long queue. We didn't have a lot of time, so we proceeded towards our boarding gate without eating. The boarding gate was in a separate room, which had a vending machine. I tried to order something using my card, but the machine wasn't working.

In Malaysia, there is a custom of queueing up to board buses even before the bus has arrived. We saw it in Johor Bahru as well. The culture is so strong that they even did it in Singapore while waiting for the Johor Bahru bus!

Our bus departed at 15:30 as scheduled; the journey was around 5 hours. A couple of hours in, the bus stopped for a break. We got off and went to the toilet. As we were starving (we hadn't eaten anything the whole day), we thought it was a good opportunity to get a snack. There was a stall selling some food, but I had to work out which options were vegetarian. We finally settled on a cylindrical box of potato chips, labelled Mister Potato, for 7 ringgits. We didn't know how long the bus was going to stop, and eating inside buses in Malaysia is forbidden. When we went to get some coffee from the stall, our bus driver was standing there and made a face; we got the impression that he didn't want us to have coffee. However, after we got back on the bus, we had to wait a long time for it to resume its journey, as the driver was taking his sweet time drinking his own coffee.

During the bus journey, we saw a lot of palm trees along the way. The landscape was beautiful, with good road infrastructure throughout. Badri also helped me improve my blog post on obtaining a Luxembourg visa during the ride. The bus dropped us at Terminal Bersepadu Selatan (TBS in short) in Kuala Lumpur at 21:30 hours, where we finally got something to eat.

We also noticed that TBS had lockers, which gave us the idea of leaving some of our luggage there later, while we would be in Brunei. We had booked a cheap Air Asia ticket which doesn't allow checked-in luggage, and keeping the luggage in lockers for three days was cheaper than paying Air Asia's excess luggage penalty. We then took the metro, as our hostel was close to a metro station.
This was a bad day: our deposit was forfeited unfairly, and we got almost nothing to eat. The metro took us to our hostel in the Bukit Bintang area, named Manor by Mingle. I had stayed here earlier, in February 2024, for two nights; back then, I paid 1000 rupees per day for a dormitory bed. However, this time the same hostel was much cheaper: we got a private room for 800 rupees per day, with breakfast included. Earlier it might have been pricier because my stay fell on a weekend, or maybe February brings more tourists to Kuala Lumpur. That's it for this post. Stay tuned for our adventures in Malaysia!
