Search Results: "ban"

19 January 2026

Jonathan Dowland: FOSDEM 2026

I'm going to FOSDEM 2026! I'm presenting in the Containers devroom. My talk is Java Memory Management in Containers and it's scheduled as the first talk on the first day. I'm the warm-up act! The Java devroom has been a stalwart at FOSDEM since 2004 (sometimes in other forms), but sadly there's no Java devroom this year. There's a story about that, but it's not mine to tell. Please recommend to me any interesting talks!

Russell Coker: Furilabs FLX1s

The Aim

I have just got a Furilabs FLX1s [1], which is a phone running a modified version of Debian. I want a phone where all the apps are ones that I can control, observe, and debug. Android is very good at what it does, and there are security-focused forks of Android with a lot of potential, but for my use a Debian phone is what I want. The FLX1s is not going to be my ideal phone; I am evaluating it for use as a daily driver until a phone that meets my ideal criteria is built. In this post I aim to provide information to potential users about what it can do, how it does it, and how to get the basic functions working. I also evaluate how well it meets my usage criteria.

I am not anywhere near an average user. I don't think an average user would ever even see one unless a more technical relative showed it to them. So while this phone could be used by an average user, I am not evaluating it on that basis. But of course the features of the GUI that make a phone usable for an average user will allow a developer to rapidly get past the beginning stages and into more complex stuff.

Features

The Furilabs FLX1s [1] is a phone designed to run FuriOS, which is a slightly modified version of Debian; the purpose of this is to run Debian instead of Android on a phone. It has switches to disable the camera, phone communication, and microphone (similar to the Librem 5), but the one to disable phone communication doesn't turn off Wifi; the only other phone I know of with such switches is the Purism Librem 5. It has a 720*1600 display, which is only slightly better than the 720*1440 displays in the Librem 5 and PinePhone Pro. This doesn't compare well to the OnePlus 6 from early 2018 with 2280*1080 or the Note9 from late 2018 with 2960*1440, both of which are phones I've run Debian on.
The current price is $US499, which isn't that good compared to the latest Google Pixel series: a Pixel 10 costs $US649, has a 2424*1080 display, and has 12G of RAM while the FLX1s only has 8G. Another annoying thing is how rounded the corners are. Round corners that cut off content seem to be standard practice nowadays; in my collection of phones, the latest one I found with hard right angles on the display is a Huawei Mate 10 Pro, which was released in 2017. The corners are rounder than on the Note 9, and this annoys me because the screen is not high resolution by today's standards, so losing the corners matters.

The default installation is Phosh (the GNOME shell for phones) and it is very well configured. Based on my experience with older phone users, I think I could give a phone with this configuration to a relative in the 70+ age range who has minimal computer knowledge and they would be happy with it. Additionally I could set it up to allow ssh login, and instead of going through the phone-support routine of describing every GUI setting to click on (based on a web page describing menus for whatever version of Android they are running), I could just ssh in and run diff on the .config directory to find out what they changed. Furilabs have done a very good job of setting up the default configuration. While Debian developers deserve a lot of credit for packaging the apps, the Furilabs people have chosen a good set of default apps to install to get it going and appear to have made some noteworthy changes to some of them.

Droidian

The OS is based on Android drivers (using the same techniques as Droidian [2]) and the storage device has the huge number of partitions you expect from Android, as well as a 110G Ext4 filesystem for the main OS. The first issue with the Droidian approach of using an Android kernel and containers for user-space code to deal with drivers is that it doesn't work that well.
There are 3 D state processes (uninterruptible sleep, which usually means a kernel bug if the process remains in that state) after booting and doing nothing special. My tests running Droidian on the Note 9 also had D state processes; in this case they are D state kernel threads (I can't remember whether the Note 9 had regular processes or kernel threads stuck in D state). It is possible for a system to have full functionality in spite of some kernel threads in D state, but generally it's a symptom of things not working as well as you would hope.

The design of Droidian is inherently fragile. You use a kernel and user-space code from Android and then use Debian for the rest. You can't do everything the Android way (with full OS updates etc) and you also can't do everything the Debian way. The TOW-Boot functionality in the PinePhone Pro is really handy for recovery [3]; it allows the internal storage to be accessed as a USB mass storage device. The full Android setup with ADB has some OK options for recovery, but part-Android and part-Debian has fewer options. While it probably is technically possible to do the same things in regard to OS repair and reinstall, the fact that it's different from most other devices means that fixes can't be done in the same way.

Applications

GUI

The system uses Phosh and Phoc, the GNOME system for handheld devices. It's a very different UI from Android; I prefer Android, but it is usable with Phosh.

IM

Chatty works well for Jabber (XMPP) in my tests. It supports Matrix, which I didn't test because I don't want the same program doing Matrix and Jabber, and because Matrix is a heavy protocol which establishes new security keys for each login, so I don't want to keep logging in from new applications. Chatty also does SMS, but I couldn't test that without the SIM caddy. I use Nheko for Matrix, which has worked very well for me on desktops and laptops running Debian.

Email

I am currently using Geary for email.
It works reasonably well but lacks proper management of folders, so I can't subscribe to just the important email on my phone so that bandwidth isn't wasted on less important email (there is a GNOME gitlab issue about this; see the Debian Wiki page about Mobile apps [4]).

Music

Music playing isn't a noteworthy thing for a desktop or laptop, but a good music player is important for phone use. The Lollypop music player generally does everything you expect, with support for all the encoding formats including FLAC; a major limitation of most Android music players seems to be lack of support for some of the common encoding formats. Lollypop has its controls for pause/play and for going forward and backward one track on the lock screen.

Maps

The installed map program is gnome-maps, which works reasonably well. It gets directions via the Graphhopper API [5]. One thing we really need is a FOSS replacement for Graphhopper in GNOME Maps.

Delivery and Unboxing

I received my FLX1s on the 13th of Jan [1]. I had paid for it on the 16th of Oct but hadn't received the email with the confirmation link, so the order had been put on hold. But after I contacted support about that on the 5th of Jan they rapidly got it to me, which was good. They also gave me a free case and screen protector to apologise. I don't usually use screen protectors, but in this case one might be useful, as the edges of the case don't even extend 0.5mm above the screen, so if it falls face down the case won't help much. When I got it there was an open space at the bottom where the caddy for SIMs is supposed to be, so I couldn't immediately test VoLTE functionality. The contact form on their web site wasn't working when I tried to report that, and the email address for support was bouncing.

Bluetooth

As a test of Bluetooth I connected it to my Nissan LEAF, which worked well for playing music, and I connected it to several Bluetooth headphones.
My Thinkpad running Debian/Trixie doesn't connect to the LEAF or to headphones which worked with previous laptops running Debian and Ubuntu. A friend's laptop running Debian/Trixie also wouldn't connect to the LEAF, so I suspect a bug in Trixie; I need to spend more time investigating this.

Wifi

Currently 5GHz wifi doesn't work; this is a software bug that the Furilabs people are working on. 2.4GHz wifi works fine. I haven't tested running a hotspot, due to being unable to get 4G working as they haven't yet shipped me the SIM caddy.

Docking

This phone doesn't support DP Alt-mode or Thunderbolt docking, so it can't drive an external monitor. This is disappointing; Samsung phones and tablets have supported such things since long before USB-C was invented. Samsung DeX is quite handy for Android devices, and that type of feature is much more useful on a device running Debian than on an Android device.

Camera

The camera works reasonably well on the FLX1s. Until recently the camera on the Librem 5 didn't work, and the camera on my PinePhone Pro currently doesn't work. Here are samples of the regular camera and the selfie camera on the FLX1s and the Note 9; I think they show that the camera is pretty decent. The selfie looks better, but the front camera is worse for the relatively close photo of a laptop screen (taking photos of computer screens is an important part of my work); I can probably work around that. I wasn't assessing this camera to find out if it's great, just to find out whether I would have the sorts of problems I had before, and it just worked. The Samsung Galaxy Note series of phones has always had decent specs, including good cameras. Even though the Note 9 is old, comparing well to it is a respectable performance. The lighting was poor for all photos.

FLX1s
Note 9
Power Use

In 93 minutes with the PinePhone Pro, Librem 5, and FLX1s online with open ssh sessions from my workstation, the PinePhone Pro went from 100% battery to 26%, the Librem 5 went from 95% to 69%, and the FLX1s went from 100% to 99%. Their battery discharge rates were reported as 3.0W, 2.6W, and 0.39W respectively. Based on having a 16.7Wh battery, 93 minutes of use should have been close to 4% battery use, but in any case all measurements make it clear that the FLX1s will have a much longer battery life. That includes the measurement of just putting my fingers on the phones and feeling the temperature: the FLX1s felt cool and the others felt hot. The PinePhone Pro and the Librem 5 have an optional Caffeine mode, which I enabled for this test; without it the phone goes into a sleep state and disconnects from Wifi. So those phones would use much less power with Caffeine mode disabled, but they also couldn't respond quickly to notifications etc. I found the option to enable a Caffeine mode switch on the FLX1s, but the power use was reported as the same both with and without it.

Charging

One problem I found with my phone is that in every case it takes 22 seconds to negotiate power. Even when using straight USB charging (no BC or PD) it doesn't draw any current for 22 seconds. When I connect it, it stays at 5V, varying between 0W and 0.1W (current rounded off to zero), for 22 seconds or so and then starts charging. After the 22 second delay the phone makes the tick sound indicating that it's charging and the power meter measures that it's drawing some current. I have included the table from my previous post about phone charging speed [6] with an extra row for the FLX1s. For charging from my PC USB ports the results were the worst ever: the port that does BC did not work at all, it kept looping trying to negotiate; after a 22 second negotiation delay the port would turn off.
The non-BC port gave only 2.4W, which matches the 2.5W given by the spec for a High-power device, which is what that port is designed to supply. In a discussion on the Purism forum about the Librem 5 charging speed, one of their engineers told me that the reason their phone would draw 2A from that port was that the cable identified itself as a USB-C port, not a High-power device port. But for some reason, out of the 7 phones I tested, the FLX1s and the OnePlus 6 are the only ones to limit themselves to what the port is apparently supposed to supply. Also the OnePlus 6 charges slowly on every power supply, so I don't know if it is obeying the spec or just sucking. On a cheap AliExpress charger the FLX1s gets 5.9V, and on a USB battery it gets 5.8V. Out of all 42 combinations of device and charger I tested, these were the only ones to involve more than 5.1V but less than 9V. I welcome comments suggesting an explanation. The case that I received has a hole for the USB-C connector that isn't wide enough for the plastic surrounds on most of my USB-C cables (including the Dell dock's). Also, making a connection requires a fairly deep insertion (deeper than on the OnePlus 6 or the Note 9), so without adjustment I have to take the case off to charge it. It's no big deal to adjust the hole (I have done it with other cases) but it's an annoyance.
Phone         | Top z640        | Bottom z640     | Monitor         | Ali Charger     | Dell Dock       | Battery         | Best            | Worst
FLX1s         | FAIL            | 5.0V 0.49A 2.4W | 4.8V 1.9A 9.0W  | 5.9V 1.8A 11W   | 4.8V 2.1A 10W   | 5.8V 2.1A 12W   | 5.8V 2.1A 12W   | 5.0V 0.49A 2.4W
Note9         | 4.8V 1.0A 5.2W  | 4.8V 1.6A 7.5W  | 4.9V 2.0A 9.5W  | 5.1V 1.9A 9.7W  | 4.8V 2.1A 10W   | 5.1V 2.1A 10W   | 5.1V 2.1A 10W   | 4.8V 1.0A 5.2W
Pixel 7 Pro   | 4.9V 0.80A 4.2W | 4.8V 1.2A 5.9W  | 9.1V 1.3A 12W   | 9.1V 1.2A 11W   | 4.9V 1.8A 8.7W  | 9.0V 1.3A 12W   | 9.1V 1.3A 12W   | 4.9V 0.80A 4.2W
Pixel 8       | 4.7V 1.2A 5.4W  | 4.7V 1.5A 7.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W   | 9.1V 2.7A 24W   | 4.7V 1.2A 5.4W
PinePhone Pro | 4.7V 1.2A 6.0W  | 4.8V 1.3A 6.8W  | 4.9V 1.4A 6.6W  | 5.0V 1.2A 5.8W  | 4.9V 1.4A 5.9W  | 5.1V 1.2A 6.3W  | 4.8V 1.3A 6.8W  | 5.0V 1.2A 5.8W
Librem 5      | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 4.8V 2.4A 11.2W | 12V 0.48A 5.8W  | 5.0V 0.56A 2.7W | 5.1V 2.0A 10W   | 4.8V 2.4A 11.2W | 5.0V 0.56A 2.7W
OnePlus 6     | 5.0V 0.51A 2.5W | 5.0V 0.50A 2.5W | 5.0V 0.81A 4.0W | 5.0V 0.75A 3.7W | 5.0V 0.77A 3.7W | 5.0V 0.77A 3.9W | 5.0V 0.81A 4.0W | 5.0V 0.50A 2.5W
Best          | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W   |                 |
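As a sanity check on the table, each wattage is just voltage times current (P = V*A). A small illustrative Python sketch with the FLX1s row transcribed from the table; the variable and function names are my own, and the recomputed figures can differ from the table's meter readings in the last digit since the meter does its own rounding:

```python
# Recompute charging power (watts) from the voltage/current readings in
# the FLX1s row of the table above. "FAIL" (no successful negotiation)
# is represented as None and skipped.

flx1s_readings = {
    "Top z640": None,            # FAIL: the port kept looping on negotiation
    "Bottom z640": (5.0, 0.49),
    "Monitor": (4.8, 1.9),
    "Ali Charger": (5.9, 1.8),
    "Dell Dock": (4.8, 2.1),
    "Battery": (5.8, 2.1),
}

def power_watts(readings):
    """Return {source: volts * amps} for every successful measurement."""
    out = {}
    for src, va in readings.items():
        if va is None:           # failed negotiation, nothing to compute
            continue
        volts, amps = va
        out[src] = round(volts * amps, 1)
    return out

def best_and_worst(watts):
    """Best = highest power draw, worst = lowest, as in the table columns."""
    return max(watts, key=watts.get), min(watts, key=watts.get)
```

This reproduces the shape of the table's Best/Worst columns: the USB battery delivers the most power to the FLX1s and the non-BC PC port the least.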
Conclusion

The Furilabs support people are friendly and enthusiastic, but my customer experience wasn't ideal. It was good that they responded quickly to my missing order status and the missing SIM caddy (which I still haven't received but believe is in the mail), but it would be better if such things just didn't happen. The phone is quite user friendly and could be used by a novice.

I paid $US577 for the FLX1s, which is $AU863 at today's exchange rates. For comparison, I could get a refurbished Pixel 9 Pro Fold for $891 from Kogan (the major Australian mail-order company for technology) or a refurbished Pixel 9 Pro XL for $842. The Pixel 9 series has security support until 2031, which is probably longer than you can expect a phone to last without being broken. So a refurbished phone with a much higher resolution screen that's only one generation behind the latest high end phones would cost less. For a brand new phone, a Pixel 8 Pro, which has security updates until 2030, costs $874, and a Pixel 9A, which has security updates until 2032, costs $861. Doing what the Furilabs people have done is not a small project; it's a significant amount of work and the prices of their products need to cover that. I'm not saying that the prices are bad, just that economies of scale and the large quantity of older stock make the older Google products quite good value for money. The new Pixel phones of the latest models are unreasonably expensive; the Pixel 10 is selling new from Google for $AU1,149, which I consider a ridiculous price that I would not pay given the market for used phones etc. If my only choice were $1,149 or a feature phone, I'd pay the $1,149. But the FLX1s for $863 is a much better option for me. If all I had to choose from was a new Pixel 10 or a FLX1s for my parents, I'd get them the FLX1s. For a FOSS developer, a FLX1s could be a mobile test and development system which could be lent to a relative when their main phone breaks and the replacement is on order.
It seems to be fit for use as a commodity phone. Note that I give this review on the assumption that SMS and VoLTE will just work; I haven't tested them yet. The UI on the FLX1s is functional and easy enough for a new user while allowing an advanced user to do the things they desire. I prefer the Android style, and the Plasma Mobile style is closer to Android than Phosh is, but changing it is something I can do later. Generally I think that differences between UIs matter more on a desktop environment, which can be used for more complex tasks, than on a phone, where the size of the screen limits what can be done. I am comparing the FLX1s to Android phones on the basis of what technology is available. But most people who would consider buying this phone will compare it to the PinePhone Pro and the Librem 5, as they have similar uses. The FLX1s beats both those phones handily in terms of battery life and of having everything just work. But it has the most non-free software of the three, and the people who want the $2000 Librem 5 that's entirely made in the US won't want the FLX1s. This isn't the destination for Debian-based phones, but it's a good step on the way, and I don't think I'll regret this purchase.
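As a quick arithmetic check of the Power Use numbers earlier in this review (a worked sketch; the variable names are mine):

```python
# The FLX1s reported a 0.39 W discharge rate over the 93 minute idle
# test, against its 16.7 Wh battery.

draw_w = 0.39        # reported discharge rate in watts
minutes = 93         # length of the test
battery_wh = 16.7    # battery capacity in watt-hours

energy_used_wh = draw_w * minutes / 60            # energy used in the test
percent_used = energy_used_wh / battery_wh * 100  # fraction of the battery

# About 3.6%, i.e. the "close to 4%" figure, even though the on-screen
# battery indicator only dropped from 100% to 99%.
```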

17 January 2026

Ravi Dwivedi: My experiences in Brunei

This post covers my friend Badri's and my experiences in Brunei. Brunei, officially Brunei Darussalam, is a country in Southeast Asia, located on the island of Borneo. It is one of the few remaining absolute monarchies on Earth. On the morning of the 10th of December 2024, Badri and I reached Brunei International Airport on a flight from Kuala Lumpur. Upon arrival at the airport, we had to go through immigration, of course. However, I had forgotten to fill in my arrival card, which I did while in the immigration queue. The immigration officer asked me how much cash I was carrying in each currency. After completing the formalities, the immigration officer stamped my passport and let me in. Take a look at Brunei's entry stamp in my passport.
Brunei entry stamp on my passport. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.
We exchanged Singapore dollars for some Brunei dollars at the airport. The Brunei dollar is pegged 1:1 to the Singapore dollar, meaning 1 Singapore dollar equals 1 Brunei dollar, and the exchange rate we received at the airport was exactly that. Our (pre-booked) accommodation was located near Gadong mall, so we went to the information center at the airport to ask how to get there by public transport. However, the person at the information center told us that they didn't know the public transport routes and suggested we take a taxi instead. We came out of the airport and came across an Indian man with a minibus. He offered to drop us at our accommodation for 10 Brunei dollars (630 Indian rupees). As we were tired after a sleepless night, we didn't negotiate and took the offer. It felt a bit weird using the minibus as our private taxi. In around half an hour, we reached our accommodation. The place was more like a guest house than a hotel. In addition to the rooms, it had a common space consisting of a hall, a kitchen and a balcony.
Our room in Brunei. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.
Upon reaching the place, we paid for our room in cash: 66.70 Singapore dollars (4200 Indian rupees) for two nights. We had arrived before the check-in time, so we had to wait for our room to be ready before we could enter. The room had a double bed and also a place to hang clothes. We slept for a few hours before going out at night. We went into Gadong mall and had coffee at a café named The Coffee Bean & Tea Leaf; the regular caffè latte I had there was 5.20 Brunei dollars. On another note, the snacks we had brought from Kuala Lumpur covered us for dinner. The next day, the 11th of December 2024, we went to a nearby restaurant named Nadj for lunch. The owner was from Kerala. Our lunch cost a total of 12.80 Brunei dollars (825 rupees). The naan was unusually thick, and I didn't like the taste. After lunch, we planned to visit Brunei's famous Omar Ali Saifuddien Mosque. However, a minibus driver outside Gadong Mall told us that the mosque would be closed in half an hour and suggested we visit the nearby Jame Asr Hassanil Bolkiah Mosque instead.
Jame' Asr Hassanil Bolkiah Mosque. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.
He dropped us there for 1 Brunei dollar per person. He hailed from Uttar Pradesh and told us about bus routes in Hindi. Bus routes in Brunei were confusing, so the information he gave us was valuable. It was evening, and we had the impression that the mosque and its premises were closed. However, soon enough, we stumbled across an open gate into the mosque complex. We walked inside for some time, took pictures and exited. Walking in Bandar Seri Begawan wasn't pleasant, though; the pedestrian infrastructure wasn't good. Then we walked back to our place and bought some souvenirs. For dinner and breakfast, we bought bread, fruit and eggs from local shops, as we had a kitchen to cook for ourselves. The guest house also had a washing machine (free of charge) which we wanted to use. However, they didn't have detergent, so we went out to get some. It was 8 o'clock, and most of the shops were already closed. Others only had detergent in large sizes, the kind you would buy if you lived there. We ended up getting a small packet at a supermarket. The next day, the 12th of December, we had a flight to Ho Chi Minh City in Vietnam with a long layover in Kuala Lumpur. We had breakfast in the morning and took a bus to Omar Ali Saifuddien Mosque. The mosque was in a prayer session, so it was closed to non-Muslims. Therefore, we just took pictures from the outside and took a bus to the airport.
Omar Ali Saifuddien Mosque. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.
When the bus got near the airport, it went straight on rather than taking the left turn for the airport. Initially, I thought the bus would just take a turn and come back. However, it kept going further from the airport. Confused by this, I asked the other passengers whether the bus was going to the airport. The driver stopped the bus at the Muara Town terminal, 20 km from the airport. At this point, everyone alighted except us, and the driver went to a nearby restaurant to have lunch. I felt very uncomfortable stranded in a town 20 km from the airport. We had a lot of time, but I was still worried about missing our flight, as I didn't want to get stuck in Brunei. After waiting for 15 minutes, I went inside the restaurant and reminded the driver that we had a flight in a couple of hours and needed to get to the airport. He said he would leave soon. When he was done with his lunch, he drove us to the airport. It was incredibly frustrating. On a positive note, we saw countryside of Brunei that we would not have seen otherwise. The bus ride cost us 1 Brunei dollar each.
A shot of Brunei's countryside. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.
That's it for this one. Meet you in the next one. Stay tuned for the Vietnam post!

16 January 2026

Jonathan Dowland: Ye Gods

Via (I think) @mcc on the Fediverse, I learned of GetMusic: a sort-of "clearing house" for free Bandcamp codes. I think the way it works is that some artists release a limited set of download codes for their albums in order to promote them, and GetMusic helps them keep track of that and helps listeners discover them. GetMusic mail me occasionally, and once they highlighted an album, The Arcane & Paranormal Earth, which they described as "Post-Industrial in the vein of Coil and Nurse With Wound with shades of Aphex Twin, Autechre and assorted film music." Well, that description hooked me immediately, but I missed out on the code. However, I sampled the album on Bandcamp directly a few times, as well as a few of his others (Ye Gods is a side-project of Antoni Maiovvi, which is itself a pen-name), and liked them very much. I picked up the full collection of Ye Gods albums in one go for 30% off. Here's a stand-out track: On Earth by Ye Gods. So I guess this service works! Although I didn't actually get a free code in this instance, it promoted the artist, introduced me to something I really liked and drove a sale.

13 January 2026

Freexian Collaborators: Debian Contributions: dh-python development, Python 3.14 and Ruby 3.4 transitions, Surviving scraper traffic in Debian CI and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-12. Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

dh-python development, by Stefano Rivera

In Debian we build our Python packages with the help of a debhelper-compatible tool, dh-python. Before starting the 3.14 transition (which would rebuild many packages) we landed some updates to dh-python to fix bugs and add features. This started a month of attention on dh-python, iterating through several bug fixes and a couple of unfortunate regressions. dh-python is used by almost all packages containing Python (over 5000). Most of these are very simple, but some are complex and use dh-python in unexpected ways. It's hard to keep almost any change (including obvious bug fixes) from causing some unexpected knock-on behaviour. There is a fair amount of complexity in dh-python, and some rather clever code, which can make it tricky to work on. All of this means that good QA is important. Stefano spent some time adding type annotations and specialized types to make it easier to see what the code is doing and to catch mistakes. This has already made work on dh-python easier. Now that Debusine has built-in repositories and debdiff support, Stefano could quickly test the effects of changes on many other packages. After each big change, he could upload dh-python to a repository, rebuild e.g. 50 Python packages with it, and see what differences appeared in the output. Reviewing the diffs is still a manual process, but that can be improved. Stefano also did a small test of what it would take to replace direct setuptools setup.py calls with PEP 517 (pyproject-style) builds. There is more work to do here.
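The rebuild-and-compare QA loop described above can be sketched in miniature. This is an illustration of the review step only, not Debusine's actual API; the function name and data shapes are invented, and in practice the comparison is a debdiff of the built packages rather than a set difference of file lists:

```python
# Illustrative sketch: given the file lists produced by building each
# package with the old and the new dh-python, report the per-package
# differences so a human can review what a dh-python change affected.

def diff_builds(old: dict[str, set[str]], new: dict[str, set[str]]) -> dict:
    """Map package name -> (files only in old build, files only in new build),
    covering only packages whose output actually changed."""
    changed = {}
    for pkg in sorted(old.keys() | new.keys()):
        before = old.get(pkg, set())
        after = new.get(pkg, set())
        if before != after:
            changed[pkg] = (before - after, after - before)
    return changed
```

Packages with identical output drop out immediately, which keeps the manual review focused on the handful of real differences.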

Python 3.14 transition, by Stefano Rivera (et al.)

In December the transition to add Python 3.14 as a supported version started in Debian unstable. To do this, we update the list of supported versions in python3-defaults, and then start rebuilding modules with C extensions from the leaves inwards. This had already been tested in a PPA and in Ubuntu, so many of the biggest blocking compatibility issues with 3.14 had already been found and fixed. But there are always new issues to discover. Thanks to a number of people in the Debian Python team, we got through the first part of the transition fairly quickly. There are still a number of open bugs that need attention and many failed tests blocking migration to testing. Python 3.14.1 was released just after we started the transition, and very soon after, a follow-up 3.14.2 release came out to address a regression. We then ran into another regression in Python 3.14.2.

Ruby 3.4 transition, by Lucas Kanashiro (et al.)

The Debian Ruby team has just started preparing to move the default Ruby interpreter version to 3.4. The ruby3.4 source package is already available in experimental, and ruby-defaults has added support for Ruby 3.4. Lucas rebuilt all reverse dependencies against this new version of the interpreter and published the results. Lucas also reached out to some stakeholders to coordinate the work. The next steps are: 1) announce the results to the whole team and ask for help fixing packages that fail to build against the new interpreter; 2) file bugs against packages FTBFSing against Ruby 3.4 which are not fixed yet; 3) once the number of build failures against Ruby 3.4 is low, ask the Debian Release team to start the transition in unstable.

Surviving scraper traffic in Debian CI, by Antonio Terceiro

Like most of the open web, Debian Continuous Integration has been struggling for a while to keep up with the insatiable hunger of data scrapers everywhere. Solving this involved a lot of trial and error; the final result seems to be stable, and consists of two parts. First, all Debian CI data pages, except direct links to test log files (such as those provided by the Release Team's testing migration excuses), now require users to be authenticated before being accessed. This means that the Debian CI data is no longer publicly browseable, which is a bit sad. However, this is where we are now. Additionally, there is now a fail2ban-powered, firewall-level access limitation for clients that display an abusive access pattern. This went through several iterations, some of which unfortunately blocked legitimate Debian contributors, but the current state seems to strike a good balance between blocking scrapers and not blocking real users. Please get in touch with the team on the #debci OFTC channel if you are affected by this.
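Debian CI's actual setup is fail2ban matching web server logs plus firewall rules, but the underlying idea of flagging an abusive access pattern can be illustrated with a simple sliding-window request counter (the class and parameter names here are invented for illustration):

```python
from collections import deque

# Illustrative sliding-window rate check: flag a client as abusive once
# it makes more than `limit` requests within `window` seconds. This is
# the kind of pattern fail2ban's findtime/maxretry settings express.

class RateTracker:
    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.hits: dict[str, deque] = {}

    def request(self, client: str, now: float) -> bool:
        """Record one request; return True if the client should be banned."""
        q = self.hits.setdefault(client, deque())
        q.append(now)
        while q and now - q[0] > self.window:
            q.popleft()  # drop requests that fell out of the window
        return len(q) > self.limit
```

A well-behaved client's old requests age out of the window, so it never trips the limit; a scraper hammering every page trips it quickly.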

A hybrid dependency solver for crossqa.debian.net, by Helmut Grohne

crossqa.debian.net continuously cross-builds packages from the Debian archive. Like Debian's native build infrastructure, it uses dose-builddebcheck to determine whether a package's dependencies can be satisfied before attempting a build. About one third of Debian's packages fail this check, so understanding the reasons is key to improving cross building. Unfortunately, dose-builddebcheck stops after reporting the first problem and does not display additional ones. To address this, a greedy solver implemented in Python now examines each build-dependency individually and can report multiple causes. dose-builddebcheck is still used as a fall-back when the greedy solver does not identify any problems. The report for bazel-bootstrap is a lengthy example.
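The contrast between the two approaches can be sketched like this. This is not the actual crossqa solver (real dependency resolution involves versions, alternatives, and architecture qualifiers); it only illustrates the difference between stopping at the first problem and greedily checking every build-dependency on its own:

```python
# Illustrative contrast: a stop-at-first-problem check versus a greedy
# per-dependency check that collects every unsatisfied build-dependency.

def first_failure(build_deps: list[str], available: set[str]):
    """What a checker that stops at the first problem reports."""
    for dep in build_deps:
        if dep not in available:
            return dep
    return None

def greedy_check(build_deps: list[str], available: set[str]) -> list[str]:
    """Examine each build-dependency individually; report all failures."""
    return [dep for dep in build_deps if dep not in available]
```

With several missing dependencies, the first approach surfaces one cause per run, while the greedy approach reports them all at once, which is what makes the reports useful for understanding why a third of packages fail.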

rebootstrap, by Helmut Grohne

Due to the changes suggested by Loongson earlier, rebootstrap now adds debhelper to its final installability test and builds a few more packages required for installing it. It also now uses a variant of build-essential that has been marked Multi-Arch: same (see the foundational work from last year). This in turn made using a non-default GCC version more difficult and required more work to support gcc-16 from experimental. Ongoing archive changes temporarily regressed the building of fribidi and dash. libselinux and groff have received patches for architecture-specific changes, and libverto has been NMUed to remove the glib2.0 dependency.

Miscellaneous contributions
  • Stefano did some administrative work on debian.social and debian.net instances and Debian reimbursements.
  • Stefano did routine updates of python-authlib, python-mitogen, xdot.
  • Stefano spent several hours discussing Debian's Python package layout with the PyPA upstream community. Debian has ended up with a very different on-disk installed Python layout than other distributions, and this continues to cause frustration in many communities that need special workarounds to handle it. It also ended up impacting cross builds, as Helmut discovered.
  • Raphaël set up Debusine workflows for the various backports repositories on debusine.debian.net.
  • Zulip is not yet in Debian (RFP in #800052), but Raphaël helped with the French translation as he is experimenting with that discussion platform.
  • Antonio performed several routine Salsa maintenance tasks, including fixing salsa-nm-sync, the service that synchronizes project members' data from LDAP to Salsa, which had been broken since salsa.debian.org was upgraded to trixie.
  • Antonio deployed a new amd64 worker host for Debian CI.
  • Antonio did several DebConf technical and administrative bits, including adding support for custom check-in/check-out dates in the MiniDebConf registration module and publishing a call for bids for DebConf27.
  • Carles reviewed and submitted 14 Catalan translations using po-debconf-manager.
  • Carles improved po-debconf-manager: added a delete-package command, made show-information use properly formatted output (YAML), and made it attach the translation to bug reports whose merge request has been open for too long.
  • Carles investigated why some packages appeared in po-debconf-manager but not in the Debian l10n list. It turned out that some packages had debian/po/templates.pot (making them appear in po-debconf-manager) but not the expected POTFILES.in file. He created a script to find packages in this or a similar situation and reported bugs.
  • Carles tested and documented how to set up voices (mbrola and festival) for speech synthesis with Orca. He commented on a few issues and possible improvements on the debian-accessibility list.
  • Helmut sent patches for 48 cross build failures and initiated discussions on how to deal with two non-trivial matters. Besides the Python layout mentioned above, CMake introduced a cmake_pkg_config builtin which is not aware of the host architecture. He also forwarded a Meson patch upstream.
  • Thorsten uploaded a new upstream version of cups to fix a nasty bug that was introduced by the latest security update.
  • Along with many other Python 3.14 fixes, Colin fixed a tricky segfault in python-confluent-kafka after a helpful debugging hint from upstream.
  • Colin upstreamed an improved version of an OpenSSH patch we've been carrying since 2008 to fix misleading verbose output from scp.
  • Colin used Debusine to coordinate transitions for astroid and pygments, and wrote up the astroid case on his blog.
  • Emilio helped with various transitions, and provided a build fix for opencv for the ffmpeg 8 transition.
  • Emilio tested the GNOME updates for trixie proposed updates (gnome-shell, mutter, glib2.0).
  • Santiago helped review the status of testing different build profiles in parallel in the same pipeline, using the test-build-profiles job. This makes it possible, for example, to simultaneously test build profiles such as nocheck and nodoc for the same git tree. Finally, Santiago provided MR !685 to fix the documentation.
  • Anupa prepared a bits post for the Outreachy interns announcement along with Tássia Camões Araújo and worked on publicity team tasks.

10 January 2026

Matthias Geiger: Building a propagation box for oyster mushrooms

Inspiration In November I watched a short documentary about a guy who grew pearl oyster mushrooms in his backyard. They used pallet boxes (half of a europallet, 60x80x20 cm) to hold the substrate the mycelium feeds on. Since I really enjoy (foraged) mushrooms and had the raw materials lying around, I opted to build one myself. This also had the benefit of using what was available instead of just consuming, i.e. buying a pallet box. Preparing the raw materials I had 4.5 m x ~25 cm wooden spruce planks at home. My plan was to cut those into 2 m segments, trim the edges down to 20 cm, and then cut them into handy pieces following the dimensions of half a pallet box. This is what they looked like after cutting them with an electric chainsaw to around 2 m: raw_planks You can see that the edges are still not straight, because that's how they came out of the sawmill. Once that was done I visited a family member who had a crosscut saw, a table saw and a band saw; all that I would need. First we trimmed the edges of the 2 m planks with the table saw so they were somewhat straight; then they were flipped, the other edge was cut straight, and their width was cut down to 20 cm. After moving them over to the crosscut saw, dividing them into two 60 cm pieces and one 80 cm piece was fairly easy. When cutting the 2 m planks from the 4 m ones I accounted for offcuts, so I had little waste overall and could use the whole length to get my desired boards. This is what the cut pieces looked like: cut_planks Assembly I packed up my planks, now nicely cut to size, and went to a hardware shop to buy hinges and screws. Assembly was fairly easy and fast: screw a hinge to a corner, hold the other plank onto the hinge so that the corners of both boards touch, and affix the hinge. plank_with_hinge corner_with_hinge When this was done, the frame looked like this: finished_frame As a last step I drilled 10 mm holes more or less randomly in the middle of the box.
This is where the mushrooms will grow out of later and can be harvested. box_with_holes Closing thoughts This was a fun project I finished in a day. The hinges have the benefit that they allow the box to be folded up lengthwise: folded This allows for convenient storage. Since it's too cold outside right now, cultivation will have to wait until spring. All that's still needed is mycelium, which one can simply buy, and some material for the fungus to digest. They can also be fed coffee grounds, and the fruiting bodies can be harvested roughly every two weeks.

9 January 2026

Simon Josefsson: Debian Taco: Towards a GitSecDevOps Debian

One of my holiday projects was to understand and gain more trust in how Debian binaries are built, and as the holidays are coming to an end, I'd like to introduce a new research project called Debian Taco. I apparently need more holidays, because there is still more work to be done here, so at the end I'll summarize some pending work. Debian Taco, or TacOS, is a GitSecDevOps rebuild of Debian GNU/Linux. The Debian Taco project publishes rebuilt binary packages, package repository metadata (InRelease, Packages, etc), container images, cloud images and live images. All packages are built from pristine source packages in the Debian archive. Debian Taco does not modify any Debian source code nor add or remove any packages found in Debian. No servers are involved! Everything is built in GitLab pipelines and the results are published through modern GitDevOps mechanisms like GitLab Pages and S3 object storage. You can fork the individual projects below on GitLab.com and you will have your own Debian-derived OS available for tweaking. (Of course, at some level, servers are always involved, so this claim is a bit of hyperbole.)

Goals The goal of TacOS is to be bit-by-bit identical with official Debian GNU/Linux, and until that has been achieved, to publish diffoscope output with the differences. The idea is to further categorize all artifact differences into one of the following categories: 1) An obvious bug in Debian. For example, if a package does not build reproducibly. 2) An obvious bug in TacOS. For example, if our build environment does not manage to build a package. 3) Something else. This would be input for further research and consideration. This category also includes things where it isn't obvious whether it is a bug in Debian or in TacOS. Known examples: 3A) Packages in TacOS are rebuilt from the latest available source code, not the (potentially) older packages that were used to build the Debian packages. This could lead to differences in the packages. These differences may be useful to analyze to identify supply-chain attacks. See some discussion about idempotent rebuilds. Our packages are all built from source code, unless we have not yet managed to build something. In the latter situation, Debian Taco falls back to using the official Debian artifact. This allows an incremental publication of Debian Taco that is still 100% complete without requiring that everything be rebuilt instantly. The goal is that everything should be rebuilt, and until that has been completed, to publish a list of artifacts that we use verbatim from Debian.
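The fallback rule described above (use the rebuilt artifact when one exists, otherwise the official Debian one, and record the fallbacks for publication) can be sketched like this. The function, data layout, and package filenames are illustrative assumptions, not the project's actual code:

```python
# Illustrative sketch: prefer the rebuilt artifact when we have one,
# fall back to the official Debian artifact otherwise, and keep a list
# of everything used verbatim from Debian so it can be published.

def select_artifacts(wanted, rebuilt, official):
    """Map each wanted package to (source, artifact); track fallbacks."""
    chosen = {}
    used_verbatim = []
    for pkg in wanted:
        if pkg in rebuilt:
            chosen[pkg] = ("tacos", rebuilt[pkg])
        else:
            chosen[pkg] = ("debian", official[pkg])
            used_verbatim.append(pkg)
    return chosen, used_verbatim

# Hypothetical filenames: only base-files has been rebuilt so far.
chosen, verbatim = select_artifacts(
    ["base-files", "hello"],
    {"base-files": "base-files_13.8_amd64.deb"},
    {"base-files": "base-files_13.8_amd64.deb",
     "hello": "hello_2.10-5_amd64.deb"},
)
print(verbatim)  # ['hello']
```

The point of the second return value is exactly the "list of artifacts that we use verbatim from Debian" mentioned above: the archive stays 100% complete while the rebuilt fraction grows.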

Debian Taco Archive The Debian Taco Archive project generates and publishes the package archive (dists/tacos-trixie/InRelease, dists/tacos-trixie/main/binary-amd64/Packages.gz, pool/*, etc), similar to what is published at https://deb.debian.org/debian/. The output of the Debian Taco Archive is available from https://debdistutils.gitlab.io/tacos/archive/.
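As a rough illustration of what this metadata looks like, a Packages index is a series of RFC 822-style stanzas, one per binary package. The helper below and the fields chosen are assumptions for illustration; the real archive scripts emit many more fields and also produce a signed InRelease covering the whole tree:

```python
# Illustrative sketch: render a minimal Packages index from per-binary
# metadata, in the stanza format apt expects. Real Packages files carry
# additional fields (Size, SHA256, Depends, ...).

def render_packages(entries):
    stanzas = []
    for e in entries:
        stanzas.append(
            "Package: {Package}\n"
            "Version: {Version}\n"
            "Architecture: {Architecture}\n"
            "Filename: {Filename}\n".format(**e)
        )
    return "\n".join(stanzas)

index = render_packages([
    {"Package": "hostname", "Version": "3.25", "Architecture": "amd64",
     "Filename": "pool/main/h/hostname/hostname_3.25_amd64.deb"},
])
print(index.splitlines()[0])  # Package: hostname
```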

Debian Taco Container Images The Debian Taco Container Images project provides container images of Debian Taco for trixie, forky and sid on the amd64, arm64, ppc64el and riscv64 architectures. These images allow quick and simple interactive use of Debian Taco, and also make it easy to deploy with container orchestration frameworks.

Debian Taco Cloud Images The Debian Taco Cloud Images project provides cloud images of Debian Taco for trixie, forky and sid on the amd64, arm64, ppc64el and riscv64 architectures. Launch and install Debian Taco for your cloud environment!

Debian Taco Live Images The Debian Taco Live Images project provides live images of Debian Taco for trixie, forky and sid on the amd64 and arm64 architectures. These images allow running Debian Taco on physical hardware (or virtual machines), and even installing it for permanent use.

Debian Taco Build Images and Packages Packages are built using debdistbuild, which was introduced in an earlier blog post, Build Debian in a GitLab Pipeline. The first step is to prepare build images, which is done by the Debian Taco Build Images project. They are similar to the Debian Taco containers but have build-essential and debdistbuild installed. Debdistbuild is launched in a per-architecture, per-suite CI/CD project. Currently only trixie-amd64 is available. That project has built some essential early packages like base-files, debian-archive-keyring and hostname. They are stored in Git LFS backed by S3 object storage. These packages were all built reproducibly, so Debian Taco is still 100% bit-by-bit identical to Debian, except for the renaming. I won't launch a more massive wide-scale package rebuild until some outstanding issues have been resolved. I earlier rebuilt around 7000 packages from trixie on amd64, so I know that the method scales easily.

Remaining work Where are the diffoscope outputs and the list of package differences? For another holiday! Clearly this is an important remaining work item. Another important outstanding issue is how to orchestrate launching the build of all packages. Clearly a list of packages is needed, and some trigger mechanism to detect when new packages are added to Debian. One goal was to build packages from the tag2upload browse.dgit.debian.org archive before checking the Debian archive. This ought to be really simple to implement, but other matters came first.
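One simple form of such a trigger is to diff the package name list between two snapshots of the archive index. The snapshot format below is an assumption for illustration; a real implementation would parse the archive's Sources/Packages indices rather than plain name lists:

```python
# Sketch of a trigger: compare two snapshots of the archive's package
# names and report what appeared since the last run, so builds can be
# launched only for the newly added packages.

def new_packages(previous, current):
    """Return newly added package names, sorted for stable output."""
    return sorted(set(current) - set(previous))

print(new_packages(["base-files", "hostname"],
                   ["base-files", "hostname", "newpkg"]))  # ['newpkg']
```

The same comparison run on (package, version) pairs would also catch updated packages that need rebuilding, not just new ones.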

GitLab or Codeberg? Everything is written using basic POSIX /bin/sh shell scripts. Debian Taco uses the GitLab CI/CD pipeline mechanism together with Hetzner S3 object storage to serve packages. The scripts rely only weakly on GitLab-specific principles, and were designed with the intention of supporting other platforms. I believe reliance on a particular CI/CD platform is a limitation, so I'd like to explore shipping Debian Taco through a Forgejo-based architecture, possibly via Codeberg, as soon as I manage to deploy reliable Forgejo runners. The important aspects that are required are: 1) Pipelines that can build and publish web sites similar to GitLab Pages. Codeberg has a pipeline mechanism, and I've successfully used Codeberg Pages to publish the OATH Toolkit homepage. Gluing this together seems feasible. 2) Container Registry. It seems Forgejo supports a Container Registry, but I've not worked with it at Codeberg to understand if there are any limitations. 3) Package Registry. The Debian Taco live images are uploaded into a package registry, because they are too big to be served through GitLab Pages. This may be converted to using a Pages mechanism, or possibly Release Artifacts if multi-GB artifacts are supported on other platforms. I hope to continue this work and explain more details in a series of posts; stay tuned!

7 January 2026

Ravi Dwivedi: Why my Riseup account got suspended (and reinstated)

Disclaimer: The goal of this post is not to attack Riseup. In fact, I love Riseup and support their work.

Story Riseup is an email provider, known for its privacy-friendly email service. The service requires an invite from an existing Riseup email user to get an account. I created my account on Riseup in the year 2020, of course with the help of a friend who invited me. Since then, I have used the email address only occasionally, although it is logged into my Thunderbird all the time. Fast-forward to the 4th of January 2026, when Thunderbird suddenly told me that it could not log in to my Riseup account. When I tried logging in using their webmail, it said "invalid password". Finally, I tried logging in to my account on their website, and was told that
Log in for that account is temporary suspended while we perform maintenance. Please try again later.
At this point, I suspected that the Riseup service itself was facing some issues. I asked a friend who had an account there if the service was up, and they said that it was. The issue seemed to be specific to my account. I contacted Riseup support and informed them of the issue. They responded the next day (the 5th of January) saying:
The my-username-redacted account was found inviting another account that violated our terms of use. As a security measure we suspend all related accounts to ToS violations.
(Before we continue, I would like to take a moment and reflect upon how nice it was to receive a response from a human rather than an AI bot, the latter being a trend that is unfortunately becoming the norm nowadays.) I didn't know who violated their ToS, so I asked which account violated their terms. Riseup told me:
username-redacted@riseup.net attempted to create aliases that could be abused to impersonate riseup itself.
I asked a friend whom I had invited a month before the incident, and they confirmed that the username belonged to them. When I asked what they did, they told me they had tried creating aliases such as floatup and risedown. I also asked Riseup which aliases violated their terms, but their support didn't answer this. I explained to Riseup support that the impersonation wasn't intentional, that the user hadn't sent any emails, and that I had been a user for more than 5 years and had donated to them in the past. Furthermore, I suggested that they should block the creation of such aliases if they think the aliases violate their terms, just as email providers typically don't allow users to create admin@ or abuse@ email addresses. After I explained myself, Riseup reinstated my account. Update on the 10th of January 2026: My friend told me that the alias that violated Riseup's terms was cloudadmin, and his account was reinstated on the 7th of January.

Issues with suspension I have the following issues with the way the suspension took place:
  • There was no way of challenging the suspension before the action was taken
  • The action taken against me was disproportionate. Remember that I didn't violate any terms; the violation was allegedly committed by a user I invited. Riseup could have just blocked the aliases while continuing the discussion in parallel.
  • I was locked out of my account with no way of saving my emails and no chance to migrate. What if that email address was being used for important things such as bank access or train tickets? I know people who use Riseup email for such purposes.
  • The violation wasn't even proven. I wasn't told which alias violated the terms or how it could be used to impersonate Riseup itself.
When I brought up the issue of being locked out of my account without a way of downloading my emails or migrating my account, Riseup support responded by saying:
You must understand that we react [by] protecting our service, and therefore we cannot provide notice messages on the affected accounts. We need to act preventing any potential damage to the service that might affect the rest of the users, and that measure is not excessive (think on how abusers/spammers/scammers/etc could trick us and attempt any action before their account is suspended).
This didn't address my concerns, so let's move on to the next section.

Room for improvement Here's how I think Riseup's ban policy could be changed while still protecting against spammers and other bad actors:
  • Even if Riseup can't provide notice to blocked accounts, perhaps they can scale back the limitations on the inviting account, which wasn't even involved: for example, by temporarily disabling invites from that account until the issue is resolved.
  • In this case, the person didn't impersonate Riseup, so Riseup could have just blocked the aliases and let the user know, rather than banning the account outright.
  • Riseup should give blocked users access to their existing emails so they have a chance to migrate them to a different provider. (Riseup could disable SMTP and maybe incoming emails but keep IMAP access open.) I know people who use Riseup for important things such as bank access or train tickets, and a sudden block like this is not a good idea.
  • Riseup should factor in the account's profile when making these decisions. I had an account on their service for 5 years and had only created around 5 invites. (I don't remember the exact number and there's no way to retrieve this information.) This is not exactly an attacker profile. I feel long-term users like this deserve an explanation for a ban.
I understand Riseup is a community-run service and does not have unlimited resources like big corporations or commercial email providers do. Perhaps their actions felt disproportionate to me because I don't know what issues they face behind the scenes. I hope someone can help improve the policies, or at least shed light on why they are the way they are. Signing off now. Meet you in the next one! Thanks to Badri and Contrapunctus for reviewing this blog post.

4 January 2026

Sahil Dhiman: Full /8 IPv4 Address Block Holders in 2025

Here, I tried to collect entities that owned full /8 IPv4 address block(s) in 2025. A more comprehensive and historical list can be found at https://en.wikipedia.org/wiki/List_of_assigned_/8_IPv4_address_blocks.

Some notable mentions: Note: HE's list mentions addresses originated (not just owned). Providers also advertise prefixes owned by their customers in Bring Your Own IP (BYOIP) setups, so the address space an entity originates can be larger than the space it actually owns.
  • There s also 126.0.0.0/8 which is mostly Softbank.

3 January 2026

Louis-Philippe Véronneau: 2025: A Musical Retrospective

2026 already! The winter weather here has really been beautiful and I always enjoy this time of year. Writing this yearly musical retrospective has now become a beloved tradition of mine1 and I enjoy retracing the year's various events through the albums I listened to and the concerts I went to. Albums In 2025, I added 141 new albums to my collection, around 60% more than last year's haul. I think this might have been too much? I feel like I didn't have time to properly enjoy all of them, and as such, I decided to slow down my acquisition spree sometime in early December, around the time I normally do the complete opposite. This year again, I bought the vast majority of my music on Bandcamp. Most of the other albums I bought as CDs and ripped them. Concerts In 2025, I went to the following 25 (!!) concerts: Although I haven't touched metalfinder's code in a good while, my instance still works very well and I get the occasional match when a big-name artist in my collection comes to town. Most of the venues that advertise on Bandsintown are tied to Ticketmaster though, which means most underground artists (i.e. most of the music I listen to) end up playing elsewhere. As such, shout out again to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there. See you all in 2026!

  1. see the 2022, 2023 and 2024 entries

1 January 2026

Russ Allbery: 2025 Book Reading in Review

In 2025, I finished and reviewed 32 books, not counting another five books I've finished but not yet reviewed and which will therefore roll over to 2026. This was not a great reading year, although not my worst reading year since I started keeping track. I'm not entirely sure why, although part of the explanation was that I hit a bad stretch of books in spring of 2025 and got into a bit of a reading slump. Mostly, though, I shifted a lot of reading this year to short non-fiction (newsletters and doom-scrolling) and spent rather more time than I intended watching YouTube videos, and sadly each hour in the day can only be allocated one way. This year felt a bit like a holding pattern. I have some hopes of being more proactive and intentional in 2026. I'm still working on finding a good balance between all of my hobbies and the enjoyment of mindless entertainment. The best book I read this year was also the last book I reviewed (and yes, I snuck the review under the wire for that reason): Bethany Jacobs's This Brutal Moon, the conclusion of the Kindom Trilogy that started with These Burning Stars. I thought the first two books of the series were interesting but flawed, but the conclusion blew me away and improved the entire trilogy in retrospect. Like all books I rate 10 out of 10, I'm sure a large part of my reaction is idiosyncratic, but two friends of mine also loved the conclusion so it's not just me. The stand-out non-fiction book of the year was Rory Stewart's Politics on the Edge. I have a lot of disagreements with Stewart's political positions (the more I listen to him, the more disagreements I find), but he is an excellent memoirist who skewers the banality, superficiality, and contempt for competence that has become so prevalent in centrist and right-wing politics.
It's hard not to read this book and despair of electoralism and the current structures of governments, but it's bracing to know that even some people I disagree with believe in the value of expertise. I also finished Suzanne Palmer's excellent Finder Chronicles series, reading The Scavenger Door and Ghostdrift. This series is some of the best science fiction I've read in a long time and I'm sad it is over (at least for now). Palmer has a new, unrelated book coming in 2026 (Ode to the Half-Broken), and I'm looking forward to reading that. This year, I experimented with re-reading books I had already reviewed for the first time since I started writing reviews. After my reading slump, I felt like revisiting something I knew I liked, and therefore re-read C.J. Cherryh's Cyteen and Regenesis. Cyteen mostly held up, but Regenesis was worse than I had remembered. I experimented with a way to add on to my previous reviews, but I didn't like the results and the whole process of re-reading and re-reviewing annoyed me. I'm counting this as a failed experiment, which means I've still not solved the problem of how to revisit series that I read long enough ago that I want to re-read them before picking up the new book. (You may have noticed that I've not read the new Jacqueline Carey Kushiel novel, for example.) You may have also noticed that I didn't start a new series re-read, or continue my semi-in-progress re-reads of Mercedes Lackey or David Eddings. I have tentative plans to kick off a new series re-read in 2026, but I'm not ready to commit to that yet. As always, I have no firm numeric goals for the next year, but I hope to avoid another reading slump and drag my reading attention back from lower-quality and mostly-depressing material in 2026. The full analysis includes some additional personal reading statistics, probably only of interest to me.

31 December 2025

Ravi Dwivedi: Transit through Kuala Lumpur

In my last post, Badri and I reached Kuala Lumpur - the capital of Malaysia - on the 7th of December 2024. We stayed in Bukit Bintang, the entertainment district of the city. Our accommodation was pre-booked at Manor by Mingle, a hostel where I had stayed for a couple of nights in a dormitory room earlier in February 2024. We paid 4937 rupees (the payment was online, so we paid in Indian rupees) for 3 nights for a private room. From the Terminal Bersepadu Selatan (TBS) bus station, we took the metro to the Plaza Rakyat LRT station, which was around 500 meters from the hostel. Upon arriving at the hostel, we presented our passports at their request, followed by a 20 ringgit (400 rupee) deposit which would be refunded once we returned the room keys at checkout.
Outside view of the hostel Manor by Mingle Manor by Mingle - the hostel where we stayed at during our KL transit. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
Our room was upstairs and it had a bunk bed. I had seen bunk beds in dormitories before, but this was my first time seeing a bunk bed in a private room. The room did not have any toilets, so we had to use shared toilets. Unusually, the hostel was equipped with a pool. It also had a washing machine with dryers - this was one of the reasons we chose this hostel, because we were traveling light and hadn't packed too many clothes. The machine and dryer cost 10 ringgits (200 rupees) per use, and we only used it once. The hostel provided complimentary breakfast, which included coffee. Outside of breakfast hours, there was also a paid coffee machine. During our stay, we visited a gurdwara - a place of worship for Sikhs - which was within walking distance from our hostel. The name of the gurdwara was Gurdwara Sahib Mainduab. However, it wasn't as lively as I had thought. The gurdwara was locked from the inside, and we had to knock on the gate and call for someone to open it. A man opened the gate and invited us in. The gurdwara was small, and there was only one other visitor - a man worshipping upstairs. We went upstairs briefly, then settled down on the first floor. We had some conversations with the person downstairs, who kindly made chai for us. They mentioned that the langar (community meal) is organized every Friday, unlike the gurdwaras I have been to, where the langar is served every day. We were there for an hour before we left. We also went to Adyar Ananda Bhavan (a restaurant chain) near our hostel to try the chain in Malaysia. The chain is famous in Southern India and is also known by its short name, A2B. We ordered
A dosa Dosa served at Adyar Ananda Bhavan. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
All this came down to around 33 ringgits (including taxes), i.e. around 660 rupees. We also purchased some snacks such as murukku from there for our trip. We had planned a day trip to Malacca, but had to cancel it due to rain. We didn't do a lot in Kuala Lumpur, and it ended up acting as a transit point for us to other destinations: flights from Kuala Lumpur were cheaper than from Singapore, and in one case a flight via Kuala Lumpur was even cheaper than a direct flight! We paid 15,000 rupees in total for the following three flights:
  1. Kuala Lumpur to Brunei,
  2. Brunei to Kuala Lumpur, and
  3. Kuala Lumpur to Ho Chi Minh City (Vietnam).
These were all AirAsia flights. The cheap tickets, however, did not include any checked-in luggage, and the cabin luggage weight limit was 7 kg. We had also bought quite a lot of stuff in Kuala Lumpur and Singapore, increasing the weight of our luggage. We estimated that it would be cheaper for us to take only essential items such as clothes, cameras, and laptops, and to leave souvenirs and other non-essentials in lockers at the TBS bus stand in Kuala Lumpur, than to pay for check-in luggage. It would have cost 140 ringgits to add a checked-in bag from Kuala Lumpur to Bandar Seri Begawan and back, while the lockers cost 55 ringgits at the rate of 5 ringgits every six hours. We had seen these lockers when we alighted at the bus stand while coming from Johor Bahru. There might have been lockers in the airport itself as well, which would have been more convenient as we were planning to fly back in soon, but we weren't sure about finding lockers at the airport and didn't want to waste time looking. We had an early morning flight to Brunei on the 10th of December. We checked out of our hostel on the night of the 9th of December and left for TBS to take a bus to the airport. We took the metro from the nearest metro station to TBS. Upon reaching there, we put our luggage in the lockers. The lockers were automated and there was no staff there to guide us.
Lockers at TBS bus station Lockers at TBS bus station. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
We bought tickets for the airport bus from a counter at TBS for 26 ringgits for both of us. In order to issue the tickets, the person at the counter asked for our passports, and we handed them over promptly. Since paying in cash did not provide any extra anonymity, I would advise others to book these buses online. In Malaysia, you also need a boarding pass for buses. The bus terminal had kiosks for printing these, but they were broken and we had to go to a counter to obtain them. The boarding pass mentioned our gate number and other details such as our names and the departure time of the bus. The company was Jet Bus.
My boarding pass for the bus to the airport in Kuala Lumpur My boarding pass for the bus to the airport in Kuala Lumpur. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
To get to our boarding gate, we had to scan our boarding passes to open the AFC gates. Then we went downstairs into the waiting area, which had departure boards listing the bus timings and their respective gates. We boarded our bus around 10 minutes before the departure time of 00:00 hours. It departed on schedule and took 45 minutes to reach KL Airport Terminal 2, where we alighted. We arrived 6 hours before our flight's departure time of 06:30. We stopped at a convenience store at the airport for some snacks. Then we weighed our bags at a weighing machine to check whether we were within the weight limit. It turned out that we were. We went to an AirAsia counter to get our boarding passes. The lady at the counter checked our Brunei visas carefully and looked for any Brunei stamps in our passports to verify whether we had used the visas in the past. However, she didn't weigh our bags to check whether they were within the limit, and gave us our boarding passes. We had more than 4 hours to go before our flight. This was the downside of booking an early morning flight: we weren't able to get a full night's sleep. A couple of hours before our flight time, we were hanging around our boarding gate. The place was crowded, so there were no seats available, and there were no charging points. There was a Burger King outlet there which had some seating space and charging points. As we were hungry, we ordered two cups of cappuccino (15.9 ringgits) and one large french fries (8.9 ringgits) from Burger King. The total amount was 24 ringgits. When it was time to board, we went to the waiting area for our boarding gate. Soon, we boarded the plane. It took 2.5 hours to reach Brunei International Airport in the capital city, Bandar Seri Begawan.
View of Kuala Lumpur from the aeroplane View of Kuala Lumpur from the aeroplane. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.
Stay tuned for our experiences in Brunei! Credits: Thanks to Badri, Benson and Contrapunctus for reviewing the draft.

22 December 2025

Jonathan McDowell: NanoKVM: I like it

I bought a NanoKVM. I'd heard some of the stories about how terrible it was beforehand, and some I didn't learn about until afterwards, but at £52, including VAT + P&P, that seemed like an excellent bargain for something I was planning to use in my home network environment. Let's cover the bad press first. apalrd did a video entitled NanoKVM: The S stands for Security (Armen Barsegyan has a write-up recommending a PiKVM instead that lists the objections raised in the video). Matej Kovačič wrote an article about the hidden microphone on a Chinese NanoKVM. Various other places have picked up both of these and still seem to be running with them, 10 months later. Next, let me explain where I'm coming from here. I have over 2 decades of experience with terrible out-of-band access devices. I still wince when I think of the Sun Opteron servers that shipped with an iLOM that needed a 32-bit Windows browser in order to access it (IIRC some 32-bit binary JNI blob). It was a 64-bit x86 server from a company who, at the time, still had a major non-Windows OS. Sheesh. I do not assume these devices are fit for exposure to the public internet, even if they come from reputable vendors. Add to that the fact the NanoKVM is very much based on a development board (the LicheeRV Nano), and I felt I knew what I was getting into here. And, as a TL;DR, I am perfectly happy with my purchase. Sipeed have actually dealt with a bunch of apalrd's concerns (GitHub ticket), which I consider to be an impressive level of support for this price point. Equally, the microphone is explained by the fact this is a £52 device based on a development board. You're giving it USB + HDMI access to a host on your network; if you're worried about the microphone then you're concentrating on the wrong bit here. I started out by hooking the NanoKVM up to my Raspberry Pi classic, which I use as a serial console / network boot tool for working on random bits of hardware.
That meant the NanoKVM had no access to the outside world (the Pi is not configured to route, or NAT, for the test network interface), and I could observe what went on. As it happens you can do an SSH port forward of port 80 with this sort of setup and it all works fine - no need for the NanoKVM to have any external access, and it copes happily with being accessed as http://localhost:8000/ (though you do need to choose MJPEG as the video mode; more forwarding, or enabling HTTPS, is needed for an H.264 WebRTC session). IPv6 is enabled in the kernel. My test setup doesn't have router advertisements configured, but I could connect to the web application over the v6 link-local address that came up automatically. My device reports:
Image version:              v1.4.1
Application version:        2.2.9
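The port-forward setup described above might look like the following ~/.ssh/config fragment (a sketch, not from the post; the host and user names here are assumptions):

```
# Forward local port 8000 to the NanoKVM's web interface (port 80),
# via the Pi that sits on the isolated test network.
Host nanokvm-tunnel
    HostName raspberrypi.local
    User pi
    LocalForward 8000 nanokvm.test:80
```

With that in place, `ssh nanokvm-tunnel` makes the interface reachable at http://localhost:8000/ (MJPEG video mode, as noted above).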
That's recent, but the GitHub releases page has 2.3.0 listed as more recent. Out of the box it's listening on TCP port 80. SSH is not running, but there's a toggle to turn it on, and the web interface offers a web based shell (with no extra authentication over the normal login). On first use I was asked to set a username + password. Default access, as you'd expect from port 80, is HTTP, but there's a toggle to enable HTTPS. It generates a self signed certificate - for me it had the CN localhost but that might have been due to my use of port forwarding. Enabling HTTPS does not disable HTTP, but HTTP just redirects to the HTTPS URL. As others have discussed it does a bunch of DNS lookups, primarily for NTP servers but also for cdn.sipeed.com. The DNS servers are hard coded:
~ # cat /etc/resolv.conf
nameserver 192.168.0.1
nameserver 8.8.4.4
nameserver 8.8.8.8
nameserver 114.114.114.114
nameserver 119.29.29.29
nameserver 223.5.5.5
This is actually restored on boot from /boot/resolv.conf, so if you want changes to persist you can just edit that file. NTP is configured with a standard set of pool.ntp.org servers in /etc/ntp.conf (this does not get restored on reboot, so can just be edited in place). I had dnsmasq on the Pi set up to hand out DNS + NTP servers, but both were ignored (though actually udhcpc does write the DNS details to /etc/resolv.conf.dhcp).

My assumption is the lookup of cdn.sipeed.com is for firmware updates (as I bought the NanoKVM Cube it came fully installed, so no need for a .so download to make things work); when working DNS was provided I witnessed attempts to connect over HTTPS. I've not bothered digging further into this. I did go grab the latest.zip being served from the URL, which turned out to be v2.2.9, matching what I have installed, not the latest on GitHub.

I note there's an iptables setup (with nftables underneath) that's not fully realised - it seems to be trying to allow inbound HTTP + WebRTC, as well as outbound SSH, but everything is default accept so none of it gets hit. Setting up a default deny outbound and tweaking a little should provide a bit more reassurance it's not going to try and connect out somewhere it shouldn't.

It looks like updates focus solely on the KVM application, so I wanted to take a look at the underlying OS. This is buildroot based:
~ # cat /etc/os-release
NAME=Buildroot
VERSION=-g98d17d2c0-dirty
ID=buildroot
VERSION_ID=2023.11.2
PRETTY_NAME="Buildroot 2023.11.2"
The kernel reports itself as 5.10.4-tag-. Somewhat ancient, but actually an LTS kernel. Except we're now up to 5.10.247, so it obviously hasn't been updated in some time. TBH, this is what I expect (and fear) from embedded devices. They end up with some ancient base OS revision and a kernel with a bunch of hacks that mean it's not easily updated. I get that the margins on this stuff are tiny, but I do wish folk would spend more time upstreaming. Or at least updating to the latest LTS point release for their kernel. The SSH client/daemon is full-fat OpenSSH:
~ # sshd -V
OpenSSH_9.6p1, OpenSSL 3.1.4 24 Oct 2023
There are a number of CVEs fixed in later OpenSSL 3.1 versions, though at present nothing that looks too concerning from the server side. Yes, the image has tcpdump + aircrack installed. I'm a little surprised at aircrack (the device has no WiFi, and even though I know there's a variant that does, it's not a standard debug tool the way tcpdump is), but there's a copy of GNU Chess in there too, so it's obvious this is just a kitchen-sink image. FWIW it looks like the buildroot config is here.

Sadly the UART that I believe the bootloader/kernel are talking to is not exposed externally - the UART pin headers are for UART1 + 2, and I'd have to open up the device to get to UART0. I've not yet done this (but doing so would also allow access to the SD card, which would make trying to compile + test my own kernel easier).

In terms of actual functionality it did what I'd expect. 1080p HDMI capture was fine. I'd have gone for a lower resolution, but I think that would have required tweaking on the client side. It looks like the 2.3.0 release allows EDID tweaking, so I might have to investigate that. The keyboard defaults to a US layout, which caused some problems with the £ symbol until I reconfigured the target machine not to expect a GB layout.

There's also the potential to share out images via USB. I copied a Debian trixie netinst image to /data on the NanoKVM and was able to select it in the web interface and have it appear on the target machine easily. There's also the option to fetch direct from a URL in the web interface, but I was still testing without routable network access, so didn't try that. There's plenty of room for images:
~ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mmcblk0p2            7.6G    823.3M      6.4G  11% /
devtmpfs                 77.7M         0     77.7M   0% /dev
tmpfs                    79.0M         0     79.0M   0% /dev/shm
tmpfs                    79.0M     30.2M     48.8M  38% /tmp
tmpfs                    79.0M    124.0K     78.9M   0% /run
/dev/mmcblk0p1           16.0M     11.5M      4.5M  72% /boot
/dev/mmcblk0p3           22.2G    160.0K     22.2G   0% /data
The NanoKVM also appears as an RNDIS USB network device, with udhcpd running on the interface. IP forwarding is not enabled, and there are no masquerading rules set up, so this doesn't give the target host access to the management LAN by default. I guess it could be useful for copying things over to the target host, as a more flexible approach than a virtual disk image.

One thing to note is this makes for a bunch of devices over the composite USB interface. There are 3 HID devices (keyboard, absolute mouse, relative mouse), the RNDIS interface, and the USB mass storage. I had a few occasions where the keyboard input got stuck after I'd been playing about with big data copies over the network and using the USB mass storage emulation. There is a HID-only mode (no network/mass storage) to try and help with this, and a restart of the NanoKVM generally brought things back, but something to watch out for. Again I see that the 2.3.0 application update mentions resetting the USB hardware on a HID reset, which might well help.

As I stated at the start, I'm happy with this purchase. Would I leave it exposed to the internet without suitable firewalling? No, but then I wouldn't do so for any KVM. I wanted a lightweight KVM suitable for use in my home network, something unlikely to see heavy use but that would save me hooking up an actual monitor + keyboard when things were misbehaving. So far everything I've seen says I've got my money's worth from it.
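The RNDIS discussion above notes that IP forwarding and masquerading are deliberately absent. If one did want the target host to reach the management LAN over that link, the missing pieces would look something like the following sketch (not from the post, and the interface names eth0/usb0 are assumptions):

```
# /etc/sysctl.conf fragment: allow the kernel to route between interfaces
net.ipv4.ip_forward = 1

# nftables ruleset fragment: masquerade traffic leaving via the LAN side
table ip nat {
    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        oifname "eth0" masquerade
    }
}
```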

21 December 2025

Ian Jackson: Debian s git transition

tl;dr: There is a Debian git transition plan. It's going OK so far but we need help, especially with outreach and updating Debian's documentation.

Goals of the Debian git transition project
  1. Everyone who interacts with Debian source code should be able to do so entirely in git.
That means, more specifically:
  1. All examination and edits to the source should be performed via normal git operations.
  2. Source code should be transferred and exchanged as git data, not tarballs. git should be the canonical form everywhere.
  3. Upstream git histories should be re-published, traceably, as part of formal git releases published by Debian.
  4. No-one should have to learn about Debian Source Packages, which are bizarre, and have been obsoleted by modern version control.
This is very ambitious, but we have come a long way!

Achievements so far, and current status

We have come a very long way. But, there is still much to do - especially, the git transition team needs your help with adoption, developer outreach, and developer documentation overhaul. We've made big strides towards goals 1 and 4. Goal 2 is partially achieved: we currently have dual running. Goal 3 is within our reach but depends on widespread adoption of tag2upload (and/or dgit push).

Downstreams and users can obtain the source code of any Debian package in git form (dgit clone, 2013). They can then work with this source code completely in git, including building binaries, merging new versions, even automatically (eg Raspbian, 2016), and all without having to deal with source packages at all (eg Wikimedia 2025).

A Debian maintainer can maintain their own package entirely in git. They can obtain upstream source code from git, and do their packaging work in git (git-buildpackage, 2006). Every Debian maintainer can (and should!) release their package from git reliably and in a standard form (dgit push, 2013; tag2upload, 2025). This is not only more principled, but also more convenient, and with better UX, than pre-dgit tooling like dput. Indeed a Debian maintainer can now often release their changes to Debian, from git, using only git branches (so no tarballs). Releasing to Debian can be simply pushing a signed tag (tag2upload, 2025).

A Debian maintainer can maintain a stack of changes to upstream source code in git (gbp pq, 2009). They can even maintain such a delta series as a rebasing git branch, directly buildable, and use normal git rebase style operations to edit their changes (git-dpm, 2010; git-debrebase, 2018).

An authorised Debian developer can do a modest update to any package in Debian, even one maintained by someone else, working entirely in git in a standard and convenient way (dgit, 2013).
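The release-from-git workflow described above boils down to very few commands. A hypothetical session (the package name hello is chosen purely for illustration):

```shell
# Fetch the canonical git view of a package (any package, any maintainer):
#   dgit clone hello
#   cd hello
# ...edit and commit as with any git repository...
# Release to Debian by pushing a signed tag (tag2upload):
#   git debpush
# or perform a git-based source-only upload directly:
#   dgit push-source
```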
Debian contributors can share their work-in-progress on git forges and collaborate using merge requests, git based code review, and so on (Alioth, 2003; Salsa, 2018).

Core engineering principle

The Debian git transition project is based on one core engineering principle: Every Debian Source Package can be losslessly converted to and from git.

In order to transition away from Debian Source Packages, we need to gateway between the old dsc approach, and the new git approach. This gateway obviously needs to be bidirectional: source packages uploaded with legacy tooling like dput need to be imported into a canonical git representation; and of course git branches prepared by developers need to be converted to source packages for the benefit of legacy downstream systems (such as the Debian Archive and apt source). This bidirectional gateway is implemented in src:dgit, and is allowing us to gradually replace dsc-based parts of the Debian system with git-based ones.

Correspondence between dsc and git

A faithful bidirectional gateway must define an invariant: The canonical git tree, corresponding to a .dsc, is the tree resulting from dpkg-source -x. This canonical form is sometimes called the "dgit view". It's sometimes not the same as the maintainer's git branch, because many maintainers are still working with patches-unapplied git branches. More on this below. (For 3.0 (quilt) .dscs, the canonical git tree doesn't include the quilt .pc directory.)

Patches-applied vs patches-unapplied

The canonical git format is "patches applied". That is: If Debian has modified the upstream source code, a normal git clone of the canonical branch gives the modified source tree, ready for reading and building. Many Debian maintainers keep their packages in a different git branch format, where the changes made by Debian, to the upstream source code, are in actual patch files in a debian/patches/ subdirectory.
Patches-applied has a number of important advantages over patches-unapplied. The downside is that, with the (bizarre) 3.0 (quilt) source format, the patch files in debian/patches/ must somehow be kept up to date. Nowadays though, tools like git-debrebase and git-dpm (and dgit for NMUs) make it very easy to work with patches-applied git branches. git-debrebase can deal very ergonomically even with big patch stacks. (For smaller packages which usually have no patches, plain git merge with an upstream git branch, and a much simpler dsc format, sidesteps the problem entirely.)

Prioritising Debian's users (and other outsiders)

We want everyone to be able to share and modify the software that they interact with. That means we should make source code truly accessible, on the user's terms. Many of Debian's processes assume everyone is an insider. It's okay that there are Debian insiders and that people feel part of something that they worked hard to become involved with. But lack of perspective can lead to software which fails to uphold our values. Our source code practices in particular, our determination to share properly (and systematically), are a key part of what makes Debian worthwhile at all. Like Debian's installer, we want our source code to be useable by Debian outsiders. This is why we have chosen to privilege a git branch format which is more familiar to the world at large, even if it's less popular in Debian.

Consequences, some of which are annoying

The requirement that the conversion be bidirectional, lossless, and context-free can be inconvenient. For example, we cannot support .gitattributes which modify files during git checkin and checkout. .gitattributes cause the meaning of a git tree to depend on the context, in possibly arbitrary ways, so the conversion from git to source package wouldn't be stable. And, worse, some source packages might not be representable in git at all.
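The checkin/checkout instability is easy to demonstrate with a throwaway repository. This sketch (not from the original post) shows a text eol=lf attribute silently changing file content between what was written and what a later checkout produces:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
# Write a file with CRLF line endings: 6 bytes on disk.
printf 'a\r\nb\r\n' > file.txt
printf 'file.txt text eol=lf\n' > .gitattributes
git add .                 # checkin normalizes CRLF -> LF in the stored blob
rm file.txt
git checkout -- file.txt  # checkout applies the filter when recreating it
result=$(wc -c < file.txt)
echo "bytes after round-trip: $result"
```

The file comes back as 4 bytes, not the 6 that were written: the bytes in the working tree depend on attribute handling, which is exactly why a stable git-to-source-package conversion cannot support such attributes.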
Another example: Maintainers often have existing git branches for their packages, generated with pre-dgit tooling which is less careful and less principled than ours. That can result in discrepancies between git and dsc, which need to be resolved before a proper git-based upload can succeed.

That some maintainers use patches-applied, and some patches-unapplied, means that there has to be some kind of conversion to a standard git representation. Choosing the less-popular patches-applied format as the canonical form means that many packages need their git representation converted. It also means that user- and outsider-facing branches from browse.dgit.d.o and dgit clone are not always compatible with maintainer branches on Salsa. User-contributed changes need cherry-picking rather than merging, or conversion back to the maintainer format. The good news is that dgit can automate much of this, and the manual parts are usually easy git operations.

Distributing the source code as git

Our source code management should be normal, modern, and based on git. That means the Debian Archive is obsolete and needs to be replaced with a set of git repositories. The replacement repository for source code formally released to Debian is *.dgit.debian.org. This contains all the git objects for every git-based upload since 2013, including the signed tag for each released package version. The plan is that it will contain a git view of every uploaded Debian package, by centrally importing all legacy uploads into git.

Tracking the relevant git data, when changes are made in the legacy Archive

Currently, many critical source code management tasks are done by changes to the legacy Debian Archive, which works entirely with dsc files (and the associated tarballs etc). The contents of the Archive are therefore still an important source of truth. But, the Archive's architecture means it cannot sensibly directly contain git data.
To track changes made in the Archive, we added the Dgit: field to the .dsc of a git-based upload (2013). This declares which git commit this package was converted from, and where those git objects can be obtained. Thus, given a Debian Source Package from a git-based upload, it is possible for the new git tooling to obtain the equivalent git objects. If the user is going to work in git, there is no need for any tarballs to be downloaded: the git data can be obtained from the depository using the git protocol. The signed tags, available from the git depository, have standardised metadata which gives traceability back to the uploading Debian contributor.

Why *.dgit.debian.org is not Salsa

We need a git depository - a formal, reliable and permanent git repository of source code actually released to Debian. Git forges like Gitlab can be very convenient. But Gitlab is not sufficiently secure, and too full of bugs, to be the principal and only archive of all our source code. (The open core business model of the Gitlab corporation, and the constant-churn development approach, are critical underlying problems.) Our git depository lacks forge features like Merge Requests. But: The dgit git depository outlasted Alioth and it may well outlast Salsa. We need both a good forge, and the *.dgit.debian.org formal git depository.

Roadmap

In progress

Right now we are quite focused on tag2upload. We are working hard on eliminating the remaining issues that we feel need to be addressed before declaring the service out of beta.

Future

Technology

Whole-archive dsc importer

Currently, the git depository only has git data for git-based package updates (tag2upload and dgit push). Legacy dput-based uploads are not currently present there. This means that the git-based and legacy uploads must be resolved client-side, by dgit clone. We will want to start importing legacy uploads to git.
Then downstreams and users will be able to get the source code for any package simply with git clone, even if the maintainer is using legacy upload tools like dput.

Support for git-based uploads to security.debian.org

Security patching is a task which would particularly benefit from better and more formal use of git. git-based approaches to applying and backporting security patches are much more convenient than messing about with actual patch files. Currently, one can use git to help prepare a security upload, but it often involves starting with a dsc import (which lacks the proper git history) or figuring out a package maintainer's unstandardised git usage conventions on Salsa. And it is not possible to properly perform the security release as git.

Internal Debian consumers switch to getting source from git

Buildds, QA work such as lintian checks, and so on, could be simpler if they don't need to deal with source packages. And since git is actually the canonical form, we want them to use it directly.

Problems for the distant future

For decades, Debian has been built around source packages. Replacing them is a long and complex process. Certainly source packages are going to continue to be supported for the foreseeable future. There are no doubt going to be unanticipated problems. There are also foreseeable issues: for example, perhaps there are packages that work very badly when represented in git. We think we can rise to these challenges as they come up.

Mindshare and adoption - please help!

We and our users are very pleased with our technology. It is convenient and highly dependable. dgit in particular is superb, even if we say so ourselves. As technologists, we have been very focused on building good software, but it seems we have fallen short in the marketing department.

A rant about publishing the source code

git is the preferred form for modification. Our upstreams are overwhelmingly using git. We are overwhelmingly using git.
It is a scandal that for many packages, Debian does not properly, formally and officially publish the git history. Properly publishing the source code as git means publishing it in a way that means that anyone can automatically and reliably obtain and build the exact source code corresponding to the binaries. The test is: could you use that to build a derivative?

Putting a package in git on Salsa is often a good idea, but it is not sufficient. No standard branch structure on Salsa is enforced, nor should it be (so it can't be automatically and reliably obtained), the tree is not in a standard form (so it can't be automatically built), and is not necessarily identical to the source package. So Vcs-Git fields, and git from Salsa, will never be sufficient to make a derivative. Debian is not publishing the source code!

The time has come for proper publication of source code by Debian to no longer be a minority sport. Every maintainer of a package whose upstream is using git (which is nearly all packages nowadays) should be basing their work on upstream git, and properly publishing that via tag2upload or dgit. And it's not even difficult! The modern git-based tooling provides a far superior upload experience.

A common misunderstanding

dgit push is not an alternative to gbp pq or quilt. Nor is tag2upload. These upload tools complement your existing git workflow. They replace and improve source package building/signing and the subsequent dput. If you are using one of the usual git layouts on Salsa, and your package is in good shape, you can adopt tag2upload and/or dgit push right away. git-debrebase is distinct and does provide an alternative way to manage your git packaging, do your upstream rebases, etc.

Documentation

Debian's documentation all needs to be updated, including particularly instructions for packaging, to recommend use of git-first workflows. Debian should not be importing git-using upstreams' release tarballs into git.
(Debian outsiders who discover this practice are typically horrified.) We should use only upstream git, work only in git, and properly release (and publish) in git form. We, the git transition team, are experts in the technology, and can provide good suggestions. But we do not have the bandwidth to also engage in the massive campaigns of education and documentation updates that are necessary, especially given that (as with any programme for change) many people will be sceptical or even hostile. So we would greatly appreciate help with writing and outreach.

Personnel

We consider ourselves the Debian git transition team. Currently we are: We wear the following hats related to the git transition: You can contact us: We do most of our heavy-duty development on Salsa.

Thanks

Particular thanks are due to Joey Hess, who, in the now-famous design session in Vaumarcus in 2013, helped invent dgit. Since then we have had a lot of support: most recently political support to help get tag2upload deployed, but also, over the years, helpful bug reports and kind words from our users, as well as translations and code contributions. Many other people have contributed more generally to support for working with Debian source code in git. We particularly want to mention Guido Günther (git-buildpackage); and of course Alexander Wirt, Joerg Jaspert, Thomas Goirand and Antonio Terceiro (Salsa administrators); and before them the Alioth administrators.


Russell Coker: Links December 2025

Russ Allbery wrote an interesting review of Politics on the Edge, by Rory Stewart, who seems like one of the few conservative politicians I could respect and possibly even like [1]. It has some good insights about the problems with our current political environment. The NY Times has an amusing article about the attempt to sell the solution to the CIA's encrypted artwork [2]. Wired has an interesting article about computer face recognition systems failing on people with facial disabilities or scars [3]. This is a major accessibility issue, potentially violating disability legislation, and a demonstration of the problems of fully automating systems when there should be a human in the loop. The October 2025 report from the Debian Reproducible Builds team is particularly interesting [4]. kpcyrd forwarded a fascinating tidbit regarding so-called ninja and samurai build ordering, which uses data structures in which the pointer values returned from malloc determine some order of execution LOL. Louis Rossmann made an insightful YouTube video about the moral case for piracy of software and media [5]. Louis Rossmann made an insightful video about the way that Hyundai is circumventing Right to Repair laws to make repairs needlessly expensive [6]. Korean cars aren't much good nowadays. Their prices keep increasing and the quality doesn't. Brian Krebs wrote an interesting article about how Google is taking legal action against SMS phishing crime groups [7]. We need more of this! Josh Griffiths wrote an informative blog post about how YouTube is awful [8]. I really should investigate Peertube. Louis Rossmann made an informative YouTube video about Right to Repair and the US military; if even the US military is getting ripped off by this it's a bigger problem than most people realise [9]. He also asks the rhetorical question of whether politicians are bought or whether it's "a subscription model".
Brian Krebs wrote an informative article about the US plans to ban TP-Link devices; OpenWRT seems like a good option [10]. Brian Krebs wrote an informative article about free streaming Android TV boxes that act as hidden residential VPN proxies [11]. Also the free streaming violates copyright law. Bruce Schneier and Nathan E. Sanders wrote an interesting article about ways that AI is being used to strengthen democracy [12]. Cory Doctorow wrote an insightful article about the incentives for making shitty goods and services and why we need legislation to protect consumers [13]. Linus Tech Tips has an interesting interview with Linus Torvalds [14]. Interesting video about the Kowloon Walled City [15]. It would be nice if a government deliberately created a hive city like that; the only example I know of is the Alaskan town in a single building. David Brin wrote an insightful set of 3 blog posts about a Democratic American deal that could improve the situation there [16].

18 December 2025

Dirk Eddelbuettel: dang 0.0.17: New Features, Plus Maintenance

A new release of dang, my mixed collection of things package, arrived at CRAN earlier today. The dang package regroups a few functions of mine that had no other home, as for example lsos() from a StackOverflow question from 2009 (!!), the overbought/oversold price band plotter from an older blog post, the market monitor blogged about as well, and the checkCRANStatus() function tweeted about by Tim Taylor. And more, so take a look. This release retires two functions: the social media site nobody ever visits anymore shut down its API too, so there is no way to mute posts by a given handle. Similarly, the (never official) ability by Google to supply financial data is no more, so the function to access data this way is gone too. But we also have two new ones: one that helps with CRAN entries for ORCiD ids, and another little helper to re-order microbenchmark results by summary column (defaulting to the median). Beyond that there are the usual updates to continuous integration, a switch to Authors@R (which will result in CRAN nagging me less about this), and another argument update. The detailed NEWS entry follows.

Changes in version 0.0.17 (2025-12-18)
  • Added new function reorderMicrobenchmarkResults with alias rmr
  • Use tolower on email argument to checkCRANStatus
  • Added new function cranORCIDs bootstrapped from two emails by Kurt Hornik
  • Switched to using Authors@R in DESCRIPTION and added ORCIDs where available
  • Switched to r-ci action with included bootstrap step; updated the checkout action (twice); added (commented-out) log accessor
  • Removed googleFinanceData as the (unofficial) API access point no longer works
  • Removed muteTweeters because the API was turned off

Via my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

7 December 2025

Iustin Pop: Yes, still alive!

Yeah, again three months have passed since my last (trivial) post, and I really don't know where the time has flown. I suppose the biggest problem was the long summer vacation, which threw me off-track, and then craziness started. Work work work, no time for anything, which kept me fully busy in August, and then "you should travel". So mid-September I went on my first business trip since Covid, again to Kirkland, which in itself was awesome. Flew out Sunday, and as I was concerned I was going to lose too much fitness - I had a half-marathon planned on the weekend after the return - I ran every morning of the four days I was there.

And of course, on the last day, I woke up even earlier (05:30 AM), went out to run before sunrise, intending to do a very simple run: "along the road that borders the lake for 2.5K, then back". And right at the farthest point, a hundred metres before my goal of turning around, I tripped, started falling, and as I was falling, I hit sideways a metal pole. I was in a bus station, it was the pole that has the schedule at the top, and I hit it at relatively full speed, right across my left-side ribs.

The crash took the entire air out of my lungs, and I don't remember if I ever felt pain/sensation like that. I was seriously not able to breathe for 20 seconds or so, and I was wondering if I'm going to pass out at this rate. Only 20 seconds, because my Garmin started howling like a police siren, and the screen was saying something along the lines of: "Incident detected; contacting emergency services in 40 35" (counting down), and I was fumbling to cancel that, since a) I wasn't that bad, b) notifying my wife that I had a crash would have not been a smart idea.

My left leg was scraped in a few places, my left hand pretty badly, or more than just scraped, so my focus was on limping back, and finding a fountain to wash my injuries, which I did, so I kept running with blood dripping down my hand.
Fun fun, everything was hurting. I took an Uber for the ~1 km to the office, had many meetings, took another Uber and flew back to Zurich. Seattle, San Francisco, Zürich, I think 14 hours, with my ribs hurting pretty badly. But I got home (Friday afternoon), and was wondering if I can run or not on Saturday.

Saturday comes, I feel pretty OK, so I said let's try, will stop if the pain is too great. I pick up my number, I go to the start, of course in the last block and not my normal block, and I start running. After 50 metres, I knew this won't be good enough, but I said, let's make it to the first kilometre. Then to the first fuelling point, then to the first aid point, at which moment I felt good enough to go to the second one. Long story short, I ran the whole half marathon, with pain. Every stop for fuelling was mentally hard, as the pain stopped, and I knew I had to start running again, and the pain would resume. In the end, I managed to finish: two and a half hours, instead of just two hours, but alive and very happy.

Of course, I didn't know what was waiting for me. Sunday I wake up in heavy pain, and despite painkillers, I was not feeling much better. The following night was terrible; Monday morning I went to the doctor, had X-rays, discussion with a radiologist. "Not really broken, but more than just bruised. See this angle here? Bones don't have angles normally." Painkillers, chest/abdomen wrapping, no running! So my attempts to not lose fitness put me off running for a couple of weeks.

Then October came, and I was getting better, but work was getting even more crazy. I don't know where November passed, honestly, and now we're already in December. I did manage to run, quite well, managed to bike a tiny bit and swim a little, but I'm not in a place where I can keep a regular and consistent schedule. On the good side, I managed this year, for the first time since Covid, to not get sick.
Hey, a sport injury is 100 times better than a sickness like I had in previous years, taking me out for two weeks. But life was crazy enough that I didn't read some of my email accounts for months, and I'm just now starting to catch up to, well, baseline. Of course, the rib (the lowest one on the left side) is long-healed, or so I thought. After some strength training early this week, I was very sore the next day, and I wanted to test whether my rib was still sore. I touched it at "the point", and it hurt so badly I couldn't believe it. Two and a half months, and it's not done-done. And now it's just two weeks before Christmas and New Year's, and that time off will ruin my rhythm again. At least the ski vacation is booked, ski service is done, and slowly, work is getting into good enough shape to actually enjoy thinking about vacation. So, in the end, a very adventurous last third of the year, and that wasn't even all. As I'm writing this, my right wrist is bandaged, and for the past 24 hours it hasn't hurt too much, but that's another, and not so interesting, story. I'll close with a yay for always being behind/backlogged, but alive and relatively well. My sport injuries are elective injuries, so to speak, and I'm very thankful for that. See you in the next post!

1 December 2025

Guido Günther: Free Software Activities November 2025

Another short status update of what happened on my side last month. Hand-holding the release machinery for Phosh 0.51.0, but there's more. See below for details on the above and more: phosh, phoc, phosh-mobile-settings, stevia, xdg-desktop-portal-phosh, pfs, Phrog, gmobile, feedbackd, feedbackd-device-themes, libcall-ui, wireplumber, Chatty, mobile-broadband-provider-info, Debian, Mobian, wlroots, libqrtr-glib, libadwaita-rs, phosh-site, bengalos-debs, gtk. Reviews: this is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions! Help Development: if you want to support my work, see donations. Comments? Join the Fediverse thread

30 November 2025

Russ Allbery: Review: The Last Soul Among Wolves

Review: The Last Soul Among Wolves, by Melissa Caruso
Series: The Echo Archives #2
Publisher: Orbit
Copyright: August 2025
ISBN: 0-316-30404-2
Format: Kindle
Pages: 355
The Last Soul Among Wolves is urban high fantasy with strong mystery vibes. It is a direct sequel to The Last Hour Between Worlds. You need the previous book for some character setup (and this book would spoil it badly), but you don't have to remember the first book in detail. Only the main plot outcomes are directly relevant and the characters will remind you of those. Kembrel Thorne is a Hound, the equivalent of a police detective in the medieval-inspired city setting of this series, but this book does not open with an official assignment. Instead, she has been dragged by her childhood friend Jaycel Morningrey as company for a reading of the will of old lady Lovegrace, reclusive owner of a gothic mansion on an island connected to the city by an intermittent sandbar. A surprise reunion with her gang of childhood friends ensues, followed by the revelation that they are all in serious trouble. Shortly after Kem left the group to become a Hound, the remaining four, plus several other apparently random people, got entangled with a powerful Echo artifact. Now that Lovegrace has died, one of them will inherit the artifact and the ability to make a wish, but only one. The rest will be killed at decreasing intervals until only the winner is left alive. The Last Hour Between Worlds was fae fantasy built around a problem that was more of a puzzle than a mystery. The Last Soul Among Wolves is closer to a classic mystery: A cast of characters are brought together and semi-isolated in a rural house, they start dying, and it's up to the detective to solve the mystery of their deaths before it's too late. In this case, the initial mechanism of death is supernatural and not in doubt (the challenge instead is how to stop it from happening again), but Kem's problems quickly become more complicated. As mystery plots go, this is more thriller than classical despite the setting.
There are a few scenes of analyzing clues, but Kem is more likely to use the time-honored protagonist technique of throwing herself into danger and learning what's going on via the villain monologues. As readers of the previous book would expect, Rika Nonesuch is here too, hired by another of Kem's old friends, and the two navigate their personal feelings and the rivalry between their guilds in much the way that they did in The Last Hour Between Worlds. As in the first book, there is a sapphic romance subplot, but it's a very slow-burn asexual romance. The best part of this series continues to be the world-building. The previous book introduced the idea of the Echoes and sent the characters exploring into stranger and stranger depths. This book fleshes out the rules in more detail, creating something that feels partly like a fae realm and partly like high fantasy involving gods, but diverges from both into a logic of its own. The ending satisfyingly passes my test of fantasy mysteries: Resolving the mystery requires understanding and applying the rules of the setting, which are sufficiently strange to create interesting outcomes but coherent enough that the reader doesn't feel like the author is cheating. There are some hissable villains here, but my favorite part of this book was the way Caruso added a lot of nuance and poignancy to the Echoes rather than showing them only as an uncanny threat. That choice made the world feel deeper and richer. It's not yet clear whether that element is setup for a longer-term series plot, but I hope Caruso will develop the story in that direction. It felt to me like Caruso is aiming for an ongoing series rather than a multi-volume story with a definite ending. She avoids a full episodic reset (Rika, in particular, gets considerable character development and new complications that bode well for future volumes), but it doesn't feel like the series is building towards an imminent climax. This is not a complaint.
I enjoy these characters and this world and will happily keep devouring each new series entry. If you liked The Last Hour Between Worlds, I think you will like this. It doesn't have the same delight of initial discovery of the great world-building, but the plot is satisfying and a bit more complex and the supporting characters are even better than those in the first book. Once again, Caruso kept me turning the pages, and I'm now looking forward to a third volume. Recommended. The third book in the series has not yet been announced, but there are indications on social media that it is coming. Rating: 7 out of 10

28 November 2025

Simon Josefsson: Container Images for Debian with Guix

The debian-with-guix-container project builds and publishes container images of Debian GNU/Linux stable with GNU Guix installed. The images are like normal Debian stable containers but have the guix tool and a reasonably fresh guix pull. Supported architectures include amd64 and arm64. The multi-arch container is called: registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable It may also be accessed via debian-with-guix on Docker Hub as: docker.io/jas4711/debian-with-guix:stable The container images may be used like this:
$ podman run --privileged -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
root@guix:/# hello
bash: hello: command not found
root@guix:/# guix describe
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild &
[1] 21
root@guix:/# GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
root@guix:/# guix describe
Generation 2    Nov 28 2025 10:14:11    (current)
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# guix install --verbosity=0 hello
accepted connection from pid 55, user root
The following package will be installed:
   hello 2.12.2
hint: Consider setting the necessary environment variables by running:
     GUIX_PROFILE="/root/.guix-profile"
     . "$GUIX_PROFILE/etc/profile"
Alternately, see `guix package --search-paths -p "/root/.guix-profile"'.
root@guix:/# GUIX_PROFILE="/root/.guix-profile"
root@guix:/# . "$GUIX_PROFILE/etc/profile"
root@guix:/# hello
Hello, world!
root@guix:/# 
Below is an example GitLab pipeline job that demonstrates how to run guix install to install additional dependencies, and then download and build a package that picks up the installed package from the system.
test-wget-configure-make-libksba-amd64:
  image: registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
  before_script:
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARG &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - guix describe
  - guix install libgpg-error
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  - apt-get install --update -y --no-install-recommends build-essential wget ca-certificates bzip2
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1
The images were initially created for use in GitLab CI/CD pipelines but should work for any use. They are built in a GitLab CI/CD pipeline; see .gitlab-ci.yml. The containers are derived from official Debian stable images with Guix installed and a successful run of guix pull, built using buildah invoked from build.sh, using image/Containerfile which runs image/setup.sh. The pipeline also pushes images to the GitLab container registry, and then also to Docker Hub. Guix binaries are downloaded from the Guix binary tarballs project because of upstream download site availability and bandwidth concerns. Enjoy these images! Hopefully they can help you overcome the loss of Guix in Debian, which previously made it a mere apt-get install guix away. There are several things that may be improved further. An alternative to using podman --privileged is --security-opt seccomp=unconfined --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN, which may be slightly more fine-grained. For ppc64el support I ran into an error message that I wasn't able to resolve:
guix pull: error: while setting up the build environment: cannot set host name: Operation not permitted
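Spelled out in full, the finer-grained alternative to --privileged mentioned above would look something like this (a sketch; the image name is the one published by the project, and the flag combination is the one suggested in this post rather than a tested-everywhere recipe):

```shell
# Run the container without full --privileged, granting only the
# capabilities and the seccomp relaxation that guix-daemon appears to need.
podman run \
  --security-opt seccomp=unconfined \
  --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN \
  -it --hostname guix --rm \
  registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
```

This keeps the rest of the default seccomp/capability restrictions in place, which is slightly safer than --privileged on shared hosts.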
For riscv64, I can't even find a Guix riscv64 binary tarball for download; is there one anywhere? For arm64 containers, it seems that you need to start guix-daemon with --disable-chroot to get something to work, at least on GitLab.com's shared runners; otherwise you will get this error message:
guix install: error: clone: Invalid argument
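The arm64 workaround just described amounts to starting the daemon with chroot isolation disabled, e.g. (a sketch; the paths are the ones shown in the transcript earlier in this post):

```shell
# Start guix-daemon without its build chroot, working around the
# clone failure seen on GitLab.com's shared arm64 runners.
LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon \
  --build-users-group=guixbuild --disable-chroot &
```

Note that --disable-chroot weakens build isolation, so it is best reserved for throwaway CI containers rather than long-lived systems.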
Building the images themselves also requires disabling some security functionality, and I was not able to build images with buildah without providing --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN; otherwise there were errors like this:
guix pull: error: cloning builder process: Operation not permitted
guix pull: error: clone: Operation not permitted
guix pull: error: while setting up the build environment: cannot set loopback interface flags: Operation not permitted
Finally, on amd64 it seems --security-opt seccomp=unconfined is necessary; otherwise there is an error message like this, even if you use --disable-chroot:
guix pull: error: while setting up the child process: in phase setPersonality: cannot set personality: Function not implemented
This particular error is discussed upstream, but I think generally these errors suggest that guix-daemon could make more optional use of features: if some particular feature is not available, gracefully fall back to another mode of operation instead of exiting with an error. Of course, it should never fall back to an insecure mode of operation unless the user requests that. Happy Hacking!
